Set (mathematics) In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. The arrangement of the objects in the set does not matter. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when they are considered collectively they form a single set of size three, written as {2, 4, 6}, which could also be written as {4, 6, 2}. The concept of a set is one of the most fundamental in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics, and can be used as a foundation from which nearly all of mathematics can be derived. The German word "Menge", rendered as "set" in English, was coined by Bernard Bolzano in his work "The Paradoxes of the Infinite". A set is a well-defined collection of distinct objects. The objects that make up a set (also known as the set's "elements" or "members") can be anything: numbers, people, letters of the alphabet, other sets, and so on. Georg Cantor, one of the founders of set theory, gave the following definition of a set at the beginning of his "Beiträge zur Begründung der transfiniten Mengenlehre": Sets are conventionally denoted with capital letters. Sets "A" and "B" are equal if and only if they have precisely the same elements. For technical reasons, Cantor's definition turned out to be inadequate; today, in contexts where more rigor is required, one can use axiomatic set theory, in which the notion of a "set" is taken as a primitive notion and the properties of sets are defined by a collection of axioms. The most basic properties are that a set can have elements, and that two sets are equal (one and the same) if and only if every element of each set is an element of the other; this property is called the "extensionality of sets". There are two common ways of describing, or specifying the members of, a set: roster notation and set-builder notation. 
These are examples of extensional and intensional definitions of sets, respectively. The "Roster notation" (or "enumeration notation") method of defining a set consists of listing each member of the set. More specifically, in roster notation (an example of extensional definition), the set is denoted by enclosing the list of members in curly brackets: For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified in roster notation as where the ellipsis ("...") indicates that the list continues according to the demonstrated pattern. In roster notation, listing a member repeatedly does not change the set, for example, the set is identical to the set . Moreover, the order in which the elements of a set are listed is irrelevant (unlike for a sequence or tuple), so is yet again the same set. In set-builder notation, the set is specified as a subset of a larger set, where the subset is determined by a statement or condition involving the elements. For example, a set "F" can be specified as follows: In this notation, the vertical bar ("|") means "such that", and the description can be interpreted as ""F" is the set of all numbers "n", such that "n" is an integer in the range from 0 to 19 inclusive". Sometimes the colon (":") is used instead of the vertical bar. Set-builder notation is an example of intensional definition. Another method is by using a rule or semantic description: This is another example of intensional definition. If "B" is a set and "x" is one of the objects of "B", this is denoted as "x" ∈ "B", and is read as "x is an element of B", as "x belongs to B", or "x is in B". If "y" is not a member of "B" then this is written as "y" ∉ "B", read as "y is not an element of B", or "y is not in B". 
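The notations above map directly onto Python's built-in "set" type, which can serve as a quick sketch (the particular sets here are illustrative, using the article's "F" as the integers from 0 to 19):

```python
# Roster notation: list every member between curly brackets.
A = {1, 2, 3, 4}

# Repetition and order do not matter: these all denote the same set.
assert {1, 2, 3, 4} == {4, 3, 2, 1} == {1, 1, 2, 3, 4}

# Set-builder notation: "F is the set of all n such that n is an
# integer in the range from 0 to 19 inclusive", as a comprehension.
F = {n for n in range(100) if 0 <= n <= 19}

# Membership: x ∈ B is written `x in B`; y ∉ B is `y not in B`.
assert 7 in F
assert 20 not in F
```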
For example, with respect to the sets "A" = , "B" = , and "F" = , If every element of set "A" is also in "B", then "A" is said to be a "subset" of "B", written "A" ⊆ "B" (pronounced "A is contained in B"). Equivalently, one can write "B" ⊇ "A", read as "B is a superset of A", "B includes A", or "B contains A". The relationship between sets established by ⊆ is called "inclusion" or "containment". Two sets are equal if they contain each other: "A" ⊆ "B" and "B" ⊆ "A" is equivalent to "A" = "B". If "A" is a subset of "B", but not equal to "B", then "A" is called a "proper subset" of "B", written "A" ⊊ "B", or simply "A" ⊂ "B" ("A is a proper subset of B"), or "B" ⊋ "A" ("B is a proper superset of A", "B" ⊃ "A"). The expressions "A" ⊂ "B" and "B" ⊃ "A" are used differently by different authors; some authors use them to mean the same as "A" ⊆ "B" (respectively "B" ⊇ "A"), whereas others use them to mean the same as "A" ⊊ "B" (respectively "B" ⊋ "A"). Examples: There is a unique set with no members, called the "empty set" (or the "null set"), which is denoted by the symbol ∅ (other notations are used; see empty set). The empty set is a subset of every set, and every set is a subset of itself: A partition of a set "S" is a set of nonempty subsets of "S" such that every element "x" in "S" is in exactly one of these subsets. That is, the subsets are pairwise disjoint (meaning any two sets of the partition contain no element in common), and the union of all the subsets of the partition is "S". The power set of a set "S" is the set of all subsets of "S". The power set contains "S" itself and the empty set because these are both subsets of "S". For example, the power set of the set is . The power set of a set "S" is usually written as "P"("S"). The power set of a finite set with "n" elements has 2^"n" elements. For example, the set contains three elements, and the power set shown above contains 2^3 = 8 elements. 
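The subset relation and the power set can both be sketched in Python; the standard library has no power-set helper, so the function below is an illustrative construction from itertools recipes:

```python
from itertools import chain, combinations

def power_set(s):
    """Return the set of all subsets of s, each as a frozenset."""
    items = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(
                combinations(items, r) for r in range(len(items) + 1))}

S = {1, 2, 3}
P = power_set(S)

assert len(P) == 2 ** len(S)   # 2^3 = 8 subsets
assert frozenset() in P        # the empty set is a subset of every set
assert frozenset(S) in P       # every set is a subset of itself
assert {1, 2} <= S             # A ⊆ B is written A <= B in Python
assert {1, 2} < S              # proper subset A ⊊ B is written A < B
```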
The power set of an infinite (either countable or uncountable) set is always uncountable. Moreover, the power set of a set is always strictly "bigger" than the original set in the sense that there is no way to pair every element of "S" with exactly one element of "P"("S"). (There is never an onto map or surjection from "S" onto "P"("S").) The cardinality of a set "S", denoted |"S"|, is the number of members of "S". For example, if "B" = , then . Repeated members in roster notation are not counted, so , too. The cardinality of the empty set is zero. Some sets have infinite cardinality. The set N of natural numbers, for instance, is infinite. Some infinite cardinalities are greater than others. For instance, the set of real numbers has greater cardinality than the set of natural numbers. However, it can be shown that the cardinality of (which is to say, the number of points on) a straight line is the same as the cardinality of any segment of that line, of the entire plane, and indeed of any finite-dimensional Euclidean space. There are some sets or kinds of sets that hold great mathematical importance and are referred to with such regularity that they have acquired special names and notational conventions to identify them. One of these is the empty set, denoted { } or ∅. A set with exactly one element, "x", is a unit set, or singleton, {"x"}. Many of these sets are represented using blackboard bold or bold typeface. Special sets of numbers include Each of the above sets of numbers has an infinite number of elements, and each can be considered to be a proper subset of the sets listed below it. The primes are used less frequently than the others outside of number theory and related fields. Positive and negative sets are sometimes denoted by superscript plus and minus signs, respectively. For example, ℚ+ represents the set of positive rational numbers. There are several fundamental operations for constructing new sets from given sets. Two sets can be "added" together. 
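For finite sets, the cardinality facts above can be checked directly in Python, where len plays the role of |·| (the sets chosen here are illustrative):

```python
from itertools import chain, combinations

B = {1, 2, 3, 3, 3}      # roster repetition collapses: B = {1, 2, 3}
assert len(B) == 3       # repeated members are not counted: |B| = 3
assert len(set()) == 0   # the empty set has cardinality zero

# For a finite set S, |P(S)| = 2^|S| > |S|: the power set is strictly
# bigger, so no map from S onto P(S) can be a surjection.
S = {'a', 'b', 'c', 'd'}
subsets = list(chain.from_iterable(
    combinations(S, r) for r in range(len(S) + 1)))
assert len(subsets) == 2 ** len(S) > len(S)
```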
The "union" of "A" and "B", denoted by "A" ∪ "B", is the set of all things that are members of either "A" or "B". Examples: Some basic properties of unions: A new set can also be constructed by determining which members two sets have "in common". The "intersection" of "A" and "B", denoted by "A" ∩ "B", is the set of all things that are members of both "A" and "B". If "A" ∩ "B" = ∅, then "A" and "B" are said to be "disjoint". Examples: Some basic properties of intersections: Two sets can also be "subtracted". The "relative complement" of "B" in "A" (also called the "set-theoretic difference" of "A" and "B"), denoted by "A" \ "B" (or "A" − "B"), is the set of all elements that are members of "A" but not members of "B". It is valid to "subtract" members of a set that are not in the set, such as removing the element "green" from the set ; doing so has no effect. In certain settings all sets under discussion are considered to be subsets of a given universal set "U". In such cases, is called the "absolute complement" or simply "complement" of "A", and is denoted by "A"′. Examples: Some basic properties of complements: An extension of the complement is the symmetric difference, defined for sets "A", "B" as ("A" \ "B") ∪ ("B" \ "A"). For example, the symmetric difference of and is the set . The power set of any set becomes a Boolean ring with symmetric difference as the addition of the ring (with the empty set as neutral element) and intersection as the multiplication of the ring. A new set can be constructed by associating every element of one set with every element of another set. The "Cartesian product" of two sets "A" and "B", denoted by "A" × "B", is the set of all ordered pairs ("a", "b") such that "a" is a member of "A" and "b" is a member of "B". Examples: Some basic properties of Cartesian products: Let "A" and "B" be finite sets; then the cardinality of the Cartesian product is the product of the cardinalities: |"A" × "B"| = |"A"| · |"B"|. Set theory is seen as the foundation from which virtually all of mathematics can be derived. 
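All of these operations have direct counterparts on Python sets, with the Cartesian product available via itertools.product; a brief sketch with illustrative sets:

```python
from itertools import product

A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

assert A | B == {1, 2, 3, 4, 5, 6}   # union A ∪ B
assert A & B == {3, 4}               # intersection A ∩ B
assert A - B == {1, 2}               # relative complement A \ B
assert A - {99} == A                 # "subtracting" absent members has no effect
assert A ^ B == {1, 2, 5, 6}         # symmetric difference
assert A ^ B == (A - B) | (B - A)    # equivalent definition

# Cartesian product A × B: all ordered pairs (a, b).
AxB = set(product(A, B))
assert len(AxB) == len(A) * len(B)   # |A × B| = |A| · |B|
assert (1, 5) in AxB and (5, 1) not in AxB   # pairs are ordered
```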
For example, structures in abstract algebra, such as groups, fields and rings, are sets closed under one or more operations. One of the main applications of naive set theory is constructing relations. A relation from a domain "A" to a codomain "B" is a subset of the Cartesian product "A" × "B". For example, considering the set "S" = { rock, paper, scissors } of shapes in the game of the same name, the relation "beats" from "S" to "S" is the set "B" = { (scissors,paper), (paper,rock), (rock,scissors) }; thus "x" beats "y" in the game if the pair ("x","y") is a member of "B". Another example is the set "F" of all pairs ("x", "x"2), where "x" is real. This relation is a subset of R × R, because the set of all squares is a subset of the set of all real numbers. Since for every "x" in R, one, and only one, pair ("x", ...) is found in "F", it is called a function. In functional notation, this relation can be written as "F"("x") = "x"2. Although initially naive set theory, which defines a set merely as "any well-defined" collection, was well accepted, it soon ran into several obstacles. It was found that this definition spawned several paradoxes, most notably: The reason is that the phrase "well-defined" is not very well-defined. It was important to free set theory of these paradoxes because nearly all of mathematics was being redefined in terms of set theory. In an attempt to avoid these paradoxes, set theory was axiomatized based on first-order logic, and thus axiomatic set theory was born. For most purposes, however, naive set theory is still useful. The inclusion–exclusion principle is a counting technique that can be used to count the number of elements in a union of two sets, if the size of each set and the size of their intersection are known. It can be expressed symbolically as |"A" ∪ "B"| = |"A"| + |"B"| − |"A" ∩ "B"|. A more general form of the principle can be used to find the cardinality of any finite union of sets: Augustus De Morgan stated two laws about sets. 
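Both the "beats" relation and the two-set inclusion–exclusion principle can be checked mechanically; a sketch (the choice of multiples of 2 and 3 for the counting example is illustrative):

```python
from itertools import product

# A relation from S to S is a subset of the Cartesian product S × S.
S = {'rock', 'paper', 'scissors'}
beats = {('scissors', 'paper'), ('paper', 'rock'), ('rock', 'scissors')}
assert beats <= set(product(S, S))       # beats ⊆ S × S
assert ('paper', 'rock') in beats        # paper beats rock

# Inclusion–exclusion: |A ∪ B| = |A| + |B| − |A ∩ B|.
A = set(range(0, 50, 2))                 # multiples of 2 below 50
B = set(range(0, 50, 3))                 # multiples of 3 below 50
assert len(A | B) == len(A) + len(B) - len(A & B)
```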
If "A" and "B" are any two sets, then the complement of "A" ∪ "B" equals the complement of "A" intersected with the complement of "B", and the complement of "A" ∩ "B" equals the complement of "A" union the complement of "B". In symbols: ("A" ∪ "B")′ = "A"′ ∩ "B"′ and ("A" ∩ "B")′ = "A"′ ∪ "B"′.
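De Morgan's laws can be verified for any concrete choice of universal set; a minimal sketch, assuming an illustrative universe of the integers 0 through 9:

```python
U = set(range(10))           # a universal set U (assumed for illustration)
A = {1, 2, 3}
B = {3, 4, 5}

def complement(X):
    """Absolute complement relative to the universal set U."""
    return U - X

# (A ∪ B)′ = A′ ∩ B′
assert complement(A | B) == complement(A) & complement(B)
# (A ∩ B)′ = A′ ∪ B′
assert complement(A & B) == complement(A) | complement(B)
```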
https://en.wikipedia.org/wiki?curid=26691
Science Science (from the Latin word "scientia", meaning "knowledge") is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. The earliest roots of science can be traced to Ancient Egypt and Mesopotamia in around 3500 to 3000 BCE. Their contributions to mathematics, astronomy, and medicine entered and shaped Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes. After the fall of the Western Roman Empire, knowledge of Greek conceptions of the world deteriorated in Western Europe during the early centuries (400 to 1000 CE) of the Middle Ages but was preserved in the Muslim world during the Islamic Golden Age. The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th century revived "natural philosophy", which was later transformed by the Scientific Revolution that began in the 16th century as new ideas and discoveries departed from previous Greek conceptions and traditions. The scientific method soon played a greater role in knowledge creation, but it was not until the 19th century that many of the institutional and professional features of science began to take shape, along with "natural philosophy" giving way to "natural science". Modern science is typically divided into three major branches that consist of the natural sciences (e.g., biology, chemistry, and physics), which study nature in the broadest sense; the social sciences (e.g., economics, psychology, and sociology), which study individuals and societies; and the formal sciences (e.g., logic, mathematics, and theoretical computer science), which study abstract concepts. There is disagreement, however, on whether the formal sciences actually constitute a science as they do not rely on empirical evidence. 
Disciplines that use existing scientific knowledge for practical purposes, such as engineering and medicine, are described as applied sciences. Science is based on research, which is commonly conducted in academic and research institutions as well as in government agencies and companies. The practical impact of scientific research has led to the emergence of science policies that seek to influence the scientific enterprise by prioritizing the development of commercial products, armaments, health care, and environmental protection. Science in a broad sense existed before the modern era and in many historical civilizations. Modern science is distinct in its approach and successful in its results, so it now defines what science is in the strictest sense of the term. Science in its original sense was a word for a type of knowledge, rather than a specialized word for the pursuit of such knowledge. In particular, it was the type of knowledge which people can communicate to each other and share. For example, knowledge about the working of natural things was gathered long before recorded history and led to the development of complex abstract thought. This is shown by the construction of complex calendars, techniques for making poisonous plants edible, public works at national scale, such as those which harnessed the floodplain of the Yangtse with reservoirs, dams, and dikes, and buildings such as the Pyramids. However, no consistent conscious distinction was made between knowledge of such things, which are true in every community, and other types of communal knowledge, such as mythologies and legal systems. Metallurgy was known in prehistory, and the Vinča culture was the earliest known producer of bronze-like alloys. It is thought that early experimentation with heating and mixing of substances over time developed into alchemy. Neither the words nor the concepts "science" and "nature" were part of the conceptual landscape in the ancient near east. 
The ancient Mesopotamians used knowledge about the properties of various natural chemicals for manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing; they also studied animal physiology, anatomy, and behavior for divinatory purposes and made extensive records of the movements of astronomical objects for their study of astrology. The Mesopotamians had intense interest in medicine and the earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur (c. 2112 BCE – 2004 BCE). Nonetheless, the Mesopotamians seem to have had little interest in gathering information about the natural world for the mere sake of gathering information, and studied mainly those scientific subjects which had obvious practical applications or immediate relevance to their religious system. In classical antiquity, there is no real ancient analog of a modern scientist. Instead, well-educated, usually upper-class, and almost universally male individuals performed various investigations into nature whenever they could afford the time. Before the invention or discovery of the concept of "nature" (ancient Greek "phusis") by the Pre-Socratic philosophers, the same words tend to be used to describe the "natural" "way" in which a plant grows, and the "way" in which, for example, one tribe worships a particular god. For this reason, it is claimed these men were the first philosophers in the strict sense, and also the first people to clearly distinguish "nature" and "convention." Natural philosophy, the precursor of natural science, was thereby distinguished as the knowledge of nature and things which are true for every community, and the name of the specialized pursuit of such knowledge was "philosophy" – the realm of the first philosopher-physicists. They were mainly speculators or theorists, particularly interested in astronomy. 
In contrast, trying to use knowledge of nature to imitate nature (artifice or technology, Greek "technē") was seen by classical scientists as a more appropriate interest for artisans of lower social class. The early Greek philosophers of the Milesian school, which was founded by Thales of Miletus and later continued by his successors Anaximander and Anaximenes, were the first to attempt to explain natural phenomena without relying on the supernatural. The Pythagoreans developed a complex number philosophy and contributed significantly to the development of mathematical science. The theory of atoms was developed by the Greek philosopher Leucippus and his student Democritus. The Greek doctor Hippocrates established the tradition of systematic medical science and is known as "The Father of Medicine". A turning point in the history of early philosophical science was Socrates' example of applying philosophy to the study of human matters, including human nature, the nature of political communities, and human knowledge itself. The Socratic method as documented by Plato's dialogues is a dialectic method of hypothesis elimination: better hypotheses are found by steadily identifying and eliminating those that lead to contradictions. This was a reaction to the Sophist emphasis on rhetoric. The Socratic method searches for general, commonly held truths that shape beliefs and scrutinizes them to determine their consistency with other beliefs. Socrates criticized the older type of study of physics as too purely speculative and lacking in self-criticism. Socrates was later, in the words of his "Apology", accused of corrupting the youth of Athens because he did "not believe in the gods the state believes in, but in other new spiritual beings". Socrates refuted these claims, but was sentenced to death. 
Aristotle later created a systematic programme of teleological philosophy: Motion and change are described as the actualization of potentials already in things, according to what types of things they are. In his physics, the Sun goes around the Earth, and many things have it as part of their nature that they are for humans. Each thing has a formal cause, a final cause, and a role in a cosmic order with an unmoved mover. The Socratics also insisted that philosophy should be used to consider the practical question of the best way to live for a human being (a study Aristotle divided into ethics and political philosophy). Aristotle maintained that man knows a thing scientifically "when he possesses a conviction arrived at in a certain way, and when the first principles on which that conviction rests are known to him with certainty". The Greek astronomer Aristarchus of Samos (310–230 BCE) was the first to propose a heliocentric model of the universe, with the Sun at the center and all the planets orbiting it. Aristarchus's model was widely rejected because it was believed to violate the laws of physics. The inventor and mathematician Archimedes of Syracuse made major contributions to the beginnings of calculus and has sometimes been credited as its inventor, although his proto-calculus lacked several defining features. Pliny the Elder was a Roman writer and polymath, who wrote the seminal encyclopedia "Natural History", dealing with history, geography, medicine, astronomy, earth science, botany, and zoology. Other scientists or proto-scientists in Antiquity were Theophrastus, Euclid, Herophilos, Hipparchus, Ptolemy, and Galen. Because of the collapse of the Western Roman Empire due to the Migration Period, an intellectual decline took place in the western part of Europe in the 400s. In contrast, the Byzantine Empire resisted the attacks from invaders, and preserved and improved upon the learning. 
John Philoponus, a Byzantine scholar in the 500s, questioned Aristotle's teaching of physics and noted its flaws. John Philoponus' criticism of Aristotelian principles of physics served as an inspiration to medieval scholars as well as to Galileo Galilei, who, ten centuries later, during the Scientific Revolution, extensively cited Philoponus in his works while making the case for why Aristotelian physics was flawed. During late antiquity and the early Middle Ages, the Aristotelian approach to inquiries on natural phenomena was used. Aristotle's four causes prescribed that four "why" questions should be answered in order to explain things scientifically. Some ancient knowledge was lost, or in some cases kept in obscurity, during the fall of the Western Roman Empire and periodic political struggles. However, the general fields of science (or "natural philosophy" as it was called) and much of the general knowledge from the ancient world remained preserved through the works of the early Latin encyclopedists like Isidore of Seville. However, Aristotle's original texts were eventually lost in Western Europe, and only one text by Plato was widely known, the "Timaeus", which was the only Platonic dialogue, and one of the few original works of classical natural philosophy, available to Latin readers in the early Middle Ages. Another original work that gained influence in this period was Ptolemy's "Almagest", which contains a geocentric description of the solar system. During late antiquity, in the Byzantine empire many Greek classical texts were preserved. Many Syriac translations were done by groups such as the Nestorians and Monophysites. They played a role when they translated Greek classical texts into Arabic under the Caliphate, during which many types of classical learning were preserved and in some cases improved upon. 
In addition, the neighboring Sassanid Empire established the medical Academy of Gondeshapur where Greek, Syriac and Persian physicians established the most important medical center of the ancient world during the 6th and 7th centuries. The House of Wisdom was established in Abbasid-era Baghdad, Iraq, where the Islamic study of Aristotelianism flourished. Al-Kindi (801–873) was the first of the Muslim Peripatetic philosophers, and is known for his efforts to introduce Greek and Hellenistic philosophy to the Arab world. The Islamic Golden Age flourished from this time until the Mongol invasions of the 13th century. Ibn al-Haytham (Alhazen), as well as his predecessor Ibn Sahl, was familiar with Ptolemy's "Optics", and used experiments as a means to gain knowledge. Alhazen disproved Ptolemy's theory of vision, but did not make any corresponding changes to Aristotle's metaphysics. Furthermore, doctors and alchemists such as the Persians Avicenna and Al-Razi also greatly developed the science of medicine, with the former writing the Canon of Medicine, a medical encyclopedia used until the 18th century, and the latter discovering multiple compounds like alcohol. Avicenna's Canon is considered to be one of the most important publications in medicine, and they both contributed significantly to the practice of experimental medicine, using clinical trials and experiments to back their claims. In classical antiquity, Greek and Roman taboos had meant that dissection was usually banned, but in the Middle Ages this changed: medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (c. 1275–1326) produced the first known anatomy textbook based on human dissection. By the eleventh century most of Europe had become Christian; stronger monarchies emerged; borders were restored; technological developments and agricultural innovations were made which increased the food supply and population. 
In addition, classical Greek texts started to be translated from Arabic and Greek into Latin, giving a higher level of scientific discussion in Western Europe. By 1088, the first university in Europe (the University of Bologna) had emerged from its clerical beginnings. Demand for Latin translations grew (for example, from the Toledo School of Translators); western Europeans began collecting texts written not only in Latin, but also Latin translations from Greek, Arabic, and Hebrew. Manuscript copies of Alhazen's "Book of Optics" also propagated across Europe before 1240, as evidenced by its incorporation into Vitello's "Perspectiva". Avicenna's Canon was translated into Latin. In particular, the texts of Aristotle, Ptolemy, and Euclid, preserved in the Houses of Wisdom and also in the Byzantine Empire, were sought amongst Catholic scholars. The influx of ancient texts caused the Renaissance of the 12th century and the flourishing of a synthesis of Catholicism and Aristotelianism known as Scholasticism in western Europe, which became a new geographic center of science. An "experiment" in this period would be understood as a careful process of observing, describing, and classifying. One prominent scientist in this era was Roger Bacon. Scholasticism had a strong focus on revelation and dialectic reasoning, and gradually fell out of favour over the next centuries, as alchemy's focus on experiments that include direct observation and meticulous documentation slowly increased in importance. New developments in optics played a role in the inception of the Renaissance, both by challenging long-held metaphysical ideas on perception, as well as by contributing to the improvement and development of technology such as the camera obscura and the telescope. 
Before what we now know as the Renaissance started, Roger Bacon, Vitello, and John Peckham each built up a scholastic ontology upon a causal chain beginning with sensation, perception, and finally apperception of the individual and universal forms of Aristotle. A model of vision later known as perspectivism was exploited and studied by the artists of the Renaissance. This theory uses only three of Aristotle's four causes: formal, material, and final. In the sixteenth century, Copernicus formulated a heliocentric model of the solar system unlike the geocentric model of Ptolemy's "Almagest". This was based on a theorem that the orbital periods of the planets are longer as their orbs are farther from the centre of motion, which he found not to agree with Ptolemy's model. Kepler and others challenged the notion that the only function of the eye is perception, and shifted the main focus in optics from the eye to the propagation of light. Kepler modelled the eye as a water-filled glass sphere with an aperture in front of it to model the entrance pupil. He found that all the light from a single point of the scene was imaged at a single point at the back of the glass sphere. The optical chain ends on the retina at the back of the eye. Kepler is best known, however, for improving Copernicus' heliocentric model through the discovery of Kepler's laws of planetary motion. Kepler did not reject Aristotelian metaphysics, and described his work as a search for the Harmony of the Spheres. Galileo made innovative use of experiment and mathematics. However, he was later persecuted: Pope Urban VIII had given Galileo permission to write about the Copernican system, but Galileo used arguments from the Pope and put them in the voice of the simpleton in the work "Dialogue Concerning the Two Chief World Systems", which greatly offended Urban VIII. 
In Northern Europe, the new technology of the printing press was widely used to publish many arguments, including some that disagreed widely with contemporary ideas of nature. René Descartes and Francis Bacon published philosophical arguments in favor of a new type of non-Aristotelian science. Descartes emphasized individual thought and argued that mathematics rather than geometry should be used in order to study nature. Bacon emphasized the importance of experiment over contemplation. Bacon further questioned the Aristotelian concepts of formal cause and final cause, and promoted the idea that science should study the laws of "simple" natures, such as heat, rather than assuming that there is any specific nature, or "formal cause", of each complex type of thing. This new science began to see itself as describing "laws of nature". This updated approach to studies in nature was seen as mechanistic. Bacon also argued that science should aim for the first time at practical inventions for the improvement of all human life. As a precursor to the Age of Enlightenment, Isaac Newton and Gottfried Wilhelm Leibniz succeeded in developing a new physics, now referred to as classical mechanics, which could be confirmed by experiment and explained using mathematics (Newton (1687), "Philosophiæ Naturalis Principia Mathematica"). Leibniz also incorporated terms from Aristotelian physics, but now being used in a new non-teleological way, for example, "energy" and "potential" (modern versions of Aristotelian ""energeia" and "potentia""). This implied a shift in the view of objects: Where Aristotle had noted that objects have certain innate goals that can be actualized, objects were now regarded as devoid of innate goals. In the style of Francis Bacon, Leibniz assumed that different types of things all work according to the same general laws of nature, with no special formal or final causes for each type of thing. 
It is during this period that the word "science" gradually became more commonly used to refer to a "type of pursuit" of a type of knowledge, especially knowledge of nature – coming close in meaning to the old term "natural philosophy." During this time, the declared purpose and value of science became producing wealth and inventions that would improve human lives, in the materialistic sense of having more food, clothing, and other things. In Bacon's words, "the real and legitimate goal of sciences is the endowment of human life with new inventions and riches", and he discouraged scientists from pursuing intangible philosophical or spiritual ideas, which he believed contributed little to human happiness beyond "the fume of subtle, sublime, or pleasing speculation". Science during the Enlightenment was dominated by scientific societies and academies, which had largely replaced universities as centres of scientific research and development. Societies and academies were also the backbone of the maturation of the scientific profession. Another important development was the popularization of science among an increasingly literate population. Philosophes introduced the public to many scientific theories, most notably through the "Encyclopédie" and the popularization of Newtonianism by Voltaire as well as by Émilie du Châtelet, the French translator of Newton's "Principia". Some historians have marked the 18th century as a drab period in the history of science; however, the century saw significant advancements in the practice of medicine, mathematics, and physics; the development of biological taxonomy; a new understanding of magnetism and electricity; and the maturation of chemistry as a discipline, which established the foundations of modern chemistry. 
Enlightenment philosophers chose a short history of scientific predecessors – Galileo, Boyle, and Newton principally – as the guides and guarantors of their applications of the singular concept of nature and natural law to every physical and social field of the day. In this respect, the lessons of history and the social structures built upon it could be discarded. The nineteenth century is a particularly important period in the history of science, since during this era many distinguishing characteristics of contemporary modern science began to take shape: the transformation of the life and physical sciences; the frequent use of precision instruments; the emergence of terms like "biologist", "physicist", and "scientist", slowly displacing antiquated labels like "natural philosophy" and "natural history"; the increased professionalization of those studying nature, which led to a reduction in amateur naturalists; scientists' growing cultural authority over many dimensions of society; the economic expansion and industrialization of numerous countries; and the thriving of popular science writing and the emergence of science journals. Early in the 19th century, John Dalton suggested the modern atomic theory, based on Democritus's original idea of indivisible particles called "atoms". Both John Herschel and William Whewell systematized methodology: the latter coined the term scientist. When Charles Darwin published "On the Origin of Species" he established evolution as the prevailing explanation of biological complexity. His theory of natural selection provided a natural explanation of how species originated, but it only gained wide acceptance a century later. The laws of conservation of energy, conservation of momentum and conservation of mass suggested a highly stable universe where there could be little loss of resources. 
With the advent of the steam engine and the industrial revolution, there was, however, an increased understanding that all forms of energy as defined in physics were not equally useful: they did not have the same energy quality. This realization led to the development of the laws of thermodynamics, in which the free energy of the universe is seen as constantly declining: the entropy of a closed universe increases over time. The electromagnetic theory was also established in the 19th century, and raised new questions which could not easily be answered using Newton's framework. The phenomena that would allow the deconstruction of the atom were discovered in the last decade of the 19th century: the discovery of X-rays inspired the discovery of radioactivity. In the next year came the discovery of the first subatomic particle, the electron. Albert Einstein's theory of relativity and the development of quantum mechanics led to the replacement of classical mechanics with a new physics which contains two parts that describe different types of events in nature. In the first half of the century, the development of antibiotics and artificial fertilizer made global human population growth possible. At the same time, the structure of the atom and its nucleus was discovered, leading to the release of "atomic energy" (nuclear power). In addition, the extensive use of technological innovation stimulated by the wars of this century led to revolutions in transportation (automobiles and aircraft), the development of ICBMs, a space race, and a nuclear arms race. The molecular structure of DNA was discovered in 1953. The discovery of the cosmic microwave background radiation in 1964 led to a rejection of the Steady State theory of the universe in favour of the Big Bang theory of Georges Lemaître. The development of spaceflight in the second half of the century allowed the first astronomical measurements done on or near other objects in space, including manned landings on the Moon. 
Space telescopes led to numerous discoveries in astronomy and cosmology. Widespread use of integrated circuits in the last quarter of the 20th century combined with communications satellites led to a revolution in information technology and the rise of the global internet and mobile computing, including smartphones. The need for mass systematization of long, intertwined causal chains and large amounts of data led to the rise of the fields of systems theory and computer-assisted scientific modelling, which are partly based on the Aristotelian paradigm. Harmful environmental issues such as ozone depletion, acidification, eutrophication and climate change came to the public's attention in the same period, spurring the development of environmental science and environmental technology. The Human Genome Project was completed in 2003, determining the sequence of nucleotide base pairs that make up human DNA, and identifying and mapping all of the genes of the human genome. Induced pluripotent stem cells were developed in 2006, a technology allowing adult cells to be transformed into stem cells capable of giving rise to any cell type found in the body, potentially of huge importance to the field of regenerative medicine. With the discovery of the Higgs boson in 2012, the last particle predicted by the Standard Model of particle physics was found. In 2015, gravitational waves, predicted by general relativity a century before, were first observed. Modern science is commonly divided into three major branches: the natural sciences, social sciences, and formal sciences. Each of these branches comprises various specialized yet overlapping scientific disciplines that often possess their own nomenclature and expertise. Both natural and social sciences are empirical sciences, as their knowledge is based on empirical observations and is capable of being tested for its validity by other researchers working under the same conditions. 
There are also closely related disciplines that use science, such as engineering and medicine, which are sometimes described as applied sciences. The relationships between the branches of science are summarized by the following table. Natural science is concerned with the description, prediction, and understanding of natural phenomena based on empirical evidence from observation and experimentation. It can be divided into two main branches: life science (or biological science) and physical science. Physical science is subdivided into branches, including physics, chemistry, astronomy and earth science. These two branches may be further divided into more specialized disciplines. Modern natural science is the successor to the natural philosophy that began in Ancient Greece. Galileo, Descartes, Bacon, and Newton debated the benefits of using approaches which were more mathematical and more experimental in a methodical way. Still, philosophical perspectives, conjectures, and presuppositions, often overlooked, remain necessary in natural science. Systematic data collection, including discovery science, succeeded natural history, which emerged in the 16th century by describing and classifying plants, animals, minerals, and so on. Today, "natural history" suggests observational descriptions aimed at popular audiences. Social science is concerned with society and the relationships among individuals within a society. It has many branches that include, but are not limited to, anthropology, archaeology, communication studies, economics, history, human geography, jurisprudence, linguistics, political science, psychology, public health, and sociology. Social scientists may adopt various philosophical theories to study individuals and society. For example, positivist social scientists use methods resembling those of the natural sciences as tools for understanding society, and so define science in its stricter modern sense. 
Interpretivist social scientists, by contrast, may use social critique or symbolic interpretation rather than constructing empirically falsifiable theories, and thus treat science in its broader sense. In modern academic practice, researchers are often eclectic, using multiple methodologies (for instance, by combining both quantitative and qualitative research). The term "social research" has also acquired a degree of autonomy as practitioners from various disciplines share in its aims and methods. Formal science is involved in the study of formal systems. It includes mathematics, systems theory, and theoretical computer science. The formal sciences share similarities with the other two branches by relying on objective, careful, and systematic study of an area of knowledge. They are, however, different from the empirical sciences as they rely exclusively on deductive reasoning, without the need for empirical evidence, to verify their abstract concepts. The formal sciences are therefore "a priori" disciplines and because of this, there is disagreement on whether they actually constitute a science. Nevertheless, the formal sciences play an important role in the empirical sciences. Calculus, for example, was initially invented to understand motion in physics. Natural and social sciences that rely heavily on mathematical applications include mathematical physics, mathematical chemistry, mathematical biology, mathematical finance, and mathematical economics. Scientific research can be labeled as either basic or applied research. Basic research is the search for knowledge and applied research is the search for solutions to practical problems using this knowledge. Although some scientific research is applied research into specific problems, a great deal of our understanding comes from the curiosity-driven undertaking of basic research. This leads to options for technological advance that were not planned or sometimes even imaginable. 
This point was made by Michael Faraday when, allegedly in response to the question "what is the "use" of basic research?", he responded: "Sir, what is the use of a new-born child?". For example, research into the effects of red light on the human eye's rod cells did not seem to have any practical purpose; eventually, the discovery that our night vision is not troubled by red light would lead search and rescue teams (among others) to adopt red light in the cockpits of jets and helicopters. Finally, even basic research can take unexpected turns, and there is some sense in which the scientific method is built to harness luck. Scientific research involves using the scientific method, which seeks to objectively explain the events of nature in a reproducible way. An explanatory thought experiment or hypothesis is put forward as an explanation, using principles such as parsimony (also known as "Occam's razor"), and is generally expected to seek consilience – fitting well with other accepted facts related to the phenomena. This new explanation is used to make falsifiable predictions that are testable by experiment or observation. The predictions are to be posted before a confirming experiment or observation is sought, as proof that no tampering has occurred. Disproof of a prediction is evidence of progress. This is done partly through observation of natural phenomena, but also through experimentation that tries to simulate natural events under controlled conditions as appropriate to the discipline (in the observational sciences, such as astronomy or geology, a predicted observation might take the place of a controlled experiment). Experimentation is especially important in science to help establish causal relationships (to avoid the correlation fallacy). When a hypothesis proves unsatisfactory, it is either modified or discarded. 
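The test-and-revise cycle described above can be sketched as a short program. This is an illustrative toy only: the function names, the force/acceleration "hypothesis", and the observation data are all invented for the example, not part of any real methodology framework.

```python
def run_inquiry(hypothesis, experiments, revise):
    """Toy sketch of the hypothesis-testing cycle: predict, test,
    and modify the hypothesis whenever a prediction is disproved."""
    for conditions, observed in experiments:
        predicted = hypothesis(conditions)  # prediction made before looking at the result
        if predicted != observed:           # disproof of a prediction is evidence of progress
            hypothesis = revise(hypothesis, conditions, observed)
    return hypothesis                       # a survivor, candidate for a broader theory

# Hypothetical example: "acceleration is force divided by a fixed mass of 2 kg",
# tested against fabricated (force, observed_acceleration) observations.
guess = lambda force: force / 2.0
trials = [(4.0, 2.0), (6.0, 3.0)]
surviving = run_inquiry(guess, trials, lambda h, c, o: h)
```

Here the hypothesis happens to survive both trials unchanged; a mismatch would instead have routed it through the `revise` step, mirroring "modified or discarded" above.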
If the hypothesis survives testing, it may become adopted into the framework of a scientific theory, a logically reasoned, self-consistent model or framework for describing the behavior of certain natural phenomena. A theory typically describes the behavior of much broader sets of phenomena than a hypothesis; commonly, a large number of hypotheses can be logically bound together by a single theory. Thus a theory is a hypothesis explaining various other hypotheses. In that vein, theories are formulated according to most of the same scientific principles as hypotheses. In addition to testing hypotheses, scientists may also generate a model, an attempt to describe or depict the phenomenon in terms of a logical, physical or mathematical representation and to generate new hypotheses that can be tested, based on observable phenomena. While performing experiments to test hypotheses, scientists may have a preference for one outcome over another, and so it is important to ensure that science as a whole can eliminate this bias. This can be achieved by careful experimental design, transparency, and a thorough peer review process of the experimental results as well as any conclusions. After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments to determine how dependable the results might be. Taken in its entirety, the scientific method allows for highly creative problem solving while minimizing the effects of subjective bias on the part of its users (especially confirmation bias). John Ziman points out that intersubjective verifiability is fundamental to the creation of all scientific knowledge. Ziman shows how scientists can identify patterns to each other across centuries; he refers to this ability as "perceptual consensibility." He then makes consensibility, leading to consensus, the touchstone of reliable knowledge. 
Mathematics is essential in the formation of hypotheses, theories, and laws in the natural and social sciences. For example, it is used in quantitative scientific modeling, which can generate new hypotheses and predictions to be tested. It is also used extensively in observing and collecting measurements. Statistics, a branch of mathematics, is used to summarize and analyze data, allowing scientists to assess the reliability and variability of their experimental results. Computational science applies computing power to simulate real-world situations, enabling a better understanding of scientific problems than formal mathematics alone can achieve. According to the Society for Industrial and Applied Mathematics, computation is now as important as theory and experiment in advancing scientific knowledge. Scientists usually take for granted a set of basic assumptions that are needed to justify the scientific method: (1) that there is an objective reality shared by all rational observers; (2) that this objective reality is governed by natural laws; (3) that these laws can be discovered by means of systematic observation and experimentation. Philosophy of science seeks a deep understanding of what these underlying assumptions mean and whether they are valid. The belief that scientific theories should and do represent metaphysical reality is known as realism. It can be contrasted with anti-realism, the view that the success of science does not depend on it being accurate about unobservable entities such as electrons. One form of anti-realism is idealism, the belief that the mind or consciousness is the most basic essence, and that each mind generates its own reality. In an idealistic world view, what is true for one mind need not be true for other minds. There are different schools of thought in philosophy of science. 
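The role of statistics sketched above – summarizing measurements and assessing their reliability and variability – can be illustrated with Python's standard library; the measurements below are fabricated for the example.

```python
import math
import statistics

# Fabricated repeated measurements of the same quantity
measurements = [9.8, 10.1, 9.9, 10.2, 10.0]

mean = statistics.mean(measurements)         # central estimate of the quantity
spread = statistics.stdev(measurements)      # sample standard deviation: variability
sem = spread / math.sqrt(len(measurements))  # standard error of the mean: reliability

print(f"mean = {mean:.2f} +/- {sem:.2f}")    # prints "mean = 10.00 +/- 0.07"
```

More repeated measurements shrink the standard error, which is one quantitative sense in which replication makes a result more reliable.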
The most popular position is empiricism, which holds that knowledge is created by a process involving observation and that scientific theories are the result of generalizations from such observations. Empiricism generally encompasses inductivism, a position that tries to explain the way general theories can be justified by the finite number of observations humans can make and hence the finite amount of empirical evidence available to confirm scientific theories. This is necessary because the number of predictions those theories make is infinite, which means that they cannot be known from the finite amount of evidence using deductive logic only. Many versions of empiricism exist, with the predominant ones being Bayesianism and the hypothetico-deductive method. Empiricism has stood in contrast to rationalism, the position originally associated with Descartes, which holds that knowledge is created by the human intellect, not by observation. Critical rationalism is a contrasting 20th-century approach to science, first defined by Austrian-British philosopher Karl Popper. Popper rejected the way that empiricism describes the connection between theory and observation. He claimed that theories are not generated by observation, but that observation is made in the light of theories and that the only way a theory can be affected by observation is when it comes in conflict with it. Popper proposed replacing verifiability with falsifiability as the landmark of scientific theories and replacing induction with falsification as the empirical method. Popper further claimed that there is actually only one universal method, not specific to science: the negative method of criticism, trial and error. It covers all products of the human mind, including science, mathematics, philosophy, and art. Another approach, instrumentalism, colloquially termed "shut up and calculate," emphasizes the utility of theories as instruments for explaining and predicting phenomena. 
It views scientific theories as black boxes, with only their input (initial conditions) and output (predictions) being relevant. Consequences, theoretical entities, and logical structure are treated as matters that can simply be ignored and that scientists need not concern themselves with (see interpretations of quantum mechanics). Close to instrumentalism is constructive empiricism, according to which the main criterion for the success of a scientific theory is whether what it says about observable entities is true. Thomas Kuhn argued that the process of observation and evaluation takes place within a paradigm, a logically consistent "portrait" of the world that is consistent with observations made from its framing. He characterized "normal science" as the process of observation and "puzzle solving" which takes place within a paradigm, whereas "revolutionary science" occurs when one paradigm overtakes another in a paradigm shift. Each paradigm has its own distinct questions, aims, and interpretations. The choice between paradigms involves setting two or more "portraits" against the world and deciding which likeness is most promising. A paradigm shift occurs when a significant number of observational anomalies arise in the old paradigm and a new paradigm makes sense of them. That is, the choice of a new paradigm is based on observations, even though those observations are made against the background of the old paradigm. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn's position, however, is not one of relativism. Finally, another approach often cited in debates of scientific skepticism against controversial movements like "creation science" is methodological naturalism. Its main point is that a distinction between natural and supernatural explanations should be made and that science should be restricted methodologically to natural explanations. 
That the restriction is merely methodological (rather than ontological) means that science should not consider supernatural explanations itself, but should not claim them to be wrong either. Instead, supernatural explanations should be left a matter of personal belief outside the scope of science. Methodological naturalism maintains that proper science requires strict adherence to empirical study and independent verification as a process for properly developing and evaluating explanations for observable phenomena. The absence of these standards, arguments from authority, biased observational studies and other common fallacies are frequently cited by supporters of methodological naturalism as characteristic of the non-science they criticize. A scientific theory is empirical and is always open to falsification if new evidence is presented. That is, no theory is ever considered strictly certain, as science accepts the concept of fallibilism. The philosopher of science Karl Popper sharply distinguished truth from certainty. He wrote that scientific knowledge "consists in the search for truth," but it "is not the search for certainty ... All human knowledge is fallible and therefore uncertain." New scientific knowledge rarely results in vast changes in our understanding. According to psychologist Keith Stanovich, it may be the media's overuse of words like "breakthrough" that leads the public to imagine that science is constantly proving everything it thought was true to be false. While there are such famous cases as the theory of relativity that required a complete reconceptualization, these are extreme exceptions. Knowledge in science is gained by a gradual synthesis of information from different experiments by various researchers across different branches of science; it is more like a climb than a leap. Theories vary in the extent to which they have been tested and verified, as well as their acceptance in the scientific community. 
For example, heliocentric theory, the theory of evolution, relativity theory, and germ theory still bear the name "theory" even though, in practice, they are considered factual. Philosopher Barry Stroud adds that, although the best definition for "knowledge" is contested, being skeptical and entertaining the "possibility" that one is incorrect is compatible with being correct. Therefore, scientists adhering to proper scientific approaches will doubt themselves even once they possess the truth. The fallibilist C. S. Peirce argued that inquiry is the struggle to resolve actual doubt and that merely quarrelsome, verbal, or hyperbolic doubt is fruitless – but also that the inquirer should try to attain genuine doubt rather than resting uncritically on common sense. He held that the successful sciences trust not to any single chain of inference (no stronger than its weakest link) but to the cable of multiple and various arguments intimately connected. Stanovich also asserts that science avoids searching for a "magic bullet"; it avoids the single-cause fallacy. This means a scientist would not ask merely "What is "the" cause of ...", but rather "What "are" the most significant "causes" of ...". This is especially the case in the more macroscopic fields of science (e.g. psychology, physical cosmology). Research often analyzes few factors at once, but these are always added to the long list of factors that are most important to consider. For example, knowing the details of only a person's genetics, or their history and upbringing, or the current situation may not explain a behavior, but a deep understanding of all these variables combined can be very predictive. Scientific research is published in an enormous range of scientific literature. Scientific journals communicate and document the results of research carried out in universities and various other research institutions, serving as an archival record of science. 
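Stanovich's point above – that no single factor may explain a behavior, while several significant causes combined can be predictive – can be illustrated with a toy calculation. Everything here is invented for illustration: the factor names, the weights, and the records are fabricated, and a real analysis would use proper statistical modeling rather than hand-picked weights.

```python
# Fabricated records: (genetic_risk, supportive_upbringing, situational_stress, outcome)
records = [
    (0.9, 0.9, 0.1, 0),  # high genetic risk offset by the other factors
    (0.3, 0.1, 0.9, 1),  # low genetic risk, but the other factors dominate
    (0.8, 0.2, 0.8, 1),
    (0.2, 0.9, 0.2, 0),
]

def single_cause(genetic, upbringing, stress):
    # "Magic bullet" rule: genetics alone decides the outcome.
    return 1 if genetic > 0.5 else 0

def combined_causes(genetic, upbringing, stress):
    # Weighted combination of several significant causes (weights invented).
    score = 0.4 * genetic + 0.3 * (1 - upbringing) + 0.3 * stress
    return 1 if score > 0.5 else 0

def accuracy(rule):
    return sum(rule(g, u, s) == y for g, u, s, y in records) / len(records)

print(accuracy(single_cause), accuracy(combined_causes))  # prints "0.5 1.0"
```

On this fabricated data the single-cause rule gets only half the cases right, while the combination of factors accounts for all of them – the "deep understanding of all these variables combined" the text describes.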
The first scientific journals, "Journal des Sçavans" followed by the "Philosophical Transactions", began publication in 1665. Since that time the total number of active periodicals has steadily increased. In 1981, one estimate for the number of scientific and technical journals in publication was 11,500. The United States National Library of Medicine currently indexes 5,516 journals that contain articles on topics related to the life sciences. Although the journals are in 39 languages, 91 percent of the indexed articles are published in English. Most scientific journals cover a single scientific field and publish the research within that field; the research is normally expressed in the form of a scientific paper. Science has become so pervasive in modern societies that it is generally considered necessary to communicate the achievements, news, and ambitions of scientists to a wider populace. Science magazines such as "New Scientist", "Science & Vie", and "Scientific American" cater to the needs of a much wider readership and provide a non-technical summary of popular areas of research, including notable discoveries and advances in certain fields of research. Science books engage the interest of many more people. Tangentially, the science fiction genre, primarily fantastic in nature, engages the public imagination and transmits the ideas, if not the methods, of science. Recent efforts to intensify or develop links between science and non-scientific disciplines such as literature or more specifically, poetry, include the "Creative Writing Science" resource developed through the Royal Literary Fund. Discoveries in fundamental science can be world-changing. 
The replication crisis is an ongoing methodological crisis, primarily affecting parts of the social and life sciences, in which scholars have found that the results of many scientific studies are difficult or impossible to replicate or reproduce on subsequent investigation, either by independent researchers or by the original researchers themselves. The crisis has long-standing roots; the phrase was coined in the early 2010s as part of a growing awareness of the problem. The replication crisis represents an important body of research in metascience, which aims to improve the quality of all scientific research while reducing waste. An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is sometimes referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science because their activities have the outward appearance of science but actually lack the "kind of utter honesty" that allows their results to be rigorously evaluated. Various types of commercial advertising, ranging from hype to fraud, may fall into these categories. Science has been described as "the most important tool" for separating valid claims from invalid ones. There can also be an element of political or ideological bias on all sides of scientific debates. Sometimes, research may be characterized as "bad science": research that may be well-intended but is actually incorrect, obsolete, incomplete, or an over-simplified exposition of scientific ideas. The term "scientific misconduct" refers to situations such as where researchers have intentionally misrepresented their published data or have purposely given credit for a discovery to the wrong person. The scientific community is a group of all interacting scientists, along with their respective societies and institutions. 
Scientists are individuals who conduct scientific research to advance knowledge in an area of interest. The term "scientist" was coined by William Whewell in 1833. In modern times, many professional scientists are trained in an academic setting and, upon completion, attain an academic degree, with the highest degree being a doctorate such as a Doctor of Philosophy (PhD). Many scientists pursue careers in various sectors of the economy such as academia, industry, government, and nonprofit organizations. Scientists exhibit a strong curiosity about reality, and some desire to apply scientific knowledge for the benefit of health, nations, the environment, or industries. Other motivations include recognition by their peers and prestige. The Nobel Prize, a widely recognized and prestigious award, is awarded annually to those who have achieved scientific advances in the fields of medicine, physics, chemistry, and economics. Science has historically been a male-dominated field, with some notable exceptions. Women faced considerable discrimination in science, much as they did in other areas of male-dominated societies, such as frequently being passed over for job opportunities and denied credit for their work. For example, Christine Ladd (1847–1930) was able to enter a PhD program as "C. Ladd"; Christine "Kitty" Ladd completed the requirements in 1882, but was awarded her degree only in 1926, after a career which spanned the algebra of logic (see truth table), color vision, and psychology. Her work preceded that of notable researchers like Ludwig Wittgenstein and Charles Sanders Peirce. The achievements of women in science have been attributed to their defiance of their traditional role as laborers within the domestic sphere. 
In the late 20th century, active recruitment of women and elimination of institutional discrimination on the basis of sex greatly increased the number of women scientists, but large gender disparities remain in some fields; in the early 21st century over half of new biologists were female, while 80% of PhDs in physics are given to men. In the early part of the 21st century, women in the United States earned 50.3% of bachelor's degrees, 45.6% of master's degrees, and 40.7% of PhDs in science and engineering fields. They earned more than half of the degrees in psychology (about 70%), social sciences (about 50%), and biology (about 50-60%) but earned less than half the degrees in the physical sciences, earth sciences, mathematics, engineering, and computer science. Lifestyle choice also plays a major role in female engagement in science; women with young children are 28% less likely to take tenure-track positions due to work-life balance issues, and female graduate students' interest in careers in research declines dramatically over the course of graduate school, whereas that of their male colleagues remains unchanged. Learned societies for the communication and promotion of scientific thought and experimentation have existed since the Renaissance. Many scientists belong to a learned society that promotes their respective scientific discipline, profession, or group of related disciplines. Membership may be open to all, may require possession of some scientific credentials, or may be an honor conferred by election. Most scientific societies are non-profit organizations, and many are professional associations. Their activities typically include holding regular conferences for the presentation and discussion of new research results and publishing or sponsoring academic journals in their discipline. Some also act as professional bodies, regulating the activities of their members in the public interest or the collective interest of the membership. 
Scholars in the sociology of science argue that learned societies are of key importance, and their formation assists in the emergence and development of new disciplines or professions. The professionalization of science, begun in the 19th century, was partly enabled by the creation of distinguished academies of science in a number of countries, such as the Italian Accademia dei Lincei in 1603, the British Royal Society in 1660, the French Académie des sciences in 1666, the American National Academy of Sciences in 1863, the German Kaiser Wilhelm Institute in 1911, and the Chinese Academy of Sciences in 1928. International scientific organizations, such as the International Council for Science, have since been formed to promote cooperation between the scientific communities of different nations. Science policy is an area of public policy concerned with the policies that affect the conduct of the scientific enterprise, including research funding, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care, and environmental monitoring. Science policy also refers to the act of applying scientific knowledge and consensus to the development of public policies. Science policy thus deals with the entire domain of issues that involve the natural sciences. In accordance with public policy being concerned with the well-being of its citizens, science policy's goal is to consider how science and technology can best serve the public. State policy has influenced the funding of public works and science for thousands of years, particularly within civilizations with highly organized governments such as imperial China and the Roman Empire. Prominent historical examples include the Great Wall of China, completed over the course of two millennia through the state support of several dynasties, and the Grand Canal of the Yangtze River, an immense feat of hydraulic engineering begun by Sunshu Ao (孫叔敖 7th c. 
BCE), Ximen Bao (西門豹 5th c. BCE), and Shi Chi (4th c. BCE). Its oldest sections date from the 6th century BCE; the canal was later expanded and unified under the Sui Dynasty and is still in use today. In China, such state-supported infrastructure and scientific research projects date at least from the time of the Mohists, who inspired the study of logic during the period of the Hundred Schools of Thought and the study of defensive fortifications like the Great Wall of China during the Warring States period. Public policy can directly affect the funding of capital equipment and intellectual infrastructure for industrial research by providing tax incentives to those organizations that fund research. Vannevar Bush, director of the Office of Scientific Research and Development for the United States government, the forerunner of the National Science Foundation, wrote in July 1945 that "Science is a proper concern of government." Scientific research is often funded through a competitive process in which potential research projects are evaluated and only the most promising receive funding. Such processes, which are run by government, corporations, or foundations, allocate scarce funds. Total research funding in most developed countries is between 1.5% and 3% of GDP. In the OECD, around two-thirds of research and development in scientific and technical fields is carried out by industry, and 20% and 10% respectively by universities and government. The government funding proportion in certain industries is higher, and it dominates research in social science and humanities. Similarly, with some exceptions (e.g. biotechnology), government provides the bulk of the funds for basic scientific research. Many governments have dedicated agencies to support scientific research.
Prominent scientific organizations include the National Science Foundation in the United States, the National Scientific and Technical Research Council in Argentina, Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia, in France, the Max Planck Society and in Germany, and CSIC in Spain. In commercial research and development, all but the most research-oriented corporations focus more heavily on near-term commercialisation possibilities rather than "blue-sky" ideas or technologies (such as nuclear fusion). The public awareness of science relates to the attitudes, behaviors, opinions, and activities that make up the relations between science and the general public. It integrates various themes and activities such as science communication, science museums, science festivals, science fairs, citizen science, and science in popular culture. Social scientists have devised various metrics to measure the public understanding of science such as factual knowledge, self-reported knowledge, and structural knowledge. The mass media face a number of pressures that can prevent them from accurately depicting competing scientific claims in terms of their credibility within the scientific community as a whole. Determining how much weight to give different sides in a scientific debate may require considerable expertise regarding the matter. Few journalists have real scientific knowledge, and even beat reporters who know a great deal about certain scientific issues may be ignorant about other scientific issues that they are suddenly asked to cover. Politicization of science occurs when government, business, or advocacy groups use legal or economic pressure to influence the findings of scientific research or the way it is disseminated, reported, or interpreted.
Many factors can act as facets of the politicization of science such as populist anti-intellectualism, perceived threats to religious beliefs, postmodernist subjectivism, and fear for business interests. Politicization of science is usually accomplished when scientific information is presented in a way that emphasizes the uncertainty associated with the scientific evidence. Tactics such as shifting conversation, failing to acknowledge facts, and capitalizing on doubt of scientific consensus have been used to gain more attention for views that have been undermined by scientific evidence. Examples of issues that have involved the politicization of science include the global warming controversy, health effects of pesticides, and health effects of tobacco.
https://en.wikipedia.org/wiki?curid=26700
Statistic A statistic (singular) or sample statistic is any quantity computed from values in a sample that is used for a statistical purpose. Statistical purposes include estimating a population parameter, describing a sample, or evaluating a hypothesis. The average (also known as the mean) of sample values is a statistic. The term statistic is used both for the function and for the value of the function on a given sample. When a statistic is being used for a specific purpose, it may be referred to by a name indicating its purpose. When a statistic is used to estimate a population parameter, it is called an estimator. A population parameter is any characteristic of a population under study, but when it is not feasible to directly measure the value of a population parameter, statistical methods are used to infer the likely value of the parameter on the basis of a statistic computed from a sample taken from the population. For example, the mean of a sample is an unbiased estimator of the population mean. This means that its expected value equals the population mean, so the average of many sample means tends to converge to the true mean of the population. In descriptive statistics, a descriptive statistic is used to describe the sample data in some useful way. In statistical hypothesis testing, a test statistic is used to test a hypothesis. Note that a single statistic can be used for multiple purposes – for example the sample mean can be used to estimate the population mean, to describe a sample data set, or to test a hypothesis. A statistic is an "observable" random variable, which differentiates it both from a "parameter" that is a generally unobservable quantity describing a property of a statistical population, and from an unobservable random variable, such as the difference between an observed measurement and a population average.
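The unbiasedness claim above can be sketched with a small simulation: draw many samples, compute the mean of each (a statistic), and check that the average of those means lands near the true population mean. The population, sample size, and number of repetitions below are illustrative assumptions, not values from the article.

```python
import random

# Sketch: the sample mean as an unbiased estimator of the population mean.
# Averaging the means of many independent samples should approximate the
# population mean. All sizes and the distribution are arbitrary choices.

random.seed(42)
population = [random.uniform(0, 100) for _ in range(10_000)]
population_mean = sum(population) / len(population)  # the parameter

sample_means = []  # each entry is one statistic, computed from one sample
for _ in range(2_000):
    sample = random.sample(population, 50)
    sample_means.append(sum(sample) / len(sample))

average_of_means = sum(sample_means) / len(sample_means)
print(f"population mean:              {population_mean:.2f}")
print(f"average of 2000 sample means: {average_of_means:.2f}")
```

With these settings the two printed values agree to within a fraction of a unit, illustrating that individual sample means scatter around the parameter while their long-run average does not drift away from it.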
A parameter can only be computed exactly if the entire population can be observed without error; for instance, in a perfect census or for a population of standardized test takers. Statisticians often contemplate a parameterized family of probability distributions, any member of which could be the distribution of some measurable aspect of each member of a population, from which a sample is drawn randomly. For example, the parameter may be the average height of 25-year-old men in North America. The heights of the members of a sample of 100 such men are measured; the average of those 100 numbers is a statistic. The average of the heights of all members of the population is not a statistic unless that has somehow also been ascertained (such as by measuring every member of the population). The average height that would be calculated using "all" of the individual heights of "all" 25-year-old North American men is a parameter, and not a statistic. Important potential properties of statistics include completeness, consistency, sufficiency, unbiasedness, minimum mean square error, low variance, robustness, and computational convenience. The information a statistic carries about model parameters can be defined in several ways. The most common is the Fisher information, which is defined on the statistical model induced by the statistic. The Kullback information measure can also be used.
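The height example above can be made concrete. The distribution parameters below are hypothetical stand-ins: in a real study the population values are exactly what cannot be observed, which is why the sample average stands in for them.

```python
import random

# Sketch of the parameter/statistic distinction: a hypothetical population
# of 25-year-old men's heights (mean and spread chosen arbitrarily, in cm),
# a sample of 100 of them, and the two kinds of averages.

random.seed(0)
heights = [random.gauss(177.0, 7.0) for _ in range(50_000)]  # hypothetical

parameter = sum(heights) / len(heights)   # average over the whole population
sample = random.sample(heights, 100)
statistic = sum(sample) / len(sample)     # average of the 100 observed values

print(f"parameter (population mean): {parameter:.1f} cm")
print(f"statistic (sample mean):     {statistic:.1f} cm")
```

Only the second number is computable in practice; the first exists conceptually but would require measuring every member of the population.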
https://en.wikipedia.org/wiki?curid=26703
Sean Connery Sir Thomas Sean Connery (born 25 August 1930) is a Scottish retired actor and producer, who has won an Academy Award, two BAFTA Awards (one being a BAFTA Academy Fellowship Award), and three Golden Globes, including the Cecil B. DeMille Award and a Henrietta Award. Connery was the first actor to portray the character James Bond in film, starring in seven Bond films (every film from "Dr. No" to "You Only Live Twice", plus "Diamonds Are Forever" and "Never Say Never Again"), between 1962 and 1983. In 1988, Connery won the Academy Award for Best Supporting Actor for his role in "The Untouchables". His films also include "Marnie" (1964), "Murder on the Orient Express" (1974), "The Man Who Would Be King" (1975), "The Name of the Rose" (1986), "Highlander" (1986), "Indiana Jones and the Last Crusade" (1989), "The Hunt for Red October" (1990), "Dragonheart" (1996), "The Rock" (1996), and "Finding Forrester" (2000). Connery has been polled in "The Sunday Herald" as "The Greatest Living Scot" and in a EuroMillions survey as "Scotland's Greatest Living National Treasure". He was voted by "People" magazine as both the "Sexiest Man Alive" in 1989 and the "Sexiest Man of the Century" in 1999. He received a lifetime achievement award in the US with a Kennedy Center Honor in 1999. Connery was knighted in the 2000 New Year Honours for services to film drama. Thomas Sean Connery, named Thomas after his grandfather, was born in Fountainbridge, Edinburgh, Scotland on 25 August 1930. His mother, Euphemia "Effie" McBain McLean, was a cleaning woman. She was born the daughter of Neil McLean and Helen Forbes Ross, and named after her father's mother Euphemia McBain, wife of John McLean and daughter of William McBain from Ceres in Fife. Connery's father, Joseph Connery, was a factory worker and lorry driver. His paternal grandfather's parents emigrated to Scotland from Ireland in the mid-19th century.
The remainder of his family was of Scottish descent, and his maternal great-grandparents were native Scottish Gaelic speakers from Fife (unusual for speakers of the language), and Uig on Skye. His father was a Roman Catholic, and his mother was a Protestant. He has a younger brother, Neil. Connery has said that he was called Sean, his middle name, long before becoming an actor, explaining that when he was young he had an Irish friend named Séamus and that those who knew them both had decided to call Connery by his middle name whenever both were present. He was generally referred to in his youth as "Tommy". Although he was small in primary school, he grew rapidly around the age of 12, reaching his full adult height at 18. He was known during his teen years as "Big Tam", and has stated that he lost his virginity to an adult woman in an ATS uniform at the age of 14. Connery's first job was as a milkman in Edinburgh with St. Cuthbert's Co-operative Society. In 2009 Connery recalled a conversation in a taxi: In 1946, at the age of 16, Connery joined the Royal Navy, during which time he acquired two tattoos, of which his official website says "unlike many tattoos, his were not frivolous—his tattoos reflect two of his lifelong commitments: his family and Scotland. ... One tattoo is a tribute to his parents and reads 'Mum and Dad,' and the other is self-explanatory, 'Scotland Forever.'" He trained in Portsmouth at the naval gunnery school and in an anti-aircraft crew. He was later assigned as an Able Seaman on HMS "Formidable". Connery was later discharged from the navy at age 19 on medical grounds because of a duodenal ulcer, a condition that affected most of the males in previous generations of his family. Afterwards, he returned to the co-op, then worked as, among other things, a lorry driver, a lifeguard at Portobello swimming baths, a labourer, an artist's model for the Edinburgh College of Art, and after a suggestion by former Mr.
Scotland, Archie Brennan, a coffin polisher. The modelling earned him 15 shillings an hour. Artist Richard Demarco, at the time a student who painted several early pictures of Connery, described him as "very straight, slightly shy, too, too beautiful for words, a virtual Adonis". Connery began bodybuilding at the age of 18, and from 1951 trained heavily with Ellington, a former gym instructor in the British Army. While his official website claims he was third in the 1950 Mr. Universe contest, most sources place him in the 1953 competition, either third in the Junior class or failing to place in the Tall Man classification. Connery stated that he was soon deterred from bodybuilding when he found that the Americans frequently beat him in competitions because of sheer muscle size and, unlike Connery, refused to participate in athletic activity which could make them lose muscle mass. Connery was a keen footballer, having played for Bonnyrigg Rose in his younger days. He was offered a trial with East Fife. While on tour with "South Pacific", Connery played in a football match against a local team that Matt Busby, manager of Manchester United, happened to be scouting. According to reports, Busby was impressed with his physical prowess and offered Connery a contract worth £25 a week immediately after the game. Connery admits that he was tempted to accept, but he recalls, "I realised that a top-class footballer could be over the hill by the age of 30, and I was already 23. I decided to become an actor and it turned out to be one of my more intelligent moves." Seeking to supplement his income, Connery helped out backstage at the King's Theatre in late 1951. He became interested in the proceedings, and a career was launched. During a bodybuilding competition held in London in 1953, one of the competitors mentioned that auditions were being held for a production of "South Pacific", and Connery landed a small part as one of the Seabees chorus boys.
By the time the production reached Edinburgh, he had been given the part of Marine Cpl Hamilton Steeves and was understudying two of the juvenile leads, and his salary was raised from £12 to £14–10s a week. The production returned the following year by popular demand, and Connery was promoted to the featured role of Lieutenant Buzz Adams, which Larry Hagman had portrayed in the West End. While in Edinburgh, Connery was targeted by the Valdor gang, one of the most violent in the city. He was first approached by them in a billiard hall where he prevented them from stealing his jacket and was later followed by six gang members to a 15-foot-high balcony at the Palais. There Connery launched an attack singlehandedly against the gang members, grabbing one by the throat and another by the biceps and cracking their heads together. From then on he was treated with great respect by the gang and gained a reputation as a "hard man". Connery first met Michael Caine at a party during the production of "South Pacific" in 1954, and the two later became close friends. During the production of "South Pacific" at the Opera House, Manchester over the Christmas period of 1954, Connery developed a serious interest in the theatre through American actor Robert Henderson who lent him copies of the Henrik Ibsen works "Hedda Gabler", "The Wild Duck", and "When We Dead Awaken", and later listed works by the likes of Marcel Proust, Leo Tolstoy, Ivan Turgenev, George Bernard Shaw, James Joyce and William Shakespeare for him to digest. Henderson urged him to take elocution lessons and got him parts at the Maida Vale Theatre in London. He had already begun a film career, having been an extra in Herbert Wilcox's 1954 musical "Lilacs in the Spring" alongside Anna Neagle.
Although Connery had secured several roles as extras, he was struggling to make ends meet, and was forced to accept a part-time job as a babysitter for journalist Peter Noble and his actress wife Mary, which earned him 10 shillings a night. He met Hollywood actress Shelley Winters one night at Noble's house, who described Connery as "one of the tallest and most charming and masculine Scotsmen" she'd ever seen, and later spent many evenings with the Connery brothers drinking beer. Around this time Connery was residing at TV presenter Llew Gardner's house. Henderson landed Connery a role in a £6 a week Q Theatre production of Agatha Christie's "Witness for the Prosecution", during which he met and became friends with fellow-Scot Ian Bannen. This role was followed by "Point of Departure" and "A Witch in Time" at Kew, a role as Pentheus opposite Yvonne Mitchell in "The Bacchae" at the Oxford Playhouse, and a role opposite Jill Bennett in Eugene O'Neill's production of "Anna Christie". During his time at the Oxford Theatre, Connery won a brief part as a boxer in the TV series "The Square Ring", before being spotted by Canadian director Alvin Rakoff, who gave him multiple roles in "The Condemned", shot on location in Dover in Kent. In 1956, Connery appeared in the theatrical production of "Epitaph", and played a minor role as a hoodlum in the "Ladies of the Manor" episode of the BBC Television police series "Dixon of Dock Green". This was followed by small television parts in "Sailor of Fortune" and "The Jack Benny Program". In early 1957, Connery hired agent Richard Hatton who got him his first film role, as Spike, a minor gangster with a speech impediment in Montgomery Tully's "No Road Back" alongside Skip Homeier, Paul Carpenter, Patricia Dainton and Norman Wooland. 
In April 1957, Rakoff—after being disappointed by Jack Palance—decided to give the young actor his first chance in a leading role, and cast Connery as Mountain McLintock in BBC Television's production of "Requiem For a Heavyweight", which also starred Warren Mitchell and Jacqueline Hill. He then played a rogue lorry driver, Johnny Yates, in Cy Endfield's "Hell Drivers" (1957) alongside Stanley Baker, Herbert Lom, Peggy Cummins and Patrick McGoohan. Later in 1957, Connery appeared in Terence Young's poorly received MGM action picture "Action of the Tiger" opposite Van Johnson, Martine Carol, Herbert Lom and Gustavo Rojo; the film was shot on location in southern Spain. He also had a minor role in Gerald Thomas's thriller "Time Lock" (1957) as a welder, appearing alongside Robert Beatty, Lee Patterson, Betty McDowall and Vincent Winter; this commenced filming on 1 December 1956 at Beaconsfield Studios. Connery had a major role in the melodrama "Another Time, Another Place" (1958) as a British reporter named Mark Trevor, caught in a love affair opposite Lana Turner and Barry Sullivan. During filming, star Turner's possessive gangster boyfriend, Johnny Stompanato, who was visiting from Los Angeles, believed she was having an affair with Connery. Connery and Turner had attended West End shows and London restaurants together. Stompanato stormed onto the film set and pointed a gun at Connery, only to have Connery disarm him and knock him flat on his back. Stompanato was banned from the set. Two Scotland Yard detectives advised Stompanato to leave and escorted him to the airport, where he boarded a plane back to the US. Connery later recounted that he had to lie low for a while after receiving threats from men linked to Stompanato's boss, Mickey Cohen. In 1959 Connery landed a leading role in Robert Stevenson's Walt Disney Productions film "Darby O'Gill and the Little People" (1959) alongside Albert Sharpe, Janet Munro, and Jimmy O'Dea. 
The film is a tale about a wily Irishman and his battle of wits with leprechauns. Upon the film's initial release, A. H. Weiler of "The New York Times" praised the cast (save Connery whom he described as "merely tall, dark, and handsome") and thought the film an "overpoweringly charming concoction of standard Gaelic tall stories, fantasy and romance." He also had prominent television roles in Rudolph Cartier's 1961 productions of "Adventure Story" and "Anna Karenina" for BBC Television, in the latter of which he co-starred with Claire Bloom. Connery's breakthrough came in the role of British secret agent James Bond. He was reluctant to commit to a film series, but understood that if the films succeeded, his career would greatly benefit. He played 007 in the first five Bond films: "Dr. No" (1962), "From Russia with Love" (1963), "Goldfinger" (1964), "Thunderball" (1965), and "You Only Live Twice" (1967) – then appeared again as Bond in "Diamonds Are Forever" (1971) and "Never Say Never Again" (1983). All seven films were commercially successful. James Bond, as portrayed by Connery, was selected as the third-greatest hero in cinema history by the American Film Institute. Connery's selection for the role of James Bond owed a lot to Dana Broccoli, wife of producer Albert "Cubby" Broccoli, who is reputed to have been instrumental in persuading her husband that Connery was the right man. James Bond's creator, Ian Fleming, originally doubted Connery's casting, saying, "He's not what I envisioned of James Bond looks", and "I'm looking for Commander Bond and not an overgrown stunt-man", adding that Connery (muscular, 6' 2", and a Scot) was unrefined. Fleming's girlfriend Blanche Blackwell told him that Connery had the requisite sexual charisma, and Fleming changed his mind after the successful "Dr. No" première. He was so impressed, he wrote Connery's heritage into the character. 
In his 1964 novel "You Only Live Twice", Fleming wrote that Bond's father was Scottish and from Glencoe. Connery's portrayal of Bond owes much to stylistic tutelage from director Terence Young, which helped polish the actor while using his physical grace and presence for the action. Lois Maxwell, who played Miss Moneypenny, related that "Terence took Sean under his wing. He took him to dinner, showed him how to walk, how to talk, even how to eat." The tutoring was successful; Connery received thousands of fan letters a week after "Dr. No's" opening, and the actor became a major male sex symbol in film. During the filming of "Thunderball" in 1965, Connery's life was in danger in the sequence with the sharks in Emilio Largo's pool. He had been concerned about this threat when he read the script. Connery insisted that Ken Adam build a special Plexiglas partition inside the pool, but this was not a fixed structure, and one of the sharks managed to pass through it. He had to abandon the pool immediately. In 2005, "From Russia with Love" was adapted by Electronic Arts into a video game, titled "James Bond 007: From Russia with Love", which featured all-new voice work by Connery, recorded by Terry Manning in the Bahamas, as well as his likeness, and those of several of the film's supporting cast. Although Bond had made him a star, Connery grew tired of the role and the pressure the franchise put on him, saying "[I am] fed up to here with the whole Bond bit" and "I have always hated that damned James Bond. I'd like to kill him". Michael Caine said of the situation, "If you were his friend in these early days you didn't raise the subject of Bond. He was, and is, a much better actor than just playing James Bond, but he became synonymous with Bond. He'd be walking down the street and people would say, "Look, there's James Bond." That was particularly upsetting to him."
While making the Bond films, Connery also starred in other films such as Alfred Hitchcock's "Marnie" (1964) and "The Hill" (1965). Connery was offered the lead role in Michelangelo Antonioni's film about "swinging London", "Blowup" (1966), but turned it down because Antonioni would not show him the complete script: only a summary that was stored in a cigarette packet. Having played Bond six times, Connery's global popularity was such that he shared a Golden Globe Henrietta Award with Charles Bronson for "World Film Favorite – Male" in 1972. He appeared in John Huston's "The Man Who Would Be King" (1975), starring opposite Michael Caine, with both actors regarding it as their favourite film. The same year, he appeared in "The Wind and the Lion", and in 1976 played Robin Hood in "Robin and Marian" where he starred opposite Audrey Hepburn who played Maid Marian. Film critic Roger Ebert – who had praised the double act of Connery and Caine in "The Man Who Would Be King" – praised Connery's chemistry with Hepburn, writing, "Connery and Hepburn seem to have arrived at a tacit understanding between themselves about their characters. They glow. They really do seem in love." In the 1970s Connery was part of ensemble casts in films such as "Murder on the Orient Express" (1974) with Vanessa Redgrave and John Gielgud, and "A Bridge Too Far" (1977) co-starring Dirk Bogarde and Laurence Olivier. In 1981 Connery appeared in the film "Time Bandits" as Agamemnon. The casting choice derives from a joke Michael Palin included in the script, in which he describes the character removing his mask as being "Sean Connery — or someone of equal but cheaper stature". When shown the script, Connery was happy to play the supporting role. In 1982, Connery narrated "G'olé!", the official film of the 1982 FIFA World Cup. Connery agreed to reprise Bond as an ageing agent 007 in "Never Say Never Again", released in October 1983.
The title, contributed by his wife, refers to his earlier statement that he would "never again" return to the role. Although the film performed well at the box office, it was plagued with production problems: strife between the director and producer, financial problems, the Fleming estate trustees' attempts to halt the film, and Connery's wrist being broken by fight choreographer Steven Seagal. As a result of his negative experiences during filming, Connery became unhappy with the major studios and did not make any films for two years. Following the successful European production "The Name of the Rose" (1986), for which he won a BAFTA award, Connery's interest in more commercial material was revived. That same year, a supporting role in "Highlander" showcased his ability to play older mentors to younger leads, which became a recurring role in many of his later films. The following year, his acclaimed performance as a hard-nosed Irish-American cop in "The Untouchables" (1987) earned him his only Academy Award for Best Supporting Actor. His subsequent box-office hits included "Indiana Jones and the Last Crusade" (1989), in which he played Henry Jones, Sr., the title character's father, "The Hunt for Red October" (1990) (where he was reportedly called in at two weeks' notice), "The Russia House" (1990), "The Rock" (1996), and "Entrapment" (1999). In 1996, he voiced the role of Draco the dragon in the film "Dragonheart". In 1998, Connery received a BAFTA Academy Fellowship Award. Connery's later films included several box office and critical disappointments such as "First Knight" (1995), "Just Cause" (1995), "The Avengers" (1998), and "The League of Extraordinary Gentlemen" (2003); he received positive reviews for his performance in "Finding Forrester" (2000). He also received a Crystal Globe for outstanding artistic contribution to world cinema. In a 2003 poll conducted by Channel 4 Connery was ranked eighth on their list of the 100 Greatest Movie Stars.
The failure of "The League of Extraordinary Gentlemen" was especially frustrating for Connery, who sensed during shooting that the production was "going off the rails", announced that the director, Stephen Norrington, should be "locked up for insanity", and spent considerable effort in trying to salvage the film through the editing process, ultimately deciding to retire from acting rather than go through such stress ever again. Connery was offered the role of Gandalf in "The Lord of the Rings" series but declined it, claiming he didn't understand the script. Connery was reportedly offered $30 million along with 15 percent of the worldwide box office receipts for the role, which—had he accepted—would have earned him $450 million. Connery also turned down the opportunity to appear as the Architect in "The Matrix" trilogy for similar reasons. Connery's disillusionment with the "idiots now making films in Hollywood" was cited as a reason for his eventual decision to retire from film-making. In 2005 he recorded voiceovers for a new video game version of his Bond film "From Russia with Love". In an interview on the game disc, Connery stated that he was very happy that the producers of the game (EA Games) had approached him to voice Bond. When Connery received the American Film Institute's Lifetime Achievement Award on 8 June 2006, he confirmed his retirement from acting. On 7 June 2007, he denied rumours that he would appear in the fourth "Indiana Jones" film, stating that "retirement is just too much damned fun". In 2010, a bronze bust sculpture of Connery was placed in Tallinn, the capital of Estonia. The work is located outside Tallinn's Scottish Club, whose membership includes Estonian Scotophiles and a handful of expatriate Scots. Connery briefly came out of retirement in 2012 by voice acting the title character in the animated movie "Sir Billi the Vet". Connery served as executive producer for an expanded 80-minute version.
During the production of "South Pacific" in the mid-1950s, Connery dated a "dark-haired beauty with a ballerina's figure", Carol Sopel, but was warned off by her Jewish family. He then dated Julie Hamilton, daughter of documentary filmmaker and feminist Jill Craigie. Given Connery's rugged appearance and rough charm, Hamilton initially thought he was an appalling person and was not attracted to him until she saw him in a kilt, declaring him to be the most beautiful thing she'd ever seen in her life. He also shared a mutual attraction with jazz singer Maxine Daniels, whom he met at the Empire Theatre. He made a pass at her, but she informed him that she was already happily married with a baby daughter. Connery was married to actress Diane Cilento from 1962 to 1973, though they separated in 1971. They had a son, actor Jason Connery. In her autobiography in 2006 she alleged that he had abused her mentally and physically during their relationship; Connery had been quoted as saying that occasionally hitting a woman was "no big deal". Connery cancelled an appearance at the Scottish Parliament because of the controversy, and said he had been misquoted and that any abuse of women was unacceptable. While separated in the early 1970s, Connery dated Jill St. John, Lana Wood, Carole Mallory and Magda Konopka. Connery has been married to Moroccan-French painter Micheline Roquebrune (born 1929) since 1975. The marriage has survived a well-documented affair Connery had in the late 1980s with Lynsey de Paul. A keen golfer, Connery owned the Domaine de Terre Blanche in the South of France for twenty years (from 1979) where he planned to build his dream golf course on the land; the dream was realised when he sold it to German billionaire Dietmar Hopp in 1999. He has been awarded an honorary rank of "Shodan" (1st dan) in Kyokushin karate. Connery was knighted by Elizabeth II at an investiture ceremony at Holyrood Palace in Edinburgh on 5 July 2000.
He had been nominated for a knighthood in 1997 and 1998, but these nominations were reported to have been vetoed by Donald Dewar due to Connery's political views. Sean Connery has a villa in Kranidi, Greece. His neighbour is King Willem-Alexander of the Netherlands, with whom he shares a helicopter platform. Michael Caine (who co-starred with Connery in "The Man Who Would Be King" in 1975) is among Connery's closest friends. Connery is a keen supporter of Scottish Premiership football club Rangers F.C., having changed his allegiance from Celtic. Connery is a member of the Scottish National Party, a centre-left political party campaigning for Scottish independence from the United Kingdom, and has supported the party financially and through personal appearances. His funding of the SNP ceased in 2001, when the UK Parliament passed legislation that prohibited overseas funding of political activities in the UK. In response to accusations that he was a tax exile, Connery released documents in 2003 showing that he had paid £3.7 million in UK taxes between 1997/98 and 2002/03; critics pointed out that had he been continuously resident in the UK for tax purposes, his tax rate would have been far higher. In the run-up to the 2014 Scottish independence referendum, Connery's brother Neil said that Connery would not come to Scotland to rally independence supporters since his tax exile status greatly limited the number of days he could spend in the country. After Connery sold his Marbella villa in 1999, Spanish authorities launched an investigation into alleged tax evasion by him and his wife, alleging that the Spanish treasury had been defrauded of £5.5 million. Connery was subsequently cleared by Spanish officials but his wife and 16 others were charged with attempting to defraud the Spanish treasury.
https://en.wikipedia.org/wiki?curid=26709
Sculpture Sculpture is the branch of the visual arts that operates in three dimensions. It is one of the plastic arts. Durable sculptural processes originally used carving (the removal of material) and modelling (the addition of material, as clay), in stone, metal, ceramics, wood and other materials but, since Modernism, there has been an almost complete freedom of materials and process. A wide variety of materials may be worked by removal such as carving, assembled by welding or modelling, or moulded or cast. Sculpture in stone survives far better than works of art in perishable materials, and often represents the majority of the surviving works (other than pottery) from ancient cultures, though conversely traditions of sculpture in wood may have vanished almost entirely. However, most ancient sculpture was brightly painted, and this has been lost. Sculpture has been central in religious devotion in many cultures, and until recent centuries large sculptures, too expensive for private individuals to create, were usually an expression of religion or politics. Those cultures whose sculptures have survived in quantities include the cultures of the ancient Mediterranean, India and China, as well as many in Central and South America and Africa. The Western tradition of sculpture began in ancient Greece, and Greece is widely seen as producing great masterpieces in the classical period. During the Middle Ages, Gothic sculpture represented the agonies and passions of the Christian faith. The revival of classical models in the Renaissance produced famous sculptures such as Michelangelo's "David". Modernist sculpture moved away from traditional processes and the emphasis on the depiction of the human body, with the making of constructed sculpture, and the presentation of found objects as finished art works. 
A basic distinction is between sculpture in the round, free-standing sculpture, such as statues, not attached (except possibly at the base) to any other surface, and the various types of relief, which are at least partly attached to a background surface. Relief is often classified by the degree of projection from the wall into low or bas-relief, high relief, and sometimes an intermediate mid-relief. Sunk-relief is a technique restricted to ancient Egypt. Relief is the usual sculptural medium for large figure groups and narrative subjects, which are difficult to accomplish in the round, and is the typical technique used both for architectural sculpture, which is attached to buildings, and for small-scale sculpture decorating other objects, as in much pottery, metalwork and jewellery. Relief sculpture may also decorate steles, upright slabs, usually of stone, often also containing inscriptions. Another basic distinction is between subtractive carving techniques, which remove material from an existing block or lump, for example of stone or wood, and modelling techniques which shape or build up the work from the material. Techniques such as casting, stamping and moulding use an intermediate matrix containing the design to produce the work; many of these allow the production of several copies. The term "sculpture" is often used mainly to describe large works, which are sometimes called monumental sculpture, meaning either or both of sculpture that is large, or that is attached to a building. But the term properly covers many types of small works in three dimensions using the same techniques, including coins and medals, and hardstone carvings, a term for small carvings in stone that can take detailed work. The very large or "colossal" statue has had an enduring appeal since antiquity; the largest on record is the 2018 Indian Statue of Unity. Another grand form of portrait sculpture is the equestrian statue of a rider on horseback, which has become rare in recent decades. 
The smallest forms of life-size portrait sculpture are the "head", showing just that, or the bust, a representation of a person from the chest up. Small forms of sculpture include the figurine, normally a small statue, and for reliefs the plaquette, medal or coin. Modern and contemporary art have added a number of non-traditional forms of sculpture, including sound sculpture, light sculpture, environmental art, environmental sculpture, street art sculpture, kinetic sculpture (involving aspects of physical motion), land art, and site-specific art. Sculpture is an important form of public art. A collection of sculpture in a garden setting can be called a sculpture garden. One of the most common purposes of sculpture is in some form of association with religion. Cult images are common in many cultures, though they are often not the colossal statues of deities which characterized ancient Greek art, like the Statue of Zeus at Olympia. The actual cult images in the innermost sanctuaries of Egyptian temples, of which none have survived, were evidently rather small, even in the largest temples. The same is often true in Hinduism, where the very simple and ancient form of the lingam is the most common. Buddhism brought the sculpture of religious figures to East Asia, where there seems to have been no earlier equivalent tradition, though again simple shapes like the "bi" and "cong" probably had religious significance. Small sculptures as personal possessions go back to the earliest prehistoric art, and the use of very large sculpture as public art, especially to impress the viewer with the power of a ruler, goes back at least to the Great Sphinx of some 4,500 years ago. 
In archaeology and art history the appearance, and sometimes disappearance, of large or monumental sculpture in a culture is regarded as of great significance, though tracing the emergence is often complicated by the presumed existence of sculpture in wood and other perishable materials of which no record remains; the totem pole is an example of a tradition of monumental sculpture in wood that would leave no traces for archaeology. The ability to summon the resources to create monumental sculpture, by transporting usually very heavy materials and arranging for the payment of what are usually regarded as full-time sculptors, is considered a mark of a relatively advanced culture in terms of social organization. Recent unexpected discoveries of ancient Chinese Bronze Age figures at Sanxingdui, some more than twice human size, have disturbed many ideas held about early Chinese civilization, since only much smaller bronzes were previously known. Some undoubtedly advanced cultures, such as the Indus Valley civilization, appear to have had no monumental sculpture at all, though producing very sophisticated figurines and seals. The Mississippian culture seems to have been progressing towards its use, with small stone figures, when it collapsed. Other cultures, such as ancient Egypt and the Easter Island culture, seem to have devoted enormous resources to very large-scale monumental sculpture from a very early stage. The collecting of sculpture, including that of earlier periods, goes back some 2,000 years in Greece, China and Mesoamerica, and many collections were available on semi-public display long before the modern museum was invented. From the 20th century the relatively restricted range of subjects found in large sculpture expanded greatly, with abstract subjects and the use or representation of any type of subject now common. 
Today much sculpture is made for intermittent display in galleries and museums, and the ability to transport and store the increasingly large works is a factor in their construction. Small decorative figurines, most often in ceramics, are as popular today (though strangely neglected by modern and Contemporary art) as they were in the Rococo, or in ancient Greece when Tanagra figurines were a major industry, or in East Asian and Pre-Columbian art. Small sculpted fittings for furniture and other objects go well back into antiquity, as in the Nimrud ivories, Begram ivories and finds from the tomb of Tutankhamun. Portrait sculpture began in Egypt, where the Narmer Palette shows a ruler of the 32nd century BCE, and Mesopotamia, where we have 27 surviving statues of Gudea, who ruled Lagash c. 2144–2124 BCE. In ancient Greece and Rome, the erection of a portrait statue in a public place was almost the highest mark of honour, and the ambition of the elite, who might also be depicted on a coin. In other cultures such as Egypt and the Near East public statues were almost exclusively the preserve of the ruler, with other wealthy people only being portrayed in their tombs. Rulers are typically the only people given portraits in Pre-Columbian cultures, beginning with the Olmec colossal heads of about 3,000 years ago. East Asian portrait sculpture was entirely religious, with leading clergy being commemorated with statues, especially the founders of monasteries, but not rulers, or ancestors. The Mediterranean tradition revived, initially only for tomb effigies and coins, in the Middle Ages, but expanded greatly in the Renaissance, which invented new forms such as the personal portrait medal. Animals are, with the human figure, the earliest subject for sculpture, and have always been popular, sometimes realistic, but often imaginary monsters; in China animals and monsters are almost the only traditional subjects for stone sculpture outside tombs and temples. 
The kingdom of plants is important only in jewellery and decorative reliefs, but these form almost all the large sculpture of Byzantine art and Islamic art, and are very important in most Eurasian traditions, where motifs such as the palmette and vine scroll have passed east and west for over two millennia. One form of sculpture found in many prehistoric cultures around the world is specially enlarged versions of ordinary tools, weapons or vessels created in impractical precious materials, for either some form of ceremonial use or display or as offerings. Jade or other types of greenstone were used in China, Olmec Mexico, and Neolithic Europe, and in early Mesopotamia large pottery shapes were produced in stone. Bronze was used in Europe and China for large axes and blades, like the Oxborough Dirk. The materials used in sculpture are diverse, changing throughout history. The classic materials, with outstanding durability, are metal, especially bronze, stone and pottery, with wood, bone and antler less durable but cheaper options. Precious materials such as gold, silver, jade, and ivory are often used for small luxury works, and sometimes in larger ones, as in chryselephantine statues. More common and less expensive materials were used for sculpture for wider consumption, including hardwoods (such as oak, box/boxwood, and lime/linden); terracotta and other ceramics, wax (a very common material for models for casting, and receiving the impressions of cylinder seals and engraved gems), and cast metals such as pewter and zinc (spelter). But a vast number of other materials have been used as part of sculptures, in ethnographic and ancient works as much as modern ones. Sculptures are often painted, but commonly lose their paint to time, or restorers. Many different painting techniques have been used in making sculpture, including tempera, oil painting, gilding, house paint, aerosol, enamel and sandblasting. Many sculptors seek new ways and materials to make art. 
One of Pablo Picasso's most famous sculptures included bicycle parts. Alexander Calder and other modernists made spectacular use of painted steel. Since the 1960s, acrylics and other plastics have been used as well. Andy Goldsworthy makes his unusually ephemeral sculptures from almost entirely natural materials in natural settings. Some sculpture, such as ice sculpture, sand sculpture, and gas sculpture, is deliberately short-lived. Recent sculptors have used stained glass, tools, machine parts, hardware and consumer packaging to fashion their works. Sculptors sometimes use found objects, and Chinese scholar's rocks have been appreciated for many centuries. Stone sculpture is an ancient activity where pieces of rough natural stone are shaped by the controlled removal of stone. Owing to the permanence of the material, evidence can be found that even the earliest societies indulged in some form of stone work, though not all areas of the world have such abundance of good stone for carving as Egypt, Greece, India and most of Europe. Petroglyphs (also called rock engravings) are perhaps the earliest form: images created by removing part of a rock surface which remains "in situ", by incising, pecking, carving, and abrading. Monumental sculpture covers large works, and architectural sculpture, which is attached to buildings. Hardstone carving is the carving for artistic purposes of semi-precious stones such as jade, agate, onyx, rock crystal, sard or carnelian, and a general term for an object made in this way. Alabaster or mineral gypsum is a soft mineral that is easy to carve for smaller works and still relatively durable. Engraved gems are small carved gems, including cameos, originally used as seal rings. The copying of an original statue in stone, which was very important for ancient Greek statues, which are nearly all known from copies, was traditionally achieved by "pointing", along with more freehand methods. 
Pointing involved setting up a grid of string squares on a wooden frame surrounding the original, and then measuring the position on the grid and the distance between grid and statue of a series of individual points, and then using this information to carve into the block from which the copy is made. Bronze and related copper alloys are the oldest and still the most popular metals for cast metal sculptures; a cast bronze sculpture is often called simply a "bronze". Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Their strength and lack of brittleness (ductility) is an advantage when figures in action are to be created, especially when compared to various ceramic or stone materials (see marble sculpture for several examples). Gold is the softest and most precious metal, and very important in jewellery; with silver it is soft enough to be worked with hammers and other tools as well as cast; repoussé and chasing are among the techniques used in gold and silversmithing. Casting is a group of manufacturing processes by which a liquid material (bronze, copper, glass, aluminum, iron) is (usually) poured into a mould, which contains a hollow cavity of the desired shape, and then allowed to solidify. The solid casting is then ejected or broken out to complete the process, although a final stage of "cold work" may follow on the finished cast. Casting may be used to form hot liquid metals or various materials that "cold set" after mixing of components (such as epoxies, concrete, plaster and clay). Casting is most often used for making complex shapes that would be otherwise difficult or uneconomical to make by other methods. The oldest surviving casting is a copper Mesopotamian frog from 3200 BCE. Specific techniques include lost-wax casting, plaster mould casting and sand casting. 
Welding is a process where different pieces of metal are fused together to create different shapes and designs. There are many different forms of welding, such as Oxy-fuel welding, Stick welding, MIG welding, and TIG welding. Oxy-fuel is probably the most common method of welding when it comes to creating steel sculptures because it is the easiest to use for shaping the steel as well as making clean and less noticeable joins of the steel. The key to Oxy-fuel welding is heating each piece of metal to be joined evenly until all are red and have a shine to them. Once that shine is on each piece, that shine will soon become a 'pool' where the metal is liquified, and the welder must get the pools to join together, fusing the metal. Once cooled off, the location where the pools joined is now one continuous piece of metal. Also used heavily in Oxy-fuel sculpture creation is forging. Forging is the process of heating metal to a certain point to soften it enough to be shaped into different forms. One very common example is heating the end of a steel rod and hitting the red heated tip with a hammer while on an anvil to form a point. In between hammer swings, the forger rotates the rod and gradually forms a sharpened point from the blunt end of a steel rod. Glass may be used for sculpture through a wide range of working techniques, though the use of it for large works is a recent development. It can be carved, with considerable difficulty; the Roman Lycurgus Cup is all but unique. Hot casting can be done by ladling molten glass into moulds that have been created by pressing shapes into sand, carved graphite or detailed plaster/silica moulds. Kiln casting glass involves heating chunks of glass in a kiln until they are liquid and flow into a waiting mould below it in the kiln. Glass can also be blown and/or hot sculpted with hand tools either as a solid mass or as part of a blown object. 
More recent techniques involve chiseling and bonding plate glass with polymer silicates and UV light. Pottery is one of the oldest materials for sculpture, as well as clay being the medium in which many sculptures cast in metal are originally modelled for casting. Sculptors often build small preliminary works called maquettes of ephemeral materials such as plaster of Paris, wax, unfired clay, or plasticine. Many cultures have produced pottery which combines a function as a vessel with a sculptural form, and small figurines have often been as popular as they are in modern Western culture. Stamps and moulds were used by most ancient civilizations, from ancient Rome and Mesopotamia to China. Wood carving has been extremely widely practiced, but survives much less well than the other main materials, being vulnerable to decay, insect damage, and fire. It therefore forms an important hidden element in the art history of many cultures. Outdoor wood sculpture does not last long in most parts of the world, so that we have little idea how the totem pole tradition developed. Many of the most important sculptures of China and Japan in particular are in wood, and the great majority of African sculpture and that of Oceania and other regions. Wood is light, so suitable for masks and other sculpture intended to be carried, and can take very fine detail. It is also much easier to work than stone. It has been very often painted after carving, but the paint wears less well than the wood, and is often missing in surviving pieces. Painted wood is often technically described as "wood and polychrome". Typically a layer of gesso or plaster is applied to the wood, and then the paint is applied to that. Worldwide, sculptors have usually been tradesmen whose work is unsigned; in some traditions, for example China, where sculpture did not share the prestige of literati painting, this has affected the status of sculpture itself. 
Even in ancient Greece, where sculptors such as Phidias became famous, they appear to have retained much the same social status as other artisans, and perhaps not much greater financial rewards, although some signed their works. In the Middle Ages artists such as the 12th-century Gislebertus sometimes signed their work, and were sought after by different cities, especially from the Trecento onwards in Italy, with figures such as Arnolfo di Cambio, and Nicola Pisano and his son Giovanni. Goldsmiths and jewellers, dealing with precious materials and often doubling as bankers, belonged to powerful guilds and had considerable status, often holding civic office. Many sculptors also practised in other arts; Andrea del Verrocchio also painted, and Giovanni Pisano, Michelangelo, and Jacopo Sansovino were architects. Some sculptors maintained large workshops. Even in the Renaissance the physical nature of the work was perceived by Leonardo da Vinci and others as pulling down the status of sculpture in the arts, though the reputation of Michelangelo perhaps put this long-held idea to rest. From the High Renaissance artists such as Michelangelo, Leone Leoni and Giambologna could become wealthy, and ennobled, and enter the circle of princes, after a period of sharp argument over the relative status of sculpture and painting. Much decorative sculpture on buildings remained a trade, but sculptors producing individual pieces were recognised on a level with painters. From the 18th century or earlier sculpture also attracted middle-class students, although it was slower to do so than painting. Women sculptors took longer to appear than women painters, and were less prominent until the 20th century. Aniconism long remained restricted to Judaism, which did not accept figurative sculpture until the 19th century, before spreading to Early Christianity, which had initially accepted large sculptures. In Christianity and Buddhism, sculpture became very significant. 
Christian Eastern Orthodoxy has never accepted monumental sculpture, and Islam has consistently rejected nearly all figurative sculpture, except for very small figures in reliefs and some animal figures that fulfill a useful function, like the famous lions supporting a fountain in the Alhambra. Many forms of Protestantism also do not approve of religious sculpture. There has been much iconoclasm of sculpture from religious motives, from the Early Christians, the Beeldenstorm of the Protestant Reformation to the 2001 destruction of the Buddhas of Bamyan by the Taliban. The earliest undisputed examples of sculpture belong to the Aurignacian culture, which was located in Europe and southwest Asia and active at the beginning of the Upper Paleolithic. As well as producing some of the earliest known cave art, the people of this culture developed finely-crafted stone tools, manufacturing pendants, bracelets, ivory beads, and bone-flutes, as well as three-dimensional figurines. The 30 cm tall Löwenmensch found in the Hohlenstein-Stadel area of Germany is an anthropomorphic lion-man figure carved from woolly mammoth ivory. It has been dated to about 35,000–40,000 BP, making it, along with the Venus of Hohle Fels, the oldest known uncontested example of figurative art. Much surviving prehistoric art is small portable sculptures, with a small group of female Venus figurines such as the Venus of Willendorf (24,000–26,000 BP) found across central Europe. The Swimming Reindeer of about 13,000 years ago is one of the finest of a number of Magdalenian carvings in bone or antler of animals in the art of the Upper Paleolithic, although they are outnumbered by engraved pieces, which are sometimes classified as sculpture. Two of the largest prehistoric sculptures can be found at the Tuc d'Audoubert caves in France, where around 12,000–17,000 years ago a masterful sculptor used a spatula-like stone tool and fingers to model a pair of large bison in clay against a limestone rock. 
With the beginning of the Mesolithic in Europe figurative sculpture declined greatly, and remained a less common element in art than relief decoration of practical objects until the Roman period, despite some works such as the Gundestrup cauldron from the European Iron Age and the Bronze Age Trundholm sun chariot. From the ancient Near East, the over-life-sized stone Urfa Man from modern Turkey comes from about 9,000 BCE, and the 'Ain Ghazal Statues from around 7200 and 6500 BCE. These are from modern Jordan, made of lime plaster and reeds, and about half life-size; there are 15 statues, some with two heads side by side, and 15 busts. Small clay figures of people and animals are found at many sites across the Near East from the Pre-Pottery Neolithic, and represent the start of a more-or-less continuous tradition in the region. The Protoliterate period in Mesopotamia, dominated by Uruk, saw the production of sophisticated works like the Warka Vase and cylinder seals. The Guennol Lioness is an outstanding small limestone figure from Elam of about 3000–2800 BCE, part human and part lioness. A little later there are a number of figures of large-eyed priests and worshippers, mostly in alabaster and up to a foot high, who attended temple cult images of the deity, but very few of these have survived. Sculptures from the Sumerian and Akkadian period generally had large, staring eyes, and long beards on the men. Many masterpieces have also been found at the Royal Cemetery at Ur (c. 2650 BCE), including the two figures of a "Ram in a Thicket", the "Copper Bull" and a bull's head on one of the Lyres of Ur. From the many subsequent periods before the ascendency of the Neo-Assyrian Empire in the 10th century BCE Mesopotamian art survives in a number of forms: cylinder seals, relatively small figures in the round, and reliefs of various sizes, including cheap plaques of moulded pottery for the home, some religious and some apparently not. 
The Burney Relief is an unusually elaborate and relatively large (20 x 15 inches, 50 x 37 cm) terracotta plaque of a naked winged goddess with the feet of a bird of prey, and attendant owls and lions. It comes from the 18th or 19th centuries BCE, and may also be moulded. Stone stelae, votive offerings, or ones probably commemorating victories and showing feasts, are also found from temples, which unlike more official ones lack inscriptions that would explain them; the fragmentary Stele of the Vultures is an early example of the inscribed type, and the Assyrian Black Obelisk of Shalmaneser III a large and solid late one. The conquest of the whole of Mesopotamia and much surrounding territory by the Assyrians created a larger and wealthier state than the region had known before, and very grandiose art in palaces and public places, no doubt partly intended to match the splendour of the art of the neighbouring Egyptian empire. Unlike earlier states, the Assyrians could use easily carved stone from northern Iraq, and did so in great quantity. The Assyrians developed a style of extremely large schemes of very finely detailed narrative low reliefs in stone for palaces, with scenes of war or hunting; the British Museum has an outstanding collection, including the "Lion Hunt of Ashurbanipal" and the Lachish reliefs showing a campaign. They produced very little sculpture in the round, except for colossal guardian figures of the human-headed lamassu, which are sculpted in high relief on two sides of a rectangular block, with the heads effectively in the round (and also five legs, so that both views seem complete). Even before dominating the region they had continued the cylinder seal tradition with designs which are often exceptionally energetic and refined. The monumental sculpture of ancient Egypt is world-famous, but refined and delicate small works exist in much greater numbers. 
The Egyptians used the distinctive technique of sunk relief, which is well suited to very bright sunlight. The main figures in reliefs adhere to the same figure convention as in painting, with parted legs (where not seated) and head shown from the side, but the torso from the front, and a standard set of proportions making up the figure, using 18 "fists" to go from the ground to the hair-line on the forehead. This appears as early as the Narmer Palette from Dynasty I. However, there as elsewhere the convention is not used for minor figures shown engaged in some activity, such as the captives and corpses. Other conventions make statues of males darker than female ones. Very conventionalized portrait statues appear from as early as Dynasty II, before 2,780 BCE, and with the exception of the art of the Amarna period of Akhenaten, and some other periods such as Dynasty XII, the idealized features of rulers, like other Egyptian artistic conventions, changed little until after the Greek conquest. Egyptian pharaohs were always regarded as deities, but other deities are much less common in large statues, except when they represent the pharaoh "as" another deity; however the other deities are frequently shown in paintings and reliefs. The famous row of four colossal statues outside the main temple at Abu Simbel each show Rameses II, a typical scheme, though here exceptionally large. Small figures of deities, or their animal personifications, are very common, and found in popular materials such as pottery. Most larger sculpture survives from Egyptian temples or tombs; by Dynasty IV (2680–2565 BCE) at the latest the idea of the Ka statue was firmly established. These were put in tombs as a resting place for the "ka" portion of the soul, and so we have a good number of less conventionalized statues of well-off administrators and their wives, many in wood as Egypt is one of the few places in the world where the climate allows wood to survive over millennia. 
The so-called reserve heads, plain hairless heads, are especially naturalistic. Early tombs also contained small models of the slaves, animals, buildings and objects such as boats necessary for the deceased to continue his lifestyle in the afterworld, and later "Ushabti" figures. The first distinctive style of ancient Greek sculpture developed in the Early Bronze Age Cycladic period (3rd millennium BCE), where marble figures, usually female and small, are represented in an elegantly simplified geometrical style. Most typical is a standing pose with arms crossed in front, but other figures are shown in different poses, including a complicated figure of a harpist seated on a chair. The subsequent Minoan and Mycenaean cultures developed sculpture further, under influence from Syria and elsewhere, but it is in the later Archaic period from around 650 BCE that the kouros developed. These are large standing statues of naked youths, found in temples and tombs, with the kore as the clothed female equivalent, with elaborately dressed hair; both have the "archaic smile". They seem to have served a number of functions, perhaps sometimes representing deities and sometimes the person buried in a grave, as with the Kroisos Kouros. They are clearly influenced by Egyptian and Syrian styles, but the Greek artists were much more ready to experiment within the style. During the 6th century Greek sculpture developed rapidly, becoming more naturalistic, and with much more active and varied figure poses in narrative scenes, though still within idealized conventions. Sculptured pediments were added to temples, including the Parthenon in Athens, where the remains of the pediment of around 520 BCE using figures in the round were fortunately used as infill for new buildings after the Persian sack in 480 BCE, and recovered from the 1880s on in fresh unweathered condition. 
Other significant remains of architectural sculpture come from Paestum in Italy, Corfu, Delphi and the Temple of Aphaea in Aegina (much now in Munich). Most Greek sculpture originally included at least some colour; the Ny Carlsberg Glyptotek Museum in Copenhagen, Denmark, has done extensive research and recreation of the original colours. There are fewer original remains from the first phase of the Classical period, often called the Severe style; free-standing statues were now mostly made in bronze, which always had value as scrap. The Severe style lasted from around 500 in reliefs, and soon after 480 in statues, to about 450. The relatively rigid poses of figures relaxed, and asymmetrical turning positions and oblique views became common, and deliberately sought. This was combined with a better understanding of anatomy and the harmonious structure of sculpted figures, and the pursuit of naturalistic representation as an aim, which had not been present before. Excavations at the Temple of Zeus, Olympia since 1829 have revealed the largest group of remains, from about 460, of which many are in the Louvre. The "High Classical" period lasted only a few decades from about 450 to 400, but has had a momentous influence on art, and retains a special prestige, despite a very restricted number of original survivals. The best known works are the Parthenon Marbles, traditionally (since Plutarch) executed by a team led by the most famous ancient Greek sculptor Phidias, active from about 465–425, who was in his own day more famous for his colossal chryselephantine Statue of Zeus at Olympia (c. 432), one of the Seven Wonders of the Ancient World, his "Athena Parthenos" (438), the cult image of the Parthenon, and "Athena Promachos", a colossal bronze figure that stood next to the Parthenon; all of these are lost but are known from many representations. 
He is also credited as the creator of some life-size bronze statues known only from later copies whose identification is controversial, including the "Ludovisi Hermes". The High Classical style continued to develop realism and sophistication in the human figure, and improved the depiction of drapery (clothes), using it to add to the impact of active poses. Facial expressions were usually very restrained, even in combat scenes. The composition of groups of figures in reliefs and on pediments combined complexity and harmony in a way that had a permanent influence on Western art. Relief could be very high indeed, as in the Parthenon illustration below, where most of the leg of the warrior is completely detached from the background, as were the missing parts; relief this high made sculptures more subject to damage. The Late Classical style developed the free-standing female nude statue, supposedly an innovation of Praxiteles, and developed increasingly complex and subtle poses that were interesting when viewed from a number of angles, as well as more expressive faces; both trends were to be taken much further in the Hellenistic period. The Hellenistic period is conventionally dated from the death of Alexander the Great in 323 BCE, and ending either with the final conquest of the Greek heartlands by Rome in 146 BCE or with the final defeat of the last remaining successor-state to Alexander's empire after the Battle of Actium in 31 BCE, which also marks the end of Republican Rome. It is thus much longer than the previous periods, and includes at least two major phases: a "Pergamene" style of experimentation, exuberance and some sentimentality and vulgarity, and in the 2nd century BCE a classicising return to a more austere simplicity and elegance; beyond such generalizations dating is typically very uncertain, especially when only later copies are known, as is usually the case. 
The initial Pergamene style was not especially associated with Pergamon, from which it takes its name, but the very wealthy kings of that state were among the first to collect and also copy Classical sculpture, and also commissioned much new work, including the famous Pergamon Altar whose sculpture is now mostly in Berlin and which exemplifies the new style, as do the Mausoleum at Halicarnassus (another of the Seven Wonders), the famous "Laocoön and his Sons" in the Vatican Museums, a late example, and the bronze original of "The Dying Gaul" (illustrated at top), which we know was part of a group actually commissioned for Pergamon in about 228 BCE, from which the Ludovisi Gaul was also a copy. The group called the Farnese Bull, possibly a 2nd-century marble original, is still larger and more complex. Hellenistic sculpture greatly expanded the range of subjects represented, partly as a result of greater general prosperity, and the emergence of a very wealthy class who had large houses decorated with sculpture, although we know that some examples of subjects that seem best suited to the home, such as children with animals, were in fact placed in temples or other public places. For a much more popular home decoration market there were Tanagra figurines, and those from other centres where small pottery figures were produced on an industrial scale, some religious but others showing animals and elegantly dressed ladies. Sculptors became more technically skilled in representing facial expressions conveying a wide variety of emotions and the portraiture of individuals, as well as representing different ages and races. The reliefs from the Mausoleum are rather atypical in that respect; most work was free-standing, and group compositions with several figures to be seen in the round, like the "Laocoön" and the Pergamon group celebrating victory over the Gauls, became popular, having been rare before. 
The Barberini Faun, showing a satyr sprawled asleep, presumably after drink, is an example of the moral relaxation of the period, and the readiness to create large and expensive sculptures of subjects that fall short of the heroic. After the conquests of Alexander, Hellenistic culture was dominant in the courts of most of the Near East and some of Central Asia, and was increasingly adopted by European elites, especially in Italy, where Greek colonies initially controlled most of the South. Hellenistic art and artists spread very widely, and were especially influential in the expanding Roman Republic and when they encountered Buddhism in the easternmost extensions of the Hellenistic area. The massive so-called Alexander Sarcophagus, found in Sidon in modern Lebanon, was probably made there at the start of the period by expatriate Greek artists for a Hellenized Persian governor. The wealth of the period led to a greatly increased production of luxury forms of small sculpture, including engraved gems and cameos, jewellery, and gold and silverware. Early Roman art was influenced by the art of Greece and that of the neighbouring Etruscans, themselves greatly influenced by their Greek trading partners. An Etruscan speciality was near life-size tomb effigies in terracotta, usually lying on top of a sarcophagus lid propped up on one elbow in the pose of a diner in that period. As the expanding Roman Republic began to conquer Greek territory, at first in Southern Italy and then the entire Hellenistic world except for the Parthian far east, official and patrician sculpture became largely an extension of the Hellenistic style, from which specifically Roman elements are hard to disentangle, especially as so much Greek sculpture survives only in copies of the Roman period. 
By the 2nd century BCE, "most of the sculptors working at Rome" were Greek, often enslaved in conquests such as that of Corinth (146 BCE), and sculptors continued to be mostly Greeks, often slaves, whose names are very rarely recorded. Vast numbers of Greek statues were imported to Rome, whether as booty or the result of extortion or commerce, and temples were often decorated with re-used Greek works. A native Italian style can be seen in the tomb monuments, which very often featured portrait busts, of prosperous middle-class Romans, and portraiture is arguably the main strength of Roman sculpture. There are no survivals from the tradition of masks of ancestors that were worn in processions at the funerals of the great families and otherwise displayed in the home, but many of the busts that survive must represent ancestral figures, perhaps from the large family tombs like the Tomb of the Scipios or the later mausolea outside the city. The famous bronze head supposedly of Lucius Junius Brutus is very variously dated, but taken as a very rare survival of Italic style under the Republic, in the preferred medium of bronze. Similarly stern and forceful heads are seen on coins of the Late Republic, and in the Imperial period coins as well as busts sent around the Empire to be placed in the basilicas of provincial cities were the main visual form of imperial propaganda; even Londinium had a near-colossal statue of Nero, though far smaller than the 30-metre-high Colossus of Nero in Rome, now lost. 
The Romans did not generally attempt to compete with free-standing Greek works of heroic exploits from history or mythology, but from early on produced historical works in relief, culminating in the great Roman triumphal columns with continuous narrative reliefs winding around them, of which those commemorating Trajan (CE 113) and Marcus Aurelius (by 193) survive in Rome, where the Ara Pacis ("Altar of Peace", 13 BCE) represents the official Greco-Roman style at its most classical and refined. Among other major examples are the earlier re-used reliefs on the Arch of Constantine and the base of the Column of Antoninus Pius (161). Campana reliefs were cheaper pottery versions of marble reliefs, and from the imperial period the taste for relief expanded to the sarcophagus. All forms of luxury small sculpture continued to be patronized, and quality could be extremely high, as in the silver Warren Cup, glass Lycurgus Cup, and large cameos like the Gemma Augustea, Gonzaga Cameo and the "Great Cameo of France". For a much wider section of the population, moulded relief decoration of pottery vessels and small figurines were produced in great quantity and often considerable quality. After moving through a late 2nd-century "baroque" phase, in the 3rd century Roman art largely abandoned, or simply became unable to produce, sculpture in the classical tradition, a change whose causes remain much discussed. Even the most important imperial monuments now showed stumpy, large-eyed figures in a harsh frontal style, in simple compositions emphasizing power at the expense of grace. The contrast is famously illustrated in the Arch of Constantine of 315 in Rome, which combines sections in the new style with roundels in the earlier full Greco-Roman style taken from elsewhere, and the "Four Tetrarchs" (c. 305) from the new capital of Constantinople, now in Venice. 
Ernst Kitzinger found in both monuments the same "stubby proportions, angular movements, an ordering of parts through symmetry and repetition and a rendering of features and drapery folds through incisions rather than modelling... The hallmark of the style wherever it appears consists of an emphatic hardness, heaviness and angularity—in short, an almost complete rejection of the classical tradition". This revolution in style shortly preceded the period in which Christianity was adopted by the Roman state and the great majority of the people, leading to the end of large religious sculpture, with large statues now only used for emperors. However, rich Christians continued to commission reliefs for sarcophagi, as in the Sarcophagus of Junius Bassus, and very small sculpture, especially in ivory, was continued by Christians, building on the style of the consular diptych. The Early Christians were opposed to monumental religious sculpture, though continuing Roman traditions in portrait busts and sarcophagus reliefs, as well as smaller objects such as the consular diptych. Such objects, often in valuable materials, were also the main sculptural traditions (as far as is known) of the barbarian civilizations of the Migration period, as seen in the objects found in the 6th-century burial treasure at Sutton Hoo, the jewellery of Scythian art, and the hybrid Christian and animal style productions of Insular art. Following the continuing Byzantine tradition, Carolingian art revived ivory carving, often in panels for the treasure bindings of grand illuminated manuscripts, as well as crozier heads and other small fittings. Byzantine art, though producing superb ivory reliefs and architectural decorative carving, never returned to monumental sculpture, or even much small sculpture in the round. However, in the West during the Carolingian and Ottonian periods there were the beginnings of a production of monumental statues, in courts and major churches. 
This gradually spread; by the late 10th and 11th century there are records of several apparently life-size sculptures in Anglo-Saxon churches, probably of precious metal around a wooden frame, like the Golden Madonna of Essen. No Anglo-Saxon example has survived, and survivals of large non-architectural sculpture from before 1000 are exceptionally rare. Much the finest is the Gero Cross, of 965–970, a crucifix, which was evidently the commonest type of sculpture; Charlemagne had set one up in the Palatine Chapel in Aachen around 800. These continued to grow in popularity, especially in Germany and Italy. The rune stones of the Nordic world, the Pictish stones of Scotland and possibly the high cross reliefs of Christian Great Britain were northern sculptural traditions that bridged the period of Christianization. From about 1000 there was a general rebirth of artistic production in all Europe, led by general economic growth in production and commerce, and the new style of Romanesque art was the first medieval style to be used in the whole of Western Europe. The new cathedrals and pilgrims' churches were increasingly decorated with architectural stone reliefs, and new focuses for sculpture developed, such as the tympanum over church doors in the 12th century, and the inhabited capital with figures and often narrative scenes. Outstanding abbey churches with sculpture include in France Vézelay and Moissac and in Spain Silos. Romanesque art was characterised by a very vigorous style in both sculpture and painting. The capitals of columns were never more exciting than in this period, when they were often carved with complete scenes with several figures. The large wooden crucifix was a German innovation right at the start of the period, as were free-standing statues of the enthroned Madonna, but the high relief was above all the sculptural mode of the period. 
Compositions usually had little depth, and needed to be flexible to squeeze themselves into the shapes of capitals and church tympanums; the tension between a tightly enclosing frame, from which the composition sometimes escapes, is a recurrent theme in Romanesque art. Figures still often varied in size in relation to their importance, and portraiture hardly existed. Objects in precious materials such as ivory and metal had a very high status in the period, much more so than monumental sculpture; we know the names of more makers of these than painters, illuminators or architect-masons. Metalwork, including decoration in enamel, became very sophisticated, and many spectacular shrines made to hold relics have survived, of which the best known is the Shrine of the Three Kings at Cologne Cathedral by Nicholas of Verdun. The bronze Gloucester candlestick and the brass font of 1108–17 now in Liège are superb examples, very different in style, of metal casting, the former highly intricate and energetic, drawing on manuscript painting, while the font shows the Mosan style at its most classical and majestic. The bronze doors, a triumphal column and other fittings at Hildesheim Cathedral, the Gniezno Doors, and the doors of the Basilica di San Zeno in Verona are other substantial survivals. The aquamanile, a container for water to wash with, appears to have been introduced to Europe in the 11th century, and often took fantastic zoomorphic forms; surviving examples are mostly in brass. Many wax impressions from impressive seals survive on charters and documents, although Romanesque coins are generally not of great aesthetic interest. The Cloisters Cross is an unusually large ivory crucifix, with complex carving including many figures of prophets and others, which has been attributed to one of the relatively few artists whose name is known, Master Hugo, who also illuminated manuscripts. Like many pieces it was originally partly coloured. 
The Lewis chessmen are well-preserved examples of small ivories, of which many pieces or fragments remain from croziers, plaques, pectoral crosses and similar objects. The Gothic period is essentially defined by Gothic architecture, and does not entirely fit with the development of style in sculpture in either its start or finish. The facades of large churches, especially around doors, continued to have large tympanums, but also rows of sculpted figures spreading around them. The statues on the Western (Royal) Portal at Chartres Cathedral (c. 1145) show an elegant but exaggerated columnar elongation, but those on the south transept portal, from 1215 to 1220, show a more naturalistic style and increasing detachment from the wall behind, and some awareness of the classical tradition. These trends were continued in the west portal at Reims Cathedral of a few years later, where the figures are almost in the round, as became usual as Gothic spread across Europe. In Italy Nicola Pisano (1258–1278) and his son Giovanni developed a style that is often called Proto-Renaissance, with unmistakable influence from Roman sarcophagi and sophisticated and crowded compositions, including a sympathetic handling of nudity, in relief panels on their pulpit of Siena Cathedral (1265–68), the Fontana Maggiore in Perugia, and Giovanni's pulpit in Pistoia of 1301. Another revival of classical style is seen in the International Gothic work of Claus Sluter and his followers in Burgundy and Flanders around 1400. Late Gothic sculpture continued in the North, with a fashion for very large wooden sculpted altarpieces with increasingly virtuoso carving and large numbers of agitated expressive figures; most surviving examples are in Germany, after much iconoclasm elsewhere. Tilman Riemenschneider, Veit Stoss and others continued the style well into the 16th century, gradually absorbing Italian Renaissance influences. 
Life-size tomb effigies in stone or alabaster became popular for the wealthy, and grand multi-level tombs evolved, with the Scaliger Tombs of Verona so large they had to be moved outside the church. By the 15th century there was an industry exporting Nottingham alabaster altar reliefs in groups of panels over much of Europe for economical parishes who could not afford stone retables. Small carvings, for a mainly lay and often female market, became a considerable industry in Paris and some other centres. Types of ivories included small devotional polyptychs, single figures, especially of the Virgin, mirror-cases, combs, and elaborate caskets with scenes from Romances, used as engagement presents. The very wealthy collected extravagantly elaborate jewelled and enamelled metalwork, both secular and religious, like the Duc de Berry's Holy Thorn Reliquary, until they ran short of money, when they were melted down again for cash. Renaissance sculpture proper is often taken to begin with the famous competition for the doors of the Florence Baptistry in 1403, from which the trial models submitted by the winner, Lorenzo Ghiberti, and Filippo Brunelleschi survive. Ghiberti's doors are still in place, but were undoubtedly eclipsed by his second pair for the other entrance, the so-called "Gates of Paradise", which took him from 1425 to 1452, and are dazzlingly confident classicizing compositions with varied depths of relief allowing extensive backgrounds. The intervening years had seen Ghiberti's early assistant Donatello develop with seminal statues including his "Davids" in marble (1408–09) and bronze (1440s), and his Equestrian statue of Gattamelata, as well as reliefs. 
A leading figure in the later period was Andrea del Verrocchio, best known for his equestrian statue of Bartolomeo Colleoni in Venice; his pupil Leonardo da Vinci designed an equine sculpture, "The Horse", for Milan in 1482, but only succeeded in making a clay model, which was destroyed by French archers in 1499, and his other ambitious sculptural plans were never completed. The period was marked by a great increase in patronage of sculpture by the state for public art and by the wealthy for their homes; especially in Italy, public sculpture remains a crucial element in the appearance of historic city centres. Church sculpture mostly moved inside just as outside public monuments became common. Portrait sculpture, usually in busts, became popular in Italy around 1450, with the Neapolitan Francesco Laurana specializing in young women in meditative poses, while Antonio Rossellino and others more often depicted knobbly-faced men of affairs, but also young children. The portrait medal invented by Pisanello also often depicted women; relief plaquettes were another new small form of sculpture in cast metal. Michelangelo was an active sculptor from about 1500 to 1520, and his great masterpieces, including his "David", "Pietà", "Moses", and pieces for the Tomb of Pope Julius II and Medici Chapel, could not be ignored by subsequent sculptors. His iconic "David" (1504) has a "contrapposto" pose, borrowed from classical sculpture. It differs from previous representations of the subject in that David is depicted before his battle with Goliath and not after the giant's defeat. Instead of being shown victorious, as Donatello and Verrocchio had done, David looks tense and battle-ready. 
As in painting, early Italian Mannerist sculpture was very largely an attempt to find an original style that would top the achievement of the High Renaissance, which in sculpture essentially meant Michelangelo, and much of the struggle to achieve this was played out in commissions to fill other places in the Piazza della Signoria in Florence, next to Michelangelo's "David". Baccio Bandinelli took over the project of "Hercules and Cacus" from the master himself, but it was little more popular than it is now, and maliciously compared by Benvenuto Cellini to "a sack of melons", though it had a long-lasting effect in apparently introducing relief panels on the pedestal of statues. Like other works of his and other Mannerists it removes far more of the original block than Michelangelo would have done. Cellini's bronze "Perseus with the head of Medusa" is certainly a masterpiece, designed with eight angles of view, another Mannerist characteristic, but is indeed mannered compared to the "David"s of Michelangelo and Donatello. Originally a goldsmith, his famous gold and enamel Salt Cellar (1543) was his first sculpture, and shows his talent at its best. As these examples show, the period extended the range of secular subjects for large works beyond portraits, with mythological figures especially favoured; previously these had mostly been found in small works. Small bronze figures for collector's cabinets, often mythological subjects with nudes, were a popular Renaissance form at which Giambologna, originally Flemish but based in Florence, excelled in the later part of the century, also creating life-size sculptures, of which two joined the collection in the Piazza della Signoria. He and his followers devised elegant elongated examples of the "figura serpentinata", often of two intertwined figures, that were interesting from all angles. 
In Baroque sculpture, groups of figures assumed new importance, and there was a dynamic movement and energy of human forms; they spiralled around an empty central vortex, or reached outwards into the surrounding space. Baroque sculpture often had multiple ideal viewing angles, and reflected a general continuation of the Renaissance move away from the relief to sculpture created in the round, and designed to be placed in the middle of a large space: elaborate fountains such as Bernini's Fontana dei Quattro Fiumi (Rome, 1651), or those in the Gardens of Versailles, were a Baroque speciality. The Baroque style was perfectly suited to sculpture, with Gian Lorenzo Bernini the dominating figure of the age in works such as "The Ecstasy of St Theresa" (1647–1652). Much Baroque sculpture added extra-sculptural elements, for example, concealed lighting, or water fountains, or fused sculpture and architecture to create a transformative experience for the viewer. Artists saw themselves as in the classical tradition, but admired Hellenistic and later Roman sculpture, rather than that of the more "Classical" periods as they are seen today. The Protestant Reformation brought an almost total stop to religious sculpture in much of Northern Europe, and though secular sculpture, especially for portrait busts and tomb monuments, continued, the Dutch Golden Age has no significant sculptural component outside goldsmithing. Partly in direct reaction, sculpture was as prominent in Roman Catholicism as in the late Middle Ages. Statues of rulers and the nobility became increasingly popular. In the 18th century much sculpture continued on Baroque lines; the Trevi Fountain was only completed in 1762. Rococo style was better suited to smaller works, and arguably found its ideal sculptural form in early European porcelain, and interior decorative schemes in wood or plaster such as those in French domestic interiors and Austrian and Bavarian pilgrimage churches. 
The Neoclassical style that arrived in the late 18th century gave great emphasis to sculpture. Jean-Antoine Houdon exemplifies the penetrating portrait sculpture the style could produce, and Antonio Canova's nudes the idealist aspect of the movement. The Neoclassical period was one of the great ages of public sculpture, though its "classical" prototypes were more likely to be Roman copies of Hellenistic sculptures. In sculpture, the most familiar representatives are the Italian Antonio Canova, the Englishman John Flaxman and the Dane Bertel Thorvaldsen. The European neoclassical manner also took hold in the United States, where its pinnacle occurred somewhat later and is exemplified in the sculptures of Hiram Powers. Greco-Buddhist art is the artistic manifestation of Greco-Buddhism, a cultural syncretism between the Classical Greek culture and Buddhism, which developed over a period of close to 1000 years in Central Asia, between the conquests of Alexander the Great in the 4th century BCE, and the Islamic conquests of the 7th century CE. Greco-Buddhist art is characterized by the strong idealistic realism of Hellenistic art and the first representations of the Buddha in human form, which have helped define the artistic (and particularly, sculptural) canon for Buddhist art throughout the Asian continent up to the present. Though dating is uncertain, it appears that strongly Hellenistic styles lingered in the East for several centuries after they had declined around the Mediterranean, as late as the 5th century CE. Some aspects of Greek art were adopted while others did not spread beyond the Greco-Buddhist area; in particular the standing figure, often with a relaxed pose and one leg flexed, and the flying cupids or victories, who became popular across Asia as apsaras. Greek foliage decoration was also influential, with Indian versions of the Corinthian capital appearing. 
The origins of Greco-Buddhist art are to be found in the Hellenistic Greco-Bactrian kingdom (250–130 BCE), located in today's Afghanistan, from which Hellenistic culture radiated into the Indian subcontinent with the establishment of the small Indo-Greek kingdom (180–10 BCE). Under the Indo-Greeks and then the Kushans, the interaction of Greek and Buddhist culture flourished in the area of Gandhara, in today's northern Pakistan, before spreading further into India, influencing the art of Mathura, and then the Hindu art of the Gupta empire, which was to extend to the rest of South-East Asia. The influence of Greco-Buddhist art also spread northward towards Central Asia, strongly affecting the art of the Tarim Basin and the Dunhuang Caves, and ultimately the sculpted figure in China, Korea, and Japan. Chinese ritual bronzes from the Shang and Western Zhou Dynasties come from a period of over a thousand years from c. 1500 BCE, and have exerted a continuing influence over Chinese art. They are cast with complex patterned and zoomorphic decoration, but avoid the human figure, unlike the huge figures only recently discovered at Sanxingdui. The spectacular Terracotta Army was assembled for the tomb of Qin Shi Huang, the first emperor of a unified China, who reigned from 221 to 210 BCE, as a grand imperial version of the figures long placed in tombs to enable the deceased to enjoy the same lifestyle in the afterlife as when alive, replacing actual sacrifices of very early periods. Smaller figures in pottery or wood were placed in tombs for many centuries afterwards, reaching a peak of quality in Tang dynasty tomb figures. The tradition of unusually large pottery figures persisted in China, through Tang sancai tomb figures to later Buddhist statues such as the near life-size set of Yixian glazed pottery luohans and later figures for temples and tombs. These came to replace earlier equivalents in wood. 
Native Chinese religions do not usually use cult images of deities, or even represent them, and large religious sculpture is nearly all Buddhist, dating mostly from the 4th to the 14th century, and initially using Greco-Buddhist models arriving via the Silk Road. Buddhism is also the context of all large portrait sculpture; in total contrast to some other areas, in medieval China even painted images of the emperor were regarded as private. Imperial tombs have spectacular avenues of approach lined with real and mythological animals on a scale matching Egypt, and smaller versions decorate temples and palaces. Small Buddhist figures and groups were produced to a very high quality in a range of media, as was relief decoration of all sorts of objects, especially in metalwork and jade. In the earlier periods, large quantities of sculpture were cut from the living rock in pilgrimage cave-complexes, and as outside rock reliefs. These were mostly originally painted. In notable contrast to literati painters, sculptors of all sorts were regarded as artisans and very few names are recorded. From the Ming dynasty onwards, statuettes of religious and secular figures were produced in Chinese porcelain and other media, which became an important export. Towards the end of the long Neolithic Jōmon period, some pottery vessels were "flame-rimmed" with extravagant extensions to the rim that can only be called sculptural, and very stylized pottery dogū figures were produced, many with the characteristic "snow-goggle" eyes. During the Kofun period of the 3rd to 6th century CE, haniwa terracotta figures of humans and animals in a simplistic style were erected outside important tombs. The arrival of Buddhism in the 6th century brought with it sophisticated traditions in sculpture, Chinese styles mediated via Korea. 
The 7th-century Hōryū-ji and its contents have survived more intact than any East Asian Buddhist temple of its date, with works including a "Shaka Trinity" of 623 in bronze, showing the historical Buddha flanked by two bodhisattvas, and also the Guardian Kings of the Four Directions. The wooden image (9th century) of Shakyamuni, the "historic" Buddha, enshrined in a secondary building at the Murō-ji, is typical of the early Heian sculpture, with its ponderous body, covered by thick drapery folds carved in the hompa-shiki (rolling-wave) style, and its austere, withdrawn facial expression. The Kei school of sculptors, particularly Unkei, created a new, more realistic style of sculpture. Almost all subsequent significant large sculpture in Japan was Buddhist, with some Shinto equivalents, and after Buddhism declined in Japan in the 15th century, monumental sculpture became largely architectural decoration and less significant. However, sculptural work in the decorative arts was developed to a remarkable level of technical achievement and refinement in small objects such as inro and netsuke in many materials, and metal mountings for Japanese swords. In the 19th century there were export industries of small bronze sculptures of extreme virtuosity, ivory and porcelain figurines, and other types of small sculpture, increasingly emphasizing technical accomplishment. The first known sculpture in the Indian subcontinent is from the Indus Valley civilization (3300–1700 BCE), found in sites at Mohenjo-daro and Harappa in modern-day Pakistan. These include the famous small bronze female dancer. However, such figures in bronze and stone are rare and greatly outnumbered by pottery figurines and stone seals, often of animals or deities very finely depicted. After the collapse of the Indus Valley civilization there is little record of sculpture until the Buddhist era, apart from a hoard of copper figures of (somewhat controversially) c. 1500 BCE from Daimabad. 
Thus the great tradition of Indian monumental sculpture in stone appears to begin relatively late, in relation to other cultures and to the development of Indian civilization, with the reign of Ashoka from 270 to 232 BCE, and the Pillars of Ashoka he erected around India, carrying his edicts and topped by famous sculptures of animals, mostly lions, of which six survive. Large amounts of figurative sculpture, mostly in relief, survive from Early Buddhist pilgrimage stupas, above all Sanchi; these probably developed out of a tradition using wood that also embraced Hinduism. The pink sandstone Hindu, Jain and Buddhist sculptures of Mathura from the 1st to 3rd centuries CE reflected both native Indian traditions and the Western influences received through the Greco-Buddhist art of Gandhara, and effectively established the basis for subsequent Indian religious sculpture. The style was developed and diffused through most of India under the Gupta Empire (c. 320–550), which remains a "classical" period for Indian sculpture, covering the earlier Ellora Caves, though the Elephanta Caves are probably slightly later. Later large-scale sculpture remains almost exclusively religious, and generally rather conservative, often reverting to simple frontal standing poses for deities, though the attendant spirits such as apsaras and yakshi often have sensuously curving poses. Carving is often highly detailed, with an intricate backing behind the main figure in high relief. The celebrated bronzes of the Chola dynasty (c. 850–1250) from south India, many designed to be carried in processions, include the iconic form of Shiva as Nataraja, while the massive granite carvings of Mahabalipuram date from the previous Pallava dynasty. 
The sculpture of the region tends to be characterised by a high degree of ornamentation, as seen in the great monuments of Hindu and Buddhist Khmer sculpture (9th to 13th centuries) at Angkor Wat and elsewhere, the enormous 9th-century Buddhist complex at Borobudur in Java, and the Hindu monuments of Bali. The first two of these include many reliefs as well as figures in the round; Borobudur has 2,672 relief panels, 504 Buddha statues, many semi-concealed in openwork stupas, and many large guardian figures. In Thailand and Laos, sculpture was mainly of Buddha images, often gilded, both large for temples and monasteries, and small figurines for private homes. Traditional sculpture in Myanmar emerged before the Bagan period. As elsewhere in the region, most of the wood sculptures of the Bagan and Ava periods have been lost. Traditional Anitist sculptures from the Philippines are dominated by Anitist designs mirroring the medium used and the culture involved, while being highlighted by the environments where such sculptures are usually placed. Christian and Islamic sculptures from the Philippines have different motifs compared to other Christian and Islamic sculptures elsewhere. In later periods Chinese influence predominated in Vietnam, Laos and Cambodia, and more wooden sculpture survives from across the region. Islam is famously aniconic, so the vast majority of sculpture is arabesque decoration in relief or openwork, based on vegetable motifs, but tending to geometrical abstract forms. In the very early Mshatta Facade (740s), now mostly in Berlin, there are animals within the dense arabesques in high relief, and figures of animals and men in mostly low relief are found in conjunction with decoration on many later pieces in various materials, including metalwork, ivory and ceramics. 
Figures of animals in the round were often acceptable for works used in private contexts if the object was clearly practical, so medieval Islamic art contains many metal animals that are aquamaniles, incense burners or supporters for fountains, as in the stone lions supporting the famous one in the Alhambra, culminating in the largest medieval Islamic animal figure known, the Pisa Griffin. In the same way, luxury hardstone carvings such as dagger hilts and cups may be formed as animals, especially in Mughal art. The degree of acceptability of such relaxations of strict Islamic rules varies between periods and regions, with Islamic Spain, Persia and India often leading relaxation, and is typically highest in courtly contexts. Historically, with the exception of some monumental Egyptian sculpture, most African sculpture was created in wood and other organic materials that have not survived from earlier than a few centuries ago; older pottery figures are found from a number of areas. Masks are important elements in the art of many peoples, along with human figures, often highly stylized. There is a vast variety of styles, often varying within the same context of origin depending on the use of the object, but wide regional trends are apparent; sculpture is most common among "groups of settled cultivators in the areas drained by the Niger and Congo rivers" in West Africa. Direct images of deities are relatively infrequent, but masks in particular are or were often made for religious ceremonies; today many are made for tourists as "airport art". African masks were an influence on European Modernist art, which was inspired by their lack of concern for naturalistic depiction. The Nubian Kingdom of Kush in modern Sudan was in close and often hostile contact with Egypt, and produced monumental sculpture mostly derivative of styles to the north. 
In West Africa, the earliest known sculptures are from the Nok culture which thrived between 500 BCE and 500 CE in modern Nigeria, with clay figures typically with elongated bodies and angular shapes. Later West African cultures developed bronze casting for reliefs to decorate palaces like the famous Benin Bronzes, and very fine naturalistic royal heads from around the Yoruba town of Ife in terracotta and metal from the 12th–14th centuries. Akan goldweights are a form of small metal sculptures produced over the period 1400–1900, some apparently representing proverbs and so with a narrative element rare in African sculpture, and royal regalia included impressive gold sculptured elements. Many West African figures are used in religious rituals and are often coated with materials placed on them for ceremonial offerings. The Mande-speaking peoples of the same region make pieces of wood with broad, flat surfaces, and arms and legs shaped like cylinders. In Central Africa, however, the main distinguishing characteristics include heart-shaped faces that are curved inward and display patterns of circles and dots. Populations in the African Great Lakes are not known for their sculpture. However, one style from the region is pole sculptures, carved in human shapes and decorated with geometric forms, while the tops are carved with figures of animals, people, and various objects. These poles are then placed next to graves and are associated with death and the ancestral world. The culture known from Great Zimbabwe left more impressive buildings than sculpture but the eight soapstone Zimbabwe Birds appear to have had a special significance and were mounted on monoliths. Modern Zimbabwean sculptors in soapstone have achieved considerable international success. Southern Africa's oldest known clay figures date from 400 to 600 CE and have cylindrical heads with a mixture of human and animal features. 
The creation of sculptures in Ethiopia and Eritrea can be traced back to their ancient past with the kingdoms of Dʿmt and Aksum. Christian art was established in Ethiopia with the conversion from paganism to Christianity in the 4th century CE, during the reign of king Ezana of Axum. Christian imagery decorated churches during the Aksumite period and later eras. For instance, at Lalibela, life-size saints were carved into the Church of Bet Golgotha; by tradition these were made during the reign of the Zagwe ruler Gebre Mesqel Lalibela in the 12th century, but they were more likely crafted in the 15th century during the Solomonic dynasty. However, the Church of Saint George, Lalibela, one of several examples of rock cut architecture at Lalibela containing intricate carvings, was built in the 10th–13th centuries as proven by archaeology. In ancient Sudan, the development of sculpture stretches from the simple pottery of the Kerma culture beginning around 2500 BC to the monumental statuary and architecture of the Kingdom of Kush, its last phase—the Meroitic period—ending around 350 CE (with its conquest by Ethiopia's Aksum). Beyond pottery items, the Kerma culture also made furniture that contained sculptures, such as gold cattle hoofs as the legs of beds. Sculpture during the Kingdom of Kush included full-sized statues (especially of kings and queens), smaller figurines (most commonly depicting royal servants), and reliefs in stone, which were influenced by the contemporary ancient Egyptian sculptural tradition. Sculpture in what is now Latin America developed in two separate and distinct areas, Mesoamerica in the north and Peru in the south. In both areas, sculpture was initially of stone, and later of terracotta and metal as the civilizations in these areas became more technologically proficient. 
The Mesoamerican region produced more monumental sculpture, from the massive block-like works of the Olmec and Toltec cultures, to the superb low reliefs that characterize the Mayan and Aztec cultures. In the Andean region, sculptures were typically small, but often show superb skill. In North America, wood was sculpted for totem poles, masks, utensils, war canoes and a variety of other uses, with distinct variation between different cultures and regions. The most developed styles are those of the Pacific Northwest Coast, where a group of elaborate and highly stylized formal styles developed forming the basis of a tradition that continues today. In addition to the famous totem poles, painted and carved house fronts were complemented by carved posts inside and out, as well as mortuary figures and other items. Among the Inuit of the far north, traditional carving styles in ivory and soapstone are still continued. The arrival of European Catholic culture readily adapted local skills to the prevailing Baroque style, producing enormously elaborate retablos and other mostly church sculptures in a variety of hybrid styles. The most famous of such examples in Canada is the altar area of the Notre Dame Basilica in Montreal, Quebec, which was carved by peasant "habitant" labourers. Later, artists trained in the Western academic tradition followed European styles until in the late 19th century they began to draw again on indigenous influences, notably in the Mexican baroque grotesque style known as Churrigueresque. Aboriginal peoples also adapted church sculpture in variations on Carpenter Gothic; one famous example is the "Church of the Holy Cross" in Skookumchuck Hot Springs, British Columbia. The history of sculpture in the United States after Europeans' arrival reflects the country's 18th-century foundation in Roman republican civic values and Protestant Christianity. 
Compared to areas colonized by the Spanish, sculpture got off to an extremely slow start in the British colonies, with next to no place in churches, and was only given impetus by the need to assert nationality after independence. American sculpture of the mid- to late-19th century was often classical, often romantic, but showed a bent for a dramatic, narrative, almost journalistic realism. Public buildings during the last quarter of the 19th century and the first half of the 20th century often provided an architectural setting for sculpture, especially in relief. By the 1930s the International Style of architecture and design and art deco characterized by the work of Paul Manship and Lee Lawrie and others became popular. By the 1950s, traditional sculpture education would almost be completely replaced by a Bauhaus-influenced concern for abstract design. Minimalist sculpture replaced the figure in public settings and architects almost completely stopped using sculpture in or on their designs. Modern sculptors (21st century) use both classical and abstract inspired designs. Beginning in the 1980s, there was a swing back toward figurative public sculpture; by 2000, many of the new public pieces in the United States were figurative in design. Modern classicism contrasted in many ways with the classical sculpture of the 19th century, which was characterized by commitments to naturalism (Antoine-Louis Barye), the melodramatic (François Rude), sentimentality (Jean-Baptiste Carpeaux), or a kind of stately grandiosity (Lord Leighton). Several different directions in the classical tradition were taken as the century turned, but the study of the live model and the post-Renaissance tradition was still fundamental to them. Auguste Rodin was the most renowned European sculptor of the early 20th century. He is often considered a sculptural Impressionist, as are his students, including Camille Claudel and Hugo Rheinhold, attempting to model a fleeting moment of ordinary life. 
Modern classicism showed a lesser interest in naturalism and a greater interest in formal stylization. Greater attention was paid to the rhythms of volumes and spaces—as well as greater attention to the contrasting qualities of surface (open, closed, planar, broken etc.) while less attention was paid to story-telling and convincing details of anatomy or costume. Greater attention was given to psychological effect than to physical realism, and influences from earlier styles worldwide were used. Early masters of modern classicism included: Aristide Maillol, Alexander Matveyev, Joseph Bernard, Antoine Bourdelle, Georg Kolbe, Libero Andreotti, Gustav Vigeland, Jan Stursa, Constantin Brâncuși. As the century progressed, modern classicism was adopted as the national style of the two great European totalitarian empires: Nazi Germany and Soviet Russia, who co-opted the work of earlier artists such as Kolbe and Wilhelm Lehmbruck in Germany and Matveyev in Russia. Over the 70 years of the USSR, new generations of sculptors were trained and chosen within their system, and a distinct style, socialist realism, developed, that returned to the 19th century's emphasis on melodrama and naturalism. Classical training was rooted out of art education in Western Europe (and the Americas) by 1970 and the classical variants of the 20th century were marginalized in the history of modernism. But classicism continued as the foundation of art education in the Soviet academies until 1990, providing a foundation for expressive figurative art throughout eastern Europe and parts of the Middle East. By the year 2000, the European classical tradition retains a wide appeal to the public but awaits an educational tradition to revive its contemporary development. 
Some modern classical sculptors became either more decorative/art deco (Paul Manship, Jose de Creeft, Carl Milles) or more abstractly stylized or more expressive (and Gothic) (Anton Hanak, Wilhelm Lehmbruck, Ernst Barlach, Arturo Martini)—or turned more to the Renaissance (Giacomo Manzù, Venanzo Crocetti) or stayed the same (Charles Despiau, Marcel Gimond). Modernist sculpture movements include Cubism, Geometric abstraction, De Stijl, Suprematism, Constructivism, Dadaism, Surrealism, Futurism, Formalism, Abstract expressionism, Pop-Art, Minimalism, Land art, and Installation art among others. In the early days of the 20th century, Pablo Picasso revolutionized the art of sculpture when he began creating his "constructions" fashioned by combining disparate objects and materials into one constructed piece of sculpture; the sculptural equivalent of the collage in two-dimensional art. The advent of Surrealism led to things occasionally being described as "sculpture" that would not have been so previously, such as "involuntary sculpture" in several senses, including coulage. In later years Picasso became a prolific potter, leading, with interest in historic pottery from around the world, to a revival of ceramic art, with figures such as George E. Ohr and subsequently Peter Voulkos, Kenneth Price, and Robert Arneson. Marcel Duchamp originated the use of the "found object" (French: objet trouvé) or "readymade" with pieces such as "Fountain" (1917). Similarly, the work of Constantin Brâncuși at the beginning of the century paved the way for later abstract sculpture. In revolt against the naturalism of Rodin and his late-19th-century contemporaries, Brâncuși distilled subjects down to their essences as illustrated by the elegantly refined forms of his "Bird in Space" series (1924). 
Brâncuși's impact, with his vocabulary of reduction and abstraction, is seen throughout the 1930s and 1940s, and exemplified by artists such as Gaston Lachaise, Sir Jacob Epstein, Henry Moore, Alberto Giacometti, Joan Miró, Julio González, Pablo Serrano and Jacques Lipchitz. By the 1940s, abstract sculpture was impacted and expanded by Alexander Calder, Len Lye, Jean Tinguely, and Frederick Kiesler, who were pioneers of Kinetic art. Modernist sculptors largely missed out on the huge boom in public art resulting from the demand for war memorials for the two World Wars, but from the 1950s the public and commissioning bodies became more comfortable with Modernist sculpture and large public commissions both abstract and figurative became common. Picasso was commissioned to make a maquette for a huge -high public sculpture, the so-called "Chicago Picasso" (1967). His design was ambiguous and somewhat controversial, and what the figure represents is not clear; it could be a bird, a horse, a woman or a totally abstract shape. During the late 1950s and the 1960s abstract sculptors began experimenting with a wide array of new materials and different approaches to creating their work. Surrealist imagery, anthropomorphic abstraction, new materials and combinations of new energy sources and varied surfaces and objects became characteristic of much new modernist sculpture. Collaborative projects with landscape designers, architects, and landscape architects expanded the outdoor site and contextual integration. Artists such as Isamu Noguchi, David Smith, Alexander Calder, Jean Tinguely, Richard Lippold, George Rickey, Louise Bourgeois, and Louise Nevelson came to characterize the look of modern sculpture. By the 1960s Abstract expressionism, Geometric abstraction and Minimalism, which reduces sculpture to its most essential and fundamental features, predominated. 
Some works of the period are: the Cubi works of David Smith, and the welded steel works of Sir Anthony Caro, as well as welded sculpture by a large variety of sculptors, the large-scale work of John Chamberlain, and environmental installation scale works by Mark di Suvero. Other Minimalists include Tony Smith, Donald Judd, Robert Morris, Anne Truitt, Giacomo Benevelli, Arnaldo Pomodoro, Richard Serra, Dan Flavin, Carl Andre, and John Safer who added motion and monumentality to the theme of purity of line. During the 1960s and 1970s figurative sculpture by modernist artists in stylized forms was made by artists such as Leonard Baskin, Ernest Trova, George Segal, Marisol Escobar, Paul Thek, Robert Graham in a classic articulated style, and Fernando Botero bringing his painting's 'oversized figures' into monumental sculptures. Site-specific and environmental art works by artists such as Andy Goldsworthy, Walter De Maria, Richard Long, Richard Serra, Robert Irwin, George Rickey, and Christo and Jeanne-Claude led contemporary abstract sculpture in new directions. Artists created environmental sculpture on expansive sites in the 'land art in the American West' group of projects. These land art or 'earth art' environmental-scale sculpture works are exemplified by artists such as Robert Smithson, Michael Heizer, and James Turrell (Roden Crater). Eva Hesse, Sol LeWitt, Jackie Winsor, Keith Sonnier, Bruce Nauman and Dennis Oppenheim among others were pioneers of Postminimalist sculpture. Also during the 1960s and 1970s artists as diverse as Eduardo Paolozzi, Chryssa, Claes Oldenburg, George Segal, Edward Kienholz, Nam June Paik, Wolf Vostell, Duane Hanson, and John DeAndrea explored abstraction, imagery and figuration through video art, environment, light sculpture, and installation art in new ways. Conceptual art is art in which the concept(s) or idea(s) involved in the work take precedence over traditional aesthetic and material concerns. 
Works include "One and Three Chairs" (1965) by Joseph Kosuth and "An Oak Tree" by Michael Craig-Martin, and those of Joseph Beuys, James Turrell and Jacek Tylicki. Some modern sculpture forms are now practiced outdoors, as environmental art and environmental sculpture, often in full view of spectators. Light sculpture, street art sculpture and site-specific art also often make use of the environment. Ice sculpture is a form of ephemeral sculpture that uses ice as the raw material. It is popular in China, Japan, Canada, Sweden, and Russia. Ice sculptures feature decoratively in some cuisines, especially in Asia. Kinetic sculptures are sculptures that are designed to move, which include mobiles. Snow sculptures are usually carved out of a single block of snow about on each side and weighing about 20–30 tons. The snow is densely packed into a form after having been produced by artificial means or collected from the ground after a snowfall. Sound sculptures may take the form of indoor sound installations, outdoor installations such as aeolian harps, automatons, or more or less conventional musical instruments. Sound sculpture is often site-specific. Art toys have become another format for contemporary artists since the late 1990s, such as those produced by Takashi Murakami and Kid Robot, designed by Michael Lau, or hand-made by Michael Leavitt. Sculptures are sensitive to environmental conditions such as temperature, humidity and exposure to light and ultraviolet light. Acid rain can also cause damage to certain building materials and historical monuments. This results when sulfuric acid in the rain chemically reacts with the calcium compounds in the stones (limestone, sandstone, marble and granite) to create gypsum, which then flakes off. At any time, many contemporary sculptures are on display in public places; theft was not a problem, as pieces were instantly recognisable. 
In the early 21st century the value of metal rose to such an extent that theft of massive bronze sculpture for the value of the metal became a problem, with sculpture worth millions being stolen and melted down for the relatively low value of the metal, a tiny fraction of the value of the artwork.
https://en.wikipedia.org/wiki?curid=26714
Slashdot Slashdot (sometimes abbreviated as /.) is a social news website that originally billed itself as "News for Nerds. Stuff that Matters". It features news stories on science, technology, and politics that are submitted and evaluated by site users and editors. Each story has a comments section attached to it where users can add online comments. The website was founded in 1997 by Hope College students Rob Malda, also known as "CmdrTaco", and classmate Jeff Bates, also known as "Hemos". In 2012, they sold it to DHI Group, Inc. (i.e., Dice Holdings International, which created the Dice.com website for tech job seekers). In January 2016, BIZX acquired Slashdot Media, including both slashdot.org and SourceForge. In December 2019, BIZX rebranded to Slashdot Media. Summaries of stories and links to news articles are submitted by Slashdot's own users, and each story becomes the topic of a threaded discussion among users. Discussion is moderated by a user-based moderation system. Randomly selected moderators are assigned points (typically 5) which they can use to rate a comment. Moderation applies either "−1" or "+1" to the current rating, based on whether the comment is perceived as either "normal", "offtopic", "insightful", "redundant", "interesting", or "troll" (among others). The site's comment and moderation system is administered by its own open source content management system, Slash, which is available under the GNU General Public License. In 2012, "Slashdot" had around 3.7 million unique visitors per month and received over 5300 comments per day. The site has won more than 20 awards, including People's Voice Awards in 2000 for "Best Community Site" and "Best News Site". At its peak use, a news story posted to the site with a link could overwhelm some smaller or independent sites. This phenomenon was known as the "Slashdot effect". 
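The moderation mechanics described above can be sketched in code. This is a minimal illustrative model, not the actual Slash implementation; the class and function names (`Comment`, `Moderator`, `moderate`) are hypothetical:

```python
# Sketch of Slashdot-style moderation: a moderator spends one of a small
# pool of points (typically 5) to apply a +1/-1 descriptor to a comment.
# Names and structure are hypothetical, not taken from the Slash codebase.

MOD_POINTS = 5  # points assigned to a randomly selected moderator

# each descriptor corresponds to a -1 or +1 adjustment
DESCRIPTORS = {
    "insightful": +1, "interesting": +1, "informative": +1, "funny": +1,
    "offtopic": -1, "redundant": -1, "troll": -1, "flamebait": -1,
}

class Comment:
    def __init__(self, score=1):       # registered users start at +1
        self.score = score
        self.labels = []

class Moderator:
    def __init__(self, points=MOD_POINTS):
        self.points = points

    def moderate(self, comment, descriptor):
        """Spend one point to apply a descriptor's +1/-1 to a comment."""
        if self.points <= 0:
            raise RuntimeError("no moderation points left")
        if descriptor not in DESCRIPTORS:
            raise ValueError(f"unknown descriptor: {descriptor}")
        self.points -= 1
        comment.score += DESCRIPTORS[descriptor]
        comment.labels.append(descriptor)

c = Comment()
m = Moderator()
m.moderate(c, "insightful")
print(c.score, c.labels)  # 2 ['insightful']
```

Once the moderator's points run out, further calls raise an error, mirroring the way real moderators can no longer rate comments until they are assigned new points.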
Slashdot was preceded by Rob Malda's personal website "Chips & Dips", which launched in October 1997 and featured a single "rant" each day about something that interested its author – typically something to do with Linux or open source software. At the time, Malda was a student at Hope College in Holland, Michigan, majoring in computer science. The site became "Slashdot" in September 1997 under the slogan "News for Nerds. Stuff that Matters," and quickly became a hotspot on the Internet for news and information of interest to computer geeks. The name "Slashdot" came from a somewhat "obnoxious parody of a URL" – when Malda registered the domain, he desired to make a name that was "silly and unpronounceable" – try pronouncing "h-t-t-p-colon-slash-slash-slashdot-dot-org" out loud. By June 1998, the site was seeing as many as 100,000 page views per day and advertisers began to take notice. Slashdot was co-founded by Rob Malda and Jeff Bates. By December 1998, Slashdot had net revenues of $18,000, yet its Internet profile was higher and revenues were expected to increase. On June 29, 1999, the site was sold to Linux megasite Andover.net for $1.5 million in cash and $7 million in Andover stock at the Initial public offering (IPO) price. Part of the deal was contingent upon the continued employment of Malda and Bates and on the achievement of certain "milestones". With the acquisition of Slashdot, Andover.net could now advertise itself as "the leading Linux/Open Source destination on the Internet". Andover.net merged with VA Linux on February 3, 2000, changed its name to SourceForge, Inc. on May 24, 2007, and then became Geeknet, Inc. on November 4, 2009. Slashdot's 10,000th article was posted after two and a half years on February 24, 2000, and the 100,000th article was posted on December 11, 2009 after 12 years online. 
During the first 12 years, the story with the most responses posted was the post-2004 US Presidential Election article "Kerry Concedes Election To Bush" with 5,687 posts. This followed the creation of a new article section, "politics.slashdot.org", created at the start of the 2004 election on September 7, 2004. Many of the most popular stories are political, with "Strike on Iraq" (March 19, 2003) the second-most-active article and "Barack Obama Wins US Presidency" (November 5, 2008) the third-most-active. The rest of the 10 most active articles are an article announcing the 2005 London bombings, and several articles about Evolution vs. Intelligent Design, Saddam Hussein's capture, and "Fahrenheit 9/11". Articles about Microsoft and its Windows Operating System are popular. A thread posted in 2002 titled "What's Keeping You On Windows?" was the 10th-most-active story, and an article about Windows 2000/NT4 source-code leaks was the most visited article, with more than 680,000 hits. Some controversy erupted on March 9, 2001 after an anonymous user posted the full text of Scientology's "Operating Thetan Level Three" (OT III) document in a comment attached to a Slashdot article. The Church of Scientology demanded that Slashdot remove the document under the Digital Millennium Copyright Act. A week later, in a long article, Slashdot editors explained their decision to remove the page while providing links and information on how to get the document from other sources. Slashdot Japan was launched on May 28, 2001 (although the first article was published April 5, 2001) and is an official offshoot of the US-based Web site. The site was owned by OSDN-Japan, Inc., and carried some of the US-based Slashdot articles as well as localized stories. An external site, "New Media Services", reported on the importance of online moderation on December 1, 2011. 
On Valentine's Day 2002, founder Rob Malda proposed to longtime girlfriend Kathleen Fent using the front page of Slashdot. They were married on December 8, 2002, in Las Vegas, Nevada. Slashdot implemented a paid subscription service on March 1, 2002. Slashdot's subscription model works by allowing users to pay a small fee to be able to view pages without banner ads, starting at a rate of $5 per 1,000 page views – non-subscribers may still view articles and respond to comments, with banner ads in place. On March 6, 2003, subscribers were given the ability to see articles 10 to 20 minutes before they are released to the public. Slashdot altered its threaded discussion forum display software to explicitly show domains for links in articles, as "users made a sport out of tricking unsuspecting readers into visiting [Goatse.cx]." In observance of April Fools' Day in 2006, Slashdot temporarily changed its signature teal color theme to a warm palette of bubblegum pink and changed its masthead from the usual "News for Nerds" motto to "OMG!!! Ponies!!!" Editors joked that this was done to increase female readership. In another supposed April Fools' Day joke, User Achievement tags were introduced on April 1, 2009. This system allowed users to be tagged with various achievements, such as "The Tagger" for tagging a story or "Member of the {1,2,3,4,5} Digit UID Club" for having a Slashdot UID consisting of a certain number of digits. While it was posted on April Fools' Day to allow for certain joke achievements, the system is real. Slashdot unveiled its newly redesigned site on June 4, 2006, following a CSS Redesign Competition. The winner of the competition was Alex Bendiken, who built on the initial CSS framework of the site. The new site looks similar to the old one but is more polished with more rounded curves, collapsible menus, and updated fonts. 
On November 9 that same year, Malda wrote that Slashdot attained 16,777,215 (or 2^24 − 1) comments, which broke the database for three hours until the administrators fixed the problem. On January 25, 2011, the site launched its third major redesign in its 13.5-year history, which gutted the HTML and CSS, and updated the graphics. On August 25, 2011, Malda resigned as Editor-in-Chief with immediate effect. He did not mention any plans for the future, other than spending more time with his family, catching up on some reading, and possibly writing a book. His final farewell message received over 1,400 comments within 24 hours on the site. On December 7, 2011, Slashdot announced that it would start to push what the company described as "sponsored" Ask Slashdot questions. On March 28, 2012, Slashdot launched Slashdot TV. Two months later, in May 2012, Slashdot launched SlashBI, SlashCloud, and SlashDataCenter, three websites dedicated to original journalistic content. The websites proved controversial, with longtime Slashdot users commenting that the original content ran counter to the website's longtime focus on user-generated submissions. Nick Kolakowski, the editor of the three websites, told The Next Web that the websites were "meant to complement Slashdot with an added layer of insight into a very specific area of technology, without interfering with Slashdot's longtime focus on tech-community interaction and discussion." Despite the debate, articles published on SlashCloud and SlashBI attracted attention from io9, NPR, Nieman Lab, Vanity Fair, and other publications. In September 2012, Slashdot, SourceForge, and Freecode were acquired by online job site Dice.com for $20 million, and incorporated into a subsidiary known as Slashdot Media. While initially stating that there were no plans for major changes to Slashdot, in October 2013, Slashdot launched a "beta" for a significant redesign of the site, which featured a simpler appearance and commenting system. 
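The figure 16,777,215 is exactly the largest value an unsigned 24-bit integer can hold, which suggests the comment counter was stored in a 24-bit database column (MySQL's MEDIUMINT UNSIGNED has this range); the database-schema detail is an inference here, but the arithmetic is easy to check:

```python
# 16,777,215 is the maximum of an unsigned 24-bit integer: once a
# counter stored in such a column reaches it, it cannot count higher.
max_unsigned_24bit = 2**24 - 1
print(max_unsigned_24bit)  # 16777215
```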
While initially an opt-in beta, the site automatically began migrating selected users to the new design in February 2014; the rollout led to a negative response from many longtime users, upset by the added visual complexity and the removal of features, such as comment viewing, that distinguished Slashdot from other news sites. An organized boycott of the site was held from February 10 to 17, 2014. The "beta" site was eventually shelved. In July 2015, Dice announced that it planned to sell Slashdot and SourceForge; in particular, the company stated in a filing that it was unable to "successfully [leverage] the Slashdot user base to further Dice's digital recruitment business". On January 27, 2016, the two sites were sold to the San Diego-based BizX, LLC for an undisclosed amount. Slashdot was run by its founder, Rob "CmdrTaco" Malda, from 1998 until 2011. He shared editorial responsibilities with several other editors including Timothy Lord, Patrick "Scuttlemonkey" McGarry, Jeff "Soulskill" Boehm, Rob "Samzenpus" Rozeboom, and Keith Dawson. Jonathan "cowboyneal" Pater is another popular editor of Slashdot, who came to work for Slashdot as a programmer and systems administrator. His online nickname (handle), CowboyNeal, is inspired by a Grateful Dead tribute to Neal Cassady in their song, "That's It for the Other One". He is best known as the target of the usual comic poll option, a tradition started by Chris DiBona. Slashdot runs on Slash, a content management system available under the GNU General Public License. Early versions of Slash were written by Rob Malda in the spring of 1998. After Andover.net bought Slashdot in June 1999, Slash remained free software, and anyone can contribute to its development. Slashdot's editors are primarily responsible for selecting and editing the primary stories that are posted daily by submitters. The editors provide a one-paragraph summary for each story and a link to an external website where the story originated. 
Each story becomes the topic for a threaded discussion among the site's users. A user-based moderation system is employed to filter out abusive or offensive comments. Every comment is initially given a score of "−1" to "+2", with a default score of "+1" for registered users, "0" for anonymous users (Anonymous Coward), "+2" for users with high "karma", or "−1" for users with low "karma". As moderators read comments attached to articles, they click to moderate the comment, either up ("+1") or down ("−1"). Moderators may choose to attach a particular descriptor to the comments as well, such as "normal", "offtopic", "flamebait", "troll", "redundant", "insightful", "interesting", "informative", "funny", "overrated", or "underrated", with each corresponding to a "−1" or "+1" rating. So a comment may be seen to have a rating of "+1 insightful" or "−1 troll". Comments are very rarely deleted, even if they contain hateful remarks. Since August 2019, anonymous comments and postings have been disabled. Moderation points add to a user's rating, which is known as "karma" on Slashdot. Users with high "karma" are eligible to become moderators themselves. The system does not promote regular users as "moderators" and instead assigns five moderation points at a time to users based on the number of comments they have entered in the system – once a user's moderation points are used up, they can no longer moderate articles (though they can be assigned more moderation points at a later date). Paid staff editors have an unlimited number of moderation points. A given comment can have any integer score from "−1" to "+5", and registered users of Slashdot can set a personal threshold so that no comments with a lesser score are displayed. For instance, a user reading Slashdot at level "+5" will only see the highest rated comments, while a user reading at level "−1" will see a more "unfiltered, anarchic version". 
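The scoring rules described above can be sketched in a few lines of Python. This is an illustrative model only, not code from the actual Slash codebase; the function names and the "karma" labels are invented for the example.

```python
# Illustrative sketch of Slashdot-style comment scoring (not the real Slash code):
# a comment's starting score depends on who posted it, each moderation moves the
# score up or down by one, and the total is clamped to the -1..+5 range.

MIN_SCORE, MAX_SCORE = -1, 5

def initial_score(registered, karma=None):
    """Default starting score for a new comment."""
    if not registered:
        return 0          # anonymous user ("Anonymous Coward")
    if karma == "high":
        return 2
    if karma == "low":
        return -1
    return 1              # ordinary registered user

def moderate(score, delta):
    """Apply a single +1 or -1 moderation, keeping the score within bounds."""
    assert delta in (+1, -1)
    return max(MIN_SCORE, min(MAX_SCORE, score + delta))

def visible(score, threshold):
    """A reader at a given threshold sees only comments at or above it."""
    return score >= threshold

score = initial_score(registered=True, karma="high")   # starts at +2
score = moderate(score, +1)                            # "+1 insightful" -> 3
print(visible(score, threshold=5))                     # reader at "+5" misses it
```

A reader browsing at threshold "−1" would see every comment, matching the "unfiltered, anarchic version" mentioned above.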
A meta-moderation system was implemented on September 7, 1999, to moderate the moderators and help contain abuses in the moderation system. Meta-moderators are presented with a set of moderations that they may rate as either "fair" or "unfair". For each moderation, the meta-moderator sees the original comment and the reason assigned by the moderator (e.g. "troll", "funny"), and the meta-moderator can click to see the context of comments surrounding the one that was moderated. Slashdot features discussion forums on a variety of technology- and science-related topics, or "News for Nerds", as its motto states. Articles are divided into the following sections: Slashdot uses a system of "tags" that lets users categorize stories so that related stories can be grouped and sorted. Tags are written in all lowercase, with no spaces, and limited to 64 characters. For example, articles could be tagged as being about "security" or "mozilla". Some articles are tagged with longer tags, such as "whatcouldpossiblygowrong" (expressing the perception of catastrophic risk), "suddenoutbreakofcommonsense" (used when the community feels that the subject has finally figured out something obvious), "correlationnotcausation" (used when scientific articles lack direct evidence; see correlation does not imply causation), or "getyourasstomars" (commonly seen in articles about Mars or space exploration). As an online community with primarily user-generated content, Slashdot has developed many in-jokes and internet memes over the course of its history. A popular meme (based on an unscientific Slashdot user poll) is, "In Soviet Russia, "noun" "verb" you!" This type of joke has its roots in the 1960s or earlier, and is known as a "Russian reversal". Other popular memes usually pertain to computing or technology, such as "Imagine a Beowulf cluster of these", "But does it run Linux?", or "Netcraft now confirms: BSD (or some other software package or item) is dying." 
Users will also typically refer to articles about data storage and data capacity by inquiring how much it is in units of Libraries of Congress. Sometimes bandwidth speeds are referred to in units of Libraries of Congress per second. When numbers are quoted, people will comment that the number happens to be the "combination to their luggage" (a reference to the Mel Brooks film Spaceballs) and express mock anger at the person who revealed it. Slashdotters often use the abbreviation TFA, which stands for "The fucking article", or RTFA ("Read the fucking article"), which itself is derived from the abbreviation RTFM. Usage of this abbreviation often exposes comments from posters who have not read the article linked to in the main story. Slashdotters typically like to mock then-United States Senator Ted Stevens' 2006 description of the Internet as a "series of tubes" or former Microsoft CEO Steve Ballmer's chair-throwing incident from 2005. Microsoft founder Bill Gates is a popular target of jokes by Slashdotters, and all stories about Microsoft were once identified with a graphic of Gates looking like a Borg from "Star Trek". Many Slashdotters have long talked about the supposed release of "Duke Nukem Forever", which was promised in 1997 but was delayed indefinitely (the game was eventually released in 2011). References to the game are commonly brought up in other articles about software packages that are not yet in production even though the announced delivery date has long passed (see vaporware). Having a low Slashdot user identifier (user ID) is highly valued since IDs are assigned sequentially; having one is a sign that someone has an older account and has contributed to the site longer. For Slashdot's 10-year anniversary in 2007, one of the items auctioned off in the charity auction for the Electronic Frontier Foundation was a 3-digit Slashdot user ID. 
In 2006, Slashdot had approximately 5.5 million users per month and in January 2013, the site's Alexa rank was 2,000, with the average user spending 3 minutes and 18 seconds per day on the site and 82,665 sites linking in. By 2019 this had fallen to an Alexa rank of 5,194. The primary stories on the site consist of a short synopsis paragraph, a link to the original story, and a lengthy discussion section, all contributed by users. At its peak, discussion on stories could reach up to 10,000 posts per day. Slashdot has been considered a pioneer in user-driven content, influencing other sites such as Google News and Wikipedia. There has been a dip in readership as of 2011, primarily due to the rise of technology-related blogs and Twitter feeds. In 2002, approximately 50% of Slashdot's traffic consisted of people who simply checked the headlines and clicked through, while others participated in discussion boards and took part in the community. Many links in Slashdot stories caused the linked site to be swamped by heavy traffic and its server to collapse. This was known as the "Slashdot effect", a term coined on February 15, 1999, in an article about a "new generation of niche Web portals driving unprecedented amounts of traffic to sites of interest". Slashdot has received over twenty awards, including People's Voice Awards in 2000 in both of the categories for which it was nominated ("Best Community Site" and "Best News Site"). It was also voted one of "Newsweek"'s favorite technology Web sites and rated in Yahoo!'s Top 100 Web sites as the "Best Geek Hangout" (2001). The main antagonists in the 2004 novel "Century Rain", by Alastair Reynolds – the Slashers – are named after Slashdot users. The site was mentioned briefly in the 2000 novel "Cosmonaut Keep", written by Ken MacLeod. Several tech celebrities have stated that they either checked the website regularly or participated in its discussion forums using an account. 
Some of these celebrities include: Apple co-founder Steve Wozniak, writer and actor Wil Wheaton, and id Software technical director John Carmack.
https://en.wikipedia.org/wiki?curid=26715
South Australia South Australia (abbreviated as SA) is a state in the southern central part of Australia. It covers some of the most arid parts of the country. With a total land area of , it is the fourth-largest of Australia's states and territories by area, and fifth largest by population. It has a total of 1.76 million people, and its population is the second most highly centralised in Australia, after Western Australia, with more than 77 percent of South Australians living in the capital, Adelaide, or its environs. Other population centres in the state are relatively small; Mount Gambier, the second largest centre, has a population of 28,684. South Australia shares borders with all of the other mainland states, and with the Northern Territory; it is bordered to the west by Western Australia, to the north by the Northern Territory, to the north-east by Queensland, to the east by New South Wales, to the south-east by Victoria, and to the south by the Great Australian Bight. The state comprises less than 8 percent of the Australian population and ranks fifth in population among the six states and two territories. The majority of its people reside in greater Metropolitan Adelaide. Most of the remainder are settled in fertile areas along the south-eastern coast and River Murray. The state's colonial origins are unique in Australia as a freely settled, planned British province, rather than as a convict settlement. Colonial government commenced on 28 December 1836, when the members of the council were sworn in near the Old Gum Tree. As with the rest of the continent, the region has a long history of human occupation by numerous tribes and languages. The South Australian Company established a temporary settlement at Kingscote, Kangaroo Island, on 26 July 1836, five months before Adelaide was founded. The guiding principle behind settlement was that of "systematic colonisation", a theory espoused by Edward Gibbon Wakefield that was later employed by the New Zealand Company. 
The goal was to establish the province as a centre of civilisation for free immigrants, promising civil liberties and religious tolerance. Although its history is marked by economic hardship, South Australia has remained politically innovative and culturally vibrant. Today, it is known for its fine wine and numerous cultural festivals. The state's economy is dominated by the agricultural, manufacturing and mining industries. Evidence of human activity in South Australia dates back as far as 20,000 years, with flint mining activity and rock art in the Koonalda Cave on the Nullarbor Plain. In addition wooden spears and tools were made in an area now covered in peat bog in the South East. Kangaroo Island was inhabited long before the island was cut off by rising sea levels. The first recorded European sighting of the South Australian coast was in 1627 when the Dutch ship the "Gulden Zeepaert", captained by François Thijssen, examined and mapped a section of the coastline as far east as the Nuyts Archipelago. Thijssen named the whole of the country eastward of the Leeuwin "Nuyts Land", after a distinguished passenger on board; the Hon. Pieter Nuyts, one of the Councillors of India. The coastline of South Australia was first mapped by Matthew Flinders and Nicolas Baudin in 1802, excepting the inlet later named the Port Adelaide River which was first discovered in 1831 by Captain Collet Barker and later accurately charted in 1836–37 by Colonel William Light, leader of the South Australian Colonization Commissioners' 'First Expedition' and first Surveyor-General of South Australia. The land which now forms the state of South Australia was claimed for Britain in 1788 as part of the colony of New South Wales. Although the new colony included almost two-thirds of the continent, early settlements were all on the eastern coast and only a few intrepid explorers ventured this far west. 
It took more than forty years before any serious proposals to establish settlements in the south-western portion of New South Wales were put forward. On 15 August 1834, the British Parliament passed the South Australia Act 1834 ("Foundation Act"), which empowered His Majesty to erect and establish a province or provinces in southern Australia. The act stated that the land between 132° and 141° east longitude and from 26° south latitude to the southern ocean would be allotted to the colony, and it would be convict-free. In contrast to the rest of Australia, "terra nullius" did not apply to the new province. The Letters Patent, which used the enabling provisions of the South Australia Act 1834 to fix the boundaries of the Province of South Australia, provided that "nothing in those our Letters Patent shall affect or be construed to affect the rights of any Aboriginal Natives of the said Province to the actual occupation and enjoyment in their own Persons or in the Persons of their Descendants of any Lands therein now actually occupied or enjoyed by such Natives." Although the patent guaranteed land rights under force of law for the indigenous inhabitants, it was ignored by the South Australian Company authorities and squatters. Despite strong reference to the rights of the native population in the initial proclamation by the Governor, there were many conflicts and deaths in the Australian Frontier Wars in South Australia. Survey was required before settlement of the province, and the Colonization Commissioners for South Australia appointed William Light as the leader of its 'First Expedition', tasked with examining 1500 miles of the South Australian coastline and selecting the best site for the capital, and with then planning and surveying the site of the city into one-acre Town Sections and its surrounds into 134-acre Country Sections. 
Eager to commence the establishment of their whale and seal fisheries, the South Australian Company sought, and obtained, the Commissioners' permission to send Company ships to South Australia, in advance of the surveys and ahead of the Commissioners' colonists. The Company's settlement of seven vessels and 636 people was temporarily made at Kingscote on Kangaroo Island, until the official site of the capital was selected by William Light, where the City of Adelaide is currently located. The first immigrants arrived at Holdfast Bay (near the present day Glenelg) in November 1836. The commencement of colonial government was proclaimed on 28 December 1836, now known as Proclamation Day. South Australia is the only Australian state never to have received British convicts. Another free settlement, the Swan River Colony, was established in 1829, but Western Australia later sought convict labour, and in 1849 Western Australia was formally constituted as a penal colony. Although South Australia was constituted such that convicts could never be transported to the Province, some emancipated or escaped convicts and expirees made their own way there, both before and after 1836, and may have constituted 1–2% of the early population. The plan for the province was that it would be an experiment in reform, addressing the problems perceived in British society. There was to be religious freedom and no established religion. Sales of land to colonists created an Emigration Fund to pay the costs of transferring a poor young labouring population to South Australia. In early 1838 the colonists became concerned after it was reported that convicts who had escaped from the eastern states might make their way to South Australia. The South Australia Police was formed in April 1838 to protect the community and enforce government regulations. Their principal role was to run the first temporary gaol, a two-room hut. 
The current flag of South Australia was adopted on 13 January 1904, and is a British blue ensign defaced with the state badge. The badge is described as a piping shrike with wings outstretched on a yellow disc. The state badge is believed to have been designed by Robert Craig of Adelaide's School of Design. The terrain consists largely of arid and semi-arid rangelands, with several low mountain ranges. The most important (but not tallest) is the Mount Lofty-Flinders Ranges system, which extends north about from Cape Jervis to the northern end of Lake Torrens. The highest point in the state is not in those ranges; Mount Woodroffe () is in the Musgrave Ranges in the extreme northwest of the state. The south-western portion of the state consists of the sparsely inhabited Nullarbor Plain, fronted by the cliffs of the Great Australian Bight. Features of the coast include Spencer Gulf and the Eyre and Yorke Peninsulas that surround it. The principal industries and exports of South Australia are wheat, wine and wool. More than half of Australia's wines are produced in the South Australian wine regions which principally include: Barossa Valley, Clare Valley, McLaren Vale, Coonawarra, the Riverland and the Adelaide Hills. "See South Australian wine." South Australia has boundaries with every other Australian mainland state and territory except the Australian Capital Territory and the Jervis Bay Territory. The Western Australia border has a history involving the South Australian government astronomer, Dodwell, and the Western Australian Government Astronomer, Curlewis, marking the border on the ground in the 1920s. In 1863, that part of New South Wales to the north of South Australia was annexed to South Australia, by letters patent, as the "Northern Territory of South Australia", which became shortened to the Northern Territory (6 July 1863). The Northern Territory was handed to the federal government in 1911 and became a separate territory. 
According to Australian maps, South Australia's south coast is flanked by the Southern Ocean, but official international consensus defines the Southern Ocean as extending north from the pole only to 60°S or 55°S, at least 17 degrees of latitude further south than the most southern point of South Australia. Thus the south coast is officially adjacent to the south-most portion of the Indian Ocean. "See Southern Ocean: Existence and definitions". The southern part of the state has a Mediterranean climate, while the rest of the state has either an arid or semi-arid climate. South Australia's main temperature range is in January and in July. The highest maximum temperature was recorded as at Oodnadatta on 2 January 1960, which is also the highest official temperature recorded in Australia. The lowest minimum temperature was at Yongala on 20 July 1976. South Australia's average annual employment for 2009–10 was 800,600 persons, 18% higher than for 2000–01. For the corresponding period, national average annual employment rose by 22%. South Australia's largest employment sector is health care and social assistance, surpassing manufacturing in SA as the largest employer since 2006–07. In 2009–10, manufacturing in SA had average annual employment of 83,700 persons compared with 103,300 for health care and social assistance. Health care and social assistance represented nearly 13% of the state average annual employment. The retail trade is the second largest employer in SA (2009–10), with 91,900 jobs, and 12 per cent of the state workforce. The manufacturing industry plays an important role in South Australia's economy, generating 11.7% of the state's gross state product (GSP) and playing a large part in exports. The manufacturing industry consists of automotive (44% of total Australian production, 2006) and component manufacturing, pharmaceuticals, defence technology (2.1% of GSP, 2002–03) and electronic systems (3.0% of GSP in 2006). 
South Australia's economy relies on exports more than any other state in Australia. State export earnings stood at A$10 billion per year and grew by 8.8% from 2002 to 2003. Production of South Australian food and drink (including agriculture, horticulture, aquaculture, fisheries and manufacturing) is a $10 billion industry. South Australia's credit rating was upgraded to AAA by Standard & Poor's Rating Agency in September 2004 and to AAA by Moody's Rating Agency November 2004, the highest credit ratings achievable by any company or sovereign. The State had previously lost these ratings in the State Bank collapse. However, in 2012 Standard & Poor's downgraded the state's credit rating to AA+ due to declining revenues, new spending initiatives and a weaker than expected budgetary outlook. South Australia's Gross State Product was A$48.9 billion starting 2004, making it A$32,996 per capita. Exports for 2006 were valued at $9.0bn with imports at $6.2bn. Private Residential Building Approvals experienced 80% growth over the year of 2006. South Australia's economy includes the following major industries: meat and meat preparations, wheat, wine, wool and sheepskins, machinery, metal and metal manufactures, fish and crustaceans, road vehicles and parts, and petroleum products. Other industries, such as education and defence technology, are of growing importance. South Australia receives the least amount of federal funding for its local road network of all states on a per capita and a per kilometre basis. In 2013, South Australia was named by Commsec Securities as the second lowest performing economy in Australia. While some sources have pointed at weak retail spending and capital investment, others have attributed poor performance to declines in public spending. South Australia has the lead over other Australian states for its commercialisation and commitment to renewable energy. It is now the leading producer of wind power in Australia. 
Renewable energy is a growing source of electricity in South Australia, and there is potential for growth from this particular industry of the state's economy. The Hornsdale Power Reserve is a bank of grid-connected batteries adjacent to the Hornsdale Wind Farm in South Australia's Mid-North region. At the time of construction in late 2017, it was billed as the largest lithium-ion battery in the world. The Olympic Dam mine near Roxby Downs in northern South Australia is the largest deposit of uranium in the world, possessing more than a third of the world's low-cost recoverable reserves and 70% of Australia's. The mine, owned and operated by BHP Billiton, presently accounts for 9% of global uranium production. The Olympic Dam mine is also the world's fourth-largest remaining copper deposit, and the world's fifth largest gold deposit. There was a proposal to vastly expand the operations of the mine, making it the largest open-cut mine in the world, but in 2012 the BHP Billiton board decided not to go ahead with it at that time due to then lower commodity prices. Crown land held in right of South Australia is managed under the Crown Land Management Act 2009. South Australia is a constitutional monarchy with the Queen of Australia as sovereign, and the Governor of South Australia as her representative. It is a state of the Commonwealth of Australia. The bicameral Parliament of South Australia consists of the lower house known as the House of Assembly and the upper house known as the Legislative Council. General elections are held every four years, the last being the 2018 election. Initially, the Governor of South Australia held almost total power, derived from the letters patent of the imperial government to create the colony. He was accountable only to the British Colonial Office, and thus democracy did not exist in the colony. A new body was created to advise the governor on the administration of South Australia in 1843 called the Legislative Council. 
It consisted of three representatives of the British Government and four colonists appointed by the governor. The governor retained total executive power. In 1851, the Imperial Parliament enacted the Australian Colonies Government Act, which allowed for the election of representatives to each of the colonial legislatures and the drafting of a constitution to properly create representative and responsible government in South Australia. Later that year, propertied male colonists were allowed to vote for 16 members on a new 24-seat Legislative Council. Eight members continued to be appointed by the governor. The main responsibility of this body was to draft a constitution for South Australia. The body drafted the most democratic constitution ever seen in the British Empire and provided for universal manhood suffrage. It created the bicameral Parliament of South Australia. For the first time in the colony, the executive was elected by the people, and the colony used the Westminster system, where the government is the party or coalition that commands a majority in the House of Assembly. Women's suffrage in Australia took a leap forward – enacted in 1895 and taking effect from the 1896 colonial election, South Australia was the first in Australia and only the second in the world after New Zealand to allow women to vote, and the first in the world to allow women to stand for election. In 1897 Catherine Helen Spence became the first woman in Australia to be a candidate for political office when she was nominated to be one of South Australia's delegates to the conventions that drafted the constitution. South Australia became an original state of the Commonwealth of Australia on 1 January 1901. South Australia is divided into 74 local government areas. Local councils are responsible for functions delegated by the South Australian parliament, such as road infrastructure and waste management. Council revenue comes mostly from property taxes and government grants. 
As at March 2018 the population of South Australia was 1,733,500. A majority of the state's population lives within Greater Adelaide's metropolitan area, which had an estimated population of 1,333,927 in June 2017. Other significant population centres include Mount Gambier (29,505), Victor Harbor-Goolwa (26,334), Whyalla (21,976), Murray Bridge (18,452), Port Lincoln (16,281), Port Pirie (14,267), and Port Augusta (13,957). At the 2016 census, the most commonly nominated ancestries were: 28.9% of the population was born overseas at the 2016 census. The five largest groups of overseas-born were from England (5.8%), India (1.6%), China (1.5%), Italy (1.1%) and Vietnam (0.9%). 2% of the population, or 34,184 people, identified as Indigenous Australians (Aboriginal Australians and Torres Strait Islanders) in 2016. At the 2016 census, 78.2% of the population spoke only English at home. The other languages most commonly spoken at home were Italian (1.7%), Standard Mandarin (1.7%), Greek (1.4%), Vietnamese (1.1%), and Cantonese (0.6%). At the 2016 census, overall 53.9% of responses identified some variant of Christianity. 9% of respondents chose not to state a religion. The most commonly nominated responses were 'No Religion' (35.4%), Catholicism (18%), Anglicanism (10%) and Uniting Church (7.1%). On 1 January 2009, the school leaving age was raised to 17 (having previously been 15 and then 16). Education is compulsory for all children until age 17, unless they are working or undergoing other training. The majority of students stay on to complete their South Australian Certificate of Education (SACE). School education is the responsibility of the South Australian government, but the public and private education systems are funded jointly by it and the Commonwealth Government. The South Australian Government provides, to schools on a per student basis, 89 percent of the total Government funding while the Commonwealth contributes 11 percent. 
Since the early 1970s, it has been an ongoing controversy that 68 percent of Commonwealth funding (increasing to 75% by 2008) goes to private schools that are attended by 32% of the state's students. Private schools often counter this by saying that they receive less State Government funding than public schools, and in 2004 the main private school funding came from the Australian government, not the state government. On 14 June 2013, South Australia became the third Australian state to sign up to the Australian Federal Government's Gonski Reform Program. This will see funding for primary and secondary education in South Australia increased by $1.1 billion before 2019. There are three public and four private universities in South Australia. The three public universities are the University of Adelaide (established 1874, third oldest in Australia), Flinders University (est. 1966) and the University of South Australia (est. 1991). The four private universities are Torrens University Australia (est. 2013), Carnegie Mellon University - Australia (est. 2006), University College London's School of Energy and Resources (Australia), and Cranfield University. All six have their main campus in the Adelaide metropolitan area: Adelaide and UniSA on North Terrace in the city; CMU, UCL and Cranfield are co-located on Victoria Square in the city, and Flinders at Bedford Park. Tertiary vocational education is provided by a range of Registered Training Organisations (RTOs), which are regulated at Commonwealth level. The range of RTOs delivering education includes public, private and 'enterprise' providers, i.e. employing organisations which run an RTO for their own employees or members. The largest public provider of vocational education is TAFE South Australia, which is made up of colleges throughout the state, many of these in rural areas, providing tertiary education to as many people as possible. 
In South Australia, TAFE is funded by the state government and run by the South Australian Department of Further Education, Employment, Science and Technology (DFEEST). Each TAFE SA campus provides a range of courses with its own specialisation. After settlement, the major form of transport in South Australia was ocean transport. Limited land transport was provided by horses and bullocks. In the mid 19th century, the state began to develop a widespread rail network, although a coastal shipping network continued until the post war period. Roads began to improve with the introduction of motor transport. By the late 19th century, road transport dominated internal transport in South Australia. South Australia has four interstate rail connections, to Perth via the Nullarbor Plain, to Darwin through the centre of the continent, to New South Wales through Broken Hill, and to Melbourne–which is the closest capital city to Adelaide. Rail transport is important for many mines in the north of the state. The capital Adelaide has a commuter rail network made of electric and diesel electric powered multiple units, with 6 lines between them. South Australia has extensive road networks linking towns and other states. Roads are also the most common form of transport within the major metropolitan areas with car transport predominating. Public transport in Adelaide is mostly provided by buses and trams with regular services throughout the day. Adelaide Airport provides regular flights to other capitals, major South Australian towns and many international locations. The airport also has daily flights to several Asian hub airports. Adelaide Metro buses J1 and J1X connect to the city (approx. 30 minutes travel time). Standard fares apply and tickets may be purchased from the driver. Maximum charge (September 2016) for Metroticket is $5.30; off-peak and seniors discounts may apply. 
The River Murray was formerly an important trade route for South Australia, with paddle steamers linking inland areas and the ocean at Goolwa. South Australia has a container port at Port Adelaide. There are also numerous important ports along the coast for minerals and grains. The passenger terminal at Port Adelaide periodically sees cruise liners. Kangaroo Island is dependent on the SeaLink ferry service between Cape Jervis and Penneshaw. South Australia has been known as "the Festival State" for many years, for its abundance of arts and gastronomic festivals. While much of the arts scene is concentrated in Adelaide, the state government has supported regional arts actively since the 1990s. One manifestation of this was the creation of Country Arts SA in 1992. Diana Laidlaw did much to further the arts in South Australia during her term as Arts Minister from 1993 to 2002, and after Mike Rann assumed government in 2002, he created a strategic plan in 2004 (updated 2007) which included furthering and promoting the arts in South Australia under the topic heading "Objective 4: Fostering Creativity and Innovation". In September 2019, with the arts portfolio now subsumed within the Department of the Premier and Cabinet (DPC) after the election of Steven Marshall as Premier, and the 2004 strategic plan having been deleted from the website in 2018, the "Arts and Culture Plan, South Australia 2019–2024" was created by the Department. Marshall said when launching the plan: “The arts sector in South Australia is already very strong but it’s been operating without a plan for 20 years”. However, the plan does not signal any new government support, even after the government's cuts to arts funding when Arts South Australia was absorbed into DPC in 2018. 
Specific proposals within the plan include an "Adelaide in 100 Objects" walking tour, a new shared ticketing system for small to medium arts bodies, a five-year plan to revitalise regional art centres, creation of an arts-focussed high school, and a new venue for the Adelaide Symphony Orchestra. Australian rules football is the most popular spectator sport in South Australia, with South Australians having the highest attendance rate in Australia. South Australia fields two teams in the Australian Football League (AFL) national competition: the Adelaide Football Club and Port Adelaide Football Club. As of 2015 the two clubs were in the top five in terms of membership numbers, with both clubs' membership figures reaching over 60,000. Both teams have used the Adelaide Oval as their home ground since 2014, having previously used Football Park (AAMI Stadium). The South Australian National Football League, which was the premier league in the state before the advent of the Australian Football League, is a popular local league comprising ten teams: Sturt, Port Adelaide, Adelaide, West Adelaide, South Adelaide, North Adelaide, Norwood, Woodville/West Torrens, Glenelg and Central District. The South Australian Amateur Football League comprises 68 member clubs playing over 110 matches per week across ten senior divisions and three junior divisions. The SAAFL is one of Australia's largest and strongest Australian rules football associations. Cricket is the most popular summer sport in South Australia and attracts big crowds. South Australia has a cricket team, the West End Redbacks, who play at Adelaide Oval in the Adelaide Park Lands during the summer; they won their first title since 1996 in the summer of 2010–11. Many international matches have been played at the Adelaide Oval; it was one of the host cities of the 2015 Cricket World Cup, and for many years it hosted the Australia Day One Day International.
South Australia is also home to the Adelaide Strikers, an Australian men's professional Twenty20 cricket team that competes in Australia's domestic Twenty20 competition, the Big Bash League. Adelaide United represents South Australia in soccer in the men's A-League and women's W-League. The club's home ground is Hindmarsh Stadium (Coopers Stadium), but it occasionally plays games at the Adelaide Oval. The club was founded in 2003 and won the 2015–16 A-League championship. The club was also premier in the inaugural 2005–06 A-League season, finishing 7 points clear of the rest of the competition, before finishing 3rd in the finals. Adelaide United was also a grand finalist in the 2006–07 and 2008–09 seasons. Adelaide is the only A-League club to have progressed past the group stages of the Asian Champions League on more than one occasion. Adelaide City remains South Australia's most successful club, having won three National Soccer League titles and three NSL Cups. City was the first side from South Australia to win a continental title when it claimed the 1987 Oceania Club Championship, and it has also won a record 17 South Australian championships and 17 Federation Cups. West Adelaide became the first South Australian club to be crowned Australian champion when it won the 1978 National Soccer League title. Like City, it now competes in the National Premier Leagues South Australia, and the two clubs contest the Adelaide derby. Basketball also has a big following in South Australia, with the Adelaide 36ers playing in the Adelaide Entertainment Centre. The 36ers have won four championships in the last 20 years in the National Basketball League. The Adelaide Entertainment Centre, located in Hindmarsh, is the home of basketball in the state. Mount Gambier also has a national basketball team – the Mount Gambier Pioneers.
The Pioneers play at the Icehouse (Mount Gambier Basketball Stadium), which seats over 1,000 people and is also home to the Mount Gambier Basketball Association. The Pioneers won the South Conference and the Final in 2003; this team was rated second in the top five teams to have ever played in the league. In 2012, the club entered its 25th season, with a roster of 10 senior players (two imports) and three development squad players. Australia's premier motor sport series, the Supercars Championship, has visited South Australia each year since 1999. South Australia's Supercars event, the Adelaide 500, is staged on the Adelaide Street Circuit, a temporary track laid out through the streets and parklands to the east of the Adelaide city centre. Attendance for the 2010 event totalled 277,800. An earlier version of the Adelaide Street Circuit played host to the Australian Grand Prix, a round of the FIA Formula One World Championship, each year from 1985 to 1995. Mallala Motor Sport Park, a permanent circuit located near the town of Mallala, 58 km north of Adelaide, caters for both state and national level motor sport throughout the year. The Bend Motorsport Park is another permanent circuit, located just outside Tailem Bend. Sixty-three percent of South Australian children took part in organised sports in 2002–2003. The ATP Adelaide was a tennis tournament held from 1972 to 2008; it then moved to Brisbane and was replaced in Adelaide by the World Tennis Challenge, a professional exhibition tournament that is part of the Australian Open Series. The Royal Adelaide Golf Club has also hosted nine editions of the Australian Open, most recently in 1998. The state has hosted the Tour Down Under cycle race since 1999.
https://en.wikipedia.org/wiki?curid=26716
Slime mold Slime mold or slime mould is an informal name given to several kinds of unrelated eukaryotic organisms that can live freely as single cells, but can aggregate together to form multicellular reproductive structures. Slime molds were formerly classified as fungi but are no longer considered part of that kingdom. Although not forming a single monophyletic clade, they are grouped within the paraphyletic group referred to as kingdom Protista. More than 900 species of slime mold occur globally. Their common name refers to part of some of these organisms' life cycles where they can appear as gelatinous "slime". This is mostly seen with the Myxogastria, which are the only macroscopic slime molds. Most slime molds are smaller than a few centimeters, but some species may reach sizes up to several square meters and masses up to 20 kilograms. Many slime molds, mainly the "cellular" slime molds, do not spend most of their time in this state. When food is abundant, these slime molds exist as single-celled organisms. When food is in short supply, many of these single-celled organisms will congregate and start moving as a single body. In this state they are sensitive to airborne chemicals and can detect food sources. They can readily change the shape and function of parts, and may form stalks that produce fruiting bodies, releasing countless spores, light enough to be carried on the wind or hitch a ride on passing animals. They feed on microorganisms that live in any type of dead plant material. They contribute to the decomposition of dead vegetation, and feed on bacteria, yeasts, and fungi. For this reason, slime molds are usually found in soil, lawns, and on the forest floor, commonly on deciduous logs. In tropical areas they are also common on inflorescences and fruits, and in aerial situations (e.g., in the canopy of trees). 
In urban areas, they are found on mulch or in the leaf mold in rain gutters, and also grow in air conditioners, especially when the drain is blocked. Slime molds, as a group, are polyphyletic. They were originally represented by the subkingdom Gymnomycota in the Fungi kingdom and included the defunct phyla Myxomycota, Acrasiomycota, and Labyrinthulomycota. Today, slime molds have been divided among several supergroups, "none" of which is included in the kingdom Fungi. Slime molds can generally be divided into two main groups. In more strict terms, slime molds comprise the mycetozoan group of the amoebozoa. Mycetozoa include the following three groups: the Myxogastria (plasmodial slime molds), the Dictyosteliida (cellular slime molds), and the protosteloids. Even at this level of classification there are conflicts to be resolved. Recent molecular evidence shows that, while the first two groups are likely to be monophyletic, the protosteloids are likely to be polyphyletic. For this reason, scientists are currently trying to understand the relationships among these three groups. The most commonly encountered are the Myxogastria. A common slime mold that forms tiny brown tufts on rotting logs is "Stemonitis". Another form, which lives in rotting logs and is often used in research, is "Physarum polycephalum". In logs, it has the appearance of a slimy web-work of yellow threads, up to a few feet in size. "Fuligo" forms yellow crusts in mulch. The "Dictyosteliida" – cellular slime molds – are distantly related to the plasmodial slime molds and have a very different lifestyle. Their amoebae do not form huge coenocytes, and remain individual. They live in similar habitats and feed on microorganisms. When food is depleted and they are ready to form sporangia, they do something radically different. They release signal molecules into their environment, by which they find each other and create swarms. These amoebae then join up into a tiny multicellular slug-like coordinated creature, which crawls to an open lit place and grows into a fruiting body.
Some of the amoebae become spores to begin the next generation, while others sacrifice themselves to become a dead stalk, lifting the spores up into the air. The protosteloids have characters intermediate between the previous two groups, but they are much smaller, their fruiting bodies forming only one to a few spores. Non-amoebozoan slime molds include: Slime molds begin life as amoeba-like cells. These unicellular amoebae are commonly haploid and feed on bacteria. These amoebae can mate if they encounter the correct mating type, forming zygotes that then grow into plasmodia. These contain many nuclei without cell membranes between them, and can grow to meters in size. The species "Fuligo septica" is often seen as a slimy yellow network in and on rotting logs. The amoebae and the plasmodia engulf microorganisms. The plasmodium grows into an interconnected network of protoplasmic strands. Within each protoplasmic strand, the cytoplasmic contents rapidly stream. If one strand is carefully watched for about 50 seconds, the cytoplasm can be seen to slow, stop, and then reverse direction. The streaming protoplasm within a plasmodial strand can reach speeds of up to 1.35 mm per second, the fastest rate recorded for any microorganism. Migration of the plasmodium is accomplished when more protoplasm streams to advancing areas and protoplasm is withdrawn from rear areas. When the food supply wanes, the plasmodium will migrate to the surface of its substrate and transform into rigid fruiting bodies. The fruiting bodies or sporangia are what are commonly seen. They superficially look like fungi or molds but are not related to the true fungi. These sporangia will then release spores which hatch into amoebae to begin the life cycle again. Slime molds are isogamous organisms, which means their sex cells are all the same size. There are over 900 species of slime molds that exist today.
"Physarum polycephalum" is one species that has three sex genes – "mat"A, "mat"B, and "mat"C. The first two types have thirteen separate variations. "Mat"C, however, only has three variations. Each sexually mature slime mold contains two copies of each of the three sex genes. When "Physarum polycephalum" is ready to make its sex cells, it grows a bulbous extension of its body to contain them. Each cell is created with a random combination of the genes that the slime mold contains within its genome. Therefore, it can create cells with up to eight different gene types. Once these cells are released, they are independent and tasked with finding another cell it is able to fuse with. Other "Physarum polycephalum" may contain different combinations of the "mat"A, "mat"B, and "mat"C genes, allowing over 500 possible variations. It is advantageous for organisms with this type of reproductive cells to have many sexes because the likelihood of the cells finding a partner is greatly increased. At the same time, the risk of inbreeding is drastically reduced. "Dictyostelium discoideum" is another species of slime mold that has many different sexes. When this organism has entered the stage of reproduction, it releases an attractant, called "acrasin". Acrasin is made up of cyclic adenosine monophosphate, or cyclic AMP. Cyclic AMP is crucial in passing hormone signals between sex cells. When it comes time for the cells to fuse, "Dictyostelium discoideum" has mating types of its own that dictate which cells are compatible with each other. These include NC-4, WS 582, WS 583, WS 584, WS 5-1, WS 7, WS 10, WS 11-1, WS 28-1, WS 57-6, and WS 112b. A scientific study demonstrated the compatibility of these eleven mating types of "Dictyostelium discoideum" by monitoring the formation of macrocysts. For example, WS 583 is very compatible with WS 582, but not NC-4. It was concluded that cell contact between the compatible mating types needs to occur before macrocysts can form. 
In "Myxogastria", the plasmodial portion of the life cycle only occurs after syngamy, which is the fusion of cytoplasm and nuclei of myxoamoebae or swarm cells. The diploid zygote becomes a multinucleated plasmodium through multiple nuclear divisions without further cell division. Myxomycete plasmodia are multinucleate masses of protoplasm that move by cytoplasmic streaming. In order for the plasmodium to move, cytoplasm must be diverted towards the leading edge from the lagging end. This process results in the plasmodium advancing in fan-like fronts. As it moves, plasmodium also gains nutrients through the phagocytosis of bacteria and small pieces of organic matter. The plasmodium also has the ability to subdivide and establish separate plasmodia. Conversely, separate plasmodia that are genetically similar and compatible can fuse together to create a larger plasmodium. In the event that conditions become dry, the plasmodium will form a sclerotium, essentially a dry and dormant state. In the event that conditions become moist again the sclerotium absorbs water and an active plasmodium is restored. When the food supply wanes, the Myxomycete plasmodium will enter the next stage of its life cycle forming haploid spores, often in a well-defined sporangium or other spore-bearing structure. When a slime mold mass or mound is physically separated, the cells find their way back to re-unite. Studies on "Physarum polycephalum" have even shown an ability to learn and predict periodic unfavorable conditions in laboratory experiments. John Tyler Bonner, a professor of ecology known for his studies of slime molds, argues that they are "no more than a bag of amoebae encased in a thin slime sheath, yet they manage to have various behaviors that are equal to those of animals who possess muscles and nerves with ganglia – that is, simple brains." 
Atsushi Tero of Hokkaido University grew "Physarum" in a flat wet dish, placing the mold in a central position representing Tokyo and oat flakes surrounding it corresponding to the locations of other major cities in the Greater Tokyo Area. As "Physarum" avoids bright light, light was used to simulate mountains, water and other obstacles in the dish. The mold first densely filled the space with plasmodia, and then thinned the network to focus on efficiently connected branches. The network strikingly resembled Tokyo's rail system. Slime mold "Physarum polycephalum" was also used by Andrew Adamatzky from the University of the West of England and his colleagues world-wide in experimental laboratory approximations of motorway networks of 14 geographical areas: Australia, Africa, Belgium, Brazil, Canada, China, Germany, Iberia, Italy, Malaysia, Mexico, the Netherlands, UK and US.
https://en.wikipedia.org/wiki?curid=26725
Substitution splice The substitution splice or stop trick is a cinematic special effect in which filmmakers achieve an appearance, disappearance, or transformation by altering one or more selected aspects of the mise-en-scène between two shots while maintaining the same framing and other aspects of the scene in both shots. The effect is usually polished by careful editing to establish a seamless cut and optimal moment of change. It has also been referred to as stop motion substitution or stop-action. The pioneering French filmmaker Georges Méliès claimed to have developed the stop trick by accident, an account he gave in his 1907 essay "Les Vues Cinématographiques". According to the film scholar Jacques Deslandes, it is more likely that Méliès discovered the trick by carefully examining a print of the Edison Manufacturing Company's 1895 film "The Execution of Mary Stuart", in which a primitive version of the trick appears. The trick was certainly used in the widely seen 1903 film "The Great Train Robbery", when a dummy is substituted for an actor in the coal car. In any case, the substitution splice was both the first special effect Méliès perfected and the most important in his body of work. Film historians such as Richard Abel and Elizabeth Ezra have established that much of the effect was the result of Méliès's careful frame matching during the editing process, creating a seamless match cut out of two separately staged shots. Indeed, Méliès often used substitution splicing not as an obvious special effect, but as an inconspicuous editing technique, matching and combining short takes into one apparently seamless longer shot. Substitution splicing could become even more seamless when the film was colored by hand, as many of Méliès's films were; the addition of painted color acts as a sleight-of-hand technique allowing the cuts to pass by unnoticed.
The substitution splice was the most popular cinematic special effect in trick films and early film fantasies, especially those that evolved from the stage tradition of the "féerie". Segundo de Chomón is among the other filmmakers who used substitution splicing to create elaborate fantasy effects. D.W. Griffith's 1909 film "The Curtain Pole", starring Mack Sennett, used substitution splices for comedic effect. The transformations made possible by the substitution splice were so central to early fantasy films that, in France, such films were often described simply as "scènes à transformation". This technique is different from the stop motion technique, in which the entire shot is created frame by frame.
https://en.wikipedia.org/wiki?curid=26737
Scandinavia Scandinavia is a subregion in Northern Europe, with strong historical, cultural, and linguistic ties. The term "Scandinavia" in local usage covers the three kingdoms of Denmark, Norway, and Sweden. The majority national languages of these three belong to the Scandinavian dialect continuum, and are mutually intelligible North Germanic languages. In English usage, "Scandinavia" also sometimes refers more narrowly to the Scandinavian Peninsula, or more broadly so as to include the Åland Islands, the Faroe Islands, Finland and Iceland. The broader definition is similar to what are locally called the Nordic countries, which also include the remote Norwegian islands of Svalbard and Jan Mayen and Greenland, a constituent country within the Kingdom of Denmark. The geography of Scandinavia is extremely varied. Notable are the Norwegian fjords, the Scandinavian Mountains, the flat, low areas in Denmark and the archipelagos of Sweden and Norway. Sweden has many lakes and moraines, legacies of the ice age, which ended about ten millennia ago. The southern and by far most populous regions of Scandinavia have a temperate climate. Scandinavia extends north of the Arctic Circle, but has relatively mild weather for its latitude due to the Gulf Stream. Many of the Scandinavian mountains have an alpine tundra climate. The climate varies from north to south and from west to east: a marine west coast climate (Cfb) typical of western Europe dominates in Denmark, the southernmost part of Sweden and along the west coast of Norway reaching north to 65°N, with orographic lift giving high precipitation (up to 5,000 mm per year) in some areas in western Norway. The central part – from Oslo to Stockholm – has a humid continental climate (Dfb), which gradually gives way to subarctic climate (Dfc) further north and cool marine west coast climate (Cfc) along the northwestern coast.
A small area along the northern coast east of the North Cape has tundra climate (ET) as a result of a lack of summer warmth. The Scandinavian Mountains block the mild and moist air coming from the southwest, thus northern Sweden and the Finnmarksvidda plateau in Norway receive little precipitation and have cold winters. Large areas in the Scandinavian mountains have alpine tundra climate. The warmest temperature ever recorded in Scandinavia is 38.0 °C in Målilla (Sweden). The coldest temperature ever recorded is −52.6 °C in Vuoggatjålme, Arjeplog (Sweden). The coldest month was February 1985 in Vittangi (Sweden) with a mean of −27.2 °C. Southwesterly winds further warmed by foehn wind can give warm temperatures in narrow Norwegian fjords in winter. Tafjord has recorded 17.9 °C in January and Sunndal 18.9 °C in February. The words "Scandinavia" and "Scania" ("Skåne", the southernmost province of Sweden) are both thought to go back to the Proto-Germanic compound *"Skaðin-awjō" (the "ð" represented in Latin by "t" or "d"), which appears later in Old English as "Scedenig" and in Old Norse as "Skáney". The earliest identified source for the name "Scandinavia" is Pliny the Elder's "Natural History", dated to the first century AD. Various references to the region can also be found in Pytheas, Pomponius Mela, Tacitus, Ptolemy, Procopius and Jordanes, usually in the form of "Scandza". It is believed that the name used by Pliny may be of West Germanic origin, originally denoting Scania. According to some scholars, the Germanic stem can be reconstructed as *"skaðan-", meaning "danger" or "damage". The second segment of the name has been reconstructed as *"awjō", meaning "land on the water" or "island". The name "Scandinavia" would then mean "dangerous island", which is considered to refer to the treacherous sandbanks surrounding Scania. Skanör in Scania, with its long Falsterbo reef, has the same stem ("skan") combined with -"ör", which means "sandbanks".
Alternatively, "Sca(n)dinavia" and "Skáney", along with the Old Norse goddess name "Skaði", may be related to Proto-Germanic "*skaðwa-" (meaning "shadow"). John McKinnell comments that this etymology suggests that the goddess Skaði may have once been a personification of the geographical region of Scandinavia or associated with the underworld. Another possibility is that all or part of the segments of the name came from the pre-Germanic Mesolithic people inhabiting the region. In modernity, Scandinavia is a peninsula, but between approximately 10,300 and 9,500 years ago the southern part of Scandinavia was an island separated from the northern peninsula, with water exiting the Baltic Sea through the area where Stockholm is now located. Correspondingly, some Basque scholars have presented the idea that the segment "sk" that appears in "*Skaðinawjō" is connected to the name for the Euzko peoples, akin to Basques, that populated Paleolithic Europe. According to one scholar, Scandinavian people share particular genetic markers with the Basque people. The Latin names in Pliny's text gave rise to different forms in medieval Germanic texts. In Jordanes' history of the Goths (AD 551), the form "Scandza" is the name used for their original home, separated by sea from the land of Europe (chapter 1, 4). Where Jordanes meant to locate this quasi-legendary island is still a hotly debated issue, both in scholarly discussions and in the nationalistic discourse of various European countries. The form "Scadinavia" as the original home of the Langobards appears in Paulus Diaconus' "Historia Langobardorum", but in other versions of "Historia Langobardorum" appear the forms "Scadan", "Scandanan", "Scadanan" and "Scatenauge". Frankish sources used "Sconaowe" and Aethelweard, an Anglo-Saxon historian, used "Scani". In "Beowulf", the forms "Scedenige" and "Scedeland" are used while the Alfredian translation of Orosius and Wulfstan's travel accounts used the Old English "Sconeg". 
The earliest Sami yoik texts written down refer to the world as "Skadesi-suolo" (north Sami) and "Skađsuâl" (east Sami), meaning "Skaði's island". Svennung considers the Sami name to have been introduced as a loan word from the North Germanic languages; "Skaði" is the giant stepmother of Freyr and Freyja in Norse mythology. It has been suggested that Skaði to some extent is modeled on a Sami woman. The name for Skaði's father Thjazi is known in Sami as "Čáhci", "the waterman"; and her son with Odin, Saeming, can be interpreted as a descendant of "Saam", the Sami population. Older yoik texts give evidence of the old Sami belief about living on an island and state that the wolf is known as "suolu gievra", meaning "the strong one on the island". The Sami place name "Sulliidčielbma" means "the island's threshold" and "Suoločielgi" means "the island's back". In recent substrate studies, Sami linguists have examined the initial cluster "sk"- in words used in Sami and concluded that "sk"- is a phonotactic structure of alien origin. Although the term "Scandinavia" used by Pliny the Elder probably originated in the ancient Germanic languages, the modern form "Scandinavia" does not descend directly from the ancient Germanic term. Rather, the word was brought into use in Europe by scholars borrowing the term from ancient sources like Pliny, and was used vaguely for Scania and the southern region of the peninsula. The term was popularised by the linguistic and cultural Scandinavist movement, which asserted the common heritage and cultural unity of the Scandinavian countries and rose to prominence in the 1830s. The popular usage of the term in Sweden, Denmark and Norway as a unifying concept became established in the nineteenth century through poems such as Hans Christian Andersen's "I am a Scandinavian" of 1839. After a visit to Sweden, Andersen became a supporter of early political Scandinavism.
In a letter describing the poem to a friend, he wrote: "All at once I understood how related the Swedes, the Danes and the Norwegians are, and with this feeling I wrote the poem immediately after my return: 'We are one people, we are called Scandinavians!'". The influence of Scandinavism as a Scandinavist political movement peaked in the middle of the nineteenth century, between the First Schleswig War (1848–1850) and the Second Schleswig War (1864). The Swedish king also proposed a unification of Denmark, Norway and Sweden into a single united kingdom. The background for the proposal was the tumultuous events during the Napoleonic Wars at the beginning of the century. These wars resulted in Finland (formerly the eastern third of Sweden) becoming the Russian Grand Duchy of Finland in 1809, and Norway ("de jure" in union with Denmark since 1387, although "de facto" treated as a province) becoming independent in 1814, but thereafter swiftly forced to accept a personal union with Sweden. The dependent territories Iceland, the Faroe Islands and Greenland, historically part of Norway, remained with Denmark in accordance with the Treaty of Kiel. Sweden and Norway were thus united under the Swedish monarch, but Finland's inclusion in the Russian Empire excluded any possibility for a political union between Finland and any of the other Nordic countries. The end of the Scandinavian political movement came when Denmark was denied the military support promised by Sweden and Norway to annex the (Danish) Duchy of Schleswig, which together with the (German) Duchy of Holstein had been in personal union with Denmark. The Second Schleswig War followed in 1864, a brief but disastrous war between Denmark and Prussia (supported by Austria). Schleswig-Holstein was conquered by Prussia, and after Prussia's success in the Franco-Prussian War a Prussian-led German Empire was created and a new balance of power among the Baltic Sea countries was established.
The Scandinavian Monetary Union, established in 1873, lasted until World War I. The term "Scandinavia" (sometimes specified in English as "Continental Scandinavia" or "mainland Scandinavia") is commonly used strictly for Denmark, Norway and Sweden as a subset of the Nordic countries (known in Norwegian, Danish, and Swedish as "Norden"). However, in English usage, the term "Scandinavia" is sometimes used as a synonym or near-synonym for "Nordic countries". Debate about which meaning is more appropriate is complicated by the fact that usage in English is different from usage in the Scandinavian languages themselves (which use "Scandinavia" in the narrow meaning), and by the fact that the question of whether a country belongs to Scandinavia is politicised: people from the Nordic world beyond Norway, Denmark and Sweden may be offended at being either included in or excluded from the category of "Scandinavia". "Nordic countries" is used unambiguously for Denmark, Norway, Sweden, Finland and Iceland, including their associated territories (Svalbard, Greenland, the Faroe Islands and the Åland Islands). In addition to the mainland Scandinavian countries of Denmark, Norway and Sweden, the Nordic countries thus also comprise Finland, Iceland and the associated territories. The clearest example of the use of the term "Scandinavia" as a political and societal construct is the unique position of Finland, based largely on the fact that most of modern-day Finland was part of Sweden for more than six centuries (see: Finland under Swedish rule), leading much of the world to associate Finland with Scandinavia. But the creation of a Finnish identity is unique in the region in that it was formed in relation to two different imperial models, the Swedish and the Russian.
There is also the geological term "Fennoscandia" (sometimes "Fennoscandinavia"), which in technical use refers to the Fennoscandian Shield (or "Baltic Shield"), that is the Scandinavian peninsula (Norway and Sweden), Finland and Karelia (excluding Denmark and other parts of the wider Nordic world). The terms "Fennoscandia" and "Fennoscandinavia" are sometimes used in a broader, political sense to refer to Norway, Sweden, Denmark, and Finland. Whereas both narrow and broad conceptions of Scandinavian countries are straightforwardly defined, there is much ambiguity and political contestation as to which people are Scandinavian people (or "Scandinavians"). English dictionaries usually define the noun "Scandinavian" as meaning any inhabitant of Scandinavia (which might be narrowly conceived or broadly conceived). However, the noun "Scandinavian" is frequently used as a synonym for speakers of Scandinavian languages (languages descended from Old Norse). This usage can exclude the indigenous Sámi people of Scandinavia, as well as other non-Scandinavian-speaking inhabitants of the region. Thus, based on intersecting cultural and geographic definitions, Scandinavians always include Scandinavian-speaking Swedes, Norwegians, and Danes (and, earlier, ancient speakers of the North Germanic languages). In usages based on cultural/linguistic definitions (native speakers of North Germanic languages), Scandinavians also include Faroe Islanders, Icelanders, the Swedish-speaking population of Finland, the Swedish-speaking population of Estonia, and the Scandinavian diaspora. In usages based on geographical definitions (inhabitants of Continental Scandinavia), Scandinavians include Sami people and, depending on how broad an understanding of "Scandinavia" is being used, Finns and Inuit. Two language groups have coexisted on the Scandinavian peninsula since prehistory—the North Germanic languages (Scandinavian languages) and the Sami languages. 
The majority of the population of Scandinavia (including Iceland and the Faroe Islands) today derive their language from several North Germanic tribes who once inhabited the southern part of Scandinavia and spoke a Germanic language that evolved into Old Norse, and from Old Norse into Danish, Swedish, Norwegian, Faroese, and Icelandic. The Danish, Norwegian and Swedish languages form a dialect continuum and are known as the Scandinavian languages—all of which are considered mutually intelligible with one another. Faroese and Icelandic, sometimes referred to as insular Scandinavian languages, are mutually intelligible with the continental Scandinavian languages only to a limited extent. A small minority of Scandinavians are Sami people, concentrated in the extreme north of Scandinavia. Finland (sometimes included in Scandinavia in English usage) is mainly populated by speakers of Finnish, with a minority of approximately 5% of Swedish speakers. However, Finnish is also spoken as a recognized minority language in Sweden, including in distinctive varieties sometimes known as Meänkieli. Finnish is distantly related to the Sami languages, but these are entirely different in origin from the Scandinavian languages. German (in Denmark), Yiddish and Romani are recognized minority languages in parts of Scandinavia. More recent migrations have added even more languages. Apart from Sami and the languages of minority groups speaking a variant of the majority language of a neighboring state, the following minority languages in Scandinavia are protected under the European Charter for Regional or Minority Languages: Yiddish, Romani Chib/Romanes and Romani. 
The North Germanic languages of Scandinavia are traditionally divided into an East Scandinavian branch (Danish and Swedish) and a West Scandinavian branch (Norwegian, Icelandic and Faroese), but because of changes appearing in the languages since 1600 the East Scandinavian and West Scandinavian branches are now usually reconfigured into Insular Scandinavian ("ö-nordisk"/"øy-nordisk"), featuring Icelandic and Faroese, and Continental Scandinavian ("Skandinavisk"), comprising Danish, Norwegian and Swedish. The modern division is based on the degree of mutual comprehensibility between the languages in the two branches. The populations of the Scandinavian countries, with common Scandinavian roots in language, can—at least with some training—understand each other's standard languages as they appear in print and are heard on radio and television. The reason Danish, Swedish and the two official written versions of Norwegian ("Nynorsk" and "Bokmål") are traditionally viewed as different languages, rather than dialects of one common language, is that each is a well-established standard language in its respective country. Danish, Swedish and Norwegian have since medieval times been influenced to varying degrees by Middle Low German and standard German. That influence came not just from proximity but also from Denmark's, and later Denmark–Norway's, rule over the German-speaking region of Holstein, and, in Sweden's case, from close trade with the Hanseatic League. Norwegians are accustomed to variation and may perceive Danish and Swedish only as slightly more distant dialects. This is because Norway has two official written standards, in addition to a habit of strongly holding on to local dialects. The people of Stockholm, Sweden and Copenhagen, Denmark have the greatest difficulty in understanding other Scandinavian languages. In the Faroe Islands and Iceland, learning Danish is mandatory. 
This causes Faroese people as well as Icelandic people to become bilingual in two very distinct North Germanic languages, making it relatively easy for them to understand the other two Mainland Scandinavian languages. Although Iceland was under the political control of Denmark until a much later date (1918), very little influence and borrowing from Danish has occurred in the Icelandic language. Icelandic remained the preferred language among the ruling classes in Iceland. Danish was not used for official communications, most of the royal officials were of Icelandic descent and the language of the church and law courts remained Icelandic. The Scandinavian languages are (as a language family) unrelated to Finnish, Estonian and Sami languages, which as Uralic languages are distantly related to Hungarian. Owing to the close proximity, there is still a great deal of borrowing from the Swedish and Norwegian languages in the Finnish and Sami languages. The long history of linguistic influence of Swedish on Finnish is also due to the fact that Finnish, the language of the majority in Finland, was treated as a minority language while Finland was part of Sweden. Finnish-speakers had to learn Swedish in order to advance to higher positions. Swedish spoken in today's Finland includes a lot of words that are borrowed from Finnish, whereas the written language remains closer to that of Sweden. Finland is officially bilingual, with Finnish and Swedish having mostly the same status at national level. Finland's majority population are Finns, whose mother tongue is either Finnish (approximately 95%), Swedish or both. The Swedish-speakers live mainly on the coastline starting from approximately the city of Porvoo (in the Gulf of Finland) up to the city of Kokkola (in the Bay of Bothnia). The Åland Islands, an autonomous province of Finland situated in the Baltic Sea between Finland and Sweden, are entirely Swedish-speaking. 
Children are taught the other official language at school: for Swedish-speakers this is Finnish (usually from the 3rd grade), while for Finnish-speakers it is Swedish (usually from the 3rd, 5th or 7th grade). Finnish speakers constitute a language minority in Sweden and Norway. Meänkieli and Kven are Finnish dialects spoken in Swedish Lapland and Norwegian Lapland. The Sami languages are indigenous minority languages in Scandinavia. They belong to their own branch of the Uralic language family and are unrelated to the North Germanic languages, apart from limited grammatical and (particularly) lexical features resulting from prolonged contact. Sami is divided into several languages or dialects. Consonant gradation is a feature in both Finnish and northern Sami dialects, but it is not present in south Sami, which is considered to have a different language history. According to the Sami Information Centre of the Sami Parliament in Sweden, southern Sami may have originated in an earlier migration from the south into the Scandinavian peninsula. A key ancient description of Scandinavia was provided by Pliny the Elder, though his mentions of "Scatinavia" and surrounding areas are not always easy to decipher. Writing in the capacity of a Roman admiral, he introduces the northern region by declaring to his Roman readers that there are 23 islands "Romanis armis cognitae" ("known to Roman arms") in this area. According to Pliny, the "clarissima" ("most famous") of the region's islands is "Scatinavia", of unknown size. There live the "Hilleviones". The belief that Scandinavia was an island became widespread among classical authors during the first century and dominated descriptions of Scandinavia in classical texts during the centuries that followed. Pliny begins his description of the route to "Scatinavia" by referring to the mountain of Saevo ("mons Saevo ibi"), the Codanus Bay ("Codanus sinus") and the Cimbrian promontory. 
The geographical features have been identified in various ways. Some scholars take "Saevo" to be the mountainous Norwegian coast at the entrance to Skagerrak, and the Cimbrian peninsula to be Skagen, the north tip of Jutland, Denmark. As described, Saevo and Scatinavia can also be the same place. Pliny mentions Scandinavia one more time: in Book VIII he says that the animal called "achlis" (given in the accusative, "achlin", which is not Latin) was born on the island of Scandinavia. The animal grazes, has a big upper lip and some mythical attributes. The name "Scandia", later used as a synonym for "Scandinavia", also appears in Pliny's "Naturalis Historia" ("Natural History"), but is used for a group of Northern European islands which he locates north of Britannia. "Scandia" thus does not appear to denote the island Scadinavia in Pliny's text. The idea that "Scadinavia" may have been one of the "Scandiae" islands was instead introduced by Ptolemy (c. 90 – c. 168 AD), a mathematician, geographer and astrologer of Roman Egypt. He used the name "Skandia" for the biggest, most easterly of the three "Scandiai" islands, which according to him were all located east of Jutland. Neither Pliny's nor Ptolemy's lists of Scandinavian tribes include the Suiones mentioned by Tacitus. Some early Swedish scholars of the Swedish Hyperborean school and of the nineteenth-century romantic nationalism period proceeded to synthesize the different versions by inserting references to the Suiones, arguing that they must have been referred to in the original texts and obscured over time by spelling mistakes or various alterations. During a period of Christianization and state formation in the 10th–13th centuries, numerous Germanic petty kingdoms and chiefdoms were unified into three kingdoms: Denmark, Norway and Sweden. The three Scandinavian kingdoms joined in 1387 in the Kalmar Union under Queen Margaret I of Denmark. Sweden left the union in 1523 under King Gustav Vasa. 
In the aftermath of Sweden's secession from the Kalmar Union, civil war broke out in Denmark and Norway, and the Protestant Reformation followed. When things had settled, the Norwegian Privy Council was abolished—it assembled for the last time in 1537. A personal union, entered into by the kingdoms of Denmark and Norway in 1536, lasted until 1814. Three sovereign successor states have subsequently emerged from this unequal union: Denmark, Norway and Iceland. The borders between the three countries acquired the shape they have had ever since in the middle of the seventeenth century: In the 1645 Treaty of Brömsebro, Denmark–Norway ceded the Norwegian provinces of Jämtland, Härjedalen and Idre and Särna, as well as the Baltic Sea islands of Gotland and Ösel (in Estonia) to Sweden. The Treaty of Roskilde, signed in 1658, forced Denmark–Norway to cede the Danish provinces of Scania, Blekinge, Halland and Bornholm and the Norwegian provinces of Båhuslen and Trøndelag to Sweden. The 1660 Treaty of Copenhagen forced Sweden to return Bornholm and Trøndelag to Denmark–Norway, and to give up its recent claims to the island of Funen. In the east, Finland had been a fully incorporated part of Sweden since medieval times until the Napoleonic Wars, when it was ceded to Russia. Despite many wars over the years since the formation of the three kingdoms, Scandinavia has been politically and culturally close. Denmark–Norway as a historiographical name refers to the former political union consisting of the kingdoms of Denmark and Norway, including the Norwegian dependencies of Iceland, Greenland and the Faroe Islands. The corresponding adjective and demonym is Dano-Norwegian. During Danish rule, Norway kept its separate laws, coinage and army as well as some institutions such as a royal chancellor. 
Norway's old royal line had died out with the death of Olav IV in 1387, but Norway's remaining a hereditary kingdom became an important factor for the Oldenburg dynasty of Denmark–Norway in its struggles to win elections as kings of Denmark. The Treaty of Kiel (14 January 1814) formally dissolved the Dano-Norwegian union and ceded the territory of Norway proper to the King of Sweden, but Denmark retained Norway's overseas possessions. However, widespread Norwegian resistance to the prospect of a union with Sweden induced the governor of Norway, crown prince Christian Frederick (later Christian VIII of Denmark), to call a constituent assembly at Eidsvoll in April 1814. The assembly drew up a liberal constitution and elected Christian Frederick to the throne of Norway. Following a Swedish invasion during the summer, the peace conditions of the Convention of Moss (14 August 1814) specified that king Christian Frederik had to resign, but Norway would keep its independence and its constitution within a personal union with Sweden. Christian Frederik formally abdicated on 10 August 1814 and returned to Denmark. The Norwegian parliament Storting elected king Charles XIII of Sweden as king of Norway on 4 November. The Storting dissolved the union between Sweden and Norway in 1905, after which the Norwegians elected Prince Charles of Denmark as king of Norway: he reigned as Haakon VII. The economies of the countries of Scandinavia are amongst the strongest in Europe. There is a generous welfare system in Sweden, Denmark, Norway and Finland. Various promotional agencies of the Nordic countries in the United States (such as The American-Scandinavian Foundation, established in 1910 by the Danish American industrialist Niels Poulsen) serve to promote market and tourism interests in the region. 
Today, the five Nordic heads of state act as the organization's patrons and according to the official statement by the organization its mission is "to promote the Nordic region as a whole while increasing the visibility of Denmark, Finland, Iceland, Norway and Sweden in New York City and the United States". The official tourist boards of Scandinavia sometimes cooperate under one umbrella, such as the Scandinavian Tourist Board. The cooperation was introduced for the Asian market in 1986, when the Swedish national tourist board joined the Danish national tourist board to coordinate intergovernmental promotion of the two countries. Norway's government entered one year later. All five Nordic governments participate in the joint promotional efforts in the United States through the Scandinavian Tourist Board of North America.
https://en.wikipedia.org/wiki?curid=26740
Stockholm Stockholm is the capital and most populous urban area of Sweden, as well as of Scandinavia. 975,904 people live in the municipality, approximately 1.6 million in the urban area, and 2.4 million in the metropolitan area. The city stretches across fourteen islands where Lake Mälaren flows into the Baltic Sea. Outside the city to the east, and along the coast, is the island chain of the Stockholm archipelago. The area has been settled since the Stone Age, in the 6th millennium BC, and was founded as a city in 1252 by Swedish statesman Birger Jarl. It is also the county seat of Stockholm County. Stockholm is the cultural, media, political, and economic centre of Sweden. The Stockholm region alone accounts for over a third of the country's GDP, and is among the top 10 regions in Europe by GDP per capita. Ranked as an alpha global city, it is the largest in Scandinavia and the main centre for corporate headquarters in the Nordic region. The city is home to some of Europe's top-ranking universities, such as the Stockholm School of Economics, the Karolinska Institute and the KTH Royal Institute of Technology. It hosts the annual Nobel Prize ceremonies and banquet at the Stockholm Concert Hall and Stockholm City Hall. One of the city's most prized museums, the Vasa Museum, is the most visited non-art museum in Scandinavia. The Stockholm metro, opened in 1950, is well known for the decor of its stations; it has been called the longest art gallery in the world. Sweden's national football arena is located north of the city centre, in Solna. Ericsson Globe, the national indoor arena, is in the southern part of the city. The city was the host of the 1912 Summer Olympics, and hosted the equestrian portion of the 1956 Summer Olympics otherwise held in Melbourne, Victoria, Australia. Stockholm is the seat of the Swedish government and most of its agencies, including the highest courts in the judiciary, and the official residences of the Swedish monarch and the Prime Minister. 
The government has its seat in the Rosenbad building, the Riksdag (Swedish parliament) is seated in the Parliament House, and the Prime Minister's residence is adjacent at Sager House. Stockholm Palace is the official residence and principal workplace of the Swedish monarch, while Drottningholm Palace, a World Heritage Site on the outskirts of Stockholm, serves as the Royal Family's private residence. After the Ice Age, around 8,000 BC, there were already many people living in what is today the Stockholm area, but as temperatures dropped, inhabitants moved south. Thousands of years later, as the ground thawed, the climate became tolerable and the lands became fertile, and people began to migrate back north. At the intersection of the Baltic Sea and Lake Mälaren is an archipelago site where the Old Town of Stockholm was first built from about 1000 CE by Vikings. They had a positive trade impact on the area because of the trade routes they created. Stockholm's location appears in Norse sagas as Agnafit, and in Heimskringla in connection with the legendary king Agne. The earliest written mention of the name Stockholm dates from 1252, by which time the mines in Bergslagen made it an important site in the iron trade. The first part of the name ("stock") means log in Swedish, although it may also be connected to an old German word meaning fortification. The second part of the name ("holm") means islet and is thought to refer to the islet Helgeandsholmen in central Stockholm. According to the "Eric Chronicles" the city is said to have been founded by Birger Jarl to protect Sweden from sea invasions by Karelians after the pillage of Sigtuna on Lake Mälaren in the summer of 1187. Stockholm's core, the present Old Town ("Gamla stan"), was built on the central island next to Helgeandsholmen from the mid-13th century onward. The city originally rose to prominence as a result of the Baltic trade of the Hanseatic League. 
Stockholm developed strong economic and cultural linkages with Lübeck, Hamburg, Gdańsk, Visby, Reval, and Riga during this time. Between 1296 and 1478 Stockholm's City Council was made up of 24 members, half of whom were selected from the town's German-speaking burghers. The strategic and economic importance of the city made Stockholm an important factor in relations between the Danish Kings of the Kalmar Union and the national independence movement in the 15th century. The Danish King Christian II was able to enter the city in 1520. On 8 November 1520, a massacre of opposition figures called the Stockholm Bloodbath took place and set off further uprisings that eventually led to the breakup of the Kalmar Union. With the accession of Gustav Vasa in 1523 and the establishment of royal power, the population of Stockholm began to grow, reaching 10,000 by 1600. The 17th century saw Sweden grow into a major European power, reflected in the development of the city of Stockholm. From 1610 to 1680 the population multiplied sixfold. In 1634, Stockholm became the official capital of the Swedish empire. Trading rules were also created that gave Stockholm an essential monopoly over trade between foreign merchants and other Swedish and Scandinavian territories. In 1697, Tre Kronor (castle) burned and was replaced by Stockholm Palace. In 1710, a plague killed about 20,000 (36 percent) of the population. After the end of the Great Northern War the city stagnated. Population growth halted and economic growth slowed. The city was in shock after having lost its place as the capital of a Great power. However, Stockholm maintained its role as the political center of Sweden and continued to develop culturally under Gustav III. By the second half of the 19th century, Stockholm had regained its leading economic role. New industries emerged and Stockholm was transformed into an important trade and service center as well as a key gateway point within Sweden. 
The population also grew dramatically during this time, mainly through immigration. At the end of the 19th century, less than 40% of the residents were Stockholm-born. Settlement began to expand outside the city limits. The 19th century saw the establishment of a number of scientific institutes, including the Karolinska Institutet. The General Art and Industrial Exposition was held in 1897. From 1887 to 1953 the Old Stockholm telephone tower was a landmark; originally built to link phone lines, it became redundant after these were buried, and it was latterly used for advertising. Stockholm became a modern, technologically advanced, and ethnically diverse city in the latter half of the 20th century. Many historical buildings were torn down during the modernist era, including substantial parts of the historical district of Klara, and replaced with modern architecture. However, in many other parts of Stockholm (such as in Gamla stan, Södermalm, Östermalm, Kungsholmen and Vasastan), many "old" buildings, blocks and streets built before the modernism and functionalism movements took off in Sweden (around 1930–35) survived this era of demolition. Throughout the century, many industries shifted away from industrial activities into more high-tech and service industry areas. Currently, Stockholm's metropolitan area is one of the fastest-growing regions in Europe, and its population is expected to number 2.5 million by 2024. As a result of this massive population growth, there has been a proposal to build densely packed high-rise buildings in the city center connected by elevated walkways. Stockholm is located on Sweden's east coast, where the freshwater Lake Mälaren — Sweden's third-largest lake — flows out into the Baltic Sea. The central parts of the city consist of fourteen islands that are continuous with the Stockholm archipelago. The geographical city center is situated on the water, in Riddarfjärden bay. 
Over 30% of the city area is made up of waterways and another 30% is made up of parks and green spaces. Positioned at the eastern end of the Central Swedish lowland, the city's location reflects the early orientation of Swedish trade toward the Baltic region. Stockholm belongs to the temperate deciduous forest biome, which means the climate is very similar to that of the far northeastern area of the United States and coastal Nova Scotia in Canada. The average annual temperature is . The average rainfall is a year. The deciduous forest has four distinct seasons: spring, summer, autumn, and winter. In the autumn the leaves change color. During the winter months, the trees lose their leaves. For details about the other municipalities in the Stockholm area, see the pertinent articles. North of Stockholm Municipality: Järfälla, Solna, Täby, Sollentuna, Lidingö, Upplands Väsby, Österåker, Sigtuna, Sundbyberg, Danderyd, Vallentuna, Ekerö, Upplands-Bro, Vaxholm, and Norrtälje. South of Stockholm: Huddinge, Nacka, Botkyrka, Haninge, Tyresö, Värmdö, Södertälje, Salem, Nykvarn and Nynäshamn. Stockholm Municipality is an administrative unit defined by geographical borders. The semi-official name for the municipality is "City of Stockholm" ("Stockholms stad" in Swedish). As a municipality, the City of Stockholm is subdivided into district councils, which carry responsibility for primary schools, social, leisure and cultural services within their respective areas. The municipality is usually described in terms of its three main parts: Innerstaden (Stockholm City Centre), Söderort (Southern Stockholm) and Västerort (Western Stockholm). The districts of these parts are: The modern centre Norrmalm (concentrated around the town square Sergels torg) is the largest shopping district in Sweden. It is the most central part of Stockholm in business and shopping. 
Stockholm has a humid continental climate in the 0 °C isotherm (Köppen: "Dfb") and an oceanic climate ("Cfb") in the -3 °C isotherm. Although winters are cold, average temperatures generally remain above 0 °C for much of the year. Summers are mild, and precipitation occurs throughout the year. Due to the city's high northerly latitude, the length of the day varies widely from more than 18 hours around midsummer to only around 6 hours in late December. The nights from late May until mid-July are bright even when cloudy. Stockholm has relatively mild weather compared to other locations at a similar latitude, or even farther south. With an average of just over 1800 hours of sunshine per year, it is also one of the sunniest cities in Northern Europe, receiving more sunshine than Paris, London and a few other major European cities of a more southerly latitude. Because of the urban heat island effect and the prevailing wind traveling overland rather than sea during summer months, Stockholm has the warmest July months of the Nordic capitals. Stockholm has an annual average snow cover between 75 and 100 days. In spite of its mild climate, Stockholm is located further north than parts of Canada that are above the Arctic tree line at sea level. Summers average daytime high temperatures of and lows of around , but temperatures can reach on some days. Days above occur on average 1.55 days per year (1992–2011). Days between and are relatively common especially in July and August. Night-time lows of above are rare, and hot summer nights vary from . Winters generally bring cloudy weather with the most precipitation falling in December and January (as rain or as snow). The average winter temperatures range from , and occasionally drop below in the outskirts. Spring and autumn are generally cool to mild. The climate table below presents weather data from the years 1981–2010 although the official Köppen reference period was from 1961–1990. 
According to ongoing measurements, the temperature has increased during the years 1991–2009 as compared with the last series. This increase averages about over all months. Warming is most pronounced during the winter months, with an increase of more than in January. For the 2002–2014 measurements some further increases have been found, although some months such as June have been relatively flat. The highest temperature ever recorded in Stockholm was on 3 July 1811; the lowest was on 20 January 1814. The temperature has not dropped to below since 10 January 1987. The highest average temperature ever recorded in one month in Stockholm is 22.5 °C, in July 2018. The highest temperature on one day that month was 33.6 °C, on 26 July. Annual precipitation is with around 170 wet days and light to moderate rainfall throughout the year. The precipitation is not uniformly distributed throughout the year: the second half of the year receives 50% more than the first half. Snowfall occurs mainly from December through March. Snowfall may occasionally occur in late October as well as in April. In Stockholm, the aurora borealis can occasionally be observed. Stockholm's location just south of the 60th parallel north means that the number of daylight hours is relatively small during winter – about six hours – while in June and the first half of July, the nights are relatively short, with about 18 hours of daylight. Around the summer solstice the sun never reaches further below the horizon than 7.3 degrees. This gives the sky a bright blue colour in summer once the sun has set, because it does not get any darker than nautical twilight. Also, when looking straight up towards the zenith, few stars are visible after the sun has gone down. This is not to be confused with the midnight sun, which occurs north of the Arctic Circle, around 7 degrees farther north. The Stockholm Municipal Council is the name of the local assembly. 
Its 101 councillors are elected concurrently with the general elections to the Riksdag and county councils. The Council convenes twice every month at Stockholm City Hall, and the meetings are open to the public. The matters on which the councillors decide have generally already been drafted and discussed by various boards and committees. Once decisions are referred for practical implementation, the employees of the City administrations and companies take over. The elected majority has a Mayor and eight Vice Mayors. The Mayor and each majority Vice Mayor is the head of a department, with responsibility for a particular area of operation, such as city planning. The opposition also has four Vice Mayors, but they hold no executive power. Together the Mayor and the 12 Vice Mayors form the Council of Mayors, and they prepare matters for the City Executive Board. The Mayor holds a special position among the Vice Mayors, chairing both the Council of Mayors and the City Executive Board. The City Executive Board is elected by the City Council and is equivalent to a cabinet. The City Executive Board renders an opinion in all matters decided by the council and bears the overall responsibility for follow-up, evaluation and execution of its decisions. The Board is also responsible for financial administration and long-term development. The City Executive Board consists of 13 members, who represent both the majority and the opposition. Its meetings are not open to the public. Following the 2018 Stockholm municipal election, a majority of seats in the municipal council is at present held by a centre-right coalition, and the Mayor of Stockholm is Anna König Jerlmyr from the Moderate Party. The vast majority of Stockholm residents work in the service industry, which accounts for roughly 85% of jobs in Stockholm. 
The almost total absence of heavy industry (and fossil fuel power plants) makes Stockholm one of the world's cleanest metropolises. The last decade has seen a significant number of jobs created in high-technology companies. Large employers include IBM, Ericsson, and Electrolux. A major IT centre is located in Kista, in northern Stockholm. Stockholm is Sweden's financial centre. Major Swedish banks, such as Swedbank, Handelsbanken, and SEB, are headquartered in Stockholm, as are the major insurance companies Skandia, Folksam and Trygg-Hansa. Stockholm is also home to Sweden's foremost stock exchange, the Stockholm Stock Exchange ("Stockholmsbörsen"). Additionally, about 45% of Swedish companies with more than 200 employees are headquartered in Stockholm. Noted clothes retailer H&M is also headquartered in the city. In recent years, tourism has played an important part in the city's economy. Stockholm County is ranked as the 10th largest visitor destination in Europe, with over 10 million commercial overnight stays per year. Among 44 European cities, Stockholm had the 6th highest growth in the number of nights spent in the period 2004–2008. The largest companies in Stockholm, by number of employees (2017): The city-owned company Stokab started in 1994 to build a fiber-optic network throughout the municipality as a level playing field for all operators (City of Stockholm, 2011). Around a decade later, the network had grown into the longest optical fiber network in the world, and it now has over 90 operators and 450 enterprises as customers. 2011 was the final year of a three-year project which brought fiber to 100% of public housing, meaning an extra 95,000 houses were added (City of Stockholm, 2011). Research and higher education in the sciences started in Stockholm in the 18th century, with education in medicine and various research institutions such as the Stockholm Observatory. The medical education was eventually formalized in 1811 as Karolinska Institutet. 
KTH Royal Institute of Technology (Swedish: "Kungliga Tekniska högskolan") was founded in 1827 and is currently Scandinavia's largest higher-education institute of technology, with 13,000 students. Stockholm University, founded in 1878 with university status granted in 1960, has 52,000 students. It also incorporates many historical institutions, such as the Observatory, the Swedish Museum of Natural History, and the botanical garden "Bergianska trädgården". The Stockholm School of Economics, founded in 1909, is one of the few private institutions of higher education in Sweden. In the fine arts, educational institutions include the Royal College of Music, which has a history going back to the conservatory founded as part of the Royal Swedish Academy of Music in 1771; the Royal University College of Fine Arts, which has a similar historical association with the Royal Swedish Academy of Arts and a foundation date of 1735; and the Swedish National Academy of Mime and Acting, which is the continuation of the school of the Royal Dramatic Theatre, once attended by Greta Garbo. Other schools include the design school Konstfack, founded in 1844, the University College of Opera (founded in 1968 but with older roots), the University College of Dance, and the "Stockholms Musikpedagogiska Institut" (the University College of Music Education). The Södertörn University College was founded in 1995 as a multi-disciplinary institution for southern Metropolitan Stockholm, to balance the many institutions located in the northern part of the region. Other institutes of higher education are: The biggest complaints from students of higher education in Stockholm are the lack of student accommodation, the difficulty of finding other accommodation, and high rents. The Stockholm region is home to around 22% of Sweden's total population, and accounts for about 29% of its gross domestic product. The geographical notion of "Stockholm" has changed over time. 
By the turn of the 19th century, Stockholm largely consisted of the area today known as City Centre, roughly one-fifth of the current municipal area. In the ensuing decades several other areas were incorporated (such as Brännkyrka Municipality in 1913, at which time it had 25,000 inhabitants, and Spånga in 1949). The municipal border was established in 1971, with the exception of Hansta, which Stockholm Municipality purchased from Sollentuna Municipality in 1982 and which is today a nature reserve. Of the population of 935,619 in 2016, 461,677 were men and 473,942 were women. The average age is 40 years; 40.1% of the population is between 20 and 44 years of age. 382,887 people, or 40.9% of the population over the age of 15, were unmarried. 259,153 people, or 27.7% of the population, were married. 99,524 people, or 10.6% of the population, had been married but were divorced. 299,925 people, or 32.1% of Stockholm's residents, are of an immigrant or non-Swedish background. As of October 2018, there were 201,821 foreign-born people in Stockholm. The largest group of them are the Finns (17,000), followed by Iraqis (16,275), Poles (11,994) and Iranians (11,429). Residents of Stockholm are known as Stockholmers ("stockholmare"). Languages spoken in Greater Stockholm besides Swedish include Finnish, one of the official minority languages of Sweden, and English, as well as Albanian, Bosnian, Syriac, Arabic, Turkish, Kurdish, Persian, Dutch, Spanish, Serbian and Croatian. The entire Stockholm metropolitan area, consisting of 26 municipalities, has a population of over 2.2 million, making it the most populous city in the Nordic region. The Stockholm urban area, defined only for statistical purposes, had a total population of 1,630,738 in 2015; it also contains some, though not all, districts of several neighbouring municipalities. Apart from being Sweden's capital, Stockholm houses many national cultural institutions.
The Stockholm region is home to three of Sweden's World Heritage Sites – places judged invaluable and belonging to all of humanity: the Drottningholm Palace, Skogskyrkogården (The Woodland Cemetery) and Birka. In 1998, Stockholm was named European Capital of Culture. Authors connected to Stockholm include the poet and songwriter Carl Michael Bellman (1740–1795), novelist and dramatist August Strindberg (1849–1912), and novelist Hjalmar Söderberg (1869–1941), all of whom made Stockholm part of their works. Martin Beck is a fictional Swedish police detective from Stockholm, the main character in a series of 10 novels by Maj Sjöwall and Per Wahlöö, collectively titled The Story of a Crime, and often set in Stockholm. Other authors with notable heritage in Stockholm were the Nobel Prize laureate Eyvind Johnson (1900–1976) and the popular poet and composer Evert Taube (1890–1976). The novelist Per Anders Fogelström (1917–1998) wrote a popular series of historical novels depicting life in Stockholm from the mid-18th to the mid-20th century. The city's oldest section is Gamla stan (Old Town), located on the original small islands of the city's earliest settlements and still featuring the medieval street layout. Some notable buildings of Gamla stan are the large German Church ("Tyska kyrkan") and several mansions and palaces: the "Riddarhuset" (the House of Nobility), the Bonde Palace, the Tessin Palace and the Oxenstierna Palace. The oldest building in Stockholm is the Riddarholmskyrkan from the late 13th century. After a fire in 1697 destroyed the original medieval castle, Stockholm Palace was erected in a baroque style. Storkyrkan Cathedral, the episcopal seat of the Bishop of Stockholm, stands next to the castle. It was founded in the 13th century but is clad in a baroque exterior dating to the 18th century. As early as the 15th century, the city had expanded outside of its original borders.
Some pre-industrial, small-scale buildings from this era can still be found in Södermalm. During the 19th century and the age of industrialization Stockholm grew rapidly, with plans and architecture inspired by large continental cities such as Berlin and Vienna. Notable works of this time period include public buildings such as the Royal Swedish Opera and private developments such as the luxury housing on Strandvägen. In the 20th century, a nationalistic push spurred a new architectural style inspired by medieval and renaissance ancestry as well as influences of the Jugend/Art Nouveau style. A key landmark of Stockholm, the Stockholm City Hall, was erected between 1911 and 1923 by architect Ragnar Östberg. Other notable works of these times are the Stockholm Public Library and the World Heritage Site Skogskyrkogården. In the 1930s modernism characterized the development of the city as it grew. New residential areas sprang up, such as the development on Gärdet, while industrial development added to the growth, such as the KF manufacturing industries on Kvarnholmen, located in Nacka Municipality. In the 1950s, suburban development entered a new phase with the introduction of the Stockholm metro. The modernist developments of Vällingby and Farsta were internationally praised. In the 1960s this suburban development continued, but the industrialized, mass-produced blocks of flats built in the aesthetic of the times received a large amount of criticism. At the same time, the most central areas of the inner city were being redesigned in a process known as "Norrmalmsregleringen". Sergels Torg, with its five high-rise office towers, was created in the 1960s, followed by the total clearance of large areas to make room for new development projects. The most notable buildings from this period include the ensemble of the House of Culture, the City Theatre and the Riksbank at Sergels Torg, designed by architect Peter Celsing.
In the 1980s, the planning ideas of modernism were starting to be questioned, resulting in suburbs with denser planning, such as Skarpnäck. In the 1990s this idea was taken further with the development of an old industrial area close to the inner city, resulting in a mix of modernist and traditional urban planning in the new area of Hammarby Sjöstad. The municipality has appointed an official "board of beauty", called "Skönhetsrådet", to protect and preserve the beauty of the city. Stockholm's architecture (along with that of Visby, Gotland) provided the inspiration for Japanese anime director Hayao Miyazaki as he sought to evoke an idealized city untouched by war. His creation, called "Koriko", draws directly from what Miyazaki felt was Stockholm's sense of well-established architectural unity, vibrancy, independence, and safety. Stockholm has one of the world's highest concentrations of museums, with around 100 museums visited by millions of people every year. The Vasa Museum is a maritime museum on Djurgården which displays the only almost fully intact 17th-century ship ever salvaged, the 64-gun warship "Vasa" that sank on her maiden voyage in 1628. The Nationalmuseum houses the largest collection of art in the country: 16,000 paintings and 30,000 objects of art handicraft. The collection dates back to the days of Gustav Vasa in the 16th century, and has since been expanded with works by artists such as Rembrandt and Antoine Watteau, as well as constituting a main part of Sweden's art heritage, manifested in the works of Alexander Roslin, Anders Zorn, Johan Tobias Sergel, Carl Larsson, Carl Fredrik Hill and Ernst Josephson. From 2013 to 2018 the museum was closed for restoration of the building. Moderna Museet (Museum of Modern Art) is Sweden's national museum of modern art. It has works by noted modern artists such as Picasso and Salvador Dalí.
Skansen (in English: the Sconce) is a combined open-air museum and zoo, located on the island of Djurgården. It was founded in 1891 by Artur Hazelius (1833–1901) to show the way of life in the different parts of Sweden before the industrial era. Many other notable museums are found across the city. Stockholm also has a vibrant art scene with a number of internationally recognized art centres and commercial galleries. Privately sponsored initiatives such as Bonniers Konsthall and Magasin 3, and state-supported institutions such as Tensta Konsthall and Index, all show leading international and national artists. In the last few years, a gallery district has emerged around Hudiksvallsgatan, where leading galleries such as Andréhn-Schiptjenko and Brändström & Stene are located. Other important commercial galleries include Nordenhake, Milliken Gallery and Galleri Magnus Karlsson. The Stockholm suburbs are culturally diverse. Some areas in the inner suburbs, including Skärholmen, Tensta, Jordbro, Fittja, Husby, Brandbergen, Rinkeby, Rissne, Kista, Hagsätra, Hässelby, Farsta, Rågsved, Flemingsberg, and the outer suburb of Södertälje, have high percentages of immigrants or second-generation immigrants. These mainly come from the Middle East (Assyrians, Syriacs, Turks and Kurds), as well as Bosnians and Serbs, but there are also immigrants from Africa, Southeast Asia and Latin America. Other parts of the inner suburbs, such as Täby, Danderyd, Lidingö and Flysta, as well as some of the suburbs mentioned above, have a majority of ethnic Swedes. Distinguished among Stockholm's many theatres are the Royal Dramatic Theatre ("Kungliga Dramatiska Teatern"), one of Europe's most renowned theatres, and the Royal Swedish Opera, inaugurated in 1773.
Other notable theatres are the Stockholm City Theatre ("Stockholms stadsteater"), the People's Opera ("Folkoperan"), the Modern Theatre of Dance ("Moderna dansteatern"), the China Theatre, the Göta Lejon Theatre, the Mosebacke Theatre, and the Oscar Theatre. Gröna Lund is an amusement park located on the island of Djurgården, with over 30 attractions and many restaurants. It is a popular tourist attraction, visited by thousands of people every day, and is open from the end of April to the middle of September. Gröna Lund also serves as a concert venue. Stockholm is the media centre of Sweden. It has four nationwide daily newspapers and is also the central location of the publicly funded radio (SR) and television (SVT) broadcasters. In addition, all other major television channels have their base in Stockholm, such as TV3, TV4 and TV6. All major magazines are also located in Stockholm, as is the largest literature publisher, the Bonnier group. The world's best-selling video game "Minecraft" was created in Stockholm by Markus 'Notch' Persson in 2009, and its developer, Mojang, is currently headquartered there. The most popular spectator sports are football and ice hockey. The three most popular football clubs in Stockholm are AIK, Djurgårdens IF and Hammarby IF, who all play in the first tier, Allsvenskan. AIK play at Sweden's national stadium for football, Friends Arena in Solna, which has a capacity of 54,329. The 2017 Europa League final was played on 24 May between AFC Ajax and Manchester United at Friends Arena; Manchester United won the trophy after a 2–0 victory. Djurgårdens IF and Hammarby play at Tele2 Arena in Johanneshov, with a capacity of 30,000 spectators. All three clubs are multi-sport clubs with ice hockey teams; Djurgårdens IF play in the first tier, AIK in the second and Hammarby in the third, and the clubs also field teams in bandy, basketball, floorball and other sports, including individual sports.
Historically, the city was the host of the 1912 Summer Olympics. From those days stems the Stockholms Olympiastadion, which has since hosted numerous sports events, notably football and athletics. Other major sports arenas are Friends Arena, the new national football stadium; Stockholm Globe Arena, a multi-sport arena and one of the largest spherical buildings in the world; and the nearby indoor arena Hovet. Besides the 1912 Summer Olympics, Stockholm hosted the equestrian events of the 1956 Summer Olympics and matches of UEFA Euro 1992, and the city was second runner-up in the bidding for the 2004 Summer Olympics. Stockholm also hosted the 1958 FIFA World Cup. Stockholm bid jointly with Åre for the 2026 Winter Olympics and Paralympics, competing against the joint bid of Milan and Cortina d'Ampezzo, Italy. Had the bid succeeded, Stockholm would have become the second city, after Beijing, to host both the Summer and Winter Olympics, as well as the second city, again after Beijing, to host both the Summer and Winter Paralympics; together with Åre, which along with Östersund hosts the 2021 Special Olympics World Winter Games, the region would also have hosted all three winter events: the Winter Olympic Games, the Winter Paralympic Games and the Special Olympics World Winter Games. Stockholm first bid for the 2022 Winter Olympics, but withdrew its bid in 2014 for financial reasons. Stockholm also hosted all but one of the Nordic Games, a winter multi-sport event that predated the Winter Olympics. In 2015, the Stockholms Kungar rugby league club was formed. They are Stockholm's first rugby league team and will play in Sweden's national rugby league championship. Every year Stockholm hosts the ÖTILLÖ Swimrun World Championship. Stockholm has hosted the Stockholm Open, an ATP World Tour 250 series professional tennis tournament, annually since 1969. Each year since 1995, the tournament has been held at the Kungliga tennishallen. There are over 1,000 restaurants in Stockholm.
Stockholm boasts a total of ten Michelin-starred restaurants, two with two stars and one with three stars. Stockholm is one of the cleanest capitals in the world. The city was granted the 2010 European Green Capital Award by the EU Commission, making it Europe's first "green capital". Applicant cities were evaluated in several areas: climate change, local transport, public green areas, air quality, noise, waste, water consumption, waste water treatment, sustainable utilisation of land, biodiversity and environmental management. Out of 35 participant cities, eight finalists were chosen: Stockholm, Amsterdam, Bristol, Copenhagen, Freiburg, Hamburg, Münster, and Oslo. Some of the reasons why Stockholm won the 2010 European Green Capital Award were: its integrated administrative system, which ensures that environmental aspects are considered in budgets, operational planning, reporting, and monitoring; its cut in carbon dioxide emissions of 25% per capita in ten years; and its decision to become fossil fuel free by 2050. Stockholm has long demonstrated concern for the environment. The city's current environmental program is the fifth since the first one was established in the mid-1970s. In 2011, Stockholm passed the title of European Green Capital to Hamburg, Germany. In the beginning of 2010, Stockholm launched the program Professional Study Visits in order to share the city's green best practices. The program provides visitors with the opportunity to learn how to address issues such as waste management, urban planning, carbon dioxide emissions, and sustainable and efficient transportation systems, among others. According to the European Cities Monitor 2010, Stockholm is the best city in terms of freedom from pollution. Surrounded by 219 nature reserves, Stockholm has around 1,000 green spaces, corresponding to 30% of the city's area. Founded in 1995, the Royal National City Park is the world's first legally protected "national urban park".
For a description of the formation process, value assets and implementation of the legal protection of the Royal National Urban Park, see Schantz 2006. The water in Stockholm is so clean that people can dive and fish in the centre of the city. The waters of downtown Stockholm serve as spawning grounds for multiple fish species, including trout and salmon, though human intervention is needed to keep populations up. Regarding CO2 emissions, the government's target is that Stockholm will be CO2 free before 2050. Stockholm used to have problematic levels of particulates (PM10) due to studded winter tires, but as of 2016 the levels are below limits, after street-specific bans. Instead the current (2016) problem is nitrogen oxides emitted by diesel vehicles. In 2016 the average levels for urban background (roof of Torkel Knutssonsgatan) were: NO2 11 μg/m3, NOx 14 μg/m3, PM10 12 μg/m3, PM2.5 4.9 μg/m3, soot 0.4 μg/m3, ultrafine particles 6200/cm3, CO 0.2 mg/m3, SO2 0.4 μg/m3, ozone 51 μg/m3. For urban street level (the densely trafficked Hornsgatan) the average levels were: NO2 43 μg/m3, NOx 104 μg/m3, PM10 23 μg/m3, PM2.5 5.9 μg/m3, soot 1.0 μg/m3, ultrafine particles 17100/cm3, CO 0.3 mg/m3, ozone 31 μg/m3. Stockholm has an extensive public transport system. It consists of the Stockholm Metro, which comprises three color-coded main systems (green, red and blue) with seven lines (10, 11, 13, 14, 17, 18, 19); the Stockholm commuter rail, which runs on the state-owned railroads on six lines (40, 41, 42, 43, 44, 48); four light rail/tramway lines (7, 12, 21, and 22); the 891 mm narrow-gauge railway Roslagsbanan, on three lines (27, 28, 29) in the northeastern part; the local railway Saltsjöbanan, on two lines (25, 26) in the southeastern part; a large number of bus lines; and the inner-city Djurgården ferry.
The overwhelming majority of the land-based public transport in Stockholm County (save for the airport buses/airport express trains and a few other commercially viable bus lines) is organized under the common umbrella of Storstockholms Lokaltrafik (SL), an aktiebolag wholly owned by Stockholm County Council. Since the 1990s, the operation and maintenance of the SL public transport services have been contracted out to independent companies bidding for contracts, such as MTR, which currently operates the Metro. The archipelago boat traffic is handled by Waxholmsbolaget, which is also wholly owned by the County Council. SL has a common ticket system covering the entire Stockholm County, which allows for easy travel between different modes of transport. The tickets are of two main types, single tickets and travel cards, both allowing unlimited travel with SL in the entire Stockholm County for the duration of the ticket's validity. On 1 April 2007, a zone system (A, B, C) and price system was introduced. Single tickets were available as cash tickets, individual unit pre-paid tickets, pre-paid ticket slips of eight, SMS tickets and machine tickets. Cash tickets bought at the point of travel were the most expensive, and pre-paid ticket slips of eight were the cheapest. A single ticket costs 32 SEK with the card and 45 SEK without, and is valid for 75 minutes. The duration of the travel card's validity depended on the exact type; cards were available from 24 hours up to a year. As of 2018, a 30-day card costs 860 SEK. Tickets of all these types were available at reduced prices for students and persons under 20 and over 65 years of age. On 9 January 2017, the zone system was removed and the cost of the tickets was increased.
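The single-ticket rules above (unlimited travel for 75 minutes; 32 SEK with the SL card, 45 SEK without) can be sketched in a few lines of Python. This is an illustrative sketch only; the function names are my own, not part of any SL system.

```python
from datetime import datetime, timedelta

# A single ticket allows unlimited SL travel for 75 minutes.
SINGLE_TICKET_VALIDITY = timedelta(minutes=75)

def single_ticket_price_sek(has_card: bool) -> int:
    # 32 SEK when bought with the SL card, 45 SEK without.
    return 32 if has_card else 45

def ticket_is_valid(activated_at: datetime, now: datetime) -> bool:
    # Valid from activation until 75 minutes have elapsed.
    return timedelta(0) <= now - activated_at <= SINGLE_TICKET_VALIDITY

start = datetime(2018, 5, 1, 8, 0)
print(ticket_is_valid(start, start + timedelta(minutes=74)))  # True
print(ticket_is_valid(start, start + timedelta(minutes=76)))  # False
```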
With an estimated cost of SEK 16.8 billion (January 2007 price level), equal to 2.44 billion US dollars, the City Line, an environmentally certified project, comprises a commuter train tunnel running through rock and under water beneath Stockholm, with two new stations (Stockholm City and Stockholm Odenplan), and a railway bridge at Årsta. The City Line was built by the Swedish Transport Administration in co-operation with the City of Stockholm, Stockholm County Council, and Stockholm Transport (SL). As Stockholm Central Station is overloaded, the purpose of this project was to double the city's track capacity and improve service efficiency. Operations began in July 2017. Between Riddarholmen and Söder Mälarstrand, the City Line runs through a submerged concrete tunnel. As a green project, the City Line includes the purification of waste water; noise reduction through sound-attenuating tracks; the use of synthetic diesel, which gives cleaner air; and the recycling of excavated rock. Stockholm is at the junction of the European routes E4, E18 and E20. A half-completed motorway ring road exists on the south, west and north sides of the City Centre. The northern section of the ring road, Norra Länken, opened for traffic in 2015, while the final subsea eastern section is being discussed as a future project. A bypass motorway for traffic between northern and southern Sweden, Förbifart Stockholm, is currently being built. The many islands and waterways make extensions of the road system both complicated and expensive, and new motorways are often built as systems of tunnels and bridges. Stockholm has a congestion pricing system, the Stockholm congestion tax, in use on a permanent basis since 1 August 2007, after a seven-month trial period in the first half of 2006. The City Centre is within the congestion tax zone. All the entrances and exits of this area have unmanned control points operating with automatic number plate recognition.
All vehicles entering or exiting the congestion tax area, with a few exceptions, have to pay 10–20 SEK (1.09–2.18 EUR, 1.49–2.98 USD) depending on the time of day between 06:30 and 18:29. The maximum tax amount per vehicle per day is 60 SEK (6.53 EUR). Payment is made by various means within 14 days after one has passed one of the control points; one cannot pay at the control points. After the trial period was over, consultative referendums were held in Stockholm Municipality and several other municipalities in Stockholm County. The governing cabinet at the time (the Persson Cabinet) stated that it would only take into consideration the result of the referendum in Stockholm Municipality. The opposition parties (the Alliance for Sweden) stated that if they were to form a cabinet after the general election—which was held the same day as the congestion tax referendums—they would also take into consideration the referendums held in the other municipalities in Stockholm County. The result of the referendums was that Stockholm Municipality voted for the congestion tax, while the other municipalities voted against it. The opposition parties won the general election, and a few days before they formed a government (the Reinfeldt Cabinet) they announced that the congestion tax would be reintroduced in Stockholm, but that the revenue would go entirely to road construction in and around Stockholm. During the trial period, and according to the agenda of the previous government, the revenue went entirely to public transport. Stockholm has regular ferry lines to Helsinki and Turku in Finland (commonly called "Finlandsfärjan"); Tallinn, Estonia; Riga, Latvia; the Åland islands; and Saint Petersburg. The large Stockholm archipelago is served by the archipelago boats of Waxholmsbolaget (owned and subsidized by Stockholm County Council).
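The congestion charge described earlier (10–20 SEK per passage depending on the time of day between 06:30 and 18:29, capped at 60 SEK per vehicle per day) can be sketched as a small Python function. This is a minimal illustration, not the actual charging system: the per-passage fee schedule is supplied by the caller, since the exact rates for each time of day are not given here.

```python
def in_charging_window(hour: int, minute: int) -> bool:
    # The congestion tax applies only between 06:30 and 18:29.
    return (6, 30) <= (hour, minute) <= (18, 29)

def daily_tax_sek(passages, fee_for_time) -> int:
    # Sum the per-passage fees (10-20 SEK depending on time of day,
    # via the caller-supplied schedule), capped at 60 SEK per vehicle per day.
    total = sum(fee_for_time(h, m) for (h, m) in passages
                if in_charging_window(h, m))
    return min(total, 60)

# Hypothetical fee schedule: 20 SEK during peak hours, 10 SEK otherwise.
fee = lambda h, m: 20 if h in (7, 8, 16, 17) else 10
print(daily_tax_sek([(7, 15), (17, 45), (21, 0)], fee))  # 40: the 21:00 passage is free
```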
Between April and October, during the warmer months, it is possible to rent Stockholm City Bikes by purchasing a bike card online or through retailers. Cards allow users to rent bikes from any Stockholm City Bikes stand across the city and return them at any stand. There are two types of cards: the Season Card (valid from 1 April to 31 October) and the 3-day card. When their validity runs out they can be reactivated, making them reusable. Bikes can be used for up to three hours per loan and can be rented from Monday to Sunday, from 6 am to 10 pm. The Arlanda Express airport rail link runs between Arlanda Airport and central Stockholm. With a journey time of 20 minutes, the train is the fastest way of traveling to the city centre. Arlanda Central Station is also served by commuter, regional and intercity trains. There are also bus lines, Flygbussarna, that run between central Stockholm and all the airports. Stockholm Central Station has train connections to many Swedish cities as well as to Oslo, Norway, and Copenhagen, Denmark. The popular X 2000 service to Gothenburg takes three hours. Most of the trains are run by SJ AB. Stockholm often performs well in international rankings.
Stamp collecting Stamp collecting is the collecting of postage stamps and related objects. It is related to philately, which is the study of stamps. It has been one of the world's most popular hobbies since the late nineteenth century with the rapid growth of the postal service, as a never-ending stream of new stamps was produced by countries that sought to advertise their distinctiveness through their stamps. Stamp collecting is generally accepted as one of the areas that make up the wider subject of philately, which is the study of stamps. A philatelist may, but does not have to, collect stamps. It is not uncommon for the term "philatelist" to be used to mean a stamp collector. Many casual stamp collectors accumulate stamps for sheer enjoyment and relaxation without worrying about the tiny details. The creation of a large or comprehensive collection, however, generally requires some philatelic knowledge and will usually contain areas of philatelic studies. Postage stamps are often collected for their historical value and geographical aspects and also for the many subjects depicted on them, ranging from ships, horses, and birds to kings, queens and presidents. Sales of postage stamps are an important source of income for some countries whose stamp issues may exceed their postal needs, but have designs that appeal to many stamp collectors. It has been suggested that John Bourke, Receiver General of Stamp Dues in Ireland, was the first collector. In 1774 he assembled a book of the existing embossed revenue stamps, ranging in value from 6 pounds to half a penny, as well as the hand stamped charge marks that were used with them. His collection is preserved in the Royal Irish Academy, Dublin. Postage stamp collecting began at the same time that stamps were first issued, and by 1860 thousands of collectors and stamp dealers were appearing around the world as this new study and hobby spread across Europe, European colonies, the United States and other parts of the world. 
The first postage stamp, the Penny Black, was issued by Britain in May 1840 and pictured a young Queen Victoria. It was produced without perforations (imperforate) and consequently had to be cut from the sheet with scissors in order to be used. While unused examples of the Penny Black are quite scarce, used examples are quite common, and may be purchased for $20 to $200, depending upon condition. People started to collect stamps almost immediately. One of the earliest and most notable was John Edward Gray. In 1862, Gray stated that he "began to collect postage stamps shortly after the system was established and before it had become a rage". Female stamp collectors date from the earliest days of postage stamp collecting. One of the earliest was Adelaide Lucy Fenton, who wrote articles in the 1860s for the journal "The Philatelist" under the name Herbert Camoens. As the hobby and study of stamps began to grow, stamp albums and stamp-related literature began to surface, and by the early 1880s publishers such as Stanley Gibbons had made a business of supplying them. Children and teenagers were early collectors of stamps in the 1860s and 1870s. Many adults dismissed it as a childish pursuit, but later many of those same collectors, as adults, began to systematically study the available postage stamps and publish books about them. Some stamps, such as the triangular issues of the Cape of Good Hope, have become legendary. Stamp collecting is a less popular hobby in the early 21st century than it was a hundred years ago. In 2013, "The Wall Street Journal" estimated the global number of stamp collectors at around 60 million. Tens of thousands of stamp dealers supply them with stamps along with stamp albums, catalogues and other publications. There are also thousands of stamp (philatelic) clubs and organizations that introduce them to the history and other aspects of stamps.
Today, though the number of collectors is somewhat smaller, stamp collecting is still one of the world's most popular indoor hobbies. A few basic items of equipment are recommended for proper stamp collecting. Stamp tongs help to handle stamps safely, a magnifying glass helps in viewing fine details, and an album is a convenient way to store stamps. The stamps need to be attached to the pages of the album in some way, and stamp hinges are a cheap and simple way to do this. However, hinging stamps can damage them, thus reducing their value; today many collectors prefer more expensive "hingeless mounts". Issued in various sizes, these are clear, chemically neutral thin plastic holders that open to receive stamps and are gummed on the back so that they stick to album pages. Another alternative is a stockbook, where the stamps drop into clear pockets without the need for a mount. Stamps should be stored away from light, heat and moisture or they will be damaged. Stamps can be displayed according to the collector's wishes: by country, topic, or even by size, which can create a display pleasing to the eye. There are no rules, and it is entirely a matter for the individual collector to decide. Albums can be commercially purchased, downloaded or created by the collector. In the latter cases, using acid-free paper provides better long-term stamp protection. Many collectors ask their family and friends to save stamps for them from their mail. Although the stamps received by major businesses and those kept by elderly relatives may be of international and historical interest, the stamps received from family members are often of the definitive sort. Definitives seem mundane but, considering their variety of colours, watermarks, paper differences, perforations and printing errors, they can fill many pages in a collection. Introducing either variety or specific focus to a collection can require the purchase of stamps, either from a dealer or online.
Online stamp collector clubs often contain a platform for buying, selling and trading. Large numbers of relatively recent stamps, often still attached to fragments or envelopes, may be obtained cheaply and easily. Rare and old stamps can also be obtained, but these can be very expensive. Duplicate stamps are those a collector already has and that are therefore not required to fill a gap in a collection. Duplicate stamps can be sold or traded, so they are an important medium of exchange among collectors. Many dealers sell stamps through the Internet, while others have neighborhood shops, which are among the best resources for beginning and intermediate collectors. Some dealers also jointly set up weekend stamp markets called "bourses" that move around a region from week to week. They also meet collectors at regional exhibitions and stamp shows. A worldwide collection would be enormous, running to thousands of volumes, and would be incredibly expensive to acquire. Many consider that Count Philipp von Ferrary's collection at the beginning of the 20th century was the most complete ever formed. Many collectors limit their collecting to particular countries, certain time periods or particular subjects (called "topicals"), such as birds or aircraft. There are thousands of organizations for collectors: local stamp clubs, special-interest groups, and national organizations. Most nations have a national collectors' organization, including the American Philatelic Society (APS) in the United States; the Royal Philatelic Society London and the Philatelic Traders Society in the United Kingdom; and the Royal Philatelic Society of Canada in Canada. The Internet has greatly expanded the availability of information and made it easier to obtain stamps and other philatelic material. The American Topical Association is now a part of the APS and promotes thematic collecting as well as encouraging sub-groups on numerous topics.
Stamp clubs and philatelic societies can add a social aspect to stamp collecting and provide a forum where novices can meet experienced collectors. Although such organizations are often advertised in stamp magazines and online, the relatively small number of collectors – especially outside urban areas – means that a club may be difficult to set up and sustain. The Internet partially solves this problem, as the association of collectors online is not limited by geographical distance. For this reason, many highly specific stamp clubs have been established on the Web, with international membership. Organizations such as the Cinderella Stamp Club (UK) have hundreds of members interested in a specific aspect of collecting. Social organizations, such as the Lions Club and Rotary International, have also formed stamp collecting groups devoted to the stamps, issued by many countries worldwide, that display the organization's logo. Rare stamps are often old, and many have interesting stories attached to them. Stamp catalogues are the primary tool used by serious collectors to organize their collections and to identify and value stamps. Most stamp shops have stamp catalogues available for purchase. A few catalogues are offered online, either free or for a fee. There are hundreds of different catalogues, most specializing in particular countries or periods. Collector clubs tend to provide free catalogues to their members. The stamp collection assembled by French-Austrian aristocrat Philipp von Ferrary (1850–1917) at the beginning of the 20th century is widely considered the most complete stamp collection ever formed (or likely to be formed). It included, for example, all of the great rarities that had been issued by 1917. However, as Ferrary was an Austrian citizen, the collection was broken up and sold by the French government after the First World War as war reparations.
A close rival was Thomas Tapling (1855–1891), whose Tapling Collection was donated to the British Museum. Several European monarchs were keen stamp collectors, including King George V of the United Kingdom and King Carol II of Romania. King George V possessed one of the most valuable stamp collections in the world and became President of the Royal Philatelic Society. His collection was passed on to Queen Elizabeth II, who, while not a serious philatelist, has a collection of British and Commonwealth first day covers which she started in 1952. U.S. President Franklin Delano Roosevelt was a stamp collector; he designed several American commemorative stamps during his term. Late in life, Ayn Rand renewed her childhood interest in stamps and became an enthusiastic collector. Several entertainment and sports personalities are known to have been collectors. Freddie Mercury, lead singer of the band Queen, collected stamps as a child. His childhood stamp album is in the collection of the British Postal Museum & Archive. John Lennon of The Beatles was a childhood stamp collector. His stamp album is held by the National Postal Museum. Former world chess champion Anatoly Karpov has amassed a huge stamp collection over the decades, led by stamps from Belgium and the Belgian Congo, that has been estimated to be worth $15 million.
https://en.wikipedia.org/wiki?curid=26742
Near-Earth object A near-Earth object (NEO) is any small Solar System body whose orbit brings it into proximity with Earth. By convention, a Solar System body is a NEO if its closest approach to the Sun (perihelion) is less than 1.3 astronomical units (AU). If a NEO's orbit crosses the Earth's and the object is larger than 140 meters across, it is considered a potentially hazardous object (PHO). Most known PHOs and NEOs are asteroids, but a small fraction are comets. There are over 20,000 known near-Earth asteroids (NEAs), over a hundred short-period near-Earth comets (NECs), and a number of solar-orbiting spacecraft and meteoroids large enough to be tracked in space before striking the Earth. It is now widely accepted that collisions in the past have had a significant role in shaping the geological and biological history of the Earth. NEOs have attracted increased interest since the 1980s because of greater awareness of the potential danger. Asteroids as small as 20 m across can damage the local environment and populations. Larger asteroids penetrate the atmosphere to the surface of the Earth, producing craters if they impact a continent or tsunamis if they impact the sea. Asteroid impact avoidance by deflection is possible in principle, and methods of mitigation are being researched. Two scales, the Torino scale and the more complex Palermo scale, rate a risk based on how probable the orbit calculations of an identified NEO make an Earth impact and on how severe the consequences of such an impact would be. Some NEOs have had temporarily positive Torino or Palermo scale ratings after their discovery, but in every case more precise calculations based on longer observation arcs have since reduced the rating to 0 or below. Since 1998, the United States, the European Union, and other nations have been scanning the sky for NEOs in an effort called Spaceguard. 
The initial US Congress mandate to NASA was to catalog at least 90% of NEOs that are at least 1 km in diameter, which could cause a global catastrophe; this goal was met by 2011. In later years, the survey effort has been expanded to smaller objects, which have the potential for large-scale, though not global, damage. NEOs have low surface gravity, and many have Earth-like orbits that make them easy targets for spacecraft. To date, five near-Earth comets and five near-Earth asteroids have been visited by spacecraft. A small sample of one NEO was returned to Earth in 2010, and similar missions are in progress. Preliminary plans for commercial asteroid mining have been drafted by private startup companies. Near-Earth objects (NEOs) are technically and by convention defined as all small Solar System bodies with orbits around the Sun that lie partly between 0.983 and 1.3 astronomical units (AU; the Sun–Earth distance) from the Sun. Thus, NEOs are not necessarily currently near the Earth, but they can potentially approach the Earth relatively closely. The term is also sometimes used more flexibly, for example for objects in orbit around the Earth or for quasi-satellites, which have a more complex orbital relationship with the Earth. When a NEO is detected, like all other small Solar System bodies, its positions and brightness are submitted to the International Astronomical Union's (IAU's) Minor Planet Center (MPC) for cataloging. The MPC maintains separate lists of confirmed NEOs and potential NEOs. The orbits of some NEOs intersect that of the Earth, so they pose a collision danger. These are considered potentially hazardous objects (PHOs) if their estimated diameter is above 140 meters. The MPC maintains a separate list for the asteroids among PHOs, the potentially hazardous asteroids (PHAs). 
NEOs are also catalogued by two separate units of the Jet Propulsion Laboratory (JPL) of the National Aeronautics and Space Administration (NASA): the Center for Near Earth Object Studies (CNEOS) and the Solar System Dynamics Group. PHAs are currently defined based on parameters relating to their potential to approach the Earth dangerously closely and the estimated consequences that an impact would have. Broadly, objects with an Earth minimum orbit intersection distance (MOID) of 0.05 AU or less and an absolute magnitude of 22.0 or brighter (a rough indicator of large size) are considered PHAs. Objects that either cannot approach closer to the Earth than 0.05 AU (i.e. MOID greater than 0.05 AU), or are fainter than H = 22.0 (about 140 m in diameter with an assumed albedo of 14%), are not considered PHAs. NASA's catalog of near-Earth objects also includes the approach distances of asteroids and comets (expressed in lunar distances). The first near-Earth objects to be observed by humans were comets. Their extraterrestrial nature was recognised and confirmed only after Tycho Brahe tried to measure the distance of a comet through its parallax in 1577 and the lower limit he obtained was well above the Earth's diameter; the periodicity of some comets was first recognised in 1705, when Edmond Halley published his orbit calculations for the returning object now known as Halley's Comet. The 1758–1759 return of Halley's Comet was the first comet appearance predicted in advance. It has been said that Lexell's Comet of 1770 was the first discovered near-Earth object. The first near-Earth asteroid to be discovered was 433 Eros in 1898. The asteroid was subject to several extensive observation campaigns, primarily because measurements of its orbit enabled a precise determination of the then imperfectly known distance of the Earth from the Sun. In 1937, asteroid 69230 Hermes was discovered when it passed the Earth at twice the distance of the Moon. 
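As a rough illustration, the NEO and PHA criteria described above can be expressed as two simple predicates. This is a sketch with hypothetical function names, using the threshold values quoted in the text (1.3 AU and 0.983 AU for the NEO orbital range, 0.05 AU MOID and H = 22.0 for PHAs), not any official catalog API:

```python
# Sketch of the NEO and PHA criteria quoted above; function names are ours.

def is_neo(perihelion_au: float, aphelion_au: float) -> bool:
    """A body is a NEO if its orbit lies partly between 0.983 and 1.3 AU,
    i.e. it comes inside 1.3 AU without staying entirely inside 0.983 AU."""
    return perihelion_au < 1.3 and aphelion_au > 0.983

def is_pha(moid_au: float, abs_magnitude: float) -> bool:
    """A NEO is flagged as potentially hazardous if its Earth MOID is
    0.05 AU or less and it is at least as bright as H = 22.0
    (roughly 140 m across at an assumed 14% albedo)."""
    return moid_au <= 0.05 and abs_magnitude <= 22.0

# Example: an Earth-crossing Apollo-type orbit
print(is_neo(0.95, 1.8))   # perihelion inside 1.3 AU -> True
print(is_pha(0.03, 19.2))  # close MOID and bright (large) -> True
print(is_pha(0.03, 24.0))  # close MOID but too small/faint -> False
```

Note that the MOID test uses the closest possible distance between the two orbits, not the current distance between the bodies, which is why a PHA need not be anywhere near Earth today.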
Hermes was considered a threat because it was lost after its discovery; thus its orbit and potential for collision with Earth were not known precisely. Hermes was only re-discovered in 2003, and it is now known to be no threat for at least the next century. On June 14, 1968, the 1.4 km diameter asteroid 1566 Icarus passed Earth at 16 times the distance of the Moon. During this approach, Icarus became the first minor planet to be observed using radar, with measurements obtained at the Haystack Observatory and the Goldstone Tracking Station. This was the first close approach predicted years in advance (Icarus had been discovered in 1949), and it also earned significant public attention, due to alarmist news reports. A year before the approach, MIT students launched Project Icarus, devising a plan to deflect the asteroid with rockets in case it was found to be on a collision course with Earth. Project Icarus received wide media coverage, and inspired the 1979 disaster movie "Meteor", in which the US and the USSR join forces to blow up an Earth-bound fragment of an asteroid hit by a comet. On March 23, 1989, the Apollo asteroid 4581 Asclepius (1989 FC) narrowly missed the Earth. If the asteroid had impacted, it would have created the largest explosion in recorded history, equivalent to 20,000 megatons of TNT. It attracted widespread attention because it was discovered only after its closest approach. In March 1998, early orbit calculations for a recently discovered asteroid showed a potential 2028 close approach to the Earth, well within the orbit of the Moon, but with a large error margin allowing for a direct hit. Further data allowed a revision of the 2028 approach distance, leaving no chance of collision. By that time, inaccurate reports of a potential impact had caused a media storm. From the late 1990s, a typical frame of reference in searches for NEOs has been the scientific concept of risk. 
The risk that any near-Earth object poses is viewed with regard to both the culture and the technology of human society. Through history, humans have associated NEOs with changing risks, based on religious, philosophical or scientific views, as well as humanity's technological or economic capability to deal with such risks. Thus, NEOs have been seen as omens of natural disasters or wars; harmless spectacles in an unchanging universe; the source of era-changing cataclysms or potentially poisonous fumes (during Earth's passage through the tail of Halley's Comet in 1910); and finally as a possible cause of a crater-forming impact that could even cause extinction of humans and other life on Earth. The potential of catastrophic impacts by near-Earth comets was recognised as soon as the first orbit calculations provided an understanding of their orbits: in 1694, Edmond Halley presented a theory that Noah's flood in the Bible was caused by a comet impact. Human perception of near-Earth asteroids as benign objects of fascination or killer objects with high risk to human society has ebbed and flowed during the short time that NEAs have been scientifically observed. Since the 1980s, after the confirmation of the theory that the Cretaceous–Paleogene extinction event (in which the dinosaurs died out) 65 million years ago was caused by a large asteroid impact, scientists have recognised the threat of impacts that create craters much bigger than the impacting bodies and have indirect effects on an even wider area. The awareness of the wider public of the impact risk rose after the observation of the impact of the fragments of Comet Shoemaker–Levy 9 into Jupiter in July 1994. In 1998, the movies "Deep Impact" and "Armageddon" popularised the notion that near-Earth objects could cause catastrophic impacts. 
Also at that time, a conspiracy theory arose about the supposed 2003 impact of the fictitious planet Nibiru, which persisted on the internet as the predicted impact date was moved to 2012 and then 2017. There are two schemes for the scientific classification of impact hazards from NEOs: the Torino scale and the Palermo scale. On both scales, risks of any concern are indicated by values above zero. The annual background frequency used in the Palermo scale for impacts of energy greater than "E" megatonnes is estimated as 0.03 × "E"^−0.8. For instance, this formula implies that the expected value of the time from now until the next impact greater than 1 megatonne is 33 years, and that when it occurs, there is a 50% chance that it will be above 2.4 megatonnes. This formula is only valid over a certain range of "E". However, another paper published in 2002 – the same year as the paper on which the Palermo scale is based – found a power law with different constants, which gives considerably lower rates for a given "E". For instance, it gives the rate for bolides of 10 megatonnes or more (like the Tunguska explosion) as 1 per thousand years, rather than 1 per 210 years as in the Palermo formula. However, the authors give a rather large uncertainty (once in 400 to 1800 years for 10 megatonnes), due in part to uncertainties in determining the energies of the atmospheric impacts that they used in their determination. NASA maintains an automated system to evaluate the threat from known NEOs over the next 100 years, which generates the continuously updated Sentry Risk Table. All or nearly all of the objects are highly likely to drop off the list eventually as more observations come in, reducing the uncertainties and enabling more accurate orbital predictions. In March 2002, a newly discovered asteroid became the first with a positive rating on the Torino scale, with about a 1 in 9,300 chance of an impact in 2049. Additional observations reduced the estimated risk to zero, and the asteroid was removed from the Sentry Risk Table in April 2002. 
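The two consequences attributed to the background-frequency formula above (a 33-year expected wait at 1 megatonne, and a median impact energy of 2.4 megatonnes) can be checked numerically. The sketch below assumes the commonly cited power law f_B = 0.03·E^(−0.8) impacts per year, with E in megatonnes; the constants are not spelled out in the text and are an assumption here:

```python
# Quick check of the Palermo-scale background frequency, assuming the
# commonly cited power law f_B = 0.03 * E**-0.8 impacts per year (E in Mt).

def annual_frequency(E_megatonnes: float) -> float:
    return 0.03 * E_megatonnes ** -0.8

# Expected waiting time for an impact of at least 1 megatonne:
wait_years = 1.0 / annual_frequency(1.0)
print(round(wait_years))   # ~33 years, matching the figure in the text

# Median energy of such an impact: the E at which the cumulative
# frequency halves relative to the 1-megatonne rate (E**-0.8 == 0.5).
median_E = 0.5 ** (1 / -0.8)
print(round(median_E, 1))  # ~2.4 megatonnes
```

That both quoted figures fall out of one power law is a useful consistency check on the formula's constants.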
It is now known that for the next two centuries the asteroid will pass the Earth safely, with its closest approach on August 31, 2080. Another asteroid was lost after its 1950 discovery, since its observations over just 17 days were insufficient to determine its orbit; it was rediscovered on December 31, 2000. It has a diameter of about a kilometer (0.6 miles). It was also observed by radar during its close approach in 2001, allowing much more precise orbit calculations. Although this asteroid will not strike for at least 800 years and thus has no Torino scale rating, it was added to the Sentry list in April 2002 because it was the first object with a Palermo scale value greater than zero. The then-calculated 1 in 300 maximum chance of impact and +0.17 Palermo scale value was roughly 50% greater than the background risk of impact by all similarly large objects until 2880. Uncertainties in the orbit calculations were further reduced using radar observations in 2012, and this decreased the odds of an impact. Taking all radar and optical observations until 2015 into account, the probability of impact is now assessed at 1 in 8,300. The corresponding Palermo scale value of −1.42 is still the highest for all objects on the Sentry Risk Table. Currently, only one other object has a Palermo scale value above −2 for a single impact date. On December 24, 2004, asteroid 99942 Apophis (at the time known only by its provisional designation) was assigned a 4 on the Torino scale, the highest rating ever given, as the information available at the time translated to a 2.7% chance of Earth impact on Friday, April 13, 2029. By December 28, 2004, additional observations had produced a smaller uncertainty zone for the 2029 approach which no longer included the Earth. The 2029 risk of impact consequently dropped to zero, but later potential impact dates were still rated 1 on the Torino scale. Further observations lowered the remaining risk, for a potential 2036 impact, to a Torino rating of 0 in August 2006. 
Current calculations show Apophis has no chance of impacting Earth before 2060. In February 2006, another asteroid was assigned a Torino scale rating of 2 due to a close encounter predicted for May 4, 2102. After more precise calculations, the rating was lowered to 1 in May 2006 and to 0 in October 2006, and the asteroid was removed from the Sentry Risk Table entirely in February 2008. Currently, the object listed with the highest chance of impacting Earth has odds of 1 in 20 on September 5, 2095. The asteroid, however, is much too small to be considered a potentially hazardous asteroid and it poses no serious threat: the possible 2095 impact therefore rates only −3.32 on the Palermo scale. Observations during the August 2022 close approach are expected to ascertain whether the asteroid will impact Earth in 2095. The first astronomical program dedicated to the discovery of near-Earth asteroids was the Palomar Planet-Crossing Asteroid Survey, started in 1973 by astronomers Eugene Shoemaker and Eleanor Helin. The link to the impact hazard, the need for dedicated survey telescopes, and options to head off an eventual impact were first discussed at a 1981 interdisciplinary conference in Snowmass, Colorado. Plans for a more comprehensive survey, named the Spaceguard Survey, were developed by NASA from 1992, under a mandate from the United States Congress. To promote the survey on an international level, the International Astronomical Union (IAU) organised a workshop at Vulcano, Italy in 1995, and set up the Spaceguard Foundation, also in Italy, a year later. In 1998, the United States Congress gave NASA a mandate to detect 90% of near-Earth asteroids over 1 km in diameter (those that threaten global devastation) by 2008. 
Several surveys have undertaken "Spaceguard" activities (an umbrella term), including Lincoln Near-Earth Asteroid Research (LINEAR), Spacewatch, Near-Earth Asteroid Tracking (NEAT), Lowell Observatory Near-Earth-Object Search (LONEOS), Catalina Sky Survey (CSS), Campo Imperatore Near-Earth Object Survey (CINEOS), Japanese Spaceguard Association, Asiago-DLR Asteroid Survey (ADAS) and Near-Earth Object WISE (NEOWISE). As a result, the ratio of the known to the estimated total number of near-Earth asteroids larger than 1 km in diameter rose from about 20% in 1998 to 65% in 2004, 80% in 2006, and 93% in 2011. The original Spaceguard goal has thus been met, only three years late. To date, 893 NEAs larger than 1 km have been discovered, or 97% of an estimated total of about 920. In 2005, the original US Spaceguard mandate was extended by the George E. Brown, Jr. Near-Earth Object Survey Act, which calls for NASA to detect 90% of NEOs with diameters of 140 m or greater by 2020. As of January 2020, it is estimated that less than half of these have been found, but objects of this size hit the Earth only about once in 2,000 years. In January 2016, NASA announced the creation of the Planetary Defense Coordination Office (PDCO) to track NEOs and coordinate an effective threat response and mitigation effort. Survey programs aim to identify threats years in advance, giving humanity time to prepare a space mission to avert the threat. The ATLAS project, by contrast, aims to find impacting asteroids shortly before impact, much too late for deflection maneuvers but still in time to evacuate and otherwise prepare the affected Earth region. Another project, the Zwicky Transient Facility (ZTF), which surveys for objects that change their brightness rapidly, also detects asteroids passing close to Earth. Scientists involved in NEO research have also considered options for actively averting the threat if an object is found to be on a collision course with Earth. 
All viable methods aim to deflect rather than destroy the threatening NEO, because the fragments would still cause widespread destruction. Deflection, which means a change in the object's orbit months to years prior to the predicted impact, also requires orders of magnitude less energy. Near-Earth objects are classified as meteoroids, asteroids, or comets depending on size, composition, and orbit. Those which are asteroids can additionally be members of an asteroid family, and comets create meteoroid streams that can generate meteor showers. To date, 893 NEAs appear on the Sentry impact risk page at the NASA website. A significant number of these NEAs are at most 50 meters in diameter, and none of the listed objects are placed even in the "green zone" (Torino scale 1), meaning that none warrant the attention of the general public. The main problem with estimating the number of NEOs is that the probability of detecting one is influenced by a number of factors, starting with the size of the object but also including the characteristics of its orbit. These observational biases need to be taken into account when trying to calculate the number of bodies in a population from the list of its detected members: whatever is easily detected will be overrepresented in the counts. Bigger asteroids reflect more light, and two of the biggest near-Earth objects, 433 Eros and 1036 Ganymed, were naturally also among the first to be detected. The other major detection bias is that it is much easier to spot objects on the night side of Earth. There is much less noise from the bright sky, and the searcher is looking at the sunlit side of the asteroids. In the daytime sky, a searcher looking towards the Sun sees the unlit side of the object (compare a full moon at night to a new moon in daytime). 
Sunlight fully illuminating an asteroid as seen from Earth has been called a "full asteroid", by analogy with a "full moon", and the greater amount of reflected light creates a bias: asteroids in this configuration are easier to detect. In addition, the opposition surge makes them even brighter when the Earth is along the axis of sunlight. Finally, the day sky near the Sun is much brighter than the night sky. Evidencing this bias, over half (53%) of the known near-Earth objects were discovered in just 3.8% of the sky, in a 22.5° cone facing directly away from the Sun, and the vast majority (87%) were first found in only 15% of the sky, in the 45° cone facing away from the Sun, as depicted in the diagram below. One way around this opposition bias is to use thermal infrared telescopes that observe asteroids' heat emissions instead of the light they reflect. Asteroids with orbits which make them spend more time on the day side of the Earth are therefore less likely to be discovered than those that spend most of their time beyond the orbit of the Earth. For example, one study noted that detection of bodies in low-eccentricity Earth-crossing orbits is favored, making Atens more likely to be detected than Apollos. Such observational biases must be identified and quantified to determine NEO populations, as studies of asteroid populations then take those known observational selection biases into account to make a more accurate assessment. In the year 2000, taking into account all known observational biases, it was estimated that there are approximately 900 near-Earth asteroids of at least kilometer size or, technically and more accurately, with an absolute magnitude brighter than 17.75. These are objects in a near-Earth orbit without the tail or coma of a comet. To date, 22,261 near-Earth asteroids are known, 1,955 of which are both sufficiently large and come sufficiently close to Earth to be considered potentially hazardous. NEAs survive in their orbits for just a few million years. 
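The quoted sky fractions follow directly from solid-angle geometry: a cone of half-angle θ covers a fraction (1 − cos θ)/2 of the celestial sphere. A minimal check of the 22.5° and 45° anti-Sun cones mentioned above:

```python
# Solid-angle fraction of the sky covered by a cone of given half-angle.
import math

def sky_fraction(half_angle_deg: float) -> float:
    return (1 - math.cos(math.radians(half_angle_deg))) / 2

print(f"{sky_fraction(22.5):.1%}")  # 3.8% of the sky (22.5° anti-Sun cone)
print(f"{sky_fraction(45.0):.1%}")  # 14.6%, i.e. about 15% (45° cone)
```

So 53% of discoveries occurring in 3.8% of the sky, and 87% in about 15% of it, quantifies just how strongly surveys favor the opposition region.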
They are eventually eliminated by planetary perturbations, which cause ejection from the Solar System or a collision with the Sun or a planet. With orbital lifetimes short compared to the age of the Solar System, new asteroids must constantly be moved into near-Earth orbits to explain the observed population. The accepted origin of these asteroids is that main-belt asteroids are moved into the inner Solar System through orbital resonances with Jupiter. The interaction with Jupiter through the resonance perturbs the asteroid's orbit and it comes into the inner Solar System. The asteroid belt has gaps, known as Kirkwood gaps, where these resonances occur, because the asteroids in these resonances have been moved onto other orbits. New asteroids migrate into these resonances due to the Yarkovsky effect, which provides a continuing supply of near-Earth asteroids. Compared to the entire mass of the asteroid belt, the mass loss necessary to sustain the NEA population is relatively small, totalling less than 6% over the past 3.5 billion years. The composition of near-Earth asteroids is comparable to that of asteroids from the asteroid belt, reflecting a variety of asteroid spectral types. A small number of NEAs are extinct comets that have lost their volatile surface materials, although having a faint or intermittent comet-like tail does not necessarily result in a classification as a near-Earth comet, making the boundaries somewhat fuzzy. The rest of the near-Earth asteroids are driven out of the asteroid belt by gravitational interactions with Jupiter. Many asteroids have natural satellites (minor-planet moons). To date, 74 NEAs are known to have at least one moon, including three known to have two moons. The asteroid 3122 Florence, one of the largest PHAs, has two moons, which were discovered by radar imaging during the asteroid's 2017 approach to Earth. 
While the size of a small fraction of these asteroids is known to better than 1%, from radar observations, from images of the asteroid surface, or from stellar occultations, the diameter of the vast majority of near-Earth asteroids has only been estimated on the basis of their brightness and a representative asteroid surface reflectivity, or albedo, which is commonly assumed to be 14%. Such indirect size estimates are uncertain by over a factor of 2 for individual asteroids, since asteroid albedos can range at least as low as 0.05 and as high as 0.3. This makes the volume of those asteroids uncertain by a factor of 8, and their mass by at least as much, since their assumed density also has its own uncertainty. Using this crude method, an absolute magnitude of 17.75 roughly corresponds to a diameter of 1 km, and an absolute magnitude of 22.0 corresponds to a diameter of about 140 m. Diameters of intermediate precision, better than from an assumed albedo but not nearly as precise as direct measurements, can be obtained from the combination of reflected light and thermal infrared emission, using a thermal model of the asteroid. In May 2016, the precision of such asteroid diameter estimates arising from the Wide-field Infrared Survey Explorer and NEOWISE missions was questioned by technologist Nathan Myhrvold. His original criticism did not pass peer review and its own methodology was criticised, but a revised version was subsequently published. In 2000, NASA reduced its estimate of the number of existing near-Earth asteroids over one kilometer in diameter from 1,000–2,000 to 500–1,000. Shortly thereafter, the LINEAR survey provided an alternative estimate. In 2011, on the basis of NEOWISE observations, the estimated number of one-kilometer NEAs was narrowed (93% of which had been discovered at the time), and an estimate was also produced for the number of NEAs larger than 140 meters across. 
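The brightness-to-size conversion described above is conventionally written D(km) = 1329/√p · 10^(−H/5), where H is the absolute magnitude and p the albedo; the constant 1329 km is the standard value for this formula, assumed here rather than stated in the text. A short sketch reproduces the quoted figures:

```python
# Standard absolute-magnitude-to-diameter conversion for asteroids,
# D(km) = 1329 / sqrt(albedo) * 10**(-H/5); the 1329 km constant is the
# conventional value, an assumption not spelled out in the text.
import math

def diameter_km(abs_magnitude: float, albedo: float = 0.14) -> float:
    return 1329 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5)

print(round(diameter_km(17.75), 2))      # ~1.0 km at the assumed 14% albedo
print(round(diameter_km(22.0) * 1000))   # ~141 m, i.e. roughly 140 m
# The albedo spread of 0.05-0.3 drives the quoted factor-of-2+ uncertainty:
print(round(diameter_km(22.0, 0.05) / diameter_km(22.0, 0.30), 2))  # ~2.45
```

Since volume scales as the cube of diameter, the ~2.45× diameter spread corresponds to the factor-of-8 volume uncertainty mentioned above.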
The NEOWISE estimate differed from other estimates primarily in assuming a slightly lower average asteroid albedo, which produces larger estimated diameters for the same asteroid brightness. This resulted in 911 then-known asteroids at least 1 km across, as opposed to the 830 then listed by CNEOS, which assumed a slightly higher albedo. In 2017, two studies using an improved statistical method slightly reduced the estimated number of NEAs brighter than absolute magnitude 17.75 (approximately over one kilometer in diameter). The estimated number of asteroids brighter than an absolute magnitude of 22.0 (approximately over 140 m across) rose to double the WISE estimate, of which about a third were known as of 2018. As of January 4, 2019, using diameters mostly estimated crudely from a measured absolute magnitude and an assumed albedo, 897 NEAs listed by CNEOS, including 156 PHAs, measure at least 1 km in diameter, and 8,452 known NEAs are larger than 140 m in diameter. The smallest known near-Earth asteroid has an absolute magnitude of 33.2, corresponding to an estimated diameter on the order of a meter. The largest such object is 1036 Ganymed, with an absolute magnitude of 9.45 and a directly measured equivalent diameter of tens of kilometers. Still smaller asteroids are vastly more numerous but far less completely surveyed: of the next smaller size class, only about 1.3 percent had been discovered by February 2016, and of an even smaller class estimated to number in the millions, only about 0.003 percent had been discovered by February 2016. Near-Earth asteroids are divided into four groups (Atiras, Atens, Apollos, and Amors) based on their semi-major axis (a), perihelion distance (q), and aphelion distance (Q). Atiras and Amors do not cross the Earth's orbit and are not immediate impact threats, but their orbits may change to become Earth-crossing orbits in the future. To date, 36 Atiras, 1,510 Atens, 10,199 Apollos and 8,583 Amors have been discovered and cataloged. 
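The four orbital groups can be sketched as a small classifier. The boundary values used here (0.983 AU, Earth's perihelion; 1.017 AU, Earth's aphelion; 1.3 AU, the NEO cutoff) are the conventional definitions of these groups, assumed here since the text does not list them explicitly:

```python
# Sketch of the conventional NEA orbital-group definitions; the exact
# boundary values (0.983, 1.017, 1.3 AU) are assumed, not from the text.

def nea_group(a: float, q: float, Q: float) -> str:
    """a = semi-major axis, q = perihelion, Q = aphelion (all in AU)."""
    if a < 1.0:
        # Orbit smaller than Earth's: Atira if entirely inside it.
        return "Atira" if Q < 0.983 else "Aten"
    if q < 1.017:
        return "Apollo"           # a > 1 AU, Earth-crossing
    return "Amor" if q < 1.3 else "not an NEA"

print(nea_group(0.74, 0.50, 0.98))  # Atira: entirely inside Earth's orbit
print(nea_group(0.97, 0.79, 1.14))  # Aten: a < 1 AU but crosses Earth's orbit
print(nea_group(1.78, 0.65, 2.91))  # Apollo: a > 1 AU, Earth-crossing
print(nea_group(1.92, 1.13, 2.71))  # Amor: approaches but does not cross
```

This makes the text's point concrete: Atiras and Amors never cross Earth's orbital distance range, while Atens and Apollos do.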
NEAs in a co-orbital configuration have the same orbital period as the Earth. All co-orbital asteroids have special orbits that are relatively stable and, paradoxically, can prevent them from getting close to Earth. In 1961, the IAU defined meteoroids as a class of solid interplanetary objects distinct from asteroids by their considerably smaller size. This definition was useful at the time because, with the exception of the Tunguska event, all historically observed meteors were produced by objects significantly smaller than the smallest asteroids observable by telescopes. As the distinction began to blur with the discovery of ever smaller asteroids and a greater variety of observed NEO impacts, revised definitions with size limits were proposed from the 1990s. In April 2017, the IAU adopted a revised definition that generally limits meteoroids to a size between 30 µm and 1 m in diameter, but permits the use of the term for any object of any size that caused a meteor, thus leaving the distinction between asteroid and meteoroid blurred. Near-Earth comets (NECs) are objects in a near-Earth orbit with a tail or coma. Comet nuclei are typically less dense than asteroids, but they pass Earth at higher relative speeds, so the impact energy of a comet nucleus is slightly larger than that of a similar-sized asteroid. NECs may pose an additional hazard due to fragmentation: the meteoroid streams which produce meteor showers may include large inactive fragments, effectively NEAs. Although no impact of a comet in Earth's history has been conclusively confirmed, the Tunguska event may have been caused by a fragment of Comet Encke. Comets are commonly divided between short-period and long-period comets. Short-period comets, with an orbital period of less than 200 years, originate in the Kuiper belt, beyond the orbit of Neptune, while long-period comets originate in the Oort cloud, in the outer reaches of the Solar System. 
The orbital period distinction is important in evaluating the risk from near-Earth comets because short-period NECs are likely to have been observed during multiple apparitions, so their orbits can be determined with some precision, while long-period NECs can be assumed to have been seen for the first and last time when they appeared during the Age of Science, so their approaches cannot be predicted well in advance. Since the threat from long-period NECs is estimated to be at most 1% of the threat from NEAs, and long-period comets are very faint and thus difficult to detect at large distances from the Sun, Spaceguard efforts have consistently focused on asteroids and short-period comets. CNEOS even restricts its definition of NECs to short-period comets; to date, 107 such objects have been discovered. So far, only 20 comets have been observed to pass very close to Earth, including 10 which are or have been short-period comets. Two of these comets, Halley's Comet and 73P/Schwassmann–Wachmann, have been observed during multiple close approaches. The closest observed approach was 0.0151 AU (5.88 LD), by Lexell's Comet on July 1, 1770. After an orbit change due to a close approach of Jupiter in 1779, this object is no longer a NEC. The closest approach ever observed for a current short-period NEC is 0.0229 AU (8.92 LD), by Comet Tempel–Tuttle in 1366. This comet is the parent body of the Leonid meteor shower, which also produced the Great Meteor Storm of 1833. Orbital calculations show that P/1999 J6 (SOHO), a faint sungrazing comet and confirmed short-period NEC observed only during its close approaches to the Sun, passed Earth undetected at a distance of 0.0121 AU (4.70 LD) on June 12, 1999. Comet 109P/Swift–Tuttle, which is also the source of the Perseid meteor shower that hits Earth every year in August, has a roughly 130-year orbit that passes close to the Earth. 
After the comet's 1992 return, when only the two previous returns of 1862 and 1737 had been identified, orbital calculations showed that the comet would pass very close to Earth during its next return in 2126, with an impact within the range of uncertainty. By 1993, even earlier returns (back to at least 188 AD) had been identified, and the new orbital calculation eliminated the impact risk, predicting the comet to pass Earth in 2126 at a distance of 24 million kilometers. In 3044, the comet is expected to pass Earth at less than 1.6 million kilometers. Defunct space probes and final stages of rockets can end up in near-Earth orbits around the Sun, and be re-discovered by NEO surveys when they return to Earth's vicinity. In September 2002, astronomers found an object designated J002E3. The object was on a temporary satellite orbit around Earth, leaving for a solar orbit in June 2003. Calculations showed that it was also on a solar orbit before 2002, but was close to Earth in 1971. J002E3 was identified as the third stage of the Saturn V rocket that carried Apollo 12 to the Moon. In 2006, two more apparent temporary satellites were discovered which were suspected of being artificial. One of them was eventually confirmed as an asteroid and classified as a temporary satellite. The other, 6Q0B44E, was confirmed as an artificial object, but its identity is unknown. Another temporary satellite was discovered in 2013 and designated as a suspected asteroid. It was later found to be an artificial object of unknown origin, and it is no longer listed as an asteroid by the Minor Planet Center. In some cases, active space probes on solar orbits have been observed by NEO surveys and erroneously catalogued as asteroids before identification. During its 2007 flyby of Earth on its route to a comet, ESA's space probe "Rosetta" was detected unidentified and classified as an asteroid, with an alert issued due to its close approach. 
The designation was similarly removed from asteroid catalogues when the observed object was identified with "Gaia", ESA's space observatory for astrometry. When a near-Earth object impacts Earth, objects up to a few tens of metres across ordinarily explode in the upper atmosphere (usually harmlessly), with most or all of the solids vaporized, while larger objects hit the water surface, forming tsunami waves, or the solid surface, forming impact craters. The frequency of impacts by objects of various sizes is estimated on the basis of orbit simulations of NEO populations, the frequency of impact craters on the Earth and the Moon, and the frequency of close encounters. The study of impact craters indicates that impact frequency has been more or less steady for the past 3.5 billion years, which requires a steady replenishment of the NEO population from the main asteroid belt. One impact model based on widely accepted NEO population models estimates the average time between the impact of two stony asteroids with a diameter of at least at about one year; for asteroids across (which impact with as much energy as the atomic bomb dropped on Hiroshima, approximately 15 kilotonnes of TNT) at five years; for asteroids across (an impact energy of 10 megatons, comparable to the Tunguska event in 1908) at 1,300 years; for asteroids across at half a million years; and for asteroids across at 18 million years. Some other models estimate similar impact frequencies, while others calculate higher frequencies. For Tunguska-sized (10-megaton) impacts, the estimates range from one event every 2,000–3,000 years to one event every 300 years. The second-largest observed impact after the Tunguska meteor was a 1.1-megaton air blast in 1963 near the Prince Edward Islands between South Africa and Antarctica, which was detected only by infrasound sensors. The third-largest, but by far the best-observed, impact was the Chelyabinsk meteor of February 15, 2013.
A previously unknown asteroid exploded above this Russian city with an equivalent blast yield of 400–500 kilotons. The calculated orbit of the pre-impact asteroid is similar to that of the Apollo asteroid 2011 EO40, making the latter the meteor's possible parent body. On October 7, 2008, 19 hours after it was first observed, an asteroid blew up above the Nubian Desert in Sudan. It was the first time that an asteroid was observed and its impact predicted prior to its entry into the atmosphere as a meteor. 10.7 kg of meteorites were recovered after the impact. On January 2, 2014, just 21 hours after it became the first asteroid to be discovered in 2014, a 2–4 m object blew up in Earth's atmosphere above the Atlantic Ocean. Far from any land, the meteor explosion was observed only by three infrasound detectors of the Comprehensive Nuclear-Test-Ban Treaty Organization. This impact was the second to be predicted in advance. Asteroid impact prediction is, however, in its infancy, and successfully predicted asteroid impacts are rare. The vast majority of impacts recorded by infrasound sensors designed to detect detonation of nuclear devices are not predicted in advance. Observed impacts are not restricted to the surface and atmosphere of Earth. Dust-sized NEOs have impacted man-made spacecraft, including NASA's Long Duration Exposure Facility, which collected interplanetary dust in low Earth orbit for six years from 1984. Impacts on the Moon can be observed as flashes of light with a typical duration of a fraction of a second. The first lunar impacts were recorded during the 1999 Leonid storm. Subsequently, several continuous monitoring programs were launched. To date, the largest observed lunar impact occurred on September 11, 2013; it lasted 8 seconds and was likely caused by an object in diameter. Each year, several mostly small NEOs pass Earth closer than the distance of the Moon.
On August 10, 1972, a meteor that became known as the 1972 Great Daylight Fireball was witnessed by many people; it moved north over the Rocky Mountains from the U.S. Southwest to Canada. It was an Earth-grazing meteoroid that passed within of the Earth's surface, and was filmed by a tourist at Grand Teton National Park in Wyoming with an 8-millimeter color movie camera. On October 13, 1990, the Earth-grazing meteoroid EN131090 was observed above Czechoslovakia and Poland, moving at along a trajectory from south to north. Its closest approach to the Earth was above the surface. It was captured by two all-sky cameras of the European Fireball Network, which for the first time enabled geometric calculations of the orbit of such a body. On March 18, 2004, LINEAR announced that an asteroid, 2004 FH, would pass the Earth that day at only , about one-tenth the distance to the Moon and the closest miss ever observed until then. They estimated that similar-sized asteroids come as close about every two years. On March 31, 2004, two weeks after 2004 FH, 2004 FU162 set a new record for the closest recorded approach above the atmosphere, passing Earth's surface only away (about one Earth radius, or one-sixtieth of the distance to the Moon). Because it was very small (6 meters/20 feet), FU162 was detected only hours before its closest approach. If it had collided with Earth, it probably would have disintegrated harmlessly in the atmosphere. On February 4, 2011, an asteroid estimated at in diameter passed within of the Earth, setting a new record for the closest approach without impact, a record which still stands. On November 8, 2011, an asteroid, relatively large at about in diameter, passed within (0.85 lunar distances) of Earth. On February 15, 2013, the asteroid 367943 Duende passed approximately above the surface of Earth, closer than satellites in geosynchronous orbit. The asteroid was not visible to the unaided eye.
This was the first close passage of an object discovered during a previous passage, and was thus the first to be predicted well in advance. Some NEOs are of special interest because they can be physically explored with lower mission velocity than is necessary even for the Moon, owing to their combination of low velocity with respect to Earth and weak gravity. They may present interesting scientific opportunities both for direct geochemical and astronomical investigation, and as potentially economical sources of extraterrestrial materials for human exploitation. This makes them an attractive target for exploration. The IAU held a minor planets workshop in Tucson, Arizona, in March 1971. At that point, launching a spacecraft to asteroids was considered premature; the workshop only inspired the first astronomical survey specifically aimed at NEAs. Missions to asteroids were considered again during a workshop at the University of Chicago held by NASA's Office of Space Science in January 1978. Of all the near-Earth asteroids (NEAs) that had been discovered by mid-1977, it was estimated that spacecraft could rendezvous with and return from only about 1 in 10 using less propulsive energy than is necessary to reach Mars. It was recognised that, due to the low surface gravity of all NEAs, moving around on the surface of an NEA would cost very little energy, so space probes could gather multiple samples. Overall, it was estimated that about one percent of all NEAs might provide opportunities for human-crewed missions, or no more than about ten NEAs known at the time. A five-fold increase in the NEA discovery rate was deemed necessary to make a crewed mission within ten years worthwhile. The first near-Earth asteroid to be visited by a spacecraft was 433 Eros, which NASA's "Near Earth Asteroid Rendezvous" ("NEAR") probe orbited from February 2000 before landing on the asteroid's surface in February 2001.
A second near-Earth asteroid, the long peanut-shaped 25143 Itokawa, was visited in September 2005 by JAXA's "Hayabusa" mission, which succeeded in bringing material samples back to Earth. A third near-Earth asteroid, the long elongated 4179 Toutatis, was explored by CNSA's "Chang'e 2" spacecraft during a flyby in December 2012. The Apollo asteroid 162173 Ryugu is the target of JAXA's "Hayabusa 2" mission. The space probe was launched in December 2014, arrived at the asteroid in June 2018, and is expected to return a sample to Earth in December 2020. The Apollo asteroid 101955 Bennu, which to date has the second-highest cumulative Palermo scale rating (−1.71, for several close encounters between 2175 and 2199), is the target of NASA's "OSIRIS-REx" probe. The New Frontiers program mission was launched in September 2016. On its two-year journey to Bennu, the probe searched for Earth Trojan asteroids, rendezvoused with Bennu in August 2018, and entered orbit around the asteroid in December 2018. "OSIRIS-REx" will return samples from the asteroid in September 2023. In April 2012, the company Planetary Resources announced its plans to mine asteroids commercially. In a first phase, the company reviewed data and selected potential targets among NEAs. In a second phase, space probes would be sent to the selected NEAs; mining spacecraft would follow in a third phase. Planetary Resources launched two testbed satellites in April 2015 and January 2018, and the first prospecting satellite for the second phase was planned for a 2020 launch. The Near-Earth Object Surveillance Mission (NEOSM) is planned for launch no earlier than 2025 to discover and characterize the orbits of most of the potentially hazardous asteroids larger than over the course of its mission. The first near-Earth comet visited by a space probe was 21P/Giacobini–Zinner in 1985, when the NASA/ESA probe "International Cometary Explorer" ("ICE") passed through its coma.
In March 1986, "ICE", along with the Soviet probes "Vega 1" and "Vega 2", the ISAS probes "Sakigake" and "Suisei", and the ESA probe "Giotto", flew by the nucleus of Halley's Comet. In 1992, "Giotto" also visited another NEC, 26P/Grigg–Skjellerup. In November 2010, the NASA probe "Deep Impact" flew by the near-Earth comet 103P/Hartley. Earlier, in July 2005, this probe had flown by the non-near-Earth comet Tempel 1, hitting it with a large copper mass. In August 2014, the ESA probe "Rosetta" began orbiting near-Earth comet 67P/Churyumov–Gerasimenko, and its lander "Philae" landed on the comet's surface in November 2014. At the end of its mission in 2016, "Rosetta" was deliberately crashed into the comet's surface.
https://en.wikipedia.org/wiki?curid=21626
Nation state A nation state is a state in which a great majority of the population shares the same culture and is conscious of it. The nation state is an ideal in which cultural boundaries match up with political boundaries. According to one definition, "a nation state is a sovereign state of which most of its subjects are united also by factors which defined a nation such as language or common descent." It is a more precise concept than "country", since a country does not need to have a predominant ethnic group. A nation, in the sense of a common ethnicity, may include a diaspora or refugees who live outside the nation state; some nations in this sense do not have a state where that ethnicity predominates. In a more general sense, a nation state is simply a large, politically sovereign country or administrative territory. A nation state may be contrasted with: This article mainly discusses the more specific definition of a nation state, as a typically sovereign country dominated by a particular ethnicity. The relationship between a nation (in the ethnic sense) and a state can be complex. The presence of a state can encourage ethnogenesis, and a group with a pre-existing ethnic identity can influence the drawing of territorial boundaries or argue for political legitimacy. This definition of a "nation state" is not universally accepted. "All attempts to develop terminological consensus around "nation" resulted in failure", concludes the academic Valery Tishkov. Walker Connor discusses the impressions surrounding the characters of "nation", "(sovereign) state", "nation state", and "nationalism". Connor, who gave the term "ethnonationalism" wide currency, also discusses the tendency to confuse nation and state and the treatment of all states as if they were nation states. In "Globalization and Belonging", Sheila L. Croucher discusses "The Definitional Dilemma". The origins and early history of nation states are disputed.
A major theoretical question is: "Which came first, the nation or the nation state?" Scholars such as Steven Weber, David Woodward, Michel Foucault and Jeremy Black have advanced the hypothesis that the nation state did not arise out of political ingenuity or an unknown, undetermined source, nor was it a deliberate political invention, but was an inadvertent byproduct of 15th-century intellectual discoveries in political economy, capitalism, mercantilism, political geography, and geography, combined with cartography and advances in map-making technologies. It was with these intellectual discoveries and technological advances that the nation state arose. For others, the nation existed first, then nationalist movements arose for sovereignty, and the nation state was created to meet that demand. Some "modernization theories" of nationalism see it as a product of government policies to unify and modernize an already existing state. Most theories see the nation state as a 19th-century European phenomenon, facilitated by developments such as state-mandated education, mass literacy and mass media. However, historians also note the early emergence of a relatively unified state and identity in Portugal and the Dutch Republic. In France, Eric Hobsbawm argues, the French state preceded the formation of the French people. Hobsbawm considers that the state made the French nation, rather than French nationalism, which emerged at the end of the 19th century, around the time of the Dreyfus Affair. At the time of the 1789 French Revolution, only half of the French people spoke some French, and 12–13% spoke the version of it that was to be found in literature and in educational facilities, according to Hobsbawm. During the Italian unification, the number of people speaking the Italian language was even lower. The French state promoted the replacement of various regional dialects and languages by a centralised French language.
The introduction of conscription and the Third Republic's 1880s laws on public instruction facilitated the creation of a national identity, under this theory. Some nation states, such as Germany and Italy, came into existence at least partly as a result of political campaigns by nationalists during the 19th century. In both cases, the territory was previously divided among other states, some of them very small. The sense of common identity was at first a cultural movement, such as the "Völkisch" movement in German-speaking states, which rapidly acquired a political significance. In these cases, the nationalist sentiment and the nationalist movement clearly preceded the unification of the German and Italian nation states. Historians Hans Kohn, Liah Greenfeld, Philip White and others have classified nations such as Germany or Italy, where cultural unification preceded state unification, as "ethnic nations" or "ethnic nationalities". By contrast, "state-driven" national unifications, such as in France, England or China, are more likely to flourish in multiethnic societies, producing a traditional national heritage of "civic nations", or "territory-based nationalities". Some authors deconstruct the distinction between ethnic nationalism and civic nationalism because of the ambiguity of the concepts. They argue that the paradigmatic case of Ernest Renan is an idealisation and should be interpreted within the German tradition, not in opposition to it. For example, they argue that the arguments used by Renan at the conference "What is a nation?" are not consistent with his thinking. This alleged civic conception of the nation would be determined only by the case of the loss of Alsace and Lorraine in the Franco-Prussian War. The idea of a nation state was and is associated with the rise of the modern system of states, often called the "Westphalian system" in reference to the Treaty of Westphalia (1648).
The balance of power that characterized that system depended for its effectiveness upon clearly defined, centrally controlled, independent entities, whether empires or nation states, which recognize each other's sovereignty and territory. The Westphalian system did not create the nation state, but the nation state meets the criteria for its component states (assuming that there is no disputed territory). The nation state received a philosophical underpinning in the era of Romanticism, at first as the "natural" expression of the individual peoples (romantic nationalism: see Johann Gottlieb Fichte's conception of the "Volk", later opposed by Ernest Renan). The increasing emphasis during the 19th century on the ethnic and racial origins of the nation led to a redefinition of the nation state in these terms. Racism, which in Boulainvilliers's theories was inherently antipatriotic and antinationalist, joined with colonialist imperialism and "continental imperialism", most notably in pan-Germanic and pan-Slavic movements. The relation between racism and ethnic nationalism reached its height in 20th-century fascism and Nazism. The specific combination of "nation" ("people") and "state" expressed in such terms as the "Völkische Staat" and implemented in laws such as the 1935 Nuremberg laws made fascist states such as early Nazi Germany qualitatively different from non-fascist nation states. Minorities were not considered part of the people ("Volk") and were consequently denied an authentic or legitimate role in such a state. In Germany, neither Jews nor the Roma were considered part of the people, and both were specifically targeted for persecution. German nationality law defined "German" on the basis of German ancestry, excluding "all" non-Germans from the people. In recent years, a nation state's claim to absolute sovereignty within its borders has been criticized.
A global political system based on international agreements and supra-national blocs characterized the post-war era. Non-state actors, such as international corporations and non-governmental organizations, are widely seen as eroding the economic and political power of nation states. According to Andreas Wimmer and Yuval Feinstein, nation states tended to emerge when power shifts allowed nationalists to overthrow existing regimes or absorb existing administrative units. Xue Li and Alexander Hicks link the frequency of nation-state creation to processes of diffusion that emanate from international organizations. In Europe, during the 18th century, the classic non-national states were the "multiethnic" empires: the Austrian Empire, the Kingdom of France, the Kingdom of Hungary, the Russian Empire, the Spanish Empire, the Ottoman Empire, the British Empire, and smaller nations at what would now be called sub-state level. The multi-ethnic empire was an absolute monarchy ruled by a king, emperor or sultan. The population belonged to many ethnic groups and spoke many languages. The empire was dominated by one ethnic group, whose language was usually the language of public administration. The ruling dynasty was usually, but not always, from that group. This type of state is not specifically European: such empires existed in Asia, Africa and the Americas. In the Muslim world, immediately after Muhammad's death in 632, caliphates were established. Caliphates were Islamic states under the leadership of a political-religious successor to the Islamic prophet Muhammad. These polities developed into multi-ethnic trans-national empires. The Ottoman sultan Selim I (1512–1520) reclaimed the title of caliph, which had been in dispute and asserted by a diversity of rulers and "shadow caliphs" in the centuries of the Abbasid–Mamluk Caliphate since the Mongols' sacking of Baghdad and the killing of the last Abbasid caliph in Baghdad in 1258.
The Ottoman Caliphate as an office of the Ottoman Empire was abolished under Mustafa Kemal Atatürk in 1924 as part of Atatürk's Reforms. Some of the smaller European states were not so ethnically diverse, but were also dynastic states, ruled by a royal house. Their territory could expand by royal intermarriage or merge with another state when the dynasty merged. In some parts of Europe, notably Germany, very small territorial units existed. They were recognized by their neighbors as independent, and had their own government and laws. Some were ruled by princes or other hereditary rulers, some were governed by bishops or abbots. Because they were so small, however, they had no separate language or culture: the inhabitants shared the language of the surrounding region. In some cases these states were simply overthrown by nationalist uprisings in the 19th century. Liberal ideas of free trade played a role in German unification, which was preceded by a customs union, the Zollverein. However, the Austro-Prussian War, and the German alliances in the Franco-Prussian War, were decisive in the unification. The Austro-Hungarian Empire and the Ottoman Empire broke up after the First World War, and the Russian Empire became the Soviet Union after the Russian Civil War. A few of the smaller states survived: the independent principalities of Liechtenstein, Andorra, Monaco, and the republic of San Marino. (Vatican City is a special case. All of the larger Papal States save the Vatican itself were occupied and absorbed by Italy by 1870. The resulting Roman Question was resolved with the rise of the modern state under the 1929 Lateran treaties between Italy and the Holy See.) "Legitimate states that govern effectively and dynamic industrial economies are widely regarded today as the defining characteristics of a modern nation-state." Nation states have their own characteristics, differing from those of the pre-national states. 
For a start, they have a different attitude to their territory when compared with dynastic monarchies: it is semisacred and nontransferable. No nation would swap territory with other states simply because, for example, the king's daughter married. They have a different type of border, in principle defined only by the area of settlement of the national group, although many nation states also sought natural borders (rivers, mountain ranges). Even so, they change constantly in population size and power, because their borders impose only limited restrictions. The most noticeable characteristic is the degree to which nation states use the state as an instrument of national unity in economic, social and cultural life. The nation state promoted economic unity by abolishing internal customs and tolls. In Germany, that process, the creation of the Zollverein, preceded formal national unity. Nation states typically have a policy to create and maintain a national transportation infrastructure, facilitating trade and travel. In 19th-century Europe, the expansion of the rail transport networks was at first largely a matter for private railway companies, but it gradually came under the control of the national governments. The French rail network, with its main lines radiating from Paris to all corners of France, is often seen as a reflection of the centralised French nation state, which directed its construction. Nation states continue to build, for instance, specifically national motorway networks. Specifically transnational infrastructure programmes, such as the Trans-European Networks, are a recent innovation. The nation states typically had a more centralised and uniform public administration than their imperial predecessors: they were smaller, and their populations less diverse. (The internal diversity of the Ottoman Empire, for instance, was very great.)
After the 19th-century triumph of the nation state in Europe, regional identity was subordinate to national identity, in regions such as Alsace-Lorraine, Catalonia, Brittany and Corsica. In many cases, the regional administration was also subordinated to central (national) government. This process was partially reversed from the 1970s onward, with the introduction of various forms of regional autonomy, in formerly centralised states such as France. The most obvious impact of the nation state, as compared to its non-national predecessors, is the creation of a uniform national culture, through state policy. The model of the nation state implies that its population constitutes a nation, united by a common descent, a common language and many forms of shared culture. When the implied unity was absent, the nation state often tried to create it. It promoted a uniform national language, through language policy. The creation of national systems of compulsory primary education and a relatively uniform curriculum in secondary schools, was the most effective instrument in the spread of the national languages. The schools also taught the national history, often in a propagandistic and mythologised version, and (especially during conflicts) some nation states still teach this kind of history. Language and cultural policy was sometimes negative, aimed at the suppression of non-national elements. Language prohibitions were sometimes used to accelerate the adoption of national languages and the decline of minority languages (see examples: Anglicisation, Bulgarization, Croatization, Czechization, Francisation, Italianization, Germanisation, Magyarisation, Polonisation, Russification, Serbization, Slovakisation). In some cases, these policies triggered bitter conflicts and further ethnic separatism. But where it worked, the cultural uniformity and homogeneity of the population increased. 
Conversely, the cultural divergence at the border became sharper: in theory, a uniform French identity extends from the Atlantic coast to the Rhine, and on the other bank of the Rhine, a uniform German identity begins. To enforce that model, both sides have divergent language policies and educational systems. In some cases, the geographic boundaries of an ethnic population and a political state largely coincide. In these cases, there is little immigration or emigration, few members of ethnic minorities, and few members of the "home" ethnicity living in other countries. Examples of nation states where ethnic groups make up more than 85% of the population include the following: The notion of a unifying "national identity" also extends to countries that host multiple ethnic or language groups, such as India. For example, Switzerland is constitutionally a confederation of cantons and has four official languages, but it also has a "Swiss" national identity, a national history and a classic national hero, Wilhelm Tell. Innumerable conflicts have arisen where political boundaries did not correspond with ethnic or cultural boundaries. After World War II, in the Josip Broz Tito era, nationalism was appealed to for uniting South Slav peoples. Later in the 20th century, after the break-up of the Soviet Union, leaders appealed to ancient ethnic feuds or tensions that ignited conflict between the Serbs, Croats and Slovenes, as well as Bosniaks, Montenegrins and Macedonians, eventually breaking up the long collaboration of peoples. Ethnic cleansing was carried out in the Balkans, destroying the formerly socialist republic and producing the civil wars in Bosnia and Herzegovina in 1992–95, which caused mass population displacements and a segregation that radically altered the once highly diverse and intermixed ethnic makeup of the region.
These conflicts were largely about creating a new political framework of states, each of which would be ethnically and politically homogeneous. Serbs, Croats and Bosniaks insisted they were ethnically distinct, although many communities had a long history of intermarriage. Presently, Slovenia (89% Slovene), Croatia (90.4% Croat) and Serbia (83% Serb) could be classified as nation states per se, whereas North Macedonia (66% Macedonian), Montenegro (42% Montenegrin) and Bosnia and Herzegovina (50.1% Bosniak) are multinational states. Belgium is a classic example of a state that is not a nation state. The state was formed by secession from the United Kingdom of the Netherlands in 1830, and its neutrality and integrity were protected by the 1839 Treaty of London; thus it served as a buffer state after the Napoleonic Wars between the European powers France, Prussia (after 1871 the German Empire) and the United Kingdom until World War I, when its neutrality was breached by the Germans. Currently, Belgium is divided between the Flemings in the north and the French-speaking and German-speaking populations in the south. The Flemish population in the north speaks Dutch; the Walloon population in the south speaks French or German. The Brussels population speaks French or Dutch. The Flemish identity is also cultural, and there is a strong separatist movement espoused by the political parties the right-wing Vlaams Belang and the Nieuw-Vlaamse Alliantie. The Francophone Walloon identity of Belgium is linguistically distinct and regionalist. There is also a unitary Belgian nationalism, several versions of a Greater Netherlands ideal, and a German-speaking community of Belgium annexed from Germany in 1920 and re-annexed by Germany in 1940–1944. However, these ideologies are all very marginal and politically insignificant during elections.
China covers a large geographic area and uses the concept of "Zhonghua minzu" or Chinese nationality, in the sense of ethnic groups, but it also officially recognizes the majority Han ethnic group which accounts for over 90% of the population, and no fewer than 55 ethnic national minorities. According to Philip G. Roeder, Moldova is an example of a Soviet era "segment-state" (Moldavian SSR), where the "nation-state project of the segment-state trumped the nation-state project of prior statehood. In Moldova, despite strong agitation from university faculty and students for reunification with Romania, the nation-state project forged within the Moldavian SSR trumped the project for a return to the interwar nation-state project of Greater Romania." See Controversy over linguistic and ethnic identity in Moldova for further details. The United Kingdom is an unusual example of a nation state due to its claimed "countries within a country" status. The United Kingdom, which is formed by the union of England, Scotland, Wales and Northern Ireland, is a unitary state formed initially by the merger of two independent kingdoms, the Kingdom of England (which already included Wales) and the Kingdom of Scotland, but the Treaty of Union (1707) that set out the agreed terms has ensured the continuation of distinct features of each state, including separate legal systems and separate national churches. In 2003, the British Government described the United Kingdom as "countries within a country". While the Office for National Statistics and others describe the United Kingdom as a "nation state", others, including a then Prime Minister, describe it as a "multinational state", and the term Home Nations is used to describe the four national teams that represent the four nations of the United Kingdom (England, Northern Ireland, Scotland, Wales). Some refer to it as a "Union State". 
There has been academic debate over whether the United Kingdom can be legally dissolved, as it is normally recognized internationally as a single nation state. Writing from an English legal perspective, the jurist A. V. Dicey held that the question turns on whether the legislation giving rise to the union (the Union with Scotland Act), one of the two pieces of legislation which created the state, can be repealed. Dicey claimed that, because the law of England does not acknowledge the word "unconstitutional", the Act can as a matter of English law be repealed. He also stated that any tampering with the Acts of Union 1707 would be political madness. A similar unusual example is the Kingdom of the Netherlands. As of 10 October 2010, the Kingdom of the Netherlands consists of four countries, each expressly designated as a "land" in Dutch law by the Charter for the Kingdom of the Netherlands. Unlike the German "Länder" and the Austrian "Bundesländer", "landen" is consistently translated as "countries" by the Dutch government. Israel was founded as a Jewish state in 1948. Its "Basic Laws" describe it as both a Jewish and a democratic state. A Basic Law passed in 2018 explicitly specifies the nature of the State of Israel as the nation state of the Jewish people. According to the Israel Central Bureau of Statistics, 75.7% of Israel's population are Jews. Arabs, who make up 20.4% of the population, are the largest ethnic minority in Israel. Israel also has very small communities of Armenians, Circassians, Assyrians and Samaritans, as well as some non-Jewish spouses of Israeli Jews. However, these communities are very small, usually numbering only in the hundreds or thousands. Pakistan, despite being an ethnically diverse country and officially a federation, is regarded as a nation state due to the ideological basis on which it was given independence from British India as a separate nation, rather than as part of a unified India.
Different ethnic groups in Pakistan are strongly bonded by their common Muslim identity, common cultural and social values, common historical heritage, a national "lingua franca" (Urdu) and joint political, strategic and economic interests. The most obvious deviation from the ideal of "one nation, one state" is the presence of minorities, especially ethnic minorities, which are clearly not members of the majority nation. An ethnic nationalist definition of a nation is necessarily exclusive: ethnic nations typically do not have open membership. In most cases, there is a clear idea that surrounding nations are different, and that includes members of those nations who live on the "wrong side" of the border. Historical examples of groups who have been specifically singled out as "outsiders" are the Roma and Jews in Europe. Negative responses to minorities within the nation state have ranged from state-enforced cultural assimilation to expulsion, persecution, violence, and extermination. While assimilation policies are usually enforced by the state, violence against minorities is not always state-initiated: it can occur in the form of mob violence such as lynching or pogroms. Nation states are responsible for some of the worst historical examples of violence against minorities not considered part of the nation. However, many nation states accept specific minorities as being part of the nation, and the term "national minority" is often used in this sense. The Sorbs in Germany are an example: for centuries they have lived in German-speaking states, surrounded by a much larger ethnic German population, and they have no other historical territory. They are now generally considered to be part of the German nation and are accepted as such by the Federal Republic of Germany, which constitutionally guarantees their cultural rights. 
Of the thousands of ethnic and cultural minorities in nation states across the world, only a few have this level of acceptance and protection. Multiculturalism is an official policy in many states, establishing the ideal of peaceful coexistence among multiple ethnic, cultural, and linguistic groups. Many nations have laws protecting minority rights. When national boundaries are drawn that do not match ethnic boundaries, as in the Balkans and Central Asia, ethnic tensions, massacres and even genocide have sometimes occurred (see the Bosnian genocide and the 2010 South Kyrgyzstan ethnic clashes). Ideally, the border of a nation state extends far enough to include all the members of the nation and all of the national homeland. In practice, however, some members of the nation always live on the 'wrong side' of the border. Part of the national homeland may lie there too, governed by the 'wrong' nation. The response to the non-inclusion of territory and population may take the form of irredentism: demands to annex "unredeemed" territory and incorporate it into the nation state. Irredentist claims are usually based on the fact that an identifiable part of the national group lives across the border. However, they can include claims to territory where no members of that nation live at present, because they lived there in the past, because the national language is spoken in that region, because the national culture has influenced it, because of geographical unity with the existing territory, or for a wide variety of other reasons. Past grievances are usually involved and can cause revanchism. It is sometimes difficult to distinguish irredentism from pan-nationalism, since both claim that all members of an ethnic and cultural nation belong in one specific state. Pan-nationalism is, however, less likely to specify the nation ethnically. 
For instance, variants of Pan-Germanism held different ideas about what constituted Greater Germany, including the confusing term "Grossdeutschland", which in fact implied the inclusion of huge Slavic minorities from the Austro-Hungarian Empire. Typically, irredentist demands are at first made by members of non-state nationalist movements. When they are adopted by a state, they typically result in tensions, and actual attempts at annexation are always considered a "casus belli", a cause for war. In many cases, such claims result in long-term hostile relations between neighbouring states. Irredentist movements typically circulate maps of the claimed national territory, the "greater" nation state. That territory, which is often much larger than the existing state, plays a central role in their propaganda. Irredentism should not be confused with claims to overseas colonies, which are not generally considered part of the national homeland. Some French overseas colonies would be an exception: France unsuccessfully treated Algeria as a "département" of France. It has been speculated by both proponents of globalization and various science fiction writers that the concept of a nation state may disappear with the ever-increasing interconnectedness of the world. Such ideas are sometimes expressed around concepts of a world government. Another possibility is a societal collapse and a move into communal anarchy or "zero world government", in which nation states no longer exist. Globalization especially has helped to bring about discussion of the disappearance of nation states, as global trade and the rise of the concepts of a 'global citizen' and a common identity have helped to reduce differences and 'distances' between individual nation states, especially with regard to the internet. The theory of the clash of civilizations lies in direct contrast to cosmopolitan theories about an ever more-connected world that no longer requires nation states. 
According to political scientist Samuel P. Huntington, people's cultural and religious identities will be the primary source of conflict in the post–Cold War world. The theory was originally formulated in a 1992 lecture at the American Enterprise Institute, which was then developed in a 1993 "Foreign Affairs" article titled "The Clash of Civilizations?", in response to Francis Fukuyama's 1992 book, "The End of History and the Last Man". Huntington later expanded his thesis in a 1996 book "The Clash of Civilizations and the Remaking of World Order". Huntington began his thinking by surveying the diverse theories about the nature of global politics in the post–Cold War period. Some theorists and writers argued that human rights, liberal democracy and capitalist free market economics had become the only remaining ideological alternative for nations in the post–Cold War world. Specifically, Francis Fukuyama, in "The End of History and the Last Man", argued that the world had reached a Hegelian "end of history". Huntington believed that while the age of ideology had ended, the world had reverted only to a normal state of affairs characterized by cultural conflict. In his thesis, he argued that the primary axis of conflict in the future will be along cultural and religious lines. As an extension, he posits that the concept of different civilizations, as the highest rank of cultural identity, will become increasingly useful in analyzing the potential for conflict. In the 1993 "Foreign Affairs" article, Huntington writes: Sandra Joireman suggests that Huntington may be characterised as a neo-primordialist, as, while he sees people as having strong ties to their ethnicity, he does not believe that these ties have always existed. Historians often look to the past to find the origins of a particular nation state. 
Indeed, they often put so much emphasis on the importance of the nation state in modern times that they distort the history of earlier periods in order to emphasize the question of origins. Lansing and English argue that much of the medieval history of Europe was structured to follow the historical winners, especially the nation states that emerged around Paris and London. Important developments that did not directly lead to a nation state get neglected, they argue:
https://en.wikipedia.org/wiki?curid=21627
Nicolas-Louis de Lacaille Abbé Nicolas-Louis de Lacaille, formerly sometimes spelled de la Caille (15 March 1713 – 21 March 1762), was a French astronomer and geodesist who named 14 of the 88 constellations. From 1750 to 1754 he studied the sky at the Cape of Good Hope in present-day South Africa. Lacaille observed over 10,000 stars using just a half-inch refractor. Born at Rumigny, he attended school in Mantes-sur-Seine (now Mantes-la-Jolie). Afterwards, he studied rhetoric and philosophy, and then theology at the Collège de Navarre. He was left destitute in 1731 by the death of his father, who had held a post in the household of the Duchess of Vendôme. However, he was supported in his studies by the Duc de Bourbon, his father's former patron. After he graduated, he did not accept ordination as a priest but took deacon's orders, becoming an Abbé. He concentrated thereafter on science and, through the patronage of Jacques Cassini, obtained employment, first in surveying the coast from Nantes to Bayonne, then, in 1739, in remeasuring the French meridian arc, for which he is honored with a pyramid at Juvisy-sur-Orge. The success of this difficult operation, which occupied two years and achieved the correction of the anomalous result published by Jacques Cassini in 1718, was mainly due to Lacaille's industry and skill. He was rewarded by admission to the Royal Academy of Sciences and appointment as professor of mathematics in the Mazarin College of the University of Paris, where he constructed a small observatory fitted for his own use. He was the author of a number of influential textbooks and a firm advocate of Newtonian gravitational theory. His students included Antoine Lavoisier and Jean Sylvain Bailly, both of whom were later guillotined during the Revolution. His desire to determine the distances of the planets trigonometrically, using the longest possible baseline, led him to propose, in 1750, an expedition to the Cape of Good Hope. 
This was officially sanctioned by Roland-Michel Barrin de La Galissonière. There, he constructed an observatory on the shore of Table Bay with the support of the Dutch Governor Ryk Tulbagh. The primary result of his two-year stay was a catalogue of nearly 10,000 southern stars, the production of which required observing every night for over a year. In the course of his survey he took note of 42 nebulous objects. He also achieved his aim of determining the lunar and solar parallaxes (Mars serving as an intermediary). This work required near-simultaneous observations from Europe, which were carried out by Jérôme Lalande. His southern catalogue, called "Coelum Australe Stelliferum", was published posthumously in 1763. He found it necessary to introduce 14 new constellations, which have since become standard. One of these was Mons Mensae, the only constellation named after a terrestrial feature (Table Mountain). While at the Cape, Lacaille determined the radius of the earth in the southern hemisphere. He set out a baseline in the Swartland plain north of present-day Darling. Using triangulation he then measured a 137 km arc of meridian between Cape Town and Aurora, determining the latitudes of the end points by means of astronomical-geodetic observations. There is a memorial to his work at a location near Aurora. His result suggested that the earth was more flattened towards the south pole than towards the north. George Everest, of the Indian Survey, while recuperating from an illness at the Cape nearly seventy years later, suggested that Lacaille's latitude observations had been affected by the gravitational attraction of Table Mountain at the southern end and by the Piketberg Mountain at the northern. In 1838, Thomas Maclear, who was Astronomer Royal at the Cape, repeated the measurements over a longer baseline and ultimately confirmed Everest's conjecture. Maclear's Beacon was erected on Table Mountain in Cape Town to help with the verification. 
During his voyage to the southern hemisphere as a passenger on the vessel "Le Glorieux", captained by the noted hydrographer Jean-Baptiste d'Après de Mannevillette, Lacaille became conscious of the difficulties in determining positions at sea. On his return to Paris he prepared the first set of tables of the Moon's position that was accurate enough to use for determining time and longitude by the method of 'Lunars' (Lunar distances) using the orbital theory of Clairaut. Lacaille was in fact an indefatigable calculator. Apart from constructing astronomical ephemerides and mathematical tables, he calculated a table of eclipses for 1800 years. Lalande said of him that, during a comparatively short life, he had made more observations and calculations than all the astronomers of his time put together. The quality of his work rivalled its quantity, while the disinterestedness and rectitude of his moral character earned him universal respect. On his return to Paris in 1754, following a diversion to Mauritius, Lacaille was distressed to find himself an object of public attention. He resumed his work at the Mazarin College. In 1757 he published his "Astronomiae Fundamenta Novissimus", containing a list of about 400 bright stars with positions corrected for aberration and nutation. He carried out calculations on comet orbits and was responsible for giving Halley's Comet its name. His last public lecture, given on 14 September 1761 at the Royal Academy of Sciences, summarised the improvements to astronomy that had occurred during his lifetime, to which he had made no small contribution. His death, probably caused in part by over-work, occurred in 1762. He was buried in the vaults of the Mazarin College, now the Institut de France in Paris. In 1754, he was elected a foreign member of the Royal Swedish Academy of Sciences. 
He was also an honorary member of the academies of Saint Petersburg and Berlin, the Royal Society of London, the Royal Society of Göttingen, and the Institute of Bologna. Lacaille had the honor of naming 14 constellations. The crater "La Caille" on the Moon is named after him. Asteroid 9135 Lacaille (also designated 7609 P-L and 1994 EK6), discovered on 17 October 1960 by Cornelis Johannes van Houten, Ingrid van Houten-Groeneveld and Tom Gehrels at Palomar Observatory, was also named after him. In honor of his contribution to the study of the southern hemisphere sky, a 60-cm telescope on Réunion Island is to be named the Lacaille Telescope.
https://en.wikipedia.org/wiki?curid=21628
Nawaf al-Hazmi Nawaf Muhammed Salim al-Hazmi (also known as "Rabia al-Makki"; August 9, 1976 – September 11, 2001) was a Saudi Arabian terrorist. He was one of five hijackers of American Airlines Flight 77, which they crashed into the Pentagon as part of the September 11 attacks in the United States. Hazmi and a long-time friend, Khalid al-Mihdhar, left their homes in Saudi Arabia in 1995 to fight for Muslims in the Bosnian War. Hazmi later traveled to Afghanistan to fight with the Taliban against the Afghan Northern Alliance. He returned to Saudi Arabia in early 1999. Already long-time affiliates of al-Qaeda with extensive fighting experience, Hazmi and Mihdhar were chosen by Osama bin Laden for an ambitious terrorist plot to pilot commercial airliners into designated targets in the United States. Hazmi and Mihdhar both obtained US tourist visas in April 1999. Hazmi trained in an al-Qaeda training camp in the fall of 1999 and traveled to Malaysia for the 2000 Al-Qaeda Summit. Hazmi arrived in Los Angeles, California, from Bangkok, Thailand, on January 15, 2000, alongside Mihdhar. The two settled in San Diego, staying at the Parkwood Apartments until May 2000. While in San Diego, they attended a local mosque led by Anwar al-Awlaki. The two took flying lessons in San Diego, but due to their poor English skills they did not perform well, and their flight instructor regarded them as suspicious. Mihdhar left Hazmi in California for Yemen in June 2000. Hazmi stayed in California until he met up with Hani Hanjour in December 2000, and they both traveled to Phoenix, Arizona. They later moved to Falls Church, Virginia, in April 2001, where the rest of the hijackers began to join them. Hazmi met frequently with Mohamed Atta, the ringleader of the attacks, during the summer of 2001. The CIA reportedly received Hazmi's name on a list of 19 persons suspected of planning an attack in the near future. 
Hazmi was one of the four names on the list who were known for certain. A search for Hazmi and other suspected terrorists commenced, but they were not located until after the attacks. On September 10, 2001, Hazmi, Mihdhar, and Hanjour checked into a hotel in Herndon, Virginia. The next morning, Hazmi and four other terrorists, including Hazmi's younger brother, Salem al-Hazmi, boarded American Airlines Flight 77 at Dulles Airport and hijacked the plane so that Hanjour could pilot and crash it into the Pentagon as part of the September 11 attacks. The crash killed all 64 passengers aboard the aircraft and 125 in the Pentagon. Following the attacks, Hazmi's participation was initially dismissed as that of a "muscle hijacker", but he was later revealed to have played a larger role in the operational planning than previously believed. Nawaf was born in Mecca in Saudi Arabia to Muhammad Salim al-Hazmi, a grocer. He traveled to Afghanistan as a teenager in 1993. CNN's preliminary report following the attacks claimed that an unnamed acquaintance relayed, "He told me once that his father had tried to kill him when he was a child. He never told me why, but he had a long knife scar on his forearm", and claimed that his older brother was a police chief in Jizan. In 1995, he and his childhood friend, Khalid al-Mihdhar, joined a group that went to fight alongside Bosnian Muslims in the Bosnian War. Afterwards, Nawaf returned to Afghanistan along with his brother Salem, and Mihdhar. In Afghanistan, they fought alongside the Taliban against the Afghan Northern Alliance, and joined up with al-Qaeda. Nawaf al-Hazmi returned to Saudi Arabia in early 1999. Osama bin Laden held Hazmi and Mihdhar in high regard for their experience fighting during the 1990s in Bosnia and elsewhere. Al-Qaeda later referred to Hazmi as Mihdhar's "second-in-command". 
When bin Laden committed to the "planes operation" plot in spring 1999, he personally selected Hazmi and Mihdhar to be involved in the plot as pilot hijackers. In addition to Hazmi and Mihdhar, two Yemenis were selected for a southeast Asia component of the plot, which was later scrapped for being too difficult to coordinate with the operations in the United States. Known as "Rabi'ah al-Makki" during the preparations, Hazmi had been so eager to participate in operations within the United States that he already had a US visa when bin Laden selected him. Hazmi obtained a B-1/B-2 tourist visa on April 3, 1999, from the US consulate in Jeddah, Saudi Arabia, using a new passport he had acquired a few weeks earlier. Hazmi's passport did have indicators of al-Qaeda association, but immigration inspectors were not trained to look for them. In the autumn of 1999, these four attended the Mes Aynak training camp in Afghanistan, which provided advanced training. Hazmi went with the two Yemenis, Tawfiq bin Attash (Khallad) and Abu Bara al Yemeni, to Karachi, Pakistan, where Khalid Sheikh Mohammed, the plot's coordinator, instructed him on Western culture and travel, and taught him some basic English phrases. Mihdhar did not go with him to Karachi, but instead left for Yemen. Khalid Sheikh Mohammed then sent Hazmi and the other men to Malaysia for a meeting. Before leaving for Malaysia, Khalid Sheikh Mohammed doctored Hazmi's Saudi passport in order to conceal his travel to Pakistan and Afghanistan and make it appear that Hazmi had come to Malaysia from Saudi Arabia via Dubai. After the attacks, the Associated Press would re-publish a "bizarre" story by the "Cody Enterprise" that quoted witnesses stating that Nawaf had entered the United States during the autumn of 1999, crossing the Canada–US border as one of two men delivering skylights to the local high school in Cody, Wyoming. 
Leaving the city 45 minutes later with the remaining cardboard boxes, the men allegedly asked "how to get to Florida." Based on information uncovered by the FBI in the 1998 United States embassy bombings case, the National Security Agency (NSA) began tracking the communications of Mihdhar's father-in-law, Ahmad Muhammad Ali al-Hada, who was facilitating al-Qaeda communications, in 1999. Authorities also became aware of Hazmi, as a friend and associate of Mihdhar. Saudi intelligence was also aware that Hazmi was associated with al-Qaeda, with the 1998 African embassy bombings, and with attempts to smuggle arms into the kingdom in 1997. A Saudi intelligence official later said that this had been revealed to the CIA: "What we told them was these people were on our watch list from previous activities of al-Qaeda." The CIA strongly denies having received any such warning. In late 1999, the NSA informed the CIA of an upcoming meeting in Malaysia, which Hada mentioned would involve "Khalid", "Nawaf", and "Salem". On January 5, 2000, Hazmi arrived in Kuala Lumpur, where he met up with Mihdhar, Attash, and Abu Bara. The group was in Malaysia to meet with Hambali for the 2000 Al-Qaeda Summit, during which key details of the attacks may have been arranged. At this time, there was an East Asia component to the September 11 attacks plot, but bin Laden later canceled it for being too difficult to coordinate with operations in the United States. Ramzi bin al-Shibh was also at the summit, and Khalid Sheikh Mohammed possibly attended as well. In Malaysia, the group stayed with Yazid Sufaat, a local member of Jemaah Islamiyah, who provided accommodations at the request of Hambali. Both Mihdhar and Hazmi were secretly photographed at the meeting by Malaysian authorities, who provided surveillance at the request of the CIA. Malaysian authorities reported that Mihdhar spoke at length with Tawfiq bin Attash, one of the Yemenis, and others who were later involved in the USS Cole bombing. 
Hazmi and Mihdhar also met with Fahd al-Quso, who was later involved in the USS Cole bombing. After the meeting, Mihdhar and Hazmi traveled to Bangkok in Thailand on January 8, and left a week later, on January 15, for the United States. On January 15, 2000, Hazmi and Mihdhar arrived together at Los Angeles International Airport from Bangkok, and were admitted for a six-month period. Immediately after entering the country, Nawaf and Mihdhar met Omar al-Bayoumi in an airport restaurant. Bayoumi claims he was merely being charitable in helping the two seemingly out-of-place Muslims to move to San Diego, where he helped them find an apartment near his own, co-signed their lease, and gave them $1,500 to help pay their rent. While in San Diego, witnesses told the FBI, he and Mihdhar had a close relationship with Anwar al-Awlaki. Authorities say the two regularly attended the Masjid Ar-Ribat al-Islami mosque Awlaki led in San Diego, and that Awlaki had many closed-door meetings with them. Hazmi got a part-time job through the mosque at a nearby car wash. In the beginning of February 2000, Mihdhar and Hazmi rented an apartment at the Parkwood Apartments, a 175-unit complex in the Clairemont Mesa section of San Diego, near the Balboa Drive Mosque. In February, Mihdhar purchased a used 1988 Toyota Corolla. While living at the Parkwood Apartments, neighbors thought that Mihdhar and Hazmi were odd. Months passed without them getting any furniture for the apartment. Instead, the men slept on mattresses on the floor, yet they carried briefcases, were frequently on their mobile phones, and were occasionally picked up by a limousine. After the attacks, their neighbors told the media that the pair constantly played flight simulator games. Residents said a total of four men spent time together at Parkwood, playing in the pool like children. On April 4, 2000, Hazmi took a one-hour introductory flight lesson at the National Air College in San Diego. 
Both Mihdhar and Hazmi took flight lessons in May 2000 at the Sorbi Flying Club, located at Montgomery Field in San Diego. On May 5, Hazmi and Mihdhar took a lesson for one hour, and additional lessons on May 10, with Hazmi flying an aircraft for 30 minutes. However, their English skills were very poor, and they did not do well in their flight lessons. The first day that they showed up, they told instructors that they wanted to learn how to fly Boeings. Mihdhar and Hazmi raised some suspicion when they offered extra money to their flight instructor, Richard Garza, if he would train them to fly jets. Suspicious of the two men, Garza refused the offer but did not report them to authorities. Garza described the two men as "impatient students" who "wanted to learn to fly jets, specifically Boeings." On April 18, Adel Rafeea received a $5,000 wire transfer from Ali Abdul Aziz Ali in the UAE, which he later claimed was money Nawaf had asked him to accept on his behalf. At the end of May 2000, Hazmi and Mihdhar moved out of Parkwood Apartments to nearby Lemon Grove, California. At this time, Mihdhar transferred his vehicle's registration to Hazmi, and he left San Diego on June 10, 2000. Mihdhar returned to Yemen, which angered Khalid Sheikh Mohammed, who did not want Hazmi to be left alone in California. On July 12, 2000, Hazmi filed for an extension of his visa, which was due to expire. His visa was extended until January 2001, though Hazmi never filed any further requests to extend it beyond that. In September, Nawaf and Mihdhar both moved into the house of FBI informant Abdussattar Shaikh, although he did not report the pair as suspicious. Mihdhar is believed to have left the apartment in early October, less than two weeks before the USS Cole bombing. Nawaf continued living with Shaikh until December. 
Hani Hanjour arrived in San Diego in early December 2000, joining Hazmi; on December 10, they were seen leaving their Mount Vernon address for Phoenix, Arizona, where Hanjour could take refresher flight training. On December 12, they arrived at Mesa, Arizona. On December 22, Hanjour and Hazmi signed a lease for an apartment in the Indian Springs Village complex in Mesa, moving in on January 9. In March, al-Hazmi received a shipment of VHS videos, including videos about Boeing 747 and 777 flight decks and "how an airline captain should look and act", and later a road atlas, a map of New York City and a world aeronautical chart. On March 30, al-Hazmi notified his utility company that he might be moving to another state or to Saudi Arabia. He and Hanjour moved out before the apartment rental expired at the end of the month, on their way to Virginia. Two days later, on April 1, 2001, Oklahoma police officer C. L. Parkins pulled Hazmi over for speeding in the Corolla and issued an additional citation for failing to use a seatbelt, the two fines together totaling $138. A routine inspection of his California driver's license turned up no warrants or alerts, although his name was known to both the NSA and the CIA as a suspected terrorist. Anwar al-Awlaki had already headed east and served as imam at the Dar al-Hijrah mosque in the metropolitan Washington, DC area starting in January 2001. Shortly after this, his sermons were attended by three of the 9/11 hijackers (the new one being Hanjour). By April 3, Hazmi was likely with companion Hani Hanjour when he was recorded at an ATM in Front Royal, Virginia, arriving in Falls Church, Virginia, by April 4. They met a man believed to be a Jordanian named Eyad Alrababah at a 7-Eleven that day. The 9/11 Commission wrote that Hazmi and Hanjour met Alrababah, a computer technician who had moved from West Paterson, New Jersey, at the Dar al-Hijrah mosque, where he had come to ask imam Anwar al-Awlaki about finding a job. 
He helped the pair rent an apartment in Alexandria, into which they moved. The 9/11 Commission concluded that two of the hijackers "reportedly respected Awlaki as a religious figure". Police found Awlaki's telephone number in the contacts of Ramzi bin al-Shibh (the "20th hijacker") when they searched his Hamburg apartment while investigating the 9/11 attacks. On May 1, 2001, Hazmi reported to police that men had tried to take his wallet outside his Fairfax, Virginia, residence, but before the county officer left, Hazmi signed a "statement of release" indicating he did not want the incident investigated. In May 2001, Nidal Hasan's mother's funeral was held at the Falls Church mosque, although it is not known if al-Hazmi attended the service. On May 2, two other hijackers, Ahmed al-Ghamdi and Majed Moqed, arrived in Virginia and moved in with them. On May 8, Alrababah suggested that al-Hazmi and Hanjour move with him to Fairfield, Connecticut, and helped all four hijackers move to a hotel there. They called area flight schools, and after a few days Alrababah drove the four to Paterson, New Jersey, to show them around. Some FBI agents suspected that Awlaki gave Alrababah the job of helping Hazmi and Hanjour. Alrababah was later arrested as a material witness after 9/11, convicted in a fraudulent driver's license scheme, and deported to Jordan. On May 21, al-Hazmi moved in with Hanjour into an apartment in Paterson, New Jersey. Mohamed Atta was living in the same city at another location. On June 30, al-Hazmi's car was involved in a minor traffic accident on the eastbound George Washington Bridge. On June 25, 2001, al-Hazmi obtained a driver's license in Florida, providing an address in Delray Beach, Florida, and he obtained a USA ID card on July 10. On August 2, al-Hazmi also obtained a Virginia driver's license, and made a request for it to be reissued on September 7. 
On July 20, al-Hazmi and fellow hijacker Hani Hanjour flew on a practice flight from Fairfield, New Jersey, to the Montgomery County Airpark in Maryland. Al-Hazmi, along with at least five other future hijackers, traveled to Las Vegas, Nevada, at least six times in the summer of 2001. They reportedly drank alcohol, gambled, and paid strippers to perform lap dances for them. Throughout the summer, al-Hazmi met with leader Mohamed Atta on a monthly basis to discuss the status of the operation. On August 23, the Israeli Mossad reportedly gave his name to the CIA as part of a list of 19 names they said were planning an attack in the near future. Only four of the names are known for certain – Nawaf, Atta, al-Shehhi and al-Mihdhar – but again, the connection was not made with previous contacts by local law enforcement. On the same day, he was added to an INS watch list, together with Mihdhar, to prevent entry into the US. An internal review after 9/11 found that "everything was done [to find them] that could have been done." However, the search does not appear to have been particularly aggressive. A national motor vehicle index was reportedly checked, but Hazmi's speeding ticket was not detected for some reason. The FBI did not search credit card databases, bank account databases, or car registration records, all of which would have produced positive results. Hazmi was even listed in the 2000–2001 San Diego phone book, but this too was not searched until after the attacks. He had not been placed on terrorist watch lists, nor did the CIA or NSA alert the FBI, Customs and Immigration, or local police and enforcement agencies. On August 27, brothers Nawaf and Salem purchased flight tickets through Travelocity.com using Nawaf's Visa card. On September 1, Nawaf registered for Room #7 at the Pin-Del Motel in Laurel, Maryland. On the registration, he listed his driver's license number as 3402142-D, and gave a New York hotel as his permanent residence. 
Ziad Jarrah had checked into the hotel on August 27. Nawaf and Mihdhar had purchased their 9/11 plane tickets online using a credit card with their real names. This raised no red flags, since the FAA had not been informed that the two were on a terrorist watchlist. On September 10, 2001, Hanjour, Mihdhar, and Nawaf checked into the Marriott Residence Inn in Herndon, Virginia, where Saleh Ibn Abdul Rahman Hussayen, a prominent Saudi government official, was staying, although no evidence was ever uncovered that they had met or knew of each other's presence. On September 11, Hazmi boarded American Airlines Flight 77. The flight was scheduled to depart at 08:10 but ended up departing 10 minutes late from Gate D26 at Dulles. The last normal radio communications from the aircraft to air traffic control occurred at 08:50:51. At 08:54, the hijackers sent pilots Charles Burlingame and David Charlebois to the back of the plane. Flight 77 began to deviate from its normal, assigned flight path and turned south. The hijackers then set the flight's autopilot in the direction of Washington, D.C. Passenger Barbara Olson called her husband, United States Solicitor General Theodore Olson, and reported that the plane had been hijacked and that the assailants had box cutters and knives. At 09:37, American Airlines Flight 77 crashed into the west facade of the Pentagon, killing all 64 aboard (including the hijackers) along with 125 in the Pentagon. Nawaf al-Hazmi's 1988 blue Toyota Corolla was found the next day in Dulles International Airport's hourly parking lot. Inside the vehicle, authorities found a letter written by Mohamed Atta, maps of Washington, D.C. and New York City, a cashier's check made out to a Phoenix flight school, four drawings of a Boeing 757 cockpit, a box cutter, and a page with notes and phone numbers. 
In the recovery process at the Pentagon, remains of all five Flight 77 hijackers were identified through a process of elimination, as not matching any DNA samples for the victims, and placed in the custody of the FBI. Forensics teams determined that two of the hijackers appeared to be brothers, based on their DNA similarities. Several weeks after the attacks, a Las Vegas Days Inn employee went to the FBI and stated that she recognized Hazmi's photographs from the media as being a man she had met at the hotel, who had asked for details on hotels near Los Angeles. She admitted that he never gave his name. Late in 2005, Army Lt. Col. Anthony Shaffer and Congressman Curt Weldon alleged that the Defense Department data mining project Able Danger had kept Nawaf, Khalid al-Mihdhar, Mohamed Atta and Marwan al-Shehhi all under surveillance as al-Qaeda agents.
https://en.wikipedia.org/wiki?curid=21630
Nero Nero (Nero Claudius Caesar Augustus Germanicus; born Lucius Domitius Ahenobarbus; 15 December 37 – 9 June 68 AD) was Roman emperor from 54 to 68, the last ruler of the Julio-Claudian dynasty. He was adopted by his great-uncle Claudius, thus becoming his heir and successor. Like Claudius, Nero became emperor with the consent of the Praetorian Guard. Nero's mother, Agrippina the Younger, dominated Nero's early life and decisions until he cast her off and had her killed five years into his reign. During the early years of his reign, Nero was content to be guided by his mother, his tutor Lucius Annaeus Seneca, and his Praetorian prefect Sextus Afranius Burrus. As time passed, he began to play a more active and independent role in government and foreign policy. During his reign, the redoubtable general Corbulo conducted a successful war and negotiated peace with the Parthian Empire. His general Suetonius Paulinus crushed a major revolt in Britain, led by the Iceni Queen Boudica. The Bosporan Kingdom was briefly annexed to the empire, and the First Jewish–Roman War began. Nero focused much of his attention on diplomacy and trade, as well as the cultural life of the empire, ordering theatres built and promoting athletic games. He made public appearances as an actor, poet, musician, and charioteer. In the eyes of traditionalists, this undermined the dignity and authority of his person, status, and office. His extravagant, empire-wide program of public and private works was funded by a rise in taxes that was much resented by the upper classes. In contrast, his populist style of rule remained well-admired among the lower classes of Rome and the provinces until his death and beyond. Various plots against his life were revealed; their leaders, most of them Nero's own courtiers, were executed. 
In AD 68 Vindex, governor of the Gaulish territory Gallia Lugdunensis, rebelled, with support from Galba, governor of Hispania Tarraconensis. Vindex's revolt failed in its immediate aim, though Nero fled Rome when its discontented civil and military authorities chose Galba as emperor. On 9 June AD 68, after learning that he had been tried "in absentia" and condemned to death as a public enemy, he committed suicide, becoming the first Roman emperor to do so. His death ended the Julio-Claudian dynasty, sparking a brief period of civil wars known as the Year of the Four Emperors. Nero's rule is usually associated with tyranny and extravagance. Most Roman sources, including Suetonius and Cassius Dio, offer overwhelmingly negative assessments of his personality and reign; likewise, Tacitus claims that the Roman people thought him compulsive and corrupt. Suetonius tells that many Romans believed that the Great Fire of Rome was instigated by Nero to clear the way for his planned palatial complex, the Domus Aurea. According to Tacitus he was said to have seized Christians as scapegoats for the fire and burned them alive, seemingly motivated not by public justice but by personal cruelty. Some modern historians question the reliability of the ancient sources on Nero's tyrannical acts, however. A few sources paint Nero in a more favorable light. There is evidence of his popularity among the Roman commoners, especially in the eastern provinces of the Empire, where a popular legend arose that Nero had not died and would return. At least three leaders of short-lived, failed rebellions presented themselves as "Nero reborn" to enlist popular support. Nero was born Lucius Domitius Ahenobarbus on 15 December 37 in Antium. He was the only son of Gnaeus Domitius Ahenobarbus and Agrippina the Younger. His maternal grandparents were Germanicus and Agrippina the Elder; his mother was Caligula's sister. He was Augustus' great-great-grandson, descended from the first Emperor's only daughter, Julia. 
The ancient biographer Suetonius, who was critical of Nero's ancestors, wrote that Augustus had reproached Nero's grandfather for his unseemly enjoyment of violent gladiator games. According to Jürgen Malitz, Suetonius tells that Nero's father was known to be "irascible and brutal", and that both "enjoyed chariot races and theater performances to a degree not befitting their position." Nero's father, Domitius, died in 40 AD. A few years before his death, Domitius had been involved in a political scandal that, according to Malitz, "could have cost him his life if Tiberius had not died in the year 37." In the previous year, Nero's mother Agrippina had been caught up in a scandal of her own. Caligula's beloved sister Drusilla had recently died and Caligula began to feel threatened by his brother-in-law Marcus Aemilius Lepidus. Agrippina, suspected of adultery with her brother-in-law, was forced to carry the funerary urn after Lepidus' execution. Caligula then banished his two surviving sisters, Agrippina and Julia Livilla, to a remote island in the Mediterranean Sea. According to "The Oxford Encyclopedia of Ancient Greece and Rome", Agrippina was exiled for plotting to overthrow Caligula. Nero's inheritance was taken from him and he was sent to live with his paternal aunt Domitia Lepida the Younger, the mother of Claudius' third wife Valeria Messalina. Caligula's reign lasted from 37 until 41. He died from multiple stab wounds in January of 41 after being ambushed by his own Praetorian Guard on the Palatine Hill. Claudius succeeded Caligula as Emperor. Agrippina married Claudius in 49 and became his fourth wife. By February 49, she had persuaded Claudius to adopt her son Nero. After Nero's adoption, "Claudius" became part of his name: Nero Claudius Caesar Drusus Germanicus. Claudius had gold coins issued to mark the adoption. 
Classics professor Josiah Osgood has written that "the coins, through their distribution and imagery alike, showed that a new Leader was in the making." David Shotter noted that, despite events in Rome, Nero's step-brother Britannicus was more prominent in provincial coinages during the early 50s. Nero formally entered public life as an adult in 51—he was around 14 years old. When he turned 16, Nero married Claudius' daughter (his own step-sister), Claudia Octavia. Between the years 51 and 53, he gave several speeches on behalf of various communities, including the Ilians; the Apameans, requesting a five-year tax reprieve after an earthquake; and the northern colony of Bologna, after their settlement suffered a devastating fire. Claudius died in 54; many ancient historians claim that he was poisoned by Agrippina. Shotter has written that "Claudius' death in 54 has usually been regarded as an event hastened by Agrippina because of signs that Claudius was showing a renewed affection for his natural son," but he notes that among ancient sources Josephus was uniquely reserved in describing the poisoning as a rumor. Contemporary sources differ in their accounts. Tacitus says that Locusta prepared the poison, which was served to the Emperor by his food taster Halotus. Tacitus also writes that Agrippina arranged for Claudius' doctor Xenophon to administer poison, in the event that the Emperor survived. Suetonius differs in some details, but also implicates Halotus and Agrippina. Like Tacitus, Cassius Dio writes that the poison was prepared by Locusta, but in Dio's account it is administered by Agrippina instead of Halotus. In Apocolocyntosis, Seneca the Younger does not mention mushrooms at all. Agrippina's involvement in Claudius' death is not accepted by all modern scholars. Before Claudius' death, Agrippina had maneuvered to remove Britannicus' tutors and replace them with tutors that she had selected. 
She was also able to convince Claudius to replace the two prefects of the Praetorian Guard, who were suspected of supporting Britannicus, with a single commander, Burrus. Since Agrippina had replaced the guard officers with men loyal to her, Nero was able to assume power without incident. Most of what we know about Nero's reign comes from three ancient writers: Tacitus, Suetonius, and the Greek historian Cassius Dio. According to ancient historians, Nero's construction projects were overly extravagant and the large number of expenditures under Nero left Italy "thoroughly exhausted by contributions of money" with "the provinces ruined." Modern historians, though, note that the period was riddled with deflation and that it is likely that Nero's spending came in the form of public-works projects and charity intended to ease economic troubles. Nero became emperor in 54, aged sixteen. This made him the youngest sole emperor until Elagabalus, who became emperor aged 14 in 218. The first five years of Nero's reign were described as "Quinquennium Neronis" by Trajan; the interpretation of the phrase is a matter of dispute amongst scholars. As Pharaoh of Egypt, Nero adopted the royal titulary "Autokrator Neron Heqaheqau Meryasetptah Tjemaahuikhasut Wernakhtubaqet Heqaheqau Setepennenu Merur" ('Emperor Nero, Ruler of rulers, chosen by Ptah, beloved of Isis, the sturdy-armed one who struck the foreign lands, victorious for Egypt, ruler of rulers, chosen of Nun who loves him'). Nero's tutor, Seneca, prepared Nero's first speech before the Senate. During this speech, Nero spoke about "eliminating the ills of the previous regime." H.H. Scullard writes that "he promised to follow the Augustan model in his principate, to end all secret trials "intra cubiculum", to have done with the corruption of court favorites and freedmen, and above all to respect the privileges of the Senate and individual Senators." 
His respect of the Senatorial autonomy, which distinguished him from Caligula and Claudius, was generally well received by the Roman Senate. Scullard writes that Nero's mother, Agrippina, "meant to rule through her son." Agrippina murdered her political rivals: Domitia Lepida the Younger, the aunt that Nero had lived with during Agrippina's exile; Marcus Junius Silanus, a great grandson of Augustus; and Narcissus. One of the earliest coins that Nero issued during his reign shows Agrippina on the coin's obverse side; usually, this would be reserved for a portrait of the emperor. The Senate also allowed Agrippina two lictors during public appearances, an honor that was customarily bestowed upon only magistrates and the Vestalis Maxima. In 55, Nero removed Agrippina's ally Marcus Antonius Pallas from his position in the treasury. Shotter writes the following about Agrippina's deteriorating relationship with Nero: "What Seneca and Burrus probably saw as relatively harmless in Nero—his cultural pursuits and his affair with the slave girl Claudia Acte—were to her signs of her son's dangerous emancipation of himself from her influence." Britannicus was poisoned after Agrippina threatened to side with him. Nero, who was having an affair with Acte, exiled Agrippina from the palace when she began to cultivate a relationship with his wife Octavia. Jürgen Malitz writes that ancient sources do not provide any clear evidence to evaluate the extent of Nero's personal involvement in politics during the first years of his reign. He describes the policies that are explicitly attributed to Nero as "well-meant but incompetent notions" like Nero's failed initiative to abolish taxes in 58. Scholars generally credit Nero's advisors Burrus and Seneca with the administrative successes of these years. Malitz writes that in later years, Nero panicked when he had to make decisions on his own during times of crisis. 
"The Oxford Encyclopedia of Ancient Greece and Rome" cautiously notes that Nero's reasons for killing his mother in 59 are "not fully understood." According to Tacitus, the source of conflict between Nero and his mother was Nero's affair with Poppaea Sabina. In "Histories" Tacitus writes that the affair began while Poppaea was still married to Rufrius Crispinus, but in his later work "Annals" Tacitus says Poppaea was married to Otho when the affair began. In "Annals" Tacitus writes that Agrippina opposed Nero's affair with Poppaea because of her affection for his wife Octavia. Anthony Barrett writes that Tacitus' account in "Annals" "suggests that Poppaea's challenge drove [Nero] over the brink." A number of modern historians have noted that Agrippina's death would not have offered much advantage for Poppaea, as Nero did not marry Poppaea until 62. Barrett writes that Poppaea seems to serve as a "literary device, utilized [by Tacitus] because [he] could see no plausible explanation for Nero's conduct and also incidentally [served] to show that Nero, like Claudius, had fallen under the malign influence of a woman." According to Suetonius, Nero had his former freedman Anicetus arrange a shipwreck; Agrippina survived the wreck, swam ashore and was executed by Anicetus, who reported her death as a suicide. Modern scholars believe that Nero's reign had been going well in the years before Agrippina's death. For example, Nero promoted the exploration of the Nile river sources with a successful expedition. After Agrippina's exile, Burrus and Seneca were responsible for the administration of the Empire. However, Nero's "conduct became far more egregious" after his mother's death. Miriam T. Griffins suggests that Nero's decline began as early as 55 with the murder of his stepbrother Britannicus, but also notes that "Nero lost all sense of right and wrong and listened to flattery with total credulity" after Agrippina's death. 
Griffin points out that Tacitus "makes explicit the significance of Agrippina's removal for Nero's conduct". In 62, Nero's adviser Burrus died. That same year Nero called for the first treason trial of his reign ("maiestas" trial) against Antistius Sosianus. He also executed his rivals Cornelius Sulla and Rubellius Plautus. Jürgen Malitz considers this to be a turning point in Nero's relationship with the Roman Senate. Malitz writes that "Nero abandoned the restraint he had previously shown because he believed a course supporting the Senate promised to be less and less profitable." After Burrus' death, Nero appointed two new Praetorian Prefects: Faenius Rufus and Ofonius Tigellinus. Politically isolated, Seneca was forced to retire. According to Tacitus, Nero divorced Octavia on grounds of infertility, and banished her. After public protests over Octavia's exile, Nero accused her of adultery with Anicetus and she was executed. In 64, Nero married Pythagoras, a freedman. The Great Fire of Rome erupted on the night of 18 to 19 July, AD 64. The fire started on the slope of the Aventine overlooking the Circus Maximus. Tacitus, the main ancient source for information about the fire, wrote that countless mansions, residences and temples were destroyed. Tacitus and Cassius Dio have both written of extensive damage to the Palatine, which has been supported by subsequent archaeological excavations. The fire is reported to have burned for over a week. It destroyed three of fourteen Roman districts and severely damaged seven more. Tacitus wrote that some ancient accounts described the fire as an accident, while others had claimed that it was a plot of Nero's. Tacitus is the only surviving source which does not blame Nero for starting the fire; he says he is "unsure." Pliny the Elder, Suetonius and Cassius Dio all wrote that Nero was responsible for the fire. 
These accounts give several reasons for Nero's alleged arson, such as Nero's envy of King Priam and a dislike for the city's ancient construction. Suetonius wrote that Nero started the fire because he wanted the space to build his Golden House. This Golden House or "Domus Aurea" included lush artificial landscapes and a 30-meter-tall statue of himself, the Colossus of Nero. The size of this complex is debated (from 100 to 300 acres). Tacitus wrote that Nero accused Christians of starting the fire to remove suspicion from himself. According to this account, many Christians were arrested and brutally executed by "being thrown to the beasts, crucified, and being burned alive". Suetonius and Cassius Dio alleged that Nero sang the "Sack of Ilium" in stage costume while the city burned. The popular legend that Nero played the fiddle while Rome burned "is at least partly a literary construct of Flavian propaganda [...] which looked askance on the abortive Neronian attempt to rewrite Augustan models of rule." In fact, the fiddle would not be invented until nearly 1400 years after Nero's death. According to Tacitus, Nero was in Antium during the fire. Upon hearing news of the fire, Nero returned to Rome to organize a relief effort, providing for the removal of bodies and debris, which he paid for from his own funds. After the fire, Nero opened his palaces to provide shelter for the homeless, and arranged for food supplies to be delivered in order to prevent starvation among the survivors. In the wake of the fire, he made a new urban development plan. Houses built after the fire were spaced out, built in brick, and faced by porticos on wide roads. Nero also built a new palace complex known as the Domus Aurea in an area cleared by the fire. To find the necessary funds for the reconstruction, tributes were imposed on the provinces of the empire. The cost to rebuild Rome was immense, requiring funds the state treasury did not have. 
Nero devalued the Roman currency for the first time in the Empire's history. He reduced the weight of the denarius from 84 per Roman pound to 96 (3.80 grams to 3.30 grams). He also reduced the silver purity from 99.5% to 93.5%—the silver weight dropping from 3.80 grams to 2.97 grams. Furthermore, Nero reduced the weight of the aureus from 40 per Roman pound to 45 (7.9 grams to 7.2 grams). In 65, Gaius Calpurnius Piso, a Roman statesman, organized a conspiracy against Nero with the help of Subrius Flavus and Sulpicius Asper, a tribune and a centurion of the Praetorian Guard. According to Tacitus, many conspirators wished to "rescue the state" from the emperor and restore the Republic. The freedman Milichus discovered the conspiracy and reported it to Nero's secretary, Epaphroditos. As a result, the conspiracy failed and its members were executed, including the poet Lucan. Nero's previous advisor Seneca was accused by Natalis; he denied the charges but was still ordered to commit suicide, as by this point he had fallen out of favor with Nero. Nero was said to have kicked Poppaea to death in 65, before she could have his second child. Modern historians, noting the probable biases of Suetonius, Tacitus, and Cassius Dio, and the likely absence of eyewitnesses to such an event, propose that Poppaea may have died after miscarriage or in childbirth. Nero went into deep mourning; Poppaea was given a sumptuous state funeral, divine honors, and was promised a temple for her cult. A year's importation of incense was burned at the funeral. Her body was not cremated, as would have been strictly customary, but embalmed after the Egyptian manner and entombed; it is not known where. In 67, Nero married Sporus, a young boy who is said to have greatly resembled Poppaea. Nero had him castrated, tried to make a woman out of him, and married him with a dowry and bridal veil. It is believed that he did this out of regret for his killing of Poppaea. 
In March 68, Gaius Julius Vindex, the governor of Gallia Lugdunensis, rebelled against Nero's tax policies. Lucius Verginius Rufus, the governor of Germania Superior, was ordered to put down Vindex's rebellion. In an attempt to gain support from outside his own province, Vindex called upon Servius Sulpicius Galba, the governor of Hispania Tarraconensis, to join the rebellion and further, to declare himself emperor in opposition to Nero. At the Battle of Vesontio in May 68, Verginius' forces easily defeated those of Vindex and the latter committed suicide. However, after putting down this one rebel, Verginius' legions attempted to proclaim their own commander as Emperor. Verginius refused to act against Nero, but the discontent of the legions of Germany and the continued opposition of Galba in Spain did not bode well for him. While Nero had retained some control of the situation, support for Galba increased despite his being officially declared a public enemy ("hostis publicus"). The prefect of the Praetorian Guard, Gaius Nymphidius Sabinus, also abandoned his allegiance to the Emperor and came out in support of Galba. In response, Nero fled Rome with the intention of going to the port of Ostia and, from there, to take a fleet to one of the still-loyal eastern provinces. According to Suetonius, Nero abandoned the idea when some army officers openly refused to obey his commands, responding with a line from Virgil's "Aeneid": "Is it so dreadful a thing then to die?" Nero then toyed with the idea of fleeing to Parthia, throwing himself upon the mercy of Galba, or appealing to the people and begging them to pardon him for his past offences "and if he could not soften their hearts, to entreat them at least to allow him the prefecture of Egypt". Suetonius reports that the text of this speech was later found in Nero's writing desk, but that he dared not give it from fear of being torn to pieces before he could reach the Forum. 
Nero returned to Rome and spent the evening in the palace. After sleeping, he awoke at about midnight to find the palace guard had left. Dispatching messages to his friends' palace chambers for them to come, he received no answers. Upon going to their chambers personally, he found them all abandoned. When he called for a gladiator or anyone else adept with a sword to kill him, no one appeared. He cried, "Have I neither friend nor foe?" and ran out as if to throw himself into the Tiber. Returning, Nero sought a place where he could hide and collect his thoughts. An imperial freedman, Phaon, offered his villa, located outside the city. Travelling in disguise, Nero and four loyal freedmen, Epaphroditos, Phaon, Neophytus, and Sporus, reached the villa, where Nero ordered them to dig a grave for him. At this time, a courier arrived with a report that the Senate had declared Nero a public enemy, that it was their intention to execute him by beating him to death, and that armed men had been sent to apprehend him for the act to take place in the Roman Forum. The Senate actually was still reluctant and deliberating on the right course of action, as Nero was the last member of the Julio-Claudian Family. Indeed, most of the senators had served the imperial family all their lives and felt a sense of loyalty to the deified bloodline, if not to Nero himself. The men actually had the goal of returning Nero back to the Senate, where the Senate hoped to work out a compromise with the rebelling governors that would preserve Nero's life, so that at least a future heir to the dynasty could be produced. Nero, however, did not know this, and at the news brought by the courier, he prepared himself for suicide, pacing up and down muttering "Qualis artifex pereo" ("What an artist dies in me"). Losing his nerve, he begged one of his companions to set an example by killing himself first. At last, the sound of approaching horsemen drove Nero to face the end. 
However, he still could not bring himself to take his own life but instead he forced his private secretary, Epaphroditos, to perform the task. When one of the horsemen entered and saw that Nero was dying, he attempted to stop the bleeding, but efforts to save Nero's life were unsuccessful. Nero's final words were "Too late! This is fidelity!" He died on 9 June 68, the anniversary of the death of Octavia, and was buried in the Mausoleum of the Domitii Ahenobarbi, in what is now the Villa Borghese (Pincian Hill) area of Rome. According to Sulpicius Severus, it is unclear whether Nero took his own life. With his death, the Julio-Claudian dynasty ended. When news of his death reached Rome, the Senate posthumously declared Nero a public enemy to appease the coming Galba (as the Senate had initially declared Galba as a public enemy) and proclaimed Galba as the new emperor. Chaos would ensue in the year of the Four Emperors. According to Suetonius and Cassius Dio, the people of Rome celebrated the death of Nero. Tacitus, though, describes a more complicated political environment. Tacitus mentions that Nero's death was welcomed by Senators, nobility and the upper class. The lower-class, slaves, frequenters of the arena and the theater, and "those who were supported by the famous excesses of Nero", on the other hand, were upset with the news. Members of the military were said to have mixed feelings, as they had allegiance to Nero, but had been bribed to overthrow him. Eastern sources, namely Philostratus and Apollonius of Tyana, mention that Nero's death was mourned as he "restored the liberties of Hellas with a wisdom and moderation quite alien to his character" and that he "held our liberties in his hand and respected them." Modern scholarship generally holds that, while the Senate and more well-off individuals welcomed Nero's death, the general populace was "loyal to the end and beyond, for Otho and Vitellius both thought it worthwhile to appeal to their nostalgia." 
Nero's name was erased from some monuments, in what Edward Champlin regards as an "outburst of private zeal". Many portraits of Nero were reworked to represent other figures; according to Eric R. Varner, over fifty such images survive. This reworking of images is often explained as part of the way in which the memory of disgraced emperors was condemned posthumously (see damnatio memoriae). Champlin, however, doubts that the practice is necessarily negative and notes that some continued to create images of Nero long after his death. Damaged portraits of Nero, often with hammer-blows directed to the face, have been found in many provinces of the Roman Empire, three recently having been identified from the United Kingdom. The civil war during the Year of the Four Emperors was described by ancient historians as a troubling period. According to Tacitus, this instability was rooted in the fact that emperors could no longer rely on the perceived legitimacy of the imperial bloodline, as Nero and those before him could. Galba began his short reign with the execution of many of Nero's allies. One notable example was Nymphidius Sabinus, who claimed to be the son of Emperor Caligula. Otho overthrew Galba. Otho was said to be liked by many soldiers because he had been a friend of Nero's and resembled him somewhat in temperament. It was said that the common Roman hailed Otho as Nero himself. Otho used "Nero" as a surname and re-erected many statues to Nero. Vitellius overthrew Otho. Vitellius began his reign with a large funeral for Nero, complete with songs written by Nero. After Nero's suicide in 68, there was a widespread belief, especially in the eastern provinces, that he was not dead and somehow would return. This belief came to be known as the Nero Redivivus Legend. The legend of Nero's return lasted for hundreds of years after Nero's death. Augustine of Hippo wrote of the legend as a popular belief in 422. 
At least three Nero impostors emerged leading rebellions. The first, who sang and played the cithara or lyre and whose face was similar to that of the dead emperor, appeared in 69 during the reign of Vitellius. After persuading some to recognize him, he was captured and executed. Sometime during the reign of Titus (79–81), another impostor appeared in Asia; he sang to the accompaniment of the lyre and looked like Nero, but he, too, was killed. Twenty years after Nero's death, during the reign of Domitian, there was a third pretender. He was supported by the Parthians, who only reluctantly gave him up, and the matter almost came to war. In Britannia (Britain) in 59, Prasutagus, leader of the Iceni tribe and a client king of Rome's during Claudius' reign, died. The client state arrangement was unlikely to survive the death of the former Emperor. Prasutagus' will leaving control of the Iceni to his wife Boudica was denied, and, when Catus Decianus scourged Boudica and raped her daughters, the Iceni revolted. They were joined by the Trinovantes tribe, and their uprising became the most significant provincial rebellion of the 1st century. Under Boudica the towns of Camulodunum (Colchester), Londinium (London) and Verulamium (St Albans) were burned and a substantial body of legion infantry destroyed. The governor of the province, Gaius Suetonius Paulinus, assembled his remaining forces and defeated the Britons, restoring order, but for a while Nero considered abandoning the province. Julius Classicianus replaced Decianus as procurator. Classicianus advised Nero to replace Paulinus, who continued to punish the population even after the rebellion was over. Nero decided to adopt a more lenient approach to governing the province, and appointed a new governor, Petronius Turpilianus. Nero began preparing for war in the early years of his reign, after the Parthian king Vologeses set his brother Tiridates on the Armenian throne. 
Around 57 and 58 Domitius Corbulo and his legions advanced on Tiridates and captured the Armenian capital Artaxata. Tigranes was chosen to replace Tiridates on the Armenian throne. When Tigranes attacked Adiabene, Nero had to send further legions to defend Armenia and Syria from Parthia. The Roman victory came at a time when the Parthians were troubled by revolts; once this was dealt with, they were able to devote resources to the Armenian situation. A Roman army under Paetus surrendered under humiliating circumstances and, though both Roman and Parthian forces withdrew from Armenia, it was under Parthian control. The triumphal arch for Corbulo's earlier victory was part-built when Parthian envoys arrived in 63 AD to discuss treaties. Given "imperium" over the eastern regions, Corbulo organised his forces for an invasion but was met by this Parthian delegation. An agreement was thereafter reached with the Parthians: Rome would recognize Tiridates as king of Armenia only if he agreed to receive his diadem from Nero. A coronation ceremony was held in Italy in 66. Dio reports that Tiridates said "I have come to you, my God, worshiping you as Mithras." Shotter says this parallels other divine designations that were commonly applied to Nero in the East, including "The New Apollo" and "The New Sun." After the coronation, friendly relations were established between Rome and the eastern kingdoms of Parthia and Armenia. Artaxata was temporarily renamed Neroneia. In 66, there was a Jewish revolt in Judea stemming from Greek and Jewish religious tension. In 67, Nero dispatched Vespasian to restore order. This revolt was eventually put down in 70, after Nero's death. The revolt is famous for the Romans' breaching of the walls of Jerusalem and destruction of the Second Temple of Jerusalem. Nero studied poetry, music, painting and sculpture. He both sang and played the "cithara" (a type of lyre). 
Many of these disciplines were standard education for the Roman elite, but Nero's devotion to music exceeded what was socially acceptable for a Roman of his class. Ancient sources were critical of Nero's emphasis on the arts, chariot-racing and athletics. Pliny described Nero as an "actor-emperor" ("scaenici imperatoris") and Suetonius wrote that he was "carried away by a craze for popularity...since he was acclaimed as the equal of Apollo in music and of the Sun in driving a chariot, he had planned to emulate the exploits of Hercules as well." In 67 Nero participated in the Olympics. He had bribed organizers to postpone the games for a year so he could participate, and artistic competitions were added to the athletic events. Nero won every contest in which he was a competitor. During the games Nero sang and played his lyre on stage, acted in tragedies and raced chariots. He won a 10-horse chariot race, despite being thrown from the chariot and leaving the race. He was crowned on the basis that he would have won if he had completed the race. After he died a year later, his name was removed from the list of winners. Champlin writes that though Nero's participation "effectively stifled true competition, [Nero] seems to have been oblivious of reality." Nero established the Neronian games in 60. Modeled on Greek-style games, these games included "music", "gymnastic", and "equestrian" contests. According to Suetonius the gymnastic contests were held in the Saepta area of the Campus Martius. The history of Nero's reign is problematic in that no historical sources contemporary with Nero survive. These first histories, while they still existed, were described as biased and fantastical, either overly critical or praising of Nero. The original sources were also said to contradict one another on a number of events. Nonetheless, these lost primary sources were the basis of surviving secondary and tertiary histories on Nero written by the next generations of historians. 
A few of the contemporary historians are known by name. Fabius Rusticus, Cluvius Rufus and Pliny the Elder all wrote condemnatory histories of Nero that are now lost. There were also pro-Nero histories, but it is unknown who wrote them or for what deeds Nero was praised. The bulk of what is known of Nero comes from Tacitus, Suetonius and Cassius Dio; Tacitus and Dio were of the senatorial class, while Suetonius was an equestrian. Tacitus and Suetonius wrote their histories on Nero over fifty years after his death, while Cassius Dio wrote his history over 150 years after Nero's death. These sources contradict one another on a number of events in Nero's life, including the death of Claudius, the death of Agrippina, and the Roman fire of 64, but they are consistent in their condemnation of Nero. A handful of other sources also add a limited and varying perspective on Nero. Few surviving sources paint Nero in a favourable light. Some sources, though, portray him as a competent emperor who was popular with the Roman people, especially in the east. Cassius Dio (c. 155–229) was the son of Cassius Apronianus, a Roman senator. He passed the greater part of his life in public service. He was a senator under Commodus and governor of Smyrna after the death of Septimius Severus, and afterwards suffect consul around 205, and also proconsul in Africa and Pannonia. Books 61–63 of Dio's "Roman History" describe the reign of Nero. Only fragments of these books remain, and what does remain was abridged and altered by John Xiphilinus, an 11th-century monk. Dio Chrysostom (c. 40–120), a Greek philosopher and historian, wrote that the Roman people were very happy with Nero and would have allowed him to rule indefinitely. They longed for his rule once he was gone and embraced imposters when they appeared: Epictetus (c. 55–135) was a slave of Nero's scribe Epaphroditos. He makes a few passing negative comments on Nero's character in his work, but makes no remarks on the nature of his rule. 
He describes Nero as a spoiled, angry and unhappy man. The historian Josephus (c. 37–100), while calling Nero a tyrant, was also the first to mention bias against Nero. Of other historians, he said: Although more of a poet than historian, Lucanus (c. 39–65) has one of the kindest accounts of Nero's rule. He writes of peace and prosperity under Nero in contrast to previous war and strife. Ironically, he was later involved in a conspiracy to overthrow Nero and was executed. Philostratus II "the Athenian" (c. 172–250) spoke of Nero in the "Life of Apollonius of Tyana" (Books 4–5). Although he has a generally bad or dim view of Nero, he speaks of others' positive reception of Nero in the East. The history of Nero by Pliny the Elder (c. 24–79) did not survive. Still, there are several references to Nero in Pliny's "Natural Histories". Pliny has one of the worst opinions of Nero and calls him an "enemy of mankind." Plutarch (c. 46–127) mentions Nero indirectly in his accounts of the Life of Galba and the Life of Otho, as well as in the Vision of Thespesius in Book 7 of the Moralia, where a voice orders that Nero's soul be transferred to a more offensive species. Nero is portrayed as a tyrant, but those that replace him are not described as better. It is not surprising that Seneca (c. 4 BC–65 AD), Nero's teacher and advisor, writes very well of Nero. Suetonius (c. 69–130) was a member of the equestrian order, and he was the head of the department of imperial correspondence. While in this position, Suetonius started writing biographies of the emperors, accentuating the anecdotal and sensational aspects. The "Annals" by Tacitus (c. 56–117) is the most detailed and comprehensive history on the rule of Nero, despite being incomplete after the year 66. Tacitus described the rule of the Julio-Claudian emperors as generally unjust. He also thought that existing writing on them was unbalanced: Tacitus was the son of a procurator, who married into the elite family of Agricola. 
He entered his political life as a senator after Nero's death and, by Tacitus' own admission, owed much to Nero's rivals. Realising that this bias may be apparent to others, Tacitus protests that his writing is true. In 1562 Girolamo Cardano published in Basel his "Encomium Neronis", which was one of the first historical references of the Modern era to portray Nero in a positive light. At the end of 66, conflict broke out between Greeks and Jews in Jerusalem and Caesarea. According to the Talmud, Nero went to Jerusalem and shot arrows in all four directions. All the arrows landed in the city. He then asked a passing child to repeat the verse he had learned that day. The child responded, "I will lay my vengeance upon Edom by the hand of my people Israel" (Ezekiel 25:14). Nero became terrified, believing that God wanted the Second Temple to be destroyed, but that he would punish the one to carry it out. Nero said, "He desires to lay waste His House and to lay the blame on me," whereupon he fled and converted to Judaism to avoid such retribution. Vespasian was then dispatched to put down the rebellion. The Talmud adds that the sage Reb Meir Baal HaNess lived in the time of the Mishnah, and was a prominent supporter of the Bar Kokhba rebellion against Roman rule. Rabbi Meir was considered one of the greatest of the Tannaim of the third generation (139–163). According to the Talmud, his father was a descendant of Nero who had converted to Judaism. His wife Bruriah is one of the few women cited in the Gemara. He is the third-most-frequently-mentioned sage in the Mishnah. Roman and Greek sources nowhere report Nero's alleged trip to Jerusalem or his alleged conversion to Judaism. There is also no record of Nero having any offspring who survived infancy: his only recorded child, Claudia Augusta, died aged 4 months. Non-Christian historian Tacitus describes Nero extensively torturing and executing Christians after the fire of 64. 
Suetonius also mentions Nero punishing Christians, though he does so because they are "given to a new and mischievous superstition" and does not connect it with the fire. Christian writer Tertullian (c. 155–230) was the first to call Nero the first persecutor of Christians. He wrote, "Examine your records. There you will find that Nero was the first that persecuted this doctrine." Lactantius (c. 240–320) also said that Nero "first persecuted the servants of God," as does Sulpicius Severus. However, Suetonius writes that, "since the Jews constantly made disturbances at the instigation of Chrestus, the [emperor Claudius] expelled them from Rome" ("Iudaeos impulsore Chresto assidue tumultuantis Roma expulit"). These expelled "Jews" may have been early Christians, although Suetonius is not explicit. Nor is the Bible explicit, calling Aquila of Pontus and his wife, Priscilla, both expelled from Italy at the time, "Jews" (Acts 18:2). The first text to suggest that Nero ordered the execution of an apostle is a letter by Clement to the Corinthians, traditionally dated to around AD 96. The apocryphal Ascension of Isaiah, a Christian writing from the 2nd century, says, "the slayer of his mother, who himself (even) this king, will persecute the plant which the Twelve Apostles of the Beloved have planted. Of the Twelve one will be delivered into his hands"; this is interpreted as referring to Nero. Bishop Eusebius of Caesarea (c. 275–339) was the first to write explicitly that Paul was beheaded in Rome during the reign of Nero. He states that Nero's persecution led to Peter and Paul's deaths, but that Nero did not give any specific orders. However, several other accounts going back to the 1st century have Paul surviving his two years in Rome and travelling to Hispania, before facing trial in Rome again prior to his death. Peter is first said to have been crucified upside-down in Rome during Nero's reign (but not by Nero) in the apocryphal Acts of Peter (c. 200). 
The account ends with Paul still alive and Nero abiding by God's command not to persecute any more Christians. By the 4th century, a number of writers were stating that Nero killed Peter and Paul. The Sibylline Oracles, Book 5 and 8, written in the 2nd century, speak of Nero returning and bringing destruction. Within Christian communities, these writings, along with others, fueled the belief that Nero would return as the Antichrist. In 310, Lactantius wrote that Nero "suddenly disappeared, and even the burial place of that noxious wild beast was nowhere to be seen. This has led some persons of extravagant imagination to suppose that, having been conveyed to a distant region, he is still reserved alive; and to him they apply the Sibylline verses". Lactantius maintains that it is not right to believe this. In 422, Augustine of Hippo wrote about 2 Thessalonians 2:1–11, where he believed that Paul mentioned the coming of the Antichrist. Although he rejects the theory, Augustine mentions that many Christians believed Nero was the Antichrist or would return as the Antichrist. He wrote, "so that in saying, 'For the mystery of iniquity doth already work,' he alluded to Nero, whose deeds already seemed to be as the deeds of Antichrist." Some modern biblical scholars such as Delbert Hillers (Johns Hopkins University) of the American Schools of Oriental Research and the editors of the "Oxford Study Bible" and "Harper Collins Study Bible", contend that the number 666 in the Book of Revelation is a code for Nero, a view that is also supported in Roman Catholic Biblical commentaries.
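The "666 as a code for Nero" argument mentioned above rests on simple gematria arithmetic, which can be checked directly. The sketch below is an illustration of that scholarly argument, not a claim from the article itself: it sums the standard Hebrew numeral values of the commonly cited transliteration "Neron Qesar"; dropping the final nun (the Latin spelling "Nero") gives 616, matching a known textual variant of Revelation.

```python
# Standard Hebrew letter values for "Neron Qesar" (NRWN QSR),
# the transliteration usually cited in this argument.
NERON_QESAR = [
    ("nun", 50), ("resh", 200), ("vav", 6), ("nun", 50),  # Neron
    ("qof", 100), ("samekh", 60), ("resh", 200),          # Qesar
]

total = sum(value for _, value in NERON_QESAR)
# Without the final nun, the spelling corresponds to Latin "Nero".
variant = total - 50

print(total, variant)  # 666 616
```

The 616 reading appears in some early manuscripts of Revelation, which scholars who favor the Nero identification take as corroborating evidence.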
https://en.wikipedia.org/wiki?curid=21632
Neoclassical economics Neoclassical economics is an approach to economics focusing on the determination of goods, outputs, and income distributions in markets through supply and demand. This determination is often mediated through a hypothesized maximization of utility by income-constrained individuals and of profits by firms facing production costs and employing available information and factors of production, in accordance with rational choice theory, a theory that has come under considerable question in recent years. Neoclassical economics dominates microeconomics and, together with Keynesian economics, forms the neoclassical synthesis which dominates mainstream economics today. Although neoclassical economics has gained widespread acceptance among contemporary economists, there have been many critiques of it; these have often been incorporated into newer versions of neoclassical theory, though some have remained distinct fields. The term was originally introduced by Thorstein Veblen in his 1900 article 'Preconceptions of Economic Science', in which he related marginalists in the tradition of Alfred Marshall et al. to those in the Austrian School. No attempt will here be made even to pass a verdict on the relative claims of the recognized two or three main "schools" of theory, beyond the somewhat obvious finding that, for the purpose in hand, the so-called Austrian school is scarcely distinguishable from the neo-classical, unless it be in the different distribution of emphasis. The divergence between the modernized classical views, on the one hand, and the historical and Marxist schools, on the other hand, is wider, so much so, indeed, as to bar out a consideration of the postulates of the latter under the same head of inquiry with the former. – Veblen It was later used by John Hicks, George Stigler, and others to include the work of Carl Menger, William Stanley Jevons, Léon Walras, John Bates Clark, and many others. 
Today it is usually used to refer to mainstream economics, although it has also been used as an umbrella term encompassing a number of other schools of thought, notably excluding institutional economics, various historical schools of economics, and Marxian economics, in addition to various other heterodox approaches to economics. Neoclassical economics is characterized by several assumptions common to many schools of economic thought. There is not complete agreement on what is meant by neoclassical economics, and the result is a wide range of neoclassical approaches to various problem areas and domains—ranging from neoclassical theories of labor to neoclassical theories of demographic changes. It was expressed by E. Roy Weintraub that neoclassical economics rests on three assumptions, although certain branches of neoclassical theory may have different approaches: From these three assumptions, neoclassical economists have built a structure to understand the allocation of scarce resources among alternative ends—in fact, understanding such allocation is often considered the definition of economics by neoclassical theorists. Here is how William Stanley Jevons presented "the problem of Economics": Given, a certain population, with various needs and powers of production, in possession of certain lands and other sources of material: required, the mode of employing their labour which will maximize the utility of their produce. From the basic assumptions of neoclassical economics comes a wide range of theories about various areas of economic activity. For example, profit maximization lies behind the neoclassical theory of the firm, while the derivation of demand curves leads to an understanding of consumer goods, and the supply curve allows an analysis of the factors of production. 
Utility maximization is the source for the neoclassical theory of consumption, the derivation of demand curves for consumer goods, and the derivation of labor supply curves and reservation demand. Market supply and demand are aggregated across firms and individuals. Their interactions determine equilibrium output and price. The market supply and demand for each factor of production is derived analogously to those for market final output to determine equilibrium income and the income distribution. Factor demand incorporates the marginal-productivity relationship of that factor in the output market. Neoclassical economics emphasizes equilibria, which are the solutions of agent maximization problems. Regularities in economies are explained by methodological individualism, the position that economic phenomena can be explained by aggregating over the behavior of agents. The emphasis is on microeconomics. Institutions, which might be considered as prior to and conditioning individual behavior, are de-emphasized. Economic subjectivism accompanies these emphases. See also general equilibrium. Classical economics, developed in the 18th and 19th centuries, included a value theory and distribution theory. The value of a product was thought to depend on the costs involved in producing that product. The explanation of costs in classical economics was simultaneously an explanation of distribution. A landlord received rent, workers received wages, and a capitalist tenant farmer received profits on their investment. This classic approach included the work of Adam Smith and David Ricardo. However, some economists gradually began emphasizing the perceived value of a good to the consumer. They proposed a theory that the value of a product was to be explained with differences in utility (usefulness) to the consumer. (In England, economists tended to conceptualize utility in keeping with the utilitarianism of Jeremy Bentham and later of John Stuart Mill.) 
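The mechanics sketched above, where demand is derived from utility maximization, supply from profit maximization, and price is set where the aggregated curves meet, can be illustrated with a toy model. Every functional form and number below is hypothetical, chosen only for tractability, and is not drawn from any author cited in the article:

```python
def individual_demand(price, alpha):
    # Consumer maximizes quasi-linear utility u(q) = alpha*ln(q) - price*q.
    # The first-order condition alpha/q = price gives demand q = alpha/price.
    return alpha / price

def individual_supply(price, cost):
    # Firm maximizes profit price*q - (cost/2)*q**2.
    # The first-order condition price = cost*q gives supply q = price/cost.
    return price / cost

def market_equilibrium(alphas, costs, lo=1e-6, hi=1e6, tol=1e-9):
    # Aggregate demand and supply across individuals, then bisect on
    # price until excess demand is (approximately) zero.
    def excess(p):
        demand = sum(individual_demand(p, a) for a in alphas)
        supply = sum(individual_supply(p, c) for c in costs)
        return demand - supply
    for _ in range(200):
        mid = (lo + hi) / 2
        if excess(mid) > 0:
            lo = mid  # demand exceeds supply: price must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Two consumers, two firms: aggregate demand 10/p meets aggregate
# supply 1.5p, so the clearing price is sqrt(10/1.5).
p_star = market_equilibrium(alphas=[4.0, 6.0], costs=[1.0, 2.0])
```

The point of the sketch is the neoclassical causal chain: individual optimization produces the curves, and their intersection, rather than any single agent, determines equilibrium price and quantity.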
The third step from political economy to economics was the introduction of marginalism and the proposition that economic actors made decisions based on margins. For example, a person decides to buy a second sandwich based on how full he or she is after the first one; a firm hires a new employee based on the expected increase in profits the employee will bring. This differs from the aggregate decision making of classical political economy in that it explains how vital goods such as water can be cheap, while luxuries can be expensive. The change in economic theory from classical to neoclassical economics has been called the "marginal revolution", although it has been argued that the process was slower than the term suggests. It is frequently dated from William Stanley Jevons's "Theory of Political Economy" (1871), Carl Menger's "Principles of Economics" (1871), and Léon Walras's "Elements of Pure Economics" (1874–1877). Historians of economics and economists have debated: In particular, Jevons saw his economics as an application and development of Jeremy Bentham's utilitarianism and never had a fully developed general equilibrium theory. Menger did not embrace this hedonic conception, explained diminishing marginal utility in terms of subjective prioritization of possible uses, and emphasized disequilibrium and the discrete; further, Menger objected to the use of mathematics in economics, while the other two modeled their theories after 19th-century mechanics. Jevons built on the hedonic conception of Bentham or of Mill, while Walras was more interested in the interaction of markets than in explaining the individual psyche. Alfred Marshall's textbook, "Principles of Economics" (1890), was the dominant textbook in England a generation later. Marshall's influence extended elsewhere; Italians would compliment Maffeo Pantaleoni by calling him the "Marshall of Italy". Marshall thought classical economics attempted to explain prices by the cost of production. 
He asserted that earlier marginalists went too far in correcting this imbalance by overemphasizing utility and demand. Marshall thought that "We might as reasonably dispute whether it is the upper or the under blade of a pair of scissors that cuts a piece of paper, as whether value is governed by utility or cost of production". Marshall explained price by the intersection of supply and demand curves. The introduction of different market "periods" was an important innovation of Marshall's: Marshall took supply and demand as stable functions and extended supply and demand explanations of prices to all runs. He argued supply was easier to vary in longer runs, and thus became a more important determinant of price in the very long run. An important change in neoclassical economics occurred around 1933. Joan Robinson and Edward H. Chamberlin, with the near-simultaneous publication of their respective books "The Economics of Imperfect Competition" (1933) and "The Theory of Monopolistic Competition" (1933), introduced models of imperfect competition. Theories of market forms and industrial organization grew out of this work. They also emphasized certain tools, such as the marginal revenue curve. Joan Robinson's work on imperfect competition, at least, was a response to certain problems of Marshallian partial equilibrium theory highlighted by Piero Sraffa. Anglo-American economists also responded to these problems by turning towards general equilibrium theory, developed on the European continent by Walras and Vilfredo Pareto. J. R. Hicks's "Value and Capital" (1939) was influential in introducing his English-speaking colleagues to these traditions. He, in turn, was influenced by the Austrian School economist Friedrich Hayek, who had moved to the London School of Economics, where Hicks then studied. These developments were accompanied by the introduction of new tools, such as indifference curves and the theory of ordinal utility. 
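Marshall's "scissors" and his market "periods" can be made concrete with linear curves. In this hypothetical sketch (the slopes and intercepts are invented for illustration), neither blade alone fixes the price; the intersection does, and a flatter (more elastic) supply curve stands in for a longer run, in which supply has had time to adjust and pulls price toward the cost of production:

```python
def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    # Demand: q = demand_intercept - demand_slope * p
    # Supply: q = supply_intercept + supply_slope * p
    # Setting quantities equal and solving for price:
    price = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    quantity = demand_intercept - demand_slope * price
    return price, quantity

# Short run: supply barely responds to price (slope 0.1), so demand
# conditions dominate and the price is high.
short_run = equilibrium(100, 1.0, 10, 0.1)

# Long run: supply is highly elastic (slope 10.0), so the same demand
# curve yields a much lower price, governed largely by supply.
long_run = equilibrium(100, 1.0, 10, 10.0)
```

With identical demand in both cases, only the supply slope changes, which is the sense in which supply "became a more important determinant of price" as the run lengthens.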
The level of mathematical sophistication of neoclassical economics increased. Paul Samuelson's "Foundations of Economic Analysis" (1947) contributed to this increase in mathematical modelling. The interwar period in American economics has been argued to have been pluralistic, with neoclassical economics and institutionalism competing for allegiance. Frank Knight, an early Chicago school economist, attempted to combine both schools. But this increase in mathematics was accompanied by greater dominance of neoclassical economics in Anglo-American universities after World War II. Some argue that outside political interventions, such as McCarthyism, and internal ideological bullying played an important role in this rise to dominance. Hicks' book "Value and Capital" had two main parts. The second, which was arguably not immediately influential, presented a model of temporary equilibrium. Hicks was influenced directly by Hayek's notion of intertemporal coordination, and his model was paralleled by earlier work by Lindahl. This was part of an abandonment of disaggregated long-run models. This trend probably reached its culmination with the Arrow–Debreu model of intertemporal equilibrium. The Arrow–Debreu model has canonical presentations in Gérard Debreu's "Theory of Value" (1959) and in Arrow and Hahn's "General Competitive Analysis" (1971). Many of these developments were against the backdrop of improvements in both econometrics, that is the ability to measure prices and changes in goods and services, as well as their aggregate quantities, and in the creation of macroeconomics, or the study of whole economies. The attempt to combine neoclassical microeconomics and Keynesian macroeconomics would lead to the neoclassical synthesis, which has been the dominant paradigm of economic reasoning in English-speaking countries since the 1950s. Hicks and Samuelson were, for example, instrumental in mainstreaming Keynesian economics. 
Macroeconomics influenced the neoclassical synthesis from the other direction, undermining foundations of classical economic theory such as Say's law, and assumptions about political economy such as the necessity for a hard-money standard. These developments are reflected in neoclassical theory by the search for the occurrence in markets of the equilibrium conditions of Pareto optimality and self-sustainability. Perhaps the best way to frame a criticism of neoclassical economics is in the terms offered by Leijonhufvud in the contention that "Instead of looking for an alternative to replace it, we should try to imagine an economic theory to transcend its limitations." The contention also points to the need to bring in empirical science, testing and re-testing neoclassical propositions in order to nudge the framework and theory toward a foundation of empirical reality; it is with such empirical reality that the limitations might be transcended. Leijonhufvud is speaking from the perspective of experimental economics; behavioral economics, too, uses experimental techniques, but also relies on surveys and other observations of what drives economic choice, likewise seeking ways to bring economic reality into the framework and theory. For overviews of the many empirical findings in both experimental and behavioral economics, some supporting neoclassical economics and many suggesting changes needed in the framework and theory, see Altman and Tomer. Also, for an overview of the empirical findings relating to conservation (and recycling) behavior, as in the notion of empathy conservation, see Lynne et al. Neoclassical framing and theory has a history of being unable to adequately explain choices related to the interdependence of a person with the natural system. Neoclassical economics is sometimes criticized for having a normative bias. 
In this view, it does not focus on explaining actual economies, but instead on describing a theoretical world in which Pareto optimality applies. Perhaps the strongest criticism lies in its disregard for the physical limits of the Earth and its ecosphere, which are the physical container of all human economies. This disregard becomes outright denial by neoclassical economists when limits are asserted, since accepting such limits creates fundamental contradictions with the foundational presumption that perpetual growth in the scale of the human economy is both possible and desirable. The disregard or denial of limits covers both resources and "waste sinks", the capacity to absorb human waste products and man-made toxins. Ecological economics sees interdependent travelers on a Spaceship Earth, a spaceship that has limits. Neoclassical economics, instead, sees each traveler as independent of every other traveler and of the natural systems that make travel on the Spaceship Earth possible, which are presumed (without empirical test) to be unlimited, or at best limited only by knowledge. The empirical reality that people are interdependent with one another and with nature (i.e. the Spaceship Earth systems) is also recognized in humanistic economics, Buddhist economics and metaeconomics. Neoclassical economics addresses the reality of interdependence through the notion of an externality, which is treated as only occasional and of no real consequence, in that the market can always resolve it: just change the property rights, privatizing the resource or good in question. Empirical reality points to the matter of interdependence being far more complex than can be fixed only by changing to private property rights, pointing to the essential need, pragmatically speaking, for a good mix of both private and public property, a major theme in institutional economics. The assumption that individuals act rationally may be viewed as ignoring important aspects of human behavior. 
Many see the "economic man" as being quite different from real people, the Econ different from the Human. Many economists, even contemporaries, have criticized this model of economic man, with empirical evidence (as noted, especially in behavioral economics) growing in support of representing a person as a Human rather than an Econ. Thorstein Veblen put it most sardonically, saying that neoclassical economics assumes a person to be: [A] lightning calculator of pleasures and pains, who oscillates like a homogeneous globule of desire of happiness under the impulse of stimuli that shift about the area, but leave him intact. As a result, neoclassical economics has extreme difficulty explaining such things as voting behavior, or someone running into a burning building to save a complete stranger, perhaps even perishing in the process. Such choices are clearly not much, if at all, in the self-interest. Such "non-rational" decision making has been examined deeply and widely in behavioral economics. Perhaps most importantly, behavioral economics has empirically demonstrated that while the Econ almost exclusively pursues self-interest, the Human pursues a dual interest: the ego-based self-interest and the empathy-based other-interest, shared with others yet internalized within the own-self. And, most importantly, it is quite rational to seek balance between self-interest and other-interest, even sacrificing a bit in the domain of self-interest in order to do so. Voting behavior, as well as running into a burning building, is rational in that it produces payoff in the realm of shared other-interest, as in the right-thing-to-do, which often requires a bit of sacrifice in the domain of self-interest. Rationality on this account is about maximizing a joint, non-separable and interdependent self-and-other-interest, which represents the own-interest. 
Maximizing own-interest generally means a bit of sacrifice in both the domain of self-interest and that of other-interest, with own-interest all about finding balance. The dual-interest analytical system now represents the analytical engine of metaeconomics, the "meta" pointing to bringing considerations of ethics and the moral dimension, the right-thing-to-do, back into the formal structure of neoclassical economics. The moral dimension was there at the beginning, in the moral philosophy of Adam Smith. It is also quite rational to seek a balance in the own-interest, with the moral dimension tempering the self-interest. Large corporations might perhaps come closer to the neoclassical ideal of profit maximization, but this is not necessarily viewed as desirable if it comes at the expense of neglecting wider social issues. The wider social issues are represented in the shared other-interest, while profit maximization is represented in the self-interest; balance is needed. Problems exist with making the neoclassical general equilibrium theory compatible with an economy that develops over time and includes capital goods. This was explored in a major debate in the 1960s—the "Cambridge capital controversy"—about the validity of neoclassical economics, with an emphasis on economic growth, capital, aggregate theory, and the marginal productivity theory of distribution. There were also internal attempts by neoclassical economists to extend the Arrow–Debreu model to disequilibrium investigations of stability and uniqueness. However, a result known as the Sonnenschein–Mantel–Debreu theorem suggests that the assumptions that must be made to ensure that equilibrium is stable and unique are quite restrictive. Neoclassical economics is also often seen as relying too heavily on complex mathematical models, such as those used in general equilibrium theory, without enough regard to whether these actually describe the real economy. 
Many see an attempt to model a system as complex as a modern economy by a mathematical model as unrealistic and doomed to failure. A famous answer to this criticism is Milton Friedman's claim that theories should be judged by their ability to predict events rather than by the realism of their assumptions. Mathematical models also include those in game theory, linear programming, and econometrics. Some see the mathematical models used in contemporary research in mainstream economics as having transcended neoclassical economics, while others disagree. Critics of neoclassical economics are divided into those who think that highly mathematical method is inherently wrong and those who think that mathematical method is potentially good even if contemporary methods have problems. In general, allegedly overly unrealistic assumptions are one of the most common criticisms of neoclassical economics. It is fair to say that many (but not all) of these criticisms can only be directed at a subset of the neoclassical models (for example, there are many neoclassical models where unregulated markets fail to achieve Pareto optimality, and there has recently been an increased interest in modeling non-rational decision making). Its disregard for social reality and its alleged role in helping elites widen the wealth gap and social inequality is also frequently criticized. It has been argued within the field of ecological economics that the neoclassical system is by nature dysfunctional. It considers the destruction of the natural world through the accelerating consumption of non-renewable resources, as well as the exhaustion of the "waste sinks" of the ecosphere, as mere "externalities." 
Such externalities, in turn, are viewed as occurring only occasionally and as easily rectified by shifting public property to private property: the market will resolve any externality, given the opportunity to do so, so there is no need for any kind of government, or any other kind of community, "intervention." The Spaceship Earth system is viewed as a subset of the human economy, and fully subject to control (which is essential in order to have independence). Neoclassical economics sees independence between the human economy and the spaceship, between each human and nature. Ecological economics points, instead, to the human economy as being embedded in the Spaceship Earth system, so everything is internal: it sees interdependence between each human and nature. In effect, there are no externalities, except for some material and energy exchange beyond the atmosphere of the spaceship. So a framework and theory is needed to transcend the limitation of the neoclassical presumption of independence, transcending the focus on only the ego-based self-interest of an independent person, in both consumption and production. The inherent interdependence of each person and nature, as well as of each person with every other person, is recognized in frameworks and theories that see the role of empathy in forming a shared other-interest in the outcomes on the spaceship. The essential need to consider empathy, in order to address the matter of achieving sustainability on this Spaceship Earth, is also becoming a theme in the natural and environmental sciences.
Naomi Wolf Naomi R. Wolf (born November 12, 1962) is an American liberal progressive feminist author, journalist, and former political advisor to Al Gore and Bill Clinton. With her first book, "The Beauty Myth" (1991), Wolf became a leading spokeswoman of what has been described as the third wave of the feminist movement. Such leading feminists as Gloria Steinem and Betty Friedan praised the work; others, including Camille Paglia and Christina Hoff Sommers, criticized it. Her later books include the 2007 bestseller "The End of America". Critics have challenged the quality and veracity of the scholarship in her books, including "Outrages" (2019); in that case, her serious misreading of court records led to the cancellation of the book's U.S. publication. Her career in journalism began in 1995 and has included topics such as abortion, the Occupy Wall Street movement, Edward Snowden and ISIS. She has written for media outlets such as "The Nation", "The New Republic", "The Guardian" and "The Huffington Post". Wolf was born in San Francisco to a Jewish family. Her mother is Deborah Goleman Wolf, an anthropologist and the author of "The Lesbian Community". Her father was Leonard Wolf, a Romanian-born gothic horror scholar at University of California, Berkeley and Yiddish translator. Leonard Wolf died from advanced Parkinson's disease on March 20, 2019. Wolf has a brother, Aaron, and a half-brother, Julius, from her father's earlier relationship; that relationship remained his secret until his daughter was in her 30s. She attended Lowell High School and debated in regional speech tournaments as a member of the Lowell Forensic Society. Wolf attended Yale University, receiving her Bachelor of Arts in English literature in 1984. From 1985 to 1987, she was a Rhodes Scholar at New College, Oxford. Her initial period at Oxford University was difficult for Wolf, as she experienced "raw sexism, overt snobbery and casual antisemitism". 
Her writing became so personal and subjective that her tutor advised against submitting her doctoral thesis. Wolf told interviewer Rachel Cooke, writing for "The Observer", in 2019: "My subject didn't exist. I wanted to write feminist theory, and I kept being told by the dons there was no such thing." Her feminist writing at this time formed the basis of her first book, "The Beauty Myth". Wolf ultimately returned to Oxford, completing her Doctor of Philosophy degree in English literature in 2015. Her thesis, supervised by Dr. Stefano Evangelista of Trinity College, formed the basis for her 2019 book "Outrages: Sex, Censorship, and the Criminalization of Love". Wolf was involved in Bill Clinton's 1996 re-election bid, brainstorming with the president's team about ways to reach female voters. During Al Gore's bid for the presidency in the 2000 election, Wolf was hired as a consultant to target female voters, reprising her role in the Clinton campaign. Wolf's ideas and participation in the Gore campaign generated considerable media coverage and criticism. According to a report by Michael Duffy in "Time", Wolf was paid a salary of $15,000 per month (reduced to $5,000 by November 1999) "in exchange for advice on everything from how to win the women's vote to shirt-and-tie combinations." This article was the original source of the assertion that Wolf was responsible for Gore's "three-buttoned, earth-toned look." In an interview with Melinda Henneberger in "The New York Times", Wolf said she had been appointed in January 1999 and denied ever advising Gore on his wardrobe. Wolf said she had mentioned the term "alpha male" only once in passing and that "[it] was just a truism, something the pundits had been saying for months, that the vice president is in a supportive role and the President is in an initiatory role ... I used those terms as shorthand in talking about the difference in their job descriptions". 
In 1991, Wolf gained international attention as a spokeswoman of third-wave feminism with the publication of her first book "The Beauty Myth", an international bestseller. It was named "one of the seventy most influential books of the twentieth century" by "The New York Times". She argues that "beauty" as a normative value is entirely socially constructed, and that the patriarchy determines the content of that construction with the objective of maintaining women's subjugation. Wolf posits the idea of an "iron-maiden", an intrinsically unattainable standard that is then used to punish women physically and psychologically for their failure to achieve and conform to it. Wolf criticized the fashion and beauty industries as exploitative of women, but added that the beauty myth extended into all areas of human functioning. Wolf writes that women should have "the choice to do whatever we want with our faces and bodies without being punished by an ideology that is using attitudes, economic pressure, and even legal judgments regarding women's appearance to undermine us psychologically and politically". Wolf argues that women were under assault by the "beauty myth" in five areas: work, religion, sex, violence, and hunger. Ultimately, Wolf argues for a relaxation of normative standards of beauty. In her introduction, Wolf positioned her argument against the concerns of second-wave feminists and offered her own analysis of that legacy. Although "The Beauty Myth" was a bestseller, it received mixed responses from feminists and the media. Second-wave feminist Germaine Greer wrote that "The Beauty Myth" was "the most important feminist publication since "The Female Eunuch"", and Gloria Steinem wrote, ""The Beauty Myth" is a smart, angry, insightful book, and a clarion call to freedom. Every woman should read it." British novelist Fay Weldon called the book "essential reading for the New Woman". 
Betty Friedan wrote in "Allure" magazine that ""The Beauty Myth" and the controversy it is eliciting could be a hopeful sign of a new surge of feminist consciousness." However, Camille Paglia, whose "Sexual Personae" was published in the same year as "The Beauty Myth", derided Wolf as unable to perform "historical analysis", and called her education "completely removed from reality." Her comments touched off a series of debates between Wolf and Paglia in the pages of "The New Republic". Likewise, Christina Hoff Sommers criticized Wolf for publishing the estimate that 150,000 women were dying every year from anorexia. Sommers states that she tracked down the source to the American Anorexia and Bulimia Association, which stated that it had been misquoted; the figure refers to sufferers, not fatalities. Wolf's citation for the incorrect figure came from a book by Brumberg, who referred to an American Anorexia and Bulimia Association newsletter and misquoted it. Wolf accepted the error and changed it in future editions. Sommers gave an estimate for the number of fatalities in 1990 as 100–400. The annual anorexia casualties in the US were estimated to be around 50 to 60 per year in the mid-1990s. In 1995, for an article in "The Independent on Sunday", British journalist Joan Smith recalled asking Wolf to explain her unsourced assertion in "The Beauty Myth" that the UK "has 3.5 million anorexics or bulimics (95 per cent of them female), with 6,000 new cases yearly". Wolf replied, according to Smith, that she had calculated the statistics from patients with eating disorders at one clinic. In "The New York Times", Caryn James lambasted the book as a "sloppily researched polemic as dismissible as a hackneyed adventure film ... Even by the standards of pop-cultural feminist studies, "The Beauty Myth" is a mess." She called the statistics that Wolf cited "shamefully secondhand and outdated."
In contrast, "The Washington Post" called the book "persuasive" and praised its "accumulated evidence". Caspar Schoemaker of the Netherlands Trimbos Institute published a paper in the academic journal "Eating Disorders" demonstrating that of the 23 statistics cited by Wolf in "The Beauty Myth", 18 were incorrect, with the numbers Wolf cited averaging eight times those in the sources she was citing. For example, Wolf wrote that 7.5% of girls and women have anorexia; the accurate figure is 0.065%. Revisiting "The Beauty Myth" in 2019 for "The New Republic", literary critic Maris Kreizman recalls that reading it as an undergraduate made her "world burst open." It "remains one of the most formative books in (Kreizman's) life." However, as she matured, Kreizman saw Wolf's books as "poorly argued tracts" that made "wilder and wilder assertions", even, in 2014, spreading a conspiracy theory that the beheadings of American journalists James Foley and Steven Sotloff by ISIS were "faked and staged." Kreizman "began to write (Wolf) off as a fringe character" despite the fact that she had "once informed my own feminism so deeply." In "Fire with Fire" (1993), Wolf writes on politics, female empowerment and women's sexual liberation. "The New York Times" assailed the work for its "dubious oversimplifications and highly debatable assertions" and its "disconcerting penchant for inflationary prose," nonetheless approving of Wolf's "efforts to articulate an accessible, pragmatic feminism, ... helping to replace strident dogma with common sense." The "Time" magazine reviewer Martha Duffy dismissed the book as "flawed," although she commented that Wolf was "an engaging raconteur" who was also "savvy about the role of TV – especially the Thomas-Hill hearings and daytime talk shows – in radicalizing women, including homemakers." She characterized the book as advocating an inclusive strain of feminism that welcomed abortion opponents. 
In the UK, feminist author Natasha Walter, writing in "The Independent", said that the book "has its faults, but compared with "The Beauty Myth" it has energy and spirit, and generosity too." Walter, however, criticized it for having a "narrow agenda" where "you will look in vain for much discussion of older women, of black women, of women with low incomes, of mothers." Characterizing Wolf as a "media star", Walter wrote: "She is particularly good, naturally, on the role of women in the media." "Promiscuities" (1997) reports on and analyzes the shifting patterns of contemporary adolescent sexuality. Wolf argues that literature is rife with examples of male coming-of-age stories, covered autobiographically by D.H. Lawrence, Tobias Wolff, J.D. Salinger and Ernest Hemingway, and covered misogynistically by Henry Miller, Philip Roth and Norman Mailer. Wolf insists, however, that female accounts of adolescent sexuality have been systematically suppressed. She adduces cross-cultural material to demonstrate that women have, across history, been celebrated as more carnal than men. Wolf also argues that women must reclaim the legitimacy of their own sexuality by shattering the polarization of women between virgin and whore. "Promiscuities" generally received negative reviews. In "The New York Times", Michiko Kakutani called Wolf a "frustratingly inept messenger: a sloppy thinker and incompetent writer. She tries in vain to pass off tired observations as radical "aperçus", subjective musings as generational truths, sappy suggestions as useful ideas". However, two days earlier in the "Times" Sunday edition, Courtney Weaver praised the book: "Anyone—particularly anyone who, like Ms. Wolf, was born in the 1960s—will have a very hard time putting down "Promiscuities". Told through a series of confessions, her book is a searing and thoroughly fascinating exploration of the complex wildlife of female sexuality and desire." 
In contrast, "The Library Journal" excoriated the work, writing, "Overgeneralization abounds as she attempts to apply the microcosmic events of this mostly white, middle-class, liberal milieu to a whole generation. ... There is a desperate defensiveness in the tone of this book which diminishes the force of her argument." "Misconceptions" (2001) examines pregnancy and childbirth. Most of the book is told through the prism of Wolf's personal experience of her first pregnancy. She describes the "vacuous impassivity" of the ultrasound technician who gives her the first glimpse of her new baby. Wolf laments her C-section and examines why the procedure is commonplace in the United States, advocating a return to midwifery. The second half of the book is anecdotal, focusing on inequalities between parents in child care. In her "New York Times" review, Claire Dederer suggested it was inappropriate to consider "Wolf as a political theorist, and instead call her a memoirist. She does her best writing when she's observing her own life." Her capability as a memoirist is not "self-indulgent. It seems vital, and in a sense radical, in the tradition of 1970's feminists who sought to speak to every aspect of women's lives." Wolf's "The Treehouse: Eccentric Wisdom from my Father on How to Live, Love, and See" (2005) is an account of her midlife-crisis attempt to reclaim her creative and poetic vision and revalue her father's love, and her father's force as an artist and a teacher. In "The End of America" (2007), Wolf takes a historical look at the rise of fascism, outlining 10 steps necessary for a fascist group (or government) to destroy the democratic character of a nation-state. The book details how this pattern was implemented in Nazi Germany, Fascist Italy, and elsewhere, and analyzes the emergence and application of all 10 steps in American political affairs since the September 11 attacks. 
Alex Beam wrote in "The New York Times": "In the book, Wolf insists that she is not equating [George W.] Bush with Hitler, nor the United States with Nazi Germany, then proceeds to do just that." Several years later, Mark Nuckols argued in "The Atlantic" that Wolf's supposed historical parallels between incidents from the era of the European dictators and modern America are based on a highly selective reading in which Wolf omits significant details and misuses her sources. For "The Daily Beast", Michael Moynihan characterized the book as "an astoundingly lazy piece of writing." "The End of America" was adapted for the screen as a documentary by filmmakers Annie Sundberg and Ricki Stern, best known for "The Devil Came on Horseback" and "The Trials of Darryl Hunt". It premiered in October 2008, and was favorably reviewed by Stephen Holden in "The New York Times", in "Variety" magazine, and by Nigel Andrews in the "Financial Times". Wolf returned to this general theme in an article in 2014 considering how modern Western women, born in inclusive, egalitarian liberal democracies, are assuming positions of leadership in neofascist political movements. "Give Me Liberty: A Handbook for American Revolutionaries" (2008) was written as a sequel to "The End of America: Letter of Warning to a Young Patriot." In the book, Wolf looks at times and places in history where citizens were faced with the closing of an open society and successfully fought back. Published in 2012 on the topic of the vagina, "Vagina: A New Biography" was much criticized, especially by feminist authors. Katie Roiphe described it as "ludicrous" in "Slate": "I doubt the most brilliant novelist in the world could have created a more skewering satire of Naomi Wolf's career than her latest book." In "The Nation", Katha Pollitt considered it a "silly book" containing "much dubious neuroscience and much foolishness." It becomes "loopier as it goes on. 
We learn that women think and feel through their vagina, which can 'grieve' and feel insulted." Toni Bentley wrote in "The New York Times Book Review" that Wolf used "shoddy research methodology", while with "her graceless writing, Wolf opens herself to ridicule on virtually every page." In "The New York Review of Books", Zoë Heller wrote that the book "offers an unusually clear insight into the workings of her mystic feminist philosophy". Part of the book concerns the history of the vagina's representation, but is "full of childlike generalizations" and her understanding of science "is pretty shaky too". "Los Angeles Times" columnist Meghan Daum decried the book's "painful" writing and its "hoary ideas about how women think." In "The New York Observer", Nina Burleigh suggested that critics of the book were so vehement "because (a) their editors handed the book to them for review because they thought it was an Important Feminist Book when it's actually slight and (b) there's a grain of truth in what she's trying to say." In response to the criticism, Wolf stated in a television interview: [A]nything that shows documentation of the brain and vagina connection is going to alarm some feminists ... also feminism has kind of retreated into the academy and sort of embraced the idea that all gender is socially constructed and so here is a book that is actually looking at science ... though there has been some criticisms of the book from some feminists ... who say, well you can't look at the science because that means we have to grapple with the science ... to me the feminist task of creating a just world isn't changed at all by this fascinating neuroscience that shows some differences between men and women. Wolf's book "Outrages: Sex, Censorship, and the Criminalization of Love" was published in 2019, a work based on the 2015 D.Phil. thesis she had completed under the supervision of Trinity College, Oxford literary scholar Dr. Stefano-Maria Evangelista. 
In the book, she studies the repression of homosexuality in relation to attitudes towards divorce and prostitution, and also in relation to the censorship of books. The book was published in the UK in May 2019 by Virago Press. On June 12, 2019, "Outrages" was named to the "O, The Oprah Magazine"s "The 32 Best Books by Women of Summer 2019" list. The following day, the U.S. publisher recalled all copies from U.S. bookstores. An error in a central tenet of the book — a misunderstanding of the term "death recorded" — was identified in a 2019 BBC radio interview with broadcaster and author Matthew Sweet. He cited a website for the Old Bailey Criminal Court, the same site which Wolf had referred to as one of her sources earlier in the interview. Sweet stated: "'Death Recorded' ... this is the definition I'm reading ... the definition from the Old Bailey website." He challenged other points of the book, to which Wolf replied: "I was going by the Old Bailey Records and Regional Crime tables." Sweet then interrupted her: "Well, that's how I got this, through that same sort of, uh, that same portal!" Reviewers have described other errors of scholarship in the work. Wolf appeared at the Hay Festival, Wales, in late May 2019, a few days after her exchange with Matthew Sweet, where she defended her book and said she had already corrected the error, although, as of October 2019, she had yet to do so. She stated at an event in Manhattan in June that she was not embarrassed by the correction, but rather felt grateful to Sweet for it. On October 18, 2019, it became known that the release of the book by Houghton Mifflin Harcourt in the United States was being canceled. Wolf expressed the hope that the book would still be published in the US. 
In an October 1995 article for "The New Republic", Wolf was critical of contemporary pro-choice positions, arguing that the movement had "developed a lexicon of dehumanization", and she urged feminists to accept abortion as a form of homicide and defend the procedure within the ambiguity of this moral conundrum. She continued, "Abortion should be legal; it is sometimes even necessary. Sometimes the mother must be able to decide that the fetus, in its full humanity, must die." Wolf concluded by speculating that in a world of "real gender equality," passionate feminists "might well hold candlelight vigils at abortion clinics, standing shoulder to shoulder with the doctors who work there, commemorating and saying goodbye to the dead." In an article for "New York" magazine on the subtle manipulation of George W. Bush's image among women, Wolf wrote in 2005: "Abortion is an issue not of "Ms." Magazine-style fanaticism or suicidal Republican religious reaction, but a complex issue." Wolf suggested in a 2003 article for "New York" magazine that the ubiquity of internet pornography tends to enervate the sexual attraction of men toward typical real women. She writes, "The onslaught of porn is responsible for deadening male libido in relation to real women, and leading men to see fewer and fewer women as 'porn-worthy.'" Far from having to fend off porn-crazed young men, according to Wolf, young women are worrying that as mere flesh and blood, they can scarcely get, let alone hold, their attention. Wolf advocated abstaining from porn not on moral grounds, but because "greater supply of the stimulant equals diminished capacity." Wolf has commented about the dress required of women living in Muslim countries. 
In "The Sydney Morning Herald" in August 2008, she wrote on the subject. In the January 2013 issue of "The Atlantic", law and business professor Mark Nuckols wrote: "In her various books, articles, and public speeches, Wolf has demonstrated recurring disregard for the historical record and consistently mutilated the truth with selective and ultimately deceptive use of her sources." He further stated: "[W]hen she distorts facts to advance her political agenda, she dishonors the victims of history and poisons present-day public discourse about issues of vital importance to a free society." Nuckols argued that Wolf "has for many years now been claiming that a fascist coup in America is imminent. ... [I]n "The Guardian" she alleged, with no substantiation, that the U.S. government and big American banks are conspiring to impose a 'totally integrated corporate-state repression of dissent'." "Vox" journalist Max Fisher urged Wolf's readers "to understand the distinction between her earlier work, which rose on its merits, and her newer conspiracy theories, which are unhinged, damaging, and dangerous." Charles C. W. Cooke wrote in the "National Review Online": Over the last eight years, Naomi Wolf has written hysterically about coups and about vaginas and about little else besides. She has repeatedly insisted that the country is on the verge of martial law, and transmogrified every threat—both pronounced and overhyped—into a government-led plot to establish a dictatorship. She has made prediction after prediction that has simply not come to pass. Hers are not sober and sensible forecasts of runaway human nature, institutional atrophy, and constitutional decline, but psychedelic fever-dreams that are more typically suited to the "InfoWars" crowd. Under the headline "Naomi Wolf Went Off the Deep End Long Ago", Aaron Goldstein in "The American Spectator" advised, "Her words must be taken not just with a grain of salt, but a full shaker's worth." 
Shortly after the WikiLeaks founder Julian Assange was arrested in 2010, she wrote in an article for "The Huffington Post" that the allegations made against him by his two reputed victims amounted to no more than bad manners from a boyfriend. His accusers, she later wrote in several contexts, were working for the CIA, and Assange had been falsely incriminated. On December 20, 2010, "Democracy Now!" featured a debate between Wolf and Jaclyn Friedman on the Assange case. According to Wolf, the alleged victims should have said no; she asserted that they had consented to having sex with him, and said the claims were politically motivated and demeaned the cause of legitimate rape victims. In a 2011 "Guardian" article she objected to Assange's two accusers having their anonymity preserved. In response, Katha Pollitt wrote in "The Nation" that the "point is a little bizarre: doesn't Wolf realize that anonymity applies only to the media? Everyone in the justice system knows who the complainants are." On October 18, 2011, Wolf was arrested and detained in New York during the Occupy Wall Street protests, having ignored police warnings not to remain on the street in front of a building. Wolf spent about 30 minutes in a cell. She disputed the NYPD's interpretation of applicable laws: "I was taken into custody for disobeying an unlawful order. The issue is that I actually know New York City permit law ... I didn't choose to get myself arrested. I chose to obey the law and that didn't protect me." A month later, Wolf argued in "The Guardian", citing leaked documents, that attacks on the Occupy movement were a coordinated plot, orchestrated by federal law enforcement agencies. Those leaks, she alleged, showed that the FBI was privately treating OWS as a terrorist threat, in contrast to its public assertions acknowledging it as a peaceful organization. The response to this article ranged from praise to criticism of Wolf for being overly speculative and creating a "conspiracy theory". 
Wolf responded that there was ample evidence for her argument, and proceeded to review the information available to her at the time of the article, and what she alleged was new evidence since that time. Imani Gandy of Balloon Juice wrote that "nothing substantiates Wolf's claims", that "Wolf's article has no factual basis whatsoever and is, therefore, a journalistic failure of the highest order" and that "it was incumbent upon (Wolf) to fully research her claims and to provide facts to back them up." Corey Robin, a political theorist, journalist, and associate professor of political science at Brooklyn College and the Graduate Center of the City University of New York, stated on his blog: "The reason Wolf gets her facts wrong is that she's got her theory wrong." In early 2012, WikiLeaks began publishing the Global Intelligence Files, a trove of e-mails obtained via a hack by Anonymous and Jeremy Hammond. Among them was an email with an official Department of Homeland Security document from October 2011 attached. It indicated that DHS was closely watching Occupy, and concluded, "While the peaceful nature of the protests has served so far to mitigate their impact, larger numbers and support from groups such as Anonymous substantially increase the risk for potential incidents and enhance the potential security risk to critical infrastructure." In late December 2012, FBI documents released following an FOIA request from the Partnership for Civil Justice Fund revealed that the FBI used counterterrorism agents and other resources to extensively monitor the national Occupy movement. The documents contained no references to agency personnel covertly infiltrating Occupy branches, but did indicate that the FBI gathered information from police departments and other law enforcement agencies relating to planned protests. 
Additionally, the blog Techdirt reported that the documents disclosed a plot by unnamed parties "to murder OWS leadership in Texas" but that "the FBI never bothered to inform the targets of the threats against their lives." In a December 2012 article for "The Guardian", Wolf wrote about the documents. "Mother Jones" claimed that none of the documents revealed efforts by federal law enforcement agencies to disband the Occupy camps, and that the documents did not provide much evidence that federal officials attempted to suppress protesters' free speech rights. It was, said "Mother Jones", "a far cry from Wolf's contention." In June 2013, "New York" magazine reported that Wolf, in a recent Facebook post, had expressed her "creeping concern" that NSA leaker Edward Snowden "is not who he purports to be, and that the motivations involved in the story may be more complex than they appear to be." Wolf was similarly skeptical of Snowden's "very pretty pole-dancing Facebooking girlfriend who appeared for, well, no reason in the media coverage ... and who keeps leaking commentary, so her picture can be recycled in the press." She pondered whether he was planted by "the Police State". Wolf responded on her website: "I do find a great deal of media/blog discussion about serious questions such as those I raised, questions that relate to querying some sources of news stories, and their potential relationship to intelligence agencies or to other agendas that may not coincide with the overt narrative, to be extraordinarily ill-informed and naive." Specifically regarding Snowden, she wrote, "Why should it be seen as bizarre to wonder, if there are some potential red flags—the key term is 'wonder'—if a former NSA spy turned apparent whistleblower might possibly still be—working for the same people he was working for before?" She was accused by the "Salon" website of making factual errors and misreadings. 
In a series of Facebook postings in October 2014, Wolf questioned the authenticity of videos purporting to show beheadings of two American journalists and two Britons by the Islamic State, implying that they had been staged by the U.S. government and that the victims and their parents were actors. Wolf also charged that the U.S. was dispatching military troops not to assist in treating the Ebola virus epidemic in West Africa, but to carry the disease back home to justify a military takeover of America. She further said that the Scottish independence referendum, in which Scots voted to remain in the United Kingdom, was faked. Speaking about this at a demonstration in Glasgow on October 12, Wolf said, "I truly believe it was rigged." Responding to such criticism, Wolf said, "All the people who are attacking me right now for 'conspiracy theories' have no idea what they are talking about ... people who assume the dominant narrative MUST BE TRUE and the dominant reasons MUST BE REAL are not experienced in how that world works." To her nearly 100,000 Facebook followers, Wolf maintained, "I stand by what I wrote." However, in a later Facebook post, Wolf retracted her statement: "I am not asserting that the ISIS videos have been staged", she wrote. "I certainly sincerely apologize if one of my posts was insensitively worded. I have taken that one down. ... I am not saying the ISIS beheading videos are not authentic. I am not saying they are not records of terrible atrocities. I am saying that they are not yet independently confirmed by two sources as authentic, which any Journalism School teaches, and the single source for several of them, SITE, which received half a million dollars in government funding in 2004, and which is the only source cited for several, has conflicts of interest that should be disclosed to readers of news outlets." 
Max Fisher commented that "the videos were widely distributed on open-source jihadist online outlets" while the "Maryland-based nonprofit SITE monitors extremist social media." Wolf deleted her original Facebook posts. Wolf's first marriage was to journalist David Shipley, then an editor at "The New York Times". The couple had two children, a son and daughter. Wolf and Shipley divorced in 2005. On 23 November 2018, Wolf married Brian William O'Shea, a disabled U.S. Army veteran, private detective, and owner of Striker Pierce Investigations. According to a "New York Times" article published in November 2018, Wolf and O'Shea met in 2014 because of threats made against Wolf after her reporting on human rights violations in the Middle East. The couple live in New York City. In 2004, in an article for "New York" magazine, Wolf accused literary scholar Harold Bloom of a "sexual encroachment" in late fall 1983, saying he had touched her inner thigh. She said that what she alleged Bloom did was not harassment, either legally or emotionally, and she did not think herself a "victim", but that she had harbored this secret for 21 years. Explaining why she had finally gone public with the charges, Wolf wrote, I began, nearly a year ago, to try—privately—to start a conversation with my alma mater that would reassure me that steps had been taken in the ensuing years to ensure that unwanted sexual advances of this sort weren't still occurring. I expected Yale to be responsive. After nine months and many calls and e-mails, I was shocked to conclude that the atmosphere of collusion that had helped to keep me quiet twenty years ago was still intact—as secretive as a Masonic lodge. Sexual encroachment in an educational context or a workplace is, most seriously, a corruption of meritocracy; it is in this sense parallel to bribery. I was not traumatized personally, but my educational experience was corrupted. 
If we rephrase sexual transgression in school and work as a civil-rights and civil-society issue, everything becomes less emotional, less personal. If we see this as a systemic corruption issue, then when people bring allegations, the focus will be on whether the institution has been damaged in its larger mission. In "Slate" magazine around the time the allegations against Bloom first surfaced, Meghan O'Rourke wrote that Wolf generalized about sexual assault at Yale on the basis of her alleged personal experience. Moreover, O'Rourke commented that, despite Wolf's assertion that sexual assault existed at Yale, Wolf did not interview any Yale students for her story. In addition, O'Rourke wrote, "She jumps through verbal hoops to make it clear she was not 'personally traumatized,' yet she spends paragraphs describing the incident in precisely those terms." O'Rourke wrote that, despite Wolf's claim that her educational experience was corrupted, "(s)he neglects to mention that she later was awarded a Rhodes (scholarship)." O'Rourke concluded that Wolf's "gaps and imprecision" in the "New York" article "give fodder to skeptics who think sexual harassment charges are often just a form of hysteria." Separately, a formal complaint was filed with the U.S. Department of Education Office for Civil Rights on March 15, 2011, by 16 current and former Yale students—12 female and 4 male—describing a sexually hostile environment at Yale. A federal investigation of Yale University began in March 2011 in response to the complaints. Wolf stated on CBS's "The Early Show" in April: "Yale has been systematically covering up much more serious crimes than the ones that can be easily identified." More specifically, she alleged "they use the sexual harassment grievance procedure in a very cynical way, purporting to be supporting victims, but actually using a process to stonewall victims, to isolate them, and to protect the university."
Yale settled the federal complaint in June 2012, acknowledging "inadequacies" but not facing "disciplinary action with the understanding that it keeps in place policy changes instituted after the complaint was filed. The school (was) required to report on its progress to the Office of Civil Rights until May, 2014." In January 2018, Wolf accused Yale officials of blocking her from filing a formal grievance against Bloom. She told "The New York Times" that she had attempted to file the complaint in 2015 with Yale's University-Wide Committee on Sexual Misconduct, but that the university had refused to accept it. On January 16, 2018, Wolf said, she determined to see Yale's provost, Ben Polak, in another attempt to present her case. "As she documented on Twitter," the newspaper reported, "she brought a suitcase and a sleeping bag, because she said she did not know how long she would have to stay. When she arrived at the provost's office, she said, security guards prevented her from entering any elevators. Eventually, she said, Aley Menon, the secretary of the sexual misconduct committee, appeared and they met in the committee's offices for an hour, during which she gave Ms. Menon a copy of her complaint." This was reported and confirmed by Norman Vanamee, who apparently met Wolf at Yale that morning. In "Town & Country" magazine in January 2018, Vanamee returned to the story and wrote, "Yale University has a 93-person police department, and, after the guard called for backup, three of its armed and uniformed officers appeared and stationed themselves between Wolf and the elevator bank." During an interview for "Time" magazine in spring 2015, Bloom denied ever being indoors with "this person" whom he referred to as "Dracula's daughter."
https://en.wikipedia.org/wiki?curid=21636
New Year New Year is the time or day at which a new calendar year begins and the calendar's year count increments by one. Many cultures celebrate the event in some manner, and the first day of January is often marked as a national holiday. In the Gregorian calendar, the most widely used calendar system today, New Year occurs on January 1 (New Year's Day). This was also the first day of the year in the original Julian calendar and of the Roman calendar (after 153 BC). During the Middle Ages in western Europe, while the Julian calendar was still in use, authorities moved New Year's Day, depending upon locale, to one of several other days, including March 1, March 25, Easter, September 1, and December 25. Beginning in 1582, the adoption of the Gregorian calendar has meant that many national or local dates in the Western world and beyond have changed to using one fixed date for New Year's Day, January 1. Other cultures observe their traditional or religious New Year's Day according to their own customs, sometimes in addition to a (Gregorian) civil calendar. Chinese New Year, the Islamic New Year, the traditional Japanese New Year and the Jewish New Year are among the better-known examples. India and other countries continue to celebrate New Year on different dates. The new year of many South and Southeast Asian calendars falls between April 13 and 15, marking the beginning of spring. The early development of the Christian liturgical year coincided with the Roman Empire (east and west), and later the Byzantine Empire, both of which employed a taxation system labeled the Indiction, the years for which began on September 1. This timing may account for the ancient church's establishment of September 1 as the beginning of the liturgical year, despite the official Roman New Year's Day of January 1 in the Julian calendar, because the indiction was the principal means for counting years in the empires, apart from the reigns of the Emperors.
The September 1 date prevailed throughout all of Christendom for many centuries, until subsequent divisions eventually produced revisions in some places. After the sack of Rome in 410, communications and travel between east and west deteriorated. Liturgical developments in Rome and Constantinople did not always match, although a rigid adherence to form was never mandated in the church. Nevertheless, the principal points of development were maintained between east and west. The Roman and Constantinopolitan liturgical calendars remained compatible even after the East-West Schism in 1054. Separations between the Roman Catholic ecclesiastical year and Eastern Orthodox liturgical calendar grew only over several centuries' time. During those intervening centuries, the Roman Catholic ecclesiastical year was moved to the first day of Advent, the Sunday nearest to St. Andrew's Day (November 30). According to the Latin Rite of the Catholic Church, the liturgical year begins at 4:00 PM on the Saturday preceding the fourth Sunday prior to December 25 (between November 26 and December 2). By the time of the Reformation (early 16th century), the Roman Catholic general calendar provided the initial basis for the calendars for the liturgically-oriented Protestants, including the Anglican and Lutheran Churches, who inherited this observation of the liturgical new year. The present-day Eastern Orthodox liturgical calendar is the virtual culmination of the ancient eastern development cycle, though it includes later additions based on subsequent history and lives of saints. It still begins on September 1, proceeding annually into the Nativity of the Theotokos (September 8) and Exaltation of the Cross (September 14) to the celebration of Nativity of Christ (Christmas), through his death and resurrection (Pascha/Easter), to his Ascension and the Dormition of the Theotokos ("falling asleep" of the Virgin Mary, August 15). This last feast is known in the Roman Catholic church as the Assumption.
The dating of "September 1" is according to the "new" (revised) Julian calendar or the "old" (standard) Julian calendar, depending on which is used by a particular Orthodox Church. Hence, it may fall on 1 September on the civil calendar, or on 14 September (between 1900 and 2099 inclusive). The Coptic and Ethiopian liturgical calendars are unrelated to these systems but instead follow the Alexandrian calendar, which fixed the wandering ancient Egyptian calendar to the Julian year. Their New Year celebrations, Neyrouz and Enkutatash, were fixed at a point in the Sothic cycle close to the Indiction; between the years 1900 and 2100, they fall on September 11 during most years and September 12 in the years before a leap year. During the Roman Republic and the Roman Empire, years began on the date on which each consul first entered office. This was probably May 1 before 222 BC, March 15 from 222 BC to 154 BC, and January 1 from 153 BC. In 45 BC, when Julius Caesar's new Julian calendar took effect, the Senate fixed January 1 as the first day of the year. At that time, this was the date on which those who were to hold civil office assumed their official position, and it was also the traditional annual date for the convening of the Roman Senate. This civil new year remained in effect throughout the Roman Empire, east and west, during its lifetime and well after, wherever the Julian calendar continued in use. In England, the Angle, Saxon, and Viking invasions of the fifth through tenth centuries plunged the region back into pre-history for a time. While the reintroduction of Christianity brought the Julian calendar with it, its use was primarily in the service of the church to begin with. After William the Conqueror became king in 1066, he ordered that January 1 be re-established as the civil New Year. Later, however, England and Scotland joined much of Europe to celebrate the New Year on March 25.
In the Middle Ages in Europe a number of significant feast days in the ecclesiastical calendar of the Roman Catholic Church came to be used as the beginning of the Julian year: Southward equinox day (usually September 22) was "New Year's Day" in the French Republican Calendar, which was in use from 1793 to 1805. This was "primidi Vendémiaire", the first day of the first month. It took quite a long time before January 1 again became the universal or standard start of the civil year. The years of adoption of 1 January as the new year are as follows: March 1 was the first day of the numbered year in the Republic of Venice until its destruction in 1797, and in Russia from 988 until 1492 (Anno Mundi 7000 in the Byzantine calendar). September 1 was used in Russia from 1492 (A.M. 7000) until the adoption of the Anno Domini notation in 1700 via a December 1699 decree of Tsar Peter I. Because of the division of the globe into time zones, the new year moves progressively around the globe as the start of the day ushers in the New Year. The first time zone to usher in the New Year, just west of the International Date Line, is located in the Line Islands, a part of the nation of Kiribati, and has a time zone 14 hours ahead of UTC. All other time zones are 1 to 25 hours behind, most in the previous day (December 31); on American Samoa and Midway, it is still 11 PM on December 30. These are among the last inhabited places to observe New Year. However, uninhabited outlying U.S. territories Howland Island and Baker Island are designated as lying within the time zone 12 hours behind UTC, the last places on earth to see the arrival of January 1. These small coral islands are found about midway between Hawaii and Australia, about 1,000 miles west of the Line Islands. 
This is because the International Date Line is a composite of local time zone arrangements, which winds through the Pacific Ocean, allowing each locale to remain most closely connected in time with the nearest, largest, or most convenient political and economic locales with which each associates. By the time Howland Island sees the new year, it is 2 AM on January 2 in the Line Islands of Kiribati.
https://en.wikipedia.org/wiki?curid=21637
Northern Territory The Northern Territory (NT; formally the Northern Territory of Australia) is an Australian territory in the central and central northern regions of Australia. It shares borders with Western Australia to the west (129th meridian east), South Australia to the south (26th parallel south), and Queensland to the east (138th meridian east). To the north, the territory looks out to the Timor Sea, the Arafura Sea and the Gulf of Carpentaria, including Western New Guinea and other islands of the Indonesian archipelago. The NT covers , making it the third-largest Australian federal division, and the 11th-largest country subdivision in the world. It is sparsely populated, with a population of only 244,761, fewer than half as many people as Tasmania. The archaeological history of the Northern Territory begins over 40,000 years ago when Indigenous Australians settled the region. Makassan traders began trading with the indigenous people of the Northern Territory for trepang from at least the 18th century onwards. The coast of the territory was first seen by Europeans in the 17th century. The British were the first Europeans to attempt to settle the coastal regions. After three failed attempts to establish a settlement (1824–28, 1838–49, and 1864–66), success was achieved in 1869 with the establishment of a settlement at Port Darwin. Today the economy is based on tourism, especially Kakadu National Park in the Top End and the Uluṟu-Kata Tjuṯa National Park (Ayers Rock) in central Australia, and mining. The capital and largest city is Darwin. The population is concentrated in coastal regions and along the Stuart Highway. The other major settlements are (in order of size) Palmerston, Alice Springs, Katherine, Nhulunbuy and Tennant Creek. Residents of the Northern Territory are often known simply as "Territorians" and fully as "Northern Territorians", or more informally as "Top Enders" and "Centralians".
Indigenous Australians have lived in the present area of the Northern Territory for at least 65,000 years, and extensive seasonal trade links existed between them and the peoples of what is now Indonesia for at least five centuries. With the coming of the British, there were four early attempts to settle the harsh environment of the northern coast, of which three failed in starvation and despair. The land now occupied by the Northern Territory was part of colonial New South Wales from 1825 to 1863, except for a brief time from February to December 1846, when it was part of the short-lived colony of North Australia. The Northern Territory was part of South Australia from 1863 to 1911. Under the administration of colonial South Australia, the overland telegraph was constructed between 1870 and 1872. From its establishment in 1869, the Port of Darwin was the Territory's major supply port for many decades. A railway was built between Palmerston and Pine Creek between 1883 and 1889. The economic pattern of cattle raising and mining was established so that by 1911 there were 513,000 cattle. Victoria River Downs was at one time the largest cattle station in the world. Gold was found at Grove Hill in 1872 and at Pine Creek, Brocks Creek and Burundi, and copper was found at Daly River. On 1 January 1911, a decade after federation, the Northern Territory was separated from South Australia and transferred to federal control. Alfred Deakin opined at this time "To me the question has been not so much commercial as national, first, second, third and last. Either we must accomplish the peopling of the northern territory or submit to its transfer to some other nation." In late 1912 there was growing sentiment that the name "Northern Territory" was unsatisfactory. The names "Kingsland" (after King George V and to correspond with Queensland), "Centralia" and "Territoria" were proposed, with Kingsland becoming the preferred choice in 1913. However, the name change never went ahead.
For a brief time between 1927 and 1931 the Northern Territory was divided into North Australia and Central Australia at the 20th parallel of south latitude. Soon after this time, parts of the Northern Territory were considered in the Kimberley Plan as a possible site for the establishment of a Jewish homeland, dubbed the "Unpromised Land". During World War II, most of the Top End was placed under military government. This is the only time since Federation that part of an Australian state or territory has been under military control. After the war, control of the entire area was handed back to the Commonwealth. The Bombing of Darwin occurred on 19 February 1942. It was the largest single attack ever mounted by a foreign power on Australia. Evidence of Darwin's World War II history is found at a variety of preserved sites in and around the city, including ammunition bunkers, airstrips, oil tunnels and museums. The port was damaged in the 1942 Japanese air raids. It was subsequently restored. In the late 1960s improved roads in adjoining States linking with the territory, port delays and rapid economic development led to uncertainty in port and regional infrastructure development. As a result of the Commission of Enquiry established by the Administrator, port working arrangements were changed, berth investment deferred and a port masterplan prepared. Extension of rail transport was then not considered because of low freight volumes. Indigenous Australians had struggled for rights to fair wages and land. An important event in this struggle was the strike and walk off by the Gurindji people at Wave Hill Cattle Station in 1966. The federal government of Gough Whitlam set up the Woodward Royal Commission in February 1973 to enquire into how land rights might be achieved in the Northern Territory.
Justice Woodward's first report in July 1973 recommended that a Central Land Council and a Northern Land Council be established to present to him the views of Aboriginal people. In response to the report of the Royal Commission a Land Rights Bill was drafted, but the Whitlam Government was dismissed before it was passed. The Aboriginal Land Rights (Northern Territory) Act 1976 was eventually passed by the Fraser government on 16 December 1976 and began operation on 26 January 1977. In 1974, from Christmas Eve to Christmas Day, Darwin was devastated by tropical Cyclone Tracy. Cyclone Tracy killed 71 people, caused A$837 million in damage (1974 dollars), or approximately A$6.85 billion (2018 dollars), and destroyed more than 70 per cent of Darwin's buildings, including 80 per cent of houses. Tracy left more than 41,000 out of the 47,000 inhabitants of the city homeless. The city was rebuilt with much-improved construction codes and is a modern, landscaped metropolis today. In 1978 the territory was granted responsible government, with a Legislative Assembly headed by a chief minister. The territory also publishes official notices in its own "Government Gazette". The administrator of the Northern Territory is an official acting as the Queen's "indirect" representative in the territory. During 1995–96 the Northern Territory was briefly one of the few places in the world with legal voluntary euthanasia, until the Federal Parliament overturned the legislation. Before the overriding legislation was enacted, four people used the law, supported by Dr. Philip Nitschke. There are many very small settlements scattered across the territory, but the larger population centres are located on the single paved road that links Darwin to southern Australia, the Stuart Highway, known to locals simply as "the track".
The Northern Territory is home to two spectacular natural rock formations, Uluru (Ayers Rock) and Kata Tjuta (The Olgas), which are sacred to the local Aboriginal peoples and which have become major tourist attractions. The northern portion of the territory is principally tropical savannas, composed of several distinct ecoregions – Arnhem Land tropical savanna, Carpentaria tropical savanna, Kimberley tropical savanna, Victoria Plains tropical savanna, and Mitchell Grass Downs. The southern portion of the territory is covered in deserts and xeric shrublands, including the Great Sandy-Tanami desert, Simpson Desert, and Central Ranges xeric scrub. In the northern part of the territory lies Kakadu National Park, which features extensive wetlands and native wildlife. To the north of that lies the Arafura Sea, and to the east lies Arnhem Land, whose regional centre is Maningrida on the Liverpool River delta. There is an extensive series of river systems in the Northern Territory. These rivers include: the Alligator Rivers, Daly River, Finke River, McArthur River, Roper River, Todd River and Victoria River. The Hay River is a river south-west of Alice Springs, with the Marshall River, Arthur Creek, Camel Creek and Bore Creek flowing into it. The Northern Territory has two distinctive climate zones. The northern end, including Darwin, has a tropical climate with high humidity and two seasons, the wet (October to April) and dry season (May to September). During the dry season nearly every day is warm and sunny, and afternoon humidity averages around 30%. There is very little rainfall between May and September. In the coolest months of June and July, the daily minimum temperature may dip as low as , but very rarely lower, and frost has never been recorded. The wet season is associated with tropical cyclones and monsoon rains. 
The majority of rainfall occurs between December and March (the southern hemisphere summer), when thunderstorms are common and afternoon relative humidity averages over 70% during the wettest months. On average more than of rain falls in the north. Rainfall is highest in north-west coastal areas, where rainfall averages from . The central region is the desert centre of the country, which includes Alice Springs and Uluru (Ayers Rock), and is semi-arid with little rain usually falling during the hottest months from October to March. Seasons are more distinct in central Australia, with very hot summers and cool winters. Frost is recorded a few times a year. The region receives less than of rain per year. The highest temperature recorded in the territory was at Finke on 1 and 2 January 1960. The lowest temperature was at Alice Springs on 17 July 1976. The Northern Territory Parliament is one of the three unicameral parliaments in the country. Based on the Westminster System, it consists of the Northern Territory Legislative Assembly which was created in 1974, replacing the Northern Territory Legislative Council. It also produces the Northern Territory of Australia Government Gazette. The Northern Territory Legislative Council was the partly elected governing body from 1947 until its replacement by the fully elected Northern Territory Legislative Assembly in 1974. The total enrolment for the 1947 election was 4,443. The Northern Territory was split into five electorates: Darwin, Alice Springs, Tennant Creek, Batchelor, and Stuart. While this assembly exercises powers similar to those of the parliaments of the states of Australia, it does so by legislated devolution of powers from the Commonwealth Government, rather than by any constitutional right. As such, the Commonwealth Government retains the right to legislate for the territory, including the power to override legislation passed by the Legislative Assembly. 
The Monarch is represented by the Administrator of the Northern Territory, who performs a role similar to that of a state governor. Twenty-five members of the Legislative Assembly are elected to four-year terms from single-member electorates. For some years there has been agitation for full statehood. A referendum of voters in the Northern Territory was held on the issue in 1998, which resulted in a 'no' vote. This was a shock to both the Northern Territory and Commonwealth governments, as opinion polls showed most Territorians supported statehood. But under the Australian Constitution, the federal government may set the terms of entry to full statehood. The Northern Territory was offered three senators, rather than the 12 guaranteed to original states. (Because of the difference in populations, equal numbers of Senate seats would mean a Territorian's vote for a senator would have been worth more than 30 votes in New South Wales or Victoria.) Alongside what was cited as an arrogant approach adopted by then chief minister Shane Stone, it is believed that most Territorians, regardless of their general views on statehood, were reluctant to adopt the particular offer that was made. The chief minister is the head of government of a self-governing territory (the head of a state government is a "premier"). The chief minister is appointed by the administrator, who in normal circumstances appoints the leader of whichever party holds the majority of seats in the Northern Territory Legislative Assembly. The current chief minister is Michael Gunner of the Australian Labor Party. He replaced Adam Giles on 31 August 2016. The Northern Territory became self-governing on 1 July 1978 under its own administrator appointed by the Governor-General of Australia. The federal government, not the NT government, advises the governor-general on the appointment of the administrator, but by convention consults first with the Territory government. The current administrator is Vicki O'Halloran. 
The Northern Territory is represented in the federal parliament by two members in the House of Representatives and two members in the Senate. Following the 2019 federal election, Warren Snowdon and Luke Gosling, both from the Australian Labor Party (ALP), serve in the House of Representatives, and Malarndirri McCarthy from the ALP and Sam McMahon from the Country Liberal Party serve in the Senate. The Northern Territory is divided into 17 local government areas, including 11 shires and five municipalities. Shire, city and town councils are responsible for functions delegated by the Northern Territory parliament, such as road infrastructure and waste management. Council revenue comes mostly from property taxes and government grants. Aboriginal land councils in the Northern Territory are groups of Aboriginal landowners, set up under the "Aboriginal Land Rights Act 1976". The two dominant political parties in the Northern Territory are the conservative Country Liberal Party, and the social-democratic Australian Labor Party. Minor parties that are also active in the NT are the Northern Territory Greens, Palmer United Party and Australia's First Nations Political Party. In the 2016 Northern Territory general election only two CLP representatives were elected (MLAs Higgins and Finocchiaro) plus five independents. This makes the parliamentary status of the CLP as a major party a matter of conjecture. The population of the Northern Territory at the 2011 Australian census was 211,945, a 10 per cent increase from the 2006 census. The Australian Bureau of Statistics estimated a June 2015 resident population of 244,300, taking into account residents overseas or interstate. The territory's population represents 1% of the total population of Australia. The Northern Territory's population is the youngest in Australia and has the largest proportion (23.2%) under 15 years of age and the smallest proportion (5.7%) aged 65 and over.
The median age of residents of the Northern Territory is 31 years, six years younger than the national median age. Indigenous Australians own some 49% of the land. The life expectancy of Aboriginal Australians is well below that of non-Indigenous Australians in the Northern Territory, a fact that is mirrored elsewhere in Australia. ABS statistics suggest that Indigenous Australians die about 11 years earlier than the average non-Indigenous Australian. There are Aboriginal communities in many parts of the territory, the largest ones being the Pitjantjatjara near Uluru, the Arrernte near Alice Springs, the Luritja between those two, the Warlpiri further north, and the Yolngu in eastern Arnhem Land. More than 54% of Territorians live in Darwin, located in the territory's north (Top End). Less than half of the territory's population live in the rural Northern Territory. Despite this, the Northern Territory is the least urbanised federal division in the Commonwealth (followed by Tasmania). Not all communities are incorporated cities or towns; they are referred to as "Statistical Local Areas." At the 2016 census, the most commonly nominated ancestries were: 31.2% of the population was born overseas at the 2016 census. The five largest groups of overseas-born were from the Philippines (2.6%), England (2.4%), New Zealand (2%), India (1.6%) and Greece (0.6%). 25.5% of the population, or 58,248 people, identified as Indigenous Australians (Aboriginal Australians and Torres Strait Islanders) in 2016. At the 2016 census, 58% of the population spoke only English at home. The other languages most commonly spoken at home were Kriol (1.9%), Djambarrpuyngu (1.9%), Greek (1.4%), Tagalog (1.3%), and Warlpiri (0.9%). There are more than 100 Aboriginal languages and dialects spoken in the Northern Territory, in addition to English, which is most common in cities such as Darwin or Alice Springs.
Major indigenous languages spoken in the Northern Territory include Murrinh-patha and Ngangikurrungurr in the northwest around Wadeye, Warlpiri and Warumungu in the centre around Tennant Creek, Arrernte around Alice Springs, Pintupi-Luritja to the south east, Pitjantjatjara in the south near Uluru, Yolngu Matha to the far north in Arnhem Land (where the dialect Djambarrpuyngu of Dhuwal is considered a lingua franca), and Burarra, Maung, Iwaidja and Kunwinjku in the centre north and on Croker Island and the Goulburn Islands. Tiwi is spoken on Melville Island and Bathurst Island. Literature in many of these languages is available in the Living Archive of Aboriginal Languages. In the 2016 census Roman Catholics formed the single largest religious group in the territory with 19.9% of the Northern Territory's population, followed by Anglican (8.4%), Uniting Church (5.7%) and Lutheran (2.6%). Buddhism is the territory's largest non-Christian religion (2.0%), followed by Hinduism (1.6%), which is the territory's fastest-growing religion by population percentage. Australian Aboriginal religion and mythology (1.4%) is also practised. Around 30% of Territorians do not profess any religion. Many Aborigines practise their traditional religion, their belief in the Dreamtime. A Northern Territory school education consists of six years of primary schooling, including one transition year, three years of middle schooling, and three years of secondary schooling. At the beginning of 2007, the Northern Territory introduced Middle School for Years 7–9 and High School for Years 10–12. Northern Territory children generally begin school at age five. On completing secondary school, students earn the Northern Territory Certificate of Education (NTCE). Students who successfully complete their secondary education also receive a tertiary entrance ranking, or ATAR score, to determine university admittance. Northern Territory schools are either publicly or privately funded.
Public schools, also known as state or government schools, are funded and run directly by the Department of Education. Private fee-paying schools include schools run by the Catholic Church and independent schools, some elite ones similar to English public schools. Some Northern Territory independent schools are affiliated with Protestant, Lutheran, Anglican, Greek Orthodox or Seventh-day Adventist Churches, but they include non-church schools and an Indigenous school. As of 2009, the Northern Territory had 151 public schools, 15 Catholic schools and 21 independent schools. 39,492 students were enrolled in schools around the territory, with 29,175 in public schools and 9,882 in independent schools. The Northern Territory has about 4,000 full-time teachers. The Northern Territory has one university, which opened in 1989 under the name of the Northern Territory University. Now renamed Charles Darwin University, it had about 19,000 students enrolled: about 5,500 higher education students and about 13,500 students on vocational education and training (VET) courses. The first tertiary institution in the territory was the Batchelor Institute of Indigenous Tertiary Education, which was established in the mid-1960s. The Northern Territory Library is the territory's research and reference library. It is responsible for collecting and preserving the Northern Territory documentary heritage and making it available through a range of programs and services. Material in the collection includes books, newspapers, magazines, journals, manuscripts, maps, pictures, objects, sound and video recordings and databases. The Northern Territory's economy is largely driven by mining, which is concentrated on energy-producing minerals and petroleum; it contributes around $2.5 billion to the gross state product and employs over 4,600 people. Mining accounted for 14.9% of the gross state product in 2014–15, compared to just 7% nationally.
In recent years, largely due to the effect of major infrastructure projects and mine expansions, construction has overtaken mining as the largest single industry in the territory. Construction, mining and manufacturing, and government and community services combine to account for about half of the territory's gross state product (GSP), compared to about a third of national gross domestic product (GDP). The economy has grown considerably over the past decade, from a value of $15 billion in 2004–05 to over $22 billion in 2014–15. In 2012–13 the territory economy expanded by 5.6%, over twice the level of national growth, and in 2014–15 it grew by 10.5%, four times the national growth rate. Between 2003 and 2006 the gross state product rose from $8.67 billion to $11.476 billion, an increase of 32.4%. During the three years to 2006–07 the Northern Territory's gross state product grew by an average annual rate of 5.5%. Gross state product per capita in the Northern Territory ($72,496) is higher than in any Australian state or territory and is also higher than the gross domestic product per capita for Australia ($54,606). The Northern Territory's exports were up 12.9%, or $681 million, in 2012–13. The largest contributors to the territory's exports were mineral fuels (largely LNG), crude materials (mainly mineral ores) and food and live animals (primarily live cattle). The main international markets for territory exports are Japan, China, Indonesia, the United States and Korea. Imports to the Northern Territory totalled $2,887.8 million, consisting mainly of machinery and equipment manufacturing (58.4%) and petroleum, coal, chemical and associated product manufacturing (17.0%). 
The principal mining operations are bauxite at Gove Peninsula, where production is estimated to increase 52.1% to $254 million in 2007–08; manganese at Groote Eylandt, where production is estimated to increase 10.5% to $1.1 billion, helped by the newly developed Bootu Creek and Frances Creek mines; gold, estimated to increase 21.7% to $672 million at the Union Reefs plant; and uranium at Ranger Uranium Mine. Tourism is an important economic driver for the territory and a significant industry in regional areas. Iconic destinations such as Uluru and Kakadu make the Northern Territory a popular destination for domestic and international travellers. Diverse landscapes, waterfalls, wide open spaces, Aboriginal culture and wild and untamed wildlife provide the opportunity for visitors to immerse themselves in the natural wonder that the Northern Territory offers. In 2015, the territory received a total of about 1.6 million domestic and international visitors, contributing an estimated $2.0 billion to the local economy. Holiday visitors made up the majority of total visitation (about 792,000 visitors). Tourism has strong links to other sectors in the economy, including accommodation and food services, retail trade, recreation and culture, and transport. The territory's current marketing campaign is 'Do the NT'. The Northern Territory is the most sparsely populated state or territory in Australia. The NT has a connected network of sealed roads, including two National Highways, linking with adjoining states and connecting the major territory population centres and other important centres such as Uluru (Ayers Rock), Kakadu and Litchfield National Parks. The Stuart Highway, once known as "The Track", runs north to south, connecting Darwin and Alice Springs to Adelaide. Some of the sealed roads are single-lane bitumen. Many unsealed (dirt) roads connect the more remote settlements. 
The Adelaide–Darwin railway, a new standard gauge railway, connects Adelaide via Alice Springs with Darwin, replacing earlier narrow gauge railways which had a gap between Alice Springs and Birdum. The Ghan passenger train runs from Darwin to Adelaide, stopping at Katherine, Tennant Creek, Alice Springs and Kulgera in the NT. The Northern Territory was, until 21 November 2016, one of the few remaining places in the world with no speed restrictions on select public roads. On 1 January 2007 a default speed limit of 110 km/h was introduced on roads outside of urban areas (limits inside urban areas are 40, 50 or 60 km/h). Speeds of up to 130 km/h are permitted on some major highways, such as the Stuart Highway. On 1 February 2014, the speed limit was removed on a 204 km portion of the Stuart Highway for a one-year trial period. The maximum speed limit was changed to 130 km/h on 21 November 2016. Darwin International Airport is the major domestic and international airport for the territory. Several smaller airports are also scattered throughout the territory and are served by smaller airlines, including Alice Springs Airport, Ayers Rock Airport, Katherine Airport and Tennant Creek Airport. The Northern Territory has only one daily tabloid newspaper, News Corporation's "Northern Territory News", or "NT News". "The Sunday Territorian" is the sister paper to the "NT News" and is the only dedicated Sunday tabloid newspaper in the Northern Territory. The "Centralian Advocate" is circulated around the Alice Springs region twice a week. There are also five weekly community newspapers. The territory receives the national daily, "The Australian", while "The Sydney Morning Herald", "The Age" and the "Guardian Weekly" are also available in Darwin. Katherine's paper is the "Katherine Times". There is an LGBT community publication, QNews Magazine, which is published in Darwin and Alice Springs. 
Metropolitan Darwin has had five broadcast television stations and a single open-narrowcast station; regional Northern Territory has a similar availability of stations. Remote areas are generally required to receive television via the Viewer Access Satellite Television service, which carries the same channels as the regional areas, as well as some extra open-narrowcast services, including Indigenous Community Television and Westlink. Darwin has radio stations on both AM and FM frequencies. ABC stations include ABC NewsRadio (102.5FM), 105.7 ABC Darwin (8DDD 105.7FM), ABC Radio National (657AM), ABC Classic FM (107.3FM) and Triple J (103.3FM). The two commercial stations are Mix 104.9 (8MIX) and Hot 100 FM (8HOT). The leading community stations are 104.1 Territory FM and Radio Larrakia (8KNB). The radio stations in Alice Springs are also broadcast on the AM and FM frequencies. ABC stations include Triple J (94.9FM), ABC Classic FM (97.9FM), 783 ABC Alice Springs (783AM) and ABC Radio National (99.7FM). There are two community stations in the town: CAAMA (100.5FM) and 8CCC (102.1FM). The commercial stations, which are both owned by the same company, are Sun 96.9 (96.9FM) and 8HA (900AM). Two additional stations, Territory FM (98.7FM) and Radio TAB (95.9FM), are syndicated from Darwin and Brisbane, respectively.
https://en.wikipedia.org/wiki?curid=21638
Low-alcohol beer Low-alcohol beer is beer with little or no alcohol content, which aims to reproduce the taste of beer while eliminating (or at least reducing) the inebriating effects of standard alcoholic brews. Most low-alcohol beers are lagers, but there are some low-alcohol ales. Low-alcohol beer is also known as light beer, non-alcoholic beer, small beer, small ale, or near-beer. Low-alcoholic brews such as small beer date back at least to Medieval Europe, where they served as a less risky alternative to water (which was often polluted by feces and parasites) and were less expensive than the full-strength brews used at festivals. More recently, the temperance movements and the need to avoid alcohol while driving, operating machinery, taking certain medications, etc. led to the development of non-intoxicating beers. In the United States, non-alcoholic brews were promoted during Prohibition, according to John Naleszkiewicz. In 1917, President Wilson proposed limiting the alcohol content of malt beverages to 2.75% to try to appease avid prohibitionists. In 1919, Congress approved the Volstead Act, which limited the alcohol content of all beverages to 0.5%. These very low alcohol beverages became known as tonics, and many breweries began brewing them in order to stay in business during Prohibition. Since removing the alcohol from the beer requires just one simple extra step, many breweries saw it as an easy change. In 1933, when Prohibition was repealed, breweries simply dropped this extra step. By the 1980s and 1990s, growing concerns about alcoholism led to the growing popularity of "light" beers. In the 2010s, breweries have focused on marketing low-alcohol beers to counter the popularity of homebrew. Declining consumption has also led to the introduction of mass-market non-alcoholic beverages, dubbed "near beer". 
Low-alcohol and alcohol-free bars and pubs have also started to open to cater for drinkers of non-alcoholic beverages, such as Scottish brewer BrewDog's London bar, which opened in early 2020. In the UK, the introduction of a lower rate of beer duty for low-strength beer (of 2.8% ABV or less) in October 2011 spurred many small brewers to revive old small beer styles and create higher-hopped craft beers at the lower alcohol level, allowing them to lower the cost of their beer to consumers. At the start of the 21st century, alcohol-free beer has seen a rise in popularity in the Middle East (which now makes up a third of the market). One reason for this is that Islamic scholars issued fatawa which permitted the consumption of beer as long as large quantities could be consumed without causing intoxication. Positive features of non-alcoholic brews include the ability to drive after consuming several drinks, the reduction in alcohol-related illness, and less severe hangover symptoms. Some common complaints about non-alcoholic brews include a loss of flavor, the addition of one step in the brewing process, a sugary taste, and a shorter shelf life. There are also legal implications. Some state governments, e.g. Pennsylvania, prohibit the sale of non-alcoholic brews to persons under the age of 21. A study conducted by the department of psychology at Indiana University said, "Because non-alcoholic beer provides sensory cues that simulate alcoholic beer, this beverage may be more effective than other placebos in contributing to a credible manipulation of expectancies to receive alcohol", making people feel "drunk" when physically they are not. In the United States, beverages containing less than 0.5% alcohol by volume (ABV) were legally called non-alcoholic, according to the now-defunct Volstead Act. Because of its very low alcohol content, non-alcoholic beer may be legally sold to people under age 21 in many American states. 
In the United Kingdom, government guidance recommends a set of voluntary descriptions for "alcohol substitute" drinks, including alcohol-free beer. In some parts of the European Union, beer must contain no more than 0.5% ABV if it is labelled "alcohol-free". In Australia, the term "light beer" refers to any beer with less than 3.5% alcohol. Light beers are beers with reduced caloric content compared to regular beer, and typically also have a lower alcoholic content, depending on the brand and where they are sold. The spelling "lite beer" is also commonly used. Light beers are manufactured by reducing the carbohydrate content, and secondarily by reducing the alcohol content, since both carbohydrates and alcohol contribute to the caloric content of beer. Light beers are marketed primarily to drinkers who wish to manage their calorie intake. However, these beers are sometimes criticized for being less flavorful than full-strength beers and "watered down" (whether in perception or in fact), so advertising campaigns for light beers generally emphasize their retention of flavor. In Australia, regular beers have approximately 4–5% ABV, while reduced-alcohol beers have 2.2–3.2%. In Canada, a reduced-alcohol beer contains 2.6–4.0% ABV, and an "extra-light" beer contains less than 2.5%. In the United States, most mass-market light beer brands, including Bud Light, Coors Light, and Miller Lite, have 4.2% ABV, 16% less than ordinary beers from the same makers, which are 5% ABV. In Sweden, low-alcohol beer is either 2.2%, 2.8% or 3.5% and can be purchased in an ordinary supermarket, whereas normal-strength beers of above 3.5% must be purchased at "Systembolaget". Beer containing 2.8–3.5% ABV (called folköl, or "people's beer") may be legally sold in any convenience store to people over 18 years of age, whereas stronger beer may only be sold in state-run liquor stores to people older than 20. 
In addition, businesses selling food for on-premises consumption do not need an alcohol license to serve 3.5% beer. Virtually all major Swedish brewers, and several international ones, make 3.5% folköl versions in addition to their full-strength beer. Beer below or equaling 2.25% ABV ("lättöl") is not legally subject to age restrictions; however, some stores voluntarily opt out of selling it to minors anyway. Low-point beer, which is often known in the United States as "three-two beer" or "3 point 2 brew", is beer that contains 3.2% alcohol by weight (equivalent to about 4% ABV). The term "low-point beer" is unique to the United States, where some states limit the sale of beer, but beers of this type are also available in countries (such as Sweden and Finland) that tax or otherwise regulate beer according to its alcohol content. In the United States, for nine months in 1933, 3.2% beer was the strongest beer that could legally be produced. As part of his New Deal, President Franklin D. Roosevelt signed the Cullen–Harrison Act, which amended the Volstead Act, on March 22, 1933. In December 1933, the Twenty-first Amendment to the United States Constitution was passed, negating the federal government's power to regulate the sale of alcoholic beverages, though states retained the power to regulate. After the repeal of Prohibition, a number of state laws prohibiting the sale of intoxicating liquors remained in effect. As these were repealed, they were first replaced by laws limiting the maximum alcohol content allowed for sale to 3.2% ABW. As of 2019, the states of Minnesota and Utah permit general establishments such as supermarket chains and convenience stores to sell only low-point beer; in the 2010s, Colorado, Kansas, and Oklahoma revised state laws to end this practice. In these states, all alcoholic beverages containing more than 3.2% alcohol by weight (ABW) must be sold from state-licensed liquor stores. 
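The "3.2% ABW is equivalent to about 4% ABV" figure above follows from ethanol being lighter than water. A minimal sketch of the conversion (the 0.789 g/mL ethanol density and the assumption that beer's overall density is close to water's are mine, not from the article):

```python
# Convert alcohol by weight (ABW) to alcohol by volume (ABV).
# Assumes ethanol density ~0.789 g/mL at 20 C and beer density ~1.0 g/mL,
# which gives the common rule of thumb ABV ~ ABW / 0.789.

ETHANOL_DENSITY = 0.789  # g/mL at 20 C

def abw_to_abv(abw: float, beer_density: float = 1.0) -> float:
    """Approximate ABV (%) from ABW (%), assuming beer density ~1.0 g/mL."""
    return abw * beer_density / ETHANOL_DENSITY

print(round(abw_to_abv(3.2), 1))  # "three-two" beer works out to roughly 4.1% ABV
```

The approximation ignores the slight volume contraction when ethanol and water mix, which is why sources quote "about 4%" rather than an exact figure.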
Missouri also has a legal classification for low-point beer, which it calls "nonintoxicating beer". Unlike Minnesota and Utah, Missouri does not limit supermarket chains and convenience stores to selling only low-point beer. Instead, Missouri's alcohol laws permit grocery stores, drug stores, gas stations, and even "general merchandise stores" (a term that Missouri law does not define) to sell any alcoholic beverage; consequently, 3.2% beer is rarely sold in Missouri. Originally, "near beer" was a term for malt beverages containing little or no alcohol (less than 0.5% ABV), which were mass-marketed during Prohibition in the United States. Near beer could not legally be labeled as "beer" and was officially classified as a "cereal beverage". The public, however, almost universally called it "near beer". The most popular "near beer" was Bevo, brewed by the Anheuser-Busch company. The Pabst company brewed "Pablo", Miller brewed "Vivo", and Schlitz brewed "Famo". Many local and regional breweries stayed in business by marketing their own near-beers. By 1921, production of near beer had reached over 300 million US gallons (1 billion L) a year (36 L/s). A popular illegal practice was to add alcohol to near beer. The resulting beverage was known as "spiked beer" or "needle beer", so called because a needle was used to inject alcohol through the cork of the bottle or keg. Food critic and writer Waverley Root described the common American near beer as "such a wishy-washy, thin, ill-tasting, discouraging sort of slop that it might have been dreamed up by a Puritan Machiavelli with the intent of disgusting drinkers with genuine beer forever." Beginning in the late 2000s, the term "near beer" has been revived to refer to modern non-alcoholic beer. 
In the early 2010s, major breweries began experimenting with mass-market non-alcoholic beers to counter declining alcohol consumption amid a growing preference for craft beer, launching beverages such as Anheuser-Busch's Budweiser Prohibition Brew in 2016. A drink similar to "near beer", "bjórlíki", was quite popular in Iceland before alcoholic beer was made legal in 1989. The Icelandic variant normally consisted of a shot of vodka added to a half-litre glass of light beer. Small beer (also small ale) is a beer or ale that contains very little alcohol. Sometimes unfiltered and porridge-like, it was a favored drink in Medieval Europe and colonial North America, as opposed to the often polluted water and the expensive beer used for festivities. Small beer was also produced in households for consumption by children and servants on those occasions. However, small beer or small ale can also refer to a beer made of the "second runnings" from a very strong beer (e.g., scotch ale) mash. These beers can be as strong as a mild ale, depending on the strength of the original mash. (Drake's 24th Anniversary Imperial Small Beer was expected to reach above 9.5% ABV.) This was done as an economy measure in household brewing in England up to the 18th century and is still done by some homebrewers. One commercial brewery, San Francisco's Anchor Brewing Company, also produces its "Anchor Small Beer" using the second runnings from its Old Foghorn Barleywine. The term is also used for commercially produced beers which are thought to taste too weak. The Middle East accounts for almost a third of worldwide sales of nonalcoholic and alcohol-free beer. The market for nonalcoholic beer in Malaysia has been slow in comparison to other Muslim-majority countries, and as of 2015, the Malaysian government had not approved any nonalcoholic beers as halal. 
In 2008, the sale of non-alcoholic beers in Iran continued its strong performance, with double-digit growth rates in both value and volume, and was expected to more than double its total volume sales between 2008 and 2013. Non-alcoholic beer sales in India are relatively low. North America is seeing a rise in non-alcoholic beer consumption. In the U.S., it has a reputation as the preferred beverage of retired police, suburban dads, and reformed alcoholics. Former President George W. Bush, who was the presidential candidate voters most wanted to have a beer with, and Vice President Mike Pence are known to drink non-alcoholic beer. Spain is the main consumer and producer of low-alcohol beer in the European Union. As of March 2020, sales of alcohol-free beer are up by 30% since 2016, with younger generations shunning alcoholic beverages. With the global non-alcoholic beer market expected to double by 2024, there has been an increase in breweries producing the product. As more people lean towards non-alcoholic beverages for health reasons, social reasons, or simply because they want to enjoy the taste of beer without the effects of alcohol, companies are producing beers that cater to these audiences. Craft non-alcoholic beer began to take off in early 2018, as beer companies moved away from maximising the ABV of their brews and started producing more sessionable beers. Some beers that are still classified as "alcoholic" can have an ABV as low as 2.4%, and the companies producing these are still seeing sales. With an ever-growing health-conscious market segment, breweries began to produce craft non-alcoholic beers with as little as 10 calories per can, so that those who crave beer can satisfy their cravings without breaking their health resolutions. Beers that are labeled "non-alcoholic" still contain a very small amount of alcohol; thus, some US states require the purchaser to be of legal drinking age. 
There are some exceptions. According to the Birmingham Beverage Company, the brewing process of traditional brews consists of eight basic steps, while brewing non-alcoholic brews requires nine. Low-alcohol beer starts out as regular alcoholic beer, which is then processed to remove the alcohol. Older processes simply heat the beer to evaporate most of the alcohol. Since alcohol is more volatile than water, the alcohol boils off first as the beer is heated. The alcohol is allowed to escape and the remaining liquid becomes the product, essentially the opposite of the process used to make distilled beverages. Most modern breweries utilize vacuum evaporation to reduce the boiling temperature and maintain flavor. In essence, the beer is placed under a light vacuum to facilitate the alcohol molecules going into the gaseous phase. If a sufficient vacuum is applied, it is not necessary to "cook" the beer at a temperature that destroys the flavor. Some heat must nevertheless be supplied to counter the heat lost to the enthalpy of vaporization. A more modern alternative process uses reverse osmosis to avoid heating the product at all. Under pressure, the beer is passed through a polymeric filter with pores small enough that only alcohol and water (and a few volatile acids) can pass through. A syrupy mixture of complex carbohydrates and most of the flavor compounds are retained by the filter. Alcohol is distilled out of the filtered alcohol-water mix using conventional distillation methods. Adding the water and remaining acids back into the syrup left behind on the filter completes the process. Sometimes beer is simply diluted with water to give the desired alcohol level. The conversion from a traditional alcoholic beer to a non-alcoholic beer takes place after the seventh step and before the finishing step. The uncarbonated beer is heated up to its boiling point. Another method of removing the alcohol is to decrease the pressure so the alcohol boils at room temperature. 
This is the preferred method, because raising the temperature this late in the brewing process can greatly affect the flavor of the brew. Another tip is to avoid using sugar from maize, which simply increases the alcohol content without adding to the flavor or body of the beer. Once the alcohol is removed, the normal finishing process proceeds, in which the beer is carbonated and bottled. Many low-alcohol beer brands incorporate the colour blue into the packaging design, including Becks Blue, Heineken 0.0%, Ožujsko Cool and Erdinger Alkoholfrei.
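The reduced-pressure approach described above can be illustrated with the Antoine vapour-pressure equation for pure ethanol. This is an illustrative sketch, not a figure from the article: the Antoine constants below are standard textbook values (pressure in mmHg, temperature in degrees Celsius), and real beer is a water-ethanol mixture, so actual process temperatures differ.

```python
import math

# Antoine equation for ethanol: log10(P) = A - B / (C + T),
# solved here for the temperature T at which vapour pressure equals
# the applied pressure, i.e. the boiling point under that pressure.
A, B, C = 8.20417, 1642.89, 230.300  # textbook constants, P in mmHg, T in C

def ethanol_boiling_point(pressure_mmhg: float) -> float:
    """Boiling point of pure ethanol (C) at the given pressure (mmHg)."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(round(ethanol_boiling_point(760), 1))  # ~78.3 C at atmospheric pressure
print(round(ethanol_boiling_point(60), 1))   # ~25 C under vacuum, near room temperature
```

This is why a sufficient vacuum lets the alcohol be driven off without "cooking" the beer: at around 60 mmHg, ethanol's boiling point drops from roughly 78 C to near room temperature.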
https://en.wikipedia.org/wiki?curid=21640
Norman Foster, Baron Foster of Thames Bank Norman Robert Foster, Baron Foster of Thames Bank (born 1 June 1935), is an English architect whose company, Foster + Partners, maintains an international design practice. He is the President of the Norman Foster Foundation, which promotes interdisciplinary thinking and research to help new generations of architects, designers and urbanists to anticipate the future. The foundation, which opened in June 2017, is based in Madrid and operates globally. He is one of the most prolific British architects of his generation. In 1999, he was awarded the Pritzker Architecture Prize, often referred to as the Nobel Prize of architecture. Norman Robert Foster was born in 1935 in Reddish, two miles north of Stockport, then a part of Cheshire. The only child of Robert and Lilian Foster ("née" Smith), he moved with his family to Levenshulme, near Manchester, where they lived in poverty. His father was a machine painter at the Metropolitan-Vickers works in Trafford Park, which influenced Foster to take up engineering and design and to pursue a career designing buildings. His mother worked in a local bakery. Foster's parents were diligent, hard workers who often had neighbours and family members look after their son, which Foster later believed restricted his relationship with his mother and father. Foster attended Burnage Grammar School for Boys in Burnage, where he was bullied by fellow pupils and took up reading. He considered himself quiet and awkward in his early years. At 16, he left school and passed an entrance exam for a trainee scheme set up by Manchester Town Hall, which led to his first job, as an office junior and clerk in the treasurer's department. In 1953, Foster completed his national service in the Royal Air Force, choosing the air force because aircraft had been a longtime hobby. Upon returning to Manchester, Foster went against his parents' wishes and sought employment elsewhere. 
He had seven O-levels by this time, and applied to work at a duplicating-machine company, telling the interviewer he had applied for the prospect of a company car and a £1,000 salary. Instead, he became an assistant to a contract manager at a local architects' practice, John E. Beardshaw and Partners. The staff advised him that if he wished to become an architect, he should prepare a portfolio of drawings, using the perspective and shop drawings from Beardshaw's practice as examples. Beardshaw was so impressed with Foster's drawings that he promoted him to the drawing department. In 1956, Foster began study at the School of Architecture and City Planning, part of the University of Manchester. He was ineligible for a maintenance grant, so he took part-time jobs to fund his studies, working as an ice-cream salesman and a bouncer, and doing night shifts at a bakery making crumpets. During this time, he also studied at the local library in Levenshulme. His talent and hard work were recognised in 1959 when he won £100 and a RIBA silver medal for what he described as "a measured drawing of a windmill". After graduating in 1961, Foster won the Henry Fellowship to the Yale School of Architecture in New Haven, Connecticut, where he met future business partner Richard Rogers and earned his master's degree. At the suggestion of Vincent Scully, the pair travelled across America for a year. In 1963, Foster returned to England and established his own architectural practice, Team 4, with Rogers, Su Brumwell, and sisters Georgie and Wendy Cheesman. The team earned a reputation for their high-tech industrial designs. After the four separated in 1967, Foster and Wendy founded a new practice, Foster Associates. From 1968 to 1983, Foster collaborated with American architect Richard Buckminster Fuller on several projects that became catalysts in the development of an environmentally sensitive approach to design, such as the Samuel Beckett Theatre at St Peter's College, Oxford. 
In 1999, the company was renamed Foster + Partners. Foster Associates concentrated on industrial buildings until 1969, when the practice worked on the administrative and leisure centre for Fred. Olsen Lines in the London Docklands, which integrated workers and managers within the same office space. Its breakthrough building in England followed in 1974 with the completion of the Willis Faber & Dumas headquarters in Ipswich. The client was a family-run insurance company that wanted to restore a sense of community to the workplace. Foster created open-plan office floors long before open-plan became the norm, and placed a roof garden, a 25-metre swimming pool, and a gymnasium in the building to enhance the quality of life for the company's 1,200 employees. The building has a full-height glass façade moulded to the medieval street plan, and contributes drama, subtly shifting from an opaque, reflective black to a glowing back-lit transparency as the sun sets. The design was inspired by the Daily Express Building in Manchester, which Foster had admired as a youngster. The building is now Grade I listed. The Sainsbury Centre for Visual Arts, an art gallery and museum on the campus of the University of East Anglia, Norwich, was one of the first major public buildings to be designed by Foster; it was completed in 1978 and became Grade II* listed in December 2012. In 1990, Foster's design for the terminal building at London Stansted Airport was awarded the European Union Prize for Contemporary Architecture / Mies van der Rohe Award. Foster gained a reputation for designing office buildings. In the 1980s he designed the HSBC Main Building in Hong Kong for HSBC. The building is marked by its high level of light transparency, as all 3,500 workers have a view to Victoria Peak or Victoria Harbour. Foster said that if the firm had not won the contract, it would probably have been bankrupted. 
Foster believes that attracting young talent is essential, and is proud that the average age of people working for Foster and Partners is 32, just as it was in 1967. In the 1990s, Foster was assigned the brief for a development on the site of the Baltic Exchange, which had been damaged beyond repair by a bomb left by the IRA. Foster + Partners submitted a plan for a 385-metre-tall skyscraper, the London Millennium Tower, but its height was seen as excessive for London's skyline. The proposal was scrapped and Foster instead proposed 30 St Mary Axe, popularly referred to as "the gherkin" after its shape. Foster worked with engineers to integrate complex computer systems with the most basic physical laws, such as convection. Foster's earlier designs reflected a sophisticated, machine-influenced high-tech vision. His style has since evolved into a more sharp-edged modernity. In 2004, Foster designed the tallest bridge in the world, the Millau Viaduct in southern France, with Millau mayor Jacques Godfrain stating: "The architect, Norman Foster, gave us a model of art." Foster worked with Steve Jobs from about 2009 until Jobs' death to design the Apple offices, Apple Campus 2, now called Apple Park, in Cupertino, California, US. Apple's board and staff continued to work with Foster as the design was completed and construction progressed. The circular building was opened to employees in April 2017, six years after Jobs died in 2011. In January 2007, the "Sunday Times" reported that Foster had called in Catalyst, a corporate finance house, to find buyers for Foster + Partners. Foster did not intend to retire, but to sell his 80–90% holding in the company, valued at £300 million to £500 million. In 2007, he worked with Philippe Starck and Sir Richard Branson of the Virgin Group on the Virgin Galactic plans. 
Foster currently sits on the Board of Trustees of the architectural charity Article 25, which designs, constructs and manages innovative, safe, sustainable buildings in some of the most inhospitable and unstable regions of the world. He has also been on the Board of Trustees of the Architecture Foundation. In 2012, Foster was among the British cultural figures selected by artist Sir Peter Blake to appear in a new version of his most famous artwork, the Beatles' "Sgt. Pepper's Lonely Hearts Club Band" album cover, celebrating the British cultural figures of his life that he most admires. Foster has been married three times. His first wife, Wendy Cheesman, one of the four founders of Team 4, died from cancer in 1989. From 1991 to 1995, he was married to Begum Sabiha Rumani Malik; the marriage ended in divorce. In 1996, Foster married Spanish psychologist and art curator Elena Ochoa. He has five children; two of the four sons he had with Cheesman are adopted. In the 2000s, Foster was diagnosed with bowel cancer and was told he had weeks to live. He received chemotherapy treatment and made a full recovery. He also suffered a heart attack. Foster was made a Knight Bachelor in the 1990 Birthday Honours, and thereby granted the title "sir". He was appointed to the Order of Merit (OM) in 1997. His elevation to the peerage was announced in the 1999 Birthday Honours in June, and in July he was created Baron Foster of Thames Bank, of Reddish in the County of Greater Manchester. Foster was elected an Associate of the Royal Academy (ARA) on 19 May 1983 and a Royal Academician (RA) on 26 June 1991. In 1995, he was elected an Honorary Fellow of the Royal Academy of Engineering (HonFREng). On 24 April 2017, he was given the Freedom of the City of London. The Bloomberg London building received the Stirling Prize in October 2018. Foster received The Lynn S. 
Beedle Lifetime Achievement Award from the Council on Tall Buildings and Urban Habitat in 2007 to honour his contributions to the advancement of tall buildings. He was awarded the Aga Khan Award for Architecture for the University of Technology Petronas in Malaysia, and in 2008 he was granted an honorary degree from the Dundee School of Architecture at the University of Dundee. In 2009 he received the Prince of Asturias Award in the category "Arts".
https://en.wikipedia.org/wiki?curid=21641
Niklaus Wirth Niklaus Emil Wirth (born 15 February 1934) is a Swiss computer scientist. He has designed several programming languages, including Pascal, and pioneered several classic topics in software engineering. In 1984 he won the Turing Award, generally recognized as the highest distinction in computer science, for developing a sequence of innovative computer languages. Wirth was born in Winterthur, Switzerland, in 1934. In 1959, he earned a Bachelor of Science (B.S.) degree in electronic engineering from the Swiss Federal Institute of Technology Zürich (ETH Zürich). In 1960, he earned a Master of Science (MSc) from Université Laval, Canada. Then in 1963, he was awarded a PhD in Electrical Engineering and Computer Science (EECS) from the University of California, Berkeley, supervised by the computer design pioneer Harry Huskey. From 1963 to 1967, he served as assistant professor of computer science at Stanford University and again at the University of Zurich. Then in 1968, he became Professor of Informatics at ETH Zürich, taking two one-year sabbaticals at Xerox PARC in California (1976–1977 and 1984–1985). He retired in 1999. In 2004, he was made a Fellow of the Computer History Museum "for seminal work in programming languages and algorithms, including Euler, Algol-W, Pascal, Modula, and Oberon." Wirth was the chief designer of the programming languages Euler, Algol W, Pascal, Modula, Modula-2, Oberon, Oberon-2, and Oberon-07. He was also a major part of the design and implementation team for the Lilith and Oberon operating systems, and for the Lola digital hardware design and simulation system. He received the Association for Computing Machinery (ACM) Turing Award for the development of these languages in 1984, and in 1994 he was inducted as a Fellow of the ACM. 
His book, written jointly with Kathleen Jensen, "The Pascal User Manual and Report", served as the basis of many language implementation efforts in the 1970s and 1980s in the United States and across Europe. His article "Program Development by Stepwise Refinement", about the teaching of programming, is considered to be a classic text in software engineering. In 1975 he wrote the book "Algorithms + Data Structures = Programs", which gained wide recognition. Major revisions of this book with the new title "Algorithms + Data Structures" were published in 1985 and 2004. The examples in the first edition were written in Pascal. These were replaced in the later editions with examples written in Modula-2 and Oberon respectively. His textbook, "Systematic Programming: An Introduction", was considered a good source for students who wanted to do more than just coding. Regarded as a challenging text to work through, it was considered essential reading for those interested in numerical mathematics. In 1992, he published (with Jürg Gutknecht) the full documentation of the Oberon OS. A second book (with Martin Reiser) was intended as a programmer's guide. In 1995, he popularized the adage now named Wirth's law, which states that software is getting slower more rapidly than hardware becomes faster. In his 1995 paper "A Plea for Lean Software" he attributed it to Martin Reiser.
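Wirth's law is a claim about relative growth rates, and a toy calculation makes it concrete: if software's work per task compounds faster than hardware speed, the user-perceived speed shrinks every year. The annual rates below are purely illustrative assumptions, not figures from Wirth or Reiser.

```python
# Toy illustration of Wirth's law. If hardware gets faster by `hw` per year
# while the work software performs per task grows by `sw` per year, perceived
# speed changes by a factor of (1 + hw) / (1 + sw) annually.
# The default rates (40% and 50%) are illustrative assumptions only.
def perceived_speedup(years: int, hw: float = 0.4, sw: float = 0.5) -> float:
    """Relative perceived speed after `years`, normalized to 1.0 at year 0."""
    return ((1 + hw) / (1 + sw)) ** years

print(round(perceived_speedup(10), 3))  # 0.502: tasks feel about half as fast
```

Even with hardware improving substantially every year, a slightly larger software growth rate halves perceived performance within a decade, which is the point of the adage.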
https://en.wikipedia.org/wiki?curid=21642
Nebraska Nebraska is a state that lies both in the Great Plains and in the Midwestern United States. It is bordered by South Dakota to the north; Iowa to the east and Missouri to the southeast, both across the Missouri River; Kansas to the south; Colorado to the southwest; and Wyoming to the west. It is the only triply landlocked U.S. state. Nebraska's area is just over with a population of almost 1.9 million. Its capital is Lincoln, and its largest city is Omaha, which is on the Missouri River. Indigenous peoples, including Omaha, Missouria, Ponca, Pawnee, Otoe, and various branches of the Lakota (Sioux) tribes, lived in the region for thousands of years before European exploration. The state is crossed by many historic trails, including that of the Lewis and Clark Expedition. Nebraska is composed of two major land regions: the Dissected Till Plains and the Great Plains. The Dissected Till Plains region consists of gently rolling hills and contains the state's largest cities, Omaha and Lincoln. The Great Plains region, occupying most of western Nebraska, is characterized by treeless prairie. Nebraska has two major climatic zones. The eastern half of the state has a humid continental climate (Köppen climate classification "Dfa"); a unique warmer subtype considered "warm-temperate" exists near the southern plains, analogous to the predominantly humid subtropical climate of Kansas and Oklahoma. The western half of the state has a primarily semi-arid climate (Köppen "BSk"). The state has wide variations between winter and summer temperatures, variations that decrease moving south within the state. Violent thunderstorms and tornadoes occur primarily during spring and summer and sometimes in autumn. Chinook winds tend to warm the state significantly in the winter and early spring.
Nebraska's name is the result of anglicization of the archaic Otoe words "Ñí Brásge", pronounced (contemporary Otoe "Ñí Bráhge"), or the Omaha "Ní Btháska", pronounced , meaning "flat water", after the Platte River which flows through the state. Indigenous peoples lived in the region of present-day Nebraska for thousands of years before European exploration. The historic tribes in the state included the Omaha, Missouria, Ponca, Pawnee, Otoe, and various branches of the Lakota (Sioux), some of which migrated from eastern areas into this region. When European exploration, trade, and settlement began, both Spain and France sought to control the region. In the 1690s, Spain established trade connections with the Apaches, whose territory then included western Nebraska. By 1703, France had developed a regular trade with the native peoples along the Missouri River in Nebraska, and by 1719 had signed treaties with several of these peoples. After war broke out between the two countries, Spain dispatched an armed expedition to Nebraska under Lieutenant General Pedro de Villasur in 1720. The party was attacked and destroyed near present-day Columbus by a large force of Pawnees and Otoes, both allied with the French. The massacre ended Spanish exploration of the area for the remainder of the 18th century. In 1762, during the Seven Years' War, France ceded the Louisiana territory to Spain. This left Britain and Spain competing for dominance along the Mississippi; by 1773, the British were trading with the native peoples of Nebraska. In response, Spain dispatched two trading expeditions up the Missouri in 1794 and 1795; the second, under James Mackay, established the first European settlement in Nebraska near the mouth of the Platte. Later that year, Mackay's party built a trading post, dubbed Fort Carlos IV (Fort Charles), near present-day Homer. In 1819, the United States established Fort Atkinson as the first U.S. 
Army post west of the Missouri River, just east of present-day Fort Calhoun. The army abandoned the fort in 1827 as migration moved further west. European-American settlement was scarce until 1848 and the California Gold Rush. On May 30, 1854, the US Congress created the Kansas and the Nebraska territories, divided by the 40th parallel north, under the Kansas–Nebraska Act. The Nebraska Territory included parts of the current states of Colorado, North Dakota, South Dakota, Wyoming, and Montana. The territorial capital of Nebraska was Omaha. In the 1860s, after the U.S. government forced many of the Native American tribes to cede their lands and settle on reservations, it opened large tracts of land to agricultural development by Europeans and Americans. Under the Homestead Act, thousands of settlers migrated into Nebraska to claim free land granted by the federal government. Because so few trees grew on the prairies, many of the first farming settlers built their homes of sod, as had Native Americans such as the Omaha. The first wave of settlement gave the territory a sufficient population to apply for statehood. Nebraska became the 37th state on March 1, 1867, and the capital was moved from Omaha to the center at Lancaster, later renamed Lincoln after the recently assassinated President of the United States, Abraham Lincoln. The Battle of Massacre Canyon, on August 5, 1873, was the last major battle between the Pawnee and the Sioux. During the 1870s and 1880s, Nebraska experienced a large growth in population. Several factors contributed to attracting new residents. The first was that the vast prairie land was perfect for cattle grazing. This helped settlers to learn the unfamiliar geography of the area. The second factor was the invention of several farming technologies. Agricultural inventions such as barbed wire, windmills, and the steel plow, combined with good weather, enabled settlers to use Nebraska as prime farming land.
By the 1880s, Nebraska's population had soared to more than 450,000 people. The Arbor Day holiday was founded in Nebraska City by territorial governor J. Sterling Morton. The National Arbor Day Foundation is still headquartered in Nebraska City, with some offices in Lincoln. In the late 19th century, many African Americans migrated from the South to Nebraska as part of the Great Migration, primarily to Omaha, which offered working-class jobs in meatpacking, the railroads, and other industries. Omaha has a long history of civil rights activism. Blacks encountered discrimination from other Americans in Omaha, especially from recent European immigrants, ethnic whites who were competing for the same jobs. In 1912, African Americans founded the Omaha chapter of the National Association for the Advancement of Colored People to work for improved conditions in the city and state. Since the 1960s, Native American activism in the state has increased, both through open protest and alliance-building with state and local governments, and through the slower, more extensive work of building tribal institutions and infrastructure. Native Americans in federally recognized tribes have pressed for self-determination, sovereignty and recognition. They have created community schools to preserve their cultures, as well as tribal colleges and universities. Tribal politicians have also collaborated with state and county officials on regional issues. The state is bordered by South Dakota to the north; Iowa to the east and Missouri to the southeast, across the Missouri River; Kansas to the south; Colorado to the southwest; and Wyoming to the west. The state has 93 counties and is split between two time zones, with the state's eastern half observing Central Time and the western half observing Mountain Time. Three rivers cross the state from west to east.
The Platte River, formed by the confluence of the North Platte and the South Platte, runs through the state's central portion, the Niobrara River flows through the northern part, and the Republican River runs across the southern part. The first Constitution of Nebraska in 1866 described Nebraska's boundaries as follows (note that the description of the northern border is no longer accurate, since the Keya Paha River and the Niobrara River no longer form the boundary of the state of Nebraska; instead, Nebraska's northern border now extends east along the forty-third degree of north latitude until it meets the Missouri River directly): Nebraska is composed of two major land regions: the Dissected Till Plains and the Great Plains. The easternmost portion of the state was scoured by Ice Age glaciers; the Dissected Till Plains were left after the glaciers retreated. The Dissected Till Plains is a region of gently rolling hills; Omaha and Lincoln are in this region. The Great Plains occupy most of western Nebraska, with the region consisting of several smaller, diverse land regions, including the Sandhills, the Pine Ridge, the Rainwater Basin, the High Plains and the Wildcat Hills. Panorama Point, at , is Nebraska's highest point; despite its name, it is a relatively low rise near the Colorado and Wyoming borders. A past tourism slogan for the state of Nebraska was "Where the West Begins" (it has since been changed to "Honestly, it's not for everyone"). Locations given for the beginning of the "West" in Nebraska include the Missouri River, the intersection of 13th and O Streets in Lincoln (where it is marked by a red brick star), the 100th meridian, and Chimney Rock. Several areas in the state are managed by the National Park Service and the National Forest Service. Two major climatic zones are represented in Nebraska: the state's eastern half and its western half.
The eastern half of the state has a humid continental climate (Köppen climate classification "Dfa"). The western half has a semi-arid climate (Köppen "BSk"). The entire state experiences wide seasonal variations in both temperature and precipitation. Average temperatures are fairly uniform across Nebraska, with hot summers and generally cold winters. Average annual precipitation decreases east to west from about in the southeast corner of the state to about in the Panhandle. Humidity also decreases significantly from east to west. Snowfall across the state is fairly even, with most of Nebraska receiving between of snow each year. Nebraska's highest-recorded temperature was in Minden on July 24, 1936. The state's lowest-recorded temperature was in Camp Clarke on February 12, 1899. Nebraska is located in Tornado Alley. Thunderstorms are common during both the spring and the summer. Violent thunderstorms and tornadoes happen primarily during those two seasons, although they also can occur occasionally during the autumn. The chinook winds from the Rocky Mountains provide a temporary moderating effect on temperatures in the state's western portion during the winter. The United States Census Bureau estimates that the population of Nebraska was 1,934,408 on July 1, 2019, a 5.92% increase since the 2010 United States Census. The center of population of Nebraska is in Polk County, in the city of Shelby. The table below shows the racial composition of Nebraska's population as of 2016. According to the 2016 American Community Survey, 10.2% of Nebraska's population were of Hispanic or Latino origin (of any race): Mexican (7.8%), Puerto Rican (0.2%), Cuban (0.2%), and other Hispanic or Latino origin (2.0%). The five largest ancestry groups were: German (36.1%), Irish (13.1%), English (7.8%), Czech (4.7%), and American (4.0%). Nebraska has the largest Czech American and non-Mormon Danish American population (as a percentage of the total population) in the nation.
German Americans are the largest ancestry group in most of the state, particularly in the eastern counties. Thurston County (made up entirely of the Omaha and Winnebago reservations) has an American Indian majority, and Butler County is one of only two counties in the nation with a Czech-American plurality. In recent years, Nebraska has become home to many refugee communities. In 2016, it welcomed more refugees per capita than any other state. Nebraska, and in particular Lincoln, is home to the largest population of Yazidi refugees and Yazidi Americans in the United States. As of 2011, 31.0% of Nebraska's population younger than age 1 were minorities. "Note: Births in table don't add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number." The largest single denominations by number of adherents in 2010 were the Roman Catholic Church (372,838), the Lutheran Church–Missouri Synod (112,585), the Evangelical Lutheran Church in America (110,110) and the United Methodist Church (109,283). Eighty-nine percent of the cities in Nebraska have fewer than 3,000 people. Nebraska shares this characteristic with five other Midwestern states: Kansas, Oklahoma, North Dakota, South Dakota, and Iowa. Hundreds of towns have a population of fewer than 1,000. Regional population declines have forced many rural schools to consolidate. Fifty-three of Nebraska's 93 counties reported declining populations between 1990 and 2000, ranging from a 0.06% loss (Frontier County) to a 17.04% loss (Hitchcock County). More urbanized areas of the state have experienced substantial growth. In 2000, the city of Omaha had a population of 390,007; in 2005, the city's estimated population was 414,521 (427,872 including the recently annexed city of Elkhorn), a 6.3% increase over five years. The 2010 census showed that Omaha has a population of 408,958.
The city of Lincoln had a 2000 population of 225,581 and a 2010 population of 258,379, a 14.5% increase. As of the 2010 Census, there were 530 cities and villages in the state of Nebraska. There are five classifications of cities and villages in Nebraska, which are based upon population: Metropolitan Class City (300,000 or more); Primary Class City (100,000–299,999); First Class City (5,000–99,999); Second Class City (800–4,999); and Village (100–800). Second Class Cities and Villages make up the rest of the communities in Nebraska; there are 116 second-class cities and 382 villages in the state. All population figures are 2017 Census Bureau estimates unless flagged by a reference number. Nebraska has a progressive income tax. The portion of income from $0 to $2,400 is taxed at 2.56%; from $2,400 to $17,500, at 3.57%; from $17,500 to $27,000, at 5.12%; and income over $27,000, at 6.84%. The standard deduction for a single taxpayer is $5,700; the personal exemption is $118. Nebraska has a state sales and use tax of 5.5%. In addition to the state tax, some Nebraska cities assess a city sales and use tax, in 0.5% increments, up to a maximum of 1.5%. Dakota County levies an additional 0.5% county sales tax. Food and ingredients that are generally for home preparation and consumption are not taxable. All real property within the state of Nebraska is taxable unless specifically exempted by statute. Since 1992, only depreciable personal property is subject to tax and all other personal property is exempt from tax. Inheritance tax is collected at the county level. The Bureau of Economic Analysis estimate of Nebraska's gross state product in 2010 was $89.8 billion. Per capita personal income in 2004 was $31,339, 25th in the nation. Nebraska has a large agriculture sector, and is a major producer of beef, pork, corn (maize), soybeans, and sorghum.
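The bracketed income-tax schedule above can be sketched as a marginal-rate calculation. The bracket boundaries and rates come from the text ("the portion of income from … is taxed at …" indicates marginal rates); the function name and the no-deductions simplification are illustrative choices, and the schedule's tax year is not specified in the source.

```python
# Marginal income-tax sketch using the Nebraska brackets quoted in the text:
# 2.56% on income up to $2,400; 3.57% up to $17,500; 5.12% up to $27,000;
# 6.84% above that. Deductions and exemptions are ignored for simplicity.
BRACKETS = [
    (0, 2_400, 0.0256),
    (2_400, 17_500, 0.0357),
    (17_500, 27_000, 0.0512),
    (27_000, float("inf"), 0.0684),
]

def nebraska_tax(income: float) -> float:
    """Tax owed on `income`: each bracket taxes only the portion within it."""
    tax = 0.0
    for lower, upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
    return round(tax, 2)

print(nebraska_tax(20_000))  # 728.51
```

Note that only the slice of income inside each bracket is taxed at that bracket's rate, so crossing a bracket boundary never reduces after-tax income.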
Other important economic sectors include freight transport (by rail and truck), manufacturing, telecommunications, information technology, and insurance. As of November 2018, the state's unemployment rate was 2.8%, the fifth lowest in the nation. Kool-Aid was created in 1927 by Edwin Perkins in the city of Hastings, which celebrates the event the second weekend of every August with Kool-Aid Days, and Kool-Aid is the official soft drink of Nebraska. "CliffsNotes" were developed by Clifton Hillegass of Rising City. He adapted his pamphlets from the Canadian publications, "Coles Notes". Omaha is home to Berkshire Hathaway, whose chief executive officer (CEO), Warren Buffett, was ranked in March 2009 by "Forbes" magazine as the second-richest person in the world. The city is also home to Mutual of Omaha, InfoUSA, TD Ameritrade, West Corporation, Valmont Industries, Woodmen of the World, Kiewit Corporation, Union Pacific Railroad, and Gallup. Ameritas Life Insurance Corp., Nelnet, Sandhills Publishing Company, Duncan Aviation, and Hudl are based in Lincoln. The Buckle is based in Kearney. Sidney is the national headquarters for Cabela's, a specialty retailer of outdoor goods now owned by Bass Pro Shops. Grand Island is the headquarters of Hornady, a manufacturer of ammunition. The world's largest train yard, Union Pacific's Bailey Yard, is in North Platte. The Vise-Grip was invented by William Petersen in 1924, and was manufactured in De Witt until the plant was closed and moved to China in late 2008. Lincoln's Kawasaki Motors Manufacturing is the only Kawasaki plant in the world to produce the Jet Ski, all-terrain vehicle (ATV), and Mule product lines. The facility employs more than 1,200 people. The Spade Ranch, in the Sandhills, is one of Nebraska's oldest and largest beef cattle operations. Nebraska is the only state in the US where all electric utilities are publicly owned. 
The Union Pacific Railroad, headquartered in Omaha, was incorporated on July 1, 1862, in the wake of the Pacific Railway Act of 1862. Bailey Yard, in North Platte, is the largest railroad classification yard in the world. The route of the original transcontinental railroad runs through the state. Other major railroads with operations in the state are: Amtrak; BNSF Railway; Canadian National Railway; and Iowa Interstate Railroad. Nebraska's government operates under the framework of the Nebraska Constitution, adopted in 1875, and is divided into three branches: executive, legislative, and judicial. The head of the executive branch is Governor Pete Ricketts. Other elected officials in the executive branch are Lieutenant Governor Mike Foley, Attorney General Doug Peterson, Secretary of State Bob Evnen, State Treasurer John Murante, and State Auditor Charlie Janssen. All elected officials in the executive branch serve four-year terms. Nebraska is the only state in the United States with a unicameral legislature. Although this house is officially known simply as the "Legislature", and more commonly called the "Unicameral", its members call themselves "senators". Nebraska's Legislature is also the only state legislature in the United States that is officially nonpartisan. The senators are elected with no party affiliation next to their names on the ballot, and members of any party can be elected to the positions of speaker and committee chairs. The Nebraska Legislature can also override the governor's veto with a three-fifths majority, in contrast to the two-thirds majority required in some other states. When Nebraska became a state in 1867, its legislature consisted of two houses: a House of Representatives and a Senate. For years, U.S. Senator George Norris and other Nebraskans encouraged the idea of a unicameral legislature, and demanded the issue be decided in a referendum. 
Norris argued: The Legislature meets in the third Nebraska State Capitol building, built between 1922 and 1932. It was designed by Bertram G. Goodhue. Built from Indiana limestone, the capitol's base is a cross within a square. A 400-foot domed tower rises from this base. The Sower, a 19-foot bronze statue representing agriculture, crowns the building. The judicial system in Nebraska is unified, with the Nebraska Supreme Court having administrative authority over all the courts within the state. Nebraska uses the Missouri Plan for the selection of judges at all levels, including county courts (as the lowest-level courts) and twelve district courts, which contain one or more counties. The Nebraska Court of Appeals hears appeals from the district courts, juvenile courts, and workers' compensation courts; the Nebraska Supreme Court is the state's court of last resort. Nebraska's U.S. senators are Deb Fischer and Ben Sasse, both Republicans; Fischer, elected in 2012, is the senior. Nebraska has three representatives in the House of Representatives: Jeff Fortenberry (R) of the 1st district; Don Bacon (R) of the 2nd district; and Adrian Smith (R) of the 3rd district. Nebraska is one of two states (Maine is the other) that allow for a split in the state's allocation of electoral votes in presidential elections. Under a 1991 law, two of Nebraska's five votes are awarded to the winner of the statewide popular vote, while the other three go to the highest vote-getter in each of the state's three congressional districts. For most of its history, Nebraska has been a solidly Republican state. Republicans have carried the state in all but one presidential election since 1940: the 1964 landslide election of Lyndon B. Johnson. In the 2004 presidential election, George W.
Bush won the state's five electoral votes by a margin of 33 percentage points (making Nebraska's the fourth-strongest Republican vote among states) with 65.9% of the overall vote; only Thurston County, which is majority-Native American, voted for his Democratic challenger John Kerry. In 2008, the state split its electoral votes for the first time: Republican John McCain won the popular vote in Nebraska as a whole and two of its three congressional districts; the second district, which includes the city of Omaha, went for Democrat Barack Obama. Despite the current Republican domination of Nebraska politics, the state has a long tradition of electing centrist members of both parties to state and federal office; examples include George W. Norris (who served a few years in the Senate as an independent), J. James Exon, Bob Kerrey, and Chuck Hagel. Voters have tilted to the right in recent years, a trend evidenced when Hagel retired from the Senate in 2008 and was succeeded by conservative Republican Mike Johanns, as well as by the 2006 re-election of Ben Nelson, who was considered the most conservative Democrat in the Senate until his retirement in 2013. Johanns retired in 2015 and was succeeded by another conservative, Sasse. Nelson was replaced by conservative Republican Fischer. Former President Gerald Ford was born in Nebraska, but moved away shortly after birth. Illinois native William Jennings Bryan represented Nebraska in Congress, served as U.S. Secretary of State under President Woodrow Wilson, and unsuccessfully ran for president three times. Higher education in the state is provided by the University of Nebraska system, the Nebraska State College System, community colleges, and private colleges and universities. Nebraska is currently home to seven member schools of the NCAA, eight of the NAIA, seven of the NJCAA, one of the NCCAA, and one independent school. The College World Series has been held in Omaha since 1950.
It was held at Rosenblatt Stadium from 1950 through 2010, and at TD Ameritrade Park Omaha since 2011.
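Nebraska's electoral-vote split described above (two at-large votes to the statewide winner, one vote to the winner of each of the three congressional districts under the 1991 law) can be sketched as a small allocation function. The vote counts in the usage example are made up for illustration and mirror the 2008 pattern, in which one candidate won statewide and two districts while the other carried the second district.

```python
# Sketch of Nebraska's electoral-vote allocation: 2 at-large votes to the
# statewide popular-vote winner, plus 1 vote per congressional-district winner.
# Candidate names and vote totals below are hypothetical illustrations.
from collections import Counter

def allocate_electors(statewide: dict, districts: list) -> Counter:
    """statewide: candidate -> statewide votes;
    districts: one candidate -> votes mapping per congressional district."""
    electors = Counter()
    electors[max(statewide, key=statewide.get)] += 2  # two at-large votes
    for district in districts:
        electors[max(district, key=district.get)] += 1  # one vote per district
    return electors

result = allocate_electors(
    {"R": 60, "D": 40},                      # statewide winner: R
    [{"R": 5, "D": 3}, {"R": 2, "D": 6}, {"R": 7, "D": 1}],
)
print(dict(result))  # {'R': 4, 'D': 1} — a 4–1 split of the five votes
```

All five votes go to one candidate only when the same candidate carries the state and all three districts, which is why Nebraska's votes were unsplit in every election between the 1991 law and 2008.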
https://en.wikipedia.org/wiki?curid=21647
New Jersey New Jersey is a state in the Mid-Atlantic region of the Northeastern United States. It is bordered on the north and east by the state of New York; on the east, southeast, and south by the Atlantic Ocean; on the west by the Delaware River and Pennsylvania; and on the southwest by Delaware Bay and the State of Delaware. New Jersey is the fourth-smallest state by area but the 11th-most populous, with 8,882,190 residents as of 2019 and an area of 8,722.58 square miles, making it the most densely populated of the 50 U.S. states; its largest city is Newark. All but one county in New Jersey lie within the combined statistical areas of New York City or Philadelphia. New Jersey was the second-wealthiest U.S. state by median household income as of 2017. New Jersey was inhabited by Native Americans for more than 2,800 years, with historical tribes such as the Lenape along the coast. In the early 17th century, the Dutch and the Swedes founded the first European settlements in the state. The English later seized control of the region, naming it the Province of New Jersey after the largest of the Channel Islands, Jersey, and granting it as a colony to Sir George Carteret and John Berkeley, 1st Baron Berkeley of Stratton. New Jersey was the site of several important battles during the American Revolutionary War in the 18th century. In the 19th century, factories in the cities of Camden, Paterson, Newark, Trenton, Jersey City, and Elizabeth (known as the "Big Six") helped drive the Industrial Revolution. New Jersey's geographic location at the center of the Northeast megalopolis, between Boston and New York City to the northeast, and Philadelphia, Baltimore, and Washington, D.C., to the southwest, fueled its rapid growth through the process of suburbanization in the second half of the 20th century.
At the turn of the 21st century, this suburbanization began reverting with the consolidation of New Jersey's culturally diverse populace toward more urban settings within the state, with towns home to commuter rail stations outpacing the population growth of more automobile-oriented suburbs since 2008. As of 2018, New Jersey was home to the highest number of millionaires per capita of all U.S. states. New Jersey's public school system consistently ranks at or among the top of all fifty U.S. states. Around 180 million years ago, during the Jurassic Period, New Jersey bordered North Africa. The pressure of the collision between North America and Africa gave rise to the Appalachian Mountains. Around 18,000 years ago, the Ice Age resulted in glaciers that reached New Jersey. As the glaciers retreated, they left behind Lake Passaic, as well as many rivers, swamps, and gorges. New Jersey was originally settled by Native Americans, with the Lenni-Lenape being dominant at the time of contact. "" is the Lenape name for the land that is now New Jersey. The Lenape were several autonomous groups that practiced maize agriculture in order to supplement their hunting and gathering in the region surrounding the Delaware River, the lower Hudson River, and western Long Island Sound. The Lenape society was divided into matrilinear clans that were based upon common female ancestors. These clans were organized into three distinct phratries identified by their animal sign: Turtle, Turkey, and Wolf. They first encountered the Dutch in the early 17th century, and their primary relationship with the Europeans was through fur trade. The Dutch became the first Europeans to lay claim to lands in New Jersey. The Dutch colony of New Netherland consisted of parts of modern Middle Atlantic states. Although the European principle of land ownership was not recognized by the Lenape, Dutch West India Company policy required its colonists to purchase the land that they settled. 
The first to do so was Michiel Pauw, who established a patroonship called Pavonia in 1630 along the North River; the area eventually became Bergen. Peter Minuit's purchase of lands along the Delaware River established the colony of New Sweden. The entire region became a territory of England on June 24, 1664, after an English fleet under the command of Colonel Richard Nicolls sailed into what is now New York Harbor and took control of Fort Amsterdam, annexing the entire province. During the English Civil War, the Channel Island of Jersey remained loyal to the British Crown and gave sanctuary to the King. It was from the Royal Square in Saint Helier that Charles II of England was proclaimed King in 1649, following the execution of his father, Charles I. The North American lands were divided by Charles II, who gave his brother, the Duke of York (later King James II), the region between New England and Maryland as a proprietary colony (as opposed to a royal colony). James then granted the land between the Hudson River and the Delaware River (the land that would become New Jersey) to two friends who had remained loyal through the English Civil War: Sir George Carteret and Lord Berkeley of Stratton. The area was named the Province of New Jersey. Since the state's inception, New Jersey has been characterized by ethnic and religious diversity. New England Congregationalists settled alongside Scots Presbyterians and Dutch Reformed migrants. While the majority of residents lived in towns with individual landholdings of , a few rich proprietors owned vast estates. English Quakers and Anglicans owned large landholdings. Unlike Plymouth Colony, Jamestown and other colonies, New Jersey was populated by a secondary wave of immigrants who came from other colonies instead of those who migrated directly from Europe. New Jersey remained agrarian and rural throughout the colonial era, and commercial farming developed sporadically.
Some townships, such as Burlington on the Delaware River and Perth Amboy, emerged as important ports for shipping to New York City and Philadelphia. The colony's fertile lands and tolerant religious policy drew more settlers, and New Jersey's population had increased to 120,000 by 1775. Settlement for the first ten years of English rule took place along the Hackensack River and the Arthur Kill; settlers came primarily from New York and New England. On March 18, 1673, Berkeley sold his half of the colony to Quakers in England, who settled the Delaware Valley region as a Quaker colony. (William Penn acted as trustee for the lands for a time.) New Jersey was governed as two distinct provinces, East and West Jersey, for the 28 years between 1674 and 1702, at times part of the Province of New York or the Dominion of New England. In 1702, the two provinces were reunited under a royal governor rather than a proprietary one. Edward Hyde, Lord Cornbury, became the first governor of New Jersey as a royal colony. Britain considered him an ineffective and corrupt ruler who took bribes and speculated on land, and in 1708 he was recalled to England. New Jersey was then ruled by the governors of New York, but this infuriated the settlers of New Jersey, who accused those governors of favoritism toward New York. Judge Lewis Morris led the case for a separate governor and was appointed governor by King George II in 1738. New Jersey was one of the Thirteen Colonies that revolted against British rule in the American Revolution. The first New Jersey State Constitution was passed on July 2, 1776, just two days before the Second Continental Congress declared American independence from Great Britain. It was an act of the Provincial Congress, which made itself into the State Legislature. To reassure neutrals, it provided that it would become void if New Jersey reached reconciliation with Great Britain. 
New Jersey representatives Richard Stockton, John Witherspoon, Francis Hopkinson, John Hart, and Abraham Clark were among those who signed the United States Declaration of Independence on July 4, 1776. During the American Revolutionary War, British and American armies crossed New Jersey numerous times, and several pivotal battles took place in the state. Because of this, New Jersey today is often referred to as "The Crossroads of the American Revolution". The winter quarters of the Continental Army were established there twice by General George Washington in Morristown, which has been called "The Military Capital of the American Revolution". On the night of December 25–26, 1776, the Continental Army under George Washington crossed the Delaware River. After the crossing, he surprised and defeated the Hessian troops in the Battle of Trenton. Slightly more than a week after the victory at Trenton, American forces gained an important victory by stopping General Cornwallis's charges at the Second Battle of Trenton. By evading Cornwallis's army, Washington made a surprise attack on Princeton and successfully defeated the British forces there on January 3, 1777. Emanuel Leutze's painting "Washington Crossing the Delaware" became an icon of the Revolution. American forces under Washington met the forces under General Henry Clinton at the Battle of Monmouth in an indecisive engagement in June 1778. Washington attempted to take the British column by surprise; when the British tried to flank the Americans, the Americans retreated in disorder. The ranks were later reorganized and withstood the British charges. In the summer of 1783, the Continental Congress met in Nassau Hall at Princeton University, making Princeton the nation's capital for four months. It was there that the Continental Congress learned of the signing of the Treaty of Paris (1783), which ended the war. 
On December 18, 1787, New Jersey became the third state to ratify the United States Constitution, which was overwhelmingly popular in New Jersey, as it prevented New York and Pennsylvania from charging tariffs on goods imported from Europe. On November 20, 1789, the state became the first in the newly formed Union to ratify the Bill of Rights. The 1776 New Jersey State Constitution gave the vote to "all inhabitants" who had a certain level of wealth. This included women and blacks, but not married women, because they could not own property separately from their husbands. In several elections, each side claimed that the other had let unqualified women vote and mocked them as "petticoat electors", whether entitled to vote or not; on the other hand, both parties passed voting rights acts. In 1807, the legislature passed a bill interpreting the constitution to mean universal "white male" suffrage, excluding paupers; the constitution was itself an act of the legislature and not entrenched like the modern constitution. On February 15, 1804, New Jersey became the last northern state to begin abolishing slavery, enacting legislation that slowly phased out existing slavery. This led to a gradual decrease in the slave population. By the close of the Civil War, about a dozen African Americans in New Jersey were still held in bondage. New Jersey voters initially refused to ratify the constitutional amendments banning slavery and granting rights to the United States' black population. Industrialization accelerated in the northern part of the state following completion of the Morris Canal in 1831. The canal allowed coal to be brought from eastern Pennsylvania's Lehigh Valley to northern New Jersey's growing industries in Paterson, Newark, and Jersey City. In 1844, the second state constitution was ratified and brought into effect. 
Counties thereby became districts for the State Senate, and some realignment of boundaries (including the creation of Mercer County) immediately followed. This provision was retained in the 1947 Constitution, but was overturned by the Supreme Court of the United States in 1962 by the decision "Baker v. Carr". While the governorship was stronger than under the 1776 constitution, the constitution of 1844 created many offices that were not responsible to the governor or to the people; it gave him a three-year term, but he could not succeed himself. New Jersey was one of the few Union states (the others being Delaware and Kentucky) to select a candidate other than Abraham Lincoln twice in national elections, siding with Stephen Douglas (1860) and George B. McClellan (1864) during their campaigns. McClellan, a native Philadelphian, had New Jersey ties and formally resided in New Jersey at the time; he later became Governor of New Jersey (1878–81). (In New Jersey, the factions of the Democratic party managed an effective coalition in 1860.) During the American Civil War, the state was led first by Republican Governor Charles Smith Olden, then by Democrat Joel Parker. Over the course of the war, more than 80,000 men from the state enlisted in the Northern army; unlike many states, including some Northern ones, New Jersey saw no battle fought on its soil. In the Industrial Revolution, cities like Paterson grew and prospered. Previously, the economy had been largely agrarian and vulnerable to crop failures and poor soil. This caused a shift to a more industrialized economy, one based on manufactured commodities such as textiles and silk. Inventor Thomas Edison also became an important figure of the Industrial Revolution, having been granted 1,093 patents, many of them for inventions he developed while working in New Jersey. Edison's facilities, first at Menlo Park and then in West Orange, are considered perhaps the first research centers in the United States. 
Christie Street in Menlo Park was the first thoroughfare in the world to have electric lighting. Transportation was greatly improved as locomotives and steamboats were introduced to New Jersey. Iron mining was also a leading industry during the middle to late 19th century. Bog iron pits in the southern New Jersey Pinelands were among the first sources of iron for the new nation. Mines such as Mt. Hope, Mine Hill, and the Rockaway Valley Mines created a thriving industry. Mining generated the impetus for new towns and was one of the driving forces behind the need for the Morris Canal. Zinc mines were also a major industry, especially the Sterling Hill Mine. New Jersey prospered through the Roaring Twenties. The first Miss America Pageant was held in 1921 in Atlantic City, the Holland Tunnel connecting Jersey City to Manhattan opened in 1927, and the first drive-in movie was shown in 1933 in Camden. During the Great Depression of the 1930s, the state offered begging licenses to unemployed residents, the zeppelin airship Hindenburg crashed in flames over Lakehurst, and the SS "Morro Castle" beached itself near Asbury Park after going up in flames while at sea. Through both World Wars, New Jersey was a center for war production, especially naval construction. The Federal Shipbuilding and Drydock Company yards in Kearny and Newark and the New York Shipbuilding Corporation yard in Camden produced aircraft carriers, battleships, cruisers, and destroyers. New Jersey manufactured 6.8 percent of total United States military armaments produced during World War II, ranking fifth among the 48 states. In addition, Fort Dix (1917) (originally called "Camp Dix"), Camp Merritt (1917), and Camp Kilmer (1941) were all constructed to house and train American soldiers through both World Wars. New Jersey also became a principal location for defense in the Cold War. Fourteen Nike missile stations were constructed for the defense of the New York City and Philadelphia areas. 
"PT-109", a motor torpedo boat commanded by Lt. (j.g.) John F. Kennedy in World War II, was built at the Elco Boatworks in Bayonne. The aircraft carrier USS "Enterprise" (CV-6) was briefly docked at the Military Ocean Terminal in Bayonne in the 1950s before she was sent to Kearny to be scrapped. In 1959, the world's first nuclear-powered cargo ship, the NS "Savannah", was launched at Camden. In 1951, the New Jersey Turnpike opened, permitting fast travel by car and truck between North Jersey (and metropolitan New York) and South Jersey (and metropolitan Philadelphia). In 1959, Air Defense Command deployed the CIM-10 Bomarc surface-to-air missile to McGuire Air Force Base. On June 7, 1960, an explosion in a CIM-10 Bomarc missile fuel tank caused an accident and subsequent plutonium contamination. In the 1960s, race riots erupted in many of the industrial cities of North Jersey. The first race riots in New Jersey occurred in Jersey City on August 2, 1964. Several others ensued in 1967, in Newark and Plainfield. Other riots followed the assassination of Dr. Martin Luther King Jr. in April 1968, just as in the rest of the country. A riot occurred in Camden in 1971. As a result of an order from the New Jersey Supreme Court to fund schools equitably, the New Jersey legislature passed an income tax bill in 1976. Prior to this bill, the state had no income tax. In the early 2000s, two light rail systems were opened: the Hudson–Bergen Light Rail in Hudson County and the River Line between Camden and Trenton. The intent of these projects was to encourage transit-oriented development in North Jersey and South Jersey, respectively. The HBLR was credited with the revitalization of Hudson County, and of Jersey City in particular. Urban revitalization has continued in North Jersey in the 21st century. 
As of 2014, Jersey City's Census-estimated population was 262,146, with the largest population increase of any municipality in New Jersey since 2010, representing an increase of 5.9% from the 2010 United States Census, when the city's population was enumerated at 247,597. Between 2000 and 2010, Newark experienced its first population increase since the 1950s. New Jersey is bordered on the north and northeast by New York (parts of which are across the Hudson River, Upper New York Bay, the Kill Van Kull, Newark Bay, and the Arthur Kill); on the east by the Atlantic Ocean; on the southwest by Delaware across Delaware Bay; and on the west by Pennsylvania across the Delaware River. New Jersey is often broadly divided into three geographic regions: North Jersey, Central Jersey, and South Jersey. Some New Jersey residents do not consider Central Jersey a region in its own right, but others believe it is a separate geographic and cultural area from the North and South. Within those regions are five distinct areas, based upon natural geography and population concentration. Northeastern New Jersey lies closest to Manhattan in New York City, and up to a million residents commute daily into the city for work, often via public transportation. Northwestern New Jersey is more wooded, rural, and mountainous. The Jersey shore, along the Atlantic Coast in Central and South Jersey, has its own unique natural, residential, and cultural characteristics owing to its location by the ocean. The Delaware Valley includes the southwestern counties of the state, which reside within the Philadelphia Metropolitan Area. The Pine Barrens region is in the southern interior of New Jersey. Covered rather extensively by mixed pine and oak forest, it has a much lower population density than much of the rest of the state. 
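The 5.9% growth figure quoted for Jersey City above follows directly from the two census counts; the arithmetic can be sketched in a few lines (illustrative only, not part of the source data):

```python
# Jersey City population figures quoted in the text
pop_2010 = 247_597  # 2010 United States Census count
pop_2014 = 262_146  # 2014 Census-estimated population

# Relative increase, expressed as a percentage of the 2010 base
increase_pct = (pop_2014 - pop_2010) / pop_2010 * 100
print(round(increase_pct, 1))  # 5.9, matching the quoted increase
```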
The federal Office of Management and Budget divides New Jersey's counties into seven Metropolitan Statistical Areas, with 16 counties included in either the New York City or Philadelphia metro areas. Four counties have independent metro areas, and Warren County is part of the Pennsylvania-based Lehigh Valley metro area. New Jersey is also at the center of the Northeast megalopolis. High Point, in Montague Township, Sussex County, is the state's highest elevation, at above sea level. The state's highest prominence is Kitty Ann Mountain in Morris County, rising 892 feet. The Palisades are a line of steep cliffs on the west side of the Hudson River, in Bergen and Hudson Counties. Major New Jersey rivers include the Hudson, Delaware, Raritan, Passaic, Hackensack, Rahway, Musconetcong, Mullica, Rancocas, Manasquan, Maurice, and Toms rivers. Due to New Jersey's peninsular geography, both sunrise and sunset are visible over water from different points on the Jersey Shore. There are two climatic conditions in the state. The south, central, and northeast parts of the state have a humid subtropical climate, while the northwest has a humid continental climate (microthermal), with much cooler temperatures due to higher elevation. New Jersey receives between 2,400 and 2,800 hours of sunshine annually. Climate change is affecting New Jersey faster than much of the rest of the United States. As of 2019, New Jersey was one of the fastest-warming states in the nation. Since 1895, average temperatures have climbed by almost 3.6 degrees Fahrenheit, double the average for the other Lower 48 states. Summers are typically hot and humid, with statewide average high temperatures of and lows of ; however, temperatures exceed on average 25 days each summer, exceeding in some years. Winters are usually cold, with average high temperatures of and lows of for most of the state, but temperatures can, for brief periods, fall below and sometimes rise above . 
Northwestern parts of the state have significantly colder winters, with sub- temperatures an almost annual occurrence. Spring and autumn may feature wide temperature variations, with lower humidity than summer. The USDA Plant Hardiness Zone classification ranges from 6 in the northwest of the state to 7B near Cape May. All-time temperature extremes recorded in New Jersey include on July 10, 1936 in Runyon, Middlesex County and on January 5, 1904 in River Vale, Bergen County. Average annual precipitation ranges from , uniformly spread through the year. Average snowfall per winter season ranges from in the south and near the seacoast, in the northeast and central part of the state, to about in the northwestern highlands, though this often varies considerably from year to year. Precipitation falls on an average of 120 days a year, with 25 to 30 thunderstorms, most of which occur during the summer. During winter and early spring, New Jersey can experience "nor'easters", which are capable of causing blizzards or flooding throughout the northeastern United States. Hurricanes and tropical storms (such as Tropical Storm Floyd in 1999), tornadoes, and earthquakes are rare, although New Jersey was impacted by a hurricane in 1903 and by Hurricane Sandy, which made landfall in the state on October 29, 2012 with top winds of . The United States Census Bureau estimates that the population of New Jersey was 8,882,190 on July 1, 2019, a 1.03% increase since the 2010 United States Census. Residents of New Jersey are most commonly referred to as "New Jerseyans" or, less commonly, as "New Jerseyites". As of the 2010 census, there were 8,791,894 people living in the state; 17.7% of the population were Hispanic or Latino (of any race). Non-Hispanic Whites were 58.9% of the population in 2011, down from 85% in 1970. In 2010, unauthorized immigrants constituted an estimated 6.2% of the population. 
This was the fourth-highest percentage of any state in the country. There were an estimated 550,000 illegal immigrants in the state in 2010. Among the municipalities considered sanctuary cities are Camden, Jersey City, and Newark. The United States Census Bureau estimated New Jersey's population at 8,882,190, an increase of 213,750, or 2.4%, since the last census in 2010. As of 2010, New Jersey was the eleventh-most populous state in the United States, and the most densely populated, at 1,185 residents per square mile (458 per km2), with most of the population residing in the counties surrounding New York City, Philadelphia, and along the eastern Jersey Shore, while the extreme southern and northwestern counties are relatively less dense overall. It is also the second-wealthiest state according to the U.S. Census Bureau. The center of population for New Jersey is located in Middlesex County, in the town of Milltown, just east of the New Jersey Turnpike. New Jersey is home to more scientists and engineers per square mile than anywhere else in the world. On October 21, 2013, same-sex marriages commenced in New Jersey. New Jersey is one of the most ethnically and religiously diverse states in the United States. As of 2011, 56.4% of New Jersey's children under the age of one belonged to racial or ethnic minority groups, meaning that they had at least one parent who was not non-Hispanic white. The state has the second-largest Jewish population by percentage (after New York); the second-largest Muslim population by percentage (after Michigan); the largest population of Peruvians in the United States; the largest population of Cubans outside of Florida; the third-highest Asian population by percentage; and the second-highest Italian population, according to the 2000 Census. African Americans, Hispanics (Puerto Ricans and Dominicans), West Indians, Arabs, and Brazilian and Portuguese Americans are also high in number. 
New Jersey has the third-highest Asian Indian population of any state by absolute numbers and the highest by percentage, with Bergen County home to America's largest Malayali community. Overall, New Jersey has the third-largest Korean population, with Bergen County home to the highest Korean concentration per capita of any U.S. county (6.9% in 2011). New Jersey also has the fourth-largest Filipino population and the fourth-largest Chinese population, per the 2010 U.S. Census. The five largest ethnic groups in 2000 were: Italian (17.9%), Irish (15.9%), African American (13.6%), German (12.6%), and Polish (6.9%). India Square, known as Little Bombay, in Jersey City, Hudson County, is home to the highest concentration of Asian Indians in the Western Hemisphere. Meanwhile, Central New Jersey, particularly Edison and surrounding Middlesex County, is prominently known for its significant concentration of Asian Indians. The world's largest Hindu temple, a BAPS temple, was inaugurated in Robbinsville in 2014. The growing Little India is a South Asian-focused commercial strip in Middlesex County, the U.S. county with the highest concentration of Asian Indians. The Oak Tree Road strip runs for about one-and-a-half miles through Edison and neighboring Iselin in Woodbridge Township, near the area's sprawling Chinatown and Koreatown, running along New Jersey Route 27. It is the largest and most diverse South Asian cultural hub in the United States. Carteret's Punjabi Sikh community, variously estimated at upwards of 3,000, is the largest concentration of Sikhs in the state. Monroe Township in Middlesex County has experienced a particularly rapid growth rate in its Indian American population, with an estimated 5,943 (13.6%) as of 2017, 23 times the 256 (0.9%) counted in the 2000 Census; Diwali is celebrated by the township as a Hindu holiday. In Middlesex County, election ballots are printed in English, Spanish, Gujarati, Hindi, and Punjabi. Newark was the fourth poorest of U.S. 
cities with over 250,000 residents in 2008, but New Jersey as a whole had the second-highest median household income as of 2014. This is largely because so much of New Jersey consists of suburbs, most of them affluent, of New York City and Philadelphia. New Jersey is also the most densely populated state, and the only state in which every county has been deemed "urban" as defined by the Census Bureau's Combined Statistical Area. In 2010, 6.2% of its population was reported as under age 5, 23.5% under 18, and 13.5% as 65 or older; females made up approximately 51.3% of the population. A study by the Pew Research Center found that in 2013, New Jersey was the only U.S. state in which immigrants born in India constituted the largest foreign-born nationality, representing roughly 10% of all foreign-born residents in the state. As of 2010, 71.31% (5,830,812) of New Jersey residents age 5 and older spoke English at home as a primary language, while 14.59% (1,193,261) spoke Spanish, 1.23% (100,217) Chinese (which includes Cantonese and Mandarin), 1.06% (86,849) Italian, 1.06% (86,486) Portuguese, and 0.96% (78,627) Tagalog; Korean was spoken as a main language by 0.89% (73,057) of the population over the age of five. In total, 28.69% (2,345,644) of New Jersey's population age 5 and older spoke a mother language other than English. 
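The home-language figures above are internally consistent: the English-at-home count and the other-language count together make up the age-5+ population, and each quoted percentage is that group's share of the sum. A quick arithmetic check (illustrative only):

```python
# Home-language counts for New Jersey residents age 5 and older (2010),
# as quoted in the text
english = 5_830_812  # spoke English at home (quoted as 71.31%)
other = 2_345_644    # spoke another language at home (quoted as 28.69%)
spanish = 1_193_261  # spoke Spanish (quoted as 14.59%)

total = english + other  # the two groups partition the age-5+ population

# Each quoted percentage is the group's share of that total
assert round(english / total * 100, 2) == 71.31
assert round(other / total * 100, 2) == 28.69
assert round(spanish / total * 100, 2) == 14.59
```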
A diverse collection of languages has since evolved amongst the state's population, given that New Jersey has become cosmopolitan and is home to ethnic enclaves of non-English-speaking communities. By number of adherents, the largest denominations in New Jersey, according to the Association of Religion Data Archives in 2010, were the Roman Catholic Church with 3,235,290; Islam with 160,666; and the United Methodist Church with 138,052. In January 2018, Gurbir Grewal became the first Sikh American state attorney general in the United States. In January 2019, Sadaf Jaffer of Montgomery in Somerset County became the first female Muslim American mayor, first female South Asian mayor, and first female Pakistani-American mayor in the United States. For its overall population and nation-leading population density, New Jersey has a relative paucity of classic large cities. This paradox is most pronounced in Bergen County, New Jersey's most populous county, whose more than 930,000 residents in 2019 inhabited 70 municipalities, the most populous being Hackensack, with 44,522 residents estimated in 2018. Many urban areas extend far beyond the limits of a single large city, as New Jersey cities (and indeed municipalities in general) tend to be geographically small; three of the four largest cities in New Jersey by population have under 20 square miles of land area, and eight of the top ten, including all of the top five, have land area under 30 square miles. Only four municipalities had populations in excess of 100,000, although Edison and Woodbridge came very close. The U.S. Bureau of Economic Analysis estimates that New Jersey's gross state product in the fourth quarter of 2018 was $639.8 billion. New Jersey's estimated taxpayer burden in 2015 was $59,400 per taxpayer. New Jersey is nearly $239 billion in debt. 
New Jersey's per capita gross state product in 2008 was $54,699, second in the U.S. and above the national per capita gross domestic product of $46,588. Its per capita income, $51,358, was the third highest in the nation. In 2018, New Jersey had the highest number of millionaires per capita in the United States (approximately 9% of households), according to a study by Phoenix Marketing International. The state is ranked second in the nation by the number of places with per capita incomes above the national average, with 76.4%. Nine of New Jersey's counties are among the 100 wealthiest U.S. counties. New Jersey has seven tax brackets that determine state income tax rates, which range from 1.4% (for income below $20,000) to 8.97% (for income above $500,000). The standard sales tax rate as of January 1, 2018, is 6.625%, applicable to all retail sales unless specifically exempt by law. This rate, comparatively lower than New York City's, attracts numerous shoppers from New York City, often to suburban Paramus, New Jersey, which has five malls, one of which (the Garden State Plaza) has over two million square feet of retail space. Tax exemptions include most food items for at-home preparation, medications, most clothing, footwear, and disposable paper products for use in the home. There are 27 Urban Enterprise Zones statewide, including sections of Paterson, Elizabeth, and Jersey City. In addition to other benefits to encourage employment within the zones, shoppers can take advantage of a reduced 3.3125% sales tax rate (half the rate charged statewide) at eligible merchants. New Jersey has the highest cumulative tax rate of all 50 states, with residents paying a total of $68 billion in state and local taxes annually, a per capita burden of $7,816 at a rate of 12.9% of income. All real property located in the state is subject to property tax unless specifically exempted by statute. 
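The two sales-tax rates quoted above are related by an exact factor of two. A minimal sketch of the computation (the `sales_tax` helper is hypothetical, for illustration only):

```python
# New Jersey sales-tax rates quoted in the text (as of January 1, 2018)
STATEWIDE_RATE = 6.625  # percent, standard statewide rate
UEZ_RATE = 3.3125       # percent, reduced Urban Enterprise Zone rate

# The UEZ rate is exactly half the statewide rate
assert UEZ_RATE * 2 == STATEWIDE_RATE

def sales_tax(price_cents: int, rate_percent: float) -> int:
    """Tax due in whole cents on a retail purchase (hypothetical helper)."""
    return round(price_cents * rate_percent / 100)

# A $150.00 purchase: $9.94 tax statewide vs. $4.97 in a UEZ
print(sales_tax(15_000, STATEWIDE_RATE))  # 994
print(sales_tax(15_000, UEZ_RATE))        # 497
```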
New Jersey does not assess an intangible personal property tax, but it does impose an inheritance tax. New Jersey consistently ranks as having one of the highest levels of disparity of any state between what it receives from the federal government and what it pays in. In 2015, WalletHub ranked New Jersey the state least dependent upon federal government aid overall, with the fourth-lowest return on taxpayer investment from the federal government, at 48 cents per dollar. New Jersey has one of the highest tax burdens in the nation. Factors include a large federal tax liability that is not adjusted for New Jersey's higher cost of living, as well as Medicaid funding formulas. New Jersey's economy is multifaceted, but is centered on the pharmaceutical industry, biotechnology, information technology, the financial industry, chemical development, telecommunications, food processing, electric equipment, printing, publishing, and tourism. New Jersey's agricultural outputs are nursery stock, horses, vegetables, fruits and nuts, seafood, and dairy products. New Jersey ranks second among states in blueberry production, third in cranberries and spinach, and fourth in bell peppers, peaches, and head lettuce. The state harvests the fourth-largest number of acres planted with asparagus. Although New Jersey is home to many energy-intensive industries, its energy consumption is only 2.7% of the U.S. total, and its carbon dioxide emissions are 0.8% of the U.S. total. Its comparatively low greenhouse gas emissions can be attributed to the state's use of nuclear power. According to the Energy Information Administration, nuclear power dominates New Jersey's electricity market, typically supplying more than half of state generation. New Jersey has three nuclear power plants, including the Oyster Creek Nuclear Generating Station, which came online in 1969 and is the oldest operating nuclear plant in the country. 
New Jersey has a strong scientific economy and is home to major pharmaceutical and telecommunications firms, drawing on the state's large and well-educated labor pool. There is also a strong service economy in retail sales, education, and real estate, serving residents who work in New York City or Philadelphia. Shipping is a key industry in New Jersey because of the state's strategic geographic location, the Port of New York and New Jersey being the busiest port on the East Coast. The Port Newark-Elizabeth Marine Terminal was the world's first container port and today is one of the world's largest. New Jersey hosts several business headquarters, including twenty-four Fortune 500 companies. Paramus in Bergen County has become the top retail ZIP code (07652) in the United States, with the municipality generating over US$6 billion in annual retail sales. Several New Jersey counties, including Somerset (7), Morris (10), Hunterdon (13), Bergen (21), and Monmouth (42), have been ranked among the highest-income counties in the United States. New Jersey's location at the center of the Northeast megalopolis and its extensive transportation system have put over one-third of all United States residents and many Canadian residents within overnight distance by land. This accessibility to consumer revenue has enabled seaside resorts such as Atlantic City and the remainder of the Jersey Shore, as well as the state's other natural and cultural attractions, to contribute significantly to the record 111 million tourist visits to New Jersey in 2018, providing US$44.7 billion in tourism revenue, directly supporting 333,860 jobs, sustaining more than 531,000 jobs overall including peripheral impacts, and generating US$5 billion in state and local tax revenue. In 1976, a referendum of New Jersey voters approved casino gambling in Atlantic City, where the first legalized casino opened in 1978. At that time, Las Vegas was the only other casino resort in the country. 
Today, several casinos lie along the Atlantic City Boardwalk, the first and longest boardwalk in the world. Atlantic City experienced a dramatic contraction in its stature as a gambling destination after 2010, including the closure of multiple casinos since 2014, spurred by competition from the advent of legalized gambling in other northeastern U.S. states. On February 26, 2013, Governor Chris Christie signed online gambling into law. Sports betting has become a growing source of gambling revenue in New Jersey since the U.S. Supreme Court cleared the way for its nationwide legalization on May 14, 2018. Forests cover 45%, or approximately 2.1 million acres, of New Jersey's land area. The chief tree of the northern forests is the oak. The Pine Barrens, consisting of pine forests, is in the southern part of the state. Some mining of zinc, iron, and manganese still takes place in and around Franklin Furnace. New Jersey is second in the nation in solar power installations, enabled by one of the country's most favorable net metering policies and its renewable energy certificates program. The state has more than 10,000 solar installations. In 2010, there were 605 school districts in the state. Secretary of Education Rick Rosenberg, appointed by Governor Jon Corzine, created the Education Advancement Initiative (EAI) to increase college admission rates by 10% for New Jersey's high school students, decrease dropout rates by 15%, and increase the amount of money devoted to schools by 10%. Rosenberg retracted this plan when criticized for taking the money out of healthcare to fund this initiative. In 2010, the state government paid all of the teachers' premiums for health insurance, but currently all NJ public teachers pay a portion of their own health insurance premiums. In 2015, New Jersey spent more per public school student than any other U.S. state except New York, Alaska, and Connecticut, amounting to $18,235 spent per pupil. 
Over 50% of the expenditure was allocated to student instruction. According to 2011 "Newsweek" statistics, students of High Technology High School in Lincroft, Monmouth County, and Bergen County Academies in Hackensack, Bergen County, registered average SAT scores of 2145 and 2100, respectively, the second- and third-highest of all listed U.S. high schools. Princeton University in Princeton, Mercer County, one of the world's most prominent research universities, is often featured at or near the top of various national and global university rankings, topping the 2020 list of "U.S. News & World Report". In 2013, Rutgers University, headquartered in New Brunswick, Middlesex County, as the flagship institution of higher education in New Jersey, gained medical and dental schools, augmenting its profile as a national research university. In 2014, New Jersey's school systems were ranked at the top of all fifty U.S. states by financial website Wallethub.com. In 2018, New Jersey's overall educational system was ranked second among all states, behind Massachusetts, by "U.S. News & World Report". In 2019, "Education Week" also ranked New Jersey public schools the best of all U.S. states. Nine New Jersey high schools were ranked among the top 25 in the U.S. on the "Newsweek" "America's Top High Schools 2016" list, more than any other state. A 2017 UCLA Civil Rights Project study found that New Jersey has the sixth-most segregated classrooms in the United States. New Jersey has continued to play a prominent role as a U.S. cultural nexus. Like every state, New Jersey has its own cuisine, religious communities, and museums. New Jersey is the birthplace of modern inventions such as FM radio, the motion picture camera, the lithium battery, the light bulb, the transistor, and the electric train. 
Other New Jersey creations include the drive-in movie, the cultivated blueberry, cranberry sauce, the postcard, the boardwalk, the zipper, the phonograph, saltwater taffy, the dirigible, the seedless watermelon, the first use of a submarine in warfare, and the ice cream cone. Diners are iconic to New Jersey. The state is home to many diner manufacturers and has over 600 diners, more than any other place in the world. New Jersey is the only state without a state song. "I'm From New Jersey" is incorrectly listed on many websites as the New Jersey state song, but it was not even a contender when the New Jersey Arts Council submitted its suggestions to the New Jersey Legislature in 1996. New Jersey is frequently the target of jokes in American culture, especially from New York City-based television shows such as "Saturday Night Live". Academic Michael Aaron Rockland attributes this to New Yorkers' view that New Jersey is the beginning of Middle America. The New Jersey Turnpike, which runs between two major East Coast cities, New York City and Philadelphia, is also cited as a reason, as people who travel through the state may see only its industrial zones. Reality television shows like "Jersey Shore" and "The Real Housewives of New Jersey" have reinforced stereotypical views of New Jersey culture, but Rockland cited "The Sopranos" and the music of Bruce Springsteen as exporting a more positive image. New Jersey is known for several foods developed within the region, including Taylor Ham (also known as pork roll), cheesesteaks, and scrapple. Several states with substantial Italian American populations take credit for the development of submarine sandwiches, including New Jersey. New Jersey has long been an important origin for both rock and rap music. 
Many prominent musicians hail from or have significant connections to New Jersey. New Jersey currently has six teams from major professional sports leagues playing in the state, although one Major League Soccer team and two National Football League teams identify themselves as being from the New York metropolitan area. The National Hockey League's New Jersey Devils, based in Newark at the Prudential Center, is the only major league sports franchise to bear the state's name. Founded in 1974 in Kansas City, Missouri, as the Kansas City Scouts, the team played in Denver, Colorado, as the Colorado Rockies from 1976 until the spring of 1982, when naval architect, businessman, and Jersey City native John J. McMullen purchased the franchise, renamed it, and moved it to Brendan Byrne Arena in East Rutherford's Meadowlands Sports Complex. While the team had mostly losing records in Kansas City, Denver, and its first years in New Jersey, the Devils began to improve in the late 1980s and early 1990s under Hall of Fame president and general manager Lou Lamoriello. The team reached the Stanley Cup Finals in 2001 and 2012, and won the Cup in 1995, 2000, and 2003. The organization is the youngest of the nine major league teams in the New York metropolitan area. The Devils have established a following throughout the northern and central portions of the state, carving a place in a media market once dominated by the New York Rangers and Islanders. In 2018, the Philadelphia Flyers renovated and expanded their training facility, the Virtua Center Flyers Skate Zone, in Voorhees Township in the southern portion of the state. The New York metropolitan area's two National Football League teams, the New York Giants and the New York Jets, play at MetLife Stadium in East Rutherford's Meadowlands Sports Complex. Built for about $1.6 billion, the venue is the most expensive stadium ever built. On February 2, 2014, MetLife Stadium hosted Super Bowl XLVIII. 
The New York Red Bulls of Major League Soccer play in Red Bull Arena, a soccer-specific stadium in Harrison, across the Passaic River from downtown Newark. On July 27, 2011, Red Bull Arena hosted the 2011 MLS All-Star Game. From 1977 to 2012, New Jersey had a National Basketball Association team, the New Jersey Nets. The WNBA's New York Liberty played in New Jersey from 2011 to 2013 while their primary home arena, Madison Square Garden, was undergoing renovations. In 2016, the Philadelphia 76ers of the NBA opened their new headquarters and training facility, the Philadelphia 76ers Training Complex, in Camden. The Meadowlands Sports Complex is home to the Meadowlands Racetrack, one of three major harness racing tracks in the state. The Meadowlands Racetrack and Freehold Raceway in Freehold are two of the major harness racing tracks in North America. Monmouth Park Racetrack in Oceanport is a popular spot for thoroughbred racing in New Jersey and the Northeast. It hosted the Breeders' Cup in 2007, and its turf course was renovated in preparation. New Jerseyans' collegiate allegiances are predominantly split among the three major NCAA Division I programs in the state: the Rutgers University (New Jersey's flagship state university) Scarlet Knights, members of the Big Ten Conference; the Seton Hall University (the state's largest Catholic university) Pirates, members of the Big East Conference; and the Princeton University (the state's Ivy League university) Tigers. The intense rivalry between Rutgers and Princeton athletics began with the first intercollegiate football game in 1869. The schools have not met on the football field since 1980, but they continue to play each other annually in all other sports offered by the two universities. Rutgers, which fields 24 teams in various sports, is nationally known for its football program, with a 6–4 all-time bowl record, and its women's basketball program, which appeared in a National Final in 2007. 
In 2008 and 2009, Rutgers expanded its football home, HighPoint.com Stadium, on the Busch Campus. The basketball teams play at the Louis Brown Athletic Center on the Livingston Campus. Both venues and campuses are in Piscataway, across the Raritan River from New Brunswick. The university also fields men's basketball and baseball programs. Rutgers' fans live mostly in the western parts of the state and Middlesex County; its alumni base is the largest in the state. Rutgers' satellite campuses in Camden and Newark each field their own athletic programs—the Rutgers–Camden Scarlet Raptors and the Rutgers–Newark Scarlet Raiders—which both compete in NCAA Division III. Seton Hall fields no football team, but its men's basketball team is one of the Big East's storied programs. No New Jersey team has won more games in the NCAA Division I Men's Basketball Tournament, and it is the state's only men's basketball program to reach a modern National Final. The Pirates play their home games at the Prudential Center in downtown Newark, about four miles from the university's South Orange campus. Their fans hail largely from the predominantly Roman Catholic areas of the northern part of the state and the Jersey Shore. The annual inter-conference rivalry game between Seton Hall and Rutgers, the Garden State Hardwood Classic, whose venue alternates between Newark and Piscataway, is planned through 2026. The state's other Division I schools include the Monmouth University Hawks (West Long Branch), the New Jersey Institute of Technology (NJIT) Highlanders (Newark), the Rider University Broncs (Lawrenceville), and the Saint Peter's University Peacocks and Peahens (Jersey City). Fairleigh Dickinson University competes in both Division I and Division III. It has two campuses, each with its own sports teams. The teams at the Metropolitan Campus are known as the FDU Knights and compete in the Northeast Conference and NCAA Division I. 
The College at Florham (FDU-Florham) teams are known as the FDU-Florham Devils and compete in the Middle Atlantic Conferences' Freedom Conference and NCAA Division III. Among the various Division III schools in the state, the Stevens Institute of Technology Ducks have fielded the longest continuously running collegiate men's lacrosse program in the country; 2009 marked the program's 125th season. New Jersey high schools are divided into divisions under the New Jersey State Interscholastic Athletic Association (NJSIAA). Founded in 1918, the NJSIAA currently represents 22,000 schools, 330,000 coaches, and almost 4.5 million athletes. Motion picture technology was developed by Thomas Edison, with much of his early work done at his West Orange laboratory. Edison's Black Maria was the first motion picture studio. America's first motion picture industry started in 1907 in Fort Lee, and the first studio was constructed there in 1909. DuMont Laboratories in Passaic developed early television sets and made the first broadcast to a private home. A number of television shows and films have been filmed in New Jersey. Since 1978, the state has maintained a Motion Picture and Television Commission to encourage filming in-state. New Jersey has long offered tax credits to television producers. Governor Chris Christie suspended the credits in 2010, but the New Jersey State Legislature in 2011 approved the restoration and expansion of the tax credit program. Under bills passed by both the state Senate and Assembly, the program offers 20 percent tax credits (22 percent in urban enterprise zones) to television and film productions that shoot in the state and meet set standards for hiring and local spending. The New Jersey Turnpike is one of the most prominent and heavily trafficked roadways in the United States. This toll road, which overlaps with Interstate 95 for much of its length, carries traffic between Delaware and New York, and up and down the East Coast in general. 
Commonly referred to as simply "the Turnpike", it is known for its numerous rest areas named after prominent New Jerseyans. The Garden State Parkway, or simply "the Parkway", carries relatively more in-state traffic than interstate traffic and runs from New Jersey's northern border to its southernmost tip at Cape May. It is the main route that connects the New York metropolitan area to the Jersey Shore and is consistently one of the safest roads in the nation. With a total of fifteen travel and six shoulder lanes, the Driscoll Bridge on the Parkway, spanning the Raritan River in Middlesex County, is the widest motor vehicle bridge in the world by number of lanes as well as one of the busiest. New Jersey is connected to New York City via various key bridges and tunnels. The double-decked George Washington Bridge carries the heaviest load of motor vehicle traffic of any bridge in the world, at 102 million vehicles per year, across fourteen lanes. It connects Fort Lee, New Jersey to the Washington Heights neighborhood of Upper Manhattan, and carries Interstate 95 and U.S. Route 1/9 across the Hudson River. The Lincoln Tunnel connects to Midtown Manhattan carrying New Jersey Route 495, and the Holland Tunnel connects to Lower Manhattan carrying Interstate 78. New Jersey is also connected to Staten Island by three bridges—from north to south, the Bayonne Bridge, the Goethals Bridge, and the Outerbridge Crossing. New Jersey has interstate compacts with all three of its neighboring states. The Port Authority of New York and New Jersey, the Delaware River Port Authority (with Pennsylvania), the Delaware River Joint Toll Bridge Commission (with Pennsylvania), and the Delaware River and Bay Authority (with Delaware) operate most of the major transportation routes in and out of the state. Bridge tolls are collected only from traffic exiting the state, with the exception of the private Dingman's Ferry Bridge over the Delaware River, which charges a toll in both directions. 
It is unlawful for customers to pump their own gasoline in New Jersey. In 2016, after Oregon began allowing limited self-service, New Jersey became the last remaining U.S. state where all gas stations are required to provide full-service gasoline to customers at all times. Newark Liberty International Airport (EWR) is one of the busiest airports in the United States. Operated by the Port Authority of New York and New Jersey, it is one of the three main airports serving the New York metropolitan area. United Airlines is the airport's largest tenant, operating an entire terminal there, which it uses as one of its primary hubs. FedEx Express operates a large cargo terminal at EWR as well. The adjacent Newark Airport railroad station provides access to Amtrak and NJ Transit trains along the Northeast Corridor Line. Two smaller commercial airports, Atlantic City International Airport and the rapidly growing Trenton-Mercer Airport, also operate in other parts of the state. Teterboro Airport in Bergen County and Millville Municipal Airport in Cumberland County are general aviation airports popular with private and corporate aircraft due to their proximity to New York City and the Jersey Shore, respectively. NJ Transit operates extensive rail and bus service throughout the state. A state-run corporation, it began with the consolidation of several private bus companies in North Jersey in 1979. In the early 1980s, it acquired Conrail's commuter train operations that connected suburban towns to New York City. Today, NJ Transit has eleven commuter rail lines that run through different parts of the state. Most of the lines end at either Penn Station in New York City or Hoboken Terminal in Hoboken. One line provides service between Atlantic City and Philadelphia, Pennsylvania. NJ Transit also operates three light rail systems in the state. The Hudson-Bergen Light Rail connects Bayonne to North Bergen, through Hoboken and Jersey City. 
The Newark Light Rail is partially underground and connects downtown Newark with other parts of the city and its suburbs, Belleville and Bloomfield. The River Line connects Trenton and Camden. The PATH is a rapid transit system consisting of four lines operated by the Port Authority of New York and New Jersey. It links Hoboken, Jersey City, Harrison, and Newark with New York City. The PATCO Speedline is a rapid transit system that links Camden County to Philadelphia. The PATCO and the PATH are two of only five rapid transit systems in the United States to operate 24 hours a day. Amtrak operates numerous long-distance passenger trains in New Jersey, both to and from neighboring states and around the country. In addition to the Newark Airport connection, other major Amtrak railway stations include Trenton Transit Center, Metropark, and the historic Newark Penn Station. The Southeastern Pennsylvania Transportation Authority, or SEPTA, has two commuter rail lines that operate into New Jersey. The Trenton Line terminates at the Trenton Transit Center, and the West Trenton Line terminates at the West Trenton Rail Station in Ewing. AirTrain Newark is a monorail connecting the Amtrak/NJ Transit station on the Northeast Corridor to the airport's terminals and parking lots. Some private bus carriers still remain in New Jersey. Most of these carriers operate with state funding to offset losses, and the state provides them with buses; Coach USA companies make up the bulk of these carriers. Other carriers include private charter and tour bus operators that take gamblers from other parts of New Jersey, New York City, Philadelphia, and Delaware to the casino resorts of Atlantic City. New York Waterway has ferry terminals at Belford, Jersey City, Hoboken, Weehawken, and Edgewater, with service to different parts of Manhattan. Liberty Water Taxi in Jersey City has ferries from Paulus Hook and Liberty State Park to Battery Park City in Manhattan. 
Statue Cruises offers service from Liberty State Park to the Statue of Liberty National Monument, including Ellis Island. SeaStreak offers services from the Raritan Bayshore to Manhattan, Martha's Vineyard, and Nantucket. The Delaware River and Bay Authority operates the Cape May–Lewes Ferry on Delaware Bay, carrying both passengers and vehicles between New Jersey and Delaware. The agency also operates the Forts Ferry Crossing for passengers across the Delaware River. The Delaware River Port Authority operates the RiverLink Ferry between the Camden waterfront and Penn's Landing in Philadelphia. The position of Governor of New Jersey has been considered one of the most powerful in the nation. Until 2010, the governor was the only statewide elected executive official in the state and appointed numerous government officials. Formerly, an acting governor was even more powerful as he simultaneously served as President of the New Jersey State Senate, thus directing half of the legislative and all of the executive process. In 2002 and 2007, President of the State Senate Richard Codey held the position of acting governor for a short time, and from 2004 to 2006 Codey became a long-term acting governor due to Jim McGreevey's resignation. A 2005 amendment to the state Constitution prevents the Senate President from becoming acting governor in the event of a permanent gubernatorial vacancy without giving up her or his seat in the state Senate. Phil Murphy (D) is the Governor. The governor's mansion is Drumthwacket, located in Princeton. Before 2010, New Jersey was one of the few states without a lieutenant governor. Republican Kim Guadagno was elected the first Lieutenant Governor of New Jersey and took office on January 19, 2010. She was elected on the Republican ticket with Governor-Elect Chris Christie in the November 2009 NJ gubernatorial election. 
The position was created as the result of a Constitutional amendment to the New Jersey State Constitution passed by the voters on November 8, 2005, and effective as of January 17, 2006. The current version of the New Jersey State Constitution was adopted in 1947. It provides for a bicameral New Jersey Legislature, consisting of an upper house Senate of 40 members and a lower house General Assembly of 80 members. Each of the 40 legislative districts elects one State Senator and two Assembly members. Assembly members are elected for a two-year term in all odd-numbered years; State Senators are elected in the years ending in 1, 3, and 7, and thus serve either four- or two-year terms. New Jersey is one of only five states that elect their state officials in odd-numbered years. (The others are Kentucky, Louisiana, Mississippi, and Virginia.) New Jersey holds elections for these offices every four years, in the year following each federal Presidential election year. Thus, the last year in which New Jersey elected a Governor was 2017; the next gubernatorial election will occur in 2021. The New Jersey Supreme Court consists of a Chief Justice and six Associate Justices. All are appointed by the Governor with the advice and consent of a majority of the membership of the State Senate. Justices serve an initial seven-year term, after which they can be reappointed to serve until age 70. Most of the day-to-day work in the New Jersey courts is carried out in the Municipal Courts, where simple traffic tickets, minor criminal offenses, and small civil matters are heard. More serious criminal and civil cases are handled by the Superior Court for each county. All Superior Court judges are appointed by the Governor with the advice and consent of a majority of the membership of the State Senate. Each judge serves an initial seven-year term, after which he or she can be reappointed to serve until age 70. 
New Jersey's judiciary is unusual in that it still has separate courts of law and equity, like its neighbor Delaware but unlike most other U.S. states. The New Jersey Superior Court is divided into Law and Chancery Divisions at the trial level; the Law Division hears both criminal cases and civil lawsuits where the plaintiff's primary remedy is damages, while the Chancery Division hears family cases, civil suits where the plaintiff's primary remedy is equitable relief, and probate trials. The Superior Court also has an Appellate Division, which functions as the state's intermediate appellate court. Superior Court judges are assigned to the Appellate Division by the Chief Justice. There is also a Tax Court, which is a court of limited jurisdiction. Tax Court judges hear appeals of tax decisions made by County Boards of Taxation. They also hear appeals on decisions made by the Director of the Division of Taxation on such matters as state income, sales and business taxes, and homestead rebates. Appeals from Tax Court decisions are heard in the Appellate Division of Superior Court. Tax Court judges are appointed by the Governor for initial terms of seven years, and upon reappointment are granted tenure until they reach the mandatory retirement age of 70. There are 12 Tax Court judgeships. New Jersey is divided into 21 counties; 13 date from the colonial era. New Jersey was completely divided into counties by 1692; the present counties were created by dividing the existing ones; most recently Union County in 1857. New Jersey is the only state in the nation where elected county officials are called "Freeholders", governing each county as part of its own Board of Chosen Freeholders. The number of freeholders in each county is determined by referendum, and must consist of three, five, seven or nine members. Depending on the county, the executive and legislative functions may be performed by the Board of Chosen Freeholders or split into separate branches of government. 
In 16 counties, members of the Board of Chosen Freeholders perform both legislative and executive functions on a commission basis, with each Freeholder assigned responsibility for a department or group of departments. In the other five counties (Atlantic, Bergen, Essex, Hudson and Mercer), there is a directly elected County Executive who performs the executive functions while the Board of Chosen Freeholders retains a legislative and oversight role. In counties without an Executive, a County Administrator (or County Manager) may be hired to perform day-to-day administration of county functions. New Jersey currently has 565 municipalities; the number was 566 before Princeton Township and Princeton Borough merged to form the municipality of Princeton on January 1, 2013. Unlike other states, all New Jersey land is part of a municipality. In 2008, Governor Jon Corzine proposed cutting state aid to all towns under 10,000 people, to encourage mergers to reduce administrative costs. In May 2009, the Local Unit Alignment Reorganization and Consolidation Commission began a study of about 40 small communities in South Jersey to decide which ones might be good candidates for consolidation. Starting in the 20th century, largely driven by reform-minded goals, a series of six modern forms of government was implemented. This began with the Walsh Act, enacted in 1911 by the New Jersey Legislature, which provided for a three- or five-member commission elected on a non-partisan basis. This was followed by the 1923 Municipal Manager Law, which offered a non-partisan council, provided for a weak mayor elected by and from the members of the council, and introduced a Council-manager government structure with an appointed manager responsible for day-to-day administration of municipal affairs. The Faulkner Act, originally enacted in 1950 and substantially amended in 1981, offers four basic plans: Mayor-Council, Council-Manager, Small Municipality, and Mayor-Council-Administrator. 
The act provides many choices for communities with a preference for a strong executive and professional management of municipal affairs, and offers great flexibility in allowing municipalities to select the characteristics of their government: the number of seats on the council; seats selected at-large, by wards, or through a combination of both; staggered or concurrent terms of office; and a mayor chosen by the council or elected directly by voters. Most large municipalities and a majority of New Jersey's residents are governed by municipalities with Faulkner Act charters. Municipalities can also formulate their own unique form of government and operate under a Special Charter with the approval of the New Jersey Legislature. While municipalities retain names derived from types of government, many have changed to one of the modern forms of government, or further in the past to one of the other traditional forms, leading to municipalities with formal names quite baffling to the general public. For example, though there are four municipalities that are officially of the village type, Loch Arbour is the only one remaining with the village form of government. The other three villages—Ridgefield Park (now with a Walsh Act form), Ridgewood (now with a Faulkner Act Council-Manager charter), and South Orange (which now operates under a Special Charter)—have all migrated to other, non-village forms. Socially, New Jersey is considered one of the more liberal states in the nation. Polls indicate that 60% of the population describe themselves as pro-choice, although a majority oppose late-term abortion, intact dilation and extraction, and public funding of abortion. 
In a 2009 Quinnipiac University Polling Institute poll, a plurality supported same-sex marriage, 49% in favor to 43% opposed. On October 18, 2013, the New Jersey Supreme Court rendered a provisional, unanimous (7–0) order authorizing same-sex marriage in the state, pending a legal appeal by Governor Chris Christie, who withdrew the appeal hours after the inaugural same-sex marriages took place on October 21, 2013. New Jersey also has some of the most stringent gun control laws in the U.S. These include bans on assault firearms, hollow-nose bullets, and slingshots. No gun offense in New Jersey is graded less than a felony. BB guns and black-powder guns are all treated as modern firearms. New Jersey does not recognize out-of-state gun licenses and aggressively enforces its own gun laws. In past elections, New Jersey was a Republican bastion, but it has recently become a Democratic stronghold. Currently, New Jersey Democrats have majority control of both houses of the New Jersey Legislature (Senate, 26–14, and Assembly, 54–26), a 10–2 split of the state's twelve seats in the U.S. House of Representatives, and both U.S. Senate seats. Although the Democratic Party is very successful statewide, the state has had Republican governors: from 1994 to 2002, Christine Todd Whitman won twice with 47% and 49% of the vote, respectively, and in the 2009 gubernatorial election, Republican Chris Christie defeated incumbent Democrat Jon Corzine with 48% of the vote. In the 2013 gubernatorial election, Christie won reelection with over 60% of the vote. Because each candidate for lieutenant governor runs on the same ticket as the party's candidate for governor, the current Governor and Lieutenant Governor are members of the Democratic Party. The governor's appointments to cabinet and non-cabinet positions may be from either party; for instance, the Attorney General is a Democrat. In federal elections, the state leans heavily towards the Democratic Party. 
For many years in the past, however, it was a Republican stronghold, having given comfortable margins of victory to the Republican candidate in the close elections of 1948, 1968, and 1976. New Jersey was a crucial swing state in the elections of 1960, 1968, and 1992. The last elected Republican to hold a Senate seat from New Jersey was Clifford P. Case in 1979. Newark Mayor Cory Booker was elected in October 2013 to join Robert Menendez, making New Jersey the first state with concurrently serving black and Latino U.S. senators. The state's Democratic strongholds include Camden County; Essex County (typically the state's most Democratic county, which includes Newark, the state's largest city); Hudson County (the second-strongest Democratic county, including Jersey City, the state's second-largest city); Mercer County (especially around Trenton and Princeton); Middlesex County; and Union County (including Elizabeth, the state's fourth-largest city). The suburban northwestern and southeastern counties of the state are reliably Republican: Republicans have support along the coast in Ocean County and in the mountainous northwestern part of the state, especially Morris County, Sussex County, and Warren County. Other suburban counties, especially Bergen County and Burlington County, have seen the majority of their votes go to the Democratic Party. In the 2008 election, President Barack Obama won New Jersey with approximately fifty-seven percent of the vote, compared to McCain's forty-one percent. Independent candidate Ralph Nader garnered less than one percent of the vote. About one-third of the state's counties are considered "swing" counties, but some go more one way than others. Salem County is one example; the same is true of Passaic County, with a highly populated, Hispanic, Democratic south (including Paterson, the state's third-largest city) and a rural, Republican north, with the "swing" township of Wayne in the middle. 
Other "swing" counties, like Monmouth County, Somerset County, and Cape May County, tend to go Republican, as they also have populations in conservative areas, although Somerset has recently trended Democratic. To be eligible to vote in a U.S. election, all New Jerseyans are required to have begun their residency in the state 30 days prior to an election and to register 21 days prior to election day. On December 17, 2007, Governor Jon Corzine signed into law a bill eliminating the death penalty in New Jersey. New Jersey was the first state to pass such legislation since Iowa and West Virginia eliminated executions in 1965. Corzine also signed a bill downgrading Death Row prisoners' sentences from death to life in prison with no parole. There is also a mineral museum in Ogdensburg in Sussex County. Visitors and residents take advantage of and contribute to performances at the numerous music, theater, and dance companies and venues located throughout the state. New Jersey is home to most of the boardwalks in the U.S., with nearly every town and city along the Jersey Shore having a boardwalk with various attractions: entertainment, shopping, dining, miniature golf, arcades, water parks with rides such as water slides, lazy rivers, and wave pools, and amusement parks with rides and attractions such as roller coasters, carousels, Ferris wheels, bumper cars, and teacups.
https://en.wikipedia.org/wiki?curid=21648
New Mexico New Mexico (Navajo: "Yootó Hahoodzo") is a state in the Southwestern region of the United States of America; its capital is Santa Fe, which was founded in 1610 as capital of Nuevo México (itself established as a province of New Spain in 1598), while its largest city is Albuquerque with its accompanying metropolitan area. It is one of the Mountain States and shares the Four Corners region with Utah, Colorado, and Arizona. New Mexico is also bordered by the state of Texas to the east-southeast, Oklahoma to the northeast, and the Mexican states of Chihuahua to the south and Sonora to the southwest. With an estimated population of 2,096,829 as of July 1, 2019, according to the U.S. Census Bureau, New Mexico is the 36th largest state by population. With a total area of , it is the fifth-largest and sixth-least densely populated of the 50 states. Due to their geographic locations, northern and eastern New Mexico exhibit a colder, alpine climate, while western and southern New Mexico exhibit a warmer, arid climate. The economy of New Mexico is dependent on oil drilling, mineral extraction, dryland farming, cattle ranching, lumber milling, and retail trade. As of 2018, its total gross domestic product (GDP) was $101 billion with a GDP per capita of $45,465. New Mexico's status as a tax haven yields low to moderate personal income taxes on residents and military personnel, and gives tax credits and exemptions to favorable industries. Because of this, its film industry has grown and contributed $1.23 billion to its overall economy. Due to its large area and economic climate, New Mexico has a large U.S. military presence, marked notably by the White Sands Missile Range. Various U.S. national security agencies base their research and testing arms in New Mexico, such as the Sandia and Los Alamos National Laboratories. During the 1940s, Project Y of the Manhattan Project developed and built the country's first atomic bomb, detonated in the world's first nuclear test, Trinity. 
Inhabited by Native Americans for many thousands of years before European exploration, it was colonized by the Spanish in 1598 as part of the Imperial Spanish viceroyalty of New Spain. In 1563, it was named Nuevo México after the Aztec Valley of Mexico by Spanish settlers, more than 250 years before the establishment and naming of the present-day country of Mexico; thus, the present-day state of New Mexico was "not" named after the country today known as Mexico. After Mexican independence in 1821, New Mexico became a Mexican territory with considerable autonomy. This autonomy was threatened, however, by the centralizing tendencies of the Mexican government from the 1830s onward, with rising tensions eventually leading to the Revolt of 1837. At the same time, the region became more economically dependent on the United States. At the conclusion of the Mexican–American War in 1848, the United States annexed New Mexico as the U.S. New Mexico Territory. It was admitted to the Union as the 47th state on January 6, 1912. Its history has given New Mexico the highest percentage of Hispanic and Latino Americans, and the second-highest percentage of Native Americans as a population proportion (after Alaska). New Mexico is home to part of the Navajo Nation, 19 federally recognized Pueblo communities of Puebloan peoples, and three different federally recognized Apache tribes. In prehistoric times, the area was home to the Ancestral Puebloan and Mogollon cultures; the modern, extant Comanche and Ute also inhabited the state. The largest Hispanic and Latino groups represented include the Hispanos of New Mexico, Chicanos, and Mexicans. The New Mexican flag features the state's Spanish origins with the same scarlet and gold coloration as Spain's Cross of Burgundy, along with the ancient sun symbol of the Zia, a Puebloan tribe. These indigenous, Hispanic, Mexican, Latin, and American frontier roots are reflected in the eponymous New Mexican cuisine and the New Mexico music genre. 
New Mexico received its name long before the present-day nation of Mexico won independence from Spain and adopted that name in 1821. Though the name "Mexico" itself derives from Nahuatl, and in that language it originally referred to the heartland of the Empire of the Mexicas (Aztec Empire) in the Valley of Mexico far from the area of New Mexico, Spanish explorers also used the term "Mexico" to name the region of New Mexico ("" in Spanish) in 1563. In 1581, the Chamuscado and Rodríguez Expedition named the region north of the Rio Grande "San Felipe del Nuevo México". The Spaniards had hoped to find wealthy indigenous Mexica (Aztec) cultures there similar to those of the Aztec (Mexica) Empire of the Valley of Mexico. The indigenous cultures of New Mexico, however, proved to be unrelated to the Mexicas, and they were not wealthy, but the name persisted. Before statehood, the name "New Mexico" applied to various configurations of a former U.S. New Mexico Territory and, even prior to its former Mexican territorial status, a former provincial kingdom of New Spain called Nuevo México, all in the same general area, but of varying extensions. With a total area of , New Mexico is the fifth-largest state. New Mexico's eastern border lies along 103°W longitude with the state of Oklahoma, and (due to a 19th-century surveying error) west of 103°W longitude with Texas. On the southern border, Texas makes up the eastern two-thirds, while the Mexican states of Chihuahua and Sonora make up the western third, with Chihuahua making up about 90% of that. The western border with Arizona runs along the 109° 03'W longitude. The southwestern corner of the state is known as the Bootheel. The 37°N parallel forms the northern boundary with Colorado. The states of New Mexico, Colorado, Arizona, and Utah come together at the Four Corners in New Mexico's northwestern corner. New Mexico has almost no natural water sources. Its surface water area is about . 
The New Mexican landscape ranges from wide, rose-colored deserts to broken mesas to high, snow-capped peaks. Despite New Mexico's arid image, heavily forested mountain wildernesses cover a significant portion of the state, especially towards the north. The Sangre de Cristo Mountains, the southernmost part of the Rocky Mountains, run roughly north–south along the east side of the Rio Grande in the rugged, pastoral north. The most important of New Mexico's rivers are the Rio Grande, Pecos, Canadian, San Juan, and Gila. The Rio Grande is tied for the fourth-longest river in the United States. The U.S. government protects millions of acres of New Mexico as national forests; other public lands are managed by the National Park Service and the New Mexico State Parks Division. Visitors also frequent the surviving native pueblos of New Mexico. Tourists visiting these sites bring significant money to the state. Other areas of geographical and scenic interest include Kasha-Katuwe Tent Rocks National Monument and the Gila Wilderness in the southwest of the state. New Mexico's climate is generally semiarid to arid, though areas of continental and alpine climates exist, and its territory is mostly covered by mountains, high plains, and desert. The Great Plains (High Plains) are in eastern New Mexico, similar to the Colorado high plains in eastern Colorado. The two states share similar terrain, with both having plains, mountains, basins, mesas, and desert lands. New Mexico's statewide average precipitation is a year, with average monthly amounts peaking in the summer, as at Albuquerque, and Las Cruces in the south. The average annual temperatures can range from in the southeast to below in the northern mountains. During the summer, daytime temperatures can often exceed at elevations below ; the average high temperature in July ranges from at the lower elevations down to 78 °F (26 °C) at the higher elevations. 
In the colder months of November to March, many cities in New Mexico can have nighttime temperature lows in the teens above zero, or lower. The highest temperature recorded in New Mexico was at the Waste Isolation Pilot Plant (WIPP) near Loving on June 27, 1994, and the lowest recorded temperature is at Gavilan on February 1, 1951. New Mexico has five unique floristic zones, providing diverse sets of habitats for many plants and animals. The Llano Estacado (or Shortgrass Prairie) in the eastern part of the state is characterized by sod-forming short grasses such as blue grama, and it used to sustain bison. The Chihuahuan Desert extends through the south of the state and is characterized by shrubby creosote. The Colorado Plateau in the northwest corner of New Mexico is high desert with cold winters, and is characterized by sagebrush, shadescale, greasewood, and other plants adapted to the saline and seleniferous soil. The mountainous Mogollon Plateau in the west-central part of the state and the southern Rocky Mountains in the north-central part have a wide range in elevation (), with vegetation types corresponding to elevation gradients, such as piñon-juniper woodlands near the base, through evergreen conifers, spruce-fir and aspen forests, Krummholz, and alpine tundra. The Apachian zone tucked into the southwestern bootheel of the state has high-calcium soil, oak woodlands, Arizona cypress, and other plants that are not found in other parts of the state. Some of the native wildlife includes black bears, bighorn sheep, bobcats, cougars, coyotes, deer, elk, jackrabbits, kangaroo rats, javelina, porcupines, pronghorn antelope, roadrunners, western diamondbacks, wild turkeys, and the endangered Mexican gray wolf and Rio Grande silvery minnow. In January 2016, New Mexico sued the United States Environmental Protection Agency over negligence after the 2015 Gold King Mine waste water spill. 
The spill had caused heavy metals such as cadmium and lead and toxins such as arsenic to flow into the Animas River, polluting water basins of several states. The first known inhabitants of New Mexico were members of the Clovis culture of Paleo-Indians. Later inhabitants include American Indians of the Mogollon and Ancestral Pueblo cultures. By the time of European contact in the 16th century, the region was settled by the villages of the Pueblo peoples and groups of Navajo, Apache, and Ute. Francisco Vásquez de Coronado assembled an enormous expedition at Compostela in 1540–1542 to explore and find the mythical Seven Golden Cities of Cibola as described by Fray Marcos de Niza. The name "New Mexico" was first used by a seeker of gold mines named Francisco de Ibarra, who explored far to the north of New Spain in 1563 and reported his findings as being in "a New Mexico". Juan de Oñate officially established the name when he was appointed the first governor of the new Province of New Mexico in 1598. The same year, he founded the San Juan de los Caballeros colony, the first permanent European settlement in the future state of New Mexico, on the Rio Grande near Ohkay Owingeh Pueblo. Oñate extended El Camino Real de Tierra Adentro, the Royal Road of the Interior, by from Santa Bárbara, Chihuahua, to his remote colony. The settlement of Santa Fe was established at the foot of the Sangre de Cristo Mountains, the southernmost subrange of the Rocky Mountains, around 1608. The city, along with most of the settled areas of the state, was abandoned by the Spanish for 12 years (1680–92) as a result of the successful Pueblo Revolt, the only successful revolt against European expansion by Native Americans. After the death of the Pueblo leader Popé, Diego de Vargas restored the area to Spanish rule. 
While developing Santa Fe as a trade center, the returning settlers founded Albuquerque in 1706 from existing surrounding communities, naming it for the viceroy of New Spain, Francisco Fernández de la Cueva, 10th Duke of Alburquerque. As a part of New Spain, the claims for the province of New Mexico passed to independent Mexico in 1821 following the Mexican War of Independence. The Republic of Texas claimed the portion east of the Rio Grande when it seceded from Mexico in 1836, when it incorrectly assumed the older Hispanic settlements of the upper Rio Grande were the same as the newly established Mexican settlements of Texas. Texas's only attempt to establish a presence or control in the claimed territory was the failed Texan Santa Fe Expedition. Their entire army was captured and jailed by Hispanic New Mexico militia. At the turn of the 19th century, the extreme northeastern part of New Mexico, north of the Canadian River and east of the Sangre de Cristo Mountains, was still claimed by France, which sold it in 1803 as part of the Louisiana Purchase. When the Louisiana Territory was admitted as a state in 1812, the U.S. reclassified it as part of the Missouri Territory. The region (along with territory that makes up present-day southeastern Colorado, the Texas and Oklahoma Panhandles, and southwestern Kansas) was ceded to Spain under the Adams-Onis Treaty in 1819. By 1800, the population of New Mexico had reached 25,000. Following the victory of the United States in the Mexican–American War (1846–48), under the Treaty of Guadalupe Hidalgo in 1848, Mexico ceded its northern holdings including their territories of California, Texas, and New Mexico, which would later be divided into the American Southwest and West Coast, to the United States of America. The United States vowed to accept the residents' claims to their lands and to accept them as full citizens with rights of suffrage. 
After Texas was admitted as a state to the Union, it continued to claim a northeastern portion of New Mexico, but it was forced by the U.S. government to drop these claims: in the Compromise of 1850, Texas ceded to the United States the area of New Mexico lying east of the Rio Grande, in exchange for $10 million from the federal government. Congress established the separate New Mexico Territory in September 1850. It included most of the present-day states of Arizona and New Mexico, and part of Colorado. When the boundary was fixed, a surveyor's error awarded the Permian Basin to the State of Texas. New Mexico dropped its claims to the Permian in a bid to gain statehood in 1911. In 1853, the United States acquired the mostly desert southwestern bootheel of the state and southern Arizona south of the Gila River in the Gadsden Purchase. It wanted to control lands needed for the right-of-way to encourage construction of a transcontinental railroad. New Mexico played a role in the Trans-Mississippi Theater of the American Civil War. Both Confederate and Union governments claimed ownership and territorial rights over New Mexico Territory. In 1861, the Confederacy claimed the southern tract as its own Arizona Territory and waged the ambitious New Mexico Campaign in an attempt to control the American Southwest and open up access to Union California. Confederate power in the New Mexico Territory was effectively broken after the Battle of Glorieta Pass in 1862. However, the Confederate territorial government continued to operate out of Texas, and Confederate troops marched under the Arizona flag until the end of the war. Additionally, more than 8,000 men from New Mexico Territory served in the Union Army. In the late 19th century, the majority of officially European-descended residents in New Mexico were ethnic Mexicans, many of whom had deep roots in the area from early Spanish colonial times. 
Politically, they still controlled most of the town and county offices through area elections, and wealthy sheepherder families commanded considerable influence. The Anglo-Americans tended to have more ties to the territorial governor and judges, who were appointed by officials outside the region. The two groups struggled for power and the future of the territory. The Anglo minority was "outnumbered, but well-organized and growing". Anglo-Americans made distinctions between the wealthy Mexicans and poor, ill-educated laborers. The United States Congress admitted New Mexico as the 47th state on January 6, 1912. European-American settlers in the state had an uneasy relationship with the large Native American tribes, most of whose members lived on reservations at the beginning of the 20th century. Although Congress passed a law in 1924 that granted all Native Americans U.S. citizenship, as well as the right to vote in federal and state elections, New Mexico was among several states with Jim Crow laws, e.g., a provision that those who did not pay taxes could not vote. A major oil discovery in 1928 brought wealth to the state, especially Lea County and the town of Hobbs. The town was named after James Hobbs, a homesteader there in 1907. The Midwest State No.1 well, begun in late 1927 with a standard cable-tool drilling rig, revealed the first signs of oil from the Hobbs field on June 13, 1928. Drilled to 4,330 feet and completed a few months later, the well produced 700 barrels of oil per day on state land. The Midwest Refining Company's Hobbs well produced oil until 2002. The New Mexico Bureau of Mines and Mineral Resources called it "the most important single discovery of oil in New Mexico's history". During World War II, the first atomic bombs were designed and manufactured at Los Alamos, a site developed by the federal government specifically to support a high-intensity scientific effort to rapidly complete research and testing of this weapon. 
The first bomb was tested at Trinity site in the desert between Socorro and Alamogordo on what is now White Sands Missile Range. Native Americans from New Mexico fought for the United States in both the First and Second World Wars. Veterans were disappointed to return and find their civil rights limited by state discrimination. In Arizona and New Mexico, veterans challenged state laws or practices prohibiting them from voting. In 1948, after veteran Miguel Trujillo, Sr. of Isleta Pueblo was told by the county registrar that he could not register to vote, he filed suit against the county in federal district court. A three-judge panel overturned as unconstitutional New Mexico's provisions that Indians who did not pay taxes (and could not document if they had paid taxes) could not vote. Judge Phillips wrote: Any other citizen, regardless of race, in the State of New Mexico who has not paid one cent of tax of any kind or character, if he possesses the other qualifications, may vote. An Indian, and only an Indian, in order to meet the qualifications to vote must have paid a tax. How you can escape the conclusion that makes a requirement with respect to an Indian as a qualification to exercise the elective franchise and does not make that requirement with respect to the member of any race is beyond me.New Mexico has received large amounts of federal government spending on major military and research institutions in the state. It is home to three Air Force bases, White Sands Missile Range, and the federal research laboratories Los Alamos National Laboratory and Sandia National Laboratories. The state's population grew rapidly after World War II, growing from 531,818 in 1940 to 1,819,046 in 2000. Both residents and businesses moved to the state; some northerners came at first for the mild winters; others for retirement. On May 22, 1957, a B-36 accidentally dropped a nuclear bomb 4.5 miles from the control tower while landing at Kirtland Air Force Base. 
(Only its conventional "trigger" detonated.) In the late 20th century, Native Americans were authorized by federal law to establish gaming casinos on their reservations under certain conditions, in states which had authorized such gaming. Such facilities have helped tribes close to population centers to generate revenues for reinvestment in economic development and welfare of their peoples. In the 21st century, employment growth areas in New Mexico include electronic circuitry, scientific research, call centers, and Indian casinos. The United States Census Bureau estimates that the population of New Mexico was 2,096,829 on July 1, 2019, a 1.83% increase since the 2010 census. The 2000 census recorded the population of New Mexico to be 1,819,046; ten years later it was 2,059,179—a 13.2% increase. Of the people residing in New Mexico, 51.4% were born there; 37.9% were born in another state; 1.1% were born in Puerto Rico, U.S. Island areas, or abroad to American parent(s); and 9.7% were foreign born. As of May 1, 2010, 7.5% of New Mexico's population was reported as under 5 years of age, 25% as under 18, and 13% as 65 or older. As of 2000, 8% of the residents of the state were foreign-born. Among U.S. states, New Mexico has the highest percentage of Hispanic ancestry, at 47% (as of July 1, 2012). This classification covers people of very different cultures and histories, including descendants of Spanish colonists with deep roots in the region, and recent immigrants from a variety of nations in Latin America, each with their own cultures. According to the United States Census Bureau Model-based Small Area Income and Poverty Estimates, the number of persons in poverty increased to 400,779 (19.8% of the population) in 2010, from an estimated 309,193 persons (17.3% of the population) in 2000. The latest available data, for 2014, estimate the number of persons in poverty at 420,388 (20.6% of the population). 
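The census figures above are plain percent-change arithmetic; as a quick sketch using only the counts quoted in this section, the implied growth rates can be checked:

```python
def percent_change(old: int, new: int) -> float:
    """Percent change from an earlier count to a later one."""
    return (new - old) / old * 100

# New Mexico census counts quoted above
pop_2000 = 1_819_046   # 2000 census
pop_2010 = 2_059_179   # 2010 census
pop_2019 = 2_096_829   # July 1, 2019 estimate

print(round(percent_change(pop_2000, pop_2010), 1))  # → 13.2 (decennial growth)
print(round(percent_change(pop_2010, pop_2019), 2))  # → 1.83 (growth since 2010)
```

The same one-line formula reproduces the other rates cited in this section when applied to their respective base populations.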
"Note: Births in table do not add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number." New Mexico is a majority-minority state. The U.S. Census Bureau estimated that 48% of the total 2015 population was Hispanic or Latino of any race, the highest of any state. The majority of Hispanics in New Mexico claim to be descendants of Spanish colonists who settled there during the 16th, 17th, and 18th centuries. They speak New Mexican Spanish or English at home. The state also has a large Native American population, second in percentage behind that of Alaska. According to United States Census Bureau estimates of the 2018 racial composition of the population, 1.5% of the population identifies as multiracial/mixed-race, a population larger than both the Asian and NHPI population groups. In 2008, New Mexico had the highest percentage (47%) of Hispanics (of any race) of any state, with 83% native-born and 17% foreign-born. The 2000 United States Census also recorded the most commonly claimed ancestry groups in New Mexico. According to the 2010 U.S. Census, 28.45% of the population age 5 and older speak Spanish at home, while 3.50% speak Navajo. Some speakers of New Mexican Spanish are descendants of Spanish colonists who arrived in New Mexico in the 16th, 17th, and 18th centuries. While it is a common folk belief that New Mexican Spanish is an archaic form of 17th-century Castilian Spanish, and archaisms do exist, research reveals that traditional New Mexican Spanish "is neither more Iberian nor more archaic than other New World Spanishes". Besides Navajo, which is also spoken in Arizona, a few other Native American languages are spoken by smaller groups in New Mexico, most of which are only spoken in the state. Native New Mexican languages include Mescalero Apache, Jicarilla Apache, Tewa, Southern Tiwa, Northern Tiwa, Towa, Keres (Eastern and Western), and Zuni. 
Mescalero and Jicarilla Apache are closely related Southern Athabaskan languages, and both are also related to Navajo. Tewa, the Tiwa languages, and Towa belong to the Kiowa-Tanoan language family, and thus all descend from a common ancestor. Keres and Zuni are language isolates, and have no relatives outside of New Mexico. The original state constitution of 1912 provided for a bilingual government with laws being published in both English and Spanish; this requirement was renewed twice, in 1931 and 1943. Nonetheless, the constitution does not declare any language as "official". While Spanish was permitted in the legislature until 1935, all state officials are required to have a good knowledge of English. Cobarrubias and Fishman therefore argue that New Mexico cannot be considered a bilingual state as not all laws are published in both languages. Others, such as Juan Perea, claim that the state was officially bilingual until 1953. With regard to the judiciary, witnesses have the right to testify in either of the two languages, and monolingual speakers of Spanish have the same right to be considered for jury duty as do speakers of English. In public education, the state has the constitutional obligation to provide bilingual education and Spanish-speaking instructors in school districts where the majority of students are hispanophone. In 1995, the state adopted an official bilingual song, "New Mexico – Mi Lindo Nuevo México". In 1989, New Mexico became the first state to officially adopt the English Plus resolution, and in 2008, the first to officially adopt a Navajo textbook for use in public schools. According to Association of Religion Data Archives (ARDA), the largest denominations in 2010 were the Catholic Church with 684,941; the Southern Baptist Convention with 113,452; The Church of Jesus Christ of Latter-day Saints with 67,637, and the United Methodist Church with 36,424 adherents. 
According to a 2008 survey by the Pew Research Center, the most common self-reported religious affiliations of New Mexico residents are given in the referenced survey data. Within the hierarchy of the Catholic Church, New Mexico belongs to the Ecclesiastical Province of Santa Fe. New Mexico has three dioceses, one of which is an archdiocese: the Archdiocese of Santa Fe, the Diocese of Gallup, and the Diocese of Las Cruces. Oil and gas production, tourism, and federal government spending are important drivers of the state economy. State government has an elaborate system of tax credits and technical assistance to promote job growth and business investment, especially in new technologies. In 2010, New Mexico's Gross Domestic Product was $80 billion, and an estimated $85 billion for 2013. In 2007, the per capita personal income was $31,474 (ranked 43rd in the nation). In 2005, the percentage of persons below the poverty level was 18.4%. The New Mexico Tourism Department estimates that in Fiscal Year 2006, the travel industry in New Mexico generated expenditures of $6.5 billion. , the state's unemployment rate was 7.2%. During the late-2000s recession, New Mexico's unemployment rate peaked at 8.0% for the period June–October 2010. New Mexico is the third-largest crude oil and ninth-largest natural gas producer in the United States. The Permian and San Juan Basins, which are located partly in New Mexico, account for some of these natural resources. In 2000 the value of oil and gas produced was $8.2 billion, and in 2006, New Mexico accounted for 3.4% of the crude oil, 8.5% of the dry natural gas, and 10.2% of the natural gas liquids produced in the United States. However, the boom in hydraulic fracturing and horizontal drilling beginning in the mid-2010s led to a large increase in the production of crude oil from the Permian Basin and other U.S. sources; these developments allowed the United States to again become the world's largest producer of crude oil, in 2018. 
New Mexico's oil and gas operations contribute to the state's above-average release of the greenhouse gas methane, including from a national methane hot spot in the Four Corners area. Federal government spending is a major driver of the New Mexico economy. In 2005, the federal government spent $2.03 on New Mexico for every dollar of tax revenue collected from the state. This rate of return is higher than any other state in the Union. Many of the federal jobs relate to the military; the state hosts three air force bases (Kirtland Air Force Base, Holloman Air Force Base, and Cannon Air Force Base); a testing range (White Sands Missile Range); and an army proving ground (Fort Bliss's McGregor Range). A May 2005 estimate by New Mexico State University is that 11.65% of the state's total employment arises directly or indirectly from military spending. Other federal installations include the technology labs of Los Alamos National Laboratory and Sandia National Laboratories. New Mexico provides a number of economic incentives to businesses operating in the state, including various types of tax credits and tax exemptions. Most of the incentives are based on job creation. New Mexico law allows governments to provide land, buildings, and infrastructure to businesses to promote job creation. Several municipalities have imposed an Economic Development Gross Receipts Tax (a form of Municipal Infrastructure GRT) that is used to pay for these infrastructure improvements and for marketing their areas. The state provides financial incentives for film production. The New Mexico Film Office estimated at the end of 2007 that the incentive program had brought more than 85 film projects to the state since 2003 and had added $1.2 billion to the economy. Since 2008, personal income tax rates for New Mexico have ranged from 1.7% to 4.9%, within four income brackets. As of 2007, active-duty military salaries are exempt from state income tax. 
New Mexico is one of the largest tax havens in the U.S., offering numerous economic incentives and tax breaks on personal and corporate income. It does not have an inheritance tax, an estate tax, or a sales tax. New Mexico instead imposes a Gross Receipts Tax (GRT) on many transactions, which may even include some governmental receipts. This resembles a sales tax but, unlike the sales taxes in many states, it applies to services as well as tangible goods. Normally, the provider or seller passes the tax on to the purchaser; however, the legal incidence and burden apply to the business, as with an excise tax. GRT is imposed by the state, and there may be an additional locality component, producing a combined total tax rate. As of July 1, 2013, the combined tax rate ranged from 5.125% to 8.6875%. Property tax is imposed on real property by the state, by counties, and by school districts. In general, personal-use personal property is not subject to property taxation. On the other hand, property tax is levied on most business-use personal property. The taxable value of property is 1/3 of the assessed value. A tax rate of about 30 mills is applied to the taxable value, resulting in an effective tax rate of about 1%. In the 2005 tax year, the average millage was about 26.47 for residential property, and 29.80 for non-residential property. Assessed values of residences cannot be increased by more than 3% per year unless the residence is remodeled or sold. Property tax deductions are available for military veterans and heads of household. New Mexico has long been an important corridor for trade and migration. The builders of the ruins at Chaco Canyon also created a radiating network of roads from the mysterious settlement. Chaco Canyon's trade function shifted to Casas Grandes in the present-day Mexican state of Chihuahua; however, north–south trade continued. The pre-Columbian trade with Mesoamerican cultures included northbound exotic birds, seashells and copper. 
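The property-tax arithmetic above (taxable value equal to one-third of assessed value, with a rate quoted in mills, i.e. dollars of tax per $1,000 of taxable value) can be sketched as follows; the $300,000 assessed value is a hypothetical example, not a figure from the text:

```python
def nm_property_tax(assessed_value: float, millage: float) -> float:
    """Sketch of the computation described above: taxable value is 1/3 of
    assessed value, and the mill rate is dollars of tax per $1,000 of
    taxable value."""
    taxable_value = assessed_value / 3
    return taxable_value * millage / 1000

# Hypothetical residence assessed at $300,000, at the ~30-mill rate cited above
tax = nm_property_tax(300_000, 30)
print(tax)                    # → 3000.0
print(tax / 300_000 * 100)    # → 1.0 (effective rate, % of assessed value)
```

This reproduces the roughly 1% effective rate on assessed value stated in the text; substituting the 2005 average millages (26.47 or 29.80) gives correspondingly lower or higher effective rates.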
Turquoise, pottery, and salt were some of the goods transported south along the Rio Grande. Present-day New Mexico's pre-Columbian trade is especially remarkable for being undertaken on foot. The north–south trade route later became a path for colonists with horses arriving from New Spain as well as trade and communication. The route was called "El Camino Real de Tierra Adentro". The Santa Fe Trail was the 19th-century territory's vital commercial and military highway link to the Eastern United States. All with termini in Northern New Mexico, the Camino Real, the Santa Fe Trail and the Old Spanish Trail are all recognized as National Historic Trails. New Mexico's latitude and low passes made it an attractive east–west transportation corridor. As a territory, the Gadsden Purchase increased New Mexico's land area for the purpose of the construction of a southern transcontinental railroad, that of the Southern Pacific Railroad. Another transcontinental railroad was completed by the Atchison, Topeka and Santa Fe Railway. The railroads essentially replaced the earlier trails but brought on a population boom. Early transcontinental auto trails later crossed the state bringing more migrants. Railroads were later supplemented or replaced by a system of highways and airports. Today, New Mexico's Interstate Highways approximate the earlier land routes of the Camino Real, the Santa Fe Trail and the transcontinental railroads. New Mexico has only three Interstate Highways. In Albuquerque, I-25 and I-40 meet at a stack interchange called The Big I. Interstate 10 travels in the southwest portion of New Mexico starting from the Arizona stateline near Lordsburg to the Texas stateline south past Las Cruces, near El Paso, Texas. Interstate 25 is a major north–south interstate highway starting from Las Cruces, New Mexico to the Colorado stateline near Raton. 
Interstate 40 is a major east–west interstate highway running from the Arizona state line west of Gallup to the Texas state line east of Tucumcari. New Mexico currently has 15 United States Highways: US 54, US 56, US 60, US 62, US 64, US 70, US 82, US 84, US 87, US 160, US 180, US 285, US 380, US 491, and US 550. US 66, The Mother Road, was replaced by I-40 in 1985. US 85 is currently unsigned by the NMDOT, but the AASHTO still recognizes it; it follows the same alignment as I-10 and I-25. US 666, The Devil's Highway, was renumbered US 491 in 2003 because the number "666" is the "Number of the Beast". New Mexico has had a problem with drunk driving, but that has lessened. According to the "Los Angeles Times", for years the state had the highest alcohol-related crash rates in the U.S., but it ranked 25th in alcohol-related fatal crash rates. The automobile changed the character of New Mexico, marking the start of large-scale immigration to the state from elsewhere in the United States. Settlers moving West during the Great Depression and post-World War II American culture immortalized the National Old Trails Highway, later U.S. Route 66. Today, New Mexico relies heavily upon the automobile for transportation. New Mexico had 59,927 route miles of highway, of which 7,037 receive federal aid. In that same year, Interstate Highways 10, 25, and 40 accounted for 1,000 of the state's freeway route miles. The former number has increased with the upgrading of roads near Pojoaque, Santa Fe, and Las Cruces to freeways. The highway traffic fatality rate was 1.9 fatalities per 100 million miles traveled in 2000, the 13th-highest rate among U.S. states. Notable bridges include the Rio Grande Gorge Bridge near Taos. Some 703 highway bridges, or one percent, were declared "structurally deficient" or "functionally obsolete". Rural and intercity public transportation by road is provided by Americanos USA, LLC, Greyhound Lines, and several government operators.
The New Mexico Rail Runner Express is a commuter rail system serving the metropolitan area of Albuquerque, New Mexico. It began operation on July 14, 2006. The system runs from Belen to downtown Santa Fe. Larger cities in New Mexico typically have some form of public transportation by road; ABQ RIDE is the largest such system in the state. There were 2,354 route miles of railroads in the year 2000; this number increased with the opening of the Rail Runner's extension to Santa Fe. In addition to local railroads and other tourist lines, the state jointly owns and operates a heritage narrow-gauge steam railroad, the Cumbres and Toltec Scenic Railway, with the state of Colorado. Narrow-gauge railroads once connected many communities in the northern part of the state, from Farmington to Santa Fe. No fewer than 100 railroads of various names and lineages have operated in the jurisdiction at some point. New Mexico's rail transportation system reached its height in terms of length following admission as a state; in 1914, eleven railroads operated 3,124 route miles. Railroad surveyors arrived in New Mexico in the 1850s. The first railroads were incorporated in 1869. The first operational railroad, the Atchison, Topeka & Santa Fe Railway (ATSF), entered the territory by way of the lucrative and contested Raton Pass in 1878. It eventually reached El Paso, Texas, in 1881 and, with the Southern Pacific Railroad, created the nation's second transcontinental railroad with a junction at Deming. The Southern Pacific Railroad entered the territory from the Territory of Arizona in 1880. The Denver & Rio Grande Railway, which would generally use narrow-gauge equipment in New Mexico, entered the territory from Colorado and began service to Española on December 31, 1880. These first railroads were built as long-distance corridors; later railroad construction also targeted resource extraction. New Mexico is served by two Class I railroads, the BNSF Railway and the Union Pacific Railroad.
Combined, they operate 2,200 route miles of railway in the state. A commuter rail operation, the New Mexico Rail Runner Express, connects the state's capital, its largest city, and other communities. The privately operated, state-owned railroad began operations in July 2006. The BNSF Railway's entire line from Belen to Raton, New Mexico, was sold to the state, partially for the construction of phase II of this operation, which opened in December 2008. Phase II of the Rail Runner extended the line northward to Santa Fe from the Sandoval County station, the northernmost station under Phase I service. The service now connects Santa Fe, Sandoval, Bernalillo, and Valencia counties. The trains connect Albuquerque's population base and central business district to downtown Santa Fe with up to eight round trips per day. The section of the line running south to Belen is served less frequently. The Rail Runner operates scheduled service seven days per week. With the rise of rail transportation, many settlements grew or were founded, and the territory became a tourist destination. As early as 1878, the ATSF promoted tourism in the region with an emphasis on Native American imagery. Named trains often reflected the territory they traveled: "Super Chief", the streamlined successor to the "Chief"; "Navajo", an early transcontinental tourist train; and "Cavern", a through-car operation connecting Clovis and Carlsbad (by the early 1950s as train 23–24), were some of the named passenger trains of the ATSF that connoted New Mexico. Passenger train service once connected nine of New Mexico's present ten most populous cities (the exception is Rio Rancho); today it connects two: Albuquerque and Santa Fe. With the decline of most intercity rail service in the United States in the late 1960s, New Mexico was left with minimal services. No fewer than six daily long-distance roundtrip trains, supplemented by many branch-line and local trains, served New Mexico in the early 1960s.
Declines in passenger revenue, but not necessarily ridership, prompted many railroads to turn over their passenger services in truncated form to Amtrak, a government-owned corporation. Amtrak, formally the National Railroad Passenger Corporation, began operating the two extant long-distance routes in May 1971. Resurrection of passenger rail service from Denver to El Paso, a route once plied in part by the ATSF's "El Pasoan", has been proposed over the years. As early as the 1980s, former Governor Toney Anaya proposed building a high-speed rail line connecting the two cities with New Mexico's major cities. Front Range Commuter Rail is a project to connect Wyoming and New Mexico with high-speed rail. Amtrak's "Southwest Chief" passes through daily, with stations in Gallup, Albuquerque, Lamy, Las Vegas, and Raton, offering connections to Los Angeles, Chicago, and intermediate points. The "Southwest Chief" is a fast Amtrak long-distance train, being permitted its highest speeds in various places on the tracks of the BNSF Railway. It also operates on New Mexico Rail Runner Express trackage. The "Southwest Chief" is the successor to the "Super Chief" and "El Capitan". The streamliner "Super Chief", a favorite of early Hollywood stars, was one of the most famous named trains in the United States and one of the most esteemed for its luxury and exotic character—train cars were named for regional Native American tribes and outfitted with the artwork of many local artists—but also for its speed: as few as 39 hours 45 minutes westbound. The "Sunset Limited" makes stops three times a week in both directions at Lordsburg and Deming, serving Los Angeles, New Orleans, and intermediate points. The "Sunset Limited" is the successor to the Southern Pacific Railroad's train of the same name and operates exclusively on Union Pacific trackage in New Mexico. The Albuquerque International Sunport is the state's primary port of entry for air transportation.
Upham, near Truth or Consequences, is the location of the world's first operational and purpose-built commercial spaceport, Spaceport America. Rocket launches began in April 2007. The site is largely undeveloped and has one tenant, UP Aerospace, which launches small payloads. Virgin Galactic, a space tourism company, plans to make the spaceport its primary operating base. The Constitution of New Mexico establishes New Mexico's governmental structure. The executive branch of government is fragmented, as outlined in the state constitution. The executive is composed of the Governor and other statewide elected officials, including the Lieutenant Governor (elected on the same ticket as the Governor), Attorney General, Secretary of State, State Auditor, State Treasurer, and Commissioner of Public Lands. The governor appoints a cabinet whose members lead agencies statutorily designated under their jurisdiction. The New Mexico Legislature consists of the House of Representatives and the Senate. The judiciary is composed of the New Mexico Supreme Court and lower courts. There is also local government, consisting of counties, municipalities, and special districts. Current Governor Michelle Lujan Grisham (D) and Lieutenant Governor Howie Morales (D) were first elected in 2018. Terms for both the Governor and Lieutenant Governor expire in January 2023. Governors serve a term of four years and may seek re-election for one additional term (a limit of two terms). Other constitutional officers, all of whose terms also expire in January 2023, include Secretary of State Maggie Toulouse Oliver (D), Attorney General Hector Balderas (D), State Auditor Brian Colón (D), State Land Commissioner Stephanie Garcia Richard (D), and State Treasurer Tim Eichenberg (D). Currently, both chambers of the New Mexico State Legislature have Democratic majorities: there are 26 Democrats and 16 Republicans in the Senate, and 47 Democrats and 23 Republicans in the House of Representatives.
New Mexico's members of the United States Senate are Democrats Martin Heinrich and Tom Udall. Democrats represent all three of the state's United States House of Representatives congressional districts, with Deb Haaland, Xochitl Torres Small, and Ben Ray Luján representing the first, second, and third districts, respectively. New Mexico has traditionally been considered a swing state, whose population has favored both Democratic and Republican presidential candidates, but it has become more of a Democratic stronghold beginning with the presidential election of 2008. The governor is Michelle Lujan Grisham (D), who succeeded Susana Martinez (R) on January 1, 2019, after Martinez served two terms as governor from 2011 to 2019. Gary Johnson served as governor from 1995 to 2003. Johnson served as a Republican, but in 2012 and 2016 he ran for president as the Libertarian Party nominee. In previous presidential elections, Al Gore carried the state (by 366 votes) in 2000; George W. Bush won New Mexico's five electoral votes in 2004; and the state's electoral votes were won by Barack Obama in 2008 and 2012 and by Hillary Clinton in 2016. Since achieving statehood in 1912, New Mexico has been carried by the national popular vote victor in every presidential election of the past 104 years, except 1976, when Gerald Ford won the state by 2% but lost the national popular vote by 2%. It has also awarded its electoral votes to the candidate who would ultimately win, with the exceptions of 1976, 2000, and 2016. Democrats in the state are usually strongest in the Santa Fe area, various areas of the Albuquerque metro area (such as the southeast and central areas, including the affluent Nob Hill neighborhood and the vicinity of the University of New Mexico), Northern and West Central New Mexico, and most of the Native American reservations, particularly the Navajo Nation.
Republicans have traditionally had their strongholds in the eastern and southern parts of the state, the Farmington area, Rio Rancho, and the newly developed areas of the northwest mesa. Albuquerque's Northeast Heights have historically leaned Republican but have become a key swing area for Democrats in recent election cycles. While registered Democrats outnumber registered Republicans by nearly 200,000, New Mexico voters have favored moderate to conservative candidates of both parties at the state and federal levels. New Mexico abolished its death penalty statute, though not retroactively, effective July 1, 2009; this means individuals already on New Mexico's death row can still be executed. On March 18, 2009, then-Governor Bill Richardson signed the law abolishing the death penalty in New Mexico, following the House and Senate votes the week before, making the state the 15th U.S. state to abolish the penalty. On gun control, New Mexico arguably has some of the least restrictive firearms laws in the country. State law pre-empts all local gun control ordinances. New Mexico residents may purchase any firearm deemed legal under federal law. There are no waiting periods under state law for picking up a firearm after it has been purchased, and there are no restrictions on magazine capacity. Additionally, New Mexico is a "shall-issue" state for concealed carry permits. Before December 2013, New Mexico law neither explicitly allowed nor prohibited same-sex marriage. Policy concerning the issuance of marriage licenses to same-sex couples was determined at the county level; that is, some county clerks issued marriage licenses to same-sex couples while others did not. In December 2013, the New Mexico Supreme Court issued a unanimous ruling directing all county clerks to issue marriage licenses to same-sex couples, thereby making New Mexico the 17th state to recognize same-sex marriage at the statewide level.
Due to its relatively low population, in combination with numerous federally funded research facilities, New Mexico had the highest concentration of PhD holders of any state in 2000. Despite this, the state routinely ranks near the bottom in surveys of the quality of primary and secondary school education. In a landmark decision, a state judge ruled in 2018 that "New Mexico is violating the constitutional rights of at-risk students by failing to provide them with sufficient education," and ordered that the governor and Legislature provide an adequate system by April 2019. New Mexico has a higher concentration of persons who do not finish high school or who have some college without a degree than the nation as a whole. For the state, 23.9% of people over 25 have gone to college but not earned a degree, compared with 21.0% of the nation as a whole, according to United States Census Bureau 2014 American Community Survey estimates. Los Alamos County has the highest percentage of postsecondary degree holders of any county in New Mexico, with 38.7% of the population (4,899 persons), as estimated by the 2010–2014 American Community Survey. The New Mexico Public Education Department oversees the operation of primary and secondary schools; individual school districts directly operate and staff those schools. New Mexico is one of eight states that fund college scholarships through the state lottery. The state of New Mexico requires that the lottery put 30% of its gross sales into the scholarship fund. The scholarship is available to residents who graduated from a state high school and attend a state university full-time while maintaining a GPA of 2.5 or higher. It covered 100% of tuition when it was first instituted in 1996, decreased to 90%, then dropped to 60% in 2017. The value slightly increased in 2018, and new legislation was passed to outline what funds are available per type of institution.
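The scholarship funding rules above (a statutory 30% of gross lottery sales into the fund, a 2.5 GPA floor, and tuition coverage that fell from 100% at the program's 1996 start to 60% in 2017) can be sketched as follows; the sales and tuition dollar figures are hypothetical examples, not numbers from the article.

```python
# Sketch of the New Mexico lottery scholarship arithmetic described above.
# Dollar figures are hypothetical; the 30% transfer, the 2.5 GPA floor,
# and the 60% tuition coverage (as of 2017) come from the text.

def fund_transfer(gross_sales: float) -> float:
    """State law requires 30% of gross lottery sales to go to the fund."""
    return 0.30 * gross_sales

def award(tuition: float, gpa: float, coverage: float = 0.60) -> float:
    """Scholarship covers a fraction of tuition if the 2.5 GPA floor is met."""
    if gpa < 2.5:
        return 0.0
    return coverage * tuition

print(round(fund_transfer(100_000_000)))    # 30% of a hypothetical $100M in sales
print(round(award(tuition=7_000, gpa=3.2))) # 60% of a hypothetical $7,000 tuition
print(award(tuition=7_000, gpa=2.0))        # below the GPA floor: no award
```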
With a Native American population of 134,000 in 1990, New Mexico still ranks as an important center of Native American culture. Both the Navajo and Apache are of Athabaskan origin. The Apache and some Ute live on federal reservations within the state. With 16 million acres (6,500,000 ha), mostly in neighboring Arizona, the reservation of the Navajo Nation ranks as the largest in the United States. The prehistorically agricultural Pueblo Indians live in pueblos scattered throughout the state. Almost half of New Mexicans claim Hispanic origin; many are descendants of colonial settlers who settled in the state's northern portion. Most Mexican immigrants reside in the southern part of the state. In addition, 10–15% of the population, mainly in the north, may have Hispanic Jewish ancestry. Many New Mexicans speak a unique dialect of Spanish. Because of the historical isolation of New Mexico from other speakers of the Spanish language, some of the vocabulary of New Mexican Spanish is unknown to other Spanish speakers. It uses numerous Native American words for local features and includes anglicized words that express American concepts and modern inventions. Albuquerque has the New Mexico Museum of Natural History and Science, the National Hispanic Cultural Center, and the National Museum of Nuclear Science & History, and it hosts the famed annual Albuquerque International Balloon Fiesta every fall. The earliest New Mexico artists whose work survives today are the Mimbres Indians, whose black-and-white pottery could be mistaken for modern art, except that it was produced before 1130 CE (see Mimbres culture). Many examples of this work can be seen at the Deming Luna Mimbres Museum and at the Western New Mexico University Museum. A large artistic community thrives in Santa Fe and has included such people as Bruce Nauman, Richard Tuttle, John Connell, and Steina Vasulka.
The capital city has several art museums, including the New Mexico Museum of Art, the Museum of Spanish Colonial Art, the Museum of International Folk Art, the Museum of Indian Arts and Culture, the Museum of Contemporary Native Arts, SITE Santa Fe, and others. Colonies for artists and writers thrive, and the small city teems with art galleries. In August, the city hosts the annual Santa Fe Indian Market, the oldest and largest juried Native American art showcase in the world. Performing arts include the renowned Santa Fe Opera, which presents five operas in repertory each July and August; the Santa Fe Chamber Music Festival, held each summer; and the restored Lensic Theater, a principal venue for many kinds of performances. Santa Fe is also home to Frogville Records, an indie record label. The weekend after Labor Day boasts the burning of Zozobra, a 50 ft (15 m) marionette, during the Fiestas de Santa Fe. Art is also a frequent theme in Albuquerque, New Mexico's largest city. The National Hispanic Cultural Center has held hundreds of performing arts events, art showcases, and other events related to Spanish culture in New Mexico and worldwide, in the centerpiece Roy E. Disney Center for the Performing Arts or in other venues at the 53-acre facility. New Mexico residents and visitors alike can enjoy performing art from around the world at Popejoy Hall on the campus of the University of New Mexico. Popejoy Hall hosts singers, dancers, Broadway shows, other types of acts, and Shakespeare. Albuquerque also has the unique and memorable KiMo Theater, built in 1927 in the Pueblo Revival style of architecture. The KiMo presents live theater and concerts as well as movies and simulcast operas.
In addition to other general-interest theaters, Albuquerque has the African American Performing Arts Center and Exhibit Hall, which showcases achievements by people of African descent, and the Indian Pueblo Cultural Center, which highlights the cultural heritage of the Native American peoples of New Mexico. New Mexico holds strongly to its Spanish heritage. Old Spanish traditions such as zarzuelas and flamenco are popular in New Mexico. Flamenco dancer and native New Mexican María Benítez founded the Maria Benítez Institute for Spanish Arts "to present programs of the highest quality of the rich artistic heritage of Spain, as expressed through music, dance, visual arts, and other art forms". There is also the Festival Flamenco Internacional de Alburquerque, held each year, in which native Spanish and New Mexican flamenco dancers perform at the University of New Mexico. In the mid-20th century, a thriving Hispano school of literature and scholarship produced work in both English and Spanish. Among the more notable authors were Angélico Chávez, Nina Otero-Warren, Fabiola Cabeza de Baca, Aurelio Espinosa, Cleofas Jaramillo, Juan Bautista Rael, and Aurora Lucero-White Lea. The writer D. H. Lawrence also lived near Taos in the 1920s, at the D. H. Lawrence Ranch, where there is a shrine said to contain his ashes. New Mexico's strong Spanish, Native American, and Wild West frontier motifs have provided material for many authors in the state, including the internationally recognized Rudolfo Anaya and Tony Hillerman. Silver City, in the southwestern mountains of the state, was originally a mining town, and at least one nearby mine still operates. It is perhaps better known now as the home of, or exhibition center for, large numbers of artists, visual and otherwise. Another former mining town turned art haven is Madrid, New Mexico. It was brought to national fame as the filming location for the movie "Wild Hogs" in 2007.
The City of Las Cruces, in southern New Mexico, has a museum system affiliated with the Smithsonian Institution Affiliations Program. Las Cruces also has a variety of cultural and artistic opportunities for residents and visitors. Aside from the aforementioned "Wild Hogs", other movies filmed in New Mexico include "Sunshine Cleaning" and "Vampires". The various seasons of the A&E/Netflix series "Longmire" have been filmed in several New Mexico locations, including Las Vegas, Santa Fe, Eagle Nest, and Red River. The widely acclaimed TV show "Breaking Bad" and its spin-off "Better Call Saul" were both set and filmed in and around Albuquerque. No major league professional sports teams are based in New Mexico, but the Albuquerque Isotopes are a Pacific Coast League Triple-A baseball affiliate of the MLB Colorado Rockies. New Mexico is home to several baseball teams of the Pecos League: the Roswell Invaders, Ruidoso Osos, Santa Fe Fuego, and the White Sands Pupfish. The Duke City Gladiators of the Indoor Football League (IFL) play their home games at Tingley Coliseum in Albuquerque. New Mexico United, also based in Albuquerque, began play in the second tier of the American soccer pyramid, the USL Championship, in 2019. Another soccer team from that city, Albuquerque Sol FC, plays in the fourth-tier USL League Two. Collegiate athletics in New Mexico involve various New Mexico Lobos and New Mexico State Aggies teams in many sports. For many years the two universities have had a rivalry, often referred to as the "Rio Grande Rivalry" or the "Battle of I-25" in recognition of both campuses being located along that highway. NMSU also has a rivalry with the University of Texas at El Paso called "The Battle of I-10". The winner of the NMSU–UTEP football game receives the Silver Spade trophy.
Olympic gold medalist Tom Jager, an advocate of controversial high-altitude training for swimming, has conducted training camps in Albuquerque (elevation 5,312 ft (1,619.1 m)) and Los Alamos (7,320 ft (2,231 m)). The NRA Whittington Center in Raton is the United States' largest and most comprehensive competitive shooting range and training facility.
North Carolina
North Carolina is a state located in the southeastern region of the United States. It is the 28th-largest and 9th-most populous of the 50 United States. It is bordered by Virginia to the north, the Atlantic Ocean to the east, Georgia and South Carolina to the south, and Tennessee to the west. Raleigh is the state's capital and Charlotte is its largest city. The Charlotte metropolitan area, with an estimated population of 2,569,213 in 2018, is the most populous metropolitan area in North Carolina, the 23rd-most populous in the United States, and the largest banking center in the nation after New York City. The Raleigh metropolitan area is the second-largest metropolitan area in the state, with an estimated population of 1,362,540 in 2018, and is home to the largest research park in the United States, Research Triangle Park. North Carolina was established as a royal colony in 1729 and is one of the original Thirteen Colonies. North Carolina is named in honor of King Charles I of England, who first formed the English colony, "Carolus" being Latin for "Charles". On November 21, 1789, North Carolina became the 12th state to ratify the United States Constitution. In the run-up to the American Civil War, North Carolina declared its secession from the Union on May 20, 1861, becoming the last of eleven states to join the Confederate States. Following the Civil War, the state was restored to the Union on June 25, 1868. On December 17, 1903, Orville and Wilbur Wright successfully piloted the world's first controlled, sustained flight of a powered, heavier-than-air aircraft at Kill Devil Hills in North Carolina's Outer Banks. North Carolina uses the slogan "First in Flight" on state license plates to commemorate this achievement, alongside a newer alternative design bearing the slogan "First in Freedom" in reference to the Mecklenburg Declaration. North Carolina is defined by a wide range of elevations and landscapes.
From west to east, North Carolina's elevation descends from the Appalachian Mountains to the Piedmont and the Atlantic coastal plain. North Carolina's Mount Mitchell, at 6,684 feet (2,037 m), is the highest point in North America east of the Mississippi River. Most of the state falls in the humid subtropical climate zone; however, the western, mountainous part of the state has a subtropical highland climate. Woodland-culture Native Americans were in the area around 1000 BCE; starting around 750 CE, Mississippian-culture Indians created larger political units with stronger leadership and more stable, longer-term settlements. During this time, important buildings were constructed as pyramidal, flat-topped structures. By 1550, many groups of American Indians lived in present-day North Carolina, including the Chowanoke, Roanoke, Pamlico, Machapunga, Coree, Cape Fear Indians, Waxhaw, Waccamaw, and Catawba. Juan Pardo explored the area in 1566–1567, establishing Fort San Juan in 1567 at the site of the Native American community of Joara, a Mississippian-culture regional chiefdom in the western interior, near the present-day city of Morganton. The fort lasted only 18 months; the local inhabitants killed all but one of the 120 men Pardo had stationed at a total of six forts in the area. A later expedition by Philip Amadas and Arthur Barlowe followed in 1584, at the direction of Sir Walter Raleigh. In June 1718, the pirate Blackbeard ran his flagship, the "Queen Anne's Revenge", aground at Beaufort Inlet, North Carolina, in present-day Carteret County. After the grounding, her crew and supplies were transferred to smaller ships. In November, after appealing to the governor of North Carolina, who promised safe haven and a pardon, Blackbeard was killed in an ambush by troops from Virginia. In 1996, Intersal, Inc., a private firm, discovered the remains of a vessel likely to be the "Queen Anne's Revenge", which was added to the U.S. National Register of Historic Places.
North Carolina became one of the English Thirteen Colonies; with the territory of South Carolina, it was originally known as the Province of Carolina. The northern and southern parts of the original province separated in 1729. Originally settled by small farmers, sometimes holding a few slaves, who were oriented toward subsistence agriculture, the colony lacked cities or towns. Pirates menaced the coastal settlements, but by 1718 the pirates had been captured and killed. Growth was strong in the middle of the 18th century, as the economy attracted Scots-Irish, Quaker, English, and German immigrants. A majority of the colonists supported the American Revolution, and there were fewer Loyalists than in some other colonies, such as Georgia, South Carolina, Delaware, and New York. During colonial times, Edenton served as the capital beginning in 1722, and New Bern was selected as the capital in 1766. Construction of Tryon Palace, which served as the residence and offices of the provincial governor William Tryon, began in 1767 and was completed in 1771. In 1788, Raleigh was chosen as the site of the new capital, as its central location protected it from coastal attacks. Officially established in 1792 as both county seat and state capital, the city was named after Sir Walter Raleigh, sponsor of Roanoke, the "lost colony" on Roanoke Island. The population of the colony more than quadrupled, from 52,000 in 1740 to 270,000 in 1780, owing to high immigration from Virginia, Maryland, and Pennsylvania, plus immigrants from abroad. North Carolina made the smallest per-capita contribution to the Revolutionary War of any state, as only 7,800 men joined the Continental Army under General George Washington; an additional 10,000 served in local militia units under such leaders as General Nathanael Greene. There was some military action, especially in 1780–81.
Many Carolinian frontiersmen had moved west over the mountains into the Washington District (later known as Tennessee), but in 1789, following the Revolution, the state was persuaded to relinquish its claim to the western lands. It ceded them to the national government so the Southwest Territory could be organized and managed nationally. After 1800, cotton and tobacco became important export crops. The eastern half of the state, especially the Tidewater region, developed a slave society based on a plantation system and slave labor. Many free people of color migrated to the frontier along with their European-American neighbors, where the social system was looser. By 1810, nearly three percent of the free population consisted of free people of color, who numbered slightly more than ten thousand. The western areas were dominated by white families, especially Scots-Irish, who operated small subsistence farms. In the early national period, the state became a center of Jeffersonian and Jacksonian democracy, with a strong Whig presence, especially in the West. After Nat Turner's slave uprising in 1831, North Carolina and other southern states reduced the rights of free blacks. In 1835, the legislature withdrew their right to vote. On May 20, 1861, North Carolina was the last of the Confederate states to declare secession from the Union, 13 days after the Tennessee legislature voted for secession. Some 125,000 North Carolinians served in the military; 20,000 were killed in battle, the most of any state in the Confederacy, and 21,000 died of disease. The state government was reluctant to support the demands of the national government in Richmond, and the state was the scene of only small battles. With the defeat of the Confederacy in 1865, the Reconstruction Era began. The United States abolished slavery without compensation to slaveholders or reparations to freedmen.
A Republican Party coalition of black freedmen, northern carpetbaggers, and local scalawags controlled state government for three years. White conservative Democrats regained control of the state legislature in 1870, in part through Ku Klux Klan violence and terrorism at the polls to suppress black voting. Republicans continued to be elected to the governorship until 1876, when the Red Shirts, a paramilitary organization that arose in 1874 and was allied with the Democratic Party, helped suppress black voting. More than 150 black Americans were murdered in electoral violence in 1876. Post–Civil War debt cycles pushed people to switch from subsistence agriculture to commodity agriculture. During this time the notorious crop-lien system developed, and it was financially hard on landless whites and blacks, due to high amounts of usury. Also because of the push for commodity agriculture, the free range was ended: previously, people had fenced in their crops and let their livestock feed on free-range areas; afterward, people fenced in their animals and left their crops in the open. Democrats were elected to the legislature and the governor's office, but the Populists attracted voters displeased with them. In 1896, a biracial Populist-Republican Fusionist coalition gained the governor's office and passed laws that extended the voting franchise to blacks and poor whites. The Democrats regained control of the legislature in 1898 and passed laws to impose Jim Crow and racial segregation of public facilities. Voters of North Carolina's 2nd congressional district elected a total of four African-American congressmen through these years of the late 19th century. Political tensions ran so high that a small group of white Democrats in 1898 planned to take over the Wilmington government if their candidates were not elected.
In the Wilmington Insurrection of 1898, more than 1,500 white men attacked the black newspaper and neighborhood, killed numerous men, and ran off the white Republican mayor and aldermen. They installed their own people and elected Alfred M. Waddell as mayor, in the only coup d'état in United States history. In 1899 the state legislature passed a new constitution, with requirements for poll taxes and literacy tests for voter registration which disenfranchised most black Americans in the state. Exclusion from voting had wide effects: it meant black Americans could not serve on juries or in any local office. After a decade of white supremacy, many people forgot that North Carolina had ever had a thriving black middle class. Black citizens had no political voice in the state until after the federal Civil Rights Act of 1964 and Voting Rights Act of 1965 were passed to enforce their constitutional rights. It was not until 1992 that another African American was elected as a U.S. Representative from North Carolina. As in the rest of the former Confederacy, North Carolina had become a one-party state, dominated by the Democratic Party. Impoverished by the Civil War and vicious debt cycles, the state continued with an economy based on tobacco, cotton textiles and commodity agriculture. Towns and cities remained few in the east. A major industrial base emerged in the late 19th century in the counties of the Piedmont Triad, based on cotton mills established at the fall line. Railroads were built to connect the new industrializing cities. The state was the site of the first successful controlled, powered and sustained heavier-than-air flight, by the Wright brothers, near Kitty Hawk on December 17, 1903. In the first half of the 20th century, many African Americans left the state to go North for better opportunities, in the Great Migration. Their departure changed the demographic characteristics of many areas.
North Carolina was hard hit by the Great Depression, but the New Deal programs of Franklin D. Roosevelt for cotton and tobacco significantly helped the farmers. After World War II, the state's economy grew rapidly, highlighted by the growth of such cities as Charlotte, Raleigh, and Durham in the Piedmont. Raleigh, Durham, and Chapel Hill form the Research Triangle, a major area of universities and advanced scientific and technical research. In the 1990s, Charlotte became a major regional and national banking center. Tourism has also been a boon for the North Carolina economy as people flock to the Outer Banks coastal area and the Appalachian Mountains anchored by Asheville. The Greensboro Sit-ins played a crucial role in the Civil Rights Movement to bring full equality to American blacks. By the 1970s, spurred in part by the increasingly leftward tilt of national Democrats, conservative whites began to vote for Republican national candidates and gradually for more Republicans locally.

North Carolina was inhabited for at least ten thousand years by succeeding prehistoric indigenous cultures. The Hardaway Site saw major periods of occupation as far back as 10,000 years ago. Before 200 AD, indigenous peoples were building earthwork mounds, which were used for ceremonial and religious purposes. Succeeding peoples, including those of the ancient Mississippian culture established by 1000 AD in the Piedmont, continued to build or add on to such mounds. In the 500–700 years preceding European contact, the Mississippian culture built large, complex cities and maintained far-flung regional trading networks. Its largest city was Cahokia, located in present-day Illinois near the Mississippi River.
Historically documented tribes in the North Carolina region include the Carolina Algonquian-speaking tribes of the coastal areas, such as the Chowanoke, Roanoke, Pamlico, Machapunga, Coree, and Cape Fear Indians, who were the first encountered by the English; the Iroquoian-speaking Meherrin, Cherokee, and Tuscarora of the interior; and Southeastern Siouan tribes, such as the Cheraw, Waxhaw, Saponi, Waccamaw, and Catawba. Spanish explorers traveling inland in the 16th century met Mississippian culture people at Joara, a regional chiefdom near present-day Morganton. Records of Hernando de Soto attested to his meeting with them in 1540. In 1567 Captain Juan Pardo led an expedition to claim the area for the Spanish colony and to establish another route to protect silver mines in Mexico. Pardo made a winter base at Joara, which he renamed "Cuenca". His expedition built Fort San Juan and left a contingent of 30 men there, while Pardo traveled further, and built and garrisoned five other forts. He returned by a different route to Santa Elena on Parris Island, South Carolina, then a center of Spanish Florida. In the spring of 1568, natives killed all but one of the soldiers and burned the six forts in the interior, including Fort San Juan. Although the Spanish never returned to the interior, this effort marked the first European attempt at colonization of the interior of what became the United States. A 16th-century journal by Pardo's scribe Bandera and archaeological findings since 1986 at Joara have confirmed the settlement. In 1584, Elizabeth I granted a charter to Sir Walter Raleigh, for whom the state capital is named, for land in present-day North Carolina (then part of the territory of Virginia). It was the second American territory which the English attempted to colonize. Raleigh established two colonies on the coast in the late 1580s, but both failed.
The fate of the "Lost Colony" of Roanoke Island remains one of the most widely debated mysteries of American history. Virginia Dare, the first English child to be born in North America, was born on Roanoke Island on August 18, 1587; Dare County is named for her. As early as 1650, settlers from the Virginia colony moved into the area of Albemarle Sound. In 1663, King Charles II of England granted a charter to start a new colony on the North American continent; it generally established North Carolina's borders. He named it "Carolina" in honor of his father Charles I. In 1665, a second charter was issued to attempt to resolve territorial questions. In 1710, owing to disputes over governance, the Carolina colony began to split into North Carolina and South Carolina. The latter became a crown colony in 1729. In the 1700s, a series of smallpox epidemics swept the South, causing high fatalities among the Native Americans, who had no immunity to the new disease (it had become endemic in Europe). According to the historian Russell Thornton, "The 1738 epidemic was said to have killed one-half of the Cherokee, with other tribes of the area suffering equally." After the Spanish in the 16th century, the first permanent European settlers of North Carolina were English colonists who migrated south from Virginia. The latter colony had grown rapidly, and land there was becoming scarce. Nathaniel Batts was documented as one of the first of these Virginian migrants. He settled south of the Chowan River and east of the Great Dismal Swamp in 1655. By 1663, this northeastern area of the Province of Carolina, known as the Albemarle Settlements, was undergoing full-scale English settlement. During the same period, the English monarch Charles II gave the province to the Lords Proprietors, a group of noblemen who had helped restore Charles to the throne in 1660. The new province of "Carolina" was named in honor and memory of King Charles I (Latin: "Carolus").
In 1712, North Carolina became a separate colony; the year before, a large revolt known as Cary's Rebellion had broken out over disputes about governance. Except for the Earl Granville holdings, North Carolina became a royal colony seventeen years later. Differences in the settlement patterns of eastern and western North Carolina, or the Low Country and uplands, affected the political, economic, and social life of the state from the 18th until the 20th century. The Tidewater in eastern North Carolina was settled chiefly by immigrants from rural England and the Scottish Highlands. The upcountry of western North Carolina was settled chiefly by Scots-Irish, English, and German Protestants, the so-called "cohee". Arriving during the mid- to late 18th century, the Scots-Irish from what is today Northern Ireland were the largest non-English immigrant group before the Revolution, though English indentured servants remained overwhelmingly the largest immigrant group overall. During the American Revolutionary War, the English and Highland Scots of eastern North Carolina tended to remain loyal to the British Crown, because of longstanding business and personal connections with Great Britain. The English, Welsh, Scots-Irish, and German settlers of western North Carolina tended to favor American independence from Britain. Most of the English colonists had arrived as indentured servants, hiring themselves out as laborers for a fixed period to pay for their passage. In the early years the line between indentured servants and African slaves or laborers was fluid. Some Africans were allowed to earn their freedom before slavery became a lifelong status. Most of the free colored families formed in North Carolina before the Revolution were descended from unions or marriages between free white women and enslaved or free African or African-American men. Because the mothers were free, their children were born free. Many had migrated or were descendants of migrants from colonial Virginia.
As the flow of indentured laborers to the colony decreased with improving economic conditions in Great Britain, planters imported more slaves, and the state's legal delineations between free and slave status tightened, effectively hardening the latter into a racial caste. The economy's growth and prosperity was based on slave labor, devoted primarily to the production of tobacco. On April 12, 1776, the colony became the first to instruct its delegates to the Continental Congress to vote for independence from the British Crown, through the Halifax Resolves passed by the North Carolina Provincial Congress. The date of this event is memorialized on the state flag and state seal. Throughout the Revolutionary War, fierce guerrilla warfare erupted between bands of pro-independence and pro-British colonists. In some cases the war was also an excuse to settle private grudges and rivalries. A major American victory in the war took place at Kings Mountain along the North Carolina–South Carolina border; on October 7, 1780, a force of 1,000 mountain men from western North Carolina (including what is today the state of Tennessee) and southwest Virginia overwhelmed a force of some 1,000 British troops led by Major Patrick Ferguson. Most of the soldiers fighting for the British side in this battle were Carolinians who had remained loyal to the Crown (they were called "Tories" or Loyalists). The American victory at Kings Mountain gave the advantage to colonists who favored American independence, and it prevented the British Army from recruiting new soldiers from the Tories. The road to Yorktown and America's independence from Great Britain led through North Carolina. As the British Army moved north from victories in Charleston and Camden, South Carolina, the Southern Division of the Continental Army and local militia prepared to meet them.
Following General Daniel Morgan's victory over the British cavalry commander Banastre Tarleton at the Battle of Cowpens on January 17, 1781, southern commander Nathanael Greene drew British commander Lord Charles Cornwallis on a chase across the heartland of North Carolina, and away from the latter's base of supply in Charleston, South Carolina. This campaign is known as "The Race to the Dan" or "The Race for the River". In the Battle of Cowan's Ford, Cornwallis met resistance along the banks of the Catawba River at Cowan's Ford on February 1, 1781, in an attempt to engage General Morgan's forces during a tactical withdrawal. Morgan had moved to the northern part of the state to combine with General Greene's newly recruited forces. Generals Greene and Cornwallis finally met at the Battle of Guilford Courthouse in present-day Greensboro on March 15, 1781. Although the British troops held the field at the end of the battle, their casualties at the hands of the numerically superior Continental Army were crippling. Following this "Pyrrhic victory", Cornwallis chose to move to the Virginia coastline to get reinforcements, and to allow the Royal Navy to protect his battered army. This decision would result in Cornwallis' eventual defeat at Yorktown, Virginia, later in 1781. The Patriots' victory there guaranteed American independence. On November 21, 1789, North Carolina became the twelfth state to ratify the Constitution. In 1840, the state completed its capitol building in Raleigh, which still stands today. Most of North Carolina's slave owners and large plantations were located in the eastern portion of the state. Although North Carolina's plantation system was smaller and less cohesive than that of Virginia, Georgia, or South Carolina, significant numbers of planters were concentrated in the counties around the port cities of Wilmington and Edenton, as well as suburban planters around the cities of Raleigh, Charlotte, and Durham in the Piedmont.
Planters owning large estates wielded significant political and socio-economic power in antebellum North Carolina, which was a slave society. They placed their interests above those of the generally non-slave-holding "yeoman" farmers of western North Carolina. In the mid-19th century, the state's rural and commercial areas were connected by the construction of a wooden plank road, known as a "farmer's railroad", from Fayetteville in the east to Bethania (northwest of Winston-Salem). Besides slaves, there were a number of free people of color in the state. Most were descended from free African Americans who had migrated along with neighbors from Virginia during the 18th century. The majority were descendants of working-class unions between white women (indentured servants or free) and African men (indentured, enslaved, or free). After the Revolution, Quakers and Mennonites worked to persuade slaveholders to free their slaves. Some slaveholders, inspired by these efforts and by the language of the Revolution, arranged for the manumission of their slaves. The number of free people of color rose markedly in the first couple of decades after the Revolution. On October 25, 1836, construction began on the Wilmington and Raleigh Railroad to connect the port city of Wilmington with the state capital of Raleigh. In 1849 the North Carolina Railroad was created by act of the legislature to extend that railroad west to Greensboro, High Point, and Charlotte. During the Civil War, the Wilmington-to-Raleigh stretch of the railroad would be vital to the Confederate war effort; supplies shipped into Wilmington would be moved by rail through Raleigh to the Confederate capital of Richmond, Virginia. During the antebellum period, North Carolina was an overwhelmingly rural state, even by Southern standards. In 1860 only one North Carolina town, the port city of Wilmington, had a population of more than 10,000. Raleigh, the state capital, had barely more than 5,000 residents.
While slaveholding was slightly less concentrated than in some Southern states, according to the 1860 census, more than 330,000 people, or 33% of the population of 992,622, were enslaved African Americans. They lived and worked chiefly on plantations in the eastern Tidewater. In addition, 30,463 free people of color lived in the state. They were also concentrated in the eastern coastal plain, especially at port cities such as Wilmington and New Bern, where a variety of jobs were available. Free African Americans were allowed to vote until 1835, when the state revoked their suffrage in restrictions following the slave rebellion of 1831 led by Nat Turner. Southern slave codes criminalized willful killing of a slave in most cases. By 1860, North Carolina was a slave state in which one-third of the population was enslaved, a smaller proportion than in many other Southern states. The state did not vote to join the Confederacy until President Abraham Lincoln called on it to supply troops to invade its sister state, South Carolina; it thus became the last or penultimate state to officially join the Confederacy. The title of "last to join the Confederacy" has been disputed; although Tennessee's informal secession on May 7, 1861, preceded North Carolina's official secession on May 20, the Tennessee legislature did not formally vote to secede until June 8, 1861. Although the state supplied the Confederacy with at least 125,000 troops and the Union with approximately 15,000 troops of all ranks, it saw little action on its own territory. Its contribution of Confederate troops was by far the greatest of any Confederate state; approximately 40,000 of them died, more than half from disease and the remainder from battlefield wounds and starvation. Elected in 1862, Governor Zebulon Baird Vance tried to maintain state autonomy against Confederate President Jefferson Davis in Richmond. After secession, some North Carolinians refused to support the Confederacy.
Some of the yeoman farmers in the state's mountains and western Piedmont region remained neutral during the Civil War, while some covertly supported the Union cause during the conflict. Approximately 2,000 North Carolinians from western North Carolina enlisted in the Union Army and fought for the North in the war. Two additional Union Army regiments were raised in the coastal areas of the state, which were occupied by Union forces in 1862 and 1863. Numerous slaves escaped to Union lines, where they became essentially free. Confederate troops from all parts of North Carolina served in virtually all the major battles of the Army of Northern Virginia, the Confederacy's most famous army. The largest battle fought in North Carolina was at Bentonville, which was a futile attempt by Confederate General Joseph Johnston to slow Union General William Tecumseh Sherman's advance through the Carolinas in the spring of 1865. In April 1865, after losing the Battle of Morrisville, Johnston surrendered to Sherman at Bennett Place, in what is today Durham. North Carolina's port city of Wilmington was the last Confederate port to fall to the Union, in February 1865, after the Union won the nearby Second Battle of Fort Fisher, its major defense downriver. The first Confederate soldier to be killed in the Civil War was Private Henry Wyatt from North Carolina, in the Battle of Big Bethel in June 1861. At the Battle of Gettysburg in July 1863, the 26th North Carolina Regiment participated in the Pickett–Pettigrew Charge and advanced the farthest into the Northern lines of any Confederate regiment. During the Battle of Chickamauga, the 58th North Carolina Regiment advanced farther than any other regiment on Snodgrass Hill to push back the remaining Union forces from the battlefield. At Appomattox Court House in Virginia in April 1865, the 75th North Carolina Regiment, a cavalry unit, fired the last shots of the Confederate Army of Northern Virginia in the Civil War.
For many years, North Carolinians proudly boasted that they had been "First at Bethel, Farthest at Gettysburg and Chickamauga, and Last at Appomattox". Following the collapse of the Confederacy in 1865, North Carolina, along with the rest of the former Confederate States, was placed under the direct control of the U.S. military and was relieved of its constitutional government and representation within the United States Congress in what is now referred to as the Reconstruction era. In order to earn back its rights, the state had to make concessions to Washington, one of which was ratification of the Thirteenth Amendment, which the state completed on December 4, 1865. Congressional Republicans during Reconstruction, commonly referred to as "radical Republicans", constantly pushed for new constitutions for each of the Southern states that emphasized equal rights for African-Americans. In 1868, a constitutional convention restored the state government of North Carolina. Though the Fifteenth Amendment was adopted soon afterward, in 1870, it remained in most cases ineffective for almost a century, especially as paramilitary groups lynched with impunity. The elections in April 1868 following the constitutional convention led to a narrow victory for a Republican-dominated government, with 19 African-Americans holding positions in the North Carolina State Legislature. In an attempt to put the reforms into effect, the new Republican Governor William W. Holden, using the Shoffner Act, declared martial law in any county allegedly not complying with law and order.

North Carolina is bordered by South Carolina on the south, Georgia on the southwest, Tennessee on the west, Virginia on the north, and the Atlantic Ocean on the east. The United States Census Bureau places North Carolina in the South Atlantic division of the southern region.
North Carolina consists of three main geographic regions: the Atlantic coastal plain, occupying the eastern portion of the state; the central Piedmont region; and the Mountain region in the west, which is part of the Appalachian Mountains. The coastal plain consists of three more specifically defined areas: the Outer Banks, a string of sandy, narrow barrier islands separated from the mainland by sounds or inlets, including Albemarle Sound and Pamlico Sound; the tidewater region, the native home of the Venus flytrap; and the inner coastal plain, where longleaf pine trees are native. So many ships have been lost off Cape Hatteras that the area is known as the "Graveyard of the Atlantic"; more than a thousand ships have sunk in these waters since records began in 1526. The most famous of these is the "Queen Anne's Revenge" (flagship of the pirate Blackbeard), which went aground in Beaufort Inlet in 1718. The coastal plain transitions to the Piedmont region along the Atlantic Seaboard fall line, the elevation at which waterfalls first appear on streams and rivers. The Piedmont region of central North Carolina is the state's most populous region, containing the six largest cities in the state by population. It consists of gently rolling countryside frequently broken by hills or low mountain ridges. Small, isolated, and deeply eroded mountain ranges and peaks are located in the Piedmont, including the Sauratown Mountains, Pilot Mountain, the Uwharrie Mountains, Crowder's Mountain, King's Pinnacle, the Brushy Mountains, and the South Mountains. The Piedmont rises gradually in elevation from east to west. The western section of the state is part of the Appalachian Mountain range. Among the subranges of the Appalachian Mountains located in the state are the Great Smoky Mountains, Blue Ridge Mountains, and Black Mountains.
The Black Mountains are the highest in the eastern United States, and culminate in Mount Mitchell, the highest point east of the Mississippi River. North Carolina has 17 major river basins. The five basins west of the Blue Ridge Mountains flow to the Gulf of Mexico, while the remainder flow to the Atlantic Ocean. Of the 17 basins, 11 originate within the state of North Carolina, but only four are contained entirely within the state's border—the Cape Fear, the Neuse, the White Oak, and the Tar–Pamlico basins. Elevation above sea level is most responsible for temperature change across the state, with the mountain area being coolest year-round. The climate is also influenced by the Atlantic Ocean and the Gulf Stream, especially in the coastal plain. These influences tend to cause warmer winter temperatures along the coast, where temperatures only occasionally drop below the freezing point at night. The coastal plain averages very little snow or ice annually, and in many years there may be no snow or ice at all. The Atlantic Ocean exerts less influence on the climate of the Piedmont region, which has hotter summers and colder winters than along the coast. North Carolina experiences severe weather both in summer and in winter, with summer bringing threat of hurricanes, tropical storms, heavy rain, and flooding. Destructive hurricanes that have hit North Carolina include Hurricane Fran, Hurricane Florence, Hurricane Floyd, Hurricane Hugo, and Hurricane Hazel, the latter being the strongest storm ever to make landfall in the state, as a Category 4 in 1954. Hurricane Isabel ranks as the most destructive of the 21st century. North Carolina averages fewer than 20 tornadoes per year, many of them produced by hurricanes or tropical storms along the coastal plain. Tornadoes from thunderstorms are a risk, especially in the eastern part of the state.
The western Piedmont is often protected by the mountains, which tend to break up storms as they try to cross over; the storms will often re-form farther east. A phenomenon known as "cold-air damming" often occurs in the northwestern part of the state, which can weaken storms but can also lead to major ice events in winter. In April 2011, the worst tornado outbreak in North Carolina's history occurred. Thirty confirmed tornadoes touched down, mainly in the Eastern Piedmont and Sandhills, killing at least 24 people. In September 2019 Hurricane Dorian hit the area.

The United States Census Bureau estimates that the population of North Carolina was 10,488,084 on July 1, 2019, a 9.99% increase since the 2010 Census. Of the people residing in North Carolina, 58.5% were born there; 33.1% were born in another state; 1.0% were born in Puerto Rico, U.S. Island areas, or born abroad to American parent(s); and 7.4% were foreign-born. As of 2010, 89.66% (7,750,904) of North Carolina residents age five and older spoke English at home as a primary language, while 6.93% (598,756) spoke Spanish, 0.32% (27,310) French, 0.27% (23,204) German, and 0.27% (23,072) Chinese (including Mandarin). In total, 10.34% (893,735) of North Carolina's population age five and older spoke a primary language other than English. North Carolina is also home to a spectrum of different dialects of Southern American English and Appalachian English. North Carolina residents, like those of other Southern states, have since the colonial era been overwhelmingly Protestant, first Anglican, then Baptist and Methodist. Before the Civil War, the Baptists split into regional associations of the North and South, over the issue of slavery.
By the late 19th century, the largest Protestant denomination in North Carolina was the Baptists, counting both whites and blacks, though the latter had set up their own organizations. After emancipation, black Baptists quickly set up their own independent congregations in North Carolina and other states of the South, as they wanted to be free of white supervision. Black Baptists developed their own state and national associations, such as the National Baptist Convention USA, Inc. While the Baptists in total (counting both blacks and whites) have maintained the majority in this part of the country (known as the Bible Belt), a wide variety of faiths are practiced by other residents in the state, including Judaism, Islam, Baha'i, Buddhism, and Hinduism. As of 2010 the Southern Baptist Convention was the largest denomination, with 4,241 churches and 1,513,000 members; the second largest was the United Methodist Church, with 660,000 members and 1,923 churches. The third was the Roman Catholic Church, with 428,000 members in 190 congregations. The fourth largest was the Presbyterian Church (USA), with 186,000 members and 710 congregations; this denomination was brought by Scots-Irish immigrants who settled the backcountry in the colonial era. The state also has a special history with the Moravian Church, as settlers of this faith (largely of German origin) settled in the Winston-Salem area in the 18th and 19th centuries. Presbyterians, historically Scots-Irish, have had a strong presence in Charlotte and in Scotland County. Currently, the rapid influx of northerners and immigrants from Latin America is steadily increasing ethnic and religious diversity: the number of Roman Catholics and Jews in the state has increased, as well as general religious diversity. The second-largest Protestant denomination in North Carolina after Baptist traditions is Methodism, which is strong in the northern Piedmont, especially in populous Guilford County.
There are also a substantial number of Quakers in Guilford County and northeastern North Carolina. Many universities and colleges in the state have been founded on religious traditions, and some currently maintain that affiliation. The state also has several major seminaries, including the Southeastern Baptist Theological Seminary in Wake Forest, and the Hood Theological Seminary (AME Zion) in Salisbury.

In 2016, the U.S. Census Bureau released 2015 population estimate counts for North Carolina's counties. Mecklenburg County has the largest population, while Wake County has the second largest population in North Carolina. In 2018, the U.S. Census Bureau released 2018 population estimate counts for North Carolina's cities with populations above 70,000. Charlotte has the largest population, while Raleigh has the highest population density of North Carolina's largest cities. North Carolina has three major Combined Statistical Areas with populations of more than 1.6 million (U.S. Census Bureau 2018 estimates). North Carolina's 2018 total gross state product was $496 billion. Based on American Community Survey 2010–2014 data, North Carolina's median household income was $46,693. It ranked forty-first out of fifty states plus the District of Columbia for median household income. North Carolina had the fourteenth highest poverty rate in the nation at 17.6%; 13% of families were below the poverty line. The state has a very diverse economy because of its great availability of hydroelectric power, its pleasant climate, and its wide variety of soils. The state ranks third among the South Atlantic states in population, but leads the region in industry and agriculture. North Carolina leads the nation in the production of tobacco, textiles, and furniture. Charlotte, the state's largest city, is a major textile and trade center. According to a 2013 Forbes article, employment in the "Old North State" has grown across many different industry sectors.
Science, technology, engineering, and math (STEM) industries in the area surrounding North Carolina's capital have grown 17.9 percent since 2001, placing Raleigh-Cary at No. 5 among the 51 largest metro areas in the country where technology is booming. In 2010, North Carolina's total gross state product was $424.9 billion; the state debt in November 2012 totaled $2.4 billion according to one source, while another put it at $57.8 billion for 2012. In 2011, the civilian labor force was at around 4.5 million, with employment near 4.1 million. North Carolina is the leading U.S. state in production of flue-cured tobacco and sweet potatoes, and comes second in the farming of pigs and hogs, trout, and turkeys. In the three most recent USDA surveys (2002, 2007, 2012), North Carolina also ranked second in the production of Christmas trees. North Carolina has 15 metropolitan areas, and in 2010 was chosen as the third-best state for business by Forbes Magazine, and the second-best state by Chief Executive Officer Magazine. Since 2000, there has been a clear division in the economic growth of North Carolina's urban and rural areas. While North Carolina's urban areas have enjoyed a prosperous economy with steady job growth, low unemployment, and rising wages, many of the state's rural counties have suffered from job loss, rising levels of poverty, and population loss as their manufacturing base has declined. According to one estimate, one-half of North Carolina's 100 counties have lost population since 2010, primarily due to the poor economy in many of North Carolina's rural areas. However, the population of the state's urban areas is steadily increasing.

Transportation systems in North Carolina consist of air, water, road, rail, and public transportation including intercity rail via Amtrak and light rail in Charlotte. North Carolina has the second-largest state highway system in the country as well as the largest ferry system on the east coast.
North Carolina's airports serve destinations throughout the United States and international destinations in Canada, Europe, Central America, and the Caribbean. In 2013 Charlotte Douglas International Airport, which serves as the second busiest hub for American Airlines, ranked as the 23rd busiest airport in the world. North Carolina has a growing passenger rail system with Amtrak serving most major cities. Charlotte is also home to North Carolina's only light rail system known as the Lynx. The government of North Carolina is divided into three branches: executive, legislative, and judicial. These consist of the Council of State (led by the Governor), the bicameral legislature (called the General Assembly), and the state court system (headed by the North Carolina Supreme Court). The state constitution delineates the structure and function of the state government. North Carolina has 13 seats in the U.S. House of Representatives and two seats in the U.S. Senate. North Carolina's party loyalties have undergone a series of important shifts in the last few years: While the 2010 midterms saw Tar Heel voters elect a bicameral Republican majority legislature for the first time in more than a century, North Carolina has also become a Southern swing state in presidential races. Since Southern Democrat Jimmy Carter's comfortable victory in the state in 1976, the state had consistently leaned Republican in presidential elections until Democrat Barack Obama narrowly won the state in 2008. In the 1990s, Democrat Bill Clinton came within a point of winning the state in 1992 and also only narrowly lost the state in 1996. In the early 2000s, Republican George W. Bush easily won the state by more than 12 points. 
By 2008, demographic shifts, population growth, and increased liberalization in densely populated areas such as the Research Triangle, Charlotte, Greensboro, Winston-Salem, Fayetteville, and Asheville, propelled Barack Obama to victory in North Carolina, the first Democrat to win the state since 1976. In 2012, North Carolina was again considered a competitive swing state, with the Democrats even holding their 2012 Democratic National Convention in Charlotte. However, Republican Mitt Romney ultimately eked out a two-point win in North Carolina, the only 2012 swing state that Obama lost, and one of only two states (along with Indiana) to flip from Obama in 2008 to the GOP in 2012. In 2012, the state elected a Republican Governor (Pat McCrory) and Lieutenant Governor (Dan Forest) for the first time in more than two decades, while also giving the Republicans veto-proof majorities in both the State House of Representatives and the State Senate. Because of gerrymandering in redistricting after the 2010 census, Democrats have been underrepresented in the state and Congressional delegations since 2012, although they have sometimes represented more than half the state's population. Several U.S. House of Representatives seats flipped control in 2012, with the Republicans holding nine seats to the Democrats' four. In the 2014 mid-term elections, Republican David Rouzer won the state's seventh congressional district seat, increasing the congressional delegation party split to 10–3 in favor of the GOP. The state was sued for racially gerrymandering the districts, which diluted minority voting power in some areas and skewed representation. The federal court ordered redistricting in 2015. 
"I propose that we draw the maps to give a partisan advantage to 10 Republicans and three Democrats because I do not believe it's possible to draw a map with 11 Republicans and two Democrats," David Lewis, a Republican state representative who led the redistricting effort, said at the time. "North Carolina Republicans won 10 of the 13 seats in 2016, when Democrats got 47 percent of the statewide vote. In 2018 Republicans took nine, with one seat undecided, even though Democrats got 48 percent of the overall vote. (Excluding one district where a Republican ran unopposed, Democrats' share in 2018 was 51 percent.)" (The undecided election in North Carolina's 9th congressional district is because the bipartisan State Election Board refused in February 2019 to certify the results, after an investigation found evidence of widespread ballot fraud committed by Republican operatives.) Two suits challenging the state congressional district map were led by "two dozen voters, the state Democratic Party, the state chapter of the League of Women Voters, and the interest group Common Cause". They contend that the redistricting resulted in deliberate under-representation of a substantial portion of voters. This case reached the United States Supreme Court in March 2019, which also heard a related partisan gerrymandering case from Maryland. Elementary and secondary public schools are overseen by the North Carolina Department of Public Instruction. The North Carolina Superintendent of Public Instruction is the secretary of the North Carolina State Board of Education, but the board, rather than the superintendent, holds most of the legal authority for making public education policy. In 2009, the board's chairman also became the "chief executive officer" for the state's school system. North Carolina has 115 public school systems, each of which is overseen by a local school board. A county may have one or more systems within it. 
The largest school systems in North Carolina are the Wake County Public School System, Charlotte-Mecklenburg Schools, Guilford County Schools, Winston-Salem/Forsyth County Schools, and Cumberland County Schools. In total there are 2,425 public schools in the state, including 99 charter schools. North Carolina schools were segregated until the Brown v. Board of Education decision and the release of the Pearsall Plan. In 1795, North Carolina opened the first public university in the United States—the University of North Carolina (now named the University of North Carolina at Chapel Hill). More than 200 years later, the University of North Carolina system encompasses 17 public universities including North Carolina State University, North Carolina A&T State University, North Carolina Central University, the University of North Carolina at Chapel Hill, the University of North Carolina at Greensboro, East Carolina University, Western Carolina University, Winston-Salem State University, the University of North Carolina at Asheville, the University of North Carolina at Charlotte, the University of North Carolina at Pembroke, UNC Wilmington, Elizabeth City State University, Appalachian State University, Fayetteville State University, and UNC School of the Arts. Along with its public universities, North Carolina has 58 public community colleges in its community college system. The largest university in North Carolina is currently North Carolina State University, with more than 34,000 students. 
North Carolina is also home to many well-known private colleges and universities, including Duke University, Wake Forest University, Pfeiffer University, Lees-McRae College, Davidson College, Barton College, North Carolina Wesleyan College, Elon University, Guilford College, Livingstone College, Salem College, Shaw University (the first historically black college or university in the South), Laurel University, Meredith College, Methodist University, Belmont Abbey College (the only Catholic college in the Carolinas), Campbell University, University of Mount Olive, Montreat College, High Point University, Lenoir-Rhyne University (the only Lutheran university in North Carolina) and Wingate University. Early newspapers were established in the eastern part of North Carolina in the mid-18th century. The Fayetteville Observer, established in 1816, is the oldest newspaper still in publication in North Carolina. The Wilmington Star-News, established in 1867, is the oldest continuously running newspaper. As of January 1, 2020, there were approximately 240 newspapers in publication in the state of North Carolina. As of 2020, most North Carolina newspapers use Facebook and Twitter for distribution of content. North Carolina is home to four major league sports franchises. The Carolina Panthers of the National Football League, the Charlotte Hornets of the National Basketball Association, and an unnamed franchise of Major League Soccer are based in Charlotte, while the Raleigh-based Carolina Hurricanes play in the National Hockey League. The Panthers and Hurricanes are the only two major professional sports teams that have the same geographical designation while playing in different metropolitan areas. The Hurricanes are the only major professional team from North Carolina to have won a league championship, having captured the Stanley Cup in 2006. 
North Carolina is also home to two other top-level professional teams in less prominent sports—the Charlotte Hounds of Major League Lacrosse and the North Carolina Courage of the National Women's Soccer League. While North Carolina has no Major League Baseball team, it does have numerous minor league baseball teams, with the highest level of play coming from the AAA-affiliated Charlotte Knights and Durham Bulls. Additionally, North Carolina has minor league teams in other team sports including soccer and ice hockey, most notably North Carolina FC and the Charlotte Checkers, both of which play in the second tier of their respective sports. In addition to professional team sports, North Carolina has a strong affiliation with NASCAR and stock-car racing, with Charlotte Motor Speedway in Concord hosting two Cup Series races every year. Charlotte also hosts the NASCAR Hall of Fame, while Concord is the home of several top-flight racing teams, including Hendrick Motorsports, Roush Fenway Racing, Richard Petty Motorsports, Stewart-Haas Racing, and Chip Ganassi Racing. Numerous other tracks around North Carolina host races from low-tier NASCAR circuits as well. Golf is a popular summertime leisure activity, and North Carolina has hosted several important professional golf tournaments. Pinehurst Resort in Pinehurst has hosted a PGA Championship, Ryder Cup, two U.S. Opens, and one U.S. Women's Open. The Wells Fargo Championship is a regular stop on the PGA Tour and is held at Quail Hollow Club in Charlotte, and Quail Hollow has also played host to the PGA Championship. The Wyndham Championship is played annually in Greensboro at Sedgefield Country Club. College sports are also popular in North Carolina, with 18 schools competing at the Division I level. The Atlantic Coast Conference (ACC) is headquartered in Greensboro, and both the ACC Football Championship Game (Charlotte) and the ACC Men's Basketball Tournament (Greensboro) were most recently held in North Carolina. 
Additionally, the city of Charlotte is home to the National Junior College Athletic Association's (NJCAA) headquarters. College basketball is very popular in North Carolina, buoyed by the Tobacco Road rivalries between ACC members North Carolina, Duke, North Carolina State, and Wake Forest. The ACC Championship Game and the Duke's Mayo Bowl are held annually in Charlotte's Bank of America Stadium, featuring teams from the ACC and the Southeastern Conference. Additionally, the state has hosted the NCAA Men's Basketball Final Four on two occasions, in Greensboro in 1974 and in Charlotte in 1994. Charlotte is the most-visited city in the state, attracting 28.3 million visitors in 2018. Area attractions include the Carolina Panthers NFL football team and Charlotte Hornets basketball team, Carowinds amusement park, Charlotte Motor Speedway, U.S. National Whitewater Center, Discovery Place, Great Wolf Lodge, Sea Life Aquarium, Bechtler Museum of Modern Art, Billy Graham Library, Carolinas Aviation Museum, Harvey B. Gantt Center for African-American Arts + Culture, Levine Museum of the New South, McColl Center for Art + Innovation, Mint Museum, and the NASCAR Hall of Fame. Every year the Appalachian Mountains attract several million tourists to the western part of the state, home to the historic Biltmore Estate. The scenic Blue Ridge Parkway and Great Smoky Mountains National Park are the two most-visited National Park Service units in the United States, drawing more than 25 million visitors in 2013. The City of Asheville is consistently voted one of the top places to visit and live in the United States, known for its rich art deco architecture, mountain scenery, and outdoor activities. 
In Raleigh, many tourists visit the State Capitol, African American Cultural Complex, Contemporary Art Museum of Raleigh, Gregg Museum of Art & Design at NCSU, Haywood Hall House & Gardens, Marbles Kids Museum, North Carolina Museum of Art, North Carolina Museum of History, North Carolina Museum of Natural Sciences, North Carolina Sports Hall of Fame, Raleigh City Museum, J. C. Raulston Arboretum, Joel Lane House, Mordecai House, Montfort Hall, and the Pope House Museum. The Carolina Hurricanes NHL hockey team is also located in the city. In the Conover–Hickory area, Hickory Motor Speedway; RockBarn Golf and Spa, home of the Greater Hickory Classic at Rock Barn; the Catawba County Firefighters Museum; and the SALT Block attract many tourists, while Hickory is home to Valley Hills Mall. The Piedmont Triad, or center of the state, is home to Krispy Kreme, Mayberry, Texas Pete, the Lexington Barbecue Festival, and Moravian cookies. The internationally acclaimed North Carolina Zoo in Asheboro attracts visitors to its animals, plants, and a 57-piece art collection along five miles of shaded pathways in the world's largest natural-habitat park by land area. Seagrove, in the central portion of the state, attracts many tourists along Pottery Highway (NC Hwy 705). MerleFest in Wilkesboro attracts more than 80,000 people to its four-day music festival, and Wet 'n Wild Emerald Pointe water park in Greensboro is another attraction. The Outer Banks and surrounding beaches attract millions of people to the Atlantic coast every year. The mainland northeastern part of the state, having recently adopted the name the Inner Banks, is also known as the Albemarle Region, for the Albemarle Settlements, some of the first settlements on North Carolina's portion of the Atlantic Coastal Plain. The region's historic sites are connected by the Historic Albemarle Tour. North Carolina provides a large range of recreational activities, from swimming at the beach to skiing in the mountains. 
North Carolina offers fall colors, freshwater and saltwater fishing, hunting, birdwatching, agritourism, ATV trails, ballooning, rock climbing, biking, hiking, skiing, boating and sailing, camping, canoeing, caving (spelunking), gardens, and arboretums. North Carolina has theme parks, aquariums, museums, historic sites, lighthouses, elegant theaters, concert halls, and fine dining. North Carolinians enjoy outdoor recreation utilizing numerous local bike paths, 34 state parks, and 14 national parks. National Park Service units include the Appalachian National Scenic Trail, the Blue Ridge Parkway, Cape Hatteras National Seashore, Cape Lookout National Seashore, Carl Sandburg Home National Historic Site at Flat Rock, Fort Raleigh National Historic Site at Manteo, Great Smoky Mountains National Park, Guilford Courthouse National Military Park in Greensboro, Moores Creek National Battlefield near Currie in Pender County, the Overmountain Victory National Historic Trail, Old Salem National Historic Site in Winston-Salem, the Trail of Tears National Historic Trail, and Wright Brothers National Memorial in Kill Devil Hills. National Forests include Uwharrie National Forest in central North Carolina, Croatan National Forest in Eastern North Carolina, Pisgah National Forest in the western mountains, and Nantahala National Forest in the southwestern part of the state. North Carolina has traditions in art, music, and cuisine. The nonprofit arts and culture industry generates $1.2 billion in direct economic activity in North Carolina, supporting more than 43,600 full-time equivalent jobs and generating $119 million in revenue for local governments and the state of North Carolina. North Carolina established the North Carolina Museum of Art as the first major museum collection in the country to be formed by state legislation and funding and continues to bring millions into the NC economy. 
One of the more famous arts communities in the state is Seagrove, the handmade-pottery capital of the U.S., where artisans create handcrafted pottery inspired by the same traditions that began in this community more than 200 years ago. North Carolina boasts a large number of noteworthy jazz musicians, some among the most important in the history of the genre. These include: John Coltrane (Hamlet, High Point); Thelonious Monk (Rocky Mount); Billy Taylor (Greenville); Woody Shaw (Laurinburg); Lou Donaldson (Durham); Max Roach (Newland); Tal Farlow (Greensboro); Albert, Jimmy and Percy Heath (Wilmington); Nina Simone (Tryon); and Billy Strayhorn (Hillsborough). North Carolina is also famous for its tradition of old-time music, and many recordings were made in the early 20th century by folk-song collector Bascom Lamar Lunsford. Musicians such as the North Carolina Ramblers helped solidify the sound of country music in the late 1920s, while the influential bluegrass musician Doc Watson also hailed from North Carolina. Both North and South Carolina are hotbeds for traditional rural blues, especially the style known as the Piedmont blues. Ben Folds Five originated in Winston-Salem, and Ben Folds still records and resides in Chapel Hill. The British band Pink Floyd is named, in part, after Chapel Hill bluesman Floyd Council. The Research Triangle area has long been a well-known center for folk, rock, metal, jazz and punk. James Taylor grew up around Chapel Hill, and his 1968 song "Carolina in My Mind" has been called an unofficial anthem for the state. Other famous musicians from North Carolina include J. Cole, Shirley Caesar, Roberta Flack, Clyde McPhatter, Nnenna Freelon, Warren Haynes, Jimmy Herring, Michael Houser, Eric Church, Future Islands, Randy Travis, Ryan Adams, Ronnie Milsap, Anthony Hamilton, The Avett Brothers and Luke Combs. Metal and punk acts such as Corrosion of Conformity, Between the Buried and Me, and Nightmare Sonata are native to North Carolina. 
EDM producer Porter Robinson hails from Chapel Hill. North Carolina is the home of more "American Idol" finalists than any other state: Clay Aiken (season two), Fantasia Barrino (season three), Chris Daughtry (season five), Kellie Pickler (season five), Bucky Covington (season five), Anoop Desai (season eight), Scotty McCreery (season ten), and Caleb Johnson (season thirteen). North Carolina also has the most "American Idol" winners with Barrino, McCreery, and Johnson. In the mountains, the Brevard Music Center hosts choral, operatic, orchestral, and solo performances during its annual summer schedule. North Carolina has five professional opera companies: Opera Carolina in Charlotte, NC Opera in Raleigh, Greensboro Opera in Greensboro, Piedmont Opera in Winston-Salem, and Asheville Lyric Opera in Asheville. Academic conservatories and universities also produce fully staged operas, such as the A. J. Fletcher Opera Institute of the University of North Carolina School of the Arts in Winston-Salem, the Department of Music of the University of North Carolina at Chapel Hill, and UNC Greensboro. Among others, there are three high-level symphonic orchestras: the NC Symphony in Raleigh, the Charlotte Symphony, and the Winston-Salem Symphony. The NC Symphony is home to the North Carolina Master Chorale. The Carolina Ballet is headquartered in Raleigh, and there is also the Charlotte Ballet. The state boasts three performing arts centers: DPAC in Durham, the Duke Energy Center for the Performing Arts in Raleigh, and the Blumenthal Performing Arts Center in Charlotte. They feature concerts, operas, recitals, and traveling Broadway musicals. Hip-hop artists DaBaby and J. Cole hail from North Carolina. North Carolina has a variety of shopping choices. SouthPark Mall in Charlotte is currently the largest in the Carolinas, with almost 2.0 million square feet. 
Other major malls in Charlotte include Northlake Mall and Carolina Place Mall in the nearby suburb of Pineville. Other major malls throughout the state include Hanes Mall and the Thruway Center in Winston-Salem; Crabtree Valley Mall, North Hills Mall, and Triangle Town Center in Raleigh; Friendly Center and Four Seasons Town Centre in Greensboro; Oak Hollow Mall in High Point; Concord Mills in Concord; Valley Hills Mall in Hickory; Cross Creek Mall in Fayetteville; The Streets at Southpoint and Northgate Mall in Durham; Independence Mall in Wilmington; and Tanger Outlets in Charlotte, Nags Head, Blowing Rock, and Mebane. A culinary staple of North Carolina is pork barbecue. There are strong regional differences and rivalries over the sauces and methods used in making the barbecue. The common trend across Western North Carolina is the use of premium-grade Boston butt. Western North Carolina pork barbecue uses a tomato-based sauce, and only the pork shoulder (dark meat) is used. Western North Carolina barbecue is commonly referred to as Lexington barbecue after the Piedmont Triad town of Lexington, home of the Lexington Barbecue Festival, which attracts more than 100,000 visitors each October. Eastern North Carolina pork barbecue uses a vinegar-and-red-pepper-based sauce and the "whole hog" is cooked, thus integrating both white and dark meat. Krispy Kreme, an international chain of doughnut stores, was started in North Carolina; the company's headquarters are in Winston-Salem. Pepsi-Cola was first produced in 1898 in New Bern. A regional soft drink, Cheerwine, was created and is still based in the city of Salisbury. Despite its name, the hot sauce Texas Pete was created in North Carolina; its headquarters are also in Winston-Salem. The Hardee's fast-food chain was started in Rocky Mount. 
Another fast-food chain, Bojangles', was started in Charlotte, and has its corporate headquarters there. A popular North Carolina restaurant chain is Golden Corral. Started in 1973, the chain was founded in Fayetteville, with headquarters located in Raleigh. Popular pickle brand Mount Olive Pickle Company was founded in Mount Olive in 1926. Fast casual burger chain Hwy 55 Burgers, Shakes & Fries also makes its home in Mount Olive. Cook Out, a popular fast-food chain featuring burgers, hot dogs, and milkshakes in a wide variety of flavors, was founded in Greensboro in 1989 and has begun expanding outside of North Carolina. In 2013, Southern Living named Durham–Chapel Hill the South's "Tastiest City". Over the last decade, North Carolina has become a cultural epicenter and haven for internationally prize-winning wine (Noni Bacca Winery), internationally prized cheeses (Ashe County), truffles (Garland Truffles), and beer making, as tobacco land has been converted to vineyards and state laws regulating alcohol content in beer allowed a jump in ABV from 6% to 15%. The Yadkin Valley in particular has become a strengthening market for grape production, while Asheville recently won recognition as 'Beer City USA'. Asheville boasts the largest number of breweries per capita of any city in the United States. Recognized and marketed brands of beer in North Carolina include Highland Brewing, Duck Rabbit Brewery, Mother Earth Brewery, Weeping Radish Brewery, Big Boss Brewing, Foothills Brewing, Carolina Brewing Company, Lonerider Brewing, and White Rabbit Brewing Company. North Carolina has large grazing areas for beef and dairy cattle. Truck farms can be found in North Carolina. A truck farm is a small farm where fruits and vegetables are grown to be sold at local markets. 
The state's shipping, commercial fishing, and lumber industries are important to its economy. Service industries, including education, health care, private research, and retail trade, are also important. Research Triangle Park, a large industrial complex located in the Raleigh-Durham area, is one of the major centers in the country for electronics and medical research. Tobacco was one of the first major industries to develop after the Civil War. Many farmers grew some tobacco, and the invention of the cigarette made the product especially popular. Winston-Salem is the birthplace of R. J. Reynolds Tobacco Company (RJR), founded by R. J. Reynolds in 1874 as one of 16 tobacco companies in the town. By 1914 it was selling 425 million packs of Camels a year. Today it is the second-largest tobacco company in the U.S. (behind Altria Group). RJR is an indirect wholly owned subsidiary of Reynolds American Inc., which in turn is 42% owned by British American Tobacco. Several ships have been named after the state. Most famous is the USS "North Carolina", a World War II battleship. The ship served in several battles against the forces of Imperial Japan in the Pacific theater during the war. Now decommissioned, it is part of the USS "North Carolina" Battleship Memorial in Wilmington. Another USS "North Carolina", a nuclear attack submarine, was commissioned in Wilmington, North Carolina, on May 3, 2008. The state maintains a group of protected areas known as the North Carolina State Park System, which is managed by the North Carolina Division of Parks & Recreation (NCDPR), an agency of the North Carolina Department of Environment and Natural Resources (NCDENR). Fort Bragg, near Fayetteville and Southern Pines, is a large and comprehensive military base and is the headquarters of the XVIII Airborne Corps, 82nd Airborne Division, and the U.S. Army Special Operations Command. Serving as the air wing for Fort Bragg is Pope Field, also located near Fayetteville. 
Located in Jacksonville, Marine Corps Base Camp Lejeune, combined with nearby bases Marine Corps Air Station (MCAS) Cherry Point, MCAS New River, Camp Geiger, Camp Johnson, Stone Bay and Courthouse Bay, makes up the largest concentration of Marines and sailors in the world. MCAS Cherry Point is home of the 2nd Marine Aircraft Wing. Located in Goldsboro, Seymour Johnson Air Force Base is home of the 4th Fighter Wing and 916th Air Refueling Wing. One of the busiest air stations in the United States Coast Guard is located at the Coast Guard Air Station in Elizabeth City. Also stationed in North Carolina is the Military Ocean Terminal Sunny Point in Southport. On January 24, 1961, a B-52G broke up in midair and crashed after suffering a severe fuel loss, near Goldsboro, dropping two nuclear bombs in the process without detonation. In 2013, it was revealed that three safety mechanisms on one bomb had failed, leaving just one low-voltage switch preventing detonation.
North Dakota North Dakota is a U.S. state in the midwestern and northern regions of the United States. It is the nineteenth largest in area, the fourth smallest by population, and the fourth most sparsely populated of the 50 states. North Dakota was admitted to the Union on November 2, 1889, along with its neighboring state, South Dakota. It was either the 39th or 40th state admitted to the union. Before signing the statehood papers, President Benjamin Harrison shuffled the papers so that no one could tell which became a state first. Its capital is Bismarck, and its largest city is Fargo. In the 21st century, North Dakota's natural resources have played a major role in its economic performance, particularly with the oil extraction from the Bakken formation, which lies beneath the northwestern part of the state. Such development has led to population growth and reduced unemployment, giving North Dakota the second-lowest unemployment rate in the nation (after Hawaii). North Dakota contains the tallest man-made structure in the Western Hemisphere, the KVLY-TV mast. North Dakota is located in the Upper Midwest region of the United States. It lies at the center of the North American continent and borders Canada to the north. The geographic center of North America is near the town of Rugby. Bismarck is the capital of North Dakota, and Fargo is the largest city. Soil is North Dakota's most precious resource. It is the base of the state's great agricultural wealth. North Dakota also has enormous mineral resources. These mineral resources include billions of tons of lignite coal. In addition, North Dakota has large oil reserves. Petroleum was discovered in the state in 1951 and quickly became one of North Dakota's most valuable mineral resources. In the early 2000s, the emergence of hydraulic fracturing technologies enabled mining companies to extract huge amounts of oil from the Bakken shale rock formation in the western part of the state. 
North Dakota's economy is based more heavily on farming than the economies of most other states. Many North Dakota factories process farm products or manufacture farm equipment. Many of the state's merchants also rely on agriculture. Farms and ranches cover nearly all of North Dakota. They stretch from the flat Red River Valley in the east, across rolling plains, to the rugged Badlands in the west. The chief crop, wheat, is grown in nearly every county. North Dakota harvests more than 90 percent of the nation's canola and flaxseed. It is also the country's top producer of barley and sunflower seeds and a leader in the production of beans, honey, lentils, oats, peas, and sugar beets. Few white settlers came to the North Dakota region before the 1870s because railroads had not yet entered the area. During the early 1870s, the Northern Pacific Railroad began to push across the Dakota Territory. Large-scale farming also began during the 1870s. Eastern corporations and some families established huge wheat farms covering large areas of land in the Red River Valley. The farms made such enormous profits they were called bonanza farms. White settlers, attracted by the success of the bonanza farms, flocked to North Dakota, rapidly increasing the territory's population. In 1870, North Dakota had 2,405 people. By 1890, the population had grown to 190,983. North Dakota was named for the Sioux people who once lived in the territory. The Sioux called themselves Dakota or Lakota, meaning allies or friends. One of North Dakota's nicknames is the Peace Garden State. This nickname honors the International Peace Garden, which lies on the state's border with Manitoba, Canada. North Dakota is also called the Flickertail State because of the many flickertail ground squirrels that live in the central part of the state. North Dakota is in the U.S. region known as the Great Plains. The state shares the Red River of the North with Minnesota to the east. 
South Dakota is to the south, Montana is to the west, and the Canadian provinces of Saskatchewan and Manitoba are to the north. North Dakota is near the middle of North America with a stone marker in Rugby, North Dakota, marking the "Geographic Center of the North American Continent". With an area of , North Dakota is the 19th largest state. The western half of the state consists of the hilly Great Plains as well as the northern part of the Badlands, which are to the west of the Missouri River. The state's high point, White Butte at , and Theodore Roosevelt National Park are in the Badlands. The region is abundant in fossil fuels including natural gas, crude oil and lignite coal. The Missouri River forms Lake Sakakawea, the third largest artificial lake in the United States, behind the Garrison Dam. The central region of the state is divided into the Drift Prairie and the Missouri Plateau. The eastern part of the state consists of the flat Red River Valley, the bottom of glacial Lake Agassiz. Its fertile soil, drained by the meandering Red River flowing northward into Lake Winnipeg, supports a large agriculture industry. Devils Lake, the largest natural lake in the state, is also found in the east. Eastern North Dakota is overall flat; however, there are significant hills and buttes in western North Dakota. Most of the state is covered in grassland; crops cover most of eastern North Dakota but become increasingly sparse in the center and farther west. Natural trees in North Dakota are usually found where there is good drainage, such as the ravines and valleys near the Pembina Gorge and Killdeer Mountains, the Turtle Mountains, the hills around Devils Lake, in the dunes area of McHenry County in central North Dakota, and along the Sheyenne Valley slopes and the Sheyenne delta. This diverse terrain supports nearly 2,000 species of plants. North Dakota has a continental climate with warm summers and cold winters. 
The temperature differences are significant because of the state's far inland position near the center of the Northern Hemisphere, roughly equidistant from the North Pole and the Equator. Native American peoples lived in what is now North Dakota for thousands of years before the coming of Europeans. The known tribes included the Mandan people (from around the 11th century), while the first Hidatsa group arrived a few hundred years later. They both assembled in villages on tributaries of the Missouri River in what would become west-central North Dakota. Crow Indians traveled the plains from the west to visit and trade with the related Hidatsas after the split between them, probably in the 17th century. Later came divisions of the Dakota people: the Lakota, the Santee and the Yanktonai. The Assiniboine and the Plains Cree undertook southward journeys to the village Indians, either for trade or for war. The Shoshone Indians in present-day Wyoming and Montana may have carried out attacks on Indian enemies as far east as the Missouri. A group of Cheyennes lived in a village of earth lodges at the lower Sheyenne River (Biesterfeldt Site) for decades in the 18th century. Due to attacks by Crees, Assiniboines and Chippewas armed with firearms, they left the area around 1780 and crossed the Missouri some time after. A band of Sutaio Indians lived east of the Missouri River and met the uprooted Cheyennes before the end of the century. They soon followed the Cheyennes across the Missouri and lived among them south of the Cannonball River. Eventually, the Cheyenne and the Sutaio became one tribe and turned to mounted buffalo hunting, with ranges mainly outside North Dakota. Before the middle of the 19th century, the Arikara entered the future state from the south and joined the Mandan and Hidatsa. Over time, a number of tribes entered into treaties with the United States. Many of the treaties defined the territory of a specific tribe.
The first European to reach the area was the French-Canadian trader Pierre Gaultier, sieur de La Vérendrye, who led an exploration and trading party to the Mandan villages in 1738, guided by Assiniboine Indians. From 1762 to 1802, the region formed part of Spanish Louisiana. European Americans settled in Dakota Territory only sparsely until the late 19th century, when railroads opened up the region. With the advantage of land grants, the railroads vigorously marketed their properties, extolling the region as ideal for agriculture. Congress passed an omnibus bill for statehood for North Dakota, South Dakota, Montana, and Washington, titled the Enabling Act of 1889, on February 22, 1889, during the administration of President Grover Cleveland. His successor, Benjamin Harrison, signed the proclamations formally admitting North Dakota and South Dakota to the Union on November 2, 1889. The rivalry between the two new states presented a dilemma over which should be admitted first. Harrison directed Secretary of State James G. Blaine to shuffle the papers and obscure from him which he was signing first. The actual order went unrecorded, so no one knows which of the Dakotas was admitted first. However, since "North Dakota" alphabetically appears before "South Dakota", its proclamation was published first in the Statutes at Large. Unrest among wheat farmers, especially among Norwegian immigrants, led to a populist political movement centered in the Nonpartisan League ("NPL") around the time of World War I. The NPL ran candidates on the Republican ticket (but merged into the Democratic Party after World War II). It tried to insulate North Dakota from the power of out-of-state banks and corporations. In addition to founding the state-owned Bank of North Dakota and North Dakota Mill and Elevator (both still in existence), the NPL established a state-owned railroad line (later sold to the Soo Line Railroad).
Anti-corporate laws virtually prohibited a corporation or bank from owning title to land zoned as farmland. These laws, still in force today after being upheld by state and federal courts, make it almost impossible to foreclose on farmland, as even after foreclosure, the property title cannot be held by a bank or mortgage company. Furthermore, the Bank of North Dakota, having powers similar to a Federal Reserve branch bank, exercised its power to limit the issuance of subprime mortgages and their collateralization in the form of derivative instruments, and so prevented a collapse of housing prices within the state in the wake of the 2008 financial crisis. The original North Dakota State Capitol in Bismarck burned to the ground on December 28, 1930. It was replaced by a limestone-faced Art Deco skyscraper that still stands today. A round of federal investment and construction projects began in the 1950s, including the Garrison Dam and the Minot and Grand Forks Air Force bases. Western North Dakota saw a boom in oil exploration in the late 1970s and early 1980s, as rising petroleum prices made development profitable. This boom came to an end after petroleum prices declined. In recent years, the state has had lower rates of unemployment than the national average, and increased job and population growth. Much of the growth has been based on development of the Bakken oil fields in the western part of the state. Estimates of the remaining amount of oil in the area vary, with some suggesting over 100 years' worth. For decades, North Dakota's annual murder and violent crime rates were regularly the lowest in the United States. In recent years, however, while still below the national average, crime has risen sharply. In 2016, the violent crime rate was three times as high as in 2004, with the rise occurring mostly in the late 2000s, coinciding with the oil boom era. This happened at a time when the national violent crime rate declined slightly.
Workers in the oil boom towns have been blamed for much of the increase. The United States Census Bureau estimates North Dakota's population was 762,062 on July 1, 2019, a 13.30% increase since the 2010 United States Census. This makes North Dakota the U.S. state with the largest percentage population growth since 2011. It is the fourth least populous state in the country; only Alaska, Vermont, and Wyoming have fewer residents. From fewer than 2,000 people in 1870, North Dakota's population grew to near 680,000 by 1930. Growth then slowed, and the population has fluctuated slightly over the past seven decades, hitting a low of 617,761 in the 1970 census, with 642,200 in the 2000 census. Except for Native Americans, North Dakota's population includes a smaller percentage of minorities than the nation as a whole. As of 2011, 20.7% of North Dakota's population younger than age 1 were minorities. The center of population of North Dakota is in Wells County, near Sykeston. Throughout the latter half of the nineteenth century and into the early twentieth century, North Dakota, along with most of the Midwest, experienced a mass influx of newcomers from both the eastern United States and Europe. North Dakota was a popular destination for immigrant farmers and general laborers and their families, mostly from Norway, Iceland, Sweden, Germany and the United Kingdom. Much of this settlement gravitated toward the western side of the Red River Valley, as was also the case in South Dakota and Minnesota. This area is well known for its fertile lands. By the outbreak of the First World War, this was among North America's richest farming regions. A period of higher rainfall then ended, and many settlers were not successful in the drier conditions. Many family plots were too small to farm successfully.
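As a quick sanity check on the census figures above, the cited 13.30% growth can be reproduced from the 2010 census count. A minimal Python sketch; note that the 2010 figure of 672,591 is not stated in the text above and is taken from the published 2010 U.S. Census:

```python
# Verify the cited 13.30% growth from the 2010 census to the
# July 1, 2019 estimate. The 2010 count (672,591) is drawn from
# the published 2010 U.S. Census, not from the article text.
POP_2010 = 672_591
POP_2019_EST = 762_062

growth_pct = (POP_2019_EST - POP_2010) / POP_2010 * 100
print(f"{growth_pct:.2f}%")  # prints 13.30%
```

The computed value agrees with the Census Bureau's stated 13.30% increase.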
From the 1930s until the end of the 20th century, North Dakota's population gradually declined, interrupted by a couple of brief increases. Young adults with university degrees were particularly likely to leave the state. With the advancing mechanization of agriculture, and environmental conditions requiring larger landholdings for successful farming, subsistence farming proved too risky for families. Many people moved to urban areas for jobs. Since the late 20th century, one of the major causes of migration from North Dakota has been the lack of skilled jobs for college graduates. Expansion of economic development programs has been urged to create skilled and high-tech jobs, but the effectiveness of such programs has been open to debate. During the first decade of the 21st century, the population increased in large part because of jobs in the oil industry related to development of tight oil (shale oil) fields. Elsewhere, the Native American population has increased as some reservations have attracted people back from urban areas. North Dakota is proportionally one of the top resettlement locations for refugees. According to the U.S. Office of Refugee Resettlement, in 2013–2014 "more than 68 refugees" per 100,000 North Dakotans were settled in the state. In fiscal year 2014, 582 refugees settled in the state. Fargo Mayor Mahoney said that North Dakota's accepting the most refugees per capita should be celebrated, given the benefits they bring to the state. In 2015, Lutheran Social Services of North Dakota, the state's only resettlement agency, was "awarded $458,090 in federal funding to improve refugee services". Immigration from outside the United States resulted in a net increase of 3,323 people, and migration within the country produced a net loss of 21,110 people. Of the residents of North Dakota, 69.8% were born in North Dakota, 27.2% were born in a different state, 0.6% were born in Puerto Rico, U.S.
island areas, or born abroad to American parent(s), and 2.4% were born in another country. The age and gender distributions approximate the national average. According to the 2010 Census, the racial and ethnic composition of North Dakota was as follows: Throughout the mid-19th century, Dakota Territory was still dominated by Native Americans. Warfare and disease reduced their population at the same time that Europeans and Americans were settling in the region. In the 21st century, most North Dakotans are of Northern European descent. As of 2009, the seven largest European ancestry groups in North Dakota were as follows: North Dakota has the most churches per capita of any state. Additionally, North Dakota has the highest percentage of church-going population of any state. A 2001 survey indicated 35% of North Dakota's population was Lutheran, and 30% was Catholic. Other religious groups represented were Methodists (7%), Baptists (6%), the Assemblies of God (3%), Presbyterians (1.27%), and Jehovah's Witnesses (1%). Christians with unstated or other denominational affiliations, including other Protestants and The Church of Jesus Christ of Latter-day Saints (LDS Church), totaled 3%, bringing the total Christian population to 86%. There were an estimated 920 Muslims and 730 Jews in the state in 2000. Three percent of respondents answered "no religion" on the survey, and 6% declined to answer. The largest church bodies by number of adherents in 2010 were the Roman Catholic Church with 167,349; the Evangelical Lutheran Church in America with 163,209; and the Lutheran Church–Missouri Synod with 22,003. In 2010, 94.86% (584,496) of North Dakotans over 5 years old spoke English as their primary language, while 5.14% (31,684) spoke a language other than English. 1.39% (8,593) spoke German, 1.37% (8,432) spoke Spanish, and 0.30% (1,847) spoke Norwegian.
Other languages spoken included Serbo-Croatian (0.19%), Chinese and Japanese (both 0.15%), and Native American languages and French (both 0.13%). In 2000, 2.5% of the population spoke German in addition to English, reflecting early 20th century immigration. In the 21st century, North Dakota has an increasing population of Native Americans, who in 2010 made up 5.44% of the population. By the early 19th century the territory was dominated by Siouan-speaking peoples, whose territory stretched west from the Great Lakes area. The word "Dakota" is a Sioux (Lakota/Dakota) word meaning "allies" or "friends". The primary historic tribal nations in or around North Dakota are the Lakota and the Dakota ("The Great Sioux Nation" or "Oceti Sakowin", meaning the seven council fires), the Blackfoot, the Cheyenne, the Chippewa (known as Ojibwe in Canada), and the Mandan. The federally recognized tribes have Indian reservations in the state. Social gatherings known as "powwows" (or wacipis in Lakota/Dakota) continue to be an important part of Native American culture and are held regularly throughout the state. Throughout Native American history, powwows were held, usually in the spring, to rejoice at the beginning of new life and the end of the winter cold. These events brought Native American tribes together for singing and dancing and allowed them to meet with old friends and acquaintances, as well as to make new ones. Many powwows also held religious significance for some tribes. Today, powwows are still a part of Native American culture and are attended by Natives and non-Natives alike. In North Dakota, the United Tribes International Powwow, held each September in the capital city of Bismarck, is one of the largest powwows in the United States. A powwow is an occasion for parades and Native American dancers in regalia, with many dancing styles presented.
It is traditional for male dancers to wear regalia decorated with beads, quills, and eagle feathers; male grass dancers wear colorful fringe regalia, and male fancy dancers wear brightly colored feathers. Female dancers dance much more subtly than the male dancers. Fancy female dancers wear cloth, beaded moccasins, and jewelry, while the jingle dress dancer wears a dress made of metal cones. Inter-tribal dances during the powwow allow everyone, even spectators, to take part in the dancing. Around 1870, many European immigrants from Norway settled in North Dakota's northeastern corner, especially near the Red River. Icelanders also arrived from Canada. Pembina was a town of many Norwegians when it was founded; they worked on family farms. They started Lutheran churches and schools, greatly outnumbering other denominations in the area. This group has unique foods such as "lefse" and "lutefisk". The continent's largest Scandinavian event, "Norsk Høstfest", is celebrated each September in Minot's North Dakota State Fair Center; the festival features art, architecture, and cultural artifacts from all five Nordic countries. The Icelandic State Park in Pembina County and an annual Icelandic festival reflect immigrants from that country, who are also descended from Scandinavians. Old World folk customs have persisted for decades in North Dakota, with the revival of techniques in weaving, silver crafting, and wood carving. Traditional turf-roof houses are displayed in parks; this style originated in Iceland. A stave church is a landmark in Minot. Ethnic Norwegians constitute nearly one-third (32.3%) of Minot's total population and 30.8% of North Dakota's total population.
Ethnic Germans who had settled in Russia for several generations since the reign of Catherine the Great grew dissatisfied in the nineteenth century because of economic problems and the revocation of religious freedoms for Mennonites and Hutterites, in particular the revocation of exemption from military service in 1871. Most Mennonites and Hutterites migrated to America in the late 1870s. By 1900, about 100,000 had immigrated to the U.S., settling primarily in North Dakota, South Dakota, Kansas, and Nebraska. The south-central part of North Dakota became known as "the German-Russian triangle". By 1910, about 60,000 ethnic Germans from Russia lived in central North Dakota. They were Lutherans, Mennonites, Hutterites and Roman Catholics who had kept most of the German customs of the time when their ancestors immigrated to Russia. They were committed to agriculture. Traditional iron cemetery grave markers are a famous art form practiced by ethnic Germans. North Dakota's major fine art museums and venues include the Chester Fritz Auditorium, Empire Arts Center, the Fargo Theatre, North Dakota Museum of Art, and the Plains Art Museum. The Bismarck-Mandan Symphony Orchestra, Fargo-Moorhead Symphony Orchestra, Greater Grand Forks Symphony Orchestra, Minot Symphony Orchestra and Great Plains Harmony Chorus are full-time professional and semi-professional musical ensembles that perform concerts and offer educational programs to the community. North Dakotan musicians of many genres include blues guitarist Jonny Lang, country music singer Lynn Anderson, jazz and traditional pop singer and songwriter Peggy Lee, big band leader Lawrence Welk, and pop singer Bobby Vee. The state is also home to indie rock musician June Panic (of Fargo, signed to Secretly Canadian). Until his death in July 2018, Ed Schultz was known around the country as the host of the progressive talk radio show "The Ed Schultz Show" and of "The Ed Show" on MSNBC.
Shadoe Stevens hosted "American Top 40" from 1988 to 1995. Josh Duhamel is an Emmy Award-winning actor known for his roles in "All My Children" and "Las Vegas". Nicole Linkletter and CariDee English were winning contestants of Cycles 5 and 7, respectively, of "America's Next Top Model". Kellan Lutz has appeared in movies such as "Stick It", "Accepted", "Prom Night", and "Twilight". Bismarck was home of the Dakota Wizards of the NBA Development League, and currently hosts the Bismarck Bucks of the Indoor Football League. The state has two NCAA Division I teams, the North Dakota Fighting Hawks and North Dakota State Bison, and two Division II teams, the Mary Marauders and Minot State Beavers. Fargo is home to the USHL ice hockey team the Fargo Force. The North Dakota High School Activities Association features more than 25,000 participants. Outdoor activities such as hunting and fishing are hobbies for many North Dakotans. Ice fishing, skiing, and snowmobiling are also popular during the winter months. Many residents of North Dakota own or visit cabins along lakes. Popular sport fish include walleye, perch, and northern pike. The western terminus of the North Country National Scenic Trail is on Lake Sakakawea, where it abuts the Lewis and Clark Trail. Agriculture is North Dakota's largest industry, although petroleum, food processing, and technology are also major industries. The state's economic growth rate is about 4.1%. According to the Bureau of Economic Analysis, the economy of North Dakota had a gross domestic product of $55.180 billion in the second quarter of 2018. The per capita income was $34,256, as measured from 2013 to 2017 by the United States Department of Commerce. The three-year median household income from 2013–2017 was $61,285. According to Gallup data, North Dakota led the U.S. in job creation in 2013 and has done so since 2009. The state has a Job Creation Index score of 40, nearly 10 points ahead of its nearest competitors.
North Dakota has added 56,600 private-sector jobs since 2011, an annual growth rate of 7.32 percent. According to statistics released on March 25, 2014 by the Bureau of Economic Analysis, North Dakota's personal income grew 7.6 percent in 2013 to $41.3 billion. The state has recorded the highest personal income growth among all states for the sixth time since 2007. North Dakota's personal income growth is tied to various private business sectors such as agriculture, energy development, and construction. Just over 21% of North Dakota's total 2013 gross domestic product (GDP) of $49.77 billion comes from natural resources and mining. North Dakota is the only state with a state-owned bank, the Bank of North Dakota in Bismarck, and a state-owned flour mill, the North Dakota Mill and Elevator in Grand Forks. These were established by the NPL before World War II. As of 2012, Fargo is home to the second-largest campus of Microsoft, with 1,700 employees, and Amazon.com employs several hundred in Grand Forks. The state's unemployment rate is among the lowest in the nation, at 2.4 percent; it has not reached five percent since 1987. At the end of 2010, the state's per capita income ranked 17th in the nation, up from 38th a decade earlier, the biggest increase of any state. The reduction in the unemployment rate and growth in per capita income are attributable to the oil boom in the state. Due to a combination of oil-related development and investment in technology and service industries, North Dakota has had a budget surplus every year since the 2008 market crash. Since 1976, the highest North Dakota's unemployment rate has reached is just 6.2%, recorded in 1983. Every U.S. state except neighboring South Dakota has had a higher unemployment rate during that period. North Dakota's earliest industries were fur trading and agriculture. Although less than 10% of the population is employed in the agricultural sector, it remains a major part of the state's economy.
With industrial-scale farming, it ranks 9th in the nation in the value of crops and 18th in total value of agricultural products sold. Large farms generate most of the crop output. The share of people in the state employed in agriculture is comparatively high; nationwide, only two to three percent of the population of the United States is directly employed in agriculture. North Dakota has about 90% of its land area in farms, and its cropland area is the third-largest in the nation. Between 2002 and 2007, total cropland increased by about a million acres (4,000 km2); North Dakota was the only state showing an increase. Over the same period, large areas were shifted into soybean and corn monoculture production, the largest such shift in the United States. Agriculturalists are concerned about too much monoculture, as it puts the economy at risk from insect infestations or crop diseases affecting a major crop. In addition, this development has adversely affected habitats of wildlife and birds, and the balance of the ecosystem. The state is the largest producer in the U.S. of many cereal grains, including barley (36% of the U.S. crop), durum wheat (58%), hard red spring wheat (48%), oats (17%), and combined wheat of all types (15%). It is the second leading producer of buckwheat (20%). Corn has become the state's largest crop, although it is only 2% of total U.S. production. The Corn Belt extends into North Dakota, though the state lies on the edge of the region rather than at its center. Corn yields are high in the southeastern part of the state and lower elsewhere. Most of the cereal grains are grown for livestock feed. The state is the leading producer of many oilseeds, including 92% of the U.S. canola crop, 94% of flax seed, 53% of sunflower seeds, 18% of safflower seeds, and 62% of mustard seed. Canola is suited to the cold winters and matures quickly. Processing of canola for oil produces canola meal as a by-product, a high-protein animal feed.
Soybeans are also an increasingly important crop, with additional acreage planted between 2002 and 2007. Soybeans are a major crop in the eastern part of the state, and cultivation is common in the southeast. Soybeans were not grown at all in North Dakota in the 1940s, but the crop has become especially common since 1998. In North Dakota soybeans have to mature fast because of the comparatively short growing season. Soybeans are grown for livestock feed. North Dakota is the second leading producer of sugar beets, which are grown mostly in the Red River Valley. The state is also the largest producer of honey, dry edible peas and beans, and lentils, and the third-largest producer of potatoes. The energy industry is a major contributor to the economy. North Dakota has both coal and oil reserves. Shale gas is also produced. Lignite coal reserves in western North Dakota are used to generate about 90% of the electricity consumed, and electricity is also exported to nearby states. North Dakota has the second-largest lignite coal production in the U.S. However, lignite is the lowest grade of coal; larger reserves of higher-grade coal (anthracite, bituminous and subbituminous coal) exist in other U.S. states. Oil was discovered near Tioga in 1951, and annual production had grown substantially by 1984. Recoverable oil reserves have jumped dramatically recently. The oil reserves of the Bakken Formation may be as much as 25 times larger than those in the Arctic National Wildlife Refuge. A report issued in April 2008 by the U.S. Geological Survey estimates the oil recoverable by then-current technology in the Bakken Formation to be about two orders of magnitude less than that upper figure. The northwestern part of the state is the center of the North Dakota oil boom. The Williston, Tioga, Stanley and Minot-Burlington communities are experiencing rapid growth that strains housing and local services.
The state is the second-largest oil producer in the U.S., with an average of 575,490 barrels per day. The Great Plains region, which includes North Dakota, has been referred to as "the Saudi Arabia of wind energy". Development of wind energy in North Dakota has been cost-effective because the state has large rural expanses and wind speeds that seldom go below 10 mph. North Dakota is considered the least visited state, owing in part to its lack of a single major tourist attraction. Nonetheless, tourism is North Dakota's third largest industry, contributing more than $3 billion to the state's economy annually. Outdoor attractions like the 144-mile Maah Daah Hey Trail and activities like fishing and hunting attract visitors. The state is known for the Lewis & Clark Trail and as the site of the Corps of Discovery's winter camp. Areas popular with visitors include Theodore Roosevelt National Park in the western part of the state. The park often receives more than 475,000 visitors a year. Regular events in the state that attract tourists include "Norsk Høstfest" in Minot, billed as North America's largest Scandinavian festival; the Medora Musical; and the North Dakota State Fair. The state also receives a significant number of visitors from the neighboring Canadian provinces of Manitoba and Saskatchewan, particularly when the exchange rate is favorable. Many international tourists now also come to visit the Oscar-Zero Missile Alert Facility. North Dakota has six level-II trauma centers, 44 hospitals, 52 rural health clinics, and 80 nursing homes. Major provider networks include Sanford, St. Alexius, Trinity, and Altru. Blue Cross Blue Shield of North Dakota is the largest medical insurer in the state. North Dakota expanded Medicaid in 2014, and its health insurance exchange is the federal site, HealthCare.gov. North Dakota law requires pharmacies, other than hospital dispensaries and pre-existing stores, to be majority-owned by pharmacists.
Voters rejected a proposal to change the law in 2014. The North Dakota Department of Emergency Services provides 24/7 communication and coordination for more than 50 agencies. In addition, "it administers federal disaster recovery programs and the Homeland Security Grant Program". In 2011, the Department selected Geo-Comm, Inc. "for the Statewide Seamless Base Map Project", which will facilitate identifying the locations of 9-1-1 callers and routing emergency calls based on location. In 1993, the state adopted the Burkle addressing system, numbering rural roads and buildings to aid in the delivery of emergency services. Transportation in North Dakota is overseen by the North Dakota Department of Transportation. The major Interstate highways are Interstate 29 and Interstate 94, which meet at Fargo; I-29 runs north to south along the eastern edge of the state, and I-94 bisects the state from east to west between Minnesota and Montana. A unique feature of the North Dakota Interstate Highway system is that virtually all of it is paved in concrete, not blacktop, because of the extreme weather conditions it must endure. BNSF and the Canadian Pacific Railway operate the state's largest rail systems. Many branch lines formerly used by BNSF and the Canadian Pacific Railway are now operated by the Dakota, Missouri Valley and Western Railroad and the Red River Valley and Western Railroad. North Dakota's principal airports are Hector International Airport (FAR) in Fargo, Grand Forks International Airport (GFK), Bismarck Municipal Airport (BIS), Minot International Airport (MOT) and Williston Basin International Airport (XWA) in Williston. Amtrak's Empire Builder runs through North Dakota, making stops at Fargo (2:13 am westbound, 3:35 am eastbound), Grand Forks (4:52 am westbound, 12:57 am eastbound), Minot (around 9 am westbound and around 9:30 pm eastbound), and four other stations.
It is the descendant of the famous line of the same name run by the Great Northern Railway, which was built by the tycoon James J. Hill and ran from St. Paul to Seattle. Intercity bus service is provided by Greyhound and Jefferson Lines. Public transit in North Dakota includes daily fixed-route bus systems in Fargo, Bismarck-Mandan, Grand Forks, and Minot, paratransit service in 57 communities, and multi-county rural transit systems. As with the federal government of the United States, political power in North Dakota state government is divided into three branches: executive, legislative, and judicial. The Constitution of North Dakota and the North Dakota Century Code form the formal law of the state; the "North Dakota Administrative Code" incorporates additional rules and policies of state agencies. The executive branch is headed by the elected governor. The current governor is Doug Burgum, a Republican who took office December 15, 2016, after his predecessor, Jack Dalrymple, chose not to seek reelection. The current Lieutenant Governor of North Dakota is Brent Sanford, who is also the President of the Senate. The offices of governor and lieutenant governor have four-year terms, which are next up for election in 2020. The governor has a cabinet consisting of appointed leaders of various state government agencies, called commissioners. The other elected constitutional offices are secretary of state, attorney general, state auditor, and state treasurer. The North Dakota Legislative Assembly is a bicameral body consisting of the Senate and the House of Representatives. The state is divided into 47 legislative districts, each with one senator and two representatives. Both senators and representatives are elected to four-year terms. The state's legal code is named the North Dakota Century Code. North Dakota's court system has four levels. Municipal courts serve the cities, and most cases start in the district courts, which are courts of general jurisdiction.
There are 42 district court judges in seven judicial districts. Appeals from the trial courts and challenges to certain governmental decisions are heard by the North Dakota Court of Appeals, consisting of three-judge panels. The five-justice North Dakota Supreme Court hears all appeals from the district courts and the Court of Appeals. Historically, North Dakota was populated by the Mandan, Hidatsa, Lakota, and Ojibwe, and later by the Sanish and Métis. Today, five federally recognized tribes within the boundaries of North Dakota have independent, sovereign relationships with the federal government and territorial reservations: North Dakota's United States Senators are John Hoeven (R) and Kevin Cramer (R). The state has one at-large congressional district, represented by Representative Kelly Armstrong (R). Federal court cases are heard in the United States District Court for the District of North Dakota, which holds court in Bismarck, Fargo, Grand Forks, and Minot. Appeals are heard by the Eighth Circuit Court of Appeals, based in St. Louis, Missouri. The major political parties in North Dakota are the Democratic-NPL and the Republican Party. The Constitution Party and the Libertarian Party are also organized parties in the state. At the state level, the governorship has been held by the Republican Party since 1992, along with a majority of the state legislature and statewide offices. Dem-NPL showings were strong in the 2000 governor's race and in the 2006 legislative elections, but the League has not had a major breakthrough since the administration of former governor George Sinner. The Republican Party presidential candidate usually carries the state; in 2004, George W. Bush won with 62.9% of the vote. Of all the Democratic presidential candidates since 1892, only Grover Cleveland (1892, one of three votes), Woodrow Wilson (1912 and 1916), Franklin D. Roosevelt (1932 and 1936), and Lyndon B.
Johnson (1964) received Electoral College votes from North Dakota. On the other hand, Dem-NPL candidates for North Dakota's federal Senate and House seats won every election between 1982 and 2008, and the state's federal delegation was entirely Democratic from 1987 to 2011. However, both of the current U.S. Senators, John Hoeven and Kevin Cramer, are Republicans, as is the sole House member, Kelly Armstrong. North Dakota has a slightly progressive income tax structure; the five brackets of state income tax rates are 1.1%, 2.04%, 2.27%, 2.64%, and 2.90% as of 2017. In 2005 North Dakota ranked 22nd highest by per capita state taxes. The sales tax in North Dakota is 5% for most items. The state allows municipalities to institute local sales taxes and special local taxes, such as the 1.75% supplemental sales tax in Grand Forks. Excise taxes are levied on the purchase price or market value of aircraft registered in North Dakota. The state imposes a use tax on items purchased elsewhere but used within North Dakota. Owners of real property in North Dakota pay property tax to their county, municipality, school district, and special taxing districts. The Tax Foundation ranks North Dakota as the state with the 20th most "business friendly" tax climate in the nation. Tax Freedom Day arrives on April 1, 10 days earlier than the national Tax Freedom Day. In 2006, North Dakota was the state with the lowest number of returns filed by taxpayers with an Adjusted Gross Income of over $1M—only 333. 56.54% of North Dakota's 762,062 people live in one of the top fifteen most populated cities. Fargo is the largest city in North Dakota and is the economic hub for the region. Bismarck, in south-central North Dakota along the banks of the Missouri River, has been North Dakota's capital city since 1883, first as capital of the Dakota Territory, and then as state capital since 1889. Minot is a city in northern North Dakota and is home of the North Dakota State Fair and Norsk Høstfest. 
A few miles west of Bismarck on the west side of the Missouri River, the city of Mandan was named for the Mandan Indians who inhabited the area at the time of the Lewis and Clark Expedition. New Salem is the site of the world's largest statue of a Holstein cow; the world's largest statue of a bison is in Jamestown. Grand Forks and Devils Lake are in scenic areas of North Dakota. West Fargo, the fifth largest city in North Dakota, is one of the fastest-growing cities, and was recognized as a Playful City USA by KaBOOM! in 2011. Williston is near the confluence of the Missouri River and the Yellowstone River near Montana. Medora in the North Dakota Badlands hosts the Medora Musical every summer and is the gateway to Theodore Roosevelt National Park. Fort Yates, along the Missouri River on the Standing Rock Indian Reservation, claims to host the final resting place of Hunkpapa Lakota leader Sitting Bull (Mobridge, South Dakota also claims his gravesite). The state has 11 public colleges and universities, five tribal community colleges, and four private schools. The largest institutions are North Dakota State University and the University of North Dakota. The higher education system consists of the following institutions: North Dakota University System (public institutions): Tribal institutions: Private institutions: "The Flickertail State" is one of North Dakota's nicknames and is derived from Richardson's ground squirrel ("Spermophilus richardsonii"), a very common animal in the region. The ground squirrel constantly flicks its tail in a distinctive manner. In 1953, legislation to make the ground squirrel the state emblem was voted down in the state legislature. The state has 10 daily newspapers, the largest being "The Forum of Fargo-Moorhead". Other weekly and monthly publications (most of which are fully supported by advertising) are also available. The most prominent of these is the alternative weekly "High Plains Reader". 
The state's oldest radio station, WDAY-AM, was launched on May 23, 1922. North Dakota's three major radio markets center around Fargo, Bismarck, and Grand Forks, though stations broadcast in every region of the state. Several new stations were built in Williston in the early 2010s. North Dakota has 34 AM and 88 FM radio stations. KFGO in Fargo has the largest audience. Broadcast television in North Dakota started on April 3, 1953, when KCJB-TV (now KXMC-TV) in Minot started operations. North Dakota's television media markets are Fargo-Grand Forks (117th largest nationally), covering the eastern half of the state, and Minot-Bismarck (152nd), covering the western half of the state. There are currently 31 full-power television stations, arranged into 10 networks, with 17 digital subchannels. Public broadcasting in North Dakota is provided by Prairie Public, with statewide television and radio networks affiliated with PBS and NPR. Public access television stations open to community programming are offered on cable systems in Bismarck, Dickinson, Fargo, and Jamestown.
https://en.wikipedia.org/wiki?curid=21651
Natural language processing Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. The history of natural language processing (NLP) generally started in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, which found that ten-year-long research had failed to fulfill the expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted until the late 1980s when the first statistical machine translation systems were developed. Some notably successful natural language processing systems developed in the 1960s were SHRDLU, a natural language system working in restricted "blocks worlds" with restricted vocabularies, and ELIZA, a simulation of a Rogerian psychotherapist, written by Joseph Weizenbaum between 1964 and 1966. Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction. When the "patient" exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?". 
During the 1970s, many programmers began to write "conceptual ontologies", which structured real-world information into computer-understandable data. Examples are MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert 1981). During this time, many chatterbots were written including PARRY, Racter, and Jabberwacky. Up to the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. This was due to both the steady increase in computational power (see Moore's law) and the gradual lessening of the dominance of Chomskyan theories of linguistics (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. However, part-of-speech tagging introduced the use of hidden Markov models to natural language processing, and increasingly, research has focused on statistical models, which make soft, probabilistic decisions based on attaching real-valued weights to the features making up the input data. The cache language models upon which many speech recognition systems now rely are examples of such statistical models. Such models are generally more robust when given unfamiliar input, especially input that contains errors (as is very common for real-world data), and produce more reliable results when integrated into a larger system comprising multiple subtasks. 
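The hidden Markov model approach to part-of-speech tagging described above can be illustrated with a short sketch. All tags, transition probabilities, and emission probabilities below are invented for demonstration; a real tagger estimates them from an annotated corpus. The Viterbi algorithm then recovers the most probable tag sequence.

```python
# Toy hidden Markov model for part-of-speech tagging.
# P(tag_i | tag_{i-1}) -- transition probabilities, with "<s>" as a start state.
transitions = {
    ("<s>", "DET"): 0.6, ("<s>", "NOUN"): 0.4,
    ("DET", "NOUN"): 0.9, ("DET", "DET"): 0.1,
    ("NOUN", "VERB"): 0.7, ("NOUN", "NOUN"): 0.3,
    ("VERB", "DET"): 0.5, ("VERB", "NOUN"): 0.5,
}

# P(word | tag) -- emission probabilities; unseen pairs get a tiny floor value.
emissions = {
    ("DET", "the"): 0.7, ("NOUN", "dog"): 0.4,
    ("NOUN", "barks"): 0.1, ("VERB", "barks"): 0.6,
}

TAGS = ["DET", "NOUN", "VERB"]

def viterbi(words):
    """Return the most probable tag sequence under the toy HMM."""
    # best[i][tag] = (probability of the best path ending in tag, backpointer)
    best = [{}]
    for tag in TAGS:
        p = transitions.get(("<s>", tag), 1e-6) * emissions.get((tag, words[0]), 1e-6)
        best[0][tag] = (p, None)
    for i, word in enumerate(words[1:], start=1):
        best.append({})
        for tag in TAGS:
            p, prev = max(
                (best[i - 1][pt][0] * transitions.get((pt, tag), 1e-6)
                 * emissions.get((tag, word), 1e-6), pt)
                for pt in TAGS)
            best[i][tag] = (p, prev)
    # Trace the highest-probability path backwards via the stored backpointers.
    last = max(TAGS, key=lambda t: best[-1][t][0])
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(best[i][path[-1]][1])
    return list(reversed(path))

print(viterbi(["the", "dog", "barks"]))  # -> ['DET', 'NOUN', 'VERB']
```

Note how the model makes a soft decision for the ambiguous word "barks" (which has both NOUN and VERB emissions): the chosen tag depends on the probability-weighted context rather than a hard rule.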
Many of the notable early successes occurred in the field of machine translation, due especially to work at IBM Research, where successively more complicated statistical models were developed. These systems were able to take advantage of existing multilingual textual corpora that had been produced by the Parliament of Canada and the European Union as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government. However, most other systems depended on corpora specifically developed for the tasks implemented by these systems, which was (and often continues to be) a major limitation in the success of these systems. As a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. Recent research has increasingly focused on unsupervised and semi-supervised learning algorithms. Such algorithms can learn from data that has not been hand-annotated with the desired answers or using a combination of annotated and non-annotated data. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of the World Wide Web), which can often make up for the inferior results if the algorithm used has a low enough time complexity to be practical. In the 2010s, representation learning and deep neural network-style machine learning methods became widespread in natural language processing, due in part to a flurry of results showing that such techniques can achieve state-of-the-art results in many natural language tasks, for example in language modeling, parsing, and many others. 
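One common semi-supervised scheme of the kind described above is self-training: a model fitted to the small labelled set repeatedly pseudo-labels the unlabelled examples it is most confident about and retrains on them. The nearest-centroid "classifier" and the one-dimensional data points below are invented purely for illustration.

```python
# Minimal self-training sketch: absorb confidently classified unlabelled points.

def centroid(points):
    return sum(points) / len(points)

def self_train(labelled, unlabelled, threshold=2.0, rounds=5):
    """labelled: dict label -> list of points; unlabelled: list of points."""
    unlabelled = list(unlabelled)
    for _ in range(rounds):
        centroids = {lab: centroid(pts) for lab, pts in labelled.items()}
        confident = []
        for x in unlabelled:
            # Distance from x to each class centroid, nearest first.
            dists = sorted((abs(x - c), lab) for lab, c in centroids.items())
            # "Confident" if the best class is clearly closer than the runner-up.
            if dists[1][0] - dists[0][0] >= threshold:
                confident.append((x, dists[0][1]))
        if not confident:
            break
        for x, lab in confident:  # pseudo-label and move into the labelled set
            labelled[lab].append(x)
            unlabelled.remove(x)
    return labelled

labelled = {"neg": [0.0, 1.0], "pos": [10.0, 11.0]}
result = self_train(labelled, [1.5, 2.0, 9.0, 9.5, 5.2])
print(sorted(result["neg"]), sorted(result["pos"]))
```

The genuinely ambiguous point (5.2) is never absorbed, mirroring the caveat above that learning from non-annotated data is harder and typically less accurate per example than supervised learning.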
Popular techniques include the use of word embeddings to capture semantic properties of words, and an increase in end-to-end learning of a higher-level task (e.g., question answering) instead of relying on a pipeline of separate intermediate tasks (e.g., part-of-speech tagging and dependency parsing). In some areas, this shift has entailed substantial changes in how NLP systems are designed, such that deep neural network-based approaches may be viewed as a new paradigm distinct from statistical natural language processing. For instance, the term "neural machine translation" (NMT) emphasizes the fact that deep learning-based approaches to machine translation directly learn sequence-to-sequence transformations, obviating the need for intermediate steps such as word alignment and language modeling that were used in statistical machine translation (SMT). In the early days, many language-processing systems were designed by hand-coding a set of rules, such as by writing grammars or devising heuristic rules for stemming. Since the so-called "statistical revolution" in the late 1980s and mid-1990s, much natural language processing research has relied heavily on machine learning. The machine-learning paradigm calls instead for using statistical inference to automatically learn such rules through the analysis of large "corpora" (the plural of "corpus"; a corpus is a set of documents, possibly with human or computer annotations) of typical real-world examples. Many different classes of machine-learning algorithms have been applied to natural-language-processing tasks. These algorithms take as input a large set of "features" that are generated from the input data. Some of the earliest-used algorithms, such as decision trees, produced systems of hard if-then rules similar to the systems of handwritten rules that were then common. 
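The word embeddings mentioned above represent each word as a dense vector so that semantically related words end up close together in the vector space. The tiny three-dimensional vectors below are invented for illustration; real embeddings (e.g. those learned by word2vec or GloVe) are trained on large corpora and typically have hundreds of dimensions.

```python
import math

# Hypothetical toy embeddings; real ones are learned, not hand-written.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.75, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words score higher than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```

Cosine similarity is the standard comparison for embeddings because it ignores vector length and measures only direction, which is where the learned semantic information lives.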
Increasingly, however, research has focused on statistical models, which make soft, probabilistic decisions based on attaching real-valued weights to each input feature. Such models have the advantage that they can express the relative certainty of many different possible answers rather than only one, producing more reliable results when such a model is included as a component of a larger system. Systems based on machine-learning algorithms have many advantages over hand-produced rules: The following is a list of some of the most commonly researched tasks in natural language processing. Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks. Though natural language processing tasks are closely intertwined, they are frequently subdivided into categories for convenience. A coarse division is given below. The first published work by an artificial intelligence, "1 the Road", was published in 2018; marketed as a novel, it contains sixty million words. Cognition refers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses." Cognitive science is the interdisciplinary, scientific study of the mind and its processes. Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics. George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics: The first defining aspect of this cognitive task of NLP is the application of the theory of conceptual metaphor, explained by Lakoff as "the understanding of one idea, in terms of another", which provides an idea of the intent of the author. For example, consider some of the meanings, in English, of the word "big". 
When used as a comparative, as in "That is a big tree," a likely inference of the author's intent is that the word "big" implies a statement about the tree being "physically large" in comparison to other trees or the author's experience. When used as a stative verb, as in "Tomorrow is a big day", a likely inference of the author's intent is that "big" is being used to imply "importance". These examples are not presented to be complete, but merely as indicators of the implication of the idea of conceptual metaphor. The intent behind other usages, as in "She is a big person", will remain somewhat ambiguous to a person and a cognitive NLP algorithm alike without additional information. This leads to the second defining aspect of this cognitive task of NLP, namely probabilistic context-free grammar (PCFG), which enables cognitive NLP algorithms to assign relative measures of meaning to a word, phrase, sentence or piece of text based on the information presented before and after the piece of text being analyzed. The mathematical equation for such algorithms is presented in , where:
RMM is the Relative Measure of Meaning;
token is any block of text, sentence, phrase or word;
N is the number of tokens being analyzed;
PMM is the Probable Measure of Meaning based on a corpus;
n is one less than the number of tokens being analyzed;
d is the location of the token along the sequence of n tokens;
PF is the Probability Function specific to a language.
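The probabilistic context-free grammars mentioned above attach a probability to each rewrite rule, and the probability of a complete parse is simply the product of the probabilities of the rules it uses. The following sketch illustrates this with an invented toy grammar; it is not an implementation of the RMM formulation above, only of the underlying PCFG idea.

```python
# Toy PCFG: each production (left-hand symbol, right-hand side) has a probability.
rules = {
    ("S",  ("NP", "VP")):    1.0,
    ("NP", ("she",)):        0.4,
    ("NP", ("the", "tree")): 0.6,
    ("VP", ("saw", "NP")):   1.0,
}

def tree_probability(tree):
    """tree = (symbol, child, child, ...); leaf children are plain strings."""
    symbol, *children = tree
    # The rule used at this node rewrites `symbol` into its children's heads.
    child_heads = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = rules[(symbol, child_heads)]
    for child in children:
        if not isinstance(child, str):
            p *= tree_probability(child)  # multiply in each subtree's rules
    return p

# Parse of "she saw the tree": S -> NP VP, NP -> she, VP -> saw NP, NP -> the tree
parse = ("S", ("NP", "she"), ("VP", "saw", ("NP", "the", "tree")))
print(round(tree_probability(parse), 4))  # 0.24, i.e. 1.0 * 0.4 * 1.0 * 0.6
```

When a sentence has several possible parses, a PCFG ranks them by exactly this product, which is how such grammars assign "relative measures of meaning" to competing analyses.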
https://en.wikipedia.org/wiki?curid=21652
New South Wales New South Wales (abbreviated as NSW) is a state on the east coast of Australia. It borders Queensland to the north, Victoria to the south, and South Australia to the west. Its coast borders the Coral and Tasman Seas to the east. The Australian Capital Territory is an enclave within the state. New South Wales' state capital is Sydney, which is also Australia's most populous city. The population of New South Wales is over 8 million, making it Australia's most populous state. Just under two-thirds of the state's population, 5.1 million, live in the Greater Sydney area. Inhabitants of New South Wales are referred to as "New South Welshmen". The Colony of New South Wales was founded as a British penal colony in 1788. It originally comprised more than half of the Australian mainland with its western boundary set at the 129th meridian east in 1825. The colony then also included the island territories of New Zealand, Van Diemen's Land, Lord Howe Island, and Norfolk Island. During the 19th century, most of the colony's area was detached to form separate British colonies that eventually became New Zealand and the various states and territories of Australia. However, the Swan River Colony was never administered as part of New South Wales. Lord Howe Island remains part of New South Wales, while Norfolk Island has become a federal territory, as have the areas now known as the Australian Capital Territory and the Jervis Bay Territory. The original inhabitants of New South Wales were the Aboriginal tribes who arrived in Australia about 40,000 to 60,000 years ago. Before European settlement there were an estimated 250,000 Aboriginal people in the region. The Wodi Wodi people are the original custodians of the Illawarra region of South Sydney. Speaking a variant of the Dharawal language, the Wodi Wodi peoples lived across a large stretch of land which was roughly bounded by what is now known as Campbelltown, Shoalhaven River and Moss Vale. 
The Bundjalung people are the original custodians of parts of the northern coastal areas. In 1770 Lieutenant James Cook was the first European to visit New South Wales when he conducted a survey along the unmapped eastern coast of the Dutch-named continent of New Holland, now Australia. In his original journal(s) covering the survey, in triplicate to satisfy Admiralty Orders, Cook first named the land "New Wales", after Wales. However, in the copy held by the Admiralty, he "revised the wording" to "New South Wales". The first British settlement was made by what is known in Australian history as the First Fleet; this was led by Captain Arthur Phillip, who served as governor of the settlement from his arrival in 1788 until 1792. After years of chaos and anarchy following the overthrow of Governor William Bligh, a new governor, Lieutenant-Colonel (later Major-General) Lachlan Macquarie, was sent from Britain to reform the settlement in 1809. During his time as governor, Macquarie commissioned the construction of roads, wharves, churches and public buildings, sent explorers out from Sydney and employed a planner to design the street layout of Sydney. Macquarie's legacy is still evident today. During the 19th century, large areas were successively separated to form the British colonies of Tasmania (proclaimed as a separate colony named Van Diemen's Land in 1825), South Australia (1836), Victoria (1851) and Queensland (1859). Responsible government was granted to the New South Wales colony in 1855. Following the Treaty of Waitangi, William Hobson declared British sovereignty over New Zealand in 1840. In 1841 it was separated from the Colony of New South Wales to form the new Colony of New Zealand. 
Charles Darwin visited Australia in January 1836 and in "The Voyage of the Beagle" (chapter 19 of the 11th edition) records his hesitations about and fascination with New South Wales, including his speculations about the geological origin and formation of the great valleys, the aboriginal population, the situation of the convicts, and the future prospects of the country. At the end of the 19th century, the movement toward federation between the Australian colonies gathered momentum. Conventions and forums involving colony leaders were held on a regular basis. Proponents of New South Wales as a free trade state were in dispute with the other leading colony Victoria, which had a protectionist economy. At this time customs posts were common on borders, even on the Murray River. Travelling from New South Wales to Victoria in those days was very difficult. Supporters of federation included the New South Wales premier Sir Henry Parkes whose 1889 Tenterfield Speech (given in Tenterfield) was pivotal in gathering support for New South Wales involvement. Edmund Barton, later to become Australia's first Prime Minister, was another strong advocate for federation and a meeting held in Corowa in 1893 drafted an initial constitution. In 1898 popular referenda on the proposed federation were held in New South Wales, Victoria, South Australia and Tasmania. All votes resulted in a majority in favour, but the New South Wales government under Premier George Reid (popularly known as "yes–no Reid" because of his constant changes of opinion on the issue) had set a requirement for a higher "yes" vote than just a simple majority which was not met. In 1899 further referenda were held in the same states as well as Queensland (but not Western Australia). All resulted in yes votes with majorities increased from the previous year. New South Wales met the conditions its government had set for a yes vote. 
As a compromise on the question of where the capital was to be located, an agreement was made that the site was to be within New South Wales but not closer than from Sydney, while the provisional capital would be Melbourne. Eventually the area that now forms the Australian Capital Territory was ceded by New South Wales when Canberra was selected. In the years after World War I, the high prices enjoyed during the war fell with the resumption of international trade. Farmers became increasingly discontented with the fixed prices paid by the compulsory marketing authorities set up as a wartime measure by the Hughes government. In 1919 the farmers formed the Country Party, led at national level by Earle Page, a doctor from Grafton, and at state level by Michael Bruxner, a small farmer from Tenterfield. The Great Depression, which began in 1929, ushered in a period of political and class conflict in New South Wales. The mass unemployment and collapse of commodity prices brought ruin to both city workers and to farmers. The beneficiary of the resultant discontent was not the Communist Party, which remained small and weak, but Jack Lang's Labor populism. Lang's second government was elected in November 1930 on a policy of repudiating New South Wales' debt to British bondholders and using the money instead to help the unemployed through public works. This was denounced as illegal by conservatives, and also by James Scullin's federal Labor government. The result was that Lang's supporters in the federal Caucus brought down Scullin's government, causing a second bitter split in the Labor Party. In May 1932 the Governor, Sir Philip Game, dismissed Lang's government. The subsequent election was won by the conservative opposition. By the outbreak of World War II in 1939, the differences between New South Wales and the other states that had emerged in the 19th century had faded as a result of federation and economic development behind a wall of protective tariffs. 
New South Wales continued to outstrip Victoria as the centre of industry, and increasingly of finance and trade as well. Labor returned to office under the moderate leadership of William McKell in 1941 and remained in power for 24 years. World War II saw another surge in industrial development to meet the needs of a war economy, and also the elimination of unemployment. Labor stayed in power until 1965. Towards the end of its term in power, it announced a plan for the construction of an opera/arts facility on Bennelong Point. The design competition was won by Jørn Utzon. Controversy over the cost of the Sydney Opera House became a political issue and was a factor in the eventual defeat of Labor in 1965 by the conservative Liberal Party led by Robert Askin. Askin remains a controversial figure, with supporters claiming him to be reformist, especially in terms of reshaping the NSW economy. Others, though, regard the Askin era as synonymous with corruption, with Askin the head of a network involving NSW police and SP bookmaking (Goot). In the late 1960s a secessionist movement in the New England region of the state led to a referendum on the issue. The new state would have consisted of much of northern NSW including Newcastle. The referendum was narrowly defeated, and there are no active or organised campaigns for new states in NSW. Askin's resignation in 1975 was followed by a number of short-lived premierships by Liberal Party leaders. When a general election came in 1976, the ALP under Neville Wran were returned to power. Wran was able to transform this narrow one-seat victory into landslide wins (known as Wranslide) in 1978 and 1981. After winning a comfortable though reduced majority in 1984, Wran resigned as premier and left parliament. His replacement, Barrie Unsworth, struggled to emerge from Wran's shadow and lost a 1988 election against a resurgent Liberal Party led by Nick Greiner. Unsworth was replaced as ALP leader by Bob Carr. 
Initially Greiner was a popular leader instigating reform such as the creation of the Independent Commission Against Corruption (ICAC). Greiner called a snap election in 1991 which the Liberals were expected to win. However the ALP polled extremely well and the Liberals lost their majority and needed the support of independents to retain power. Greiner was accused (by ICAC) of corrupt actions involving an allegation that a government position was offered to tempt an independent (who had defected from the Liberals) to resign his seat so that the Liberal party could regain it and shore up its numbers. Greiner resigned but was later cleared of corruption. His replacement as Liberal leader and Premier was John Fahey whose government secured Sydney the right to host the 2000 Summer Olympics. In the 1995 election, Fahey's government lost narrowly and the ALP under Bob Carr returned to power. Like Wran before him Carr was able to turn a narrow majority into landslide wins at the next two elections (1999 and 2003). During this era, NSW hosted the 2000 Sydney Olympics which were internationally regarded as very successful, and helped boost Carr's popularity. Carr surprised most people by resigning from office in 2005. He was replaced by Morris Iemma, who remained Premier after being re-elected in the March 2007 state election, until he was replaced by Nathan Rees in September 2008. Rees was subsequently replaced by Kristina Keneally in December 2009. Keneally's government was defeated at the 2011 state election and Barry O'Farrell became Premier on 28 March. On 17 April 2014 O'Farrell stood down as Premier after misleading an ICAC investigation concerning a gift of a bottle of wine. The Liberal Party then elected Treasurer Mike Baird as party leader and Premier. Baird resigned as Premier on 23 January 2017, and was replaced by Gladys Berejiklian. 
New South Wales is bordered on the north by Queensland, on the west by South Australia, on the south by Victoria and on the east by the Coral and Tasman Seas. The Australian Capital Territory and the Jervis Bay Territory form a separately administered entity that is bordered entirely by New South Wales. The state can be divided geographically into four areas. New South Wales's three largest cities, Sydney, Newcastle and Wollongong, lie near the centre of a narrow coastal strip extending from cool temperate areas on the far south coast to subtropical areas near the Queensland border. The Illawarra region is centred on the city of Wollongong, with the Shoalhaven, Eurobodalla and the Sapphire Coast to the south. The Central Coast lies between Sydney and Newcastle, with the Mid North Coast and Northern Rivers regions reaching northwards to the Queensland border. Tourism is important to the economies of coastal towns such as Coffs Harbour, Lismore, Nowra and Port Macquarie, but the region also produces seafood, beef, dairy, fruit, sugar cane and timber. The Great Dividing Range extends from Victoria in the south through New South Wales to Queensland, parallel to the narrow coastal plain. This area includes the Snowy Mountains, the Northern, Central and Southern Tablelands, the Southern Highlands and the South West Slopes. Whilst not particularly steep, many peaks of the range rise above , with the highest Mount Kosciuszko at . Skiing in Australia began in this region at Kiandra around 1861. The relatively short ski season underwrites the tourist industry in the Snowy Mountains. Agriculture, particularly the wool industry, is important throughout the highlands. Major centres include Armidale, Bathurst, Bowral, Goulburn, Inverell, Orange, Queanbeyan and Tamworth. There are numerous forests in New South Wales, with such tree species as Red Gum Eucalyptus and Crow Ash ("Flindersia australis"), being represented. 
Forest floors have a diverse set of understory shrubs and fungi. One of the widespread fungi is Witch's Butter ("Tremella mesenterica"). The western slopes and plains fill a significant portion of the state's area and have a much sparser population than areas nearer the coast. Agriculture is central to the economy of the western slopes, particularly the Riverina region and Murrumbidgee Irrigation Area in the state's south-west. Regional cities such as Albury, Dubbo, Griffith and Wagga Wagga and towns such as Deniliquin, Leeton and Parkes exist primarily to service these agricultural regions. The western slopes descend slowly to the western plains that comprise almost two-thirds of the state and are largely arid or semi-arid. The mining town of Broken Hill is the largest centre in this area. One possible definition of the centre for New South Wales is located west-north-west of Tottenham. The major part of New South Wales, west of the Great Dividing Range, has an arid to semi arid climate. Rainfall averages from a year throughout most of this region. Summer temperatures can be very hot, while winter nights can be quite cold in this region. Rainfall varies throughout the state. The far north-west receives the least, less than annually, while the east receives between of rain. The climate along the flat, coastal plain east of the range varies from oceanic in the south to humid subtropical in the northern half of the state, right above Wollongong. Rainfall is highest in this area; however, it still varies from around to as high as in the wettest areas, for example Dorrigo. Along the southern coast, rainfall is heaviest in winter due to cold fronts which move across southern Australia, while in the far north, around Lismore, rain is heaviest in summer from tropical systems and occasionally even cyclones. The climate in the southern half of the state is generally warm to hot in summer and cool in the winter. 
The seasons are more defined in the southern half of the state, especially as one moves inland towards South West Slopes, Central West and the Riverina region. The climate in the northeast region of the state, or the North Coast, bordering Queensland, is hot and humid in the summer and mild in winter. The Northern Tablelands, which are also on the north coast, have relatively mild summers and cold winters, due to their high elevation on the Great Dividing Range. Peaks along the Great Dividing Range vary from to over above sea level. Temperatures can be cool to cold in winter with frequent frosts and snowfall, and are rarely hot in summer due to the elevation. Lithgow has a climate typical of the range, as do the regional cities of Orange, Cooma, Oberon and Armidale. Such places fall within the subtropical highland ("Cwb") variety. Rainfall is moderate in this area, ranging from . Snowfall is common in the higher parts of the range, sometimes occurring as far north as the Queensland border. On the highest peaks of the Snowy Mountains, the climate can be subpolar oceanic and even alpine on the higher peaks with very cold temperatures and heavy snow. The Blue Mountains, Southern Tablelands and Central Tablelands, which are situated on the Great Dividing Range, have mild to warm summers and cold winters, although not as severe as those in the Snowy Mountains. The highest maximum temperature recorded was at Menindee in the west of the state on 10 January 1939. The lowest minimum temperature was at Charlotte Pass in the Snowy Mountains on 29 June 1994. This is also the lowest temperature recorded in the whole of Australia excluding the Antarctic Territory. The estimated population of New South Wales at the end of September 2018 was 8,023,700 people, representing approximately 31.96% of nationwide population. In June 2017 Sydney was home to almost two-thirds (65.3%) of the NSW population. 
At the 2016 census, the most commonly nominated ancestries were: At the 2016 census, there were 2,581,138 people living in New South Wales that were born overseas, accounting for 34.5% of the population. Only 45.4% of the population had both parents born in Australia. 2.9% of the population, or 216,176 people, identified as Indigenous Australians (Aboriginal Australians and Torres Strait Islanders) in 2016. 26.5% of people in New South Wales speak a language other than English at home with Mandarin (3.2%), Arabic (2.7%), Cantonese (1.9%), Vietnamese (1.4%) and Greek (1.1%) the most widely spoken. In the 2016 census, the most commonly reported religions and Christian denominations were Roman Catholicism (24.7%), Anglicanism (15.5%) and Islam (3.6%). 25.1% of the population described themselves as having no religion. Executive authority is vested in the Governor of New South Wales, who represents and is appointed by Elizabeth II, Queen of Australia. The current Governor is Margaret Beazley. The Governor commissions as Premier the leader of the parliamentary political party that can command a simple majority of votes in the Legislative Assembly. The Premier then recommends the appointment of other Members of the two Houses to the Ministry, under the principle of responsible or Westminster government. As in other Westminster systems, there is no constitutional requirement in NSW for the Government to be formed from the Parliament—merely convention. The Premier is Gladys Berejiklian of the Liberal Party. The form of the Government of New South Wales is prescribed in its Constitution, dating from 1856 and currently the Constitution Act 1902 (NSW). Since 1901 New South Wales has been a state of the Commonwealth of Australia, and the Australian Constitution regulates its relationship with the Commonwealth. 
In 2006, the Constitution Amendment Pledge of Loyalty Act 2006 No 6 was enacted to amend the NSW Constitution Act 1902 to require Members of the New South Wales Parliament and its Ministers to take a pledge of loyalty to Australia and to the people of New South Wales instead of swearing allegiance to Elizabeth II, her heirs and successors, and to revise the oaths taken by Executive Councillors. The Pledge of Loyalty Act was officially assented to by the Queen on 3 April 2006. The option to swear allegiance to the Queen was restored as an alternative option in June 2012. Under the Australian Constitution, New South Wales ceded certain legislative and judicial powers to the Commonwealth, but retained independence in all other areas. The New South Wales Constitution says: "The Legislature shall, subject to the provisions of the Commonwealth of Australia Constitution Act, have power to make laws for the peace, welfare, and good government of New South Wales in all cases whatsoever". The first "responsible" self-government of New South Wales was formed on 6 June 1856 with Sir Stuart Alexander Donaldson appointed by Governor Sir William Denison as its first Colonial Secretary, an office which in those days also carried the role of Premier. The Parliament of New South Wales is composed of the Sovereign and two houses: the Legislative Assembly (lower house), and the Legislative Council (upper house). Elections are held every four years on the fourth Saturday of March, the most recent being on 23 March 2019. At each election one member is elected to the Legislative Assembly from each of 93 electoral districts and half of the 42 members of the Legislative Council are elected by a statewide electorate. New South Wales is divided into 128 local government areas. There is also the Unincorporated Far West Region which is not part of any local government area, in the sparsely inhabited Far West, and Lord Howe Island, which is also unincorporated but self-governed by the Lord Howe Island Board. 
New South Wales is policed by the New South Wales Police Force, a statutory authority. Established in 1862, the New South Wales Police Force investigates summary and indictable offences throughout the State of New South Wales. The state has two fire services: the volunteer-based New South Wales Rural Fire Service, which is responsible for the majority of the state, and Fire and Rescue NSW, a government agency responsible for protecting urban areas. There is some overlap due to suburbanisation. Ambulance services are provided through the New South Wales Ambulance. Rescue services (i.e. vertical, road crash, confinement) are a joint effort by all emergency services, with Ambulance Rescue, Police Rescue Squad and Fire Rescue Units contributing. Volunteer rescue organisations include the Australian Volunteer Coast Guard, State Emergency Service (SES), Surf Life Saving New South Wales and Volunteer Rescue Association (VRA). The NSW school system comprises a kindergarten to year 12 system with primary schooling up to year 6 and secondary schooling between years 7 and 12. Schooling is compulsory from before 6 years old until the age of 17 (unless Year 10 is completed earlier). Between 1990 and 2010, schooling was only compulsory in NSW until age 15. Primary and secondary schools include government and non-government schools. Government schools are further classified as comprehensive and selective schools. Non-government schools include Catholic schools, other denominational schools, and non-denominational independent schools. Typically, a primary school provides education from kindergarten level to year 6. A secondary school, usually called a "high school", provides education from years 7 to 12. Secondary colleges are secondary schools which only cater for years 11 and 12. 
The NSW Education Standards Authority classifies the 13 years of primary and secondary schooling into six stages, beginning with Early Stage 1 (Kindergarten) and ending with Stage 6 (years 11 and 12). A Record of School Achievement (RoSA) is awarded by the NSW Education Standards Authority to students who have completed at least Year 10 but leave school without completing the Higher School Certificate. The RoSA was introduced in 2012 to replace the former School Certificate. The Higher School Certificate (HSC) is the usual Year 12 leaving certificate in NSW. Most students complete the HSC prior to entering the workforce or going on to study at university or TAFE (although the HSC itself can be completed at TAFE). The HSC must be completed for a student to get an Australian Tertiary Admission Rank (formerly Universities Admission Index), which determines the student's rank against fellow students who completed the Higher School Certificate. Eleven universities primarily operate in New South Wales. Sydney is home to Australia's first university, the University of Sydney founded in 1850. Other universities include the University of New South Wales, Macquarie University, the University of Technology, Sydney and Western Sydney University. The Australian Catholic University has two of its six campuses in Sydney, and the private University of Notre Dame Australia also operates a secondary campus in the city. Outside Sydney, the leading universities are the University of Newcastle and the University of Wollongong. Armidale is home to the University of New England, and Charles Sturt University and Southern Cross University have campuses spread across cities in the state's south-west and north coast respectively. The public universities are state government agencies, however they are largely regulated by the federal government, which also administers their public funding. 
Admission to NSW universities is arranged together with universities in the Australian Capital Territory by another government agency, the Universities Admission Centre. Vocational training, up to the level of advanced diplomas, is provided by the state government's ten Technical and Further Education (TAFE) institutes. These institutes run courses in more than 130 campuses throughout the state. Since the 1970s, New South Wales has undergone an increasingly rapid economic and social transformation. Old industries such as steel and shipbuilding have largely disappeared; although agriculture remains important, its share of the state's income is smaller than ever before. New industries such as information technology and financial services are largely centred in Sydney and have risen to take their place, with many companies having their Australian headquarters in Sydney CBD. In addition, the Macquarie Park area of Sydney has attracted the Australian headquarters of many information technology firms. Coal and related products are the state's biggest export. Its value to the state's economy is over A$5 billion, accounting for about 19% of all exports from NSW. Tourism has also become important, with Sydney as its centre, also stimulating growth on the North Coast, around Coffs Harbour and Byron Bay. Tourism is worth over $25.1 billion to the New South Wales economy and employs 7.1% of the workforce. In 2007, then-Premier of New South Wales Morris Iemma established Events New South Wales to "market Sydney and NSW as a leading global events destination". In July 2011 Events NSW merged with three key state authorities including Tourism NSW to establish Destination NSW (DNSW). New South Wales had a Gross State Product in 2018–19 (equivalent to Gross Domestic Product) of $614.4 billion which equalled $76,361 per capita. On 9 October 2007 NSW announced plans to build a 1,000 MW bank of wind powered turbines. 
The output of these is anticipated to be able to power up to 400,000 homes. The cost of this project will be $1.8 billion for 500 turbines. On 28 August 2008 the New South Wales cabinet voted to privatise electricity retail, causing 1,500 electrical workers to strike after a large anti-privatisation campaign. The NSW business community is represented by the NSW Business Chamber which has 30,000 members. Agriculture is spread throughout the eastern two-thirds of New South Wales. Cattle, sheep and pigs are the predominant types of livestock produced in NSW and they have been present since their importation during the earliest days of European settlement. Economically the state is the most important state in Australia, with about one-third of the country's sheep, one-fifth of its cattle, and one-third of its small number of pigs. New South Wales produces a large share of Australia's hay, fruit, legumes, lucerne, maize, nuts, wool, wheat, oats, oilseeds (about 51%), poultry, rice (about 99%), vegetables, fishing including oyster farming, and forestry including wood chips. Bananas and sugar are grown chiefly in the Clarence, Richmond and Tweed River areas. Wool is produced on the Northern Tablelands, as are prime lambs and beef cattle. The cotton industry is centred in the Namoi Valley in northwestern New South Wales. On the central slopes there are many orchards, with the principal fruits grown being apples, cherries and pears. However, the fruit industry is threatened by the Queensland fruit fly ("Bactrocera tryoni") which causes more than $28.5 million a year in damage to Australian crops, primarily in Queensland and northern New South Wales. About 40,200 hectares of vineyards lie across the eastern region of the state, with excellent wines produced in the Hunter Valley, with the Riverina being the largest wine producer in New South Wales. Australia's largest and most valuable Thoroughbred horse breeding area is centred on Scone in the Hunter Valley. 
The Hunter Valley is the home of the world-famous Coolmore, Darley and Kia-Ora Thoroughbred horse studs. About half of Australia's timber production is in New South Wales. Large areas of the state are now being replanted with eucalyptus forests. Under the Water Management Act 2000, updated riparian water rights were given to those within NSW with livestock. This change was named "The Domestic Stock Right", which provides that "an owner or occupier of a landholding is entitled to take water from a river, estuary or lake which fronts their land or from an aquifer which is underlying their land for domestic consumption and stock watering without the need for an access licence." Passage through New South Wales is vital for cross-continent transport. Rail and road traffic from Brisbane (Queensland) to Perth (Western Australia), or to Melbourne (Victoria), must pass through New South Wales. The majority of railways in New South Wales are currently operated by the state government. Some lines began as branch-lines of railways starting in other states. For instance, Balranald near the Victorian border was connected by a rail line coming up from Victoria and into New South Wales. Another line beginning in Adelaide crossed over the border and stopped at Broken Hill. Railway operations are managed by Sydney Trains and NSW TrainLink, which maintain the rolling stock. Sydney Trains operates trains within Sydney while NSW TrainLink operates outside Sydney, intercity, country and interstate services. Both Sydney Trains and NSW TrainLink have their main terminus at Sydney's Central station. NSW TrainLink regional and long-distance services consist of XPT services to Grafton, Casino, Brisbane, Melbourne and Dubbo, as well as Xplorer services to Canberra, Griffith, Broken Hill, Armidale and Moree. NSW TrainLink intercity trains operate on the Blue Mountains Line, Central Coast & Newcastle Line, South Coast Line, Southern Highlands Line and Hunter Line. 
Major roads are the concern of both federal and state governments. The latter maintains these through the Department of Roads and Maritime Services, formerly the Roads and Traffic Authority, and before that, the Department of Main Roads (DMR). Other roads are usually the concern of the RMS and/or the local government authority. Kingsford Smith Airport (commonly Sydney Airport, and locally referred to as Mascot Airport or just 'Mascot'), located in the southern Sydney suburb of Mascot, is the major airport not just for the state but for the whole nation. It is a hub for Australia's national airline Qantas. Other airlines serving regional New South Wales include: Transdev Sydney Ferries operates Sydney Ferries services within Sydney Harbour and the Parramatta River, while Newcastle Transport has a ferry service within Newcastle. All other ferry services are privately operated. Spirit of Tasmania ran a commercial ferry service between Sydney and Devonport, Tasmania. This service was terminated in 2006. Private boat services operated between South Australia, Victoria and New South Wales along the Murray and Darling Rivers but these only exist now as the occasional tourist paddle-wheeler service. New South Wales has more than 780 national parks and reserves covering more than 8% of the state. These parks range from rainforests, waterfalls and rugged bush to marine wonderlands and outback deserts, including World Heritage sites. The Royal National Park on the southern outskirts of Sydney became Australia's first national park when proclaimed on 26 April 1879. Named The National Park until 1955, it was the second national park to be established in the world, after Yellowstone National Park in the U.S. Kosciuszko National Park is the largest park in the state, encompassing New South Wales' alpine region. 
The National Parks Association was formed in 1957 to create a system of national parks all over New South Wales which led to the formation of the National Parks and Wildlife Service in 1967. This government agency is responsible for developing and maintaining the parks and reserve system, and conserving natural and cultural heritage, in the state of New South Wales. These parks preserve special habitats, plants and wildlife, such as the Wollemi National Park where the Wollemi Pine grows and areas sacred to Australian Aboriginals such as Mutawintji National Park in western New South Wales. Throughout Australian history, New South Wales sporting teams have been very successful in both winning domestic competitions and providing players to the Australian national teams. The largest sporting competition in the state is the National Rugby League, which is based in Sydney and expanded from the New South Wales Rugby League. The state is represented by the New South Wales Blues in the State of Origin series. Sydney is the spiritual home of Australian rugby league and hosts nine of the 16 NRL teams: Canterbury-Bankstown Bulldogs, Cronulla Sharks, Manly Sea Eagles, Parramatta Eels, Penrith Panthers, South Sydney Rabbitohs, Sydney Roosters and Wests Tigers, as well as being the northern home of the St George Illawarra Dragons, which is based in Wollongong. A tenth team, the Newcastle Knights is located in Newcastle. The state is represented by four teams in soccer's A-League: Sydney FC (2005–06, 2009–10, 2016–17 champions), Western Sydney Wanderers (2014 Asian champions), Central Coast Mariners (2012–13 champions) and Newcastle United Jets (2007–08 A League Champions). Australian rules football has historically not been strong in New South Wales outside the Riverina region. 
However, the Sydney Swans relocated from South Melbourne in 1982 and their presence and success since the late 1990s has raised the profile of Australian rules football, especially after their AFL premiership in 2005. A second NSW AFL club, the Greater Western Sydney Giants, entered the competition in 2012. The main summer sport is cricket and the Sydney Cricket Ground hosts the 'New Year' cricket Test match in January each year. The NSW Blues play in the One-Day Cup and Sheffield Shield competitions. Sydney Sixers and Sydney Thunder both play in the Big Bash League. Other teams in major national competitions include the Sydney Kings and Hawks in the National Basketball League, Sydney Uni Flames in the Women's National Basketball League, NSW Waratahs in Super Rugby and New South Wales Swifts in Suncorp Super Netball. Sydney was the host of the 2000 Summer Olympics and the 1938 British Empire Games. The Olympic Stadium, now known as ANZ Stadium hosts major events including the NRL Grand Final, State of Origin, rugby union and football internationals. It hosted the final of the 2003 Rugby World Cup and the 2015 AFC Asian Cup, as well as the 2006 FIFA World Cup qualifier between Australia and Uruguay, qualifying Australia for their first World Cup since 1974. The annual Sydney to Hobart Yacht Race begins in Sydney Harbour on Boxing Day. Bathurst hosts the annual Bathurst 1000 as part of the Supercars Championship at Mount Panorama Circuit. The popular equine sports of campdrafting and polocrosse were developed in New South Wales and competitions are now held across Australia. Polocrosse is now played in many overseas countries. Major professional teams include: As Australia's most populous state, New South Wales is home to a number of cultural institutions of importance to the nation. In music, New South Wales is home to the Sydney Symphony Orchestra, Australia's busiest and largest orchestra. 
Australia's largest opera company, Opera Australia, is headquartered in Sydney. Both of these organisations perform a subscription series at the Sydney Opera House. Other major musical bodies include the Australian Chamber Orchestra. Sydney is host to the Australian Ballet for its Sydney season (the ballet is headquartered in Melbourne). Apart from the Sydney Opera House, major musical performance venues include the City Recital Hall and the Sydney Town Hall. New South Wales is home to several major museums and art galleries, including the Australian Museum, the Powerhouse Museum, the Museum of Sydney, the Art Gallery of New South Wales and the Museum of Contemporary Art. Sydney is home to five Arts teaching organisations, which have all produced world-famous students: The National Art School, The College of Fine Arts, the National Institute of Dramatic Art (NIDA), the Australian Film, Television & Radio School and the Conservatorium of Music (now part of the University of Sydney). New South Wales is the setting and shooting location of many Australian films, including "Mad Max 2", which was shot near the mining town of Broken Hill. The state has also attracted international productions, both as a setting, such as in "", and as a stand-in for other locations, as seen in "The Matrix" franchise, "The Great Gatsby" and "Unbroken". 20th Century Fox operates Fox Studios Australia in Sydney. Screen NSW, which controls the state film industry, contributes approximately $100 million to the New South Wales economy each year. New South Wales in recent history has pursued bilateral partnerships with other federated states/provinces and metropolises through establishing a network of sister state relationships. The state currently has 7 sister states:
Nitric acid Nitric acid, also known as aqua fortis (Latin for "strong water") and spirit of niter, is a highly corrosive mineral acid. The pure compound is colorless, but older samples tend to acquire a yellow cast due to decomposition into oxides of nitrogen and water. Most commercially available nitric acid has a concentration of 68% in water. When the solution contains more than 86% HNO3, it is referred to as "fuming nitric acid". Depending on the amount of nitrogen dioxide present, fuming nitric acid is further characterized as red fuming nitric acid at concentrations above 86%, or white fuming nitric acid at concentrations above 95%. Nitric acid is the primary reagent used for nitration – the addition of a nitro group, typically to an organic molecule. While some resulting nitro compounds are shock- and thermally-sensitive explosives, a few are stable enough to be used in munitions and demolition, while others are still more stable and used as pigments in inks and dyes. Nitric acid is also commonly used as a strong oxidizing agent. Commercially available nitric acid is an azeotrope with water at a concentration of 68% HNO3. This solution has a boiling temperature of 120.5 °C at 1 atm. It is known as "concentrated nitric acid". Pure concentrated nitric acid is a colourless liquid at room temperature. Two solid hydrates are known: the monohydrate (HNO3·H2O or [H3O]NO3) and the trihydrate (HNO3·3H2O). An older density scale is occasionally seen, with concentrated nitric acid specified as 42° Baumé. Nitric acid is subject to thermal or light decomposition and for this reason it was often stored in brown glass bottles: This reaction may give rise to some non-negligible variations in the vapor pressure above the liquid because the nitrogen oxides produced dissolve partly or completely in the acid. The nitrogen dioxide (NO2) remains dissolved in the nitric acid coloring it yellow or even red at higher temperatures. 
While the pure acid tends to give off white fumes when exposed to air, acid with dissolved nitrogen dioxide gives off reddish-brown vapors, leading to the common names "red fuming nitric acid" and "white fuming nitric acid". Nitrogen oxides (NO"x") are soluble in nitric acid. A commercial grade of fuming nitric acid contains 98% HNO3 and has a density of 1.50 g/cm3. This grade is often used in the explosives industry. It is neither as volatile nor as corrosive as the anhydrous acid and has the approximate concentration of 21.4 M. Red fuming nitric acid, or RFNA, contains substantial quantities of dissolved nitrogen dioxide (NO2), leaving the solution with a reddish-brown color. Due to the dissolved nitrogen dioxide, the density of red fuming nitric acid is lower, at 1.490 g/cm3. An "inhibited" fuming nitric acid (either IWFNA or IRFNA) can be made by the addition of 0.6 to 0.7% hydrogen fluoride (HF). This fluoride is added for corrosion resistance in metal tanks. The fluoride creates a metal fluoride layer that protects the metal. White fuming nitric acid, pure nitric acid or WFNA, is very close to anhydrous nitric acid. It is available as 99.9% nitric acid by assay. One specification for white fuming nitric acid is that it has a maximum of 2% water and a maximum of 0.5% dissolved NO2. Anhydrous nitric acid has a density of 1.513 g/cm3 and has the approximate concentration of 24 molar. Anhydrous nitric acid is a colorless mobile liquid with a density of 1.512 g/cm3 that solidifies at −42 °C to form white crystals. As it decomposes to NO2 and water, it obtains a yellow tint. It boils at 83 °C. It is usually stored in a glass shatterproof amber bottle with twice the volume of head space to allow for pressure build-up, but even with those precautions the bottle must be vented monthly to release pressure. 
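The approximate molarities quoted for these grades follow directly from density and mass fraction: moles per litre = (density in g/L × mass fraction) / molar mass. A minimal sketch of that arithmetic (the function name and the 63.01 g/mol molar mass of HNO3 are my additions, not from the text):

```python
# Approximate molar concentration of a nitric acid solution from its
# density and HNO3 mass fraction. Molar mass of HNO3 is ~63.01 g/mol.
HNO3_MOLAR_MASS = 63.01  # g/mol

def molarity(density_g_per_cm3: float, mass_fraction: float) -> float:
    """mol/L = (density in g/L * mass fraction) / molar mass."""
    grams_per_litre = density_g_per_cm3 * 1000 * mass_fraction
    return grams_per_litre / HNO3_MOLAR_MASS

# Anhydrous acid (density 1.513 g/cm3, ~100% HNO3) -> ~24 M, as the text states.
print(round(molarity(1.513, 1.00), 1))  # 24.0
```

The same helper applied to other grades only gives a rough figure, since quoted commercial densities and assays vary.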
Two of the N–O bonds are equivalent and relatively short (this can be explained by theories of resonance; the canonical forms show double-bond character in these two bonds, causing them to be shorter than typical N–O bonds), and the third N–O bond is elongated because the O atom is also attached to a proton. Nitric acid is normally considered to be a strong acid at ambient temperatures. There is some disagreement over the value of the acid dissociation constant, though the p"K"a value is usually reported as less than −1. This means that the nitric acid in diluted solution is fully dissociated except in extremely acidic solutions. The p"K"a value rises to 1 at a temperature of 250 °C. Nitric acid can act as a base with respect to an acid such as sulfuric acid: The nitronium ion, , is the active reagent in aromatic nitration reactions. Since nitric acid has both acidic and basic properties, it can undergo an autoprotolysis reaction, similar to the self-ionization of water: Nitric acid reacts with most metals, but the details depend on the concentration of the acid and the nature of the metal. Dilute nitric acid behaves as a typical acid in its reaction with most metals. Magnesium, manganese, and zinc liberate H2: Nitric acid can oxidize non-active metals such as copper and silver. With these non-active or less electropositive metals the products depend on temperature and the acid concentration. For example, copper reacts with dilute nitric acid at ambient temperatures with a 3:8 stoichiometry: The nitric oxide produced may react with atmospheric oxygen to give nitrogen dioxide. With more concentrated nitric acid, nitrogen dioxide is produced directly in a reaction with 1:4 stoichiometry: Upon reaction with nitric acid, most metals give the corresponding nitrates. Some metalloids and metals give the oxides; for instance, Sn, As, Sb, and Ti are oxidized into SnO2, As2O5, Sb2O5, and TiO2 respectively. 
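The 3:8 copper stoichiometry mentioned above corresponds to the standard balanced equation 3 Cu + 8 HNO3 → 3 Cu(NO3)2 + 2 NO + 4 H2O, which can be checked mechanically by counting atoms on each side. A small illustrative sketch; the `atoms` helper and the dict encoding of formulas are mine, not from any chemistry library:

```python
from collections import Counter

def atoms(side):
    """Sum atom counts over (coefficient, formula-dict) pairs."""
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

# 3 Cu + 8 HNO3 -> 3 Cu(NO3)2 + 2 NO + 4 H2O
reactants = [(3, {"Cu": 1}), (8, {"H": 1, "N": 1, "O": 3})]
products = [(3, {"Cu": 1, "N": 2, "O": 6}),
            (2, {"N": 1, "O": 1}),
            (4, {"H": 2, "O": 1})]

print(atoms(reactants) == atoms(products))  # True: both sides balance
```

Both sides carry 3 Cu, 8 H, 8 N and 24 O, confirming the 3:8 ratio quoted in the text.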
Some precious metals, such as pure gold and platinum-group metals, do not react with nitric acid, though pure gold does react with "aqua regia", a mixture of concentrated nitric acid and hydrochloric acid. However, some less noble metals (Ag, Cu, ...) present in some gold alloys relatively poor in gold, such as colored gold, can be easily oxidized and dissolved by nitric acid, leading to colour changes of the gold-alloy surface. Nitric acid is used as a cheap means in jewelry shops to quickly spot low-gold alloys. While the powerful oxidizing properties of nitric acid are thermodynamic in nature, its oxidation reactions are sometimes kinetically unfavourable. The presence of small amounts of nitrous acid (HNO2) greatly enhances the rate of reaction. Although chromium (Cr), iron (Fe), and aluminium (Al) readily dissolve in dilute nitric acid, the concentrated acid forms a metal-oxide layer that protects the bulk of the metal from further oxidation. The formation of this protective layer is called passivation. Typical passivation concentrations range from 20% to 50% by volume (see ASTM A967-05). Metals that are passivated by concentrated nitric acid are iron, cobalt, chromium, nickel, and aluminium. Being a powerful oxidizing acid, nitric acid reacts violently with many organic materials and the reactions may be explosive. The hydroxyl group will typically strip a hydrogen from the organic molecule to form water, and the remaining nitro group takes the hydrogen's place. Nitration of organic compounds with nitric acid is the primary method of synthesis of many common explosives, such as nitroglycerin and trinitrotoluene (TNT). As very many less stable byproducts are possible, these reactions must be carefully thermally controlled, and the byproducts removed to isolate the desired product. 
Reaction with non-metallic elements, with the exceptions of nitrogen, oxygen, noble gases, silicon, and halogens other than iodine, usually oxidizes them to their highest oxidation states as acids, with the formation of nitrogen dioxide for concentrated acid and nitric oxide for dilute acid. Concentrated nitric acid oxidizes I2, P4, and S8 into HIO3, H3PO4, and H2SO4, respectively. Although it reacts with graphite and amorphous carbon, it does not react with diamond; it can separate diamond from the graphite that it oxidizes. Nitric acid reacts with proteins to form yellow nitrated products. This reaction is known as the xanthoproteic reaction. This test is carried out by adding concentrated nitric acid to the substance being tested, and then heating the mixture. If proteins that contain amino acids with aromatic rings are present, the mixture turns yellow. Upon adding a base such as ammonia, the color turns orange. These color changes are caused by nitrated aromatic rings in the protein. Xanthoproteic acid is formed when the acid contacts epithelial cells. Respective local skin color changes are indicative of inadequate safety precautions when handling nitric acid. Nitric acid is made by reaction of nitrogen dioxide (NO2) with water. Normally, the nitric oxide produced by the reaction is reoxidized by the oxygen in air to produce additional nitrogen dioxide. Bubbling nitrogen dioxide through hydrogen peroxide can help to improve acid yield. Commercial grade nitric acid solutions are usually between 52% and 68% nitric acid. Production of nitric acid is via the Ostwald process, named after German chemist Wilhelm Ostwald. In this process, anhydrous ammonia is oxidized to nitric oxide, in the presence of platinum or rhodium gauze catalyst at a high temperature of about 500 K and a pressure of 9 atm. Nitric oxide is then reacted with oxygen in air to form nitrogen dioxide. This is subsequently absorbed in water to form nitric acid and nitric oxide. 
The nitric oxide is cycled back for reoxidation. Alternatively, if the last step is carried out in air: The aqueous HNO3 obtained can be concentrated by distillation up to about 68% by mass. Further concentration to 98% can be achieved by dehydration with concentrated H2SO4. By using ammonia derived from the Haber process, the final product can be produced from nitrogen, hydrogen, and oxygen which are derived from air and natural gas as the sole feedstocks. In the laboratory, nitric acid can be made by thermal decomposition of copper(II) nitrate, producing nitrogen dioxide and oxygen gases, which are then passed through water to give nitric acid. Then, following the Ostwald process: Alternatively, nitric acid can be prepared by reacting equal masses of a nitrate salt such as sodium nitrate with sulfuric acid (H2SO4) and distilling the mixture at nitric acid's boiling point of 83 °C. A nonvolatile residue of the metal hydrogen sulfate remains in the distillation vessel. The red fuming nitric acid obtained may be converted to white nitric acid. The dissolved NO"x" is readily removed using reduced pressure at room temperature (10–30 minutes at 200 mmHg or 27 kPa) to give white fuming nitric acid. This procedure can also be performed under reduced pressure and temperature in one step in order to produce less nitrogen dioxide gas. Dilute nitric acid may be concentrated by distillation up to 68% acid, which is a maximum boiling azeotrope. In the laboratory, further concentration involves distillation with either sulfuric acid or magnesium nitrate, which serve as dehydrating agents. Such distillations must be done with all-glass apparatus at reduced pressure, to prevent decomposition of the acid. Industrially, highly concentrated nitric acid is produced by dissolving additional nitrogen dioxide in 68% nitric acid in an absorption tower. Dissolved nitrogen oxides are either stripped in the case of white fuming nitric acid, or remain in solution to form red fuming nitric acid. 
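When the nitric oxide from the absorption step is recycled as described, the Ostwald steps sum to the overall reaction NH3 + 2 O2 → HNO3 + H2O, so one mole of ammonia ideally yields one mole of nitric acid. A rough sketch of the resulting feedstock arithmetic (the function name and the assumption of 100% conversion are mine):

```python
# Overall Ostwald stoichiometry with NO recycle: NH3 + 2 O2 -> HNO3 + H2O,
# i.e. 1 mol NH3 per mol HNO3 at ideal conversion.
NH3_MOLAR_MASS = 17.03   # g/mol
HNO3_MOLAR_MASS = 63.01  # g/mol

def ammonia_required(tonnes_hno3: float) -> float:
    """Tonnes of NH3 feed per given tonnes of HNO3, assuming 100% yield."""
    return tonnes_hno3 * NH3_MOLAR_MASS / HNO3_MOLAR_MASS

print(round(ammonia_required(1.0), 2))  # 0.27 t of ammonia per tonne of acid
```

Real plants need somewhat more ammonia than this ideal figure, since conversion at each step is below 100%.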
More recently, electrochemical means have been developed to produce anhydrous acid from concentrated nitric acid feedstock. The main industrial use of nitric acid is the production of fertilizers: nitric acid is neutralized with ammonia to give ammonium nitrate. This application consumed 75–80% of the 26 million tonnes produced annually as of 1987. The other main applications are the production of explosives, nylon precursors, and specialty organic compounds. In organic synthesis, industrial and otherwise, the nitro group is a versatile functional group. A mixture of nitric and sulfuric acids introduces a nitro substituent onto various aromatic compounds by electrophilic aromatic substitution. Many explosives, such as TNT, are prepared this way; for example, toluene is nitrated to trinitrotoluene: C6H5CH3 + 3 HNO3 → C6H2(NO2)3CH3 + 3 H2O. Either concentrated sulfuric acid or oleum absorbs the excess water. The nitro group can be reduced to give an amine group, allowing synthesis of aniline compounds from various nitrobenzenes: C6H5NO2 + 3 H2 → C6H5NH2 + 2 H2O. The precursor to nylon, adipic acid, is produced on a large scale by oxidation of "KA oil" (a mixture of cyclohexanone and cyclohexanol) with nitric acid. Nitric acid has been used in various forms as the oxidizer in liquid-fueled rockets. These forms include red fuming nitric acid, white fuming nitric acid, mixtures with sulfuric acid, and each of these with an HF inhibitor. IRFNA (inhibited red fuming nitric acid) was one of three liquid-fuel components of the BOMARC missile. In elemental analysis by ICP-MS, ICP-AES, GFAA, and flame AA, dilute nitric acid (0.5–5.0%) is used as a matrix compound for determining metal traces in solutions. Ultrapure trace-metal-grade acid is required for such determinations, because small amounts of metal ions could affect the result of the analysis. It is also typically used in the digestion of turbid water samples, sludge samples, solid samples, and other unique samples that require elemental analysis by ICP-MS, ICP-OES, ICP-AES, GFAA, or flame atomic absorption spectroscopy.
Typically these digestions use a 50% solution of the purchased acid mixed with Type 1 DI water. In electrochemistry, nitric acid is used as a chemical doping agent for organic semiconductors, and in purification processes for raw carbon nanotubes. In a low concentration (approximately 10%), nitric acid is often used to artificially age pine and maple. The color produced is a grey-gold, very much like that of very old wax- or oil-finished wood. The corrosive effects of nitric acid are exploited for some specialty applications, such as etching in printmaking, pickling stainless steel, or cleaning silicon wafers in electronics. A solution of nitric acid, water, and alcohol, Nital, is used for etching metals to reveal the microstructure. ISO 14104 is one of the standards detailing this well-known procedure. Nitric acid is used, either in combination with hydrochloric acid or alone, to clean glass cover slips and glass slides for high-end microscopy applications. It is also used to clean glass before silvering when making silver mirrors. Commercially available aqueous blends of 5–30% nitric acid and 15–40% phosphoric acid are commonly used for cleaning food and dairy equipment, primarily to remove precipitated calcium and magnesium compounds (either deposited from the process stream or resulting from the use of hard water during production and cleaning). The phosphoric acid content helps to passivate ferrous alloys against corrosion by the dilute nitric acid. Nitric acid can be used as a spot test for alkaloids like LSD, giving a variety of colors depending on the alkaloid. Nitric acid is a corrosive acid and a powerful oxidizing agent. The major hazard posed by it is chemical burns, as it carries out acid hydrolysis with proteins (amide bonds) and fats (ester bonds), which consequently decomposes living tissue (e.g. skin and flesh). Concentrated nitric acid stains human skin yellow due to its reaction with keratin. These yellow stains turn orange when neutralized.
Systemic effects are unlikely, however, and the substance is not considered a carcinogen or mutagen. The standard first-aid treatment for acid spills on the skin is, as for other corrosive agents, irrigation with large quantities of water. Washing is continued for at least 10–15 minutes to cool the tissue surrounding the acid burn and to prevent secondary damage. Contaminated clothing is removed immediately and the underlying skin washed thoroughly. Being a strong oxidizing agent, nitric acid can react explosively with compounds such as cyanides, carbides, or metallic powders, and violently, even hypergolically (i.e. self-igniting), with many organic compounds such as turpentine. Hence, it should be stored away from bases and organics. The first mention of nitric acid is in the works of Arabic alchemists such as Muhammad ibn Zakariya al-Razi (854–925), and then later in Pseudo-Geber's "De Inventione Veritatis", wherein it is obtained by calcining a mixture of niter, alum, and blue vitriol. It was again described by Albert the Great in the 13th century and by Ramon Llull, who prepared it by heating niter and clay and called it "eau forte" (aqua fortis). Glauber devised a process to obtain it by distilling potassium nitrate with sulfuric acid. In 1776 Lavoisier showed that it contained oxygen, and in 1785 Henry Cavendish determined its precise composition and showed that it could be synthesized by passing a stream of electric sparks through moist air. In 1806, Humphry Davy reported the results of extensive distilled-water electrolysis experiments, concluding that nitric acid was produced at the anode from dissolved atmospheric nitrogen gas. He used a high-voltage battery and non-reactive electrodes and vessels, such as gold electrode cones that doubled as vessels, bridged by damp asbestos. The industrial production of nitric acid from atmospheric air began in 1905 with the Birkeland–Eyde process, also known as the arc process.
This process is based upon the oxidation of atmospheric nitrogen by atmospheric oxygen to nitric oxide in a very-high-temperature electric arc. Yields of up to approximately 4–5% nitric oxide were obtained at 3000 °C, and less at lower temperatures. The nitric oxide was cooled and oxidized by the remaining atmospheric oxygen to nitrogen dioxide, and this was subsequently absorbed in water in a series of packed-column or plate-column absorption towers to produce dilute nitric acid. The first towers bubbled the nitrogen dioxide through water and non-reactive quartz fragments. About 20% of the produced oxides of nitrogen remained unreacted, so the final towers contained an alkali solution to neutralize the rest. The process was very energy intensive and was rapidly displaced by the Ostwald process once cheap ammonia became available. Another early production method was invented by the French engineer Albert Nodon around 1913. His method produced nitric acid from electrolysis of calcium nitrate converted by bacteria from nitrogenous matter in peat bogs. An earthenware pot surrounded by lime was sunk into the peat and staked with tarred lumber to make a compartment for the carbon anode, around which the nitric acid is formed. Nitric acid was pumped out through a glass pipe that was sunk down to the bottom of the pot. Fresh water was pumped into the top through another glass pipe to replace the fluid removed. The interior was filled with coke. Cast iron cathodes were sunk into the peat surrounding it. Resistance was about 3 ohms per cubic meter and the applied potential was around 10 volts. Production from one deposit was 800 tons per year. Once the Haber process for the efficient production of ammonia was introduced in 1913, nitric acid production from ammonia using the Ostwald process overtook production from the Birkeland–Eyde process. This method of production is still in use today.
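The arc-process chemistry described above corresponds to the standard overall reactions (conventional textbook stoichiometry, not equations reproduced from this article):

```latex
% Birkeland–Eyde (arc) process: fixation of atmospheric nitrogen
\begin{align*}
\mathrm{N_2} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{NO} && \text{(in the electric arc, ca.\ 3000\,°C)}\\
2\,\mathrm{NO} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{NO_2} && \text{(on cooling, by residual atmospheric oxygen)}\\
3\,\mathrm{NO_2} + \mathrm{H_2O} &\longrightarrow 2\,\mathrm{HNO_3} + \mathrm{NO} && \text{(absorption in the tower cascade)}
\end{align*}
```

The first step is strongly endothermic and equilibrium-limited, which is why the process demanded so much electrical energy and gave only a few percent yield of nitric oxide.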
https://en.wikipedia.org/wiki?curid=21655
Nihilism Nihilism (from Latin "nihil", "nothing") is the philosophical view that all knowledge and values are baseless. Most commonly, nihilism is presented in the form of existential nihilism, which argues that life is without objective meaning, purpose, or intrinsic value. Moral nihilists assert that morality does not exist at all. Nihilism may also take epistemological, ontological, or metaphysical forms, meaning respectively that, in some aspect, knowledge is not possible or reality does not actually exist. The term is sometimes used in association with anomie to explain the general mood of despair at a perceived pointlessness of existence that one may develop upon realizing there are no necessary norms, rules, or laws. Nihilism has also been described as conspicuous in or constitutive of certain historical periods. For example, Jean Baudrillard and others have called postmodernity a nihilistic epoch, and some religious theologians and figures of religious authority have asserted that postmodernity and many aspects of modernity represent a rejection of theism, and that such rejection of theistic doctrine entails nihilism. Nihilism has many definitions, and thus can describe multiple arguably independent philosophical positions. Epistemological nihilism is a form of skepticism in which all knowledge is accepted as being possibly untrue or as being impossible to confirm as true. Existential nihilism is the belief that life has no intrinsic meaning or value. With respect to the universe, existential nihilism posits that a single human or even the entire human species is insignificant, without purpose, and unlikely to change in the totality of existence. The meaninglessness or meaning of life is largely explored in the philosophical school of existentialism. Medical nihilism is the view that we should have little confidence in the effectiveness of medical interventions. Jacob Stegenga proposed the term in the book "Medical Nihilism".
It is a work in the philosophy of science that deals with the contextualized demarcation of medical research. Stegenga applies Bayes' theorem to medical research and then argues for the premise that "even when presented with evidence for a hypothesis regarding the effectiveness of a medical intervention, we ought to have low confidence in that hypothesis." Mereological nihilism (also called compositional nihilism) is the position that objects with proper parts do not exist (not only objects in space, but also objects existing in time do not have any temporal parts), and that only basic building blocks without parts exist; thus the world we see and experience, full of objects with parts, is a product of human misperception (i.e., if we could see clearly, we would not perceive composite objects). This interpretation of existence must be based on resolution: the resolution with which humans see and perceive the "improper parts" of the world is not an objective fact of reality, but is rather an implicit trait that can only be qualitatively explored and expressed. Therefore, there is no arguable way to surmise or measure the validity of mereological nihilism. For example, an ant can get lost on a large cylindrical object because the circumference of the object is so large with respect to the ant that the ant effectively feels as though the object has no curvature. Thus, the resolution with which the ant views the world it exists "within" is a very important determining factor in how the ant experiences this "within the world" feeling. Metaphysical nihilism is the philosophical theory that posits that concrete objects and physical constructs might not exist in the possible world, or that even if there exist possible worlds that contain some concrete objects, there is at least one that contains only abstract objects. Extreme metaphysical nihilism is commonly defined as the belief that nothing exists as a correspondent component of the self-efficient world.
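Stegenga's Bayesian point can be illustrated with a small calculation. The sketch below uses hypothetical numbers, not figures from "Medical Nihilism": it shows only the general mechanism that when the prior probability that an intervention is effective is low, even a favorable trial result leaves the posterior probability low.

```python
# Illustrative Bayes' theorem calculation (hypothetical numbers): a low
# prior probability of effectiveness keeps the posterior low even after
# seemingly positive evidence.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis H."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

# Suppose (hypothetically) only 10% of candidate interventions are truly
# effective, a positive trial occurs 80% of the time for effective
# interventions, and 30% of the time for ineffective ones (false
# positives, bias, and so on).
p = posterior(prior=0.10, p_evidence_given_h=0.80, p_evidence_given_not_h=0.30)
print(round(p, 3))  # → 0.229: still well below 50% despite the positive trial
```

The numbers are stand-ins; the structural point is that the posterior is dragged down by the prior and by the false-positive rate, which is the shape of Stegenga's argument for low confidence.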
The American Heritage Medical Dictionary defines one form of nihilism as "an extreme form of skepticism that denies all existence." A similar skepticism concerning the concrete world can be found in solipsism. However, despite the fact that both deny the certainty of objects' true existence, the nihilist would deny the existence of self whereas the solipsist would affirm it. Both these positions are considered forms of anti-realism. Moral nihilism, also known as ethical nihilism, is the meta-ethical view that there is no morality whatsoever; therefore, no action is preferable to any other. For example, a moral nihilist would say that killing someone, for whatever reason, is neither right nor wrong. Moral nihilism is distinct from moral relativism, which acknowledges individual or cultural moral values. Other nihilists may argue not that there is no morality, but that if it does exist, it is a human construction and thus artificial, wherein any and all meaning is relative for different possible outcomes. As an example, if someone kills someone else, such a nihilist might argue that killing is not inherently a bad thing, or bad independently of our moral beliefs, because morality is constructed as a rudimentary dichotomy: what is said to be bad is given a higher negative weighting than what is called good. As a result, killing the individual was bad because it did not let the individual live, which was arbitrarily given a positive weighting. In this way, such a nihilist believes that all moral claims are void of any objective truth value. An alternative scholarly perspective is that moral nihilism is a morality in itself. Cooper writes, "In the widest sense of the word 'morality', moral nihilism is a morality." Ontological nihilism asserts that nothing actually exists.
Political nihilism follows the characteristic nihilist's rejection of non-rationalized or non-proven assertions; in this case, it rejects the necessity of the most fundamental social and political structures, such as government, family, and law. An influential analysis of political nihilism is presented by Leo Strauss. The Russian Nihilist movement was a Russian trend in the 1860s that rejected all authority. After the assassination of Tsar Alexander II in 1881, the Nihilists gained a reputation throughout Europe as proponents of the use of violence for political change. The Nihilists expressed anger at what they described as the abusive nature of the Eastern Orthodox Church and of the tsarist monarchy, and at the domination of the Russian economy by the aristocracy. Although the term "Nihilism" was coined by the German philosopher Friedrich Heinrich Jacobi (1743–1819), its widespread usage began with the 1862 novel "Fathers and Sons" by the Russian author Ivan Turgenev. The main character of the novel, Yevgeny Bazarov, who describes himself as a Nihilist, wants to educate the people. The "go to the people, be the people" campaign reached its height in the 1870s, during which underground groups such as the Circle of Tchaikovsky, the People's Will, and Land and Liberty formed. It became known as the Narodnik movement, whose members believed that the newly freed serfs were merely being sold into wage slavery in the onset of the Industrial Revolution, and that the middle and upper classes had effectively replaced landowners. The Russian state attempted to suppress the nihilist movement. In actions described by the Nihilists as "propaganda of the deed", many government officials were assassinated. In 1881 Alexander II was killed on the very day he had approved a proposal to call a representative assembly to consider new reforms.
Scientific nihilism is the doctrine that we should have very little confidence in scientific conclusions, such as findings, analyses, and attempts to understand or predict future natural events, including but not limited to meteorological predictions. The concept of nihilism was discussed by the Buddha (563–483 B.C.), as recorded in the Theravada and Mahayana Tripiṭaka. The Tripiṭaka, originally written in Pali, refers to nihilism as "natthikavāda" and the nihilist view as "micchādiṭṭhi". Various sutras within it describe a multiplicity of views held by different sects of ascetics while the Buddha was alive, some of which were viewed by him to be morally nihilistic. In the Doctrine of Nihilism in the Apannaka Sutta, the Buddha describes the views held by moral nihilists, and states that those who hold these views will not see the danger in misconduct and the blessings in good conduct and will, therefore, avoid good bodily, verbal, and mental conduct, practicing misconduct instead. The culmination of the path that the Buddha taught was Nirvana, "a place of nothingness... nonpossession and... non-attachment... [which is] the total end of death and decay". In an article, Ajahn Amaro, a practicing Buddhist monk of more than 30 years, observes that in English 'nothingness' can sound like nihilism. However, the word could be emphasized in a different way, so that it becomes 'no-thingness', indicating that Nirvana is not a thing you can find, but rather a state where you experience the reality of non-grasping. In the Alagaddupama Sutta, the Buddha describes how some individuals feared his teaching because they believed that their 'self' would be destroyed if they followed it. He describes this as an anxiety caused by the false belief in an unchanging, everlasting 'self'. All things are subject to change, and taking any impermanent phenomenon to be a 'self' causes suffering.
Nonetheless, his critics called him a nihilist who teaches the annihilation and extermination of an existing being. The Buddha's response was that he only teaches the cessation of suffering. When an individual has given up craving and the conceit of 'I am', their mind is liberated; they no longer come into any state of 'being' and are no longer born again. The Aggivacchagotta Sutta records a conversation between the Buddha and an individual named Vaccha that further elaborates on this. In it, Vaccha asks the Buddha to confirm one of the following with respect to the existence of the Buddha after death: that he reappears, does not reappear, both does and does not reappear, or neither does nor does not reappear. To all four questions, the Buddha answers that the terms 'appear', 'not appear', 'does and does not reappear', and 'neither does nor does not reappear' do not apply. When Vaccha expresses puzzlement, the Buddha asks Vaccha a counter-question to the effect of: if a fire were to go out and someone were to ask you whether the fire went north, south, east, or west, how would you reply? Vaccha replies that the question does not apply and that an extinguished fire can only be classified as 'out'. Thanissaro Bhikkhu elaborates on the classification problem around the words 'reappear', etc. with respect to the Buddha and Nirvana by stating that a "person who has attained the goal [Nirvana] is thus indescribable because [they have] abandoned all things by which [they] could be described". The Suttas themselves describe the liberated mind as 'untraceable' or as 'consciousness without feature', making no distinction between the mind of a liberated being that is alive and the mind of one that is no longer alive. Despite the Buddha's explanations to the contrary, Buddhist practitioners may, at times, still approach Buddhism in a nihilistic manner. Ajahn Amaro illustrates this by retelling the story of a Buddhist monk, Ajahn Sumedho, who in his early years took a nihilistic approach to Nirvana.
A distinct feature of Nirvana in Buddhism is that an individual attaining it is no longer subject to rebirth. Ajahn Sumedho, during a conversation with his teacher Ajahn Chah, comments that he is "determined above all things to fully realize Nirvana in this lifetime... deeply weary of the human condition and ... [is] determined not to be born again". To this, Ajahn Chah replies: "what about the rest of us, Sumedho? Don't you care about those who'll be left behind?" Ajahn Amaro comments that Ajahn Chah could detect that his student had a nihilistic aversion to life rather than true detachment. Ajahn Chah's answer clearly points to the Mahayana concept of the Bodhisattva, i.e. the consummate practitioner who renounces his own Nirvana and postpones his own liberation until everyone else has obtained it. Such a concept is not to be found in the Theravada tradition as expressed in the Pali canon, which is mainly focused on individual liberation through the four stages of enlightenment, culminating in the Arahant stage. Therefore, Ajahn Sumedho was correct in his interpretation of the teachings, and Ajahn Chah sought to mitigate and soften the nihilistic content of the original Theravada tradition by blending Hinayana and Mahayana concepts. The term "nihilism" was first used by Friedrich Heinrich Jacobi (1743–1819). Jacobi used the term to characterize rationalism, and in particular Immanuel Kant's "critical" philosophy, in order to carry out a reductio ad absurdum according to which all rationalism (philosophy as criticism) reduces to nihilism, and thus should be avoided and replaced with a return to some type of faith and revelation. Bret W. Davis writes, for example, "The first philosophical development of the idea of nihilism is generally ascribed to Friedrich Jacobi, who in a famous letter criticized Fichte's idealism as falling into nihilism.
According to Jacobi, Fichte's absolutization of the ego (the 'absolute I' that posits the 'not-I') is an inflation of subjectivity that denies the absolute transcendence of God." A related but oppositional concept is fideism, which sees reason as hostile and inferior to faith. With the popularizing of the word "nihilism" by Ivan Turgenev, a new Russian political movement called the Nihilist movement adopted the term. They supposedly called themselves nihilists because "nothing that then existed found favor in their eyes". This movement was significant enough that, even in the English-speaking world, at the turn of the 20th century the word "nihilism" without qualification was almost exclusively associated with this Russian revolutionary sociopolitical movement. Søren Kierkegaard (1813–1855) posited an early form of nihilism, which he referred to as "levelling". He saw levelling as the process of suppressing individuality to a point where an individual's uniqueness becomes non-existent and nothing meaningful in one's existence can be affirmed. Kierkegaard, an advocate of a philosophy of life, generally argued against levelling and its nihilistic consequences, although he believed it would be "genuinely educative to live in the age of levelling [because] people will be forced to face the judgement of [levelling] alone." George Cotkin asserts Kierkegaard was against "the standardization and levelling of belief, both spiritual and political, in the nineteenth century," and that Kierkegaard "opposed tendencies in mass culture to reduce the individual to a cipher of conformity and deference to the dominant opinion." In his day, tabloids (like the Danish magazine "Corsaren") and apostate Christianity were instruments of levelling and contributed to the "reflective apathetic age" of 19th-century Europe.
Kierkegaard argues that individuals who can overcome the levelling process are stronger for it, and that it represents a step in the right direction towards "becoming a true self." Because levelling must be overcome, Hubert Dreyfus and Jane Rubin argue, Kierkegaard's interest, "in an increasingly nihilistic age, is in how we can recover the sense that our lives are meaningful". Note, however, that Kierkegaard's meaning of "nihilism" differs from the modern definition, in the sense that, for Kierkegaard, levelling led to a life lacking meaning, purpose, or value, whereas the modern interpretation of nihilism posits that there was never any meaning, purpose, or value to begin with. Nihilism is often associated with the German philosopher Friedrich Nietzsche, who provided a detailed diagnosis of nihilism as a widespread phenomenon of Western culture. Though the notion appears frequently throughout Nietzsche's work, he uses the term in a variety of ways, with different meanings and connotations. Karen L. Carr describes Nietzsche's characterization of nihilism "as a condition of tension, as a disproportion between what we want to value (or need) and how the world appears to operate." When we find out that the world does not possess the objective value or meaning that we want it to have or have long since believed it to have, we find ourselves in a crisis. Nietzsche asserts that with the decline of Christianity and the rise of physiological decadence, nihilism is in fact characteristic of the modern age, though he implies that the rise of nihilism is still incomplete and that it has yet to be overcome. Though the problem of nihilism becomes especially explicit in Nietzsche's notebooks (published posthumously), it is mentioned repeatedly in his published works and is closely connected to many of the problems mentioned there. Nietzsche characterized nihilism as emptying the world, and especially human existence, of meaning, purpose, comprehensible truth, or essential value.
This observation stems in part from Nietzsche's perspectivism, or his notion that "knowledge" is always by someone of some thing: it is always bound by perspective, and it is never mere fact. Rather, there are interpretations through which we understand the world and give it meaning. Interpreting is something we cannot go without; in fact, it is something we "need". One way of interpreting the world is through morality, one of the fundamental ways in which people make sense of the world, especially in regard to their own thoughts and actions. Nietzsche distinguishes a morality that is strong or healthy, meaning that the person in question is aware that he constructs it himself, from weak morality, where the interpretation is projected onto something external. Nietzsche discusses Christianity, one of the major topics in his work, at length in the context of the problem of nihilism in his notebooks, in a chapter entitled "European Nihilism". Here he states that the Christian moral doctrine provides people with intrinsic value, belief in God (which justifies the evil in the world), and a basis for objective knowledge. In this sense, in constructing a world where objective knowledge is possible, Christianity is an antidote against a primal form of nihilism, against the despair of meaninglessness. However, it is exactly the element of truthfulness in Christian doctrine that is its undoing: in its drive towards truth, Christianity eventually finds itself to be a construct, which leads to its own dissolution. Hence Nietzsche states that we have outgrown Christianity "not because we lived too far from it, rather because we lived too close". As such, the self-dissolution of Christianity constitutes yet another form of nihilism. Because Christianity was an interpretation that posited itself as "the" interpretation, Nietzsche states that this dissolution leads beyond skepticism to a distrust of "all" meaning.
Stanley Rosen identifies Nietzsche's concept of nihilism with a situation of meaninglessness, in which "everything is permitted." According to him, the loss of higher metaphysical values that exist in contrast to the base reality of the world, or merely human ideas, gives rise to the idea that all human ideas are therefore valueless. Rejecting idealism thus results in nihilism, because only similarly transcendent ideals live up to the previous standards that the nihilist still implicitly holds. The inability of Christianity to serve as a source of value for the world is reflected in Nietzsche's famous aphorism of the madman in "The Gay Science". The death of God, in particular the statement that "we killed him", is similar to the "self"-dissolution of Christian doctrine: due to the advances of the sciences, which for Nietzsche show that man is the product of evolution, that Earth has no special place among the stars, and that history is not progressive, the Christian notion of God can no longer serve as a basis for a morality. One such reaction to the loss of meaning is what Nietzsche calls "passive nihilism", which he recognises in the pessimistic philosophy of Schopenhauer. Schopenhauer's doctrine, which Nietzsche also refers to as Western Buddhism, advocates separating oneself from will and desires in order to reduce suffering. Nietzsche characterises this ascetic attitude as a "will to nothingness", whereby life turns away from itself, as there is nothing of value to be found in the world. This wiping away of all value in the world is characteristic of the nihilist, although in this the nihilist appears inconsistent: this "will to nothingness" is still a form of willing. He describes this as "an inconsistency on the part of the nihilists." Nietzsche's relation to the problem of nihilism is a complex one. He approaches the problem of nihilism as deeply personal, stating that this predicament of the modern world is a problem that has "become conscious" in him.
According to Nietzsche, it is only when nihilism is "overcome" that a culture can have a true foundation upon which to thrive. He wished to hasten its coming only so that he could also hasten its ultimate departure. He states that there is at least the possibility of another type of nihilist in the wake of Christianity's self-dissolution, one that does "not" stop after the destruction of all value and meaning and succumb to the nothingness that follows. This alternative, 'active' nihilism, on the other hand, destroys in order to level the field for constructing something new. This form of nihilism is characterized by Nietzsche as "a sign of strength," a willful destruction of the old values to wipe the slate clean and lay down one's own beliefs and interpretations, contrary to the passive nihilism that resigns itself to the decomposition of the old values. This willful destruction of values and the overcoming of the condition of nihilism by the construction of new meaning, this active nihilism, could be related to what Nietzsche elsewhere calls a 'free spirit' or the "Übermensch" from "Thus Spoke Zarathustra" and "The Antichrist", the model of the strong individual who posits his own values and lives his life as if it were his own work of art. It may be questioned, though, whether "active nihilism" is indeed the correct term for this stance, and some question whether Nietzsche takes the problems nihilism poses seriously enough. Martin Heidegger's interpretation of Nietzsche influenced many postmodern thinkers who investigated the problem of nihilism as put forward by Nietzsche. Only recently has Heidegger's influence on Nietzschean nihilism research faded. As early as the 1930s, Heidegger was giving lectures on Nietzsche's thought. Given the importance of Nietzsche's contribution to the topic of nihilism, Heidegger's influential interpretation of Nietzsche is important for the historical development of the term "nihilism".
Heidegger's method of researching and teaching Nietzsche is explicitly his own. He does not specifically try to present Nietzsche "as" Nietzsche. He rather tries to incorporate Nietzsche's thoughts into his own philosophical system of Being, Time and "Dasein". In his "Nihilism as Determined by the History of Being" (1944–46), Heidegger tries to understand Nietzsche's nihilism as the attempt to achieve a victory through the devaluation of what had until then been the highest values. The principle of this devaluation is, according to Heidegger, the will to power. The will to power is also the principle of every earlier "valuation" of values. How does this devaluation occur, and why is it nihilistic? One of Heidegger's main critiques of philosophy is that philosophy, and more specifically metaphysics, has forgotten to discriminate between investigating the notion of "a" being ("Seiende") and "Being" ("Sein"). According to Heidegger, the history of Western thought can be seen as the history of metaphysics. And because metaphysics has forgotten to ask about the notion of Being (what Heidegger calls "Seinsvergessenheit", the forgetting of Being), it is a history about the destruction of Being. That is why Heidegger calls metaphysics nihilistic. This makes Nietzsche's metaphysics not a victory over nihilism, but a perfection of it. Heidegger, in his interpretation of Nietzsche, was inspired by Ernst Jünger. Many references to Jünger can be found in Heidegger's lectures on Nietzsche. For example, in a letter to the rector of Freiburg University of November 4, 1945, Heidegger, inspired by Jünger, tries to explain the notion of "God is dead" as the "reality of the Will to Power." Heidegger also praises Jünger for defending Nietzsche against a too biological or anthropological reading during the Nazi era. Heidegger's interpretation of Nietzsche influenced a number of important postmodernist thinkers. Gianni Vattimo points at a back-and-forth movement in European thought, between Nietzsche and Heidegger.
During the 1960s, a Nietzschean 'renaissance' began, culminating in the work of Mazzino Montinari and Giorgio Colli. They began work on a new and complete edition of Nietzsche's collected works, making Nietzsche more accessible for scholarly research. Vattimo explains that with this new edition of Colli and Montinari, a critical reception of Heidegger's interpretation of Nietzsche began to take shape. Like other contemporary French and Italian philosophers, Vattimo does not want, or only partially wants, to rely on Heidegger for understanding Nietzsche. On the other hand, Vattimo judges Heidegger's intentions authentic enough to keep pursuing them. Philosophers whom Vattimo cites as part of this back-and-forth movement are the French philosophers Deleuze, Foucault and Derrida, and the Italian philosophers Cacciari, Severino and Vattimo himself. Jürgen Habermas, Jean-François Lyotard and Richard Rorty are also philosophers influenced by Heidegger's interpretation of Nietzsche. Gilles Deleuze's interpretation of Nietzsche's concept of nihilism is different from, and in some sense diametrically opposed to, the usual definition (as outlined in the rest of this article). Nihilism is one of the main topics of Deleuze's early book "Nietzsche and Philosophy" (1962). There, Deleuze repeatedly interprets Nietzsche's nihilism as "the enterprise of denying life and depreciating existence". Nihilism thus defined is therefore not the denial of higher values, or the denial of meaning, but rather the depreciation of life in the name of such higher values or meaning. Deleuze therefore (with, he claims, Nietzsche) says that Christianity and Platonism, and with them the whole of metaphysics, are intrinsically nihilist. 
Postmodern and poststructuralist thought has questioned the very grounds on which Western cultures have based their 'truths': absolute knowledge and meaning, a 'decentralization' of authorship, the accumulation of positive knowledge, historical progress, and certain ideals and practices of humanism and the Enlightenment. Jacques Derrida, whose deconstruction is perhaps most commonly labeled nihilistic, did not himself make the nihilistic move that others have claimed. Derridean deconstructionists argue that this approach rather frees texts, individuals or organizations from a restrictive truth, and that deconstruction opens up the possibility of other ways of being. Gayatri Chakravorty Spivak, for example, uses deconstruction to create an ethics of opening up Western scholarship to the voice of the subaltern and to philosophies outside of the canon of Western texts. Derrida himself built a philosophy based upon a 'responsibility to the other'. Deconstruction can thus be seen not as a denial of truth, but as a denial of our ability to know truth. That is to say, it makes an epistemological claim, whereas nihilism makes an ontological one. Lyotard argues that, rather than relying on an objective truth or method to prove their claims, philosophers legitimize their truths by reference to a story about the world that cannot be separated from the age and system the stories belong to, which Lyotard calls "meta-narratives". He then goes on to define the postmodern condition as characterized by a rejection both of these meta-narratives and of the process of legitimation by meta-narratives. In lieu of meta-narratives we have created new language-games in order to legitimize our claims, which rely on changing relationships and mutable truths, none of which is privileged over the others to speak to ultimate truth. This concept of the instability of truth and meaning leads in the direction of nihilism, though Lyotard stops short of embracing the latter. 
Postmodern theorist Jean Baudrillard wrote briefly of nihilism from the postmodern viewpoint in "Simulacra and Simulation". He focused mainly on interpretations of the real world in relation to the simulations of which the real world is composed. The uses of meaning were an important subject in Baudrillard's discussion of nihilism. In "Nihil Unbound: Extinction and Enlightenment", Ray Brassier maintains that philosophy has avoided the traumatic idea of extinction, instead attempting to find meaning in a world conditioned by the very idea of its own annihilation. Thus Brassier critiques both the phenomenological and hermeneutic strands of Continental philosophy, as well as the vitalism of thinkers like Gilles Deleuze, who work to ingrain meaning in the world and stave off the "threat" of nihilism. Instead, drawing on thinkers such as Alain Badiou, François Laruelle, Paul Churchland, and Thomas Metzinger, Brassier defends a view of the world as inherently devoid of meaning. That is, rather than avoiding nihilism, Brassier embraces it as the truth of reality. Brassier concludes from his readings of Badiou and Laruelle that the universe is founded on the nothing, but also that philosophy is the "organon of extinction," that it is only because life is conditioned by its own extinction that there is thought at all. Brassier then defends a radically anti-correlationist philosophy proposing that Thought is conjoined not with Being, but with Non-Being. The term "Dada" was first used by Richard Huelsenbeck and Tristan Tzara in 1916. The movement, which lasted from approximately 1916 to 1923, arose during World War I, an event that influenced the artists. The Dada Movement began in the old town of Zürich, Switzerland – known as the "Niederdorf" or "Niederdörfli" – at the Cabaret Voltaire. The Dadaists claimed that Dada was not an art movement, but an anti-art movement, sometimes using found objects in a manner similar to found poetry. 
The "anti-art" drive is thought to have stemmed from a post-war emptiness. This tendency toward devaluation of art has led many to claim that Dada was an essentially nihilistic movement. Given that Dada created its own means for interpreting its products, it is difficult to classify alongside most other contemporary art expressions. Due to perceived ambiguity, it has been classified as a nihilistic "modus vivendi". The term "nihilism" was actually popularized in 1862 by Ivan Turgenev in his novel "Fathers and Sons," whose hero, Bazarov, was a nihilist and recruited several followers to the philosophy. He found his nihilistic ways challenged upon falling in love. Anton Chekhov portrayed nihilism when writing "Three Sisters". The phrase "what does it matter" or variants of this are often spoken by several characters in response to events; the significance of some of these events suggests a subscription to nihilism by said characters as a type of coping strategy. The philosophical ideas of the French author, the Marquis de Sade, are often noted as early examples of nihilistic principles.
https://en.wikipedia.org/wiki?curid=21663
Nebula A nebula (Latin for 'cloud' or 'fog'; pl. nebulae, nebulæ or nebulas) is an interstellar cloud of dust, hydrogen, helium and other ionized gases. Originally, the term was used to describe any diffuse astronomical object, including galaxies beyond the Milky Way. The Andromeda Galaxy, for instance, was once referred to as the "Andromeda Nebula" (and spiral galaxies in general as "spiral nebulae") before the true nature of galaxies was confirmed in the early 20th century by Vesto Slipher, Edwin Hubble and others. Most nebulae are of vast size; some are hundreds of light-years in diameter. A nebula that is visible to the human eye from Earth would appear larger, but no brighter, from close by. The Orion Nebula, the brightest nebula in the sky, occupying an area twice the diameter of the full Moon, can be viewed with the naked eye but was missed by early astronomers. Although denser than the space surrounding them, most nebulae are far less dense than any vacuum created on Earth – a nebular cloud the size of the Earth would have a total mass of only a few kilograms. Many nebulae are visible due to fluorescence caused by embedded hot stars, while others are so diffuse that they can be detected only with long exposures and special filters. Some nebulae are variably illuminated by T Tauri variable stars. Nebulae are often star-forming regions, such as the "Pillars of Creation" in the Eagle Nebula. In these regions, gas, dust, and other materials "clump" together to form denser regions, which attract further matter and eventually become dense enough to form stars. The remaining material is then believed to form planets and other planetary system objects. Around 150 AD, Ptolemy recorded, in books VII–VIII of his "Almagest", five stars that appeared nebulous. He also noted a region of nebulosity between the constellations Ursa Major and Leo that was not associated with any star. 
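The "few kilograms" figure above can be checked with a rough order-of-magnitude calculation. The numbers below are illustrative assumptions, not values from this article: Earth's mean radius, the mass of a hydrogen atom, and a density of about one hydrogen atom per cubic centimetre, roughly the average of the diffuse interstellar medium (denser nebulae, at hundreds or thousands of atoms per cm³, would give correspondingly larger but still tiny masses).

```python
import math

# Illustrative assumed values (not from the article above):
EARTH_RADIUS_CM = 6.371e8        # Earth's mean radius in cm
H_ATOM_MASS_KG = 1.67e-27        # mass of one hydrogen atom in kg
DENSITY_ATOMS_PER_CM3 = 1.0      # ~diffuse interstellar medium

# Volume of an Earth-sized sphere, in cubic centimetres.
volume_cm3 = (4.0 / 3.0) * math.pi * EARTH_RADIUS_CM ** 3

# Total mass = volume * number density * mass per atom.
mass_kg = volume_cm3 * DENSITY_ATOMS_PER_CM3 * H_ATOM_MASS_KG

print(f"{mass_kg:.1f} kg")  # on the order of a few kilograms
```

Under these assumptions the result is around 2 kg, consistent with the claim that an Earth-sized volume of nebular gas would weigh only a few kilograms.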
The first true nebula, as distinct from a star cluster, was mentioned by the Persian astronomer Abd al-Rahman al-Sufi, in his "Book of Fixed Stars" (964). He noted "a little cloud" where the Andromeda Galaxy is located. He also cataloged the Omicron Velorum star cluster as a "nebulous star" and other nebulous objects, such as Brocchi's Cluster. The supernova that created the Crab Nebula, SN 1054, was observed by Arabic and Chinese astronomers in 1054. In 1610, Nicolas-Claude Fabri de Peiresc discovered the Orion Nebula using a telescope. This nebula was also observed by Johann Baptist Cysat in 1618. However, the first detailed study of the Orion Nebula was not performed until 1659, by Christiaan Huygens, who also believed he was the first person to discover this nebulosity. In 1715, Edmond Halley published a list of six nebulae. This number steadily increased during the century, with Jean-Philippe de Cheseaux compiling a list of 20 (including eight not previously known) in 1746. From 1751 to 1753, Nicolas-Louis de Lacaille cataloged 42 nebulae from the Cape of Good Hope, most of which were previously unknown. Charles Messier then compiled a catalog of 103 "nebulae" (now called Messier objects, which included what are now known to be galaxies) by 1781; his interest was detecting comets, and these were objects that might be mistaken for them. The number of nebulae was then greatly increased by the efforts of William Herschel and his sister Caroline Herschel. Their "Catalogue of One Thousand New Nebulae and Clusters of Stars" was published in 1786. A second catalog of a thousand was published in 1789, and the third and final catalog of 510 appeared in 1802. During much of their work, William Herschel believed that these nebulae were merely unresolved clusters of stars. In 1790, however, he discovered a star surrounded by nebulosity and concluded that this was a true nebulosity, rather than a more distant cluster. 
Beginning in 1864, William Huggins examined the spectra of about 70 nebulae. He found that roughly a third of them had the emission spectrum of a gas. The rest showed a continuous spectrum and thus were thought to consist of a mass of stars. A third category was added in 1912 when Vesto Slipher showed that the spectrum of the nebula that surrounded the star Merope matched the spectra of the Pleiades open cluster. Thus the nebula radiates by reflected star light. About 1923, following the Great Debate, it had become clear that many "nebulae" were in fact galaxies far from our own. Slipher and Edwin Hubble continued to collect the spectra from many different nebulae, finding 29 that showed emission spectra and 33 that had the continuous spectra of star light. In 1932, Hubble announced that nearly all nebulae are associated with stars, and their illumination comes from star light. He also discovered that the emission spectrum nebulae are nearly always associated with stars having spectral classifications of B or hotter (including all O-type main sequence stars), while nebulae with continuous spectra appear with cooler stars. Both Hubble and Henry Norris Russell concluded that the nebulae surrounding the hotter stars are transformed in some manner. There are a variety of formation mechanisms for the different types of nebulae. Some nebulae form from gas that is already in the interstellar medium, while others are produced by stars. Examples of the former case are giant molecular clouds, the coldest, densest phase of interstellar gas, which can form by the cooling and condensation of more diffuse gas. Examples of the latter case are planetary nebulae formed from material shed by a star in the late stages of its stellar evolution. Star-forming regions are a class of emission nebula associated with giant molecular clouds. These form as a molecular cloud collapses under its own weight, producing stars. 
Massive stars may form in the center, and their ultraviolet radiation ionizes the surrounding gas, making it visible at optical wavelengths. The region of ionized hydrogen surrounding the massive stars is known as an H II region, while the shells of neutral hydrogen surrounding the H II region are known as photodissociation regions. Examples of star-forming regions are the Orion Nebula, the Rosette Nebula and the Omega Nebula. Feedback from star formation, in the form of supernova explosions of massive stars, stellar winds or ultraviolet radiation from massive stars, or outflows from low-mass stars, may disrupt the cloud, destroying the nebula after several million years. Other nebulae form as the result of supernova explosions: the death throes of massive, short-lived stars. The materials thrown off from the supernova explosion are then ionized by the energy released and by the compact object that its core produces. One of the best examples of this is the Crab Nebula, in Taurus. The supernova event was recorded in the year 1054 and is labeled SN 1054. The compact object that was created after the explosion lies in the center of the Crab Nebula and is a neutron star. Still other nebulae form as planetary nebulae. This is the final stage of the life of a low-mass star, like Earth's Sun. Stars with a mass up to 8–10 solar masses evolve into red giants and slowly lose their outer layers during pulsations in their atmospheres. When a star has lost enough material, its temperature increases and the ultraviolet radiation it emits can ionize the surrounding nebula that it has thrown off. Our Sun will produce a planetary nebula and its core will remain behind in the form of a white dwarf. Objects named nebulae belong to four major groups. Before their nature was understood, galaxies ("spiral nebulae") and star clusters too distant to be resolved as stars were also classified as nebulae, but no longer are. 
Not all cloud-like structures are named nebulae; Herbig–Haro objects are an example. Most nebulae can be described as diffuse nebulae, which means that they are extended and contain no well-defined boundaries. Diffuse nebulae can be divided into emission nebulae, reflection nebulae and dark nebulae. Visible-light nebulae may be divided into emission nebulae, which emit spectral line radiation from excited or ionized gas (mostly ionized hydrogen) and are often called H II regions (H II referring to ionized hydrogen), and reflection nebulae, which are visible primarily due to the light they reflect. Reflection nebulae themselves do not emit significant amounts of visible light, but are near stars and reflect light from them. Similar nebulae not illuminated by stars do not exhibit visible radiation, but may be detected as opaque clouds blocking light from luminous objects behind them; they are called dark nebulae. Although these nebulae have different visibility at optical wavelengths, they are all bright sources of infrared emission, chiefly from dust within the nebulae. Planetary nebulae are the remnants of the final stages of stellar evolution for lower-mass stars. Evolved asymptotic giant branch stars expel their outer layers outwards due to strong stellar winds, thus forming gaseous shells, while leaving behind the star's core in the form of a white dwarf. Radiation from the hot white dwarf excites the expelled gases, producing emission nebulae with spectra similar to those of emission nebulae found in star formation regions. They are H II regions, because mostly hydrogen is ionized, but planetary nebulae are denser and more compact than the nebulae found in star formation regions. Planetary nebulae were given their name by early astronomical observers who were unable to distinguish them from planets, which were of more interest to them. 
Our Sun is expected to spawn a planetary nebula about 12 billion years after its formation. A protoplanetary nebula (PPN) is an astronomical object in the short-lived episode of rapid stellar evolution between the late asymptotic giant branch (LAGB) phase and the subsequent planetary nebula (PN) phase. During the AGB phase, the star undergoes mass loss, expelling a circumstellar shell of hydrogen gas. When this phase comes to an end, the star enters the PPN phase. The PPN is energized by the central star, causing it to emit strong infrared radiation and become a reflection nebula. Collimated stellar winds from the central star shape and shock the shell into an axially symmetric form, while producing a fast-moving molecular wind. The exact point at which a PPN becomes a planetary nebula (PN) is defined by the temperature of the central star. The PPN phase continues until the central star reaches a temperature of 30,000 K, after which it is hot enough to ionize the surrounding gas. A supernova occurs when a high-mass star reaches the end of its life. When nuclear fusion in the core of the star stops, the star collapses. The gas falling inward either rebounds or is so strongly heated that it expands outwards from the core, causing the star to explode. The expanding shell of gas forms a supernova remnant, a special type of diffuse nebula. Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emission. This emission originates from high-velocity electrons oscillating within magnetic fields.
https://en.wikipedia.org/wiki?curid=21664
Natural theology Natural theology, once also termed physico-theology, is a type of theology that provides arguments for the existence of God based on reason and ordinary experience of nature. This distinguishes it from revealed theology, which is based on scripture and/or religious experiences, and from transcendental theology, which is based on "a priori" reasoning. It is thus a type of philosophy, with the aim of explaining the nature of the gods, or of one supreme God. For monotheistic religions, this principally involves arguments about the attributes or non-attributes of God, and especially the existence of God, using arguments that do not involve recourse to supernatural revelation. Marcus Terentius Varro (116–27 BCE) established a distinction between political theology (the social functions of religion), natural theology and mythical theology. His terminology became part of the Stoic tradition and then Christianity through Augustine of Hippo and Thomas Aquinas. Besides Hesiod's "Works and Days" and Zarathushtra's Gathas, Plato gives the earliest surviving account of a natural theology. In the "Timaeus", written , we read: "We must first investigate concerning [the whole Cosmos] that primary question which has to be investigated at the outset in every case, — namely, whether it has always existed, having no beginning or generation, or whether it has come into existence, having begun from some beginning." In the "Laws", in answer to the question as to what arguments justify faith in the gods, Plato affirms: "One is our dogma about the soul...the other is our dogma concerning the ordering of the motion of the stars". In his (lost) "Antiquitates rerum humanarum et divinarum" ("Antiquities of Human and Divine Things", 1st century BCE), Varro established a distinction between three kinds of theology: civil (political) ("theologia civilis"), natural (physical) ("theologia naturalis") and mythical ("theologia mythica"). 
The theologians of civil theology are "the people", asking how the gods relate to daily life and the state (imperial cult). The theologians of natural theology are the philosophers, asking about the nature of the gods, and the theologians of mythical theology are the poets, crafting mythology. From the 8th century CE, the Mutazilite school of Islam, compelled to defend their principles against the orthodox Islam of their day, used philosophy for support, and were among the first to pursue a rational Islamic theology, termed "Ilm-al-Kalam" (scholastic theology). The teleological argument was later presented by the early Islamic philosophers Alkindus and Averroes, while Avicenna presented both the cosmological argument and the ontological argument in "The Book of Healing" (1027). Thomas Aquinas ( – 1274) presented several versions of the cosmological argument in his "Summa Theologica", and of the teleological argument in his "Summa contra Gentiles". He presented the ontological argument, but rejected it in favor of proofs that invoke cause and effect alone. His "quinque viae" ("five ways") in those books attempted to demonstrate the existence of God in different ways, including (as way No. 5) the goal-directed actions seen in nature. Raymond of Sabunde's "Liber Naturae Sive Creaturarum, Etc." (or "Theologia Naturalis"), written 1434–1436, marks an important stage in the history of natural theology. John Ray (1627–1705), also known as John Wray, was an English naturalist, sometimes referred to as the father of English natural history. He published important works on plants, animals, and natural theology, with the objective "to illustrate the glory of God in the knowledge of the works of nature or creation". William Derham (1657–1735) continued Ray's tradition of natural theology in two of his own works, "Physico-Theology", published during 1713, and "Astro-Theology", 1714. These later influenced the work of William Paley. 
In "An Essay on the Principle of Population", published during 1798, Thomas Malthus ended with two chapters on natural theology and population. Malthus—a devout Christian—argued that revelation would "damp the soaring wings of intellect", and thus never let "the difficulties and doubts of parts of the scripture" interfere with his work. William Paley, an important influence on Charles Darwin, who studied theology at Christ College in Cambridge, gave a well-known rendition of the teleological argument for God. During 1802 he published "Natural Theology, or Evidences of the Existence and Attributes of the Deity collected from the Appearances of Nature". In this he described the Watchmaker analogy, for which he is probably best known. However, his book, which was one of the most published books of the 19th and 20th century, presents a number of teleological and cosmological arguments for the existence of God. The book served as a template for many subsequent natural theologies during the 19th century. Professor of chemistry and natural history, Edward Hitchcock also studied and wrote on natural theology. He attempted to unify and reconcile science and religion, emphasizing geology. His major work of this type was "The Religion of Geology and its Connected Sciences" (1851). The Gifford Lectures were established by the will of Adam Lord Gifford to "promote and diffuse the study of Natural Theology in the widest sense of the term—in other words, the knowledge of God." The term natural theology as used by Gifford means theology supported by science and not dependent on the miraculous. Debates over the applicability of teleology to scientific questions continued during the nineteenth century, as Paley's argument about design conflicted with radical new theories on the transmutation of species. 
In order to support the scientific ideas of the time, which explored the natural world within Paley's framework of a divine designer, Francis Henry Egerton, 8th Earl of Bridgewater, a gentleman naturalist, commissioned eight Bridgewater Treatises upon his deathbed to explore "the Power, Wisdom, and Goodness of God, as manifested in the Creation." They were published first during the years 1833 to 1840, and afterwards in Bohn's Scientific Library. In response to the claim in Whewell's treatise that "We may thus, with the greatest propriety, deny to the mechanical philosophers and mathematicians of recent times any authority with regard to their views of the administration of the universe", Charles Babbage published what he termed "The Ninth Bridgewater Treatise, A Fragment". As his preface states, this volume was not part of that series, but rather his own considerations of the subject. He draws on his own work on calculating engines to consider God as a divine programmer setting complex laws as the basis of what we think of as miracles, rather than miraculously producing new species by creative whim. There was also a fragmentary supplement to this, published posthumously by Thomas Hill. The theology of the Bridgewater Treatises was often disputed, given that it assumed humans could have knowledge of God acquired by observation and reasoning without the aid of revealed knowledge. The works are of unequal merit; several of them were esteemed as apologetic literature, but they attracted considerable criticism. One notable critic of the Bridgewater Treatises was Edgar Allan Poe, who wrote "Criticism". Robert Knox, an Edinburgh surgeon and major advocate of radical morphology, referred to them as the "Bilgewater Treatises", to mock the "ultra-teleological school". Though memorable, this phrase overemphasises the influence of teleology in the series, at the expense of the idealism of the likes of Kirby and Roget.
https://en.wikipedia.org/wiki?curid=21665
New Zealand English New Zealand English (NZE) is the variant of the English language spoken and written by most English-speaking New Zealanders. Its language code in ISO and Internet standards is en-NZ. English is the first language of the majority of the population. The English language was established in New Zealand by colonists during the 19th century. It is one of "the newest native-speaker variet[ies] of the English language in existence, a variety which has developed and become distinctive only in the last 150 years". The most distinctive influences on New Zealand English have come from Australian English, English in southern England, Irish English, Scottish English, the prestige Received Pronunciation (RP), and Māori. New Zealand English is most similar to Australian English in pronunciation, with some key differences. A prominent difference is the realisation of /ɪ/: in New Zealand English this is pronounced as a schwa. The first dictionary with entries documenting New Zealand English was probably the "Heinemann New Zealand Dictionary", published in 1979. Edited by Harry Orsman (1928–2002), it is a 1,337-page book, with information relating to the usage and pronunciation of terms that were widely accepted throughout the English-speaking world, and those peculiar to New Zealand. It includes a one-page list of the approximate date of entry into common parlance of the many terms found in New Zealand English but not elsewhere, such as "haka" (1827), "Boohai" (1920), and "bach" (1905). A second edition was published in 1989 with the cover subtitle "the first dictionary of New Zealand English and New Zealand pronunciation". A third edition, edited by Nelson Wattie, was published as "The Reed Dictionary of New Zealand English" by Reed Publishing in 2001. The first dictionary fully dedicated to the New Zealand variety of English was "The New Zealand Dictionary", published by New House Publishers in 1994 and edited by Elizabeth and Harry Orsman. 
A second edition was published in 1995, edited by Elizabeth Orsman. In 1997, Oxford University Press produced the Harry Orsman-edited "The Dictionary of New Zealand English: A Dictionary of New Zealandisms on Historical Principles", a 981-page book which it claimed was based on over 40 years of research. This research started with Orsman's 1951 thesis and continued with his editing of this dictionary. To assist with and maintain this work, the New Zealand Dictionary Centre was founded in 1997. It has published several more dictionaries of New Zealand English, including "The New Zealand Oxford Paperback Dictionary", edited by New Zealand lexicographer Tony Deverson in 1998, culminating in the 1,374-page "The New Zealand Oxford Dictionary" in 2004, by Tony Deverson and Graeme Kennedy. A second, revised edition of "The New Zealand Oxford Paperback Dictionary" was published in 2006, this time using standard lexicographical regional markers to identify the New Zealand content, which were absent from the first edition. Another authoritative work is the "Collins English Dictionary", first published in 1979 by HarperCollins, which contains an abundance of well-cited New Zealand words and phrases, drawing from the 650-million-word Bank of English, a British research facility set up at the University of Birmingham in 1980 and funded by Collins publishers. Although this is a British dictionary of International English, there has always been a credited New Zealand advisor for the New Zealand content, namely Professor Ian Gordon from 1979 until 2002 and Professor Elizabeth Gordon of the University of Canterbury since 2003. New Zealand-specific dictionaries compiled from the "Collins English Dictionary" include the "Collins New Zealand Concise English Dictionary" (1982), "Collins New Zealand School Dictionary" (1999) and "Collins New Zealand Paperback Dictionary" (2009). 
Australia's "Macquarie Dictionary" was first published in 1981, and has since become the authority on Australian English. It has always included an abundance of New Zealand words and phrases additional to the mutually shared words and phrases of both countries. Every edition has retained a New Zealander as advisor for the New Zealand content, the first being Harry Orsman and the most recent being noted New Zealand lexicographer Laurie Bauer. A more light-hearted look at English as spoken in New Zealand, "A Personal Kiwi-Yankee Dictionary", was written by the American-born University of Otago psychology lecturer Louis Leland in 1980. This slim volume lists many of the potentially confusing and/or misleading terms for Americans visiting or emigrating to New Zealand. A second edition was published in 1990. From the 1790s, New Zealand was visited by British, French and American whaling, sealing and trading ships. Their crews traded European goods with the indigenous Māori. The first settlers to New Zealand were mainly from Australia, many of them ex-convicts or escaped convicts. Sailors, explorers and traders from Australia and other parts of Europe also settled. When in 1788 the , most of New Zealand was nominally included, but no real legal authority or control was exercised. However, when the New Zealand Company announced in 1839 its plans to establish colonies in New Zealand this and the increased commercial interests of merchants in Sydney and London spurred the British to take stronger action. Captain William Hobson was sent to New Zealand to persuade Māori to cede their sovereignty to the British Crown and on 6 February 1840, Hobson and about forty Māori chiefs signed the Treaty of Waitangi at Waitangi in the Bay of Islands. From this point onward there was considerable European settlement, primarily from England, Wales, Scotland and Ireland; and to a lesser extent the United States, India, China, and various parts of continental Europe. 
Some 400,000 settlers came from Britain, of whom 300,000 stayed permanently. Most were young people, and 250,000 babies were born. New Zealand ceased to be part of New South Wales and became a British colony on 1 July 1841. Gold discoveries in Otago (1861) and Westland (1865) caused a worldwide gold rush that more than doubled the population from 71,000 in 1859 to 164,000 in 1863. Between 1864 and 1865, under the New Zealand Settlements Act 1863, 13 ships carrying citizens of England, Ireland and South Africa arrived in New Zealand under the Waikato Immigration Scheme. In the 1870s and 1880s, several thousand Chinese men, mostly from Guangdong province, migrated to New Zealand to work on the South Island goldfields. Although the first Chinese migrants had been invited by the Otago Provincial government, they quickly became a target of hostility from settlers, and laws were enacted specifically to discourage them from coming to New Zealand thereafter. The European population of New Zealand grew explosively from fewer than 1000 in 1831 to 500,000 by 1881. By 1911 the number of European settlers had reached a million. This colourful history of unofficial and official settlement of peoples from all over Europe, Australia, South Africa, and Asia, and the intermingling of these people with the indigenous Māori, brought about what would eventually evolve into a "New Zealand accent" and a unique regional English lexicon. A distinct New Zealand variant of the English language has been recognised since at least 1912, when Frank Arthur Swinnerton described it as a "carefully modulated murmur". From the beginning of the haphazard Australian and European settlements and the later official British migrations, a new dialect began to form by adopting Māori words to describe the different flora and fauna of New Zealand, for which English did not have words of its own. 
The New Zealand accent appeared first in towns with mixed populations of immigrants from Australia, England, Ireland, and Scotland. These included the militia towns of the North Island and the gold-mining towns of the South Island. In more homogeneous towns such as those in Otago and Southland, settled mainly by people from Scotland, the New Zealand accent took longer to appear. Since the late 20th century New Zealand society has gradually divested itself of its fundamentally British roots and has adopted influences from all over the world, especially in the early 21st century, when New Zealand experienced an increase in non-British immigration that has since brought about a more prominent multinational society. The Internet, television, movies and popular music have all brought international influences into New Zealand society and the New Zealand lexicon. Americanisation of New Zealand society and language has been taking place subtly and gradually since World War II and especially since the 1970s, as has also happened in neighbouring Australia. In February 2018, Clayton Mitchell MP from New Zealand First led a campaign for English to be recognised as an official language in New Zealand. Not all New Zealanders have the same accent, as the level of cultivation (i.e. the closeness to Received Pronunciation) of every speaker's accent differs. The phonology in this section is of an educated speaker of New Zealand English, and uses a transcription system designed specifically to represent the New Zealand accent faithfully. It transcribes some of the vowels differently, and transcribes the approximant with the same symbol even in phonemic transcription. New Zealand English has a number of dialectal words and phrases. These are mostly informal terms that are more common in casual speech. Numerous loanwords have been taken from the Māori language or from Australian English. New Zealand adopted decimal currency in 1967 and the metric system in 1974. 
Despite this, several imperial measures are still widely encountered and usually understood, such as feet and inches for a person's height, pounds and ounces for an infant's birth weight, and in colloquial terms such as referring to drinks in pints. In the food manufacturing industry in New Zealand both metric and non-metric systems of weight are used and usually understood, owing to raw food products being imported from both metric and non-metric countries. However, per the December 1976 Weights and Measures Amendment Act, all foodstuffs must be retailed using the metric system. In general, knowledge of non-metric units is diminishing. The word "spud" for "potato", now common throughout the English-speaking world, is first recorded in New Zealand English. As with Australian English, but in contrast to most other forms of the language, some speakers of New Zealand English use both the terms "bath" and "bathe" as verbs, with "bath" used as a transitive verb (e.g. "I will bath the dog"), and "bathe" used predominantly, but not exclusively, as an intransitive verb (e.g. "Did you bathe?"). Both the words "amongst" and "among" are used, as in British English. The same is true for two other pairs, "whilst" and "while" and "amidst" and "amid". New Zealand English terms of Australian origin include "bushed" (lost or bewildered), "chunder" (to vomit), "drongo" (a foolish or stupid person), "fossick" (to search), "jumbuck" (sheep, from Australian pidgin), "larrikin" (mischievous person), "Maccas" (slang for McDonald's food), "maimai" (a duckshooter's hide; originally a makeshift shelter, from aboriginal "mia-mia"), "paddock" (field, or meadow), "pom" or "pommy" (an Englishman), "skite" (verb: to boast), "station" (for a very large farm), "wowser" (non-drinker of alcohol, or killjoy), and "ute" (pickup truck). 
Advancing from its British and Australian English origins, New Zealand English has evolved to include many terms of American origin, or terms otherwise used in American English, in preference to the equivalent contemporary British terms. Some examples of such words in New Zealand English are "bobby pin" for the British "hair pin", "muffler" for "silencer", "truck" for "lorry", "station wagon" for "estate car", "stove" for "cooker", "creek" over "brook" or "stream", "hope chest" for "bottom drawer", "eggplant" for "aubergine", "hardware store" for "ironmonger", "median strip" for "central reservation", "stroller" for "pushchair", "pushup" for "press-up", "potato chip" for "potato crisp", and "cellphone" or "cell" over the British "mobile phone" or "mobile". Other examples of vocabulary directly borrowed from American English include "the boonies", "bucks" (dollars), "bushwhack" (fell timber), "butt" (bum or arse), "ding" (dent), "dude", "duplex", "faggot" or "fag" (interchangeable with the British "poof" and "poofter"), "figure" (to think or conclude; consider), "hightail it", "homeboy", "hooker", "lagoon", "lube" (oil change), "man" (in place of "mate" or "bro" in direct address), "major" (to study or qualify in a subject), "to be over" [some situation] (be fed up), "rig" (large truck), "sheltered workshop" (workplace for disabled persons), "spat" (a small argument), "subdivision", and "tavern". Regarding grammar, since about 2000 the American "gotten" has increasingly been used as the past participle of "get", instead of the standard British English "got". In a number of instances, terms of British and American origin can be used interchangeably. Additionally, many American borrowings are not unique to New Zealand English, and may be found in other dialects of English, including British English. 
In addition to word and phrase borrowings from Australian, British and American English, New Zealand has its own unique words and phrases derived entirely in New Zealand. Slang aside, many of these New Zealandisms are words for common items, often based on which major brands become eponyms. Some New Zealanders often reply to a question with a statement spoken with a rising intonation at the end. This often has the effect of making their statement sound like another question. There is enough awareness of this that it is seen in exaggerated form in comedy parody of New Zealanders, such as in the 1970s comedy character "Lyn of Tawa". This rising intonation can also be heard at the end of statements that are not in response to a question but to which the speaker wishes to add emphasis. High rising terminals are also heard in Australia. In informal speech, some New Zealanders use the third person feminine "she" in place of the third person neuter "it" as the subject of a sentence, especially when the subject is the first word of the sentence. The most common use of this is in the phrase "She'll be right", meaning either "It will be okay" or "It is close enough to what is required". Similar to Australian English are uses such as "she was a great car" or "she's a real beauty, this [object]". Another specific New Zealand usage is the way in which New Zealanders refer to the country's two main islands. They are always (except on maps) referred to as "the North Island" and "the South Island". Because of their size, New Zealanders tend to think of these two islands as being 'places', rather than 'pieces of land', so the preposition "in" (rather than "on") is usually used – for example, "my mother lives in the North Island", "Christchurch is in the South Island". 
This is true only for the two main islands; for smaller islands, the usual preposition "on" is used – for example, "on Stewart Island" (the third largest), or "on Waiheke Island" (the third most populous). Many local everyday words have been borrowed from the Māori language, including words for local flora, fauna, place names and the natural environment. The dominant influence of Māori on New Zealand English is lexical. A 1999 estimate based on the Wellington corpora of written and spoken New Zealand English put the proportion of words of Māori origin at approximately 0.6%, mostly place and personal names. The everyday use of Māori words, usually colloquial, occurs most prominently among youth, young adults and Māori populations. Examples are "kia ora" ("hello"), "nau mai" ("welcome"), and "kai" ("food"). Māori is ever present and has a significant conceptual influence in the legislature, government, and community agencies (e.g. health and education), where legislation requires that proceedings and documents be translated into Māori (under certain circumstances, and when requested). Political discussion and analysis of issues of sovereignty, environmental management, health, and social well-being thus rely on Māori at least in part. Māori as a spoken language is particularly important wherever community consultation occurs. Recognisable regional variations are slight, except for Southland and the southern part of neighbouring Otago, with its "Southland burr." This southern area traditionally received heavy immigration from Scotland (see Dunedin). Several words and phrases common in Scots or Scottish English persist there: examples include the use of "wee" for "small", and phrases such as "to do the messages" meaning "to go shopping". 
Other Southland features which may also relate to early Scottish settlement are the use of the vowel in a set of words such as "dance" and "castle", which is also common in Australian English, and the maintenance of the /ʍ/ ~ /w/ distinction (e.g. where "which" and "witch" are not homophones). Recent research (2012) suggests that postvocalic /r/ is not restricted to Southland but is found also in the central North Island, where there may be a Pasifika influence, but also a possible influence from modern New Zealand hip-hop music, which has been shown to have high levels of non-prevocalic /r/ after the vowel. Taranaki has been said to have a minor regional accent, possibly due to the high number of immigrants from the south-west of England. However, this is becoming less pronounced. Some Māori have an accent distinct from the general New Zealand accent, and also tend to include Māori words more frequently. Comedian Billy T. James and the "bro'Town" TV programme were notable for featuring exaggerated versions of this. Linguists recognise this as "Māori English", and describe it as strongly influenced by syllable-timed Māori speech patterns. Linguists count "Pākehā English" as the other main accent, and note that it is beginning to adopt similar rhythms, distinguishing it from other stress-timed English accents. It is commonly held that New Zealand English is spoken very quickly. This idea is supported by a study comparing adult New Zealand English and American English speakers, which observed faster speaking and articulation rates among the New Zealand English group overall. However, a similar study with American and New Zealand English-speaking children found the opposite, with the speaking and articulation rates of the New Zealand children being slower. The same study proposed that differences in the relative number of tense and lax vowels between the two speaker groups may have influenced the speaking and articulation rates.
https://en.wikipedia.org/wiki?curid=21670
North American English North American English (NAmE, NAE) is the most generalized variety of the English language as spoken in the United States and Canada. Because of their related histories and cultures, plus the similarities between the pronunciation (accent), vocabulary, and grammar of American English and Canadian English, the two spoken varieties are often grouped together under a single category. Canadians are generally tolerant of both British and American spellings, with British spellings being favored in more formal settings and in Canadian print media. The United Empire Loyalists who fled the American Revolution (1775–1783) have had a large influence on Canadian English from its early roots. Some terms in North American English are used almost exclusively in Canada and the United States (for example, the terms "diaper" and "gasoline" are widely used instead of "nappy" and "petrol"). Although many English speakers from outside North America regard such terms as distinct Americanisms, they are often just as common in Canada, mainly due to the effects of heavy cross-border trade and cultural penetration by the American mass media. The list of divergent words becomes longer if considering regional Canadian dialects, especially as spoken in the Atlantic provinces and parts of Vancouver Island where significant pockets of British culture still remain. There are a considerable number of different accents within the regions of both the United States and Canada, originally deriving from the accents prevalent in different English, Scottish and Irish regions of the British Isles and corresponding to settlement patterns of these peoples in the colonies. These were developed and built upon as new waves of immigration, and migration across the North American continent, brought new accents and dialects to new areas, and as these ways of speaking merged and assimilated with the population. 
It is claimed that despite centuries of linguistic change there is still a resemblance between the English East Anglia accents which would have been used by early English settlers in New England (including the Pilgrims) and modern Northeastern United States accents. Similarly, the accents of Newfoundland have some similarities to the accents of Scotland and Ireland. Thirteen major North American English accents can be defined by particular characteristics. A majority of North American English (for example, in contrast to British English) includes phonological features that concern consonants, such as rhoticity (full pronunciation of all /r/ sounds), conditioned T-glottalization (with "satin" pronounced , not ), T- and D-flapping (with "metal" and "medal" pronounced the same, as ), L-velarization (with "filling" pronounced , not ), as well as features that concern vowel sounds, such as various vowel mergers before (so that "Mary", "marry", and "merry" are all commonly pronounced the same), raising of pre-voiceless (with "price" and "bright" using a higher vowel sound than "prize" and "bride"), the weak vowel merger (with "affected" and "effected" often pronounced the same), at least one of the vowel mergers (the – merger is completed among virtually all Americans and the – merger among nearly half, while both are completed among virtually all Canadians), and yod-dropping (with "new" pronounced , not ). The last item is more advanced in American English than in Canadian English.
https://en.wikipedia.org/wiki?curid=21673
Natural resource Natural resources are resources that exist without any action of humankind. This includes all valued characteristics such as magnetic, gravitational, and electrical properties and forces. On Earth, it includes sunlight, atmosphere, water, land (including all minerals) along with all vegetation, crops, and animal life that naturally subsists upon or within the previously identified characteristics and substances. Particular areas such as the rainforest in Fatu-Hiva are often characterized by the biodiversity and geodiversity existent in their ecosystems. Natural resources may be further classified in different ways. Natural resources are materials and components (something that can be used) that can be found within the environment. Every man-made product is composed of natural resources (at its fundamental level). A natural resource may exist as a separate entity, such as fresh water, air, or a living organism such as a fish, or it may exist in an alternate form that must be processed to obtain the resource, such as metal ores, rare-earth elements, petroleum, and most forms of energy. There is much debate worldwide over natural resource allocations. This is particularly true during periods of increasing scarcity and shortages (depletion and overconsumption of resources). There are various methods of categorizing natural resources; these include source of origin, stage of development, and renewability. On the basis of origin, natural resources may be divided into biotic and abiotic resources; considering their stage of development, they may be referred to as potential, actual, reserve or stock resources; and on the basis of recovery rate, they can be categorized as renewable or non-renewable. Resource extraction involves any activity that withdraws resources from nature. This can range in scale from the traditional use of preindustrial societies to global industry. Extractive industries are, along with agriculture, the basis of the primary sector of the economy. 
Extraction produces raw material, which is then processed to add value. Examples of extractive industries are hunting, trapping, mining, oil and gas drilling, and forestry. Natural resources can add substantial amounts to a country's wealth; however, a sudden inflow of money caused by a resource boom can create social problems, including inflation harming other industries ("Dutch disease") and corruption, leading to inequality and underdevelopment. This is known as the "resource curse". Extractive industries represent a large and growing activity in many less-developed countries, but the wealth generated does not always lead to sustainable and inclusive growth. People often accuse extractive industry businesses of acting only to maximize short-term value, implying that less-developed countries are vulnerable to powerful corporations. Alternatively, host governments are often assumed to be only maximizing immediate revenue. Researchers argue there are areas of common interest where development goals and business cross. These present opportunities for international governmental agencies to engage with the private sector and host governments through revenue management and expenditure accountability, infrastructure development, employment creation, skills and enterprise development, and impacts on children, especially girls and women. A strong civil society can play an important role in ensuring the effective management of natural resources. Norway can serve as a role model in this regard, as it has good institutions and an open and dynamic public debate, with strong civil society actors that provide an effective system of checks and balances for the government's management of extractive industries. In recent years, the depletion of natural resources has become a major focus of governments and organizations such as the United Nations (UN). This is evident in the UN's Agenda 21 Section Two, which outlines the necessary steps for countries to take to sustain their natural resources. 
The depletion of natural resources is considered a sustainable development issue. The term sustainable development has many interpretations, most notably the Brundtland Commission's 'to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs'; in broad terms, however, it is balancing the needs of the planet's people and species now and in the future. With regard to natural resources, depletion is of concern for sustainable development as it has the ability to degrade current environments and the potential to impact the needs of future generations. Depletion of natural resources is associated with social inequity. Considering that most biodiversity is located in developing countries, depletion of this resource could result in losses of ecosystem services for these countries. Some view this depletion as a major source of social unrest and conflicts in developing nations. At present, there is particular concern for rainforest regions, which hold most of the Earth's biodiversity. According to Nelson, deforestation and degradation affect 8.5% of the world's forests, with 30% of the Earth's surface already cropped. If we consider that 80% of people rely on medicines obtained from plants and that many of the world's prescription medicines have ingredients taken from plants, the loss of the world's rainforests could mean losing the chance to find more potentially life-saving medicines. The depletion of natural resources is caused by 'direct drivers of change' such as mining, petroleum extraction, fishing, and forestry, as well as 'indirect drivers of change' such as demography (e.g. population growth), economy, society, politics, and technology. The current practice of agriculture is another factor causing depletion of natural resources, for example the depletion of soil nutrients due to excessive use of nitrogen, and desertification. The depletion of natural resources is a continuing concern for society. 
Theodore Roosevelt, a well-known conservationist and former United States president, was among those opposed to unregulated natural resource extraction. In 1982, the United Nations developed the World Charter for Nature, which recognized the need to protect nature from further depletion due to human activity. It states that measures must be taken at all societal levels, from international to individual, to protect nature. It outlines the need for sustainable use of natural resources and suggests that the protection of resources should be incorporated into national and international systems of law. To look further at the importance of protecting natural resources, the World Ethic of Sustainability, developed by the IUCN, WWF and the UNEP in 1990, set out eight values for sustainability, including the need to protect natural resources from depletion. Since the development of these documents, many measures have been taken to protect natural resources, including the establishment of conservation biology as a scientific field and habitat conservation as a practice. Conservation biology is the scientific study of the nature and status of Earth's biodiversity, with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction. It is an interdisciplinary subject drawing on science, economics and the practice of natural resource management. The term "conservation biology" was introduced as the title of a conference held at the University of California, San Diego, in La Jolla, California, in 1978, organized by biologists Bruce A. Wilcox and Michael E. Soulé. Habitat conservation is a land management practice that seeks to conserve, protect and restore habitat areas for wild plants and animals, especially conservation-reliant species, and prevent their extinction, fragmentation or reduction in range. 
Natural resource management is a discipline in the management of natural resources such as land, water, soil, plants, and animals, with a particular focus on how management affects the quality of life for present and future generations. Hence, sustainable development depends on the judicious use of resources to supply both the present generation and future generations. Management of natural resources involves identifying who has the right to use the resources and who does not, and defining the boundaries of the resource. The resources may be managed by the users according to rules governing when and how the resource is used, depending on local conditions, or they may be managed by a governmental organization or other central authority. Successful management of natural resources depends on "freedom of speech, a dynamic and wide-ranging public debate through multiple independent media channels and an active civil society engaged in natural resource issues". Because of the shared nature of the resources, the individuals who are affected by the rules can participate in setting or changing them. The users have the right to devise their own management institutions and plans, with recognition by the government. The right to resources includes land, water, fisheries and pastoral rights. The users, or parties accountable to the users, have to actively monitor the utilisation of the resource, ensure compliance with the rules, and impose penalties on those who violate them. Conflicts are resolved quickly and at low cost by local institutions, according to the seriousness and context of the offence. The global science-based platform to discuss natural resources management is the World Resources Forum, based in Switzerland.
https://en.wikipedia.org/wiki?curid=21675
Nancy Sinatra Nancy Sandra Sinatra (born June 8, 1940) is an American singer and actress. She is the elder daughter of Frank Sinatra and Nancy (née Barbato) Sinatra, and is widely known for her 1966 signature hit "These Boots Are Made for Walkin'". Other defining recordings include "Sugar Town", the 1967 number one "Somethin' Stupid" (a duet with her father), the title song from the James Bond film "You Only Live Twice", and several collaborations with Lee Hazlewood, such as "Jackson", "Summer Wine" and her cover of Cher's "Bang Bang (My Baby Shot Me Down)". Nancy Sinatra began her career as a singer and actress in November 1957 with an appearance on her father's ABC-TV variety series, but initially achieved success only in Europe and Japan. In early 1966 she had a transatlantic number-one hit with "These Boots Are Made for Walkin'". She appeared on TV in high boots, and with colorfully dressed go-go dancers, creating a popular and enduring image of the Swinging Sixties. The song was written by Lee Hazlewood, who wrote and produced most of her hits and sang with her on several duets, including "Some Velvet Morning". In 1966 and 1967, Sinatra charted with 13 titles, all of which featured Billy Strange as arranger and conductor. Sinatra also had a brief acting career in the mid-1960s, including a co-starring role with Elvis Presley in the movie "Speedway", and with Peter Fonda in "The Wild Angels". In "Marriage on the Rocks", Frank and Nancy Sinatra played a fictional father and daughter. Sinatra was born on June 8, 1940, in Jersey City, New Jersey. She is the eldest of the three children Frank Sinatra had with his first wife, Nancy Barbato (1917–2018). Both of her parents were of Italian ancestry. When she was a toddler, the family moved to Hasbrouck Heights, New Jersey. They later moved again to Toluca Lake, California, for Frank Sinatra's Hollywood career. 
There, she spent many years taking piano, dance and dramatic performance lessons, as well as undergoing months of voice lessons. In the late 1950s, Sinatra began to study music, dancing, and voice at the University of California, Los Angeles. She dropped out after a year, and made her professional debut in 1960 on her father's television special celebrating the return of Elvis Presley from Europe following his discharge from service in the U.S. Army. Nancy was sent to the airport on behalf of her father to welcome Elvis when his plane landed. On the special, Nancy and her father danced and sang a duet, "You Make Me Feel So Young/Old". That same year she began a five-year marriage to Tommy Sands. Sinatra was signed to her father's label, Reprise Records, in 1961. Her first single, "Cuff Links and a Tie Clip", went largely unnoticed. However, subsequent singles charted in Europe and Japan. Without a hit in the US by 1965, she was on the verge of being dropped. Her singing career received a boost with the help of songwriter/producer/arranger Lee Hazlewood, who had been making records for ten years, notably with Duane Eddy. Hazlewood's collaboration with Sinatra began when Frank Sinatra asked Lee to help boost his daughter's career. When recording "These Boots Are Made for Walkin'", Hazlewood is said to have made this suggestion to Nancy: "You can't sing like Nancy Nice Lady anymore. You have to sing for the truckers". She later described him as "part Henry Higgins and part Sigmund Freud". Hazlewood had her sing in a lower key and crafted songs for her. Bolstered by an image overhaul, including bleached-blonde hair, frosted lips, heavy eye make-up and Carnaby Street fashions, Sinatra made her mark on the American (and British) music scene in early 1966 with "These Boots Are Made for Walkin'", its title inspired by a line in Robert Aldrich's 1963 western comedy "4 for Texas" starring her father and Dean Martin. 
One of her many hits written by Hazlewood, it received three Grammy Award nominations, including two for Sinatra and one for arranger Billy Strange. It sold over one million copies, and was awarded a gold disc. She appeared on TV in high boots, and with colorfully dressed go-go dancers, a craze during the late '60s, and created a popular and enduring image of the Swinging Sixties. A run of chart singles followed, including the two 1966 top 10 hits "How Does That Grab You, Darlin'?" (U.S. No. 7) and "Sugar Town" (U.S. No. 5). "Sugar Town" became her second million-seller. The ballad "Somethin' Stupid", a duet with her father, hit No. 1 in the U.S. and the U.K. in April 1967 and spent nine weeks at the top of Billboard's easy listening chart. The pair thus became the only father-daughter duo to top the Hot 100, with a record DJs dubbed 'the incest song' because it was performed as if sung by two lovers. It earned a Grammy Award nomination for Record of the Year and became Sinatra's third million-selling disc. Other 45s showing her forthright delivery include "Friday's Child" (U.S. No. 36, 1966), and the 1967 hits "Love Eyes" (U.S. No. 15) and "Lightning's Girl" (U.S. No. 24). She rounded out 1967 with the raunchy but low-charting "Tony Rome" (U.S. No. 83), the title track from the detective film "Tony Rome" starring her father, while her first solo single in 1968 was the more wistful "100 Years" (U.S. No. 69). In 1968, she recorded the Kenny Young song "The Highway Song" with Mickie Most producing for the U.K. and European markets. The song reached the top 20 in the U.K. and other European countries. Sinatra enjoyed a parallel recording career cutting duets with the husky-voiced, country-and-western-inspired Hazlewood, starting with "Summer Wine" (originally the B-side of "Sugar Town"). Their biggest hit was a cover of the country song "Jackson". The single peaked at No. 
14 on the "Billboard" Hot 100 in the summer of 1967, when Johnny Cash and June Carter Cash also made the song their own. In December, they released the "MOR"-psychedelic single "Some Velvet Morning", regarded as one of the more unusual singles in pop, and the peak of Sinatra and Hazlewood's vocal collaborations. It reached No. 26 in the U.S. The promo clip is, like the song, sui generis. The British broadsheet "The Daily Telegraph" placed "Some Velvet Morning" in pole position in its 2003 list of the Top 50 Best Duets Ever. ("Somethin' Stupid" ranked number 27.) In 2017, National Public Radio made this comment about the album "Nancy & Lee": "its sly, sultry movements both are a gem of traditional '60s pop and an inversion of traditional conceptions of romance". In 1967, Sinatra recorded the theme song for the James Bond film "You Only Live Twice". In the liner notes of the CD reissue of her 1966 album "Nancy in London", Sinatra states that she was "scared to death" of recording the song, and asked the songwriters: "Are you sure you don't want Shirley Bassey?" There are two versions of the Bond theme. The first is the lushly orchestrated track featured during the opening and closing credits of the film. The second, more guitar-heavy, version appeared on the double A-sided single with "Jackson", though the Bond theme stalled at No. 44 on the U.S. Billboard Hot 100. "Jackson"/"You Only Live Twice" was more successful in the U.K., reaching No. 11 on the singles chart during a nineteen-week chart run (in the Top 50) that saw the single become the 70th-best-selling single of 1967 in the U.K. In 1966 and 1967, Sinatra traveled to Vietnam to perform for the U.S. troops. Many US soldiers adopted her song "These Boots Are Made for Walkin'" as their anthem, as shown in Pierre Schoendoerffer's documentary "The Anderson Platoon" (1967) and reprised in a scene in Stanley Kubrick's "Full Metal Jacket" (1987). 
Sinatra recorded several anti-war songs, including "My Buddy", featured on her album "Sugar"; "Home", co-written by Mac Davis; and "It's Such A Lonely Time of Year", which appeared on the 1968 LP "The Sinatra Family Wish You a Merry Christmas". In 1988, Sinatra recreated her Vietnam concert appearances on an episode of the television show "China Beach". Today, Sinatra still performs for charitable causes supporting U.S. veterans who served in Vietnam, including Rolling Thunder Inc. In 1963, she played the young secretary in the "Burke's Law" episode "Who Killed Wade Walker?". She starred in three teen musicals ("beach party films")—"For Those Who Think Young" (1964), "Get Yourself a College Girl" (1964) and "The Ghost in the Invisible Bikini" (1966)—the last of which featured her in a singing role. She was also scheduled to appear in the role that went to Linda Evans in "Beach Blanket Bingo", but did not do the film because the character was kidnapped in the story and the parallel to her brother Frank Sinatra Jr.'s kidnapping was not considered tasteful. In 1965, Sinatra was a guest with Woody Allen on the game show "Password". In 1966, she appeared as herself in "The Oscar" and starred in "The Last of the Secret Agents", also singing its title song. She also starred in Roger Corman's biker story "The Wild Angels" with Peter Fonda and Bruce Dern, then in 1968 shared the screen with Elvis Presley in his musical comedy "Speedway"—her final film. She was the only singer to have a solo song on a full-length Elvis album or soundtrack; Ann-Margret had performed a solo in the 1964 film "Viva Las Vegas", but that film's soundtrack was an EP rather than a full-length LP. Sinatra made appearances on "The Ed Sullivan Show", "The Smothers Brothers Comedy Hour", "The Man from U.N.C.L.E.", "Rowan & Martin's Laugh-In" and "The Virginian", and in a 1967 Christmas-themed episode of "The Dean Martin Show" that featured the Sinatra and Martin families.
Nancy starred in television specials that included the 1966 Frank Sinatra special "A Man and His Music – Part II" and the 1967 NBC TV special "Movin' With Nancy", in which she appeared with Lee Hazlewood, her father, and his Rat Pack pals Dean Martin and Sammy Davis Jr., with a cameo by her brother Frank Sinatra Jr. and a guest-star appearance by "West Side Story" dancer David Winters. Jack Haley Jr. directed and produced the special and received an Emmy Award for Outstanding Directorial Achievement in Music or Variety. At one point in the program, Sinatra shared a kiss with Sammy Davis Jr. She has stated, "The kiss [was] one of the first interracial kisses seen on television and it caused some controversy then, and now. [But] contrary to some inaccurate online reports, the kiss was unplanned and spontaneous." The special features choreography and dancing by David Winters. As there was no Emmy Award category for choreography, one of the show's two Emmy nominations was placed in the "Special Classification of Individual Achievements" category; Winters lost to co-winners "The Smothers Brothers Comedy Hour" and "The Jackie Gleason Show". Possibly due to this special's success and its choreography, a new category for "Outstanding Choreography" was created by the Emmys the following year. "Movin' With Nancy" was sponsored by Royal Crown Cola. Sinatra remained with Reprise until 1970. In 1971, she signed with RCA Records, resulting in three albums: "Nancy & Lee – Again" (1971), "Woman" (1972), and a compilation of some of her Reprise recordings under the title "This Is Nancy Sinatra" (1973). That year she released a non-LP single, "Sugar Me" b/w "Ain't No Sunshine"; the former was written by Lynsey De Paul and Barry Blue and, with other covers of works by early-'70s popular songwriters, resurfaced on the 1998 album "How Does It Feel". In the autumn of 1971, Sinatra and Hazlewood's duet "Did You Ever?" reached number two on the UK Singles Chart.
In 1972, they performed for a Swedish documentary, "Nancy & Lee In Las Vegas", which chronicled their Vegas concerts at the Riviera Hotel and featured solo numbers and duets from the concerts, behind-the-scenes footage, and scenes of Sinatra's late husband, Hugh Lambert, and her mother. The film did not appear until 1975. By 1975, she was releasing singles on Private Stock; these are among her records most sought after by collectors. Among those released were "Kinky Love", "Annabell of Mobile", "It's for My Dad", and "Indian Summer" (with Hazlewood). "Kinky Love" was banned by some radio stations in the 1970s for "suggestive" lyrics. It saw the light of day on CD in 1998 on "Sheet Music: A Collection of Her Favorite Love Songs". Pale Saints covered the song in 1991. By the mid-1970s, she had slowed her musical activity and ceased acting to concentrate on being a wife and mother. She returned to the studio in 1981 to record a country album with Mel Tillis called "Mel & Nancy". Two of their songs made the "Billboard" country singles chart: "Texas Cowboy Night" (No. 23) and "Play Me or Trade Me" (No. 43). In 1985, she wrote the book "Frank Sinatra, My Father". At 54, Sinatra posed for "Playboy" in the May 1995 issue and made appearances on TV shows to promote her album "One More Time". The magazine appearance caused some controversy. On the talk show circuit, she said her father was proud of the photos. Sinatra told Jay Leno during a 1995 "Tonight Show" appearance that her daughters gave their approval, but her mother said she should ask her father before committing to the project. Sinatra claims that when she told her father what "Playboy" would be paying her, he said, "Double it." Taking her father's advice from when she began her recording career ("Own your own masters"), she owns or holds an interest in most of her material, including videos.
On Monday, August 12, 2002, Nancy appeared live in concert for the first time in the UK at The Liquid Rooms, Edinburgh, as part of the official Edinburgh International Festival. Her musical director and keyboardist was long-time collaborator and former member of the Wrecking Crew, Don Randi. This sold-out one-off concert was filmed by the BBC, and an edited version including brief interview and tourist-type segments was later broadcast on BBC Four. In 2004, she collaborated with former Los Angeles neighbor Morrissey to record a version of his song "Let Me Kiss You", which was featured on her autumn release "Nancy Sinatra". The single—released the same day as Morrissey's version—charted at No. 46 in the UK, providing Sinatra with her first hit in over 30 years. The follow-up single, "Burnin' Down the Spark", failed to chart. The album featured rock performers such as Calexico, Sonic Youth, U2, Pulp's Jarvis Cocker, Steven Van Zandt, Jon Spencer, and Pete Yorn, all of whom cited Sinatra as an influence; each artist crafted a song for Sinatra to sing on the album. Two years later, EMI released "The Essential Nancy Sinatra"—a UK-only greatest-hits compilation featuring the previously unreleased track "Machine Gun Kelly". The record was Sinatra's first to make the UK album charts (No. 73) in 30 years (since her "Did You Ever?" made No. 31 on the UK charts in 1971). Sinatra also recorded "Another Gay Sunshine Day" for "Another Gay Movie" in 2006. Sinatra received her own star on the Hollywood Walk of Fame on May 11, 2006; in 2002, a Golden Palm Star on the Palm Springs, California, Walk of Stars had been dedicated to her. Sinatra appeared as herself on one of the final episodes ("Chasing It") of the HBO mob drama "The Sopranos". Her brother Frank Jr. had previously appeared in the 2000 episode "The Happy Wanderer".
In 2007, Sinatra and Anoushka Shankar recorded a public service announcement for Deejay Ra's "Hip-Hop Literacy" campaign, encouraging the reading of music- and movie-related books and screenplays. September 2009 saw the release of Sinatra's digital-only album "Cherry Smiles: The Rare Singles", featuring previously unreleased tracks and songs only available on 45. She hosted a weekly show called "Nancy for Frank" on the Sirius Satellite Radio channel "Siriusly Sinatra", on which she shared her personal insights about her late father. On April 11, 2011, Black Devil Disco Club released their second album, which features Sinatra's vocals on "To Ardent"; a single featuring the album version and several remixes of "To Ardent" was released on May 23, 2011. The single "Jack in Boots" by Lempo and Japwow, featuring Nancy on vocals, was also released in 2011 on SuSu Music, reaching No. 13 in the Music Week Club Chart (UK) and No. 36 on Beatport, and was plugged on Capital FM, BBC 6Music and BBC Radio One. In Irvine, California, on August 3, 2013, Sinatra joined alt-rock band Wilco on "Bang Bang" and "These Boots Are Made for Walkin'" in their support set for the Bob Dylan-headlined AmericanaramA tour. On December 3, 2013, Sinatra released the digital-only album "Shifting Gears", featuring 15 previously unreleased tracks from the vault, including a rendition of Neil Diamond's "Holly Holy". The orchestra tracks were recorded in the 1970s, while Sinatra was touring with a 40-piece orchestra; her vocal tracks were recorded within 10 years of the release of the collection. In May 2017, Sinatra's 1967 hit duet with Lee Hazlewood, "Summer Wine", was used by clothing-retail giant H&M in their "The Summer Shop 2017" ad campaign; as a result, the song debuted at No. 1 on "Billboard" magazine and Clio's Top TV Commercials chart for May 2017. Her two daughters with Lambert were each left US$1 million from their grandfather Frank Sinatra's will, in a trust fund started in 1983.
https://en.wikipedia.org/wiki?curid=21683
New Amsterdam New Amsterdam was a 17th-century Dutch settlement established at the southern tip of Manhattan Island that served as the seat of the colonial government in New Netherland. It began as a "factorij", or trading post, and grew into a settlement outside Fort Amsterdam. The fort was situated on the strategic southern tip of the island of Manhattan and was meant to defend the fur trade operations of the Dutch West India Company in the North River (Hudson River). In 1624, it became a provincial extension of the Dutch Republic and was designated as the capital of the province in 1625. By 1655, the population of New Netherland had grown to 2,000 people, with 1,500 living in New Amsterdam. By 1664, the population of New Netherland had skyrocketed to almost 9,000 people, 2,500 of whom lived in New Amsterdam, 1,000 near Fort Orange, and the remainder in other towns and villages. In 1664 the English took over New Amsterdam and renamed it New York City after the Duke of York (later James II & VII). After the Second Anglo-Dutch War of 1665–67, England and the United Provinces of the Netherlands agreed to the status quo in the Treaty of Breda: the English kept the island of Manhattan, with the Dutch giving up their claim to the town and the rest of the colony, while the English formally ceded Surinam in South America and the island of Run in the East Indies to the Dutch, confirming Dutch control of the valuable Spice Islands. Today much of what was once New Amsterdam lies within New York City. In 1524, nearly a century before the arrival of the Dutch, the site that later became New Amsterdam was named New Angoulême by the Italian explorer Giovanni da Verrazzano, to commemorate his patron King Francis I of France, former Count of Angoulême.
The first recorded exploration by the Dutch of the area around what is now called New York Bay was in 1609, with the voyage of the ship "Halve Maen" (English: "Half Moon"), captained by Henry Hudson in the service of the Dutch Republic, as the emissary of Maurice of Nassau, Prince of Orange, Holland's stadholder. Hudson named the river the Mauritius River. He was covertly attempting to find the Northwest Passage for the Dutch East India Company. Instead, he brought back news of the potential for exploiting beaver in the region, and in the following years the Dutch sent private commercial missions to the area. At the time, beaver pelts were highly prized in Europe, because the fur could be felted to make waterproof hats. A by-product of the trade in beaver pelts was castoreum—the secretion of the animals' anal glands—which was used for its medicinal properties and for perfumes. The expeditions by Adriaen Block and Hendrick Christiaensen in 1611, 1612, 1613 and 1614 resulted in the surveying and charting of the region from the 38th parallel to the 45th parallel. On their 1614 map, which earned them a four-year trade monopoly under a patent of the States General, they named the newly discovered and mapped territory New Netherland for the first time. The map also showed the first year-round trading presence in New Netherland, Fort Nassau, which would be replaced in 1624 by Fort Orange, which eventually grew into the town of Beverwijck, now Albany. Dominican trader Juan Rodriguez (rendered in Dutch as Jan Rodrigues), born in Santo Domingo of Portuguese and African descent, arrived on Manhattan Island during the winter of 1613–1614, trapping for pelts and trading with the local population as a representative of the Dutch. He was the first recorded non-Native American inhabitant of what would eventually become New York City.
The territory of New Netherland was originally a private, profit-making commercial enterprise focused on cementing alliances and conducting trade with the diverse Native American ethnic groups. Surveying and exploration of the region was conducted as a prelude to an anticipated official settlement by the Dutch Republic, which occurred in 1624. In 1620 the Pilgrims attempted to sail to the Hudson River from England. However, "Mayflower" reached Cape Cod (now part of Massachusetts) on November 9, 1620, after a voyage of 64 days. For a variety of reasons, primarily a shortage of supplies, "Mayflower" could not proceed to the Hudson River, and the colonists decided to settle near Cape Cod, establishing the Plymouth Colony. The mouth of the Hudson River was selected as the ideal place for initial settlement as it had easy access to the ocean while also securing an ice-free lifeline to the beaver trading post near present-day Albany. Here, Native American hunters supplied them with pelts in exchange for European-made trade goods and wampum, which was soon being made by the Dutch on Long Island. In 1621, the Dutch West India Company was founded. Between 1621 and 1623, orders were given to the private, commercial traders to vacate the territory, thus opening up the territory to Dutch settlers and company traders. It also allowed the laws and ordinances of the states of Holland to apply. Previously, during the private, commercial period, only the law of the ship had applied. In May 1624, the first settlers in New Netherland arrived on Noten Eylandt (Nut or Nutten Island, now Governors Island) aboard the ship "New Netherland" under the command of Cornelius Jacobsen May, who disembarked on the island with thirty families to take legal possession of the New Netherland territory. 
The families were then dispersed to Fort Wilhelmus on Verhulsten Island (Burlington Island) in the South River (now the Delaware River), to Kievitshoek (now Old Saybrook, Connecticut) at the mouth of the Verse River (now the Connecticut River), and further north to Fort Nassau on the Mauritius or North River (now the Hudson River), near what is now Albany. A fort and sawmill were soon erected at Nut Island; the latter was constructed by Franchoys Fezard and was taken apart for its iron in 1648. The threat of attack from other European colonial powers prompted the directors of the Dutch West India Company to formulate a plan to protect the entrance to the Hudson River. In 1624, 30 families sponsored by the Dutch West India Company moved from Nut Island to Manhattan Island, where a citadel to contain Fort Amsterdam was being laid out by Cryn Frederickz van Lobbrecht at the direction of Willem Verhulst. By the end of 1625, the fort had been staked out directly south of Bowling Green, on the site of the present U.S. Custom House. The Mohawk-Mahican War in the Hudson Valley led the company to relocate even more settlers to the vicinity of the new Fort Amsterdam. In the end, colonizing was a prohibitively expensive undertaking, only partly subsidized by the fur trade, which led to a scaling back of the original plans. By 1628, a smaller fort was constructed, with walls containing a mixture of clay and sand. The fort also served as the center of trading activity. It contained a barracks, the church, a house for the West India Company director, and a warehouse for the storage of company goods. Troops from the fort used the triangle between the "Heerestraat" and what came to be known as Whitehall Street for marching drills. Verhulst, with his council, was responsible for the selection of Manhattan as a permanent place of settlement and for situating Fort Amsterdam. He was replaced as the company director-general of New Amsterdam by Peter Minuit in 1626.
According to the writer Nathaniel Benchley, to legally safeguard the settlers' investments, possessions and farms on Manhattan island, Minuit negotiated the "purchase" of Manhattan from a band of Canarse from Brooklyn, who occupied the bottom quarter of the island, then known as the Manhattoes, for 60 guilders' worth of trade goods. Minuit conducted the transaction with the Canarse chief Seyseys, who was only too happy to accept valuable merchandise in exchange for an island that was actually mostly controlled by the Weckquaesgeeks. The deed itself has not survived, so the specific details are unknown. A textual reference to the deed became the foundation for the legend that Minuit had purchased Manhattan from the Native Americans for twenty-four dollars' worth of trinkets and beads, the guilder rate at the time being about two and a half to a Spanish dollar. The price of 60 Dutch guilders in 1626 amounts to around $1,100 in 2012 dollars. Further complicating the calculation is that the value of goods in the area would have been different from the value of those same goods in the developed market of the Netherlands. The Dutch exploited the hydropower of existing creeks by constructing mills at Turtle Bay (between present-day East 45th–48th Streets) and Montagne's Kill, later called Harlem Mill Creek (East 108th Street). In 1639 a sawmill was located in the northern forest at what was later the corner of East 74th Street and Second Avenue, where African laborers cut lumber. The New Amsterdam settlement had a population of approximately 270 people, including infants. In 1642 the new director-general Willem Kieft decided to build a stone church within the fort. The work was carried out by recent English immigrants, the brothers John and Richard Ogden. The church was finished in 1645 and stood until destroyed in the Slave Insurrection of 1741.
A pen-and-ink view of New Amsterdam, drawn on the spot and discovered in the map collection of the Austrian National Library in Vienna in 1991, provides a unique view of New Amsterdam as it appeared from Capske (small Cape) Rock in 1648. Capske Rock was situated in the water close to Manhattan between Manhattan and Noten Eylant, and signified the start of the East River roadstead. New Amsterdam received municipal rights on February 2, 1653, thus becoming a city. Albany, then named "Beverwyck", received its city rights in 1652. "Nieuw Haarlem", now known as Harlem, was formally recognized in 1658. The first Jews known to have lived in New Amsterdam arrived in 1654. First to arrive were Solomon Pietersen and Jacob Barsimson, who sailed during the summer of 1654 directly from Holland, with passports that gave them permission to trade in the colony. Then in early September, 23 Jewish refugees arrived from the formerly Dutch city of Recife, which had been conquered by the Portuguese in January 1654. The director of New Amsterdam, Peter Stuyvesant, sought to turn them away but was ultimately overruled by the directors of the Dutch West India Company in Amsterdam. Asser Levy, an Ashkenazi Jew who was one of the 23 refugees, eventually prospered and in 1661 became the first Jew to own a house in New Amsterdam, which also made him the first Jew known to have owned a house anywhere in North America. In 1661 the Communipaw ferry was founded, beginning a long history of trans-Hudson ferry and, ultimately, rail and road transportation. On September 15, 1655, New Amsterdam was attacked by 2,000 Native Americans as part of the Peach Tree War. They destroyed 28 farms, killed 100 settlers, and took 150 prisoners. In 1664, Jan van Bonnel built a saw mill at East 74th Street and the East River, where a 13,710-meter-long stream, which began in the north of today's Central Park and became known as the Saw Kill or Saw Kill Creek, emptied into the river.
Later owners of the property, George Elphinstone and Abraham Shotwell, replaced the sawmill with a leather mill in 1677. The Saw Kill was later redirected into a culvert, arched over, and its trickling little stream was called Arch Brook. On August 27, 1664, while England and the Dutch Republic were at peace, four English frigates sailed into New Amsterdam's harbor and demanded New Netherland's surrender, whereupon New Netherland was provisionally ceded by Stuyvesant. On September 6, Stuyvesant sent lawyer Johannes de Decker and five other delegates to sign the official Articles of Capitulation. This was swiftly followed by the Second Anglo-Dutch War between England and the Dutch Republic. In June 1665, New Amsterdam was reincorporated under English law as New York City, named after the Duke of York (later King James II). He was the brother of the English King Charles II, who had granted him the lands. In 1667 the Treaty of Breda ended the conflict in favor of the Dutch. The Dutch did not press their claims on New Netherland but did demand control over the valuable sugar plantations and factories they had captured that year on the coast of Surinam, giving them full control over the coast of what is now Guyana and Surinam. In July 1673, during the Third Anglo-Dutch War, the Dutch briefly occupied New York City and renamed it New Orange. Anthony Colve was installed as the first governor; previously there had only been West India Company directors. After the signing of the Treaty of Westminster in November 1674, the city was relinquished to the English and the name reverted to "New York". Suriname became an official Dutch possession in return. The beginnings of New Amsterdam, unlike most other colonies in the New World, were thoroughly documented in city maps. During the time of New Netherland's colonization, the Dutch were the pre-eminent cartographers in Europe.
The delegated authority of the Dutch West India Company over New Netherland required maintaining sovereignty on behalf of the States General, generating cash flow through commercial enterprise for its shareholders, and funding the province's growth. Thus its directors regularly required that censuses be taken. These tools to measure and monitor the province's progress were accompanied by accurate maps and plans. These surveys, as well as grassroots activities to seek redress of grievances, account for the existence of some of the most important of the early documents. There is a particularly detailed city map called the Castello Plan produced in 1660. Virtually every structure in New Amsterdam at the time is believed to be represented, and by cross-referencing the "Nicasius de Sille List" of 1660, which enumerates all the citizens of New Amsterdam and their addresses, it can be determined who resided in every house. The city map known as the Duke's Plan probably derived from the same 1660 census as the Castello Plan. The Duke's Plan includes two outlying areas of development on Manhattan along the top of the plan. The work was created for James (1633–1701), the Duke of York and Albany, after whom New York City and New York State's capital Albany were named, just after the seizure of New Amsterdam by the British. After that provisional relinquishment of New Netherland, Stuyvesant reported to his superiors that he "had endeavored to promote the increase of population, agriculture and commerce...the flourishing condition which might have been more flourishing if the now afflicted inhabitants had been protected by a suitable garrison...and had been helped with the long sought for settlement of the boundary, or in default thereof had they been seconded with the oft besought reinforcement of men and ships against the continual troubles, threats, encroachments and invasions of the British neighbors and government of Hartford Colony, our too powerful enemies." 
The existence of these city maps has proven to be very useful in the archaeology of New York City. For instance, the Castello map aided the excavation of the Stadthuys (City Hall) of New Amsterdam in determining the exact location of the building. The maps enable a precise reconstruction of the town. Fort Amsterdam was located at the most southern tip of the island of Manhattan, on a site today surrounded by Bowling Green; the nearby Battery takes its name from the fort's gun batteries, or cannons. Broadway was the main street that led out of town north towards Harlem. The town was bounded to the north by a wall leading from the eastern to the western shore; the course of this city wall is today Wall Street. A canal led inland from the harbor and was filled in 1676; its course is today Broad Street. The layout of the streets was winding, as in a European city; the typical grid was enforced only from Wall Street going toward uptown, long after the town ceased to be Dutch. Most of the Financial District overlaps New Amsterdam and has retained the original street layout. The 1625 date of the founding of New Amsterdam is now commemorated in the official Seal of New York City. (Formerly, the year on the seal was 1664, the year of the provisional Articles of Transfer, negotiated with the English by Peter Stuyvesant and his council, which assured New Netherlanders that they "shall keep and enjoy the liberty of their consciences in religion".) Although the English who later acquired New Amsterdam from the Dutch sometimes considered it a dysfunctional trading post, Russell Shorto, author of "The Island at the Center of the World", suggests that the city left its cultural marks on later New York and, by extension, on the United States as a whole. Major recent historical research has been based on a set of documents that have survived from that period untranslated: the administrative records of the colony, unreadable by most scholars.
Since the 1970s, a professor named Charles Gehring has made it his life's work to translate this first-hand history of the colony of New Netherland. The scholarly conclusion has largely been that the settlement of New Amsterdam was much more like present-day New York than previously thought. Cultural diversity and a mindset that resembles the American Dream were already present in the first few years of this colony. Writers like Russell Shorto argue that the large influence of New Amsterdam on the American psyche has largely been overlooked in the classic telling of American beginnings, because of animosity between the English victors and the conquered Dutch. The original 17th-century architecture of New Amsterdam has completely vanished (affected by the fires of 1776 and 1835), leaving only archaeological remnants. The original street plan of New Amsterdam has stayed largely intact, as have some houses outside Manhattan. The presentation of the legacy of the unique culture of 17th-century New Amsterdam remains a concern of preservationists and educators. In 2009, the National Park Service celebrated the 400th anniversary of Henry Hudson's 1609 voyage on behalf of the Dutch with the "New Amsterdam Trail". The Dutch-American historian and journalist Hendrik Willem van Loon wrote a work of alternative history entitled "If the Dutch Had Kept Nieuw Amsterdam" (in "If, Or History Rewritten", edited by J. C. Squire, 1931, Simon & Schuster). A similar theme, at greater length, was taken up by the writer Elizabeth Bear, who published the "New Amsterdam" series of detective stories, which take place in a world where the city remained Dutch until the Napoleonic Wars and retained its name afterward. One of New York's Broadway theatres is the New Amsterdam Theatre. The name New Amsterdam is also written on the architrave atop the row of columns in front of the Manhattan Municipal Building, commemorating the name of the Dutch colony.
Although no architectural monuments or buildings have survived, the legacy lived on in the form of Dutch Colonial Revival architecture. A number of structures in New York City were constructed in the 19th and 20th centuries in this style, such as Wallabout Market in Brooklyn, South William Street in Manhattan, West End Collegiate Church at West 77th Street, and others.
https://en.wikipedia.org/wiki?curid=21685
Modern Paganism Modern Paganism, also known as Contemporary Paganism and Neopaganism, is a collective term for new religious movements influenced by or derived from the various historical pagan beliefs of pre-modern peoples. Although they share similarities, contemporary Pagan religious movements are diverse and do not share a single set of beliefs, practices, or texts. Most academics who study the phenomenon treat it as a movement that is divided into different religions; others characterize it as a single religion of which different Pagan faiths are denominations. Adherents rely on pre-Christian, folkloric, and ethnographic sources to varying degrees; many follow a spirituality that they accept as entirely modern, while others claim prehistoric beliefs, or else attempt to revive indigenous, ethnic religions as accurately as possible. Academic research has placed the Pagan movement along a spectrum, with eclecticism on one end and polytheistic reconstructionism on the other. Polytheism, animism, and pantheism are common features of Pagan theology. Contemporary Paganism has sometimes been associated with the New Age movement, with scholars highlighting both their similarities and differences. The academic field of Pagan studies began to coalesce in the 1990s, emerging from disparate scholarship in the preceding two decades. There is "considerable disagreement as to the precise definition and proper usage" of the term "modern Paganism". Even within the academic field of Pagan studies, there is no consensus about how contemporary Paganism can best be defined. Most scholars describe modern Paganism as a broad array of different religions, not a single one. The category of modern Paganism could be compared to the categories of Abrahamic religion and Indian religions in its structure. A second, less common definition found within Pagan studies—promoted by the religious studies scholars Michael F.
Strmiska and Graham Harvey—characterises modern Paganism as a single religion, of which groups like Wicca, Druidry, and Heathenry are denominations. This perspective has been critiqued, given the lack of core commonalities in issues such as theology, cosmology, ethics, afterlife, holy days, or ritual practices within the Pagan movement. Contemporary Paganism has been defined as "a collection of modern religious, spiritual, and magical traditions that are self-consciously inspired by the pre-Judaic, pre-Christian, and pre-Islamic belief systems of Europe, North Africa, and the Near East." Thus it has been said that although it is "a highly diverse phenomenon", "an identifiable common element" nevertheless runs through the Pagan movement. Strmiska described Paganism as a movement "dedicated to reviving the polytheistic, nature-worshipping pagan religions of pre-Christian Europe and adapting them for the use of people in modern societies." The religious studies scholar Wouter Hanegraaff characterised Paganism as encompassing "all those modern movements which are, first, based on the conviction that what Christianity has traditionally denounced as idolatry and superstition actually represents/represented a profound and meaningful religious worldview and, secondly, that a religious practice based on this worldview can and should be revitalized in our modern world." Discussing the relationship between the different Pagan religions, religious studies scholars Kaarina Aitamurto and Scott Simpson wrote that they were "like siblings who have taken different paths in life but still retain many visible similarities". But there has been much "cross-fertilization" between these different faiths: many groups have influenced, and been influenced by, other Pagan religions, making clear-cut distinctions among them more difficult for scholars to make. 
The various Pagan religions have been academically classified as new religious movements, with the anthropologist Kathryn Rountree describing Paganism as a whole as a "new religious phenomenon". A number of academics, particularly in North America, consider modern Paganism a form of nature religion. Some practitioners eschew the term "Pagan" altogether, preferring the more specific name of their religion, such as Heathen or Wiccan. This is because the term "Pagan" originates in Christian terminology, which Pagans wish to avoid. Some favor the term "ethnic religion"; the World Pagan Congress, founded in 1998, soon renamed itself the European Congress of Ethnic Religions, enjoying that term's association with the Greek "ethnos" and the academic field of ethnology. Within linguistically Slavic areas of Europe, the term "Native Faith" is often favored as a synonym for Paganism, rendered as "Ridnovirstvo" in Ukrainian, "Rodnoverie" in Russian, and "Rodzimowierstwo" in Polish. Alternatively, many practitioners in these regions view "Native Faith" as a category within modern Paganism that does not encompass all Pagan religions. Other terms some Pagans favor include "traditional religion", "indigenous religion", "nativist religion", and "reconstructionism". Various Pagans who are active in Pagan studies, such as Michael York and Prudence Jones, have argued that, due to similarities in their worldviews, the modern Pagan movement can be treated as part of the same global phenomenon as pre-Christian religion, living indigenous religions, and world religions like Hinduism, Shinto, and Afro-American religions. They have also suggested that these could all be included under the rubric of "paganism" or "Paganism". This approach has been received critically by many specialists in religious studies. 
Critics have pointed out that such claims would cause problems for analytic scholarship by lumping together belief systems with very significant differences, and that the term would serve modern Pagan interests by making the movement appear far larger on the world stage. Doyle White writes that modern religions that draw upon the pre-Christian belief systems of other parts of the world, such as Sub-Saharan Africa or the Americas, cannot be seen as part of the contemporary Pagan movement, which is "fundamentally Eurocentric". Similarly, Strmiska stresses that modern Paganism should not be conflated with the belief systems of the world's indigenous peoples because the latter lived under colonialism and its legacy, and that while some Pagan worldviews bear similarities to those of indigenous communities, they stem from "different cultural, linguistic, and historical backgrounds". Many scholars have favored the use of "Neopaganism" to describe this phenomenon, with the prefix "neo-" serving to distinguish the modern religions from their ancient, pre-Christian forerunners. Some Pagan practitioners also prefer "Neopaganism", believing that the prefix conveys the reformed nature of the religion, such as its rejection of practices such as animal sacrifice. Conversely, most Pagans do not use the word "Neopagan", with some expressing disapproval of it, arguing that the term "neo" offensively disconnects them from what they perceive as their pre-Christian forebears. To avoid causing offense, many scholars in the English-speaking world have begun using the prefixes "modern" or "contemporary" rather than "neo". Several Pagan studies scholars, such as Ronald Hutton and Sabina Magliocco, have emphasized the use of the upper-case "Paganism" to distinguish the modern movement from the lower-case "paganism", a term commonly used for pre-Christian belief systems. In 2015, Rountree stated that this lower case/upper case division was "now [the] convention" in Pagan studies. 
The term "neo-pagan" was coined in the 19th century in reference to Renaissance and Romanticist Hellenophile classical revivalism. By the mid-1930s "Neopagan" was being applied to new religious movements like Jakob Wilhelm Hauer's German Faith Movement and Jan Stachniuk's Polish Zadruga, usually by outsiders and often pejoratively. Pagan as a self-designation appeared in 1964 and 1965, in the publications of the Witchcraft Research Association; at that time, the term was in use by revivalist Witches in the United States and the United Kingdom, but unconnected to the broader, counterculture Pagan movement. The modern popularisation of the terms pagan and neopagan as they are currently understood is largely traced to Oberon Zell-Ravenheart, co-founder of the 1st Neo-Pagan Church of All Worlds who, beginning in 1967 with the early issues of "Green Egg", used both terms for the growing movement. This usage has been common since the pagan revival in the 1970s. According to Strmiska, the reappropriation of the term "pagan" by modern Pagans served as "a deliberate act of defiance" against "traditional, Christian-dominated society", allowing them to use it as a source of "pride and power". In this, he compared it to the gay liberation movement's reappropriation of the term "queer", which had formerly been used only as a term of homophobic abuse. He suggests that part of the term's appeal lay in the fact that a large proportion of Pagan converts were raised in Christian families, and that by embracing the term "pagan", a word long used for what was "rejected and reviled by Christian authorities", a convert summarizes "in a single word his or her definitive break" from Christianity. 
He further suggests that the term gained appeal through its depiction in romanticist and 19th-century European nationalist literature, where it had been imbued with "a certain mystery and allure", and that by embracing the word "pagan" modern Pagans defy past religious intolerance to honor the pre-Christian peoples of Europe and emphasize those societies' cultural and artistic achievements. For some Pagan groups, ethnicity is central to their religion, and some restrict membership to a single ethnic group. Some critics have described this approach as a form of racism. Other Pagan groups allow people of any ethnicity, on the view that the gods and goddesses of a particular region can call anyone to their form of worship. Some such groups feel a strong affinity for the pre-Christian belief systems of a particular region with which they have no ethnic link because they see themselves as reincarnations of people from that society. There is greater focus on ethnicity within the Pagan movements in continental Europe than within the Pagan movements in North America and the British Isles. Such ethnic Paganisms have variously been seen as responses to concerns about foreign colonizing ideologies, globalization, cosmopolitanism, and anxieties about cultural erosion. Although they acknowledged that it was "a highly simplified model", Aitamurto and Simpson wrote that there was "some truth" to the claim that leftist-oriented forms of Paganism were prevalent in North America and the British Isles while rightist-oriented forms of Paganism were prevalent in Central and Eastern Europe. They noted that in these latter regions, Pagan groups placed an emphasis on "the centrality of the nation, the ethnic group, or the tribe". Rountree wrote that it was wrong to assume that "expressions of Paganism can be categorized straightforwardly according to region", but acknowledged that some regional trends were visible, such as the impact of Catholicism on Paganism in Southern Europe. 
Another division within modern Paganism rests on differing attitudes to the source material surrounding pre-Christian belief systems. Strmiska notes that Pagan groups can be "divided along a continuum: at one end are those that aim to reconstruct the ancient religious traditions of a particular ethnic group or a linguistic or geographic area to the highest degree possible; at the other end are those that freely blend traditions of different areas, peoples, and time periods." Strmiska argues that these two poles could be termed "reconstructionism" and "eclecticism", respectively. Reconstructionists do not altogether reject innovation in their interpretation and adaptation of the source material; however, they believe that the source material conveys greater authenticity and thus should be emphasized. They often follow scholarly debates about the nature of such pre-Christian religions, and some reconstructionists are themselves scholars. Eclectic Pagans, conversely, seek general inspiration from the pre-Christian past and do not attempt to recreate past rites or traditions with specific attention to detail. On the reconstructionist side can be placed those movements which often favour the designation "Native Faith", including Romuva, Heathenry, and Hellenism. On the eclectic side have been placed Wicca, Thelema, Adonism, Druidry, the Goddess Movement, Discordianism, and the Radical Faeries. Strmiska also suggests that this division could be seen as being based on "discourses of identity", with reconstructionists emphasizing a deep-rooted sense of place and people, and eclectics embracing a universality and openness toward humanity and the Earth. Strmiska nevertheless notes that this reconstructionist-eclectic division is "neither as absolute nor as straightforward as it might appear". 
He cites the example of Dievturība, a form of reconstructionist Paganism that seeks to revive the pre-Christian religion of the Latvian people, noting that it exhibits eclectic tendencies by adopting a monotheistic focus and a ceremonial structure from Lutheranism. Similarly, while examining neo-shamanism among the Sami people of Northern Scandinavia, Siv Ellen Kraft highlights that despite the religion being reconstructionist in intent, it is highly eclectic in the manner in which it has adopted elements from shamanic traditions in other parts of the world. In discussing Asatro – a form of Heathenry based in Denmark – Matthew Amster notes that it did not fit clearly within such a framework, because while seeking a reconstructionist form of historical accuracy, Asatro nevertheless strongly eschewed the emphasis on ethnicity that is common to other reconstructionist groups. While Wicca is identified as an eclectic form of Paganism, Strmiska also notes that some Wiccans have moved in a more reconstructionist direction by focusing on a particular ethnic and cultural link, thus developing such variants as Norse Wicca and Celtic Wicca. Concern has also been expressed regarding the utility of the term "reconstructionism" when dealing with Paganisms in Central and Eastern Europe, because in many of the languages of these regions, equivalents of the term – such as the Czech "Historická rekonstrukce" and the Lithuanian "Istorinė rekonstrukcija" – are already used for the secular hobby of historical re-enactment. Some Pagans distinguish their beliefs and practices as a form of religious naturalism, embracing a naturalistic worldview, including those who identify as humanistic or atheopagans. Many such Pagans aim for an explicitly ecocentric practice, which may overlap with scientific pantheism. 
Although inspired by the pre-Christian belief systems of the past, modern Paganism is not the same phenomenon as these lost traditions and in many respects differs from them considerably. Strmiska stresses that modern Paganism is a "new", "modern" religious movement, even if some of its content derives from ancient sources. Contemporary Paganism as practiced in the United States in the 1990s has been described as "a synthesis of historical inspiration and present-day creativity". Eclectic Paganism takes an undogmatic religious stance and therefore potentially sees no one as having authority to deem a source apocryphal. Contemporary paganism has therefore been prone to fakelore, especially in recent years as information and misinformation alike have been spread on the Internet and in print media. A number of Wiccan, pagan and even some Traditionalist or Tribalist groups have a history of Grandmother Stories – typically involving initiation by a Grandmother, Grandfather, or other elderly relative who is said to have instructed them in the secret, millennia-old traditions of their ancestors. As this secret wisdom can almost always be traced to recent sources, tellers of these stories have often later admitted they made them up. Strmiska asserts that contemporary paganism could be viewed as a part of the "much larger phenomenon" of efforts to revive "traditional, indigenous, or native religions" that were occurring across the globe. Beliefs and practices vary widely among different Pagan groups; however, there are a series of core principles common to most, if not all, forms of modern paganism. The English academic Graham Harvey noted that Pagans "rarely indulge in theology". One principle of the Pagan movement is polytheism, the belief in and veneration of multiple gods or goddesses. 
Within the Pagan movement, there can be found many deities, both male and female, who have various associations and embody forces of nature, aspects of culture, and facets of human psychology. These deities are typically depicted in human form, and are viewed as having human faults. They are therefore not seen as perfect, but rather are venerated as being wise and powerful. Pagans feel that this understanding of the gods reflects the dynamics of life on Earth, allowing for the expression of humour. One view in the Pagan community is that these polytheistic deities are not literal entities but Jungian archetypes or other psychological constructs that exist in the human psyche. Others adopt the belief that the deities have both a psychological and an external existence. Many Pagans believe that the adoption of a polytheistic world-view would be beneficial for western society – replacing the dominant monotheism they see as innately repressive. In fact, many American neopagans first came to their adopted faiths because these faiths allowed a greater freedom, diversity, and tolerance of worship among the community. This pluralistic perspective has helped the varied factions of modern Paganism exist in relative harmony. Most Pagans adopt an ethos of "unity in diversity" regarding their religious beliefs. It is the inclusion of female deities that distinguishes Pagan religions from their Abrahamic counterparts. In Wicca, male and female deities are typically balanced in a form of duotheism. Many East Asian philosophies equate weakness with femininity and strength with masculinity; this is not the prevailing attitude in paganism and Wicca. Among many Pagans, there is a strong desire to incorporate the female aspects of the divine in their worship and within their lives, which can partially explain the attitude which sometimes manifests as the veneration of women. 
There are exceptions to polytheism in Paganism, as seen for instance in the form of Ukrainian Paganism promoted by Lev Sylenko, which is devoted to a monotheistic veneration of the god Dazhbog. As noted above, Pagans with naturalistic worldviews may not believe in or work with deities at all. Pagan religions commonly exhibit a metaphysical concept of an underlying order that pervades the universe, such as the concept of "harmonia" embraced by Hellenists and that of "Wyrd" found in Heathenry. A key part of most Pagan worldviews is the holistic concept of a universe that is interconnected. This is connected with a belief in either pantheism or panentheism. In both beliefs, divinity and the material or spiritual universe are one. For pagans, pantheism means that "divinity is inseparable from nature and that deity is immanent in nature". Dennis D. Carpenter noted that the belief in a pantheistic or panentheistic deity has led to the idea of interconnectedness playing a key part in pagans' worldviews. The prominent Reclaiming priestess Starhawk related that a core part of goddess-centred pagan witchcraft was "the understanding that all being is interrelated, that we are all linked with the cosmos as parts of one living organism. What affects one of us affects us all." Another pivotal belief in the contemporary Pagan movement is that of animism. This has been interpreted in two distinct ways among the Pagan community. First, it can refer to a belief that everything in the universe is imbued with a life force or spiritual energy. In contrast, some contemporary Pagans believe that there are specific spirits that inhabit various features in the natural world, and that these can be actively communicated with. Some Pagans have reported experiencing communication with spirits dwelling in rocks, plants, trees and animals, as well as power animals or animal spirits who can act as spiritual helpers or guides. 
Animism was also a concept common to many pre-Christian European religions, and in adopting it, contemporary Pagans are attempting to "reenter the primeval worldview" and participate in a view of cosmology "that is not possible for most Westerners after childhood". All Pagan movements place great emphasis on the divinity of nature as a primary source of divine will, and on humanity's membership of the natural world, bound in kinship to all life and the Earth itself. The animistic aspects of Pagan theology assert that all things have a soul – not just humans or organic life – so this bond is held with mountains and rivers as well as trees and wild animals. As a result, Pagans believe the essence of their spirituality is both ancient and timeless, regardless of the age of specific religious movements. Places of natural beauty are therefore treated as sacred and ideal for ritual, like the nemetons of the ancient Celts. Many Pagans hold that different lands and/or cultures have their own natural religion, with many legitimate interpretations of divinity, and therefore reject religious exclusivism. While the Pagan community has tremendous variety in political views spanning the whole of the political spectrum, environmentalism is often a common feature. Such views have also led many pagans to revere the planet Earth as Mother Earth, who is often referred to as Gaia after the ancient Greek goddess of the Earth. Pagan ritual can take place in both a public and private setting. Contemporary Pagan ritual is typically geared towards "facilitating altered states of awareness or shifting mind-sets". In order to induce such altered states of consciousness, pagans utilize such elements as drumming, visualization, chanting, singing, dancing, and meditation. American folklorist Sabina Magliocco came to the conclusion, based upon her ethnographic fieldwork in California, that certain Pagan beliefs "arise from what they experience during religious ecstasy". 
Sociologist Margot Adler highlighted how several Pagan groups, like the Reformed Druids of North America and the Erisian movement, incorporate a great deal of play in their rituals rather than having them be completely serious and somber. She noted that there are those who would argue that "the Pagan community is one of the only spiritual communities that is exploring humor, joy, abandonment, even silliness and outrageousness as valid parts of spiritual experience". Domestic worship typically takes place in the home and is carried out by either an individual or family group. It typically involves offerings – including bread, cake, flowers, fruit, milk, beer, or wine – being given to images of deities, often accompanied with prayers and songs and the lighting of candles and incense. Common Pagan devotional practices have thus been compared to similar practices in Hinduism, Buddhism, Shinto, Roman Catholicism, and Orthodox Christianity, but contrasted with those in Protestantism, Judaism, and Islam. Although animal sacrifice was a common part of pre-Christian ritual in Europe, it is rarely practiced in contemporary Paganism. Paganism's public rituals are generally calendrical, although the pre-Christian festivals that Pagans use as a basis varied across Europe. Nevertheless, common to almost all Pagan religions is an emphasis on an agricultural cycle and respect for the dead. Common Pagan festivals include those marking the summer solstice and winter solstice as well as the start of spring and the harvest. In Wicca, a Wheel of the Year has been developed which typically involves eight seasonal festivals. The belief in magickal rituals and spells is held by a "significant number" of contemporary Pagans. Among those who believe in it, there are a variety of different views about what magick is. Many Neopagans adhere to the definition of magick provided by Aleister Crowley, the founder of Thelema: "the Science and Art of causing change to occur in conformity with Will". 
Also accepted by many is the related definition purportedly provided by the ceremonial magician Dion Fortune: "magick is the art and science of changing consciousness according to the Will". Among those who practice magic are Wiccans, those who identify as Neopagan Witches, and practitioners of some forms of revivalist Neo-druidism, the rituals of which are at least partially based upon those of ceremonial magic and freemasonry. The origins of modern Paganism lie in the romanticist and national liberation movements that developed in Europe during the 18th and 19th centuries. The publication of studies into European folk customs and culture by scholars like Johann Gottfried Herder and Jacob Grimm resulted in a wider interest in these subjects and a growth in cultural self-consciousness. At the time, it was commonly believed that almost all such folk customs were survivals from the pre-Christian period. These attitudes would also be exported to North America by European immigrants in these centuries. The Romantic movement of the 18th century led to the re-discovery of Old Gaelic and Old Norse literature and poetry. The 19th century saw a surge of interest in Germanic paganism with the Viking revival in Victorian Britain and Scandinavia. In Germany, the Völkisch movement was in full swing. These pagan currents coincided with Romanticist interest in folklore and occultism, the widespread emergence of pagan themes in popular literature, and the rise of nationalism. The rise of modern Paganism was aided by the decline in Christianity throughout many parts of Europe and North America, as well as by the concomitant decline in enforced religious conformity and the greater freedom of religion that developed, allowing people to explore a wider range of spiritual options and form religious organisations that could operate free from legal persecution. 
Historian Ronald Hutton has argued that many of the motifs of 20th-century neo-Paganism may be traced back to the utopian, mystical counter-cultures of the late-Victorian and Edwardian periods (also extending in some instances into the 1920s), via the works of amateur folklorists, popular authors, poets, political radicals, and alternative lifestylers. Prior to the spread of the 20th-century neopagan movement, a notable instance of self-identified paganism was in Sioux writer Zitkala-sa's essay "Why I Am A Pagan". Published in the Atlantic Monthly in 1902, the essay outlined the Native American activist and writer's rejection of Christianity (referred to as "the new superstition") in favor of a harmony with nature embodied by the Great Spirit. She further recounted her mother's abandonment of Sioux religion and the unsuccessful attempts of a "native preacher" to get her to attend the village church. In the 1920s, Margaret Murray theorized that a secret underground religion had survived the witchcraft prosecutions enacted by the ecclesiastical and secular courts. Most historians now reject Murray's theory, as she based it partially upon the similarities of the accounts given by those accused of witchcraft; such similarity is now thought to actually derive from there having been a standard set of questions laid out in the witch-hunting manuals used by interrogators. The 1960s and 1970s saw a resurgence in Neodruidism as well as the rise of Germanic neopaganism and Ásatrú in the United States and in Iceland. In the 1970s, Wicca was notably influenced by feminism, leading to the creation of an eclectic, Goddess-worshipping movement known as Dianic Wicca. The 1979 publication of Margot Adler's "Drawing Down the Moon" and Starhawk's "The Spiral Dance" opened a new chapter in public awareness of paganism. 
With the growth and spread of large pagan gatherings and festivals in the 1980s, public varieties of Wicca continued to diversify into additional eclectic sub-denominations, often heavily influenced by the New Age and counter-culture movements. These open, unstructured or loosely structured traditions contrast with British Traditional Wicca, which emphasizes secrecy and initiatory lineage. The 1980s and 1990s also saw an increasing interest in serious academic research and reconstructionist pagan traditions. The establishment and growth of the Internet in the 1990s brought rapid growth to these and other pagan movements. After the collapse of the Soviet Union, freedom of religion was legally established across the post-Soviet states, allowing for the growth of both Christian and non-Christian religions, among them Paganism. Goddess Spirituality, which is also known as the Goddess movement, is a Pagan religion in which a singular, monotheistic Goddess is given predominance. Designed primarily for women, Goddess Spirituality revolves around the sacredness of the female form, and of aspects of women's lives that have been traditionally neglected in western society, such as menstruation, sexuality, and maternity. Adherents of the Goddess Spirituality movement typically envision a history of the world that is different from traditional narratives about the past, emphasising the role of women rather than that of men. According to this view, human society was formerly a matriarchy, with communities being egalitarian, pacifistic, and focused on the worship of the Goddess, and was subsequently overthrown by violent patriarchal hordes – usually Indo-European pastoralists who worshipped male sky gods and who continued to rule through the Abrahamic religions, specifically Christianity in the West. Adherents look for elements of this mythological history in "theological, anthropological, archaeological, historical, folkloric and hagiographic writings". 
Heathenism, also known as Germanic Neopaganism, refers to a series of contemporary Pagan traditions based on the historical religions, culture, and literature of Germanic-speaking Europe. Heathenry is spread out across northwestern Europe, North America, and Australasia, where the descendants of historic Germanic-speaking peoples now live. Many Heathen groups adopt variants of Norse mythology as a basis for their beliefs, conceiving of the Earth as situated on the great world tree Yggdrasil. Heathens venerate multiple deities adopted from historical Germanic mythologies. Most are polytheistic realists, believing that the deities are real entities, while others view them as Jungian archetypes. Neo-Druidry is the second-largest pagan path after Wicca, and shows similar heterogeneity. It draws inspiration from the historical Druids, the priest caste of the ancient pagan Celts. Neo-Druidry dates to the earliest forms of modern paganism: the Ancient Order of Druids, founded in 1781, had many aspects of freemasonry, and has practiced rituals at Stonehenge since 1905. George Watson MacGregor Reid founded the Druid Order in its current form in 1909. In 1964, Ross Nichols established the Order of Bards, Ovates and Druids. In the United States, the Ancient Order of Druids in America (AODA) was established in 1912, the Reformed Druids of North America (RDNA) in 1963, and Ár nDraíocht Féin (ADF) in 1983 by Isaac Bonewits. Since the 1960s and '70s, paganism and the then-emergent counterculture, New Age, and hippie movements have experienced a degree of cross-pollination. Reconstructionism rose in the 1980s and 1990s. Most pagans are not committed to a single defined tradition, but understand paganism as encompassing a wide range of non-institutionalized spirituality, as promoted by the Church of All Worlds, the Feri Tradition, and other movements. Notably, Wicca in the United States since the 1970s has largely moved away from its Gardnerian roots and diversified into eclectic variants. 
Paganism generally emphasizes the sanctity of the Earth and Nature. Pagans often feel a duty to protect the Earth through activism, and support causes such as rainforest protection, organic farming, permaculture, and animal rights. Some pagans are influenced by the animist traditions of indigenous peoples of the Americas and Africa, as well as other indigenous or shamanic traditions. Eco-paganism and Eco-magic, which are offshoots of direct action environmental groups, strongly emphasize fairy imagery and a belief in the possibility of intercession by the fae (fairies, pixies, gnomes, elves, and other spirits of nature and the Otherworlds). Some Unitarian Universalists are eclectic pagans. Unitarian Universalists look for spiritual inspiration in a wide variety of religious beliefs. The Covenant of Unitarian Universalist Pagans, or CUUPS, encourages its chapters to "use practices familiar to members who attend for worship services but not to follow only one tradition of paganism". In 1925, the Czech esotericist Franz Sättler founded the pagan religion Adonism, devoted to the ancient Greek god Adonis, whom Sättler equated with the Christian Satan, and which purported that the end of the world would come in 2000. Adonism largely died out in the 1930s, but remained an influence on the German occult scene. The western LGBTQ community, often marginalized or outright rejected by Abrahamic-predominant mainstream religious establishments, has often sought spiritual acceptance and association in neopagan religious and spiritual practice. Pagan-specializing religious scholar Christine Hoff Kraemer wrote, "Pagans tend to be relatively accepting of same-sex relationships, BDSM, polyamory, transgender, and other expressions of gender and sexuality that are marginalized by mainstream society." 
Conflict naturally arises, however, as some neopagan belief systems and sect ideologies stem from fundamental beliefs in the male-female gender binary, heterosexual pairing, resulting heterosexual reproduction, and/or gender essentialism. In response, groups and sects inclusive of or specific to LGBTQ people have developed. Theologian Jone Salomonsen noted in the 1980s and 1990s that the Reclaiming movement of San Francisco featured an unusually high number of LGBTQ people, particularly bisexuals. Margot Adler noted groups whose practices focused on male homosexuality, such as Eddie Buczynski's Minoan Brotherhood, a Wiccan sect that combines the iconography from ancient Minoan religion with a Wiccan theology and an emphasis on men who love men, and the eclectic pagan group known as the Radical Faeries. When Adler asked one gay male pagan what the pagan community offered members of the LGBTQ community, he replied, "A place to belong. Community. Acceptance. And a way to connect with all kinds of people—gay, bi, straight, celibate, transgender—in a way that is hard to do in the greater society." Transgender existence and acceptability are especially controversial in many neopagan sects. One of the most notable of these is Dianic Wicca. This female-only, radical feminist variant of Wicca allows cisgender lesbians but not transgender women. This is due to the Dianic belief in gender essentialism; according to founder Zsuzsanna Budapest, "you have to have sometimes [sic] in your life a womb, and ovaries and [menstruate] and not die". This belief and the way it is expressed are often denounced as transphobia and trans-exclusionary radical feminism. Trans exclusion can also be found in Alexandrian Wicca, whose founder views trans individuals as melancholy people who should seek other beliefs due to the Alexandrian focus on heterosexual reproduction and duality. 
In contrast to the eclectic traditions, Polytheistic Reconstructionists practice culturally specific ethnic traditions based on folklore, songs and prayers, as well as reconstructions from the historical record. Hellenic, Roman, Kemetic, Celtic, Germanic, Guanche, Baltic and Slavic Reconstructionists aim to preserve and revive the practices and beliefs of Ancient Greece, Ancient Rome, Ancient Egypt, the Celts, the Germanic peoples, the Guanche people, the Balts and the Slavs, respectively. Wicca is the largest form of modern Paganism, as well as the best-known and most extensively studied. Religious studies scholar Graham Harvey noted that the poem "Charge of the Goddess" remains central to the liturgy of most Wiccan groups. Originally written by Wiccan High Priestess Doreen Valiente in the mid-1950s, the poem allows Wiccans to gain wisdom and experience deity in "the ordinary things in life". Historian Ronald Hutton identified a wide variety of different sources that influenced Wicca's development, including ceremonial magic, folk magic, Romanticist literature, Freemasonry, and the historical theories of English archaeologist Margaret Murray. English esotericist Gerald Gardner was at the forefront of the burgeoning Wiccan movement. He claimed to have been initiated by the New Forest coven in 1939, and that the religion that he discovered was a modern remnant of the old Witch-Cult described in Murray's works, which originated in the pre-Christian paganism of Europe. Various forms of Wicca have since evolved or been adapted from Gardner's British Traditional Wicca or Gardnerian Wicca, such as Alexandrian Wicca. Other forms loosely based on Gardner's teachings are Faery Wicca, Kemetic Wicca, Judeo-Paganism or jewitchery, and Dianic Wicca or feminist Wicca, which emphasizes the divine feminine, often creating women-only or lesbian-only groups. In the academic community Wicca has also been interpreted as having close affinities with process philosophy. 
In the 1990s, Wiccan beliefs and practices were used as a partial basis for a number of U.S. films and television series, such as "The Craft", "Charmed" and "Buffy the Vampire Slayer", leading to a surge in teenagers' and young adults' interest and involvement in the religion. Beit Asherah (the house of the Goddess Asherah) was one of the first Neopagan synagogues, founded in the early 1990s by Stephanie Fox, Steven Posch, and Magenta Griffiths (Lady Magenta). Magenta Griffiths is High Priestess of the Beit Asherah coven, and a former board member of the Covenant of the Goddess. The Chuvash people, a Turkic ethnic group native to an area stretching from the Volga Region to Siberia, have experienced a Pagan revival since the fall of the Soviet Union, under the name Vattisen Yaly ("Tradition of the Old"). Vattisen Yaly could be categorised as a peculiar form of Tengrism, a related revivalist movement of Central Asian traditional religion; however, it differs significantly from it. The Chuvash are a heavily Fennicised and Slavified ethnic group (and, unlike most other Turkic peoples, were never fully Islamised) that has also had exchanges with Indo-European ethnicities, so their religion shows many similarities with Finnic and Slavic Paganisms; moreover, the revival of "Vattisen Yaly" in recent decades has followed Neopagan patterns. Thus it should be more carefully categorised as a Neopagan religion. Today the followers of the Chuvash Traditional Religion are called "the true Chuvash". Their main god is Tura, a deity comparable to the Estonian Taara, the Germanic Thunraz and the pan-Turkic Tengri. Establishing precise figures on Paganism is difficult. Due to the secrecy and fear of persecution still prevalent among Pagans, few are willing to be counted openly. The decentralised nature of Paganism and the sheer number of solitary practitioners further complicate matters. 
Nevertheless, there is a slowly growing body of data on the subject. Combined statistics from Western nations put the number of Pagans well over one million worldwide. Neopagan and other folk religion movements have gained a significant following on the eastern fringes of Europe, especially in the Caucasus and the Volga region. Among Circassians, the Adyghe Habze faith has been revived after the fall of the Soviet Union, and followers of neopagan faiths were found to constitute 12% in Karachay-Cherkessia and 3% in Kabardino-Balkaria (both republics are multiethnic and also have many non-Circassians, especially Russians and Turkic peoples). In Abkhazia, the Abkhaz native faith has also been revived, and in the 2003 census, 8% of residents identified with it (note again that there are many non-Abkhaz in the state, including Georgians, Russians and Armenians); on 3 August 2012 the Council of Priests of Abkhazia was formally constituted in Sukhumi. In North Ossetia, the Uatsdin faith was revived, and in 2012, 29% of the population identified with it (North Ossetia is about 2/3 Ossetian and 1/3 Russian). Neopagan movements are also present to a lesser degree elsewhere; in Dagestan 2% of the population identified with folk religious movements, while data on neopagans is unavailable for Chechnya and Ingushetia. The Mari native religion in fact has a continuous existence, but has co-existed with Orthodox Christianity for centuries, and experienced a renewal after the fall of the Soviet Union. A sociological survey conducted in 2004 found that about 15 percent of the population of Mari El consider themselves adherents of the Mari native religion. Since Mari make up just 45 percent of the republic's population of 700,000, this figure means that probably more than a third claim to follow the old religion. The percentage of pagans among the Mari of Bashkortostan and the eastern part of Tatarstan is even higher (up to 69% among women). 
Mari fled here from forced Christianization in the 17th to 19th centuries. A similar number was claimed by Victor Schnirelmann, for whom between a quarter and a half of the Mari either worship the Pagan gods or are adherents of Neopagan groups. Mari intellectuals maintain that Mari ethnic believers should be classified in groups with varying degrees of Russian Orthodox influence, including syncretic followers who might even go to church at times, followers of the Mari native religion who are baptized, and nonbaptized Mari. A neopagan movement drawing from various syncretic practices that had survived among the Christianised Mari people was initiated in 1990, and was estimated in 2004 to have won the adherence of 2% of the Mordvin people. A study by Ronald Hutton compared a number of different sources (including membership lists of major UK organizations, attendance at major events, subscriptions to magazines, etc.) and used standard models for extrapolating likely numbers. This estimate accounted for multiple membership overlaps as well as the number of adherents represented by each attendee of a pagan gathering. Hutton estimated that there are 250,000 neopagan adherents in the United Kingdom, roughly equivalent to the national Hindu community. A smaller number is suggested by the results of the 2001 Census, in which a question about religious affiliation was asked for the first time. Respondents were able to write in an affiliation not covered by the checklist of common religions, and a total of 42,262 people from England, Scotland and Wales declared themselves to be Pagans by this method. These figures were not released as a matter of course by the Office for National Statistics, but were released after an application by the Pagan Federation of Scotland. This is more than many well known traditions such as Rastafarian, Bahá'í and Zoroastrian groups, but fewer than the big six of Christianity, Islam, Hinduism, Sikhism, Judaism and Buddhism. 
It is also fewer than the adherents of Jediism, whose campaign made them the fourth largest religion after Christianity, Islam and Hinduism. The 2001 UK Census figures did not allow an accurate breakdown of traditions within the Pagan heading, as a campaign by the Pagan Federation before the census encouraged Wiccans, Heathens, Druids and others all to use the same write-in term 'Pagan' in order to maximise the numbers reported. The 2011 census, however, made it possible to describe oneself as Pagan-Wiccan, Pagan-Druid and so on. The figures for England and Wales showed 80,153 describing themselves as Pagan (or some subgroup thereof). The largest subgroup was Wicca, with 11,766 adherents. The overall numbers of people self-reporting as Pagan rose between 2001 and 2011. In 2001 about seven people per 10,000 UK respondents were pagan; in 2011 the number (based on the England and Wales population) was 14.3 people per 10,000 respondents. Census figures in Ireland do not provide a breakdown of religions outside of the major Christian denominations and other major world religions. A total of 22,497 people stated Other Religion in the 2006 census; and a rough estimate is that there were 2,000–3,000 practicing pagans in Ireland in 2009. Numerous pagan groups – primarily Wiccan and Druidic – exist in Ireland though none is officially recognised by the Government. Irish Paganism is often strongly concerned with issues of place and language. Canada does not provide extremely detailed records of religious adherence. Its statistics service only collects limited religious information each decade. The 2001 census recorded a number of Pagans in Canada. The United States government does not directly collect religious information. As a result, such information is provided by religious institutions and other third-party statistical organisations. Based on the most recent survey by the Pew Forum on religion, there are over one million Pagans in the United States. 
Up to 0.4% of respondents answered "Pagan" or "Wiccan" when polled. According to Helen A. Berger's 1995 survey "The Pagan Census", most American Pagans are middle-class, educated, and live in urban/suburban areas on the East and West coasts. In the 2011 Australian census, the respondents who identified as Pagan composed approximately 0.15% of the population. The Australian Bureau of Statistics classifies Paganism as an affiliation under which several sub-classifications may optionally be specified. This includes animism, nature religion, Druidism, pantheism, and Witchcraft. As a result, fairly detailed breakdowns of Pagan respondents are available. In 2006, Pagans made up at least 1.64‰ of New Zealand's population of approximately 4 million. Respondents were given the option to select one or more religious affiliations. Based upon her study of the pagan community in the United States, the sociologist Margot Adler noted that it is rare for Pagan groups to proselytize in order to gain new converts to their faiths. Instead, she argued that "in most cases", converts first become interested in the movement through "word of mouth, a discussion between friends, a lecture, a book, an article or a Web site". She went on to put forward the idea that this typically confirmed "some original, private experience, so that the most common experience of those who have named themselves pagan is something like 'I finally found a group that has the same religious perceptions I always had'". A practicing Wiccan herself, Adler used her own conversion to paganism as a case study, remarking that as a child she had taken a great interest in the gods and goddesses of ancient Greece, and had performed her own devised rituals in dedication to them. When she eventually came across the Wiccan religion many years later, she then found that it confirmed her earlier childhood experiences, and that "I never converted in the accepted sense. 
I simply accepted, reaffirmed, and extended a very old experience." Folklorist Sabina Magliocco supported this idea, noting that a great many of those Californian Pagans whom she interviewed claimed that they had been greatly interested in mythology and folklore as children, imagining a world of "enchanted nature and magical transformations, filled with lords and ladies, witches and wizards, and humble but often wise peasants". Magliocco noted that it was this world that pagans "strive to re-create in some measure". Further support for Adler's idea came from American Wiccan priestess Judy Harrow, who noted that among her comrades, there was a feeling that "you don't "become" pagan, you discover that you always were". They have also been supported by Pagan studies scholar Graham Harvey. Many pagans in North America encounter the movement through their involvement in other hobbies; particularly popular with U.S. Pagans are "golden age"-type pastimes such as the Society for Creative Anachronism (SCA), "Star Trek" fandom, "Doctor Who" fandom and comic book fandom. Other ways in which many North American pagans have become involved with the movement include political or ecological activism, such as "vegetarian groups, health food stores" or feminist university courses. Adler went on to note that from those she interviewed and surveyed in the U.S., she could identify a number of common factors that led to people getting involved in Paganism: the beauty, vision and imagination that was found within their beliefs and rituals, a sense of intellectual satisfaction and personal growth that they imparted, their support for environmentalism or feminism, and a sense of freedom. Based upon her work in the United States, Adler found that the pagan movement was "very diverse" in its class and ethnic background. 
She went on to remark that she had encountered pagans in jobs that ranged from "fireman to PhD chemist", but that the one thing she thought made them an "elite" was that they were avid readers; avid reading was very common within the pagan community even though avid readers constituted less than 20% of the general population of the United States at the time. Magliocco came to a somewhat different conclusion based upon her ethnographic research of pagans in California, remarking that the majority were "white, middle-class, well-educated urbanites" but that they were united in finding "artistic inspiration" within "folk and indigenous spiritual traditions". The sociologist Regina Oboler examined the role of gender in the U.S. Pagan community, arguing that although the movement had been constant in its support for the equality of men and women ever since its foundation, there was still an essentialist view of gender ingrained within it, with female deities being accorded traditional western feminine traits and male deities being similarly accorded what western society saw as masculine traits. An issue of academic debate has been regarding the connection between the New Age movement and contemporary Paganism, or Neo-Paganism. Religious studies scholar Sarah Pike asserted that there was a "significant overlap" between the two religious movements, while Aidan A. Kelly stated that Paganism "parallels the New Age movement in some ways, differs sharply from it in others, and overlaps it in some minor ways". Ethan Doyle White stated that while the Pagan and New Age movements "do share commonalities and overlap", they were nevertheless "largely distinct phenomena." Hanegraaff suggested that whereas various forms of contemporary Paganism were not part of the New Age movement – particularly those who pre-dated the movement – other Pagan religions and practices could be identified as New Age. 
Various differences between the two movements have been highlighted; the New Age movement focuses on an improved future, whereas the focus of Paganism is on the pre-Christian past. Similarly, the New Age movement typically propounds a universalist message which sees all religions as fundamentally the same, whereas Paganism stresses the difference between monotheistic religions and those embracing a polytheistic or animistic theology. Further, the New Age movement shows little interest in magic and witchcraft, which are conversely core interests of many Pagan religions, such as Wicca. Many Pagans have sought to distance themselves from the New Age movement, even using "New Age" as an insult within their community, while conversely many involved in the New Age have expressed criticism of Paganism for emphasizing the material world over the spiritual. Many Pagans have expressed criticism of the high fees charged by New Age teachers, something not typically present in the Pagan movement. Because of their common links to the Proto-Indo-European culture, many adherents of modern Paganism have come to regard Hinduism as a spiritual relative. Some modern Pagan literature prominently features comparative religion involving European and Indian traditions. The European Congress of Ethnic Religions has made efforts to establish mutual support with Hindu groups, as has the Lithuanian Romuva movement. In India, a prominent figure who made similar efforts was the Hindu revivalist Ram Swarup, who pointed out parallels between Hinduism and European and Arabic paganism. Swarup reached out to modern Pagans in the West. He also had an influence on Western converts to Hinduism, notably David Frawley and Koenraad Elst, who both have described Hinduism as a form of paganism. The modern Pagan writer Christopher Gérard has drawn much inspiration from Hinduism and visited Swarup in India. 
Reviewing Gérard's book "Parcours païen" in 2001, the historian of religion Jean-François Mayer described Gérard's activities as part of the development of a "Western-Hindu 'pagan axis'". In the Islamic World, Pagans are not considered people of the book, so they are not protected under Islamic religious law. Regarding European paganism, in "Modern Paganism in World Cultures: Comparative Perspectives" Michael F. Strmiska writes that "in Pagan magazines, websites, and Internet discussion venues, Christianity is frequently denounced as an antinatural, antifemale, sexually and culturally repressive, guilt-ridden, and authoritarian religion that has fostered intolerance, hypocrisy, and persecution throughout the world." Further, there is a common belief in the pagan community that Christianity and Paganism are opposing belief systems. This animosity is inflamed by historical conflicts between Christian and pre-Christian religions, as well as by perceived ongoing disdain from Christians. Some Pagans have claimed that Christian authorities have never apologized for the religious displacement of Europe's pre-Christian belief systems, particularly following the Roman Catholic Church's apology for past anti-semitism in its "". They also express disapproval of Christianity's continued missionary efforts around the globe at the expense of indigenous and other polytheistic faiths. Some Christian authors have published books criticizing modern Paganism, while other Christian critics have equated Paganism with Satanism, and Paganism is often portrayed as Satanic in the mainstream entertainment industry. In areas such as the U.S. Bible Belt, where conservative Christian dominance is strong, Pagans have faced continued religious persecution. For instance, Strmiska highlighted instances in both the U.S. and U.K. in which school teachers were fired when their employers discovered that they were Pagan. Thus, many Pagans keep their religion private to avoid discrimination and ostracism. 
The earliest academic studies of contemporary Paganism were published in the late 1970s and 1980s by scholars like Margot Adler, Marcello Truzzi and Tanya Luhrmann, although it would not be until the 1990s that the actual multidisciplinary academic field of Pagan studies properly developed, pioneered by academics such as Graham Harvey and Chas S. Clifton. Increasing academic interest in Paganism has been attributed to the new religious movement's increasing public visibility, as it began interacting with the interfaith movement and holding large public celebrations at sites like Stonehenge. The first international academic conference on the subject of Pagan studies was held at the University of Newcastle upon Tyne, North-East England in 1993. It was organised by two British religious studies scholars, Graham Harvey and Charlotte Hardman. In April 1996 a larger conference dealing with contemporary Paganism took place at Ambleside in the Lake District. Organised by the Department of Religious Studies at the University of Lancaster, North-West England, it was entitled "Nature Religion Today: Western Paganism, Shamanism and Esotericism in the 1990s", and led to the publication of an academic anthology, entitled "Nature Religion Today: Paganism in the Modern World". In 2004, the first peer-reviewed, academic journal devoted to Pagan studies began publication. "The Pomegranate: The International Journal of Pagan Studies" was edited by Clifton, while the academic publishers AltaMira Press began release of the Pagan Studies Series. From 2008 onward, conferences have been held bringing together scholars specialising in the study of Paganism in Central and Eastern Europe. The relationship between Pagan studies scholars and some practising Pagans has at times been strained. 
The Australian academic and practising Pagan Caroline Jane Tully argues that many Pagans can react negatively to new scholarship regarding historical pre-Christian societies, believing that it is a threat to the structure of their beliefs and to their "sense of identity". She furthermore argues that some of those dissatisfied Pagans lashed out against academics as a result, particularly on the Internet.
https://en.wikipedia.org/wiki?curid=21686
NTSC NTSC, named after the National Television System Committee, is the analog television color system that was introduced in North America in 1954 and stayed in use until digital conversion. It was one of three major analog color television standards, the others being PAL and SECAM. All the countries using NTSC are currently in the process of conversion, or have already converted to the ATSC standard, or to DVB, ISDB, or DTMB. This page primarily discusses the NTSC color encoding system. The articles on broadcast television systems and analog television further describe frame rates, image resolution, and audio modulation. The NTSC standard was used in most of North America, western South America, Liberia, Myanmar, South Korea, Taiwan, Philippines, Japan, and some Pacific island nations and territories (see map). Most countries using the NTSC standard, as well as those using other analog television standards, have switched to, or are in process of switching to, newer digital television standards, with there being at least four different standards in use around the world. North America, parts of Central America, and South Korea are adopting or have adopted the ATSC standards, while other countries, such as Japan, are adopting or have adopted other standards instead of ATSC. After nearly 70 years, the majority of over-the-air NTSC transmissions in the United States ceased on January 1, 2010, and by August 31, 2011 in Canada and most other NTSC markets. The majority of NTSC transmissions ended in Japan on July 24, 2011, with the Japanese prefectures of Iwate, Miyagi, and Fukushima ending the next year. After a pilot program in 2013, most full-power analog stations in Mexico left the air on ten dates in 2015, with some 500 low-power and repeater stations allowed to remain in analog until the end of 2016. 
Digital broadcasting allows higher-resolution television, but digital standard definition television continues to use the frame rate and number of lines of resolution established by the analog NTSC standard. The first NTSC standard was developed in 1941 and had no provision for color. In 1953, a second NTSC standard was adopted, which allowed for color television broadcasting which was compatible with the existing stock of black-and-white receivers. NTSC was the first widely adopted broadcast color system and remained dominant until the 2000s, when it started to be replaced with different digital standards such as ATSC and others. The National Television System Committee was established in 1940 by the United States Federal Communications Commission (FCC) to resolve the conflicts between companies over the introduction of a nationwide analog television system in the United States. In March 1941, the committee issued a technical standard for black-and-white television that built upon a 1936 recommendation made by the Radio Manufacturers Association (RMA). Technical advancements of the vestigial side band technique allowed for the opportunity to increase the image resolution. The NTSC selected 525 scan lines as a compromise between RCA's 441-scan line standard (already being used by RCA's NBC TV network) and Philco's and DuMont's desire to increase the number of scan lines to between 605 and 800. The standard recommended a frame rate of 30 frames (images) per second, consisting of two interlaced fields per frame at 262.5 lines per field and 60 fields per second. Other standards in the final recommendation were an aspect ratio of 4:3, and frequency modulation (FM) for the sound signal (which was quite new at the time). In January 1950, the committee was reconstituted to standardize color television. The FCC had briefly approved a color television standard in October 1950 which was developed by CBS. 
The CBS system was incompatible with existing black-and-white receivers. It used a rotating color wheel, reduced the number of scan lines from 525 to 405, and increased the field rate from 60 to 144, but had an effective frame rate of only 24 frames per second. Legal action by rival RCA kept commercial use of the system off the air until June 1951, and regular broadcasts only lasted a few months before manufacture of all color television sets was banned by the Office of Defense Mobilization in October, ostensibly due to the Korean War. CBS discontinued its system in March 1953, and the FCC replaced it on December 17, 1953, with the NTSC color standard, which was cooperatively developed by several companies, including RCA and Philco. In December 1953 the FCC unanimously approved what is now called the "NTSC" color television standard (later defined as RS-170a). The compatible color standard retained full backward compatibility with then-existing black-and-white television sets. Color information was added to the black-and-white image by introducing a color subcarrier of precisely 315/88 MHz (usually described as 3.579545 MHz±10 Hz or about 3.58 MHz). The precise frequency was chosen so that horizontal line-rate modulation components of the chrominance signal fall exactly in between the horizontal line-rate modulation components of the luminance signal, thereby enabling the chrominance signal to be filtered out of the luminance signal with minor degradation of the luminance signal. (This also minimized the subcarrier's visibility on existing sets that do not filter it out.) Due to limitations of frequency divider circuits at the time the color standard was promulgated, the color subcarrier frequency was constructed as a composite frequency assembled from small integers, in this case 5×7×9/(8×11) MHz. 
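The frequency relationships in the NTSC color standard are exact rational ones, so they can be checked with Python's `fractions` module. This is an illustrative sketch (the variable names are mine, not part of any standard), reproducing the figures quoted in the surrounding text:

```python
from fractions import Fraction

MHz = 10**6

# Color subcarrier assembled from small integer factors: 5*7*9/(8*11) MHz
f_sc = Fraction(5 * 7 * 9, 8 * 11) * MHz   # = 315/88 MHz, about 3.579545 MHz

# The subcarrier is an odd multiple (455/2) of the horizontal line rate, so
# chrominance components interleave exactly between luminance line-rate
# harmonics.  Equivalently, the line rate is 2/455 of the subcarrier:
f_h = f_sc * Fraction(2, 455)              # = 9/572 MHz, about 15,734.27 Hz

f_frame = f_h / 525                        # 525 lines per frame
f_field = 2 * f_frame                      # two interlaced fields per frame

assert f_h == Fraction(9, 572) * MHz
assert f_frame == Fraction(30_000, 1_001)  # exactly 30/1.001, about 29.970 Hz
```

Because every quantity is derived from one master frequency by small-integer ratios, a station needs only a single stable oscillator to keep sound, color, and scanning locked together.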
The horizontal line rate was reduced to approximately 15,734 lines per second (3.579545×2/455 MHz = 9/572 MHz) from 15,750 lines per second, and the frame rate was reduced to 30/1.001 ≈ 29.970 frames per second (the horizontal line rate divided by 525 lines/frame) from 30 frames per second. These changes amounted to 0.1 percent and were readily tolerated by then-existing television receivers. The first publicly announced network television broadcast of a program using the NTSC "compatible color" system was an episode of NBC's "Kukla, Fran and Ollie" on August 30, 1953, although it was viewable in color only at the network's headquarters. The first nationwide viewing of NTSC color came on the following January 1 with the coast-to-coast broadcast of the Tournament of Roses Parade, viewable on prototype color receivers at special presentations across the country. The first color NTSC television camera was the RCA TK-40, used for experimental broadcasts in 1953; an improved version, the TK-40A, introduced in March 1954, was the first commercially available color television camera. Later that year, the improved TK-41 became the standard camera used throughout much of the 1960s. The NTSC standard has been adopted by other countries, including most of the Americas and Japan. With the advent of digital television, analog broadcasts are being phased out. Most US NTSC broadcasters were required by the FCC to shut down their analog transmitters in 2009. Low-power stations, Class A stations and translators were required to shut down by 2015. NTSC color encoding is used with the System M television signal, which consists of 30/1.001 (approximately 29.97) interlaced frames of video per second. Each frame is composed of two fields, each consisting of 262.5 scan lines, for a total of 525 scan lines. 483 scan lines make up the visible raster. The remainder (the vertical blanking interval) allows for vertical synchronization and retrace. 
This blanking interval was originally designed to simply blank the electron beam of the receiver's CRT to allow for the simple analog circuits and slow vertical retrace of early TV receivers. However, some of these lines may now contain other data such as closed captioning and vertical interval timecode (VITC). In the complete raster (disregarding half lines due to interlacing) the even-numbered scan lines (every other line that would be even if counted in the video signal, e.g. {2, 4, 6, ..., 524}) are drawn in the first field, and the odd-numbered (every other line that would be odd if counted in the video signal, e.g. {1, 3, 5, ..., 525}) are drawn in the second field, to yield a flicker-free image at the field refresh frequency of 60/1.001 Hz (approximately 59.94 Hz). For comparison, 576i systems such as PAL-B/G and SECAM use 625 lines (576 visible), and so have a higher vertical resolution, but a lower temporal resolution of 25 frames or 50 fields per second. The NTSC field refresh frequency in the black-and-white system originally exactly matched the nominal 60 Hz frequency of alternating current power used in the United States. Matching the field refresh rate to the power source avoided intermodulation (also called "beating"), which produces rolling bars on the screen. Synchronization of the refresh rate to the power incidentally helped kinescope cameras record early live television broadcasts, as it was very simple to synchronize a film camera to capture one frame of video on each film frame by using the alternating current frequency to set the speed of the synchronous AC motor-drive camera. When color was added to the system, the refresh frequency was shifted slightly downward by 0.1% to approximately 59.94 Hz to eliminate stationary dot patterns in the difference frequency between the sound and color carriers, as explained below in "Color encoding". 
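The size of that 0.1% shift follows from making the unchanged 4.5 MHz sound intercarrier an exact harmonic (the 286th) of the new line rate, which is the standard derivation of the 59.94 Hz figure; the sketch below assumes that relationship and verifies the numbers quoted above:

```python
from fractions import Fraction

# Keep the 4.5 MHz sound intercarrier and make it the 286th harmonic of
# the new horizontal line rate (286 = 2 * 11 * 13, again small factors):
f_sound = Fraction(4_500_000)
f_h = f_sound / 286                 # about 15,734.27 Hz, down from 15,750 Hz
f_field = f_h * 2 / 525             # two fields of 262.5 lines each

shift = 1 - f_field / 60            # fractional drop from the original 60 Hz
assert shift == Fraction(1, 1001)   # just under 0.1 percent
```

The same 1/1001 factor is why NTSC frame rates are quoted as 30/1.001 ≈ 29.97 rather than a round 30.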
By the time the frame rate changed to accommodate color, it was nearly as easy to trigger the camera shutter from the video signal itself. The actual figure of 525 lines was chosen as a consequence of the limitations of the vacuum-tube-based technologies of the day. In early TV systems, a master voltage-controlled oscillator was run at twice the horizontal line frequency, and this frequency was divided down by the number of lines used (in this case 525) to give the field frequency (60 Hz in this case). This frequency was then compared with the 60 Hz power-line frequency and any discrepancy corrected by adjusting the frequency of the master oscillator. For interlaced scanning, an odd number of lines per frame was required in order to make the vertical retrace distance identical for the odd and even fields, which meant the master oscillator frequency had to be divided down by an odd number. At the time, the only practical method of frequency division was the use of a chain of vacuum tube multivibrators, the overall division ratio being the mathematical product of the division ratios of the chain. Since all the factors of an odd number also have to be odd numbers, it follows that all the dividers in the chain also had to divide by odd numbers, and these had to be relatively small due to the problems of thermal drift with vacuum tube devices. The closest practical sequence to 500 that meets these criteria was 3 × 5 × 5 × 7 = 525. (For the same reason, 625-line PAL-B/G and SECAM use 5 × 5 × 5 × 5 = 625, the old British 405-line system used 3 × 3 × 3 × 3 × 5 = 405, and the French 819-line system used 3 × 3 × 7 × 13 = 819.) The original 1953 color NTSC specification, still part of the United States Code of Federal Regulations, defined the colorimetric values of the system as follows: Early color television receivers, such as the RCA CT-100, were faithful to this specification (which was based on prevailing motion picture standards), having a larger gamut than most of today's monitors. 
Their low-efficiency phosphors (notably in the red) were weak and long-persistent, leaving trails after moving objects. Starting in the late 1950s, picture tube phosphors would sacrifice saturation for increased brightness; this deviation from the standard at both the receiver and broadcaster was the source of considerable color variation. To ensure more uniform color reproduction, receivers started to incorporate color correction circuits that converted the received signal—encoded for the original 1953 colorimetric values—into signals encoded for the phosphors actually used within the monitor. Since such color correction cannot be performed accurately on the nonlinear gamma-corrected signals transmitted, the adjustment can only be approximated, introducing both hue and luminance errors for highly saturated colors. Similarly, at the broadcaster stage, in 1968–69 the Conrac Corp., working with RCA, defined a set of controlled phosphors for use in broadcast color picture video monitors. This specification survives today as the SMPTE "C" phosphor specification. As with home receivers, it was further recommended that studio monitors incorporate similar color correction circuits so that broadcasters would transmit pictures encoded for the original 1953 colorimetric values, in accordance with FCC standards. In 1987, the "Society of Motion Picture and Television Engineers (SMPTE) Committee on Television Technology, Working Group on Studio Monitor Colorimetry", adopted the SMPTE C (Conrac) phosphors for general use in Recommended Practice 145, prompting many manufacturers to modify their camera designs to directly encode for SMPTE "C" colorimetry without color correction, as approved in SMPTE standard 170M, "Composite Analog Video Signal – NTSC for Studio Applications" (1994). As a consequence, the ATSC digital television standard states that for 480i signals, SMPTE "C" colorimetry should be assumed unless colorimetric data is included in the transport stream. 
Japanese NTSC never changed primaries and whitepoint to SMPTE "C", continuing to use the 1953 NTSC primaries and whitepoint. Both the PAL and SECAM systems used the original 1953 NTSC colorimetry as well until 1970; unlike NTSC, however, the European Broadcasting Union (EBU) rejected color correction in receivers and studio monitors that year and instead explicitly called for all equipment to directly encode signals for the "EBU" colorimetric values, further improving the color fidelity of those systems. For backward compatibility with black-and-white television, NTSC uses a luminance-chrominance encoding system invented in 1938 by Georges Valensi. The "three" color picture signals are divided into Luminance (derived mathematically from the three separate color signals (Red, Green and Blue)), which takes the place of the original monochrome signal, and Chrominance, which carries "only" the color information. This process is applied to "each" color source by its own Colorplexer, thereby allowing a compatible color source to be managed as if it were an ordinary monochrome source. This allows black-and-white receivers to display NTSC color signals by simply ignoring the chrominance signal. Some black-and-white TVs sold in the U.S. after the introduction of color broadcasting in 1953 were designed to filter chroma out, but the early B&W sets did not do this and chrominance could be seen as a 'dot pattern' in highly colored areas of the picture. In NTSC, chrominance is encoded using two color signals known as I (in-phase) and Q (in quadrature) in a process called QAM. The two signals each amplitude-modulate 3.58 MHz carriers which are 90 degrees out of phase with each other, and the results are added together with the carriers themselves suppressed. The result can be viewed as a single sine wave with varying phase relative to a reference carrier and with varying amplitude. 
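The luminance/chrominance split described above can be sketched numerically. The coefficients below are the commonly cited approximate NTSC values, not figures quoted from this article, and the function name is illustrative:

```python
# Illustrative sketch of the NTSC luminance/chrominance split.
# Coefficients are the commonly cited approximate values (an assumption,
# not quoted from this article).

def rgb_to_yiq(r, g, b):
    """Convert gamma-corrected R'G'B' (each 0..1) to Y (luminance) and I, Q (chrominance)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # what a black-and-white set displays
    i = 0.596 * r - 0.274 * g - 0.322 * b   # in-phase chrominance component
    q = 0.211 * r - 0.523 * g + 0.312 * b   # quadrature chrominance component
    return y, i, q

# White (and any gray) carries no chrominance: I and Q come out (near) zero,
# which is why a monochrome receiver can simply ignore the chroma signal.
y, i, q = rgb_to_yiq(1.0, 1.0, 1.0)
```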
The varying phase represents the instantaneous "color hue" captured by a TV camera, and the amplitude represents the instantaneous "color saturation". This 3.58 MHz subcarrier is then added to the Luminance to form the 'composite color signal' which modulates the video signal carrier just as in monochrome transmission. For a color TV to recover hue information from the color subcarrier, it must have a zero phase reference to replace the previously suppressed carrier. The NTSC signal includes a short sample of this reference signal, known as the colorburst, located on the 'back porch' of each horizontal synchronization pulse. The color burst consists of a minimum of eight cycles of the unmodulated (fixed phase and amplitude) color subcarrier. The TV receiver has a "local oscillator", which is synchronized with these color bursts. Combining this reference phase signal derived from the color burst with the chrominance signal's amplitude and phase allows the recovery of the 'I' and 'Q' signals which when combined with the Luminance information allows the reconstruction of a color image on the screen. Color TV has been said to really be color"ed" TV because of the total separation of the brightness part of the picture from the color portion. In CRT televisions, the NTSC signal is turned into three color signals called Red, Green and Blue, each controlling that color electron gun. TV sets with digital circuitry use sampling techniques to process the signals but the end result is the same. For both analog and digital sets processing an analog NTSC signal, the original three color signals (Red, Green and Blue) are transmitted using three discrete signals (Luminance, I and Q) and then recovered as three separate colors and combined as a color image. When a transmitter broadcasts an NTSC signal, it amplitude-modulates a radio-frequency carrier with the NTSC signal just described, while it frequency-modulates a carrier 4.5 MHz higher with the audio signal. 
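The suppressed-carrier quadrature modulation and colorburst-referenced recovery described above can be sketched as follows; the sample rate, test values, and averaging filter are illustrative assumptions, not broadcast parameters:

```python
import math

# Toy sketch (assumed parameters, not broadcast-accurate): constant I and Q
# values are quadrature-modulated with the carrier suppressed, then recovered
# by synchronous detection against a zero-phase reference, as a color TV does
# using the colorburst-locked local oscillator.

FSC = 3.58e6          # color subcarrier, Hz (nominal)
FS = FSC * 40         # sample rate: 40 samples per subcarrier cycle (assumption)
I_IN, Q_IN = 0.4, -0.2

N = 40 * 100          # an integer number of subcarrier cycles
t = [n / FS for n in range(N)]

# chroma(t) = I·cos(wt) + Q·sin(wt): a single sine wave whose phase encodes
# hue and whose amplitude encodes saturation; the carrier itself is absent.
chroma = [I_IN * math.cos(2 * math.pi * FSC * x) + Q_IN * math.sin(2 * math.pi * FSC * x)
          for x in t]

# Synchronous detection: multiply by the reference carrier and average over
# whole cycles (a crude low-pass filter).
i_out = 2 * sum(c * math.cos(2 * math.pi * FSC * x) for c, x in zip(chroma, t)) / N
q_out = 2 * sum(c * math.sin(2 * math.pi * FSC * x) for c, x in zip(chroma, t)) / N
```

Averaging the product over whole cycles recovers I and Q because cos² and sin² average to ½ while the cross terms average to zero, which is exactly why the receiver needs the colorburst to re-establish the reference phase.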
If non-linear distortion happens to the broadcast signal, the 3.579545 MHz color carrier may beat with the sound carrier to produce a dot pattern on the screen. To make the resulting pattern less noticeable, designers adjusted the original 15,750 Hz scanline rate down by a factor of 1.001 (0.1%) to match the audio carrier frequency divided by the factor 286, resulting in a field rate of approximately 59.94 Hz. This adjustment ensures that the difference between the sound carrier and the color subcarrier (the most problematic intermodulation product of the two carriers) is an odd multiple of half the line rate, which is the necessary condition for the dots on successive lines to be opposite in phase, making them least noticeable. The 59.94 rate is derived from the following calculations. Designers chose to make the chrominance subcarrier frequency an "n" + 0.5 multiple of the line frequency to minimize interference between the luminance signal and the chrominance signal. (Another way this is often stated is that the color subcarrier frequency is an odd multiple of half the line frequency.) They then chose to make the audio subcarrier frequency an integer multiple of the line frequency to minimize visible (intermodulation) interference between the audio signal and the chrominance signal. The original black-and-white standard, with its 15,750 Hz line frequency and 4.5 MHz audio subcarrier, does not meet these requirements, so designers had either to raise the audio subcarrier frequency or lower the line frequency. Raising the audio subcarrier frequency would prevent existing (black and white) receivers from properly tuning in the audio signal. Lowering the line frequency is comparatively innocuous, because the horizontal and vertical synchronization information in the NTSC signal allows a receiver to tolerate a substantial amount of variation in the line frequency. So the engineers chose the line frequency to be changed for the color standard. 
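A quick numerical check of the constraints described above, using the nominal figures from the text (4.5 MHz audio carrier, divisor 286, 262.5 lines per field, 3.579545 MHz color subcarrier):

```python
# Numerical check of the NTSC color-standard frequency relationships
# described in the text (nominal figures).

AUDIO_CARRIER = 4_500_000        # Hz above the video carrier (kept from B&W)
COLOR_SUBCARRIER = 3_579_545     # Hz (3.579545 MHz, nominal)

line_rate = AUDIO_CARRIER / 286  # audio carrier an integer (286th) multiple of line rate
field_rate = line_rate / 262.5   # 262.5 lines per field

# The sound/color difference frequency must be an odd multiple of half the
# line rate so the dot pattern alternates phase on successive lines.
beat = AUDIO_CARRIER - COLOR_SUBCARRIER
multiple = round(beat / (line_rate / 2))

print(round(line_rate))              # 15734
print(round(field_rate, 2))          # 59.94
print(multiple, multiple % 2 == 1)   # 117 True
```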
In the black-and-white standard, the ratio of audio subcarrier frequency to line frequency is 4,500,000 ÷ 15,750 ≈ 285.71. In the color standard, this is rounded to the integer 286, which means the color standard's line rate is 4,500,000 ÷ 286 ≈ 15,734 Hz. Maintaining the same number of scan lines per field (and frame), the lower line rate must yield a lower field rate. Dividing 15,734 lines per second by 262.5 lines per field gives approximately 59.94 fields per second. An NTSC television channel as transmitted occupies a total bandwidth of 6 MHz. The actual video signal, which is amplitude-modulated, is transmitted between 500 kHz and 5.45 MHz above the lower bound of the channel. The video carrier is 1.25 MHz above the lower bound of the channel. Like most AM signals, the video carrier generates two sidebands, one above the carrier and one below. The sidebands are each 4.2 MHz wide. The entire upper sideband is transmitted, but only 1.25 MHz of the lower sideband, known as a vestigial sideband, is transmitted. The color subcarrier, as noted above, is 3.579545 MHz above the video carrier, and is quadrature-amplitude-modulated with a suppressed carrier. The audio signal is frequency-modulated, like the audio signals broadcast by FM radio stations in the 88–108 MHz band, but with a 25 kHz maximum frequency deviation, as opposed to the 75 kHz used on the FM band, making analog television audio signals sound quieter than FM radio signals as received on a wideband receiver. The main audio carrier is 4.5 MHz above the video carrier, making it 250 kHz below the top of the channel. Sometimes a channel may contain an MTS signal, which offers more than one audio signal by adding one or two subcarriers on the audio signal, each synchronized to a multiple of the line frequency. This is normally the case when stereo audio and/or second audio program signals are used. The same extensions are used in ATSC, where the ATSC digital carrier is broadcast at 0.31 MHz above the lower bound of the channel. 
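The channel layout described above can be checked numerically (a sketch; all offsets in MHz above the lower edge of the channel):

```python
# Sketch of the 6 MHz NTSC channel layout described in the text
# (all offsets in MHz above the lower edge of the channel).

CHANNEL_WIDTH = 6.0
VIDEO_CARRIER = 1.25                          # video carrier position
color_subcarrier = VIDEO_CARRIER + 3.579545   # QAM color subcarrier
audio_carrier = VIDEO_CARRIER + 4.5           # FM audio carrier

# The audio carrier sits 250 kHz (0.25 MHz) below the top of the channel.
headroom = CHANNEL_WIDTH - audio_carrier

print(round(color_subcarrier, 6))   # 4.829545
print(audio_carrier)                # 5.75
print(headroom)                     # 0.25
```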
"Setup" is a 54 mV(7.5 IRE) voltage offset between the "black" and "blanking" levels. It is unique to NTSC. CVBS stands for Color, Video, Blanking, and Sync. There is a large difference in frame rate between film, which runs at 24.0 frames per second, and the NTSC standard, which runs at approximately 29.97 (10 MHz×63/88/455/525) frames per second. In regions that use 25-fps television and video standards, this difference can be overcome by speed-up. For 30-fps standards, a process called "3:2 pulldown" is used. One film frame is transmitted for three video fields (lasting  video frames), and the next frame is transmitted for two video fields (lasting 1 video frame). Two film frames are thus transmitted in five video fields, for an average of  video fields per film frame. The average frame rate is thus 60 ÷ 2.5 = 24 frames per second, so the average film speed is nominally exactly what it should be. (In reality, over the course of an hour of real time, 215,827.2 video fields are displayed, representing 86,330.88 frames of film, while in an hour of true 24-fps film projection, exactly 86,400 frames are shown: thus, 29.97-fps NTSC transmission of 24-fps film runs at 99.92% of the film's normal speed.) Still-framing on playback can display a video frame with fields from two different film frames, so any difference between the frames will appear as a rapid back-and-forth flicker. There can also be noticeable jitter/"stutter" during slow camera pans (telecine judder). To avoid 3:2 pulldown, film shot specifically for NTSC television is often taken at 30 frame/s. To show 25-fps material (such as European television series and some European movies) on NTSC equipment, every fifth frame is duplicated and then the resulting stream is interlaced. Film shot for NTSC television at 24 frames per second has traditionally been accelerated by 1/24 (to about 104.17% of normal speed) for transmission in regions that use 25-fps television standards. 
This increase in picture speed has traditionally been accompanied by a similar increase in the pitch and tempo of the audio. More recently, frame-blending has been used to convert 24 FPS video to 25 FPS without altering its speed. Film shot for television in regions that use 25-fps television standards can be handled in either of two ways: Because both film speeds have been used in 25-fps regions, viewers can face confusion about the true speed of video and audio, and the pitch of voices, sound effects, and musical performances, in television films from those regions. For example, they may wonder whether the Jeremy Brett series of Sherlock Holmes television films, made in the 1980s and early 1990s, was shot at 24 fps and then transmitted at an artificially fast speed in 25-fps regions, or whether it was shot at 25 fps natively and then slowed to 24 fps for NTSC exhibition. These discrepancies exist not only in television broadcasts over the air and through cable, but also in the home-video market, on both tape and disc, including laser disc and DVD. In digital television and video, which are replacing their analog predecessors, single standards that can accommodate a wider range of frame rates still show the limits of the analog regional standards. The initial version of the ATSC standard, for example, allowed frame rates of 23.976, 24, 29.97, 30, 59.94, and 60 frames per second, but not 25 and 50. Modern ATSC allows 25 and 50 FPS. Because satellite power is severely limited, analog video transmission through satellites differs from terrestrial TV transmission. AM is a linear modulation method, so a given demodulated signal-to-noise ratio (SNR) requires an equally high received RF SNR. The SNR of studio-quality video is over 50 dB, so AM would require prohibitively high powers and/or large antennas. Wideband FM is used instead to trade RF bandwidth for reduced power. Increasing the channel bandwidth from 6 to 36 MHz allows an RF SNR of only 10 dB or less. 
The wider noise bandwidth reduces this 40 dB power saving by 36 MHz / 6 MHz = 8 dB, for a substantial net reduction of 32 dB. Sound is on an FM subcarrier as in terrestrial transmission, but frequencies above 4.5 MHz are used to reduce aural/visual interference; 6.8, 5.8 and 6.2 MHz are commonly used. Stereo can be multiplex, discrete, or matrix, and unrelated audio and data signals may be placed on additional subcarriers. A triangular 60 Hz energy dispersal waveform is added to the composite baseband signal (video plus audio and data subcarriers) before modulation. This limits the satellite downlink power spectral density in case the video signal is lost. Otherwise the satellite might transmit all of its power on a single frequency, interfering with terrestrial microwave links in the same frequency band. In half-transponder mode, the frequency deviation of the composite baseband signal is reduced to 18 MHz to allow another signal in the other half of the 36 MHz transponder. This reduces the FM benefit somewhat, and the recovered SNRs are further reduced because the combined signal power must be "backed off" to avoid intermodulation distortion in the satellite transponder. A single FM signal is constant amplitude, so it can saturate a transponder without distortion. An NTSC "frame" consists of an "even" field followed by an "odd" field. As far as the reception of an analog signal is concerned, this is purely a matter of convention and makes no difference. It is rather like the broken lines running down the middle of a road: it does not matter whether it is a line/space pair or a space/line pair; the effect for a driver is exactly the same. The introduction of digital television formats has changed things somewhat. Most digital TV formats store and transmit fields in pairs as a single digital frame. 
Digital formats that match the NTSC field rate, including the popular DVD format, record video with the "even field first" in the digital frame, while formats that match the field rate of the 625-line system often record video with the "odd field first". This means that when reproducing many non-NTSC-based digital formats it is necessary to reverse the field order, otherwise an unacceptable shuddering "comb" effect occurs on moving objects, as they are shown ahead in one field and then jump back in the next. This has also become a hazard where non-NTSC progressive video is transcoded to interlaced and vice versa. Systems that recover progressive frames or transcode video should ensure that the field order is obeyed, otherwise the recovered frame will consist of a field from one frame and a field from an adjacent frame, resulting in "comb" interlacing artifacts. This can often be observed in PC-based video playing utilities if an inappropriate choice of de-interlacing algorithm is made. During the decades of high-power NTSC broadcasts in the United States, switching between the views from two cameras was accomplished according to two field dominance standards, the choice between the two being made by geography, East versus West. In one region, the switch was made between the odd field that finished one frame and the even field that began the next frame; in the other, the switch was made after an even field and before an odd field. Thus, for example, a home VHS recording made of a local television newscast in the East, when paused, would only ever show the view from one camera (unless a dissolve or other multicamera shot were intended), whereas VHS playback of a situation comedy taped and edited in Los Angeles and then transmitted nationwide could be paused at the moment of a switch between cameras with half the lines depicting the outgoing shot and the other half depicting the incoming shot. 
Unlike PAL and SECAM, with their many varied underlying broadcast television systems in use throughout the world, NTSC color encoding is almost invariably used with broadcast system M, giving NTSC-M. NTSC-N/NTSC50 is an unofficial system combining 625-line video with 3.58 MHz NTSC color. PAL software running on an NTSC Atari ST displays using this system, as it cannot display PAL color. Television sets and monitors with a V-Hold knob can display this system after adjusting the vertical hold. Only Japan's variant "NTSC-J" is slightly different: in Japan, the black level and blanking level of the signal are identical (at 0 IRE), as they are in PAL, while in American NTSC, the black level is slightly higher (7.5 IRE) than the blanking level. Since the difference is quite small, a slight turn of the brightness knob is all that is required to correctly show the "other" variant of NTSC on any set as it is supposed to be; most watchers might not even notice the difference in the first place. The channel encoding on NTSC-J differs slightly from NTSC-M. In particular, the Japanese VHF band runs from channels 1–12 (located on frequencies directly above the 76–90 MHz Japanese FM radio band) while the North American VHF TV band uses channels 2–13 (54–72 MHz, 76–88 MHz and 174–216 MHz) with 88–108 MHz allocated to FM radio broadcasting. Japan's UHF TV channels are therefore numbered from 13 up and not 14 up, but otherwise use the same UHF broadcasting frequencies as those in North America. The Brazilian PAL-M system, introduced on February 19, 1972, uses the same lines/field as NTSC (525/60), and almost the same broadcast bandwidth and scan frequency (15.750 vs. 15.734 kHz). Prior to the introduction of color, Brazil broadcast in standard black-and-white NTSC. As a result, PAL-M signals are near-identical to North American NTSC signals, except for the encoding of the color subcarrier (3.575611 MHz for PAL-M and 3.579545 MHz for NTSC). 
As a consequence of these close specs, PAL-M will display in monochrome with sound on NTSC sets and vice versa. The PAL-N system, used in Argentina, Paraguay and Uruguay, is very similar to PAL-M (used in Brazil). The similarities of NTSC-M and NTSC-N can be seen on the ITU identification scheme table. As shown there, aside from the number of lines and frames per second, the systems are identical. NTSC-N/PAL-N are compatible with sources such as game consoles, VHS/Betamax VCRs, and DVD players. However, they are not compatible with baseband broadcasts (which are received over an antenna), though some newer sets come with baseband NTSC 3.58 support (NTSC 3.58 being the frequency for color modulation in NTSC: 3.58 MHz). In what can be considered an opposite of PAL-60, NTSC 4.43 is a pseudo color system that transmits NTSC encoding (525/29.97) with a color subcarrier of 4.43 MHz instead of 3.58 MHz. The resulting output is only viewable by TVs that support the resulting pseudo-system (usually multi-standard TVs). Using a native NTSC TV to decode the signal yields no color, while using a PAL TV to decode the system yields erratic colors (observed to be lacking red and flickering randomly). The format was used by the USAF TV based in Germany during the Cold War. It was also found as an optional output on some LaserDisc players and some game consoles sold in markets where the PAL system is used. The NTSC 4.43 system, while not a broadcast format, appears most often as a playback function of PAL cassette format VCRs, beginning with the Sony 3/4" U-Matic format and then following onto Betamax and VHS format machines. As Hollywood has the claim of providing the most cassette software (movies and television series) for VCRs for the world's viewers, and as not "all" cassette releases were made available in PAL formats, a means of playing NTSC format cassettes was highly desired. 
Multi-standard video monitors were already in use in Europe to accommodate broadcast sources in PAL, SECAM, and NTSC video formats. The heterodyne color-under process of U-Matic, Betamax & VHS lent itself to minor modification of VCR players to accommodate NTSC format cassettes. The color-under format of VHS uses a 629 kHz subcarrier while U-Matic & Betamax use a 688 kHz subcarrier to carry an "amplitude modulated" chroma signal for both NTSC and PAL formats. Since the VCR was ready to play the color portion of the NTSC recording using PAL color mode, the PAL scanner and capstan speeds had to be adjusted from PAL's 50 Hz field rate to NTSC's 59.94 Hz field rate, and faster linear tape speed. The changes to the PAL VCR are minor thanks to the existing VCR recording formats. The output of the VCR when playing an NTSC cassette in NTSC 4.43 mode is 525 lines/29.97 frames per second with PAL compatible heterodyned color. The multi-standard receiver is already set to support the NTSC H & V frequencies; it just needs to do so while receiving PAL color. The existence of those multi-standard receivers was probably part of the drive for region coding of DVDs. As the color signals are component on disc for all display formats, almost no changes would be required for PAL DVD players to play NTSC (525/29.97) discs as long as the display was frame-rate compatible. In January 1960 (7 years prior to adoption of the modified SECAM version) the experimental TV studio in Moscow started broadcasting using OSKM system. OSKM abbreviation means "Simultaneous system with quadrature modulation" (In Russian: Одновременная Система с Квадратурной Модуляцией). It used the color coding scheme that was later used in PAL (U and V instead of I and Q), because it was based on D/K monochrome standard, 625/50. The color subcarrier frequency was 4.4296875 MHz and the bandwidth of U and V signals was near 1.5 MHz. 
Only circa 4,000 TV sets of 4 models (Raduga, Temp-22, Izumrud-201 and Izumrud-203) were produced for studying the real quality of TV reception. These TVs were not commercially available, despite being included in the goods catalog for the trade network of the USSR. The broadcasting with this system lasted about 3 years and was ceased well before SECAM transmissions started in the USSR. None of the current multi-standard TV receivers can support this TV system. Film content commonly shot at 24 frames/s can be converted to 30 frames/s through the telecine process to duplicate frames as needed. Mathematically for NTSC this is relatively simple, as it is only necessary to duplicate every fourth frame. Various techniques are employed. NTSC with an actual frame rate of 24/1.001 (approximately 23.976) frames/s is often defined as NTSC-film. A process known as pullup, also known as pulldown, generates the duplicated frames upon playback. This method is common for H.262/MPEG-2 Part 2 digital video so the original content is preserved and played back on equipment that can display it or can be converted for equipment that cannot. Sometimes "NTSC-U", "NTSC-US", or "NTSC-U/C" is used to describe the video gaming region of North America (the U/C refers to US + Canada), as regional lockout usually restricts games from being playable outside the region. Reception problems can degrade an NTSC picture by changing the phase of the color signal (actually differential phase distortion), so the color balance of the picture will be altered unless a compensation is made in the receiver. The vacuum-tube electronics used in televisions through the 1960s led to various technical problems. Among other things, the color burst phase would often drift when channels were changed, which is why NTSC televisions were equipped with a tint control. 
PAL and SECAM televisions had no need of one, and although it is still found on NTSC TVs, color drifting generally ceased to be a problem for more modern circuitry by the 1970s. When compared to PAL in particular, NTSC color accuracy and consistency is sometimes considered inferior, leading to video professionals and television engineers jokingly referring to NTSC as "Never The Same Color", "Never Twice the Same Color", or "No True Skin Colors", while for the more expensive PAL system it was necessary to "Pay for Additional Luxury". PAL has also been referred to as "Peace At Last", "Perfection At Last" or "Pictures Always Lovely" in the color war. This mostly applied to vacuum tube-based TVs, however, and later-model solid-state sets using Vertical Interval Reference signals have less of a difference in quality between NTSC and PAL. This color phase, "tint", or "hue" control allows anyone skilled in the art to easily calibrate a monitor with SMPTE color bars, even on a set that has drifted in its color representation, allowing the proper colors to be displayed. Older PAL television sets did not come with a user-accessible "hue" control (it was set at the factory), which contributed to PAL's reputation for reproducible colors. The use of NTSC-coded color in S-Video systems completely eliminates the phase distortions. As a consequence, the use of NTSC color encoding gives the highest-resolution picture quality (on the horizontal axis and in frame rate) of the three color systems when used with this scheme. (The NTSC resolution on the vertical axis is lower than that of the European standards, 525 lines against 625.) However, it uses too much bandwidth for over-the-air transmission. The Atari 800 and Commodore 64 home computers generated S-video, but only when used with specially designed monitors, as no TV at the time supported separate chroma and luma on standard RCA jacks. 
In 1987, a standardized 4-pin mini-DIN socket was introduced for S-video input with the introduction of S-VHS players, which were the first devices produced to use the 4-pin plugs. However, S-VHS never became very popular. Video game consoles in the 1990s began offering S-video output as well. The mismatch between NTSC's 30 frames per second and film's 24 frames is overcome by a process that capitalizes on the "field" rate of the interlaced NTSC signal, thus avoiding the film playback speedup used for 576i systems at 25 frames per second (which causes the accompanying audio to increase in pitch slightly, sometimes rectified with the use of a pitch shifter), at the price of some jerkiness in the video. See Frame rate conversion above. The standard NTSC video image contains some lines (lines 1–21 of each field) that are not visible (this is known as the Vertical Blanking Interval, or VBI); all are beyond the edge of the viewable image, but only lines 1–9 are used for the vertical-sync and equalizing pulses. The remaining lines were deliberately blanked in the original NTSC specification to provide time for the electron beam in CRT-based screens to return to the top of the display. VIR (or Vertical Interval Reference), widely adopted in the 1980s, attempts to correct some of the color problems with NTSC video by adding studio-inserted reference data for luminance and chrominance levels on line 19. Suitably equipped television sets could then employ these data in order to adjust the display to a closer match of the original studio image. The actual VIR signal contains three sections, the first having 70 percent luminance and the same chrominance as the color burst signal, and the other two having 50 percent and 7.5 percent luminance respectively. A less-used successor to VIR, GCR, also added ghost (multipath interference) removal capabilities. 
The remaining vertical blanking interval lines are typically used for datacasting or ancillary data such as video editing timestamps (vertical interval timecodes or SMPTE timecodes on lines 12–14), test data on lines 17–18, a network source code on line 20 and closed captioning, XDS, and V-chip data on line 21. Early teletext applications also used vertical blanking interval lines 14–18 and 20, but teletext over NTSC was never widely adopted by viewers. Many stations transmit TV Guide On Screen (TVGOS) data for an electronic program guide on VBI lines. The primary station in a market will broadcast 4 lines of data, and backup stations will broadcast 1 line. In most markets the PBS station is the primary host. TVGOS data can occupy any line from 10–25, but in practice it is limited to lines 11–18, 20 and 22. Line 22 is only used by two broadcasters, DirecTV and CFPL-TV. TiVo data is also transmitted on some commercials and program advertisements so customers can autorecord the program being advertised, and is also used in weekly half-hour paid programs on Ion Television and the Discovery Channel which highlight TiVo promotions and advertisers. The countries and territories below currently use or once used the NTSC system. Many of these have switched or are currently switching from NTSC to digital television standards such as ATSC (United States, Canada, Mexico, Suriname, South Korea), ISDB (Japan, Philippines and part of South America), DVB-T (Taiwan, Panama, Colombia and Trinidad and Tobago) or DTMB (Cuba). The following countries and regions no longer use NTSC for terrestrial broadcasts.
Number A number is a mathematical object used to count, measure, and label. The original examples are the natural numbers 1, 2, 3, 4, and so forth. To be manipulated, individual numbers need to be represented by symbols, called "numerals"; for example, "5" is a numeral that represents the number five. As only a small number of symbols can be memorized, basic numerals are commonly organized in a numeral system, which is an organized way to represent any number. The most common numeral system is the Hindu–Arabic numeral system, which allows representing any number by a combination of ten basic numerals called digits. In addition to their use in counting and measuring, numerals are often used for labels (as with telephone numbers), for ordering (as with serial numbers), and for codes (as with ISBNs). In common usage, a "numeral" is not clearly distinguished from the "number" that it represents. In mathematics, the notion of number has been extended over the centuries to include 0, negative numbers, rational numbers, real numbers, and complex numbers, which extend the real numbers with a square root of −1 (and its combinations with real numbers by addition and multiplication). Calculations with numbers are done with arithmetical operations, the most familiar being addition, subtraction, multiplication, division, and exponentiation. Their study or usage is called arithmetic. The same term may also refer to number theory, the study of the properties of numbers. Besides their practical uses, numbers have cultural significance throughout the world. For example, in Western society, the number 13 is regarded as unlucky, and "a million" may signify "a lot." Though it is now regarded as pseudoscience, belief in a mystical significance of numbers, known as numerology, permeated ancient and medieval thought. 
Numerology heavily influenced the development of Greek mathematics, stimulating the investigation of many problems in number theory which are still of interest today. During the 19th century, mathematicians began to develop many different abstractions which share certain properties of numbers and may be seen as extending the concept. Among the first were the hypercomplex numbers, which consist of various extensions or modifications of the complex number system. Today, number systems are considered important special examples of much more general categories such as rings and fields, and the application of the term "number" is a matter of convention, without fundamental significance. Numbers should be distinguished from numerals, the symbols used to represent numbers. The Egyptians invented the first ciphered numeral system, and the Greeks followed by mapping their counting numbers onto Ionian and Doric alphabets. Roman numerals, a system that used combinations of letters from the Roman alphabet, remained dominant in Europe until the spread of the superior Hindu–Arabic numeral system around the late 14th century, and the Hindu–Arabic numeral system remains the most common system for representing numbers in the world today. The key to the effectiveness of the system was the symbol for zero, which was developed by ancient Indian mathematicians around 500 AD. Bones and other artifacts have been discovered with marks cut into them that many believe are tally marks. These tally marks may have been used for counting elapsed time, such as numbers of days, lunar cycles or keeping records of quantities, such as of animals. A tallying system has no concept of place value (as in modern decimal notation), which limits its representation of large numbers. Nonetheless tallying systems are considered the first kind of abstract numeral system. The first known system with place value was the Mesopotamian base 60 system (ca. 
3400 BC) and the earliest known base 10 system dates to 3100 BC in Egypt. The first known documented use of zero dates to AD 628, and appeared in the "Brāhmasphuṭasiddhānta", the main work of the Indian mathematician Brahmagupta. He treated 0 as a number and discussed operations involving it, including division. By this time (the 7th century) the concept had clearly reached Cambodia as Khmer numerals, and documentation shows the idea later spreading to China and the Islamic world. Brahmagupta's "Brāhmasphuṭasiddhānta" is the first book that mentions zero as a number, hence Brahmagupta is usually considered the first to formulate the concept of zero. He gave rules of using zero with negative and positive numbers, such as "zero plus a positive number is a positive number, and a negative number plus zero is the negative number." The "Brāhmasphuṭasiddhānta" is the earliest known text to treat zero as a number in its own right, rather than as simply a placeholder digit in representing another number as was done by the Babylonians or as a symbol for a lack of quantity as was done by Ptolemy and the Romans. The use of 0 as a number should be distinguished from its use as a placeholder numeral in place-value systems. Many ancient texts used 0. Babylonian and Egyptian texts used it. Egyptians used the word "nfr" to denote zero balance in double entry accounting. Indian texts used a Sanskrit word to refer to the concept of "void"; in mathematics texts this word often refers to the number zero. In a similar vein, Pāṇini (5th century BC) used the null (zero) operator in the "Ashtadhyayi", an early example of an algebraic grammar for the Sanskrit language (also see Pingala). There are other uses of zero before Brahmagupta, though the documentation is not as complete as it is in the "Brāhmasphuṭasiddhānta". Records show that the Ancient Greeks seemed unsure about the status of 0 as a number: they asked themselves "how can 'nothing' be something?" 
leading to interesting philosophical and, by the Medieval period, religious arguments about the nature and existence of 0 and the vacuum. The paradoxes of Zeno of Elea depend in part on the uncertain interpretation of 0. (The ancient Greeks even questioned whether 1 was a number.) The late Olmec people of south-central Mexico began to use a symbol for zero, a shell glyph, in the New World, possibly earlier but certainly by 40 BC; it became an integral part of Maya numerals and the Maya calendar. Mayan arithmetic used base 4 and base 5 written as base 20. Sanchez in 1961 reported a base 4, base 5 "finger" abacus. By 130 AD, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for 0 (a small circle with a long overbar) within a sexagesimal numeral system otherwise using alphabetic Greek numerals. Because it was used alone, not as just a placeholder, this Hellenistic zero was the first "documented" use of a true zero in the Old World. In later Byzantine manuscripts of his "Syntaxis Mathematica" ("Almagest"), the Hellenistic zero had morphed into the Greek letter Omicron (otherwise meaning 70). Another true zero was used in tables alongside Roman numerals by 525 (first known use by Dionysius Exiguus), but as a word meaning "nothing", not as a symbol. When division produced 0 as a remainder, "nihil", also meaning "nothing", was used. These medieval zeros were used by all future medieval computists (calculators of Easter). An isolated use of their initial, N, was used in a table of Roman numerals by Bede or a colleague about 725, a true zero symbol. The abstract concept of negative numbers was recognized as early as 100–50 BC in China. "The Nine Chapters on the Mathematical Art" contains methods for finding the areas of figures; red rods were used to denote positive coefficients, black for negative. The first reference in a Western work was in the 3rd century AD in Greece. 
Diophantus referred to an equation whose solution is negative in "Arithmetica", saying that the equation gave an absurd result. During the 600s, negative numbers were in use in India to represent debts. Diophantus' earlier reference was discussed more explicitly by the Indian mathematician Brahmagupta in "Brāhmasphuṭasiddhānta" in 628, who used negative numbers to produce the general form of the quadratic formula that remains in use today. However, in the 12th century in India, Bhaskara gave negative roots for quadratic equations but said the negative value "is in this case not to be taken, for it is inadequate; people do not approve of negative roots." European mathematicians, for the most part, resisted the concept of negative numbers until the 17th century, although Fibonacci allowed negative solutions in financial problems where they could be interpreted as debts (chapter 13 of "Liber Abaci", 1202) and later as losses. At the same time, the Chinese were indicating negative numbers by drawing a diagonal stroke through the right-most non-zero digit of the corresponding positive number's numeral. The first use of negative numbers in a European work was by Nicolas Chuquet during the 15th century. He used them as exponents, but referred to them as "absurd numbers". As recently as the 18th century, it was common practice to ignore any negative results returned by equations on the assumption that they were meaningless, just as René Descartes did with negative solutions in a Cartesian coordinate system. It is likely that the concept of fractional numbers dates to prehistoric times. The Ancient Egyptians used their Egyptian fraction notation for rational numbers in mathematical texts such as the Rhind Mathematical Papyrus and the Kahun Papyrus. Classical Greek and Indian mathematicians made studies of the theory of rational numbers, as part of the general study of number theory. The best known of these is Euclid's "Elements", dating to roughly 300 BC. 
Of the Indian texts, the most relevant is the Sthananga Sutra, which also covers number theory as part of a general study of mathematics. The concept of decimal fractions is closely linked with decimal place-value notation; the two seem to have developed in tandem. For example, it is common for Jain math sutras to include calculations of decimal-fraction approximations to pi or the square root of 2. Similarly, Babylonian math texts used sexagesimal (base 60) fractions with great frequency. The earliest known use of irrational numbers was in the Indian Sulba Sutras composed between 800 and 500 BC. The first existence proof of irrational numbers is usually attributed to Pythagoras, more specifically to the Pythagorean Hippasus of Metapontum, who produced a (most likely geometrical) proof of the irrationality of the square root of 2. The story goes that Hippasus discovered irrational numbers when trying to represent the square root of 2 as a fraction. However, Pythagoras believed in the absoluteness of numbers and could not accept the existence of irrational numbers. He could not disprove their existence through logic, and so, as is frequently reported, he allegedly sentenced Hippasus to death by drowning to prevent the spread of this disconcerting news. The 16th century brought final European acceptance of negative integral and fractional numbers. By the 17th century, mathematicians generally used decimal fractions with modern notation. It was not, however, until the 19th century that mathematicians separated irrationals into algebraic and transcendental parts, and once more undertook the scientific study of irrationals, which had remained almost dormant since Euclid. In 1872 came the publication of the theories of Karl Weierstrass (by his pupil E. Kossak), Eduard Heine ("Crelle", 74), Georg Cantor ("Annalen", 5), and Richard Dedekind. 
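The irrationality of the square root of 2 attributed to Hippasus is usually presented today in the following arithmetical form (a modern reconstruction; the original argument was most likely geometrical):

```latex
\begin{proof}
Suppose $\sqrt{2} = p/q$ for integers $p, q$ with no common factor.
Then $p^2 = 2q^2$, so $p^2$ is even, and hence $p$ is even; write $p = 2r$.
Substituting gives $4r^2 = 2q^2$, i.e.\ $q^2 = 2r^2$, so $q$ is even as well,
contradicting the assumption that $p$ and $q$ share no common factor.
Therefore $\sqrt{2}$ cannot be written as a fraction, i.e.\ it is irrational.
\end{proof}
```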
In 1869, Charles Méray had taken the same point of departure as Heine, but the theory is generally dated to 1872. Weierstrass's method was completely set forth by Salvatore Pincherle (1880), and Dedekind's has received additional prominence through the author's later work (1888) and endorsement by Paul Tannery (1894). Weierstrass, Cantor, and Heine base their theories on infinite series, while Dedekind founds his on the idea of a cut (Schnitt) in the system of real numbers, separating all rational numbers into two groups having certain characteristic properties. The subject has received later contributions at the hands of Weierstrass, Kronecker ("Crelle", 101), and Méray. The search for roots of quintic and higher degree equations was an important development; the Abel–Ruffini theorem (Ruffini 1799, Abel 1824) showed that they could not be solved by radicals (formulas involving only arithmetical operations and roots). Hence it was necessary to consider the wider set of algebraic numbers (all solutions to polynomial equations). Galois (1832) linked polynomial equations to group theory, giving rise to the field of Galois theory. Continued fractions, closely related to irrational numbers (and due to Cataldi, 1613), received attention at the hands of Euler, and at the opening of the 19th century were brought into prominence through the writings of Joseph Louis Lagrange. Other noteworthy contributions have been made by Druckenmüller (1837), Kunze (1857), Lemke (1870), and Günther (1872). Ramus (1855) first connected the subject with determinants, resulting, with the subsequent contributions of Heine, Möbius, and Günther, in a determinant theory of continued fractions. The existence of transcendental numbers was first established by Liouville (1844, 1851). Hermite proved in 1873 that "e" is transcendental and Lindemann proved in 1882 that π is transcendental. 
Finally, Cantor showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite, so there is an uncountably infinite number of transcendental numbers. The earliest known conception of mathematical infinity appears in the Yajur Veda, an ancient Indian text, which at one point states, "If you remove a part from infinity or add a part to infinity, still what remains is infinity." Infinity was a popular topic of philosophical study among the Jain mathematicians c. 400 BC. They distinguished between five types of infinity: infinite in one direction, infinite in two directions, infinite in area, infinite everywhere, and infinite perpetually. Aristotle defined the traditional Western notion of mathematical infinity. He distinguished between actual infinity and potential infinity—the general consensus being that only the latter had true value. Galileo Galilei's "Two New Sciences" discussed the idea of one-to-one correspondences between infinite sets. But the next major advance in the theory was made by Georg Cantor; in 1895 he published a book about his new set theory, introducing, among other things, transfinite numbers and formulating the continuum hypothesis. In the 1960s, Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. The system of hyperreal numbers represents a rigorous method of treating the ideas about infinite and infinitesimal numbers that had been used casually by mathematicians, scientists, and engineers ever since the invention of infinitesimal calculus by Newton and Leibniz. A modern geometrical version of infinity is given by projective geometry, which introduces "ideal points at infinity", one for each spatial direction. Each family of parallel lines in a given direction is postulated to converge to the corresponding ideal point. This is closely related to the idea of vanishing points in perspective drawing. 
The earliest fleeting reference to square roots of negative numbers occurred in the work of the mathematician and inventor Heron of Alexandria in the 1st century AD, when he considered the volume of an impossible frustum of a pyramid. They became more prominent when in the 16th century closed formulas for the roots of third and fourth degree polynomials were discovered by Italian mathematicians such as Niccolò Fontana Tartaglia and Gerolamo Cardano. It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. This was doubly unsettling since they did not even consider negative numbers to be on firm ground at the time. When René Descartes coined the term "imaginary" for these quantities in 1637, he intended it as derogatory. (See imaginary number for a discussion of the "reality" of complex numbers.) A further source of confusion was that the equation √−1·√−1 = −1 seemed capriciously inconsistent with the algebraic identity √a·√b = √(ab), which is valid for positive real numbers "a" and "b", and was also used in complex number calculations with one of "a", "b" positive and the other negative. The incorrect use of this identity, and the related identity 1/√a = √(1/a), in the case when both "a" and "b" are negative even bedeviled Euler. This difficulty eventually led him to the convention of using the special symbol "i" in place of √−1 to guard against this mistake. The 18th century saw the work of Abraham de Moivre and Leonhard Euler. De Moivre's formula (1730) states that (cos θ + i sin θ)^n = cos nθ + i sin nθ, while Euler's formula of complex analysis (1748) gave us e^(iθ) = cos θ + i sin θ. The existence of complex numbers was not completely accepted until Caspar Wessel described the geometrical interpretation in 1799. Carl Friedrich Gauss rediscovered and popularized it several years later, and as a result the theory of complex numbers received a notable expansion. 
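De Moivre's and Euler's formulas can be checked numerically, and the failure of the identity √a·√b = √(ab) for two negative arguments can be exhibited directly. A minimal sketch in Python, using only the standard `math` and `cmath` modules (the variable names are illustrative):

```python
import cmath
import math

theta, n = 0.7, 5

# De Moivre's formula: (cos θ + i sin θ)^n = cos(nθ) + i sin(nθ).
lhs = complex(math.cos(theta), math.sin(theta)) ** n
rhs = complex(math.cos(n * theta), math.sin(n * theta))
assert abs(lhs - rhs) < 1e-12

# Euler's formula: e^(iθ) = cos θ + i sin θ.
assert abs(cmath.exp(1j * theta) - complex(math.cos(theta), math.sin(theta))) < 1e-12

# The identity sqrt(a)·sqrt(b) = sqrt(a·b) breaks down when a and b are both negative:
print(cmath.sqrt(-1) * cmath.sqrt(-1))  # (-1+0j), i.e. i·i
print(cmath.sqrt(-1 * -1))              # (1+0j) — the two sides disagree
```

This is exactly the pitfall that led Euler to write "i" rather than √−1: the principal square root of a negative number is defined, but it no longer obeys the multiplicative identity valid for positive reals.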
The idea of the graphic representation of complex numbers had appeared, however, as early as 1685, in Wallis's "De Algebra tractatus". Also in 1799, Gauss provided the first generally accepted proof of the fundamental theorem of algebra, showing that every polynomial over the complex numbers has a full set of solutions in that realm. The general acceptance of the theory of complex numbers is due to the labors of Augustin Louis Cauchy and Niels Henrik Abel, and especially the latter, who was the first to boldly use complex numbers with a success that is well known. Gauss studied complex numbers of the form a + bi, where "a" and "b" are integral, or rational (and "i" is one of the two roots of x² + 1 = 0). His student, Gotthold Eisenstein, studied the type a + bω, where "ω" is a complex root of x³ − 1 = 0. Other such classes (called cyclotomic fields) of complex numbers derive from the roots of unity x^k − 1 = 0 for higher values of "k". This generalization is largely due to Ernst Kummer, who also invented ideal numbers, which were expressed as geometrical entities by Felix Klein in 1893. In 1850 Victor Alexandre Puiseux took the key step of distinguishing between poles and branch points, and introduced the concept of essential singular points. This eventually led to the concept of the extended complex plane. Prime numbers have been studied throughout recorded history. Euclid devoted one book of the "Elements" to the theory of primes; in it he proved the infinitude of the primes and the fundamental theorem of arithmetic, and presented the Euclidean algorithm for finding the greatest common divisor of two numbers. In 240 BC, Eratosthenes used the Sieve of Eratosthenes to quickly isolate prime numbers. But most further development of the theory of primes in Europe dates to the Renaissance and later eras. In 1796, Adrien-Marie Legendre conjectured the prime number theorem, describing the asymptotic distribution of primes. 
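The Sieve of Eratosthenes mentioned above is simple enough to state as a short program: mark 0 and 1 as non-prime, then repeatedly cross out the multiples of each surviving number, starting at its square. A sketch in Python (the function name is mine):

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes up to `limit` (inclusive)."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p, starting at p²; smaller
            # multiples were already crossed out by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Starting at p² and stopping the outer loop at √limit are the two classical optimizations; the algorithm is otherwise exactly the crossing-out procedure Eratosthenes described.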
Other results concerning the distribution of the primes include Euler's proof that the sum of the reciprocals of the primes diverges, and the Goldbach conjecture, which claims that any sufficiently large even number is the sum of two primes. Yet another conjecture related to the distribution of prime numbers is the Riemann hypothesis, formulated by Bernhard Riemann in 1859. The prime number theorem was finally proved by Jacques Hadamard and Charles de la Vallée-Poussin in 1896. Goldbach's and Riemann's conjectures remain unproven and unrefuted. Numbers can be classified into sets, called number systems, such as the natural numbers and the real numbers. The major categories of numbers are as follows: There is generally no problem in identifying each number system with a proper subset of the next one (by abuse of notation), because each of these number systems is canonically isomorphic to a proper subset of the next one. The resulting hierarchy allows one, for example, to talk, formally correctly, about real numbers that are rational numbers, and is expressed symbolically by writing N ⊂ Z ⊂ Q ⊂ R ⊂ C. The most familiar numbers are the natural numbers (sometimes called whole numbers or counting numbers): 1, 2, 3, and so on. Traditionally, the sequence of natural numbers started with 1 (0 was not even considered a number by the Ancient Greeks.) However, in the 19th century, set theorists and other mathematicians started including 0 (cardinality of the empty set, i.e. 0 elements, where 0 is thus the smallest cardinal number) in the set of natural numbers. Today, different mathematicians use the term to describe both sets, with or without 0. The mathematical symbol for the set of all natural numbers is N, also written ℕ, and sometimes ℕ₀ or ℕ₁ when it is necessary to indicate whether the set should start with 0 or 1, respectively. 
In the base 10 numeral system, in almost universal use today for mathematical operations, the symbols for natural numbers are written using ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The radix or base is the number of unique numerical digits, including zero, that a numeral system uses to represent numbers (for the decimal system, the radix is 10). In this base 10 system, the rightmost digit of a natural number has a place value of 1, and every other digit has a place value ten times that of the place value of the digit to its right. In set theory, which is capable of acting as an axiomatic foundation for modern mathematics, natural numbers can be represented by classes of equivalent sets. For instance, the number 3 can be represented as the class of all sets that have exactly three elements. Alternatively, in Peano Arithmetic, the number 3 is represented as sss0, where s is the "successor" function (i.e., 3 is the third successor of 0). Many different representations are possible; all that is needed to formally represent 3 is to inscribe a certain symbol or pattern of symbols three times. The negative of a positive integer is defined as a number that produces 0 when it is added to the corresponding positive integer. Negative numbers are usually written with a negative sign (a minus sign). As an example, the negative of 7 is written −7, and 7 + (−7) = 0. When the set of negative numbers is combined with the set of natural numbers (including 0), the result is defined as the set of integers, Z, also written ℤ. Here the letter Z comes from the German "Zahlen" ("numbers"). The set of integers forms a ring with the operations addition and multiplication. The natural numbers form a subset of the integers. As there is no common standard for the inclusion or not of zero in the natural numbers, the natural numbers without zero are commonly referred to as positive integers, and the natural numbers with zero are referred to as non-negative integers. 
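The place-value rule described above (rightmost digit worth 1, each digit to the left worth the base times more) amounts to a simple accumulation. A sketch in Python; the helper name `place_value_sum` is illustrative:

```python
def place_value_sum(digits, base=10):
    """Interpret a digit sequence (most significant digit first) in the
    given base: each step multiplies the running value by the base, so the
    rightmost digit ends up with place value 1."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

assert place_value_sum([1, 2, 3]) == 1 * 100 + 2 * 10 + 3 * 1 == 123
assert place_value_sum([1, 0, 1, 1], base=2) == 11   # same rule, radix 2
```

The second assertion shows that the rule is independent of the radix: the digit string 1011 denotes eleven in base 2 for exactly the same reason that 123 denotes one hundred twenty-three in base 10.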
A rational number is a number that can be expressed as a fraction with an integer numerator and a positive integer denominator. Negative denominators are allowed, but are commonly avoided, as every rational number is equal to a fraction with positive denominator. Fractions are written as two integers, the numerator and the denominator, with a dividing bar between them. The fraction m/n represents "m" parts of a whole divided into "n" equal parts. Two different fractions may correspond to the same rational number; for example, 1/2 and 2/4 are equal. In general, a/b = c/d if and only if a × d = c × b. If the absolute value of "m" is greater than "n" (assumed to be positive), then the absolute value of the fraction m/n is greater than 1. Fractions can be greater than, less than, or equal to 1 and can also be positive, negative, or 0. The set of all rational numbers includes the integers, since every integer can be written as a fraction with denominator 1. For example, −7 can be written as −7/1. The symbol for the rational numbers is Q (for "quotient"), also written ℚ. The symbol for the real numbers is R, also written ℝ. The real numbers include all the measuring numbers. Every real number corresponds to a point on the number line. The following paragraph will focus primarily on positive real numbers. The treatment of negative real numbers is according to the general rules of arithmetic and their denotation is simply prefixing the corresponding positive numeral by a minus sign, e.g. −123.456. Most real numbers can only be "approximated" by decimal numerals, in which a decimal point is placed to the right of the digit with place value 1. Each digit to the right of the decimal point has a place value one-tenth of the place value of the digit to its left. For example, 123.456 represents 1 × 100 + 2 × 10 + 3 × 1 + 4/10 + 5/100 + 6/1000, or, in words, one hundred, two tens, three ones, four tenths, five hundredths, and six thousandths. 
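Python's standard `fractions` module implements exactly this arithmetic of rationals, and can be used to illustrate the rules just described: equality by cross-multiplication, integers as fractions with denominator 1, and the normalization of negative denominators:

```python
from fractions import Fraction

# Two fractions denote the same rational number exactly when
# cross-multiplication gives equal products; Fraction normalizes both
# to lowest terms with a positive denominator.
assert Fraction(1, 2) == Fraction(2, 4)
assert 1 * 4 == 2 * 2                      # the cross-multiplication test

# Every integer is a rational number with denominator 1:
assert Fraction(-7) == Fraction(-7, 1)

# Negative denominators are allowed but normalized away:
assert Fraction(3, -4) == Fraction(-3, 4)
assert Fraction(3, -4).denominator == 4
```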
A real number can be expressed by a finite number of decimal digits only if it is rational and its fractional part has a denominator whose prime factors are 2 or 5 or both, because these are the prime factors of 10, the base of the decimal system. Thus, for example, one half is 0.5, one fifth is 0.2, one tenth is 0.1, and one fiftieth is 0.02. Representing other real numbers as decimals would require an infinite sequence of digits to the right of the decimal point. If this infinite sequence of digits follows a pattern, it can be written with an ellipsis or another notation that indicates the repeating pattern. Such a decimal is called a repeating decimal. Thus 1/3 can be written as 0.333..., with an ellipsis to indicate that the pattern continues. Forever repeating 3s are also written as 0.3 with an overbar above the 3. It turns out that these repeating decimals (including the repetition of zeroes) denote exactly the rational numbers; i.e., all rational numbers are also real numbers, but it is not the case that every real number is rational. A real number that is not rational is called irrational. A famous irrational real number is the number π, the ratio of the circumference of any circle to its diameter. When pi is written as 3.14159..., as it sometimes is, the ellipsis does not mean that the decimals repeat (they do not), but rather that there is no end to them. It has been proved that π is irrational. Another well-known number, proven to be an irrational real number, is the square root of 2, that is, the unique positive real number whose square is 2. Both these numbers have been approximated (by computer) to trillions of digits. Not only these prominent examples but almost all real numbers are irrational and therefore have no repeating patterns and hence no corresponding decimal numeral. They can only be approximated by decimal numerals, denoting rounded or truncated real numbers. Any rounded or truncated number is necessarily a rational number, of which there are only countably many. 
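The correspondence between repeating decimals and rationals can be made concrete. If x = 0.333..., then 10x − x = 3, so x = 3/9 = 1/3; more generally, a purely repeating decimal 0.(d₁…d_k) equals its repetend divided by 10^k − 1. A sketch using Python's `fractions` (the function name is mine):

```python
from fractions import Fraction

# The algebraic trick for 0.333...: 10x - x = 3, so x = 3/9 = 1/3.
assert Fraction(3, 9) == Fraction(1, 3)

def repeating_to_fraction(repetend: str) -> Fraction:
    """Convert a purely repeating decimal 0.(repetend) into an exact
    fraction: 0.(d1...dk) = d1...dk / (10**k - 1)."""
    k = len(repetend)
    return Fraction(int(repetend), 10 ** k - 1)

assert repeating_to_fraction("3") == Fraction(1, 3)       # 0.333...
assert repeating_to_fraction("142857") == Fraction(1, 7)  # 0.142857142857...
```

Running the conversion in reverse (long division) produces the repeating digits again, which is why the repeating decimals are exactly the rational numbers.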
All measurements are, by their nature, approximations, and always have a margin of error. Thus 123.456 is considered an approximation of any real number greater than or equal to 123.4555 and strictly less than 123.4565 (rounding to 3 decimals), or of any real number greater than or equal to 123.456 and strictly less than 123.457 (truncation after the 3rd decimal). Digits that suggest a greater accuracy than the measurement itself allows should be removed. The remaining digits are then called significant digits. For example, measurements with a ruler can seldom be made without a margin of error of at least 0.001 meters. If the sides of a rectangle are measured as 1.23 meters and 4.56 meters, then multiplication gives an area for the rectangle between 5.603011 and 5.614591 square meters. Since not even the second digit after the decimal place is preserved, the following digits are not "significant". Therefore, the result is usually rounded to 5.61. Just as the same fraction can be written in more than one way, the same real number may have more than one decimal representation. For example, 0.999..., 1.0, 1.00, 1.000, ..., all represent the natural number 1. A given real number has only the following decimal representations: an approximation to some finite number of decimal places, an approximation in which a pattern is established that continues for an unlimited number of decimal places, or an exact value with only finitely many decimal places. In this last case, the last non-zero digit may be replaced by the digit one smaller followed by an unlimited number of 9's, or the last non-zero digit may be followed by an unlimited number of zeros. Thus the exact real number 3.74 can also be written 3.7399999999... and 3.74000000000... Similarly, a decimal numeral with an unlimited number of 0's can be rewritten by dropping the 0's to the right of the decimal place, and a decimal numeral with an unlimited number of 9's can be rewritten by increasing the rightmost non-9 digit by one, changing all the 9's to the right of that digit to 0's. 
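The significant-digits example above (sides measured as 1.23 m and 4.56 m, each with a margin of ±0.001 m) can be reproduced directly; a minimal sketch in Python, with variable names of my choosing:

```python
# Each side is only known to within ±0.001 m, so the true area lies in an
# interval rather than at a single point.
lo = 1.229 * 4.559        # smallest area consistent with the measurements
hi = 1.231 * 4.561        # largest area consistent with the measurements
nominal = 1.23 * 4.56     # the naive product, 5.6088

assert lo < nominal < hi
print(lo, hi)             # approximately 5.603011 and 5.614591

# The interval does not even pin down the second decimal, so only three
# significant digits are reported:
assert round(nominal, 2) == 5.61
```

Interval arithmetic like this is the systematic version of the significant-digits rule: operate on the endpoints, then report only the digits on which both endpoints agree to within the desired precision.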
Finally, an unlimited sequence of 0's to the right of the decimal place can be dropped. For example, 6.849999999999... = 6.85 and 6.850000000000... = 6.85. And if all of the digits in a numeral are 0, the number is 0, while if all of the digits in a numeral are an unending string of 9's, the 9's to the right of the decimal place can be dropped and one added to the string of 9's to the left of the decimal place. For example, 99.999... = 100. The real numbers also have an important but highly technical property called the least upper bound property. It can be shown that any ordered field which is also complete is isomorphic to the real numbers. The real numbers are not, however, an algebraically closed field, because they do not include a solution (often called a square root of minus one) to the algebraic equation x² + 1 = 0. Moving to a greater level of abstraction, the real numbers can be extended to the complex numbers. This set of numbers arose historically from trying to find closed formulas for the roots of cubic and quadratic polynomials. This led to expressions involving the square roots of negative numbers, and eventually to the definition of a new number: a square root of −1, denoted by "i", a symbol assigned by Leonhard Euler, and called the imaginary unit. The complex numbers consist of all numbers of the form a + bi, where "a" and "b" are real numbers. Because of this, complex numbers correspond to points on the complex plane, a vector space of two real dimensions. In the expression a + bi, the real number "a" is called the real part and "b" is called the imaginary part. If the real part of a complex number is 0, then the number is called an imaginary number or is referred to as "purely imaginary"; if the imaginary part is 0, then the number is a real number. Thus the real numbers are a subset of the complex numbers. If the real and imaginary parts of a complex number are both integers, then the number is called a Gaussian integer. 
The symbol for the complex numbers is C, also written ℂ. The fundamental theorem of algebra asserts that the complex numbers form an algebraically closed field, meaning that every polynomial with complex coefficients has a root in the complex numbers. Like the reals, the complex numbers form a field, which is complete, but unlike the real numbers, it is not ordered. That is, there is no consistent meaning assignable to saying that "i" is greater than 1, nor is there any meaning in saying that "i" is less than 1. In technical terms, the complex numbers lack a total order that is compatible with field operations. An even number is an integer that is "evenly divisible" by two, that is, divisible by two without remainder; an odd number is an integer that is not even. (The old-fashioned term "evenly divisible" is now almost always shortened to "divisible".) Any odd number "n" may be constructed by the formula n = 2k + 1, for a suitable integer "k". Starting with k = 0, the first non-negative odd numbers are {1, 3, 5, 7, ...}. Any even number "m" has the form m = 2k, where "k" is again an integer. Similarly, the first non-negative even numbers are {0, 2, 4, 6, ...}. A prime number is an integer greater than 1 that is not the product of two smaller positive integers. The first few prime numbers are 2, 3, 5, 7, and 11. There is no such simple formula as for odd and even numbers to generate the prime numbers. The primes have been widely studied for more than 2000 years and have led to many questions, only some of which have been answered. The study of these questions belongs to number theory. An example of a still unanswered question is whether every even number greater than 2 is the sum of two primes; this is called Goldbach's conjecture. The question of whether every integer greater than one is a product of primes in only one way, except for a rearrangement of the primes, has been answered in the affirmative: this proven claim is called the fundamental theorem of arithmetic. A proof appears in Euclid's Elements. 
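The definitions above translate directly into code: odd numbers via the standard formula 2k + 1, primality by trial division, and an empirical check of Goldbach's conjecture for a single even number. A Python sketch (helper names mine); note that checking one case verifies nothing about the conjecture in general:

```python
def is_prime(n: int) -> bool:
    """Trial division: n > 1 is prime if no divisor d with 1 < d <= sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(m: int):
    """For an even m > 2, return primes (p, q) with p + q = m, if found.
    Goldbach's conjecture asserts such a pair always exists."""
    for p in range(2, m // 2 + 1):
        if is_prime(p) and is_prime(m - p):
            return p, m - p
    return None

# Odd numbers n = 2k + 1 for k = 0, 1, 2, ...:
assert [2 * k + 1 for k in range(4)] == [1, 3, 5, 7]
assert goldbach_pair(28) == (5, 23)
```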
Many subsets of the natural numbers have been the subject of specific studies and have been named, often after the first mathematician who studied them. Examples of such sets of integers are the Fibonacci numbers and the perfect numbers. For more examples, see Integer sequence. Algebraic numbers are those that are a solution to a polynomial equation with integer coefficients. Real numbers that are not rational numbers are called irrational numbers. Complex numbers which are not algebraic are called transcendental numbers. The algebraic numbers that are solutions of a monic polynomial equation with integer coefficients are called algebraic integers. Motivated by the classical problems of constructions with straightedge and compass, the constructible numbers are those complex numbers whose real and imaginary parts can be constructed using straightedge and compass, starting from a given segment of unit length, in a finite number of steps. A computable number, also known as a "recursive number", is a real number such that there exists an algorithm which, given a positive integer "n" as input, produces the first "n" digits of the computable number's decimal representation. Equivalent definitions can be given using μ-recursive functions, Turing machines or λ-calculus. The computable numbers are closed under all usual arithmetic operations, including the computation of the roots of a polynomial, and thus form a real closed field that contains the real algebraic numbers. The computable numbers may be viewed as the real numbers that may be exactly represented in a computer: a computable number is exactly represented by its first digits and a program for computing further digits. However, the computable numbers are rarely used in practice. One reason is that there is no algorithm for testing the equality of two computable numbers. 
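A computable number in the sense above comes equipped with an algorithm producing its first "n" digits. For example, the digits of √2 can be generated exactly with integer arithmetic: floor(√2 · 10ⁿ) is the integer square root of 2 · 10²ⁿ. A sketch using `math.isqrt` (Python 3.8+); the function name is illustrative:

```python
from math import isqrt

def sqrt2_digits(n: int) -> str:
    """Return sqrt(2) truncated to n decimal digits, computed exactly:
    isqrt(2 * 10**(2n)) equals floor(sqrt(2) * 10**n)."""
    scaled = isqrt(2 * 10 ** (2 * n))
    s = str(scaled)
    return s[0] + "." + s[1:]

print(sqrt2_digits(15))  # 1.414213562373095
```

This is what makes √2 computable: one short program plus the argument "n" stands in for the infinite, non-repeating decimal expansion.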
More precisely, there cannot exist any algorithm which takes any computable number as an input, and decides in every case if this number is equal to zero or not. The set of computable numbers has the same cardinality as the natural numbers. Therefore, almost all real numbers are non-computable. However, it is very difficult to explicitly produce a real number that is not computable. The "p"-adic numbers may have infinitely long expansions to the left of the decimal point, in the same way that real numbers may have infinitely long expansions to the right. The number system that results depends on what base is used for the digits: any base is possible, but a prime number base provides the best mathematical properties. The set of the "p"-adic numbers contains the rational numbers, but is not contained in the complex numbers. The elements of an algebraic function field over a finite field and algebraic numbers have many similar properties (see Function field analogy). Therefore, they are often regarded as numbers by number theorists. The "p"-adic numbers play an important role in this analogy. Some number systems that are not included in the complex numbers may be constructed from the real numbers in a way that generalizes the construction of the complex numbers. They are sometimes called hypercomplex numbers. They include the quaternions H, introduced by Sir William Rowan Hamilton, in which multiplication is not commutative, the octonions, in which multiplication is not associative in addition to not being commutative, and the sedenions, in which multiplication is not alternative, in addition to being neither associative nor commutative. For dealing with infinite sets, the natural numbers have been generalized to the ordinal numbers and to the cardinal numbers. The former gives the ordering of the set, while the latter gives its size. For finite sets, both ordinal and cardinal numbers are identified with the natural numbers.
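The left-infinite expansions of "p"-adic numbers can be illustrated for integers: in the 5-adics, for instance, −1 has the expansion ...4444, since −1 = 4 + 4·5 + 4·5² + ... A short sketch (our own illustration with a hypothetical function name, covering only the integer case):

```python
def p_adic_digits(a, p, k):
    """Return the first k base-p digits (least significant first)
    of the p-adic expansion of the integer a."""
    digits = []
    for _ in range(k):
        d = a % p          # digit in {0, ..., p-1}; Python's % is non-negative here
        digits.append(d)
        a = (a - d) // p   # shift right by one p-adic place
    return digits

print(p_adic_digits(-1, 5, 6))   # [4, 4, 4, 4, 4, 4]: -1 = ...4444 in the 5-adics
print(p_adic_digits(7, 5, 4))    # [2, 1, 0, 0]: 7 = 2 + 1*5
```

Negative integers get infinitely many nonzero digits to the left, while non-negative integers end in zeros, which mirrors how real numbers may instead have infinite expansions to the right of the decimal point.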
In the infinite case, many ordinal numbers correspond to the same cardinal number. Hyperreal numbers are used in non-standard analysis. The hyperreals, or nonstandard reals (usually denoted as *R), denote an ordered field that is a proper extension of the ordered field of real numbers R and satisfies the transfer principle. This principle allows true first-order statements about R to be reinterpreted as true first-order statements about *R. Superreal and surreal numbers extend the real numbers by adding infinitesimally small numbers and infinitely large numbers, but still form fields.
https://en.wikipedia.org/wiki?curid=21690
No wave No wave was a short-lived avant-garde music and art scene that emerged in the late 1970s in downtown New York City. Reacting against punk rock's recycling of rock and roll clichés, no wave musicians instead experimented with noise, dissonance and atonality in addition to a variety of non-rock genres, while often reflecting an abrasive, confrontational, and nihilistic worldview. The term "no wave" was a pun based on the rejection of commercial new wave music. The movement would last a relatively short time but profoundly influenced the development of independent film, fashion and visual art. No wave is not a clearly definable musical genre with consistent features, although it was generally characterized by a rejection of the recycling of traditional rock aesthetics, such as blues rock styles and Chuck Berry guitar riffs, in punk and new wave music. Various groups drew on or explored such disparate styles as funk, jazz, blues, punk rock, and the avant garde. According to "Village Voice" writer Steve Anderson, the scene pursued an abrasive reductionism which "undermined the power and mystique of a rock vanguard by depriving it of a tradition to react against". Anderson claimed that the no wave scene represented "New York's last stylistically cohesive avant-rock movement". There were, however, some elements common to most no-wave music, such as abrasive atonal sounds; repetitive, driving rhythms; and a tendency to emphasize musical texture over melody—typical of La Monte Young's early downtown music. In the early 1980s, Downtown Manhattan's no wave scene transitioned from its abrasive origins into a more dance-oriented sound, with compilations such as ZE Records's "Mutant Disco" (1981) highlighting a newly playful sensibility borne out of the city's clash of hip hop, disco and punk styles, as well as dub reggae and world music influences. 
No wave music presented a negative and nihilistic world view that reflected the desolation of late 1970s downtown New York and how its musicians viewed the larger society. Lydia Lunch noted: "The whole fucking country was nihilistic. What did we come out of? The lie of the Summer of Love into Charles Manson and the Vietnam War. Where is the positivity?" The term "no wave" was probably inspired by the French New Wave pioneer Claude Chabrol, with his remark "There are no waves, only the ocean". In 1978 a punk subculture-influenced noise series was held at New York's Artists Space. No wave musicians such as the Contortions, Teenage Jesus and the Jerks, Mars, DNA, Theoretical Girls and Rhys Chatham began experimenting with noise, dissonance and atonality in addition to non-rock styles. The first four groups were included on the "No New York" compilation, often considered the quintessential testament to the scene. The no wave-affiliated label ZE Records was founded in 1978, and would also produce acclaimed and influential compilations in subsequent years. By the early 1980s, artists such as Liquid Liquid, the B-52s, Cristina, Arthur Russell, James White and the Blacks and Lizzy Mercier Descloux developed a more dance-oriented style described by Luc Sante as "anything at all + disco bottom". Other no-wave groups such as Swans, Glenn Branca, the Lounge Lizards, Bush Tetras and Sonic Youth instead continued exploring the early scene's forays into noise and more abrasive territory. No wave inspired the "Speed Trials" noise rock series organized by Live Skull members in May 1983 at White Columns, which included British band the Fall and American bands Sonic Youth, Lydia Lunch, Beastie Boys, Elliott Sharp, Swans, the Ordinaires, Arto Lindsay and Toy Killers. This was followed by the after-hours Speed Club that was fleetingly established at ABC No Rio. No wave cinema was an underground film scene in Tribeca and the East Village.
Filmmakers included Amos Poe, Eric Mitchell, Charlie Ahearn, Vincent Gallo, James Nares, Jim Jarmusch, Vivienne Dick, Scott B and Beth B and Seth Tillett; the scene led to the Cinema of Transgression and work by Nick Zedd and Richard Kern. Visual artists played a large role in the no wave scene, as they often played in bands or made videos and films while also making visual art for exhibition. An early influence on this aspect of the scene was Alan Vega (aka Alan Suicide), whose electronic junk sculpture predated his role in the music group Suicide. An important exhibition of no wave visual art was Colab's organization of the "Times Square Show". In June 1980, more than 100 artists installed their work in an empty massage parlor near Times Square that included punk visual artists, graffiti artists, feminist artists, political artists, Xerox artists and performance artists. No wave art found an ongoing home on the Lower East Side with the establishment of ABC No Rio Gallery in 1980, and a no wave punk aesthetic was a dominant strand in the art galleries of the East Village (from 1982–86). In a foreword to the book "No Wave", Weasel Walter wrote of the movement's ongoing influence: "I began to express myself musically in a way that felt true to myself, constantly pushing the limits of idiom or genre and always screaming 'Fuck You!' loudly in the process. It's how I felt then and I still feel it now. The ideals behind the (anti-) movement known as No Wave were found in many other archetypes before and just as many afterwards, but for a few years around the late 1970s, the concentration of those ideals reached a cohesive, white-hot focus." In 2004, Scott Crary made a documentary, "Kill Your Idols", including such no wave bands as Suicide, Teenage Jesus and the Jerks, DNA and Glenn Branca as well as bands influenced by no wave including Sonic Youth, Swans, Foetus and others.
In 2007–2008, three books on the scene were published: Soul Jazz's "New York Noise", Marc Masters' "No Wave", and Thurston Moore and Byron Coley's "No Wave: Post-Punk. Underground. New York. 1976–1980". Coleen Fitzgibbon and Alan W. Moore created a short film in 1978 (finished in 2009) of a New York City no wave concert to benefit Colab called "X Magazine Benefit", documenting performances by DNA, James Chance and the Contortions, and Boris Policeband. Shot in black and white Super 8 and edited on video, the film captured the gritty look and sound of the music scene during that era. In 2013, it was exhibited at Salon 94, an art gallery in New York City.
https://en.wikipedia.org/wiki?curid=21692
NeXT NeXT, Inc. (later NeXT Computer, Inc. and NeXT Software, Inc.) was an American computer and software company founded in 1985 by Apple Computer co-founder Steve Jobs. Based in Redwood City, California, the company developed and manufactured a series of computer workstations intended for the higher education and business markets. NeXT was founded by Jobs, along with several co-workers, after he was forced out of Apple. NeXT introduced the first NeXT Computer in 1988, and the smaller NeXTstation in 1990. The NeXT computers experienced relatively limited sales, with estimates of about 50,000 units shipped in total. Nevertheless, their innovative object-oriented NeXTSTEP operating system and development environment (Interface Builder) were highly influential. The first major outside investment was from Ross Perot, who invested after seeing a segment about NeXT on a 1986 PBS documentary titled "Entrepreneurs". In 1987, he invested $20 million in exchange for 16 percent of NeXT's stock and subsequently joined the board of directors in 1988. NeXT later released much of the NeXTSTEP system as a programming environment standard called OpenStep. NeXT withdrew from the hardware business in 1993 to concentrate on marketing OPENSTEP for Mach, its own OpenStep implementation, for several original equipment manufacturers (OEMs). NeXT also developed WebObjects, one of the first enterprise web application frameworks. WebObjects never became very popular because of its initial high price of $50,000, but it remains a prominent early example of a Web server based on dynamic page generation rather than on static content. Apple purchased NeXT in 1997 for $429 million and 1.5 million shares of Apple stock.
The merger converted Steve Jobs from Chairman and CEO of NeXT to an advisory role at Apple, the company he had co-founded in 1976; and it promised to port NeXT's operating system to Macintosh hardware, combine it with the legacy application layer of Mac OS, and yield OS X. In following decades, the new operating system was renamed macOS and was adapted into the embedded multimedia platforms of iOS, watchOS, and tvOS to serve as the basis of iPhone and iPad. In 1985, Apple co-founder Steve Jobs led Apple's SuperMicro division, which was responsible for the development of the Macintosh and Lisa personal computers. The Macintosh had been successful on university campuses partly because of the Apple University Consortium, which allowed students and institutions to buy the computers at a discount. The consortium had earned more than $50 million on computers by February 1984. Jobs visited university departments and faculty members to sell Macintosh. Jobs met Paul Berg, a Nobel Laureate in chemistry, at a luncheon held in Silicon Valley to honor François Mitterrand, then President of France. Berg was frustrated by the expense of teaching students about recombinant DNA from textbooks instead of in wet laboratories, used for the testing and analysis of chemicals, drugs, and other materials or biological matter. Wet labs were prohibitively expensive for lower-level courses and were too complex to be simulated on personal computers of the time. Berg suggested that Jobs should use his influence at Apple to create for higher education a "3M computer", a term for a workstation with one megabyte of random-access memory (RAM), a one-megapixel display, and one megaFLOPS of CPU performance. Jobs was intrigued by Berg's concept of a workstation and contemplated starting a higher education computer company in late 1985, amid increasing turmoil at Apple. Jobs's division did not release upgraded versions of the Macintosh and much of the Macintosh Office system. 
As a result, sales plummeted, and Apple was forced to write off millions of dollars in unsold inventory. Apple's chief executive officer (CEO) John Sculley ousted Jobs from his day-to-day role at Apple, replacing him with Jean-Louis Gassée in 1985. Later that year, Jobs began a power struggle to regain control of the company. The board of directors sided with Sculley while Jobs took a business trip to Western Europe and the Soviet Union on behalf of Apple. After several months of being sidelined, Jobs resigned from Apple on September 13, 1985. He told the board he was leaving to set up a new computer company, and that he would be taking several Apple employees from the SuperMicro division with him. He also told the board that his new company would not compete with Apple and might even consider licensing its designs back to them to market under the Macintosh brand. A number of former Apple employees followed him to NeXT, including Joanna Hoffman, Bud Tribble, George Crow, Rich Page, Susan Barnes, Susan Kare, and Dan'l Lewin. After consulting with major educational buyers from around the country, including a follow-up meeting with Paul Berg, a tentative specification for the workstation was drawn up. It was designed to be powerful enough to run wet lab simulations and cheap enough for college students to use in their dormitory rooms. Before the specifications were finished, however, Apple sued NeXT for "nefarious schemes" to take advantage of the cofounders' insider information. Jobs remarked, "It is hard to think that a $2 billion company with 4,300-plus people couldn't compete with six people in blue jeans." The suit was eventually dismissed before trial. In 1986, Jobs recruited the famous graphic designer Paul Rand to create a brand identity for $100,000. Jobs recalled, "I asked him if he would come up with a few options, and he said, 'No, I will solve your problem for you and you will pay me. You don’t have to use the solution. 
If you want options go talk to other people.'" Rand created a 20-page brochure detailing the brand, including the precise angle used for the logo (28°) and a new company name spelling, NeXT. NeXT changed its business plan in mid-1986. The company decided to develop both computer hardware and software, instead of just a low-end workstation. A team led by Avie Tevanian, who was a Mach kernel engineer at Carnegie Mellon University, was to develop the NeXTSTEP operating system. The hardware division, led by Rich Page, one of NeXT's cofounders who had previously led Apple's Lisa team, designed and developed the hardware. NeXT's first factory was completed in Fremont, California in 1987. It was capable of producing 150,000 machines per year. NeXT's first workstation is officially named the NeXT Computer, nicknamed "the cube" because of its distinctive magnesium one-foot cube case, designed by Apple IIc case designer Frogdesign in accordance with an edict from Jobs. The original design team had anticipated completing the computer in early 1987, to be ready for sale by midyear. The NeXT Computer received standing ovations when revealed at a lavish, invitation-only gala event, "NeXT Introduction — the Introduction to the NeXT Generation of Computers for Education" at the Louise M. Davies Symphony Hall, San Francisco, California on October 12, 1988. The following day, selected educators and software developers were invited (for a $100 registration fee) to attend the first public technical overview of the NeXT computer at an event called "The NeXT Day" held at the San Francisco Hilton. This event gave developers interested in developing NeXT software an insight into the software architecture, object-oriented programming, and developing for the NeXT Computer. The luncheon speaker was Steve Jobs. The first machines were tested in 1989, after which NeXT started selling limited numbers to universities with a beta version of the NeXTSTEP operating system installed.
Initially the NeXT Computer was targeted at U.S. higher education establishments only, with a base price of . The machine was widely reviewed in magazines, generally concentrating on the hardware. When asked if he was upset that the computer's debut was delayed by several months, Jobs responded, "Late? This computer is five years ahead of its time!" The NeXT Computer is based on the new 25 MHz Motorola 68030 central processing unit (CPU). The Motorola 88000 RISC chip was originally considered, but was not available in sufficient quantities. It includes between 8 and 64 MB of random-access memory (RAM), a 256 MB magneto-optical (MO) drive, a 40 MB (swap-only), 330 MB, or 660 MB hard disk drive, 10BASE2 Ethernet, NuBus, and a 17-inch MegaPixel grayscale display measuring 1120 by 832 pixels. In 1989 a typical new PC, Macintosh, or Amiga computer included a few megabytes of RAM, a 640×480 16-color or 320×240 4096-color display, a 10 to 20 megabyte hard drive, and few networking capabilities. It is the first computer to have shipped with a general-purpose DSP chip (Motorola 56001) on the motherboard. This supports sophisticated music and sound processing, including the Music Kit software. The magneto-optical drive manufactured by Canon Inc. is the primary mass storage device. This drive technology was relatively new to the market, and the NeXT is the first computer to have used it. MO drives were cheaper but much slower than hard drives; Jobs negotiated Canon's cost of $150 per blank MO disk down to a retail cost of only $50, and they have an average seek time of 96 ms. The design makes it impossible to move files between computers without a network, because each NeXT Computer has only one MO drive and the disk can not be removed without shutting down the system. Storage options proved challenging for the first NeXT Computers. The drive's limitations of speed and capacity make it insufficient as the primary medium running the NeXTSTEP operating system.
In 1989, NeXT struck a deal for former Compaq reseller Businessland to sell the NeXT Computer in select markets nationwide. Selling through a retailer was a major change from NeXT's original business model of only selling directly to students and educational institutions. Businessland founder David Norman predicted that sales of the NeXT Computer would surpass sales of Compaq computers after 12 months. In 1989, Canon invested US$100 million in NeXT, giving it a 16.67 percent stake and making NeXT worth almost $600 million. Canon invested in NeXT with the condition of using the NeXTSTEP environment with its own workstations, which would mean a greatly expanded market for the software. After NeXT exited the hardware business, Canon produced a line of PCs called "object.station", including models 31, 41, 50, and 52, specifically designed to run NeXTSTEP for Intel. Canon also served as NeXT's distributor in Japan. The NeXT Computer was first released on the retail market in 1990, for . NeXT's original investor Ross Perot resigned from the board of directors in June 1991 to dedicate more time to Perot Systems, a Plano, Texas-based systems integrator. NeXT released a second generation of workstations in 1990. The new range includes a revised NeXT Computer, renamed the NeXTcube, and the NeXTstation, nicknamed "the slab" for its form-factor of a low-rise box. Jobs explicitly ensured that NeXT staff did not use the nickname "pizza box", so that the NeXT machines would not be compared to competing Sun workstations which already had that nickname. The magneto-optical drive was replaced with a 2.88 MB floppy drive but 2.88 MB floppy disks were expensive and the technology failed to supplant the 1.44 MB floppy. Realizing this, NeXT utilized the CD-ROM drive, which eventually became an industry standard for storage. Color graphics were available on the NeXTstation Color and on the NeXTdimension graphics processor hardware for the NeXTcube. 
The new computers were cheaper and faster than their predecessors, with the new Motorola 68040 processor. In 1992, NeXT launched "Turbo" variants of the NeXTcube and NeXTstation, with a 33 MHz 68040 processor and the maximum RAM capacity increased to 128 MB. NeXT sold 20,000 computers in 1992, and NeXT counted upgraded motherboards on back order as system sales. This is a small number compared with competitors, but the company reported sales of $140 million for the year which encouraged Canon to invest a further $30 million to keep the company afloat. In total, 50,000 NeXT machines were sold, including thousands to the then super secret National Reconnaissance Office located in Chantilly, Virginia. NeXT's long-term plan was to migrate to the emerging high-performance industry standard Reduced Instruction Set Computing (RISC) architecture, with the NeXT RISC Workstation (NRW). Initially the NRW was to be based on the Motorola 88110 processor, but due to a lack of confidence in Motorola's commitment to the 88000-series architecture in the time leading up to the AIM alliance's transition to PowerPC, it was later redesigned around dual PowerPC 601s. NeXT produced some motherboards and enclosures, but exited the hardware business before full production. NeXT computers were delivered with Mathematica pre-installed. Several developers used the NeXT platform to write pioneering programs. Tim Berners-Lee used a NeXT Computer in 1990 to create the first Web browser and Web server; accordingly, NeXT was instrumental in the development of the World Wide Web. NeXT systems were used by professors for scientific and engineering applications, and for developing finished newspaper layouts using News. George Mason University in the early 1990s had a set of NeXT workstations for publishing, as well as Silicon Graphics for CAD/GL and Mathematica for astrophysics. The games "Doom", "", and "Quake" were developed by id Software on NeXT machines. 
Other games based on the "Doom" engine, such as "Heretic" and "" by Raven Software, as well as "Strife" by Rogue Entertainment, were also developed on NeXT hardware using id's tools. Other commercial programs were released for NeXT computers, including Altsys Virtuoso, a vector drawing program with page-layout features which was ported to Mac OS and Microsoft Windows as Aldus FreeHand v4, and the Lotus Improv spreadsheet program. The systems were bundled with a number of smaller built-in applications, such as the Merriam-Webster Collegiate Dictionary, Oxford Quotations, the complete works of William Shakespeare, and the Digital Librarian search engine to access them. NeXT started porting the NeXTSTEP operating system to IBM PC compatible computers using the Intel 80486 processor in late 1991 because of a change in business strategy to withdraw from the hardware business entirely. A demonstration of the port was displayed at the NeXTWorld Expo in January 1992. By mid-1993 the product was complete and version 3.1, also known as NeXTSTEP 486, was released. Prior to this release, Chrysler planned to buy 3,000 copies in 1992. NeXTSTEP 3.x was later ported to PA-RISC and SPARC-based platforms, for a total of four versions: NeXTSTEP/NeXT (for NeXT's own hardware), NeXTSTEP/Intel, NeXTSTEP/PA-RISC, and NeXTSTEP/SPARC. Although the three other ports were not widely used, NeXTSTEP gained popularity at institutions such as First Chicago NBD, Swiss Bank Corporation, O'Connor and Company, and other organizations owing to its programming model. It was used by many American federal agencies, such as United States Naval Research Laboratory, the National Security Agency, the Advanced Research Projects Agency, the Central Intelligence Agency, and the National Reconnaissance Office. Some IBM PC clone vendors offered somewhat customized hardware solutions that were delivered running NeXTSTEP on Intel, such as the Elonex NextStation and the Canon object.station 41. 
NeXT withdrew from the hardware business in 1993 and the company was renamed NeXT Software, Inc.; consequently, 300 of the 540 staff employees were laid off. NeXT negotiated to sell the hardware business, including the Fremont factory, to Canon, which later pulled out of the deal. Work on the PowerPC machines was stopped, along with all hardware production. Sun Microsystems CEO Scott McNealy announced plans to invest $10 million in 1993 and use NeXT software in future Sun systems. NeXT partnered with Sun to create OpenStep, which is NeXTSTEP's application layer hosted on a third-party operating system. After exiting the hardware business, NeXT focused on other operating systems, in effect returning to the original business plan. New products based on OpenStep were released, including OpenStep Enterprise, a version for Microsoft's Windows NT. The company launched WebObjects, a platform for building large-scale dynamic web applications. Many large businesses including Dell, Disney, WorldCom, and the BBC used WebObjects for a short time. Eventually WebObjects was used solely to power Apple's iTunes Store and most of its corporate website, until Apple discontinued the software. Apple Computer announced the intention to acquire NeXT on December 20, 1996. Apple paid $429 million in cash, which went to the initial investors, and 1.5 million Apple shares, which went to Steve Jobs, who was deliberately not given cash for his part in the deal. The main purpose of the acquisition was to use NeXTSTEP as a foundation to replace the dated classic Mac OS, instead of BeOS or the in-development Copland. The deal was finalized on February 7, 1997, bringing Jobs back to Apple as a consultant, who was later appointed as interim CEO. In 2000, Jobs took the CEO position as a permanent assignment, holding the position until his resignation on August 24, 2011; Jobs died six weeks later on October 5, 2011 from complications of a relapsed pancreatic neuroendocrine tumor.
Several NeXT executives replaced their Apple counterparts when Steve Jobs restructured the company's board of directors. Over the next five years the NeXTSTEP operating system was ported to the PowerPC architecture. At the same time, an Intel port and OpenStep Enterprise toolkit for Windows were both produced. That operating system was codenamed Rhapsody, while the cross-platform toolkit was called "Yellow Box". For backward compatibility Apple added the "Blue Box" to Rhapsody, allowing existing Mac applications to be run in a self-contained cooperative multitasking environment. A server version of the new operating system was released as Mac OS X Server 1.0 in 1999, and the first consumer version, Mac OS X 10.0, in 2001. The OpenStep developer toolkit was renamed Cocoa. Rhapsody's Blue Box was renamed Classic Environment and changed to run applications full-screen without requiring a separate window. Apple included an updated version of the original Macintosh toolbox, called Carbon, that gave existing Mac applications access to the environment without the constraints of Blue Box. Some of NeXTSTEP's interface features are used in Mac OS X, including the Dock, the Services menu, the Finder's "Column" view, and the Cocoa text system. NeXTSTEP's processor-independent capabilities were retained in Mac OS X, leading to both PowerPC and Intel x86 versions (although only PowerPC versions were publicly available before 2006). Apple moved to Intel processors by August 2006. Jobs created a different corporate culture at NeXT in terms of facilities, salaries, and benefits. Jobs had experimented with some structural changes at Apple but at NeXT he abandoned conventional corporate structures, instead making a "community" with "members" instead of employees. There were only two different salaries at NeXT until the early 1990s. Team members who joined before 1986 were paid and those who joined afterward were paid .
This caused a few awkward situations where managers were paid less than their employees. Employees were given performance reviews and raises every six months because of the spartan salary plans. To foster openness, all employees had full access to the payrolls, although few employees ever took advantage of the privilege. NeXT's health insurance plan offered benefits not only to married couples but also to unmarried couples and same-sex couples, although the latter privilege was later withdrawn due to insurance complications. The payroll schedule was also very different from other companies in Silicon Valley at the time because instead of being delivered twice a month in arrears at the end of the pay period, it was delivered once a month in advance. Jobs found office space in Palo Alto, California on 3475 Deer Creek Road, occupying a glass and concrete building which featured a staircase designed by architect I. M. Pei. The first floor used hardwood flooring and large worktables where the workstations would be assembled. To avoid inventory errors, NeXT used the just-in-time (JIT) inventory strategy. The company contracted out for all major components such as mainboards and cases and had the finished components shipped to the first floor for assembly. The second floor was the office space with an open floor plan. The only enclosed rooms were Jobs's office and a few conference rooms. As NeXT expanded, more office space was needed. The company rented an office at 800 and 900 Chesapeake Drive in Redwood City, also designed by Pei. The architectural centerpiece was a "floating" staircase with no visible supports. The open floor plan was retained, although furnishings became luxurious, with $5,000 chairs, $10,000 sofas, and Ansel Adams prints. NeXT's first campus in Palo Alto was subsequently occupied by SAP AG. Its second campus in Redwood City was occupied by ApniCure and OncoMed Pharmaceuticals Inc. The first issue of "NeXTWORLD" magazine was printed in 1991.
It was published in San Francisco by Integrated Media and edited by Michael Miley and later Dan Ruby. It was the only mainstream periodical to discuss NeXT computers, the operating system, and NeXT software. The publication was discontinued in 1994 after only four volumes. A "NeXTWORLD Expo" followed as a developer conference, held in 1991 and 1992 at the San Francisco Civic Center and in 1993 and 1994 at the Moscone Center in San Francisco, with Steve Jobs as the keynote speaker. Though not very profitable, the company had a wide-ranging impact on the computer industry. Object-oriented programming and graphical user interfaces became more common after the 1988 release of the NeXTcube and NeXTSTEP. The technologically successful platform was often held as the trendsetter when other companies started to emulate the success of NeXT's object-oriented system. Widely seen as a response to NeXT, Microsoft announced the Cairo project in 1991; the Cairo specification included similar object-oriented user interface features for a coming consumer version of Windows NT. Although Cairo was ultimately abandoned, some elements were integrated into other projects. By 1994, Microsoft and NeXT were collaborating on a Windows NT port of OpenStep which was never released. By 1993, Taligent was considered by the press to be a competitor in objects and operating systems even without any product release, with NeXT being a main point of comparison. For the first few years, Taligent's theoretical newness was often compared to NeXT's older but mature and commercially established platform, but Taligent's debut release in 1995 was called "too little, too late" especially compared to NeXT. WebObjects failed to achieve wide popularity partly because of the initial high price of US$50,000, but it remains the first and most prominent early example of a web application server that enabled dynamic page generation based on user interactions as opposed to static content. 
WebObjects is now bundled with macOS Server and Xcode.
https://en.wikipedia.org/wiki?curid=21694
Nineveh Nineveh was an ancient Assyrian city of Upper Mesopotamia, located on the outskirts of Mosul in modern-day northern Iraq. It lies on the eastern bank of the Tigris River and was the capital of the Neo-Assyrian Empire. Today it is a common name for the half of Mosul that lies on the eastern bank of the Tigris, and the Nineveh Governorate takes its name from it. It was the largest city in the world for approximately fifty years until the year 612 BC when, after a bitter period of civil war in Assyria, it was sacked by a coalition of its former subject peoples, the Babylonians, Medes, Chaldeans, Persians, Scythians and Cimmerians. The city was never again a political or administrative centre, but by Late Antiquity it was the seat of a Christian bishop. It declined relative to Mosul during the Middle Ages and was mostly abandoned by the 13th century AD. Its ruins lie across the river from the modern-day major city of Mosul, in Iraq's Nineveh Governorate. The two main tells, or mound-ruins, within the walls are Kuyunjiq and Nabī Yūnus, site of a shrine to Jonah, the biblical prophet who preached to Nineveh. Large amounts of Assyrian sculpture and other artifacts have been excavated and are now located in museums around the world. The Islamic State of Iraq and the Levant (ISIL) occupied the site during the mid-2010s, during which time they bulldozed several of the monuments there and caused considerable damage to the others. Iraqi forces recaptured the area in January 2017. The English placename Nineveh comes from Latin and Septuagint Greek "Nineuḗ", under the influence of the Biblical Hebrew "Nīnewēh", from the Akkadian "Ninâ" or an Old Babylonian form of the name. The original meaning of the name is unclear but may have referred to a patron goddess. The cuneiform sign for "Ninâ" is a fish within a house (cf. Aramaic "nuna", "fish").
This may simply have meant "Place of Fish" or may have indicated a goddess associated with fish or the Tigris, possibly originally of Hurrian origin. The city was later said to be devoted to "the goddess Ishtar of Nineveh", and "Nina" was one of the Sumerian and Assyrian names of that goddess. The city was also known as "Ninuwa" in Mari; "Ninawa" in Aramaic and Syriac; and "Nainavā" in Persian. "Nabī Yūnus" is Arabic for "Prophet Jonah". "Kuyunjiq" was, according to Layard, a Turkish name; it was known as "Armousheeah" by the Arabs, and is thought to have some connection with the Kara Koyunlu dynasty. The remains of ancient Nineveh, the mound-ruins of Kuyunjiq and Nabī Yūnus, are located on a level part of the plain near the junction of the Tigris and the Khosr Rivers, within an area circumscribed by a brick rampart. This whole extensive space is now one immense area of ruins overlaid in parts by new suburbs of the city of Mosul. Nineveh was an important junction for commercial routes crossing the Tigris on the great highway between the Mediterranean Sea and the Indian Ocean, thus uniting the East and the West. It received wealth from many sources, so that it became one of the greatest of all the region's ancient cities, and the capital of the Neo-Assyrian Empire. Nineveh was one of the oldest and greatest cities in antiquity. The area it occupied was originally settled as early as 6000 BC, during the late Neolithic period. Deep soundings at Nineveh uncovered soil layers that have been dated to early in the era of the Hassuna archaeological culture. By 3000 BC, the area had become an important religious center for the Mesopotamian goddess Ishtar. The early city (and subsequent buildings) was constructed on a fault line and, consequently, suffered damage from a number of earthquakes. One such event destroyed the first temple of Ishtar, which was rebuilt in 2260 BC by the Akkadian king Manishtushu.
Texts from the Hellenistic period later offered an eponymous Ninus as the founder of Nineveh, although there is no historical basis for this. The regional influence of Nineveh became particularly pronounced during the archaeological period known as "Ninevite 5", or "Ninevite V" (2900–2600 BC). This period is defined primarily by the characteristic pottery that is found widely throughout northern Mesopotamia. For the northern Mesopotamian region, archaeologists have also developed the "Early Jezirah" chronology; according to this regional chronology, "Ninevite 5" is equivalent to the Early Jezirah I–II period. Ninevite 5 was preceded by the Late Uruk period. Ninevite 5 pottery is roughly contemporary with Early Transcaucasian culture ware and Jemdet Nasr ware. The Iraqi "Scarlet Ware" culture also belongs to this period; this colourful painted pottery is somewhat similar to Jemdet Nasr ware. Scarlet Ware was first documented in the Diyala River basin in Iraq. Later, it was also found in the nearby Hamrin Basin, and in Luristan. Historic Nineveh is first mentioned during the Old Assyrian Empire, in the reign of Shamshi-Adad I (1809–1775 BC), as a centre of worship of Ishtar, whose cult was responsible for the city's early importance. The goddess's statue was sent to Pharaoh Amenhotep III of Egypt in the 14th century BC, by orders of the king of Mitanni. The Assyrian city of Nineveh became one of Mitanni's vassals for half a century until the early 14th century BC. The Assyrian king Ashur-uballit I reclaimed it in 1365 BC while overthrowing the Mitanni Empire and creating the Middle Assyrian Empire (1365–1050 BC). There is a large body of evidence to show that Assyrian monarchs built extensively in Nineveh during the late 3rd and 2nd millennia BC; it appears to have been originally an "Assyrian provincial town".
Later monarchs whose inscriptions have appeared on the high city include the Middle Assyrian Empire kings Shalmaneser I (1274–1245 BC) and Tiglath-Pileser I (1114–1076 BC), both of whom were active builders in Assur (Ashur). During the Neo-Assyrian Empire, particularly from the time of Ashurnasirpal II (ruled 883–859 BC) onward, there was considerable architectural expansion. Successive monarchs such as Tiglath-pileser III, Sargon II, Sennacherib, Esarhaddon, and Ashurbanipal maintained existing palaces and founded new ones, as well as temples to Sîn, Ashur, Nergal, Shamash, Ninurta, Ishtar, Tammuz, Nisroch and Nabu. It was Sennacherib who made Nineveh a truly magnificent city (c. 700 BC). He laid out new streets and squares and built within it the South West Palace, or "palace without a rival", the plan of which has been mostly recovered and has overall dimensions of about . It comprised at least 80 rooms, many of which were lined with sculpture. A large number of cuneiform tablets were found in the palace. The solid foundation was made out of limestone blocks and mud bricks; it was tall. In total, the foundation is made of roughly of brick (approximately 160 million bricks). The walls on top, made out of mud brick, were an additional tall. Some of the principal doorways were flanked by colossal stone "lamassu" door figures weighing up to ; these were winged Mesopotamian lions or bulls, with human heads. These were transported from quarries at Balatai, and they had to be lifted up once they arrived at the site, presumably by a ramp. There are also of stone Assyrian palace reliefs that include pictorial records documenting every construction step, including carving the statues and transporting them on a barge. One picture shows 44 men towing a colossal statue. The carving shows three men directing the operation while standing on the colossus. Once the statues arrived at their destination, the final carving was done. Most of the statues weigh between .
The stone carvings in the walls include many battle scenes, impalings and scenes showing Sennacherib's men parading the spoils of war before him. The inscriptions boasted of his conquests: he wrote of Babylon: "Its inhabitants, young and old, I did not spare, and with their corpses I filled the streets of the city." A full and characteristic set shows the campaign leading up to the siege of Lachish in 701 BC; it is the "finest" from the reign of Sennacherib, and is now in the British Museum. He later wrote about a battle in Lachish: "And Hezekiah of Judah who had not submitted to my yoke...him I shut up in Jerusalem his royal city like a caged bird. Earthworks I threw up against him, and anyone coming out of his city gate I made pay for his crime. His cities which I had plundered I had cut off from his land." At this time, the total area of Nineveh comprised about , and fifteen great gates penetrated its walls. An elaborate system of eighteen canals brought water from the hills to Nineveh, and several sections of a magnificently constructed aqueduct erected by Sennacherib were discovered at Jerwan, about distant. The enclosed area had more than 100,000 inhabitants (perhaps closer to 150,000), about twice as many as Babylon at the time, placing it among the largest settlements worldwide. Some scholars believe that the garden which Sennacherib built next to his palace, with its associated irrigation works, comprised the original Hanging Gardens of Babylon. The greatness of Nineveh was short-lived. In around 627 BC, after the death of its last great king Ashurbanipal, the Neo-Assyrian empire began to unravel through a series of bitter civil wars between rival claimants for the throne, and in 616 BC Assyria was attacked by its own former vassals, the Babylonians, Chaldeans, Medes, Persians, Scythians and Cimmerians.
In about 616 BC Kalhu was sacked; the allied forces eventually reached Nineveh, besieging and sacking the city in 612 BC following bitter house-to-house fighting, after which it was razed. Most of the people in the city who could not escape to the last Assyrian strongholds in the north and west were either massacred or deported out of the city and into the countryside, where they founded new settlements. Many unburied skeletons were found by the archaeologists at the site. The Assyrian empire then came to an end by 605 BC, the Medes and Babylonians dividing its colonies between themselves. It is not clear whether Nineveh came under the rule of the Medes or the Neo-Babylonian Empire in 612 BC. The Babylonian "Chronicle Concerning the Fall of Nineveh" records that Nineveh was "turned into mounds and heaps", but this is literary hyperbole. The complete destruction of Nineveh has traditionally been seen as confirmed by the Hebrew "Book of Ezekiel" and the Greek "Retreat of the Ten Thousand" of Xenophon (d. 354 BC). To the Greek historians Ctesias and Herodotus (c. 400 BC), Nineveh was a thing of the past; and when Xenophon passed the place in the 4th century BC he described it as abandoned. There are no later cuneiform tablets in Akkadian from Nineveh. Although devastated in 612 BC, the city was never completely abandoned. The earliest piece of written evidence for the persistence of Nineveh as a settlement is possibly the Cyrus Cylinder of 539/538 BC, but the reading of this is disputed. If correctly read as Nineveh, it indicates that Cyrus the Great restored the temple of Ishtar at Nineveh and probably encouraged resettlement. A number of cuneiform Elamite tablets have been found at Nineveh. They probably date from the time of the revival of Elam in the century following the collapse of Assyria.
The Hebrew "Book of Jonah", written in the 4th century BC, attempts to explain the failure of prophecies of the destruction of Nineveh, such as that of Nahum, implying the city's continued existence. Archaeologically, there is evidence of repairs at the temple of Nabu after 612 BC and for the continued use of Sennacherib's palace. There is evidence of syncretic Hellenistic cults. A statue of Hermes has been found, as has a Greek inscription attached to a shrine of the Sebitti. A statue of Herakles Epitrapezios dated to the 2nd century AD has also been found. The library of Ashurbanipal may still have been in use until around the time of Alexander the Great. The city was actively resettled under the Seleucid Empire. There is evidence of more changes in Sennacherib's palace under the Parthian Empire. The Parthians also established a municipal mint at Nineveh that struck bronze coins. According to Tacitus, in AD 50 Meherdates, a claimant to the Parthian throne with Roman support, took Nineveh. By Late Antiquity, Nineveh was restricted to the east bank of the Tigris, and the west bank was uninhabited. Under the Sasanian Empire, Nineveh was not an administrative centre. By the 2nd century AD there were Christians present, and by 554 it was a bishopric of the Church of the East. King Khosrow II (591–628) built a fortress on the west bank, and two Christian monasteries were constructed around 570 and 595. This growing settlement was not called Mosul until after the Arab conquests. It may have been called Hesnā ʿEbrāyē (Jews' Fort). In 627, the city was the site of the Battle of Nineveh between the Eastern Roman Empire and the Sasanians. In 641, it was conquered by the Arabs, who built a mosque on the west bank and turned it into an administrative centre. Under the Umayyad dynasty, Mosul eclipsed Nineveh, which was reduced to a Christian suburb with limited new construction. By the 13th century, Nineveh was mostly ruins.
A church was converted into a Muslim shrine to the prophet Jonah, which continued to attract pilgrims until its destruction by ISIL in 2014. In the Hebrew Bible, Nineveh is first mentioned in Genesis 10:11: "Ashur left that land, and built Nineveh". Some modern English translations interpret "Ashur" in the Hebrew of this verse as the country "Assyria" rather than a person, thus making Nimrod, rather than Ashur, the founder of Nineveh. Sir Walter Raleigh's notion that Nimrod built Nineveh, and the cities in Genesis 10:11–12, has also been refuted by scholars. The discovery of the fifteen Jubilees texts found amongst the Dead Sea Scrolls has since shown that, according to the Jewish sects of Qumran, Genesis 10:11 affirms the apportionment of Nineveh to Ashur. The attribution of Nineveh to Ashur is also supported by the Greek Septuagint, the King James Bible, the Geneva Bible, and by the historian Flavius Josephus in his Antiquities of the Jews (Antiquities, i, vi, 4). Nineveh was the flourishing capital of the Assyrian Empire and was the home of King Sennacherib, King of Assyria, during the Biblical reign of King Hezekiah (יְחִזְקִיָּהוּ) and the lifetime of the Judean prophet Isaiah (ישעיה). As recorded in Hebrew scripture, Nineveh was also the place where Sennacherib died at the hands of his two sons, who then fled to the vassal land of Urartu. The book of the prophet Nahum is almost exclusively taken up with prophetic denunciations against Nineveh. Its ruin and utter desolation are foretold. Its end was strange, sudden, and tragic. According to the Bible, it was God's doing, His judgment on Assyria's pride. In fulfillment of prophecy, God made "an utter end of the place". It became a "desolation". The prophet Zephaniah also predicts its destruction along with the fall of the empire of which it was the capital. Nineveh is also the setting of the Book of Tobit.
The Book of Jonah, set in the days of the Assyrian empire, describes it as an "exceedingly great city of three days' journey in breadth", whose population at that time is given as "more than 120,000". Genesis 10:11–12 lists four cities: "Nineveh, Rehoboth, Calah, and Resen", ambiguously stating that either Resen or Calah is "the great city." The ruins of Kuyunjiq, Nimrud, Karamles and Khorsabad form the four corners of an irregular quadrangle. The ruins of the "great city" Nineveh, with the whole area included within the parallelogram they form by lines drawn from the one to the other, are generally regarded as consisting of these four sites. The description of Nineveh in Jonah was likely a reference to greater Nineveh, including the surrounding cities of Rehoboth, Calah and Resen. The Book of Jonah depicts Nineveh as a wicked city worthy of destruction. God sent Jonah to preach to the Ninevites of their coming destruction, and they fasted and repented because of this. As a result, God spared the city; when Jonah protested against this, God stated that He was showing mercy for the population, who were ignorant of the difference between right and wrong ("who cannot discern between their right hand and their left hand"), and mercy for the animals in the city. Nineveh's repentance and salvation from evil is recounted in the Hebrew Tanakh, also known as the Old Testament, and is referred to in the Christian Bible and the Muslim Quran. To this day, Syriac and Oriental Orthodox churches commemorate the three days Jonah spent inside the fish during the Fast of Nineveh. The Christians observing this holiday fast by refraining from food and drink. Churches encourage followers to refrain from meat, fish and dairy products. The location of Nineveh was known, to some, continuously through the Middle Ages. Benjamin of Tudela visited it in 1170; Petachiah of Regensburg soon after. Carsten Niebuhr recorded its location during the 1761–67 Danish expedition.
Niebuhr wrote afterwards that "I did not learn that I was at so remarkable a spot, till near the river. Then they showed me a village on a great hill, which they call Nunia, and a mosque, in which the prophet Jonah was buried. Another hill in this district is called Kalla Nunia, or the Castle of Nineveh. On that lies a village Koindsjug." In 1842, the French Consul General at Mosul, Paul-Émile Botta, began to search the vast mounds that lay along the opposite bank of the river. The locals whom he employed in these excavations, to their great surprise, came upon the ruins of a building at the mound of Khorsabad, which, on further exploration, turned out to be the royal palace of Sargon II, in which large numbers of reliefs were found and recorded, though they had been damaged by fire and were mostly too fragile to remove. In 1847 the young British diplomat Austen Henry Layard explored the ruins. Layard did not use modern archaeological methods; his stated goal was "to obtain the largest possible number of well preserved objects of art at the least possible outlay of time and money." In the Kuyunjiq mound, Layard rediscovered in 1849 the lost palace of Sennacherib with its 71 rooms and colossal bas-reliefs. He also unearthed the palace and famous library of Ashurbanipal with 22,000 cuneiform clay tablets. Most of Layard's material was sent to the British Museum, but two large pieces were given to Lady Charlotte Guest and eventually found their way to the Metropolitan Museum. The study of the archaeology of Nineveh reveals the wealth and glory of ancient Assyria under kings such as Esarhaddon (681–669 BC) and Ashurbanipal (669–626 BC). The work of exploration was carried on by George Smith, Hormuzd Rassam (a modern Assyrian), and others, and a vast treasury of specimens of Assyria was incrementally exhumed for European museums. 
Palace after palace was discovered, with their decorations and their sculptured slabs, revealing the life and manners of this ancient people, their arts of war and peace, the forms of their religion, the style of their architecture, and the magnificence of their monarchs. The mound of Kuyunjiq was excavated again by the archaeologists of the British Museum, led by Leonard William King, at the beginning of the 20th century. Their efforts concentrated on the site of the Temple of Nabu, the god of writing, where another cuneiform library was supposed to exist. However, no such library was ever found: most likely, it had been destroyed by the activities of later residents. The excavations started again in 1927, under the direction of Campbell Thompson, who had taken part in King's expeditions. Some works were carried out outside Kuyunjiq, for instance on the mound of Tell Nebi Yunus, which was the ancient arsenal of Nineveh, or along the outside walls. Here, near the northwestern corner of the walls, beyond the pavement of a later building, the archaeologists found almost 300 fragments of prisms recording the royal annals of Sennacherib, Esarhaddon, and Ashurbanipal, beside a prism of Esarhaddon which was almost perfect. After the Second World War, several excavations were carried out by Iraqi archaeologists. From 1951 to 1958 Mohammed Ali Mustafa worked the site. The work was continued from 1967 through 1971 by Tariq Madhloom. Additional excavations were carried out by Manhal Jabur in 1980 and 1987. For the most part, these digs focused on Tell Nebi Yunus. The British archaeologist and Assyriologist Professor David Stronach of the University of California, Berkeley conducted a series of surveys and digs at the site from 1987 to 1990, focusing his attention on the several gates and the extant mudbrick walls, as well as the system that supplied water to the city in times of siege. The excavation reports are in progress.
Most recently, an Iraqi-Italian archaeological expedition of the Alma Mater Studiorum – University of Bologna and the Iraqi SBAH, led by Professor Nicolò Marchetti, began in September–November 2019 a long-term project aimed at the excavation, conservation and public presentation of eastern Nineveh (the NINEV_E project). Work was carried out in seven excavation areas, from the Adad Gate (now completely repaired after the removal of hundreds of tons of debris from ISIL's destruction, explored, and protected with a new roof) deep into the Nebi Yunus town. In three areas a very thick later stratigraphy was encountered, but the late 7th-century BC stratum was reached everywhere; in one area of the pre-Sennacherib lower town, the excavations have already exposed an 11th-century BC stratum, with the aim of eventually exploring the first settlement there. The site is endangered, with dumping of debris, illegal settlements and quarrying as the main threats. Today, Nineveh's location is marked by two large mounds, Kuyunjiq and "Nabī Yūnus" ("Prophet Jonah"), and the remains of the city walls (about in circumference). The Neo-Assyrian levels of Kuyunjiq have been extensively explored. The other mound, "Nabī Yūnus", has not been as extensively explored because there was an Arab Muslim shrine dedicated to that prophet on the site. On July 24, 2014, the Islamic State of Iraq and the Levant destroyed the shrine as part of a campaign to destroy religious sanctuaries it deemed "un-Islamic." The ruin mound of Kuyunjiq rises about above the surrounding plain of the ancient city. It is quite broad, measuring about . Its upper layers have been extensively excavated, and several Neo-Assyrian palaces and temples have been found there. A deep sounding by Max Mallowan revealed evidence of habitation as early as the 6th millennium BC. Today, there is little evidence of these old excavations other than weathered pits and earth piles.
In 1990, the only Assyrian remains visible were those of the entry court and the first few chambers of the Palace of Sennacherib. Since that time, the palace chambers have suffered significant damage from looters. Portions of relief sculptures that were in the palace chambers in 1990 were seen on the antiquities market by 1996. Photographs of the chambers made in 2003 show that many of the fine relief sculptures there have been reduced to piles of rubble. Tell Nebi Yunus is located about south of Kuyunjiq and is the secondary ruin mound at Nineveh. On the basis of texts of Sennacherib, the site has traditionally been identified as the "armory" of Nineveh, and a gate and pavements excavated by Iraqis in 1954 have been considered to be part of the "armory" complex. Excavations in 1990 revealed a monumental entryway consisting of a number of large inscribed orthostats and "bull-man" sculptures, some apparently unfinished. Following the Mosul liberation, the tunnels under Tell Nebi Yunus were explored in 2018, and a 3,000-year-old palace was discovered there, including a pair of reliefs, each showing a row of women, along with reliefs of "lamassu". The ruins of Nineveh are surrounded by the remains of a massive stone and mudbrick wall dating from about 700 BC. About 12 km in length, the wall system consisted of an ashlar stone retaining wall about high surmounted by a mudbrick wall about high and thick. The stone retaining wall had projecting stone towers spaced about every . The stone wall and towers were topped by three-step merlons. Five of the gateways have been explored to some extent by archaeologists. The Mashki Gate, translated "Gate of the Water Carriers" ("Mashki" from the Persian root word "mashk", meaning waterskin) and also called the "Masqi Gate" (Arabic: بوابة مسقي), was perhaps used to take livestock to water from the Tigris, which currently flows about to the west. It has been reconstructed in fortified mudbrick to the height of the top of the vaulted passageway.
The Assyrian original may have been plastered and ornamented. The Nergal Gate, named for the god Nergal, may have been used for some ceremonial purpose, as it is the only known gate flanked by stone sculptures of winged bull-men ("lamassu"). The reconstruction is conjectural, as the gate was excavated by Layard in the mid-19th century and reconstructed in the mid-20th century. The Adad Gate was named for the god Adad. A reconstruction was begun in the 1960s by Iraqis but was not completed. The result was a mixture of concrete and eroding mudbrick, which nonetheless does give some idea of the original structure. The excavator left some features unexcavated, allowing a view of the original Assyrian construction. The original brickwork of the outer vaulted passageway was well exposed, as was the entrance of the vaulted stairway to the upper levels. The actions of Nineveh's last defenders could be seen in the hastily built mudbrick construction which narrowed the passageway from . Around April 13, 2016, ISIL demolished both the gate and the adjacent wall by flattening them with a bulldozer. The Shamash Gate, named for the sun god Shamash, opens onto the road to Erbil. It was excavated by Layard in the 19th century. The stone retaining wall and part of the mudbrick structure were reconstructed in the 1960s. The mudbrick reconstruction has deteriorated significantly. The stone wall projects outward about from the line of the main wall for a width of about . It is the only gate with such a significant projection. The mound of its remains towers above the surrounding terrain. Its size and design suggest it was the most important gate in Neo-Assyrian times. Another explored gate lies near the south end of the eastern city wall. Exploratory excavations were undertaken there by the University of California, Berkeley expedition of 1989–1990. There is an outward projection of the city wall, though not as pronounced as at the Shamash Gate. The entry passage had been narrowed with mudbrick to about as at the Adad Gate.
Human remains from the final battle of Nineveh were found in the passageway. Located in the eastern wall, it is the southernmost and largest of all the remaining gates of ancient Nineveh. By 2003, the reliefs at the site of Nineveh were already decaying for lack of proper protective roofing and suffering from vandalism and looting holes dug into chamber floors. Future preservation is further compromised by the site's proximity to expanding suburbs. The ailing Mosul Dam is a persistent threat to Nineveh as well as the city of Mosul. This is in no small part due to years of disrepair (in 2006, the U.S. Army Corps of Engineers cited it as the most dangerous dam in the world), the cancellation of a second dam project in the 1980s that would have acted as flood relief in case of failure, and the occupation by ISIL in 2014, which resulted in fleeing workers and stolen equipment. If the dam fails, the entire site could be under as much as 45 feet (14 m) of water. In an October 2010 report titled "Saving Our Vanishing Heritage", Global Heritage Fund named Nineveh one of 12 sites most "on the verge" of irreparable destruction and loss, citing insufficient management, development pressures and looting as primary causes. By far, the greatest threat to Nineveh has been purposeful human actions by ISIL, which first occupied the area in the mid-2010s. In early 2015, they announced their intention to destroy the walls of Nineveh if the Iraqis tried to liberate the city. They also threatened to destroy artifacts. On February 26 they destroyed several items and statues in the Mosul Museum and are believed to have plundered others to sell overseas. The items were mostly from the Assyrian exhibit, which ISIL declared blasphemous and idolatrous. There were 300 items in the museum out of a total of 1,900, with the other 1,600 having been taken to the National Museum of Iraq in Baghdad for security reasons prior to the 2014 Fall of Mosul. Some of the artifacts sold and/or destroyed were from Nineveh.
Just a few days after the destruction of the museum pieces, they demolished remains at the major heritage sites of Khorsabad, Nimrud, and Hatra, the last a UNESCO World Heritage Site. Assyrians of the Ancient Church of the East, Chaldean Catholic Church, Syriac Catholic Church, Syriac Orthodox Church, Assyrian Church of the East and Saint Thomas Christians of the Syro-Malabar Catholic Church observe a fast called "Ba'uta d-Ninwe" (ܒܥܘܬܐ ܕܢܝܢܘܐ), which means "Nineveh's Prayer". Copts and Ethiopian Orthodox also maintain this fast. The English Romantic poet Edwin Atherstone wrote an epic poem, "The Fall of Nineveh". The work tells of an uprising against the king Sardanapalus by all the nations that were dominated by the Assyrian empire. The poem portrays him as a great criminal who has had one hundred prisoners of war executed. After a long struggle the town is conquered by Median and Babylonian troops led by the prince Arbaces and the priest Belesis. The king sets his own palace on fire and dies inside together with all his concubines. Atherstone's friend, the artist John Martin, created a painting of the same name inspired by the poem. The English poet John Masefield's well-known, fanciful 1903 poem "Cargoes" mentions Nineveh in its first line. Nineveh is also mentioned in Rudyard Kipling's 1897 poem "Recessional" and in Arthur O'Shaughnessy's 1873 poem "Ode". The 1962 Italian peplum movie, "War Gods of Babylon", is based on the sacking and fall of Nineveh by the combined rebel armies led by the Babylonians.
https://en.wikipedia.org/wiki?curid=21699
Sophie Germain Marie-Sophie Germain (; 1 April 1776 – 27 June 1831) was a French mathematician, physicist, and philosopher. Despite initial opposition from her parents and difficulties presented by society, she gained education from books in her father's library, including ones by Leonhard Euler, and from correspondence with famous mathematicians such as Lagrange, Legendre, and Gauss (under the pseudonym of «Monsieur LeBlanc»). One of the pioneers of elasticity theory, she won the grand prize from the Paris Academy of Sciences for her essay on the subject. Her work on Fermat's Last Theorem provided a foundation for mathematicians exploring the subject for hundreds of years after. Because of prejudice against her sex, she was unable to make a career out of mathematics, but she worked independently throughout her life. Before her death, Gauss had recommended that she be awarded an honorary degree, but that never occurred. On 27 June 1831, she died from breast cancer. At the centenary of her life, a street and a girls’ school were named after her. The Academy of Sciences established the Sophie Germain Prize in her honor. Marie-Sophie Germain was born on 1 April 1776, in Paris, France, in a house on Rue Saint-Denis. According to most sources, her father, Ambroise-François, was a wealthy silk merchant, though some believe he was a goldsmith. In 1789, he was elected as a representative of the bourgeoisie to the États-Généraux, which he saw change into the Constitutional Assembly. It is therefore assumed that Sophie witnessed many discussions between her father and his friends on politics and philosophy. Gray proposes that after his political career, Ambroise-François became the director of a bank; in any case, the family remained well-off enough to support Germain throughout her adult life. Marie-Sophie had one younger sister, named Angélique-Ambroise, and one older sister, named Marie-Madeline. 
Her mother was also named Marie-Madeline, and this plethora of "Maries" may have been the reason she went by Sophie. Germain's nephew Armand-Jacques Lherbette, Marie-Madeline's son, published some of Germain's work after she died (see Work in Philosophy). When Germain was 13, the Bastille fell, and the revolutionary atmosphere of the city forced her to stay inside. For entertainment she turned to her father's library. Here she found J. E. Montucla's "L'Histoire des Mathématiques", and his story of the death of Archimedes intrigued her. Sophie Germain thought that if geometry, which at that time referred to all of pure mathematics, could hold such fascination for Archimedes, it was a subject worthy of study. So she pored over every book on mathematics in her father's library, even teaching herself Latin and Greek so she could read works like those of Sir Isaac Newton and Leonhard Euler. She also enjoyed textbooks by Étienne Bézout and by Cousin. Later, Cousin visited Germain at home, encouraging her in her studies. Germain's parents did not at all approve of her sudden fascination with mathematics, which was then thought inappropriate for a woman. When night came, they would deny her warm clothes and a fire for her bedroom to try to keep her from studying, but after they left she would take out candles, wrap herself in quilts and do mathematics. After some time, her mother even secretly supported her. In 1794, when Germain was 18, the École Polytechnique opened. As a woman, Germain was barred from attending, but the new system of education made the "lecture notes available to all who asked". The new method also required the students to "submit written observations". Germain obtained the lecture notes and began sending her work to Joseph Louis Lagrange, a faculty member. She used the name of a former student, Monsieur Antoine-Auguste Le Blanc, "fearing", as she later explained to Gauss, "the ridicule attached to a female scientist".
When Lagrange saw the intelligence of M. Le Blanc, he requested a meeting, and thus Sophie was forced to disclose her true identity. Fortunately, Lagrange did not mind that Germain was a woman, and he became her mentor. Germain first became interested in number theory in 1798, when Adrien-Marie Legendre published his "Essai sur la théorie des nombres". After studying the work, she opened a correspondence with him on number theory, and later, elasticity. Legendre showed some of Germain's work in the supplement to the second edition of the "Essai", where he calls it "very ingenious" (see also Her work on Fermat's Last Theorem below). Germain's interest in number theory was renewed when she read Carl Friedrich Gauss' monumental work "Disquisitiones Arithmeticae". After three years of working through the exercises and trying her own proofs for some of the theorems, she wrote, again under the pseudonym of M. Le Blanc, to the author himself, who was one year younger than she was. The first letter, dated 21 November 1804, discussed Gauss' "Disquisitiones" and presented some of Germain's work on Fermat's Last Theorem. In the letter, Germain claimed to have proved the theorem for "n" = "p" − 1, where "p" is a prime number of the form "p" = 8"k" + 7. However, her proof contained a weak assumption, and Gauss' reply did not comment on it. Around 1807 (sources differ), during the Napoleonic wars, the French were occupying the German town of Braunschweig, where Gauss lived. Germain, concerned that he might suffer the fate of Archimedes, wrote to General Pernety, a family friend, requesting that he ensure Gauss' safety. General Pernety sent a chief of a battalion to meet with Gauss personally to see that he was safe. As it turned out, Gauss was fine, but he was confused by the mention of Sophie's name. Three months after the incident, Germain disclosed her true identity to Gauss. He replied: How can I describe my astonishment and admiration on seeing my esteemed correspondent M. Le Blanc metamorphosed into this celebrated person ...
when a woman, because of her sex, our customs and prejudices, encounters infinitely more obstacles than men in familiarising herself with [number theory's] knotty problems, yet overcomes these fetters and penetrates that which is most hidden, she doubtless has the most noble courage, extraordinary talent, and superior genius. Gauss' letters to Olbers show that his praise for Germain was sincere. In the same 1807 letter, Germain claimed that if formula_1 is of the form formula_2, then formula_3 is also of that form. Gauss replied with a counterexample: formula_4 can be written as formula_5, but formula_6 cannot. Although Gauss thought well of Germain, his replies to her letters were often delayed, and he generally did not review her work. Eventually his interests turned away from number theory, and in 1809 the letters ceased. Despite the friendship of Germain and Gauss, they never met. When Germain's correspondence with Gauss ceased, she took interest in a contest sponsored by the Paris Academy of Sciences concerning Ernst Chladni's experiments with vibrating metal plates. The object of the competition, as stated by the Academy, was "to give the mathematical theory of the vibration of an elastic surface and to compare the theory to experimental evidence". Lagrange's comment that a solution to the problem would require the invention of a new branch of analysis deterred all but two contestants, Denis Poisson and Germain. Then Poisson was elected to the Academy, thus becoming a judge instead of a contestant, and leaving Germain as the only entrant to the competition. In 1809 Germain began work. Legendre assisted by giving her equations, references, and current research. She submitted her paper early in the fall of 1811 and did not win the prize. The judging commission felt that "the true equations of the movement were not established", even though "the experiments presented ingenious results". 
Lagrange was able to use Germain's work to derive an equation that was "correct under special assumptions". The contest was extended by two years, and Germain decided to try again for the prize. At first Legendre continued to offer support, but then he refused all help. Germain's anonymous 1813 submission was still littered with mathematical errors, especially involving double integrals, and it received only an honorable mention because "the fundamental base of the theory [of elastic surfaces] was not established". The contest was extended once more, and Germain began work on her third attempt. This time she consulted with Poisson. In 1814 he published his own work on elasticity and did not acknowledge Germain's help (although he had worked with her on the subject and, as a judge on the Academy commission, had had access to her work). Germain submitted her third paper under her own name, and on 8 January 1816 she became the first woman to win a prize from the Paris Academy of Sciences. She did not appear at the ceremony to receive her award. Although Germain had at last been awarded the prize, the Academy was still not fully satisfied. Germain had derived the correct differential equation (a special case of the Kirchhoff–Love equation), but her method did not predict experimental results with great accuracy, as she had relied on an incorrect equation from Euler, which led to incorrect boundary conditions. Germain's final equation for the vibration of a plane lamina "z"("x", "y", "t") can be written as "N"2(∂4"z"/∂"x"4 + 2 ∂4"z"/∂"x"2∂"y"2 + ∂4"z"/∂"y"4) + ∂2"z"/∂"t"2 = 0, where "N"2 is a constant. After winning the Academy contest, she was still not able to attend its sessions because of the Academy's tradition of excluding women other than the wives of members. Seven years later this situation was transformed, when she made friends with Joseph Fourier, a secretary of the Academy, who obtained tickets to the sessions for her.
Germain published her prize-winning essay at her own expense in 1821, mostly because she wanted to present her work in opposition to that of Poisson. In the essay she pointed out some of the errors in her method. In 1826 she submitted a revised version of her 1821 essay to the Academy. According to Andrea Del Centina, the revision included attempts to clarify her work by "introducing certain simplifying hypotheses". This put the Academy in an awkward position, as they felt the paper to be "inadequate and trivial", but they did not want to "treat her as a professional colleague, as they would any man, by simply rejecting the work". So Augustin-Louis Cauchy, who had been appointed to review her work, recommended that she publish it, and she followed his advice. One further work of Germain's on elasticity was published posthumously in 1831. She used the mean curvature in her research (see Honors in number theory). Germain's best work was in number theory, and her most significant contribution to number theory dealt with Fermat's Last Theorem. In 1815, after the elasticity contest, the Academy offered a prize for a proof of Fermat's Last Theorem. It reawakened Germain's interest in number theory, and she wrote to Gauss again after ten years of no correspondence. In the letter, Germain said that number theory was her preferred field and that it was in her mind all the time she was studying elasticity. She outlined a strategy for a general proof of Fermat's Last Theorem, including a proof for a special case. Germain's letter to Gauss contained her substantial progress toward a proof. She asked Gauss whether her approach to the theorem was worth pursuing. Gauss never answered. Fermat's Last Theorem can be divided into two cases. Case 1 involves all powers "p" that do not divide any of "x", "y", or "z". Case 2 includes all "p" that divide at least one of "x", "y", or "z".
Germain proposed the following, commonly called "Sophie Germain's theorem": Let "p" be an odd prime. Suppose there exists an auxiliary prime "P" = 2"Np" + 1 ("N" is any positive integer not divisible by 3) such that (1) if "x"p + "y"p + "z"p ≡ 0 (mod "P"), then "P" divides "xyz", and (2) "p" is not a "p"th power residue (mod "P"). Then the first case of Fermat's Last Theorem holds true for "p". Germain used this result to prove the first case of Fermat's Last Theorem for all odd primes "p" < 100, and she showed that any counterexamples for exponents "p" > 5 must be numbers "whose size frightens the imagination", around 40 digits long. Germain did not publish this work. Her brilliant theorem is known only because of the footnote in Legendre's treatise on number theory, where he used it to prove Fermat's Last Theorem for "p" = 5 (see Correspondence with Legendre). Germain also proved or nearly proved several results that were attributed to Lagrange or were rediscovered years later. Del Centina states that "after almost two hundred years her ideas were still central", but ultimately her method did not work. In addition to mathematics, Germain studied philosophy and psychology. She wanted to classify facts and generalize them into laws that could form a system of psychology and sociology, which were then just coming into existence. Her philosophy was highly praised by Auguste Comte. Two of her philosophical works were published, both posthumously, due in part to the efforts of Lherbette, her nephew, who collected her philosophical writings and published them. One is a history of science and mathematics with Germain's commentary. In the other, the work admired by Comte, Germain argues that there are no differences between the sciences and the humanities. In 1829 Germain learned that she had breast cancer. Despite the pain, she continued to work. In 1831 "Crelle's Journal" published her paper on the curvature of elastic surfaces and "a note about finding "y" and "z" in formula_8". Mary Gray records that she also published an examination of principles which led to the discovery of the laws of equilibrium and movement of elastic solids.
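As a concrete illustration, the auxiliary-prime conditions in the standard modern statement of Sophie Germain's theorem can be checked numerically for the simplest choice "P" = 2"p" + 1 (the "N" = 1 case). The sketch below is illustrative only; function names are not from any source, and it uses the usual reformulation of the first condition as "no two nonzero "p"th power residues mod "P" are consecutive":

```python
# Sketch: check Sophie Germain's auxiliary-prime conditions for P = 2p + 1.
# Standard statement assumed: (1) x^p + y^p + z^p ≡ 0 (mod P) forces
# P | xyz -- equivalently (usual reformulation) no two nonzero p-th power
# residues mod P are consecutive; (2) p itself is not a p-th power residue.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def conditions_hold(p):
    P = 2 * p + 1
    if not is_prime(P):
        return False
    residues = {pow(x, p, P) for x in range(1, P)}  # nonzero p-th powers mod P
    no_consecutive = all((r + 1) % P not in residues for r in residues)
    return no_consecutive and p % P not in residues

# For p = 5, P = 11 is prime and both conditions hold, so the first case
# of Fermat's Last Theorem follows for exponent 5.
print([p for p in [3, 5, 11, 23, 29] if conditions_hold(p)])
```

For primes "p" with 2"p" + 1 composite (such as "p" = 7), this simplest auxiliary prime fails, which is why the theorem allows other values of "N".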
On 27 June 1831, she died in the house at 13 rue de Savoie. Despite Germain's intellectual achievements, her death certificate lists her occupation as property holder, not mathematician or scientist. But her work was not unappreciated by everyone. When the matter of honorary degrees came up at the University of Göttingen in 1837—six years after Germain's death—Gauss lamented: "she [Germain] proved to the world that even a woman can accomplish something worthwhile in the most rigorous and abstract of the sciences and for that reason would well have deserved an honorary degree". Germain's resting place in the Père Lachaise Cemetery in Paris is marked by a gravestone. At the centennial celebration of her life, a street and a girls' school were named after her, and a plaque was placed at the house where she died. The school houses a bust commissioned by the Paris City Council. In January 2020, Satellogic, a high-resolution Earth observation imaging and analytics company, launched a ÑuSat type micro-satellite named in honor of Sophie Germain. E. Dubouis defined a "sophien" of a prime "p" to be an auxiliary prime modulo which a congruence associated with Fermat's equation for "p" has no solutions in numbers prime to the modulus. A Sophie Germain prime is a prime "p" such that 2"p" + 1 is also prime. The Germain curvature (also called mean curvature) is formula_9, where "k"1 and "k"2 are the maximum and minimum values of the normal curvature. Sophie Germain's identity states that for any "x" and "y", "x"4 + 4"y"4 = (("x" + "y")2 + "y"2)(("x" − "y")2 + "y"2) = ("x"2 + 2"xy" + 2"y"2)("x"2 − 2"xy" + 2"y"2). Vesna Petrovich found that the educated world's response to the publication in 1821 of Germain's prize-winning essay "ranged from polite to indifferent". Yet, some critics had high praise for it. Of her essay in 1821, Cauchy said: "[it] was a work for which the name of its author and the importance of the subject both deserved the attention of mathematicians". Germain was also included in H. J. Mozans' book "Woman in Science", although Marilyn Bailey Ogilvie claims that the biography "is inaccurate and the notes and bibliography are unreliable".
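Sophie Germain primes and Sophie Germain's identity, both mentioned above, are easy to verify numerically. A small illustrative sketch (names are not from any source):

```python
# Sophie Germain primes: primes p such that 2p + 1 is also prime, plus a
# numeric spot-check of Sophie Germain's identity
#   x^4 + 4y^4 = ((x+y)^2 + y^2) * ((x-y)^2 + y^2).

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

germain_primes = [p for p in range(2, 100) if is_prime(p) and is_prime(2 * p + 1)]
print(germain_primes)  # [2, 3, 5, 11, 23, 29, 41, 53, 83, 89]

def germain_identity_holds(x, y):
    lhs = x**4 + 4 * y**4
    rhs = ((x + y)**2 + y**2) * ((x - y)**2 + y**2)
    return lhs == rhs

# The identity is a polynomial fact, so checking a grid of values is only a
# sanity check, not a proof.
assert all(germain_identity_holds(x, y)
           for x in range(-10, 11) for y in range(-10, 11))
```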
Nevertheless, it quotes the mathematician Claude-Louis Navier as saying that "it is a work which few men are able to read and which only one woman was able to write". Germain's contemporaries also had good things to say about her work in mathematics. Gauss certainly thought highly of her and recognized that European culture presented special difficulties to a woman in mathematics (see Correspondence with Gauss). The modern view generally acknowledges that although Germain had great talent as a mathematician, her haphazard education had left her without the strong base she needed to truly excel. As explained by Gray, "Germain's work in elasticity suffered generally from an absence of rigor, which might be attributed to her lack of formal training in the rudiments of analysis." Petrovich adds: "This proved to be a major handicap when she could no longer be regarded as a young prodigy to be admired but was judged by her peer mathematicians." Notwithstanding the problems with Germain's theory of vibrations, Gray states that "Germain's work was fundamental in the development of a general theory of elasticity." Mozans writes, however, that when the Eiffel Tower was built and the architects inscribed the names of 72 great French scientists, Germain's name was not among them, despite the salience of her work to the tower's construction. Mozans asked: "Was she excluded from this list ... because she was a woman? It would seem so." Concerning her early work in number theory, J. H. Sampson states: "She was clever with formal algebraic manipulations; but there is little evidence that she really understood the "Disquisitiones", and her work of that period that has come down to us seems to touch only on rather superficial matters." Gray adds that "The inclination of sympathetic mathematicians to praise her work rather than to provide substantive criticism from which she might learn was crippling to her mathematical development."
Yet Marilyn Bailey Ogilvie recognizes that "Sophie Germain's creativity manifested itself in pure and applied mathematics ... [she] provided imaginative and provocative solutions to several important problems", and, as Petrovich proposes, it may have been her very lack of training that gave her unique insights and approaches. Louis Bucciarelli and Nancy Dworsky, Germain's biographers, summarize as follows: "All the evidence argues that Sophie Germain had a mathematical brilliance that never reached fruition due to a lack of rigorous training available only to men." Germain was referenced and quoted in David Auburn's 2001 play "Proof". The protagonist is a young struggling female mathematician, Catherine, who found great inspiration in the work of Germain. Germain was also mentioned in John Madden's film adaptation of the same name in a conversation between Catherine (Gwyneth Paltrow) and Hal (Jake Gyllenhaal). In the fictional work "The Last Theorem" by Arthur C. Clarke and Frederik Pohl, Sophie Germain was credited with inspiring the central character, Ranjit Subramanian, to solve Fermat's Last Theorem. A new musical about Sophie Germain's life, entitled "The Limit", premiered at VAULT Festival in London in 2019. The Sophie Germain Prize, awarded annually by the Foundation Sophie Germain, is conferred by the Academy of Sciences in Paris. Its purpose is to honour a French mathematician for research in the foundations of mathematics. This award, in the amount of €8,000, was established in 2003, under the auspices of the Institut de France.
https://en.wikipedia.org/wiki?curid=27791
Suzanne Vega Suzanne Nadine Vega (born July 11, 1959) is an American singer-songwriter, musician and record producer, best known for her folk-inspired music. Vega's music career spans more than 30 years. She came to prominence in the mid-1980s, releasing four singles that entered the Top 40 charts in the UK during the 1980s and 1990s: "Marlene on the Wall", "Left of Center", "Luka" and "No Cheap Thrill". "Tom's Diner", which was originally released as an a cappella recording on Vega's second album, "Solitude Standing" (1987), was remixed in 1990 as a dance track by English electronic duo DNA with Vega as featured artist, and it became a Top 10 hit in over five countries. The song was used as a test during the creation of the MP3 format; the critical role of her song in the development of MP3 compression led to Vega being dubbed "The Mother of the MP3". Vega has released nine studio albums to date, the latest of which was released in 2016. Suzanne Nadine Vega was born on July 11, 1959, in Santa Monica, California. Her mother, Pat Vega (née Schumacher), is a computer systems analyst of German-Swedish heritage. Her father, Richard Peck, is of Scottish-English-Irish origin. They divorced soon after her birth. Her stepfather, Edgardo Vega Yunqué, also known as Ed Vega, was a writer and teacher from Puerto Rico. When Vega was two and a half, her family moved to New York City. She grew up in Spanish Harlem and on the Upper West Side. She was not aware of having a different biological father until she was nine years old. She and Peck met for the first time when she was in her late 20s, and they remain in contact. She attended the High School of Performing Arts, now renamed Fiorello H. LaGuardia High School, where she studied modern dance and graduated in 1977.
While majoring in English literature at Barnard College, she performed in small venues in Greenwich Village, where she was a regular contributor to Jack Hardy's Monday night songwriters' group at the Cornelia Street Cafe and had some of her first songs published on "Fast Folk" anthology albums. In 1984, she received a major label recording contract, making her one of the first "Fast Folk" artists to break out on a major label. Vega's self-titled debut album was released in 1985 and was well received by critics in the U.S.; it reached platinum status in the United Kingdom. Produced by Lenny Kaye and Steve Addabbo, the songs feature Vega's acoustic guitar in straightforward arrangements. A video was released for the album's song "Marlene on the Wall", which went into MTV and VH1's rotations. During this period Vega also wrote lyrics for two songs ("Lightning" and "Freezing") on "Songs from Liquid Days" by composer Philip Glass. Vega's song "Left of Center" co-written with Steve Addabbo for the 1986 John Hughes film "Pretty in Pink" reached No. 32 on the UK Singles Chart in 1986. Her next effort, "Solitude Standing" (1987), garnered critical and commercial success, selling over one million copies in the U.S. It includes the international hit single "Luka", which is written about, and from the point of view of, an abused child—at the time an uncommon subject for a pop hit. While continuing a focus on Vega's acoustic guitar, the music is more strongly pop-oriented and features fuller arrangements. The a cappella "Tom's Diner" from this album was later a hit, remixed by two British dance producers under the name DNA, in 1990. The track was originally a bootleg, until Vega allowed DNA to release it through her record company, and it became her biggest hit. Vega's third album, "Days of Open Hand" (1990), continued in the style of her first two albums. In 1992, she released the album "99.9F°". It consists of a mixture of folk music, dance beats and industrial music. 
This record was awarded Gold status by the RIAA in recognition of selling over 500,000 copies in the U.S. The single "Blood Makes Noise" from this album peaked at number-one on Billboard's Modern Rock Tracks. Vega later married the album's producer Mitchell Froom. Her fifth album, "Nine Objects of Desire", was released in 1996. The music varies between a frugal, simple style and the industrial production of "99.9F°". This album contains "Caramel", featured in the movie "The Truth About Cats & Dogs", and later the trailer for the movie "Closer". A song not included on that album, "Woman on the Tier," was featured on the soundtrack of the movie "Dead Man Walking". In 1997 she took a singing part on the concept album "Heaven and Hell", a musical interpretation of the seven deadly sins by her colleague Joe Jackson, with whom she had already collaborated in 1986 on "Left of Center" from the "Pretty in Pink" soundtrack (with Vega singing and Jackson playing piano). In 1999, Avon Books published Vega's book "The Passionate Eye: The Collected Writings of Suzanne Vega", a volume of poems, lyrics, essays and journalistic pieces. In September 2001, Vega released a new album entitled "Songs in Red and Gray". Three songs deal with Vega's divorce from her first husband, Mitchell Froom. At the memorial concert for her brother Tim Vega in December 2002, Vega began her role as the subject of the direct-cinema documentary, "Some Journey", directed by Christopher Seufert of Mooncusser Films. The documentary has not been completed. Underground hip hop duo Felt named a track on their album "", released in 2002, "Suzanne Vega". In 2003, the 21-song greatest hits compilation "Retrospective: The Best of Suzanne Vega" was released. (The UK version of "Retrospective" included an eight-song bonus CD as well as a DVD containing 12 songs). 
In the same year she was invited by Grammy Award-winning jazz guitarist Bill Frisell to play at the "Century of Song" concerts at the famed "Ruhrtriennale" in Germany. In 2003, she hosted the American Public Media radio series "American Mavericks", about 20th century American composers, which received the Peabody Award for Excellence in Broadcasting. On August 3, 2006, Vega became the first major recording artist to perform live in the Internet-based virtual world, "Second Life". The event was hosted by John Hockenberry of public radio's "The Infinite Mind". On September 17, 2006, she performed in Central Park, as part of a benefit concert for the Save Darfur Coalition. During the concert she highlighted her support for Amnesty International, of which she has been a member since 1988. In early October 2006, Vega participated in the Academia Film Olomouc (AFO) in Olomouc, the Czech Republic, the oldest festival of documentary films in Europe, in which she appeared as a main guest. She was invited there as the subject of the documentary film by director Christopher Seufert, that had a test screening at the festival. At the end of the festival she performed her classic songs and added one brand new piece called "New York Is a Woman". Vega is also interviewed in the book "Everything Is Just a Bet" which was published in Czech in October 2006. The book contains 12 interview transcriptions from the talk show called "Stage Talks" that regularly runs in the Švandovo divadlo (Švandovo Theatre) in Prague. Vega introduced the book to the audience of the Švandovo divadlo (Švandovo Theatre), and together with some other Czech celebrities gave a signing session. She signed a new recording contract with Blue Note Records in the spring of 2006, and released "Beauty & Crime" on July 17, 2007. The album, produced by Jimmy Hogarth, won a Grammy Award for Best Engineered Album, Non-Classical. Her contract was not renewed and she was released in June 2008. 
In 2007, Vega followed the lead of numerous other mainstream artists and released her track "Pornographer's Dream" as podsafe. The song spent two weeks at number-one during 2007 and finished as the No. 11 hit of the year on the PMC Top10's annual countdown. In 2015, Vega joined The 14th Annual Independent Music Awards judging panel to assist independent musicians' careers. She was also a judge for the 6th, 7th, 8th, 9th, 10th, 11th, 12th and 13th Independent Music Awards. A partial cover version of her song "Tom's Diner" was used to introduce the 2010 British movie "4.3.2.1", with its lyrics largely rewritten to echo the plot. This musical hybrid was released as "Keep Moving". Vega participated in the Danger Mouse/Sparklehorse/David Lynch collaboration "Dark Night of the Soul". She wrote both melody and lyrics for her song, which is titled "The Man Who Played God", inspired by a biography of Pablo Picasso. Vega sang lead vocals on the song "Now I Am an Arsonist" with singer-songwriter Jonathan Coulton on his 2011 album, "Artificial Heart". Vega has re-recorded her back-catalogue, both for artistic and commercial (and control) reasons, in the "Close-up" series. Vol. 1 ("Love Songs") and Vol. 2 ("People & Places") appeared in 2010 while Vol. 3 ("States of Being") was released in July 2011 followed by Vol. 4 ("Songs of Family") in September 2012. Volumes 2, 3 and 4 of the "Close-Up" albums included previously unrecorded material; Volumes 2 and 3 each included one new collaboratively written song, while Volume 4 included three songs that Vega had written years earlier, but had not previously gotten around to recording. In all, Vega's "Close-Up" series features 60 re-recorded songs and five new compositions, representing about three-quarters of her lifetime songwriting output. While performing live, Vega and long-term collaborator Gerry Leonard began to introduce a number of new songs into the setlist, including the live favorite "I Never Wear White". 
Over the course of a year, the songs were completed and recorded in a live-studio setting with the help of a number of guests. Produced by Leonard, "Tales from the Realm of the Queen of Pentacles" was released in February 2014. It was her first album of new material in seven years and became Vega's first studio album to reach the UK Top 40 since 1992, peaking at No. 37. A new album was released on October 14, 2016. On June 25, 2019, "The New York Times Magazine" listed Suzanne Vega among hundreds of artists whose material was reportedly destroyed in the 2008 Universal fire. At the age of nine she began to write poetry, encouraged by her stepfather. It took her three years to write her first song, "Brother Mine", which she finished at the age of 14. It was first published on "Close-Up Vol. 4, Songs of Family", along with another early song, "The Silver Lady". Vega never learned to read musical notation; she sees a melody as a shape and chords as colors. She focuses on lyrics and melodic ideas; for advanced features – like intros or bridges – she relies on the other artists she works with. Most of her albums, except the first one, were made in such cooperation. Vega finishes 80% of the songs she starts writing. The most important artistic influences on her work come from Lou Reed, Bob Dylan and Leonard Cohen. Some other important artists for her are Paul Simon and Laura Nyro. Vega and Duncan Sheik wrote a play, "Carson McCullers Talks About Love", about the life of the writer Carson McCullers. In the play, directed by Kay Matschullat, which premiered in 2011, Vega alternates between monologue and songs. Vega and Sheik were nominated for Outstanding Music in a Play at the 57th annual Drama Desk Awards. An album based on this play was released in 2016; Vega considers it a third version, because it was substantially rewritten, and she had made the first version in college.
In early 2020, Vega played the role of "Band Leader" in an off-Broadway musical based on the 1969 movie "Bob & Carol & Ted & Alice", directed by Scott Elliott and produced at The New Group in New York City. She replaced Sheik, who wrote the show's music and co-wrote the lyrics with Amanda Green. In his review for "The New York Times", critic Ben Brantley called the "brandy-voiced" Vega "a delightful, smoothly sardonic presence". Vega established her own recording label after the 2008 economic crisis. From that point, she stopped working with Blue Note Records and started thinking about re-recording her back catalog with new arrangements and gaining control over her works (which she eventually did with the "Close-Up" series). The name "Amanuensis Productions" was meant as a private joke about a "servant" (amanuensis) owning the "masters" (recording masters), and also a pun on A&M still legally owning her previous master tapes. Running the label proved to be harder than she expected. In 2015 it just "broke even", but new licenses were coming for "Tom's Diner". On March 17, 1995, Vega married Mitchell Froom, a musician and record producer (who played on and produced "99.9F°" and "Nine Objects of Desire"). They have a daughter, Ruby Froom (born July 8, 1994); the band Soul Coughing's album "Ruby Vroom" was named for her, with Vega's approval. Vega and Froom separated and divorced in 1998. On February 11, 2006, Vega married Paul Mills, a lawyer and poet, "22 years after he first proposed to her." Beginning in 2010, Ruby has occasionally performed with her mother on tour. Vega practices Nichiren Buddhism and is a member of the American branch of the worldwide Buddhist association Soka Gakkai International.
https://en.wikipedia.org/wiki?curid=27797
Semigroup In mathematics, a semigroup is an algebraic structure consisting of a set together with an associative binary operation. The binary operation of a semigroup is most often denoted multiplicatively: "x"·"y", or simply "xy", denotes the result of applying the semigroup operation to the ordered pair ("x", "y"). Associativity is formally expressed as ("x"·"y")·"z" = "x"·("y"·"z") for all "x", "y" and "z" in the semigroup. Semigroups may be considered a special case of magmas, where the operation is associative, or as a generalization of groups, without requiring the existence of an identity element or inverses. As in the case of groups or magmas, the semigroup operation need not be commutative, so "x"·"y" is not necessarily equal to "y"·"x"; a well-known example of an operation that is associative but non-commutative is matrix multiplication. If the semigroup operation is commutative, then the semigroup is called a "commutative semigroup" or (less often than in the analogous case of groups) it may be called an "abelian semigroup". A monoid is an algebraic structure intermediate between groups and semigroups: it is a semigroup having an identity element, thus obeying all but one of the axioms of a group; existence of inverses is not required of a monoid. A natural example is strings with concatenation as the binary operation, and the empty string as the identity element. Restricting to non-empty strings gives an example of a semigroup that is not a monoid. Positive integers with addition form a commutative semigroup that is not a monoid, whereas the non-negative integers do form a monoid. A semigroup without an identity element can be easily turned into a monoid by just adding an identity element. Consequently, monoids are studied in the theory of semigroups rather than in group theory.
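The examples above (string concatenation, matrix multiplication) can be checked concretely. A brief, purely illustrative sketch in Python:

```python
# Strings under concatenation form a monoid with "" as identity;
# restricting to non-empty strings gives a semigroup without identity.
a, b, c = "foo", "bar", "baz"
assert (a + b) + c == a + (b + c)   # concatenation is associative
assert "" + a == a + "" == a        # "" is a two-sided identity

# Matrix multiplication is associative but not commutative.
def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[1, 2], [3, 4]]
N = [[0, 1], [1, 0]]
P = [[2, 0], [0, 5]]
assert matmul(matmul(M, N), P) == matmul(M, matmul(N, P))  # associative
assert matmul(M, N) != matmul(N, M)                        # not commutative
```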
Semigroups should not be confused with quasigroups, which are a generalization of groups in a different direction; the operation in a quasigroup need not be associative, but quasigroups preserve from groups a notion of division. Division in semigroups (or in monoids) is not possible in general. The formal study of semigroups began in the early 20th century. Early results include a Cayley theorem for semigroups, realizing any semigroup as a transformation semigroup, in which arbitrary functions replace the role of bijections from group theory. A deep result in the classification of finite semigroups is Krohn–Rhodes theory, analogous to the Jordan–Hölder decomposition for finite groups. Some other techniques for studying semigroups, like Green's relations, do not resemble anything in group theory. The theory of finite semigroups has been of particular importance in theoretical computer science since the 1950s because of the natural link between finite semigroups and finite automata via the syntactic monoid. In probability theory, semigroups are associated with Markov processes. In other areas of applied mathematics, semigroups are fundamental models for linear time-invariant systems. In partial differential equations, a semigroup is associated to any equation whose spatial evolution is independent of time. There are numerous special classes of semigroups, that is, semigroups with additional properties, which appear in particular applications. Some of these classes are even closer to groups by exhibiting some additional but not all properties of a group. Of these we mention: regular semigroups, orthodox semigroups, semigroups with involution, inverse semigroups and cancellative semigroups. There are also interesting classes of semigroups that do not contain any groups except the trivial group; examples of the latter kind are bands and their commutative subclass, semilattices.
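The Cayley-style theorem mentioned above can be made concrete: each element "a" acts on the semigroup as the transformation x ↦ x·a, and the semigroup product then corresponds to composition of these maps. A minimal sketch (names are illustrative; for a faithful representation one would act on the monoid obtained by adjoining an identity, but the homomorphism property holds regardless):

```python
from itertools import product

def right_regular(elements, op):
    """Cayley-style representation: send each a to the map x -> op(x, a),
    encoded as a tuple of indices over a fixed ordering of the elements."""
    elems = list(elements)
    index = {x: i for i, x in enumerate(elems)}
    return {a: tuple(index[op(x, a)] for x in elems) for a in elems}

def compose(f, g):
    """Compose transformations given as index tuples: apply f, then g."""
    return tuple(g[i] for i in f)

# Semigroup: {0, 1, 2, 3} under multiplication mod 4.
S = [0, 1, 2, 3]
op = lambda x, y: (x * y) % 4
rho = right_regular(S, op)

# The representation turns the semigroup product into composition of maps.
for a, b in product(S, repeat=2):
    assert rho[op(a, b)] == compose(rho[a], rho[b])
```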
A semigroup is a set "S" together with a binary operation "·" (that is, a function · : "S" × "S" → "S") that satisfies the associative property: for all "a", "b", "c" in "S", the equation ("a" · "b") · "c" = "a" · ("b" · "c") holds. More succinctly, a semigroup is an associative magma. A left identity of a semigroup "S" (or more generally, magma) is an element "e" such that for all "x" in "S", "e" · "x" = "x". Similarly, a right identity is an element "f" such that for all "x" in "S", "x" · "f" = "x". Left and right identities are both called one-sided identities. A semigroup may have one or more left identities but no right identity, and vice versa. A two-sided identity (or just identity) is an element that is both a left and right identity. Semigroups with a two-sided identity are called monoids. A semigroup may have at most one two-sided identity. If a semigroup has a two-sided identity, then the two-sided identity is the only one-sided identity in the semigroup. If a semigroup has both a left identity and a right identity, then it has a two-sided identity (which is therefore the unique one-sided identity). A semigroup "S" without identity may be embedded in a monoid formed by adjoining an element "e" ∉ "S" to "S" and defining "e" · "s" = "s" · "e" = "s" for all "s" ∈ "S" ∪ {"e"}. The notation "S"1 denotes a monoid obtained from "S" by adjoining an identity "if necessary" ("S"1 = "S" for a monoid). Similarly, every magma has at most one absorbing element, which in semigroup theory is called a zero. Analogous to the above construction, for every semigroup "S", one can define "S"0, a semigroup with 0 that embeds "S". The semigroup operation induces an operation on the collection of its subsets: given subsets "A" and "B" of a semigroup "S", their product "A" · "B", written commonly as "AB", is the set { "ab" | "a" ∈ "A" and "b" ∈ "B" }. (This notion is defined identically as it is for groups.) In terms of this operation, a subset "A" is called a subsemigroup if "AA" is a subset of "A", a left ideal if "SA" is a subset of "A", and a right ideal if "AS" is a subset of "A". If "A" is both a left ideal and a right ideal then it is called an ideal (or a two-sided ideal).
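The constructions in this paragraph, adjoining an identity and multiplying subsets, are mechanical enough to sketch directly. An illustrative Python fragment (all names are illustrative, not from the article):

```python
def adjoin_identity(elements, op, e="e"):
    """Build S^1: adjoin a fresh two-sided identity `e` (assumed not in S)."""
    S1 = list(elements) + [e]
    def op1(x, y):
        if x == e:
            return y
        if y == e:
            return x
        return op(x, y)
    return S1, op1

def subset_product(A, B, op):
    """AB = {ab : a in A, b in B}, the induced operation on subsets."""
    return {op(a, b) for a in A for b in B}

S = [0, 2]                      # {0, 2} under multiplication mod 4: no identity
op = lambda x, y: (x * y) % 4
S1, op1 = adjoin_identity(S, op)
assert all(op1("e", x) == x == op1(x, "e") for x in S1)

# {0} is a two-sided ideal of S: both S{0} and {0}S are contained in {0}.
assert subset_product(S, {0}, op) <= {0} and subset_product({0}, S, op) <= {0}
```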
If "S" is a semigroup, then the intersection of any collection of subsemigroups of "S" is also a subsemigroup of "S". So the subsemigroups of "S" form a complete lattice. An example of a semigroup with no minimal ideal is the set of positive integers under addition. The minimal ideal of a commutative semigroup, when it exists, is a group. Green's relations, a set of five equivalence relations that characterise the elements in terms of the principal ideals they generate, are important tools for analysing the ideals of a semigroup and related notions of structure. The subset with the property that every element commutes with any other element of the semigroup is called the center of the semigroup. The center of a semigroup is actually a subsemigroup. A semigroup homomorphism is a function that preserves semigroup structure. A function "f" : "S" → "T" between two semigroups is a homomorphism if the equation "f"("ab") = "f"("a")"f"("b") holds for all elements "a", "b" in "S", i.e. the result is the same when performing the semigroup operation after or before applying the map "f". A semigroup homomorphism between monoids preserves identity if it is a monoid homomorphism. But there are semigroup homomorphisms which are not monoid homomorphisms, e.g. the canonical embedding of a semigroup "S" without identity into "S"1. Conditions characterizing monoid homomorphisms are discussed further. Let "f" : "S" → "T" be a semigroup homomorphism. The image of "f" is also a semigroup. If "S" is a monoid with an identity element "e""S", then "f"("e""S") is the identity element in the image of "f". If "T" is also a monoid with an identity element "e""T" and "e""T" belongs to the image of "f", then "f"("e""S") = "e""T", i.e. "f" is a monoid homomorphism. Particularly, if "f" is surjective, then it is a monoid homomorphism. Two semigroups "S" and "T" are said to be isomorphic if there is a bijection "f" : "S" → "T" with the property that, for any elements "a", "b" in "S", "f"("ab") = "f"("a")"f"("b").
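The subtlety that a semigroup homomorphism between monoids need not preserve identities can be seen in a tiny example: a map whose image's identity is an idempotent other than the codomain's identity. A hedged sketch (the particular map is my illustration, not from the article):

```python
from itertools import product

def is_hom(dom, f, op_s, op_t):
    """f is a semigroup homomorphism iff f(a.b) = f(a).f(b) for all a, b."""
    return all(f[op_s(a, b)] == op_t(f[a], f[b])
               for a, b in product(dom, repeat=2))

mul2 = lambda x, y: x * y          # {0, 1} under multiplication, identity 1
mul6 = lambda x, y: (x * y) % 6    # Z6 under multiplication, identity 1

f = {0: 0, 1: 3}                   # 3 is idempotent in Z6: 3*3 = 9 = 3 (mod 6)
assert is_hom([0, 1], f, mul2, mul6)   # a semigroup homomorphism...
assert f[1] != 1                       # ...but not a monoid homomorphism:
                                       # the identity 1 maps to 3, not to 1.
# Still, f[1] = 3 IS the identity of the image {0, 3}, as the text states.
assert all(mul6(3, y) == y for y in (0, 3))
```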
Isomorphic semigroups have the same structure. A semigroup congruence ~ is an equivalence relation that is compatible with the semigroup operation. That is, a subset ~ ⊆ "S" × "S" that is an equivalence relation such that "x" ~ "y" and "u" ~ "v" implies "xu" ~ "yv" for every "x", "y", "u", "v" in "S". Like any equivalence relation, a semigroup congruence ~ induces congruence classes ["x"]~, and the semigroup operation induces a binary operation ∘ on the congruence classes: ["x"]~ ∘ ["y"]~ = ["xy"]~. Because ~ is a congruence, the set of all congruence classes of ~ forms a semigroup with ∘, called the quotient semigroup or factor semigroup, and denoted "S" / ~. The mapping "x" ↦ ["x"]~ is a semigroup homomorphism, called the quotient map, canonical surjection or projection; if "S" is a monoid then the quotient semigroup is a monoid with identity [1]~. Conversely, the kernel of any semigroup homomorphism is a semigroup congruence. These results are nothing more than a particularization of the first isomorphism theorem in universal algebra. Congruence classes and factor monoids are the objects of study in string rewriting systems. A nuclear congruence on "S" is one which is the kernel of an endomorphism of "S". A semigroup "S" satisfies the maximal condition on congruences if any family of congruences on "S", ordered by inclusion, has a maximal element. By Zorn's lemma, this is equivalent to saying that the ascending chain condition holds: there is no infinite strictly ascending chain of congruences on "S". Every ideal "I" of a semigroup induces a factor semigroup, the Rees factor semigroup, via the congruence "x" ρ "y" ⇔ either "x" = "y", or both "x" and "y" are in "I". The following notions introduce the idea that a semigroup is contained in another one. A semigroup "T" is a quotient of a semigroup "S" if there is a surjective semigroup morphism from "S" to "T". For example, (Z/2Z, +) is a quotient of (Z, +), using the morphism consisting of taking the remainder modulo 2 of an integer.
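The quotient construction can be carried out directly for a finite congruence, mirroring the remainder-modulo-2 example (here on Z8 under addition so the carrier stays finite). An illustrative sketch:

```python
def quotient(elements, op, cls):
    """Quotient semigroup: congruence classes as frozensets, with the
    induced operation computed via arbitrary representatives.
    `cls(x)` must label the congruence class of x."""
    classes = {frozenset(y for y in elements if cls(y) == cls(x))
               for x in elements}
    def op_q(A, B):
        a, b = next(iter(A)), next(iter(B))
        return next(C for C in classes if op(a, b) in C)
    return classes, op_q

# Z8 under addition, with the congruence "same remainder mod 2".
Z8 = range(8)
add = lambda x, y: (x + y) % 8
classes, op_q = quotient(Z8, add, cls=lambda x: x % 2)

evens = frozenset({0, 2, 4, 6}); odds = frozenset({1, 3, 5, 7})
assert classes == {evens, odds}
assert op_q(odds, odds) == evens   # odd + odd = even, whatever reps are chosen
```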
A semigroup "T" divides a semigroup "S", written "T" ≼ "S", if "T" is a quotient of a subsemigroup of "S". In particular, every subsemigroup of "S" divides "S", although it is not necessarily a quotient of "S". Both of these relations are transitive. For any subset "A" of "S" there is a smallest subsemigroup "T" of "S" which contains "A", and we say that "A" generates "T". A single element "x" of "S" generates the subsemigroup { "x""n" | "n" ∈ Z+ }. If this is finite, then "x" is said to be of finite order, otherwise it is of infinite order. A semigroup is said to be periodic if all of its elements are of finite order. A semigroup generated by a single element is said to be monogenic (or cyclic). If a monogenic semigroup is infinite then it is isomorphic to the semigroup of positive integers with the operation of addition. If it is finite and nonempty, then it must contain at least one idempotent. It follows that every nonempty periodic semigroup has at least one idempotent. A subsemigroup which is also a group is called a subgroup. There is a close relationship between the subgroups of a semigroup and its idempotents. Each subgroup contains exactly one idempotent, namely the identity element of the subgroup. For each idempotent "e" of the semigroup there is a unique maximal subgroup containing "e". Each maximal subgroup arises in this way, so there is a one-to-one correspondence between idempotents and maximal subgroups. Here the term "maximal subgroup" differs from its standard use in group theory. More can often be said when the order is finite. For example, every nonempty finite semigroup is periodic, and has a minimal ideal and at least one idempotent. The number of finite semigroups of a given size (greater than 1) is (obviously) larger than the number of groups of the same size. For example, of the sixteen possible "multiplication tables" for a set of two elements eight form semigroups whereas only four of these are monoids and only two form groups.
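The claim that a finite monogenic semigroup contains an idempotent can be observed by listing the powers of an element until they repeat. A small illustrative sketch (the choice of Z10 under multiplication is mine):

```python
def powers(x, op):
    """The monogenic subsemigroup generated by x: x, x^2, x^3, ...
    collected until a repeat occurs (which must happen in a finite semigroup)."""
    seen, p = [], x
    while p not in seen:
        seen.append(p)
        p = op(p, x)
    return seen

mul10 = lambda a, b: (a * b) % 10
P = powers(2, mul10)                      # 2, 4, 8, 6, then the cycle repeats
assert P == [2, 4, 8, 6]                  # 2 has finite order in (Z10, *)
assert any(mul10(e, e) == e for e in P)   # the subsemigroup contains an
                                          # idempotent, namely 6 (6*6 = 36 = 6)
```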
For more on the structure of finite semigroups, see Krohn–Rhodes theory. There is a structure theorem for commutative semigroups in terms of semilattices. A semilattice (or more precisely a meet-semilattice) "L" is a partially ordered set where every pair of elements "a", "b" has a greatest lower bound, denoted "a" ∧ "b". The operation ∧ makes "L" into a semigroup satisfying the additional idempotence law "a" ∧ "a" = "a". Given a homomorphism "f" : "S" → "L" from an arbitrary semigroup to a semilattice, each inverse image "S""a" = "f"−1("a") is a (possibly empty) semigroup. Moreover, "S" becomes graded by "L", in the sense that "S""a""S""b" ⊆ "S""a" ∧ "b". If "f" is onto, the semilattice "L" is isomorphic to the quotient of "S" by the equivalence relation ~ such that "x" ~ "y" iff "f"("x") = "f"("y"). This equivalence relation is a semigroup congruence, as defined above. Whenever we take the quotient of a commutative semigroup by a congruence, we get another commutative semigroup. The structure theorem says that for any commutative semigroup "S", there is a finest congruence ~ such that the quotient of "S" by this equivalence relation is a semilattice. Denoting this semilattice by "L", we get a homomorphism "f" from "S" onto "L". As mentioned, "S" becomes graded by this semilattice. Furthermore, the components "S""a" are all Archimedean semigroups. An Archimedean semigroup is one where given any pair of elements "x", "y", there exists an element "z" and "n" > 0 such that "x""n" = "yz". The Archimedean property follows immediately from the ordering in the semilattice "L", since with this ordering we have "f"("x") ≤ "f"("y") if and only if "x""n" = "yz" for some "z" and "n". The group of fractions or group completion of a semigroup "S" is the group "G" = "G"("S") generated by the elements of "S" as generators and all equations "xy" = "z" which hold true in "S" as relations.
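The semilattice axioms (associativity, commutativity, idempotence) can be verified directly for a concrete meet operation. The sketch below only illustrates the definition of a meet-semilattice, not the full structure theorem; gcd on the divisors of 6, ordered by divisibility, is a standard example (mine, not the article's):

```python
from itertools import product
from math import gcd

L = [1, 2, 3, 6]               # the divisors of 6, ordered by divisibility
meet = lambda x, y: gcd(x, y)  # greatest lower bound under that ordering

# gcd makes L a meet-semilattice: an associative, commutative,
# idempotent semigroup (note that L is closed under gcd).
assert all(meet(meet(x, y), z) == meet(x, meet(y, z))
           for x, y, z in product(L, repeat=3))
assert all(meet(x, y) == meet(y, x) for x, y in product(L, repeat=2))
assert all(meet(x, x) == x for x in L)
```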
There is an obvious semigroup homomorphism "j" : "S" → "G"("S") which sends each element of "S" to the corresponding generator. This has a universal property for morphisms from "S" to a group: given any group "H" and any semigroup homomorphism "k" : "S" → "H", there exists a unique group homomorphism "f" : "G" → "H" with "k" = "fj". We may think of "G" as the "most general" group that contains a homomorphic image of "S". An important question is to characterize those semigroups for which this map is an embedding. This need not always be the case: for example, take "S" to be the semigroup of subsets of some set "X" with set-theoretic intersection as the binary operation (this is an example of a semilattice). Since "A" ∩ "A" = "A" holds for all elements of "S", this must be true for all generators of "G"("S") as well, which is therefore the trivial group. It is clearly necessary for embeddability that "S" have the cancellation property. When "S" is commutative this condition is also sufficient, and the Grothendieck group of the semigroup provides a construction of the group of fractions. The problem for non-commutative semigroups can be traced to the first substantial paper on semigroups. Anatoly Maltsev gave necessary and sufficient conditions for embeddability in 1937. Semigroup theory can be used to study some problems in the field of partial differential equations. Roughly speaking, the semigroup approach is to regard a time-dependent partial differential equation as an ordinary differential equation on a function space. For example, consider the following initial/boundary value problem for the heat equation on a spatial interval, for times "t" ≥ 0: Let "X" be the "L"2 space of square-integrable real-valued functions on that interval, and let "A" be the second-derivative operator, with domain a suitable subspace of the Sobolev space "H"2.
Then the above initial/boundary value problem can be interpreted as an initial value problem for an ordinary differential equation on the space "X": On a heuristic level, the solution to this problem "ought" to be "u"("t") = exp("tA")"u"0. However, for a rigorous treatment, a meaning must be given to the exponential of "tA". As a function of "t", exp("tA") is a semigroup of operators from "X" to itself, taking the initial state "u"0 at time "t" = 0 to the state "u"("t") = exp("tA")"u"0 at time "t". The operator "A" is said to be the infinitesimal generator of the semigroup. The study of semigroups trailed behind that of other algebraic structures with more complex axioms such as groups or rings. A number of sources attribute the first use of the term (in French) to J.-A. de Séguier in "Élements de la Théorie des Groupes Abstraits" (Elements of the Theory of Abstract Groups) in 1904. The term is used in English in 1908 in Harold Hinton's "Theory of Groups of Finite Order". Anton Sushkevich obtained the first non-trivial results about semigroups. His 1928 paper "Über die endlichen Gruppen ohne das Gesetz der eindeutigen Umkehrbarkeit" ("On finite groups without the rule of unique invertibility") determined the structure of finite simple semigroups and showed that the minimal ideal (or Green's relations J-class) of a finite semigroup is simple. From that point on, the foundations of semigroup theory were further laid by David Rees, James Alexander Green, Evgenii Sergeevich Lyapin, Alfred H. Clifford and Gordon Preston. The latter two published a two-volume monograph on semigroup theory in 1961 and 1967 respectively. In 1970, a new periodical called "Semigroup Forum" (currently edited by Springer Verlag) became one of the few mathematical journals devoted entirely to semigroup theory. The representation theory of semigroups was developed in 1963 by Boris Schein, using binary relations on a set "A" and composition of relations for the semigroup product.
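The defining semigroup law, exp(("s"+"t")"A") = exp("sA") exp("tA"), can be checked numerically for a finite-dimensional stand-in generator: a hypothetical 2×2 matrix, far simpler than the unbounded second-derivative operator above, but obeying the same algebra.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def expm(A, terms=30):
    """exp(A) for a 2x2 matrix via the truncated power series sum A^k / k!."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity matrix
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, [[a / k for a in row] for row in A])
        result = mat_add(result, term)
    return result

A = [[0.0, 1.0], [-1.0, -0.5]]          # a stand-in generator (hypothetical)
scale = lambda c, M: [[c * a for a in row] for row in M]
s, t = 0.3, 0.7

lhs = expm(scale(s + t, A))                          # exp((s+t)A)
rhs = mat_mul(expm(scale(s, A)), expm(scale(t, A)))  # exp(sA) exp(tA)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9
           for i in range(2) for j in range(2))      # the semigroup property
```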
At an algebraic conference in 1972 Schein surveyed the literature on B"A", the semigroup of relations on "A". In 1997 Schein and Ralph McKenzie proved that every semigroup is isomorphic to a transitive semigroup of binary relations. In recent years researchers in the field have become more specialized, with dedicated monographs appearing on important classes of semigroups, like inverse semigroups, as well as monographs focusing on applications in algebraic automata theory, particularly for finite automata, and also in functional analysis. If the associativity axiom of a semigroup is dropped, the result is a magma, which is nothing more than a set "M" equipped with a binary operation under which it is closed. Generalizing in a different direction, an "n"-ary semigroup (also "n"-semigroup, polyadic semigroup or multiary semigroup) is a generalization of a semigroup to a set "G" with an "n"-ary operation instead of a binary operation. The associative law is generalized as follows: ternary associativity is ("abc")"de" = "a"("bcd")"e" = "ab"("cde"), i.e. the string "abcde" with any three adjacent elements bracketed. "N"-ary associativity is a string of length 2"n" − 1 with any "n" adjacent elements bracketed. A 2-ary semigroup is just a semigroup. Further axioms lead to an "n"-ary group. A third generalization is the semigroupoid, in which the requirement that the binary operation be total is lifted. As categories generalize monoids in the same way, a semigroupoid behaves much like a category but lacks identities. Infinitary generalizations of commutative semigroups have sometimes been considered by various authors.
https://en.wikipedia.org/wiki?curid=27799
Super Mario Kart Super Mario Kart is a 1992 kart racing video game developed and published by Nintendo for the Super Nintendo Entertainment System video game console. The first game of the "Mario Kart" series, it was released in Japan and North America in 1992, and in Europe the following year. Selling 8.76 million copies worldwide, the game went on to become the fourth best selling SNES game of all time. "Super Mario Kart" was re-released on the Wii's Virtual Console in 2009, and on the Wii U's Virtual Console in 2013. Nintendo re-released "Super Mario Kart" in the United States in September 2017 as part of the company's Super NES Classic Edition. In "Super Mario Kart", the player takes control of one of eight "Mario" series characters, each with differing capabilities. In single player mode players can race against computer-controlled characters in multi-race cups over three difficulty levels. During the races, offensive and speed boosting power-ups can be used to gain an advantage. Alternatively players can race against the clock in a Time Trial mode. In multi-player mode two players can simultaneously take part in the cups or can race against each other one-on-one in Match Race mode. In a third multiplayer mode – Battle Mode – the aim is to defeat the other players by attacking them with power-ups, destroying balloons which surround each kart. "Super Mario Kart" received positive reviews and was praised for its presentation, innovation and use of Mode 7 graphics. It has been ranked among the greatest video games of all time by several organizations including "Edge", IGN, "The Age" and GameSpot, while "Guinness World Records" has named it as the top console game ever. It is often credited with creating the kart-racing subgenre of video games, leading other developers to try to duplicate its success. The game is also seen as having been key to expanding the "Mario" series into non-platforming games. 
This diversity has led to it becoming the best-selling game franchise of all time. Several sequels to "Super Mario Kart" have been released, for consoles, handhelds and in arcades, each enjoying critical and commercial success. While some elements have developed throughout the series, the core experience from "Super Mario Kart" has remained intact. "Super Mario Kart" is a kart racing game featuring several single-player and multiplayer modes. During the game, players take control of one of eight "Mario" franchise characters and drive karts around tracks with a "Mario" franchise theme. Before the racers can begin driving, Lakitu appears at the starting line with a traffic light hanging from his fishing pole, which starts the countdown; when the light turns green, the race or battle officially begins. During a race, the player's viewpoint is from behind his or her kart. The goal of the game is to either finish a race ahead of other racers, who are controlled by the computer and other players, or complete a circuit in the fastest time. There is also a battle mode in which the aim is to attack the karts of the other human players. Tiles marked with question marks are arrayed on the race tracks; they give special abilities (power-ups) to a player's kart if the vehicle passes over them. Power-ups, such as the ability to throw shells and bananas, allow racers to hit others with the objects, causing them to spin and lose control. A kart that obtains the star power-up is temporarily invulnerable to attack. Computer players have specific special powers associated with each character, which they can use throughout the race. Lines of coins are found on the tracks in competitive race modes. By running over these coins, a kart collects them and increases its top speed. Having coins also helps players when their kart is hit by another: instead of spinning and losing control, they lose a coin. Coins are also lost when karts are struck by power-ups or fall off the tracks.
The game features advanced maneuvers such as power sliding and hopping. Power sliding allows a kart to maintain its speed while turning, although executing the maneuver for too long causes the kart to spin. Hopping helps a kart execute tighter turns: the kart makes a short hop and turns in the air, speeding off in the new direction when it lands. Reviewers praised "Super Mario Kart"'s gameplay, describing the battle mode as "addictive" and the single-player gameplay as "incredible". IGN stated that the gameplay mechanics defined the genre. "Super Mario Kart" has two single-player modes: Mario Kart GP (which stands for Grand Prix) and Time Trial. In Mario Kart GP, one player races against seven computer-controlled characters in a series of five races called a cup. Initially, there are three cups available – the Mushroom Cup, Flower Cup, and Star Cup – at two difficulty levels, 50cc and 100cc. By winning all three of the cups at the 100cc level, a fourth cup – the Special Cup – is unlocked. Winning all four cups at 100cc unlocks a new difficulty level, 150cc. Each cup consists of five five-lap races, each taking place on a distinct track. In order to continue through a cup, a position of fourth or higher must be achieved in each race. If a player finishes in the fifth to eighth position, they are "ranked out" and the race must be replayed – at the cost of one of a limited number of lives – until a placing of fourth or above is achieved. If the player has no lives when they rank out, the game is over. Points are accrued by finishing in the top four positions in a race; first to fourth place receive nine, six, three and one point respectively, and a player who finishes in the same position three times in a row is awarded an extra life. The finishing order for each race becomes the starting lineup for the next; for example, a player who finished in first place will start the next race in that position.
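The rank-out and scoring rules described above can be summarized as a small rule sketch. This is hypothetical, purely illustrative code modeling the description, not the game's actual implementation:

```python
POINTS = {1: 9, 2: 6, 3: 3, 4: 1}   # points for first to fourth place

def race_result(position, lives):
    """One Mario Kart GP race under the rules described above: top-four
    finishers score and advance; fifth to eighth place 'rank out', costing
    a life and forcing a replay. Illustrative pseudologic only."""
    if position <= 4:
        return {"points": POINTS[position], "lives": lives,
                "advance": True, "game_over": False}
    lives -= 1                       # ranking out costs one life
    return {"points": 0, "lives": lives,
            "advance": False, "game_over": lives < 0}

assert race_result(1, 3)["points"] == 9
assert race_result(5, 1) == {"points": 0, "lives": 0,
                             "advance": False, "game_over": False}
assert race_result(5, 0)["game_over"] is True   # no lives left when ranked out
```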
The racer with the highest number of points after all five races have been completed wins the cup. In time trial mode, players race against the clock through the same tracks that are present in Mario Kart GP mode, attempting to set the fastest time possible. "Super Mario Kart" also has three multiplayer modes; Mario Kart GP, Match Race, and Battle Mode. The multiplayer modes support two players and the second player uses the bottom half of the screen which is used as a map in the single-player modes. Mario Kart GP is the same as in single-player, the only difference being that there are now two human-controlled and six computer-controlled drivers. Match Race involves the two players going head to head on a track of their choice without any opponents. In Battle Mode, the two players again go head to head, but this time in one of four dedicated Battle Mode courses. Each player starts with three balloons around their kart which can be popped by power-ups fired by the other player. The first player to have all three of their balloons popped loses. "Super Mario Kart" features eight playable characters from the "Mario" series – Mario, Luigi, Princess Peach, Yoshi, Bowser, Donkey Kong Jr., Koopa Troopa and Toad. Each character's kart has different capabilities with differing levels of top speed, acceleration and handling. Mario, Luigi, Peach, Yoshi, Bowser and Toad returned in all of the subsequent "Mario Kart" games starting with "Mario Kart 64". During races, computer-controlled characters have special items, or superpowers, which they are able to use. These powers are specific to each character; for example, Yoshi drops eggs which cause players who hit them to lose coins and spin, while Donkey Kong Jr. throws bananas. The characters are rendered as sprites portrayed from sixteen different angles. 
The sprites were described as "detailed" by "Nintendo Magazine System" when the game was first reviewed and were thought to contribute to the "spectacular" graphics of the game as a whole. More recently, Nintendojo called the sprites "not-so-pretty" when they are rendered at a distance, and IGN has commented on the dated look of the game. "Super Mario Kart" was the first game to feature playable characters from the "Mario" series other than Mario or Luigi in a non-platforming game, and the selection and different attributes of the characters are regarded as one of the game's strengths, with IGN describing a well-balanced "all-star cast". All of the characters present in "Super Mario Kart" have gone on to appear in later games in the series, except for Koopa Troopa, who has only appeared intermittently after being replaced by Wario in "Mario Kart 64". Donkey Kong Jr. was replaced by Donkey Kong, who has appeared in every "Mario Kart" game since. This was Donkey Kong Jr.'s last appearance as a playable character, except for the "Mario Tennis" sub-series, including installments on the Nintendo 64 and Virtual Boy. The tracks in "Super Mario Kart" are based on locations in "Super Mario World" such as Donut Plains. Each of the four cups contains five different tracks, for a total of twenty unique tracks; additionally, there are four unique Battle Mode courses. The course outlines are marked out by impassable barriers and feature a variety of bends ranging from sharp hairpins to wide curves which players can power slide around. Numerous obstacles themed from the "Mario" series appear, such as Thwomps in the Bowser's Castle tracks, the Cheep-Cheeps from "Super Mario World" in Koopa Beach, and pipe barriers which are found in the Mario Circuit tracks. Other features include off-road sections which slow down the karts, such as the mud bogs in the Choco Island tracks.
Each single-player track is littered with coins and power-up tiles, as well as turbo tiles which give the karts a boost of speed and jumps which launch the karts into the air. The tracks have received positive commentary with GameSpy describing them as wonderfully designed and IGN calling them perfect. When naming its top five "Mario Kart" tracks of all time in 2008, 1UP.com named Battle Mode Course 4 at number three and Rainbow Road – along with its subsequent versions in the series – at number one. The track themes in "Super Mario Kart" influenced later games in the series; recurring themes that first appeared in "Super Mario Kart" include haunted tracks, Bowser's castle and Rainbow Road. Some of the tracks from "Super Mario Kart" have been duplicated in later games. All twenty of the original tracks are unlockable as an extra feature in the Game Boy Advance sequel "". Remakes of Mario Circuit 1, Donut Plains 1, Koopa Beach 2 and Choco Island 2 appear as part of the Retro Grand Prix series in "Mario Kart DS", remakes of Ghost Valley 2, Mario Circuit 3, and Battle Course 4 appear as part of the Retro Grand Prix and battles in "Mario Kart Wii", remakes of Mario Circuit 2 and Rainbow Road appear as part of the Retro Grand Prix in "Mario Kart 7", a remake of Donut Plains 3 appears as part of the Retro Grand Prix and battles in "Mario Kart 8", a second remake of Rainbow Road appears in Mario Kart 8's first downloadable content pack, and a remake of Battle Course 1 appears as a Retro Battle Course in "Mario Kart 8 Deluxe". "Super Mario Kart" was produced by Shigeru Miyamoto and directed by Tadashi Sugiyama and Hideki Konno. In an interview Miyamoto has said that the development team originally set out to produce a game capable of displaying two players on the same game screen simultaneously. 
In the same interview Konno stated that development started with a desire to create a two player racing game in contrast to the single player gameplay of SNES launch title "F-Zero". The team found that due to limitations of the SNES hardware, the strong focus on multiplayer prevented the inclusions of tracks as elaborate as those found in "F-Zero". Computer and Video Games suggest that this initial emphasis on creating a two player experience is the reason for the game's horizontal split-screen during single player modes. The intention to create the racing modes of the game had been present from the start of the project and Battle Mode was developed from the desire to create a one-on-one mode where victory was not determined simply by competing for rank. The game did not start out as a "Mario" series game and the first prototype featured a generic man in overalls in the kart; the team decided that characters three heads tall would best suit the design of the karts. They did not decide on incorporating "Mario" series characters into the game until two or three months after the start of development. The choice was made after the development team when observing how one kart looked to another driving past it, decided to see what it would look like with Mario in the kart. Thinking that having Mario in the kart looked better than previous designs, the idea of a Mario themed racing game was born. Notable in the development of "Super Mario Kart" was its use of Mode 7 graphics. First seen in "F-Zero", Mode 7 is a form of texture mapping available on the SNES which allows a plane to be rotated and scaled freely, achieving a pseudo-three-dimensional appearance. 1UP.com have credited the use of Mode 7 with giving the game graphics which at the time of release were considered to be "breathtaking". Retrospective reflection on the Mode 7 visuals was mixed, with IGN stating that the once revolutionary technology now looks "crude and flickery". 
"Super Mario Kart" featured a DSP (Digital Signal Processor) chip; DSPs were used in SNES games as they provided a better handling of floating point calculations to assist with three-dimensional maths. The DSP-1 chip that was used in "Super Mario Kart" went on to be the most popular DSP chip to be used in SNES games. The music for the title was created by composer Soyo Oka. "Super Mario Kart" received critical acclaim and proved to be a commercial success; it received a Player's Choice release after selling one million copies and went on to sell 8.76 million copies, becoming the fourth best selling game ever for the SNES. Aggregate scoring sites GameRankings and MobyGames both give an average of more than 90 percent. Critics praised the game's Mode 7 graphics; in 1992 "Nintendo Magazine System" described them as superb and the graphics have since been described as among the best ever seen on the SNES. Another aspect of the game to have been praised is its gameplay, which Thunderbolt has described as the "deepest [and] most addictive... to be found on the SNES console". "Nintendo Magazine System" showed a preference for the multiplayer modes of the game and stated that while the "single player mode becomes dull quickly" the "two-player mode won't lose appeal". Retrospective reviews of the game have been positive with perfect scores given by review sites including Thunderbolt and HonestGamers. The use of the style and characters from the "Mario" franchise was also praised as well as the individual characteristics of each racer. Mean Machines described the game as having "struck gold" in a way that no other – not even its sequels – has matched and GameSpot named the game as one of the greatest games of all time for its innovation, gameplay and visual style. 
"Entertainment Weekly" wrote that although the game might appear to be a "cynical attempt by Nintendo to cash in on its Super Mario franchise" the review concluded that "plunking the familiar characters down in souped-up go-carts actually makes for a delightful racing game." "GamePro" said the game "does an excellent job of capturing the thrill of Go-card racing, and wraps it up in the familiar, fun, Mario-land atmosphere." The reviewer also praised the use of Mode 7 and challenging CPU-controlled opponents. "Super Mario Kart" has been listed among the best games ever made several times. In 1996, "Next Generation" listed it as number 37 on their "Top 100 Games of All Time", commenting that the controls are elegantly designed to offer "supreme fun." In 1999, "Next Generation" listed "Super Mario Kart" as number 7 on their "Top 50 Games of All Time", commenting that, "Imitated a thousand times, but never, ever, equalled, "Mario Kart" changed the rules for the driving game and gave the world one of the most engrossing and addictive two-player experiences ever." "Electronic Gaming Monthly" ranked it as the 15th best console video game of all time, attributing its higher ranking than "Mario Kart 64" (which came in 49th) to its superior track design and powerups. IGN ranked it as the 15th best game ever in 2005, describing it as "the original karting masterpiece" and as the 23rd best game ever in 2007, discussing its originality at time of release. "The Age" placed it at number 19 on their list of the 50 best games in 2005 and in 2007 "Edge" ranked "Super Mario Kart" at number 14 on a list of their 100 best games, noting its continued influence on video game design. The game is also included in Yahoo! Games UK's list of the hundred greatest games of all time which praises the appealing characters and power ups and 1UP.com's "Essential 50", a list of the fifty most important games ever made. 
The game placed 13th in "Official Nintendo Magazine"'s 100 greatest Nintendo games of all time. "Guinness World Records" ranked it at number 1 on a list of the top 50 console games of all time based on initial impact and lasting legacy. "Super Mario Kart" has been credited with inventing the "kart racing" subgenre of video gaming, and soon after its release several other developers attempted to duplicate its success. In 1994, less than two years after the release of "Super Mario Kart", Sega released "Sonic Drift", a kart racing game featuring characters from the "Sonic the Hedgehog" series. Also in 1994, Ubisoft released "Street Racer", a kart racing game for the SNES and Mega Drive/Genesis which included a four-player mode not present in "Super Mario Kart". Apogee Software released "Wacky Wheels" for PC and Atari Corporation released "Atari Karts" for the Atari Jaguar in 1995. Future games that followed in the mould of "Super Mario Kart" include "South Park Rally", "Konami Krazy Racers", "Diddy Kong Racing", "Sonic & Sega All-Stars Racing" and several racing games in the "Crash Bandicoot" series. Response to the karting games released since "Super Mario Kart" has been mixed, with GameSpot describing them as tending to be bad while 1UP.com notes that countless developers have tried to improve upon the Mario Kart formula without success. "Super Mario Kart" is also credited as being the first non-platforming game to feature multiple playable characters from the "Mario" franchise. As well as several sequels, Nintendo has released numerous other sporting and non-sporting Mario spin-offs since "Super Mario Kart", a trend in part attributed to the commercial and critical success of the game. The "Mario" characters have appeared in many sports games, including those relating to basketball, baseball, golf, tennis, and soccer. Non-sporting franchises using the "Mario" characters have also been created, including the "Super Smash Bros." 
series of fighting games and the "Mario Party" series of board game-based party games. "Mario" series characters have also made cameos in games from other series such as "SSX on Tour" and "NBA Street V3", both published by EA Sports. The genre-spanning nature of the Mario series that was sparked off by the success of "Super Mario Kart" has been described as key to the success and longevity of the franchise, keeping fans interested despite the infrequency of traditional Mario platforming games. Following this model, the "Mario" series has gone on to become the best-selling video game franchise of all time with 193 million units sold as of January 2007, almost 40 million units ahead of the second-ranked franchise ("Pokémon", also by Nintendo). "Super Mario Kart" was re-released on the Japanese Virtual Console on June 9, 2009, and later in North America on November 23, 2009. Previously, when naming it as one of the most wanted games for the platform in November 2008, Eurogamer stated that problems emulating the Mode 7 graphics were responsible for its absence. The game was also released for the Wii U Virtual Console in Japan during June 2013, and in Europe on March 27, 2014. In addition, North American users were able to get the game starting on August 6, 2014, to celebrate the game's 22nd anniversary, coinciding with a new update for "Mario Kart 8" on August 27, 2014. "Super Mario 3D World" has a stage with a look based on the Mario Circuit racetracks from "Super Mario Kart". A remixed version of the music can also be heard. "Super Mario Odyssey" also has a remix, heard when racing an RC car around a track in New Donk City in the Metro Kingdom. Several sequels to "Super Mario Kart" have been brought out for successive generations of Nintendo consoles, each receiving commercial success and critical acclaim. The first of these, "Mario Kart 64", was released in 1996 for the Nintendo 64 and was the first "Mario Kart" game to feature fully 3D graphics. 
Although reviewers including IGN and GameSpot felt that the single-player gameplay was lacking compared to its predecessor, the simultaneous four-person multiplayer modes – a first for the Nintendo 64 – were praised. The second sequel, "", was released for the Game Boy Advance in 2001. It was described by GameSpot as more of a remake of "Super Mario Kart" than a sequel to "Mario Kart 64" and featured a return to the graphical style of the original. As well as featuring all-new tracks, players are able to unlock the original SNES tracks if certain achievements are completed. "" was released for the GameCube in 2003. Unlike any other "Mario Kart" game before or since, it features two riders in each kart, allowing for a new form of cooperative multiplayer where one player controls the kart's movement and the other fires weapons. "Mario Kart DS", released for the Nintendo DS in 2005, was the first "Mario Kart" game to include online play via the Nintendo Wi-Fi Connection. It went on to become the best-selling hand-held racing game of all time, selling 7.83 million units. The game also marked the series' first inclusion of tracks from previous games. "Mario Kart Wii" was released for the Wii in 2008 and incorporates motion controls and 12-player racing. Like "Mario Kart DS", it includes online play; it also allows racers to play as user-created Miis (after unlocking the Mii character) as well as "Mario" series characters, and comes packaged with the Wii Wheel peripheral, which can act as the game's primary control mechanism when coupled with a Wii Remote. "Mario Kart Wii" went on to be the worldwide best-selling game of 2008, ahead of another Nintendo game – "Wii Fit" – and the critically acclaimed "Grand Theft Auto IV". "Mario Kart 7" for the Nintendo 3DS was released in 2011 and features racing on land, sea, and air. "Mario Kart 7" also introduced the ability to customize karts and to race in first-person mode. 
Three "Mario Kart" arcade games have also been released: "Mario Kart Arcade GP" in 2005, "Mario Kart Arcade GP 2" in 2007, and "Mario Kart Arcade GP DX" in 2013. All of them were developed jointly by Nintendo and Namco and feature classic Namco characters including Pac-Man and Blinky. The most recent entry in the series is "Mario Kart 8" for the Wii U, released at the end of May 2014; it brings back the gliders and propellers of "Mario Kart 7" as well as the 12-player racing of "Mario Kart Wii". "Mario Kart 8" also includes a new feature called Mario Kart TV, where players can watch highlights of previous races and upload them to YouTube. Another new feature is anti-gravity racing, where players can race on walls and ceilings. As the series has progressed, many aspects included in "Super Mario Kart" have been developed and altered. The power-up boxes, which are flat against the track in "Super Mario Kart" due to the technical limitations of the SNES, became floating boxes in later games. The roster of racers has expanded in recent games to include a greater selection of Nintendo characters, including some that had not been created at the time of "Super Mario Kart's" release – such as Petey Piranha from "Super Mario Sunshine", who appeared in "Mario Kart: Double Dash!!". Multiplayer has remained a key feature of the series and has expanded from the two-player modes available in "Super Mario Kart"; first to allow up to four simultaneous players in "Mario Kart 64" and eventually up to twelve simultaneous online players in "Mario Kart Wii". Many of the track themes have been retained throughout the series, including Rainbow Road – the final track of the Special Cup – which has appeared in every "Mario Kart" console game. Other features present in "Super Mario Kart" have disappeared from the series. 
These include the "super-powers" of the computer characters, the feather power-up, which allows players to jump high into the air, and the restricted number of lives. The only other "Mario Kart" games to feature the coin collecting of the original are "Mario Kart: Super Circuit", "Mario Kart 7", and "Mario Kart 8". The aspects of style and gameplay from "Super Mario Kart" that have been retained throughout the series have led Nintendo to face criticism for a lack of originality, but the franchise is still considered a beloved household name by many, known for its familiar core gameplay.
https://en.wikipedia.org/wiki?curid=27801
Seymour Papert Seymour Aubrey Papert (; 29 February 1928 – 31 July 2016) was a South African-born American mathematician, computer scientist, and educator, who spent most of his career teaching and researching at MIT. He was one of the pioneers of artificial intelligence, and of the constructionist movement in education. He was co-inventor, with Wally Feurzeig and Cynthia Solomon, of the Logo programming language. Born to a Jewish family, Papert attended the University of the Witwatersrand, receiving a Bachelor of Arts degree in philosophy in 1949 followed by a PhD in mathematics in 1952. He then went on to receive a second doctorate, also in mathematics, at the University of Cambridge (1959), supervised by Frank Smithies. Papert worked as a researcher in a variety of places, including St. John's College, Cambridge, the Henri Poincaré Institute at the University of Paris, the University of Geneva, and the National Physical Laboratory in London before becoming a research associate at MIT in 1963. He held this position until 1967, when he became professor of applied math and was made co-director of the MIT Artificial Intelligence Laboratory by its founding director Professor Marvin Minsky, until 1981; he also served as Cecil and Ida Green professor of education at MIT from 1974 to 1981. Papert worked on learning theories, and was known for focusing on the impact of new technologies on learning in general, and in schools as learning organizations in particular. At MIT, Papert went on to create the Epistemology and Learning Research Group at the MIT Architecture Machine Group which later became the MIT Media Lab. Here, he was the developer of a theory on learning called constructionism, built upon the work of Jean Piaget in constructivist learning theories. Papert had worked with Piaget at the University of Geneva from 1958 to 1963 and was one of Piaget's protégés; Piaget himself once said that "no one understands my ideas as well as Papert". 
Papert rethought how schools should work, based on these theories of learning. Papert used Piaget's work in his development of the Logo programming language while at MIT. He created Logo as a tool to improve the way children think and solve problems. A small mobile robot called the "Logo Turtle" was developed, and children were shown how to use it to solve simple problems in an environment of play. A main purpose of the Logo Foundation research group is to strengthen the ability to learn knowledge. Papert insisted that a simple language or program that children can learn—like Logo—can also have advanced functionality for expert users. As part of his work with technology, Papert was a proponent of the Knowledge Machine. He was one of the principals for the One Laptop Per Child initiative to manufacture and distribute The Children's Machine in developing nations. Papert also collaborated with the construction toy manufacturer Lego on their Logo-programmable Lego Mindstorms robotics kits, which were named after his groundbreaking 1980 book. Papert became a political activist early in his life. Janet Levine, a neighbour of the family in South Africa, says "My father said that he did not know why someone as talented as Seymour would throw his life away ‘for the Schwartzes’ (a derogatory Yiddish expression for black people)." He subsequently chose self-exile. He was a leading figure in the revolutionary socialist circle around "Socialist Review" while living in London in the 1950s. Papert was also a prominent activist against South African apartheid policies during his university education. Papert was married to Dona Strauss, and later to Androula Christofides Henriques. Papert's third wife was MIT professor Sherry Turkle, and together they wrote the influential paper "Epistemological Pluralism and the Revaluation of the Concrete". 
In his final 24 years, Papert was married to Suzanne Massie, a Russian scholar and author of "Pavlovsk: The Life of a Russian Palace" and "Land of the Firebird". Papert, then aged 78, received a serious brain injury when struck by a motor scooter on 5 December 2006 while crossing the street with colleague Uri Wilensky; they were both attending the 17th International Commission on Mathematical Instruction (ICMI) Study conference in Hanoi, Vietnam. He underwent emergency surgery to remove a blood clot at the French Hospital of Hanoi before being transferred, in a complex operation, by a Swiss Air Ambulance (REGA) Bombardier Challenger jet to Boston, Massachusetts. He was moved to a hospital closer to his home in January 2007, but then developed sepsis, which damaged a heart valve that was later replaced. By 2008 he had returned home, could think and communicate clearly and walk "almost unaided", but still had "some complicated speech problems" and was in receipt of extensive rehabilitation support. His rehabilitation team used some of the very principles of experiential, hands-on learning that he had pioneered. Papert died at his home in Blue Hill, Maine, on 31 July 2016. Papert's work has been used by other researchers in the fields of education and computer science. He influenced the work of Uri Wilensky in the design of NetLogo and collaborated with him on the study of knowledge restructurations, as well as the work of Andrea diSessa and the development of "dynaturtles". In 1981, Papert, along with several others in the Logo group at MIT, started Logo Computer Systems Inc. (LCSI), of which he was Board Chair for over 20 years. Working with LCSI, Papert designed a number of award-winning programs, including LogoWriter and Lego/Logo (marketed as Lego Mindstorms). He also influenced the research of Idit Harel Caperton, coauthoring articles and the book "Constructionism", and chairing the advisory board of the company MaMaMedia. 
He also influenced Alan Kay and the Dynabook concept, and worked with Kay on various projects. Papert won a Guggenheim fellowship in 1980, a Marconi International fellowship in 1981, the Software Publishers Association Lifetime Achievement Award in 1994, and the Smithsonian Award from "Computerworld" in 1997. Papert was called by Marvin Minsky "the greatest living mathematics educator". MIT President L. Rafael Reif summarized Papert's lifetime of accomplishments: "With a mind of extraordinary range and creativity, Seymour Papert helped revolutionize at least three fields, from the study of how children make sense of the world, to the development of artificial intelligence, to the rich intersection of technology and learning. The stamp he left on MIT is profound. Today, as MIT continues to expand its reach and deepen its work in digital learning, I am particularly grateful for Seymour's groundbreaking vision, and we hope to build on his ideas to open doors to learners of all ages, around the world." In 2016, Papert's alma mater, the University of the Witwatersrand, awarded him the degree of "Doctor of Science in Engineering, honoris causa".
https://en.wikipedia.org/wiki?curid=27802
Search engine (computing) A search engine is an information retrieval system designed to help find information stored on a computer system. The search results are usually presented in a list and are commonly called "hits". Search engines help to minimize the time required to find information and the amount of information which must be consulted, akin to other techniques for managing information overload. The most public, visible form of a search engine is a Web search engine, which searches for information on the World Wide Web. Search engines provide an interface to a group of items that enables users to specify criteria about an item of interest and have the engine find the matching items. The criteria are referred to as a search query. In the case of text search engines, the search query is typically expressed as a set of words that identify the desired concept that one or more documents may contain. There are several styles of search query syntax that vary in strictness. Whereas some text search engines require users to enter two or three words separated by white space, other search engines may enable users to specify entire documents, pictures, sounds, and various forms of natural language. Some search engines apply improvements to search queries to increase the likelihood of providing a quality set of items through a process known as query expansion. Query understanding methods can be used to standardize the query language. The list of items that meet the criteria specified by the query is typically sorted, or ranked. Ranking items by relevance (from highest to lowest) reduces the time required to find the desired information. Probabilistic search engines rank items based on measures of similarity (between each item and the query, typically on a scale of 1 to 0, with 1 being most similar) and sometimes popularity or authority (see Bibliometrics), or use relevance feedback. 
Boolean search engines typically only return items which match exactly without regard to order, although the term "boolean search engine" may simply refer to the use of boolean-style syntax (the operators AND, OR, NOT, and XOR) in a probabilistic context. To provide a set of matching items sorted according to some criteria quickly, a search engine will typically collect metadata about the group of items under consideration beforehand through a process referred to as indexing. The index typically requires a smaller amount of computer storage, which is why some search engines only store the indexed information and not the full content of each item, and instead provide a method of navigating to the items in the search engine results page. Alternatively, the search engine may store a copy of each item in a cache so that users can see the state of the item at the time it was indexed, for archive purposes, or to make repetitive processes work more efficiently and quickly. Other types of search engines do not store an index. Crawler- or spider-type search engines (a.k.a. real-time search engines) may collect and assess items at the time of the search query, dynamically considering additional items based on the contents of a starting item (known as a seed, or seed URL in the case of an Internet crawler). Meta search engines store neither an index nor a cache and instead simply reuse the index or results of one or more other search engines to provide an aggregated, final set of results.
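The indexing and boolean retrieval described above can be sketched with a toy inverted index. This is an illustrative sketch only; the document set, function names, and query are made up for the example:

```python
from collections import defaultdict

def build_index(docs):
    """Indexing: map each term to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def boolean_and(index, terms):
    """Boolean retrieval: return documents matching every query term."""
    postings = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {
    1: "spaced repetition improves recall",
    2: "search engines build an inverted index",
    3: "an index speeds up every search",
}
index = build_index(docs)
print(sorted(boolean_and(index, ["search", "index"])))  # → [2, 3]
```

Note that the index stores only term-to-document mappings, not the full documents, which mirrors why real engines can keep an index much smaller than the corpus it describes.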
https://en.wikipedia.org/wiki?curid=27804
Spaced repetition Spaced repetition is an evidence-based learning technique that is usually performed with flashcards. Newly introduced and more difficult flashcards are shown more frequently, while older and less difficult flashcards are shown less frequently, in order to exploit the psychological spacing effect. The use of spaced repetition has been shown to increase the rate of learning. Although the principle is useful in many contexts, spaced repetition is commonly applied in contexts in which a learner must acquire many items and retain them indefinitely in memory. It is, therefore, well suited for the problem of vocabulary acquisition in the course of second-language learning. A number of spaced repetition software programs have been developed to aid the learning process. Alternative names for spaced repetition include "spaced rehearsal", "expanding rehearsal", "graduated intervals", "repetition spacing", "repetition scheduling", "spaced retrieval" and "expanded retrieval". Over the years, techniques and tests have been developed to help patients with memory difficulties; spaced repetition is one such technique. Spaced repetition is used in many different areas of memory, from remembering facts to remembering how to ride a bike to remembering past events from childhood. Recovery practice is used to see if an individual is able to recall something immediately after they have seen or studied it. Increasing recovery practice is frequently used as a technique for improving long-term memory, especially for young children trying to learn and older individuals with memory diseases. Spaced repetition training was first tested by Landauer and Bjork in 1978; they gathered a group of psychology students and showed them pictures of individuals, each followed by that individual's name. This is also known as face-name association. 
Through repeated exposure to each person's name and face, with the time between exposures expanding, the students were able to associate the names and faces of the individuals shown. In 1989, C.J. Camp decided that using this technique with Alzheimer's patients might increase their duration of remembering particular things. These results show that the expansion of the time interval yields the strongest benefits for memory. Schacter, Rich, and Stampp in 1985 furthered the research to include people suffering from amnesia and other memory disorders. The findings showed that spaced repetition can help not only students with face-name association but also individuals dealing with memory diseases. Spaced repetition is a method where the subject is asked to remember a certain fact, with the time intervals increasing each time the fact is presented or said. If the subject is able to recall the information correctly, the time is doubled to further help them keep the information fresh in their mind to recall in the future. With this method, the patient is able to place the information in their long-term memory. If they are unable to remember the information, they go back to the previous step and continue to practice to help make the technique lasting (Vance & Farr, 2007). The expansion is done to ensure a high success level of recalling the information the first time and increasing the time interval to make the information long-lasting, keeping it always accessible in the mind. Through the development of spaced repetition, researchers have found that patients with dementia using this technique are able to recall the information weeks, even months, later. The technique has been successful in helping dementia patients remember particular objects' names, daily tasks, face-name associations, information about themselves, and many other facts and behaviors (Small, 2012). 
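The doubling procedure described above (double the interval after a correct recall, fall back a step after a failure) can be sketched in a few lines. This is an illustrative sketch, not a clinical protocol; the function name and the halving fallback rule are assumptions made for the example:

```python
def next_interval(current_days, recalled, initial=1):
    """Schedule the next rehearsal: a correct recall doubles the
    interval, while a failure falls back to the previous (halved)
    interval so the subject can rebuild the memory."""
    if recalled:
        return current_days * 2
    return max(initial, current_days // 2)

# A run of successful recalls produces the familiar 1, 2, 4, 8, ...
# day schedule between rehearsals.
interval = 1
schedule = []
for _ in range(5):
    schedule.append(interval)
    interval = next_interval(interval, recalled=True)
print(schedule)  # → [1, 2, 4, 8, 16]
```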
Sufficient test evidence shows that spaced repetition is valuable in learning new information and recalling information from the past. Small combines the works and findings of quite a few scientists to come up with five reasons why spaced repetition works: it helps show the relationship of routine memories, it shows the benefits of learning things with an expansion of time, it helps the patient with Alzheimer's dementia keep their brain active, it has a high success level with little to no errors, and the technique is meaningful for the patient to do and remember more things (Small, 2012). Joltin et al. (2003) had a caregiver train a woman with Alzheimer's by giving her the name of her grandchild over the phone while asking her to associate it with the picture of the grandchild posted on the refrigerator. After training, the woman was able to recall the name of her grandchild five days later. The notion that spaced repetition could be used for improving learning was first proposed in the book "Psychology of Study" by Prof. C. A. Mace in 1932: "Perhaps the most important discoveries are those which relate to the appropriate distribution of the periods of study...Acts of revision should be spaced in gradually increasing intervals, roughly intervals of one day, two days, four days, eight days, and so on." In 1939, H. F. Spitzer tested the effects of a type of spaced repetition on sixth-grade students in Iowa who were learning science facts. Spitzer tested over 3600 students in Iowa and showed that spaced repetition was effective. This early work went unnoticed, and the field was relatively quiet until the late 1960s when cognitive psychologists, including Melton and Landauer & Bjork, explored manipulation of repetition timing as a means to improve recall. 
Around the same time, Pimsleur language courses pioneered the practical application of spaced repetition theory to language learning, and in 1973 Sebastian Leitner devised his "Leitner system", an all-purpose spaced repetition learning system based on flashcards. With the increase in access to personal computers in the 1980s, spaced repetition began to be implemented with computer-assisted language learning-based solutions, enabling automated scheduling and statistics gathering, scaling to thousands of cards scheduled individually. To enable the user to reach a target level of achievement (e.g. 90% of all material correctly recalled at any given time point), the software adjusts the repetition spacing interval. Material that is hard appears more often and material that is easy less often, with difficulty defined according to the ease with which the user is able to produce a correct response. The data behind this initial research indicated that an increasing space between rehearsals (expanding) would yield a greater percentage of accuracy at test points. Spaced repetition with expanding intervals is believed to be so effective because with each expanded interval of repetition it becomes more difficult to retrieve the information, owing to the time elapsed between test periods; this creates a deeper level of processing of the learned information in long-term memory at each point. Another reason that the expanding repetition model is believed to work so effectively is that the first test happens early on in the rehearsal process. The purpose of this is to increase repetition success. By having a first test that follows initial learning with a successful repetition, people are more likely to remember this successful repetition on following tests. Although expanding retrieval is commonly associated with spaced repetition, a uniform retrieval schedule is also a form of spaced repetition procedure. Spaced repetition is typically studied through the memorization of facts. 
Traditionally speaking, it has not been applied to fields that required some manipulation or thought beyond simple factual/semantic information. A more recent study has shown that spaced repetition can benefit tasks such as solving math problems. In a study conducted by Pashler, Rohrer, Cepeda, and Carpenter, participants had to learn a simple math principle in either a spaced or massed retrieval schedule. The participants given the spaced repetition learning tasks showed higher scores on a final test distributed after their final practice session. This is unique in the sense that it shows spaced repetition can be used to not only remember simple facts or contextual data, but it can also be used in fields, such as math, where manipulation and the use of particular principles or formulas (e.g. y = mx + b) is necessary. These researchers also found that it is beneficial for feedback to be applied when administering the tests. When a participant gave a wrong response, they were likely to get it correct on following tests if the researcher gave them the correct answer after a delayed period. Spaced repetition is a useful tool for learning that is relevant to many domains such as fact learning or mathematics, and many different tasks (expanding or uniform retrieval). Many studies over the years have contributed to the use and implementation of spaced repetition, and it still remains a subject of interest for many researchers. There are several families of algorithms for scheduling spaced repetition: Some have theorized that the precise length of intervals does not have a great impact on algorithm effectiveness, although it has been suggested by others that the interval (expanded vs. fixed interval, etc.) is quite important. The experimental results regarding this point are mixed. Graduated-interval recall is a type of spaced repetition published by Paul Pimsleur in 1967. 
It is used in the Pimsleur language learning system and is particularly suited to programmed audio instruction due to the very short times (measured in seconds or minutes) between the first few repetitions, as compared to other forms of spaced repetition which may not require such precise timings. The intervals published in Pimsleur's paper were: 5 seconds, 25 seconds, 2 minutes, 10 minutes, 1 hour, 5 hours, 1 day, 5 days, 25 days, 4 months, and 2 years. Most spaced repetition software (SRS) is modeled after the manual style of learning with physical flashcards: items to memorize are entered into the program as question-answer pairs. When a pair is due to be reviewed, the question is displayed on screen, and the user must attempt to answer. After answering, the user manually reveals the answer and then tells the program (subjectively) how difficult answering was. The program schedules pairs based on spaced repetition algorithms. Without a computer program, the user has to schedule physical flashcards; this is time-intensive and limits users to simple algorithms like the Leitner system. Software implementations have refined this scheduling further. Spaced repetition with expanding intervals has long been argued to be the most beneficial version of this learning procedure, but current research comparing repetition procedures has shown that the difference between expanding repetition and uniform retrieval is either very small or nonexistent. Some researchers have found cases where uniform retrieval is better than expanding. The main speculation for this range of results is that prior research has not accounted for the possibility of their results being affected by either the spacing condition or the number of successful repetitions during study periods. There are two forms of implementing spacing in spaced repetition. The first form is absolute spacing. Absolute spacing is the measurement of all the trials within the learning and testing periods. 
An example of this would be that participants study for a total of thirty trial periods, where the spacing of these trials can be either expanding or uniform. The second form is called relative spacing. Relative spacing measures the spacing of trials between each test. For example, if the absolute spacing were thirty, participants would have either expanding intervals (1-5-10-14) or uniform intervals (5-5-5-5-5-5). This distinction is important in measuring whether one type of repetition schedule is more beneficial than the other. A common criticism of repetition research has been that many of the tests involved have simply measured retention on a short-term scale. A study conducted by Karpicke and Bauernschmidt used this principle to determine the major differences between the different types of repetition. The two focused on studying long-term retention by testing participants over the course of one week. The participants were assigned to either a uniform schedule or an expanding schedule. No matter what type of spacing was assigned to the ninety-six participants, each completed three repeated tests at the end of their rehearsal intervals. Once those tests were completed, participants came back one week later to complete a final retention test. The researchers concluded that it did not matter what kind of repetition schedule was used: the biggest contribution to effective long-term learning was the spacing between the repeated tests (relative spacing).
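The distinction between absolute and relative spacing can be illustrated with a small sketch (the variable names are illustrative): the two schedules mentioned above differ in their relative spacing, the pattern of gaps between tests, while sharing the same absolute spacing, the total number of trials.

```python
expanding = [1, 5, 10, 14]      # gaps between tests grow (expanding relative spacing)
uniform = [5, 5, 5, 5, 5, 5]    # gaps stay constant (uniform relative spacing)

# Both schedules have the same absolute spacing: thirty trials in total.
assert sum(expanding) == sum(uniform) == 30
```

Studies comparing the two therefore hold absolute spacing fixed so that any difference in retention can be attributed to relative spacing alone.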
https://en.wikipedia.org/wiki?curid=27805
SuperMemo SuperMemo (from "Super Memory") is a learning method and software package developed by SuperMemo World and SuperMemo R&D with Piotr Woźniak in Poland from 1985 to the present. It is based on research into long-term memory, and is a practical application of the spaced repetition learning method that had been proposed for efficient instruction by a number of psychologists as early as the 1930s. The method is available as a computer program for Windows, Windows CE, Windows Mobile (Pocket PC), Palm OS (PalmPilot), etc. Course software by the same company ("SuperMemo World") can also be used in a web browser or even without a computer. The desktop version of SuperMemo (since v. 2002) supports incremental reading, as well as traditional creation of question and answer flashcards. The SuperMemo program stores a database of questions and answers constructed by the user. When reviewing information saved in the database, the program uses the SuperMemo algorithm to decide what questions to show the user. The user then answers the question and rates their relative ease of recall - with grades of 1 to 5 (1 is the hardest, 5 is the easiest) - and their rating is used to calculate how soon they should be shown the question again. While the exact algorithm varies with the version of SuperMemo, in general, items that are harder to remember show up more frequently. Besides simple text questions and answers, the latest version of SuperMemo supports images, video, and HTML questions and answers. Since 2002, SuperMemo has had a unique set of features, called incremental reading, that distinguishes it from other spaced repetition programs. Whereas earlier versions were built around users entering the information they wanted to memorize, with incremental reading users can import texts that they want to learn from. 
The user reads the text inside SuperMemo, and tools are provided to bookmark one's location in the text and automatically schedule it to be revisited later, extract valuable information, and turn extracts into questions for the user to learn. By automating the entire process of reading and extracting knowledge to be remembered all in the same program, time is saved from having to manually prepare information, and insights into the nature of learning can be used to make the entire process more natural for the user. Furthermore, since the process of extracting knowledge can often lead to the extraction of more information than can feasibly be remembered, a priority system is implemented that allows the user to ensure that the most important information is remembered when they can't review all information in the system. The specific algorithms SuperMemo uses have been published, and re-implemented in other programs. Different algorithms have been used; SM-0 refers to the original (non-computer-based) algorithm, while SM-2 refers to the original computer-based algorithm released in 1987 (used in SuperMemo versions 1.0 through 3.0, referred to as SM-2 because SuperMemo version 2 was the most popular of these). Subsequent versions of the software have further optimized the algorithm. Piotr A. Wozniak, the developer of SuperMemo algorithms, released the description for SM-5 in a paper titled "Optimization of repetition spacing in the practice of learning." Few details have been published about the algorithms released after that. In 1995, SM-8 was introduced in SuperMemo 8; it capitalized on data collected by users of SuperMemo 6 and SuperMemo 7 and added a number of improvements that strengthened the theoretical validity of the function of optimum intervals and made it possible to accelerate its adaptation. 
In 2002, SM-11, the first SuperMemo algorithm that was resistant to interference from the delay or advancement of repetitions, was introduced in SuperMemo 11 (aka SuperMemo 2002). In 2005, SM-11 was tweaked to introduce boundaries on the A and B parameters computed from the Grade vs. Forgetting Index data. In 2011, SM-15, which notably eliminated two weaknesses of SM-11 that would show up in heavily overloaded collections with very large item delays, was introduced in SuperMemo 15. In 2016, SM-17, the first version of the algorithm to incorporate the two-component model of memory, was introduced in SuperMemo 17. The latest version of the SuperMemo algorithm is SM-18, released in 2019. The first computer-based SuperMemo algorithm, SM-2, tracks three quantities for each item: the repetition number, the easiness factor, and the inter-repetition interval. The repetition number counts how many times the item has been successfully reviewed in a row, the easiness factor is a measure of how easy recall of the answer was, and the inter-repetition interval describes the time (in days) between repetitions. After each review the learner gives a grade, and the variables are modified as follows: if the grade is greater than or equal to 3, indicating a correct answer, the repetition number is incremented and the interval is lengthened in proportion to the easiness factor; if the grade is less than 3, indicating an incorrect answer, repetitions for the item start over from the beginning. If the easiness factor falls below 1.3, it is raised back to 1.3. The SM-2 algorithm uses the performance on a card to schedule only that card, while SM-3 and newer algorithms use card performance to schedule that card and similar cards. The additional optimizations sometimes yield perverse results – answering "hard" on a card may yield an interval longer than answering "easy" on a card – and are criticized as reducing the robustness of the algorithm, making it more sensitive to variations – non-uniform difficulty of cards (a problem in versions 4 to 6, according to Woźniak), inconsistencies in studying, and so forth. Woźniak disagreed with the criticism, but in practice the other factors affecting study make differences less important. 
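The SM-2 update rules described above can be sketched in a few lines. This is a minimal Python rendering of the commonly cited published formulas, not SuperMemo's actual code; the function name is illustrative, and canonical SM-2 grades run from 0 to 5.

```python
def sm2(grade, repetition, easiness, interval):
    """One SM-2 update step.

    grade: quality of recall (canonical SM-2 uses a 0-5 scale),
    repetition: successful reviews in a row, easiness: E-Factor,
    interval: days until the next review.
    """
    if grade >= 3:  # correct answer
        if repetition == 0:
            interval = 1
        elif repetition == 1:
            interval = 6
        else:
            interval = round(interval * easiness)  # lengthen by the E-Factor
        repetition += 1
        # Adjust the easiness factor according to how easy recall was,
        # never letting it drop below 1.3.
        easiness += 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02)
        easiness = max(easiness, 1.3)
    else:  # incorrect answer: start repetitions from the beginning
        repetition = 0
        interval = 1
    return repetition, easiness, interval
```

Grading an item 5 on every review, starting from the default easiness factor of 2.5, yields intervals of 1, 6, 16, 45, ... days as the easiness factor grows.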
Some of the algorithms have been reimplemented in other, often free programs such as Anki, Mnemosyne, and Emacs Org-mode's Org-drill. See full list of flashcard software. The SM-2 algorithm has proven most popular in other applications, and is used (in modified form) in Anki and Mnemosyne, among others. Org-drill implements SM-5 by default, and optionally other algorithms such as SM-2.
https://en.wikipedia.org/wiki?curid=27806
Samuel Pepys Samuel Pepys ( ; 23 February 1633 – 26 May 1703) was an administrator of the navy of England and Member of Parliament who is most famous for the diary he kept for a decade while still a relatively young man. Pepys had no maritime experience, but he rose to be the Chief Secretary to the Admiralty under both King Charles II and King James II through patronage, hard work, and his talent for administration. His influence and reforms at the Admiralty were important in the early professionalisation of the Royal Navy. The detailed private diary that Pepys kept from 1660 until 1669 was first published in the 19th century and is one of the most important primary sources for the English Restoration period. It provides a combination of personal revelation and eyewitness accounts of great events, such as the Great Plague of London, the Second Dutch War, and the Great Fire of London. Pepys was born in Salisbury Court, Fleet Street, London on 23 February 1633, the son of John Pepys (1601–1680), a tailor, and Margaret Pepys ("née" Kite; died 1667), daughter of a Whitechapel butcher. His great uncle Talbot Pepys was Recorder and briefly Member of Parliament (MP) for Cambridge in 1625. His father's first cousin Sir Richard Pepys was elected MP for Sudbury in 1640, appointed Baron of the Exchequer on 30 May 1654, and appointed Lord Chief Justice of Ireland on 25 September 1655. Pepys was the fifth of eleven children, but child mortality was high and he was soon the oldest survivor. He was baptised at St Bride's Church on 3 March 1633. Pepys did not spend all of his infancy in London; for a while, he was sent to live with nurse Goody Lawrence at Kingsland, just north of the city. In about 1644, Pepys attended Huntingdon Grammar School before being educated at St Paul's School, London, c. 1646–1650. He attended the execution of Charles I in 1649. 
In 1650, he went to the University of Cambridge, having received two exhibitions from St Paul's School (perhaps owing to the influence of Sir George Downing, who was chairman of the judges and for whom he later worked at the Exchequer) and a grant from the Mercers' Company. In October, he was admitted as a sizar to Magdalene College; he moved there in March 1651 and took his Bachelor of Arts degree in 1654. Later in 1654 or early in 1655, he entered the household of one of his father's cousins, Sir Edward Montagu, who was later created the 1st Earl of Sandwich. Pepys married fourteen-year-old Elisabeth de St Michel, a descendant of French Huguenot immigrants, first in a religious ceremony on 10 October 1655 and later in a civil ceremony on 1 December 1655 at St Margaret's, Westminster. From a young age, Pepys suffered from bladder stones in his urinary tract—a condition from which his mother and brother John also later suffered. He was almost never without pain, as well as other symptoms, including "blood in the urine" (hematuria). By the time of his marriage, the condition was very severe. In 1657 Pepys decided to undergo surgery; not an easy option, as the operation was known to be especially painful and hazardous. Nevertheless, Pepys consulted surgeon Thomas Hollier and, on 26 March 1658, the operation took place in a bedroom in the house of Pepys' cousin Jane Turner. Pepys' stone was successfully removed and he resolved to hold a celebration on every anniversary of the operation, which he did for several years. However, there were long-term effects from the operation. The incision on his bladder broke open again late in his life. The procedure may have left him sterile, though there is no direct evidence for this, as he was childless before the operation. In mid-1658 Pepys moved to Axe Yard, near the modern Downing Street. He worked as a teller in the Exchequer under George Downing. 
On 1 January 1660 ("1 January 1659/1660" in contemporary terms), Pepys began to keep a diary. He recorded his daily life for almost ten years. This record of a decade of Pepys' life is more than a million words long and is often regarded as Britain’s most celebrated diary. Pepys has been called the greatest diarist of all time due to his frankness in writing concerning his own weaknesses and the accuracy with which he records events of daily British life and major events in the 17th century. Pepys wrote about the contemporary court and theatre (including his amorous affairs with the actresses), his household, and major political and social occurrences. Historians have used his diary to gain greater insight and understanding of life in London in the 17th century. Pepys wrote consistently on subjects such as personal finances, the time he got up in the morning, the weather, and what he ate. He wrote at length about his new watch, of which he was very proud (and which had an alarm, a new accessory at the time), about a country visitor who did not enjoy his time in London because he felt that it was too crowded, and about his cat waking him up at one in the morning. Pepys' diary is one of very few sources that provide such detailed accounts of the everyday life of an upper-middle-class man during the seventeenth century. Aside from day-to-day activities, Pepys also commented on the significant and turbulent events of his nation. England was in disarray when he began writing his diary. Oliver Cromwell had died just a few years before, creating a period of civil unrest and a large power vacuum to be filled. Pepys had been a strong supporter of Cromwell, but he converted to the Royalist cause upon the Protector’s death. He was on the ship that brought Charles II home to England. 
He gave a firsthand account of events such as the coronation of King Charles II and the Restoration of the British monarchy, the Anglo-Dutch war, the Great Plague, and the Great Fire of London. Pepys did not plan on his contemporaries ever seeing his diary, which is evident from the fact that he wrote in shorthand and sometimes in a "code" of various Spanish, French, and Italian words (especially when describing his illicit affairs). However, Pepys often juxtaposed profanities in his native English amidst his "code" of foreign words, a practice which would reveal the details to any casual reader. He did intend future generations to see the diary: he included it, along with the shorthand guide he used, in his library and its catalogue before his death, and planned elaborately to ensure that his library survived intact after his death. The women whom he pursued, his friends, and his dealings are all laid out. His diary reveals his jealousies, insecurities, trivial concerns, and his fractious relationship with his wife. It has been an important account of London in the 1660s. The juxtaposition of his commentary on politics and national events, alongside the very personal, can be seen from the beginning. His opening paragraphs, written in January 1660, begin: The entries from the first few months were filled with news of General George Monck's march on London. In April and May of that year, he was encountering problems with his wife, and he accompanied Montagu's fleet to the Netherlands to bring Charles II back from exile. Montagu was made Earl of Sandwich on 18 June, and Pepys secured the position of Clerk of the Acts to the Navy Board on 13 July. As secretary to the board, Pepys was entitled to a £350 annual salary plus the various gratuities and benefits that came with the job—including bribes. 
He rejected an offer of £1,000 for the position from a rival and soon afterwards moved to official accommodation in Seething Lane in the City of London. Pepys stopped writing his diary in 1669. His eyesight began to trouble him and he feared that writing in dim light was damaging his eyes. He did imply in his last entries that he might have others write his diary for him, but doing so would result in a loss of privacy and it seems that he never went through with those plans. In the end, Pepys' fears were unjustified and he lived another 34 years without going blind, but he never took to writing his diary again. However, Pepys dictated a journal for two months in 1669–70 as a record of his dealings with the Commissioners of Accounts at that period. He also kept a diary for a few months in 1683 when he was sent to Tangier, Morocco as the most senior civil servant in the navy, during the English evacuation. The diary mostly covers work-related matters. On the Navy Board, Pepys proved to be a more able and efficient worker than colleagues in higher positions. This often annoyed Pepys and provoked much harsh criticism in his diary. Among his colleagues were Admiral Sir William Penn, Sir George Carteret, Sir John Mennes and Sir William Batten. Pepys learned arithmetic from a private tutor and used models of ships to make up for his lack of first-hand nautical experience, and ultimately came to play a significant role in the board's activities. In September 1660, he was made a Justice of the Peace; on 15 February 1662, Pepys was admitted as a Younger Brother of Trinity House; and on 30 April, he received the freedom of Portsmouth. Through Sandwich, he was involved in the administration of the short-lived English colony at Tangier. He joined the Tangier committee in August 1662 when the colony was first founded and became its treasurer in 1665. 
In 1663, he independently negotiated a £3,000 contract for Norwegian masts, demonstrating the freedom of action that his superior abilities allowed. He was appointed to a commission of the royal fishery on 8 April 1664. Pepys' job required him to meet many people to dispense money and make contracts. He often laments how he "lost his labour" having gone to some appointment at a coffee house or tavern, only to discover that the person whom he was seeking was not there. These occasions were a constant source of frustration to Pepys. Pepys' diary provides a first-hand account of the Restoration, and it is also notable for its detailed accounts of several major events of the 1660s, along with the lesser known diary of John Evelyn. In particular, it is an invaluable source for the study of the Second Anglo-Dutch War of 1665–7, the Great Plague of 1665, and the Great Fire of London in 1666. In relation to the Plague and Fire, C. S. Knighton has written: "From its reporting of these two disasters to the metropolis in which he thrived, Pepys' diary has become a national monument." Robert Latham, editor of the definitive edition of the diary, remarks concerning the Plague and Fire: "His descriptions of both—agonisingly vivid—achieve their effect by being something more than superlative reporting; they are written with compassion. As always with Pepys it is people, not literary effects, that matter." In early 1665, the start of the Second Anglo-Dutch War placed great pressure on Pepys. His colleagues were either engaged elsewhere or incompetent, and Pepys had to conduct a great deal of business himself. He excelled under the pressure, which was extreme due to the complexity and under-funding of the Royal Navy. At the outset, he proposed a centralised approach to supplying the fleet. His idea was accepted, and he was made surveyor-general of victualling in October 1665. The position brought a further £300 a year. 
Pepys wrote about the Second Anglo-Dutch War: "In all things, in wisdom, courage, force and success, the Dutch have the best of us and do end the war with victory on their side". And King Charles II said: "Don't fight the Dutch, imitate them". In 1667, with the war lost, Pepys helped to discharge the navy. The Dutch had defeated England on open water and now began to threaten English soil itself. In June 1667, they conducted their Raid on the Medway, broke the defensive chain at Gillingham, and towed away the "Royal Charles", one of the Royal Navy's most important ships. As he had done during the Fire and the Plague, Pepys again removed his wife and his gold from London. The Dutch raid was a major concern in itself, but Pepys was personally placed under a different kind of pressure: the Navy Board and his role as Clerk of the Acts came under scrutiny from the public and from Parliament. The war ended in August and, on 17 October, the House of Commons created a committee of "miscarriages". On 20 October, a list was demanded from Pepys of ships and commanders at the time of the division of the fleet in 1666. However, these demands were actually quite desirable for him, as tactical and strategic mistakes were not the responsibility of the Navy Board. The Board did face some allegations regarding the Medway raid, but they could exploit the criticism already attracted by commissioner of Chatham Peter Pett to deflect criticism from themselves. The committee accepted this tactic when they reported in February 1668. The Board was, however, criticised for its use of tickets to pay seamen. These tickets could only be exchanged for cash at the Navy's treasury in London. Pepys made a long speech at the bar of the Commons on 5 March 1668 defending this practice. It was, in the words of C. S. Knighton, a "virtuoso performance". The commission was followed by an investigation led by a more powerful authority, the commissioners of accounts. 
They met at Brooke House, Holborn and spent two years scrutinising how the war had been financed. In 1669, Pepys had to prepare detailed answers to the committee's eight "Observations" on the Navy Board's conduct. In 1670, he was forced to defend his own role. A seaman's ticket with Pepys' name on it was produced as incontrovertible evidence of his corrupt dealings but, thanks to the intervention of the king, Pepys emerged from the sustained investigation relatively unscathed. Outbreaks of plague were not particularly unusual events in London; major epidemics had occurred in 1592, 1603, 1625 and 1636. Furthermore, Pepys was not among the group of people who were most at risk. He did not live in cramped housing, he did not routinely mix with the poor, and he was not required to keep his family in London in the event of a crisis. It was not until June 1665 that the unusual seriousness of the plague became apparent, so Pepys' activities in the first five months of 1665 were not significantly affected by it. Indeed, Claire Tomalin writes that "the most notable fact about Pepys' plague year is that to him it was one of the happiest of his life." In 1665, he worked very hard, and the outcome was that he quadrupled his fortune. In his annual summary on 31 December, he wrote, "I have never lived so merrily (besides that I never got so much) as I have done this plague time". Nonetheless, Pepys was certainly concerned about the plague. On 16 August he wrote: He also chewed tobacco as a protection against infection, and worried that wig-makers might be using hair from the corpses as a raw material. Furthermore, it was Pepys who suggested that the Navy Office should evacuate to Greenwich, although he did offer to remain in town himself. He later took great pride in his stoicism. Meanwhile, Elisabeth Pepys was sent to Woolwich. She did not return to Seething Lane until January 1666, and was shocked by the sight of St Olave's churchyard, where 300 people had been buried. 
In the early hours of 2 September 1666, Pepys was awakened by his servant who had spotted a fire in the Billingsgate area. He decided that the fire was not particularly serious and returned to bed. Shortly after waking, his servant returned and reported that 300 houses had been destroyed and that London Bridge was threatened. Pepys went to the Tower of London to get a better view. Without returning home, he took a boat and observed the fire for over an hour. In his diary, Pepys recorded his observations as follows: The wind was driving the fire westward, so he ordered the boat to go to Whitehall and became the first person to inform the king of the fire. According to his entry of 2 September 1666, Pepys recommended to the king that homes be pulled down in the path of the fire in order to stem its progress. Accepting this advice, the king told him to go to Lord Mayor Thomas Bloodworth and tell him to start pulling down houses. Pepys took a coach back as far as St Paul's Cathedral before setting off on foot through the burning city. He found the Lord Mayor, who said, "Lord! what can I do? I am spent: people will not obey me. I have been pulling down houses; but the fire overtakes us faster than we can do it." At noon, he returned home and "had an extraordinary good dinner, and as merry, as at this time we could be", before returning to watch the fire in the city once more. Later, he returned to Whitehall, then met his wife in St. James's Park. In the evening, they watched the fire from the safety of Bankside. Pepys writes that "it made me weep to see it". Returning home, Pepys met his clerk Tom Hayter who had lost everything. Hearing news that the fire was advancing, he started to pack up his possessions by moonlight. A cart arrived at 4 a.m. on 3 September and Pepys spent much of the day arranging the removal of his possessions. Many of his valuables, including his diary, were sent to a friend from the Navy Office at Bethnal Green. 
At night, he "fed upon the remains of yesterday's dinner, having no fire nor dishes, nor any opportunity of dressing any thing." The next day, Pepys continued to arrange the removal of his possessions. By then, he believed that Seething Lane was in grave danger, so he suggested calling men from Deptford to help pull down houses and defend the king's property. He described the chaos in the city and his curious attempt at saving his own goods: Pepys had taken to sleeping on his office floor; on Wednesday, 5 September, he was awakened by his wife at 2 a.m. She told him that the fire had almost reached All Hallows-by-the-Tower and that it was at the foot of Seething Lane. He decided to send her and his gold—about £2,350—to Woolwich. In the following days, Pepys witnessed looting, disorder, and disruption. On 7 September, he went to Paul's Wharf and saw the ruins of St Paul's Cathedral, of his old school, of his father's house, and of the house in which he had had his stone removed. Despite all this destruction, Pepys' house, office, and diary were saved. The diary gives a detailed account of Pepys' personal life. He liked wine, plays, and the company of other people. He also spent time evaluating his fortune and his place in the world. He was always curious and often acted on that curiosity, as he acted upon almost all his impulses. Periodically, he would resolve to devote more time to hard work instead of leisure. For example, in his entry for New Year's Eve, 1661, he writes: "I have newly taken a solemn oath about abstaining from plays and wine…" The following months reveal his lapses to the reader; by 17 February, it is recorded, "Here I drank wine upon necessity, being ill for the want of it." Pepys was one of the most important civil servants of his age, and was also a widely cultivated man, taking an interest in books, music, the theatre and science. 
He was passionately interested in music; he composed, sang, and played for pleasure, and even arranged music lessons for his servants. He played the lute, viol, violin, flageolet, recorder and spinet to varying degrees of proficiency. He was also a keen singer, performing at home, in coffee houses, and even in Westminster Abbey. He and his wife took flageolet lessons from master Thomas Greeting. He also taught his wife to sing and paid for dancing lessons for her (although these stopped when he became jealous of the dancing master). He was known to be brutal to his servants, once beating a servant Jane with a broom until she cried. He kept a boy servant whom he frequently beat with a cane, a birch rod, a whip or a rope’s end. Pepys was an investor in the Company of Royal Adventurers Trading to Africa, which held the monopoly in England on trading along the west coast of Africa in gold, silver, ivory and slaves. Propriety did not prevent him from engaging in a number of extramarital liaisons with various women that were chronicled in his diary, often in some detail, and generally using a cocktail of languages (English, French, Spanish and Latin) when relating the intimate details. The most dramatic of these encounters was with Deborah Willet, a young woman engaged as a companion for Elisabeth Pepys. On 25 October 1668, Pepys was surprised by his wife as he embraced Deb Willet; he writes that his wife "coming up suddenly, did find me imbracing the girl con "[with]" my hand sub "[under]" su "[her]" coats; and endeed I was with my main "[hand]" in her cunny. I was at a wonderful loss upon it and the girl also..." Following this event, he was characteristically filled with remorse, but (equally characteristically) continued to pursue Willet after she had been dismissed from the Pepys household. Pepys also had a habit of fondling the breasts of his maid Mary Mercer while she dressed him in the morning. 
"Mrs Knep was the wife of a Smithfield horsedealer, and the mistress of Pepys"—or at least "she granted him a share of her favours". Scholars disagree on the full extent of the Pepys/Knep relationship, but much of later generations' knowledge of Knep comes from the diary. Pepys first met Knep on 6 December 1665. He described her as "pretty enough, but the most excellent, mad-humoured thing, and sings the noblest that I ever heard in my life." He called her husband "an ill, melancholy, jealous-looking fellow" and suspected him of abusing his wife. Knep provided Pepys with backstage access and was a conduit for theatrical and social gossip. When they wrote notes to each other, Pepys signed himself "Dapper Dickey", while Knep was "Barbry Allen" (that popular song was an item in her musical repertory). The diary was written in one of the many standard forms of shorthand used in Pepys' time, in this case called tachygraphy and devised by Thomas Shelton. It is clear from its content that it was written as a purely personal record of his life and not for publication, yet there are indications that Pepys took steps to preserve the bound manuscripts of his diary. He wrote it out in fair copy from rough notes, and he also had the loose pages bound into six volumes, catalogued them in his library with all his other books, and is likely to have suspected that eventually someone would find them interesting. This tree resumes, in a more compact form and with a few additional details, trees published elsewhere in a box-like form. It is meant to help the reader of the "Diary" and also integrates some biographical informations found in the same sources. Pepys' health suffered from the long hours that he worked throughout the period of the diary. Specifically, he believed that his eyesight had been affected by his work. 
He reluctantly concluded in his last entry, dated 31 May 1669, that he should completely stop writing for the sake of his eyes, and only dictate to his clerks from then on, which meant that he could no longer keep his diary. Pepys and his wife took a holiday to France and the Low Countries in June–October 1669; on their return, Elisabeth fell ill and died on 10 November 1669. Pepys erected a monument to her in the church of St Olave's, Hart Street, London. Pepys never remarried, but he did have a long-term housekeeper named Mary Skinner who was assumed by many of his contemporaries to be his mistress and sometimes referred to as Mrs. Pepys. In his will, he left her an annuity of £200 and many of his possessions. In 1672 he became an Elder Brother of Trinity House and served in this capacity until 1689; he was Master of Trinity House in 1676–1677 and again in 1685–1686. In 1673 he was promoted to Secretary of the Admiralty Commission and elected MP for Castle Rising in Norfolk. In 1673 he was involved with the establishment of the Royal Mathematical School at Christ's Hospital, which was to train 40 boys annually in navigation, for the benefit of the Royal Navy and the English Merchant Navy. In 1675 he was appointed a Governor of Christ's Hospital and for many years he took a close interest in its affairs. Among his papers are two detailed memoranda on the administration of the school. In 1699, after the successful conclusion of a seven-year campaign to get the master of the Mathematical School replaced by a man who knew more about the sea, he was rewarded for his service as a Governor by being made a Freeman of the City of London. He also served as Master (without ever having been a Freeman or Liveryman) of the Clothworkers' Company (1677-8). At the beginning of 1679 Pepys was elected MP for Harwich in Charles II's third parliament which formed part of the Cavalier Parliament. 
He was elected along with Sir Anthony Deane, a Harwich alderman and leading naval architect, to whom Pepys had been patron since 1662. By May of that year, they were under attack from their political enemies. Pepys resigned as Secretary of the Admiralty. They were imprisoned in the Tower of London on suspicion of treasonable correspondence with France, specifically leaking naval intelligence. The charges are believed to have been fabricated under the direction of Anthony Ashley-Cooper, 1st Earl of Shaftesbury. Pepys was accused, among other things, of being a papist. They were released in July, but proceedings against them were not dropped until June 1680. Though he had resigned from the Tangier committee in 1679, in 1683 he was sent to Tangier to assist Lord Dartmouth with the evacuation and abandonment of the English colony. After six months' service, he travelled back through Spain accompanied by the naval engineer Edmund Dummer, returning to England after a particularly rough passage on 30 March 1684. In June 1684, once more in favour, he was appointed King's Secretary for the affairs of the Admiralty, a post that he retained after the death of Charles II (February 1685) and the accession of James II. The phantom Pepys Island, alleged to be near South Georgia, was named after him in 1684, having been first "discovered" during his tenure at the Admiralty. From 1685 to 1688, he was active not only as Secretary of the Admiralty, but also as MP for Harwich. He had been elected MP for Sandwich, but this election was contested and he immediately withdrew to Harwich. When James fled the country at the end of 1688, Pepys' career also came to an end. In January 1689, he was defeated in the parliamentary election at Harwich; in February, one week after the accession of William III and Mary II, he resigned his secretaryship. He was elected a Fellow of the Royal Society in 1665 and served as its President from 1 December 1684 to 30 November 1686. 
Isaac Newton's "Principia Mathematica" was published during this period, and its title page bears Pepys' name. There is a probability problem called the "Newton–Pepys problem" that arose out of correspondence between Newton and Pepys about whether one is more likely to roll at least one six with six dice or at least two sixes with twelve dice. It has only recently been noted that the gambling advice which Newton gave Pepys was correct, while the logical argument with which Newton accompanied it was unsound. He was imprisoned on suspicion of Jacobitism from May to July 1689 and again in June 1690, but no charges were ever successfully brought against him. After his release, he retired from public life at age 57. He moved out of London ten years later (1701) to a house in Clapham owned by his friend William Hewer, who had begun his career working for Pepys in the admiralty. Clapham was in the country at the time; it is now part of inner London. Pepys lived there until his death on 26 May 1703. He had no children and bequeathed his estate to his unmarried nephew John Jackson. Pepys had disinherited his nephew Samuel Jackson for marrying contrary to his wishes. When John Jackson died in 1724, Pepys' estate reverted to Anne, daughter of Archdeacon Samuel Edgeley, niece of Will Hewer and sister of Hewer Edgeley, nephew and godson of Pepys' old Admiralty employee and friend Will Hewer. Hewer was also childless and left his immense estate to his nephew Hewer Edgeley (consisting mostly of the Clapham property, as well as lands in Clapham, London, Westminster and Norfolk) on condition that the nephew (and godson) would adopt the surname Hewer. So Will Hewer's heir became Hewer Edgeley-Hewer, and he adopted the old Will Hewer home in Clapham as his residence. That is how the Edgeley family acquired the estates of both Samuel Pepys and Will Hewer, sister Anne inheriting Pepys' estate, and brother Hewer inheriting that of Will Hewer. 
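The Newton–Pepys problem mentioned above can be checked numerically with the binomial distribution; the short sketch below is illustrative and uses fair six-sided dice:

```python
from math import comb

def p_at_least(k, n, p=1/6):
    """P(at least k successes in n independent trials of success probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_six_dice = p_at_least(1, 6)       # at least one six with six dice, about 0.665
p_twelve_dice = p_at_least(2, 12)   # at least two sixes with twelve dice, about 0.619
```

As Newton advised Pepys, the six-dice wager is the more likely to succeed, even though both bets require the same expected rate of one six per six dice thrown.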
On the death of Hewer Edgeley-Hewer in 1728, the old Hewer estate went to Edgeley-Hewer's widow Elizabeth, who left the estate to Levett Blackborne, the son of Abraham Blackborne, merchant of Clapham, and other family members, who later sold it off in lots. Lincoln's Inn barrister Levett Blackborne also later acted as attorney in legal scuffles for the heirs who had inherited the Pepys estate. Pepys' former protégé and friend Hewer acted as the executor of Pepys' estate. Pepys was buried along with his wife in St Olave's Church, Hart Street in London. Pepys was a lifelong bibliophile and carefully nurtured his large collection of books, manuscripts, and prints. At his death, there were more than 3,000 volumes, including the diary, all carefully catalogued and indexed; they form one of the most important surviving 17th-century private libraries. The most important items in the Library are the six original bound manuscripts of Pepys' diary, but there are other remarkable holdings, including: Pepys made detailed provisions in his will for the preservation of his book collection. When his nephew and heir John Jackson died in 1723, the collection was transferred intact to Magdalene College, Cambridge, where it can be seen in the Pepys Library. The bequest included all the original bookcases and his elaborate instructions that placement of the books "be strictly reviewed and, where found requiring it, more nicely adjusted". Motivated by the publication of Evelyn's Diary, Lord Grenville deciphered a few pages. John Smith (later the Rector of St Mary the Virgin in Baldock) was then engaged to transcribe the diaries into plain English. He laboured at this task for three years, from 1819 to 1822, unaware until nearly finished that a key to the shorthand system was stored in Pepys' library a few shelves above the diary volumes. Others had apparently succeeded in reading the diary earlier, perhaps knowing about the key, because a work of 1812 quotes from a passage of it. 
Smith's transcription, which is also kept in the Pepys Library, was the basis for the first published edition of the diary, edited by Lord Braybrooke, released in two volumes in 1825. A second transcription, done with the benefit of the key, but often less accurately, was completed in 1875 by Mynors Bright and published in 1875–1879. This added about a third to the previously published text, but still left only about 80% of the diary in print. Henry B. Wheatley, drawing on both his predecessors, produced a new edition in 1893–1899, revised in 1926, with extensive notes and an index. All of these editions omitted passages (chiefly about Pepys' sexual adventures) which the editors thought too obscene ever to be printed. Wheatley, in the preface to his edition, noted, "a few passages which cannot possibly be printed. It may be thought by some that these omissions are due to an unnecessary squeamishness, but it is not really so, and readers are therefore asked to have faith in the judgement of the editor." The complete, unexpurgated, and definitive edition, edited and transcribed by Robert Latham and William Matthews, was published by Bell & Hyman, London, and the University of California Press, Berkeley, in nine volumes, along with separate Companion and Index volumes, over the years 1970–1983. Various single-volume abridgements of this text are also available. The Introduction in volume I provides a scholarly but readable account of "The Diarist", "The Diary" ("The Manuscript", "The Shorthand", and "The Text"), "History of Previous Editions", "The Diary as Literature", and "The Diary as History". The Companion provides a long series of detailed essays about Pepys and his world. The first unabridged recording of the diary as an audiobook was published in 2015 by "Naxos AudioBooks". On 1 January 2003 Phil Gyford started a weblog, pepysdiary.com, that serialised the diary one day each evening together with annotations from the public and experts alike. 
In December 2003 the blog won the best specialist blog award in "The Guardian"'s Best of British Blogs. In 1958 the BBC produced a serial called "Samuel Pepys!", in which Peter Sallis played the title role. In 2003 a television film "The Private Life of Samuel Pepys" aired on BBC2. Steve Coogan played Pepys. The 2004 film "Stage Beauty" concerns London theatre in the 17th century and is based on Jeffrey Hatcher's play "Compleat Female Stage Beauty", which in turn was inspired by a reference in Pepys' diary to the actor Edward Kynaston, who played female roles in the days when women were forbidden to appear on stage. Pepys is a character in the film and is portrayed as an ardent devotee of the theatre. Hugh Bonneville plays Pepys. Daniel Mays portrays Pepys in "The Great Fire", a 2014 BBC television miniseries. Pepys has also been portrayed in various other film and television productions, played by diverse actors including Mervyn Johns, Michael Palin, Michael Graham Cox and Philip Jackson. BBC Radio 4 has broadcast serialised radio dramatisations of the diary. In the 1990s it was performed as a "Classic Serial" starring Bill Nighy, and in the 2010s it was serialised as part of the "Woman's Hour" radio magazine programme. One audiobook edition of Pepys' diary selections is narrated by Kenneth Branagh. A fictionalised Pepys narrates the second chapter of Harry Turtledove's science fiction novel "A Different Flesh" (serialised 1985–1988, book form 1988). This chapter is entitled "And So to Bed" and written in the form of entries from the Pepys diary. The entries detail Pepys' encounter with American "Homo erectus" specimens (imported to London as beasts of burden) and his formation of the "transformational theory of life", thus causing evolutionary theory to gain a foothold in scientific thought in the 17th century rather than the 19th. Deborah Swift's 2017 novel "Pleasing Mr Pepys" is described as a "re-imagining of the events in Samuel Pepys's Diary". 
Several detailed studies of Pepys' life are available. Arthur Bryant published his three-volume study in 1933–1938, long before the definitive edition of the diary, but, thanks to Bryant's lively style, it is still of interest. In 1974 Richard Ollard produced a new biography that drew on Latham and Matthews's work on the text, benefitting from the author's deep knowledge of Restoration politics. Other biographies include "Samuel Pepys: A Life", by Stephen Coote (London: Hodder & Stoughton, 2000), and "Samuel Pepys and His World", by Geoffrey Trease (London: Thames and Hudson, 1972). The most recent general study is by Claire Tomalin, which won the 2002 Whitbread Book of the Year award, the judges calling it a "rich, thoughtful and deeply satisfying" account that unearths "a wealth of material about the uncharted life of Samuel Pepys".
https://en.wikipedia.org/wiki?curid=27808
Chemical synapse Chemical synapses are biological junctions through which neurons' signals can be sent to each other and to non-neuronal cells such as those in muscles or glands. Chemical synapses allow neurons to form circuits within the central nervous system. They are crucial to the biological computations that underlie perception and thought. They allow the nervous system to connect to and control other systems of the body. At a chemical synapse, one neuron releases neurotransmitter molecules into a small space (the synaptic cleft) that is adjacent to another neuron. The neurotransmitters are contained within small sacs called synaptic vesicles, and are released into the synaptic cleft by exocytosis. These molecules then bind to neurotransmitter receptors on the postsynaptic cell. Finally, the neurotransmitters are cleared from the synapse through one of several potential mechanisms including enzymatic degradation or re-uptake by specific transporters either on the presynaptic cell or on some other neuroglia to terminate the action of the neurotransmitter. The adult human brain is estimated to contain from 10¹⁴ to 5 × 10¹⁴ (100–500 trillion) synapses. Every cubic millimeter of cerebral cortex contains roughly a billion (short scale, i.e. 10⁹) of them. The number of synapses in the human cerebral cortex has separately been estimated at 0.15 quadrillion (150 trillion). The word "synapse" was introduced by Sir Charles Scott Sherrington in 1897. Chemical synapses are not the only type of biological synapse: electrical and immunological synapses also exist. Without a qualifier, however, "synapse" commonly refers to a chemical synapse. Synapses are functional connections between neurons, or between neurons and other types of cells. A typical neuron gives rise to several thousand synapses, although there are some types that make far fewer. 
Most synapses connect axons to dendrites, but there are also other types of connections, including axon-to-cell-body, axon-to-axon, and dendrite-to-dendrite. Synapses are generally too small to be recognizable using a light microscope except as points where the membranes of two cells appear to touch, but their cellular elements can be visualized clearly using an electron microscope. Chemical synapses pass information directionally from a presynaptic cell to a postsynaptic cell and are therefore asymmetric in structure and function. The presynaptic axon terminal, or synaptic bouton, is a specialized area within the axon of the presynaptic cell that contains neurotransmitters enclosed in small membrane-bound spheres called synaptic vesicles (as well as a number of other supporting structures and organelles, such as mitochondria and endoplasmic reticulum). Synaptic vesicles are docked at the presynaptic plasma membrane at regions called active zones. Immediately opposite is a region of the postsynaptic cell containing neurotransmitter receptors; for synapses between two neurons the postsynaptic region may be found on the dendrites or cell body. Immediately behind the postsynaptic membrane is an elaborate complex of interlinked proteins called the postsynaptic density (PSD). Proteins in the PSD are involved in anchoring and trafficking neurotransmitter receptors and modulating the activity of these receptors. The receptors and PSDs are often found in specialized protrusions from the main dendritic shaft called dendritic spines. Synapses may be described as symmetric or asymmetric. When examined under an electron microscope, asymmetric synapses are characterized by rounded vesicles in the presynaptic cell, and a prominent postsynaptic density. Asymmetric synapses are typically excitatory. Symmetric synapses in contrast have flattened or elongated vesicles, and do not contain a prominent postsynaptic density. Symmetric synapses are typically inhibitory. 
The synaptic cleft (also called the synaptic gap) is a gap between the pre- and postsynaptic cells that is about 20 nm (0.02 μm) wide. The small volume of the cleft allows neurotransmitter concentration to be raised and lowered rapidly. An autapse is a chemical (or electrical) synapse formed when the axon of one neuron synapses with its own dendrites. Here is a summary of the sequence of events that take place in synaptic transmission from a presynaptic neuron to a postsynaptic cell. Each step is explained in more detail below. Note that with the exception of the final step, the entire process may take only a few hundred microseconds in the fastest synapses. The release of a neurotransmitter is triggered by the arrival of a nerve impulse (or action potential) and occurs through an unusually rapid process of cellular secretion (exocytosis). Within the presynaptic nerve terminal, vesicles containing neurotransmitter are localized near the synaptic membrane. The arriving action potential produces an influx of calcium ions through voltage-dependent, calcium-selective ion channels at the downstroke of the action potential (tail current). Calcium ions then bind to synaptotagmin proteins found within the membranes of the synaptic vesicles, allowing the vesicles to fuse with the presynaptic membrane. The fusion of a vesicle is a stochastic process, leading to frequent failure of synaptic transmission at the very small synapses that are typical for the central nervous system. Large chemical synapses (e.g. the neuromuscular junction), on the other hand, have a synaptic release probability of 1. Vesicle fusion is driven by the action of a set of proteins in the presynaptic terminal known as SNAREs. As a whole, the protein complex or structure that mediates the docking and fusion of presynaptic vesicles is called the active zone. The membrane added by the fusion process is later retrieved by endocytosis and recycled for the formation of fresh neurotransmitter-filled vesicles. 
An exception to the general trend of neurotransmitter release by vesicular fusion is found in the type II receptor cells of mammalian taste buds. Here the neurotransmitter ATP is released directly from the cytoplasm into the synaptic cleft via voltage-gated channels. Receptors on the opposite side of the synaptic gap bind neurotransmitter molecules. Receptors can respond in either of two general ways. First, the receptors may directly open ligand-gated ion channels in the postsynaptic cell membrane, causing ions to enter or exit the cell and changing the local transmembrane potential. The resulting change in voltage is called a postsynaptic potential. In general, the result is "excitatory" in the case of depolarizing currents, and "inhibitory" in the case of hyperpolarizing currents. Whether a synapse is excitatory or inhibitory depends on what type(s) of ion channel conduct the postsynaptic current(s), which in turn is a function of the type of receptors and neurotransmitter employed at the synapse. The second way a receptor can affect membrane potential is by modulating the production of chemical messengers inside the postsynaptic neuron. These second messengers can then amplify the inhibitory or excitatory response to neurotransmitters. After a neurotransmitter molecule binds to a receptor molecule, it must be removed to allow the postsynaptic membrane to continue to relay subsequent EPSPs and/or IPSPs. This removal can happen through one or more processes, such as enzymatic degradation, reuptake by specific transporters, or diffusion out of the cleft. The strength of a synapse has been defined by Sir Bernard Katz as the product of (presynaptic) release probability "pr", quantal size "q" (the postsynaptic response to the release of a single neurotransmitter vesicle, a 'quantum'), and "n", the number of release sites. "Unitary connection" usually refers to an unknown number of individual synapses connecting a presynaptic neuron to a postsynaptic neuron. The amplitude of postsynaptic potentials (PSPs) can be as low as 0.4 mV to as high as 20 mV. 
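Katz's definition of synaptic strength is a simple product of the three quantities above. A minimal sketch, using illustrative (not measured) values:

```python
def synaptic_strength(n_sites, release_prob, quantal_size_mv):
    """Katz: expected postsynaptic response = n (release sites)
    x pr (release probability) x q (quantal size, in mV)."""
    return n_sites * release_prob * quantal_size_mv

# Illustrative numbers only: 10 release sites, pr = 0.3, q = 0.4 mV
expected_psp_mv = synaptic_strength(10, 0.3, 0.4)  # roughly 1.2 mV on average
```

The product gives the mean response over many trials; because each release site succeeds or fails stochastically with probability pr, any single trial can deviate substantially from this expectation.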
The amplitude of a PSP can be modulated by neuromodulators or can change as a result of previous activity. Changes in the synaptic strength can be short-term, lasting seconds to minutes, or long-term (long-term potentiation, or LTP), lasting hours. Learning and memory are believed to result from long-term changes in synaptic strength, via a mechanism known as synaptic plasticity. Desensitization of the postsynaptic receptors is a decrease in response to the same neurotransmitter stimulus. It means that the strength of a synapse may in effect diminish as a train of action potentials arrives in rapid succession – a phenomenon that gives rise to the so-called frequency dependence of synapses. The nervous system exploits this property for computational purposes, and can tune its synapses through such means as phosphorylation of the proteins involved. Synaptic transmission can be changed by previous activity. These changes are called synaptic plasticity and may result in either a decrease in the efficacy of the synapse, called depression, or an increase in efficacy, called potentiation. These changes can either be long-term or short-term. Forms of short-term plasticity include synaptic fatigue or depression and synaptic augmentation. Forms of long-term plasticity include long-term depression and long-term potentiation. Synaptic plasticity can be either homosynaptic (occurring at a single synapse) or heterosynaptic (occurring at multiple synapses). Homosynaptic plasticity (also called homotropic modulation) is a change in the synaptic strength that results from the history of activity at a particular synapse. This can result from changes in presynaptic calcium as well as feedback onto presynaptic receptors, i.e. a form of autocrine signaling. Homosynaptic plasticity can affect the number and replenishment rate of vesicles or it can affect the relationship between calcium and vesicle release. Homosynaptic plasticity can also be postsynaptic in nature. 
It can result in either an increase or decrease in synaptic strength. One example is neurons of the sympathetic nervous system (SNS), which release noradrenaline, which, besides affecting postsynaptic receptors, also affects presynaptic α2-adrenergic receptors, inhibiting further release of noradrenaline. This effect is exploited by clonidine to inhibit the activity of the SNS. Heterosynaptic plasticity (also called heterotropic modulation) is a change in synaptic strength that results from the activity of other neurons. Again, the plasticity can alter the number of vesicles or their replenishment rate or the relationship between calcium and vesicle release. Additionally, it could directly affect calcium influx. Heterosynaptic plasticity can also be postsynaptic in nature, affecting receptor sensitivity. One example is again the neurons of the sympathetic nervous system, whose released noradrenaline additionally exerts an inhibitory effect on presynaptic terminals of neurons of the parasympathetic nervous system. In general, if an excitatory synapse is strong enough, an action potential in the presynaptic neuron will trigger an action potential in the postsynaptic cell. In many cases the excitatory postsynaptic potential (EPSP) will not reach the threshold for eliciting an action potential. When action potentials from multiple presynaptic neurons fire simultaneously, or if a single presynaptic neuron fires at a high enough frequency, the EPSPs can overlap and summate. If enough EPSPs overlap, the summated EPSP can reach the threshold for initiating an action potential. This process is known as summation, and can serve as a high pass filter for neurons. 
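The summation just described can be sketched as a simple linear sum. The resting potential (-70 mV) and threshold (-55 mV) below are typical textbook values, not measurements, and real dendritic summation is not perfectly linear:

```python
def summate(epsps_mv, resting_mv=-70.0, threshold_mv=-55.0):
    """Linearly sum simultaneous EPSPs onto the resting potential and
    report the resulting membrane potential and whether threshold is reached."""
    membrane_mv = resting_mv + sum(epsps_mv)
    return membrane_mv, membrane_mv >= threshold_mv

# A single 5 mV EPSP falls short of threshold...
single = summate([5.0])       # (-65.0, False)
# ...but four coincident 5 mV EPSPs summate past it.
coincident = summate([5.0] * 4)  # (-50.0, True)
```

In this toy model, a single 5 mV EPSP leaves the membrane at -65 mV, below threshold, while four coincident EPSPs depolarize it to -50 mV, enough to initiate an action potential.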
On the other hand, a presynaptic neuron releasing an inhibitory neurotransmitter, such as GABA, can cause an inhibitory postsynaptic potential (IPSP) in the postsynaptic neuron, bringing the membrane potential farther away from the threshold, decreasing its excitability and making it more difficult for the neuron to initiate an action potential. If an IPSP overlaps with an EPSP, the IPSP can in many cases prevent the neuron from firing an action potential. In this way, the output of a neuron may depend on the input of many different neurons, each of which may have a different degree of influence, depending on the strength and type of synapse with that neuron. John Carew Eccles performed some of the important early experiments on synaptic integration, for which he received the Nobel Prize in Physiology or Medicine in 1963. When a neurotransmitter is released at a synapse, it reaches its highest concentration inside the narrow space of the synaptic cleft, but some of it is certain to diffuse away before being reabsorbed or broken down. If it diffuses away, it has the potential to activate receptors that are located either at other synapses or on the membrane away from any synapse. The extrasynaptic activity of a neurotransmitter is known as "volume transmission". It is well established that such effects occur to some degree, but their functional importance has long been a matter of controversy. Recent work indicates that volume transmission may be the predominant mode of interaction for some special types of neurons. In the mammalian cerebral cortex, a class of neurons called neurogliaform cells can inhibit other nearby cortical neurons by releasing the neurotransmitter GABA into the extracellular space. In the same vein, GABA released from neurogliaform cells into the extracellular space also acts on surrounding astrocytes, assigning a role for volume transmission in the control of ionic and neurotransmitter homeostasis. 
Approximately 78% of neurogliaform cell boutons do not form classical synapses. This may be the first definitive example of neurons communicating chemically where classical synapses are not present. An electrical synapse is an electrically conductive link between two abutting neurons that is formed at a narrow gap between the pre- and postsynaptic cells, known as a gap junction. At gap junctions, cells approach within about 3.5 nm of each other, rather than the 20 to 40 nm distance that separates cells at chemical synapses. As opposed to chemical synapses, the postsynaptic potential in electrical synapses is not caused by the opening of ion channels by chemical transmitters, but rather by direct electrical coupling between both neurons. Electrical synapses are faster than chemical synapses. Electrical synapses are found throughout the nervous system, including in the retina, the reticular nucleus of the thalamus, the neocortex, and in the hippocampus. While chemical synapses are found between both excitatory and inhibitory neurons, electrical synapses are most commonly found between smaller local inhibitory neurons. Electrical synapses can exist between two axons, two dendrites, or between an axon and a dendrite. In some fish and amphibians, electrical synapses can be found within the same terminal of a chemical synapse, as in Mauthner cells. One of the most important features of chemical synapses is that they are the site of action for the majority of psychoactive drugs. Synapses are affected by drugs such as curare, strychnine, cocaine, morphine, alcohol, LSD, and countless others. These drugs have different effects on synaptic function, and often are restricted to synapses that use a specific neurotransmitter. For example, curare is a poison that stops acetylcholine from depolarizing the postsynaptic membrane, causing paralysis. 
Strychnine blocks the inhibitory effects of the neurotransmitter glycine, which causes the body to pick up and react to weaker and previously ignored stimuli, resulting in uncontrollable muscle spasms. Morphine acts on synapses that use endorphin neurotransmitters, and alcohol increases the inhibitory effects of the neurotransmitter GABA. LSD interferes with synapses that use the neurotransmitter serotonin. Cocaine blocks reuptake of dopamine and therefore increases its effects. During the 1950s, Bernard Katz and Paul Fatt observed spontaneous miniature synaptic currents at the frog neuromuscular junction. Based on these observations, they developed the 'quantal hypothesis', which is the basis for our current understanding of neurotransmitter release as exocytosis and for which Katz received the Nobel Prize in Physiology or Medicine in 1970. In the late 1960s, Ricardo Miledi and Katz advanced the hypothesis that depolarization-induced influx of calcium ions triggers exocytosis. Sir Charles Scott Sherrington coined the word 'synapse', and he gave the history of the word in a letter he wrote to John Fulton.
https://en.wikipedia.org/wiki?curid=27809
Sleep and learning Multiple hypotheses explain the possible connections between sleep and learning in humans. Research indicates that sleep does more than allow the brain to rest. It may also aid the consolidation of long-term memories. REM sleep and slow-wave sleep play different roles in memory consolidation. REM is associated with the consolidation of nondeclarative (implicit) memories. An example of a nondeclarative memory would be a task that we can do without consciously thinking about it, such as riding a bike. Slow-wave, or non-REM (NREM) sleep, is associated with the consolidation of declarative (explicit) memories. These are facts that need to be consciously remembered, such as dates for a history class. Popular sayings can reflect the notion that remolded memories produce new creative associations in the morning, and that performance often improves after a time-interval that includes sleep. Current studies demonstrate that healthy sleep produces a significant learning-dependent performance boost. The idea is that sleep helps the brain to edit its memory, looking for important patterns and extracting overarching rules which could be described as 'the gist', and integrating this with existing memory. The 'synaptic scaling' hypothesis suggests that sleep plays an important role in regulating learning that has taken place while awake, enabling more efficient and effective storage in the brain, making better use of space and energy. Healthy sleep must include the appropriate sequence and proportion of NREM and REM phases, which play different roles in the memory consolidation-optimization process. During a normal night of sleep, a person will alternate between periods of NREM and REM sleep. Each cycle is approximately 90 minutes long, containing a 20–30 minute bout of REM sleep. NREM sleep consists of sleep stages 1–4 and is the phase during which movement can be observed. A person can still move their body when they are in NREM sleep. 
If someone sleeping turns, tosses, or rolls over, this indicates that they are in NREM sleep. REM sleep is characterized by the lack of muscle activity. Physiological studies have shown that aside from the occasional twitch, a person actually becomes paralyzed during REM sleep. In motor skill learning, an interval of sleep may be critical for the expression of performance gains; without sleep these gains will be delayed. Procedural memories are a form of nondeclarative memory, so they would most benefit from fast-wave REM sleep. In a study, procedural memories have been shown to benefit from sleep. Subjects were tested using a tapping task, where they used their fingers to tap a specific sequence of numbers on a keyboard, and their performances were measured by accuracy and speed. This finger-tapping task was used to simulate learning a motor skill. The first group was tested, retested 12 hours later while awake, and finally tested another 12 hours later with sleep in between. The other group was tested, retested 12 hours later with sleep in between, and then retested 12 hours later while awake. The results showed that in both groups, there was only a slight improvement after a 12-hour wake session, but a significant increase in performance after each group slept. This study gives evidence that REM sleep is a significant factor in consolidating motor skill procedural memories; therefore, sleep deprivation can impair performance on a motor learning task. This memory decrement results specifically from the loss of stage 2 NREM sleep. Declarative memory has also been shown to benefit from sleep, but not in the same way as procedural memory. Declarative memories benefit from slow-wave NREM sleep. A study was conducted where the subjects learned word pairs, and the results showed that sleep not only prevents the decay of memory, but also actively fixates declarative memories. 
Two of the groups learned word pairs, then either slept or stayed awake, and were tested again. The other two groups did the same thing, except they also learned interference pairs right before being retested to try to disrupt the previously learned word pairs. The results showed that sleep was of "some" help in retaining the word pair associations, while against the interference pair, sleep helped "significantly". After sleep, there is increased insight. This is because sleep helps people to reanalyze their memories. The same patterns of brain activity that occur during learning have been found to occur again during sleep, only faster. One way that sleep strengthens memories is by weeding out the less successful connections between neurons in the brain. This weeding out is essential to prevent overactivity. The brain compensates for strengthening some synapses (connections) between neurons, by weakening others. The weakening process occurs mostly during sleep. This weakening during sleep allows for strengthening of other connections while we are awake. Learning is the process of strengthening connections, therefore this process could be a major explanation for the benefits that sleep has on memory. Research has shown that taking an afternoon nap increases learning capacity. A study tested two groups of subjects on a nondeclarative memory task. One group engaged in REM sleep, and one group did not (meaning that they engaged in NREM sleep). The investigators found that the subjects who engaged only in NREM sleep did not show much improvement. The subjects who engaged in REM sleep performed significantly better, indicating that REM sleep facilitated the consolidation of nondeclarative memories. A more recent study demonstrated that a procedural task was learned and retained better if it was encountered immediately before going to sleep, while a declarative task was learned better in the afternoon. 
A 2009 study based on electrophysiological recordings of large ensembles of isolated cells in the prefrontal cortex of rats revealed that cell assemblies that formed upon learning were preferentially active during subsequent sleep episodes. More specifically, those replay events were more prominent during slow-wave sleep and were concomitant with hippocampal reactivation events. This study showed that neuronal patterns in large brain networks are tagged during learning so that they are replayed, and supposedly consolidated, during subsequent sleep. Other studies have shown similar reactivation of learning patterns during motor skill and neuroprosthetic learning. Notably, new evidence suggests that reactivation and rescaling may co-occur during sleep. Sleep has been directly linked to the grades of students. One in four U.S. high school students admit to falling asleep in class at least once a week, and results have correspondingly shown that those who sleep less do poorly. In the United States, sleep deprivation is common among students because almost all schools begin early in the morning and many students either choose to stay awake late into the night or cannot do otherwise due to delayed sleep phase syndrome. As a result, students who should be getting between 8.5 and 9.25 hours of sleep are getting only 7 hours. Perhaps because of this sleep deprivation, their grades are lower and their concentration is impaired. As a result of studies showing the effects of sleep deprivation on grades, and the different sleep patterns of teenagers, a school in New Zealand changed its start time to 10:30 a.m. in 2006, to allow students to keep to a schedule that allowed more sleep. In 2009, Monkseaton High School, in North Tyneside, had 800 pupils aged 13–19 starting lessons at 10 a.m. instead of the normal 9 a.m. and reported that general absence dropped by 8% and persistent absenteeism by 27%.
Similarly, a high school in Copenhagen has committed to providing at least one class per year for students who will start at 10 a.m. or later. College students represent one of the most sleep-deprived segments of the population. Only 11% of American college students sleep well, and 40% of students feel well rested only two days per week. About 73% have experienced at least some occasional sleep issues. This poor sleep is thought to have a severe impact on their ability to learn and remember information, because the brain is being deprived of the time it needs to consolidate information, which is essential to the learning process.
https://en.wikipedia.org/wiki?curid=27811
Systematics Biological systematics is the study of the diversification of living forms, both past and present, and the relationships among living things through time. Relationships are visualized as evolutionary trees (synonyms: cladograms, phylogenetic trees, phylogenies). Phylogenies have two components: branching order (showing group relationships) and branch length (showing amount of evolution). Phylogenetic trees of species and higher taxa are used to study the evolution of traits (e.g., anatomical or molecular characteristics) and the distribution of organisms (biogeography). Systematics, in other words, is used to understand the evolutionary history of life on Earth. In the study of biological systematics, researchers use the different branches to further understand the relationships between differing organisms. These branches are used to determine the applications and uses for modern-day systematics. Biological systematics classifies species by using three specific branches. "Numerical systematics", or "biometry", uses biological statistics to identify and classify animals. "Biochemical systematics" classifies and identifies animals based on the analysis of the material that makes up the living part of a cell—such as the nucleus, organelles, and cytoplasm. "Experimental systematics" identifies and classifies animals based on the evolutionary units that comprise a species, as well as their importance in evolution itself. Factors such as mutations, genetic divergence, and hybridization are all considered evolutionary units. With these specific branches, researchers are able to determine the applications and uses for modern-day systematics. John Lindley provided an early definition of systematics in 1830, although he wrote of "systematic botany" rather than using the term "systematics". In 1970 Michener "et al."
defined "systematic biology" and "taxonomy" (terms that are often confused and used interchangeably) in relationship to one another as follows: Systematic biology (hereafter called simply systematics) is the field that (a) provides scientific names for organisms, (b) describes them, (c) preserves collections of them, (d) provides classifications for the organisms, keys for their identification, and data on their distributions, (e) investigates their evolutionary histories, and (f) considers their environmental adaptations. This is a field with a long history that in recent years has experienced a notable renaissance, principally with respect to theoretical content. Part of the theoretical material has to do with evolutionary areas (topics e and f above), the rest relates especially to the problem of classification. Taxonomy is that part of Systematics concerned with topics (a) to (d) above. The term "taxonomy" was coined by Augustin Pyramus de Candolle, while the term "systematic" was coined by Carl Linnaeus, the father of taxonomy. Taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, phylogenetics: At various times in history, all these words have had overlapping, related meanings. However, in modern usage, they can all be considered synonyms of each other. For example, Webster's 9th New Collegiate Dictionary of 1987 treats "classification", "taxonomy", and "systematics" as synonyms. According to this work, the terms originated in 1790, c. 1828, and 1888, respectively. Some claim systematics alone deals specifically with relationships through time, and that it can be synonymous with phylogenetics, broadly dealing with the inferred hierarchy of organisms. This means it would be a subset of taxonomy as it is sometimes regarded, but the inverse is claimed by others.
Europeans tend to use the terms "systematics" and "biosystematics" for the study of biodiversity as a whole, whereas North Americans tend to use "taxonomy" more frequently. However, taxonomy, and in particular alpha taxonomy, is more specifically the identification, description, and naming (i.e. nomenclature) of organisms, while "classification" focuses on placing organisms within hierarchical groups that show their relationships to other organisms. All of these biological disciplines can deal with both extinct and extant organisms. Systematics uses taxonomy as a primary tool in understanding, as nothing about an organism's relationships with other living things can be understood without it first being properly studied and described in sufficient detail to identify and classify it correctly. Scientific classifications are aids in recording and reporting information to other scientists and to laymen. The systematist, a scientist who specializes in systematics, must, therefore, be able to use existing classification systems, or at least know them well enough to skilfully justify not using them. Phenetics was an attempt to determine the relationships of organisms through a measure of overall similarity, making no distinction between plesiomorphies (shared ancestral traits) and apomorphies (derived traits). From the late 20th century onwards, it was superseded by cladistics, which rejects plesiomorphies in attempting to resolve the phylogeny of Earth's various organisms through time. Today, systematists generally make extensive use of molecular biology and of computer programs to study organisms. Taxonomic characters are the taxonomic attributes that can be used to provide the evidence from which relationships (the phylogeny) between taxa are inferred. Kinds of taxonomic characters include:
https://en.wikipedia.org/wiki?curid=27813
Sleep Sleep is a naturally recurring state of mind and body, characterized by altered consciousness, relatively inhibited sensory activity, reduced muscle activity and inhibition of nearly all voluntary muscles during rapid eye movement (REM) sleep, and reduced interactions with surroundings. It is distinguished from wakefulness by a decreased ability to react to stimuli, but it is more reactive than a coma or disorders of consciousness, with sleep displaying very different and active brain patterns. Sleep occurs in repeating periods, in which the body alternates between two distinct modes: REM sleep and non-REM sleep. Although REM stands for "rapid eye movement", this mode of sleep has many other aspects, including virtual paralysis of the body. A well-known feature of sleep is the dream, an experience typically recounted in narrative form, which resembles waking life while in progress, but which usually can later be distinguished as fantasy. During sleep, most of the body's systems are in an anabolic state, helping to restore the immune, nervous, skeletal, and muscular systems; these are vital processes that maintain mood, memory, and cognitive function, and play a large role in the function of the endocrine and immune systems. The internal circadian clock promotes sleep daily at night. The diverse purposes and mechanisms of sleep are the subject of substantial ongoing research. Sleep is a highly conserved behavior across animal evolution. Humans may suffer from various sleep disorders, including dyssomnias such as insomnia, hypersomnia, narcolepsy, and sleep apnea; parasomnias such as sleepwalking and rapid eye movement sleep behavior disorder; bruxism; and circadian rhythm sleep disorders. The advent of artificial light has substantially altered sleep timing in industrialized countries. The most pronounced physiological changes in sleep occur in the brain. The brain uses significantly less energy during sleep than it does when awake, especially during non-REM sleep.
In areas with reduced activity, the brain restores its supply of adenosine triphosphate (ATP), the molecule used for short-term storage and transport of energy. In quiet waking, the brain is responsible for 20% of the body's energy use; thus this reduction has a noticeable effect on overall energy consumption. Sleep increases the sensory threshold. In other words, sleeping persons perceive fewer stimuli, but can generally still respond to loud noises and other salient sensory events. During slow-wave sleep, humans secrete bursts of growth hormone. All sleep, even during the day, is associated with secretion of prolactin. Key physiological methods for monitoring and measuring changes during sleep include electroencephalography (EEG) of brain waves, electrooculography (EOG) of eye movements, and electromyography (EMG) of skeletal muscle activity. Simultaneous collection of these measurements is called polysomnography, and can be performed in a specialized sleep laboratory. Sleep researchers also use simplified electrocardiography (EKG) for cardiac activity and actigraphy for motor movements. Sleep is divided into two broad types: non-rapid eye movement (non-REM or NREM) sleep and rapid eye movement (REM) sleep. Non-REM and REM sleep are so different that physiologists identify them as distinct behavioral states. Non-REM sleep occurs first; after a transitional period, it becomes slow-wave sleep or deep sleep. During this phase, body temperature and heart rate fall, and the brain uses less energy. REM sleep, also known as paradoxical sleep, represents a smaller portion of total sleep time. It is the main occasion for dreams (or nightmares), and is associated with desynchronized and fast brain waves, eye movements, loss of muscle tone, and suspension of homeostasis. The sleep cycle of alternating NREM and REM sleep takes an average of 90 minutes, occurring 4–6 times in a good night's sleep.
The American Academy of Sleep Medicine (AASM) divides NREM into three stages: N1, N2, and N3, the last of which is also called delta sleep or slow-wave sleep. The whole period normally proceeds in the order: N1 → N2 → N3 → N2 → REM. REM sleep occurs as a person returns to stage 2 or 1 from a deep sleep. There is a greater amount of deep sleep (stage N3) earlier in the night, while the proportion of REM sleep increases in the two cycles just before natural awakening. Awakening can mean the end of sleep, or simply a moment to survey the environment and readjust body position before falling back asleep. Sleepers typically awaken soon after the end of a REM phase or sometimes in the middle of REM. Internal circadian indicators, along with successful reduction of homeostatic sleep need, typically bring about awakening and the end of the sleep cycle. Awakening involves heightened electrical activation in the brain, beginning with the thalamus and spreading throughout the cortex. During a night's sleep, a small amount of time is usually spent in a waking state. As measured by electroencephalography, young females are awake for 0–1% of the larger sleeping period; young males are awake for 0–2%. In adults, wakefulness increases, especially in later cycles. One study found 3% awake time in the first ninety-minute sleep cycle, 8% in the second, 10% in the third, 12% in the fourth, and 13–14% in the fifth. Most of this awake time occurred shortly after REM sleep. Today, many humans wake up with an alarm clock; however, people can also reliably wake themselves up at a specific time with no need for an alarm. Many people sleep quite differently on workdays versus days off, a pattern which can lead to chronic circadian desynchronization. Many people regularly watch television and other screens before going to bed, a factor which may exacerbate disruption of the circadian cycle.
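The stage ordering described above can be captured in a toy sketch. The N1 → N2 → N3 → N2 → REM cycle, the roughly 90-minute cycle length, and the 4–6 cycles per night come from the text; the function and variable names, and the flat-list representation, are purely illustrative and not a clinical model (real stage durations shift across the night).

```python
# Toy model of the AASM stage ordering within a sleep cycle, as an
# illustrative sketch only. Real nights have more N3 early and more
# REM late, which this flat repetition deliberately ignores.
CYCLE_ORDER = ["N1", "N2", "N3", "N2", "REM"]

def night_stage_sequence(cycles=5):
    """Return the stage sequence for a night of `cycles` sleep cycles."""
    return [stage for _ in range(cycles) for stage in CYCLE_ORDER]

# A good night's sleep contains 4-6 such ~90-minute cycles.
sequence = night_stage_sequence(cycles=5)
assert sequence[:5] == ["N1", "N2", "N3", "N2", "REM"]
assert sequence.count("REM") == 5  # one REM period per cycle
```

Because sleepers typically awaken soon after a REM phase, each cycle in this sketch ends on "REM", which is also where the sequence for a full night terminates.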
Scientific studies on sleep have shown that sleep stage at awakening is an important factor in amplifying sleep inertia. Sleep timing is controlled by the circadian clock (Process C), sleep–wake homeostasis (Process S), and to some extent by individual will. Sleep timing depends greatly on hormonal signals from the circadian clock, or Process C, a complex neurochemical system which uses signals from an organism's environment to recreate an internal day–night rhythm. Process C counteracts the homeostatic drive for sleep during the day (in diurnal animals) and augments it at night. The suprachiasmatic nucleus (SCN), a brain area directly above the optic chiasm, is presently considered the most important nexus for this process; however, secondary clock systems have been found throughout the body. An organism whose circadian clock exhibits a regular rhythm corresponding to outside signals is said to be "entrained"; an entrained rhythm persists even if the outside signals suddenly disappear. If an entrained human is isolated in a bunker with constant light or darkness, he or she will continue to experience rhythmic increases and decreases of body temperature and melatonin, on a period which slightly exceeds 24 hours. Scientists refer to such conditions as free-running of the circadian rhythm. Under natural conditions, light signals regularly adjust this period downward, so that it corresponds better with the exact 24 hours of an Earth day. The circadian clock exerts constant influence on the body, effecting sinusoidal oscillation of body temperature between roughly 36.2 °C and 37.2 °C. The suprachiasmatic nucleus itself shows conspicuous oscillation activity, which intensifies during subjective day (i.e., the part of the rhythm corresponding with daytime, whether accurately or not) and drops to almost nothing during subjective night. 
The circadian pacemaker in the suprachiasmatic nucleus has a direct neural connection to the pineal gland, which releases the hormone melatonin at night. Cortisol levels typically rise throughout the night, peak in the awakening hours, and diminish during the day. Circadian prolactin secretion begins in the late afternoon, especially in women, and is subsequently augmented by sleep-induced secretion, to peak in the middle of the night. Circadian rhythm exerts some influence on the nighttime secretion of growth hormone. The circadian rhythm influences the ideal timing of a restorative sleep episode. Sleepiness increases during the night. REM sleep occurs more near the body temperature minimum within the circadian cycle, whereas slow-wave sleep can occur more independently of circadian time. The internal circadian clock is profoundly influenced by changes in light, since these are its main clues about what time it is. Exposure to even small amounts of light during the night can suppress melatonin secretion, and increase body temperature and wakefulness. Short pulses of light, at the right moment in the circadian cycle, can significantly 'reset' the internal clock. Blue light, in particular, exerts the strongest effect, leading to concerns that electronic media use before bed may interfere with sleep. Modern humans often find themselves desynchronized from their internal circadian clock, due to the requirements of work (especially night shifts), long-distance travel, and the influence of universal indoor lighting. Even if they have sleep debt, or feel sleepy, people can have difficulty staying asleep at the peak of their circadian cycle. Conversely, they can have difficulty waking up in the trough of the cycle. A healthy young adult entrained to the sun will (during most of the year) fall asleep a few hours after sunset, experience body temperature minimum at 6 a.m., and wake up a few hours after sunrise.
Generally speaking, the longer an organism is awake, the more it feels a need to sleep ("sleep debt"). This driver of sleep is referred to as Process S. The balance between sleeping and waking is regulated by a process called homeostasis. Induced or perceived lack of sleep is called sleep deprivation. Process S is driven by the depletion of glycogen and accumulation of adenosine in the forebrain that disinhibits the ventrolateral preoptic nucleus, allowing for inhibition of the ascending reticular activating system. Sleep deprivation tends to cause slower brain waves in the frontal cortex, shortened attention span, higher anxiety, impaired memory, and a grouchy mood. Conversely, a well-rested organism tends to have improved memory and mood. Neurophysiological and functional imaging studies have demonstrated that frontal regions of the brain are particularly responsive to homeostatic sleep pressure. There is disagreement on how much sleep debt is possible to accumulate, and whether sleep debt is accumulated against an individual's average sleep or some other benchmark. It is also unclear whether the prevalence of sleep debt among adults has changed appreciably in the industrialized world in recent decades. Sleep debt does show some evidence of being cumulative. Subjectively, however, humans seem to reach maximum sleepiness after 30 hours of waking. It is likely that in Western societies, children are sleeping less than they previously have. One neurochemical indicator of sleep debt is adenosine, a neurotransmitter that inhibits many of the bodily processes associated with wakefulness. Adenosine levels increase in the cortex and basal forebrain during prolonged wakefulness, and decrease during the sleep-recovery period, potentially acting as a homeostatic regulator of sleep. Coffee and caffeine temporarily block the effect of adenosine, prolong sleep latency, and reduce total sleep time and quality. 
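Process C and Process S are commonly combined in the classic two-process model of sleep regulation (Borbély). The sketch below illustrates the qualitative behavior described above: homeostatic pressure rising with time awake and decaying with sleep, plus a roughly 24-hour circadian oscillation whose trough sits near the 6 a.m. body temperature minimum. Every numerical parameter here (time constants, amplitude, phase, the function names themselves) is an illustrative assumption, not a measured physiological value.

```python
# Minimal sketch of the two-process model of sleep regulation.
# All parameter values are illustrative placeholders.
import math

def process_s(t_awake_h, s0=0.2, s_max=1.0, tau_rise=18.0):
    """Homeostatic sleep pressure after t_awake_h hours of wakefulness.

    Rises toward a saturating ceiling s_max the longer one stays awake,
    mirroring the accumulation of sleep debt described in the text.
    """
    return s_max - (s_max - s0) * math.exp(-t_awake_h / tau_rise)

def process_c(clock_h, amplitude=0.3):
    """Circadian alerting signal at clock time clock_h (hours, 0-24).

    Peaks in the early evening (~18:00) and bottoms out near 6 a.m.,
    the approximate body temperature minimum in an entrained adult.
    """
    return amplitude * math.sin(2 * math.pi * (clock_h - 18.0) / 24.0
                                + math.pi / 2)

def sleep_drive(t_awake_h, clock_h):
    """Net sleep propensity: homeostatic pressure minus circadian alerting."""
    return process_s(t_awake_h) - process_c(clock_h)

# Sleep pressure grows monotonically with time awake...
assert process_s(16) > process_s(8) > process_s(1)
# ...and net drive is far higher after 20 h awake at 4 a.m. than
# after 2 h awake at the evening circadian peak.
assert sleep_drive(20, 4.0) > sleep_drive(2, 18.0)
```

In this toy form the model reproduces the everyday observations in the text: even with substantial sleep debt, the evening circadian peak (large `process_c`) can keep a person awake, while waking up in the 6 a.m. trough is hard regardless of how much pressure has already been discharged.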
Humans are also influenced by aspects of "social time", such as the hours when other people are awake, the hours when work is required, the time on the clock, etc. Time zones, standard times used to unify the timing for people in the same area, correspond only approximately to the natural rising and setting of the sun. The approximate nature of time zones can be illustrated with China, a country which used to span five time zones and now officially uses only one (UTC+8). In polyphasic sleep, an organism sleeps several times in a 24-hour cycle, whereas in monophasic sleep this occurs all at once. Under experimental conditions, humans tend to alternate more frequently between sleep and wakefulness (i.e., exhibit more polyphasic sleep) if they have nothing better to do. Given a 14-hour period of darkness in experimental conditions, humans tended towards bimodal sleep, with two sleep periods concentrated at the beginning and at the end of the dark time. Bimodal sleep in humans was more common before the industrial revolution. Different characteristic sleep patterns, such as the familiar "early bird" and "night owl", are called "chronotypes". Genetics and sex have some influence on chronotype, but so do habits. Chronotype is also liable to change over the course of a person's lifetime. Seven-year-olds are better disposed to wake up early in the morning than are fifteen-year-olds. Chronotypes far outside the normal range are called circadian rhythm sleep disorders. The siesta habit has recently been associated with a 37% lower coronary mortality, possibly due to reduced cardiovascular stress mediated by daytime sleep. Short naps at mid-day and mild evening exercise were found to be effective for improved sleep, cognitive tasks, and mental health in elderly people. Monozygotic (identical) but not dizygotic (fraternal) twins tend to have similar sleep habits.
Neurotransmitters, molecules whose production can be traced to specific genes, are one genetic influence on sleep which can be analyzed. The circadian clock has its own set of genes. Genes which may influence sleep include ABCC9, DEC2, dopamine receptor D2, and variants near PAX8 and VRK2. The quality of sleep may be evaluated from an objective and a subjective point of view. Objective sleep quality refers to how difficult it is for a person to fall asleep and remain in a sleeping state, and how many times they wake up during a single night. Poor sleep quality disrupts the cycle of transition between the different stages of sleep. Subjective sleep quality in turn refers to a sense of being rested and regenerated after awakening from sleep. A study by A. Harvey et al. (2002) found that insomniacs were more demanding in their evaluations of sleep quality than individuals who had no sleep problems. Homeostatic sleep propensity (the need for sleep as a function of the amount of time elapsed since the last adequate sleep episode) must be balanced against the circadian element for satisfactory sleep. Along with corresponding messages from the circadian clock, this tells the body it needs to sleep. A person who regularly awakens at an early hour will generally not be able to sleep much later than his or her normal waking time, even if moderately sleep-deprived. The timing is correct when the following two circadian markers occur after the middle of the sleep episode and before awakening: maximum concentration of the hormone melatonin, and minimum core body temperature. Human sleep needs vary by age and amongst individuals; sleep is considered to be adequate when there is no daytime sleepiness or dysfunction. Moreover, self-reported sleep duration is only moderately correlated with actual sleep time as measured by actigraphy, and those affected with sleep state misperception may typically report having slept only four hours despite having slept a full eight hours.
Researchers have found that sleeping 6–7 hours each night correlates with longevity and cardiac health in humans, though many underlying factors may be involved in the causality behind this relationship. Sleep difficulties are furthermore associated with psychiatric disorders such as depression, alcoholism, and bipolar disorder. Up to 90 percent of adults with depression are found to have sleep difficulties. Dysregulation detected by EEG includes disturbances in sleep continuity, decreased delta sleep and altered REM patterns with regard to latency, distribution across the night and density of eye movements. By the time infants reach the age of two, their brain size has reached 90 percent of an adult-sized brain; a majority of this brain growth has occurred during the period of life with the highest rate of sleep. The hours that children spend asleep influence their ability to perform on cognitive tasks. Children who sleep through the night and have few night waking episodes have higher cognitive attainments and easier temperaments than other children. Sleep also influences language development. To test this, researchers taught infants a faux language and observed their recollection of the rules for that language. Infants who slept within four hours of learning the language could remember the language rules better, while infants who stayed awake longer did not recall those rules as well. There is also a relationship between infants' vocabulary and sleeping: infants who sleep longer at night at 12 months have better vocabularies at 26 months. Children need many hours of sleep per day in order to develop and function properly: up to 18 hours for newborn babies, with a declining rate as a child ages. Early in 2015, after a two-year study, the National Sleep Foundation in the US announced newly-revised recommendations as shown in the table below. 
The human organism physically restores itself during sleep, healing itself and removing metabolic wastes which build up during periods of activity. This restoration takes place mostly during slow-wave sleep, during which body temperature, heart rate, and brain oxygen consumption decrease. The brain, especially, requires sleep for restoration, whereas in the rest of the body these processes can take place during quiescent waking. In both cases, the reduced rate of metabolism enables countervailing restorative processes. While awake, metabolism generates reactive oxygen species, which are damaging to cells. During sleep, metabolic rates decrease and reactive oxygen species generation is reduced, allowing restorative processes to take over. The sleeping brain has been shown to remove metabolic waste products at a faster rate than during an awake state. It is further theorized that sleep helps facilitate the synthesis of molecules that help repair and protect the brain from these harmful elements generated during waking. Anabolic hormones such as growth hormones are secreted preferentially during sleep. The concentration of the sugar compound glycogen in the brain increases during sleep, and is depleted through metabolism during wakefulness. Studies suggest that sleep deprivation may impair the body's ability to heal wounds. It has been shown that sleep deprivation affects the immune system in rats. It is now possible to state that "sleep loss impairs immune function and immune challenge alters sleep," and it has been suggested that sleep increases white blood cell counts. A 2014 study found that depriving mice of sleep increased cancer growth and dampened the immune system's ability to control cancers. The effect of sleep duration on somatic growth is not completely known. One study recorded growth, height, and weight, as correlated to parent-reported time in bed in 305 children over a period of nine years (age 1–10).
It was found that "the variation of sleep duration among children does not seem to have an effect on growth." It is well established that slow-wave sleep affects growth hormone levels in adult men. During eight hours' sleep, Van Cauter, Leproult, and Plat found that the men with a high percentage of slow-wave sleep (SWS) (average 24%) also had high growth hormone secretion, while subjects with a low percentage of SWS (average 9%) had low growth hormone secretion. It has been widely accepted that sleep must support the formation of long-term memory and generally improve the recall of previous learning and experiences. However, its benefit seems to depend on the phase of sleep and the type of memory. For example, declarative and procedural memory recall tasks applied over early and late nocturnal sleep, as well as under wakefulness control conditions, have shown that declarative memory improves more during early sleep (dominated by SWS), while procedural memory improves more during late sleep (dominated by REM sleep). With regard to declarative memory, the functional role of SWS has been associated with hippocampal replays of previously encoded neural patterns that seem to facilitate long-term memory consolidation. This assumption is based on the active system consolidation hypothesis, which states that repeated reactivations of newly encoded information in the hippocampus during slow oscillations in NREM sleep mediate the stabilization and gradual integration of declarative memory with pre-existing knowledge networks on the cortical level. It assumes the hippocampus might hold information only temporarily, at a fast learning rate, whereas the neocortex is related to long-term storage at a slow learning rate.
This dialogue between the hippocampus and neocortex occurs in parallel with hippocampal sharp-wave ripples and thalamo-cortical spindles, a synchrony that drives the formation of the spindle-ripple event, which seems to be a prerequisite for the formation of long-term memories. Reactivation of memory also occurs during wakefulness, and its function is associated with updating the reactivated memory with newly encoded information, whereas reactivations during SWS are presented as crucial for memory stabilization. Based on targeted memory reactivation (TMR) experiments that use associated memory cues to trigger memory traces during sleep, several studies have reaffirmed the importance of nocturnal reactivations for the formation of persistent memories in neocortical networks, as well as highlighting the possibility of increasing people’s memory performance in declarative recall. Furthermore, nocturnal reactivation seems to share the same neural oscillatory patterns as reactivation during wakefulness, processes which might be coordinated by theta activity. During wakefulness, theta oscillations have often been related to successful performance in memory tasks, and cued memory reactivations during sleep have shown that theta activity is significantly stronger in subsequent recognition of cued stimuli as compared to uncued ones, possibly indicating a strengthening of memory traces and lexical integration by cuing during sleep. However, the beneficial effect of TMR for memory consolidation seems to occur only if the cued memories can be related to prior knowledge. During sleep, especially REM sleep, people tend to have dreams: elusive first-person experiences which seem realistic while in progress, despite their frequently bizarre qualities. Dreams can seamlessly incorporate elements within a person's mind that would not normally go together. They can include apparent sensations of all types, especially vision and movement.
People have proposed many hypotheses about the functions of dreaming. Sigmund Freud postulated that dreams are the symbolic expression of frustrated desires that have been relegated to the unconscious mind, and he used dream interpretation in the form of psychoanalysis in attempting to uncover these desires. Counterintuitively, penile erections during sleep are not more frequent during sexual dreams than during other dreams. The parasympathetic nervous system experiences increased activity during REM sleep, which may cause erection of the penis or clitoris. In males, 80% to 95% of REM sleep is normally accompanied by partial to full penile erection, while only about 12% of men's dreams contain sexual content. John Allan Hobson and Robert McCarley propose that dreams are caused by the random firing of neurons in the cerebral cortex during the REM period. This theory helps explain the irrationality of the mind during REM periods, as, according to it, the forebrain then creates a story in an attempt to reconcile and make sense of the nonsensical sensory information presented to it. This would explain the odd nature of many dreams. Using antidepressants, acetaminophen, ibuprofen, or alcoholic beverages is thought to potentially suppress dreams, whereas melatonin may have the ability to encourage them. Insomnia is a general term for difficulty falling asleep and/or staying asleep. Insomnia is the most common sleep problem, with many adults reporting occasional insomnia, and 10–15% reporting a chronic condition. Insomnia can have many different causes, including psychological stress, a poor sleep environment, an inconsistent sleep schedule, or excessive mental or physical stimulation in the hours before bedtime. Insomnia is often treated through behavioral changes such as keeping a regular sleep schedule, avoiding stimulating or stressful activities before bedtime, and cutting down on stimulants such as caffeine.
The sleep environment may be improved by installing heavy drapes to shut out all sunlight, and keeping computers, televisions and work materials out of the sleeping area. A 2010 review of published scientific research suggested that exercise generally improves sleep for most people, and helps sleep disorders such as insomnia. The optimum time to exercise "may" be 4 to 8 hours before bedtime, though exercise at any time of day is beneficial, with the exception of heavy exercise taken shortly before bedtime, which may disturb sleep. However, there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. Sleeping medications such as Ambien and Lunesta are an increasingly popular treatment for insomnia. Although these nonbenzodiazepine medications are generally believed to be better and safer than earlier generations of sedatives, they have still generated some controversy and discussion regarding side effects. White noise appears to be a promising treatment for insomnia. Obstructive sleep apnea is a condition in which major pauses in breathing occur during sleep, disrupting the normal progression of sleep and often causing other more severe health problems. Apneas occur when the muscles around the patient's airway relax during sleep, causing the airway to collapse and block the intake of oxygen. Obstructive sleep apnea is more common than central sleep apnea. As oxygen levels in the blood drop, the patient then comes out of deep sleep in order to resume breathing. When several of these episodes occur per hour, sleep apnea rises to a level of seriousness that may require treatment. Diagnosing sleep apnea usually requires a professional sleep study performed in a sleep clinic, because the episodes of wakefulness caused by the disorder are extremely brief and patients usually do not remember experiencing them. Instead, many patients simply feel tired after getting several hours of sleep and have no idea why. 
Major risk factors for sleep apnea include chronic fatigue, old age, obesity and snoring. People over age 60 with prolonged sleep (8-10 hours or more; average sleep duration of 7-8 hours in the elderly) have a 33% increased risk of all-cause mortality and 43% increased risk of cardiovascular diseases, while those with short sleep (less than 7 hours) have a 6% increased risk of all-cause mortality. Sleep disorders, including sleep apnea, insomnia, or periodic limb movements, occur more commonly in the elderly, each possibly impacting sleep quality and duration. A 2017 review indicated that older adults do not need less sleep, but rather have an impaired ability to obtain their sleep needs, and may be able to deal with sleepiness better than younger adults. Various practices are recommended to mitigate sleep disturbances in the elderly, such as having a light bedtime snack, avoidance of caffeine, daytime naps, excessive evening stimulation, and tobacco products, and using regular bedtime and wake schedules. Sleep disorders include narcolepsy, periodic limb movement disorder (PLMD), restless leg syndrome (RLS), upper airway resistance syndrome (UARS), and the circadian rhythm sleep disorders. Fatal familial insomnia, or FFI, an extremely rare genetic disease with no known treatment or cure, is characterized by increasing insomnia as one of its symptoms; ultimately sufferers of the disease stop sleeping entirely, before dying of the disease. Somnambulism, known as sleep walking, is a sleeping disorder, especially among children. 
Drugs which induce sleep, known as hypnotics, include benzodiazepines, although these interfere with REM; nonbenzodiazepine hypnotics such as eszopiclone (Lunesta), zaleplon (Sonata), and zolpidem (Ambien); antihistamines, such as diphenhydramine (Benadryl) and doxylamine; alcohol (ethanol), despite its rebound effect later in the night and interference with REM; barbiturates, which have the same problem; melatonin, a component of the circadian clock, and released naturally at night by the pineal gland; and cannabis, which may also interfere with REM. Stimulants, which inhibit sleep, include caffeine, an adenosine antagonist; amphetamine, MDMA, empathogen-entactogens, and related drugs; cocaine, which can alter the circadian rhythm, and methylphenidate, which acts similarly; and other analeptic drugs like modafinil and armodafinil with poorly understood mechanisms. Dietary and nutritional choices may affect sleep duration and quality. One 2016 review indicated that a high-carbohydrate diet promoted a shorter onset to sleep and a longer duration of sleep than a high-fat diet. A 2012 investigation indicated that mixed micronutrients and macronutrients are needed to promote quality sleep. A varied diet containing fresh fruits and vegetables, low saturated fat, and whole grains may be optimal for individuals seeking to improve sleep quality. High-quality clinical trials on long-term dietary practices are needed to better define the influence of diet on sleep quality. Research suggests that sleep patterns vary significantly across cultures. The most striking differences are observed between societies that have plentiful sources of artificial light and ones that do not. The primary difference appears to be that pre-light cultures have more broken-up sleep patterns. 
For example, people without artificial light might go to sleep far sooner after the sun sets, but then wake up several times throughout the night, punctuating their sleep with periods of wakefulness, perhaps lasting several hours. The boundaries between sleeping and waking are blurred in these societies. Some observers believe that nighttime sleep in these societies is most often split into two main periods, the first characterized primarily by deep sleep and the second by REM sleep. Some societies display a fragmented sleep pattern in which people sleep at all times of the day and night for shorter periods. In many nomadic or hunter-gatherer societies, people will sleep on and off throughout the day or night depending on what is happening. Plentiful artificial light has been available in the industrialized West since at least the mid-19th century, and sleep patterns have changed significantly everywhere that lighting has been introduced. In general, people sleep in a more concentrated burst through the night, going to sleep much later, although this is not always the case. Historian A. Roger Ekirch thinks that the traditional pattern of "segmented sleep," as it is called, began to disappear among the urban upper class in Europe in the late 17th century and the change spread over the next 200 years; by the 1920s "the idea of a first and second sleep had receded entirely from our social consciousness." Ekirch attributes the change to increases in "street lighting, domestic lighting and a surge in coffee houses," which slowly made nighttime a legitimate time for activity, decreasing the time available for rest. Today in most societies people sleep during the night, but in very hot climates they may sleep during the day. During Ramadan, many Muslims sleep during the day rather than at night. In some societies, people sleep with at least one other person (sometimes many) or with animals. In other cultures, people rarely sleep with anyone except for an intimate partner. 
In almost all societies, sleeping partners are strongly regulated by social standards. For example, a person might only sleep with the immediate family, the extended family, a spouse or romantic partner, children, children of a certain age, children of a specific gender, peers of a certain gender, friends, peers of equal social rank, or with no one at all. Sleep may be an actively social time, depending on the sleep groupings, with no constraints on noise or activity. People sleep in a variety of locations. Some sleep directly on the ground; others on a skin or blanket; others sleep on platforms or beds. Some sleep with blankets, some with pillows, some with simple headrests, some with no head support. These choices are shaped by a variety of factors, such as climate, protection from predators, housing type, technology, personal preference, and the incidence of pests. Sleep has been seen in culture as similar to death since antiquity; in Greek mythology, Hypnos (the god of sleep) and Thanatos (the god of death) were both said to be the children of Nyx (the goddess of night). John Donne, Samuel Taylor Coleridge, Percy Bysshe Shelley, and other poets have all written poems about the relationship between sleep and death. Shelley describes them as "both so passing, strange and wonderful!" Many people consider dying in one's sleep the most peaceful way to die. Phrases such as "big sleep" and "rest in peace" are often used in reference to death, possibly in an effort to lessen its finality. Sleep and dreaming have sometimes been seen as providing the potential for visionary experiences. In medieval Irish tradition, in order to become a filí, the poet was required to undergo a ritual called the "imbas forosnai", in which they would enter a mantic, trancelike sleep. Many cultural stories have been told about people falling asleep for extended periods of time. The earliest of these stories is the ancient Greek legend of Epimenides of Knossos. 
According to the biographer Diogenes Laërtius, Epimenides was a shepherd on the Greek island of Crete. One day, one of his sheep went missing and he went out to look for it, but became tired and fell asleep in a cave under Mount Ida. When he awoke, he continued searching for the sheep, but could not find it, so he returned to his old farm, only to discover that it was now under new ownership. He went to his hometown, but discovered that nobody there knew him. Finally, he met his younger brother, who was now an old man, and learned that he had been asleep in the cave for fifty-seven years. A far more famous instance of a "long sleep" today is the Christian legend of the Seven Sleepers of Ephesus, in which seven Christians flee into a cave during pagan times in order to escape persecution, but fall asleep and wake up 360 years later to discover, to their astonishment, that the Roman Empire is now predominantly Christian. The American author Washington Irving's short story "Rip Van Winkle", first published in 1819 in his collection of short stories "The Sketch Book of Geoffrey Crayon, Gent.", is about a man in colonial America named Rip Van Winkle who falls asleep on one of the Catskill Mountains and wakes up twenty years later after the American Revolution. The story is now considered one of the greatest classics of American literature. Writing about the thematical representations of sleep in art, physician and sleep researcher Meir Kryger noted: "[Artists] have intense fascination with mythology, dreams, religious themes, the parallel between sleep and death, reward, abandonment of conscious control, healing, a depiction of innocence and serenity, and the erotic."
https://en.wikipedia.org/wiki?curid=27834
Superoxide dismutase Superoxide dismutase (SOD) is an enzyme that alternately catalyzes the dismutation (or partitioning) of the superoxide (O2−) radical into ordinary molecular oxygen (O2) and hydrogen peroxide (H2O2). Superoxide is produced as a by-product of oxygen metabolism and, if not regulated, causes many types of cell damage. Hydrogen peroxide is also damaging and is degraded by other enzymes such as catalase. Thus, SOD is an important antioxidant defense in nearly all living cells exposed to oxygen. One exception is "Lactobacillus plantarum" and related lactobacilli, which use a different mechanism to prevent damage from reactive O2−. SODs catalyze the disproportionation of superoxide: 2 O2− + 2 H+ → O2 + H2O2. In this way, O2− is converted into two less damaging species. The SOD-catalyzed dismutation of superoxide may be written, for Cu,Zn SOD, as the following pair of reactions: Cu2+-SOD + O2− → Cu+-SOD + O2, and Cu+-SOD + O2− + 2 H+ → Cu2+-SOD + H2O2. The general form, applicable to all the different metal-coordinated forms of SOD, can be written as: M(n+1)+-SOD + O2− → Mn+-SOD + O2, and Mn+-SOD + O2− + 2 H+ → M(n+1)+-SOD + H2O2, where M = Cu (n=1), Mn (n=2), Fe (n=2), or Ni (n=2). In a series of such reactions, the oxidation state and the charge of the metal cation oscillate between n and n+1: +1 and +2 for Cu, or +2 and +3 for the other metals. Irwin Fridovich and Joe McCord at Duke University discovered the enzymatic activity of superoxide dismutase in 1968. SODs were previously known as a group of metalloproteins with unknown function; for example, CuZnSOD was known as erythrocuprein (or hemocuprein, or cytocuprein) or as the veterinary anti-inflammatory drug "Orgotein". Likewise, Brewer (1967) identified a protein that later became known as superoxide dismutase as an indophenol oxidase by protein analysis of starch gels using the phenazine-tetrazolium technique. 
There are three major families of superoxide dismutase, depending on the protein fold and the metal cofactor: the Cu/Zn type (which binds both copper and zinc), Fe and Mn types (which bind either iron or manganese), and the Ni type (which binds nickel). In higher plants, SOD isozymes have been localized in different cell compartments. Mn-SOD is present in mitochondria and peroxisomes. Fe-SOD has been found mainly in chloroplasts but has also been detected in peroxisomes, and CuZn-SOD has been localized in cytosol, chloroplasts, peroxisomes, and apoplast. Three forms of superoxide dismutase are present in humans, in all other mammals, and most chordates. SOD1 is located in the cytoplasm, SOD2 in the mitochondria, and SOD3 is extracellular. The first is a dimer (consists of two units), whereas the others are tetramers (four subunits). SOD1 and SOD3 contain copper and zinc, whereas SOD2, the mitochondrial enzyme, has manganese in its reactive centre. The genes are located on chromosomes 21, 6, and 4, respectively (21q22.1, 6q25.3 and 4p15.3-p15.1). In higher plants, superoxide dismutase enzymes (SODs) act as antioxidants and protect cellular components from being oxidized by reactive oxygen species (ROS). ROS can form as a result of drought, injury, herbicides and pesticides, ozone, plant metabolic activity, nutrient deficiencies, photoinhibition, temperature above and below ground, toxic metals, and UV or gamma rays. To be specific, molecular O2 is reduced to O2− (a ROS called superoxide) when it absorbs an excited electron released from compounds of the electron transport chain. Superoxide is known to denature enzymes, oxidize lipids, and fragment DNA. SODs catalyze the production of O2 and H2O2 from superoxide (O2−), which results in less harmful reactants. When acclimating to increased levels of oxidative stress, SOD concentrations typically increase with the degree of stress conditions. 
The compartmentalization of different forms of SOD throughout the plant makes them counteract stress very effectively. There are three well-known and -studied classes of SOD metallic coenzymes that exist in plants. First, Fe SODs consist of two species, one homodimer (containing 1-2 g Fe) and one tetramer (containing 2-4 g Fe). They are thought to be the most ancient SOD metalloenzymes and are found within both prokaryotes and eukaryotes. Fe SODs are most abundantly localized inside plant chloroplasts, where they are indigenous. Second, Mn SODs consist of a homodimer and homotetramer species each containing a single Mn(III) atom per subunit. They are found predominantly in mitochondria and peroxisomes. Third, Cu-Zn SODs have electrical properties very different from those of the other two classes. These are concentrated in the chloroplast, cytosol, and in some cases the extracellular space. Note that Cu-Zn SODs provide less protection than Fe SODs when localized in the chloroplast. Human white blood cells use enzymes such as NADPH oxidase to generate superoxide and other reactive oxygen species to kill bacteria. During infection, some bacteria (e.g., "Burkholderia pseudomallei") therefore produce superoxide dismutase to protect themselves from being killed. SOD out-competes damaging reactions of superoxide, thus protecting the cell from superoxide toxicity. The reaction of superoxide with non-radicals is spin-forbidden. In biological systems, this means that its main reactions are with itself (dismutation) or with another biological radical such as nitric oxide (NO) or with a transition-series metal. The superoxide anion radical (O2−) spontaneously dismutes to O2 and hydrogen peroxide (H2O2) quite rapidly (~10^5 M−1 s−1 at pH 7). SOD is necessary because superoxide reacts with sensitive and critical cellular targets. For example, it reacts with the NO radical, and makes toxic peroxynitrite. 
Because the uncatalysed dismutation reaction for superoxide requires two superoxide molecules to react with each other, the dismutation rate is second-order with respect to initial superoxide concentration. Thus, the half-life of superoxide, although very short at high concentrations (e.g., 0.05 seconds at 0.1 mM), is actually quite long at low concentrations (e.g., 14 hours at 0.1 nM). In contrast, the reaction of superoxide with SOD is first-order with respect to superoxide concentration. Moreover, superoxide dismutase has the largest "k"cat/"K"M (an approximation of catalytic efficiency) of any known enzyme (~7 × 10^9 M−1 s−1), this reaction being limited only by the frequency of collision between the enzyme and superoxide. That is, the reaction rate is "diffusion-limited". The high efficiency of superoxide dismutase seems necessary: even at the subnanomolar concentrations achieved by the high concentrations of SOD within cells, superoxide inactivates the citric acid cycle enzyme aconitase, can poison energy metabolism, and releases potentially toxic iron. Aconitase is one of several iron-sulfur-containing (de)hydratases in metabolic pathways shown to be inactivated by superoxide. SOD1 is an extremely stable protein. In the holo form (both copper and zinc bound) the melting point is > 90 °C. In the apo form (no copper or zinc bound) the melting point is ~ 60 °C. By differential scanning calorimetry (DSC), holo SOD1 unfolds by a two-state mechanism: from dimer to two unfolded monomers. In chemical denaturation experiments, holo SOD1 unfolds by a three-state mechanism with observation of a folded monomeric intermediate. Superoxide is one of the main reactive oxygen species in the cell. As a consequence, SOD serves a key antioxidant role. The physiological importance of SODs is illustrated by the severe pathologies evident in mice genetically engineered to lack these enzymes. Mice lacking SOD2 die several days after birth, amid massive oxidative stress. 
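The half-life arithmetic above can be checked directly from the second-order rate law. A minimal sketch, assuming a spontaneous rate constant of 2 × 10^5 M−1 s−1 (consistent with the "~10^5" order of magnitude quoted above; the exact value is an assumption here):

```python
def second_order_half_life(k, c0):
    """Half-life of a second-order reaction: t_1/2 = 1 / (k * c0).

    k  -- rate constant in M^-1 s^-1
    c0 -- initial concentration in M
    """
    return 1.0 / (k * c0)

K_SPONTANEOUS = 2e5  # M^-1 s^-1, assumed uncatalysed dismutation rate constant

t_high = second_order_half_life(K_SPONTANEOUS, 1e-4)   # 0.1 mM -> 0.05 s
t_low = second_order_half_life(K_SPONTANEOUS, 1e-10)   # 0.1 nM -> 50,000 s
```

Dividing t_low by 3600 gives roughly 13.9 hours, matching the "14 hours" figure. For SOD itself the reaction is first-order in superoxide, so its half-life does not blow up in this way as the concentration falls.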
Mice lacking SOD1 develop a wide range of pathologies, including hepatocellular carcinoma, an acceleration of age-related muscle mass loss, an earlier incidence of cataracts, and a reduced lifespan. Mice lacking SOD3 do not show any obvious defects and exhibit a normal lifespan, though they are more sensitive to hyperoxic injury. Knockout mice of any SOD enzyme are more sensitive to the lethal effects of superoxide-generating compounds, such as paraquat and diquat (herbicides). "Drosophila" lacking SOD1 have a dramatically shortened lifespan, whereas flies lacking SOD2 die before birth. Depletion of SOD1 and SOD2 in the nervous system and muscles of "Drosophila" is associated with reduced lifespan. The accumulation of neuronal and muscular ROS appears to contribute to age-associated impairments. When overexpression of mitochondrial SOD2 is induced, the lifespan of adult "Drosophila" is extended. Among black garden ants ("Lasius niger"), the lifespan of queens is an order of magnitude greater than that of workers despite no systematic nucleotide sequence difference between them. The "SOD3" gene was found to be the most differentially over-expressed in the brains of queen vs worker ants. This finding raises the possibility of an important role of antioxidant function in modulating lifespan. SOD knockdowns in the worm "C. elegans" do not cause major physiological disruptions. However, the lifespan of "C. elegans" can be extended by superoxide/catalase mimetics, suggesting that oxidative stress is a major determinant of the rate of aging. Knockout or null mutations in SOD1 are highly detrimental to aerobic growth in the budding yeast "Saccharomyces cerevisiae" and result in a dramatic reduction in post-diauxic lifespan. In wild-type "S. cerevisiae", DNA damage rates increased 3-fold with age, but more than 5-fold in mutants deleted for either the "SOD1" or "SOD2" genes. 
Reactive oxygen species levels increase with age in these mutant strains and show a similar pattern to the pattern of DNA damage increase with age. Thus it appears that superoxide dismutase plays a substantial role in preserving genome integrity during aging in "S. cerevisiae". SOD2 knockout or null mutations cause growth inhibition on respiratory carbon sources in addition to decreased post-diauxic lifespan. In the fission yeast "Schizosaccharomyces pombe", deficiency of mitochondrial superoxide dismutase SOD2 accelerates chronological aging. Several prokaryotic SOD null mutants have been generated, including "E. coli". The loss of periplasmic CuZnSOD causes loss of virulence and might be an attractive target for new antibiotics. Mutations in the first SOD enzyme (SOD1) can cause familial amyotrophic lateral sclerosis (ALS, a form of motor neuron disease). The most common mutation in the U.S. is A4V, while the most intensely studied is G93A. The other two isoforms of SOD have not been linked to many human diseases; however, in mice, inactivation of SOD2 causes perinatal lethality and inactivation of SOD1 causes hepatocellular carcinoma. Mutations in SOD1 can cause familial ALS (several pieces of evidence also show that wild-type SOD1, under conditions of cellular stress, is implicated in a significant fraction of sporadic ALS cases, which represent 90% of ALS patients), by a mechanism that is presently not understood, but not due to loss of enzymatic activity or a decrease in the conformational stability of the SOD1 protein. Overexpression of SOD1 has been linked to the neural disorders seen in Down syndrome. In patients with thalassemia, SOD increases as a compensatory mechanism. However, in the chronic stage, SOD does not seem to be sufficient and tends to decrease due to the destruction of proteins by the massive oxidant-antioxidant reaction. 
In mice, the extracellular superoxide dismutase (SOD3, ecSOD) contributes to the development of hypertension. Diminished SOD3 activity has been linked to lung diseases such as Acute Respiratory Distress Syndrome (ARDS) or Chronic obstructive pulmonary disease (COPD). Superoxide dismutase is not expressed in neural crest cells in the developing fetus. Hence, high levels of free radicals can cause damage to them and induce dysraphic anomalies (neural tube defects). SOD has powerful anti-inflammatory activity. For example, SOD is a highly effective experimental treatment of chronic inflammation in colitis. Treatment with SOD decreases reactive oxygen species generation and oxidative stress and, thus, inhibits endothelial activation. Therefore, such antioxidants may be important new therapies for the treatment of inflammatory bowel disease. Likewise, SOD has multiple pharmacological activities. For example, it ameliorates cis-platinum-induced nephrotoxicity in rodents. As "Orgotein" or "ontosein", a pharmacologically-active purified bovine liver SOD, it is also effective in the treatment of urinary tract inflammatory disease in man. For a time, bovine liver SOD even had regulatory approval in several European countries for such use. This was cut short by concerns about prion disease. An SOD-mimetic agent, TEMPOL, is currently in clinical trials for radioprotection and to prevent radiation-induced dermatitis. TEMPOL and similar SOD-mimetic nitroxides exhibit a multiplicity of actions in diseases involving oxidative stress. SOD may reduce free radical damage to skin—for example, to reduce fibrosis following radiation for breast cancer. Studies of this kind must be regarded as tentative, however, as the study lacked adequate controls, including randomization, double-blinding, and a placebo. Superoxide dismutase is known to reverse fibrosis, possibly through de-differentiation of myofibroblasts back to fibroblasts. 
SOD is commercially obtained from marine phytoplankton, bovine liver, horseradish, cantaloupe, and certain bacteria. For therapeutic purposes, SOD is usually injected locally. There is no evidence that ingestion of unprotected SOD or SOD-rich foods can have any physiological effects, as all ingested SOD is broken down into amino acids before being absorbed. However, ingestion of SOD bound to wheat proteins could improve its therapeutic activity, at least in theory.
https://en.wikipedia.org/wiki?curid=27837
Sequence In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order does matter. Like a set, it contains members (also called "elements", or "terms"). The number of elements (possibly infinite) is called the "length" of the sequence. Unlike a set, the same elements can appear multiple times at different positions in a sequence, and order does matter. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers (for infinite sequences) or the set of the first "n" natural numbers (for a sequence of finite length "n"). The position of an element in a sequence is its "rank" or "index"; it is the natural number for which the element is the image. The first element has index 0 or 1, depending on the context or a specific convention. When a symbol is used to denote a sequence, the "n"th element of the sequence is denoted by this symbol with "n" as subscript; for example, the "n"th element of the Fibonacci sequence "F" is generally denoted "F""n". For example, (M, A, R, Y) is a sequence of letters with the letter 'M' first and 'Y' last. This sequence differs from (A, R, M, Y). Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be "finite", as in these examples, or "infinite", such as the sequence of all even positive integers (2, 4, 6, ...). In computing and computer science, finite sequences are sometimes called strings, words or lists, the different names commonly corresponding to different ways to represent them in computer memory; infinite sequences are called streams. The empty sequence ( ) is included in most notions of sequence, but may be excluded depending on the context. A sequence can be thought of as a list of elements with a particular order. 
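The contrast drawn above between sequences and sets can be made concrete in code; a small illustration using Python tuples for finite sequences and sets for unordered collections:

```python
# A finite sequence of length n behaves like a function on the first n indices:
word = ("M", "A", "R", "Y")
assert word[0] == "M" and word[3] == "Y"   # Python uses 0-based indexing

# Order matters for sequences...
assert ("M", "A", "R", "Y") != ("A", "R", "M", "Y")
# ...but not for sets:
assert set("MARY") == set("ARMY")

# Repetition matters for sequences, but not for sets:
assert (1, 1, 2, 3, 5, 8) != (1, 2, 3, 5, 8)
assert {1, 1, 2} == {1, 2}
```

The choice of tuples here mirrors the article's point that a finite sequence of length "n" is an "n"-tuple: equality checks position by position, so both order and multiplicity are significant.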
Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations and analysis. Sequences are also of interest in their own right and can be studied as patterns or puzzles, such as in the study of prime numbers. There are a number of ways to denote a sequence, some of which are more useful for specific types of sequences. One way to specify a sequence is to list the elements. For example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation can be used for infinite sequences as well. For instance, the infinite sequence of positive odd integers can be written (1, 3, 5, 7, ...). Listing is most useful for infinite sequences with a pattern that can be easily discerned from the first few elements. Other ways to denote a sequence are discussed after the examples. The prime numbers are the natural numbers greater than 1 that have no divisors but 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, 17, ...). The prime numbers are widely used in mathematics and specifically in number theory. The Fibonacci numbers comprise the integer sequence whose elements are the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1 so that the sequence is (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...). For a large list of examples of integer sequences, see On-Line Encyclopedia of Integer Sequences. Other examples of sequences include ones made up of rational numbers, real numbers, and complex numbers. The sequence (.9, .99, .999, .9999, ...) approaches the number 1. In fact, every real number can be written as the limit of a sequence of rational numbers, e.g. via its decimal expansion. For instance, π is the limit of the sequence (3, 3.1, 3.14, 3.141, 3.1415, ...). 
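The decimal-truncation sequence (3, 3.1, 3.14, ...) converging to π can be generated mechanically. A sketch using exact rationals; the digit string below is just the familiar opening decimal digits of π:

```python
from fractions import Fraction
import math

digits = "3141592653"  # first ten decimal digits of pi
# The k-th term keeps k digits after the decimal point: 3, 31/10, 314/100, ...
truncations = [Fraction(int(digits[: k + 1]), 10**k) for k in range(len(digits))]

assert float(truncations[2]) == 3.14
assert abs(float(truncations[-1]) - math.pi) < 1e-9  # within a billionth of pi
```

Each term is a rational number and the "k"th term differs from π by less than 10^(−"k"), which is exactly what it means for π to be the limit of this sequence of rationals.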
A related sequence is the sequence of decimal digits of π, i.e. (3, 1, 4, 1, 5, 9, ...). This sequence does not have any pattern that is easily discernible by eye, unlike the preceding sequence, which is increasing. Other notations can be useful for sequences whose pattern cannot be easily guessed, or for sequences that do not have a pattern such as the digits of π. One such notation is to write down a general formula for computing the "n"th term as a function of "n", enclose it in parentheses, and include a subscript indicating the set of values that "n" can take. For example, in this notation the sequence of even numbers could be written as (2n)_{n∈ℕ}. The sequence of squares could be written as (n^2)_{n∈ℕ}. The variable "n" is called an index, and the set of values that it can take is called the index set. It is often useful to combine this notation with the technique of treating the elements of a sequence as individual variables. This yields expressions like (a_n)_{n∈ℕ}, which denotes a sequence whose "n"th element is given by the variable a_n. For example: (a_n)_{n∈ℕ} = (a_1, a_2, a_3, ...). One can consider multiple sequences at the same time by using different variables; e.g. (b_n)_{n∈ℕ} could be a different sequence than (a_n)_{n∈ℕ}. One can even consider a sequence of sequences: ((a_{m,n})_{n∈ℕ})_{m∈ℕ} denotes a sequence whose "m"th term is the sequence (a_{m,n})_{n∈ℕ}. An alternative to writing the domain of a sequence in the subscript is to indicate the range of values that the index can take by listing its highest and lowest legal values. For example, the notation (k^2)_{k=1}^{10} denotes the ten-term sequence of squares (1, 4, 9, ..., 100). The limits ∞ and −∞ are allowed, but they do not represent valid values for the index, only the supremum or infimum of such values, respectively. For example, the sequence (a_n)_{n=1}^{∞} is the same as the sequence (a_n)_{n∈ℕ}, and does not contain an additional term "at infinity". The sequence (a_n)_{n∈ℤ} is a bi-infinite sequence, and can also be written as (..., a_{−1}, a_0, a_1, a_2, ...). 
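Indexed-formula notation, such as the even numbers (2n) or the squares (n^2) over n in ℕ, maps naturally onto Python generator expressions, with itertools.count standing in for the infinite index set; a rough sketch:

```python
from itertools import count, islice

evens = (2 * n for n in count(0))    # the sequence (2n), n = 0, 1, 2, ...
squares = (n * n for n in count(0))  # the sequence (n^2), n = 0, 1, 2, ...

# An infinite sequence can only be inspected a finite prefix at a time:
assert list(islice(evens, 5)) == [0, 2, 4, 6, 8]
assert list(islice(squares, 5)) == [0, 1, 4, 9, 16]

# A sequence with explicit lowest and highest index values, here k = 1..10:
ten_squares = [k * k for k in range(1, 11)]
assert ten_squares == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

The subscripted index set corresponds to the iterable driving the comprehension: count(0) plays the role of an unbounded index, while range(1, 11) plays the role of listing the lowest and highest legal index values.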
In cases where the set of indexing numbers is understood, the subscripts and superscripts are often left off. That is, one simply writes (a_k) for an arbitrary sequence. Often, the index "k" is understood to run from 1 to ∞. However, sequences are frequently indexed starting from zero, as in (a_k)_{k=0}^{∞} = (a_0, a_1, a_2, ...). In some cases the elements of the sequence are related naturally to a sequence of integers whose pattern can be easily inferred. In these cases the index set may be implied by a listing of the first few abstract elements. For instance, the sequence of squares of odd numbers could be denoted in any of the following ways.
- (1, 9, 25, ...)
- (a_1, a_3, a_5, ...), a_k = k^2
- (a_{2k−1})_{k=1}^{∞}, a_k = k^2
- (a_k)_{k=1}^{∞}, a_k = (2k−1)^2
- ((2k−1)^2)_{k=1}^{∞}
Moreover, the subscripts and superscripts could have been left off in the third, fourth, and fifth notations, if the indexing set was understood to be the natural numbers. In the second and third bullets, there is a well-defined sequence (a_k)_{k=1}^{∞}, but it is not the same as the sequence denoted by the expression. Sequences whose elements are related to the previous elements in a straightforward way are often defined using recursion. This is in contrast to the definition of sequences of elements as functions of their positions. To define a sequence by recursion, one needs a rule, called a "recurrence relation", to construct each element in terms of the ones before it. In addition, enough initial elements must be provided so that all subsequent elements of the sequence can be computed by successive applications of the recurrence relation. The Fibonacci sequence is a simple classical example, defined by the recurrence relation F_n = F_{n−1} + F_{n−2}, with initial terms F_0 = 0 and F_1 = 1. From this, a simple computation shows that the first ten terms of this sequence are 0, 1, 1, 2, 3, 5, 8, 13, 21, and 34. A complicated example of a sequence defined by a recurrence relation is Recamán's sequence, defined by the recurrence relation a_n = a_{n−1} − n if the result is positive and not already in the sequence, and a_n = a_{n−1} + n otherwise, with initial term a_0 = 0. A "linear recurrence with constant coefficients" is a recurrence relation of the form a_n = c_1 a_{n−1} + c_2 a_{n−2} + ⋯ + c_k a_{n−k}, where c_1, ..., c_k are constants. 
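Both recurrence-defined examples above translate directly into code; a sketch that builds each sequence by successive applications of its recurrence relation, starting from the stated initial terms:

```python
def fibonacci(n):
    """First n Fibonacci numbers: each term is the sum of the previous two."""
    terms = [0, 1]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms[:n]

def recaman(n):
    """First n terms of Recaman's sequence: subtract the step size if the
    result is positive and not yet in the sequence, otherwise add it."""
    terms, seen = [0], {0}
    for step in range(1, n):
        candidate = terms[-1] - step
        if candidate <= 0 or candidate in seen:
            candidate = terms[-1] + step
        terms.append(candidate)
        seen.add(candidate)
    return terms

assert fibonacci(10) == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
assert recaman(10) == [0, 1, 3, 6, 2, 7, 13, 20, 12, 21]
```

The Fibonacci assertion reproduces the ten terms listed above; note how Recamán's rule needs the whole history of the sequence (the seen set), not just the previous term, which is what makes it the more complicated example.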
There is a general method for expressing the general term $a_n$ of such a sequence as a function of "n"; see Linear recurrence. In the case of the Fibonacci sequence, one has $c_1 = c_2 = 1$, and the resulting function of "n" is given by Binet's formula. A holonomic sequence is a sequence defined by a recurrence relation of the form $a_n = c_1(n)\, a_{n-1} + \cdots + c_k(n)\, a_{n-k}$, where $c_1(n), \ldots, c_k(n)$ are polynomials in "n". For most holonomic sequences, there is no explicit formula expressing $a_n$ as a function of "n". Nevertheless, holonomic sequences play an important role in various areas of mathematics. For example, many special functions have a Taylor series whose sequence of coefficients is holonomic. The use of the recurrence relation allows a fast computation of values of such special functions. Not all sequences can be specified by a recurrence relation. An example is the sequence of prime numbers in their natural order (2, 3, 5, 7, 11, 13, 17, ...). There are many different notions of sequences in mathematics, some of which ("e.g.", exact sequence) are not covered by the definitions and notations introduced below. In this article, a sequence is formally defined as a function whose domain is an interval of integers. This definition covers several different uses of the word "sequence", including one-sided infinite sequences, bi-infinite sequences, and finite sequences (see below for definitions of these kinds of sequences). However, many authors use a narrower definition by requiring the domain of a sequence to be the set of natural numbers. This narrower definition has the disadvantage that it rules out finite sequences and bi-infinite sequences, both of which are usually called sequences in standard mathematical practice. Another disadvantage is that, if one removes the first terms of a sequence, one needs to reindex the remaining terms to fit this definition. 
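As an illustrative check (a sketch, not part of the article), Binet's closed form for the Fibonacci numbers can be compared numerically against the terms produced by the recurrence:

```python
import math

def binet(n):
    """Closed-form nth Fibonacci number via Binet's formula (F0 = 0, F1 = 1)."""
    phi = (1 + math.sqrt(5)) / 2   # golden ratio, root of x^2 = x + 1
    psi = (1 - math.sqrt(5)) / 2   # conjugate root
    # Rounding absorbs floating-point error; exact for small n.
    return round((phi ** n - psi ** n) / math.sqrt(5))

# The closed form reproduces the first ten terms of the recurrence:
first_ten = [binet(n) for n in range(10)]   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

This illustrates the general point: for a linear recurrence with constant coefficients, the "n"th term can be written as a function of "n" alone, with no reference to earlier terms.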
In some contexts, to shorten exposition, the codomain of the sequence is fixed by context, for example by requiring it to be the set R of real numbers, the set C of complex numbers, or a topological space. Although sequences are a type of function, they are usually distinguished notationally from functions in that the input is written as a subscript rather than in parentheses, that is, $a_n$ rather than $a(n)$. There are terminological differences as well: the value of a sequence at the lowest input (often 1) is called the "first element" of the sequence, the value at the second smallest input (often 2) is called the "second element", etc. Also, while a function abstracted from its input is usually denoted by a single letter, e.g. "f", a sequence abstracted from its input is usually written by a notation such as $(a_n)_{n \in A}$, or just as $(a_n)$. Here "A" is the domain, or index set, of the sequence. Sequences and their limits (see below) are important concepts for studying topological spaces. An important generalization of sequences is the concept of nets. A net is a function from a (possibly uncountable) directed set to a topological space. The notational conventions for sequences normally apply to nets as well. The length of a sequence is defined as the number of terms in the sequence. A sequence of a finite length "n" is also called an "n"-tuple. Finite sequences include the empty sequence ( ) that has no elements. Normally, the term "infinite sequence" refers to a sequence that is infinite in one direction, and finite in the other—the sequence has a first element, but no final element. Such a sequence is called a singly infinite sequence or a one-sided infinite sequence when disambiguation is necessary. In contrast, a sequence that is infinite in both directions—i.e. that has neither a first nor a final element—is called a bi-infinite sequence, two-way infinite sequence, or doubly infinite sequence. 
A function from the set Z of "all" integers into a set, such as for instance the sequence of all even integers ( ..., −4, −2, 0, 2, 4, 6, 8, ... ), is bi-infinite. This sequence could be denoted $(2n)_{n=-\infty}^{\infty}$. A sequence is said to be "monotonically increasing" if each term is greater than or equal to the one before it. For example, the sequence $(a_n)_{n\in\mathbb{N}}$ is monotonically increasing if and only if $a_{n+1} \geq a_n$ for all "n" ∈ N. If each consecutive term is strictly greater than (>) the previous term then the sequence is called strictly monotonically increasing. A sequence is monotonically decreasing, if each consecutive term is less than or equal to the previous one, and strictly monotonically decreasing, if each is strictly less than the previous. If a sequence is either increasing or decreasing it is called a monotone sequence. This is a special case of the more general notion of a monotonic function. The terms nondecreasing and nonincreasing are often used in place of "increasing" and "decreasing" in order to avoid any possible confusion with "strictly increasing" and "strictly decreasing", respectively. If the sequence of real numbers ("an") is such that all the terms are less than some real number "M", then the sequence is said to be bounded from above. In other words, this means that there exists "M" such that for all "n", "an" ≤ "M". Any such "M" is called an "upper bound". Likewise, if, for some real "m", "an" ≥ "m" for all "n" greater than some "N", then the sequence is bounded from below and any such "m" is called a "lower bound". If a sequence is both bounded from above and bounded from below, then the sequence is said to be bounded. A subsequence of a given sequence is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. For instance, the sequence of positive even integers (2, 4, 6, ...) is a subsequence of the positive integers (1, 2, 3, ...). 
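The monotonicity and boundedness conditions translate directly into code on a finite prefix of a sequence. A Python sketch (the function names are illustrative; a finite check cannot, of course, verify a property of an infinite sequence, only of the prefix inspected):

```python
def is_monotonically_increasing(prefix):
    """Each term is >= the one before it (checked on a finite prefix only)."""
    return all(b >= a for a, b in zip(prefix, prefix[1:]))

def is_strictly_increasing(prefix):
    """Each term is > the one before it (checked on a finite prefix only)."""
    return all(b > a for a, b in zip(prefix, prefix[1:]))

def is_bounded(prefix, m, M):
    """m <= a_n <= M for every term of the finite prefix."""
    return all(m <= a <= M for a in prefix)
```

For example, (1, 1, 2, 3) is monotonically increasing but not strictly so, because of the repeated term.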
The positions of some elements change when other elements are deleted. However, the relative positions are preserved. Formally, a subsequence of the sequence $(a_n)_{n\in\mathbb{N}}$ is any sequence of the form $(a_{n_k})_{k\in\mathbb{N}}$, where $(n_k)_{k\in\mathbb{N}}$ is a strictly increasing sequence of positive integers. Some other types of sequences that are easy to define include: An important property of a sequence is "convergence". If a sequence converges, it converges to a particular value known as the "limit". If a sequence converges to some limit, then it is convergent. A sequence that does not converge is divergent. Informally, a sequence has a limit if the elements of the sequence become closer and closer to some value $L$ (called the limit of the sequence), and they become and remain "arbitrarily" close to $L$, meaning that given a real number $\varepsilon$ greater than zero, all but a finite number of the elements of the sequence have a distance from $L$ less than $\varepsilon$. For example, the sequence $a_n = \frac{n+1}{2n^2}$ converges to the value 0. On the other hand, the sequences $a_n = n^3$ (which begins 1, 8, 27, …) and $a_n = (-1)^n$ (which begins −1, 1, −1, 1, …) are both divergent. If a sequence converges, then the value it converges to is unique. This value is called the limit of the sequence. The limit of a convergent sequence $(a_n)$ is normally denoted $\lim_{n\to\infty} a_n$. If $(a_n)$ is a divergent sequence, then the expression $\lim_{n\to\infty} a_n$ is meaningless. A sequence of real numbers $(a_n)$ converges to a real number $L$ if, for all $\varepsilon > 0$, there exists a natural number $N$ such that for all $n \geq N$ we have $|a_n - L| < \varepsilon$. If $(a_n)$ is a sequence of complex numbers rather than a sequence of real numbers, this last formula can still be used to define convergence, with the provision that $|\cdot|$ denotes the complex modulus, i.e. $|z| = \sqrt{\bar{z}z}$. 
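The ε-N definition can be illustrated numerically: for a given ε, find the threshold N beyond which every inspected term stays within ε of the limit. An illustrative Python sketch (names are not from the article; a finite scan illustrates but cannot prove convergence):

```python
def index_for_epsilon(seq, limit, eps, search_up_to=10**5):
    """Smallest N, within a finite search range, with |seq(n) - limit| < eps
    for all n >= N.  Scans downward to find the last index violating the
    epsilon bound; everything after it satisfies the bound."""
    for n in range(search_up_to, 0, -1):
        if abs(seq(n) - limit) >= eps:
            return n + 1
    return 1

# For a_n = 1/n with limit 0 and eps = 0.01: |1/n| < 0.01 first holds
# for every n >= 101 (since 1/100 = 0.01 is not strictly less than eps).
N = index_for_epsilon(lambda n: 1 / n, 0, 0.01)
```

Choosing a smaller ε simply pushes N further out, which is exactly the "become and remain arbitrarily close" condition in the definition.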
If $(a_n)$ is a sequence of points in a metric space, then the formula can be used to define convergence, if the expression $|a_n - L|$ is replaced by the expression $\operatorname{dist}(a_n, L)$, which denotes the distance between $a_n$ and $L$. If $(a_n)$ and $(b_n)$ are convergent sequences, then the following limits exist, and can be computed as follows: Moreover: A Cauchy sequence is a sequence whose terms become arbitrarily close together as n gets very large. The notion of a Cauchy sequence is important in the study of sequences in metric spaces, and, in particular, in real analysis. One particularly important result in real analysis is the "Cauchy characterization of convergence for sequences": In contrast, there are Cauchy sequences of rational numbers that are not convergent in the rationals, e.g. the sequence defined by $x_1 = 1$ and $x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right)$ is Cauchy, but has no rational limit; its limit in the real numbers is the irrational number $\sqrt{2}$. More generally, any sequence of rational numbers that converges to an irrational number is Cauchy, but not convergent when interpreted as a sequence in the set of rational numbers. Metric spaces that satisfy the Cauchy characterization of convergence for sequences are called complete metric spaces and are particularly nice for analysis. In calculus, it is common to define notation for sequences which do not converge in the sense discussed above, but which instead become and remain arbitrarily large, or become and remain arbitrarily negative. If $a_n$ becomes arbitrarily large as $n \to \infty$, we write $\lim_{n\to\infty} a_n = \infty$. In this case we say that the sequence diverges, or that it converges to infinity. An example of such a sequence is $a_n = n$. If $a_n$ becomes arbitrarily negative (i.e. negative and large in magnitude) as $n \to \infty$, we write $\lim_{n\to\infty} a_n = -\infty$ and say that the sequence diverges or converges to negative infinity. A series is, informally speaking, the sum of the terms of a sequence. 
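A classical example of a rational Cauchy sequence with an irrational limit is the Babylonian iteration for $\sqrt{2}$, starting from $x_1 = 1$ and averaging $x_n$ with $2/x_n$. A Python sketch using exact rational arithmetic (so every term really is a rational number):

```python
from fractions import Fraction

def babylonian_sqrt2(count):
    """First `count` terms of x1 = 1, x_{n+1} = (x_n + 2/x_n)/2.
    Every term is rational, and the terms are Cauchy, but their limit
    is the irrational number sqrt(2), so the sequence does not
    converge within the rational numbers."""
    x = Fraction(1)
    terms = [x]
    for _ in range(count - 1):
        x = (x + 2 / x) / 2
        terms.append(x)
    return terms

# First terms: 1, 3/2, 17/12, 577/408, ... whose squares approach 2.
```

The squared terms close in on 2 very quickly, while no individual term (being rational) can ever equal $\sqrt{2}$.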
That is, it is an expression of the form $\sum_{n=1}^{\infty} a_n$ or $a_1 + a_2 + \cdots$, where $(a_n)$ is a sequence of real or complex numbers. The partial sums of a series are the expressions resulting from replacing the infinity symbol with a finite number, i.e. the "N"th partial sum of the series $\sum_{n=1}^{\infty} a_n$ is the number $S_N = \sum_{n=1}^{N} a_n$. The partial sums themselves form a sequence $(S_N)_{N\in\mathbb{N}}$, which is called the sequence of partial sums of the series $\sum_{n=1}^{\infty} a_n$. If the sequence of partial sums converges, then we say that the series $\sum_{n=1}^{\infty} a_n$ is convergent, and the limit $\lim_{N\to\infty} S_N$ is called the value of the series. The same notation is used to denote a series and its value, i.e. we write $\sum_{n=1}^{\infty} a_n = \lim_{N\to\infty} S_N$. Sequences play an important role in topology, especially in the study of metric spaces. For instance: Sequences can be generalized to nets or filters. These generalizations allow one to extend some of the above theorems to spaces without metrics. The topological product of a sequence of topological spaces is the cartesian product of those spaces, equipped with a natural topology called the product topology. More formally, given a sequence of spaces $(X_i)_{i\in\mathbb{N}}$, the product space $X = \prod_{i\in\mathbb{N}} X_i$ is defined as the set of all sequences $(x_i)_{i\in\mathbb{N}}$ such that for each "i", $x_i$ is an element of $X_i$. The canonical projections are the maps "pi" : "X" → "Xi" defined by the equation $p_i((x_j)_{j\in\mathbb{N}}) = x_i$. Then the product topology on "X" is defined to be the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections "pi" are continuous. The product topology is sometimes called the Tychonoff topology. In analysis, when talking about sequences, one will generally consider sequences of the form $(a_1, a_2, a_3, \ldots)$, which is to say, infinite sequences of elements indexed by natural numbers. It may be convenient to have the sequence start with an index different from 1 or 0. For example, the sequence defined by "xn" = 1/log("n") would be defined only for "n" ≥ 2. 
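The relationship between a series and its sequence of partial sums can be sketched in a few lines of Python (names are illustrative): the partial sums of a finite prefix of the terms form a finite prefix of the sequence $(S_N)$.

```python
def partial_sums(terms):
    """Prefix of the sequence of partial sums S_N = a_1 + ... + a_N."""
    sums, total = [], 0
    for a in terms:
        total += a
        sums.append(total)
    return sums

# The partial sums of the geometric series with terms 1/2^n approach
# the value 1, which is the value of the series.
geometric_prefix = [1 / 2 ** n for n in range(1, 21)]
```

Here `partial_sums(geometric_prefix)` ends at $1 - 2^{-20}$, illustrating how the partial sums converge to the series' value 1.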
When talking about such infinite sequences, it is usually sufficient (and does not change much for most considerations) to assume that the members of the sequence are defined at least for all indices large enough, that is, greater than some given "N". The most elementary type of sequences are numerical ones, that is, sequences of real or complex numbers. This type can be generalized to sequences of elements of some vector space. In analysis, the vector spaces considered are often function spaces. Even more generally, one can study sequences with elements in some topological space. A sequence space is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field K, where K is either the field of real numbers or the field of complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in K, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space. The most important sequence spaces in analysis are the ℓ"p" spaces, consisting of the "p"-power summable sequences, with the "p"-norm. These are special cases of L"p" spaces for the counting measure on the set of natural numbers. Other important classes of sequences like convergent sequences or null sequences form sequence spaces, respectively denoted "c" and "c"0, with the sup norm. Any sequence space can also be equipped with the topology of pointwise convergence, under which it becomes a special kind of Fréchet space called an FK-space. Sequences over a field may also be viewed as vectors in a vector space. 
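The two norms mentioned above are easy to state in code for a finitely supported sequence (one with only finitely many nonzero terms). An illustrative Python sketch, not part of the article:

```python
def lp_norm(prefix, p):
    """p-norm (sum of |a_n|^p)^(1/p), computed on a finitely
    supported sequence given by its nonzero prefix."""
    return sum(abs(a) ** p for a in prefix) ** (1 / p)

def sup_norm(prefix):
    """Sup norm, as used on the spaces c and c0 (finite prefix sketch)."""
    return max(abs(a) for a in prefix)
```

For example, the 2-norm of the sequence (3, 4, 0, 0, ...) is 5, while its sup norm is 4, showing how different norms measure the "size" of the same sequence differently.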
Specifically, the set of "F"-valued sequences (where "F" is a field) is a function space (in fact, a product space) of "F"-valued functions over the set of natural numbers. Abstract algebra employs several types of sequences, including sequences of mathematical objects such as groups or rings. If "A" is a set, the free monoid over "A" (denoted "A"*, also called Kleene star of "A") is a monoid containing all the finite sequences (or strings) of zero or more elements of "A", with the binary operation of concatenation. The free semigroup "A"+ is the subsemigroup of "A"* containing all elements except the empty sequence. In the context of group theory, a sequence of groups and group homomorphisms is called exact, if the image (or range) of each homomorphism is equal to the kernel of the next: The sequence of groups and homomorphisms may be either finite or infinite. A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms. In homological algebra and algebraic topology, a spectral sequence is a means of computing homology groups by taking successive approximations. Spectral sequences are a generalization of exact sequences, and since their introduction by Jean Leray, they have become an important research tool, particularly in homotopy theory. An ordinal-indexed sequence is a generalization of a sequence. If α is a limit ordinal and "X" is a set, an α-indexed sequence of elements of "X" is a function from α to "X". In this terminology an ω-indexed sequence is an ordinary sequence. In computer science, finite sequences are called lists. Potentially infinite sequences are called streams. Finite sequences of characters or digits are called strings. Infinite sequences of digits (or characters) drawn from a finite alphabet are of particular interest in theoretical computer science. 
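The free monoid structure on strings is exactly the structure Python's own strings carry: concatenation is the binary operation, and the empty string is the identity. A minimal sketch spot-checking the monoid laws on sample elements of {"a", "b"}*:

```python
# Free monoid A* over A = {"a", "b"}: all finite strings over A,
# with concatenation as the operation and "" as the identity element.
def concat(u, v):
    return u + v

identity = ""

# Associativity: (u·v)·w == u·(v·w) for sample strings u, v, w.
associative = concat(concat("a", "b"), "ab") == concat("a", concat("b", "ab"))
# Identity: u·"" == ""·u == u.
has_identity = concat("ab", identity) == "ab" == concat(identity, "ab")
```

The free semigroup "A"+ is the same structure with the identity element removed.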
They are often referred to simply as "sequences" or "streams", as opposed to finite "strings". Infinite binary sequences, for instance, are infinite sequences of bits (characters drawn from the alphabet {0, 1}). The set "C" = {0, 1}∞ of all infinite binary sequences is sometimes called the Cantor space. An infinite binary sequence can represent a formal language (a set of strings) by setting the "n" th bit of the sequence to 1 if and only if the "n" th string (in shortlex order) is in the language. This representation is useful in the diagonalization method for proofs.
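The shortlex encoding of a language as a bit sequence can be sketched directly. An illustrative Python sketch (names are not from the article): enumerate strings in shortlex order (shorter strings first, ties broken lexicographically), then emit bit 1 exactly when the "n"th string belongs to the language.

```python
from itertools import product

def shortlex(alphabet, count):
    """First `count` strings over `alphabet` in shortlex order."""
    out, length = [""], 1
    while len(out) < count:
        out.extend("".join(p) for p in product(alphabet, repeat=length))
        length += 1
    return out[:count]

def language_bits(language, alphabet, count):
    """Prefix of the characteristic bit sequence of a language:
    bit n is 1 iff the nth shortlex string belongs to the language."""
    return [1 if s in language else 0 for s in shortlex(alphabet, count)]

# Over {0,1}, shortlex order begins: "", "0", "1", "00", "01", "10", "11".
```

This makes concrete how an infinite binary sequence can stand in for a set of strings, the representation used in diagonalization arguments.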
https://en.wikipedia.org/wiki?curid=27838
Senryū Senryū is named after Edo period haikai poet Karai Senryū (柄井川柳, 1718–1790), whose collection launched the genre into the public consciousness. A typical example from the collection: This senryū, which can also be translated "Catching him / I see the robber / is my son," is not so much a personal experience of the author as an example of a type of situation (provided by a short comment called a maeku or fore-verse, which usually prefaces a number of examples) and/or a brief or witty rendition of an incident from history or the arts (plays, songs, tales, poetry, etc.). In this case, there was a historical incident of legendary proportion. Some senryū skirt the line between haiku and senryū. The following senryū by Shūji Terayama copies the haiku structure faithfully, down to a blatantly obvious kigo, but on closer inspection is absurd in its content: Terayama, who wrote about playing hide-and-seek in the graveyard as a child, thought of himself as the odd one out, the one who was always "it" in hide-and-seek. Indeed, the original haiku included the theme "oni" (the "it" in Japanese is a demon, though in some parts a very young child forced to play "it" was called a "sea slug" (namako)). To him, seeing a game of hide-and-seek, or recalling it as it grew cold would be a chilling experience. Terayama might also have recalled opening his eyes and finding himself all alone, feeling the cold more intensely than he did a minute before among other children. Either way, any genuinely personal experience would be haiku and not senryū in the classic sense. If you think Terayama's poem uses a child's game to express in hyperbolic metaphor how, in retrospect, life is short, and nothing more, then this would indeed work as a senryū. Otherwise, it is a bona-fide haiku. There is also the possibility that it is a joke about playing hide and seek, only to realize (winter having arrived during the months spent hiding) that no one wants to find you. 
In the 1970s, Michael McClintock edited "Seer Ox: American Senryu Magazine". In 1993, Michael Dylan Welch edited and published "Fig Newtons: Senryū to Go", the first anthology of English-language senryū. Additionally, one can regularly find senryū and related articles in some haiku publications. For example: Senryū regularly appear in the pages of "Modern Haiku", "Frogpond", "Bottle Rockets", "Woodnotes", "Tundra", and other haiku journals, often unsegregated from haiku. The Haiku Society of America holds the annual Gerald Brady Memorial Award for best unpublished senryū. Since about 1990, the Haiku Poets of Northern California has been running a senryū contest, as part of its San Francisco International Haiku and Senryu Contest.
https://en.wikipedia.org/wiki?curid=27840
Sorious Samura Sorious Samura (born 27 October 1963) is a Sierra Leonean journalist. He is best known for two CNN documentary films: "Cry Freetown" (2000) and "Exodus from Africa" (2001). The self-funded "Cry Freetown" depicts the most brutal period of the civil war in Sierra Leone, with RUF rebels capturing the capital city (January 1999). The film won, among other awards, an Emmy Award and a Peabody. "Exodus from Africa" shows the harrowing efforts of young African men to break through to Europe via death- and danger-ridden paths from Sierra Leone and Nigeria, via Mali, the Sahara desert, Algeria, and Morocco through the Strait of Gibraltar to Spain. In two later projects, "Living with Hunger" and "Living with Refugees" (nominated for an Emmy Award), he takes reality television to its extreme, becoming the central character in the films by living the lifestyle of an Ethiopian villager and a Sudanese refugee respectively; in doing this, he tries to break the boundary between "us" (the people watching on TV) and "them" (those before the camera) by becoming one of them (albeit for just a month). "Living with Corruption", a later documentary shown on CNN, describes the shocking reality of how corruption is spread across society in both Sierra Leone and Kenya, affecting mostly the poor. In 2010, Samura investigated attitudes to homosexuality in Africa in the Dispatches documentary "Africa's Last Taboo", produced for Channel 4. Samura is also one of the directors of 'Insight News TV', an independent television production company in the UK focused on international current affairs programming. Samura attended the Methodist Boys High School in the east end of Freetown. He works in London, UK, and considers both London and Freetown his hometowns.
https://en.wikipedia.org/wiki?curid=27846
Steve Wozniak Stephen Gary Wozniak (born August 11, 1950), also known by his nickname "Woz", is an American electronics engineer, programmer, philanthropist, and technology entrepreneur. In 1976, he co-founded Apple Inc., which later became the world's largest information technology company by revenue and the largest company in the world by market capitalization. Through their work at Apple in the 1970s and 1980s, he and Apple co-founder Steve Jobs are widely recognized as two prominent pioneers of the personal computer revolution. In 1975, Wozniak started developing the Apple I into the computer that launched Apple when he and Jobs first began marketing it the following year. He primarily designed the Apple II in 1977, known as one of the first highly successful mass-produced microcomputers, while Jobs oversaw the development of its foam-molded plastic case and early Apple employee Rod Holt developed the switching power supply. With software engineer Jef Raskin, Wozniak had a major influence over the initial development of the original Apple Macintosh concepts from 1979 to 1981, when Jobs took over the project following Wozniak's brief departure from the company due to a traumatic airplane accident. After permanently leaving Apple in 1985, Wozniak founded CL 9 and created the first programmable universal remote, released in 1987. He then pursued several other businesses and philanthropic ventures throughout his career, focusing largely on technology in K–12 schools. Wozniak has remained an employee of Apple in a ceremonial capacity since stepping down in 1985. Steve Wozniak was born and raised in San Jose, California, the son of Margaret Louise Wozniak (née Kern) (1923–2014) from Washington state and Francis Jacob "Jerry" Wozniak (1925–1994) from Michigan. His father, Jerry Wozniak, was an engineer for Lockheed Corporation. He graduated from Homestead High School in 1968, in Cupertino, California. 
The name on Wozniak's birth certificate is "Stephan Gary Wozniak", but his mother said that she intended it to be spelled "Stephen," which is what he uses. Wozniak has mentioned his surname being Ukrainian and has spoken of his Ukrainian and Polish descent. In the early 1970s, Wozniak's blue box design earned him the nickname "Berkeley Blue" in the phreaking community. Wozniak has credited watching "Star Trek" and attending "Star Trek" conventions while in his youth as a source of inspiration for his starting Apple Inc. In 1969, Wozniak returned to the San Francisco Bay Area after being expelled from the University of Colorado Boulder in his first year for hacking the university's computer system and sending prank messages on it. He re-enrolled at De Anza College in Cupertino before transferring to the University of California, Berkeley, in 1971. In June 1971, as a self-taught project, Wozniak designed and built his first computer with his friend Bill Fernandez. Predating useful microprocessors, screens, and keyboards, and using a punch card and only 20 TTL chips donated by an acquaintance, they named it "Cream Soda" after their favorite beverage. A newspaper reporter stepped on the power supply cable and blew up the computer, but it served Wozniak as "a good prelude to my thinking 5 years later with the Apple I and Apple II computers". Before focusing his attention on Apple, he was employed at Hewlett-Packard (HP), where he designed calculators. It was during this time that he dropped out of UC Berkeley and befriended Steve Jobs. Wozniak was introduced to Jobs by Fernandez, who attended Homestead High School with Jobs in 1971. Jobs and Wozniak became friends when Jobs worked for the summer at HP, where Wozniak, too, was employed, working on a mainframe computer. 
Their first business partnership began later that year when Wozniak read an article titled “Secrets of the Little Blue Box” from the October 1971 issue of "Esquire", and started to build his own “blue boxes” that enabled one to make long-distance phone calls at no cost. Jobs, who handled the sales of the blue boxes, managed to sell some two hundred of them for $150 each, and split the profit with Wozniak. Jobs later told his biographer that if it hadn't been for Wozniak's blue boxes, "there wouldn't have been an Apple." In 1973, Jobs was working for arcade game company Atari, Inc. in Los Gatos, California. He was assigned to create a circuit board for the arcade video game "Breakout". According to Atari co-founder Nolan Bushnell, Atari offered $100 for each chip that was eliminated in the machine. Jobs had little knowledge of circuit board design and made a deal with Wozniak to split the fee evenly between them if Wozniak could minimize the number of chips. Wozniak reduced the number of chips by 50, by using RAM for the brick representation. The design was too complex to be fully comprehended at the time, and since the prototype also lacked scoring and coin mechanisms, it could not be used. Jobs was paid the full bonus regardless. Jobs told Wozniak that Atari gave them only $700 and that Wozniak's share was thus $350. Wozniak did not learn about the actual $5,000 bonus until ten years later. While dismayed, he said that if Jobs had told him about it and had said he needed the money, Wozniak would have given it to him. In 1975, Wozniak began designing and developing the computer that would eventually make him famous, the Apple I. On June 29 of that year, he tested his first working prototype, displaying a few letters and running sample programs. It was the first time in history that a character displayed on a TV screen was generated by a home computer. 
With the Apple I, Wozniak was largely working to impress other members of the Palo Alto-based Homebrew Computer Club, a local group of electronics hobbyists interested in computing. The Club was one of several key centers which established the home hobbyist era, essentially creating the microcomputer industry over the next few decades. Unlike other custom Homebrew designs, the Apple had an easy-to-achieve video capability that drew a crowd when it was unveiled. By March 1, 1976, Wozniak completed the basic design of the Apple I computer. He alone designed the hardware, circuit board designs, and operating system for the computer. Wozniak originally offered the design to HP while working there, but was denied by the company on five different occasions. Jobs then advised Wozniak to start a business of their own to build and sell bare printed circuit boards of the Apple I. Wozniak, at first skeptical, was later convinced by Jobs that even if they were not successful they could at least say to their grandchildren that they had had their own company. To raise the money they needed to build the first batch of the circuit boards, Wozniak sold his HP scientific calculator while Jobs sold his Volkswagen van. On April 1, 1976, Jobs and Wozniak formed Apple Computer Company (now called Apple Inc.) along with administrative supervisor Ronald Wayne, whose participation in the new venture was short-lived. The two decided on the name "Apple" shortly after Jobs returned from Oregon and told Wozniak about his time spent on an apple orchard there. After the company was formed, Jobs and Wozniak made one last trip to the Homebrew Computer Club to give a presentation of the fully assembled version of the Apple I. Paul Terrell, who was starting a new computer shop in Mountain View, California, called the Byte Shop, saw the presentation and was impressed by the machine. 
Terrell told Jobs that he would order 50 units of the Apple I and pay $500 each on delivery, but only if they came fully assembled, as he was not interested in buying bare printed circuit boards. Together the duo assembled the first boards in Jobs's parents' Los Altos home; initially in his bedroom and later (when there was no space left) in the garage. Wozniak's apartment in San Jose was filled with monitors, electronic devices, and computer games that he had developed. The Apple I sold for $666.66. Wozniak later said he had no idea about the relation between the number and the mark of the beast, and that he came up with the price because he liked "repeating digits". They sold their first 50 system boards to Terrell later that year. In November 1976, Jobs and Wozniak received substantial funding from a then-semi-retired Intel product marketing manager and engineer named Mike Markkula. At the request of Markkula, Wozniak resigned from his job at HP and became the vice president in charge of research and development at Apple. Wozniak's Apple I was similar to the Altair 8800, the first commercially available microcomputer, except the Apple I had no provision for internal expansion cards. With expansion cards, the Altair could attach to a computer terminal and be programmed in BASIC. In contrast, the Apple I was a hobbyist machine. Wozniak's design included a $25 CPU (MOS 6502) on a single circuit board with 256 bytes of ROM, 4K or 8K bytes of RAM, and a 40-character by 24-row display controller. Apple's first computer lacked a case, power supply, keyboard, and display; all of these components had to be provided by the user. Eventually about 200 Apple I computers were produced in total. After the success of the Apple I, Wozniak designed the Apple II, the first personal computer with the ability to display color graphics, and BASIC programming language built in. 
Inspired by "the technique Atari used to simulate colors on its first arcade games", Wozniak found a way of putting colors into the NTSC system by using a chip, while colors in the PAL system are achieved by "accident" when a dot occurs on a line, and he says that to this day he has no idea how it works. During the design stage, Jobs argued that the Apple II should have two expansion slots, while Wozniak wanted eight. After a heated argument, during which Wozniak threatened that Jobs should "go get himself another computer", they decided to go with eight slots. Jobs and Wozniak introduced the Apple II at the April 1977 West Coast Computer Faire. Wozniak's first article about the Apple II was in "Byte" magazine in May 1977. It became one of the first highly successful mass-produced personal computers in the world. Wozniak also designed the Disk II floppy disk drive, released in 1978 specifically for use with the Apple II series to replace the slower cassette tape storage. In 1980, Apple went public to instant and significant financial profitability, making Jobs and Wozniak both millionaires. The Apple II's intended successor, the Apple III, released the same year, was a commercial failure and was discontinued in 1984. According to Wozniak, the Apple III "had 100 percent hardware failures", and that the primary reason for these failures was that the system was designed by Apple's marketing department, unlike Apple's previous engineering-driven projects. During the early design and development phase of the original Macintosh, Wozniak had a heavy influence over the project. Later named the "Macintosh 128k", it would become the first mass-market personal computer featuring an integral graphical user interface and mouse. The Macintosh would also go on to introduce the desktop publishing industry with the addition of the Apple LaserWriter, the first laser printer to feature vector graphics. 
In a 2013 interview, Wozniak said that in 1981, "Steve [Jobs] really took over the project when I had a plane crash and wasn't there." On February 7, 1981, the Beechcraft Bonanza A36TC which Wozniak was piloting crashed soon after takeoff from the Sky Park Airport in Scotts Valley, California. The airplane stalled while climbing, then bounced down the runway, broke through two fences, and crashed into an embankment. Wozniak and his three passengers—then-fiancée Candice Clark, her brother Jack Clark, and Jack's girlfriend, Janet Valleau—were injured. Wozniak sustained severe face and head injuries, including losing a tooth, and also suffered for the following five weeks from anterograde amnesia, the inability to create new memories. He had no memory of the crash, and did not remember his name while in the hospital or the things he did for a time after he was released. He would later state that Apple II computer games were what helped him regain his memory. The National Transportation Safety Board investigation report cited premature liftoff and pilot inexperience as probable causes of the crash. Wozniak did not immediately return to Apple after recovering from the airplane crash, seeing it as a good reason to leave. "Infinite Loop" characterized this time: "Coming out of the semi-coma had been like flipping a reset switch in Woz's brain. It was as if in his thirty-year old body he had regained the mind he'd had at eighteen before all the computer madness had begun. And when that happened, Woz found he had little interest in engineering or design. Rather, in an odd sort of way, he wanted to start over fresh." Later in 1981, after recovering from the plane crash, Wozniak enrolled back at UC Berkeley to complete his degree. Because his name was well known at this point, he enrolled under the name Rocky Raccoon Clark, which is the name listed on his diploma, although he did not officially receive his degree in electrical engineering and computer science until 1987. 
In May 1982 and 1983, Wozniak, with help from professional concert promoter Bill Graham, founded the company Unuson, an abbreviation of "unite us in song", which sponsored two US Festivals, with "US" pronounced like the pronoun, not as initials. Initially intended to celebrate evolving technologies, the festivals ended up as a combined technology exposition and rock festival, bringing together music, computers, television, and people. After losing several million dollars on the 1982 festival, Wozniak stated that unless the 1983 event turned a profit, he would end his involvement with rock festivals and get back to designing computers. Later that year, Wozniak returned to Apple product development, desiring no more of a role than that of an engineer and a motivational factor for the Apple workforce. In the mid-1980s he designed the Apple Desktop Bus, a proprietary bit-serial peripheral bus that became the basis of all Macintosh and NeXT computer models. Starting in the mid-1980s, as the Macintosh experienced slow but steady growth, Apple's corporate leadership, including Steve Jobs, increasingly disrespected its flagship cash cow, the Apple II series, and Wozniak along with it. The Apple II division, other than Wozniak, was not invited to the Macintosh introduction event, and Wozniak was seen kicking the dirt in the parking lot. Although Apple II products provided about 85% of Apple's sales in early 1985, the company's January 1985 annual meeting did not mention the Apple II division or its employees, a typical situation that frustrated Wozniak. Even with the success he had helped to create at Apple, Wozniak believed that the company was hindering him from being who he wanted to be, and that it was "the bane of his existence". He enjoyed engineering, not management, and said that he missed "the fun of the early days". 
As other talented engineers joined the growing company, he no longer believed he was needed there, and by early 1985, Wozniak left Apple again, stating that the company had "been going in the wrong direction for the last five years". He then sold most of his stock. The Apple II platform financially carried the company well into the Macintosh era of the late 1980s; it was made semi-portable with the Apple IIc of 1984, was extended, with some input from Wozniak, by the 16-bit Apple IIGS of 1986, and was discontinued altogether in 1992. After his career at Apple, Wozniak founded CL 9 in 1985, which developed and brought the first programmable universal remote control to market in 1987, dubbed the "CORE". Beyond engineering, Wozniak's second lifelong goal had always been to teach elementary school because of the important role teachers play in students' lives. Eventually, he did teach computer classes to children from the fifth through ninth grades, and teachers as well. Unuson continued to support this, funding additional teachers and equipment. In 2001, Wozniak founded Wheels of Zeus (WOZ) to create wireless GPS technology to "help everyday people find everyday things much more easily". In 2002, he joined the board of directors of Ripcord Networks, Inc., joining Apple alumni Ellen Hancock, Gil Amelio, Mike Connor, and Wheels of Zeus co-founder Alex Fielding in a new telecommunications venture. Later the same year he joined the board of directors of Danger, Inc., the maker of the Hip Top. In 2006, Wheels of Zeus was closed, and Wozniak founded Acquicor Technology, a holding company for acquiring technology companies and developing them, with Apple alumni Hancock and Amelio. From 2009 through 2014 he was chief scientist at Fusion-io. In 2014 he became chief scientist at Primary Data, which was founded by some former Fusion-io executives. 
Silicon Valley Comic Con (SVCC) is an annual pop culture and technology convention at the San Jose McEnery Convention Center in San Jose, California. The convention was co-founded by Wozniak and Rick White, with Trip Hunter as CEO. Wozniak announced the annual event in 2015 along with Marvel legend Stan Lee. In October 2017, Wozniak founded Woz U, an online educational technology service for independent students and employees. As of December 2018, Woz U was licensed as a school with the Arizona state board. Though he permanently left Apple as an active employee in 1985, Wozniak chose never to remove himself from the official employee list, and continues to represent the company at events or in interviews. Today he receives a stipend from Apple for this role, estimated in 2006 to be per year. He is also an Apple shareholder. He maintained a friendly acquaintance with Steve Jobs until Jobs's death in October 2011. However, in 2006, Wozniak stated that he and Jobs were not as close as they used to be. In a 2013 interview, Wozniak said that the original Macintosh "failed" under Steve Jobs, and that it was not until Jobs left that it became a success. He called the Apple Lisa group the team that had kicked Jobs out, and said that Jobs liked to call the Lisa group "idiots for making [the Lisa computer] too expensive". To compete with the Lisa, Jobs and his new team produced a cheaper computer, one that, according to Wozniak, was "weak", "lousy" and "still at a fairly high price". "He made it by cutting the RAM down, by forcing you to swap disks here and there", says Wozniak. He attributed the eventual success of the Macintosh to people like John Sculley "who worked to build a Macintosh market when the Apple II went away". Wozniak is listed as the sole inventor on a number of Apple patents. In 1990, Wozniak helped found the Electronic Frontier Foundation, providing some of the organization's initial funding and serving on its founding Board of Directors. 
He is the founding sponsor of the Tech Museum, Silicon Valley Ballet and Children's Discovery Museum of San Jose. Also since leaving Apple, Wozniak has provided all the money, and much onsite technical support, for the technology program in his local school district in Los Gatos. Un.U.Son. (Unite Us In Song), an organization Wozniak formed to organize the two US festivals, is now primarily tasked with supporting his educational and philanthropic projects. In 1986, Wozniak lent his name to the Stephen G. Wozniak Achievement Awards (popularly known as "Wozzie Awards"), which he presented to six Bay Area high school and college students for their innovative use of computers in the fields of business, art, and music. Wozniak is the subject of "Camp Woz: The Admirable Lunacy of Philanthropy", a student-made film about the nonprofit Dream Camp Foundation for high-level-need youth run by his friend Joe Patane. For his contributions to technology, Wozniak has been awarded a number of Honorary Doctor of Engineering degrees. Steve Wozniak has been mentioned, represented, or interviewed countless times in media from the founding of Apple to the present. "Wired" magazine described him as a person of "tolerant, ingenuous self-esteem" who interviews with "a nonstop, singsong voice". Wozniak lives in Los Gatos, California. He applied for Australian citizenship in 2012, and has stated that he would like to live in Melbourne, Australia in the future. Wozniak has been referred to frequently by the nickname "Woz", or "The Woz"; he has also been called "The Wonderful Wizard of Woz" and "The Second Steve" (in regard to his early business partner and longtime friend, Steve Jobs). "WoZ" (short for "Wheels of Zeus") is the name of a company Wozniak founded in 2002. 
Wozniak describes his impetus for joining the Freemasons in 1979 as being able to spend more time with his then-wife, Alice Robertson, who belonged to the Order of the Eastern Star, associated with the Masons. Wozniak has said that he quickly rose to a third degree Freemason because, whatever he does, he tries to do well. He was initiated in 1979 at Charity Lodge No. 362 in Campbell, California, now part of Mt. Moriah Lodge No. 292 in Los Gatos. Today he is no longer involved: "I did become a Freemason and know what it's about but it doesn't really fit my tech/geek personality. Still, I can be polite to others from other walks of life. After our divorce was filed I never attended again but I did contribute enough for a lifetime membership." Wozniak was married to slalom canoe gold-medalist Candice Clark from June 1981 to 1987. They have three children together, the youngest being born after their divorce was finalized. After a high-profile relationship with actress Kathy Griffin, who described him on "Tom Green's House Tonight" in 2008 as "the biggest techno-nerd in the Universe", Wozniak married Janet Hill, his current spouse. On his religious views, Wozniak has called himself an "atheist or agnostic". He is a member of a Segway Polo team, the "Silicon Valley Aftershocks". In 2006, he co-authored his autobiography with Gina Smith. The book made "The New York Times" Best Seller list. Wozniak's favorite video game is "Tetris" for Game Boy, and he had a high score for "Sabotage". In the 1990s he submitted so many high scores for "Tetris" to "Nintendo Power" that they would no longer print his scores, so he started sending them in under the reversed name "Evets Kainzow". Prior to the release of the Game Boy, Wozniak called "Gran Trak 10" his "favorite game ever" and said that he played the arcade game while developing hardware for the first version of "Breakout" for Atari. In 1985, Steve Jobs referred to Wozniak as a "Gran Trak 10" "addict". 
Wozniak has expressed his personal disdain for money and accumulating large amounts of wealth. He told "Fortune" magazine in 2017, "I didn’t want to be near money, because it could corrupt your values... I really didn’t want to be in that super ‘more than you could ever need’ category." He also said that he only invests in things "close to his heart". When Apple first went public in 1980, Wozniak offered $10 million of his own stock to early Apple employees, something Jobs refused to do. Wozniak has the condition prosopagnosia, or face-blindness. In March 2015, Wozniak stated that while he had originally dismissed the writings of Ray Kurzweil who stated machine intelligence will outpace human intelligence within several decades, Wozniak had come to change his mind: "I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently." Wozniak stated that he had started to identify a contradictory sense of foreboding about artificial intelligence, while still supporting the advance of technology. By June 2015, Wozniak changed his mind, stating that a superintelligence takeover would be good for humans: "They're going to be smarter than us and if they're smarter than us then they'll realise they need us... We want to be the family pet and be taken care of all the time... I got this idea a few years ago and so I started feeding my dog filet steak and chicken every night because 'do unto others'". In 2016, Wozniak changed his mind again, stating that he no longer worried about the possibility of superintelligence emerging because he is skeptical that computers will be able to compete with human "intuition": "A computer could figure out a logical endpoint decision, but that’s not the way intelligence works in humans". 
Wozniak added that if computers do become superintelligent, "they're going to be partners of humans over all other species just forever".
Saxons The Saxons were a group of early Germanic peoples whose name was given in the early Middle Ages to a large country (Old Saxony) near the North Sea coast of what is now Germany. In the late Roman Empire, the name was used to refer to Germanic coastal raiders, in a sense much like the later word "Viking". Their origins appear to lie mainly somewhere in or near that German North Sea coast, where they are found later, in Carolingian times. In Merovingian times, continental Saxons had also been associated with activity and settlements on the coast of what later became Normandy. Their precise origins are uncertain, and they are sometimes described as fighting inland, coming into conflict with the Franks and Thuringians. There is possibly a single classical reference to a smaller homeland of an early Saxon tribe, but its interpretation is disputed (see below). According to this proposal, the Saxons' earliest area of settlement is believed to have been Northern Albingia. This general area is close to the probable homeland of the Angles. In contrast, the British "Saxons", today referred to in English as Anglo-Saxons, became a single nation bringing together Germanic peoples (Frisian, Jutish, Angle) with the Romanized Britons, establishing long-lasting post-Roman kingdoms equivalent to those formed by the Franks on the continent. Their earliest weapons and clothing south of the Thames were based on late Roman military fashions, but later immigrants north of the Thames showed a stronger North German influence. 
The term "Anglo-Saxon", combining the names of the Angles and the Saxons, came into use by the 8th century (for example in Paul the Deacon) to distinguish the Germanic inhabitants of Britain from continental Saxons (referred to in the "Anglo-Saxon Chronicle" as "Ealdseaxe", 'old Saxons'), but both the Saxons of Britain and those of Old Saxony (Northern Germany) continued to be referred to as 'Saxons' in an indiscriminate manner, especially in the languages of Britain and Ireland. While the English Saxons were no longer raiders, the political history of the continental Saxons is unclear until the time of the conflict between their semi-legendary hero Widukind and the Frankish emperor Charlemagne. While the continental Saxons are no longer a distinctive ethnic group or country, their name lives on in the names of several regions and states of Germany, including Lower Saxony (which includes central parts of the original Saxon homeland known as Old Saxony), Saxony in Upper Saxony, as well as Saxony-Anhalt (which includes Old, Lower and Upper Saxon regions). The current state of Saxony takes its name from dynastic history, not ethnic history. The Saxons may have derived their name from "seax", a kind of knife for which they were known. The seax has a lasting symbolic impact in the English counties of Essex and Middlesex, both of which feature three seaxes in their ceremonial emblem. Their names, along with those of Sussex and Wessex, contain a remnant of the word "Saxon". The Elizabethan era play "Edmund Ironside" suggests the Saxon name derives from the Latin "saxa" (stone). In the Celtic languages, the words designating English nationality derive from the Latin name for the Saxons. The most prominent example, a loanword in English from Scottish Gaelic, is used by Scots, Scottish English and Gaelic speakers in the 21st century as a jocular term for an English person. 
The "Oxford English Dictionary" (OED) gives 1771 as the date of the earliest written use of the word in English. The Gaelic name for England has the same root, and an adjective formed from it with a common suffix means "English" in reference to people and things, though not to the English language. The Irish word for an Englishman, and the Irish name for England, have the same derivation, as do the words used in Welsh to describe the English people, and the language and things English in general. The Cornish term for the English derives from the same source, as do the Cornish words for the English people and for England ('Land of Saxons'). In the 16th century Cornish speakers used a stock phrase to feign ignorance of the English language. Similarly Breton, spoken in north-western France, has words of the same origin for 'English', 'the English language', and 'England'. The label "Saxons" also became attached to German settlers who settled during the 12th century in southeastern Transylvania. From Transylvania, some of these Saxons migrated to neighbouring Moldavia, as the name of the town of Sascut shows. Sascut lies in the part of Moldavia that is today part of Romania. During Handel's visit to the Republic of Venice (1706–09), much was made of his origins in Saxony; in particular, the Venetians greeted the 1709 performance of his opera "Agrippina" with a cry meaning "Cheers for the beloved Saxon!" The Finns and Estonians have, over the centuries, changed their usage of the root "Saxon" to apply now to the whole country of Germany and to the Germans. The Finnish word for scissors reflects the name of the old Saxon single-edged sword — seax — from which the name "Saxon" supposedly derives. In Estonian, a word of the same root means "a nobleman" or, colloquially, "a wealthy or powerful person". (As a result of the 13th-century Northern Crusades, Estonia's upper class comprised mostly persons of German origin until well into the 20th century.) 
The word also survives in Low German or Low Saxon surnames. A Dutch female first name originally meant 'a Saxon woman' (by metathesis of "Saxia"). Following the downfall of Henry the Lion (1129–1195, Duke of Saxony 1142–1180), and the subsequent splitting of the Saxon tribal duchy into several territories, the name of the Saxon duchy was transferred to the lands of the Ascanian family. This led to the differentiation between "Lower Saxony", lands settled by the Saxon tribe, and "Upper Saxony", the lands belonging to the House of Wettin. Gradually, the latter region became known as "Saxony", ultimately usurping the name's original meaning. The area formerly known as Upper Saxony now lies in Central Germany. Ptolemy's "Geographia", written in the 2nd century, is sometimes considered to contain the first mention of the Saxons. Some copies of this text mention a tribe called "Saxones" in the area to the north of the lower Elbe. However, other versions refer to the same tribe as "Axones". This may be a misspelling of the tribe that Tacitus in his "Germania" called "Aviones". According to this theory, "Saxones" was the result of later scribes trying to correct a name that meant nothing to them. On the other hand, Schütte, in his analysis of such problems in "Ptolemy's Maps of Northern Europe", believed that "Saxones" is correct. He notes that the loss of first letters occurs in numerous places in various copies of Ptolemy's work, and also that the manuscripts without "Saxones" are generally inferior overall. Schütte also remarks that there was a medieval tradition of calling this area "Old Saxony" (covering Westphalia, Angria and Eastphalia). This view is in line with Bede, who mentions that Old Saxony was near the Rhine, somewhere to the north of the river Lippe (Westphalia, the northeastern part of the modern German state of Nordrhein-Westfalen). 
The first undisputed mention of the Saxon name in its modern form is from AD 356, when Julian, later the Roman Emperor, mentioned them in a speech as allies of Magnentius, a rival emperor in Gaul. Zosimus also mentions a specific tribe of Saxons, called the "Kouadoi", which have been interpreted as a misunderstanding for the Chauci, or Chamavi. They entered the Rhineland and displaced the recently settled Salian Franks from Batavi, whereupon some of the Salians began to move into the Belgian territory of Toxandria, supported by Julian. Both in this case and in others the Saxons were associated with using boats for their raids. In order to defend against Saxon raiders, the Romans created a military district called the "Litus Saxonicum" ("Saxon Coast") on both sides of the English Channel. In 441–442 AD, Saxons are mentioned for the first time as inhabitants of Britain, when an unknown Gaulish historian wrote: "The British provinces...have been reduced to Saxon rule". Saxons as inhabitants of present-day Northern Germany are first mentioned in 555, when the Frankish king Theudebald died, and the Saxons used the opportunity for an uprising. The uprising was suppressed by Chlothar I, Theudebald's successor. Some of their Frankish successors fought against the Saxons, others were allied with them. The Thuringians frequently appeared as allies of the Saxons. In the Netherlands, Saxons occupied the territory south of the Frisians and north of the Franks. In the west it reached as far as the Gooi region, in the south as far as the Lower Rhine. After the conquest of Charlemagne, this area formed the main part of the Bishopric of Utrecht. The Saxon duchy of Hamaland played an important role in the formation of the duchy of Guelders. The local language, although strongly influenced by standard Dutch, is still officially recognised as Dutch Low Saxon. In 569, some Saxons accompanied the Lombards into Italy under the leadership of Alboin and settled there. 
In 572, they raided southeastern Gaul as far as "Stablo", now Estoublon. Divided, they were easily defeated by the Gallo-Roman general Mummolus. When the Saxons regrouped, a peace treaty was negotiated whereby the Italian Saxons were allowed to settle with their families in Austrasia. Gathering their families and belongings in Italy, they returned to Provence in two groups in 573. One group proceeded by way of Nice and another via Embrun, joining up at Avignon. They plundered the territory and were as a consequence stopped from crossing the Rhône by Mummolus. They were forced to pay compensation for what they had robbed before they could enter Austrasia. These people are known only by documents, and their settlement cannot be compared to the archeological artifacts and remains that attest to Saxon settlements in northern and western Gaul. A Saxon king named Eadwacer conquered Angers in 463 only to be dislodged by Childeric I and the Salian Franks, allies of the Roman Empire. It is possible that Saxon settlement of Great Britain began only in response to expanding Frankish control of the Channel coast. Some Saxons already lived along the Saxon shore of Gaul as Roman foederati. They can be traced in documents, but also in archeology and in toponymy. The "Notitia Dignitatum" mentions the "Tribunus cohortis primae novae Armoricanae, Grannona in litore Saxonico". The location of "Grannona" is uncertain and was identified by the historians and toponymists at different places: mainly with the town known today as Granville (in Normandy) or nearby. The "Notitia Dignitatum" does not explain where these "Roman" soldiers came from. Some toponymists have proposed Graignes ("Grania" 1109–1113) as the location for "Grannona"/"Grannonum". Although some scholars believe it could be the same element "*gran", that is recognised in Guernsey ("Greneroi" 11th century), it most likely derives from the Gaulish god Grannos. 
This location is closer to Bayeux, where Gregory of Tours elsewhere evokes the "Saxones Bajocassini" (Bessin Saxons), who were ineffective against the Breton Waroch II in 579. A Saxon unit of "laeti" settled at Bayeux, the "Saxones Baiocassenses". These Saxons became subjects of Clovis I late in the 5th century. The Saxons of Bayeux comprised a standing army and were often called upon to serve alongside the local levy of their region in Merovingian military campaigns. In 589, the Saxons wore their hair in the Breton fashion at the orders of Fredegund and fought as allies of the Bretons against Guntram. Beginning in 626, the Saxons of the Bessin were used by Dagobert I for his campaigns against the Basques. One of their own, Aeghyna, was created a "dux" over the region of Vasconia. In 843 and 846 under king Charles the Bald, other official documents mention a "pagus" called "Otlinga Saxonia" in the Bessin region, but the meaning of "Otlinga" is unclear. Different Bessin toponyms have been identified as typically Saxon, for example Cottun ("Coltun" 1035–1037; "Cola's town"). It is the only place name in Normandy that can be interpreted as a "-tun" one (English "-ton"; cf. Colton). In contrast to this one example in Normandy are numerous "-thun" villages in the north of France, in Boulonnais, for example Alincthun, Verlincthun, and Pelingthun, showing, with other toponyms, an important Saxon or Anglo-Saxon settlement. Comparing the concentration of "-ham"/"-hem" (Anglo-Saxon "hām" > home) toponyms in the Bessin and in the Boulonnais gives more examples of Saxon settlement. In the area known today as Normandy, the "-ham" cases of the Bessin are unique; they do not exist elsewhere. Other cases were considered, but there is no determining example. For example, Canehan ("Kenehan" 1030/"Canaan" 1030–1035) could be the biblical name "Canaan", and Airan ("Heidram" 9th century) could be the Germanic masculine name "Hairammus". 
The Bessin examples are clear; for example, Ouistreham ("Oistreham" 1086), Étréham ("Oesterham" 1350?), Huppain ("*Hubbehain"; "Hubba's home"), and Surrain ("Surrehain" 11th century). Another significant example can be found in Norman onomastics: the widespread surname Lecesne, with variant spellings LeCesne, Lesène, Lecène, and Cesne. It comes from Gallo-Romance *SAXINU "the Saxon", which is "saisne" in Old French. These examples are not derived from more recent Anglo-Scandinavian toponyms, because in that case they would have been numerous in the Norman regions (pays de Caux, Basse-Seine, North-Cotentin) settled by Germanic peoples. That is not the case, nor does the Bessin belong to the "pagi" that were affected by an important wave of Anglo-Scandinavian immigration. In addition, archaeological finds add evidence to the documents and the results of toponymic research. Around the city of Caen and in the Bessin (Vierville-sur-Mer, Bénouville, Giverville, Hérouvillette), excavations have yielded numerous examples of Anglo-Saxon jewellery, design elements, settings, and weapons. All of these things were discovered in cemeteries in a context of the 5th, 6th and 7th centuries AD. The oldest and most spectacular Saxon site found in France to date is Vron, in Picardy. There, archaeologists excavated a large cemetery with tombs dating from the Roman Empire until the 6th century. Furniture and other grave goods, as well as the human remains, revealed a group of people buried in the 4th and 5th centuries AD. Physically different from the usual local inhabitants found before this period, they instead resembled the Germanic populations of the north. At the beginning (4th century), 92% were buried, sometimes with typical Germanic weapons. Burials were then oriented to the east, through the 5th century and into the beginning of the 6th. A strong Anglo-Saxon influence became obvious in the middle of the period, but this influence later disappeared. 
Archaeological material, neighbouring toponymy, and texts support the same conclusion: settlement of Saxon foederati with their families. Further anthropological research by Joël Blondiaux shows these people were from Low Saxony. Saxons, along with Angles, Frisians and Jutes, invaded or migrated to the island of Great Britain (Britannia) around the time of the collapse of the Western Roman Empire. Saxon raiders had been harassing the eastern and southern shores of Britannia for centuries before, prompting the construction of a string of coastal forts called the "Litora Saxonica" or Saxon Shore. Before the end of Roman rule in Britannia, many Saxons and other folk had been permitted to settle in these areas as farmers. According to tradition, the Saxons (and other tribes) first entered Britain en masse as part of an agreement to protect the Britons from the incursions of the Picts, Gaels and others. The story, as reported in such sources as the "Historia Brittonum" and Gildas, indicates that the British king Vortigern allowed the Germanic warlords, later named as Hengist and Horsa by Bede, to settle their people on the Isle of Thanet in exchange for their service as mercenaries. According to Bede, Hengist manipulated Vortigern into granting more land and allowing for more settlers to come in, paving the way for the Germanic settlement of Britain. Historians are divided about what followed: some argue that the takeover of southern Great Britain by the Anglo-Saxons was peaceful. The one known account from a native Briton, Gildas, described events as a forced takeover by armed attack. Gildas described how the Saxons were later slaughtered at the battle of Mons Badonicus 44 years before he wrote his history, and their conquest of Britain was halted. The 8th-century English historian Bede tells how their advance resumed thereafter. 
He said this resulted in a swift overrunning of the entirety of South-Eastern Britain, and the foundation of the Anglo-Saxon kingdoms. Four separate Saxon realms emerged. During the period of the reigns from Egbert to Alfred the Great, the kings of Wessex emerged as Bretwalda, unifying the country. They eventually organised it as the kingdom of England in the face of Viking invasions. The Continental Saxons living in what was known as "Old Saxony" (c. 531–804) appear to have become consolidated by the end of the 8th century. After subjugation by the Emperor Charlemagne, a political entity called the Duchy of Saxony (804–1296) appeared, covering Westphalia, Eastphalia, Angria and Nordalbingia (Holstein, the southern part of the modern-day Schleswig-Holstein state). The Saxons long resisted becoming Christians and being incorporated into the orbit of the Frankish kingdom. In 776 the Saxons promised to convert to Christianity and to vow loyalty to the king, but, during Charlemagne's campaign in Hispania (778), the Saxons advanced to Deutz on the Rhine and plundered along the river. This was an oft-repeated pattern when Charlemagne was distracted by other matters. They were conquered by Charlemagne in a long series of annual campaigns, the Saxon Wars (772–804). With defeat came enforced baptism and conversion as well as the union of the Saxons with the rest of the Germanic, Frankish empire. Their sacred tree or pillar, a symbol of Irminsul, was destroyed. Charlemagne also deported 10,000 Nordalbingian Saxons to Neustria and gave their now largely vacant lands in Wagria (approximately the modern Plön and Ostholstein districts) to the loyal king of the Abotrites. 
Einhard, Charlemagne's biographer, says on the closing of this grand conflict: The war that had lasted so many years was at length ended by their acceding to the terms offered by the king; which were renunciation of their national religious customs and the worship of devils, acceptance of the sacraments of the Christian faith and religion, and union with the Franks to form one people. Under Carolingian rule, the Saxons were reduced to tributary status. There is evidence that the Saxons, as well as Slavic tributaries such as the Abodrites and the Wends, often provided troops to their Carolingian overlords. The dukes of Saxony became kings (Henry I, the Fowler, 919) and later the first emperors (Henry's son, Otto I, the Great) of Germany during the 10th century, but they lost this position in 1024. The duchy was divided in 1180 when Duke Henry the Lion refused to follow his cousin, Emperor Frederick Barbarossa, into war in Lombardy. During the High Middle Ages, under the Salian emperors and, later, under the Teutonic Knights, German settlers moved east of the Saale into the area of a western Slavic tribe, the Sorbs. The Sorbs were gradually Germanised. This region subsequently acquired the name Saxony through political circumstances, though it was initially called the March of Meissen. The rulers of Meissen acquired control of the Duchy of Saxony (only a remnant of the previous Duchy) in 1423; they eventually applied the name "Saxony" to the whole of their kingdom. Since then, this part of eastern Germany has been referred to as Saxony (), a source of some misunderstanding about the original homeland of the Saxons, with a central part in the present-day German state of Lower Saxony (). Bede, a Northumbrian writing around the year 730, remarks that "the old (that is, the continental) Saxons have no king, but they are governed by several ealdormen (or "satrapa") who, during war, cast lots for leadership but who, in time of peace, are equal in power." 
The "regnum Saxonum" was divided into three provinces – Westphalia, Eastphalia and Angria – which comprised about one hundred "pagi" or "Gaue". Each "Gau" had its own satrap with enough military power to level whole villages that opposed him. In the mid-9th century, Nithard first described the social structure of the Saxons beneath their leaders. The caste structure was rigid; in the Saxon language the three castes, excluding slaves, were called the "edhilingui" (related to the term aetheling), "frilingi" and "lazzi". These terms were subsequently Latinised as "nobiles" or "nobiliores"; "ingenui", "ingenuiles" or "liberi"; and "liberti", "liti" or "serviles". According to very early traditions that are presumed to contain a good deal of historical truth, the "edhilingui" were the descendants of the Saxons who led the tribe out of Holstein during the migrations of the 6th century. They were a conquering warrior elite. The "frilingi" represented the descendants of the "amicii", "auxiliarii" and "manumissi" of that caste. The "lazzi" represented the descendants of the original inhabitants of the conquered territories, who were forced to make oaths of submission and pay tribute to the "edhilingui". The "Lex Saxonum" regulated the Saxons' unusual society. Intermarriage between the castes was forbidden by the "Lex," and wergilds were set based upon caste membership. The "edhilingui" were worth 1,440 solidi, or about 700 head of cattle, the highest wergild on the continent; the price of a bride was also very high. This was six times as much as that of the "frilingi" and eight times as much as the "lazzi". The gulf between noble and ignoble was very large, but the difference between a freeman and an indentured labourer was small. 
According to the "Vita Lebuini antiqua", an important source for early Saxon history, the Saxons held an annual council at Marklo (Westphalia) where they "confirmed their laws, gave judgment on outstanding cases, and determined by common counsel whether they would go to war or be in peace that year." All three castes participated in the general council; twelve representatives from each caste were sent from each "Gau". In 782, Charlemagne abolished the system of "Gaue" and replaced it with the "Grafschaftsverfassung", the system of counties typical of Francia. By prohibiting the Marklo councils, Charlemagne pushed the "frilingi" and "lazzi" out of political power. The old Saxon system of "Abgabengrundherrschaft", lordship based on dues and taxes, was replaced by a form of feudalism based on service and labour, personal relationships and oaths. Saxon religious practices were closely related to their political practices. The annual councils of the entire tribe began with invocations of the gods. The procedure by which dukes were elected in wartime, by drawing lots, is presumed to have had religious significance, i.e. in giving trust to divine providence, it seems, to guide the random decision making. There were also sacred rituals and objects, such as the pillars called Irminsul; these were believed to connect heaven and earth, as with other examples of trees or ladders to heaven in numerous religions. Charlemagne had one such pillar chopped down in 772 close to the Eresburg stronghold. Early Saxon religious practices in Britain can be gleaned from place names and the Germanic calendar in use at that time. The Germanic gods Woden, Frigg, Tiw and Thunor, who are attested to in every Germanic tradition, were worshipped in Wessex, Sussex and Essex. 
They are the only ones directly attested to, though the names of the third and fourth months (March and April) of the Old English calendar bear the names "Hrethmonath" and "Eosturmonath", meaning "month of Hretha" and "month of Ēostre." It is presumed that these are the names of two goddesses who were worshipped around that season. The Saxons offered cakes to their gods in February ("Solmonath"). There was a religious festival associated with the harvest, "Halegmonath" ("holy month" or "month of offerings", September). The Saxon calendar began on 25 December, and the months of December and January were called Yule (or "Giuli"). They contained a "Modra niht" or "night of the mothers", another religious festival of unknown content. The Saxon freemen and servile class remained faithful to their original beliefs long after their nominal conversion to Christianity. Nursing a hatred of the upper class, which, with Frankish assistance, had marginalised them from political power, the lower classes (the "plebeium vulgus" or "cives") were a problem for Christian authorities as late as 836. The "Translatio S. Liborii" remarks on their obstinacy in pagan "ritus et superstitio" (usage and superstition). The conversion of the Saxons in England from their original Germanic religion to Christianity occurred in the early to late 7th century under the influence of the already converted Jutes of Kent. In the 630s, Birinus became the "apostle to the West Saxons" and converted Wessex, whose first Christian king was Cynegils. The West Saxons began to emerge from obscurity only with their conversion to Christianity and the keeping of written records. The Gewisse, a West Saxon people, were especially resistant to Christianity; Birinus exercised more efforts against them and ultimately succeeded in conversion. In Wessex, a bishopric was founded at Dorchester. 
The South Saxons were first evangelised extensively under Anglian influence; Aethelwalh of Sussex was converted by Wulfhere, King of Mercia and allowed Wilfrid, Bishop of York, to evangelise his people beginning in 681. The chief South Saxon bishopric was that of Selsey. The East Saxons were more pagan than the southern or western Saxons; their territory had a superabundance of pagan sites. Their king, Saeberht, was converted early and a diocese was established at London. Its first bishop, Mellitus, was expelled by Saeberht's heirs. The conversion of the East Saxons was completed under Cedd in the 650s and 660s. The continental Saxons were evangelised largely by English missionaries in the late 7th and early 8th centuries. Around 695, two early English missionaries, Hewald the White and Hewald the Black, were martyred by the "vicani", that is, villagers. Throughout the century that followed, villagers and other peasants proved to be the greatest opponents of Christianisation, while missionaries often received the support of the "edhilingui" and other noblemen. Saint Lebuin, an Englishman who between 745 and 770 preached to the Saxons, mainly in the eastern Netherlands, built a church and made many friends among the nobility. Some of them rallied to save him from an angry mob at the annual council at Marklo (near river Weser, Bremen). Social tensions arose between the Christianity-sympathetic noblemen and the pagan lower castes, who were staunchly faithful to their traditional religion. Under Charlemagne, the Saxon Wars had as their chief object the conversion and integration of the Saxons into the Frankish empire. Though much of the highest caste converted readily, forced baptisms and forced tithing made enemies of the lower orders. 
Even some contemporaries found the methods employed to win over the Saxons wanting, as this excerpt from a letter of Alcuin of York to his friend Meginfrid, written in 796, shows: If the light yoke and sweet burden of Christ were to be preached to the most obstinate people of the Saxons with as much determination as the payment of tithes has been exacted, or as the force of the legal decree has been applied for fault of the most trifling sort imaginable, perhaps they would not be averse to their baptismal vows. Charlemagne's successor, Louis the Pious, reportedly treated the Saxons more as Alcuin would have wished, and as a consequence they were faithful subjects. The lower classes, however, revolted against Frankish overlordship in favour of their old paganism as late as the 840s, when the "Stellinga" rose up against the Saxon leadership, who were allied with the Frankish emperor Lothair I. After the suppression of the "Stellinga", in 851 Louis the German brought relics from Rome to Saxony to foster a devotion to the Roman Catholic Church. The Poeta Saxo, in his verse "Annales" of Charlemagne's reign (written between 888 and 891), laid an emphasis on his conquest of Saxony. He celebrated the Frankish monarch as on par with the Roman emperors and as the bringer of Christian salvation to people. References are made to periodic outbreaks of pagan worship, especially of Freya, among the Saxon peasantry as late as the 12th century. In the 9th century, the Saxon nobility became vigorous supporters of monasticism and formed a bulwark of Christianity against the existing Slavic paganism to the east and the Nordic paganism of the Vikings to the north. Much Christian literature was produced in the vernacular Old Saxon, the notable ones being a result of the literary output and wide influence of Saxon monasteries such as Fulda, Corvey and Verden; and the theological controversy between the Augustinian, Gottschalk and Rabanus Maurus. 
From an early date, Charlemagne and Louis the Pious supported Christian vernacular works in order to evangelise the Saxons more efficiently. The "Heliand", a verse epic of the life of Christ in a Germanic setting, and "Genesis", another epic retelling of the events of the first book of the Bible, were commissioned in the early 9th century by Louis to disseminate scriptural knowledge to the masses. A council of Tours in 813 and then a synod of Mainz in 848 both declared that homilies ought to be preached in the vernacular. The earliest preserved text in the Saxon language is a baptismal vow from the late 8th or early 9th century; the vernacular was used extensively in an effort to Christianise the lowest castes of Saxon society.
https://en.wikipedia.org/wiki?curid=27850
Superparamagnetism Superparamagnetism is a form of magnetism which appears in small ferromagnetic or ferrimagnetic nanoparticles. In sufficiently small nanoparticles, magnetization can randomly flip direction under the influence of temperature. The typical time between two flips is called the Néel relaxation time. In the absence of an external magnetic field, when the time used to measure the magnetization of the nanoparticles is much longer than the Néel relaxation time, their magnetization appears on average to be zero; they are said to be in the superparamagnetic state. In this state, an external magnetic field is able to magnetize the nanoparticles, similarly to a paramagnet. However, their magnetic susceptibility is much larger than that of paramagnets. Normally, any ferromagnetic or ferrimagnetic material undergoes a transition to a paramagnetic state above its Curie temperature. Superparamagnetism is different from this standard transition since it occurs below the Curie temperature of the material. Superparamagnetism occurs in nanoparticles which are single-domain, i.e. composed of a single magnetic domain. This is possible when their diameter is below 3–50 nm, depending on the materials. In this condition, the magnetization of the nanoparticle can be treated as a single giant magnetic moment, the sum of all the individual magnetic moments carried by the atoms of the nanoparticle. This is called the "macro-spin approximation". Because of the nanoparticle's magnetic anisotropy, the magnetic moment usually has only two stable orientations antiparallel to each other, separated by an energy barrier. The stable orientations define the nanoparticle's so-called "easy axis". At finite temperature, there is a finite probability for the magnetization to flip and reverse its direction. 
The mean time between two flips is called the Néel relaxation time formula_1 and is given by the following Néel–Arrhenius equation: where: This length of time can be anywhere from a few nanoseconds to years or much longer. In particular, it can be seen that the Néel relaxation time is an exponential function of the grain volume, which explains why the flipping probability becomes rapidly negligible for bulk materials or large nanoparticles. Let us imagine that the magnetization of a single superparamagnetic nanoparticle is measured and let us define formula_5 as the measurement time. If formula_6, the nanoparticle magnetization will flip several times during the measurement, and the measured magnetization will average to zero. If formula_7, the magnetization will not flip during the measurement, so the measured magnetization will be what the instantaneous magnetization was at the beginning of the measurement. In the former case, the nanoparticle will appear to be in the superparamagnetic state whereas in the latter case it will appear to be "blocked" in its initial state. The state of the nanoparticle (superparamagnetic or blocked) depends on the measurement time. A transition between superparamagnetism and blocked state occurs when formula_8. In several experiments, the measurement time is kept constant but the temperature is varied, so the transition between superparamagnetism and blocked state is seen as a function of the temperature. The temperature for which formula_8 is called the "blocking temperature": For typical laboratory measurements, the value of the logarithm in the previous equation is on the order of 20–25. When an external magnetic field "H" is applied to an assembly of superparamagnetic nanoparticles, their magnetic moments tend to align along the applied field, leading to a net magnetization. The magnetization curve of the assembly, i.e. the magnetization as a function of the applied field, is a reversible S-shaped increasing function. 
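The exponential volume dependence and the blocking temperature described above can be made concrete with a short numerical sketch. It assumes the standard Néel–Arrhenius form τ_N = τ_0 exp(KV/k_BT) (the article's own formulas are elided here as formula placeholders); the anisotropy constant K, attempt time τ_0 and particle size below are illustrative values, not taken from this text.

```python
import math

# A numerical sketch of the Néel–Arrhenius relation, assuming its standard
# form tau_N = tau_0 * exp(K*V / (k_B*T)); K, tau_0 and the particle size
# are illustrative values, not taken from this article.

K_B = 1.380649e-23          # Boltzmann constant, J/K

def neel_relaxation_time(K, V, T, tau_0=1e-9):
    """Mean time between magnetization flips, in seconds."""
    return tau_0 * math.exp(K * V / (K_B * T))

def blocking_temperature(K, V, tau_m, tau_0=1e-9):
    """Temperature at which tau_N equals the measurement time tau_m."""
    return K * V / (K_B * math.log(tau_m / tau_0))

K = 1.1e4                   # anisotropy energy density, J/m^3 (assumed)
d = 10e-9                   # 10 nm particle diameter
V = math.pi * d ** 3 / 6    # sphere volume

# Exponential dependence on volume: doubling the diameter multiplies the
# volume, and hence the exponent, by 8.
print(neel_relaxation_time(K, V, 300))
print(neel_relaxation_time(K, 8 * V, 300))
print(blocking_temperature(K, V, tau_m=100.0))   # ln(tau_m/tau_0) ≈ 25 here
```

Note how the logarithm in `blocking_temperature` for a 100 s measurement and a nanosecond attempt time comes out near 25, matching the 20–25 range quoted above.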
This function is quite complicated but for some simple cases: In the above equations: The initial slope of the formula_16 function is the magnetic susceptibility of the sample formula_17: The latter susceptibility is also valid for all temperatures formula_20 if the easy axes of the nanoparticles are randomly oriented. It can be seen from these equations that large nanoparticles have a larger "µ" and so a larger susceptibility. This explains why superparamagnetic nanoparticles have a much larger susceptibility than standard paramagnets: they behave exactly as a paramagnet with a huge magnetic moment. There is no time-dependence of the magnetization when the nanoparticles are either completely blocked (formula_21) or completely superparamagnetic (formula_22). There is, however, a narrow window around formula_23 where the measurement time and the relaxation time have comparable magnitude. In this case, a frequency-dependence of the susceptibility can be observed. For a randomly oriented sample, the complex susceptibility is: where From this frequency-dependent susceptibility, the time-dependence of the magnetization for low-fields can be derived: A superparamagnetic system can be measured with AC susceptibility measurements, where an applied magnetic field varies in time, and the magnetic response of the system is measured. A superparamagnetic system will show a characteristic frequency dependence: When the frequency is much higher than 1/τN, there will be a different magnetic response than when the frequency is much lower than 1/τN, since in the latter case, but not the former, the ferromagnetic clusters will have time to respond to the field by flipping their magnetization. The precise dependence can be calculated from the Néel–Arrhenius equation, assuming that the neighboring clusters behave independently of one another (if clusters interact, their behavior becomes more complicated). 
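The frequency-dependent susceptibility in the narrow window around the blocking transition can be sketched with the standard Debye form for non-interacting, randomly oriented particles, χ(ω) = χ_0/(1 + iωτ). The static susceptibility χ_0 and relaxation time τ below are illustrative values, not taken from this article's elided formulas.

```python
import math

# Debye-type complex susceptibility around the blocking transition:
#   chi(omega) = chi_0 / (1 + i * omega * tau)
# chi_0 and tau are assumed illustrative values.

def complex_susceptibility(omega, chi_0, tau):
    return chi_0 / (1 + 1j * omega * tau)

chi_0, tau = 1.0, 1e-3

# The dissipative (imaginary) part peaks exactly where omega * tau = 1,
# i.e. where the measurement frequency matches the relaxation rate.
for f_hz in (1.0, 1 / (2 * math.pi * tau), 1e6):
    chi = complex_susceptibility(2 * math.pi * f_hz, chi_0, tau)
    print(f"{f_hz:12.4g} Hz   chi' = {chi.real:.4f}   chi'' = {-chi.imag:.4f}")
```

This is the same physics as the AC-susceptibility measurement described above: well below 1/τ the clusters follow the field (χ' ≈ χ_0), well above it they cannot respond, and the loss peak sits at ωτ = 1.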
It is also possible to perform magneto-optical AC susceptibility measurements with magneto-optically active superparamagnetic materials such as iron oxide nanoparticles in the visible wavelength range. Superparamagnetism sets a limit on the storage density of hard disk drives due to the minimum size of particles that can be used. This limit is known as the superparamagnetic limit.
https://en.wikipedia.org/wiki?curid=27852
Separable space In mathematics, a topological space is called separable if it contains a countable, dense subset; that is, there exists a sequence formula_1 of elements of the space such that every nonempty open subset of the space contains at least one element of the sequence. Like the other axioms of countability, separability is a "limitation on size", not necessarily in terms of cardinality (though, in the presence of the Hausdorff axiom, this does turn out to be the case; see below) but in a more subtle topological sense. In particular, every continuous function on a separable space whose image is a subset of a Hausdorff space is determined by its values on the countable dense subset. Contrast separability with the related notion of second countability, which is in general stronger but equivalent on the class of metrizable spaces. Any topological space that is itself finite or countably infinite is separable, for the whole space is a countable dense subset of itself. An important example of an uncountable separable space is the real line, in which the rational numbers form a countable dense subset. Similarly the set of all vectors formula_2 in which formula_3 is rational for all "i" is a countable dense subset of formula_4; so for every formula_5 the formula_5-dimensional Euclidean space is separable. A simple example of a space that is not separable is a discrete space of uncountable cardinality. Further examples are given below. Any second-countable space is separable: if formula_7 is a countable base, choosing any formula_8 from the non-empty formula_9 gives a countable dense subset. Conversely, a metrizable space is separable if and only if it is second countable, which is the case if and only if it is Lindelöf. To further compare these two properties: We can construct an example of a separable topological space that is not second countable. 
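The density of the rationals in the real line, which the paragraph above uses as the key example of separability, can be made concrete: for any real x and any tolerance ε there is a rational within ε of x. The helper below is a small illustration using Python's exact `Fraction` arithmetic, not part of the article.

```python
import math
from fractions import Fraction

# Density of Q in R, made concrete: find a rational within eps of x.
# This countable dense subset is what makes the real line separable.

def rational_within(x, eps):
    """Return a Fraction q with |q - x| < eps."""
    target = Fraction(x)            # exact value of the float x
    d = 1
    while True:
        q = target.limit_denominator(d)
        if abs(q - target) < eps:
            return q
        d *= 10

q = rational_within(math.pi, 1e-6)
print(q, float(q))                  # 355/113, a classic approximation of pi
```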
Consider any uncountable set formula_10, pick some formula_11, and define the topology to be the collection of all sets that contain formula_12 (or are empty). Then, the closure of formula_13 is the whole space (formula_10 is the smallest closed set containing formula_12), but every set of the form formula_16 is open. Therefore, the space is separable but there cannot be a countable base. The property of separability does not in and of itself give any limitations on the cardinality of a topological space: any set endowed with the trivial topology is separable, as well as second countable, quasi-compact, and connected. The "trouble" with the trivial topology is its poor separation properties: its Kolmogorov quotient is the one-point space. A first-countable, separable Hausdorff space (in particular, a separable metric space) has at most the continuum cardinality formula_17. In such a space, closure is determined by limits of sequences and any convergent sequence has at most one limit, so there is a surjective map from the set of convergent sequences with values in the countable dense subset to the points of formula_10. A separable Hausdorff space has cardinality at most formula_19, where formula_17 is the cardinality of the continuum. For this, closure is characterized in terms of limits of filter bases: if formula_21 and formula_22, then formula_23 if and only if there exists a filter base formula_24 consisting of subsets of formula_25 that converges to formula_26. The cardinality of the set formula_27 of such filter bases is at most formula_28. Moreover, in a Hausdorff space, there is at most one limit to every filter base. Therefore, there is a surjection formula_29 when formula_30 The same arguments establish a more general result: suppose that a Hausdorff topological space formula_10 contains a dense subset of cardinality formula_32. Then formula_10 has cardinality at most formula_34 and cardinality at most formula_35 if it is first countable. 
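The particular-point construction above can be checked mechanically on a finite stand-in. On a finite set the space is of course second countable, but the code verifies the two facts the uncountable argument rests on: every nonempty open set contains the chosen point p (so {p} is dense), and every two-point set {p, x} is open (so, over an uncountable set, any base must be uncountable). This is an illustrative sketch, not part of the article.

```python
from itertools import combinations

# Finite stand-in for the particular-point topology: on a set X with a
# chosen point p, the open sets are exactly the sets containing p, plus
# the empty set.

X = set(range(6))
p = 0

subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]
opens = [U for U in subsets if not U or p in U]

# 1. {p} is dense: every nonempty open set meets {p}.
assert all(p in U for U in opens if U)

# 2. Every pair {p, x} is open; over an uncountable X these pairwise
#    distinct open sets force any base of the topology to be uncountable.
assert all({p, x} in opens for x in X - {p})

print(len(opens))   # 32 subsets containing p, plus the empty set
```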
The product of at most continuum many separable spaces is a separable space. In particular the space formula_36 of all functions from the real line to itself, endowed with the product topology, is a separable Hausdorff space of cardinality formula_19. More generally, if formula_32 is any infinite cardinal, then a product of at most formula_39 spaces with dense subsets of size at most formula_32 has itself a dense subset of size at most formula_32 (Hewitt–Marczewski–Pondiczery theorem). Separability is especially important in numerical analysis and constructive mathematics, since many theorems that can be proved for nonseparable spaces have constructive proofs only for separable spaces. Such constructive proofs can be turned into algorithms for use in numerical analysis, and they are the only sorts of proofs acceptable in constructive analysis. A famous example of a theorem of this sort is the Hahn–Banach theorem. "For nonseparable spaces":
https://en.wikipedia.org/wiki?curid=27855
Schrödinger's cat Schrödinger's cat is a thought experiment, sometimes described as a paradox, devised by Austrian physicist Erwin Schrödinger in 1935, during the course of discussions with Albert Einstein. It illustrates what he saw as the problem of the Copenhagen interpretation of quantum mechanics applied to everyday objects. The scenario presents a hypothetical cat that may be simultaneously both alive and dead, a state known as a quantum superposition, as a result of being linked to a random subatomic event that may or may not occur. The thought experiment is also often featured in theoretical discussions of the interpretations of quantum mechanics, particularly in situations involving the measurement problem. Schrödinger coined the term "Verschränkung" (entanglement) in the course of developing the thought experiment. Schrödinger intended his thought experiment as a discussion of the EPR article—named after its authors Einstein, Podolsky, and Rosen—in 1935. The EPR article highlighted the counterintuitive nature of quantum superpositions, in which a quantum system such as an atom or photon can exist as a combination of multiple states corresponding to different possible outcomes. The prevailing theory, called the Copenhagen interpretation, says that a quantum system remains in superposition until it interacts with, or is observed by the external world. When this happens, the superposition collapses into one or another of the possible definite states. The EPR experiment shows that a system with multiple particles separated by large distances can be in such a superposition. Schrödinger and Einstein exchanged letters about Einstein's EPR article, in the course of which Einstein pointed out that the state of an unstable keg of gunpowder will, after a while, contain a superposition of both exploded and unexploded states. 
To further illustrate, Schrödinger described how one could, in principle, create a superposition in a large-scale system by making it dependent on a quantum particle that was in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's life or death depended on the state of a radioactive atom, whether it had decayed and emitted radiation or not. According to Schrödinger, the Copenhagen interpretation implies that "the cat remains both alive and dead" until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; on the contrary, he intended the example to illustrate the absurdity of the existing view of quantum mechanics. However, since Schrödinger's time, other interpretations of the mathematics of quantum mechanics have been advanced by physicists, some of which regard the "alive and dead" cat superposition as quite real. Intended as a critique of the Copenhagen interpretation (the prevailing orthodoxy in 1935), the Schrödinger's cat thought experiment remains a defining touchstone for modern interpretations of quantum mechanics. Physicists often use the way each interpretation deals with Schrödinger's cat as a way of illustrating and comparing the particular features, strengths, and weaknesses of each interpretation. Schrödinger wrote: Schrödinger's famous thought experiment poses the question, ""when" does a quantum system stop existing as a superposition of states and become one or the other?" (More technically, when does the actual quantum state stop being a non-trivial linear combination of states, each of which resembles different classical states, and instead begin to have a unique classical description?) If the cat survives, it remembers only being alive. 
But explanations of the EPR experiments that are consistent with standard microscopic quantum mechanics require that macroscopic objects, such as cats and notebooks, do not always have unique classical descriptions. The thought experiment illustrates this apparent paradox. Our intuition says that no observer can be in a mixture of states—yet the cat, it seems from the thought experiment, can be such a mixture. Is the cat required to be an observer, or does its existence in a single well-defined classical state require another external observer? Each alternative seemed absurd to Einstein, who was impressed by the ability of the thought experiment to highlight these issues. In a letter to Schrödinger dated 1950, he wrote: Note that the charge of gunpowder is not mentioned in Schrödinger's setup, which uses a Geiger counter as an amplifier and hydrocyanic poison instead of gunpowder. The gunpowder had been mentioned in Einstein's original suggestion to Schrödinger 15 years before, and Einstein carried it forward to the present discussion. Since Schrödinger's time, other interpretations of quantum mechanics have been proposed that give different answers to the questions posed by Schrödinger's cat of how long superpositions last and when (or "whether") they collapse. A commonly held interpretation of quantum mechanics is the Copenhagen interpretation. In the Copenhagen interpretation, a system stops being a superposition of states and becomes either one or the other when an observation takes place. This thought experiment makes apparent the fact that the nature of measurement, or observation, is not well-defined in this interpretation. The experiment can be interpreted to mean that while the box is closed, the system simultaneously exists in a superposition of the states "decayed nucleus/dead cat" and "undecayed nucleus/living cat", and that only when the box is opened and an observation performed does the wave function collapse into one of the two states. 
However, one of the main scientists associated with the Copenhagen interpretation, Niels Bohr, never had in mind the observer-induced collapse of the wave function, as he did not regard the wave function as physically real, but a statistical tool; thus, Schrödinger's cat did not pose any riddle to him. The cat would be either dead or alive long before the box is opened by a conscious observer. Analysis of an actual experiment found that measurement alone (for example by a Geiger counter) is sufficient to collapse a quantum wave function before there is any conscious observation of the measurement, although the validity of their design is disputed. (The view that the "observation" is taken when a particle from the nucleus hits the detector can be developed into objective collapse theories. The thought experiment requires an "unconscious observation" by the detector in order for waveform collapse to occur. In contrast, the many worlds approach denies that collapse ever occurs.) In 1957, Hugh Everett formulated the many-worlds interpretation of quantum mechanics, which does not single out observation as a special process. In the many-worlds interpretation, both alive and dead states of the cat persist after the box is opened, but are decoherent from each other. In other words, when the box is opened, the observer and the possibly-dead cat split into an observer looking at a box with a dead cat, and an observer looking at a box with a live cat. But since the dead and alive states are decoherent, there is no effective communication or interaction between them. When opening the box, the observer becomes entangled with the cat, so "observer states" corresponding to the cat's being alive and dead are formed; each observer state is entangled or linked with the cat so that the "observation of the cat's state" and the "cat's state" correspond with each other. Quantum decoherence ensures that the different outcomes have no interaction with each other. 
The same mechanism of quantum decoherence is also important for the interpretation in terms of consistent histories. Only the "dead cat" or the "alive cat" can be a part of a consistent history in this interpretation. Decoherence is generally considered to prevent simultaneous observation of multiple states. A variant of the Schrödinger's cat experiment, known as the quantum suicide machine, has been proposed by cosmologist Max Tegmark. It examines the Schrödinger's cat experiment from the point of view of the cat, and argues that by using this approach, one may be able to distinguish between the Copenhagen interpretation and many-worlds. The ensemble interpretation states that superpositions are nothing but subensembles of a larger statistical ensemble. The state vector would not apply to individual cat experiments, but only to the statistics of many similarly prepared cat experiments. Proponents of this interpretation state that this makes the Schrödinger's cat paradox a trivial matter, or a non-issue. This interpretation serves to "discard" the idea that a single physical system in quantum mechanics has a mathematical description that corresponds to it in any way. The relational interpretation makes no fundamental distinction between the human experimenter, the cat, or the apparatus, or between animate and inanimate systems; all are quantum systems governed by the same rules of wavefunction evolution, and all may be considered "observers". But the relational interpretation allows that different observers can give different accounts of the same series of events, depending on the information they have about the system. The cat can be considered an observer of the apparatus; meanwhile, the experimenter can be considered another observer of the system in the box (the cat plus the apparatus). 
Before the box is opened, the cat, by nature of its being alive or dead, has information about the state of the apparatus (the atom has either decayed or not decayed); but the experimenter does not have information about the state of the box contents. In this way, the two observers simultaneously have different accounts of the situation: To the cat, the wavefunction of the apparatus has appeared to "collapse"; to the experimenter, the contents of the box appear to be in superposition. Not until the box is opened, and both observers have the same information about what happened, do both system states appear to "collapse" into the same definite result, a cat that is either alive or dead. In the transactional interpretation the apparatus emits an advanced wave backward in time, which combined with the wave that the source emits forward in time, forms a standing wave. The waves are seen as physically real, and the apparatus is considered an "observer". In the transactional interpretation, the collapse of the wavefunction is "atemporal" and occurs along the whole transaction between the source and the apparatus. The cat is never in superposition. Rather the cat is only in one state at any particular time, regardless of when the human experimenter looks in the box. The transactional interpretation resolves this quantum paradox. The Zeno effect is known to cause delays to any changes from the initial state. On the other hand, the anti-Zeno effect accelerates the changes. For example, if you peek into the cat box frequently you may either cause delays to the fateful choice or, conversely, accelerate it. Both the Zeno effect and the anti-Zeno effect are real and known to happen to real atoms. The quantum system being measured must be strongly coupled to the surrounding environment (in this case to the apparatus, the experiment room, etc.) in order to obtain more accurate information. 
While no information is passed to the outside world, this is considered only a "quasi-measurement"; as soon as the information about the cat's well-being is passed on to the outside world (by peeking into the box), the quasi-measurement turns into a measurement. Quasi-measurements, like measurements, cause the Zeno effects. Zeno effects show that, even without peeking into the box, the death of the cat would have been delayed or accelerated anyway due to its environment. According to objective collapse theories, superpositions are destroyed spontaneously (irrespective of external observation), when some objective physical threshold (of time, mass, temperature, irreversibility, etc.) is reached. Thus, the cat would be expected to have settled into a definite state long before the box is opened. This could loosely be phrased as "the cat observes itself", or "the environment observes the cat". Objective collapse theories require a modification of standard quantum mechanics to allow superpositions to be destroyed by the process of time evolution. The experiment as described is a purely theoretical one, and the machine proposed is not known to have been constructed. However, successful experiments involving similar principles, e.g. superpositions of relatively large (by the standards of quantum physics) objects, have been performed. These experiments do not show that a cat-sized object can be superposed, but the known upper limit on "cat states" has been pushed upwards by them. In many cases the state is short-lived, even when cooled to near absolute zero. In quantum computing the phrase "cat state" sometimes refers to the GHZ state, wherein several qubits are in an equal superposition of all being 0 and all being 1, e.g. (|00...0⟩ + |11...1⟩)/√2. According to at least one proposal, it may be possible to determine the state of the cat "before" observing it.
Wigner's friend is a variant on the experiment with two human observers: the first makes an observation on whether a flash of light is seen and then communicates his observation to a second observer. The issue here is, does the wave function "collapse" when the first observer looks at the experiment, or only when the second observer is informed of the first observer's observations? In another extension, prominent physicists have gone so far as to suggest that astronomers observing dark energy in the universe in 1998 may have "reduced its life expectancy" through a pseudo-Schrödinger's cat scenario, although this is a controversial viewpoint.
https://en.wikipedia.org/wiki?curid=27856
Sphere A sphere (from Greek "sphaira", "globe, ball") is a geometrical object in three-dimensional space that is the surface of a ball (viz., analogous to the circular objects in two dimensions, where a "circle" circumscribes its "disk"). Like a circle in a two-dimensional space, a sphere is defined mathematically as the set of points that are all at the same distance "r" from a given point, but in a three-dimensional space. This distance "r" is the radius of the ball, which is made up of all points with a distance less than (or, for a closed ball, less than "or equal to") "r" from the given point, which is the center of the mathematical ball. These are also referred to as the radius and center of the sphere, respectively. The longest straight line segment through the ball, connecting two points of the sphere, passes through the center and its length is thus twice the radius; it is a diameter of both the sphere and its ball. While outside mathematics the terms "sphere" and "ball" are sometimes used interchangeably, in mathematics the above distinction is made between a "sphere", which is a two-dimensional closed surface embedded in a three-dimensional Euclidean space, and a "ball", which is a three-dimensional shape that includes the sphere and everything "inside" the sphere (a "closed ball"), or, more often, just the points "inside", but "not on" the sphere (an "open ball"). The distinction between "ball" and "sphere" has not always been maintained and especially older mathematical references talk about a sphere as a solid. This is analogous to the situation in the plane, where the terms "circle" and "disk" can also be confounded. In analytic geometry, a sphere with center ("x"₀, "y"₀, "z"₀) and radius "r" is the locus of all points ("x", "y", "z") such that (x − x₀)² + (y − y₀)² + (z − z₀)² = r². Let "a", "b", "c", "d", "e" be real numbers with "a" ≠ 0 and put ρ = (b² + c² + d² − ae)/a². Then the equation f(x, y, z) = a(x² + y² + z²) + 2(bx + cy + dz) + e = 0 has no real points as solutions if ρ < 0 and is called the equation of an imaginary sphere.
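The classification of the general quadratic equation can be checked numerically. The sketch below is illustrative only (the function name is our own), assuming the discriminant ρ = (b² + c² + d² − ae)/a² for the equation a(x² + y² + z²) + 2(bx + cy + dz) + e = 0:

```python
import math

def classify_quadric(a, b, c, d, e):
    """Classify a(x^2+y^2+z^2) + 2(bx+cy+dz) + e = 0, a != 0, using
    rho = (b^2 + c^2 + d^2 - a*e) / a^2: imaginary sphere (rho < 0),
    point sphere (rho == 0), or a real sphere of radius sqrt(rho)."""
    rho = (b * b + c * c + d * d - a * e) / (a * a)
    center = (-b / a, -c / a, -d / a)
    if rho < 0:
        return ("imaginary sphere", center, None)
    if rho == 0:  # exact comparison is fine for a sketch
        return ("point sphere", center, 0.0)
    return ("sphere", center, math.sqrt(rho))

# x^2 + y^2 + z^2 - 2x - 4y - 6z + 5 = 0  ->  a=1, b=-1, c=-2, d=-3, e=5;
# completing the square gives center (1, 2, 3) and radius 3
kind, center, radius = classify_quadric(1, -1, -2, -3, 5)
```

Completing the square on the example confirms the result: (x − 1)² + (y − 2)² + (z − 3)² = 1 + 4 + 9 − 5 = 9.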
If ρ = 0, the only solution of f(x, y, z) = 0 is the point P₀ = (−b/a, −c/a, −d/a) and the equation is said to be the equation of a point sphere. Finally, in the case ρ > 0, f(x, y, z) = 0 is an equation of a sphere whose center is P₀ and whose radius is √ρ. If "a" in the above equation is zero then f(x, y, z) = 0 is the equation of a plane. Thus, a plane may be thought of as a sphere of infinite radius whose center is a point at infinity. The points on the sphere with radius "r" > 0 and center ("x"₀, "y"₀, "z"₀) can be parameterized via x = x₀ + r sin θ cos φ, y = y₀ + r sin θ sin φ, z = z₀ + r cos θ (0 ≤ θ ≤ π, 0 ≤ φ < 2π). The parameter θ can be associated with the angle counted positive from the direction of the positive "z"-axis through the center to the radius-vector, and the parameter φ can be associated with the angle counted positive from the direction of the positive "x"-axis through the center to the projection of the radius-vector on the "xy"-plane. A sphere of any radius centered at zero is an integral surface of the following differential form: x dx + y dy + z dz = 0. This equation reflects that the position and velocity vectors of a point, (x, y, z) and (dx, dy, dz), traveling on the sphere are always orthogonal to each other. A sphere can also be constructed as the surface formed by rotating a circle about any of its diameters. Since a circle is a special type of ellipse, a sphere is a special type of ellipsoid of revolution. Replacing the circle with an ellipse rotated about its major axis, the shape becomes a prolate spheroid; rotated about the minor axis, an oblate spheroid. In three dimensions, the volume inside a sphere (that is, the volume of a ball, but classically referred to as the volume of a sphere) is V = (4/3)πr³ = (π/6)d³, where "r" is the radius and "d" is the diameter of the sphere. Archimedes first derived this formula by showing that the volume inside a sphere is twice the volume between the sphere and the circumscribed cylinder of that sphere (having the height and diameter equal to the diameter of the sphere).
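The parametrization can be verified directly: every choice of the angles lands at distance "r" from the center. A minimal sketch (function name and numeric values are our own), using the polar-angle/azimuth convention described above:

```python
import math

def sphere_point(r, theta, phi, center=(0.0, 0.0, 0.0)):
    """Point on the sphere of radius r: theta is the polar angle measured
    from the positive z-axis, phi the azimuth from the positive x-axis."""
    x0, y0, z0 = center
    return (x0 + r * math.sin(theta) * math.cos(phi),
            y0 + r * math.sin(theta) * math.sin(phi),
            z0 + r * math.cos(theta))

# arbitrary angles: the point is always at distance r from the center
p = sphere_point(2.0, 0.7, 1.9, center=(1.0, -1.0, 3.0))
dist = math.dist(p, (1.0, -1.0, 3.0))  # ~2.0
```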
This may be proved by inscribing a cone upside down into a hemisphere, noting that the area of a cross section of the cone plus the area of a cross section of the sphere is the same as the area of the cross section of the circumscribing cylinder, and applying Cavalieri's principle. This formula can also be derived using integral calculus, i.e. disk integration to sum the volumes of an infinite number of circular disks of infinitesimally small thickness stacked side by side and centered along the "x"-axis from x = −r to x = r, assuming the sphere of radius "r" is centered at the origin. At any given "x", the incremental volume (δV) equals the product of the cross-sectional area of the disk at "x" and its thickness (δx): δV ≈ πy²·δx. The total volume is the summation of all incremental volumes: V ≈ Σ πy²·δx. In the limit as δx approaches zero, this equation becomes the integral V = ∫ πy² dx taken from x = −r to x = r. At any given "x", a right-angled triangle connects "x", "y" and "r" to the origin; hence, applying the Pythagorean theorem yields: y² = r² − x². Using this substitution gives V = ∫ π(r² − x²) dx, which can be evaluated to give the result V = π[r²x − x³/3] from −r to r, i.e. (4/3)πr³. An alternative formula is found using spherical coordinates, with volume element dV = r² sin θ dr dθ dφ, so that the triple integral again gives (4/3)πr³. For most practical purposes, the volume inside a sphere inscribed in a cube can be approximated as 52.4% of the volume of the cube, since V = (π/6)d³, where "d" is the diameter of the sphere and also the length of a side of the cube, and π/6 ≈ 0.5236. For example, a sphere with diameter 1 m has 52.4% the volume of a cube with edge length 1 m, or about 0.524 m³. The surface area of a sphere of radius "r" is A = 4πr². Archimedes first derived this formula from the fact that the projection to the lateral surface of a circumscribed cylinder is area-preserving. Another approach to obtaining the formula comes from the fact that it equals the derivative of the formula for the volume with respect to "r" because the total volume inside a sphere of radius "r" can be thought of as the summation of the surface area of an infinite number of spherical shells of infinitesimal thickness concentrically stacked inside one another from radius 0 to radius "r".
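The disk-integration argument can be reproduced numerically by summing finitely many thin disks; the sum converges to (4/3)πr³. A small sketch (function name and the midpoint-rule choice are our own):

```python
import math

def ball_volume_disks(r, n=200_000):
    """Riemann-sum the disk volumes pi*y^2*dx for x in [-r, r], using the
    midpoint rule with y^2 = r^2 - x^2; approximates (4/3)*pi*r^3."""
    dx = 2.0 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * dx          # midpoint of the i-th slab
        total += math.pi * (r * r - x * x) * dx
    return total

v = ball_volume_disks(1.0)
exact = 4.0 / 3.0 * math.pi
# ratio of an inscribed ball to its cube: pi/6 ~ 0.5236, the 52.4% figure
ratio = math.pi / 6.0
```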
At infinitesimal thickness the discrepancy between the inner and outer surface area of any given shell is infinitesimal, and the elemental volume at radius "r" is simply the product of the surface area at radius "r" and the infinitesimal thickness. At any given radius "r", the incremental volume (δV) equals the product of the surface area at radius "r" (A(r)) and the thickness of a shell (δr): δV ≈ A(r)·δr. The total volume is the summation of all shell volumes: V ≈ Σ A(r)·δr. In the limit as δr approaches zero this equation becomes the integral V = ∫ A(r) dr taken from 0 to "r". Substituting V = (4/3)πr³ and differentiating both sides of this equation with respect to "r" yields A as a function of "r": 4πr² = A(r). This is generally abbreviated as A = 4πr², where "r" is now considered to be the fixed radius of the sphere. Alternatively, the area element on the sphere is given in spherical coordinates by dA = r² sin θ dθ dφ. In Cartesian coordinates, the area element is dA = (r/√(r² − x² − y²)) dx dy. The total area can thus be obtained by integration: integrating r² sin θ over 0 ≤ θ ≤ π and 0 ≤ φ < 2π gives A = 4πr². The sphere has the smallest surface area of all surfaces that enclose a given volume, and it encloses the largest volume among all closed surfaces with a given surface area. The sphere therefore appears in nature: for example, bubbles and small water drops are roughly spherical because the surface tension locally minimizes surface area. The surface area relative to the mass of a ball is called the specific surface area and can be expressed from the above stated equations as SSA = A/(Vρ) = 3/(rρ), where ρ is the density (the ratio of mass to volume). Every circle on the sphere can be described by a parametric equation: see plane section of an ellipsoid. But more complicated surfaces may intersect a sphere in circles, too: The diagram shows the case where the intersection of a cylinder and a sphere consists of two circles. If the cylinder radius were equal to the sphere's radius, the intersection would be a single circle, along which both surfaces are tangent. In the case of a spheroid with the same center and major axis as the sphere, the intersection would consist of two points (vertices), where the surfaces are tangent.
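The claim that the surface area is the derivative of the volume with respect to the radius can be checked with a central difference. A small sketch (function names and the step size are our own):

```python
import math

def ball_volume(r):
    """Volume of a ball of radius r: (4/3)*pi*r^3."""
    return 4.0 / 3.0 * math.pi * r ** 3

def sphere_area(r):
    """Surface area of a sphere of radius r: 4*pi*r^2."""
    return 4.0 * math.pi * r ** 2

# central difference of the volume reproduces the surface area: dV/dr = A(r)
r, h = 2.0, 1e-6
dV_dr = (ball_volume(r + h) - ball_volume(r - h)) / (2.0 * h)
# dV_dr ~ sphere_area(2.0) = 16*pi
```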
If the sphere is described by a parametric representation, one gets Clelia curves if the angles are connected by the equation φ = cθ. Special cases are: Viviani's curve (c = 1) and spherical spirals. In navigation, a rhumb line or loxodrome is an arc crossing all meridians of longitude at the same angle. A rhumb line is not a spherical spiral. There is no simple connection between the angles formula_16 and formula_43. If a sphere is intersected by another surface, there may be more complicated spherical curves. The intersection of the sphere with equation formula_44 and the cylinder with equation formula_45 is not just one or two circles. It is the solution of the nonlinear system of equations A sphere is uniquely determined by four points that are not coplanar. More generally, a sphere is uniquely determined by four conditions such as passing through a point, being tangent to a plane, etc. This property is analogous to the property that three non-collinear points determine a unique circle in a plane. Consequently, a sphere is uniquely determined by (that is, passes through) a circle and a point not in the plane of that circle. By examining the common solutions of the equations of two spheres, it can be seen that two spheres intersect in a circle and the plane containing that circle is called the radical plane of the intersecting spheres. Although the radical plane is a real plane, the circle may be imaginary (the spheres have no real point in common) or consist of a single point (the spheres are tangent at that point). The angle between two spheres at a real point of intersection is the dihedral angle determined by the tangent planes to the spheres at that point. Two spheres intersect at the same angle at all points of their circle of intersection. They intersect at right angles (are orthogonal) if and only if the square of the distance between their centers is equal to the sum of the squares of their radii.
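The orthogonality criterion at the end of the paragraph is easy to test directly: two spheres are orthogonal if and only if the squared distance between their centers equals the sum of the squared radii. A small sketch (function name and tolerance are our own):

```python
import math

def spheres_orthogonal(c1, r1, c2, r2, tol=1e-12):
    """Two spheres meet at right angles iff |c1 - c2|^2 == r1^2 + r2^2."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return abs(d2 - (r1 * r1 + r2 * r2)) <= tol

# two unit spheres whose centers are sqrt(2) apart intersect orthogonally
ok = spheres_orthogonal((0.0, 0.0, 0.0), 1.0, (math.sqrt(2.0), 0.0, 0.0), 1.0)
# the same unit spheres 3 apart do not even intersect
bad = spheres_orthogonal((0.0, 0.0, 0.0), 1.0, (3.0, 0.0, 0.0), 1.0)
```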
If f(x, y, z) = 0 and g(x, y, z) = 0 are the equations of two distinct spheres, then s·f(x, y, z) + t·g(x, y, z) = 0 is also the equation of a sphere for arbitrary values of the parameters "s" and "t". The set of all spheres satisfying this equation is called a pencil of spheres determined by the original two spheres. In this definition a sphere is allowed to be a plane (infinite radius, center at infinity) and if both the original spheres are planes then all the spheres of the pencil are planes, otherwise there is only one plane (the radical plane) in the pencil. If the pencil of spheres does not consist of all planes, then there are three types of pencils: All the tangent lines from a fixed point of the radical plane to the spheres of a pencil have the same length. The radical plane is the locus of the centers of all the spheres that are orthogonal to all the spheres in a pencil. Moreover, a sphere orthogonal to any two spheres of a pencil of spheres is orthogonal to all of them and its center lies in the radical plane of the pencil. A "great circle" on the sphere has the same center and radius as the sphere, consequently dividing it into two equal parts. The plane sections of a sphere are called "spheric sections", which are either great circles for planes through the sphere's center or "small circles" for all others. Any plane that includes the center of a sphere divides it into two equal hemispheres. Any two intersecting planes that include the center of a sphere subdivide the sphere into four lunes or biangles, the vertices of which coincide with the antipodal points lying on the line of intersection of the planes. Any pair of points on a sphere that lie on a straight line through the sphere's center (i.e. the diameter) are called "antipodal points"; on the sphere, the distance between them is exactly half the length of the circumference. Any other (i.e. not antipodal) pair of distinct points on a sphere lie on a unique great circle, which they divide into one minor and one major arc; the length of the minor arc is the "great-circle distance" between them. Spherical geometry shares many analogous properties with Euclidean geometry once equipped with this "great-circle distance".
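The great-circle distance can be computed from the angle between the two radius-vectors: for unit vectors it is r·arccos of their dot product. A small sketch (function name and the angle convention are our own, matching the polar/azimuth parametrization used earlier):

```python
import math

def great_circle_distance(p, q, r=1.0):
    """Length of the shorter great-circle arc between two points given as
    (polar, azimuth) angle pairs on a sphere of radius r."""
    (t1, f1), (t2, f2) = p, q
    a = (math.sin(t1) * math.cos(f1), math.sin(t1) * math.sin(f1), math.cos(t1))
    b = (math.sin(t2) * math.cos(f2), math.sin(t2) * math.sin(f2), math.cos(t2))
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))  # clamp rounding
    return r * math.acos(dot)

# from the north pole to a point on the equator: a quarter circumference
d = great_circle_distance((0.0, 0.0), (math.pi / 2, 0.0))  # pi/2 on the unit sphere
```

Antipodal points give the maximum value πr, i.e. half the circumference, as stated above.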
And a much more abstract generalization of geometry also uses the same distance concept in the Riemannian circle. The hemisphere is conjectured to be the optimal (least area) isometric filling of the Riemannian circle. The antipodal quotient of the sphere is the surface called the real projective plane, which can also be thought of as the northern hemisphere with antipodal points of the equator identified. Terms borrowed directly from the geography of the Earth, despite its spheroidal shape having greater or lesser departures from a perfect sphere (see geoid), are widely understood. In geometry unrelated to astronomical bodies, geocentric terminology should be used only for illustration and "noted" as such, unless there is no chance of misunderstanding. If a particular point on a sphere is (arbitrarily) designated as its "north pole", its antipodal point is called the "south pole". The great circle equidistant from both poles is then the "equator". Great circles through the poles are called lines of longitude (or meridians). A line "not on the sphere" but through its center connecting the two poles "may" be called the axis of rotation. Circles on the sphere that are parallel to the equator (i.e. not great circles) are lines of latitude. Spheres can be generalized to spaces of any number of dimensions. For any natural number "n", an ""n"-sphere," often written as "S"ⁿ, is the set of points in ("n" + 1)-dimensional Euclidean space that are at a fixed distance "r" from a central point of that space, where "r" is, as before, a positive real number. In particular: Spheres for "n" > 2 are sometimes called hyperspheres. The "n"-sphere of unit radius centered at the origin is denoted "S"ⁿ and is often referred to as "the" "n"-sphere. Note that the ordinary sphere is a 2-sphere, because it is a 2-dimensional surface (which is embedded in 3-dimensional space). The surface area of the unit ("n" − 1)-sphere is 2π^("n"/2)/Γ("n"/2), where Γ is Euler's gamma function.
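The gamma-function formula can be evaluated directly, and the low-dimensional cases recover familiar values: for "n" = 2 the unit circle's circumference 2π, for "n" = 3 the ordinary sphere's area 4π. A small sketch (function names are our own; the companion unit-ball volume π^(n/2)/Γ(n/2 + 1) is the standard result):

```python
import math

def unit_sphere_area(n):
    """Surface area of the unit (n-1)-sphere in R^n: 2*pi^(n/2) / Gamma(n/2)."""
    return 2.0 * math.pi ** (n / 2) / math.gamma(n / 2)

def unit_ball_volume(n):
    """Volume of the unit n-ball: pi^(n/2) / Gamma(n/2 + 1), i.e. area / n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

a2 = unit_sphere_area(2)    # circumference of the unit circle, 2*pi
a3 = unit_sphere_area(3)    # area of the ordinary unit sphere, 4*pi
v3 = unit_ball_volume(3)    # volume of the unit ball, (4/3)*pi
```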
Equivalently, the volume of the unit "n"-ball is the surface area of its boundary times 1/"n", namely π^("n"/2)/Γ("n"/2 + 1). General recursive formulas also exist for the volume of an "n"-ball. More generally, in a metric space ("E", "d"), the sphere of center "x" and radius "r" > 0 is the set of points "y" such that d(x, y) = r. If the center is a distinguished point that is considered to be the origin of "E", as in a normed space, it is not mentioned in the definition and notation. The same applies for the radius if it is taken to equal one, as in the case of a unit sphere. Unlike a ball, even a large sphere may be an empty set. For example, in Zⁿ with Euclidean metric, a sphere of radius "r" is nonempty only if r² can be written as a sum of "n" squares of integers. In topology, an "n"-sphere is defined as a space homeomorphic to the boundary of an ("n" + 1)-ball; thus, it is homeomorphic to the Euclidean "n"-sphere, but perhaps lacking its metric. The "n"-sphere is denoted "S"ⁿ. It is an example of a compact topological manifold without boundary. A sphere need not be smooth; if it is smooth, it need not be diffeomorphic to the Euclidean sphere (an exotic sphere). The Heine–Borel theorem implies that a Euclidean "n"-sphere is compact. The sphere is the inverse image of a one-point set under the continuous function ‖"x"‖. Therefore, the sphere is closed. "S"ⁿ is also bounded; therefore it is compact. Remarkably, it is possible to turn an ordinary sphere inside out in a three-dimensional space with possible self-intersections but without creating any crease, in a process called sphere eversion. The basic elements of Euclidean plane geometry are points and lines. On the sphere, points are defined in the usual sense. The analogue of the "line" is the geodesic, which is a great circle; the defining characteristic of a great circle is that the plane containing all its points also passes through the center of the sphere. Measuring by arc length shows that the shortest path between two points lying on the sphere is the shorter segment of the great circle that includes the points.
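The possibly empty lattice sphere can be demonstrated by brute force in Z³: the sphere of squared radius 7 is empty (7 is not a sum of three squares), while squared radius 1 contains exactly the six unit vectors. A small sketch (function name is our own):

```python
def lattice_sphere_points(r2, n=3):
    """Points of Z^n at squared Euclidean distance r2 from the origin, found
    by brute force; the sphere is empty iff r2 is not a sum of n squares."""
    if r2 < 0:
        return []
    bound = int(r2 ** 0.5)
    pts = []

    def rec(prefix, remaining):
        if len(prefix) == n:
            if remaining == 0:
                pts.append(tuple(prefix))
            return
        for k in range(-bound, bound + 1):
            if k * k <= remaining:
                rec(prefix + [k], remaining - k * k)

    rec([], r2)
    return pts

empty = lattice_sphere_points(7)   # 7 = 4^0*(8*0+7) is not a sum of 3 squares
units = lattice_sphere_points(1)   # the 6 signed unit vectors
```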
Many theorems from classical geometry hold true for spherical geometry as well, but not all do because the sphere fails to satisfy some of classical geometry's postulates, including the parallel postulate. In spherical trigonometry, angles are defined between great circles. Spherical trigonometry differs from ordinary trigonometry in many respects. For example, the sum of the interior angles of a spherical triangle always exceeds 180 degrees. Also, any two similar spherical triangles are congruent. In their book "Geometry and the Imagination" David Hilbert and Stephan Cohn-Vossen describe eleven properties of the sphere and discuss whether these properties uniquely determine the sphere. Several properties hold for the plane, which can be thought of as a sphere with infinite radius. These properties are:
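The failure of the 180-degree angle sum can be seen numerically: the angle at a vertex is the angle between the tangent directions of the two great-circle arcs leaving it. The sketch below (function names are our own) computes the angles of the octant triangle with vertices on the coordinate axes, whose three right angles sum to 270 degrees:

```python
import math

def spherical_angle(a, b, c):
    """Angle of the spherical triangle abc at vertex a (unit vectors):
    the angle between the great-circle arcs a->b and a->c."""
    def tangent(u, v):
        # component of v orthogonal to u: the direction of the arc u->v at u
        d = sum(x * y for x, y in zip(u, v))
        w = [y - d * x for x, y in zip(u, v)]
        n = math.sqrt(sum(x * x for x in w))
        return [x / n for x in w]
    t1, t2 = tangent(a, b), tangent(a, c)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(t1, t2))))
    return math.acos(dot)

# octant triangle: three mutually orthogonal vertices, three right angles
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = (spherical_angle(A, B, C) + spherical_angle(B, C, A)
         + spherical_angle(C, A, B))   # 3*pi/2, i.e. 270 degrees > 180
```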
https://en.wikipedia.org/wiki?curid=27859
Sápmi The region stretches over four countries: Norway, Sweden, Finland, and Russia. On the north it is bounded by the Barents Sea, on the west by the Norwegian Sea, and on the east by the White Sea. The area is historically referred to as Lapland in English, although the term "Lapp" for its inhabitants is now often discouraged. Sápmi refers to the areas where the Sámi people have traditionally lived, but overlaps with other regions and definitions, including regions where Scandinavian settlement predates Sámi settlement and where the Sámi are only a tiny minority, e.g. Trøndelag. In practice most of the Sámi population is concentrated in a few traditional areas in the northernmost part of Sápmi, such as Kautokeino and Karasjok, with the exception of those who have left for the larger cities. The Sami people are estimated to make up only around 2.5% to 5% of the total population in the Sápmi area. No political organization advocates secession, although several groups desire more territorial autonomy and/or more self-determination for the region's indigenous population. Sápmi (and corresponding terms in other Sami languages) refers to both the Sami land and the Sami people. The word "Sámi" is the accusative-genitive form of the noun "Sápmi", making the name "Sámi olbmot" mean "people of Sápmi". The origin of the word is speculated to be related to the Baltic word "*žēmē", meaning "land". Also "Häme", the Finnish name for Tavastia, a historical province of Finland, is thought to have the same origin, and the same word is at least speculated to be the origin of "Suomi", the Finnish name for Finland. Sápmi is the name in North Sami, while the Julev Sami name is "Sábme" and the South Sami name is "Saepmie". In modern Norwegian and Swedish, Sápmi is known as "Sameland", but in older Swedish the region was known as "Lappmarken" or "Lappland".
Originally these names referred to the entire Sápmi, but subsequently became applied to areas "exclusively" inhabited by the Sami. "Lappland" (Laponia) became the name of Sweden's northernmost province ("landskap"), which in 1809 was split into one part that remained Swedish and one part falling under Finland (which became part of the Russian Empire). "Lappland" survives as the name of both Sweden's northernmost province and Finland's, the latter also containing part of the old Ostrobothnian province. The terms "Lapp" and "Lappland" are regarded as offensive by some Sami people, who prefer the area's name in their own language, "Sápmi". In older Norwegian, Sápmi was known as "Finnmork" or "Finnmark", which is now the name of Norway's northernmost county. Northern Norway and Murmansk Oblast are sometimes marketed as "Norwegian Lapland" and "Russian Lapland", respectively. In the 17th century, Johannes Schefferus assumed the etymology of the term "Lapland" to be related to the Swedish word for "running", "löpa" (cognate with English "to leap"). The largest part of Sápmi lies north of the Arctic Circle. The western portion is an area of fjords, deep valleys, glaciers and mountains, the highest point being Mount Kebnekaise in Swedish Lapland. The Swedish part of Sápmi is characterized by great rivers running from the northwest to the southeast. From the Norwegian province of Troms and Finnmark and eastward, the terrain is that of a low plateau with many marshes and lakes, the largest of which is Lake Inari in Finnish Lapland. The extreme northeastern section lies within the tundra region, but it does not have permafrost. In the 19th century scientific expeditions to Sápmi were undertaken, for instance by Jöns Svanberg. The climate is subarctic and vegetation is sparse, except in the densely forested southern portion. The mountainous west coast has significantly milder winters and more precipitation than the large areas east of the mountain chain.
North of the Arctic Circle the polar night characterizes the winter season and the midnight sun the summer season; both phenomena last longer the farther north one goes. Traditionally, the Sami divide the year into eight seasons instead of four. Reindeer, wolf, bear, and birds are the main forms of animal life, in addition to a myriad of insects in the short summer. Sea and river fisheries abound in the region. Steamers are operated on some of the lakes, and many ports are ice-free throughout the year. All ports along the Norwegian Sea in the west and the Barents Sea in the northeast to Murmansk are ice-free all year. The Gulf of Bothnia usually freezes over in winter. The ocean floor to the north and west of Sápmi has deposits of petroleum and natural gas. Sápmi contains valuable mineral deposits, particularly iron ore in Sweden, copper in Norway, and nickel and apatite in Russia. "East Sápmi" consists of the Kola peninsula and the Lake Inari region and is home to the eastern Sami languages. While it is the most heavily populated part of Sápmi, it is also the region where the indigenous population and its culture are weakest. It corresponds to the regions marked 6 through 9 on the map below. "Central Sápmi" consists of the western part of Finland's Sami Domicile Area, the parts of Norway north of the Saltfjellet mountains and areas on the Swedish side corresponding to this. Central Sápmi is the region where Sami culture is strongest, and home to North Sami, the most widely used Sami language. In the southernmost part of this subregion, however, Sami culture is rather weak; this is where the moribund "Bithun" Sami language is used. The areas around the Tysfjord fjord in Norway and the river Lule in Sweden are home to the "Julev" Sami language, one of the more widely used Sami languages. These correspond to the regions marked 3 through 5 on the map below.
"South Sápmi" consists of the areas south of Saltfjellet and corresponding areas in Sweden, and is home to the southern languages. In this area Sami culture is mostly visible inland and on the coast of the Baltic Sea, and the languages are spoken by few. It corresponds to the regions marked 1 and 2 on the map below, to the south-east of region 1 in Sweden. The inner parts of Sápmi are often referred to as Lappi. The name is also found on the Russian side as "Laplandige" (the name of a natural reservation), and the Norwegian landscape of Finnmark is sometimes called the "Norwegian Lapland", especially by the travel industry. "Lappi-" appears as a common component of place-names throughout central and southern Finland as well; in many cases, it probably refers to earlier Sami presence, though in some cases the underlying meaning may be merely "periphery" or "outlying district". Finally, Sápmi may also be sub-divided into cultural regions according to the state borders, which obviously affect daily life for people regardless of their ethnicity. These regions are commonly referred to as "sides" by Sami, for example "the Norwegian side" ("norgga bealli") or "the Finnish side" ("suoma bealli"). The Saamic languages are the region's main minority languages and also its original languages. They belong to the Uralic language family, and are most closely related to the Finnic languages. Many Sami languages are mutually unintelligible, but the languages originally formed a dialect continuum stretching southwest-northeast, so that a message could hypothetically be passed between Sami speakers from one end to the other and be understood by all. Today, however, many of the languages are moribund and thus there are "gaps" in the original continuum. On the map to the right numbers indicate Sámi Languages (darkened areas represent municipalities that recognize Sami as an official language): 1. South (Åarjil) Sámi, 2. Ume (Upme) Sámi, 3. Pite (Bitthun) Sámi, 4. Lule (Julev) Sámi, 5.
North (Davvi) Sámi, 6. Skolt Sámi, 7. Inari (Ánár) Sámi, 8. Kildin Sámi, 9. Ter Sámi. Of these languages the North one is by far the most vital; whereas Ume, Pite and Ter seem to be dying languages. Kemi Sámi is extinct. North Sami is subdivided into three main dialects: West, East, and Coast. The written standard is based on the Western dialect. The language spoken by most people in the region is Russian, which is an East Slavic language. It is the dominant language on the Russian side of the border and also spoken by recently immigrated minority groups elsewhere in Sápmi. Earlier, a common pidgin language was spoken on the northern coast of Sápmi that combined elements of Russian, Norwegian, North Sami, and Kven. This language was known as Russenorsk. On the Russian side, there are also speakers of the East Slavic Belarusian and Ukrainian languages. Norwegian and Swedish dominate the largest part of Sápmi, including the entire Southern region and most of the Central region. There also used to be minorities speaking Norwegian on the Kola Peninsula. The Scandinavian languages are to a very large degree mutually intelligible, much more so than South Sami and North Sami. The Norwegian dialects spoken particularly in North and Central Norway Sami areas differ very much from the written bokmål standard. In Central Sápmi the Sámi dialects have taken the Scandinavian language trait of having a more or less constant emphasis on the first syllable of each spoken word. In the inner and northernmost parts of Sweden and Norway, however, people often speak Norwegian and Swedish close to the written standard, though with a heavy Uralic accent. The Finnic (i.e. Baltic Finnic) languages are spoken on the Finnish (Finnish), Swedish (Meänkieli—spoken by the Tornedalians) and Norwegian (Kven) sides of the borders. There also used to be minorities speaking Finnish on the Kola Peninsula. The languages are as mutually intelligible as the Scandinavian languages. 
Other Finnic languages include Karelian, Estonian, Livonian, Veps, Votic and Izhoran. Many are mutually intelligible. The number of people living in Sápmi is about 2 million, though it is difficult to give the precise number of inhabitants since certain counties and provinces only include parts of Sápmi. It is also difficult to account for the distribution of ethnic groups, as many people have double or multiple ethnic identities, both seeing themselves as members of the majority population and being part of one or more minority groups. Different criteria are set when calculating the number of Sami, but the number is generally between 80,000 and 100,000. Many live in areas outside Sápmi such as Oulu, Oslo, Stockholm and Helsinki. Some Sami people have migrated to places outside the Sápmi vernacular region, such as Canada and the United States. Many Sami people have settled in the northern parts of Minnesota. About 900,000 people inhabit Murmansk province (oblast'), but parts of this area lie outside Sápmi. About 758,600 of Murmansk's population claim to be exclusively Russian. Ethnic Russians also live elsewhere in Sápmi. The Russian side of Sápmi is ethnically diverse, with particularly large Ukrainian and Belarusian minorities. The Sami are one of the minor minorities in this part of Sápmi. About 850,000 people inhabit the Norwegian regions of North Norway (fully within Sápmi) and Trøndelag (mostly within Sápmi). However, many of the regions' inhabitants, particularly those of North Norway, are not exclusively Norwegian. Notable minority groups include the Sami, Finns, and Kvens. About 700,000 people inhabit the Swedish counties Norrbotten, Västerbotten, Västernorrland, and Jämtland. Many of the counties' inhabitants are not exclusively Swedish. Notable minority groups in the former three counties include the Sami, Tornedalians, and Finns. 13,226 people inhabit the Sami native region of Lapland, Finland. A great portion of these are Sami.
These two ethnic groups, closely related to each other and also to the Finns, mainly live on the Finnish, Swedish and Norwegian sides of Nordkalotten, respectively. In Sweden, there are two meanings to the word 'Norrbotten'. One is the older 'landskap' Norrbotten, which is much smaller than the modern county, 'län', Norrbotten, which encompasses all of north Sweden from Jävre. Norrbottens län also encompasses the northern part of the 'landskap' Lappland. The modern county Norrbotten has only a small minority of reindeer-herding Sami. The Tornedalians, who have lived, hunted, fished and farmed, mainly south and east of the line of arable climate and land (that line approximately coincides with the border between the two landscapes Lappland and Norrbotten), for 700–1000 years, are a much larger minority. There are also many Tornedalians in the mining district around Kiruna and Gällivare, and the increasing restrictions on leisure, movement, fishing, and hunting for all but the reindeer-herding Sami minority are controversial and contested in Norrbotten. Norway, Finland and Sweden all have Sami Parliaments that to varying degrees are involved in governing the region, though mostly they only have authority over the matters of the Sami citizens of the states in which they are situated. Every Norwegian citizen registered as a Sami has the right to vote in the elections for the Sami Parliament of Norway. Elections are held every four years by direct vote from seven constituencies covering all of Norway (six of which are in Sápmi), and run parallel to the general Norwegian parliamentary elections. This is the Sami Parliament with the most influence over any part of Sápmi, as it is involved in the autonomy established by the Finnmark Act. The parliament is in Kárášjohka and its current president is Aili Keskitalo from the Norwegian Sami Association.
The Sami Parliament of Sweden, situated in Kiruna (Northern Sami: "Giron"), is elected by a general vote in which all registered Sami citizens of Sweden may take part. The current president is Lars-Anders Baer. Voting for elections to the Sámi Parliament of Finland is restricted to inhabitants of the Sami Domicile Area. The Parliament is in Inari, and its current president is Pekka Aikio. In Russia there is no Sami Parliament. There are two Sami organizations that are members of the national umbrella organisation of indigenous peoples, the Russian Association of Indigenous Peoples of the North (RAIPON), and represent the Russian Sami in the Sami Council. RAIPON is represented in Russia's Public Chamber by Pavel Sulyandziga. On 14 December 2008 the first Congress of the Russian Sámi took place. The Congress decided to demand the formation of a Russian Sámi Parliament, to be elected by the local Sami. A suggestion to have the Russian Federation pick representatives to the Parliament was voted down by a clear majority. The Congress also chose a Council of Representatives that was to work for the establishment of a parliament and otherwise represent the Russian Sami; it is headed by Valentina Sovkina. On 2 March 2000, the Sami parliaments of Norway and Finland founded the Sami Parliamentary Council, and the Sami Parliament of Sweden joined two years later. Each parliament sends seven representatives, and observers are sent from the Sami organizations of Russia and the Sami Council. The Sami Parliamentary Council discusses cross-border cooperation, hands out the annual "Gollegiella" language development award, and represents the Sami people abroad. In addition to the parliaments and their common council, there is a Saami Council based on Saami organizations. This council organizes interstate cooperation between the Saami and often represents the Saami in international fora such as the Barents Region.
This organization is older than the Parliamentary Council, but not connected to the parliaments except that some of the NGOs double as party lists in Sami parliament elections. The Russian Federation consists of several types of subunits. The Russian side of Sápmi lies within Murmansk Oblast. Oblasts are governed by popularly elected parliaments and formally headed by governors. The governors are nominated by the president of Russia, and accepted or rejected by the local parliaments. However, should the parliament refuse to accept the president's nominee, the president is entitled to dissolve parliament and call local elections. The current acting governor was appointed on 21 March 2019 after the resignation of Marina Kovtun. Murmansk Oblast covers the Kola Peninsula and is home to Murmansk, the largest city north of the Arctic Circle and in Sápmi. It is subdivided into several districts, of which the geographically largest is Lovozersky District. This is also the part of Russia where the Sami population is most numerous and visible. In the west of the province there is a large nature reserve known as "Laplandiya". The counties of Norway are governed by popularly elected assemblies, headed by county mayors. Formally, the counties are headed by county governors, but in practice these have limited influence today. The largest of Norway's counties, Finnmárku (Northern Sami) or Finnmark (Norwegian), is in Sápmi and has a special form of autonomy: 95% of the area is owned by the Finnmark Estate. The board of the Estate consists of an equal number of representatives from the Sami Parliament of Norway and Finnmark's county council; the two institutions appoint the leader of the board alternately. The administrative centre of Finnmárku (Finnmark) is Čáhcesuolu or Vadsø, in the far east of the county. The current county governor is Runar Sjåstad from the Norwegian Labour Party. Romsa or Troms is southwest of Finnmárku.
Its administrative centre is the city after which the county is named, Romsa or Tromsø. Romsa is North Norway's biggest city and, after Murmansk, Sápmi's biggest city. The current "fylkesordfører" is Terje Olsen from the Conservative Party. A solution similar to the Finnmark Estate, Hålogalandsallmenningen, has been proposed for Romsa county and its southern neighbour Nordlánda. Nordland covers a long strip of coast that includes North Sami, Julev Sami, Bithun Sami, and South Sami areas. Its administrative centre is Bådåddjo or Bodø. The current county governor is Mariette Korsrud from the Norwegian Labour Party. The southernmost parts of Norwegian Sápmi lie in Nord-Trøndelag and partially in Sør-Trøndelag, whose administrative centres are Steinkjer and Trondheim respectively. The latter city is outside Sápmi but well known as the site of the first international Sami conference in February 1917. The county governors are Gunnar Viken (the Conservative Party) in Nord-Trøndelag and Tore Sandvik (Norwegian Labour Party) in Sør-Trøndelag. Lapland is a large northwestern province of Sweden, wholly within Sápmi. The traditional provinces of Sweden are cultural and historical entities; for administrative and political purposes they were replaced by the counties of Sweden ("län") in 1634. Five counties are wholly or partially within Sápmi. "Län" are formally governed by the "landshövding", an envoy of the government who runs the government-appointed "länsstyrelse" that coordinates administration with national political goals for the county. Much of county politics is run by the county council or "landsting", which is elected by the inhabitants of the county; but the counties' top positions are still determined by those who win the general elections of Sweden. Norrbotten is mostly covered by Sápmi, although the lower Tornedalen region is often excluded.
The administrative centre is Luleå in the Julev Sami area (Norrbotten includes North, Julev and Bithun areas). The current landshövding is Per-Ola Eriksson of the Centre Party. Sápmi covers most of the interior of Västerbotten, which contains Ubmeje and South Sami regions. The administrative centre is Umeå, and the current landshövding is Chris Heister from the conservative Moderate Party. Västernorrland is an old part of Sápmi and remains so; many Sami live along its coast on the Baltic Sea (Gulf of Bothnia). Jämtland is sometimes considered a part of the Sápmi cultural region, and is a South Sami county. The administrative centre is Östersund. The current landshövding is Jöran Hägglund from the Centre Party (Centerpartiet). Finland is subdivided into nineteen regions ("maakunta"). The regions are governed by regional councils, which are generally forums of cooperation between the municipalities and are not elected by direct popular vote. Lapland (Lappi) is the northernmost of the regions and stretches farther south than Sápmi. North Sami, Skolt Sami, and Aanaar Sami are indigenous to the region. Four municipalities in the northern part of Finnish Lapland constitute the Sami Domicile Area, "Sámiid Ruovttoguovlu", a region that is autonomous on issues regarding Sami culture and language. The region has its own football team, the Sápmi football team, which is organized by FA Sápmi. It is a member of ConIFA and hosted the 2014 ConIFA World Football Cup. The Sápmi football team won the 2006 VIVA World Cup and hosted the 2008 event. The Tour de Barents is a cross-country skiing race held in the region. The following towns and villages have a significant Sami population or host Sami institutions. Norwegian, Swedish, Finnish, or Russian toponyms are in parentheses.
https://en.wikipedia.org/wiki?curid=27861
Sydney Sydney ( ) is the state capital of New South Wales and the most populous city in Australia and Oceania. Located on Australia's east coast, the metropolis surrounds Port Jackson and extends about on its periphery towards the Blue Mountains to the west, Hawkesbury to the north, the Royal National Park to the south and Macarthur to the south-west. Sydney is made up of 658 suburbs, 40 local government areas and 15 contiguous regions. Residents of the city are known as "Sydneysiders". As of June 2019, Sydney's estimated metropolitan population was 5,312,163, meaning it is home to approximately 65% of the state's population. Indigenous Australians have inhabited the Sydney area for at least 30,000 years, and thousands of engravings remain throughout the region, making it one of the richest areas in Australia in terms of Aboriginal archaeological sites. During his first Pacific voyage in 1770, Lieutenant James Cook and his crew became the first Europeans to chart the eastern coast of Australia, making landfall at Botany Bay and inspiring British interest in the area. In 1788, the First Fleet of convicts, led by Arthur Phillip, founded Sydney as a British penal colony, the first European settlement in Australia. Phillip named the settlement after Thomas Townshend, 1st Viscount Sydney. Penal transportation to New South Wales ended soon after Sydney was incorporated as a city in 1842. A gold rush occurred in the colony in 1851, and over the next century, Sydney transformed from a colonial outpost into a major global cultural and economic centre. After World War II, it experienced mass migration and became one of the most multicultural cities in the world. More than 250 different languages are spoken in Sydney; in the 2016 Census, about 35.8% of residents spoke a language other than English at home.
Furthermore, 45.4% of the population reported having been born overseas, and the city has the third-largest foreign-born population of any city in the world after London and New York City. Despite being one of the most expensive cities in the world, Sydney frequently ranks in the top ten most liveable cities in the world. It is classified as an Alpha+ World City by the Globalization and World Cities Research Network, indicating its influence in the region and throughout the world. Ranked eleventh in the world for economic opportunity, Sydney has an advanced market economy with strengths in finance, manufacturing and tourism. There is a significant concentration of foreign banks and multinational corporations in Sydney, and the city is promoted as Australia's financial capital and one of the Asia-Pacific's leading financial hubs. Established in 1850, the University of Sydney was Australia's first university and is regarded as one of the world's leading universities. Sydney is also home to the oldest library in Australia, the State Library of New South Wales, opened in 1826. Sydney has hosted major international sporting events such as the 2000 Summer Olympics. The city is among the top fifteen most-visited cities in the world, with millions of tourists coming each year to see the city's landmarks. Among its many nature reserves and parks, notable natural features include Sydney Harbour, the Royal National Park, the Royal Botanic Garden and Hyde Park, the oldest parkland in the country. Built attractions such as the Sydney Harbour Bridge and the World Heritage-listed Sydney Opera House are also well known to international visitors. The main passenger airport serving the metropolitan area is Kingsford-Smith Airport, one of the world's oldest continually operating airports. Established in 1906, Central station, the largest and busiest railway station in the state, is the main hub of the city's rail network.
The first people to inhabit the area now known as Sydney were Indigenous Australians who had migrated from northern Australia and, before that, from southeast Asia. While radiocarbon dating has shown evidence of human activity in the Sydney area from around 30,000 years ago, Aboriginal stone tools found in Western Sydney's gravel sediments indicate there was human settlement in the region from as far back as 45,000 to 50,000 years BP. The first meeting between the native people and the British occurred on 29 April 1770, when Lieutenant James Cook landed at Botany Bay on the Kurnell Peninsula and encountered the Gweagal clan. He noted in his journal that they were confused and somewhat hostile towards the foreign visitors. Cook was on a mission of exploration and was not commissioned to start a settlement. He spent a short time collecting food and conducting scientific observations before continuing further north along the east coast of Australia and claiming the new land he had discovered for Britain. Prior to the arrival of the British, there were 4,000 to 8,000 native people in Sydney from as many as 29 different clans. The earliest British settlers called the natives the Eora people. "Eora" is the term the indigenous population used to explain their origins upon first contact with the British; its literal meaning is "from this place". Sydney Cove from Port Jackson to Petersham was inhabited by the Cadigal clan. The principal language groups were Darug, Guringai, and Dharawal. The earliest Europeans to visit the area noted that the indigenous people were conducting activities such as camping and fishing, using trees for bark and food, collecting shells, and cooking fish. Britain—before that, England—and Ireland had for a long time been sending their convicts across the Atlantic to the American colonies. That trade ended with the Declaration of Independence by the United States in 1776.
Britain decided in 1786 to found a new penal outpost in the territory discovered by Cook some 16 years earlier. Captain Phillip led the First Fleet of 11 ships and about 850 convicts into Botany Bay on 18 January 1788, but deemed the location unsuitable due to poor soil and a lack of fresh water. He travelled a short way further north and arrived at Sydney Cove on 26 January 1788. This was to be the location for the new colony. Phillip described Port Jackson as being "without exception the finest harbour in the world". The colony was at first to be titled "New Albion" (after Albion, another name for Great Britain), but Phillip decided on "Sydney". The official proclamation and naming of the colony happened on 7 February 1788. Lieutenant William Dawes produced a town plan in 1790, but it was ignored by the colony's leaders; Sydney's layout today reflects this lack of planning. Between 1788 and 1792, 3,546 male and 766 female convicts were landed at Sydney—many of them "professional criminals" with few of the skills required for the establishment of a colony. The food situation reached crisis point in 1790: early efforts at agriculture were fraught and supplies from overseas were scarce. From 1791 on, however, the more regular arrival of ships and the beginnings of trade lessened the feeling of isolation and improved supplies. The colony was not founded on principles of freedom and prosperity. Maps from this time show no prison buildings; the punishment for convicts was transportation rather than incarceration, but serious offences were penalised by flogging and hanging. Phillip sent exploratory missions in search of better soils, fixed on the Parramatta region as a promising area for expansion, and from late 1788 moved many of the convicts there to establish a small township, which became the main centre of the colony's economic life, leaving Sydney Cove only as an important port and focus of social life.
Poor equipment and unfamiliar soils and climate continued to hamper the expansion of farming from Farm Cove to Parramatta and Toongabbie, but a building programme, assisted by convict labour, advanced steadily. Officers and convicts alike faced starvation as supplies ran low and little could be cultivated from the land. The region's indigenous population was also suffering: it is estimated that half of the native people in Sydney died during the smallpox epidemic of 1789. Enlightened for his age, Phillip intended to establish harmonious relations with the local Aboriginal people and to reform as well as discipline the convicts of the colony. Phillip and several of his officers – most notably Watkin Tench – left behind journals and accounts which tell of immense hardships during the first years of settlement. Part of Governor Lachlan Macquarie's effort to transform the colony was his authorisation for convicts to re-enter society as free citizens. Roads, bridges, wharves, and public buildings were constructed using convict labour, and by 1822 the town had banks, markets, and well-established thoroughfares. Parramatta Road, opened in 1811, is one of Sydney's oldest roads and was Australia's first highway between two cities – Sydney (the present-day city centre) and Parramatta. Conditions in the colony were not conducive to the development of a thriving new metropolis, but the more regular arrival of ships and the beginnings of maritime trade (such as wool) helped to lessen the burden of isolation. Between 1788 and 1792, convicts and their jailers made up the majority of the population; within one generation, however, a population of emancipated convicts who could be granted land began to grow. These people pioneered Sydney's private sector economy and were later joined by soldiers whose military service had expired, and later still by free settlers who began arriving from Britain.
Governor Phillip departed the colony for England on 11 December 1792, the new settlement having survived near starvation and immense isolation for four years. Between 1790 and 1816, Sydney became one of the many sites of the Australian Frontier Wars, a series of conflicts between the Kingdom of Great Britain and the resisting Indigenous clans. In 1790, when the British established farms along the Hawkesbury River, the Aboriginal leader Pemulwuy resisted the Europeans by waging guerrilla-style warfare on the settlers in a series of conflicts known as the Hawkesbury and Nepean Wars, which took place in western Sydney. He raided farms until Governor Macquarie dispatched troops from the British Army 46th Regiment in 1816 and ended the conflict by killing 14 Indigenous Australians in a raid on their campsite. In 1804, Irish convicts led the Castle Hill Rebellion, an uprising against colonial authority in the Castle Hill area of the British colony of New South Wales. The first and only major convict uprising in Australian history to be suppressed under martial law, the rebellion ended in a battle fought between convicts and colonial forces at Rouse Hill. The Rum Rebellion of 1808 was the only successful armed takeover of government in Australian history: the Governor of New South Wales, William Bligh, was ousted by the New South Wales Corps under the command of Major George Johnston, who led the rebellion. Conflicts had arisen between the governors and the officers of the Rum Corps, many of whom were landowners, such as John Macarthur. Early Sydney was moulded by the hardship suffered by early settlers. In the early years, drought and disease caused widespread problems, but the situation soon improved. The military colonial government was reliant on the army, the New South Wales Corps.
Macquarie served as the last autocratic Governor of New South Wales, from 1810 to 1821, and had a leading role in the social and economic development of Sydney, which saw it transition from a penal colony to a budding free society. He established public works, a bank, churches, and charitable institutions and sought good relations with the Aborigines. Over the course of the 19th century, Sydney established many of its major cultural institutions. Governor Lachlan Macquarie's vision for Sydney included the construction of grand public buildings and institutions fit for a colonial capital. Macquarie Street began to take shape as a ceremonial thoroughfare of grand buildings. The year 1840 was the final year of convict transportation to Sydney, which by this time had a population of 35,000. Gold was discovered in the colony in 1851 and with it came thousands of people seeking to make money. Sydney's population reached 200,000 by 1871, and during this time the city entered a period of prosperity which was reflected in the construction of grand edifices. Temperance coffee palaces, hotels, and other civic buildings such as libraries and museums were erected in the city. Demand for infrastructure to support the growing population and subsequent economic activity led to massive improvements to the city's railway and port systems throughout the 1850s and 1860s. After a period of rapid growth, further discoveries of gold in Victoria began drawing new residents away from Sydney towards Melbourne in the 1850s, which created a historically strong rivalry between Sydney and Melbourne. Nevertheless, Sydney exceeded Melbourne's population in the early twentieth century and remains Australia's largest city. Following the depression of the 1890s, the six colonies agreed to form the Commonwealth of Australia. Sydney's beaches had become popular seaside holiday resorts, but daylight sea bathing was considered indecent until the early 20th century.
Under the reign of Queen Victoria, federation of the six colonies occurred on 1 January 1901. Sydney, with a population of 481,000, then became the state capital of New South Wales. The Great Depression of the 1930s had a severe effect on Sydney's economy, as it did on most cities throughout the industrial world. For much of the 1930s, up to one in three breadwinners was unemployed. Construction of the Sydney Harbour Bridge served to alleviate some of the effects of the economic downturn by employing 1,400 men between 1924 and 1932. The population continued to boom despite the Depression, having reached 1 million in 1925. The city had one of the largest tram networks in the British Empire until it was dismantled in 1961. When Britain declared war on Germany in 1939, Australia also entered the war. During the war Sydney experienced a surge in industrial development to meet the needs of a wartime economy; far from mass unemployment, there were now labour shortages, and women became active in male roles. Sydney's harbour was attacked by the Japanese in May and June 1942 in direct attacks by Japanese submarines, with some loss of life. Households throughout the city had built air raid shelters and performed drills. Sydney experienced continued population growth and increasing cultural diversification throughout the post-war period. The people of Sydney warmly welcomed Queen Elizabeth II in 1954 when the reigning monarch stepped onto Australian soil for the first time to commence her Australian Royal Tour; having arrived on the Royal Yacht Britannia through Sydney Heads, she came ashore at Farm Cove. There were 1.7 million people living in Sydney in 1950 and almost 3 million by 1975. The Australian government launched a large-scale multicultural immigration program, and new industries such as information technology, education, financial services and the arts arose. Sydney's iconic Opera House was opened in 1973 by the Queen.
A new skyline of concrete and steel skyscrapers swept away much of the old low-rise and often sandstone skyline of the city in the 1960s and 1970s. Australia Square, completed in 1967, was the tallest building in Sydney until 1976 and is also notable as the first skyscraper in Australia. This prolific growth of contemporary high-rise architecture was put in check by heritage laws from the 1990s onwards, which prevent the demolition of any structure deemed historically significant. Since the 1970s Sydney has undergone a rapid economic and social transformation; as a result, the city has become a cosmopolitan melting pot. To relieve congestion on the Sydney Harbour Bridge, the Sydney Harbour Tunnel opened in August 1992. The 2000 Summer Olympics were held in Sydney and were described by the President of the International Olympic Committee as the "best Olympic Games ever". Sydney has maintained extensive political, economic and cultural influence over Australia as well as international renown in recent decades. Following the Olympics, the city hosted the 2003 Rugby World Cup, APEC Australia 2007 and Catholic World Youth Day 2008, led by Pope Benedict XVI. Sydney is a coastal basin with the Tasman Sea to the east, the Blue Mountains to the west, the Hawkesbury River to the north, and the Woronora Plateau to the south. The inner city measures , the Greater Sydney region covers , and the city's urban area is in size. In terms of physical size, the Sydney metropolitan area is comparable to both Tokyo, at , and Los Angeles, at . Sydney spans two geographic regions. The Cumberland Plain lies to the south and west of the Harbour and is relatively flat. The Hornsby Plateau is located to the north and is dissected by steep valleys. The flat areas of the south were the first to be developed as the city grew. It was not until the construction of the Sydney Harbour Bridge that the northern reaches of the coast became more heavily populated.
Seventy beaches can be found along its coastline, with Bondi Beach being one of the most famous. The Nepean River wraps around the western edge of the city and becomes the Hawkesbury River before reaching Broken Bay. Most of Sydney's water storages can be found on tributaries of the Nepean River. The Parramatta River is mostly industrial and drains a large area of Sydney's western suburbs into Port Jackson. The southern parts of the city are drained by the Georges River and the Cooks River into Botany Bay. Sydney is made up of mostly Triassic rock with some recent igneous dykes and volcanic necks. The Sydney Basin was formed when the Earth's crust expanded, subsided, and filled with sediment in the early Triassic period. The sand that was to become the sandstone of today was washed there by rivers from the south and northwest, and laid down between 360 and 200 million years ago. The sandstone has shale lenses and fossil riverbeds. The Sydney Basin bioregion includes coastal features of cliffs, beaches, and estuaries. Deep river valleys known as rias were carved in the Hawkesbury sandstone of the coastal region where Sydney now lies. The rising sea level between 18,000 and 6,000 years ago flooded the rias to form estuaries and deep harbours. Port Jackson, better known as Sydney Harbour, is one such ria. The most prevalent plant communities in the Sydney region are open grassy woodlands and some pockets of dry sclerophyll forests, which consist of eucalyptus trees, casuarinas, melaleucas, corymbias and angophoras, with shrubs (typically wattles, callistemons, grevilleas and banksias) and semi-continuous grass in the understory. The plants in this community tend to have rough and spiky leaves, as they grow in areas with low soil fertility. Sydney also features a few areas of wet sclerophyll forest, found in the wetter, elevated areas in the north and the northeast.
These forests are defined by straight, tall tree canopies with a moist understory of soft-leaved shrubs, tree ferns and herbs. Sydney is home to dozens of bird species, which commonly include the Australian raven, Australian magpie, crested pigeon, noisy miner and the pied currawong, among others. Introduced bird species ubiquitously found in Sydney are the common myna, common starling, house sparrow and the spotted dove. Reptile species are also numerous and predominantly include skinks. Sydney has a few mammal and spider species, such as the grey-headed flying fox and the Sydney funnel-web respectively, and a huge diversity of marine species inhabiting its harbour and many beaches. Under the classic system, Sydney has a temperate climate, but under the Köppen–Geiger classification it has a humid subtropical climate ("Cfa") with warm summers, cool winters and uniform rainfall throughout the year. At Sydney's primary weather station at Observatory Hill, extreme temperatures have ranged from on 18 January 2013 to on 22 June 1932. An average of 14.9 days a year have temperatures at or above in the central business district (CBD). In contrast, the metropolitan area averages between 35 and 65 such days, depending on the suburb. The highest minimum temperature recorded at Observatory Hill is , on 6 February 2011, while the lowest maximum temperature is , recorded on 19 July 1868. The average annual temperature of the sea ranges from in September to in February. The weather is moderated by proximity to the ocean, and more extreme temperatures are recorded in the inland western suburbs. Sydney experiences an urban heat island effect, which makes certain parts of the city more vulnerable to extreme heat, including coastal suburbs. In late spring and summer, temperatures over are not uncommon, though hot, dry conditions are usually ended by a southerly buster, a powerful southerly that brings gale winds and a rapid fall in temperature.
Since Sydney borders the Blue Mountains, it can occasionally experience dry, Föhn-like and katabatic winds originating from the Great Dividing Range, usually between late winter and spring, which raise temperatures and elevate fire danger. Due to its inland location, Western Sydney records early-morning frost a few times each winter. Autumn and spring are the transitional seasons, with spring showing a larger temperature variation than autumn. Rainfall has moderate to low variability and is spread through the months, though it is slightly higher during the first half of the year. From 1990 to 1999, Sydney received around 20 thunderstorms per year. In late autumn and winter, east coast lows may bring large amounts of rainfall, especially in the CBD. In spring and summer, black nor'easters are usually the cause of heavy rain events, though other forms of low-pressure areas may also bring heavy deluges and afternoon thunderstorms. Depending on the wind direction, summer weather may be humid or dry, with the late summer/autumn period having higher average humidity and dewpoints than late spring/early summer. In summer, most rain falls from thunderstorms, and in winter from cold fronts. Snowfall was last reported in the Sydney City area in 1836, while a fall of graupel, or soft hail, was mistaken by many for snow in July 2008. The city is rarely affected by cyclones, although remnants of ex-cyclones do affect the city. The El Niño–Southern Oscillation plays an important role in determining Sydney's weather patterns: drought and bushfire on the one hand, and storms and flooding on the other, are associated with the opposite phases of the oscillation. Many areas of the city bordering bushland have experienced bushfires; these tend to occur during the spring and summer. The city is also prone to severe storms. One such storm was the 1999 hailstorm, which produced massive hailstones up to in diameter.
The Bureau of Meteorology reported that 2002 to 2005 were the warmest summers in Sydney since records began in 1859. The summer of 2007–08, however, proved to be the coolest since 1996–97 and is the only summer this century to be at or below average in temperature. In 2009, dry conditions brought a severe dust storm towards eastern Australia. The hottest day in the Sydney metropolitan area occurred in Penrith on 6 January 2020, where a high of was recorded. The regions of Sydney include the CBD or City of Sydney (colloquially referred to as 'the City') and the Inner West, the Eastern Suburbs, Southern Sydney, Greater Western Sydney (including the South-west, the Hills District and the Macarthur Region), and the Northern Suburbs (including the North Shore and the Northern Beaches). The Greater Sydney Commission divides Sydney into five districts based on the 33 LGAs in the metropolitan area: the Western City, the Central City, the Eastern City, the North District, and the South District. The Australian Bureau of Statistics includes the City of Central Coast (the former Gosford City and Wyong Shire) as part of Greater Sydney for population counts. This adds another 330,000 people to the metropolitan area covered by the Greater Sydney Commission. The CBD extends about south from Sydney Cove. It is bordered by Farm Cove within the Royal Botanic Garden to the east and Darling Harbour to the west. Suburbs surrounding the CBD include Woolloomooloo and Potts Point to the east, Surry Hills and Darlinghurst to the south, Pyrmont and Ultimo to the west, and Millers Point and The Rocks to the north. Most of these suburbs measure less than in area. The Sydney CBD is characterised by notably narrow streets and thoroughfares, a legacy of its convict beginnings in the 18th century. Several localities, distinct from suburbs, exist throughout Sydney's inner reaches. Central and Circular Quay are transport hubs with ferry, rail, and bus interchanges.
Chinatown, Darling Harbour, and Kings Cross are important locations for culture, tourism, and recreation. The Strand Arcade, which is located between Pitt Street Mall and George Street, is a historical Victorian-style shopping arcade. Opened on 1 April 1892, its shop fronts are an exact replica of the original internal shopping facades. Westfield Sydney, located beneath the Sydney Tower, is the largest shopping centre by area in Sydney. There is a long trend of gentrification amongst Sydney's inner suburbs. Pyrmont, located on the harbour, was redeveloped from a centre of shipping and international trade to an area of high-density housing, tourist accommodation, and gambling. Originally located well outside the city, Darlinghurst is the site of the historic former Darlinghurst Gaol, manufacturing, and mixed housing. It had a period when it was known as an area of prostitution. The terrace-style housing has largely been retained and Darlinghurst has undergone significant gentrification since the 1980s. Green Square is a former industrial area of Waterloo which is undergoing urban renewal worth $8 billion. On the city's harbour edge, the historic suburb and wharves of Millers Point are being built up as the new area of Barangaroo. The enforced rehousing of local residents due to the Millers Point/Barangaroo development has caused significant controversy, despite the $6 billion worth of economic activity it is expected to generate. The suburb of Paddington is well known for its streets of restored terrace houses, Victoria Barracks, and shopping, including the weekly Oxford Street markets. The Inner West generally includes the Inner West Council, Municipality of Burwood, Municipality of Strathfield, and City of Canada Bay. These span up to about 11 km west of the CBD. Suburbs in the Inner West have historically housed working-class industrial workers, but have undergone gentrification over the 20th century. 
The region now mainly features medium- and high-density housing. Major features in the area include the University of Sydney and the Parramatta River, as well as a large cosmopolitan community. The Anzac Bridge spans Johnstons Bay and connects Rozelle to Pyrmont and the City, forming part of the Western Distributor. The area is serviced by the T1, T2, and T3 railway lines, including the Main Suburban Line, the first railway line constructed in New South Wales. Strathfield Railway Station is a secondary railway hub within Sydney and a major station on the Suburban and Northern lines. It was constructed in 1876 and will be a future terminus of the Parramatta Light Rail. The area is also serviced by numerous bus routes and cycleways. Other shopping centres in the area include Westfield Burwood and DFO in Homebush. The Eastern Suburbs encompass the Municipality of Woollahra, the City of Randwick, the Waverley Municipal Council, and parts of the Bayside Council. The Greater Sydney Commission envisions a resident population of 1,338,250 people by 2036 in its Eastern City District (including the City and Inner West). The Eastern Suburbs include some of the most affluent and advantaged areas in the country, with some streets being amongst the most expensive in the world. Wolseley Road, in Point Piper, has a top price of $20,900 per square metre, making it the ninth-most expensive street in the world. More than 75% of neighbourhoods in the Electoral District of Wentworth fall under the top decile of SEIFA advantage, making it the least disadvantaged area in the country. Major landmarks include Bondi Beach, a major tourist site that was added to the Australian National Heritage List in 2008, and Bondi Junction, featuring a Westfield shopping centre and an estimated office workforce of 6,400 by 2035, as well as a train station on the T4 Eastern Suburbs Line. 
The suburb of Randwick contains the Randwick Racecourse, the Royal Hospital for Women, the Prince of Wales Hospital, Sydney Children's Hospital, and the UNSW Kensington Campus. Randwick's 'Collaboration Area' has a baseline estimate of 32,000 jobs by 2036, according to the Greater Sydney Commission. Construction of the CBD and South East Light Rail was due to be completed in 2018 but was delayed until April 2020. The project aims to provide reliable and high-capacity tram services to residents in the City and South-East. Major shopping centres in the area include Westfield Bondi Junction and Westfield Eastgardens, although many residents shop in the City. Southern Sydney includes the suburbs in the local government areas of the former Rockdale and Georges River Councils (collectively known as the St George area), and more broadly it also includes the suburbs in the local government area of Sutherland, south of the Georges River (colloquially known as 'The Shire'). The Kurnell peninsula, near Botany Bay, is the site of the first landfall on the eastern coastline made by Lt. (later Captain) James Cook in 1770. La Perouse, a historic suburb named after the French navigator Jean-François de Galaup, comte de Lapérouse (1741–88), is notable for its old military outpost at Bare Island and the Botany Bay National Park. The suburb of Cronulla in southern Sydney is close to Royal National Park, Australia's oldest national park. Hurstville, a large suburb with a multitude of commercial buildings and high-rise residential buildings dominating the skyline, has become a CBD for the southern suburbs. 'Northern Sydney' may also include the suburbs in the Upper North Shore, Lower North Shore and the Northern Beaches. The Northern Suburbs include several landmarks – Macquarie University, Gladesville Bridge, Ryde Bridge, Macquarie Centre and Curzon Hall in Marsfield. 
This area includes suburbs in the local government areas of Hornsby Shire, City of Ryde, the Municipality of Hunter's Hill and parts of the City of Parramatta. The North Shore, an informal geographic term referring to the northern metropolitan area of Sydney, consists of suburbs such as Killara, among many others. The Lower North Shore usually refers to the suburbs adjacent to the harbour, such as Mosman, Cremorne Point, Milsons Point, Northbridge, and North Sydney. The Lower North Shore's eastern boundary is Middle Harbour, at the Roseville Bridge. The Upper North Shore usually refers to the suburbs located within the Ku-ring-gai and Hornsby Shire councils. The North Shore includes the commercial centres of North Sydney and Chatswood. North Sydney itself consists of a large commercial centre, with its own business centre, which contains the second largest concentration of high-rise buildings in Sydney, after the CBD. North Sydney is dominated by advertising and marketing businesses and associated trades, with many large corporations holding offices in the region. The Northern Beaches area includes Manly, one of Sydney's most popular holiday destinations for much of the nineteenth and twentieth centuries. The region also features Sydney Heads, a series of headlands which form the wide entrance to Sydney Harbour. The Northern Beaches area extends south to the entrance of Port Jackson (Sydney Harbour), west to Middle Harbour and north to the entrance of Broken Bay. The 2011 Australian census found the Northern Beaches to be the most white and mono-ethnic district in Australia, contrasting with its more-diverse neighbours, the North Shore and the Central Coast. The Hills District generally refers to the suburbs in north-western Sydney, including the local government areas of The Hills Shire, parts of the City of Parramatta Council and Hornsby Shire. 
The suburbs and localities considered to be part of the Hills District are somewhat amorphous and variable. For example, the Hills District Historical Society restricts its definition to the Hills Shire local government area, yet its study area extends from Parramatta to the Hawkesbury. The region is so named for its comparatively hilly topography, where the Cumberland Plain rises to join the Hornsby Plateau. Several of its suburbs also have "Hill" or "Hills" in their names, such as Baulkham Hills, Castle Hill, Seven Hills, Pendle Hill, Beaumont Hills, and Winston Hills, among others. Windsor and Old Windsor Roads are historic roads in Australia, being the second and third roads, respectively, laid in the colony. The greater western suburbs encompass the areas of Parramatta (the sixth largest business district in Australia, settled the same year as the harbour-side colony), Bankstown, Liverpool, Penrith, and Fairfield. With an estimated resident population of 2,288,554 as at 2017, western Sydney has the most multicultural suburbs in the country. The population is predominantly of a working-class background, with major employment in heavy industry and the vocational trades. Toongabbie is noted for being the third mainland settlement (after Sydney and Parramatta) set up after the British colonisation of Australia began in 1788, although the site of the settlement is actually in the separate suburb of Old Toongabbie. The western suburb of Prospect, in the City of Blacktown, is home to Raging Waters, a water park operated by Parques Reunidos. Auburn Botanic Gardens, a botanical garden situated in Auburn, attracts thousands of visitors each year, including a significant number from outside Australia. Another prominent park and garden in the west is Central Gardens Nature Reserve in Merrylands West. 
The greater west also includes Sydney Olympic Park, a suburb created to host the 2000 Summer Olympics, and Sydney Motorsport Park, a motorsport circuit located in Eastern Creek. The Boothtown Aqueduct in Greystanes is a 19th-century water bridge that is listed on the New South Wales State Heritage Register as a site of State significance. To the northwest, Featherdale Wildlife Park, an Australian zoo in Doonside, near Blacktown, is a major tourist attraction, not just for Western Sydney, but for NSW and Australia. Westfield Parramatta in Parramatta is Australia's busiest Westfield shopping centre, with 28.7 million customer visits per annum. Established in 1799, the Old Government House, a historic house museum and tourist spot in Parramatta, was included in the Australian National Heritage List on 1 August 2007 and the World Heritage List in 2010 (as part of the 11 penal sites constituting the Australian Convict Sites), making it the only site in greater western Sydney to be featured in such lists. Moreover, the house is Australia's oldest surviving public building. Prospect Hill, a historically significant ridge in the west and the only area in Sydney with ancient volcanic activity, is also listed on the NSW State Heritage Register. Further to the southwest is the region of Macarthur and the city of Campbelltown, a significant population centre that was, until the 1990s, considered a region separate from Sydney proper. Macarthur Square, a shopping complex in Campbelltown, has become one of the largest shopping complexes in Sydney. The southwest also features Bankstown Reservoir, the oldest elevated reservoir constructed in reinforced concrete that is still in use; it is listed on the New South Wales State Heritage Register. The southwest is home to one of Sydney's oldest trees, the Bland Oak, which was planted in the 1840s by William Bland in the suburb of Carramar. The earliest structures in the colony were built to the bare minimum of standards. 
Upon his appointment, Governor Lachlan Macquarie set ambitious targets for the architectural design of new construction projects. The city now has a world heritage listed building, several national heritage listed buildings, and dozens of Commonwealth heritage listed buildings as evidence of the survival of Macquarie's ideals. In 1814 the Governor called on a convict named Francis Greenway to design Macquarie Lighthouse. The lighthouse and its Classical design earned Greenway a pardon from Macquarie in 1818 and introduced a culture of refined architecture that remains to this day. Greenway went on to design the Hyde Park Barracks in 1819 and the Georgian style St James's Church in 1824. Gothic-inspired architecture became more popular from the 1830s. John Verge's Elizabeth Bay House and St Philip's Church of 1856 were built in Gothic Revival style along with Edward Blore's Government House of 1845. Kirribilli House, completed in 1858, and St Andrew's Cathedral, Australia's oldest cathedral, are rare examples of Victorian Gothic construction. From the late 1850s there was a shift towards Classical architecture. Mortimer Lewis designed the Australian Museum in 1857. The General Post Office, completed in 1891 in Victorian Free Classical style, was designed by James Barnet. Barnet also oversaw the 1883 reconstruction of Greenway's Macquarie Lighthouse. Customs House was built in 1844 to the specifications of Lewis, with additions from Barnet in 1887 and W L Vernon in 1899. The neo-Classical and French Second Empire style Town Hall was completed in 1889. Romanesque designs gained favour amongst Sydney's architects from the early 1890s. Sydney Technical College was completed in 1893 using both Romanesque Revival and Queen Anne approaches. The Queen Victoria Building was designed in Romanesque Revival fashion by George McRae and completed in 1898. It was built on the site of the Sydney Central Markets and accommodates 200 shops across its three storeys. 
As the wealth of the settlement increased, and as Sydney developed into a metropolis after Federation in 1901, its buildings became taller. Sydney's first tower was Culwulla Chambers, on the corner of King Street and Castlereagh Street, which topped out at 12 floors. The Commercial Traveller's Club, located in Martin Place and built in 1908, was of similar height at 10 floors. It was built in brick with a stone veneer and demolished in 1972 to make way for Harry Seidler's MLC Centre. This heralded a change in Sydney's cityscape, and with the lifting of height restrictions in the 1960s there came a surge of high-rise construction. Acclaimed architects such as Jean Nouvel, Harry Seidler, Richard Rogers, Renzo Piano, Norman Foster, and Frank Gehry have each made their own contribution to the city's skyline. The Great Depression had a tangible influence on Sydney's architecture. New structures became more restrained, with far less ornamentation than was common before the 1930s. The most notable architectural feat of this period is the Harbour Bridge. Its steel arch was designed by John Bradfield and completed in 1932. A total of 39,000 tonnes of structural steel span the distance between Milsons Point and Dawes Point. Modern and International architecture came to Sydney from the 1940s. Since its completion in 1973 the city's Opera House has become a World Heritage Site and one of the world's most renowned pieces of Modern design. It was conceived by Jørn Utzon with contributions from Peter Hall, Lionel Todd, and David Littlemore. Utzon was awarded the Pritzker Prize in 2003 for his work on the Opera House. Sydney is home to Australia's first building by renowned Canadian-American architect Frank Gehry, the Dr Chau Chak Wing Building (2015), based on the design of a tree house. An entrance from The Goods Line, a pedestrian pathway and former railway line, is located on the eastern border of the site. 
Contemporary buildings in the CBD include Citigroup Centre, Aurora Place, Chifley Tower, the Reserve Bank building, Deutsche Bank Place, MLC Centre, and Capita Centre. The tallest structure is Sydney Tower, designed by Donald Crone and completed in 1981. Regulations limited the height of new buildings due to the proximity of Sydney Airport, although the strict restrictions employed in the early 2000s have slowly been relaxed over the past ten years, with the maximum height restriction now sitting at 330 metres (1,083 feet). Green bans and heritage overlays have been in place since at least 1977 to protect Sydney's heritage, after controversial demolitions in the 1970s prompted an outcry from Sydneysiders to preserve the old and keep history intact, balancing old and new architecture. Sydney's real estate prices surpass those of both New York City and Paris, and are among the most expensive in the world. The city remains Australia's most expensive housing market, with the mean house price at $1,142,212 as of December 2019 (over 25% higher than the national mean house price). There were 1.76 million dwellings in Sydney in 2016, including 925,000 (57%) detached houses, 227,000 (14%) semi-detached terrace houses and 456,000 (28%) units and apartments. Whilst terrace houses are common in the inner-city areas, it is detached houses that dominate the landscape in the outer suburbs. Due to environmental and economic pressures there has been a noted trend towards denser housing. There was a 30% increase in the number of apartments in Sydney between 1996 and 2006. Public housing in Sydney is managed by the Government of New South Wales. Suburbs with large concentrations of public housing include Claymore, Macquarie Fields, Waterloo, and Mount Druitt. The Government has announced plans to sell nearly 300 historic public housing properties in the harbourside neighbourhoods of Millers Point, Gloucester Street, and The Rocks. Sydney is one of the most expensive real estate markets globally. 
It is second only to Hong Kong, with the average property costing 14 times the annual Sydney salary as of December 2016. A range of heritage housing styles can be found throughout Sydney. Terrace houses are found in the inner suburbs such as Paddington, The Rocks, Potts Point and Balmain, many of which have been the subject of gentrification. These terraces, particularly those in suburbs such as The Rocks, were historically home to Sydney's miners and labourers. In the present day, terrace houses make up some of the most valuable real estate in the city. Federation homes, constructed around the time of Federation in 1901, are located in suburbs such as Penshurst, Turramurra, and Haberfield. Haberfield is known as "The Federation Suburb" due to the extensive number of Federation homes. Workers' cottages are found in Surry Hills, Redfern, and Balmain. California bungalows are common in Ashfield, Concord, and Beecroft. Larger modern homes are predominantly found in the outer suburbs, such as Stanhope Gardens, Kellyville Ridge and Bella Vista to the northwest, Bossley Park, Abbotsbury and Cecil Hills to the west, and Hoxton Park, Harrington Park and Oran Park to the southwest. The Royal Botanic Garden is the most important green space in the Sydney region, hosting both scientific and leisure activities. There are 15 separate parks under the administration of the City of Sydney. Parks within the city centre include Hyde Park, The Domain and Prince Alfred Park. The outer suburbs include Centennial Park and Moore Park in the east, Sydney Park and Royal National Park in the south, Ku-ring-gai Chase National Park in the north, and Western Sydney Parklands in the west, which is one of the largest urban parks in the world. The Royal National Park was proclaimed on 26 April 1879 and is the second oldest national park in the world. The largest park in the Sydney metropolitan area is Ku-ring-gai Chase National Park, established in 1894. 
It is regarded for its well-preserved records of Indigenous habitation; more than 800 rock engravings, cave drawings and middens have been located in the park. The area now known as The Domain was set aside by Governor Arthur Phillip in 1788 as his private reserve. Under the orders of Macquarie, the land to the immediate north of The Domain became the Royal Botanic Garden in 1816, making it the oldest botanic garden in Australia. The Gardens are not just a place for exploration and relaxation, but also for scientific research, with herbarium collections, a library and laboratories. The two parks contain 8,900 individual plant species and receive over 3.5 million visits annually. To the south of The Domain is Hyde Park, the oldest public parkland in Australia. Its location was used for both relaxation and the grazing of animals from the earliest days of the colony. Macquarie dedicated it in 1810 for the "recreation and amusement of the inhabitants of the town" and named it in honour of the original Hyde Park in London. Researchers from Loughborough University have ranked Sydney amongst the top ten world cities that are highly integrated into the global economy. The Global Economic Power Index ranks Sydney number eleven in the world. The Global Cities Index recognises it as number fourteen in the world based on global engagement. The prevailing economic theory in effect during early colonial days was mercantilism, as it was throughout most of Western Europe. The economy struggled at first due to difficulties in cultivating the land and the lack of a stable monetary system. Governor Lachlan Macquarie solved the second problem by creating two coins from every Spanish silver dollar in circulation. The economy was clearly capitalist in nature by the 1840s as the proportion of free settlers increased, the maritime and wool industries flourished, and the powers of the East India Company were curtailed. 
Wheat, gold, and other minerals became additional export industries towards the end of the 1800s. Significant capital began to flow into the city from the 1870s to finance roads, railways, bridges, docks, courthouses, schools and hospitals. Protectionist policies after federation allowed for the creation of a manufacturing industry which became the city's largest employer by the 1920s. These same policies helped to relieve the effects of the Great Depression, during which the unemployment rate in New South Wales reached as high as 32%. From the 1960s onwards Parramatta gained recognition as the city's second CBD, and finance and tourism became major industries and sources of employment. Sydney's nominal gross domestic product was AU$400.9 billion and AU$80,000 per capita in 2015. Its gross domestic product was AU$337 billion in 2013, the largest in Australia. The Financial and Insurance Services industry accounts for 18.1% of gross product and is ahead of Professional Services with 9% and Manufacturing with 7.2%. In addition to Financial Services and Tourism, the Creative and Technology sectors are focus industries for the City of Sydney and represented 9% and 11% of its economic output in 2012. There were 451,000 businesses based in Sydney in 2011, including 48% of the top 500 companies in Australia and two-thirds of the regional headquarters of multinational corporations. Global companies are attracted to the city in part because its time zone spans the closing of business in North America and the opening of business in Europe. Most foreign companies in Sydney maintain significant sales and service functions but comparably fewer production, research, and development capabilities. There are 283 multinational companies with regional offices in Sydney. Sydney has been ranked between the fifth and fifteenth most expensive city in the world and is the most expensive city in Australia. 
In the 2012 UBS survey of 77 world cities, Sydney workers received the seventh highest wage levels. Working residents of Sydney average 1,846 hours of work per annum with 15 days of leave. The labour force of the Greater Sydney Region in 2016 was 2,272,722, with a participation rate of 61.6%. It was made up of 61.2% full-time workers, 30.9% part-time workers, and 6.0% unemployed individuals. The largest reported occupations are professionals, clerical and administrative workers, managers, technicians and trades workers, and community and personal service workers. The largest industries by employment across Greater Sydney are Health Care and Social Assistance with 11.6%, Professional Services with 9.8%, Retail Trade with 9.3%, Construction with 8.2%, Education and Training with 8.0%, Accommodation and Food Services with 6.7%, and Financial and Insurance Services with 6.6%. The Professional Services and Financial and Insurance Services industries account for 25.4% of employment within the City of Sydney. In 2016, 57.6% of working-age residents had a total weekly income of less than $1,000 and 14.4% had a total weekly income of $1,750 or more. The median weekly income for the same period was $719 for individuals, $1,988 for families, and $1,750 for households. Unemployment in the City of Sydney averaged 4.6% for the decade to 2013, much lower than the current rate of unemployment in Western Sydney of 7.3%. Western Sydney continues to struggle to create jobs to meet its population growth despite the development of commercial centres like Parramatta. Each day about 200,000 commuters travel from Western Sydney to the CBD and suburbs in the east and north of the city. Home ownership in Sydney was less common than renting prior to the Second World War, but this trend has since reversed. Median house prices have increased by an average of 8.6% per annum since 1970. The median house price in Sydney in March 2014 was $630,000. 
The primary cause of rising prices is the increasing cost and scarcity of land, which made up 32% of house prices in 1977 compared to 60% in 2002. 31.6% of dwellings in Sydney are rented, 30.4% are owned outright and 34.8% are owned with a mortgage. 11.8% of mortgagees in 2011 had monthly loan repayments of less than $1,000 and 82.9% had monthly repayments of $1,000 or more. 44.9% of renters for the same period had weekly rent of less than $350 whilst 51.7% had weekly rent of $350 or more. The median weekly rent in Sydney is $450. Macquarie gave a charter in 1817 to form the first bank in Australia, the Bank of New South Wales. New private banks opened throughout the 1800s but the financial system was unstable. Bank collapses were a frequent occurrence, and a crisis point was reached in 1893 when 12 banks failed. The Bank of New South Wales exists to this day as Westpac. The Commonwealth Bank of Australia was formed in Sydney in 1911 and began to issue notes backed by the resources of the nation. It was replaced in this role in 1959 by the Reserve Bank of Australia, which is also based in Sydney. The Australian Securities Exchange began operating in 1987 and, with a market capitalisation of $1.6 trillion, is now one of the ten largest exchanges in the world. The Financial and Insurance Services industry now constitutes 43% of the economic product of the City of Sydney. Sydney makes up half of Australia's finance sector and has been promoted by consecutive Commonwealth Governments as Asia Pacific's leading financial centre. Structured finance was pioneered in Sydney and the city is a leading hub for asset management firms. In the 2017 Global Financial Centres Index, Sydney was ranked as having the eighth most competitive financial centre in the world. 
In 1985 the Federal Government granted 16 banking licences to foreign banks, and now 40 of the 43 foreign banks operating in Australia are based in Sydney, including the People's Bank of China, Bank of America, Citigroup, UBS, Mizuho Bank, Bank of China, Banco Santander, Credit Suisse, State Street, HSBC, Deutsche Bank, Barclays, Royal Bank of Canada, Société Générale, Royal Bank of Scotland, Sumitomo Mitsui, ING Group, BNP Paribas, and Investec. Sydney has been a manufacturing city since the protectionist policies of the 1920s. By 1961 the industry accounted for 39% of all employment, and by 1970 over 30% of all Australian manufacturing jobs were in Sydney. Its status has declined in more recent decades, making up 12.6% of employment in 2001 and 8.5% in 2011. Between 1970 and 1985 there was a loss of 180,000 manufacturing jobs. Despite this decline, Sydney overtook Melbourne as the largest manufacturing centre in Australia in the 2010s. Its manufacturing output of $21.7 billion in 2013 was greater than that of Melbourne with $18.9 billion. Observers have noted Sydney's focus on the domestic market and high-tech manufacturing as reasons for its resilience against the high Australian dollar of the early 2010s. The "Smithfield-Wetherill Park Industrial Estate" in Western Sydney is the largest industrial estate in the Southern Hemisphere and is the centre of manufacturing and distribution in the region. Sydney is a gateway to Australia for many international visitors. It hosted over 2.8 million international visitors in 2013, or nearly half of all international visits to Australia. These visitors spent 59 million nights in the city and a total of $5.9 billion. The countries of origin in descending order were China, New Zealand, the United Kingdom, the United States, South Korea, Japan, Singapore, Germany, Hong Kong, and India. The city also received 8.3 million domestic overnight visitors in 2013, who spent a total of $6 billion. 
26,700 workers in the City of Sydney were directly employed by tourism in 2011. There were 480,000 visitors and 27,500 people staying overnight each day in 2012. On average, the tourism industry contributes $36 million to the city's economy per day. Popular destinations include the Sydney Opera House, the Sydney Harbour Bridge, Watsons Bay, The Rocks, Sydney Tower, Darling Harbour, the State Library of New South Wales, the Royal Botanic Garden, the Royal National Park, the Australian Museum, the Museum of Contemporary Art, the Art Gallery of New South Wales, the Queen Victoria Building, Sea Life Sydney Aquarium, Taronga Zoo, Bondi Beach, the Blue Mountains, and Sydney Olympic Park. Major development projects designed to grow Sydney's tourism sector include a casino and hotel at Barangaroo and the redevelopment of East Darling Harbour, which involves a new exhibition and convention centre, now Australia's largest. Sydney is the highest-ranking city in the world for international students. More than 50,000 international students study at the city's universities and a further 50,000 study at its vocational and English language schools. International education contributes $1.6 billion to the local economy and creates demand for 4,000 local jobs each year. The population of Sydney in 1788 was less than 1,000. With convict transportation it almost tripled in ten years to 2,953. For each decade since 1961 the population has increased by more than 250,000. Sydney's population at the time of the 2011 census was 4,391,674. It has been forecast that the population will grow to between 8 and 8.9 million by 2061. Despite this increase, the Australian Bureau of Statistics predicts that Melbourne will replace Sydney as Australia's most populous city by 2026. The four most densely populated suburbs in Australia are located in Sydney, each having more than 13,000 residents per square kilometre (33,700 residents per square mile). 
The median age of Sydney residents is 36 and 12.9% of people are 65 or older. The married population accounts for 49.7% of Sydney whilst 34.7% of people have never been married. 48.9% of families are couples with children, 33.5% are couples without children, and 15.7% are single-parent families. Most immigrants to Sydney between 1840 and 1930 were British, Irish or Chinese. At the 2016 census, the most commonly nominated ancestries were: At the 2016 census, there were 2,071,872 people living in Sydney that were born overseas, accounting for 42.9% of Sydney's population, above Vancouver (42.5%), Los Angeles (37.7%), New York City (37.5%), Chicago (20.7%), Paris (14.6%) and Berlin (13%). Only 33.1% of the population had both parents born in Australia. Sydney has the eighth-largest immigrant population among world metropolitan areas. Foreign countries of birth with the greatest representation are Mainland China, England, India, New Zealand, Vietnam and the Philippines. 1.5% of the population, or 70,135 people, identified as Indigenous Australians (Aboriginal Australians and Torres Strait Islanders) in 2016. 38.2% of people in Sydney speak a language other than English at home, with Mandarin (4.7%), Arabic (4.0%), Cantonese (2.9%), Vietnamese (2.1%) and Greek (1.6%) the most widely spoken. The indigenous people of Sydney held totemic beliefs known as "dreamings". Governor Lachlan Macquarie made an effort to found a culture of formal religion throughout the early settlement and ordered the construction of churches such as St Matthew's, St Luke's, St James's, and St Andrew's. In 2011, 28.3% of Sydney residents identified themselves as Catholic, whilst 17.6% practised no religion. Additionally, 16.1% were Anglican, 4.7% were Muslim, 4.2% were Eastern Orthodox, 4.1% were Buddhist, 2.6% were Hindu, and 0.9% were Jewish. 
However, according to the 2016 census, 1,082,448 (25%) residents of Sydney's Urban Centre described themselves as Catholic, while another 1,053,500 (24.4%) people considered themselves non-religious. A further 10.9% of residents identified themselves as Anglicans and an additional 5.8% as Muslim. These and other religious institutions have significantly contributed to the education and health of Sydney's residents over time, particularly through the building and management of schools and hospitals. Crime in Sydney is low, with "The Independent" ranking Sydney as the fifth safest city in the world in 2019. One of the biggest crime-related issues to face the city in recent times was the introduction of lock-out laws in February 2014 in an attempt to curb alcohol-fuelled violence. Patrons could not enter clubs or bars in the inner city after 1:30am, and last drinks were called at 3am. The lock-out laws were removed in January 2020. Ku-ring-gai Chase National Park is rich in Indigenous Australian heritage, containing around 1,500 pieces of Aboriginal rock art – the densest cluster of Indigenous sites in Australia, surpassing Kakadu, which has around 5,000 sites but spread over a much greater land mass. The park's indigenous sites include petroglyphs, art sites, burial sites, caves, marriage areas, birthing areas, midden sites, and tool manufacturing locations, among others, which are dated at around 5,000 years old. The inhabitants of the area were the Garigal people. Other rock art sites exist in the Sydney region, such as in Terrey Hills and Bondi, although the locations of most are not publicised to prevent damage by vandalism, and to retain their quality, as they are still regarded as sacred sites by Indigenous Australians. The Australian Museum opened in Sydney in 1827 with the purpose of collecting and displaying the natural wealth of the colony. It remains Australia's oldest natural history museum. 
In 1995 the Museum of Sydney opened on the site of the first Government House. It recounts the story of the city's development. Other museums based in Sydney include the Powerhouse Museum and the Australian National Maritime Museum. In 1866, Queen Victoria gave her assent to the formation of the Royal Society of New South Wales. The Society exists "for the encouragement of studies and investigations in science, art, literature, and philosophy". It is based in a terrace house in Darlington owned by the University of Sydney. The Sydney Observatory building was constructed in 1859 and used for astronomical and meteorological research until 1982, when it was converted into a museum. The Museum of Contemporary Art was opened in 1991 and occupies an Art Deco building in Circular Quay. Its collection was founded in the 1940s by artist and art collector John Power and has been maintained by the University of Sydney. Sydney's other significant art institution is the Art Gallery of New South Wales, which coordinates the coveted Archibald Prize for portraiture. Contemporary art galleries are found in Waterloo, Surry Hills, Darlinghurst, Paddington, Chippendale, Newtown, and Woollahra. Sydney's first commercial theatre opened in 1832 and nine more had commenced performances by the late 1920s. The live medium lost much of its popularity to cinema during the Great Depression before experiencing a revival after World War II. Prominent theatres in the city today include State Theatre, Theatre Royal, Sydney Theatre, The Wharf Theatre, and Capitol Theatre. Sydney Theatre Company maintains a roster of local, classical, and international plays. It occasionally features Australian theatre icons such as David Williamson, Hugo Weaving, and Geoffrey Rush. The city's other prominent theatre companies are New Theatre, Belvoir, and Griffin Theatre Company. 
Sydney is also home to Event Cinemas' first theatre, which opened on George St in 1913 under its former Greater Union brand; the theatre still operates and is regarded as one of Australia's busiest cinema locations. The Sydney Opera House is the home of Opera Australia and the Sydney Symphony. It has staged over 100,000 performances and received 100 million visitors since opening in 1973. Two other important performance venues in Sydney are Town Hall and the City Recital Hall. The Sydney Conservatorium of Music is located adjacent to the Royal Botanic Garden and serves the Australian music community through education and its biannual Australian Music Examinations Board exams. Many writers have originated in and set their work in Sydney. Others have visited the city and commented on it. Some of them are commemorated in the Sydney Writers Walk at Circular Quay. The city was the headquarters for Australia's first published newspaper, the "Sydney Gazette". Watkin Tench's "A Narrative of the Expedition to Botany Bay" (1789) and "A Complete Account of the Settlement at Port Jackson in New South Wales" (1793) have remained the best-known accounts of life in early Sydney. From the settlement's earliest years, much of the literature set in Sydney was concerned with life in the city's slums and working-class communities, notably William Lane's "The Working Man's Paradise" (1892), Christina Stead's "Seven Poor Men of Sydney" (1934) and Ruth Park's "The Harp in the South" (1948). The first Australian-born female novelist, Louisa Atkinson, set several of her novels in Sydney. Contemporary writers, such as Elizabeth Harrower, were born in the city and set much of their work there – Harrower's debut novel "Down in the City" (1957) was mostly set in a King's Cross apartment. Well known contemporary novels set in the city include Melina Marchetta's "Looking for Alibrandi" (1992), Peter Carey's "30 Days in Sydney: A Wildly Distorted Account" (1999), J.M. 
Coetzee's "Diary of a Bad Year" (2007) and Kate Grenville's "The Secret River" (2010). The Sydney Writers' Festival is held every year between April and May. Filmmaking in Sydney was quite prolific until the 1920s, when sound films were introduced and American productions gained dominance in Australian cinema. The Australian New Wave of filmmaking saw a resurgence in film production in the city, with many notable features shot there between the 1970s and 80s, helmed by directors such as Bruce Beresford, Peter Weir and Gillian Armstrong. Fox Studios Australia commenced production in Sydney in 1998. Successful films shot in Sydney since then include "The Matrix", "Lantana", "", "Moulin Rouge!", "", "Australia", and "The Great Gatsby". The National Institute of Dramatic Art is based in Sydney and has several famous alumni such as Mel Gibson, Judy Davis, Baz Luhrmann, Cate Blanchett, Hugo Weaving and Jacqueline McKenzie. Sydney is the host of several festivals throughout the year. The city's New Year's Eve celebrations are the largest in Australia. The Royal Easter Show is held every year at Sydney Olympic Park. Sydney Festival is Australia's largest arts festival. The travelling rock music festival Big Day Out originated in Sydney. The city's two largest film festivals are Sydney Film Festival and Tropfest. Vivid Sydney is an annual outdoor exhibition of art installations, light projections, and music. In 2015, Sydney was ranked the 13th-top fashion capital in the world. It hosts the Australian Fashion Week in autumn. The Sydney Mardi Gras has been held each February since 1979. Sydney's Chinatown has had numerous locations since the 1850s. It moved from George Street to Campbell Street to its current setting in Dixon Street in 1980. The Spanish Quarter is based in Liverpool Street whilst Little Italy is located in Stanley Street. Popular nightspots are found at Kings Cross, Oxford Street, Circular Quay, and The Rocks. 
The Star is the city's only casino and is situated around Darling Harbour. "The Sydney Morning Herald" is Australia's oldest newspaper still in print. Now a compact-format paper owned by Fairfax Media, it has been published continuously since 1831. Its competitor is the News Corporation tabloid "The Daily Telegraph", which has been in print since 1879. Both papers have Sunday tabloid editions called "The Sun-Herald" and "The Sunday Telegraph" respectively. "The Bulletin" was founded in Sydney in 1880 and became Australia's longest running magazine. It closed after 128 years of continuous publication. Sydney heralded Australia's first newspaper, the "Sydney Gazette", published until 1842. Each of Australia's three commercial television networks and two public broadcasters is headquartered in Sydney. Nine's offices and news studios are based in Willoughby; Ten and Seven are based in Pyrmont, with Seven also operating a news studio in Martin Place in the Sydney CBD; the Australian Broadcasting Corporation is located in Ultimo; and the Special Broadcasting Service is based in Artarmon. Multiple digital channels have been provided by all five networks since 2000. Foxtel is based in North Ryde and sells subscription cable television to most parts of the urban area. Sydney's first radio stations commenced broadcasting in the 1920s. Radio became a popular tool for politics, news, religion, and sport and has managed to survive despite the introduction of television and the Internet. 2UE was founded in 1925 and, under the ownership of Fairfax Media, is the oldest station still broadcasting. Competing stations include the more popular 2GB, 702 ABC Sydney, KIIS 106.5, Triple M, Nova 96.9, and 2Day FM. Sydney's earliest migrants brought with them a passion for sport but were restricted by the lack of facilities and equipment. The first organised sports were boxing, wrestling, and horse racing, held from 1810 in Hyde Park. 
Horse racing remains popular to this day and events such as the Golden Slipper Stakes attract widespread attention. The first cricket club was formed in 1826 and matches were played within Hyde Park throughout the 1830s and 1840s. Cricket is a favoured sport in summer and big matches have been held at the Sydney Cricket Ground since 1878. The New South Wales Blues compete in the Sheffield Shield league and the Sydney Sixers and Sydney Thunder contest the national Big Bash Twenty20 competition. First played in Sydney in 1865, rugby grew to be the city's most popular football code by the 1880s. One-tenth of the state's population attended a New South Wales versus New Zealand rugby match in 1907. Rugby league separated from rugby union in 1908. The New South Wales Waratahs contest the Super Rugby competition, while the Sydney Rays represent the city in the National Rugby Championship. The national Wallabies rugby union team competes in Sydney in international matches such as the Bledisloe Cup, Rugby Championship, and World Cup. Sydney is home to nine of the sixteen teams in the National Rugby League competition: Canterbury-Bankstown Bulldogs, Cronulla-Sutherland Sharks, Manly-Warringah Sea Eagles, Penrith Panthers, Parramatta Eels, South Sydney Rabbitohs, St George Illawarra Dragons, Sydney Roosters, and Wests Tigers. New South Wales contests the annual State of Origin series against Queensland. Sydney FC and the Western Sydney Wanderers compete in the A-League (men's) and W-League (women's) soccer competitions and Sydney frequently hosts matches for the Australian national men's team, the Socceroos. The Sydney Swans and Greater Western Sydney Giants are local Australian rules football clubs that play in the Australian Football League. The Giants also compete in AFL Women's. The Sydney Kings compete in the National Basketball League. The Sydney Uni Flames play in the Women's National Basketball League. The Sydney Blue Sox contest the Australian Baseball League. 
The Waratahs are a member of the Australian Hockey League. The Sydney Bears and Sydney Ice Dogs play in the Australian Ice Hockey League. The Swifts are competitors in the national women's netball league. Women were first allowed to participate in recreational swimming when separate baths were opened at Woolloomooloo Bay in the 1830s. From being illegal at the beginning of the century, sea bathing gained immense popularity during the early 1900s and the first surf lifesaving club was established at Bondi Beach. Disputes about appropriate clothing for surf bathing surfaced from time to time and concerned men as well as women. The City2Surf is an annual running race from the CBD to Bondi Beach and has been held since 1971. In 2010, 80,000 runners participated, making it the largest run of its kind in the world. Sailing races have been held on Sydney Harbour since 1827. Yachting has been popular amongst wealthier residents since the 1840s and the Royal Sydney Yacht Squadron was founded in 1862. The Sydney to Hobart Yacht Race is an event that starts from Sydney Harbour on Boxing Day. Since its inception in 1945 it has been recognised as one of the most difficult yacht races in the world. Six sailors died and 71 vessels of the fleet of 115 failed to finish in the 1998 edition. The Royal Sydney Golf Club is based in Rose Bay and since its opening in 1893 has hosted the Australian Open on 13 occasions. Royal Randwick Racecourse opened in 1833 and holds several major cups throughout the year. Sydney benefitted from the construction of significant sporting infrastructure in preparation for its hosting of the 2000 Summer Olympics. The Sydney Olympic Park accommodates athletics, aquatics, tennis, hockey, archery, baseball, cycling, equestrian, and rowing facilities. It also includes the high capacity Stadium Australia used for rugby, soccer, and Australian rules football. Sydney Football Stadium was completed in 1988 and is used for rugby and soccer matches. 
Sydney Cricket Ground was opened in 1878 and is used for both cricket and Australian rules football fixtures. The Sydney International tennis tournament is held in the city at the beginning of each year as a warm-up for the Australian Open in Melbourne. Two of the most successful tennis players in history, Ken Rosewall and Todd Woodbridge, were born in and live in the city. During early colonial times the presiding Governor and his military held absolute control over the population. This lack of democracy eventually became unacceptable for the colony's growing number of free settlers. The first indications of a proper legal system emerged with the passing of a Charter of Justice in 1814. It established three new courts, including the Supreme Court, and dictated that English law was to be followed. In 1823 the British Parliament passed an act to create the Legislative Council in New South Wales and give the Supreme Court the right of review over new legislation. From 1828 all of the common laws in force in England were to be applied in New South Wales wherever it was appropriate. Another act from the British Parliament in 1842 provided for members of the Council to be elected for the first time. The Constitution Act of 1855 gave New South Wales a bicameral government. The existing Legislative Council became the upper house and a new body called the Legislative Assembly was formed to be the lower house. An Executive Council was introduced, comprising five members of the Legislative Assembly and the Governor. It became responsible for advising the ruling Governor on matters related to the administration of the state. The colonial settlements elsewhere on the continent eventually seceded from New South Wales and formed their own governments. Tasmania separated in 1825, Victoria did so in 1850, and Queensland followed in 1859. 
With the proclamation of the Commonwealth of Australia in 1901 the status of local governments across Sydney was formalised and they became separate institutions from the state of New South Wales. Sydney is divided into local government areas (also known as councils or shires). These local government areas have elected councils which are responsible for functions delegated to them by the New South Wales Government. The 31 local government areas making up Sydney according to the New South Wales Division of Local Government are: Sydney is the location of the secondary official residences of the Governor-General of Australia and the Prime Minister of Australia, Admiralty House and Kirribilli House respectively. The Parliament of New South Wales sits in Parliament House on Macquarie Street. This building was completed in 1816 and first served as a hospital. The Legislative Council moved into its northern wing in 1829 and by 1852 had entirely displaced the surgeons from their quarters. Several additions have been made to the building as the Parliament has expanded, but it retains its original Georgian façade. Government House was completed in 1845 and has served as the home of 25 Governors and 5 Governors-General. The Cabinet of Australia also meets in Sydney when needed. The highest court in the state is the Supreme Court of New South Wales, which is located in Queen's Square in Sydney. The city is also the home of numerous branches of the intermediate District Court of New South Wales and the lower Local Court of New South Wales. Sydney has no distinct local government for its whole urban area. Public activities such as main roads, traffic control, public transport, policing, education, and major infrastructure projects are the responsibility of the New South Wales state government. It has tended to resist attempts to amalgamate Sydney's more populated local government areas as merged councils could pose a threat to its governmental power. 
Established in 1842, the City of Sydney is one such local government area and includes the CBD and some adjoining inner suburbs. It is responsible for fostering development in the local area, providing local services (waste collection and recycling, libraries, parks, sporting facilities), representing and promoting the interests of residents, supporting organisations that target the local community, and attracting and providing infrastructure for commerce, tourism, and industry. The City of Sydney is led by an elected Council and Lord Mayor, who has in the past been treated as a representative of the entire city. With regard to emergency services, Greater Sydney is served by: In federal politics, Sydney was initially considered as a possibility for Australia's capital city; the newly created city of Canberra ultimately filled this role. Six Australian Prime Ministers have been born in Sydney, more than in any other city, including first Prime Minister Edmund Barton and Malcolm Turnbull. Education became a proper focus for the colony from the 1870s, when public schools began to form and schooling became compulsory. The population of Sydney is now highly educated. 90% of working age residents have completed some schooling and 57% have completed the highest level of school. 1,390,703 people were enrolled in an educational institution in 2011, with 45.1% of these attending school and 16.5% studying at a university. Undergraduate or postgraduate qualifications are held by 22.5% of working age Sydney residents and 40.2% of working age residents of the City of Sydney. The most common fields of tertiary qualification are commerce (22.8%), engineering (13.4%), society and culture (10.8%), health (7.8%), and education (6.6%). There are six public universities based in Sydney: The University of Sydney, University of New South Wales, University of Technology Sydney, Macquarie University, Western Sydney University, and Australian Catholic University. 
Five public universities maintain secondary campuses in the city for both domestic and international students: the University of Notre Dame Australia, Central Queensland University, Victoria University, University of Wollongong, and University of Newcastle. Charles Sturt University and Southern Cross University, both public universities, operate secondary campuses designated solely for international students. In addition, four public universities offer programmes in Sydney through third-party education providers: University of the Sunshine Coast, La Trobe University, Federation University Australia and Charles Darwin University. 5.2% of residents of Sydney are attending a university. The University of New South Wales and the University of Sydney are ranked in the top 50 in the world, the University of Technology Sydney is ranked 193, while Macquarie University ranks 237, and Western Sydney University below 500. Sydney has public, denominational, and independent schools. 7.8% of Sydney residents are attending primary school and 6.4% are enrolled in secondary school. There are 935 public preschool, primary, and secondary schools in Sydney that are administered by the New South Wales Department of Education. 14 of the 17 selective secondary schools in New South Wales are based in Sydney. Public vocational education and training in Sydney is run by TAFE New South Wales and began with the opening of the Sydney Technical College in 1878. It offered courses in areas such as mechanical drawing, applied mathematics, steam engines, simple surgery, and English grammar. The college became the Sydney Institute in 1992 and now operates alongside its sister TAFE facilities across the Sydney metropolitan area, namely the Northern Sydney Institute, the Western Sydney Institute, and the South Western Sydney Institute. At the 2011 census, 2.4% of Sydney residents were enrolled in a TAFE course. The first hospital in the new colony was a collection of tents at The Rocks. 
Many of the convicts that survived the trip from England continued to suffer from dysentery, smallpox, scurvy, and typhoid. Healthcare facilities remained hopelessly inadequate despite the arrival of a prefabricated hospital with the Second Fleet and the construction of brand new hospitals at Parramatta, Windsor, and Liverpool in the 1790s. Governor Lachlan Macquarie arranged for the construction of Sydney Hospital and saw it completed in 1816. Parts of the facility have been repurposed for use as Parliament House but the hospital itself still operates to this day. The city's first emergency department was established at Sydney Hospital in 1870. Demand for emergency medical care increased from 1895 with the introduction of an ambulance service. The Sydney Hospital also housed Australia's first teaching facility for nurses, the Nightingale Wing, established with the input of Florence Nightingale in 1868. Healthcare gained recognition as a citizen's right in the early 1900s and Sydney's public hospitals came under the oversight of the Government of New South Wales. The administration of healthcare across Sydney is handled by eight local health districts: Central Coast, Illawarra Shoalhaven, Sydney, Nepean Blue Mountains, Northern Sydney, South Eastern Sydney, South Western Sydney, and Western Sydney. The Prince of Wales Hospital was established in 1852 and became the first of several major hospitals to be opened in the coming decades. St Vincent's Hospital was founded in 1857, followed by Royal Alexandra Hospital for Children in 1880, the Prince Henry Hospital in 1881, the Royal Prince Alfred Hospital in 1882, the Royal North Shore Hospital in 1885, the St George Hospital in 1894, and the Nepean Hospital in 1895. Westmead Hospital in 1978 was the last major facility to open. The motor vehicle, more than any other factor, has determined the pattern of Sydney's urban development since World War II. 
The growth of low density housing in the city's outer suburbs has made car ownership necessary for hundreds of thousands of households. The percentage of trips taken by car has increased from 13% in 1947 to 50% in 1960 and to 70% in 1971. The most important roads in Sydney were the nine Metroads, including the Sydney Orbital Network. Widespread criticism over Sydney's reliance on sprawling road networks, as well as the motor vehicle, has stemmed largely from proponents of mass public transport and high density housing. The Light Horse Interchange in western Sydney is the largest in the southern hemisphere. There can be up to 350,000 cars using Sydney's roads simultaneously during peak hour, leading to significant traffic congestion. 84.9% of Sydney households own a motor vehicle and 46.5% own two or more. Car dependency is an ongoing issue in Sydney: of people who travel to work, 58.4% use a car, 9.1% catch a train, 5.2% take a bus, and 4.1% walk. In contrast, only 25.2% of working residents in the City of Sydney use a car, whilst 15.8% take a train, 13.3% use a bus, and 25.3% walk. With a rate of 26.3%, Sydney has the highest utilisation of public transport for travel to work of any Australian capital city. Bus services today are conducted by a mixture of government and private operators. In areas previously serviced by trams, the government-owned State Transit Authority operates; in other areas there are private (albeit part-funded by the state government) operators. Integrated tickets called Opal cards operate on both government and private bus routes. State Transit alone operated a fleet of 2,169 buses and serviced over 160 million passengers during 2014. In total, nearly 225 million boardings were recorded across the bus network. NightRide is a nightly bus service that operates between midnight and 5am, also replacing trains for most of this period. Sydney once had one of the largest tram networks in the British Empire after London. It served routes covering . 
The internal combustion engine made buses more flexible than trams and consequently more popular, leading to the progressive closure of the tram network, with the final tram operating in 1961. By 1930 there were 612 buses across Sydney carrying 90 million passengers per annum. In 1997, the Inner West Light Rail (also known as the Dulwich Hill Line) opened between Central station and Wentworth Park. It was extended to Lilyfield in 2000 and then Dulwich Hill in 2014. It links the Inner West and Darling Harbour with Central station and facilitated 9.1 million journeys in the 2016–17 financial year. A second line, the CBD and South East Light Rail, serving the CBD and south-eastern suburbs, opened partially in December 2019 and fully in April 2020. A light rail line serving Western Sydney has also been announced, due to open in 2023. Rail services are operated by Sydney Trains and Sydney Metro. Sydney Trains serves 175 stations across greater Sydney and had an annual ridership of 359 million passenger journeys in 2017–18. Sydney's railway was first constructed in 1854, with progressive extensions to the network to serve both freight and passengers across the city, suburbs, and beyond to rural New South Wales. The main station is the Central railway station in the southern part of the CBD. In the 1850s and 1860s the railway reached Parramatta, Campbelltown, Liverpool, Blacktown, Penrith, and Richmond. In 2019, 91.6% of trains arrived on time. Sydney Metro, an automated rapid transit system separate from the suburban commuter network, commenced operation in 2019, with plans in place to extend the system through the CBD by 2024. The first segment, Sydney Metro Northwest, opened on 26 May 2019 and runs from Tallawong to Chatswood, with 13 stations over 36 km (22.4 mi) of twin tracks, mostly underground. A successor project, Sydney Metro West, from the CBD to Westmead is also planned. 
At the time the Sydney Harbour Bridge opened in 1932, the city's ferry service was the largest in the world. Patronage declined from 37 million passengers in 1945 to 11 million in 1963 but has recovered somewhat in recent years. From its hub at Circular Quay the ferry network extends from Manly to Parramatta. Sydney Airport, officially "Sydney Kingsford-Smith Airport", is located in the inner southern suburb of Mascot, with two of its runways extending into Botany Bay. It services 46 international and 23 domestic destinations. As the busiest airport in Australia it handled 37.9 million passengers in 2013 and 530,000 tonnes of freight in 2011. It has been announced that a new facility named Western Sydney Airport will be constructed at Badgerys Creek from 2016 at a cost of $2.5 billion. Bankstown Airport is Sydney's second busiest airport, and serves general aviation, charter and some scheduled cargo flights. Bankstown is also the fourth busiest airport in Australia by number of aircraft movements. Port Botany has surpassed Port Jackson as the city's major shipping port. Cruise ship terminals are located at Sydney Cove and White Bay. As climate change, greenhouse gas emissions and pollution have become major issues for Australia, Sydney has in the past been criticised for its lack of focus on reducing pollution, cutting back on emissions and maintaining water quality. Since 1995, there have been significant developments in the analysis of air pollution in the Sydney metropolitan region. The development led to the release of the Metropolitan Air Quality Scheme (MAQS), which led to a broader understanding of the causes of pollution in Sydney, allowing the government to form appropriate responses to the pollution. The 2019–20 Australian bushfire season significantly impacted outer Sydney and dramatically reduced the air quality of the Sydney metropolitan area, leading to a smoky haze that lingered for many days throughout December 2019. 
The air quality was 11 times the hazardous level on some days, worse even than New Delhi's; breathing it was compared to "smoking 32 cigarettes" by Associate Professor Brian Oliver, a respiratory diseases scientist at the University of Technology Sydney. Australian cities are some of the most car-dependent cities in the world, although Sydney's car dependency, at 66%, is the lowest of Australia's major cities. The city also has the highest usage of public transport of any Australian city, at 27%, making it comparable with New York City, Shanghai and Berlin. Despite its high ranking for an Australian city, Sydney has a low level of mass-transit services, with a historically low-density layout and significant urban sprawl, thus increasing the likelihood of car dependency. Strategies have been implemented to reduce private vehicle pollution by encouraging mass and public transit, initiating the development of high density housing and introducing a fleet of 10 new Nissan LEAF electric cars, the largest order of the pollution-free vehicle in Australia. Electric cars do not produce carbon monoxide or nitrogen oxides, gases which contribute to air pollution and climate change. Cycling trips have increased by 113% across Sydney's inner city since March 2010, with about 2,000 bikes passing through top peak-hour intersections on an average weekday. Transport developments in the north-west and east of the city have been designed to encourage the use of Sydney's expanding public transportation system. The City of Sydney became the first council in Australia to achieve formal certification as carbon-neutral in 2008. The city has reduced its 2007 carbon emissions by 6% and since 2006 has reduced carbon emissions from city buildings by up to 20%. The City of Sydney introduced a "Sustainable Sydney 2030" program, with various targets planned and a comprehensive guide on how to reduce energy in homes and offices within Sydney by 30%. 
Reductions in energy consumption have slashed energy bills by $30 million a year. Solar panels have been established on many CBD buildings in an effort to minimise carbon pollution by around 3,000 tonnes a year. The city also has an "urban forest growth strategy", in which it aims to increase tree coverage in the city by regularly planting trees with strong leaf density and vegetation to provide cleaner air and create moisture during hot weather, thus lowering city temperatures. Sydney has also become a leader in the development of green office buildings and enforcing the requirement that all building proposals be energy-efficient. The One Central Park development, completed in 2013, is an example of this implementation and design. Obtaining sufficient fresh water was difficult during early colonial times. A catchment called the Tank Stream sourced water from what is now the CBD but was little more than an open sewer by the end of the 1700s. The Botany Swamps Scheme was one of several ventures during the mid 1800s that saw the construction of wells, tunnels, steam pumping stations, and small dams to service Sydney's growing population. The first genuine solution to Sydney's water demands was the Upper Nepean Scheme, which came into operation in 1886 and cost over £2 million. It transports water from the Nepean, Cataract, and Cordeaux rivers and continues to service about 15% of Sydney's total water needs. Dams were built on these three rivers between 1907 and 1935. In 1977 the Shoalhaven Scheme brought several more dams into service. WaterNSW now manages eleven major dams: Warragamba (one of the largest domestic water supply dams in the world), Woronora, Cataract, Cordeaux, Nepean, Avon, Wingecarribee Reservoir, Fitzroy Falls Reservoir, Tallowa, the Blue Mountains Dams, and Prospect Reservoir. Water is collected from five catchment areas covering and total storage amounts to . The Sydney Desalination Plant came into operation in 2010. 
The two distributors which maintain Sydney's electricity infrastructure are Ausgrid and Endeavour Energy. Their combined networks include over 815,000 power poles and of electricity cables.
https://en.wikipedia.org/wiki?curid=27862
Sword A sword is a bladed melee weapon intended for cutting or thrusting that is longer than a knife or dagger, consisting of a long blade attached to a hilt. The precise definition of the term varies with the historical epoch or the geographic region under consideration. The blade can be straight or curved. Thrusting swords have a pointed tip on the blade, and tend to be straighter; slashing swords have a sharpened cutting edge on one or both sides of the blade, and are more likely to be curved. Many swords are designed for both thrusting and slashing. Historically, the sword developed in the Bronze Age, evolving from the dagger; the earliest specimens date to about 1600 BC. The later Iron Age sword remained fairly short and without a crossguard. The spatha, as it developed in the Late Roman army, became the predecessor of the European sword of the Middle Ages, at first adopted as the Migration Period sword, and only in the High Middle Ages developed into the classical arming sword with crossguard. The word "sword" continues Old English "sweord". The use of a sword is known as swordsmanship or, in a modern context, as fencing. In the Early Modern period, western sword design diverged into roughly two forms, the thrusting swords and the sabres. Thrusting swords such as the rapier and eventually the smallsword were designed to impale their targets quickly and inflict deep stab wounds. Their long and straight yet light and well-balanced design made them highly manoeuvrable and deadly in a duel but fairly ineffective when used in a slashing or chopping motion. A well-aimed lunge and thrust could end a fight in seconds with just the sword's point, leading to the development of a fighting style which closely resembles modern fencing. The sabre and similar blades such as the cutlass were built more heavily and were more typically used in warfare.
Built for slashing and chopping at multiple enemies, often from horseback, the sabre's long curved blade and slightly forward weight balance gave it a deadly character all its own on the battlefield. Most sabres also had sharp points and double-edged blades, making them capable of piercing soldier after soldier in a cavalry charge. Sabres continued to see battlefield use until the early 20th century. The US Navy kept tens of thousands of sturdy cutlasses in their armory well into World War II, and many were issued to Marines in the Pacific as jungle machetes. Non-European weapons called "sword" include single-edged weapons such as the Middle Eastern scimitar, the Chinese dao and the related Japanese katana. The Chinese jiàn is an example of a non-European double-edged sword, like the European models derived from the double-edged Iron Age sword. The first weapons that can be described as "swords" date to around 3300 BC. They have been found in Arslantepe, Turkey, are made from arsenical bronze, and are about long. Some of them are inlaid with silver. The sword developed from the knife or dagger. A knife differs from a dagger in that a knife has only one cutting surface, while a dagger has two. Construction of longer blades became possible during the 3rd millennium BC in the Middle East, first in arsenical copper, then in tin-bronze. Blades longer than were rare and not practical until the late Bronze Age because the Young's modulus (stiffness) of bronze is relatively low, and consequently longer blades would bend easily. The development of the sword out of the dagger was gradual; the first weapons that can be classified as swords without any ambiguity are those found in Minoan Crete, dated to about 1700 BC, reaching a total length of more than . These are the "type A" swords of the Aegean Bronze Age.
One of the most important, and longest-lasting, types of swords of the European Bronze Age was the "Naue II" type (named for Julius Naue, who first described them), also known as "Griffzungenschwert" (lit. "grip-tongue sword"). This type first appears c. the 13th century BC in Northern Italy (or a general Urnfield background), and survives well into the Iron Age, with a life-span of about seven centuries. During its lifetime, metallurgy changed from bronze to iron, but not its basic design. Naue II swords were exported from Europe to the Aegean, and as far afield as Ugarit, beginning about 1200 BC, i.e. just a few decades before the final collapse of the palace cultures in the Bronze Age collapse. Naue II swords could be as long as 85 cm, but most specimens fall into the 60 to 70 cm range. Robert Drews linked the Naue Type II swords, which spread from Southern Europe into the Mediterranean, with the Bronze Age collapse. Naue II swords, along with Nordic full-hilted swords, were made with functionality and aesthetics in mind. The hilts of these swords were beautifully crafted and often contained false rivets in order to make the sword more visually appealing. Swords coming from northern Denmark and northern Germany usually contained three or more fake rivets in the hilt. Sword production in China is attested from the Bronze Age Shang Dynasty. The technology for bronze swords reached its high point during the Warring States period and Qin Dynasty. Among the Warring States period swords, some unique technologies were used, such as casting high-tin edges over softer, lower-tin cores, or the application of diamond-shaped patterns on the blade (see sword of Goujian). Also unique to Chinese bronzes is the consistent use of high-tin bronze (17–21% tin), which is very hard and breaks if stressed too far, whereas other cultures preferred lower-tin bronze (usually 10%), which bends if stressed too far.
Although iron swords were made alongside bronze, it was not until the early Han period that iron completely replaced bronze. In the Indian subcontinent, the earliest available Bronze Age swords of copper were discovered at Indus Valley Civilization sites in the northwestern regions of South Asia. Swords have been recovered in archaeological findings throughout the Ganges-Jamuna Doab region of the Indian subcontinent, consisting of bronze but more commonly copper. Diverse specimens have been discovered in Fatehgarh, where there are several varieties of hilt. These swords have been variously dated to times between 1700–1400 BC, but were probably used more in the opening centuries of the 1st millennium BC. Iron became increasingly common from the 13th century BC. Before that, the use of swords was less frequent. The iron was not quench-hardened, although it often contained sufficient carbon; instead it was work-hardened by hammering, like bronze. This made iron swords comparable to, or only slightly better than, bronze swords in strength and hardness. They could still bend during use rather than spring back into shape. But the easier production, and the better availability of the raw material, for the first time permitted the equipment of entire armies with metal weapons, though Bronze Age Egyptian armies were sometimes fully equipped with bronze weapons. Ancient swords are often found at burial sites. The sword was often placed on the right side of the corpse. Many times the sword was kept over the corpse. In many late Iron Age graves, the sword and the scabbard were bent at 180 degrees; this was known as "killing" the sword. Thus swords may have been considered the most potent and powerful objects. By the time of Classical Antiquity and the Parthian and Sassanid Empires in Iran, iron swords were common. The Greek xiphos and the Roman gladius are typical examples of the type, measuring some .
The late Roman Empire introduced the longer spatha (the term for its wielder, spatharius, became a court rank in Constantinople), and from this time the term "longsword" is applied to swords comparatively long for their respective periods. Swords from the Parthian and Sassanian Empires were quite long, the blades on some late Sassanian swords being just under a metre long. Swords were also used to administer various physical punishments, such as non-surgical amputation or capital punishment by decapitation. The use of a sword, an honourable weapon, was regarded in Europe since Roman times as a privilege reserved for the nobility and the upper classes. The Periplus of the Erythraean Sea mentions swords of Indian iron and steel being exported from ancient India to Greece. Blades from the Indian subcontinent made of Damascus steel also found their way into Persia. In the first millennium BC the Persian armies used a sword that was originally of Scythian design called the akinaka (acinaces). However, the great conquests of the Persians made the sword more famous as a Persian weapon, to the extent that the true nature of the weapon has been lost somewhat, as the name akinaka has come to refer to whichever form of sword the Persian army favoured at the time. It is widely believed that the original akinaka was a 35 to 45 cm (14 to 18 inch) double-edged sword. The design was not uniform, and in fact identification is made more on the nature of the scabbard than the weapon itself; the scabbard usually has a large, decorative mount allowing it to be suspended from a belt on the wearer's right side. Because of this, it is assumed that the sword was intended to be drawn with the blade pointing downwards, ready for surprise stabbing attacks. In the 12th century the Seljuq dynasty introduced the curved shamshir to Persia, and this was in extensive use by the early 16th century.
Chinese iron swords made their first appearance in the later part of the Western Zhou Dynasty, but iron and steel swords were not widely used until the 3rd century BC Han Dynasty. The Chinese dao (刀 pinyin dāo) is single-edged, sometimes translated as sabre or broadsword, and the jian (劍 or 剑 pinyin jiàn) is double-edged. The zhanmadao (literally "horse-chopping sword") was an extremely long anti-cavalry sword from the Song dynasty era. During the Middle Ages sword technology improved, and the sword became a very advanced weapon. The spatha type remained popular throughout the Migration Period and well into the Middle Ages. Vendel Age spathas were decorated with Germanic artwork (not unlike the Germanic bracteates fashioned after Roman coins). The Viking Age saw again a more standardized production, but the basic design remained indebted to the spatha. Around the 10th century, the use of properly quench-hardened and tempered steel started to become much more common than in previous periods. The Frankish 'Ulfberht' blades (the name of the maker inlaid in the blade) were of particularly consistent high quality. Charles the Bald tried to prohibit the export of these swords, as they were used by Vikings in raids against the Franks. Wootz steel, also known as Damascus steel, was a unique and highly prized steel developed on the Indian subcontinent as early as the 5th century BC. Its properties were unique due to the special smelting and reworking of the steel, creating networks of iron carbides described as globular cementite in a matrix of pearlite. The use of Damascus steel in swords became extremely popular in the 16th and 17th centuries. It was only from the 11th century that Norman swords began to develop the crossguard (quillons). During the Crusades of the 12th to 13th centuries, this cruciform type of arming sword remained essentially stable, with variations mainly concerning the shape of the pommel.
These swords were designed as cutting weapons, although effective points were becoming common to counter improvements in armour, especially the 14th-century change from mail to plate armour. It was during the 14th century, with the growing use of more advanced armour, that the hand-and-a-half sword, also known as a "bastard sword", came into being. It had an extended grip that meant it could be used with either one or two hands. Though these swords did not provide a full two-hand grip, they allowed their wielders to hold a shield or parrying dagger in their off hand, or to use the sword two-handed for a more powerful blow. In the Middle Ages, the sword was often used as a symbol of the word of God. The names given to many swords in mythology, literature, and history reflected the high prestige of the weapon and the wealth of the owner. From around 1300 to 1500, in concert with improved armour, innovative sword designs evolved more and more rapidly. The main transition was the lengthening of the grip, allowing two-handed use, and a longer blade. By 1400, this type of sword, at the time called "langes Schwert" (longsword) or "spadone", was common, and a number of 15th- and 16th-century "Fechtbücher" offering instructions on their use survive. Another variant was the specialized armour-piercing sword of the estoc type. The longsword became popular due to its extreme reach and its cutting and thrusting abilities. The estoc became popular because of its ability to thrust into the gaps between plates of armour. The grip was sometimes wrapped in wire or coarse animal hide to provide a better grip and to make it harder to knock the sword out of the user's hand. A number of manuscripts covering longsword combat and techniques, dating from the 13th–16th centuries, exist in German, Italian, and English, providing extensive information on longsword combatives as used throughout this period. Many of these are now readily available online.
In the 16th century, the large zweihänder was used by the elite German and Swiss mercenaries known as doppelsöldners. Zweihänder, literally translated, means two-hander. The zweihänder possesses a long blade, as well as a huge guard for protection. It is estimated that some zweihänder swords were over long, with the one ascribed to Frisian warrior Pier Gerlofs Donia being long. The gigantic blade length was well suited for manipulating and pushing away enemy pole-arms, which were major weapons around this time in both Germany and Eastern Europe. Doppelsöldners also used katzbalgers, whose name means 'cat-gutter'. The katzbalger's S-shaped guard and blade made it well suited for use when the fighting became too close for a zweihänder. Civilian use of swords became increasingly common during the late Renaissance, with duels being a preferred way to honourably settle disputes. The side-sword was a type of war sword used by infantry during the European Renaissance. This sword was a direct descendant of the arming sword. Quite popular between the 16th and 17th centuries, side-swords were ideal for handling the mix of armoured and unarmoured opponents of that time. A new technique of placing one's finger on the ricasso to improve the grip (a practice that would continue in the rapier) led to the production of hilts with a guard for the finger. This sword design eventually led to the development of the civilian rapier, but it was not replaced by it, and the side-sword continued to be used during the rapier's lifetime. As it could be used for both cutting and thrusting, the term cut-and-thrust sword is sometimes used interchangeably with side-sword. As rapiers became more popular, attempts were made to hybridize the blade, sacrificing the effectiveness found in each unique weapon design. These are still considered side-swords and are sometimes labeled "sword rapier" or "cutting rapier" by modern collectors.
Side-swords used in conjunction with bucklers became so popular that the term swashbuckler was coined. This word stems from the new fighting style of the side-sword and buckler, which was filled with much "swashing and making a noise on the buckler". Within the Ottoman Empire, the use of a curved sabre called the yatagan started in the mid-16th century. It would become the weapon of choice for many in Turkey and the Balkans. The sword in this time period was the most personal weapon, the most prestigious, and the most versatile for close combat, but it came to decline in military use as technology such as the crossbow and firearms changed warfare. However, it maintained a key role in civilian self-defence. The earliest evidence of curved swords, or scimitars (and other regional variants such as the Arabian saif, the Persian shamshir and the Turkic kilij), is from the 9th century, when they were used among soldiers in the Khurasan region of Persia. The takoba is a type of broadsword originating in the Sahel, descended from the various Byzantine and Islamic swords used across North Africa. Strongly associated with the Tuaregs, it has a straight double-edged blade measuring about 1 metre in length, usually imported from Europe. Abyssinian swords related to the Persian shamshir are known as shotel. The Ashanti people adopted swords under the name of akrafena. They are still used today in ceremonies, such as the Odwira festival. As steel technology improved, single-edged weapons became popular throughout Asia. Derived from the Chinese jian or dao, the Korean hwandudaedo are known from the early medieval Three Kingdoms period. Production of the Japanese tachi, a precursor to the katana, is recorded from c. AD 900 (see Japanese sword). Japan was famous for the swords it forged in the early 13th century for the class of warrior-nobility known as the samurai.
The types of swords used by the samurai included the ōdachi (extra-long field sword), tachi (long cavalry sword), katana (long sword), and wakizashi (shorter companion sword for the katana). Japanese swords that pre-date the rise of the samurai caste include the tsurugi (straight double-edged blade) and chokutō (straight single-edged blade). Japanese swordmaking reached the height of its development in the 15th and 16th centuries, when samurai increasingly found a need for a sword to use in closer quarters, leading to the creation of the modern katana. Western historians have said that Japanese katana were among the finest cutting weapons in world military history. In Indonesia, images of Indian-style swords can be found in Hindu god statues from ancient Java, circa the 8th to 10th centuries. However, the native types of blade known as "kris", "parang", "klewang" and "golok" were more popular as weapons. These blades are shorter than a sword but longer than a common dagger. In the Philippines, traditional large swords known as the kampilan and the panabas were used in combat by the natives. A notable wielder of the kampilan was Lapu-Lapu, the king of Mactan, whose warriors defeated the Spaniards and killed Portuguese explorer Ferdinand Magellan at the Battle of Mactan on 27 April 1521. Traditional swords in the Philippines were immediately banned, but training in swordsmanship was later hidden from the occupying Spaniards by practicing it in dances. Because of the ban, Filipinos were forced to use swords that were disguised as farm tools. Bolos and baliswords were used during the revolutions against the colonialists not only because ammunition for guns was scarce, but also for concealability while walking in crowded streets and homes. Bolos were also used by young boys who joined their parents in the revolution and by young girls and their mothers in defending the town while the men were on the battlefields.
During the Philippine–American War, in events such as the Balangiga Massacre, most of an American company was hacked to death or seriously injured by bolo-wielding guerrillas in Balangiga, Samar. When the Japanese took control of the country, several American special operations groups stationed in the Philippines were introduced to the Filipino martial arts and swordsmanship, leading to this style reaching America, despite the fact that natives were reluctant to allow outsiders in on their fighting secrets. The khanda is a double-edged straight sword. It is often featured in religious iconography, theatre and art depicting the ancient history of India. Some communities venerate the weapon as a symbol of Shiva. It is a common weapon in the martial arts of the Indian subcontinent. The khanda often appears in Hindu, Buddhist and Sikh scriptures and art. In Sri Lanka, a unique wind furnace was used to produce high-quality steel. This gave the blade a very hard cutting edge and beautiful patterns. For these reasons it became a very popular trading material. The firangi (derived from the Arabic term for a Western European, a "Frank") was a sword type which used blades manufactured in Western Europe and imported by the Portuguese, or made locally in imitation of European blades. Because of its length the firangi is usually regarded as primarily a cavalry weapon. The sword has been especially associated with the Marathas, who were famed for their cavalry. However, the firangi was also widely used by Sikhs and Rajputs. The talwar is a type of curved sword from India and other countries of the Indian subcontinent; it was adopted by communities such as the Rajputs, Sikhs and Marathas, who favored the sword as their main weapon. It became more widespread in the medieval era. The urumi (lit. "curling blade") is a "sword" with a flexible whip-like blade.
A single-edged type of sidearm used by the Hussites was popularized in 16th-century Germany under its Czech name "dusack", also known as "Säbel auf Teutsch gefasst" ("sabre fitted in the German manner"). A closely related weapon is the "schnepf", or Swiss sabre, used in Early Modern Switzerland. The cut-and-thrust mortuary sword was used after 1625 by cavalry during the English Civil War. This (usually) two-edged sword sported a half-basket hilt with a straight blade some 90–105 cm long. Later in the 17th century, the swords used by cavalry became predominantly single-edged. The so-called Walloon sword ("épée wallonne") was common in the Thirty Years' War and the Baroque era. Its hilt was ambidextrous, with shell-guards and a knuckle-bow that inspired 18th-century continental hunting hangers. Following their campaign in the Netherlands in 1672, the French began producing this weapon as their first regulation sword. Weapons of this design were also issued to the Swedish army from the time of Gustavus Adolphus until as late as the 1850s. The rapier is believed to have evolved either from the Spanish "espada ropera" or from the swords of the Italian nobility somewhere in the later part of the 16th century. The rapier differed from most earlier swords in that it was not a military weapon but a primarily civilian sword. Both the rapier and the Italian schiavona developed the crossguard into a basket-shaped guard for hand protection. During the 17th and 18th centuries, the shorter smallsword became an essential fashion accessory in European countries and the New World, and most wealthy men and military officers carried one slung from a belt, though in some places such as the Scottish Highlands large swords such as the basket-hilted broadsword were preferred. Both the smallsword and the rapier remained popular dueling swords well into the 18th century. As the wearing of swords fell out of fashion, canes took their place in a gentleman's wardrobe.
In the Victorian era, this role passed to the gentleman's umbrella. Some examples of canes—those known as sword canes or swordsticks—incorporate a concealed blade. The French martial art "la canne" developed for fighting with canes and swordsticks and has now evolved into a sport. The English martial art singlestick is very similar. With the rise of the pistol duel, the duelling sword fell out of fashion long before the practice of duelling itself. By about 1770, English duelists had enthusiastically adopted the pistol, and sword duels dwindled. However, the custom of duelling with épées persisted well into the 20th century in France. Such modern duels were not fought to the death; the duellists' aim was instead merely to draw blood from the opponent's sword arm. Towards the end of its useful life, the sword served more as a weapon of self-defence than for use on the battlefield, and the military importance of swords steadily decreased during the Modern Age. Even as a personal sidearm, the sword began to lose its preeminence in the early 19th century, reflecting the development of reliable handguns. However, swords were still normally carried in combat by cavalrymen and by officers of other branches throughout the 19th and early 20th centuries, both in colonial and European warfare. For example, during the Aceh War the Acehnese klewangs, swords similar to the machete, proved very effective in close-quarters combat with Dutch troops, leading the Royal Netherlands East Indies Army to adopt a heavy cutlass, also called a klewang (very similar in appearance to the US Navy Model 1917 Cutlass), to counter them. Mobile troops armed with carbines and klewangs succeeded in suppressing Aceh resistance where traditional infantry with rifle and bayonet had failed. From that time until the 1950s the Royal Dutch East Indies Army, Royal Dutch Army, Royal Dutch Navy and Dutch police used these cutlasses, called klewang.
Swords continued in general peacetime use by the cavalry of most armies during the years prior to World War I. The British Army formally adopted a completely new design of cavalry sword in 1908, almost the last change in British Army weapons before the outbreak of the war. At the outbreak of World War I, infantry officers in all combatant armies then involved (French, German, British, Austro-Hungarian, Russian, Belgian and Serbian) still carried swords as part of their field equipment. On mobilization in August 1914, all serving British Army officers were required to have their swords sharpened, as the only peacetime use of the weapon had been for saluting on parade. The high visibility and limited practical use of the sword, however, led to it being abandoned within weeks, although most cavalry continued to carry sabres throughout the war. While retained as a symbol of rank and status by at least senior officers of infantry, artillery and other branches, the sword was usually left with non-essential baggage when units reached the front line. It was not until the late 1920s and early 1930s that this historic weapon was finally discarded for all but ceremonial purposes by most remaining horse-mounted regiments of Europe and the Americas. In China, troops used the long anti-cavalry miao dao well into the Second Sino-Japanese War. The last units of British heavy cavalry switched to using armoured vehicles as late as 1938. Swords and other dedicated melee weapons were used occasionally by many countries during World War II, but typically as secondary weapons, as they were outclassed by coexisting firearms. A notable exception was the Imperial Japanese Army where, for cultural reasons, all officers and warrant officers carried the Type 94 "shin-gunto" ("new military sword") into battle from 1934 until 1945. Swords are commonly worn as a ceremonial item by officers in many military and naval services throughout the world.
Occasions to wear swords include any event in dress uniforms where the rank-and-file carry arms: parades, reviews, courts-martial, tattoos, and changes of command. They are also commonly worn for officers' weddings, and when wearing dress uniforms to church—although they are rarely actually worn in the church itself. In the British forces they are also worn for any appearance at Court. In the United States, every Naval officer at or above the rank of Lieutenant Commander is required to own a sword, which can be prescribed for any formal outdoor ceremonial occasion; they are normally worn for changes of command and parades. For some Navy parades, cutlasses are issued to Petty Officers and Chief Petty Officers. In the U.S. Marine Corps, every officer must own a sword, which is prescribed for formal parades and other ceremonies where dress uniforms are worn and the rank-and-file are under arms. On these occasions, depending on their billet, Marine Non-Commissioned Officers (E-6 and above) may also be required to carry swords, which have hilts of a pattern similar to U.S. Naval officers' swords but are actually sabres. The USMC Model 1859 NCO Sword is the longest continuously issued edged weapon in the U.S. inventory. The Marine officer swords are of the Mameluke pattern, which was adopted in 1825 in recognition of the Marines' key role in the capture of the Tripolitan city of Derna during the First Barbary War. Taken out of issue for approximately 20 years from 1855 until 1875, it was restored to service in the year of the Corps' centennial and has remained in issue since. The production of replicas of historical swords originates with 19th-century historicism. Contemporary replicas can range from cheap factory-produced look-alikes to exact recreations of individual artifacts, including an approximation of the historical production methods. Some kinds of swords are still commonly used today as weapons, often as a side arm for military infantry.
The Japanese katana, wakizashi and tanto are carried by some infantry and officers in Japan and other parts of Asia, and the kukri is the official melee weapon of Nepal. Other swords in use today are the sabre, the scimitar, the shortsword and the machete. The sword consists of the blade and the hilt. The term "scabbard" applies to the cover for the sword blade when not in use. There is considerable variation in the detailed design of sword blades. The diagram opposite shows a typical medieval European sword. Early iron blades have rounded points due to the limited metallurgy of the time. These were still effective for thrusting against lightly armoured opponents. As armour advanced, blades were made narrower, stiffer and sharply pointed to defeat the armour by thrusting. Dedicated cutting blades are wide and thin, and often have grooves known as fullers which lighten the blade at the cost of some of the blade's stiffness. The edges of a cutting sword are almost parallel. Blades oriented for the thrust are thicker, sometimes with a distinct midrib for increased stiffness, and have a strong taper and an acute point. The geometry of a cutting sword blade allows for acute edge angles. An edge with a more acute angle is more inclined to degrade quickly in combat situations than an edge with a more obtuse angle. Also, an acute edge angle is not the primary factor of a blade's sharpness. The part of the blade between the center of percussion (CoP) and the point is called the "foible" (weak) of the blade, and that between the center of balance (CoB) and the hilt is the "forte" (strong). The section between the CoP and the CoB is the "middle". The "ricasso" or "shoulder" identifies a short section of blade immediately below the guard that is left completely unsharpened. Many swords have no ricasso.
On some large weapons, such as the German "Zweihänder", a metal cover surrounded the ricasso, and a swordsman might grip it in one hand to wield the weapon more easily in close-quarter combat. The ricasso normally bears the maker's mark. The tang is the extension of the blade to which the hilt is fitted. On Japanese blades, the maker's mark appears on the tang under the grip. The hilt is the collective term for the parts allowing for the handling and control of the blade; these consist of the grip, the pommel, and a simple or elaborate guard, which in post-Viking Age swords could consist of only a crossguard (called a cruciform hilt or quillons). The pommel was originally designed as a stop to prevent the sword slipping from the hand. From around the 11th century onward it became a counterbalance to the blade, allowing a more fluid style of fighting. It can also be used as a blunt instrument at close range, and its weight affects the centre of percussion. In later times a "sword knot" or "tassel" was sometimes added. By the 17th century, with the growing use of firearms and the accompanying decline in the use of armour, many rapiers and dueling swords had developed elaborate basket hilts, which protected the palm of the wielder and rendered the gauntlet obsolete. In late medieval and Renaissance-era European swords, a flap of leather called the "chappe" or "rain guard" was attached to a sword's crossguard at the base of the hilt to protect the mouth of the scabbard and prevent water from entering. Common accessories to the sword include the scabbard, as well as the 'sword belt'. Sword typology is based on morphological criteria on one hand (blade shape (cross-section, taper, and length), shape and size of the hilt and pommel) and on age and place of origin on the other (Bronze Age, Iron Age, European (medieval, early modern, modern), Asian).
The relatively comprehensive Oakeshott typology was created by historian and illustrator Ewart Oakeshott as a way to define and catalogue European swords of the medieval period based on physical form, including blade shape and hilt configuration. The typology also focuses on the smaller, and in some cases contemporary, single-handed swords such as the arming sword. As noted above, the terms longsword, broad sword, great sword, and Gaelic claymore are used relative to the era under consideration, and each term designates a particular type of sword. In most Asian countries, a sword (jian 劍, geom (검), ken/tsurugi (剣), pedang) is a double-edged straight-bladed weapon, while a knife or saber (dāo 刀, do (도), to/katana (刀), pisau, golok) refers to a single-edged object. In Sikh history, the sword is held in very high esteem. A single-edged sword is called a kirpan, and its double-edged counterpart a khanda or tega. The South Indian "churika" is a handheld double-edged sword traditionally used in the Malabar region of Kerala. It is also worshipped as the weapon of Vettakkorumakan, the hunter god in Hinduism. European terminology does give generic names for single-edged and double-edged blades but refers to specific types with the term 'sword' covering them all. For example, the backsword may be so called because it is single-edged but the falchion which is also single-edged is given its own specific name. A two-handed sword is any sword that usually requires two hands to wield, or more specifically the very large swords of the 16th century. Throughout history two-handed swords have generally been less common than their one-handed counterparts, one exception being their common use in Japan. A hand-and-a-half sword, colloquially known as a "bastard sword", was a sword with an extended grip and sometimes an extended pommel, so that it could be used with either one or two hands.
Although these swords may not provide a full two-hand grip, they allowed their wielders to hold a shield or parrying dagger in the off hand, or to deliver a more powerful two-handed blow. These should not be confused with a longsword, two-handed sword, or Zweihänder, which were always intended to be used with two hands. In fantasy, magic swords often appear, based on their use in myth and legend. The science fiction counterpart to these is known as an energy sword (sometimes also referred to as a "beam sword" or "laser sword"), a sword whose blade consists of, or is augmented by, concentrated energy. A well-known example of this type of sword is the lightsaber, shown in the "Star Wars" franchise.
https://en.wikipedia.org/wiki?curid=27863
Surface (topology) In the part of mathematics referred to as topology, a surface is a two-dimensional manifold. Some surfaces arise as the boundaries of three-dimensional solids; for example, the sphere is the boundary of the solid ball. Other surfaces arise as graphs of functions of two variables; see the figure at right. However, surfaces can also be defined abstractly, without reference to any ambient space. For example, the Klein bottle is a surface that cannot be embedded in three-dimensional Euclidean space. Topological surfaces are sometimes equipped with additional information, such as a Riemannian metric or a complex structure, that connects them to other disciplines within mathematics, such as differential geometry and complex analysis. The various mathematical notions of surface can be used to model surfaces in the physical world. In mathematics, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R3, such as spheres. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not. A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a "coordinate patch" on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian). The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects. 
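The latitude-and-longitude example above can be made concrete. The following Python sketch (the function names are hypothetical, chosen for this illustration) treats (latitude, longitude) as local coordinates on the unit sphere in R3; as the article notes, the inverse chart breaks down at the poles and along the 180th meridian.

```python
import math

def chart_to_sphere(lat, lon):
    """Map latitude/longitude (in degrees) to a point on the unit sphere in R^3."""
    phi, lam = math.radians(lat), math.radians(lon)
    return (math.cos(phi) * math.cos(lam),
            math.cos(phi) * math.sin(lam),
            math.sin(phi))

def sphere_to_chart(x, y, z):
    """Inverse chart: recover (lat, lon); ill-defined at the poles and the 180th meridian."""
    return math.degrees(math.asin(z)), math.degrees(math.atan2(y, x))

# Round-trip through the chart away from the singular locus.
lat, lon = 48.85, 2.35
back = sphere_to_chart(*chart_to_sphere(lat, lon))
assert abs(back[0] - lat) < 1e-9 and abs(back[1] - lon) < 1e-9
```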
For example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface. A "(topological) surface" is a topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E2. Such a neighborhood, together with the corresponding homeomorphism, is known as a "(coordinate) chart". It is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as "local coordinates" and these homeomorphisms lead us to describe surfaces as being "locally Euclidean". In most writings on the subject, it is often assumed, explicitly or implicitly, that as a topological space a surface is also nonempty, second-countable, and Hausdorff. It is also often assumed that the surfaces under consideration are connected. The rest of this article will assume, unless specified otherwise, that a surface is nonempty, Hausdorff, second-countable, and connected. More generally, a "(topological) surface with boundary" is a Hausdorff topological space in which every point has an open neighbourhood homeomorphic to some open subset of the closure of the upper half-plane H2 in C. These homeomorphisms are also known as "(coordinate) charts". The boundary of the upper half-plane is the "x"-axis. A point on the surface mapped via a chart to the "x"-axis is termed a "boundary point". The collection of such points is known as the "boundary" of the surface which is necessarily a one-manifold, that is, the union of closed curves. On the other hand, a point mapped to above the "x"-axis is an "interior point". The collection of interior points is the "interior" of the surface which is always non-empty. The closed disk is a simple example of a surface with boundary. The boundary of the disc is a circle. The term "surface" used without qualification refers to surfaces without boundary. 
In particular, a surface with empty boundary is a surface in the usual sense. A surface with empty boundary which is compact is known as a 'closed' surface. The two-dimensional sphere, the two-dimensional torus, and the real projective plane are examples of closed surfaces. The Möbius strip is a surface on which the distinction between clockwise and counterclockwise can be defined locally, but not globally. In general, a surface is said to be "orientable" if it does not contain a homeomorphic copy of the Möbius strip; intuitively, it has two distinct "sides". For example, the sphere and torus are orientable, while the real projective plane is not (because the real projective plane with one point removed is homeomorphic to the open Möbius strip). In differential and algebraic geometry, extra structure is added upon the topology of the surface. This added structure can be a smoothness structure (making it possible to define differentiable maps to and from the surface), a Riemannian metric (making it possible to define length and angles on the surface), a complex structure (making it possible to define holomorphic maps to and from the surface—in which case the surface is called a Riemann surface), or an algebraic structure (making it possible to detect singularities, such as self-intersections and cusps, that cannot be described solely in terms of the underlying topology). Historically, surfaces were initially defined as subspaces of Euclidean spaces. Often, these surfaces were the locus of zeros of certain functions, usually polynomial functions. Such a definition considered the surface as part of a larger (Euclidean) space, and as such was termed "extrinsic". In the previous section, a surface is defined as a topological space with certain properties, namely Hausdorff and locally Euclidean. This topological space is not considered a subspace of another space.
In this sense, the definition given above, which is the definition that mathematicians use at present, is "intrinsic". A surface defined intrinsically is not required to satisfy the added constraint of being a subspace of Euclidean space. It may seem possible for some surfaces defined intrinsically to not be surfaces in the extrinsic sense. However, the Whitney embedding theorem asserts that every surface can in fact be embedded homeomorphically into Euclidean space, in fact into E4; the extrinsic and intrinsic approaches therefore turn out to be equivalent. In fact, any compact surface that is either orientable or has a boundary can be embedded in E3; on the other hand, the real projective plane, which is compact, non-orientable and without boundary, cannot be embedded into E3 (see Gramain). Steiner surfaces, including Boy's surface, the Roman surface and the cross-cap, are models of the real projective plane in E3, but only the Boy surface is an immersed surface. All these models are singular at points where they intersect themselves. The Alexander horned sphere is a well-known pathological embedding of the two-sphere into the three-sphere. The chosen embedding (if any) of a surface into another space is regarded as extrinsic information; it is not essential to the surface itself. For example, a torus can be embedded into E3 in the "standard" manner (which looks like a bagel) or in a knotted manner (see figure). The two embedded tori are homeomorphic, but not isotopic: They are topologically equivalent, but their embeddings are not. The image of a continuous, injective function from R2 to higher-dimensional Rn is said to be a parametric surface. Such an image is so-called because the "x"- and "y"- directions of the domain R2 are 2 variables that parametrize the image. A parametric surface need not be a topological surface. A surface of revolution can be viewed as a special kind of parametric surface.
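The "standard" bagel-shaped embedding of the torus mentioned above can be written down as an explicit parametric surface. A minimal Python sketch, where the major radius R and minor radius r are assumed values chosen for illustration:

```python
import math

def torus(u, v, R=2.0, r=1.0):
    """Standard parametric embedding of the torus in E^3; u, v range over [0, 2*pi)."""
    return ((R + r * math.cos(v)) * math.cos(u),
            (R + r * math.cos(v)) * math.sin(u),
            r * math.sin(v))

# Every parametrized point also satisfies the implicit equation
# (sqrt(x^2 + y^2) - R)^2 + z^2 = r^2, so the image is the same surface
# one would obtain as a zero locus.
for u in (0.0, 1.0, 2.5):
    for v in (0.0, 1.5, 3.0):
        x, y, z = torus(u, v)
        assert abs((math.hypot(x, y) - 2.0) ** 2 + z ** 2 - 1.0) < 1e-9
```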
If "f" is a smooth function from R3 to R whose gradient is nowhere zero, then the locus of zeros of "f" does define a surface, known as an "implicit surface". If the condition of non-vanishing gradient is dropped, then the zero locus may develop singularities. Each closed surface can be constructed from an oriented polygon with an even number of sides, called a fundamental polygon of the surface, by pairwise identification of its edges. For example, in each polygon below, attaching the sides with matching labels ("A" with "A", "B" with "B"), so that the arrows point in the same direction, yields the indicated surface. Any fundamental polygon can be written symbolically as follows. Begin at any vertex, and proceed around the perimeter of the polygon in either direction until returning to the starting vertex. During this traversal, record the label on each edge in order, with an exponent of −1 if the edge points opposite to the direction of traversal. The four models above, when traversed clockwise starting at the upper left, yield the words AA⁻¹ (sphere), ABA⁻¹B⁻¹ (torus), AA (real projective plane), and ABAB⁻¹ (Klein bottle). Note that the sphere and the projective plane can both be realized as quotients of the 2-gon, while the torus and Klein bottle require a 4-gon (square). The expression thus derived from a fundamental polygon of a surface turns out to be the sole relation in a presentation of the fundamental group of the surface with the polygon edge labels as generators. This is a consequence of the Seifert–van Kampen theorem. Gluing edges of polygons is a special kind of quotient space process. The quotient concept can be applied in greater generality to produce new or alternative constructions of surfaces. For example, the real projective plane can be obtained as the quotient of the sphere by identifying all pairs of opposite points on the sphere. Another example of a quotient is the connected sum.
The connected sum of two surfaces "M" and "N", denoted "M" # "N", is obtained by removing a disk from each of them and gluing them along the boundary components that result. The boundary of a disk is a circle, so these boundary components are circles. The Euler characteristic χ("M" # "N") of "M" # "N" is the sum of the Euler characteristics of the summands, minus two: χ("M" # "N") = χ("M") + χ("N") − 2. The sphere S is an identity element for the connected sum, meaning that "M" # S = "M". This is because deleting a disk from the sphere leaves a disk, which simply replaces the disk deleted from "M" upon gluing. Connected summation with the torus T is also described as attaching a "handle" to the other summand "M". If "M" is orientable, then so is "M" # T. The connected sum is associative, so the connected sum of a finite collection of surfaces is well-defined. The connected sum of two real projective planes, P # P (writing P for the real projective plane), is the Klein bottle K. The connected sum of the real projective plane and the Klein bottle is homeomorphic to the connected sum of the real projective plane with the torus; in a formula, P # K = P # T. Thus, the connected sum of three real projective planes is homeomorphic to the connected sum of the real projective plane with the torus. Any connected sum involving a real projective plane is nonorientable. A closed surface is a surface that is compact and without boundary. Examples are spaces like the sphere, the torus and the Klein bottle. Examples of non-closed surfaces are: an open disk, which is a sphere with a puncture; a cylinder, which is a sphere with two punctures; and the Möbius strip. As with any closed manifold, a surface embedded in Euclidean space that is closed with respect to the inherited Euclidean topology is "not" necessarily a closed surface; for example, a disk embedded in R3 that contains its boundary is a surface that is topologically closed, but not a closed surface.
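The Euler-characteristic bookkeeping for connected sums described above is easy to check mechanically. A minimal Python sketch (the function name is hypothetical):

```python
def euler_connected_sum(*chis):
    """chi(M1 # ... # Mn) = chi(M1) + ... + chi(Mn) - 2*(n - 1),
    obtained by applying chi(M # N) = chi(M) + chi(N) - 2 repeatedly."""
    return sum(chis) - 2 * (len(chis) - 1)

SPHERE, TORUS, PROJECTIVE_PLANE = 2, 0, 1  # Euler characteristics

assert euler_connected_sum(SPHERE, TORUS) == TORUS            # S is an identity: S # T = T
assert euler_connected_sum(PROJECTIVE_PLANE, PROJECTIVE_PLANE) == 0  # P # P = K, chi(K) = 0
assert euler_connected_sum(TORUS, TORUS, TORUS) == -4         # genus-3 surface: 2 - 2g
```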
The "classification theorem of closed surfaces" states that any connected closed surface is homeomorphic to some member of one of these three families: the sphere, the connected sum of "g" tori for "g" ≥ 1, and the connected sum of "k" real projective planes for "k" ≥ 1. The surfaces in the first two families are orientable. It is convenient to combine the two families by regarding the sphere as the connected sum of 0 tori. The number "g" of tori involved is called the "genus" of the surface. The sphere and the torus have Euler characteristics 2 and 0, respectively, and in general the Euler characteristic of the connected sum of "g" tori is 2 − 2"g". The surfaces in the third family are nonorientable. The Euler characteristic of the real projective plane is 1, and in general the Euler characteristic of the connected sum of "k" of them is 2 − "k". It follows that a closed surface is determined, up to homeomorphism, by two pieces of information: its Euler characteristic, and whether it is orientable or not. In other words, Euler characteristic and orientability completely classify closed surfaces up to homeomorphism. Closed surfaces with multiple connected components are classified by the class of each of their connected components, and thus one generally assumes that the surface is connected. Relating this classification to connected sums, the closed surfaces up to homeomorphism form a commutative monoid under the operation of connected sum, as indeed do manifolds of any fixed dimension. The identity is the sphere, while the real projective plane P and the torus T generate this monoid, with a single relation P # P # P = P # T, which may also be written P # K = P # T, since the Klein bottle K satisfies K = P # P. This relation is sometimes known as Dyck's theorem after Walther von Dyck, who proved it in 1888, and the triple cross surface P # P # P is accordingly called Dyck's surface.
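Since Euler characteristic and orientability completely classify closed surfaces, the classification can be phrased as a simple lookup from the two invariants. A minimal Python sketch (illustrative only):

```python
def classify(chi, orientable):
    """Name the closed surface with the given Euler characteristic and orientability."""
    if orientable:
        # chi = 2 - 2g, so chi is even and at most 2.
        assert chi % 2 == 0 and chi <= 2
        g = (2 - chi) // 2
        return f"connected sum of {g} tori (genus {g})"
    # Nonorientable: chi = 2 - k for k >= 1 projective planes.
    assert chi <= 1
    k = 2 - chi
    return f"connected sum of {k} projective planes"

assert classify(2, True) == "connected sum of 0 tori (genus 0)"      # the sphere
assert classify(0, False) == "connected sum of 2 projective planes"  # the Klein bottle
```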
Geometrically, connect-sum with a torus () adds a handle with both ends attached to the same side of the surface, while connect-sum with a Klein bottle () adds a handle with the two ends attached to opposite sides of an orientable surface; in the presence of a projective plane (), the surface is not orientable (there is no notion of side), so there is no difference between attaching a torus and attaching a Klein bottle, which explains the relation. Compact surfaces, possibly with boundary, are simply closed surfaces with a finite number of holes (open discs that have been removed). Thus, a connected compact surface is classified by the number of boundary components and the genus of the corresponding closed surface – equivalently, by the number of boundary components, the orientability, and Euler characteristic. The genus of a compact surface is defined as the genus of the corresponding closed surface. This classification follows almost immediately from the classification of closed surfaces: removing an open disc from a closed surface yields a compact surface with a circle for boundary component, and removing "k" open discs yields a compact surface with "k" disjoint circles for boundary components. The precise locations of the holes are irrelevant, because the homeomorphism group acts "k"-transitively on any connected manifold of dimension at least 2. Conversely, the boundary of a compact surface is a closed 1-manifold, and is therefore the disjoint union of a finite number of circles; filling these circles with disks (formally, taking the cone) yields a closed surface. The unique compact orientable surface of genus "g" and with "k" boundary components is often denoted formula_8 for example in the study of the mapping class group. A Riemann surface is a complex 1-manifold. On a purely topological level, a Riemann surface is therefore also an orientable surface in the sense of this article. In fact, every compact orientable surface is realizable as a Riemann surface. 
Thus compact Riemann surfaces are characterized topologically by their genus: 0, 1, 2, ... On the other hand, the genus does not characterize the complex structure. For example, there are uncountably many non-isomorphic compact Riemann surfaces of genus 1 (the elliptic curves). Non-compact surfaces are more difficult to classify. As a simple example, a non-compact surface can be obtained by puncturing (removing a finite set of points from) a closed manifold. On the other hand, any open subset of a compact surface is itself a non-compact surface; consider, for example, the complement of a Cantor set in the sphere, otherwise known as the Cantor tree surface. However, not every non-compact surface is a subset of a compact surface; two canonical counterexamples are the Jacob's ladder and the Loch Ness monster, which are non-compact surfaces with infinite genus. A non-compact surface "M" has a non-empty space of ends "E"("M"), which informally speaking describes the ways that the surface "goes off to infinity". The space "E"("M") is always topologically equivalent to a closed subspace of the Cantor set. "M" may have a finite or countably infinite number "N"h of handles, as well as a finite or countably infinite number "N"p of projective planes. If both "N"h and "N"p are finite, then these two numbers, and the topological type of space of ends, classify the surface "M" up to topological equivalence. If either or both of "N"h and "N"p is infinite, then the topological type of "M" depends not only on these two numbers but also on how the infinite one(s) approach the space of ends. In general the topological type of "M" is determined by the four subspaces of "E"("M") that are limit points of infinitely many handles and infinitely many projective planes, limit points of only handles, limit points of only projective planes, and limit points of neither.
If one removes the assumption of second-countability from the definition of a surface, there exist (necessarily non-compact) topological surfaces having no countable base for their topology. Perhaps the simplest example is the Cartesian product of the long line with the space of real numbers. Another surface having no countable base for its topology, but "not" requiring the Axiom of Choice to prove its existence, is the Prüfer manifold, which can be described by simple equations that show it to be a real-analytic surface. The Prüfer manifold may be thought of as the upper half plane together with one additional "tongue" "T""x" hanging down from it directly below the point ("x",0), for each real "x". In 1925, Tibor Radó proved the theorem that all Riemann surfaces (i.e., one-dimensional complex manifolds) are necessarily second-countable. By contrast, if one replaces the real numbers in the construction of the Prüfer surface by the complex numbers, one obtains a two-dimensional complex manifold (which is necessarily a 4-dimensional real manifold) with no countable base. The classification of closed surfaces has been known since the 1860s, and today a number of proofs exist. Topological and combinatorial proofs in general rely on the difficult result that every compact 2-manifold is homeomorphic to a simplicial complex, which is of interest in its own right. The most common proof of the classification brings every triangulated surface to a standard form. A simplified proof, which avoids a standard form, was discovered by John H. Conway circa 1992, which he called the "Zero Irrelevancy Proof" or "ZIP proof". A geometric proof, which yields a stronger geometric result, is the uniformization theorem. This was originally proven only for Riemann surfaces in the 1880s and 1900s by Felix Klein, Paul Koebe, and Henri Poincaré. Polyhedra, such as the boundary of a cube, are among the first surfaces encountered in geometry.
It is also possible to define "smooth surfaces", in which each point has a neighborhood diffeomorphic to some open set in E2. This elaboration allows calculus to be applied to surfaces to prove many results. Two smooth surfaces are diffeomorphic if and only if they are homeomorphic. (The analogous result does not hold for higher-dimensional manifolds.) Thus closed surfaces are classified up to diffeomorphism by their Euler characteristic and orientability. Smooth surfaces equipped with Riemannian metrics are of foundational importance in differential geometry. A Riemannian metric endows a surface with notions of geodesic, distance, angle, and area. It also gives rise to Gaussian curvature, which describes how curved or bent the surface is at each point. Curvature is a rigid, geometric property, in that it is not preserved by general diffeomorphisms of the surface. However, the famous Gauss–Bonnet theorem for closed surfaces states that the integral of the Gaussian curvature "K" over the entire surface "S" is determined by the Euler characteristic: This result exemplifies the deep relationship between the geometry and topology of surfaces (and, to a lesser extent, higher-dimensional manifolds). Another way in which surfaces arise in geometry is by passing into the complex domain. A complex one-manifold is a smooth oriented surface, also called a Riemann surface. Any complex nonsingular algebraic curve viewed as a complex manifold is a Riemann surface. Every closed orientable surface admits a complex structure. Complex structures on a closed oriented surface correspond to conformal equivalence classes of Riemannian metrics on the surface. One version of the uniformization theorem (due to Poincaré) states that any Riemannian metric on an oriented, closed surface is conformally equivalent to an essentially unique metric of constant curvature. 
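The Gauss–Bonnet identity can be checked numerically in the simplest case: on the unit sphere the curvature is K = 1 everywhere and χ = 2, so the integral of K should be 2πχ = 4π. A rough midpoint-rule sketch in Python, using latitude and longitude with area element cos(φ) dφ dλ:

```python
import math

# Midpoint-rule integration of K = 1 over the unit sphere,
# parametrized by latitude phi in [-pi/2, pi/2] and longitude lam in [0, 2*pi).
n = 200
dphi = math.pi / n
dlam = 2 * math.pi / n
total = 0.0
for i in range(n):
    phi = -math.pi / 2 + (i + 0.5) * dphi
    for j in range(n):
        total += 1.0 * math.cos(phi) * dphi * dlam  # K = 1, area element cos(phi) dphi dlam

# Gauss-Bonnet: the integral of K equals 2*pi*chi, and chi(sphere) = 2.
assert abs(total - 2 * math.pi * 2) < 1e-3
```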
This provides a starting point for one of the approaches to Teichmüller theory, which provides a finer classification of Riemann surfaces than the topological one by Euler characteristic alone. A "complex surface" is a complex two-manifold and thus a real four-manifold; it is not a surface in the sense of this article. Neither are algebraic curves defined over fields other than the complex numbers, nor are algebraic surfaces defined over fields other than the real numbers.
https://en.wikipedia.org/wiki?curid=27865
Surjective function In mathematics, a function "f" from a set "X" to a set "Y" is surjective (also known as onto, or a surjection), if for every element "y" in the codomain "Y" of "f", there is at least one element "x" in the domain "X" of "f" such that "f"("x") = "y". It is not required that "x" be unique; the function "f" may map one or more elements of "X" to the same element of "Y". The term "surjective" and the related terms "injective" and "bijective" were introduced by Nicolas Bourbaki, a group of mainly French 20th-century mathematicians who, under this pseudonym, wrote a series of books presenting an exposition of modern advanced mathematics, beginning in 1935. The French word "sur" means "over" or "above", and relates to the fact that the image of the domain of a surjective function completely covers the function's codomain. Any function induces a surjection by restricting its codomain to the image of its domain. Every surjective function has a right inverse, and every function with a right inverse is necessarily a surjection. The composition of surjective functions is always surjective. Any function can be decomposed into a surjection and an injection. A surjective function is a function whose image is equal to its codomain. Equivalently, a function "f" with domain "X" and codomain "Y" is surjective, if for every "y" in "Y", there exists at least one "x" in "X" with "f"("x") = "y". Surjections are sometimes denoted by a two-headed rightwards arrow (↠), as in "f" : "X" ↠ "Y". Symbolically, "f" : "X" → "Y" is surjective if and only if for every "y" ∈ "Y" there exists "x" ∈ "X" such that "f"("x") = "y". A function is bijective if and only if it is both surjective and injective. If (as is often done) a function is identified with its graph, then surjectivity is not a property of the function itself, but rather a property of the mapping. That is, it depends on the function together with its codomain. Unlike injectivity, surjectivity cannot be read off of the graph of the function alone.
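For finite sets the definition can be tested directly: compute the image {"f"("x") : "x" in "X"} and compare it with the codomain. A minimal Python sketch (the function name is hypothetical):

```python
def is_surjective(f, domain, codomain):
    """f is onto iff its image equals the whole codomain."""
    return {f(x) for x in domain} == set(codomain)

X = {-2, -1, 0, 1, 2}
assert is_surjective(abs, X, {0, 1, 2})          # every codomain element has a preimage
assert not is_surjective(abs, X, {0, 1, 2, 3})   # 3 has no preimage, so not onto
```

Note that the same function is surjective or not depending on the chosen codomain, which is exactly the point made above about surjectivity not being readable off the graph alone.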
The function "g" : "Y" → "X" is said to be a right inverse of the function "f" : "X" → "Y" if "f"("g"("y")) = "y" for every "y" in "Y" ("g" can be undone by "f"). In other words, "g" is a right inverse of "f" if the composition of "g" and "f" in that order is the identity function on the domain "Y" of "g". The function "g" need not be a complete inverse of "f" because the composition in the other order, "g" o "f", may not be the identity function on the domain "X" of "f". In other words, "f" can undo or ""reverse"" "g", but cannot necessarily be reversed by it. Every function with a right inverse is necessarily a surjection. The proposition that every surjective function has a right inverse is equivalent to the axiom of choice. If "f" : "X" → "Y" is surjective and "B" is a subset of "Y", then "f"("f" −1("B")) = "B". Thus, "B" can be recovered from its preimage "f" −1("B"). For example, in the first illustration, above, there is some function "g" such that "g"("C") = 4. There is also some function "f" such that "f"(4) = "C". It doesn't matter that "g"("C") can also equal 3; it only matters that "f" "reverses" "g". A function "f" : "X" → "Y" is surjective if and only if it is right-cancellative: given any functions "g", "h" : "Y" → "Z", whenever "g" o "f" = "h" o "f", then "g" = "h". This property is formulated in terms of functions and their composition and can be generalized to the more general notion of the morphisms of a category and their composition. Right-cancellative morphisms are called epimorphisms. Specifically, surjective functions are precisely the epimorphisms in the category of sets. The prefix "epi" is derived from the Greek preposition "ἐπί" meaning "over", "above", "on". Any morphism with a right inverse is an epimorphism, but the converse is not true in general. A right inverse "g" of a morphism "f" is called a section of "f". A morphism with a right inverse is called a split epimorphism.
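For finite sets a right inverse can be built by simply choosing one preimage for each value, which is the finite shadow of the axiom-of-choice argument above. A Python sketch (the function name is hypothetical):

```python
def right_inverse(f, domain, codomain):
    """Choose, for each y in the codomain, one x with f(x) = y; requires f surjective."""
    g = {}
    for x in domain:
        g.setdefault(f(x), x)  # keep the first preimage found for each value
    assert set(g) == set(codomain), "f is not surjective onto this codomain"
    return g

g = right_inverse(abs, [-2, -1, 0, 1, 2], {0, 1, 2})
assert all(abs(g[y]) == y for y in {0, 1, 2})  # f(g(y)) = y for every y
# The other order need not be the identity: here g[abs(2)] == -2, not 2,
# because -2 was the preimage chosen for the value 2.
assert g[abs(2)] == -2
```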
Any function with domain "X" and codomain "Y" can be seen as a left-total and right-unique binary relation between "X" and "Y" by identifying it with its function graph. A surjective function with domain "X" and codomain "Y" is then a binary relation between "X" and "Y" that is right-unique and both left-total and right-total. The cardinality of the domain of a surjective function is greater than or equal to the cardinality of its codomain: If "f" : "X" → "Y" is a surjective function, then "X" has at least as many elements as "Y", in the sense of cardinal numbers. (The proof appeals to the axiom of choice to show that a function "g" : "Y" → "X" satisfying "f"("g"("y")) = "y" for all "y" in "Y" exists; such a "g" is injective.) Specifically, if both "X" and "Y" are finite with the same number of elements, then "f" is surjective if and only if "f" is injective. Given two sets "X" and "Y", the notation "X" ≤* "Y" is used to say that either "X" is empty or that there is a surjection from "Y" onto "X". Using the axiom of choice one can show that "X" ≤* "Y" and "Y" ≤* "X" together imply that |"Y"| = |"X"|, a variant of the Schröder–Bernstein theorem. The composition of surjective functions is always surjective: If "f" and "g" are both surjective, and the codomain of "g" is equal to the domain of "f", then "f" o "g" is surjective. Conversely, if "f" o "g" is surjective, then "f" is surjective (but "g", the function applied first, need not be). These properties generalize from surjections in the category of sets to any epimorphisms in any category. Any function can be decomposed into a surjection and an injection: For any function "h" : "X" → "Z" there exist a surjection "f" : "X" → "Y" and an injection "g" : "Y" → "Z" such that "h" = "g" o "f". To see this, define "Y" to be the set of preimages "h"−1("z") where "z" is in "h"("X"). These preimages are disjoint and partition "X". Then "f" carries each "x" to the element of "Y" which contains it, and "g" carries each element of "Y" to the point in "Z" to which "h" sends its points. Then "f" is surjective since it is a projection map, and "g" is injective by definition. Any function induces a surjection by restricting its codomain to its range.
Any surjective function induces a bijection defined on a quotient of its domain by collapsing all arguments mapping to a given fixed image. More precisely, every surjection can be factored as a projection followed by a bijection as follows. Let "A"/~ be the equivalence classes of "A" under the following equivalence relation: "x" ~ "y" if and only if "f"("x") = "f"("y"). Equivalently, "A"/~ is the set of all preimages under "f". Let "P"(~) : "A" → "A"/~ be the projection map which sends each "x" in "A" to its equivalence class ["x"]~, and let "f""P" : "A"/~ → "B" be the well-defined function given by "f""P"(["x"]~) = "f"("x"). Then "f" = "f""P" o "P"(~).
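The factorization "f" = "f""P" o "P"(~) can be carried out explicitly for finite functions. A Python sketch, modeling each equivalence class ["x"]~ as a frozenset of preimages (that representation is a choice made for this illustration):

```python
def factor(f, domain):
    """Factor f as a projection onto equivalence classes followed by a bijection."""
    fibers = {}                                   # value -> set of its preimages
    for x in domain:
        fibers.setdefault(f(x), set()).add(x)
    classes = {v: frozenset(s) for v, s in fibers.items()}
    P = {x: classes[f(x)] for x in domain}        # projection: x -> [x]~
    fP = {c: v for v, c in classes.items()}       # bijection: [x]~ -> f(x)
    return P, fP

P, fP = factor(abs, [-2, -1, 0, 1, 2])
assert all(fP[P[x]] == abs(x) for x in [-2, -1, 0, 1, 2])  # f = fP o P
assert len(set(fP.values())) == len(fP)                    # fP is injective on the classes
```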
https://en.wikipedia.org/wiki?curid=27873
Stephen Jay Gould Stephen Jay Gould (September 10, 1941 – May 20, 2002) was an American paleontologist, evolutionary biologist, and historian of science. He was also one of the most influential and widely read authors of popular science of his generation. Gould spent most of his career teaching at Harvard University and working at the American Museum of Natural History in New York. In 1996, Gould was hired as the Vincent Astor Visiting Research Professor of Biology at New York University, after which he divided his time between teaching there and at Harvard. Gould's most significant contribution to evolutionary biology was the theory of punctuated equilibrium, developed with Niles Eldredge in 1972. The theory proposes that most evolution is characterized by long periods of evolutionary stability, infrequently punctuated by rapid episodes of branching speciation. The theory was contrasted against phyletic gradualism, the popular idea that evolutionary change is marked by a pattern of smooth and continuous change in the fossil record. Most of Gould's empirical research was based on the land snail genera "Poecilozonites" and "Cerion". He also made important contributions to evolutionary developmental biology, receiving broad professional recognition for his book "Ontogeny and Phylogeny". In evolutionary theory he opposed strict selectionism, sociobiology as applied to humans, and evolutionary psychology. He campaigned against creationism and proposed that science and religion should be considered two distinct fields (or "non-overlapping magisteria") whose authorities do not overlap. Gould was known by the general public mainly for his 300 popular essays in "Natural History" magazine, and his numerous books written for both the specialist and non-specialist. In April 2000, the US Library of Congress named him a "Living Legend". Stephen Jay Gould was born in Queens, New York on September 10, 1941.
His father Leonard was a court stenographer and a World War II veteran in the United States Navy. His mother Eleanor was an artist, whose parents were Jewish immigrants living and working in the city's Garment District. Gould and his younger brother Peter were raised in Bayside, a middle-class neighborhood in the northeastern section of Queens. He attended elementary school and graduated from Jamaica High School. When Gould was five years old his father took him to the Hall of Dinosaurs in the American Museum of Natural History, where he first encountered "Tyrannosaurus rex". "I had no idea there were such things—I was awestruck," Gould once recalled. It was in that moment that he decided to become a paleontologist. Raised in a secular Jewish home, Gould did not formally practice religion and preferred to be called an agnostic. When asked directly if he was an agnostic in "Skeptic" magazine, he responded: Though he "had been brought up by a Marxist father" he stated that his father's politics were "very different" from his own. In describing his own political views, he has said they "tend to the left of center." According to Gould the most influential political books he read were C. Wright Mills' "The Power Elite" and the political writings of Noam Chomsky. While attending Antioch College in the early 1960s, Gould was active in the civil rights movement and often campaigned for social justice. When he attended the University of Leeds as a visiting undergraduate, he organized weekly demonstrations outside a Bradford dance hall which refused to admit black people. Gould continued these demonstrations until the policy was revoked. Throughout his career and writings, he spoke out against cultural oppression in all its forms, especially what he saw as the pseudoscience used in the service of racism and sexism. Interspersed throughout his scientific essays for "Natural History" magazine, Gould frequently referred to his nonscientific interests and pastimes. 
As a boy he collected baseball cards and remained an avid New York Yankees fan throughout his life. As an adult he was fond of science fiction movies, but often lamented their poor storytelling and presentation of science. His other interests included singing baritone in the Boston Cecilia, and he was a great aficionado of Gilbert and Sullivan operas. He collected rare antiquarian books, possessed an enthusiasm for architecture, and delighted in city walks. He often traveled to Europe, and spoke French, German, Russian, and Italian. He sometimes alluded ruefully to his tendency to put on weight. Gould married artist Deborah Lee on October 3, 1965. Gould met Lee while they were students together at Antioch College. They had two sons, Jesse and Ethan, and were married for 30 years. His second marriage in 1995 was to artist and sculptor Rhonda Roland Shearer. In July 1982 Gould was diagnosed with peritoneal mesothelioma, a deadly form of cancer affecting the abdominal lining (the peritoneum). This cancer is frequently found in people who have ingested or inhaled fibers of asbestos, a mineral that was used in the construction of Harvard's Museum of Comparative Zoology. After a difficult two-year recovery, Gould published a column for "Discover" magazine titled "The Median Isn't the Message," which discusses his reaction to discovering that "mesothelioma is incurable, with a median mortality of only eight months after discovery." In his essay he describes the actual significance behind this fact, and his relief upon recognizing that statistical averages are useful abstractions, and by themselves do not encompass "our actual world of variation, shadings, and continua." The median is the halfway point, which means that 50% of people will die before eight months. However, the other half may live significantly longer depending on the nature of the distribution. Gould needed to determine where his individual characteristics placed him within this range.
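Gould's statistical point, that a median of eight months says nothing about how far the right tail of the distribution extends, can be illustrated with a small simulation. The distribution and its parameters below are purely hypothetical stand-ins, not Gould's actual data; a log-normal is used only because it is a standard example of a right-skewed distribution:

```python
import random
import statistics

random.seed(42)

# Hypothetical right-skewed survival times (months). A log-normal with
# median e^2.08 ≈ 8 stands in for "median mortality of eight months";
# the parameters are illustrative only.
survival = [random.lognormvariate(2.08, 0.9) for _ in range(10_000)]

median = statistics.median(survival)
mean = statistics.mean(survival)
tail = sum(t > 3 * median for t in survival) / len(survival)

print(f"median ≈ {median:.1f} months")   # half die before this point
print(f"mean   ≈ {mean:.1f} months")     # pulled upward by the long tail
print(f"fraction surviving past 3x the median: {tail:.1%}")
```

Under these assumed parameters the mean noticeably exceeds the median, and a meaningful fraction of cases survives several times longer than the median, which is exactly the "variation, shadings, and continua" an average alone conceals.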
Given that his cancer was detected early, that he was young and optimistic, and that he had the best treatments available, he reasoned that he likely fell within the favorable tail of a right-skewed distribution. After an experimental treatment of radiation, chemotherapy, and surgery, Gould made a full recovery, and his column became a source of comfort for many cancer patients. Gould was also an advocate of medical cannabis. When undergoing his cancer treatments he smoked marijuana to help alleviate the long periods of intense and uncontrollable nausea. According to Gould, the drug had a "most important effect" on his eventual recovery. He later complained that he could not understand how "any humane person would withhold such a beneficial substance from people in such great need simply because others use it for different purposes." On August 5, 1998, Gould's testimony assisted in the successful lawsuit of HIV activist Jim Wakeford, who sued the Government of Canada for the right to cultivate, possess, and use marijuana for medical purposes. In February 2002, a lesion was found on Gould's chest radiograph, and oncologists diagnosed him with stage IV cancer. Gould died 10 weeks later on May 20, 2002, from a metastatic adenocarcinoma of the lung, an aggressive form of cancer which had already spread to his brain, liver, and spleen. This cancer was unrelated to his previous bout of abdominal cancer in 1982. He died in his home "in a bed set up in the library of his SoHo loft, surrounded by his wife Rhonda, his mother Eleanor, and the many books he loved." Gould began his higher education at Antioch College, graduating with a double major in geology and philosophy in 1963. During this time, he also studied at the University of Leeds in the United Kingdom. After completing graduate work at Columbia University in 1967 under the guidance of Norman Newell, he was immediately hired by Harvard University where he worked until the end of his life (1967–2002).
In 1973, Harvard promoted him to professor of geology and curator of invertebrate paleontology at the institution's Museum of Comparative Zoology. In 1982, Harvard awarded him the title of Alexander Agassiz Professor of Zoology. That same year, he received the Golden Plate Award of the American Academy of Achievement. In 1983, he was elected a fellow of the American Association for the Advancement of Science (AAAS), where he later served as president (1999–2001). The AAAS news release cited his "numerous contributions to both scientific progress and the public understanding of science." He also served as president of the Paleontological Society (1985–1986) and of the Society for the Study of Evolution (1990–1991). In 1989 Gould was elected to the National Academy of Sciences. From 1996 to 2002, Gould was Vincent Astor Visiting Research Professor of Biology at New York University. In 2008, he was posthumously awarded the Darwin-Wallace Medal, along with 12 other recipients. (Until 2008, this medal had been awarded every 50 years by the Linnean Society of London.) Early in his career, Gould and his colleague Niles Eldredge developed the theory of punctuated equilibrium, which describes speciation in the fossil record as occurring in relatively rapid bursts that alternate with long periods of evolutionary stability. It was Gould who coined the term "punctuated equilibria," though the theory was originally presented by Eldredge in his doctoral dissertation on Devonian trilobites and his article published the previous year on allopatric speciation. According to Gould, punctuated equilibrium revised a key pillar "in the central logic of Darwinian theory." Some evolutionary biologists have argued that while punctuated equilibrium was "of great interest to biology generally," it merely modified neo-Darwinism in a manner that was fully compatible with what had been known before.
Other biologists emphasize the theoretical novelty of punctuated equilibrium, and argue that evolutionary stasis had been "unexpected by most evolutionary biologists" and "had a major impact on paleontology and evolutionary biology." Comparisons were made to George Gaylord Simpson's work in "Tempo and Mode in Evolution" (1944). However, Simpson describes the paleontological record as being characterized by predominantly gradual change (which he termed horotely), though he also documented examples of slow (bradytely) and rapid (tachytely) rates of evolution. Punctuated equilibrium and phyletic gradualism are not mutually exclusive, and examples of each have been documented in different lineages. The debate between these two models is often misunderstood by non-scientists, and according to Richard Dawkins has been oversold by the media. Some critics jokingly referred to the theory of punctuated equilibrium as "evolution by jerks", which prompted Gould to describe phyletic gradualism as "evolution by creeps." Gould made significant contributions to evolutionary developmental biology, especially in his work "Ontogeny and Phylogeny". In this book he emphasized the process of heterochrony, which encompasses two distinct processes: neoteny and terminal addition. Neoteny is the process in which ontogeny is slowed down and the organism does not reach the end of its development. Terminal addition is the process by which an organism adds to its development by speeding and shortening earlier stages in the developmental process. Gould's influence in the field of evolutionary developmental biology continues to be seen today in areas such as the evolution of feathers. Gould was a champion of biological constraints, internal limitations upon developmental pathways, as well as other non-selectionist forces in evolution. Rather than direct adaptations, he considered many higher functions of the human brain to be the unintended side consequences of natural selection.
To describe such co-opted features, he coined the term exaptation with paleontologist Elisabeth Vrba. Gould believed this feature of human mentality undermines an essential premise of human sociobiology and evolutionary psychology. In 1975, Gould's Harvard colleague E. O. Wilson introduced his analysis of animal behavior (including human behavior) based on a sociobiological framework that suggested that many social behaviors have a strong evolutionary basis. In response, Gould, Richard Lewontin, and others from the Boston area wrote the subsequently well-referenced letter to "The New York Review of Books" entitled, "Against 'Sociobiology'". This open letter criticized Wilson's notion of a "deterministic view of human society and human action." But Gould did not rule out sociobiological explanations for many aspects of animal behavior, and later wrote: "Sociobiologists have broadened their range of selective stories by invoking concepts of inclusive fitness and kin selection to solve (successfully I think) the vexatious problem of altruism—previously the greatest stumbling block to a Darwinian theory of social behavior... Here sociobiology has had and will continue to have success. And here I wish it well. For it represents an extension of basic Darwinism to a realm where it should apply." With Richard Lewontin, Gould wrote an influential 1979 paper entitled, "The Spandrels of San Marco and the Panglossian Paradigm", which introduced the architectural term "spandrel" into evolutionary biology. In architecture, a spandrel is a triangular space which exists over the haunches of an arch. Spandrels—more often called pendentives in this context—are found particularly in classical architecture, especially Byzantine and Renaissance churches. When visiting Venice in 1978, Gould noted that the spandrels of the San Marco cathedral, while quite beautiful, were not spaces planned by the architect. 
Rather the spaces arise as "necessary architectural byproducts of mounting a dome on rounded arches." Gould and Lewontin thus defined "spandrels" in the evolutionary biology context to mean any biological feature of an organism that arises as a necessary side consequence of other features, which is not directly selected for by natural selection. Proposed examples include the "masculinized genitalia in female hyenas, exaptive use of an umbilicus as a brooding chamber by snails, the shoulder hump of the giant Irish deer, and several key features of human mentality." In Voltaire's "Candide", Dr. Pangloss is portrayed as a clueless scholar who, despite the evidence, insists that "all is for the best in this best of all possible worlds". Gould and Lewontin asserted that it is Panglossian for evolutionary biologists to view all traits as atomized things that had been naturally selected for, and criticised biologists for not granting theoretical space to other causes, such as phyletic and developmental constraints. The relative frequency of spandrels, so defined, versus adaptive features in nature, remains a controversial topic in evolutionary biology. An illustrative example of Gould's approach can be found in Elisabeth Lloyd's case study suggesting that the female orgasm is a by-product of shared developmental pathways. Gould also wrote on this topic in his essay "Male Nipples and Clitoral Ripples," prompted by Lloyd's earlier work. Gould was criticized by philosopher Dan Dennett for using the term spandrel instead of pendentive, a spandrel that curves across a right angle to support a dome. Robert Mark, a professor of civil engineering at Princeton, offered his expertise in the pages of "American Scientist", noting that these definitions are often misunderstood in architectural theory. Mark concluded, "Gould and Lewontin's misapplication of the term spandrel for pendentive perhaps implies a wider latitude of design choice than they intended for their analogy. 
But Dennett's critique of the architectural basis of the analogy goes even further astray because he slights the technical rationale of the architectural elements in question." Gould favored the argument that evolution has no inherent drive towards long-term "progress". Uncritical commentaries often portray evolution as a ladder of progress, leading towards bigger, faster, and smarter organisms, the assumption being that evolution is somehow driving organisms to get more complex and ultimately more like humankind. Gould argued that evolution's drive was not towards complexity, but towards diversification. Because life is constrained to begin with a simple starting point (like bacteria), any diversity resulting from this start, by random walk, will have a skewed distribution and therefore be perceived to move in the direction of higher complexity. But life, Gould argued, can also easily adapt towards simplification, as is often the case with parasites. In a review of "", Richard Dawkins approved of Gould's general argument, but suggested that he saw evidence of a "tendency for lineages to improve cumulatively their adaptive fit to their particular way of life, by increasing the numbers of features which combine together in adaptive complexes. ... By this definition, adaptive evolution is not just incidentally progressive, it is deeply, dyed-in-the-wool, indispensably progressive." Gould argued that the analogy between biological evolution and cultural evolution "obfuscates far more than it enlightens." Gould preferred to use the term "cultural change," which operates by Lamarckian modes of inheritance. Ideas learned in one generation are immediately transmitted throughout the population, thereby allowing cultural change to be legitimately progressive, directional, and forward driving.
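Gould's random-walk argument about complexity can be sketched with a minimal simulation. All of the numbers below are hypothetical: each lineage takes an unbiased walk in "complexity" with a reflecting wall at a minimal viable level (the "bacterial" lower bound), and the resulting distribution is right-skewed even though no individual step favors greater complexity:

```python
import random
import statistics

random.seed(1)
WALL = 1  # minimal viable complexity: the "bacterial" lower bound

def lineage(steps=1000):
    # Unbiased walk: complexity is equally likely to rise or fall each step,
    # but can never drop below the wall.
    c = WALL
    for _ in range(steps):
        c = max(WALL, c + random.choice((-1, 1)))
    return c

finals = [lineage() for _ in range(2000)]

print("median complexity:", statistics.median(finals))
print("mean complexity:  ", round(statistics.mean(finals), 1))
print("maximum reached:  ", max(finals))
# The bulk of lineages stays near the wall while the right tail, and hence
# the maximum, drifts upward: skew produced by the boundary alone.
```

The mean exceeds the median (the signature of right skew), and the maximum far exceeds both, so an observer tracking only the most complex lineage would perceive a trend toward complexity that no per-lineage bias actually produces.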
Biological evolution works by the slow and indirect mechanism of random variation and natural selection, and "includes no principle of predictable progress or movement to greater complexity." Gould never embraced cladistics as a method of investigating evolutionary lineages and process, possibly because he was concerned that such investigations would lead to neglect of the details in historical biology, which he considered all-important. In the early 1990s this led him into a debate with Derek Briggs, who had begun to apply quantitative cladistic techniques to the Burgess Shale fossils, about the methods to be used in interpreting these fossils. Around this time cladistics rapidly became the dominant method of classification in evolutionary biology. Inexpensive but increasingly powerful personal computers made it possible to process large quantities of data about organisms and their characteristics. Around the same time the development of effective polymerase chain reaction techniques made it possible to apply cladistic methods of analysis to biochemical and genetic features as well. Most of Gould's empirical research pertained to land snails. He focused his early work on the Bermudian genus "Poecilozonites", while his later work concentrated on the West Indian genus "Cerion". According to Gould ""Cerion" is the land snail of maximal diversity in form throughout the entire world. There are 600 described species of this single genus. In fact, they're not really species, they all interbreed, but the names exist to express a real phenomenon which is this incredible morphological diversity. Some are shaped like golf balls, some are shaped like pencils. ... Now my main subject is the evolution of form, and the problem of how it is that you can get this diversity amid so little genetic difference, so far as we can tell, is a very interesting one. And if we could solve this we'd learn something general about the evolution of form." 
Given "Cerion"'s extensive geographic diversity, Gould later lamented that if Christopher Columbus had only catalogued a single "Cerion" it would have ended the scholarly debate about which island Columbus had first set foot on in America. Gould is one of the most frequently cited scientists in the field of evolutionary theory. His 1979 "spandrels" paper has been cited more than 5,000 times. In "Paleobiology"—the flagship journal of his own speciality—only Charles Darwin and George Gaylord Simpson have been cited more often. Gould was also a highly respected historian of science. Historian Ronald Numbers has been quoted as saying: "I can't say much about Gould's strengths as a scientist, but for a long time I've regarded him as the second most influential historian of science (next to Thomas Kuhn)." In a survey conducted in 2013 and 2014 among international experts on intelligence, Gould was the lowest rated among the important intelligence researchers by all criteria (quality and correctness; innovativeness and development of new ideas; impact in contributions and importance of oeuvre). Shortly before his death, Gould published "The Structure of Evolutionary Theory" (2002), a long treatise recapitulating his version of modern evolutionary theory. In an interview for the Dutch TV series "Of Beauty and Consolation" Gould remarked, "In a couple of years I will be able to gather in one volume my view of how evolution works. It is to me a great consolation because it represents the putting together of a lifetime of thinking into one source. That book will never be particularly widely read. It's going to be far too long, and it's only for a few thousand professionals—very different from my popular science writings—but it is of greater consolation to me because it is a chance to put into one place a whole way of thinking about evolution that I've struggled with all my life."
Gould became widely known through his popular essays on evolution in the "Natural History" magazine. His essays were published in a series entitled "This View of Life" (a phrase from the concluding paragraph of Charles Darwin's "Origin of Species") from January 1974 to January 2001, amounting to a continuous publication of 300 essays. Many of his essays were reprinted in collected volumes that became bestselling books such as "Ever Since Darwin" and "The Panda's Thumb", "Hen's Teeth and Horse's Toes", and "The Flamingo's Smile". A passionate advocate of evolutionary theory, Gould wrote prolifically on the subject, trying to communicate his understanding of contemporary evolutionary biology to a wide audience. A recurring theme in his writings is the history and development of pre-evolutionary and evolutionary thought. He was also an enthusiastic baseball fan and sabermetrician (analyst of baseball statistics), and made frequent reference to the sport in his essays. Many of his baseball essays were anthologized in his posthumously published book "Triumph and Tragedy in Mudville" (2003). Although a self-described Darwinist, Gould's emphasis was less gradualist and reductionist than most neo-Darwinists. He fiercely opposed many aspects of sociobiology and its intellectual descendant evolutionary psychology. He devoted considerable time to fighting against creationism, creation science, and intelligent design. Most notably, Gould provided expert testimony against the equal-time creationism law in "McLean v. Arkansas". Gould later developed the term "non-overlapping magisteria" (NOMA) to describe how, in his view, science and religion should not comment on each other's realm. Gould went on to develop this idea in some detail, particularly in the books "Rocks of Ages" (1999) and "The Hedgehog, the Fox, and the Magister's Pox" (2003). 
In a 1982 essay for "Natural History" Gould wrote: An "anti-evolution petition" drafted by the Discovery Institute inspired the National Center for Science Education to create a pro-evolution counterpart called "Project Steve," which is named in Gould's honor. In 2011, the executive council of the Committee for Skeptical Inquiry (CSI) selected Gould for inclusion in CSI's "Pantheon of Skeptics", created to remember the legacy of deceased CSI fellows and their contributions to the cause of scientific skepticism. Gould also became a noted public face of science, often appearing on television. In 1984 Gould received his own "NOVA" special on PBS. Other appearances included interviews on CNN's "Crossfire" and "Talkback Live", NBC's "The Today Show", and regular appearances on PBS's "Charlie Rose" show. Gould was also a guest in all seven episodes of the Dutch talk series "A Glorious Accident", in which he appeared with his close friend Oliver Sacks. Gould was featured prominently as a guest in Ken Burns's PBS documentary "Baseball", as well as PBS's "Evolution" series. Gould was also on the Board of Advisers to the influential Children's Television Workshop television show "3-2-1 Contact", where he made frequent guest appearances. In 2001, the American Humanist Association named him the Humanist of the Year for his lifetime of work. Since 2013, Gould has been listed on the Advisory Council of the National Center for Science Education. In 1997 he voiced a cartoon version of himself on the television series "The Simpsons". In the episode "Lisa the Skeptic", Lisa finds a skeleton that many people believe is an apocalyptic angel. Lisa contacts Gould and asks him to test the skeleton's DNA. The fossil is discovered to be a marketing gimmick for a new mall. During production the only phrase Gould objected to was a line in the script that introduced him as the "world's most brilliant paleontologist".
In 2002 the show paid tribute to Gould after his death, dedicating the season 13 finale to his memory. Gould had died two days before the episode aired. Gould received many accolades for his scholarly work and popular expositions of natural history,
https://en.wikipedia.org/wiki?curid=27875
Sleipnir In Norse mythology, Sleipnir (Old Norse "slippy" or "the slipper") is an eight-legged horse ridden by Odin. Sleipnir is attested in the "Poetic Edda", compiled in the 13th century from earlier traditional sources, and the "Prose Edda", written in the 13th century by Snorri Sturluson. In both sources, Sleipnir is Odin's steed, is the child of Loki and Svaðilfari, is described as the best of all horses, and is sometimes ridden to the location of Hel. The "Prose Edda" contains extended information regarding the circumstances of Sleipnir's birth, and details that he is grey in color. Sleipnir is also mentioned in a riddle found in the 13th century legendary saga "Hervarar saga ok Heiðreks", in the 13th-century legendary saga "Völsunga saga" as the ancestor of the horse Grani, and book I of "Gesta Danorum", written in the 12th century by Saxo Grammaticus, contains an episode considered by many scholars to involve Sleipnir. Sleipnir is generally accepted as depicted on two 8th century Gotlandic image stones: the Tjängvide image stone and the Ardre VIII image stone. Scholarly theories have been proposed regarding Sleipnir's potential connection to shamanic practices among the Norse pagans. In modern times, Sleipnir appears in Icelandic folklore as the creator of Ásbyrgi, in works of art, literature, software, and in the names of ships. In the "Poetic Edda", Sleipnir appears or is mentioned in the poems "Grímnismál", "Sigrdrífumál", "Baldrs draumar", and "Hyndluljóð". In "Grímnismál", Grimnir (Odin in disguise and not yet having revealed his identity) tells the boy Agnar in verse that Sleipnir is the best of horses ("Odin is the best of the Æsir, Sleipnir of horses"). In "Sigrdrífumál", the valkyrie Sigrdrífa tells the hero Sigurðr that runes should be cut "on Sleipnir's teeth and on the sledge's strap-bands." In "Baldrs draumar", after the Æsir convene about the god Baldr's bad dreams, Odin places a saddle on Sleipnir and the two ride to the location of Hel. 
The "Völuspá hin skamma" section of "Hyndluljóð" says that Loki produced "the wolf" with Angrboða, produced Sleipnir with Svaðilfari, and thirdly "one monster that was thought the most baleful, who was descended from Býleistr's brother." In the "Prose Edda" book "Gylfaginning", Sleipnir is first mentioned in chapter 15 where the enthroned figure of High says that every day the Æsir ride across the bridge Bifröst, and provides a list of the Æsir's horses. The list begins with Sleipnir: "best is Sleipnir, he is Odin's, he has eight legs." In chapter 41, High quotes the "Grímnismál" stanza that mentions Sleipnir. In chapter 43, Sleipnir's origins are described. Gangleri (described earlier in the book as King Gylfi in disguise) asks High who the horse Sleipnir belongs to and what there is to tell about it. High expresses surprise at Gangleri's lack of knowledge about Sleipnir and its origin. High tells a story set "right at the beginning of the gods' settlement, when the gods established Midgard and built Val-Hall" about an unnamed builder who has offered to build a fortification for the gods in three seasons that will keep out invaders in exchange for the goddess Freyja, the sun, and the moon. After some debate, the gods agree to this, but place a number of restrictions on the builder, including that he must complete the work within three seasons with the help of no man. The builder makes a single request: that he may have help from his stallion Svaðilfari, and due to Loki's influence, this is allowed. The stallion Svaðilfari performs twice the deeds of strength as the builder, and hauls enormous rocks to the surprise of the gods. The builder, with Svaðilfari, makes fast progress on the wall, and three days before the deadline of summer, the builder is nearly at the entrance to the fortification. The gods convene and figure out who was responsible, resulting in a unanimous agreement that, as with most trouble, Loki was to blame.
The gods declare that Loki would deserve a horrible death if he could not find a scheme that would cause the builder to forfeit his payment, and threatened to attack him. Loki, afraid, swore oaths that he would devise a scheme to cause the builder to forfeit the payment, whatever it would cost himself. That night, the builder drove out to fetch stone with his stallion Svaðilfari, and out from a wood ran a mare. The mare neighed at Svaðilfari, and "realizing what kind of horse it was," Svaðilfari became frantic, neighed, tore apart his tackle, and ran towards the mare. The mare ran to the wood, Svaðilfari followed, and the builder chased after. The two horses ran around all night, causing the building work to be held up for the night, and the previous momentum of building work that the builder had been able to maintain was not continued. When the Æsir realize that the builder is a hrimthurs, they disregard their previous oaths with the builder, and call for Thor. Thor arrives, and kills the builder by smashing the builder's skull into shards with the hammer Mjöllnir. However, Loki had "such dealings" with Svaðilfari that "somewhat later" Loki gave birth to a grey foal with eight legs; the horse Sleipnir, "the best horse among gods and men." In chapter 49, High describes the death of the god Baldr. Hermóðr agrees to ride to Hel to offer a ransom for Baldr's return, and so "then Odin's horse Sleipnir was fetched and led forward." Hermóðr mounts Sleipnir and rides away. Hermóðr rides for nine nights in deep, dark valleys where Hermóðr can see nothing. The two arrive at the river Gjöll and then continue to Gjöll bridge, encountering a maiden guarding the bridge named Móðguðr. Some dialogue occurs between Hermóðr and Móðguðr, including that Móðguðr notes that recently there had ridden five battalions of dead men across the bridge that made less sound than he. Sleipnir and Hermóðr continue "downwards and northwards" on the road to Hel, until the two arrive at Hel's gates. 
Hermóðr dismounts from Sleipnir, tightens Sleipnir's girth, mounts him, and spurs Sleipnir on. Sleipnir "jumped so hard and over the gate that it came nowhere near." Hermóðr rides up to the hall, and dismounts from Sleipnir. After Hermóðr's pleas to Hel to return Baldr are accepted under a condition, Hermóðr and Baldr retrace their path backward and return to Asgard. In chapter 16 of the book "Skáldskaparmál", a kenning given for Loki is "relative of Sleipnir." In chapter 17, a story is provided in which Odin rides Sleipnir into the land of Jötunheimr and arrives at the residence of the jötunn Hrungnir. Hrungnir asks "what sort of person this was" wearing a golden helmet, "riding sky and sea," and says that the stranger "has a marvellously good horse." Odin wagers his head that no horse as good could be found in all of Jötunheimr. Hrungnir admits that it is a fine horse, yet states that he owns a much longer-paced horse: Gullfaxi. Incensed, Hrungnir leaps atop Gullfaxi, intending to attack Odin for Odin's boasting. Odin gallops hard ahead of Hrungnir, and, in his fury, Hrungnir finds himself having rushed into the gates of Asgard. In chapter 58, Sleipnir is mentioned among a list of horses in "Þorgrímsþula": "Hrafn and Sleipnir, splendid horses [...]". In addition, Sleipnir occurs twice in kennings for "ship" (once appearing in chapter 25 in a work by the skald Refr, and "sea-Sleipnir" appearing in chapter 49 in "Húsdrápa", a work by the 10th century skald Úlfr Uggason). In "Hervarar saga ok Heiðreks", the poem "Heiðreks gátur" contains a riddle that mentions Sleipnir and Odin: In chapter 13 of "Völsunga saga", the hero Sigurðr is on his way to a wood and he meets a long-bearded old man he had never seen before. Sigurd tells the old man that he is going to choose a horse, and asks the old man to come with him to help him decide. The old man says that they should drive the horses down to the river Busiltjörn.
The two drive the horses down into the deeps of Busiltjörn, and all of the horses swim back to land but a large, young, and handsome grey horse that no one had ever mounted. The grey-bearded old man says that the horse is from "Sleipnir's kin" and that "he must be raised carefully, because he will become better than any other horse." The old man vanishes. Sigurd names the horse Grani, and the narrative adds that the old man was none other than (the god) Odin. Sleipnir is generally considered as appearing in a sequence of events described in book I of "Gesta Danorum". In book I, the young Hadingus encounters "a certain man of great age who had lost an eye" who allies him with Liserus. Hadingus and Liserus set out to wage war on Lokerus, ruler of Kurland. Meeting defeat, the old man takes Hadingus with him onto his horse as they flee to the old man's house, and the two drink an invigorating draught. The old man sings a prophecy, and takes Hadingus back to where he found him on his horse. During the ride back, Hadingus trembles beneath the old man's mantle, and peers out of its holes. Hadingus realizes that he is flying through the air: "and he saw that before the steps of the horse lay the sea; but was told not to steal a glimpse of the forbidden thing, and therefore turned his amazed eyes from the dread spectacle of the roads that he journeyed." In book II, Biarco mentions Odin and Sleipnir: "If I may look on the awful husband of Frigg, howsoever he be covered in his white shield, and guide his tall steed, he shall in no way go safe out of Leire; it is lawful to lay low in war the war-waging god." Two of the 8th century picture stones from the island of Gotland, Sweden depict eight-legged horses, which are thought by most scholars to depict Sleipnir: the Tjängvide image stone and the Ardre VIII image stone. Both stones feature a rider sitting atop an eight-legged horse, which some scholars view as Odin. 
Above the rider on the Tjängvide image stone is a horizontal figure holding a spear, which may be a valkyrie, and a female figure greets the rider with a cup. The scene has been interpreted as a rider arriving at the world of the dead. The mid-7th century Eggja stone bearing the Odinic name "haras" (Old Norse 'army god') may be interpreted as depicting Sleipnir. John Lindow theorizes that Sleipnir's "connection to the world of the dead grants a special poignancy to one of the kennings in which Sleipnir turns up as a horse word," referring to the skald Úlfr Uggason's usage of "sea-Sleipnir" in his "Húsdrápa", which describes the funeral of Baldr. Lindow continues that "his use of Sleipnir in the kenning may show that Sleipnir's role in the failed recovery of Baldr was known at that time and place in Iceland; it certainly indicates that Sleipnir was an active participant in the mythology of the last decades of paganism." Lindow adds that the eight legs of Sleipnir "have been interpreted as an indication of great speed or as being connected in some unclear way with cult activity." Hilda Ellis Davidson says that "the eight-legged horse of Odin is the typical steed of the shaman" and that in the shaman's journeys to the heavens or the underworld, a shaman "is usually represented as riding on some bird or animal." Davidson says that while the creature may vary, the horse is fairly common "in the lands where horses are in general use, and Sleipnir's ability to bear the god through the air is typical of the shaman's steed" and cites an example from a study of shamanism by Mircea Eliade of an eight-legged foal from a story of a Buryat shaman. 
Davidson says that attempts have been made to connect Sleipnir with hobby horses and steeds with more than four feet that appear in carnivals and processions, but that "a more fruitful resemblance seems to be on the bier on which a dead man is carried in the funeral procession by four bearers; borne along thus, he may be described as riding on a steed with eight legs." As an example, Davidson cites a funeral dirge from the Gondi people in India as recorded by Verrier Elwin, stating that "it contains references to Bagri Maro, the horse with eight legs, and it is clear from the song that it is the dead man's bier." Davidson says that the song is sung when a distinguished Muria dies, and provides a verse: Davidson adds that the representation of Odin's steed as eight-legged could arise naturally out of such an image, and that "this is in accordance with the picture of Sleipnir as a horse that could bear its rider to the land of the dead." Ulla Loumand cites Sleipnir and the flying horse Hófvarpnir as "prime examples" of horses in Norse mythology that are able to "mediate between earth and sky, between Ásgarðr, Miðgarðr and Útgarðr and between the world of mortal men and the underworld." The "Encyclopedia of Indo-European Culture" theorizes that Sleipnir's eight legs may be the remnants of horse-associated divine twins found in Indo-European cultures and ultimately stemming from Proto-Indo-European religion. The encyclopedia states that "[...] Sleipnir is born with an extra set of legs, thus representing an original pair of horses. Like Freyr and Njörðr, Sleipnir is responsible for carrying the dead to the otherworld." The encyclopedia cites parallels between the birth of Sleipnir and myths originally pointing to a Celtic goddess who gave birth to the Divine horse twins. These elements include a demand for a goddess by an unwanted suitor (the hrimthurs demanding the goddess Freyja) and the seduction of builders.
According to Icelandic folklore, the horseshoe-shaped canyon Ásbyrgi located in Jökulsárgljúfur National Park, northern Iceland was formed by Sleipnir's hoof. Sleipnir is depicted with Odin on Dagfin Werenskiold's wooden relief "Odin på Sleipnir" (1945–1950) on the exterior of the Oslo City Hall in Oslo, Norway. Sleipnir has been and remains a popular name for ships in northern Europe, and Rudyard Kipling's short story entitled "Sleipnir, late Thurinda" (1888) features a horse named Sleipnir. A statue of Sleipnir (1998) stands in Wednesbury, England, a town which takes its name from the Anglo-Saxon version of Odin, Wōden.
Walter Scott Sir Walter Scott, 1st Baronet (15 August 1771 – 21 September 1832) was a Scottish historical novelist, poet, playwright, and historian. Many of his works remain classics of both English-language literature and of Scottish literature. Famous titles include "The Lady of the Lake" (narrative poem) and the novels "Waverley", "Old Mortality" (or "The Tale of Old Mortality"), "Rob Roy", "The Heart of Mid-Lothian", "The Bride of Lammermoor", and "Ivanhoe". Although primarily remembered for his extensive literary works and his political engagement, Scott was an advocate, judge and legal administrator by profession, and throughout his career combined his writing and editing work with his daily occupation as Clerk of Session and Sheriff-Depute of Selkirkshire. A prominent member of the Tory establishment in Edinburgh, Scott was an active member of the Highland Society, served a long term as President of the Royal Society of Edinburgh (1820–1832) and was a Vice President of the Society of Antiquaries of Scotland (1827–1829). Scott's knowledge of history, and his facility with literary technique, made him a seminal figure in the establishment of the historical novel genre, as well as an exemplar of European literary Romanticism. He was created a baronet "of Abbotsford in the County of Roxburgh," Scotland, in the Baronetage of the United Kingdom on 22 April 1820, which title became extinct on the death of his son the 2nd Baronet in 1847. Walter Scott was born on 15 August 1771, in a third-floor apartment on College Wynd in the Old Town of Edinburgh, a narrow alleyway leading from the Cowgate to the gates of the University of Edinburgh (Old College). 
He was the ninth child (six having died in infancy) of Walter Scott (1729–1799), a member of a cadet branch of the Clan Scott and a Writer to the Signet, by his wife Anne Rutherford, a sister of Daniel Rutherford and a descendant of both the Clan Swinton and the Haliburton family (the descent from which granted Walter's family the hereditary right of burial in Dryburgh Abbey). Walter was thus a cousin of the property developer James Burton (d.1837), born "Haliburton," and of his son the architect Decimus Burton. Walter subsequently became a member of the Clarence Club, of which the Burtons were also members. He survived a childhood bout of polio in 1773 that left him lame, a condition that was to have a significant effect on his life and writing. To cure his lameness he was sent in 1773 to live in the rural Scottish Borders at his paternal grandparents' farm at Sandyknowe, adjacent to the ruin of Smailholm Tower, the earlier family home. Here he was taught to read by his aunt Jenny Scott, and learned from her the speech patterns and many of the tales and legends that later characterised much of his work. In January 1775 he returned to Edinburgh, and that summer went with his aunt Jenny to take spa treatment at Bath in Somerset, Southern England, where they lived at 6 South Parade. In the winter of 1776 he went back to Sandyknowe, with another attempt at a water cure at Prestonpans during the following summer. In 1778 Scott returned to Edinburgh for private education to prepare him for school and joined his family in their new house, one of the first to be built in George Square. In October 1779 he began at the Royal High School in Edinburgh (in High School Yards). He was by then well able to walk and explore the city and the surrounding countryside. His reading included chivalric romances, poems, history and travel books. 
He was given private tuition by James Mitchell in arithmetic and writing, and learned from him the history of the Church of Scotland with emphasis on the Covenanters. In 1783 his parents, believing that he had outgrown his strength, sent him to stay for six months with his aunt Jenny at Kelso in the Scottish Borders: there he attended Kelso Grammar School, where he met James Ballantyne and his brother John, who later became his business partners and printers. Scott began studying classics at the University of Edinburgh in November 1783, at the age of 12, a year or so younger than most of his fellow students. In March 1786, aged 15, he began an apprenticeship in his father's office to become a Writer to the Signet. At school and university Scott had become a friend of Adam Ferguson, whose father Professor Adam Ferguson hosted literary salons. Scott met the blind poet Thomas Blacklock, who lent him books and introduced him to the Ossian cycle of poems by James Macpherson. During the winter of 1786–87 the 15-year-old Scott met the Scots poet Robert Burns at one of these salons, their only meeting. When Burns noticed a print illustrating the poem "The Justice of the Peace" and asked who had written it, Scott alone named the author as John Langhorne, and was thanked by Burns. Scott describes this event in his memoirs, where he whispers the answer to his friend Adam, who tells Burns; another version of the event is described in "Literary Beginnings". When it was decided that he would become a lawyer, he returned to the university to study law, first taking classes in moral philosophy (under Dugald Stewart) and universal history (under Alexander Fraser Tytler) in 1789–90. During this second spell at university Scott played a prominent role in student intellectual activities: he co-founded the Literary Society in 1789 and was elected to the Speculative Society the following year, subsequently becoming its librarian and secretary-treasurer.
After completing his studies in law, he became a lawyer in Edinburgh. As a lawyer's clerk he made his first visit to the Scottish Highlands directing an eviction. He was admitted to the Faculty of Advocates in 1792. He unsuccessfully courted Williamina Belsches of Fettercairn, who married Scott's friend Sir William Forbes, 7th Baronet. In February 1797, with the threat of a French invasion, Scott along with many of his friends joined the Royal Edinburgh Volunteer Light Dragoons, with which he served into the early 1800s, and was appointed quartermaster and secretary. The daily drill practices that year, starting at 5am, provide an indication of the determination with which this role was undertaken. Scott was prompted to embark on his literary career by the enthusiasm in Edinburgh during the 1790s for modern German literature. Recalling that period in 1827 Scott said that he 'was German-mad'. In 1796 he produced English versions of two poems by Gottfried August Bürger, "Der wilde Jäger" and "Lenore", publishing them as "The Chase and William and Helen". Scott responded to the contemporaneous German interest in national identity, folk culture, and medieval literature. This linked up with his own developing passion for traditional ballads. One of his favourite books since childhood had been Thomas Percy's "Reliques of Ancient English Poetry", and during the 1790s he engaged in research in manuscript collections and on Border 'raids' to collect ballads from oral performance. With the help of John Leyden he produced a two-volume collection "Minstrelsy of the Scottish Border" in 1802 containing 48 traditional ballads and two imitations apiece by Leyden and himself. Of the 48 traditional items, 26 were published for the first time. A greatly enlarged edition appeared in three volumes the following year. With many of the ballads Scott fused different versions to create more coherent texts, a practice he later repudiated.
The "Minstrelsy" was the first, and most important, of a series of editorial projects over the following two decades, including the medieval romance "Sir Tristrem" (which Scott wrongly assumed to have been produced by Thomas the Rhymer) in 1804, the works of John Dryden (18 vols, 1808), and the works of Jonathan Swift (19 vols, 1814). On a trip to the English Lake District with old college friends he met Charlotte Charpentier (Anglicised to "Carpenter"), a daughter of Jean Charpentier of Lyon in France, and a ward of Lord Downshire in Cumberland, an Anglican. After three weeks of courtship, Scott proposed and they were married on Christmas Eve 1797 in St Mary's Church, Carlisle (in the nave of Carlisle Cathedral). After renting a house in Edinburgh's George Street, they moved to nearby South Castle Street. They had five children, four of whom were still living at the time of Scott's death. His eldest son Sir Walter Scott, 2nd Baronet (1801–1847), inherited his father's estates and possessions: on 3 February 1825 he married Jane Jobson, only daughter of William Jobson of Lochore (died 1822) (by his wife Rachel Stuart (died 1863)), the heiress of Lochore and a niece of Lady Margaret Ferguson. In 1799 Scott was appointed Sheriff-Depute of the County of Selkirk, based in the Royal Burgh of Selkirk. In his early married days Scott had a decent living from his earnings as a lawyer, his salary as Sheriff-Depute, his wife's income, some revenue from his writing, and his share of his father's modest estate. After Walter Jr was born in 1801, the Scotts moved to a spacious three-storey house at 39 North Castle Street, which remained Scott's base in Edinburgh until 1826, when it was sold by the trustees appointed after his financial ruin. From 1798 Scott had spent the summers in a cottage at Lasswade, where he entertained guests including literary figures, and it was there that his career as an author began.
There were nominal residency requirements for his position of Sheriff-Depute, and at first he stayed at a local inn during the circuit. In 1804 he ended his use of the Lasswade cottage and leased the substantial house of Ashestiel, some miles from Selkirk, sited on the south bank of the River Tweed and incorporating an ancient tower house. At Scott's insistence the first edition of the "Minstrelsy" was printed by his friend James Ballantyne at Kelso. In 1798 James had published Scott's version of Goethe's "Erlkönig" in his newspaper "The Kelso Mail", and in 1799 he included it and the two Bürger translations in a small privately printed anthology, "Apology for Tales of Terror". In 1800 Scott suggested that Ballantyne set up business in Edinburgh and provided a loan for him to make the transition in 1802. In 1805 they became partners in the printing business, and from then until the financial crash of 1826 Scott's works were routinely printed by the firm. Between 1805 and 1817 Scott produced five long narrative poems, each in six cantos, four shorter independently published poems, and many small metrical pieces. Until Lord Byron published the first two cantos of "Childe Harold's Pilgrimage" in 1812 and followed them up with his exotic oriental verse narratives, Scott was by far the most popular poet of the time. "The Lay of the Last Minstrel" (1805), in medieval romance form, grew out of Scott's plan to include a long original poem of his own in the second edition of the "Minstrelsy": it would be 'a sort of Romance of Border Chivalry & inchantment'. He owed the distinctive irregular accentual four-beat metre to Coleridge's "Christabel", which he had heard recited by John Stoddart (it was not to be published until 1816).
Scott was able to draw on his unrivalled familiarity with Border history and legend acquired from oral and written sources beginning in his childhood to present an energetic and highly-coloured picture of sixteenth-century Scotland which both captivated the general public and, with its voluminous notes, also addressed itself to the antiquarian student. The poem has a strong moral theme, as human pride is placed in the context of the last judgment with the introduction of a version of the 'Dies irae' at the end. The work was an immediate success with almost all the reviewers and with readers in general, going through five editions in one year. The most celebrated lines are those which open the final stanza: Three years after "The Lay" Scott published "Marmion" (1808) telling a story of corrupt passions leading up to the disastrous climax of the Battle of Flodden in 1513. The main innovation involves the prefacing of each of the six cantos with an epistle from the author to a friend: William Stewart Rose, The Rev. John Marriot, William Erskine, James Skene, George Ellis, and Richard Heber: the epistles develop themes of moral positives and the special delights imparted by art. In an unprecedented move, the publisher Archibald Constable purchased the copyright of the poem for a thousand guineas at the beginning of 1807 when only the first epistle had been completed. Constable's faith was justified by the sales: the three editions published in 1808 sold 8,000 copies. The verse of "Marmion" is less striking than that of "The Lay", with the epistles in iambic tetrameters and the narrative in tetrameters with frequent trimeters. The reception by the reviewers was less favourable than that accorded "The Lay": style and plot were both found faulty, the epistles did not link up with the narrative, there was too much antiquarian pedantry, and Marmion's character was immoral. 
The most familiar lines in the poem sum up one of its main themes: 'O what a tangled web we weave,/ When first we practise to deceive!' Scott's meteoric poetic career reached its zenith with his third long narrative, "The Lady of the Lake" (1810), which sold no fewer than 20,000 copies in the first year. The reviewers were very largely favourable, finding that the defects they had noted in "Marmion" were largely absent from the new work. In some ways it is a more conventional poem than its predecessors: the narrative is entirely in iambic tetrameters, and the story of the transparently disguised James V (King of Scots 1513‒42) is predictable: Coleridge wrote to Wordsworth: 'The movement of the Poem […] is between a sleeping Canter and a Marketwoman's trot—but it is endless—I seem never to have made any way—I never remember a narrative poem in which I felt the sense of Progress so languid'. But the metrical uniformity is relieved by frequent songs, and the Perthshire Highland setting is presented as an enchanted landscape, which resulted in a phenomenal increase in the local tourist trade. Moreover the poem touches on a theme that was to be central to the Waverley Novels, the clash between neighbouring societies in different stages of development. The remaining two long narrative poems were "Rokeby" (1813), set in the Yorkshire estate of that name belonging to Scott's friend J. B. S. Morritt during the Civil War period, and "The Lord of the Isles" (1815), set in early fourteenth-century Scotland and culminating in the Battle of Bannockburn in 1314. Both works had generally favourable receptions and sold well but without rivalling the enormous success of "The Lady of the Lake". Scott also produced four minor narrative or semi-narrative poems between 1811 and 1817: "The Vision of Don Roderick" (1811); "The Bridal of Triermain" (published anonymously in 1813); "The Field of Waterloo" (1815); and "Harold the Dauntless" (published anonymously in 1817).
Throughout his creative life Scott was an active reviewer. Although himself a Tory he reviewed for "The Edinburgh Review" between 1803 and 1806, but that journal's advocacy of peace with Napoleon led him to cancel his subscription in 1808. The following year, at the height of his poetic career, he was instrumental in the establishment of a Tory rival, "The Quarterly Review", to which he contributed reviews for the rest of his life. In 1813 Scott was offered the position of Poet Laureate. He declined, due to concerns that "such an appointment would be a poisoned chalice," as the Laureateship had fallen into disrepute, due to the decline in quality of work suffered by previous title holders, "as a succession of poetasters had churned out conventional and obsequious odes on royal occasions." He sought advice from the 4th Duke of Buccleuch, who counseled him to retain his literary independence, and the position went to Scott's friend, Robert Southey. The beginning of Scott's career as a novelist is attended with uncertainty. It is thought most likely that he began a narrative with an English setting in 1808 and laid it aside. The success of his Highland narrative poem "The Lady of the Lake" in 1810 seems to have put it into his head to resume the narrative and have his hero Edward Waverley journey to Scotland. Although "Waverley" was announced for publication at that stage, it was again laid aside and not resumed until late 1813, being completed for publication in 1814. Only a thousand copies were printed, but the work was an immediate success and 3,000 more copies were produced in two further editions the same year. "Waverley" turned out to be the first of 27 novels (eight of them published in pairs), and by the time the sixth of them, "Rob Roy", was published the print run for the first edition had been increased to 10,000 copies, thereafter the norm.
Given Scott's established status as a poet, and the tentative nature of "Waverley" 's coming into being, it is not surprising that he followed a common practice at the period and published the work anonymously. Until his financial ruin in 1826 he continued this practice, and the novels mostly appeared as 'By the Author of Waverley' (or variants thereof) or as "Tales of My Landlord". It is not clear why he chose to do this (no fewer than eleven reasons have been suggested), especially since it was a fairly open secret, but as he himself said, with Shylock, 'such was my humour'. Scott was an almost exclusively historical novelist. Of his 27 novels only one ("Saint Ronan's Well") has an entirely modern setting. The dates of the action in the others range from 1794 in "The Antiquary" back to 1096 or 1097, the time of the First Crusade, in "Count Robert of Paris". Sixteen take place in Scotland. The first nine, from "Waverley" (1814) to "A Legend of Montrose" (1819), all have Scottish locations, and 17th- or 18th-century settings. Scott was better versed in his material than anyone: he was able to draw on oral tradition as well as a wide range of written sources in his ever-expanding library (many of them rare, and some of them unique copies). 
In general it is these pre-1820 novels that have attracted the attention of modern academic critics—especially: "Waverley" with its presentation of those 1745 Jacobites drawn from the Highland clans as obsolete and fanatical idealists; "Old Mortality" (1816) with its treatment of the 1679 Covenanters as fanatical and in many cases ridiculous (which prompted John Galt to produce a contrasting picture in his novel "Ringan Gilhaize" in 1823); "The Heart of Mid-Lothian" (1818) with its low-born heroine Jeanie Deans who makes a perilous journey to Windsor in 1737 to secure a promise of a royal pardon for her sister, falsely accused of infanticide; and the tragic "The Bride of Lammermoor" (1819), with its stern representative of a declined aristocratic family Edgar Ravenswood and his fiancée as the victims of the wife of an upstart lawyer in a time of political power-struggling preceding the Act of Union in 1707. In 1820, in a bold move, Scott shifted both period and location for "Ivanhoe" (1820) to 12th-century England. This meant that he was dependent on a limited range of sources, all of them printed: he had to bring together material from different centuries and also invent an artificial form of speech based on Elizabethan and Jacobean drama. The result is as much myth as history, but the novel remains his best-known work, the most likely to be encountered by the general reader. Eight of the subsequent seventeen novels also have medieval settings, though most of them are set towards the end of the period, for which Scott had a better supply of contemporaneous sources. His familiarity with Elizabethan and 17th-century English literature, partly resulting from his editorial work on pamphlets and other minor publications, meant that four of his works set in the England of that period—"Kenilworth" (1821), "The Fortunes of Nigel" and "Peveril of the Peak" (1821), and "Woodstock" (1826)—are able to present rich pictures of their societies. 
The most generally esteemed of Scott's later fictional creations, though, are three short stories: a supernatural narrative in Scots, 'Wandering Willie's Tale' in "Redgauntlet" (1824), and 'The Highland Widow' and 'The Two Drovers' in "Chronicles of the Canongate" (1827). As with any major writer there is no end to the complexity, subtlety, and contestability of Scott's work, but certain central linked themes can be observed recurring in most of his novels. Crucial to Scott's historical thinking is the concept that very different societies can be observed moving through the same stages as they develop, and also that humanity is basically unchanging, or as he puts it in the first chapter of "Waverley" that there are 'passions common to men in all stages of society, and which have alike agitated the human heart, whether it throbbed under the steel corslet of the fifteenth century, the brocaded coat of the eighteenth, or the blue frock and white dimity waistcoat of the present day'. It was one of Scott's main achievements to give lively and detailed pictures of different stages of Scottish, British, and European society while making it clear that for all the differences in the forms they took the human passions were the same as those of his own age. His readers could therefore appreciate the depiction of an unfamiliar society while having no difficulty in relating to the characters. Scott is fascinated by striking moments of transition between stages in societies. In a discussion of his early novels Coleridge observed that they derive their 'long-sustained "interest"' from 'the contest between the two great moving Principles of social Humanity—religious adherence to the Past and the Ancient, the Desire & the admiration of Permanence, on the one hand; and the Passion for increase of Knowledge, for Truth as the offspring of Reason, in short, the mighty Instincts of "Progression" and "Free-agency", on the other'.
This is evident, for example, in "Waverley" as the hero is captivated by the romantic allure of the Jacobite cause embodied in Bonnie Prince Charlie and his followers before accepting that the time for such enthusiasms has gone and accepting the more rational, if humdrum, reality of Hanoverian Britain. Another example can be found in 15th-century Europe in the yielding of the old chivalric worldview of Charles Duke of Burgundy to the Machiavellian pragmatism of Louis XI. Scott is intrigued by the way that different stages of societal development can exist side by side in one country. When Waverley has his first experience of Highland ways after a raid on his Lowland host's cattle it 'seemed like a dream […] that these deeds of violence should be familiar to men's minds, and currently talked of, as falling within the common order of things, and happening daily in the immediate neighbourhood, without his having crossed the seas, and while he was yet in the otherwise well-ordered island of Great Britain'. A more complex version of this situation can be found in Scott's second novel, "Guy Mannering" (1815), which, 'set in 1781‒2, offers no simple opposition: the Scotland represented in the novel is at once backward and advanced, traditional and modern—it is a country in varied stages of progression in which there are many social subsets, each with its own laws and customs.' Scott's process of composition can be traced through the manuscripts (which have mostly been preserved), the more fragmentary sets of proofs, his correspondence, and publisher's records. He did not create detailed plans for his stories, and the remarks by the figure of 'the Author' in the Introductory Epistle to "The Fortunes of Nigel" probably reflect his own experience: 'I think there is a dæmon who seats himself on the feather of my pen when I begin to write, and leads it astray from the purpose.
Characters expand under my hand; incidents are multiplied; the story lingers, while the materials increase—my regular mansion turns out a Gothic anomaly, and the work is complete long before I have attained the point I proposed'. Nevertheless, the manuscripts rarely show major deletions or changes of direction, and it is clear that Scott was able to keep control of his narrative. That was important, because as soon as he had made fair progress with a novel he would start sending batches of manuscript to be copied (to preserve his anonymity), and the copies were sent to be set up in type (as usual at the time the compositors would supply the punctuation). He received proofs, also in batches, and made many changes at that stage, but almost always these were local corrections and enhancements. As the number of novels accumulated they were from time to time republished in small collections: "Novels and Tales" (1819: "Waverley" to "A Legend of Montrose"); "Historical Romances" (1822: "Ivanhoe" to "Kenilworth"); "Novels and Romances" (1824 [1823]: "The Pirate" to "Quentin Durward"); and two series of "Tales and Romances" (1827: "St Ronan's Well" to "Woodstock"; 1833: "Chronicles of the Canongate" to "Castle Dangerous"). In the last years of his life Scott marked up interleaved copies of these collected editions to produce a final version of what were now officially called the "Waverley Novels": this is often referred to as the 'Magnum Opus' or 'Magnum Edition'. Scott provided each novel with an introduction and notes, and he made mostly small and piecemeal adjustments to the text. Issued in 48 well-produced monthly volumes between June 1829 and May 1833 at the modest price of five shillings (25p) these were an innovative, and highly profitable, marketing enterprise aimed at a wide readership: the print run was an astonishing 30,000.
In his 'General Preface' to the 'Magnum Edition' Scott wrote that one factor prompting him to resume work on the manuscript of "Waverley" in 1813 had been a desire to do for Scotland what had been achieved in the fiction of Maria Edgeworth 'whose Irish characters have gone so far to make the English familiar with the character of their gay and kind-hearted neighbours of Ireland, that she may be truly said to have done more towards completing the Union, than perhaps all the legislative enactments by which it has been followed up [the Act of Union of 1801]'. Most of Scott's readers were English: with "Quentin Durward" (1823) and "Woodstock" (1826), for example, some 8,000 of the 10,000 copies of the first edition went to London. In the Scottish novels the lower-class characters normally speak Scots, but Scott is careful not to make the Scots too dense, so that those unfamiliar with the language can follow the gist without understanding every word. Some have also argued that, although Scott was formally a supporter of the Union with England (and Ireland), his novels have a strong nationalist subtext for readers attuned to the appropriate wavelength. Scott's embarkation on his new career as a novelist in 1814 did not mean that he abandoned poetry. The Waverley Novels contain much original verse, including familiar songs such as 'Proud Maisie' from "The Heart of Mid-Lothian" (Ch. 41) and 'Look not thou on Beauty's charming' from "The Bride of Lammermoor" (Ch. 3). In most of the novels Scott preceded each chapter with an epigram or 'motto': most of these are in verse, and many are of his own composition, often imitating other writers such as Beaumont and Fletcher. Prompted by Scott, the Prince Regent (the future George IV) gave Scott and other officials permission in a Royal Warrant dated 28 October 1817 to conduct a search for the Crown Jewels ("Honours of Scotland").
During the years of the Protectorate under Cromwell the Crown Jewels had been hidden away, but had subsequently been used to crown Charles II. They were not used to crown subsequent monarchs, but were regularly taken to sittings of Parliament, to represent the absent monarch, until the Act of Union 1707. Thereafter, the honours were stored in Edinburgh Castle, but the large locked box in which they were stored was not opened for more than 100 years, and stories circulated that they had been "lost" or removed. On 4 February 1818, Scott and a small team of military men opened the box, and "unearthed" the honours from the Crown Room of Edinburgh Castle. On 19 August 1818, through Scott's efforts, his friend Adam Ferguson was appointed Deputy Keeper of the "Scottish Regalia." The Scottish patronage system swung into action and after elaborate negotiations the Prince Regent granted Scott the title of baronet: in April 1820 he received the baronetcy in London, becoming Sir Walter Scott, 1st Baronet. After George's accession to the throne, the city council of Edinburgh invited Scott, at the sovereign's behest, to stage-manage the 1822 visit of King George IV to Scotland. With only three weeks for planning and execution, Scott created a spectacular and comprehensive pageant, designed not only to impress the King, but also in some way to heal the rifts that had destabilised Scots society. He used the event to contribute to the drawing of a line under an old world that pitched his homeland into regular bouts of bloody strife. Probably fortified by his vivid depiction of the pageant staged for the reception of Queen Elizabeth in "Kenilworth" he, along with his "production team," mounted what in modern days could be termed a PR event, in which the King was dressed in tartan, and was greeted by his people, many of whom were also dressed in similar tartan ceremonial dress.
This form of dress, proscribed after the 1745 rebellion against the English, became one of the seminal, potent and ubiquitous symbols of Scottish identity. In 1825, a UK-wide banking crisis resulted in the collapse of the Ballantyne printing business, of which Scott was the only partner with a financial interest; the company's debts of £130,000 caused his very public ruin. Rather than declare himself bankrupt or accept any kind of financial support from his many supporters and admirers (including the king himself), he placed his house and income in a trust belonging to his creditors and determined to write his way out of debt. To add to his burdens, his wife Charlotte died in 1826. Whether despite these events or because of them, Scott kept up his prodigious output. Between 1826 and 1832 he produced six novels, two short stories and two plays, eleven works or volumes of non-fiction, and a journal, in addition to several unfinished works. The non-fiction works included the "Life of Napoleon Buonaparte" in 1827, two volumes of the "History of Scotland" in 1829 and 1830, and four instalments of the series entitled "Tales of a Grandfather – Being Stories Taken From Scottish History", written one per year over the period 1828–1831, among several others. Finally, Scott had recently been inspired by the diaries of Samuel Pepys and Lord Byron, and he began keeping a journal over the period, which, however, would not be published until 1890, as "The Journal of Sir Walter Scott". By then Scott's health was failing, and on 29 October 1831, in a vain search for improvement, he set off on a voyage to Malta and Naples on board HMS "Barham", a frigate put at his disposal by the Admiralty. He was welcomed and celebrated wherever he went, but on his journey home he suffered a final stroke and was transported back to die at Abbotsford on 21 September 1832. Scott was buried in Dryburgh Abbey, where his wife had earlier been interred. 
Lady Scott had been buried as an Episcopalian; at Scott's own funeral three ministers of the Church of Scotland officiated at Abbotsford and the service at Dryburgh was conducted by an Episcopal clergyman. Although Scott died owing money, his novels continued to sell, and the debts encumbering his estate were discharged shortly after his death. Scott was raised as a Presbyterian in the Church of Scotland. He was ordained as an elder in Duddingston Kirk in 1806, and sat in the General Assembly for a time as representative elder of the burgh of Selkirk. In adult life he also adhered to the Scottish Episcopal Church: he seldom attended church but read the Book of Common Prayer services in family worship. Scott's father was a Freemason, being a member of Lodge St David, No.36 (Edinburgh), and Scott also became a Freemason in his father's Lodge in 1801, albeit only after the death of his father. As a result of his early polio infection, Scott had a pronounced limp. He was described in 1820 as 'tall, well formed (except for one ankle and foot which made him walk lamely), neither fat nor thin, with forehead very high, nose short, upper lip long and face rather fleshy, complexion fresh and clear, eyes very blue, shrewd and penetrating, with hair now silvery white'. Although a determined walker, on horseback he experienced greater freedom of movement. When Scott was a boy, he sometimes travelled with his father from Selkirk to Melrose, where some of his novels are set. At a certain spot, the old gentleman would stop the carriage and take his son to a stone on the site of the Battle of Melrose (1526). During the summers from 1804, Scott made his home at the large house of Ashestiel, on the south bank of the River Tweed, north of Selkirk. When his lease on this property expired in 1811, he bought Cartley Hole Farm, downstream on the Tweed nearer Melrose. 
The farm had the nickname of "Clarty Hole", and Scott renamed it "Abbotsford" after a neighbouring ford used by the monks of Melrose Abbey. Following a modest enlargement of the original farmhouse in 1811–12, massive expansions took place in 1816–19 and 1822–24. Scott described the resulting building as 'a sort of romance in Architecture' and 'a kind of Conundrum Castle to be sure'. With his architects William Atkinson and Edward Blore, Scott was a pioneer of the Scottish Baronial style of architecture, and Abbotsford is festooned with turrets and stepped gabling. Through windows enriched with the insignia of heraldry the sun shone on suits of armour, trophies of the chase, a library of more than 9,000 volumes, fine furniture, and still finer pictures. Panelling of oak and cedar and carved ceilings relieved by coats of arms in their correct colours added to the beauty of the house. It is estimated that the building cost Scott more than £25,000. More land was purchased until Scott's estate had grown considerably. In 1817, as part of the land purchases, Scott bought the nearby mansion-house of Toftfield for his friend Adam Ferguson and his brothers and sisters to live in; at the ladies' request, he bestowed on it the name of Huntlyburn. Ferguson commissioned Sir David Wilkie to paint the Scott family, resulting in the painting "The Abbotsford Family", in which Scott is seated with his family represented as a group of country folk. Ferguson is standing to the right with the feather in his cap, and Thomas Scott, Scott's uncle, is behind. The painting was exhibited at the Royal Academy in 1818. Abbotsford later gave its name to the Abbotsford Club, founded in 1834 in memory of Sir Walter Scott. Although he continued to be extremely popular and widely read, both at home and abroad, Scott's critical reputation declined in the last half of the 19th century as serious writers turned from romanticism to realism, and Scott began to be regarded as an author suitable for children. 
This trend accelerated in the 20th century. For example, in his classic study "Aspects of the Novel" (1927), E. M. Forster harshly criticized Scott's clumsy and slapdash writing style, "flat" characters, and thin plots. In contrast, the novels of Scott's contemporary Jane Austen, once appreciated only by the discerning few (including, as it happened, Scott himself), rose steadily in critical esteem, though Austen, as a female writer, was still faulted for her narrow ("feminine") choice of subject matter, which, unlike Scott's, avoided the grand historical themes traditionally viewed as masculine. Nevertheless, Scott's importance as an innovator continued to be recognized. He was acclaimed as the inventor of the genre of the modern historical novel (which others trace to Jane Porter, whose work in the genre predates Scott's) and the inspiration for enormous numbers of imitators and genre writers both in Britain and on the European continent. In the cultural sphere, Scott's Waverley novels played a significant part in the movement (begun with James Macpherson's "Ossian" cycle) to rehabilitate the public perception of the Scottish Highlands and their culture, which had formerly been viewed by the southern mind as a barbaric breeding ground of hill bandits, religious fanaticism, and Jacobite rebellions. Scott served as chairman of the Royal Society of Edinburgh and was also a member of the Royal Celtic Society. His own contribution to the reinvention of Scottish culture was enormous, even though his re-creations of the customs of the Highlands were fanciful at times. Through the medium of Scott's novels, the violent religious and political conflicts of the country's recent past could be seen as belonging to history—which Scott defined, as the subtitle of "Waverley" ("'Tis Sixty Years Since") indicates, as something that happened at least 60 years ago. 
His advocacy of objectivity and moderation and his strong repudiation of political violence on either side also had a strong, though unspoken, contemporary resonance in an era when many conservative English speakers lived in mortal fear of a revolution in the French style on British soil. Scott's orchestration of King George IV's visit to Scotland in 1822 was a pivotal event, intended to inspire a view of his home country that accentuated the positive aspects of the past while allowing the age of quasi-medieval blood-letting to be put to rest and envisioning a more useful, peaceful future. After Scott's work had been essentially unstudied for many decades, a revival of critical interest began in the middle of the 20th century. While F. R. Leavis had disdained Scott, seeing him as a thoroughly bad novelist and a thoroughly bad influence ("The Great Tradition" [1948]), György Lukács ("The Historical Novel" [1937, trans. 1962]) and David Daiches ("Scott's Achievement as a Novelist" [1951]) offered a Marxian political reading of Scott's fiction that generated a great deal of genuine interest in his work. These were followed in 1966 by a major thematic analysis covering most of the novels by Francis R. Hart ("Scott's Novels: The Plotting of Historic Survival"). Scott has proved particularly responsive to postmodern approaches, most notably to the concept of the interplay of multiple voices highlighted by Mikhail Bakhtin, as suggested by the title of the volume of selected papers from the Fourth International Scott Conference held in Edinburgh in 1991, "Scott in Carnival". Scott is now increasingly recognised not only as the principal inventor of the historical novel and a key figure in the development of Scottish and world literature, but also as a writer of depth and subtlety who challenges his readers as well as entertaining them. 
During his lifetime, Scott's portrait was painted by Sir Edwin Landseer and fellow Scots Sir Henry Raeburn and James Eckford Lauder. In Edinburgh, the 61.1-metre-tall Victorian Gothic spire of the Scott Monument was designed by George Meikle Kemp. It was completed in 1844, 12 years after Scott's death, and dominates the south side of Princes Street. Scott is also commemorated on a stone slab in Makars' Court, outside The Writers' Museum, Lawnmarket, Edinburgh, along with other prominent Scottish writers; quotes from his work are also visible on the Canongate Wall of the Scottish Parliament building in Holyrood. There is a tower dedicated to his memory on Corstorphine Hill in the west of the city, and Edinburgh's Waverley railway station, opened in 1854, takes its name from his first novel. In Glasgow, Walter Scott's Monument dominates the centre of George Square, the main public square in the city. Designed by David Rhind in 1838, the monument features a large column topped by a statue of Scott. There is a statue of Scott in New York City's Central Park. Numerous Masonic Lodges have been named after Scott and his novels, for example Lodge Sir Walter Scott, No. 859 (Perth, Australia) and Lodge Waverley, No. 597 (Edinburgh, Scotland). The annual Walter Scott Prize for Historical Fiction was created in 2010 by the Duke and Duchess of Buccleuch, whose ancestors were closely linked to Sir Walter Scott. At £25,000, it is one of the largest prizes in British literature. The award has been presented at Scott's historic home, Abbotsford House. Scott has been credited with rescuing the Scottish banknote. In 1826, there was outrage in Scotland at the attempt of Parliament to prevent the production of banknotes of less than five pounds. Scott wrote a series of letters to the "Edinburgh Weekly Journal" under the pseudonym "Malachi Malagrowther", arguing for the retention of the right of Scottish banks to issue their own banknotes. 
This provoked such a response that the Government was forced to relent and allow the Scottish banks to continue printing pound notes. This campaign is commemorated by his continued appearance on the front of all notes issued by the Bank of Scotland. The image on the 2007 series of banknotes is based on the portrait by Henry Raeburn. During and immediately after World War I there was a movement, spearheaded by President Wilson and other eminent people, to inculcate patriotism in American school children, especially immigrants, and to stress the American connection with the literature and institutions of the "mother country" of Great Britain, using selected readings in middle school textbooks. Scott's "Ivanhoe" continued to be required reading for many American high school students until the end of the 1950s. A bust of Scott is in the Hall of Heroes of the National Wallace Monument in Stirling. Twelve streets in Vancouver, British Columbia, are named after Scott's books or characters. Letitia Elizabeth Landon was a great admirer of Scott and, on his death, wrote two tributes to him: "On Walter Scott" in the Literary Gazette, and "Sir Walter Scott" in Fisher's Drawing Room Scrap Book, 1833. Towards the end of her life she began a series called "The Female Picture Gallery", comprising character analyses based on the women in Scott's works. In Charles Baudelaire's "La Fanfarlo" (1847), the poet Samuel Cramer speaks dismissively of Scott; in the novella, however, Cramer proves as deluded a romantic as any hero in one of Scott's novels. In Anne Brontë's "The Tenant of Wildfell Hall" (1848) the narrator, Gilbert Markham, brings an elegantly bound copy of "Marmion" as a present to the independent "tenant of Wildfell Hall" (Helen Graham) whom he is courting, and is mortified when she insists on paying for it. 
In a speech delivered at Salem, Massachusetts, on 6 January 1860, to raise money for the families of the executed abolitionist John Brown and his followers, Ralph Waldo Emerson calls Brown an example of true chivalry, which consists not in noble birth but in helping the weak and defenseless and declares that "Walter Scott would have delighted to draw his picture and trace his adventurous career." In his 1870 memoir, "Army Life in a Black Regiment", New England abolitionist Thomas Wentworth Higginson (later editor of Emily Dickinson), described how he wrote down and preserved Negro spirituals or "shouts" while serving as a colonel in the First South Carolina Volunteers, the first authorized Union Army regiment recruited from freedmen during the Civil War. He wrote that he was "a faithful student of the Scottish ballads, and had always envied Sir Walter the delight of tracing them out amid their own heather, and of writing them down piecemeal from the lips of aged crones." According to his daughter Eleanor, Scott was "an author to whom Karl Marx again and again returned, whom he admired and knew as well as he did Balzac and Fielding." In his 1883 "Life on the Mississippi", Mark Twain satirized the impact of Scott's writings, declaring (with humorous hyperbole) that Scott "had so large a hand in making Southern character, as it existed before the [American Civil] war," that he is "in great measure responsible for the war." He goes on to coin the term "Sir Walter Scott disease," which he blames for the South's lack of advancement. Twain also targeted Scott in "Adventures of Huckleberry Finn", where he names a sinking boat the "Walter Scott" (1884); and, in "A Connecticut Yankee in King Arthur's Court" (1889), the main character repeatedly utters "great Scott" as an oath; by the end of the book, however, he has become absorbed in the world of knights in armor, reflecting Twain's ambivalence on the topic. 
The idyllic Cape Cod retreat of suffragists Verena Tarrant and Olive Chancellor in Henry James' "The Bostonians" (1886) is called Marmion, evoking what James considered the Quixotic idealism of these social reformers. In "To the Lighthouse" by Virginia Woolf, Mrs. Ramsay glances at her husband as he reads one of the Waverley novels. In 1951, science-fiction author Isaac Asimov wrote "Breeds There a Man...?", a short story with a title alluding vividly to Scott's "The Lay of the Last Minstrel" (1805). In "To Kill a Mockingbird" (1960), the protagonist's brother is made to read Walter Scott's book "Ivanhoe" to the ailing Mrs. Henry Lafayette Dubose. In "Mother Night" (1961) by Kurt Vonnegut Jr., memoirist and playwright Howard W. Campbell Jr. prefaces his text with the six lines beginning "Breathes there the man..." In "Knights of the Sea" (2010) by Canadian author Paul Marlowe, there are several quotes from and references to "Marmion", as well as an inn named after "Ivanhoe", and a fictitious Scott novel entitled "The Beastmen of Glen Glammoch". Although Scott's own appreciation of music was basic, to say the least, he had a considerable influence on composers. Some ninety operas based to a greater or lesser extent on his poems and novels have been traced, the most celebrated being Rossini's "La donna del lago" (1819) and Donizetti's "Lucia di Lammermoor" (1835). Many of his songs were set to music by composers throughout the nineteenth century. Seven songs from "The Lady of the Lake" were set, in German translations, by Schubert, one of them being 'Ellens dritter Gesang', popularly known as 'Schubert's "Ave Maria"', and three lyrics, also in translation, by Beethoven in his "Twenty-Five Scottish Songs", Op. 108. Other notable musical responses include three overtures: "Waverley" (1828) and "Rob Roy" (1831) by Berlioz, and "The Land of the Mountain and the Flood" (1887, alluding to "The Lay of the Last Minstrel") by Hamish MacCunn. 
The Waverley Novels are full of eminently paintable scenes, and many nineteenth-century artists responded to the stimulus. Among the outstanding examples of paintings of Scott subjects are Richard Parkes Bonington's "Amy Robsart and the Earl of Leicester" ("c." 1827) from "Kenilworth" in the Ashmolean Museum, Oxford; Delacroix's "L'Enlèvement de Rebecca" (1846) from "Ivanhoe" in the Metropolitan Museum of Art, New York; and Millais's "The Bride of Lammermoor" (1878) in Bristol Museum and Art Gallery. The Waverley Novels is the title given to the long series of Scott novels released from 1814 to 1832, which takes its name from the first novel, "Waverley". Many of the short poems or songs released by Scott (or later anthologized) were originally not separate pieces but parts of longer poems interspersed throughout his novels, tales, and dramas.
https://en.wikipedia.org/wiki?curid=27884
Suffolk Suffolk is an East Anglian county of historic origin in England. It has borders with Norfolk to the north, Cambridgeshire to the west and Essex to the south. The North Sea lies to the east. The county town is Ipswich; other important towns include Lowestoft, Bury St Edmunds, Newmarket, and Felixstowe, one of the largest container ports in Europe. The county is low-lying but has quite a few hills, especially towards the west, and has largely arable land with the wetlands of the Broads in the north. The Suffolk Coast and Heaths are an Area of Outstanding Natural Beauty. The Anglo-Saxon settlement of Suffolk, and of East Anglia generally, occurred on a large scale, possibly following a period of depopulation by the previous inhabitants, the Romanized descendants of the Iceni. By the fifth century, the settlers had established control of the region. The Anglo-Saxon inhabitants later became the "north folk" and the "south folk", from which developed the names "Norfolk" and "Suffolk". Suffolk and several adjacent areas became the kingdom of East Anglia, which later merged with Mercia and then Wessex. Suffolk was originally divided into four separate Quarter Sessions divisions. In 1860, the number of divisions was reduced to two. The eastern division was administered from Ipswich and the western from Bury St Edmunds. Under the Local Government Act 1888, the two divisions were made the separate administrative counties of East Suffolk and West Suffolk; Ipswich became a county borough. A few Essex parishes were also added to Suffolk: Ballingdon-with-Brundon and parts of Haverhill and Kedington. On 1 April 1974, under the Local Government Act 1972, East Suffolk, West Suffolk, and Ipswich were merged to form the unified county of Suffolk. The county was divided into several local government districts: Babergh, Forest Heath, Ipswich, Mid Suffolk, St Edmundsbury, Suffolk Coastal, and Waveney. This act also transferred some land near Great Yarmouth to Norfolk. 
As introduced in Parliament, the Local Government Act would have transferred Newmarket and Haverhill to Cambridgeshire and brought Colchester into the county from Essex; these changes were not included when the act was passed into law. In 2007, the Department for Communities and Local Government referred Ipswich Borough Council's bid to become a new unitary authority to the Boundary Committee. The Boundary Committee consulted local bodies and reported in favour of the proposal. It was not, however, approved by the Secretary of State for Communities and Local Government. Beginning in February 2008, the Boundary Committee again reviewed local government in the county, with two possible options emerging. One was that of splitting Suffolk into two unitary authorities – Ipswich and Felixstowe, and Rural Suffolk; the other, that of creating a single county-wide controlling authority – the "One Suffolk" option. In February 2010, the then-Minister Rosie Winterton announced that no changes would be imposed on the structure of local government in the county as a result of the review, but that the government would be "asking Suffolk councils and MPs to reach a consensus on what unitary solution they want through a countywide constitutional convention". Following the May 2010 general election, all further moves towards any of the suggested unitary solutions ceased on the instructions of the incoming Coalition government. In 2018 it was determined that Forest Heath and St Edmundsbury would be merged to form a new West Suffolk district, while Waveney and Suffolk Coastal would similarly form a new East Suffolk district. These changes took effect on 1 April 2019. West Suffolk, like nearby East Cambridgeshire, is renowned for archaeological finds from the Stone Age, the Bronze Age, and the Iron Age. Bronze Age artefacts have been found in the area between Mildenhall and West Row, in Eriswell and in Lakenheath. 
Many bronze objects, such as swords, spearheads, arrows, axes, palstaves, knives, daggers, rapiers, armour, decorative equipment (in particular for horses), and fragments of sheet bronze, are entrusted to St. Edmundsbury heritage service, housed at West Stow just outside Bury St. Edmunds. Other finds include traces of cremations and barrows. In the east of the county is Sutton Hoo, the site of one of England's most significant Anglo-Saxon archaeological finds, a ship burial containing a collection of treasures including a Sword of State, gold and silver bowls, jewellery, and a lyre. Located in the East of England, much of Suffolk is low-lying, founded on Pleistocene sand and clays. These rocks are relatively unresistant and the coast is eroding rapidly. Coastal defences have been used to protect several towns, but several cliff-top houses have been lost to coastal erosion and others are under threat. The continuing protection of the coastline and the estuaries, including the Blyth, Alde and Deben, has been, and remains, a matter of considerable discussion. The coastal strip to the east contains an area of heathland known as "The Sandlings", which runs almost the full length of the coastline. Suffolk is also home to nature reserves, such as the RSPB site at Minsmere, and Trimley Marshes, a wetland under the protection of Suffolk Wildlife Trust. The west of the county lies on more resistant Cretaceous chalk. This chalk is responsible for a sweeping tract of largely downland landscapes that stretches from Dorset in the south west to Dover in the south east and north through East Anglia to the Yorkshire Wolds. The chalk is less easily eroded, so forms the only significant hills in the county. The highest point in the county is Great Wood Hill, the highest point of the Newmarket Ridge, near the village of Rede. The county flower is the oxlip. 
According to estimates by the Office for National Statistics, the population of Suffolk in 2014 was 738,512, split almost evenly between males and females. Roughly 22% of the population was aged 65 or older, and 90.84% were "White British". Historically, the county's population has mostly been employed as agricultural workers. An 1835 survey showed Suffolk to have 4,526 occupiers of land employing labourers, 1,121 occupiers not employing labourers, 33,040 labourers employed in agriculture, 676 employed in manufacture, 18,167 employed in retail trade or handicraft, 2,228 'capitalists, bankers etc.', 5,336 labourers (non-agricultural), 4,940 other males aged over 20, 2,032 male servants and 11,483 female servants. The same publication records the total population of the county at 296,304. Most English counties have nicknames for people from that county, such as a Tyke from Yorkshire and a Yellowbelly from Lincolnshire. A traditional nickname for people from Suffolk is 'Suffolk Fair-Maids', referring to the supposed beauty of its female inhabitants in the Middle Ages. Another is 'Silly Suffolk', derived from the Old English word sælig, meaning 'blessed', referring to the long history of Christianity in the county, its many fine churches, and the influential Bury Abbey. Use of the term 'Silly Suffolk' can be dated to 1819, though its origins are likely older. There are several towns in the county, Ipswich being the largest and most populous. At the time of the 2011 census, a population of 730,000 lived in the county, with 133,384 living in Ipswich. The table below shows all towns with over 20,000 inhabitants. The majority of agriculture in Suffolk is either arable or mixed. Farm sizes vary from around 80 acres (32 hectares) to over 8,000 acres. Soil types vary from heavy clays to light sands. 
Crops grown include winter wheat, winter barley, sugar beet, oilseed rape, winter and spring beans and linseed, although smaller areas of rye and oats can be found growing in areas with lighter soils, along with a variety of vegetables. The continuing importance of agriculture in the county is reflected in the Suffolk Show, which is held annually in May at Ipswich. Although latterly somewhat changed in nature, this remains primarily an agricultural show. Well-known companies in Suffolk include Greene King and Branston Pickle in Bury St Edmunds. Birds Eye has its largest UK factory in Lowestoft, where all its meat products and frozen vegetables are processed. Huntley & Palmers biscuit company has a base in Sudbury. The UK horse racing industry is based in Newmarket. There are two USAF bases in the west of the county, close to the A11. Sizewell B nuclear power station is at Sizewell on the coast near Leiston. Bernard Matthews Farms have some processing units in the county, specifically at Holton. Southwold is the home of Adnams Brewery. The Port of Felixstowe is the largest container port in the United Kingdom. Other ports are at Lowestoft and Ipswich, run by Associated British Ports. BT has its main research and development facility at Martlesham Heath. Below is a chart of the regional gross value added of Suffolk at current basic prices, published by the Office for National Statistics, with figures in millions of pounds sterling. Suffolk has a comprehensive education system with fourteen independent schools. Unusually for the UK, parts of Suffolk had a 3-tier school system in place, with primary schools (ages 5–9), middle schools (ages 9–13) and upper schools (ages 13–16). However, a 2006 Suffolk County Council study concluded that Suffolk should move to the 2-tier school system used in the majority of the UK. For the purpose of conversion to 2-tier, the 3-tier system was divided into four geographical area groupings and corresponding phases. 
The first phase was the conversion of schools in Lowestoft and Haverhill in 2011, followed by schools in north and west Suffolk in 2012. The remainder of the changeovers to 2-tier took place from 2013, for those schools that stayed within local government control and did not become academies and/or free schools. The majority of schools thus now (2019) operate the more common primary to high school (11–16) system. Many of the county's upper schools have a sixth form, and most further education colleges in the county offer A-level courses. In terms of school population, Suffolk's individual schools are large, with the Ipswich district having the largest school population and Forest Heath the smallest, with just two schools. In 2013, a letter said that "...nearly a fifth of the schools inspected were judged inadequate. This is unacceptable and now means that Suffolk has a higher proportion of pupils educated in inadequate schools than both the regional and national averages." The Royal Hospital School near Ipswich is the largest independent boarding school in Suffolk. Other boarding schools within Suffolk include Woodbridge School, Culford School, Framlingham College, Barnardiston Hall Preparatory School, Saint Felix School and Finborough School. The Castle Partnership Academy Trust in Haverhill is the county's only all-through academy chain. Comprising Castle Manor Academy and Place Farm Primary Academy, the Academy Trust supports all-through education and provides opportunities for young people aged 3 to 18. Sixth form colleges in the county include Lowestoft Sixth Form College and One in Ipswich. Suffolk is home to four further education colleges: Lowestoft College, Easton & Otley College, Suffolk New College (Ipswich) and West Suffolk College (Bury St Edmunds). The county has one university, with branches spread across different towns. The University of Suffolk was, prior to August 2016, known as University Campus Suffolk. 
Until it became independent, it was a collaboration between the University of Essex and the University of East Anglia, which sponsored its formation and validated its degrees. The institution accepted its first students in September 2007. Until then Suffolk was one of only four counties in England which did not have a university campus. The University of Suffolk was granted Taught Degree Awarding Powers by the Quality Assurance Agency for Higher Education in November 2015, and in May 2016 it was awarded university status by the Privy Council and renamed the University of Suffolk on 1 August 2016. The University operates at five sites, with its central hub in Ipswich; others include Lowestoft, Bury St. Edmunds, and Great Yarmouth in Norfolk. The University operates two academic faculties. Some 30% of the student body are classed as mature students and 68% of its students are female. Founded in 1948 by Benjamin Britten, the annual Aldeburgh Festival is one of the UK's major classical music festivals. Originating in Aldeburgh, it has been held at the nearby Snape Maltings since 1967. Since 2006, Henham Park has been home to the annual Latitude Festival. This mainly open-air festival, which has grown considerably in size and scope, includes popular music, comedy, poetry and literary events. The FolkEast festival is held at Glemham Hall in August and attracts international acoustic, folk and roots musicians, whilst also championing local businesses, heritage and crafts. In 2015 it was also home to the first festival of musical instruments and makers. More recently, LeeStock Music Festival has been held in Sudbury. A celebration of the county, "Suffolk Day", was instigated in 2017. The Suffolk dialect is very distinctive. Epenthesis and yod-dropping are common, along with non-conjugation of verbs. The county's sole professional football club is Ipswich Town. 
Formed in 1878, the club were Football League champions in 1961–62, FA Cup winners in 1977–78 and UEFA Cup winners in 1980–81. Ipswich Town currently play in League One, the third tier of English football. The next highest ranked teams in Suffolk are Leiston, Lowestoft Town and Needham Market, who all participate in the Southern League Premier Division Central, the seventh tier of English football. The town of Newmarket is the headquarters of British horseracing – home to the largest cluster of training yards in the country and many key horse racing organisations including the National Stud and Newmarket Racecourse. Tattersalls bloodstock auctioneers and the National Horseracing Museum are also in the town. Point to point racing takes place at Higham and Ampton. Speedway racing has been staged in Suffolk since at least the 1950s, following the construction of the Foxhall Stadium, just outside Ipswich, home of the Ipswich Witches. The Witches are currently members of the Premier League, the UK's first division. National League team Mildenhall Fen Tigers are also from Suffolk. Suffolk C.C.C. compete in the Eastern Division of the Minor Counties Championship. The club has won the championship three times outright and has shared the title one other time, as well as winning the MCCA Knockout Trophy once. Home games are played in Bury St Edmunds, Copdock, Exning, Framlingham, Ipswich and Mildenhall. Novels set in Suffolk include parts of "David Copperfield" by Charles Dickens, "The Fourth Protocol" by Frederick Forsyth, "Unnatural Causes" by P.D. James, Dodie Smith's "The Hundred and One Dalmatians", and "The Rings of Saturn" by W. G. Sebald; among Arthur Ransome's children's books, "We Didn't Mean to Go to Sea", "Coot Club" and "Secret Water" take place in part in the county. Roald Dahl's short story "The Mildenhall Treasure" is set in Mildenhall. A TV series about a British antiques dealer, "Lovejoy", was filmed in various locations in Suffolk. 
The reality TV series "Space Cadets" was filmed in Rendlesham Forest, although the producers fooled participants into believing that they were in Russia. Several towns and villages in the county have been used for location filming of other television programmes and cinema films. These include the BBC Four TV series "Detectorists", an episode of "Kavanagh QC", and the films "Iris" and "Drowning by Numbers". During 2017 and 2018, a total of £3.8 million was spent by film crews in Suffolk. The Rendlesham Forest Incident is one of the most famous UFO events in England and is sometimes referred to as "Britain's Roswell". The song "Castle on the Hill" by singer-songwriter Ed Sheeran was described by him as "a love letter to Suffolk", with lyrical references to his hometown of Framlingham and Framlingham Castle. In the arts, Suffolk is noted for having been the home of two of England's best regarded painters, Thomas Gainsborough and John Constable – the Stour Valley area is branded as "Constable Country" – and one of its most noted composers, Benjamin Britten. Other artists of note from Suffolk include sculptress Dame Elizabeth Frink, the cartoonist Carl Giles (a bronze statue of his character "Grandma" commemorating him is located in Ipswich town centre), poets George Crabbe and Robert Bloomfield, writer and literary editor Ronald Blythe, actors Ralph Fiennes and Bob Hoskins, actress and singer Kerry Ellis, musician and record producer Brian Eno, singer Dani Filth of the Suffolk-based extreme metal group Cradle of Filth, singer-songwriter Ed Sheeran, and coloratura soprano Christina Johnston. Hip-hop DJ Tim Westwood is originally from Suffolk, and the influential DJ and radio presenter John Peel made the county his home. Contemporary painter Maggi Hambling was born, and resides, in Suffolk. Norah Lofts, author of best-selling historical novels, lived for decades in Bury St. Edmunds, where she died and was buried in 1983. 
Sir Peter Hall, the founder of the Royal Shakespeare Company, was born in Bury St. Edmunds, and Sir Trevor Nunn, the theatre director, was born in Ipswich. Suffolk's contributions to sport include Formula One magnate Bernie Ecclestone and former England footballers Terry Butcher, Kieron Dyer and Matthew Upson. With Newmarket being the centre of British horse racing, many jockeys have settled in the county, including Lester Piggott and Frankie Dettori. Significant ecclesiastical figures from Suffolk include Simon Sudbury, a former Archbishop of Canterbury; Tudor-era Catholic prelate Thomas, Cardinal Wolsey; and author, poet and Benedictine monk John Lydgate. Edward FitzGerald, the first translator of The Rubaiyat of Omar Khayyam, was born in Bredfield. Other significant persons from Suffolk include the suffragette Dame Millicent Garrett Fawcett; the captain of "HMS Beagle", Robert FitzRoy; Witchfinder General Matthew Hopkins; educationist Hugh Catchpole; and Britain's first female physician and mayor, Elizabeth Garrett Anderson. Charity leader Sue Ryder settled in Suffolk and based her charity in Cavendish. King of East Anglia and Christian martyr St Edmund (after whom the town of Bury St Edmunds is named) was killed by invading Danes in the year 869. St Edmund was the patron saint of England until he was replaced by St George in the 13th century. A 2006 campaign to have St Edmund restored as the patron saint of England failed, but in 2007 he was named patron saint of Suffolk, with St Edmund's Day falling on 20 November. His flag is flown in Suffolk on that day.
https://en.wikipedia.org/wiki?curid=27886
Solitaire Solitaire is any tabletop game which one can play by oneself, usually with cards. The term "solitaire" is also used for single-player games of concentration and skill using a set layout of tiles, pegs or stones. These games include peg solitaire and mahjong solitaire. Most solitaire games function as a puzzle which, owing to a different starting position each time, may (or may not) be solvable in a different fashion on each attempt.
https://en.wikipedia.org/wiki?curid=27891
Glossary of patience terms There are a number of common features in many patience games (or solitaire games, as they are called in the US), such as "building down", the "foundations" and the "tableau". These are used to simplify the descriptions of new games. The layout describes the piles of cards in use during the game, and the restrictions on these piles. There are a number of different kinds of piles which have become standard across a number of games. "Building" involves cards being placed in their final location, in stacks or cascades according to various rules. The building terms are usually combined in game explanations. For instance, a game may describe "building up in sequence by suit". The terms in this table are generally preceded by the word "building" (as in the previous sentence). "Packing" means ordering cards in sequence in an intermediate location, usually the tableau, until they can be placed on the foundations. The terms above are useful for describing the rules of the game. The terms in this section tend to be more useful for describing things happening during the state of play. Most are derived from Lady Cadogan (see below). The following terms are used by Peter Arnold in his book "Card Games for One" and may be terms used exclusively in British English in explaining solitaire games: The following terms are from the book "Illustrated Games of Patience" by Lady Cadogan (1874). This defines the long-forgotten term talon (= "stock"), which is still in use in Germany and has been re-introduced by English authors like Parlett. Also note the term "marriage" of cards.
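Building and packing rules of this kind are easy to make concrete in code. The sketch below is an illustration only (the function name and card representation are mine, not glossary terms); it checks the common rule of packing down in sequence by alternate colour, as found in games such as Klondike.

```python
# Illustrative check for "packing down in sequence by alternate colour".
# A card is a (rank, suit) tuple, with rank 1 (ace) through 13 (king).
RED_SUITS = {"hearts", "diamonds"}

def can_pack(card, onto):
    """Return True if `card` may be packed onto `onto` under the
    down-by-alternate-colour rule common to many patience games."""
    rank, suit = card
    onto_rank, onto_suit = onto
    alternating = (suit in RED_SUITS) != (onto_suit in RED_SUITS)
    return alternating and rank == onto_rank - 1

print(can_pack((9, "hearts"), (10, "spades")))    # True: one lower, colour alternates
print(can_pack((9, "hearts"), (10, "diamonds")))  # False: same colour
print(can_pack((8, "clubs"), (10, "hearts")))     # False: not in sequence
```

A game described as "building up in sequence by suit" would instead require `suit == onto_suit and rank == onto_rank + 1`.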
https://en.wikipedia.org/wiki?curid=27893
Syrinx In classical Greek mythology, Syrinx (Greek Σύριγξ) was a nymph and a follower of Artemis, known for her chastity. Pursued by the amorous god Pan, she ran to a river's edge and asked for assistance from the river nymphs. In answer, she was transformed into hollow water reeds that made a haunting sound when the god's frustrated breath blew across them. Pan cut the reeds to fashion the first set of pan pipes, which were thenceforth known as "syrinx". The word "syringe" was derived from this word. The story of the syrinx is told in Achilles Tatius' "Leukippe and Kleitophon", where the heroine is subjected to a virginity test by entering a cave where Pan has left syrinx pipes that will sound a melody if she passes. The story became popular among artists and writers in the 19th century. The Victorian artist and poet Thomas Woolner wrote "Silenus", a long narrative poem about the myth, in which Syrinx becomes the lover of Silenus, but drowns when she attempts to escape rape by Pan. As a result of the crime, Pan is transmuted into a demon figure and Silenus becomes a drunkard. Amy Clampitt's poem "Syrinx" refers to the myth by relating the whispering of the reeds to the difficulties of language. Longus makes reference to Syrinx in his tale of "Daphnis and Chloe" in Book 2:34. Whilst the description of the tale here differs from that of Ovid, it nevertheless incorporates Pan's desire to have her. Longus, however, makes no reference to Syrinx receiving aid from the nymphs in his version; instead Syrinx hides from Pan amongst some reeds and disappears into the marsh. Upon realising what had happened to Syrinx, Pan created the first set of panpipes from the reeds she was transformed into, allowing her to be with him for the rest of his days. The story was used as a central theme by Aifric Mac Aodha in her poetry collection "Gabháil Syrinx". Samuel R. Delany features an instrument called a syrynx in his science-fiction novel "Nova". 
Syrinx is the name of one of the main characters in the Night's Dawn Trilogy of space opera novels by British author Peter F. Hamilton. In the trilogy, Syrinx is a member of the transhumanist future society known as Edenism, and serves as the captain of the "Oenone", a living starship. A 1972 poem by James Merrill, titled "Syrinx", draws on several aspects of the mythological tale, with the poet himself identifying with the celebrated nymph, desiring to become not just a "reed" but a "thinking reed" (in contrast to a "thinking stone", as critic Helen Vendler has observed, noting the influence of a Wallace Stevens lyric, "Le Monocle de Mon Oncle"). The poet aspires to return to his "scarred case" with minimal suffering inflicted by "the great god Pain", a play on words on the Greek god Pan. "Syrinx" is the final poem in Merrill's 1972 collection, "Braving the Elements". Cassandra J. Bruner's 2019 poem "Prayer: Syrinx" takes the nymph's viewpoint and asks why there is not a complementary story in which Pan suffers metamorphosis. In "Dark Places of Wisdom", Peter Kingsley discusses in some detail the use of the word in Parmenides' poem and in association with the ancient practice of incubation. The British Victorian artist Arthur Hacker depicted Syrinx in his 1892 nude painting. This painting in oil on canvas is currently on display in Manchester Art Gallery. A sculpture of Syrinx created in 1925 by sculptor William McMillan is displayed at the Kelvingrove Art Gallery and Museum in Glasgow. Sculptor Adolph Wolter was commissioned in 1973 to create a replacement for a stolen sculpture of Syrinx by Myra Reynolds Richards in Indianapolis, United States. The sculpture sits in University Park, located in the city's Indiana World War Memorial Plaza. Claude Debussy based his 1913 "Syrinx (La Flûte de Pan)" on Pan's sadness over losing his love. 
The piece is still popular today; it was used as incidental music in the play "Psyché" by Gabriel Mourey. Maurice Ravel incorporated the character of Syrinx into his ballet "Daphnis et Chloé". Gustav Holst alludes to the story of Pan and Syrinx in the opening of his "First Choral Symphony", which draws from the text of John Keats' 1818 poem "Endymion". French Baroque composer Michel Pignolet de Montéclair composed "Pan et Syrinx", a cantata for voice and ensemble (No. 4 of "Second livre de cantates"). Danish composer Carl Nielsen composed "Pan and Syrinx" ("Pan og Syrinx"), Op. 49, FS 87. Canadian electronic progressive rock band Syrinx took their name from the legend. Canadian progressive rock band Rush have a movement titled "The Temples of Syrinx" in their song "2112" on their album "2112". The song is about a dystopian futuristic society in which the arts, particularly music, have been suppressed by the Priests of the Temples of Syrinx. Related to the Rush reference, Maryland-based rockers Clutch mention the Temples of Syrinx in their song "10001110101" from their album "Robot Hive/Exodus".
https://en.wikipedia.org/wiki?curid=27899
Savate Savate, also known as boxe française, savate boxing, French boxing or French footfighting, is a French kickboxing combat sport that uses the hands and feet as weapons, combining elements of English boxing with graceful kicking techniques. Only foot kicks are allowed, unlike some systems such as Muay Thai, which allow the use of the knees or shins. "Savate" is a French word for "old shoe or boot". Savate fighters wear specially designed boots. A male practitioner of savate is called a tireur, while a female is called a tireuse. Savate takes its name from the French for "old shoe" (heavy footwear, especially the boots used by French military and sailors; "cf." the French-English loanwords sabot and sabotage and the Spanish cognate "zapato"). The modern formalized form is mainly an amalgam of French street fighting techniques from the beginning of the 19th century. Savate was then a type of street fighting common in Paris and northern France. In the south, especially in the port of Marseille, sailors developed a fighting style involving high kicks and open-handed slaps. It is conjectured that this kicking style was developed in this way to allow the fighter to use a hand to hold onto something for balance on a rocking ship's deck, and that the kicks and slaps were used on land to avoid the legal penalties for using a closed fist, which was considered a deadly weapon under the law. It was known as the "jeu marseillais" (game from Marseille), and was later renamed "chausson" (slipper, after the type of shoes the sailors wore). In contrast, at this time in England (the home of boxing and the Queensberry rules), kicking was seen as unsportsmanlike. Traditional savate was a northern French development, especially in Paris' slums, and always used heavy shoes and boots derived from its potential military origins. Street fighting savate, unlike chausson, kept the kicks low, almost never targeting above the groin, and they were delivered with vicious, bone-breaking intent. 
Parisian savate also featured open-hand blows, in thrusting or smashing palm strikes (la baffe) or in stunning slaps targeted at facial nerves. Techniques of savate or chausson were at this time also developed in the ports of northwest Italy and northeastern Spain—hence one savate kick named the "Italian hunt" ("chasse italiane"). The two key historical figures in the history of the shift from street fighting to the modern sport of savate are Michel Casseux (also known as "le Pisseux") (1794–1869) and Charles Lecour (1808–1894). Casseux opened the first establishment in 1825 for practicing and promoting a regulated version of chausson and savate (disallowing head butting, eye gouging, grappling, etc.). However, the sport had not shaken its reputation as a street-fighting technique. Casseux's pupil Charles Lecour was exposed to the English art of boxing when he witnessed an English boxing match in France between English pugilist Owen Swift and Jack Adams in 1838. Lecour also took part in a friendly sparring match with Swift later that same year. Lecour felt that he was at a disadvantage, using his hands only to bat his opponent's fists away, rather than to punch. He then trained in boxing for a time before combining boxing with chausson and savate to create the sport of savate (or "boxe française", as we know it today). At some point "la canne" and "le bâton", stick fighting, were added, and some form of stick fencing, such as "la canne", is commonly part of savate training. Those who train purely for competition may omit this. Savate was developed professionally by Lecour's student Joseph Charlemont and then his son Charles Charlemont. Charles continued his father's work and in 1899 fought an English boxer named Jerry Driscoll. He won the match with a round-kick ("fouetté médian") in the eighth round, although the English said that it was a kick to the groin. 
According to the well-known English referee Bernard John Angle of the National Sporting Club, in his book "My Sporting Memories" (London, 1925), "Driscoll did not know what he was taking on" when he agreed "to meet the Frenchman at his own game". Angle also said that, "The contest ended in Jerry being counted out to a blow in the groin from the Frenchman's knee." He further alleged that "the timekeeper saved Charlemont several times". After the fight Driscoll bore no grudges, considering the blow to have been "an accident". The French claimed victory for their man by stoppage, following a round-kick to Driscoll's stomach. Savate was later codified under a Comité National de Boxe Française under Charles Charlemont's student Count Pierre Baruzy (dit Barozzi). The Count is seen as the father of modern savate and was 11-time Champion of France and its colonies, his first ring combat and title coming prior to World War I. "Savate de Défense", "Défense Savate" or "Savate de Rue" ("street savate") is the name given to those methods of fighting excluded from savate competition. The International Savate Federation (FIS) is the official worldwide ruling body of savate. Perhaps the ultimate recognition of the respectability of savate came in 1924 when it was included as a demonstration sport in the Olympic Games in Paris. In 2008, savate was recognised by the International University Sports Federation (FISU) – this recognition allows savate to hold official University World Championships; the first was held in Nantes, France in 2010. The 25th anniversary of the founding of the International Savate Federation, in March 2010, was celebrated with a visit to Lausanne to meet with International Olympic Committee President Jacques Rogge. FISav President Gilles Le Duigou was presented with a memento depicting the Olympic Rings. 
In April 2010, the International Savate Federation was accepted as a member of SportAccord (previously known as AGFIS) – a big step forward on the road to Olympic recognition. Despite its roots, savate is a relatively safe sport to learn. Today, savate is practiced all over the world by amateurs: from Australia to the U.S. and from Finland to Britain. Many countries (including the United States) have national federations devoted to promoting savate. Modern codified savate provides for three levels of competition: "assaut", "pre-combat" and "combat". Assaut requires the competitors to focus on their technique while still making contact; referees assign penalties for the use of excessive force. Pre-combat allows for full-strength fighting so long as the fighters wear protective gear such as helmets and shinguards. Combat, the most intense level, is the same as pre-combat, but protective gear other than groin protection and mouthguards is prohibited. Many martial arts provide ranking systems, such as belt colours. Savate uses glove colours to indicate a fighter's level of proficiency (unlike arts such as karate, which assign a new belt at each promotion; moving to a higher colour rank in savate does not necessarily entail a change in the colour of one's actual gloves, and a given fighter may continue using the same pair of gloves through multiple promotions). Novices begin at no colour. The qualifications for competition vary depending on the association or commission. In the French Federation a yellow glove can compete, and in Belgium a green glove can compete. In the United States, the competition levels start at novice (6 months). In Russia there is no requirement for a specific glove colour in order to compete. The ranking of Savate-Boxe Française is divided into three roads that a savateur can choose to take. In some clubs there is also a rank of aide-moniteur, while in other associations there is no rank of initiateur. 
Eight to twelve years of training on average are necessary for a student to reach professeur level, depending on skill. In France the professional professeur must have a French state certificate of specialized teaching (CQP AS, BEES 1st, 2nd and 3rd degree, 1st de CCB BPJEPS, DEJEPS, DESJEPS). These diplomas represent university-level education in sports with specialisation in savate, supervised by the French Federation of BF Savate and associated disciplines (canne, self-defence, lutte, bâton), i.e. the FFBFSDA. The international federation (FIS), however, is still allowed to award professeur to non-French nationals without requiring such a rigid system of education. French nationals have to submit to and succeed in the rigid system of education and prove themselves in competition, as well as being respected by peers, in order to have a slight chance of becoming a DTD (directeur technique départemental). Like all sports federations in France, the French and International Federations of Savate are under the control of the French Ministry of Sport and Youth. This makes them extremely powerful federations on the world scene. These two federations have followed a set of national traditions. Nowadays, savate is just a term meaning Savate-Boxe Française. In the 1970s the term "savate" was rarely used in France to refer to the formalised sport: people mostly used the term Savate boxe française, Boxe-Française Savate, B.F, B.F.S., S.B.F. or simply boxe française. The term savate remains in use mostly outside France or when speaking a language other than French. The global distribution of schools (salles) today is best explained through the different stylistic approaches of the French arts of pugilism found in the world today. In the USA it is said that Daniel Duby brought Savate to the west coast in Southern California. 
The first real FFBFSDA/FIS club of boxe française savate was opened in 1983 on the east coast in Philadelphia, under Jean-Noel Eynard, FFBFSDA/FIS Professeur, with the assistance of former FFBFSDA/FIS DTN Bob Alix. In 1988 the US Registry of Savate was created on the east coast, which became the American Registry of Savate Instructors and Clubs in 1994 (ARSIC-International). Meanwhile, on the west coast Savate clubs were springing up from the California Association of Savate. A couple of years later, under the collaborative assistance of a steering committee made up of Gilles le Duigou (FIS), JN Eynard, ARSIC-International (PA), Armando Basulto (NJ) and Norman Taylor, USSF president (NJ), as well as a few other individuals from California, the official name of United States Savate Federation was given to this combined association. The teaching efforts of Jean-Noel Eynard, Salem Assli and Nicolas Saignac contributed to the further development of Boxe Française Savate in the USA. ARSIC-International has been instrumental in promoting Savate in the USA. In official competitions, competitors wear an "intégrale" or a vest and savate trousers. They wear boxing gloves (with or without padded palms) and savate boots. Savate is the only kicking-and-punching style to use footwear, although in some other combat sports, such as shoot fighting and some forms of MMA, competitors sometimes also wear grappling-type shoes or boots. Savate boots can be used to hit with the sole, the top of the foot, the toe, or the heel. Sometimes a helmet can be worn, e.g. in junior competitions and in the early rounds of Combat (full contact) bouts. In competitive "savate", which includes the Assaut, Pre-Combat, and Combat types, only four kinds of kicks and four kinds of punches are allowed: Savate did not begin as a sport, but as a form of self-defence, and was fought on the streets of Paris and Marseille. This type of savate was known as Savate des Rues. 
In addition to kicks and punches, training in Savate des Rues (Street Savate) includes knee and elbow strikes along with locks, sweeps, throws, headbutts, and takedowns. The International Savate Federation holds World Championships in three disciplines: Savate Assaut, Savate Combat and Canne de combat. World Savate Combat Championships are held for Seniors (over 21 years) and Juniors (18 to 21 years). In the 1964 beach party film "Bikini Beach", a French female bodyguard claims to be an expert in Savate and uses kicks to defend herself. In Marvel comics, movies, and cartoons, the criminal mercenary Batroc the Leaper is a master of Savate. The X-Men superhero Gambit is trained in Savate. In the 1967 novel "Logan's Run", protagonist Logan 3 often uses Savate kicks for self-defense. In DC Comics, the character Nightwing is described as having a personal fighting style that combines Savate with the Filipino martial art Arnis. Ash Crimson from "The King of Fighters" fights in a basic Savate style that uses his own green flames. Ash himself is portrayed by SNK as a character of unknown origin, because of his lineage as a descendant of Saiki (leader of Those from the Past), though it is also said that he was raised by the Blanctorche clan, a French family. In the "Tekken" series, Brazilian character Katarina Alves uses savate as her fighting style. In the Japanese manga series "Medaka Box", Zenkichi Hitoyoshi is a master of Savate, and emphasizes its open-handed techniques with his "Altered God Mode: Model Zenkichi", which makes his hands as sharp as blades. In the album "Flight 714" of The Adventures of Tintin, Professor Calculus states that he used to be proficient in Savate in his younger years. However, when attempting a kick, he ends up falling terribly, prompting stunned reactions from the onlookers. Dazed, Calculus remarks that he is out of practice. In "The Black Island", Tintin himself kicks a villain and calls it a Savate move. 
In the 1995 martial arts video "Savate", Olivier Gruner plays a savate fighter in the American West using Savate to give his enemies a good thrashing. In a 1974 episode of "The Six Million Dollar Man" titled "Dr. Wells is Missing", a "Master of the French art of Savate" called Yamo fights with Steve Austin, using the original style of the art (kicks only).
https://en.wikipedia.org/wiki?curid=27901
Sextus Julius Africanus Sextus Julius Africanus (c. 160 – c. 240; Greek: Σέξτος Ἰούλιος ὁ Ἀφρικανός or ὁ Λίβυς) was a Christian traveler and historian of the late second and early third centuries. He is important chiefly because of his influence on Eusebius, on all the later writers of Church history among the Church Fathers, and on the whole Greek school of chroniclers. The Suidas claims Africanus was a "Libyan philosopher", while Gelzer considers him of Roman descent. Julius called himself a native of Jerusalem – which some scholars consider his birthplace – and lived at the neighbouring Emmaus. His chronicle indicates his familiarity with the topography of historic Judea. Little of Africanus's life is known and all dates are uncertain. One tradition places him under the Emperor Gordianus III (238–244); others mention him under Severus Alexander (222–235). He appears to have known Abgar VIII (176–213). Africanus may have served under Septimius Severus against the Osrhoenians in 195. He went on an embassy to the emperor Severus Alexander to ask for the restoration of Emmaus, which had fallen into ruins. His mission succeeded, and Emmaus was henceforward known as Nicopolis. Africanus traveled to Greece and Rome and went to Alexandria to study, attracted by the fame of its catechetical school, possibly about the year 215. He knew Greek (in which language he wrote), Latin, and Hebrew. He was at one time a soldier and had been a pagan; he wrote all his works as a Christian. Whether Africanus was a layman or a cleric remains controversial. Louis-Sébastien Le Nain de Tillemont argued from Africanus's addressing the priest Origen as "dear brother" that Julius must have been a priest himself, but Gelzer points out that such an argument is inconclusive. Africanus wrote "Chronographiai", a history of the world in five volumes. The work covers the period from Creation to the year AD 221. 
He calculated the period between Creation and Jesus as 5500 years, placing the Incarnation on the first day of AM 5501 (March 25, 1 BC in the modern reckoning). (Note that this dating implies that the birth of Jesus was in December, nine months later.) This method of reckoning led to several Creation eras being used in the Greek Eastern Mediterranean, all of which placed Creation within one decade of 5500 BC. The history, which had an apologetic aim, is no longer extant, but copious extracts from it are to be found in the "Chronicon" of Eusebius, who used it extensively in compiling the early episcopal lists. There are also fragments in George Syncellus, Cedrenus and the "Chronicon Paschale". Eusebius gives some extracts from his letter to one Aristides, reconciling the apparent discrepancy between Matthew and Luke in the genealogy of Christ by a reference to the Jewish law of levirate marriage, which compelled a man to marry the widow of his deceased brother if the latter died without issue. His terse and pertinent letter to Origen impugning the authority of the part of the Book of Daniel that tells the story of Susanna, and Origen's wordy and uncritical answer, are both extant. The ascription to Africanus of an encyclopaedic work entitled "Kestoi" (Κέστοι, "Embroidered"), treating of agriculture, natural history, military science, etc., has been disputed on account of its secular and often credulous character. August Neander suggested that it was written by Africanus before he had devoted himself to religious subjects. A fragment of the "Kestoi" was found in the Oxyrhynchus papyri. According to the New Schaff-Herzog Encyclopedia of Religious Knowledge, the Kestoi "appears to have been intended as a sort of encyclopedia of the material sciences with the cognate mathematical and technical branches, but to have contained a large proportion of merely curious, trifling, or miraculous matters, on which account the authorship of Julius has been questioned. 
Among the parts published are sections on agriculture, liturgiology, tactics, and medicine (including veterinary practise)." Only fragments of his religious writings have been preserved. One fragment deals with eschatology. After referring to the standard interpretation of the 'ram' and the 'he-goat' as symbolizing Persia and Greece, Africanus suggested that the 2300 days might be taken for months, totaling about 185 years, which he applied to the time from the capture of Jerusalem to the 20th year of Artaxerxes. He seems to be the only one who developed this interpretation. Africanus begins the seventy weeks of Daniel 9 with the twentieth year of Artaxerxes, in Olympiad 83, year 4 (444 B.C.), and ends the period in Olympiad 202, year 2 (31 A.D.), or 475 solar years inclusive, which would be equivalent to 490 uncorrected lunar years. This work does not survive except in fragments, chiefly those preserved by Eusebius and Georgius Syncellus. In turn, Africanus preserves fragments of the Greek history of Polemon of Athens.
https://en.wikipedia.org/wiki?curid=27903
Soul food Soul food is an ethnic cuisine traditionally prepared and eaten by African Americans in the Southern United States. The cuisine originated with the foods that were given to enslaved West Africans on southern plantations during the American colonial period; however, it was strongly influenced by the traditional practices of West Africans and Native Americans from its inception. Due to the historical presence of African Americans in the region, soul food is closely associated with the cuisine of the American South, although today it has become an easily identifiable and celebrated aspect of mainstream American food culture. The expression "soul food" originated in the mid-1960s, when "soul" was a common word used to describe African American culture. The term became popular in the 1960s and 1970s in the midst of the Black Power movement. One of the earliest written uses of the term is found in "The Autobiography of Malcolm X", which was published in 1965. LeRoi Jones (Amiri Baraka) published an article entitled "Soul Food" and was one of the key proponents of establishing the food as a part of the Black American identity. Those who had participated in the Great Migration found within soul food a reminder of the home and family they had left behind after moving to unfamiliar northern cities. Soul food restaurants were Black-owned businesses that served as neighborhood meeting places where people socialized and ate together. The origins of recipes considered soul food can be traced back to before slavery, as African (mostly West African) and European (mostly British) foodways were adapted to the environment of the region. Many of the foods integral to the cuisine originate from the limited rations given to enslaved people by their planters and masters. 
Enslaved people were typically given a peck of cornmeal and 3–4 pounds of pork per week, and from those rations come soul food staples such as cornbread, fried catfish, barbecued ribs, chitterlings, and neckbones. It has been noted that enslaved Africans were the primary consumers of cooked greens (collards, beets, dandelion, kale, and purslane) and sweet potatoes for a portion of US history. Enslaved people needed to eat high-calorie foods to sustain long days of working in the fields. This led to time-honored soul food traditions like frying foods, breading meats and fishes with cornmeal, and mixing meats with vegetables (e.g. putting pork in collard greens). Eventually, this slave-invented style of cooking began to be adopted into larger Southern culture, as slave owners gave special privileges to slaves with cooking skills. Impoverished whites and blacks in the South cooked many of the same dishes stemming from the soul tradition, but styles of preparation sometimes varied. Certain techniques popular in soul and Southern cuisines (e.g., frying meat and using all parts of the animal for consumption) are shared with ancient cultures all over the world, including China, Egypt, and Rome. The introduction of soul food to northern cities such as Washington, D.C. also came from private chefs in the White House. Many American Presidents have desired French cooking, and have sought out black chefs given their Creole backgrounds. The 23rd President of the United States, Benjamin Harrison, and first lady Caroline Harrison took this same route when they replaced their French cooking staff with a black woman by the name of Dolly Johnson. One famous relationship is the bond formed between President Lyndon B. Johnson and Zephyr Wright. Wright became a great influence on Johnson in his fight for civil rights, as he saw her treatment and segregation as they traveled throughout the South. 
Johnson even had Wright present at the signing of several civil rights laws. Lizzie McDuffie, a former maid and cook to Franklin Delano Roosevelt, assisted her boss during the 1936 election simply by making the president more relatable to black voters. Public awareness of black Americans preparing food in the presidential kitchen in turn helped to sway minority votes toward presidential candidates such as John F. Kennedy and Ronald Reagan. Southern Native American culture (Cherokee, Chickasaw, Choctaw, Creek, Seminole) is an important element of Southern cuisine. From their cultures came one of the main staples of the Southern diet: corn (maize) – either ground into meal or limed with an alkaline salt to make hominy, in a Native American process known as nixtamalization. Corn was used to make all kinds of dishes, from the familiar cornbread and grits, to liquors such as moonshine and whiskey (which are still important to the Southern economy). Many fruits are available in this region: blackberries, muscadines, raspberries, and many other wild berries were part of Southern Native Americans' diets, as well. African, European, and Native Americans of the American South supplemented their diets with meats derived from the hunting of native game. What meats people ate depended on seasonal availability and geographical region. Common game included opossums, rabbits, and squirrels. Livestock, adopted from Europeans, in the form of cattle and hogs, were kept. When game or livestock was killed, the entire animal was used. Aside from the meat, it was common for them to eat organ meats such as brains, livers, and intestines. This tradition remains today in hallmark dishes like chitterlings (commonly called "chit'lins"), which are fried small intestines of hogs; livermush (a common dish in the Carolinas made from hog liver); and pork brains and eggs. The fat of the animals, particularly hogs, was rendered and used for cooking and frying. 
Many of the early European settlers in the South learned Native American cooking methods, and so cultural diffusion was set in motion for the Southern dish. Scholars have noted the substantial African influence found in soul food recipes, especially from the West and Central regions of Africa. This influence can be seen through the heat level of many soul food dishes, as well as many ingredients found within them. Peppers used to add spice to food included malagueta pepper, as well as peppers native to the western hemisphere such as red (cayenne) peppers. Several foods that are essential in southern cuisine and soul food were domesticated or consumed in the African savanna and the tropical regions of West and Central Africa. These include pigeon peas, black-eyed peas, many leafy greens, and sorghum. It has also been noted that a species of rice was domesticated in Africa, and thus many Africans who were brought to the Americas kept their knowledge of rice cooking. Rice is a staple side dish in soul food and is the center of dishes such as red beans and rice. There are many documented parallels between the foodways of West Africans and soul food recipes. The consumption of sweet potatoes in the US is reminiscent of the consumption of yams in West Africa. The frequent consumption of cornbread by African-Americans is analogous to West Africans' use of fufu to soak up stews. West Africans also cooked meat over open pits, and thus it is possible that enslaved Africans came to the New World with knowledge of this cooking technique (it is also possible they learned it from Native Americans, who also used barbecuing as a cooking technique). Researchers note that many tribes in Africa followed a simple, largely plant-based diet, on which most African dishes are based. This simplicity extended to the way food was prepared as well as served. It was not uncommon to see food served out of an empty gourd. 
Many techniques for changing the flavor of staple foods such as nuts, seeds, and rice added further dimensions to the evolving cuisine. These techniques included roasting, frying with palm oil, baking in ashes, and steaming in leaves such as banana leaf. Because it was illegal in many states for slaves to learn to read or write, soul food recipes and cooking techniques tended to be passed along orally, until after emancipation. The first soul food cookbook is attributed to Abby Fisher, entitled "What Mrs. Fisher Knows About Old Southern Cooking" and published in 1881. "Good Things to Eat" was published in 1911; the author, Rufus Estes, was a former slave who worked for the Pullman railway car service. Many other cookbooks were written by Black Americans during that time, but as they were not widely distributed, most are now lost. Since the mid-20th century, many cookbooks highlighting soul food and African-American foodways have been compiled and published. One notable soul food chef is celebrated traditional Southern chef and author Edna Lewis, who released a series of books between 1972 and 2003, including "A Taste of Country Cooking" in which she weaves stories of her childhood in Freetown, Virginia into her recipes for "real Southern food". Another early and influential soul food cookbook is Vertamae Grosvenor's "Vibration Cooking, or the Travel Notes of a Geechee Girl", originally published in 1970, which focuses on South Carolina Lowcountry/Geechee/Gullah cooking. Its focus on spontaneity in the kitchen—cooking by "vibration" rather than precisely measuring ingredients, as well as "making do" with ingredients on hand—captured the essence of traditional African-American cooking techniques. The simple, healthful, basic ingredients of lowcountry cuisine, like shrimp, oysters, crab, fresh produce, rice and sweet potatoes, made it a bestseller. 
Usher boards and Women's Day committees of various religious congregations large and small, and even public service and social welfare organizations such as the National Council of Negro Women (NCNW) have produced cookbooks to fund their operations and charitable enterprises. The NCNW produced its first cookbook, "The Historical Cookbook of the American Negro", in 1958, and revived the practice in 1993, producing a popular series of cookbooks featuring recipes by famous Black Americans, among them: "The Black Family Reunion Cookbook" (1991), "Celebrating Our Mothers' Kitchens: Treasured Memories and Tested Recipes" (1994), and "Mother Africa's Table: A Chronicle of Celebration" (1998). The NCNW also recently reissued "The Historical Cookbook". Soul food originated in the southern region of the US and is consumed by African-Americans across the nation. Traditional soul food cooking is seen as one of the ways enslaved Africans passed their traditions to their descendants once they were brought to the US, and is a cultural creation stemming from slavery and Native American and European influences. Recipes considered soul food are popular in the South due to the accessibility and affordability of the ingredients, as well as the proximity that African-Americans and white Americans maintained during periods of slavery and reconstruction. Scholars have noted that while white Americans provided the material culture for soul food dishes, the cooking techniques found in many of the dishes have been visibly influenced by the enslaved Africans themselves. Dishes devised by slaves consisted of many vegetables and grains, because slave owners felt that more meat would make slaves lethargic, with less energy to tend the crops. The bountiful vegetables found in Africa were replaced in Southern dishes with new leafy greens such as dandelion, turnip, and beet greens. 
Pork, more specifically hog, was introduced into several dishes in the form of cracklins from the skin, pig's feet, chitterlings, and lard, which increased the fat content of otherwise vegetarian dishes. Spices such as thyme and bay leaf, blended with onion and garlic, gave dishes their own character. Figures such as LeRoi Jones (Amiri Baraka), Elijah Muhammad, and Dick Gregory played notable roles in shaping the conversation around soul food. Muhammad and Gregory opposed soul food because they felt it was unhealthy and was slowly killing African-Americans. They saw soul food as a remnant of oppression and felt it should be left behind. Many African-Americans were offended by the Nation of Islam's rejection of pork, as it is a staple ingredient used to flavor many dishes. Stokely Carmichael also spoke out against soul food, claiming that it was not true African food due to its colonial and European influence. Despite this, many voices in the Black Power Movement saw soul food as something African-Americans should take pride in, and used it to distinguish African-Americans from white Americans. Proponents embraced the concept of soul food and used it as a counterclaim to the argument that African-Americans had no culture or cuisine. The magazine "Ebony Jr!" was important in transmitting the cultural relevance of soul food dishes to middle-class African-American children who typically ate a more standard American diet. Soul food is frequently found at religious rituals and social events such as funerals, fellowship, Thanksgiving, and Christmas in the Black community. Soul food has been the subject of many popular culture creations such as the 1997 film, then turned television series, "Soul Food", as well as the eponymous 1995 rap album released by Goodie Mob. In 2013, American rapper Schoolboy Q released a single titled "Collard Greens". 
"Further information: Soul food health trends" Soul food prepared traditionally and consumed in large amounts can be detrimental to one's health. Opponents of soul food have been vocal about health concerns surrounding the culinary tradition since the name was coined in the mid-twentieth century. Soul food has been criticized for its high starch, fat, sodium, cholesterol, and caloric content, as well as the inexpensive and often low-quality nature of ingredients such as salted pork and cornmeal. In light of this, soul food has been implicated by some in the disproportionately high rates of high blood pressure (hypertension), type 2 diabetes, clogged arteries (atherosclerosis), stroke, and heart attack suffered by African-Americans. Figures who led discussions surrounding the negative impacts of soul food include Dr. Alvenia Fulton, Dick Gregory, and Elijah Muhammad. On the other hand, critics and traditionalists have argued that attempts to make soul food healthier also make it less tasty and less culturally and ethnically authentic. A foundational difference between contemporary soul food and 'traditional' styles lies in the widely different structures of agriculture. Fueled by federal subsidies, the agricultural system in the United States became industrialized, and the nutritional value of most processed foods, not just those implicated in a traditional perception of soul food, has degraded. This urges a consideration of how concepts of racial authenticity evolve alongside changes in the structures that make some foods more available and accessible than others. An important aspect of the preparation of soul food was the reuse of cooking lard. Because many cooks could not afford to buy new shortening to replace what they used, they would pour the liquefied cooking grease into a container. After cooling completely, the grease re-solidified and could be used again the next time the cook required lard. 
With changing fashions and perceptions of "healthy" eating, some cooks may use preparation methods that differ from those of cooks who came before them: using liquid oil like vegetable oil or canola oil for frying and cooking, and using smoked turkey instead of pork, for example. Changes in hog farming techniques have also resulted in drastically leaner pork in the late 20th and 21st centuries. Some cooks have even adapted recipes to include vegetarian alternatives to traditional ingredients, including tofu and soy-based analogues. Several of the ingredients included in soul food recipes have pronounced health benefits. Collard and other greens are rich sources of several vitamins (including vitamin A, B6, folic acid or vitamin B9, vitamin K, and C), minerals (manganese, iron, and calcium), fiber, and small amounts of omega-3 fatty acids. They also contain a number of phytonutrients, which are thought to play a role in the prevention of ovarian and breast cancers. However, the traditional preparation of soul food vegetables often involves high temperatures or slow cooking methods, which can cause the water-soluble vitamins (e.g., vitamin C and the B-complex vitamins) to be destroyed or leached into the water in which the greens were cooked. This water is often consumed and is known as pot liquor. Peas and legumes are inexpensive sources of protein; they also contain important vitamins, minerals, and fiber.
Septuagint The Greek Old Testament, or Septuagint (often abbreviated LXX, the Roman numeral for seventy), is the earliest extant Koine Greek translation of books from the Hebrew Bible, various biblical apocrypha, and deuterocanonical books. The first five books of the Hebrew Bible, known as the Torah or the Pentateuch, were translated in the mid-3rd century BCE; they did not survive as original translation texts, however, except as rare fragments. The remaining books of the Greek Old Testament are presumably translations of the 2nd century BCE. The full title derives from the story recorded in the Letter of Aristeas that the Hebrew Torah was translated into Greek at the request of Ptolemy II Philadelphus (285–247 BCE) by 70 Jewish scholars or, according to later tradition, 72: six scholars from each of the Twelve Tribes of Israel, who independently produced identical translations. The miraculous character of the Aristeas legend might indicate the esteem and disdain in which the translation was held at the time; Greek translations of Hebrew scriptures were in circulation among the Alexandrian Jews. Egyptian papyri from the period have led most scholars to view as probable Aristeas' dating of the translation of the Pentateuch to the third century BCE. Whatever share the Ptolemaic court may have had in the translation, it satisfied a need felt by the Jewish community (in which knowledge of Hebrew was waning amid the demands of everyday life). Greek scriptures were in wide use by the time of Jesus and Paul of Tarsus (early Christianity) because most Christian proselytes, God-fearers, and other gentile sympathizers of Hellenistic Judaism could not read Hebrew. The text of the Greek Old Testament is quoted more often than the original Hebrew Bible text in the Greek New Testament (particularly the Pauline epistles) by the Apostolic Fathers, and later by the Greek Church Fathers. 
Modern critical editions of the Greek Old Testament are based on the Codices Alexandrinus, Sinaiticus, and Vaticanus. The fourth- and fifth-century Greek Old Testament manuscripts have different lengths. The Codex Alexandrinus, for example, contains all four books of the Maccabees; the Codex Sinaiticus contains 1 and 4 Maccabees, and the Codex Vaticanus contains none of the four books. "Septuagint" is derived from the Latin phrase "versio septuaginta interpretum" ("translation of the seventy interpreters"), itself derived from the Greek. It was not until the time of Augustine of Hippo (354–430 CE) that the Greek translation of the Jewish scriptures was called by the Latin term "Septuaginta". The Roman numeral LXX (seventy) is commonly used as an abbreviation, in addition to 𝔊 or "G". According to the legend, seventy-two Jewish scholars were asked by Ptolemy II Philadelphus, the Greek king of Egypt, to translate the Torah from Biblical Hebrew to Greek for inclusion in the Library of Alexandria. This narrative is found in the pseudepigraphic Letter of Aristeas to his brother Philocrates, and is repeated by Philo of Alexandria, Josephus (in "Antiquities of the Jews"), and by later sources (including Augustine of Hippo). It is also found in the Tractate Megillah of the Babylonian Talmud. Philo of Alexandria, who relied extensively on the Septuagint, writes that the number of scholars was chosen by selecting six scholars from each of the twelve tribes of Israel. According to later rabbinic tradition (which considered the Greek translation as a distortion of sacred text and unsuitable for use in the synagogue), the Septuagint was given to Ptolemy two days before the annual Tenth of Tevet fast. The 3rd century BCE is supported for the Torah translation by a number of factors, including its Greek being representative of early Koine Greek, citations beginning as early as the 2nd century BCE, and early manuscripts datable to the 2nd century. 
After the Torah, other books were translated over the next two to three centuries. It is unclear which was translated when, or where; some may have been translated twice (into different versions), and then revised. The quality and style of the translators varied considerably from book to book, from a literal translation to paraphrasing to an interpretative style. The translation process of the Septuagint, and from the Septuagint into other versions, can be divided into several stages: the Greek text was produced within the social environment of Hellenistic Judaism, initially in Alexandria but elsewhere as well, and completed by 132 BCE. With the spread of Early Christianity, this Septuagint in turn was rendered into Latin in a variety of versions; the latter, collectively known as the "Vetus Latina", were also referred to as the Septuagint. The Septuagint also formed the basis for the Slavonic, Syriac, Old Armenian, Old Georgian, and Coptic versions of the Christian Old Testament. The Septuagint is written in Koine Greek. Some sections contain Semiticisms, idioms and phrases based on Semitic languages such as Hebrew and Aramaic. Other books, such as Daniel and Proverbs, have a stronger Greek influence. The Septuagint may also clarify pronunciation of pre-Masoretic Hebrew; many proper nouns are spelled with Greek vowels in the translation, but contemporary Hebrew texts lacked vowel pointing. However, it is unlikely that all biblical-Hebrew sounds had precise Greek equivalents. As the translation progressed, the canon of the Greek Bible expanded. The Hebrew Bible, also called the Tanakh, has three parts: the Torah (law), the Nevi'im (prophets), and the Ketuvim (writings). The Septuagint has four: law, history, poetry, and prophets. The books of the Apocrypha were inserted at appropriate locations. 
Extant copies (dating from the 4th century CE) of the Septuagint contain books and additions which are not present in the Hebrew Bible (not found in the Palestinian Jewish canon), and are not uniform in their contents. According to some scholars, there is no evidence that the Septuagint included these additional books. These copies of the Septuagint include books known as "anagignoskomena" in Greek and in English as deuterocanon (derived from the Greek words for "second canon"), books not included in the Jewish canon. These books are estimated to have been written between 200 BCE and 50 CE. Among them are the first two books of Maccabees; Tobit; Judith; the Wisdom of Solomon; Sirach; Baruch (including the Letter of Jeremiah), and additions to Esther and Daniel. The Septuagint versions of some books, such as Daniel and Esther, are longer than those in the Masoretic Text. The Septuagint Book of Jeremiah is shorter than the Masoretic Text. The Psalms of Solomon, 3 Maccabees, 4 Maccabees, the Epistle of Jeremiah, the Book of Odes, the Prayer of Manasseh and Psalm 151 are included in some copies of the Septuagint. Several reasons have been given for the rejection of the Septuagint as scriptural by mainstream rabbinic Judaism since Late Antiquity. Differences between the Hebrew and the Greek were found. The Hebrew source texts in some cases (particularly the Book of Daniel) used for the Septuagint differed from the Masoretic tradition of Hebrew texts, which were affirmed as canonical by the rabbis. The rabbis also wanted to distinguish their tradition from the emerging tradition of Christianity, which frequently used the Septuagint. As a result of these teachings, other translations of the Torah into Koine Greek by early Jewish rabbis have survived only as rare fragments. The Septuagint became synonymous with the Greek Old Testament, a Christian canon incorporating the books of the Hebrew canon with additional texts. 
Although the Roman Catholic and Eastern Orthodox Churches include most of the books in the Septuagint in their canons, Protestant churches usually do not. After the Protestant Reformation, many Protestant Bibles began to follow the Jewish canon and exclude the additional texts (which came to be called the Apocrypha) as noncanonical. The Apocrypha are included under a separate heading in the King James version of the Bible. All the books in Western Old Testament biblical canons are found in the Septuagint, although the order does not always coincide with the Western book order. The Septuagint order is evident in the earliest Christian Bibles, which were written during the fourth century. Some books which are set apart in the Masoretic Text are grouped together. The Books of Samuel and the Books of Kings are one four-part book entitled Βασιλειῶν (Of Reigns) in the Septuagint. The Books of Chronicles supplement Reigns, known as Παραλειπομένων (Of Things Left Out). The Septuagint organizes the minor prophets in its twelve-part Book of Twelve. Some ancient scriptures are found in the Septuagint, but not in the Hebrew Bible. The additional books are Tobit; Judith; the Wisdom of Solomon; Wisdom of Jesus son of Sirach; Baruch and the Letter of Jeremiah, which became chapter six of Baruch in the Vulgate; additions to Daniel (The Prayer of Azarias, the Song of the Three Children, Susanna, and Bel and the Dragon); additions to Esther; 1 Maccabees; 2 Maccabees; 3 Maccabees; 4 Maccabees; 1 Esdras; Odes (including the Prayer of Manasseh); the Psalms of Solomon, and Psalm 151. Fragments of deuterocanonical books in Hebrew are among the Dead Sea Scrolls found at Qumran. Sirach, whose text in Hebrew was already known from the Cairo Geniza, has been found in two scrolls (2QSir or 2Q18, 11QPs_a or 11Q5) in Hebrew. Another Hebrew scroll of Sirach has been found in Masada (MasSir). 
Five fragments from the Book of Tobit have been found in Qumran: four written in Aramaic and one written in Hebrew (papyri 4Q, nos. 196-200). Psalm 151 appears with a number of canonical and non-canonical psalms in the Dead Sea scroll 11QPs(a) (also known as 11Q5), a first-century-CE scroll discovered in 1956. The scroll contains two short Hebrew psalms, which scholars agree were the basis for Psalm 151. The canonical acceptance of these books varies by Christian tradition. In the most ancient copies of the Bible which contain the Septuagint version of the Old Testament, the Book of Daniel is not the original Septuagint version but a copy of Theodotion's translation from the Hebrew which more closely resembles the Masoretic text. The Septuagint version was discarded in favor of Theodotion's version in the 2nd to 3rd centuries CE. In Greek-speaking areas, this happened near the end of the 2nd century; in Latin-speaking areas (at least in North Africa), it occurred in the middle of the 3rd century. The reason for this is unknown. Several Old Greek texts of the Book of Daniel have been discovered, and the original form of the book is being reconstructed. The pre-Christian Jews Philo and Josephus considered the Septuagint equal to the Hebrew text. Manuscripts of the Septuagint have been found among the Dead Sea Scrolls, and were thought to have been in use among Jews at the time. Several factors led most Jews to abandon the Septuagint around the second century CE. The earliest gentile Christians used the Septuagint out of necessity, since it was the only Greek version of the Bible and most (if not all) of these early non-Jewish Christians could not read Hebrew. The association of the Septuagint with a rival religion may have made it suspect in the eyes of the newer generation of Jews and Jewish scholars. 
Jews instead used Hebrew or Aramaic Targum manuscripts later compiled by the Masoretes and authoritative Aramaic translations, such as those of Onkelos and Rabbi Yonathan ben Uziel. Perhaps most significant for the Septuagint, as distinct from other Greek versions, was that the Septuagint began to lose Jewish sanction after differences between it and contemporary Hebrew scriptures were discovered. Even Greek-speaking Jews tended to prefer other Jewish versions in Greek (such as the translation by Aquila), which seemed to be more concordant with contemporary Hebrew texts. The Early Christian church used the Greek texts, since Greek was a "lingua franca" of the Roman Empire at the time and the language of the Greco-Roman Church, while Aramaic was the language of Syriac Christianity. The relationship between the apostolic use of the Septuagint and the Hebrew texts is complicated. Although the Septuagint seems to have been a major source for the Apostles, it is not the only one. St. Jerome offered several passages as examples found in Hebrew texts but not in the Septuagint. Matthew 2:23 is not present in current Masoretic tradition either; according to Jerome, however, it was present in Hebrew texts. The New Testament writers freely used the Greek translation when citing the Jewish scriptures (or quoting Jesus doing so), implying that Jesus, his apostles, and their followers considered it reliable. In the early Christian Church, the presumption that the Septuagint was translated by Jews before the time of Christ and that it lends itself more to a Christological interpretation than 2nd-century Hebrew texts in certain places was taken as evidence that "Jews" had changed the Hebrew text in a way that made it less Christological. Irenaeus writes that the Septuagint clearly identifies a "virgin" (Greek "παρθένος"; "bethulah" in Hebrew) who would conceive. 
The word "almah" in the Hebrew text was, according to Irenaeus, interpreted by Theodotion and Aquila (Jewish converts) as a "young woman" who would conceive. Again according to Irenaeus, the Ebionites used this to claim that Joseph was the biological father of Jesus. To him, this was heresy facilitated by late anti-Christian alterations of the scripture in Hebrew, as evidenced by the older, pre-Christian Septuagint. Jerome broke with church tradition, translating most of the Old Testament of his Vulgate from Hebrew rather than Greek. His choice was sharply criticized by Augustine, his contemporary. Although Jerome argued for the superiority of the Hebrew texts in correcting the Septuagint on philological and theological grounds, because he was accused of heresy he also acknowledged the Septuagint texts. Acceptance of Jerome's version increased, and it displaced the Septuagint's Old Latin translations. The Eastern Orthodox Church prefers to use the Septuagint as the basis for translating the Old Testament into other languages, and uses the untranslated Septuagint where Greek is the liturgical language. Critical translations of the Old Testament which use the Masoretic Text as their basis consult the Septuagint and other versions to reconstruct the meaning of the Hebrew text when it is unclear, corrupted, or ambiguous. According to the New Jerusalem Bible foreword, "Only when this (the Masoretic Text) presents insuperable difficulties have emendations or other versions, such as the ... LXX, been used." The translator's preface to the New International Version reads, "The translators also consulted the more important early versions (including) the Septuagint ... Readings from these versions were occasionally followed where the MT seemed doubtful ..." Modern scholarship holds that the Septuagint was written from the 3rd through the 1st centuries BCE, but nearly all attempts at dating specific books (except for the Pentateuch, early- to mid-3rd century BCE) are tentative. 
Later Jewish revisions and recensions of the Greek against the Hebrew are well-attested. The best-known are Aquila (128 CE), Symmachus, and Theodotion. These three, to varying degrees, are more-literal renderings of their contemporary Hebrew scriptures compared to the Old Greek (the original Septuagint). Modern scholars consider one (or more) of the three to be new Greek versions of the Hebrew Bible. Although much of Origen's "Hexapla" (a six-version critical edition of the Hebrew Bible) is lost, several compilations of fragments are available. Origen kept a column for the Old Greek (the Septuagint), which included readings from all the Greek versions in a critical apparatus with diacritical marks indicating to which version each line (Gr. στίχος) belonged. Perhaps the "Hexapla" was never copied in its entirety, but Origen's combined text was copied frequently (eventually without the editing marks) and the older uncombined text of the Septuagint was neglected. The combined text was the first major Christian recension of the Septuagint, often called the "Hexaplar recension". Two other major recensions were identified in the century following Origen by Jerome, who attributed these to Lucian (the Lucianic, or Antiochene, recension) and Hesychius (the Hesychian, or Alexandrian, recension). The oldest manuscripts of the Septuagint include 2nd-century-BCE fragments of Leviticus and Deuteronomy (Rahlfs nos. 801, 819, and 957) and 1st-century-BCE fragments of Genesis, Exodus, Leviticus, Numbers, Deuteronomy, and the Twelve Minor Prophets (Alfred Rahlfs nos. 802, 803, 805, 848, 942, and 943). Relatively-complete manuscripts of the Septuagint postdate the Hexaplar recension, and include the fourth-century-CE Codex Vaticanus and the fifth-century Codex Alexandrinus. These are the oldest-surviving nearly-complete manuscripts of the Old Testament in any language; the oldest extant complete Hebrew texts date to about 600 years later, from the first half of the 10th century. 
The 4th-century Codex Sinaiticus also partially survives, with many Old Testament texts. The Jewish (and, later, Christian) revisions and recensions are largely responsible for the divergence of the codices. The Codex Marchalianus is another notable manuscript. The text of the Septuagint is generally close to that of the Masoretes and Vulgate; in one chapter, for example, the Septuagint, Vulgate, and Masoretic Text are identical except for a single noticeable difference, at 4:7. The differences between the Septuagint and the MT fall into four categories. The Biblical manuscripts found in Qumran, commonly known as the Dead Sea Scrolls (DSS), have prompted comparisons of the texts associated with the Hebrew Bible (including the Septuagint). Emanuel Tov, editor of the translated scrolls, identifies five broad variants of DSS texts. The textual sources present a variety of readings; Bastiaan Van Elderen compares three variations of Deuteronomy 32:43, the Song of Moses. The text of all print editions is derived from the recensions of Origen, Lucian, or Hesychius. The first English translation (which excluded the apocrypha) was Charles Thomson's in 1808, which was revised and enlarged by C. A. Muses in 1954 and published by the Falcon's Wing Press. The Septuagint with Apocrypha: Greek and English was translated by Lancelot Brenton in 1854. It is the traditional translation, and for most of the time since its publication it has been the only one readily available; it has continually been in print. The translation, based on the Codex Vaticanus, contains the Greek and English texts in parallel columns. It has an average of four footnoted, transliterated words per page, abbreviated "Alex" and "GK". "The Complete Apostles' Bible" (translated by Paul W. Esposito), which updates the English of Brenton's translation, was published in 2007. Using the Masoretic Text in the 23rd Psalm (and possibly elsewhere), it omits the apocrypha. 
A New English Translation of the Septuagint and the Other Greek Translations Traditionally Included Under that Title (NETS), an academic translation based on the New Revised Standard version (in turn based on the Masoretic Text), was published by the International Organization for Septuagint and Cognate Studies (IOSCS) in October 2007. The "Apostolic Bible Polyglot," published in 2003, is a Greek-English interlinear Septuagint which may be used in conjunction with the reprint of Brenton's translation. It includes the Greek books of the Hebrew canon (without the apocrypha) and the Greek New Testament, numerically coded to the AB-Strong numbering system, and set in monotonic orthography. The version includes a concordance and index. The "Orthodox Study Bible", published in early 2008, is a new translation of the Septuagint based on the Alfred Rahlfs edition of the Greek text. Two additional major sources have been added: the 1851 Brenton translation and the New King James Version text in places where the translation matches the Hebrew Masoretic text. This edition includes the NKJV New Testament and extensive commentary from an Eastern Orthodox perspective. Nicholas King completed "The Old Testament" in four volumes and "The Bible". Brenton's Septuagint, Restored Names Version (SRNV), has been published in two volumes. The Hebrew-names restoration, based on the Westminster Leningrad Codex, focuses on the restoration of the Divine Name and has extensive Hebrew and Greek footnotes. The "Eastern Orthodox Bible", an extensive revision and correction of Brenton's translation (which was primarily based on the Codex Vaticanus), was to feature modern language and syntax, extensive introductory material, and footnotes with significant inter-LXX and LXX/MT variants, but the project was cancelled. The Holy Orthodox Bible, by Peter A. Papoutsis, and the Michael Asser English translation of the Septuagint are based on the Church of Greece's Septuagint text. 
The International Organization for Septuagint and Cognate Studies (IOSCS), a non-profit learned society, promotes international research into and study of the Septuagint and related texts. The society declared 8 February 2006 International Septuagint Day, a day to promote the work on campuses and in communities. The IOSCS publishes the "Journal of Septuagint and Cognate Studies".
https://en.wikipedia.org/wiki?curid=27915
Codex Sinaiticus Codex Sinaiticus ("Sinaïtikós Kṓdikas"; Shelfmarks and references: London, British Library, Add MS 43725; Gregory-Aland nº א [Aleph] or 01; [Soden δ 2]) or "Sinai Bible" is one of the four great uncial codices, ancient, handwritten copies of a Christian Bible in Greek. The codex is a historical treasure. The codex is an Alexandrian text-type manuscript written in uncial letters on parchment and dated paleographically to the mid-4th century. Scholarship considers the Codex Sinaiticus to be one of the most important Greek texts of the New Testament, along with the Codex Vaticanus. Until Constantin von Tischendorf's discovery of the Sinaiticus text, the Codex Vaticanus was unrivaled. The Codex Sinaiticus came to the attention of scholars in the 19th century at Saint Catherine's Monastery in the Sinai Peninsula, with further material discovered in the 20th and 21st centuries. Although parts of the codex are scattered across four libraries around the world, most of the manuscript is held today in the British Library in London, where it is on public display. Since its discovery, study of the Codex Sinaiticus has proven to be useful to scholars for critical studies of biblical text. While large portions of the Old Testament are missing, it is assumed that the codex originally contained the whole of both Testaments. About half of the Greek Old Testament (or "Septuagint") survived, along with a complete New Testament, the entire Deuterocanonical books, the Epistle of Barnabas and portions of The Shepherd of Hermas. The codex consists of parchment, originally in double sheets, which may have measured about 40 by 70 cm. The whole codex consists, with a few exceptions, of quires of eight leaves, a format popular throughout the Middle Ages. Each line of the text has some twelve to fourteen Greek uncial letters, arranged in four columns (48 lines per column) with carefully chosen line breaks and slightly ragged right edges. 
When opened, the eight columns thus presented to the reader have much the same appearance as the succession of columns in a papyrus roll. The poetical books of the Old Testament are written stichometrically, in only two columns per page. The codex has almost 4,000,000 uncial letters. The work was written in "scriptio continua" with neither breathings nor polytonic accents. Occasional points and a few ligatures are used, though "nomina sacra" with overlines are employed throughout. Some words usually abbreviated in other manuscripts (such as πατηρ and δαυειδ) are in this codex written in both full and abbreviated forms. The following nomina sacra are written in abbreviated forms: ΘΣ ΚΣ ΙΣ ΧΣ ΠΝΑ ΠΝΙΚΟΣ ΥΣ ΑΝΟΣ ΟΥΟΣ ΔΑΔ ΙΛΗΜ ΙΣΡΛ ΜΗΡ ΠΗΡ ΣΩΡ. Almost regularly, a plain iota is replaced by the epsilon-iota diphthong (commonly though imprecisely known as itacism), e.g. ΔΑΥΕΙΔ instead of ΔΑΥΙΔ, ΠΕΙΛΑΤΟΣ instead of ΠΙΛΑΤΟΣ, ΦΑΡΕΙΣΑΙΟΙ instead of ΦΑΡΙΣΑΙΟΙ, etc. Each rectangular page has the proportions 1.1 to 1, while the block of text has the reciprocal proportions, 0.91 (the same proportions, rotated 90°). If the gutters between the columns were removed, the text block would mirror the page's proportions. Typographer Robert Bringhurst referred to the codex as a "subtle piece of craftsmanship". The folios are made of vellum parchment primarily from calf skins, secondarily from sheep skins. (Tischendorf himself thought that the parchment had been made from antelope skins, but modern microscopic examination has shown otherwise.) Most of the quires or signatures contain four sheets, save two containing five. It is estimated that the hides of about 360 animals were employed for making the folios of this codex. The cost of the material, the scribes' time, and the binding is estimated to equal the lifetime wages of one individual at the time. 
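As a side note on the stated page proportions, the reciprocal relationship between the page ratio (1.1 to 1) and the text-block ratio (0.91) can be checked with a line of arithmetic; this is an illustrative sketch, not part of the manuscript description:

```python
# Page height-to-width ratio as stated in the text: 1.1 to 1.
page_ratio = 1.1

# The text block is said to have the reciprocal proportions,
# i.e. the same shape rotated 90 degrees.
text_block_ratio = 1 / page_ratio

# 1 / 1.1 = 0.9090..., matching the rounded figure of 0.91 given above.
assert round(text_block_ratio, 2) == 0.91
```

This simply confirms that the two figures quoted in the description are consistent with each other.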
The portion of the codex held by the British Library consists of 346½ folios, 694 pages (38.1 cm x 34.5 cm), constituting over half of the original work. Of these folios, 199 belong to the Old Testament, including the apocrypha (deuterocanonical), and 147½ belong to the New Testament, along with two other books, the Epistle of Barnabas and part of The Shepherd of Hermas. The apocryphal books present in the surviving part of the Septuagint are 2 Esdras, Tobit, Judith, 1 and 4 Maccabees, Wisdom, and Sirach. The books of the New Testament are arranged in this order: the four Gospels, the epistles of Paul (Hebrews follows 2 Thess.), the Acts of the Apostles, the General Epistles, and the Book of Revelation. The fact that some parts of the codex are preserved in good condition while others are in very poor condition implies they were separated and stored in several places. The text of the Old Testament contains the following passages: The text of the New Testament lacks several passages: Some passages were excluded by the correctors: These omissions are typical for the Alexandrian text-type. (see Luke 7:10) – It has the additional word πολλα ("numerous"): "and cast out numerous demons in your name?". It is not supported by any other manuscript. Luke 1:26 – "Nazareth" is called "a city of Judea". Luke 2:37 – εβδομηκοντα ("seventy"), where all other manuscripts have ογδοηκοντα ("eighty"); John 1:28 – The second corrector made the unique textual variant Βηθαραβα. This textual variant occurs only in codex 892, syrh and several other manuscripts. John 1:34 – It reads ὁ ἐκλεκτός ("chosen one") together with the manuscripts formula_15, formula_1106, b, e, ff2, syrc, and syrs instead of the ordinary word υἱος ("son"). 
John 2:3 – Where the ordinary reading is "And when they wanted wine", or "And when wine failed", Codex Sinaiticus has "And they had no wine, because the wine of the marriage feast was finished" (supported by a and j); John 6:10 – It reads τρισχιλιοι ("three thousand") for πεντακισχιλιοι ("five thousand"); the second corrector changed it to πεντακισχιλιοι. Acts 11:20 – It reads εὐαγγελιστας ("Evangelists") instead of ἑλληνιστάς ("Hellenists"); In Acts 14:9, the word "not" is inserted before "heard"; in Hebr. 2:4 "harvests" instead of "distributions"; in 1 Peter 5:13 the word "Babylon" is replaced by "Church". 2 Timothy 4:10 – it reads Γαλλιαν ("Gaul") for Γαλατιαν ("Galatia"). This reading of the codex is supported by Ephraemi Rescriptus, 81, 104, 326, 436. It is the oldest witness for the phrase μη αποστερησης ("do not defraud") in Mark 10:19. This phrase was not included by the manuscripts: Codex Vaticanus (added by second corrector), Codex Cyprius, Codex Washingtonianus, Codex Athous Lavrensis, "f"1, "f"13, 28, 700, 1010, 1079, 1242, 1546, 2148, ℓ "10", ℓ "950", ℓ "1642", ℓ "1761", syrs, arm, geo. This is the variant of the majority of manuscripts. In Mark 13:33 it is the oldest witness of the variant και προσευχεσθε ("and pray"). Codices B and D do not include this passage. In Luke 8:48 it has θυγατερ ("daughter") as in the Byzantine manuscripts, instead of the Alexandrian θυγατηρ ("daughter"), supported by the manuscripts: B K L W Θ. In 1 John 5:6 it has the textual variant δι' ὕδατος καὶ αἵματος καὶ πνεύματος ("through water and blood and spirit") together with the manuscripts: Codex Alexandrinus, 104, 424c, 614, 1739c, 2412, 2495, ℓ "598"m, syrh, copsa, copbo, Origen. Bart D. Ehrman says this was a corrupt reading from a proto-orthodox scribe, although this conclusion has not gained wide support. For most of the New Testament, "Codex Sinaiticus" is in general agreement with "Codex Vaticanus Graecus 1209" and "Codex Ephraemi Rescriptus", attesting the Alexandrian text-type. 
A notable example of an agreement between the Sinaiticus and Vaticanus texts is that they both omit the word εικη ('without cause', 'without reason', 'in vain') from Matthew 5:22 ""But I say unto you, That whosoever is angry with his brother without a cause shall be in danger of the judgement"". In John 1:1–8:38 "Codex Sinaiticus" differs from Vaticanus and all other Alexandrian manuscripts. It is in closer agreement with "Codex Bezae" in support of the Western text-type. For example, in John 1:4 Sinaiticus and Codex Bezae are the only Greek manuscripts with the textual variant ἐν αὐτῷ ζωὴ ἐστίν ("in him is life") instead of ἐν αὐτῷ ζωὴ ᾓν ("in him was life"). This variant is supported by Vetus Latina and some Sahidic manuscripts. This portion has a large number of corrections. There are a number of differences between Sinaiticus and Vaticanus; Hoskier enumerated 3036 differences: A large number of these differences are due to iotacisms and variants in transcribing Hebrew names. These two manuscripts were not written in the same scriptorium. According to Fenton Hort, "Sinaiticus" and "Vaticanus" were derived from a common original, a much older source, "the date of which cannot be later than the early part of the second century, and may well be yet earlier". Example of differences between Sinaiticus and Vaticanus in Matt 1:18–19: B. H. Streeter remarked on a great agreement between the codex and the Vulgate of Jerome. According to him, Origen brought to Caesarea the Alexandrian text-type that was used in this codex, and used by Jerome. Between the 4th and 12th centuries, seven or more correctors worked on this codex, making it one of the most corrected manuscripts in existence. During his investigation in Petersburg, Tischendorf enumerated 14,800 corrections in just the portion held there (2/3 of the codex). According to David C. Parker the full codex has about 23,000 corrections. In addition to these corrections some letters were marked by dots as doubtful (e.g. 
ṪḢ). Corrections represent the Byzantine text-type, just like corrections in codices: Bodmer II, Regius (L), Ephraemi (C), and Sangallensis (Δ). They were discovered by Edward Ardron Hutton. Little is known of the manuscript's early history. According to Hort, it was written in the West, probably in Rome, as suggested by the fact that the chapter division in the Acts of the Apostles common to Sinaiticus and Vaticanus occurs in no other Greek manuscript, but is found in several manuscripts of the Latin Vulgate. Robinson countered this argument, suggesting that this system of chapter divisions was introduced into the Vulgate by Jerome himself, as a result of his studies at Caesarea. According to Kenyon the forms of the letters are Egyptian and they were found in Egyptian papyri of earlier date. Gardthausen, Ropes, and Jellicoe thought it was written in Egypt. Harris believed that the manuscript came from the library of Pamphilus at Caesarea, Palestine. Streeter, Skeat, and Milne also believed that it was produced in Caesarea. The codex has been dated paleographically to the mid-4th century. It could not have been written before 325 because it contains the Eusebian Canons, which provide a "terminus post quem". "The "terminus ante quem" is less certain, but, according to Milne and Skeat, is not likely to be much later than about 360." Tischendorf theorized that Codex Sinaiticus was one of the fifty copies of the Bible commissioned from Eusebius by Roman Emperor Constantine after his conversion to Christianity ("De vita Constantini", IV, 37). This hypothesis was supported by Pierre Batiffol; Gregory and Skeat believed that it was already in production when Constantine placed his order, but had to be suspended in order to accommodate different page dimensions. Frederic G. Kenyon argued: "There is not the least sign of either of them ever having been at Constantinople. 
The fact that Sinaiticus was collated with the manuscript of Pamphilus so late as the sixth century seems to show that it was not originally written at Caesarea". Tischendorf believed that four separate scribes (whom he named A, B, C and D) copied the work and that five correctors (whom he designated a, b, c, d and e) amended portions. He posited that one of the correctors was contemporaneous with the original scribes, and that the others worked in the 6th and 7th centuries. It is now agreed, after Milne and Skeat's reinvestigation, that Tischendorf was wrong, in that scribe C never existed. According to Tischendorf, scribe C wrote the poetic books of the Old Testament. These are written in a different format from the rest of the manuscript – they appear in two columns (the rest of the books are in four columns), written stichometrically. Tischendorf probably interpreted the different formatting as indicating the existence of another scribe. The three remaining scribes are still identified by the letters that Tischendorf gave them: A, B, and D. There were more correctors, at least seven (a, b, c, ca, cb, cc, e). Modern analysis identifies at least three scribes: Scribe B was a poor speller, and scribe A was not very much better; the best scribe was D. Metzger states: "scribe A had made some unusually serious mistakes". Scribes A and B more often used "nomina sacra" in contracted forms (ΠΝΕΥΜΑ contracted in all occurrences, ΚΥΡΙΟΣ contracted except in 2 occurrences), while scribe D more often used uncontracted forms. D distinguished between sacral and nonsacral uses of ΚΥΡΙΟΣ. His errors are the substitution of ΕΙ for Ι, and Ι for ΕΙ in medial positions, both equally common. Otherwise substitution of Ι for initial ΕΙ is unknown, and final ΕΙ is only replaced in the word ΙΣΧΥΕΙ; confusion of Ε and ΑΙ is very rare. In the Book of Psalms this scribe has 35 times ΔΑΥΕΙΔ instead of ΔΑΥΙΔ, while scribe A normally uses the abbreviated form ΔΑΔ. Scribe A's errors were of a "worse type of phonetic error". 
Confusion of Ε and ΑΙ occurs in all contexts. Milne and Skeat characterised scribe B as "careless and illiterate". The work of the original scribe is designated by the siglum א*. A paleographical study at the British Museum in 1938 found that the text had undergone several corrections. The first corrections were done by several scribes before the manuscript left the scriptorium. Readings which they introduced are designated by the siglum אa. Milne and Skeat have observed that the superscription to 1 Maccabees was made by scribe D, while the text was written by scribe A. Scribe D corrects his own work and that of scribe A, but scribe A limits himself to correcting his own work. In the 6th or 7th century, many alterations were made (אb) – according to a colophon at the end of the books of Esdras and Esther, the source of these alterations was "a very ancient manuscript that had been corrected by the hand of the holy martyr Pamphylus" (martyred in 309). If this is so, material beginning with 1 Samuel to the end of Esther is Origen's copy of the Hexapla. From this colophon, the correction is concluded to have been made in Caesarea Maritima in the 6th or 7th centuries. The pervasive iotacism, especially of the diphthong, remains uncorrected. The Codex may have been seen in 1761 by the Italian traveller Vitaliano Donati, when he visited the Saint Catherine's Monastery at Sinai in Egypt. His diary was published in 1879, in which was written: In questo monastero ritrovai una quantità grandissima di codici membranacei... ve ne sono alcuni che mi sembravano anteriori al settimo secolo, ed in ispecie una Bibbia in membrane bellissime, assai grandi, sottili, e quadre, scritta in carattere rotondo e belissimo; conservano poi in chiesa un Evangelistario greco in caractere d'oro rotondo, che dovrebbe pur essere assai antico. "In this monastery I found a great number of parchment codices ... 
there are some which seemed to be written before the seventh century, and especially a Bible (made) of beautiful vellum, very large, thin and square parchments, written in round and very beautiful letters; moreover there are also in the church a Greek Evangelistarium in gold and round letters, it should be very old." The "Bible on beautiful vellum" may be the Codex Sinaiticus, and the gold evangelistarium is likely Lectionary 300 on the Gregory-Aland list. German Biblical scholar Constantin von Tischendorf wrote about his visit to the monastery in "Reise in den Orient" in 1846 (translated as "Travels in the East" in 1847), without mentioning the manuscript. Later, in 1860, in his writings about the Sinaiticus discovery, Tischendorf wrote a narrative about the monastery and the manuscript that spanned from 1844 to 1859. He wrote that in 1844, during his first visit to the Saint Catherine's Monastery, he saw some leaves of parchment in a waste-basket. They were "rubbish which was to be destroyed by burning it in the ovens of the monastery", although this is firmly denied by the Monastery. After examination he realized that they were part of the Septuagint, written in an early Greek uncial script. He retrieved from the basket 129 leaves in Greek which he identified as coming from a manuscript of the Septuagint. He asked if he might keep them, but at this point the attitude of the monks changed. They realized how valuable these old leaves were, and Tischendorf was permitted to take only one-third of the whole, i.e. 43 leaves. These leaves contained portions of 1 Chronicles, Jeremiah, Nehemiah, and Esther. After his return they were deposited in the Leipzig University Library, where they remain. In 1846 Tischendorf published their contents, naming them the 'Codex Friderico-Augustanus' (in honor of Frederick Augustus and keeping secret the source of the leaves). Other portions of the same codex remained in the monastery, containing all of Isaiah and 1 and 4 Maccabees. 
In 1845, Archimandrite Porphyrius Uspensky (1804–1885), at that time head of the Russian Ecclesiastical Mission in Jerusalem and subsequently Bishop of Chigirin, visited the monastery and the codex was shown to him, together with leaves which Tischendorf had not seen. In 1846, Captain C. K. MacDonald visited Mount Sinai, saw the codex, and bought two codices (495 and 496) from the monastery. In 1853, Tischendorf revisited the Saint Catherine's Monastery to get the remaining 86 folios, but without success. Returning in 1859, this time under the patronage of Tsar Alexander II of Russia, he was shown the "Codex Sinaiticus". He would later claim to have found it discarded in a rubbish bin. (This story may have been a fabrication, or the manuscripts in question may have been unrelated to "Codex Sinaiticus": Rev. J. Silvester Davies in 1863 quoted "a monk of Sinai who... stated that according to the librarian of the monastery the whole of Codex Sinaiticus had been in the library for many years and was marked in the ancient catalogues... Is it likely... that a manuscript known in the library catalogue would have been jettisoned in the rubbish basket." Indeed, it has been noted that the leaves were in "suspiciously good condition" for something found in the trash.) Tischendorf had been sent to search for manuscripts by Russia's Tsar Alexander II, who was convinced there were still manuscripts to be found at the Sinai monastery. The text of this part of the codex was published by Tischendorf in 1862: This work has been digitised in full and all four volumes may be consulted online. It was reprinted in four volumes in 1869: The complete publication of the codex was made by Kirsopp Lake in 1911 (New Testament), and in 1922 (Old Testament). It was the full-sized black and white facsimile of the manuscript, "made from negatives taken from St. Petersburg by my wife and myself in the summer of 1908". 
The story of how Tischendorf found the manuscript, which contained most of the Old Testament and all of the New Testament, has all the interest of a romance. Tischendorf reached the monastery on 31 January; but his inquiries appeared to be fruitless. On 4 February, he had resolved to return home without having gained his object: On the afternoon of this day I was taking a walk with the steward of the convent in the neighbourhood, and as we returned, towards sunset, he begged me to take some refreshment with him in his cell. Scarcely had he entered the room, when, resuming our former subject of conversation, he said: "And I, too, have read a Septuagint" – i.e. a copy of the Greek translation made by the Seventy. And so saying, he took down from the corner of the room a bulky kind of volume, wrapped up in a red cloth, and laid it before me. I unrolled the cover, and discovered, to my great surprise, not only those very fragments which, fifteen years before, I had taken out of the basket, but also other parts of the Old Testament, the New Testament complete, and, in addition, the Epistle of Barnabas and a part of the Shepherd of Hermas. After some negotiations, he obtained possession of this precious fragment. James Bentley gives an account of how this came about, prefacing it with the comment, "Tischendorf therefore now embarked on the remarkable piece of duplicity which was to occupy him for the next decade, which involved the careful suppression of facts and the systematic denigration of the monks of Mount Sinai." He conveyed it to Tsar Alexander II, who appreciated its importance and had it published as nearly as possible in facsimile, so as to exhibit correctly the ancient handwriting. In 1869 the Tsar sent the monastery 7,000 rubles and the monastery of Mount Tabor 2,000 rubles by way of compensation. The document in Russian formalising this was published in 2007 in Russia and has since been translated. 
Regarding Tischendorf's role in the transfer to Saint Petersburg, there are several views. The codex is currently regarded by the monastery as having been stolen. This view is hotly contested by several scholars in Europe. Kirsopp Lake wrote: Those who have had much to do with Oriental monks will understand how improbable it is that the terms of the arrangement, whatever it was, were ever known to any except a few of the leaders. In a more neutral spirit, New Testament scholar Bruce Metzger writes: Certain aspects of the negotiations leading to the transfer of the codex to the Tsar's possession are open to an interpretation that reflects adversely on Tischendorf's candour and good faith with the monks at Saint Catherine's Monastery. For a recent account intended to exculpate him of blame, see Erhard Lauch's article 'Nichts gegen Tischendorf' in "Bekenntnis zur Kirche: Festgabe für Ernst Sommerlath zum 70. Geburtstag" (Berlin, c. 1961); for an account that includes a hitherto unknown receipt given by Tischendorf to the authorities at the monastery promising to return the manuscript from Saint Petersburg 'to the Holy Confraternity of Sinai at its earliest request'. On 13 September 1862 Constantine Simonides, skilled in calligraphy and with a controversial background with manuscripts, made the claim in print in "The Manchester Guardian" that he had written the codex himself as a young man in 1839 in the Panteleimonos monastery at Athos. Constantin von Tischendorf, who worked with numerous Bible manuscripts, was known as somewhat flamboyant, and had ambitiously sought money for his ventures from several royal families, who had indeed funded his trips. Simonides had a somewhat obscure history, as he claimed he was at Mt. Athos in the years preceding Tischendorf's contact, making the claim at least plausible. Simonides also claimed his father had died and the invitation to Mt. 
Athos came from his uncle, a monk there, but subsequent letters to his father were found among his possessions at his death. Simonides pressed the claim that the document was false in "The Manchester Guardian", in an exchange of letters among scholars and others at the time. Henry Bradshaw, a British librarian known to both men, defended the Tischendorf find of the Sinaiticus, casting aside the accusations of Simonides. Since Bradshaw was a social 'hub' among many diverse scholars of the day, his aiding of Tischendorf was given much weight. Simonides died shortly after, and the issue lay dormant for many years. Tischendorf answered Simonides in the "Allgemeine Zeitung" (December), noting that in the New Testament alone there are many differences between it and all other manuscripts. Henry Bradshaw, a bibliographer, combatted the claims of Constantine Simonides in a letter to "The Manchester Guardian" (26 January 1863). Bradshaw argued that the Codex Sinaiticus brought by Tischendorf from the Greek monastery of Mount Sinai was not a modern forgery or written by Simonides. The controversy seems to regard the misplaced use of the word 'fraud' or 'forgery' since it may have been a repaired text, a copy of the Septuagint based upon Origen's Hexapla, a text which has been rejected for centuries because of its lineage from Eusebius, who introduced Arian doctrine into the courts of Constantine I and II. Not every scholar and Church minister was delighted about the codex. Burgon, a supporter of the Textus Receptus, suggested that Codex Sinaiticus, as well as codices Vaticanus and Codex Bezae, were the most corrupt documents extant. Each of these three codices "clearly exhibits a fabricated text – is the result of arbitrary and reckless recension." The two most weighty of these three codices, א and B, he likens to the "two false witnesses" of Matthew. 
In the early 20th century Vladimir Beneshevich (1874–1938) discovered parts of three more leaves of the codex in the bindings of other manuscripts in the library of Mount Sinai. Beneshevich went on three occasions to the monastery (1907, 1908, 1911) but does not tell when or from which book these were recovered. These leaves were also acquired for St. Petersburg, where they remain. For many decades, the Codex was preserved in the Russian National Library. In 1933, the Soviet Union sold the codex to the British Museum (after 1973 the British Library) for £100,000 raised by public subscription (worth £ in 2020). After coming to Britain it was examined by Skeat and Milne using an ultra-violet lamp. In May 1975, during restoration work, the monks of Saint Catherine's Monastery discovered a room beneath the St. George Chapel which contained many parchment fragments. Kurt Aland and his team from the Institute for New Testament Textual Research were the first scholars who were invited to analyse, examine and photograph these new fragments of the New Testament in 1982. Among these fragments were twelve complete leaves from the "Sinaiticus": 11 leaves of the Pentateuch and 1 leaf of the Shepherd of Hermas. Together with these leaves, 67 Greek manuscripts of the New Testament have been found (uncials 0278 – 0296 and some minuscules). In June 2005, a team of experts from the UK, Europe, Egypt, Russia and the USA undertook a joint project to produce a new digital edition of the manuscript (involving all four holding libraries), and a series of other studies was announced. This will include the use of hyperspectral imaging to photograph the manuscripts to look for hidden information such as erased or faded text. This is to be done in cooperation with the British Library. More than one quarter of the manuscript was made publicly available at The Codex Sinaiticus Website on 24 July 2008. 
On 6 July 2009, 800 more pages of the manuscript were made available, showing over half of the entire text, although the entire text was intended to be shown by that date. The complete document is now available online in digital form and available for scholarly study. The online version has a fully transcribed set of digital pages, including amendments to the text, and two images of each page, with both standard lighting and raked lighting to highlight the texture of the parchment. Prior to 1 September 2009, University of the Arts London PhD student Nikolas Sarris discovered a previously unseen fragment of the Codex in the library of Saint Catherine's Monastery. It contains the text of Joshua 1:10. The codex is now split into four unequal portions: 347 leaves in the British Library in London (199 of the Old Testament, 148 of the New Testament), 12 leaves and 14 fragments in the Saint Catherine's Monastery, 43 leaves in the Leipzig University Library, and fragments of 3 leaves in the Russian National Library in Saint Petersburg. Saint Catherine's Monastery still maintains the importance of a letter, handwritten in 1844 with an original signature of Tischendorf, confirming that he borrowed those leaves. However, recently published documents, including a deed of gift dated 11 September 1868 and signed by Archbishop Kallistratos and the monks of the monastery, indicate that the manuscript was acquired entirely legitimately. This deed, which agrees with a report by Kurt Aland on the matter, has now been published. Unfortunately this development is not widely known in the English-speaking world, as only German- and Russian-language media reported on it in 2009. 
Doubts as to the legality of the gift arose because when Tischendorf originally removed the manuscript from Saint Catherine's Monastery in September 1859, the monastery was without an archbishop, so that even though the intention to present the manuscript to the Tsar had been expressed, no legal gift could be made at the time. Resolution of the matter was delayed through the turbulent reign of Archbishop Cyril (consecrated 7 December 1859, deposed 24 August 1866), and the situation only formalised after the restoration of peace. Skeat in his article "The Last Chapter in the History of the Codex Sinaiticus" concluded in this way: This is not the place to pass judgements, but perhaps I may say that, as it seems to me, both the monks and Tischendorf deserve our deepest gratitude, Tischendorf for having alerted the monks to the importance of the manuscript, and the monks for having undertaken the daunting task of searching through the vast mass of material with such spectacular results, and then doing everything in their power to safeguard the manuscript against further loss. If we accept the statement of Uspensky, that he saw the codex in 1845, the monks must have worked very hard to complete their search and bind up the results in so short a period. Along with Codex Vaticanus, the Codex Sinaiticus is considered one of the most valuable manuscripts available, as it is one of the oldest and likely closer to the original text of the Greek New Testament. It is the only uncial manuscript with the complete text of the New Testament, and the only ancient manuscript of the New Testament written in four columns per page which has survived to the present day. With only 300 years separating the Codex Sinaiticus and the proposed lifetime of Jesus, it is considered by some to be more accurate than most New Testament copies in preserving readings where almost all manuscripts are assumed by them to be in error. 
For the Gospels, Sinaiticus is considered by some the second most reliable witness to the text (after Vaticanus); in the Acts of the Apostles, its text is equal to that of Vaticanus; in the Epistles, Sinaiticus is assumed to be the most reliable witness. In the Book of Revelation, however, its text is corrupted and considered of poor quality, inferior to the texts of Codex Alexandrinus, Papyrus 47, and even some minuscule manuscripts (for example, Minuscules 2053 and 2062).
https://en.wikipedia.org/wiki?curid=27916
St. John Fisher College St. John Fisher College is a private liberal arts college in Rochester, New York. It is named after John Fisher (1469–1535), an English Catholic bishop, cardinal, theologian, and martyr, who presided over the Diocese of Rochester, Kent, England, and is venerated by Roman Catholics as a saint. St. John Fisher College was founded as a men's college in 1948 by the Basilian Fathers with the aid of James E. Kearney, then the Bishop of the Roman Catholic Diocese of Rochester. The College became independent in 1968 and coeducational in 1971. Today, Fisher is an independent, liberal arts institution in the Catholic tradition of American higher education. It was listed as a census-designated place in 2020. Fisher is made up of five schools. It offers 35 undergraduate majors, as well as a variety of master's and doctoral programs. The School of Arts and Sciences is the largest school within St. John Fisher College. It offers degrees and minors in over 20 undergraduate academic disciplines. The School of Education is named after Ralph C. Wilson, Jr., the founding owner of the NFL's Buffalo Bills. It is accredited by the National Council for Accreditation of Teacher Education and offers undergraduate degrees in Inclusive Adolescence Education and Inclusive Childhood Education. It also offers a master's degree and initial certification program for those areas. Teachers already holding initial certification can earn graduate degrees and professional certification in Literacy Education (B-6 and 5–12), Special Education, and Educational Leadership, as well as an accelerated Doctor of Education in Executive Leadership. The School of Education is active in community outreach programs including a literacy center that provides tutoring and small group instruction in literacy for elementary through high school students. 
The School of Education works closely with local school districts including the Rochester City School District, which hosts a number of Professional Development Sites where practicing teachers and pre-service teachers work alongside education faculty to develop best practices. Fisher's business programs are accredited by the Association to Advance Collegiate Schools of Business (AACSB International). When this accreditation was gained in 2003, all business programs at the College were brought together to form its first professional school, the School of Business. The Wegmans School of Pharmacy is one of five pharmacy schools in New York State and is the first pharmacy school in the Greater Rochester community. It opened in fall 2006 and became fully accredited in May 2010. It awards a Doctor of Pharmacy degree to candidates who successfully complete four years of professional study. The school was made possible by a $5 million gift from the late Robert Wegman, who served for many years as president of Wegmans Food Markets. The Wegman School of Nursing is likewise named after Robert Wegman, who contributed $8 million to the college to create it. Fisher's nursing programs are fully accredited by the New York State Education Department and the Commission on Collegiate Nursing Education. The college also offers an online RN to BSN program, master's degrees in both Nursing and Mental Health Counseling, and a Doctor of Nursing Practice (DNP) degree. Nearly all first-year students receive some form of financial assistance. Need-based and merit-based scholarships, as well as grants, loans, and part-time employment, are available for eligible students. Two unique scholarships are awarded to incoming freshmen. The College is a founding member of the Empire 8 Athletic Association and competes with other full member schools. 
It competes at the NCAA Division III level, and is a member of the Eastern College Athletic Conference (ECAC), the Empire 8, the Liberty League (men's and women's rowing), and the United Volleyball Conference (men's). Its mascot is the cardinal. During the 2014–15 season, St. John Fisher College won Empire 8 championships for men's indoor track & field, men's basketball, women's basketball, men's outdoor track & field, men's golf, and women's lacrosse. Growney Stadium is home to Fisher's football, field hockey, soccer, and lacrosse teams. The stadium's all-weather playing field has lighting and a 2,500-seat grandstand. The Manning & Napier Varsity Gymnasium is home to the men's and women's basketball teams. Dugan Yard is Fisher's baseball field. Other outdoor facilities include the Polisseni Track and Field Complex, regulation-sized practice fields (which serve as the home rugby fields), and a softball diamond. In 2006, Fisher's football team finished the season with a 12–2 record overall and shared the Empire 8 Conference title. Fisher received an at-large bid into the NCAA Division III Tournament, in which they defeated Union College, Springfield College, and Rowan University to reach the national semifinals, where they lost 26–14 to Mount Union College, the defending national champions. In 2007, Fisher's men's basketball team won the Empire 8 Conference title for the 5th consecutive year and the 6th time in seven years. In 2006, Fisher advanced to the Elite Eight of the NCAA Men's Division III Basketball Championship Tournament. The women's basketball program was led for 34 seasons by Phil Kahler, who posted a career record of 797 wins (the most in Division III history) and 175 losses with a career winning percentage of .821. Under Kahler, the women's basketball program reached the NCAA Division III Championship Tournament 14 times and played in the NCAA Women's Division III Basketball Championship game in 1988 and 1990. 
Kahler retired shortly before the start of the 2008–09 basketball season and was replaced on the bench by Marianne O'Connor Ermi, his top assistant coach for 20 seasons. The women's basketball team is now led by Melissa Kuberka, who was hired as head coach before the 2017–18 season. Since 2000, St. John Fisher College has been home to the Buffalo Bills' NFL summer training camp. Many campus clubs and organizations are available to students. Four of the major organizations on campus include the Student Government Association, the Student Activities Board, the Residence Hall Association, and Commuter Council. Other clubs include music groups, language clubs, cultural organizations, student publications, and intramural sports. Many academic departments also sponsor clubs. Fisher students can contribute to the community through a variety of service organizations including Students With a Vision and Colleges Against Cancer. Numerous service projects occur each year including Project Community Convergence, Relay for Life, the Giant Read, and the Sweetheart Ball. The Annual Teddi Dance for Love is a 24-hour dance marathon started by Lou Buttino in 1983 that benefits Camp Good Days and Special Times, Inc. This project funds a trip to Florida for the children of Camp Good Days and has raised over $1 million since its inception. In 2015, St. John Fisher College received the Carnegie Community Engagement Classification from the Carnegie Foundation for the Advancement of Teaching and the New England Resource Center for Higher Education (NERCHE).
https://en.wikipedia.org/wiki?curid=27917
Scouting The Scout movement, also known as Scouting or the Scouts, is a voluntary non-political educational movement for young people open to all without distinction of gender, origin, race or creed, in accordance with the purpose, principles and method conceived by the founder, Lord Baden-Powell. The purpose of the Scout Movement is to contribute to the development of young people in achieving their full physical, intellectual, emotional, social and spiritual potentials as individuals, as responsible citizens and as members of their local, national and international communities. During the first half of the twentieth century, the movement grew to encompass three major age groups for boys (Cub Scout, Boy Scout, Rover Scout) and, in 1910, a new organization, Girl Guides, was created for girls (Brownie Guide, Girl Guide and Girl Scout, Ranger Guide). It is one of several worldwide youth organizations. In 1906 and 1907 Robert Baden-Powell, a lieutenant general in the British Army, wrote a book for boys about reconnaissance and scouting. This book, "Scouting for Boys", was based on his earlier books about military scouting, with the influence and support of Frederick Russell Burnham (Chief of Scouts in British Africa), Ernest Thompson Seton of the Woodcraft Indians, William Alexander Smith of the Boys' Brigade, and his publisher Pearson. In mid-1907 Baden-Powell held a camp on Brownsea Island in England to test ideas for his book. This camp and the publication of "Scouting for Boys" (London, 1908) are generally regarded as the start of the Scout movement. The movement employs the Scout method, a programme of informal education with an emphasis on practical outdoor activities, including camping, woodcraft, aquatics, hiking, backpacking, and sports. Another widely recognized movement characteristic is the Scout uniform, intended to hide all differences of social standing in a country and make for equality, with neckerchief and campaign hat or comparable headwear. 
Distinctive uniform insignia include the fleur-de-lis and the trefoil, as well as badges and other patches. The two largest umbrella organizations are the World Organization of the Scout Movement (WOSM), for boys-only and co-educational organizations, and the World Association of Girl Guides and Girl Scouts (WAGGGS), primarily for girls-only organizations but also accepting co-educational organizations. The year 2007 marked the centenary of Scouting worldwide, and member organizations planned events to celebrate the occasion. The trigger for the Scouting movement was the 1908 publication of "Scouting for Boys" written by Robert Baden-Powell. At Charterhouse, one of England's most famous public schools, Baden-Powell had an interest in the outdoors. Later, as a military officer, Baden-Powell was stationed in British India in the 1880s, where he took an interest in military scouting, and in 1884 he published "Reconnaissance and Scouting". In 1896, Baden-Powell was assigned to the Matabeleland region in Southern Rhodesia (now Zimbabwe) as Chief of Staff to Gen. Frederick Carrington during the Second Matabele War. There, in June 1896, he met and began a lifelong friendship with Frederick Russell Burnham, the American-born Chief of Scouts for the British Army in Africa. This was a formative experience for Baden-Powell not only because he had the time of his life commanding reconnaissance missions into enemy territory, but because many of his later Boy Scout ideas originated here. During their joint scouting patrols into the Matobo Hills, Burnham augmented Baden-Powell's woodcraft skills, inspiring him and sowing seeds for both the programme and for the code of honour later published in "Scouting for Boys". Practised by frontiersmen of the American Old West and indigenous peoples of the Americas, woodcraft was generally little known to the British Army but well known to the American scout Burnham. 
These skills eventually formed the basis of what is now called "scoutcraft", the fundamentals of Scouting. Both men recognised that wars in Africa were changing markedly and the British Army needed to adapt; so during their joint scouting missions, Baden-Powell and Burnham discussed the concept of a broad training programme in woodcraft for young men, rich in exploration, tracking, fieldcraft, and self-reliance. During this time in the Matobo Hills Baden-Powell first started to wear his signature campaign hat like the one worn by Burnham, and acquired his kudu horn, the Ndebele war instrument he later used every morning at Brownsea Island to wake the first Boy Scouts and to call them together in training courses. Three years later, in South Africa during the Second Boer War, Baden-Powell was besieged in the small town of Mafikeng (Mafeking) by a much larger Boer army. The Mafeking Cadet Corps was a group of youths that supported the troops by carrying messages, which freed the men for military duties and kept the boys occupied during the long siege. The Cadet Corps performed well, helping in the defence of the town (1899–1900), and were one of the many factors that inspired Baden-Powell to form the Scouting movement. Each member received a badge that illustrated a combined compass point and spearhead. The badge's logo was similar to the fleur-de-lis shaped arrowhead that Scouting later adopted as its international symbol. The Siege of Mafeking was the first time since his own childhood that Baden-Powell, a regular serving soldier, had come into the same orbit as "civilians"—women and children—and discovered for himself the usefulness of well-trained boys. In the United Kingdom, the public, through newspapers, followed Baden-Powell's struggle to hold Mafeking, and when the siege was broken he had become a national hero. 
This rise to fame fuelled the sales of the small instruction book he had written in 1899 about military scouting and wilderness survival, "Aids to Scouting," which owed much to what he had learned from discussions with Burnham. On his return to England, Baden-Powell noticed that boys showed considerable interest in "Aids to Scouting", which was unexpectedly used by teachers and youth organizations as their first Scouting handbook. He was urged to rewrite this book for boys, especially during an inspection of the Boys' Brigade, a large youth movement drilled with military precision. Baden-Powell thought such drill would not be attractive to boys and suggested that the Boys' Brigade could grow much larger were Scouting to be used. He studied other schemes, parts of which he used for Scouting. In July 1906 Ernest Thompson Seton sent Baden-Powell a copy of his 1902 book "The Birchbark Roll of the Woodcraft Indians". Seton, a British-born Canadian-American living in the United States, met Baden-Powell in October 1906, and they shared ideas about youth training programs. In 1907 Baden-Powell wrote a draft called "Boy Patrols". In the same year, to test his ideas, he gathered 21 boys of mixed social backgrounds (from boys' schools in the London area and a section of boys from the Poole, Parkstone, Hamworthy, Bournemouth, and Winton Boys' Brigade units) and held a week-long camp in August on Brownsea Island in Poole Harbour, Dorset. His organizational method, now known as the Patrol System and a key part of Scouting training, allowed the boys to organize themselves into small groups with an elected patrol leader. In late 1907, Baden-Powell went on an extensive speaking tour arranged by his publisher, Arthur Pearson, to promote his forthcoming book, "Scouting for Boys". 
He had not simply rewritten his "Aids to Scouting"; he omitted the military aspects and transferred the techniques (mainly survival skills) to non-military heroes: backwoodsmen, explorers (and later on, sailors and airmen). He also added innovative educational principles (the Scout method) by which he extended the attractive game to a personal mental education. At the beginning of 1908, Baden-Powell published "Scouting for Boys" in six fortnightly parts, setting out activities and programmes which existing youth organisations could use. The reaction was phenomenal, and quite unexpected. In a very short time, Scout Patrols were created up and down the country, all following the principles of Baden-Powell's book. In 1909, the first Scout Rally was held at Crystal Palace in London, to which 11,000 Scouts came—and some girls dressed as Scouts and calling themselves "Girl Scouts". Baden-Powell retired from the Army and, in 1910, he formed The Boy Scouts Association, and later The Girl Guides. By the time of The Boy Scouts Association's first census in 1910, it had over 100,000 Scouts. "Scouting for Boys" was published in England later in 1908 in book form. The book is now the fourth-bestselling title of all time, and was the basis for the later American version of the "Boy Scout Handbook". At the time, Baden-Powell intended that the scheme would be used by established organizations, in particular the Boys' Brigade, founded by William A. Smith. However, because of his personal popularity and the adventurous outdoor games he wrote about, boys spontaneously formed Scout patrols and flooded Baden-Powell with requests for assistance. He encouraged them, and the Scouting movement developed momentum. In 1910 Baden-Powell formed The Boy Scouts Association in the United Kingdom. As the movement grew, Sea Scouts, Air Scouts, and other specialized units were added to the program. 
In his original book on boy scouting, General Baden-Powell set out the Scout law for boys and introduced the Scout promise, as follows: "Before he becomes a scout, a boy must take the scout's oath, thus: On my honour I promise that---" While taking this oath the scout stands, holding his right hand raised level with his shoulder, palm to the front, thumb resting on the nail of the little finger and the other three fingers upright, pointing upwards; this is the scout's salute and secret sign. The Boy Scout Movement swiftly established itself throughout the British Empire soon after the publication of "Scouting for Boys". By 1908, Scouting was established in Gibraltar, Malta, Canada, Australia, New Zealand, and South Africa. In 1909 Chile was the first country outside the British dominions to have a Scouting organization recognized by Baden-Powell. The first Scout rally, held in 1909 at the Crystal Palace in London, attracted 10,000 boys and a number of girls. By 1910, Argentina, Denmark, Finland, France, Germany, Greece, India, Malaya, Mexico, the Netherlands, Norway, Russia, Sweden, and the United States had Boy Scouts. The program initially focused on boys aged 11 to 18, but as the movement grew the need became apparent for leader training and programs for younger boys, older boys, and girls. The first Cub Scout and Rover Scout programs were in place by the late 1910s. They operated independently until they obtained official recognition from their home country's Scouting organization. In the United States, attempts at Cub programs began as early as 1911, but official recognition was not obtained until 1930. Girls wanted to become part of the movement almost as soon as it began. Baden-Powell and his sister Agnes Baden-Powell introduced the Girl Guides in 1910, a parallel movement for girls, sometimes named Girl Scouts. 
Agnes Baden-Powell became the first president of the Girl Guides when it was formed in 1910, at the request of the girls who attended the Crystal Palace Rally. In 1914, she started Rosebuds—later renamed Brownies—for younger girls. She stepped down as president of the Girl Guides in 1920 in favor of Robert's wife Olave Baden-Powell, who was named Chief Guide (for England) in 1918 and World Chief Guide in 1930. At that time, girls were expected to remain separate from boys because of societal standards, though co-educational youth groups did exist. By the 1990s, two-thirds of the Scout organizations belonging to WOSM had become co-educational. Baden-Powell could not single-handedly advise all groups who requested his assistance. Early Scoutmaster training camps were held in London and Yorkshire in 1910 and 1911. Baden-Powell wanted the training to be as practical as possible to encourage other adults to take leadership roles, so the Wood Badge course was developed to recognize adult leadership training. The development of the training was delayed by World War I, and the first Wood Badge course was not held until 1919. Wood Badge is used by Boy Scout associations and combined Boy Scout and Girl Guide associations in many countries. Gilwell Park near London was purchased in 1919 on behalf of The Scout Association as an adult training site and Scouting campsite. Baden-Powell wrote a book, "Aids to Scoutmastership", to help Scouting Leaders, and wrote other handbooks for the use of the new Scouting sections, such as Cub Scouts and Girl Guides. One of these was "Rovering to Success", written for Rover Scouts in 1922. A wide range of leader training exists in 2007, from basic to program-specific, including the Wood Badge training. Important elements of traditional Scouting have their origins in Baden-Powell's experiences in education and military training. 
He was a 50-year-old retired army general when he founded Scouting, and his revolutionary ideas inspired thousands of young people, from all parts of society, to get involved in activities that most had never contemplated. Comparable organizations in the English-speaking world are the Boys' Brigade and the non-militaristic Woodcraft Folk; however, they never matched the development and growth of Scouting. Aspects of Scouting practice have been criticized as too militaristic. Local influences have also been a strong part of Scouting. By adopting and modifying local ideologies, Scouting has been able to find acceptance in a wide variety of cultures. In the United States, Scouting uses images drawn from the U.S. frontier experience. This includes not only its selection of animal badges for Cub Scouts, but the underlying assumption that American native peoples are more closely connected with nature and therefore have special wilderness survival skills which can be used as part of the training program. By contrast, British Scouting makes use of imagery drawn from the Indian subcontinent, because that region was a significant focus in the early years of Scouting. Baden-Powell's personal experiences in India led him to adopt Rudyard Kipling's "The Jungle Book" as a major influence for the Cub Scouts; for example, the name used for the Cub Scout leader, Akela (whose name was also appropriated for the Webelos), is that of the leader of the wolf pack in the book. The name "Scouting" seems to have been inspired by the important and romantic role played by military scouts performing reconnaissance in the wars of the time. In fact, Baden-Powell wrote his original military training book, "Aids To Scouting", because he saw the need for the improved training of British military-enlisted scouts, particularly in initiative, self-reliance, and observational skills. The book's popularity with young boys surprised him. 
As he adapted the book as "Scouting for Boys", it seems natural that the movement adopted the names "Scouting" and "Boy Scouts." "Duty to God" is a principle of Scouting, though it is applied differently in various countries. The Boy Scouts of America (BSA) take a strong position, excluding atheists. The Scout Association in the United Kingdom permits variations to its Promise in order to accommodate different religious obligations. In the predominantly atheist Czech Republic, for example, the Scout oath does not mention God at all, and the organization is strictly irreligious. In 2014, United Kingdom Scouts were given the choice of making a variation of the Promise that replaces "duty to God" with "uphold our Scout values". Scouts Canada defines Duty to God broadly in terms of "adherence to spiritual principles" and leaves it to the individual member or leader whether they can follow a Scout Promise that includes Duty to God. Worldwide, roughly one in three Scouts are Muslim. Scouting is taught using the Scout method, which incorporates an informal educational system that emphasizes practical activities in the outdoors. Programs exist for Scouts ranging in age from 6 to 25 (though age limits vary slightly by country), and program specifics target Scouts in a manner appropriate to their age. The Scout method is the principal method by which the Scouting organizations, boy and girl, operate their units. WOSM describes Scouting as "a voluntary nonpolitical educational movement for young people open to all without distinction of origin, race or creed, in accordance with the purpose, principles and method conceived by the Founder". It is the goal of Scouting "to contribute to the development of young people in achieving their full physical, intellectual, social and spiritual potentials as individuals, as responsible citizens and as members of their local, national and international communities." 
The principles of Scouting describe a code of behavior for all members, and characterize the movement. The Scout method is a progressive system designed to achieve these goals, comprising seven elements: law and promise, learning by doing, team system, symbolic framework, personal progression, nature, and adult support. While community service is a major element of both the WOSM and WAGGGS programs, WAGGGS includes it as an extra element of the Scout method: service in the community. The Scout Law and Promise embody the joint values of the Scouting movement worldwide, and bind all Scouting associations together. The emphasis on "learning by doing" provides experiences and hands-on orientation as a practical method of learning and building self-confidence. Small groups build unity, camaraderie, and a close-knit fraternal atmosphere. These experiences, along with an emphasis on trustworthiness and personal honor, help to develop responsibility, character, self-reliance, self-confidence, reliability, and readiness, which eventually lead to collaboration and leadership. A program with a variety of progressive and attractive activities expands a Scout's horizon and bonds the Scout even more to the group. Activities and games provide an enjoyable way to develop skills such as dexterity. In an outdoor setting, they also provide contact with the natural environment. Since the birth of Scouting, Scouts worldwide have taken a Scout Promise to live up to ideals of the movement, and subscribe to the Scout Law. The form of the promise and laws have varied slightly by country and over time, but must fulfil the requirements of the WOSM to qualify a National Scout Association for membership. The Scout Motto, 'Be Prepared', has been used in various languages by millions of Scouts since 1907. Less well-known is the Scout Slogan, 'Do a good turn daily'. 
Common ways to implement the Scout method include having Scouts spending time together in small groups with shared experiences, rituals, and activities, and emphasizing 'good citizenship' and decision-making by young people in an age-appropriate manner. Weekly meetings often take place in local centres known as Scout dens. Cultivating a love and appreciation of the outdoors and outdoor activities is a key element. Primary activities include camping, woodcraft, aquatics, hiking, backpacking, and sports. Camping is most often arranged at the unit level, such as one Scout troop, but there are periodic camps (known in the US as "camporees") and "jamborees". Camps occur a few times a year and may involve several groups from a local area or region camping together for a weekend. The events usually have a theme, such as pioneering. World Scout Moots are gatherings, originally for Rover Scouts, but mainly focused on Scout Leaders. Jamborees are large national or international events held every four years, during which thousands of Scouts camp together for one or two weeks. Activities at these events will include games, Scoutcraft competitions, badge, pin or patch trading, aquatics, woodcarving, archery and activities related to the theme of the event. In some countries a highlight of the year for Scouts is spending at least a week in the summer engaging in an outdoor activity. This can be a camping, hiking, sailing, or other trip with the unit, or a summer camp with broader participation (at the council, state, or provincial level). Scouts attending a summer camp work on Scout badges, advancement, and perfecting Scoutcraft skills. Summer camps can operate specialty programs for older Scouts, such as sailing, backpacking, canoeing and whitewater, caving, and fishing. At an international level Scouting perceives one of its roles as the promotion of international harmony and peace. 
Various initiatives are in train towards achieving this aim, including the development of activities that benefit the wider community, challenge prejudice and encourage tolerance of diversity. Such programs include co-operation with non-Scouting organisations including various NGOs, the United Nations and religious institutions as set out in "The Marrakech Charter". The Scout uniform is a widely recognized characteristic of Scouting. In the words of Baden-Powell at the 1937 World Jamboree, it "hides all differences of social standing in a country and makes for equality; but, more important still, it covers differences of country and race and creed, and makes all feel that they are members with one another of the one great brotherhood". The original uniform, still widely recognized, consisted of a khaki button-up shirt, shorts, and a broad-brimmed campaign hat. Baden-Powell also wore shorts, because he believed that being dressed like a Scout helped to reduce the age-imposed distance between adult and youth. Uniform shirts are now frequently blue, orange, red or green, and shorts are frequently replaced by long trousers, either all year round or only in cold weather. While designed for smartness and equality, the Scout uniform is also practical. Shirts traditionally have thick seams to make them ideal for use in makeshift stretchers—Scouts were trained to use them in this way with their staves, a traditional but deprecated item. The leather straps and toggles of the campaign hats or Leaders' Wood Badges could be used as emergency tourniquets, or anywhere that string was needed in a hurry. Neckerchiefs were chosen as they could easily be used as a sling or triangular bandage by a Scout in need. Scouts were encouraged to use their garters for shock cord where necessary. Distinctive insignia for all Scout uniforms, recognized and worn the world over, include the Wood Badge and the World Membership Badge. 
Scouting has two internationally known symbols: the trefoil is used by members of the World Association of Girl Guides and Girl Scouts (WAGGGS) and the fleur-de-lis by member organizations of the WOSM and most other Scouting organizations. The swastika was used as an early symbol by the Boy Scouts Association of the United Kingdom and others. Its earliest use in Scouting was on the Thanks Badge introduced in 1911. Lord Baden-Powell's 1922 design for the Medal of Merit added a swastika to the Scout Arrowhead to symbolize good luck for the recipient. In 1934, Scouters requested a change to the design because of the connection of the swastika with its more recent use by the German National Socialist Workers (Nazi) Party. A new Medal of Merit was issued by the Boy Scouts Association in 1935. Scouting and Guiding movements are generally divided into sections by age or school grade, allowing activities to be tailored to the maturity of the group's members. These age divisions have varied over time as they adapt to the local culture and environment. Scouting was originally developed for adolescents—youths between the ages of 11 and 17. In most member organizations, this age group composes the Scout or Guide section. Programs were developed to meet the needs of young children (generally ages 6 to 10) and young adults (originally 18 and older, and later up to 25). Scouts and Guides were later split into "junior" and "senior" sections in many member organizations, and some organizations dropped the young adults' section. The exact age ranges for programs vary by country and association. The national programs for younger children include Tiger Cubs, Cub Scouts, Brownies, Daisies, Rainbow Guides, Beaver Scouts, Joey Scouts, Keas, and Teddies. Programs for post-adolescents and young adults include the Senior Section, Rover Scouts, Senior Scouts, Venture Scouts, Explorer Scouts, and the Scout Network. Many organizations also have a program for members with special needs. 
This is usually known as Extension Scouting, but sometimes has other names, such as Scoutlink. The Scout Method has been adapted to specific programs such as Air Scouts, Sea Scouts, Rider Guides and Scoutingbands. In many countries, Scouting is organized into neighborhood Scout Groups, or Districts, which contain one or more sections. Under the umbrella of the Scout Group, sections are divided according to age, each having their own terminology and leadership structure. Adults interested in Scouting or Guiding, including former Scouts and Guides, often join organizations such as the International Scout and Guide Fellowship. In the United States and the Philippines, university students might join the co-ed service fraternity Alpha Phi Omega. In the United Kingdom, university students might join the Student Scout and Guide Organisation, and after graduation, the Scout and Guide Graduate Association. Scout units are usually operated by adult volunteers, such as parents and carers, former Scouts, students, and community leaders, including teachers and religious leaders. Scout Leadership positions are often divided into 'uniform' and 'lay' positions. Uniformed leaders have received formal training, such as the Wood Badge, and have received a warrant for a rank within the organization. Lay members commonly hold part-time roles such as meeting helpers, committee members and advisors, though there are a small number of full-time lay professionals. A unit has uniformed positions—such as the Scoutmaster and assistants—whose titles vary among countries. In some countries, units are supported by lay members, who range from acting as meeting helpers to being members of the unit's committee. In some Scout associations, the committee members may also wear uniforms and be registered Scout leaders. Above the unit are further uniformed positions, called Commissioners, at levels such as district, county, council or province, depending on the structure of the national organization. 
Commissioners work with lay teams and professionals. Training teams and related functions are often formed at these levels. In the UK and in other countries, the national Scout organization appoints the Chief Scout, the most senior uniformed member. Following its foundation in the United Kingdom, Scouting spread around the globe. The first association outside the British Empire was founded in Chile on May 21, 1909, after a visit by Baden-Powell. In most countries of the world, there is now at least one Scouting (or Guiding) organization. Each is independent, but international cooperation continues to be seen as part of the Scout Movement. In 1922 the WOSM started as the governing body on policy for the national Scouting organizations (then male only). In addition to being the governing policy body, it organizes the World Scout Jamboree every four years. In 1928 the WAGGGS started as the equivalent to WOSM for the then female-only national Scouting/Guiding organizations. It is also responsible for its four international centres: Our Cabaña in Mexico, Our Chalet in Switzerland, Pax Lodge in the United Kingdom, and Sangam in India. Today at the international level, the two largest umbrella organizations are the WOSM and the WAGGGS. There have been different approaches to co-educational Scouting. Some countries have maintained separate Scouting organizations for boys and girls. In other countries, especially within Europe, Scouting and Guiding have merged, and there is a single organization for boys and girls, which is a member of both the WOSM and the WAGGGS. The United States-based Boy Scouts of America permitted girls to join in early 2018. In others, such as Australia and the United Kingdom, the national Scout association has opted to admit both boys and girls, but is only a member of the WOSM, while the national Guide association has remained as a separate movement and member of the WAGGGS. 
In some countries like Greece, Slovenia and Spain there are separate associations of Scouts (members of WOSM) and Guides (members of WAGGGS), both admitting boys and girls. The Scout Association in the United Kingdom has been co-educational at all levels since 1991; this was optional for groups until the year 2000, when new sections were required to accept girls. The Scout Association transitioned all Scout groups and sections across the UK to become co-educational by January 2007, the year of Scouting's centenary. The traditional Baden-Powell Scouts' Association has been co-educational since its formation in 1970. In the United States, the Cub Scout and Boy Scout programs of the BSA were for boys only until 2018; the organization has since changed its policies and now invites girls to join, as local packs organize all-girl dens (same uniform, same book, same activities). For youths age 14 and older, Venturing has been co-educational since the 1930s. The Girl Scouts of the USA (GSUSA) is an independent organization for girls and young women only. Adult leadership positions in the BSA and GSUSA are open to both men and women. In 2006, of the 155 WOSM member National Scout Organizations (representing 155 countries), 122 belonged only to WOSM, and 34 belonged to both WOSM and WAGGGS. Of the 122 which belonged only to WOSM, 95 were open to boys and girls in some or all program sections, and 20 were only for boys. All 34 that belonged to both WOSM and WAGGGS were open to boys and girls. WAGGGS had 144 Member Organizations in 2007 and 110 of them belonged only to WAGGGS. Of these 110, 17 were coeducational and 93 admitted only girls. As of 2019, there are over 50 million registered Scouts and, as of 2006, 10 million registered Guides around the world, from 216 countries and territories. 
Fifteen years passed between the first publication of "Scouting for Boys" and the creation of the current largest supranational Scout organization, WOSM, and millions of copies had been sold in dozens of languages. By that point, Scouting was the purview of the world's youth, and several Scout associations had already formed in many countries. Alternative groups have formed since the original formation of the Scouting "Boy Patrols". They are often the result of groups or individuals who maintain that the WOSM and WAGGGS are more political and less youth-based than envisioned by Lord Baden-Powell. They believe that Scouting in general has moved away from its original intent because of political machinations that happen to longstanding organizations, and want to return to the earliest, simplest methods. Others do not want to follow all the original ideals of Scouting but still desire to participate in Scout-like activities. In 2008, there were at least 539 independent Scouting organizations around the world, 367 of which were members of either WAGGGS or WOSM. About half of the remaining 172 Scouting organizations are only locally or nationally oriented. About 90 national or regional Scouting associations have created their own international Scouting organizations, which are served by five international umbrella bodies. Some Scout-like organizations are also served by international organizations, many with religious elements. After the inception of Scouting in the early 1900s, some nations' programs have taken part in social movements such as the nationalist resistance movements in India. Although Scouting was introduced to Africa by British officials as a way to strengthen their rule, the values they based Scouting on helped to challenge the legitimacy of British imperialism. Likewise, African Scouts used the Scout Law's principle that a Scout is a brother to all other Scouts to collectively claim full imperial citizenship. 
A study has found a strong link between participating in Scouting and Guiding as a young person, and having significantly better mental health. The data, from almost 10,000 individuals, came from a lifelong UK-wide study of people born in November 1958, known as the National Child Development Study. In the United Kingdom, The Scout Association had been criticised for its insistence on the use of a religious promise, leading the organization to introduce an alternative in January 2014 for those not wanting to mention a god in their promise. This change made the organisation entirely non-discriminatory on the grounds of race, gender, sexuality, and religion (or lack thereof). The Boy Scouts of America was the focus of criticism in the United States for not allowing the open participation of homosexuals until removing the prohibition in 2013. Authoritarian communist regimes such as the Soviet Union in 1920 and fascist regimes like Nazi Germany in 1934 often either absorbed the Scout movement into government-controlled organizations, or banned Scouting entirely. Scouting has been a facet of culture during most of the twentieth century in many countries; numerous films and artwork focus on the subject. Movie critic Roger Ebert mentioned the scene in which the young Boy Scout, Indiana Jones, discovers the Cross of Coronado in the movie "Indiana Jones and the Last Crusade", as "when he discovers his life mission". The works of painters Ernest Stafford Carlos, Norman Rockwell, Pierre Joubert and Joseph Csatari and the 1966 film "Follow Me, Boys!" are prime examples of this ethos. Scouting is often dealt with in a humorous manner, as in the 1989 film "Troop Beverly Hills", the 2005 film "Down and Derby", and the film "Scout Camp". In 1980, Scottish singer and songwriter Gerry Rafferty recorded "I was a Boy Scout" as part of his "Snakes and Ladders" album.
https://en.wikipedia.org/wiki?curid=27918
Sociobiology Sociobiology is a field of biology that aims to examine and explain social behavior in terms of evolution. It draws from disciplines including psychology, ethology, anthropology, evolution, zoology, archaeology, and population genetics. Within the study of human societies, sociobiology is closely allied to evolutionary anthropology, human behavioral ecology and evolutionary psychology. Sociobiology investigates social behaviors such as mating patterns, territorial fights, pack hunting, and the hive society of social insects. It argues that just as selection pressure led to animals evolving useful ways of interacting with the natural environment, so also it led to the genetic evolution of advantageous social behavior. While the term "sociobiology" originated at least as early as the 1940s, the concept did not gain major recognition until the publication of E. O. Wilson's book "Sociobiology: The New Synthesis" in 1975. The new field quickly became the subject of controversy. Critics, led by Richard Lewontin and Stephen Jay Gould, argued that genes played a role in human behavior, but that traits such as aggressiveness could be explained by social environment rather than by biology. Sociobiologists responded by pointing to the complex relationship between nature and nurture. E. O. Wilson defined sociobiology as "the extension of population biology and evolutionary theory to social organization". Sociobiology is based on the premise that some behaviors (social and individual) are at least partly inherited and can be affected by natural selection. It begins with the idea that behaviors have evolved over time, similar to the way that physical traits are thought to have evolved. It predicts that animals will act in ways that have proven to be evolutionarily successful over time. This can, among other things, result in the formation of complex social processes conducive to evolutionary fitness. The discipline seeks to explain behavior as a product of natural selection. 
Behavior is therefore seen as an effort to preserve one's genes in the population. Inherent in sociobiological reasoning is the idea that certain genes or gene combinations that influence particular behavioral traits can be inherited from generation to generation. For example, newly dominant male lions often kill cubs in the pride that they did not sire. This behavior is adaptive because killing the cubs eliminates competition for the new male's own offspring and causes the nursing females to come into heat faster, thus allowing more of his genes to enter the population. Sociobiologists would view this instinctual cub-killing behavior as being inherited through the genes of successfully reproducing male lions, whereas non-killing behavior may have died out as those lions were less successful in reproducing. The philosopher of biology Daniel Dennett suggested that the political philosopher Thomas Hobbes was the first sociobiologist, arguing that in his 1651 book "Leviathan" Hobbes had explained the origins of morals in human society from an amoral sociobiological perspective. The geneticist of animal behavior John Paul Scott coined the word "sociobiology" at a 1948 conference on genetics and social behaviour, which called for a conjoint development of field and laboratory studies in animal behavior research. Through John Paul Scott's organizational efforts, a "Section of Animal Behavior and Sociobiology" of the ESA was created in 1956, which became a Division of Animal Behavior of the American Society of Zoology in 1958. In 1956, E. O. Wilson came into contact with this emerging sociobiology through his PhD student Stuart A. Altmann, who had been in close relation with the participants in the 1948 conference. Altmann developed his own brand of sociobiology to study the social behavior of rhesus macaques, using statistics, and was hired as a "sociobiologist" at the Yerkes Regional Primate Research Center in 1965. 
Wilson's sociobiology is different from John Paul Scott's or Altmann's, insofar as he drew on mathematical models of social behavior centered on the maximisation of genetic fitness by W. D. Hamilton, Robert Trivers, John Maynard Smith, and George R. Price. The three sociobiologies of Scott, Altmann and Wilson have in common that they place naturalist studies at the core of research on animal social behavior and forge alliances with emerging research methodologies, at a time when "biology in the field" risked being made to look old-fashioned by "modern" practices of science (laboratory studies, mathematical biology, molecular biology). Once a specialist term, "sociobiology" became widely known in 1975 when Wilson published his book "Sociobiology: The New Synthesis", which sparked an intense controversy. Since then "sociobiology" has largely been equated with Wilson's vision. The book pioneered and popularized the attempt to explain the evolutionary mechanics behind social behaviors such as altruism, aggression, and nurturance, primarily in ants (Wilson's own research specialty) and other Hymenoptera, but also in other animals. However, the influence of evolution on behavior has been of interest to biologists and philosophers since soon after the discovery of evolution itself. Peter Kropotkin's "Mutual Aid: A Factor of Evolution", written in the early 1890s, is a popular example. The final chapter of the book is devoted to sociobiological explanations of human behavior, and Wilson later wrote a Pulitzer Prize winning book, "On Human Nature", that addressed human behavior specifically. Edward H. Hagen writes in "The Handbook of Evolutionary Psychology" that sociobiology is, despite the public controversy regarding the applications to humans, "one of the scientific triumphs of the twentieth century." 
"Sociobiology is now part of the core research and curriculum of virtually all biology departments, and it is a foundation of the work of almost all field biologists." Sociobiological research on nonhuman organisms has increased dramatically and continuously in the world's top scientific journals such as "Nature" and "Science". The more general term behavioral ecology is commonly substituted for the term sociobiology in order to avoid the public controversy. Sociobiologists maintain that human behavior, as well as nonhuman animal behavior, can be partly explained as the outcome of natural selection. They contend that in order to fully understand behavior, it must be analyzed in terms of evolutionary considerations. Natural selection is fundamental to evolutionary theory. Variants of hereditary traits which increase an organism's ability to survive and reproduce will be more greatly represented in subsequent generations, i.e., they will be "selected for". Thus, inherited behavioral mechanisms that allowed an organism a greater chance of surviving and/or reproducing in the past are more likely to survive in present organisms. That inherited adaptive behaviors are present in nonhuman animal species has been demonstrated repeatedly by biologists, and it has become a foundation of evolutionary biology. However, there is continued resistance by some researchers over the application of evolutionary models to humans, particularly from within the social sciences, where culture has long been assumed to be the predominant driver of behavior. Sociobiology is based upon two fundamental premises: that certain behavioral traits are inherited, and that inherited behavioral traits have been honed by natural selection. Sociobiology uses Nikolaas Tinbergen's four categories of questions and explanations of animal behavior. Two categories are at the species level; two, at the individual level. 
The species-level categories (often called "ultimate explanations") are the function (i.e., adaptation) that a behavior serves and the evolutionary process (i.e., phylogeny) that produced it. The individual-level categories (often called "proximate explanations") are the development of the individual (i.e., ontogeny) and the underlying mechanism (i.e., physiology). Sociobiologists are interested in how behavior can be explained logically as a result of selective pressures in the history of a species. Thus, they are often interested in instinctive, or intuitive behavior, and in explaining the similarities, rather than the differences, between cultures. For example, mothers within many species of mammals – including humans – are very protective of their offspring. Sociobiologists reason that this protective behavior likely evolved over time because it helped the offspring of the individuals which had the characteristic to survive. This parental protection would increase in frequency in the population. The social behavior is believed to have evolved in a fashion similar to other types of nonbehavioral adaptations, such as a coat of fur, or the sense of smell. Individual genetic advantage fails to explain certain social behaviors as a result of gene-centred selection. E.O. Wilson argued that evolution may also act upon groups. The mechanisms responsible for group selection employ paradigms and population statistics borrowed from evolutionary game theory. Altruism is defined as "a concern for the welfare of others". If altruism is genetically determined, then altruistic individuals must reproduce their own altruistic genetic traits for altruism to survive, but when altruists lavish their resources on non-altruists at the expense of their own kind, the altruists tend to die out and the others tend to increase. An extreme example is a soldier losing his life trying to help a fellow soldier. This example raises the question of how altruistic genes can be passed on if this soldier dies without having any children. Within sociobiology, a social behavior is first explained as a sociobiological hypothesis by finding an evolutionarily stable strategy that matches the observed behavior. 
Stability of a strategy can be difficult to prove, but usually, it will predict gene frequencies. The hypothesis can be supported by establishing a correlation between the gene frequencies predicted by the strategy, and those expressed in a population. Altruism between social insects and littermates has been explained in such a way. Altruistic behavior, behavior that increases the reproductive fitness of others at the apparent expense of the altruist, in some animals has been correlated to the degree of genome shared between altruistic individuals. A quantitative description of infanticide by male harem-mating animals when the alpha male is displaced as well as rodent female infanticide and fetal resorption are active areas of study. In general, females with more bearing opportunities may value offspring less, and may also arrange bearing opportunities to maximize the food and protection from mates. An important concept in sociobiology is that temperament traits exist in an ecological balance. Just as an expansion of a sheep population might encourage the expansion of a wolf population, an expansion of altruistic traits within a gene pool may also encourage increasing numbers of individuals with dependent traits. Studies of human behavior genetics have generally found behavioral traits such as creativity, extroversion, aggressiveness, and IQ have high heritability. The researchers who carry out those studies are careful to point out that heritability does not constrain the influence that environmental or cultural factors may have on those traits. Criminality is actively under study, but extremely controversial. There are arguments that in some environments criminal behavior might be adaptive. The novelist Elias Canetti also has noted applications of sociobiological theory to cultural practices such as slavery and autocracy. Genetic mouse mutants illustrate the power that genes exert on behaviour. 
For example, the transcription factor FEV (aka Pet1), through its role in maintaining the serotonergic system in the brain, is required for normal aggressive and anxiety-like behavior. Thus, when FEV is genetically deleted from the mouse genome, male mice will instantly attack other males, whereas their wild-type counterparts take significantly longer to initiate violent behaviour. In addition, FEV has been shown to be required for correct maternal behaviour in mice, such that offspring of mothers without the FEV factor do not survive unless cross-fostered to other wild-type female mice. A genetic basis for instinctive behavioural traits among non-human species, such as in the above example, is commonly accepted among many biologists; however, attempting to use a genetic basis to explain complex behaviours in human societies has remained extremely controversial. Steven Pinker argues that critics have been overly swayed by politics and a fear of biological determinism, accusing among others Stephen Jay Gould and Richard Lewontin of being "radical scientists" whose stance on human nature is influenced by politics rather than science. Lewontin, Steven Rose and Leon Kamin, who drew a distinction between the politics and history of an idea and its scientific validity, argue instead that sociobiology fails on scientific grounds. Gould grouped sociobiology with eugenics, criticizing both in his book "The Mismeasure of Man". Noam Chomsky has expressed views on sociobiology on several occasions. During a 1976 meeting of the Sociobiology Study Group, as reported by Ullica Segerstråle, Chomsky argued for the importance of a sociobiologically informed notion of human nature. Chomsky argued that human beings are biological organisms and ought to be studied as such; his criticism of the "blank slate" doctrine in the social sciences, in his 1975 "Reflections on Language", would inspire a great deal of Steven Pinker's and others' work in evolutionary psychology. 
Chomsky further hinted at the possible reconciliation of his anarchist political views and sociobiology in a discussion of Peter Kropotkin's "Mutual Aid: A Factor of Evolution", which focused more on altruism than aggression, suggesting that anarchist societies were feasible because of an innate human tendency to cooperate. Wilson has claimed that he had never meant to imply what "ought" to be, only what "is" the case. However, some critics have argued that the language of sociobiology readily slips from "is" to "ought", an instance of the naturalistic fallacy. Pinker has argued that opposition to stances considered anti-social, such as ethnic nepotism, is based on moral assumptions, meaning that such opposition is not falsifiable by scientific advances. The history of this debate, and of others related to it, is covered in detail elsewhere.
https://en.wikipedia.org/wiki?curid=27919
♯P In computational complexity theory, the complexity class #P (pronounced "sharp P" or, sometimes, "number P" or "hash P") is the set of the counting problems associated with the decision problems in the set NP. More formally, #P is the class of function problems of the form "compute "f"("x")", where "f" is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, it is not a class of decision problems but a class of function problems. An NP decision problem is often of the form "Are there any solutions that satisfy certain constraints?" For example: "Are there any subsets of a list of integers that add up to zero?" The corresponding #P function problems ask "how many" rather than "are there any". For example: "How many subsets of a list of integers add up to zero?" Clearly, a #P problem must be at least as hard as the corresponding NP problem. If it's easy to count answers, then it must be easy to tell whether there are any answers—just count them and see whether the count is greater than zero. One consequence of Toda's theorem is that a polynomial-time machine with a #P oracle (P#P) can solve all problems in PH, the entire polynomial hierarchy. In fact, the polynomial-time machine only needs to make one #P query to solve any problem in PH. This is an indication of the extreme difficulty of solving #P-complete problems exactly. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For more information on this, see #P-complete. The closest decision problem class to #P is PP, which asks whether a majority (more than half) of the computation paths accept. This finds the most significant bit in the #P problem answer. The decision problem class ⊕P (pronounced "Parity-P") instead asks for the least significant bit of the #P answer. The complexity class #P was first defined by Leslie Valiant in a 1979 article on the computation of the permanent of a square matrix, in which he proved that permanent is #P-complete. 
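To make the decision/counting contrast concrete, here is a minimal brute-force sketch (illustrative only, not from the article) of #SAT, the counting analogue of Boolean satisfiability; the NP decision problem merely asks whether the returned count is nonzero:

```python
from itertools import product

def count_satisfying(clauses, n_vars):
    """Count the satisfying assignments of a CNF formula (#SAT).
    Each clause is a list of nonzero ints: +i means variable i, -i its negation."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        # An assignment satisfies the formula if every clause has a true literal.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 OR x2) AND (NOT x1 OR x3): SAT asks "is the count nonzero?";
# the #P version (#SAT) asks for the count itself.
print(count_satisfying([[1, 2], [-1, 3]], 3))  # 4
```

Brute force takes 2^n time, of course; the point is only the shape of the problem, since no polynomial-time algorithm for #SAT is known.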
Larry Stockmeyer proved that for every #P problem f there exists a randomized algorithm using an oracle for SAT which, given an instance x of f and ε > 0, returns with high probability a number y such that (1 − ε)f(x) ≤ y ≤ (1 + ε)f(x). The runtime of the algorithm is polynomial in x and 1/ε. The algorithm is based on the leftover hash lemma.
https://en.wikipedia.org/wiki?curid=27924
♯P-complete The #P-complete problems (pronounced "sharp P complete" or "number P complete") form a complexity class in computational complexity theory. The problems in this complexity class are defined by having two properties: the problem is in #P, and every problem in #P can be reduced to it by a polynomial-time reduction. A polynomial-time algorithm for solving a #P-complete problem, if it existed, would solve the P versus NP problem by implying that P and NP are equal. No such algorithm is known, nor is a proof known that such an algorithm does not exist. Examples of #P-complete problems include #SAT (counting the satisfying assignments of a Boolean formula) and computing the permanent of a 0-1 matrix. Some #P-complete problems correspond to easy (polynomial time) problems. Determining the satisfiability of a boolean formula in DNF is easy: such a formula is satisfiable if and only if it contains a satisfiable conjunction (one that does not contain a variable and its negation), whereas counting the number of satisfying assignments is #P-complete. Furthermore, deciding 2-satisfiability is easy compared to counting the number of satisfying assignments. Finding a topological sort is easy, in contrast to counting the number of topological sorts. A single perfect matching can be found in polynomial time, but counting all perfect matchings is #P-complete. The perfect matching counting problem was the first counting problem corresponding to an easy P problem shown to be #P-complete, in a 1979 paper by Leslie Valiant which also defined the class #P and the #P-complete problems for the first time. There are probabilistic algorithms that return good approximations to some #P-complete problems with high probability. This is one of the demonstrations of the power of probabilistic algorithms. Many #P-complete problems have a fully polynomial-time randomized approximation scheme, or "FPRAS," which, informally, will produce with high probability an approximation to an arbitrary degree of accuracy, in time that is polynomial with respect to both the size of the problem and the degree of accuracy required. 
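To illustrate the matching example above, the following sketch (a naive illustration, not an efficient method) computes the permanent of a square matrix by summing over all permutations; for the 0-1 biadjacency matrix of a bipartite graph this equals the number of perfect matchings, which Valiant showed to be #P-complete even though finding a single matching is easy:

```python
from itertools import permutations
from math import prod

def permanent(m):
    """Permanent of an n×n matrix by brute force over all n! permutations.
    Defined like the determinant, but with no alternating signs."""
    n = len(m)
    return sum(prod(m[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Biadjacency matrix of a small bipartite graph: entry [i][j] = 1 iff
# left vertex i is adjacent to right vertex j.
g = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(permanent(g))  # 2 perfect matchings
```

The fastest known exact algorithms (such as Ryser's formula) still take exponential time, consistent with the problem's #P-completeness.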
Jerrum, Valiant, and Vazirani showed that every #P-complete problem either has an FPRAS, or is essentially impossible to approximate; if there is any polynomial-time algorithm which consistently produces an approximation of a #P-complete problem which is within a polynomial ratio in the size of the input of the exact answer, then that algorithm can be used to construct an FPRAS.
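A classic example of such a scheme is the Karp-Luby estimator for counting the satisfying assignments of a DNF formula (the counting problem noted above as #P-complete despite its easy decision version). The sketch below is a simplified illustration, assuming no conjunction contains both a variable and its negation; it samples clause/assignment pairs and counts a hit only when the sampled clause is the first one the assignment satisfies:

```python
import random

def karp_luby(clauses, n_vars, samples=20000):
    """Monte Carlo estimate of the number of assignments satisfying a DNF
    formula. Each clause is a conjunction given as a list of nonzero ints:
    +i means variable i, -i its negation."""
    # Each clause alone is satisfied by 2^(number of free variables) assignments.
    weights = [2 ** (n_vars - len(set(map(abs, c)))) for c in clauses]
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # Pick a clause proportionally to its count, then a uniformly random
        # assignment satisfying that clause.
        i = random.choices(range(len(clauses)), weights=weights)[0]
        assign = [random.random() < 0.5 for _ in range(n_vars)]
        for lit in clauses[i]:
            assign[abs(lit) - 1] = lit > 0
        # Count the sample only if i is the first clause it satisfies, so each
        # satisfying assignment contributes through exactly one clause.
        first = next(j for j, c in enumerate(clauses)
                     if all(assign[abs(lit) - 1] == (lit > 0) for lit in c))
        hits += (first == i)
    return total * hits / samples

# x1 OR x2 over two variables has exactly 3 satisfying assignments.
print(round(karp_luby([[1], [2]], 2, samples=50000)))
```

The estimate concentrates around the true count as the sample size grows, which is what makes the method a fully polynomial-time randomized approximation scheme for #DNF.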
https://en.wikipedia.org/wiki?curid=27925
Scrabble Scrabble is a word game in which two to four players score points by placing tiles, each bearing a single letter, onto a game board divided into a 15×15 grid of squares. The tiles must form words that, in crossword fashion, read left to right in rows or downward in columns, and be included in a standard dictionary or lexicon. The name "Scrabble" is a trademark of Mattel in most of the world, except in the United States and Canada, where it is a trademark of Hasbro. The game is sold in 121 countries and is available in 29 languages; approximately 150 million sets have been sold worldwide, and roughly one-third of American and half of British homes have a "Scrabble" set. There are approximately 4,000 Scrabble clubs around the world. The game is played by two to four players on a square game board imprinted with a 15×15 grid of cells (individually known as "squares"), each of which accommodates a single letter tile. In official club and tournament games, play is between two players or, occasionally, between two teams, each of which collaborates on a single rack. The board is marked with "premium" squares, which multiply the number of points awarded: eight dark red "triple-word" squares, 17 pale red "double-word" squares, of which one, the center square (H8), is marked with a star or other symbol; 12 dark blue "triple-letter" squares, and 24 pale blue "double-letter" squares. In 2008, Hasbro changed the colors of the premium squares to orange for TW, red for DW, blue for DL, and green for TL, but the original premium square color scheme is still preferred for "Scrabble" boards used in tournaments. In an English-language set, the game contains 100 tiles, 98 of which are marked with a letter and a point value ranging from 1 to 10. The number of points for each lettered tile is based on the letter's frequency in standard English; commonly used letters such as vowels are worth one point, while less common letters score higher, with Q and Z each worth 10 points. 
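The tile values described above can be tabulated directly. As a small illustration (standard English tile values; premium squares and blanks ignored), the face value of a word is simply the sum of its tile values:

```python
# Standard English Scrabble tile values (the two blanks score 0).
TILE_VALUES = {
    **dict.fromkeys("EAIONRTLSU", 1),
    **dict.fromkeys("DG", 2),
    **dict.fromkeys("BCMP", 3),
    **dict.fromkeys("FHVWY", 4),
    "K": 5, "J": 8, "X": 8, "Q": 10, "Z": 10,
}

def face_value(word):
    """Sum of tile values, ignoring premium squares and blank tiles."""
    return sum(TILE_VALUES[ch] for ch in word.upper())

print(face_value("QUIZ"))  # 10 + 1 + 1 + 10 = 22
```

A full scorer would also apply the double- and triple-letter and word squares described above, which multiply these base values.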
The game also has two blank tiles that are unmarked and carry no point value. The blank tiles can be used as substitutes for any letter; once laid on the board, however, the choice is fixed. Other language sets use different letter set distributions with different point values. Tiles are usually made of wood or plastic and are square and thick, making them slightly smaller than the squares on the board. Only the rosewood tiles of the deluxe edition vary in width for different letters. Travelling versions of the game often have smaller tiles, which are sometimes magnetic to keep them in place. The capital letter is printed in black at the centre of the tile face and the letter's point value printed in a smaller font at the bottom right corner. Most modern replacement tile sets measure 0.700 by 0.800 inches. S is one of the most versatile tiles in English-language Scrabble because it can be appended to many words to pluralize them (or in the case of most verbs, convert them to the third person singular present tense, as in the word PLUMMETS); Alfred Butts included only four S tiles to avoid making the game "too easy". Q is considered the most troublesome letter, as almost all words with it also contain U; a similar problem occurs in other languages like French, Dutch, Italian and German. J is also difficult to play due to its low frequency and a scarcity of words having it at the end. C and V may be troublesome in the endgame, since no two-letter words with them exist, save for CH in the "Collins Scrabble Words" lexicon. In 1938, the American architect Alfred Mosher Butts created the game as a variation on an earlier word game he invented, called "Lexiko". The two games had the same set of letter tiles, whose distributions and point values Butts worked out by performing a frequency analysis of letters from various sources, including "The New York Times". The new game, which he called "Criss-Crosswords," added the 15×15 gameboard and the crossword-style gameplay. 
He manufactured a few sets himself but was not successful in selling the game to any major game manufacturers of the day. In 1948, James Brunot, a resident of Newtown, Connecticut and one of the few owners of the original "Criss-Crosswords" game, bought the rights to manufacture the game in exchange for granting Butts a royalty on every unit sold. Although he left most of the game (including the distribution of letters) unchanged, Brunot slightly rearranged the "premium" squares of the board and simplified the rules; he also renamed the game "Scrabble", a real word which means "to scratch frantically". In 1949, Brunot and his family made sets in a converted former schoolhouse in Dodgingtown, Connecticut, a section of Newtown. They made 2,400 sets that year but lost money. According to legend, "Scrabble"s big break came in 1952 when Jack Straus, president of Macy's, played the game on vacation. Upon returning from vacation, he was surprised to find that his store did not carry the game. He placed a large order, and within a year, "everyone had to have one." In 1952, unable to meet demand himself, Brunot sold manufacturing rights to Long Island-based Selchow and Righter, one of the manufacturers who, like Parker Brothers and Milton Bradley Company, had previously rejected the game. Harriet T. Righter licensed the game from entrepreneur James Brunot in 1952. "It's a nice little game. It will sell well in bookstores", she remembered saying about Scrabble when she first saw it. In its second year as a Selchow and Righter product, nearly four million sets were sold. Selchow and Righter bought the trademark to the game in 1972. JW Spear (now a subsidiary of Mattel) began selling the game in Australia and the UK on January 19, 1955. In 1986, Selchow and Righter was sold to Coleco, which soon afterward went bankrupt. Hasbro purchased the company's assets, including "Scrabble" and "Parcheesi". In 1984, "Scrabble" was turned into a daytime game show on NBC. 
The "Scrabble" game show ran from July 1984 to March 1990, with a second run from January to June 1993. The show was hosted by Chuck Woolery. Its tagline in promotional broadcasts was, "Every man dies; not every man truly Scrabbles." In 2011, a new TV variation of "Scrabble", called "Scrabble Showdown", aired on The Hub cable channel, which is a joint venture of Discovery Communications, Inc. and Hasbro. "Scrabble" was inducted into the National Toy Hall of Fame in 2004. The "box rules" included in each copy of the North American edition have been edited four times: in 1953, 1976, 1989, and 1999. The major changes in 1953 were as follows: The major changes in 1976 were as follows: The editorial changes made in 1989 did not affect gameplay. The major changes in 1999 were as follows: In the notation system common in tournament play, columns are labeled with the letters "A–O" and rows with the numbers "1–15". (On "Scrabble" boards manufactured by Mattel as well as on the Internet Scrabble Club, rows are lettered while columns are numbered instead.) A play is usually identified in the format "xy WORD score" or "WORD xy score", where "x" denotes the column or row on which the play's main word extends, "y" denotes the second coordinate of the main word's first letter, and "WORD" is the main word. Although it is unnecessary, additional words formed by the play are sometimes listed after the main word and a slash. When the play of a single tile forms words in each direction, one of the words is arbitrarily chosen to serve as the main word for purposes of notation. When a blank tile is employed in the main word, the letter it has been chosen to represent is indicated with a lower case letter, or, in handwritten notation, with a square around the letter. When annotating a play, previously existing letters on the board are usually enclosed in parentheses. 
Exchanges are often annotated by a minus sign followed by the tiles that were exchanged alphabetically; for example, if a player holds EIIISTU, exchanging two I's and a U would be denoted as "−IIU." The image at right gives examples of valid plays and how they would typically be annotated using the notation system. Additionally, a number of symbols have been employed to indicate the validity of words in different lexica: Before the game, a resource, either a word list or a dictionary, is selected for the purpose of adjudicating any challenges during the game. The tiles are either put in an opaque bag or placed face down on a flat surface. Opaque cloth bags and customized tiles are staples of clubs and tournaments, where games are rarely played without both. Next, players decide the order in which they play. The normal approach is for players to each draw one tile: The player who picks the letter closest to the beginning of the alphabet goes first, with blank tiles taking precedence over the letter A. In most North American tournaments, the rules of the US-based North American Scrabble Players Association (NASPA) stipulate instead that players who have gone first in the fewest previous games in the tournament go first, and when that rule yields a tie, those who have gone second the most go first. If there is still a tie, tiles are drawn as in the standard rules. At the beginning of the game, each player draws seven tiles from the bag and places them on their rack, concealed from the other player(s). The first played word must be at least two letters long, and cover H8 (the center square). Thereafter, any move is made by using one or more tiles to place a word on the board. This word may use one or more tiles already on the board and must join with the cluster of tiles already on the board. 
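The standard draw for first player (closest to the beginning of the alphabet, with a blank taking precedence over A) amounts to a simple ordering. The sketch below is illustrative only; the player names and the use of "?" to mark a blank tile are assumptions made here, not part of the rules.

```python
def draw_key(tile):
    """Ordering key for the opening draw: a blank (written '?' here)
    takes precedence over A; other tiles sort alphabetically."""
    return -1 if tile == "?" else ord(tile)

def first_player(draws):
    """Given a mapping of player -> drawn tile, return whoever drew the
    tile closest to the beginning of the alphabet."""
    return min(draws, key=lambda p: draw_key(draws[p]))

print(first_player({"Ann": "M", "Ben": "C"}))  # Ben
print(first_player({"Ann": "?", "Ben": "A"}))  # Ann: blank beats A
```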
On each turn, the player may pass, exchange tiles, or make a play. A proper play uses one or more of the player's tiles to form a continuous string of letters that make a word (the play's "main word") on the board, reading either left-to-right or top-to-bottom. The main word must either use the letters of one or more previously played words or else have at least one of its tiles horizontally or vertically adjacent to an already played word. If any words other than the main word are formed by the play, they are scored as well and are subject to the same criteria of acceptability. See Scoring for more details. A blank tile may represent any letter, and scores zero points, regardless of its placement or what letter it represents. Its placement on a double-word or triple-word square causes the corresponding premium to be applied to the word(s) in which it is used. Once a blank tile is placed, it remains that particular letter for the remainder of the game. After making a play, the player announces the score for that play, and then, if the game is being played with a clock, starts the opponent's clock. The player can change their play as long as the player's clock is running, but commits to the play when they start the opponent's clock. The player then draws tiles from the bag to replenish their rack to seven tiles. If there are not enough tiles in the bag to do so, the player takes all the remaining tiles. If a player has made a play and has not yet drawn a tile, the opponent may choose to challenge any or all words formed by the play. The player challenged must then look up the words in question using a specified word source (such as OTCWL, the "Official Scrabble Players Dictionary", or CSW) and if any one of them is found to be unacceptable, the play is removed from the board, the player returns the newly played tiles to their rack, and the turn is forfeited. 
In tournament play, a challenge may be to the entire play or any one or more words formed in the play, and judges (human or computer) are used, so players are not entitled to know which word(s) are invalid. Penalties for unsuccessfully challenging an acceptable play vary in club and tournament play and are described in greater detail below. Under North American tournament rules, the game ends either when one player uses all of their tiles and none remain to be drawn, or when six successive scoreless turns have occurred. When the game ends, each player's score is reduced by the sum of their unplayed letters. In addition, if a player has used all of their letters (known as "going out" or "playing out"), the sum of the other player's unplayed letters is added to that player's score; in tournament play, a player who goes out adds twice that sum, and their opponent is not penalized. Plays can be made in a number of ways: by adding letters before or after a word already on the board, by playing perpendicular to an existing word through one of its letters, or by playing parallel to an existing word so that adjacent tiles also form words. Any combination of these is allowed in a play, as long as all the letters placed on the board in one play lie in one row or column and are connected by a main word, and any run of tiles on two or more consecutive squares along a row or column constitutes a valid word. Words must read either left-to-right or top-to-bottom. Diagonal plays are not allowed. The score for any play is determined by the letter values of the tiles played and any premium squares they cover, as the worked examples below illustrate. When the letters to be drawn have run out, the final play can often determine the winner. This is particularly the case in close games with more than two players. Scoreless turns can occur when a player passes, exchanges tiles, or loses a challenge. The latter rule varies slightly in international tournaments. A scoreless turn can also theoretically occur if a play consists of only blank tiles, but this is extremely unlikely in actual play. Suppose Player 1 plays QUANT 8D, with the Q on a DLS and T on the center star. The score for this play would be (2 × 10 + 1 + 1 + 1 + 1) × 2 = 48 (following the order of operations). 
Player 2 extends the play to ALI(QUANT) 8A with the A on the TWS at 8A. The score for this play would be (1 + 1 + 1 + 10 + 1 + 1 + 1 + 1) × 3 = 51. Note that the Q is not doubled for this play. Player 1 has DDIIIOO and plays OIDIOID 9G. The score for the word OIDIOID would be (2 × 1 + 1 + 2 × 2 + 1 + 1 + 1 + 2 × 2) = 14. Additionally, Player 1 formed NO and TI, which score 1 + 2 × 1 = 3 and 1 + 1 = 2 points respectively. Therefore, the sum of all the values of the words formed is 14+3+2 = 19. But since this is a seven-letter play, 50 points are added, resulting in a total score of 69. Player 1 now has a 117–51 lead. The player with the highest final score wins the game. In case of a tie, the player with the highest score before adjusting for unplayed tiles wins the game. In tournament play, a tie counts as 1/2 a win for both players. Acceptable words are the primary entries in some chosen dictionary, and all of their inflected forms. Words that are hyphenated, capitalized (such as proper nouns), or apostrophized are not allowed, unless they also appear as acceptable entries; JACK is a proper noun, but the word "JACK" is acceptable because it has other usages as a common noun (automotive, vexillological, etc.) and verb that are acceptable. Acronyms or abbreviations, other than those that have acceptable entries (such as "AWOL", "RADAR", "LASER", and "SCUBA") are not allowed. Variant spellings, slang or offensive terms, archaic or obsolete terms, and specialized jargon words are allowed if they meet all other criteria for acceptability, but archaic spellings (e.g. "NEEDE" for "NEED") are generally not allowed. Foreign words are not allowed in English-language "Scrabble" unless they have been incorporated into the English language, as with PATISSERIE, KILIM, and QI. Vulgar and offensive words are generally excluded from the OSPD4 but allowed in club and tournament play. 
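The arithmetic in these examples can be reproduced with a short sketch. The letter values below are the standard English-set values; the function name and the way premiums are encoded (per-position letter multipliers and a single word multiplier, supplied by the caller for newly placed tiles only) are assumptions made here for illustration.

```python
# Standard English-set letter values
LETTER_VALUES = {**dict.fromkeys("AEILNORSTU", 1), **dict.fromkeys("DG", 2),
                 **dict.fromkeys("BCMP", 3), **dict.fromkeys("FHVWY", 4),
                 "K": 5, **dict.fromkeys("JX", 8), **dict.fromkeys("QZ", 10)}

def score_word(word, letter_mult=None, word_mult=1):
    """Score one word: letter premiums multiply individual tiles first,
    then the word multiplier scales the total. The caller passes premiums
    only for newly placed tiles, since premiums are not re-used."""
    letter_mult = letter_mult or {}
    return word_mult * sum(LETTER_VALUES[ch] * letter_mult.get(i, 1)
                           for i, ch in enumerate(word))

# QUANT 8D: Q on a double-letter square, T on the double-word centre star
print(score_word("QUANT", letter_mult={0: 2}, word_mult=2))    # 48
# ALI(QUANT) 8A: A on a triple-word square; the Q premium is not re-used
print(score_word("ALIQUANT", word_mult=3))                     # 51
# OIDIOID 9G: double-letter squares under the 1st, 3rd and 7th tiles
print(score_word("OIDIOID", letter_mult={0: 2, 2: 2, 6: 2}))   # 14
```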
Proper nouns and other exceptions to the usual rules are allowed in some limited contexts in the spin-off game "Scrabble Trickster". Names of recognized computer programs are permitted as acceptable proper nouns (for example, WinZIP). The memorization of two-letter words is considered an essential skill in this game. There are two popular competition word lists used in various parts of the world: the first is used in America, Canada, Israel and Thailand, and the second in all other English-speaking countries. The North American 2006 "Official Tournament and Club Word List, Second Edition" (OWL2) went into official use in American, Canadian, Israeli and Thai club and tournament play on March 1, 2006 (or, for school use, the bowdlerized "Official Scrabble Players Dictionary, Fifth Edition" (OSPD5)). North American competitions use the "Long Words List" for longer words. The OWL2 and the OSPD5 are compiled using four (originally five) major college-level dictionaries, including Merriam-Webster (10th and 11th editions, respectively). If a word appears, at least historically, in any one of the dictionaries, it will be included in the OWL2 and the OSPD5. If the word has only an offensive meaning, it is only included in the OWL2. The key difference between the OSPD5 and the OWL2 is that the OSPD5 is marketed for "home and school" use, expurgating words which its source dictionaries judged offensive, rendering the "Official Scrabble Players Dictionary" less fit for official "Scrabble" play. The OSPD5, released in 2014, is available in bookstores, whereas the OWL2 is only available through NASPA. In all other English-speaking countries, the competition word list is "Collins Scrabble Words" 2019 edition, known as "CSW19". Versions of this lexicon prior to 2007 were known as SOWPODS. The lexicon includes all allowed words of length 2 to 15 letters. This list contains all OWL2 words plus words sourced from Chambers and Collins English dictionaries. 
This book is used to adjudicate at the World Scrabble Championship and all other major international competitions outside North America. Tournaments are also occasionally played to CSW in North America, particularly since 2010. NASPA officially rates CSW tournaments alongside OWL tournaments, using a separate rating system. The penalty for a successfully challenged play is nearly universal: the offending player removes the tiles played and forfeits his or her turn. (In some online games, an option known as "void" may be used, wherein unacceptable words are automatically rejected by the program. The player is then required to make another play, with no penalty applied.) The penalty for an unsuccessful challenge (where all words formed by the play are deemed valid) varies considerably, including: Under NASPA tournament rules, a player may request to "hold" the opponent's play in order to consider whether to challenge it, provided that the opponent has not yet drawn replacement tiles. If player A holds, player A's clock still runs, and player B may not draw provisional replacement tiles until 15 seconds after the hold was announced (which tiles must then be kept separate). There is no limit on how long player A may hold the play. If player A successfully challenges after player B drew provisional replacement tiles, player B must show the drawn tiles before returning them to the bag. Tens of thousands play club and tournament "Scrabble" worldwide. All tournament (and most club) games are played with a game clock and a set time control. Although casual games are often played with unlimited time, this is problematic in competitive play among players for whom the number of evident legal plays is immense. Almost all tournament games involve only two players; typically, each has 25 minutes in which to make all of their plays. For each minute by which a player oversteps the time control, a penalty of 10 points is assessed. 
The number of minutes is rounded up, so, for example, if a player oversteps time control by two minutes and five seconds, the penalty is 30 points. Also, most players use molded plastic tiles, not engraved like the original wooden tiles, eliminating the potential for a cheating player to "braille" (feel for particular tiles, especially blanks, in the bag). Players are allowed "tracking sheets", pre-printed with the letters in the initial pool, from which tiles can be crossed off as they are played. Tracking tiles is an important aid to strategy, especially during the endgame, when no tiles remain to be drawn and each player can determine exactly what is on the opponent's rack. Notable and regularly held tournaments include: Other important tournaments include: Clubs in North America typically meet one day a week for three or four hours and some charge a small admission fee to cover their expenses and prizes. Clubs also typically hold at least one open tournament per year. Tournaments are usually held on weekends, and between six and nine games are played each day. There are also clubs in the UK and many other countries. There are a number of internationally rated SOWPODS tournaments. During off hours at tournaments, many players socialize by playing consultation (team) "Scrabble", Clabbers, Anagrams, Boggle, Words with Friends, Scramble with Friends and other games. Maven is a computer opponent for the game, created by Brian Sheppard. The official "Scrabble" computer game in North America uses a version of Maven as its artificial intelligence and is published by Atari. Outside North America, the official "Scrabble" computer game is published by Ubisoft. Quackle is an open-source alternative to Maven of comparable strength, created by a five-person team led by Jason Katz-Brown. A Qt cross-platform version of Quackle is available on GitHub. 
Several video game versions of "Scrabble" have been released for various platforms, including PC, Mac, Amiga, Commodore 64, Sinclair ZX Spectrum, Game Boy, Game Boy Color, Game Boy Advance, Nintendo DS, PlayStation, PlayStation 2, PlayStation Portable, iPod, iPad, Game.com, Palm OS, Amstrad CPC, Xbox 360, Kindle, Wii and mobile phones. The Nintendo DS version of "Scrabble 2007 Edition" made news when parents became angry over the game's AI using potentially offensive language during gameplay. A number of websites offer the possibility to play "Scrabble" online against other users, such as ScrabbleScores.com, the Internet Scrabble Club and www.pogo.com from Electronic Arts (North America only). Facebook initially offered a variation of "Scrabble" called Scrabulous as a third-party application add-on. On July 24, 2008, Hasbro filed a copyright infringement lawsuit against the creators of Scrabulous. Four days later, Scrabulous was disabled for users in North America, eventually reappearing as "Lexulous" in September 2008, with changes made to distinguish it from Scrabble. By December 20, Hasbro had withdrawn its lawsuit. Mattel launched its official version of online "Scrabble", "Scrabble by Mattel", on Facebook in late March 2008. The application was developed by Gamehouse, a division of RealNetworks which had been licensed by Mattel. Since Hasbro controls the copyright for North America while the copyright for the rest of the world belongs to Mattel, the Gamehouse Facebook application was available only to players outside the United States and Canada. Meanwhile, the version developed by Electronic Arts for Hasbro was available throughout the world. When Gamehouse ceased support for its application, Mattel replaced it with the Electronic Arts version in May 2013. This decision was met with criticism from its user base. 
The Hasbro version continues to be available worldwide but now uses IP lookup to display Hasbro branding to North American players and Mattel branding to the rest of the world. Electronic Arts has also released mobile apps for Android and iOS, allowing players to continue the same game on more than one platform. As well as facilities to play occasional games online, there are many options to play in leagues. The biggest of these is the FSL Scrabble League, which is played at https://scrabblescores.com. In 2020, the license for Scrabble passed from Electronic Arts to Scopely, which launched the app Scrabble GO on March 5, 2020, with the Electronic Arts version discontinued on June 5, 2020. The new app was very different, leading to protests, and Scopely soon began to offer a 'Classic' version, without some of the extras initially offered: "this updated mode is reimagined to reflect the ask for a streamlined experience. Features such as boosts, rewards and all other game modes are disabled", the company announced. In 1987, a board game was released by Selchow & Righter, based on the game show hosted by Chuck Woolery that aired on NBC from 1984 to 1990 (and for five months in 1993). Billed as the "Official Home Version" of the game show (or officially as the "TV Scrabble Home Game"), game play bears more resemblance to the game show than it does to a traditional "Scrabble" game, although it does utilize a traditional "Scrabble" gameboard in play. On September 17, 2011, a new game show based on "Scrabble", called "Scrabble Showdown", debuted on The Hub with Justin "Kredible" Willman as the host of the program. Each week, teams play various activities based on the board game in order to win big prizes, including a trip to anywhere in the world. A new licensed product, "Super Scrabble", was launched in North America by Winning Moves Games in 2004 under license from Hasbro, with the deluxe version (with turntable and lock-in grid) released in February 2007. 
A Mattel-licensed product for the rest of the world was released by Tinderbox Games in 2006. This set comprises 200 tiles in a slightly modified distribution from the standard set and a 21×21 playing board. The following records were achieved during international competitive club or tournament play, according to authoritative sources, including the book "Everything Scrabble" by Joe Edley and John D. Williams, Jr. (revised edition, Pocket Books, 2001) and the Scrabble FAQ. When available, separate records are listed based upon different official word lists: Two other records are believed to have been achieved under a British format known as the "high score rule", in which a player's tournament result is determined only by the player's own scores, and not by the differentials between that player's scores and the opponents'. Play in this system "encourages elaborate setups often independently mined by the two players", and is significantly different from the standard game in which defensive considerations play a major role. While the "high score" rule has led to impressively high records, it is currently out of favor. Hypothetical scores in possible and legal but highly unlikely plays and games are far higher, primarily through the use of words that cover three triple-word-score squares. The highest reported score for a single play is 1780 (OSPD) and 1785 (SOWPODS) using oxyphenbutazone. Were the word sesquioxidizing added to these official lists, one could theoretically score 2015 (OSPD) and 2044 (SOWPODS) points in a single move. The highest reported combined score for a theoretical game based on SOWPODS is 4046 points. Other records are available for viewing at , an unofficial record book which includes the above as sources and expands on other topics. In August 1984, Peter Finan and Neil Smith played "Scrabble" for 153 hours at St. Anselm's College, Birkenhead, Merseyside, setting a new duration record. 
A longer record was never recorded by "Guinness Book of Records", as the publishers decided that duration records of this nature were becoming too dangerous and stopped accepting them. Versions of the game have been released in several other languages. The game was called Alfapet when it was introduced in Sweden in 1954, but since the mid-1990s, the game has also been known as Scrabble in Sweden. Alfapet is now another crossword game, created by the owners of the name Alfapet. A Russian version is called "Erudit". Versions have been prepared for Dakotah, Haitian Creole, Dakelh (Carrier language), and Tuvan. For languages with digraphs counted as single letters, such as Welsh and Hungarian, the game features separate tiles for those digraphs. An Irish-language version of Scrabble was published by Glór na nGael in 2010. The previous year the same organisation published the Junior version of the game and two years later it republished Junior Scrabble using a two-sided (and two skill level) board. There are numerous variations of the game. While they are similar to the original "Scrabble" game, they include minor variations. For example, Literati draws random tiles instead of providing a finite number of tiles for the game, assigns different point levels to each letter and has a slightly different board layout, whereas Lexulous assigns eight letters to each player instead of seven. Words with Friends uses a different board layout and different letter values, as does Words of Gold. Duplicate Scrabble is a popular variant in French speaking countries. Every player has the same letters on the same board and the players must submit a paper slip at the end of the allotted time (usually 3 minutes) with the highest scoring word they have found. This is the format used for the French World Scrabble Championships but it is also used in Romanian and Dutch. 
There is no limit to the number of players that can be involved in one game, and at Vichy in 1998 there were 1485 players, a record for French "Scrabble" tournaments. In one variation of "Scrabble", blanks score points corresponding to the letters the blanks are used to represent. For example, a blank played to represent a Z would score ten points; a blank representing a V or an H would score four; a blank representing a D would score two; and a blank representing a T, N, L, S, R or any of the vowels would score one. Popular among tournament Scrabble players is "Clabbers". In Clabbers, any move that consists of anagrams of allowable words is allowed. For example, because ETAERIO is allowable in ordinary Collins Scrabble, EEAIORT would be allowable in Clabbers. A junior version, called "Junior Scrabble", has been marketed. This has slightly different distributions of frequencies of letter tiles to the standard "Scrabble" game. Word games similar to or influenced by "Scrabble" include Bananagrams, Boggle, Dabble, Nab-It!, Perquackey, Puzzlage, Quiddler, Scribbage, Tapple, Upwords, and WordSpot. There are also number-based variations, such as Equate (game), GoSum, Mathable, Numble, Numbler, Triolet, Yushino and Numenko. The game has been released in numerous gameboard formats appealing to various user groups. The original boards included wood tiles and many "deluxe" sets still do. The "Tile Lock" editions are made by Winning Moves and feature smaller, plastic tiles that are held in place on the board with little plastic posts. The standard Tile Lock version features exactly the same 100 tiles as regular "Scrabble". The Tile Lock Super Scrabble features the same 200 tiles that are in Super Scrabble. Editions are available for travelers who may wish to play in a conveyance such as a train or plane, or to pause a game in progress and resume later. Many versions thus include methods to keep letters from moving, such as pegboards, recessed tile holders and magnetic tiles. 
Players' trays are also designed with stay-fast holders. Such boards are also typically designed to be reoriented by each player to put the board upright during the game, as well as folded and stowed with the game in progress. At the opposite end, some "deluxe" editions offer superior materials and features. These include editions on a rotating turntable, so players can always face the board, with the letters upright and a raised grid that holds the tiles in place. More serious players often favor custom "Scrabble" boards, often made of acrylic glass or hardwood, that have superior rotating mechanisms and personalized graphics. An edition has been released (in association with the Royal National Institute of Blind People (RNIB)) with a larger board and letters for players with impaired vision. The colours on the board are more contrasting, and the font size has been increased from 16 to 24 point. The tiles are in bold 48 point. An introduction to tournament "Scrabble" and its players can be found in the book "Word Freak" by Stefan Fatsis. In the process of writing it, Fatsis himself became a high-rated tournament player. The "Scrabble Player's Handbook", edited by Stewart Holden and written by an international group of tournament players, gives the information a serious player needs to advance to successful tournament play. There have been numerous documentaries made about the game.
https://en.wikipedia.org/wiki?curid=27929
Sedevacantism Sedevacantism is the position held by some people who identify as Catholic that the present occupier of the Holy See is not truly the pope due to the mainstream church's espousal of what they see as the heresy of modernism and that, for lack of a valid pope, the See has been vacant since the death of Pope Pius XII in 1958 or the death of Pope John XXIII in 1963. The term "sedevacantism" is derived from the Latin phrase "sede vacante", which means "with the chair [of Saint Peter] vacant". The phrase is commonly used to refer specifically to a vacancy of the Holy See from the death or resignation of a pope to the election of his successor. "Sedevacantism" as a term in English appears to date from the 1980s, though the movement itself is older. Among those who maintain that the see of Rome, occupied by what they declare to be an illegitimate pope, was really vacant, some have chosen an alternative pope of their own, and thus in their view ended the vacancy of the see, and are known sometimes as "conclavists". The number of sedevacantists is largely unknown, with some claiming estimates of tens of thousands to hundreds of thousands. Many active sedevacantists are involved with traditionalist chapels, societies and congregations, such as the Congregation of Mary Immaculate Queen or the Society of Saint Pius V, attending their chapels for Mass and Confession; other sedevacantists attend services of the Eastern Catholic Church or the Society of Saint Pius X, although the SSPX officially condemns sedevacantism. Sedevacantists claim that the post–Vatican II Mass is invalid or illegitimate. Sedevacantism owes its origins to the rejection of the theological and disciplinary changes implemented following the Second Vatican Council (1962–65). 
Sedevacantists reject this Council, on the basis of their interpretation of its documents on ecumenism and religious liberty, among others, which they see as contradicting the traditional teachings of the Catholic Church and as denying the unique mission of Catholicism as the one true religion, outside of which there is no salvation. They also say that new disciplinary norms, such as the Mass of Paul VI, promulgated on 3 April 1969, undermine or conflict with the historical Catholic faith and are deemed heretical. They conclude, on the basis of their rejection of the revised Mass rite and of postconciliar Church teaching as false, that the popes involved are false also. Even among traditionalist Catholics, this is a quite divisive question, so many who hold this view prefer to say nothing of it. Others believe it is fine to go to Masses or Divine Liturgies of Eastern Catholic Churches, where Francis' name is said in the Roman Canon or Anaphora, for the sake of fulfilling the obligation to attend Mass and to have access to the sacraments. Other sedevacantists, noting that canon law prohibits episcopal consecrations without a papal mandate, prefer to stay at home and reject Masses offered by sedevacantist clergy, because all sedevacantist bishops (and consequently priests) today derive their Holy Orders from bishops who had no papal mandate to consecrate bishops, such as Archbishop Ngô Đình Thục and Bishop Alfredo Méndez-Gonzalez. Traditionalist Catholics other than sedevacantists recognize as legitimate the line of Popes leading to and including Pope Francis. Some of them hold that one or more of the most recent popes have held and taught unorthodox beliefs, but do not go so far as to say that they have been formal heretics, though as of 2018 some consider Pope Francis a heretic because of the apostolic exhortation "Amoris Laetitia". 
Sedevacantists, however, claim that the infallible Magisterium of the Catholic Church could not have decreed the changes made in the name of the Second Vatican Council, and conclude that those who issued these changes could not have been acting with the authority of the Catholic Church. Accordingly, they hold that Pope Paul VI and his successors left the true Catholic Church and thus lost legitimate authority in the Church. A formal heretic, they say, cannot be the Catholic pope. Sedevacantists defend their position using numerous arguments, including that particular provisions of canon law prevent a heretic from being elected or remaining as pope. Paul IV's 1559 bull, "Cum ex apostolatus officio", stipulated that a heretic cannot be elected pope, while Canon 188.4 of the 1917 Code of Canon Law provides that a cleric who publicly defects from the Catholic faith automatically loses any office he had held in the Church. A number of writers have engaged sedevacantists in debate on some of these points. Theologian Brian Harrison has argued that Pius XII's conclave legislation permitted excommunicated cardinals to attend, from which he argues that they could also be legitimately elected. Opponents of Harrison have argued that a phrase in Pius XII's legislation, "Cardinals who have been deposed or who have resigned, however, are barred and may not be reinstated even for the purpose of voting", though it speaks of someone deposed or resigned from the cardinalate, not of someone who may have incurred automatic excommunication but has not been officially declared excommunicated, means that, even if someone is permitted to attend, that does not automatically translate into electability. While Sedevacantists' arguments often hinge on their interpretation of modernism as being a heresy, this is also debated. The "Catholic Encyclopedia" in 1913 said: "The pope himself, if notoriously guilty of heresy, would cease to be pope because he would cease to be a member of the Church." 
Likewise, the theologians Wernz and Vidal wrote in their commentary on canon law, "Ius Canonicum": "Through notorious and openly divulged heresy, the Roman Pontiff, should he fall into heresy, by that very fact "(ipso facto)" is deemed to be deprived of the power of jurisdiction even before any declaratory judgment by the Church... A Pope who falls into public heresy would cease "ipso facto" to be a member of the Church; therefore, he would also cease to be head of the Church." There are estimated to be between several tens of thousands and more than two hundred thousand sedevacantists worldwide, mostly concentrated in the United States, Canada, France, the UK, Italy, and Australia, but the actual size of the sedevacantist movement has never been accurately assessed. It remains extremely difficult to establish the size of the movement for a wide range of reasons – not all sedevacantists identify themselves as such, nor do they necessarily adhere to sedevacantist groups or societies. (See further the section on statistics in the article "Traditionalist Catholic".) Catholic doctrine teaches that the four marks of the true Church are that it is One, Holy, Catholic, and Apostolic. Sedevacantists base their claim to be the remnant Catholic Church on what they see as the presence in them of these four "marks", absent, they say, in the Church since the Second Vatican Council. Their critics counter that sedevacantists are not one, forming numerous splinter groups, each of them in disagreement with the rest. Most sedevacantists hold the Holy Orders conferred with the present revised rites of the Catholic Church to be invalid due to defects of both intention and form. Because they consider the 1968 revision of the rite of Holy Orders to have invalidated it, they conclude that the great majority of the bishops listed in the Holy See's "Annuario Pontificio", including Benedict XVI and Francis themselves, are in reality merely priests or even laymen. 
One of the earliest proponents of sedevacantism was the American Francis Schuckardt. Although still working within the "official" Church in 1967, he publicly took the position in 1968 that the Holy See was vacant and that the Church that had emerged from the Second Vatican Council was no longer Catholic. An associate of his, Daniel Q. Brown, arrived at the same conclusion. In 1969, Brown received episcopal orders from an Old Catholic bishop, and in 1971 he in turn consecrated Schuckardt. Schuckardt founded a congregation called the Tridentine Latin Rite Catholic Church. In 1970, a Japanese layman, Yukio Nemoto (1925–88), created Seibo No Mikuni (Kingdom of Our Lady, 聖母の御国), a sedevacantist group. Another founding sedevacantist was Father Joaquín Sáenz y Arriaga, a Jesuit theologian from Mexico. He put forward sedevacantist ideas in his books "The New Montinian Church" (August 1971) and "Sede Vacante" (1973). His writings gave rise to the sedevacantist movement in Mexico, led by Sáenz, Father Moisés Carmona and Father Adolfo Zamora, and also inspired Father Francis E. Fenton in the U.S. In the years following the Second Vatican Council, other priests took up similar positions, including: Catholic doctrine holds that any bishop can validly ordain any baptised man to the priesthood or to the episcopacy, provided that he has the correct intention and uses a doctrinally acceptable rite of ordination, whether or not he has official permission of any sort to perform the ordination. Absent specified conditions, canon law forbids ordination to the episcopate without a mandate from the pope, and both those who confer such ordination without the papal mandate and those who receive it are subject to excommunication. 
In a specific pronouncement in 1976, the Congregation for the Doctrine of the Faith declared devoid of canonical effect the consecration ceremony conducted for the Palmarian Catholic Church by Archbishop Ngô Đình Thục on 31 December 1975, though it refrained from pronouncing on its validity. This declaration also applied pre-emptively to any later ordinations by those who received ordination in the ceremony. Of those then ordained, seven who are known to have returned to full communion with Rome did so as laymen. When Archbishop Emmanuel Milingo conferred episcopal ordination on four men in Washington on 24 September 2006, the Holy See's Press Office declared that "the Church does not recognize and does not intend in the future to recognize these ordinations or any ordinations derived from them, and she holds that the canonical state of the four alleged bishops is the same as it was prior to the ordination." This denial of canonical status means Milingo had no authority to exercise any ministry. However, Rev. Ciro Benedettini of the Holy See Press Office, who publicly issued the communiqué on Milingo during the press conference, stated to reporters that any ordinations the excommunicated Milingo had performed prior to his laicization were "illicit but valid", while any subsequent ordinations would be invalid. The bishops who are or have been active within the sedevacantist movement can be divided into four categories: To date, the first category seems to consist of only two individuals, both now deceased: the Vietnamese Archbishop Thục (consecrated 1938) (who, before his death in 1984, was reconciled with the Church of Pope John Paul II) and the Chicago-born Bishop Alfredo Méndez-Gonzalez (consecrated 1960), the former Bishop of Arecibo, Puerto Rico. The next category essentially comprises the "Thục line" of bishops deriving from Archbishop Thục. 
While the "Thục line" is lengthy and complex, reportedly comprising 200 or more individuals, the sedevacantist community generally accepts and respects most of the 12 or so bishops following from the three or four final consecrations that the Archbishop performed (those of Bishops Guerard des Lauriers, Carmona, Zamora, and Datessen). Bishop Méndez consecrated one priest to the episcopacy, Clarence Kelly of the Society of St. Pius V, who further consecrated Joseph Santay. Many bishops in the "Thục line" belong to the conclavist Palmarian Catholic Church, which has performed very numerous episcopal consecrations within its organization; its first five bishops were consecrated by Archbishop Thục himself, who believed its claim that the Blessed Virgin Mary had chosen him to supply the organization with bishops. All priests in the Palmarian Catholic Church are bishops as well. On 24 September 1991, Mark Pivarunas was consecrated a bishop at Mount Saint Michael by Bishop Moisés Carmona. On 30 November 1993, Bishop Pivarunas conferred episcopal consecration on Father Daniel Dolan in Cincinnati, Ohio, and on 11 May 1999, he consecrated Martin Davila for the Unión Católica Trento to succeed Bishop Carmona. A considerable number of sedevacantist bishops are thought to derive from Bishop Carlos Duarte Costa, who in 1945 set up his own schismatic "Brazilian Catholic Apostolic Church". Carlos Duarte Costa was not a sedevacantist, and instead questioned the status of the papacy itself – he denied Papal Infallibility and rejected the pope's universal jurisdiction. In further contrast to most Catholic traditionalists, Duarte Costa was left-wing. More numerous are those who have had recourse to the Old Catholic line of succession. Bishops of this category include Francis Schuckardt and others associated with him. 
The orders of the original Old Catholic Church are regarded by the Roman Catholic Church as valid, though no such declaration of recognition has been issued with regard to the several independent Catholic churches that claim to trace their episcopal orders to this church. Some doubt hovers over the validity of the orders received from these bishops, and the claimants have not received wide acceptance in the sedevacantist community, though most have at least some small congregation. Lucian Pulvermacher (also known as Pope Pius XIII) and Gordon Bateman of the small conclavist "True Catholic Church" fall into this category. To be consecrated a bishop, Pulvermacher interpreted a passage by the theologian Ludwig Ott to mean that, as pope, although just a simple priest, he could give himself special authority to confer Holy Orders. He therefore ordained Bateman a priest and then consecrated him a bishop; Bateman, in turn, consecrated Pulvermacher as bishop. Against sedevacantism, Catholics advance arguments such as: Sedevacantists advance counter-arguments, such as: They say that they do not repudiate the dogma of papal infallibility as defined at the First Vatican Council, and maintain that, on the contrary, they are the fiercest defenders of this doctrine, since they teach that the Apostolic See of Peter, under the rule of a true pope, cannot promulgate contradictory teachings. Like other traditionalist Catholics, sedevacantists criticize liturgical revisions made by the Holy See since the Second Vatican Council: Sedevacantism appears to be centred in, and by far strongest in, the United States, and secondarily in other English-speaking countries such as Canada (mainly Ontario) and the United Kingdom, as well as Poland, Mexico, Italy, Germany, Japan, South Korea, Singapore, and Brazil. 
Anthony Cekada has described the United States as a "sedevacantist bastion", contrasting it with France, where the non-sedevacantist Society of Saint Pius X has a virtual monopoly on the traditionalist Catholic movement. Sedevacantist groups include:
Sailor Moon The manga was adapted into an anime series produced by Toei Animation and broadcast in Japan from 1992 to 1997. Toei also developed three animated feature films, a television special, and three short films based on the anime. A live-action television adaptation, "Pretty Guardian Sailor Moon", aired from 2003 to 2004, and a second anime series, "Sailor Moon Crystal", began simulcasting in 2014. The manga series was licensed for an English language release by Kodansha Comics in North America, and in Australia and New Zealand by Random House Australia. The entire anime series has been licensed by Viz Media for an English language release in North America and by Madman Entertainment in Australia and New Zealand. Since its release, "Pretty Soldier Sailor Moon" has received acclaim, with praise for its art, characterization, and humor. The manga has sold over 35 million copies worldwide, making it one of the best-selling "shōjo" manga series. The franchise has also generated in worldwide merchandise sales. In Juban, Tokyo, a middle-school student named Usagi Tsukino befriends Luna, a talking black cat who gives her a magical brooch enabling her to transform into Sailor Moon: a soldier destined to save Earth from the forces of evil. Luna and Usagi assemble a team of fellow Sailor Soldiers to find their princess and the Silver Crystal. They encounter the studious Ami Mizuno, who awakens as Sailor Mercury; Rei Hino, a local Shinto shrine maiden who awakens as Sailor Mars; Makoto Kino, a tall and strong transfer student who awakens as Sailor Jupiter; and Minako Aino, a young aspiring idol who had awakened as Sailor Venus a few months prior, accompanied by her talking feline companion Artemis. Additionally, they befriend Mamoru Chiba, a high school student who assists them on occasion as Tuxedo Mask. In the first arc, the group battles the Dark Kingdom. 
Led by Queen Beryl, a team of generals—the Four Kings—attempt to find the Silver Crystal and free an imprisoned, evil entity called Queen Metaria. Usagi and her team discover that in their previous lives they were members of the ancient Moon Kingdom in a period of time called the Silver Millennium. The Dark Kingdom waged war against them, resulting in the destruction of the Moon Kingdom. Its ruler Queen Serenity later sent her daughter Princess Serenity, her protectors the Sailor Soldiers, their feline advisers Luna and Artemis, and the princess' true love Prince Endymion into the future to be reborn through the power of the Silver Crystal. The team recognizes Usagi as the reincarnated Serenity and Mamoru as Endymion. The Soldiers kill the Four Kings, who turn out to have been Endymion's guardians who defected in their past lives. In a final confrontation with the Dark Kingdom, Minako kills Queen Beryl; she and the other Soldiers then sacrifice their lives in an attempt to destroy Queen Metaria. Using the Silver Crystal, Usagi defeats Metaria and resurrects her friends. At the beginning of the second arc, Usagi and Mamoru's daughter Chibiusa arrives from the future to find the Silver Crystal. As a result, the Soldiers encounter Wiseman and his Black Moon Clan, who are pursuing her. Chibiusa takes the Soldiers to the future city Crystal Tokyo, where her parents rule as Neo-Queen Serenity and King Endymion. During their journey, they meet Sailor Pluto, guardian of the Time-Space Door. Pluto stops the Clan's ruler Prince Demand from destroying the spacetime continuum, leading to her death. Chibiusa later awakens as a Soldier, Sailor Chibi Moon, and helps Usagi kill Wiseman's true form, Death Phantom. The third arc revolves around a group of lifeforms called the Death Busters, created by Professor Soichi Tomoe, who seek to transport the entity Pharaoh 90 to Earth to merge with the planet. 
Tomoe's daughter, Hotaru, is possessed by the entity Mistress 9, who must open the dimensional gateway through which Pharaoh 90 must travel. Auto-racer Haruka Tenoh and violinist Michiru Kaioh appear as Sailor Uranus and Sailor Neptune, who guard the outer rim of the Solar System from external threats. Physics student Setsuna Meioh, Sailor Pluto's reincarnation, joins the protagonists. Usagi obtains the Holy Grail, transforms into Super Sailor Moon, and attempts to use the power of the Grail and the Silver Crystal to destroy Pharaoh 90. This causes Hotaru to awaken as Sailor Saturn, whom Haruka, Michiru, and Setsuna initially perceive as a threat. As the harbinger of death, Hotaru uses her power of destruction to sever Pharaoh 90 from the Earth and instructs Setsuna to use her power over time-space to close the dimensional gateway. In the fourth arc, Usagi and her friends enter high school and fight against the Dead Moon Circus, led by Queen Nehelenia, the self-proclaimed "rightful ruler" of both Silver Millennium and Earth. Nehelenia invades Elysion, which hosts the Earth's Golden Kingdom, capturing its High Priest Helios and instructing her followers to steal the Silver Crystal. As Prince Endymion, Mamoru is revealed to be the owner of the Golden Crystal, the sacred stone of the Golden Kingdom. Mamoru and the Soldiers combine their powers with those of the Holy Grail, enabling Usagi to transform into Eternal Sailor Moon and kill Nehelenia. Four of Nehelenia's henchmen, the Amazoness Quartet, are revealed to be Sailor Soldiers called the Sailor Quartet, who are destined to become Chibiusa's guardians in the future; they had been awakened prematurely and corrupted by Nehelenia. In the fifth and final arc, Usagi and her friends are drawn into a battle against Shadow Galactica, a group of false Sailor Soldiers. Their leader, Sailor Galaxia, plans to steal the Sailor Crystals of true Soldiers to take over the galaxy and kill an evil lifeform known as Chaos. 
When Galaxia kills Mamoru and most of the Sailor Soldiers, she steals their Sailor Crystals. Usagi travels to the Galaxy Cauldron to defeat Galaxia and revive her teammates. Joining Usagi are the Sailor Starlights, who come from the planet Kinmoku; their ruler, Princess Kakyuu; and the infant Sailor Chibichibi, who comes from the distant future. Later, Chibiusa and the Sailor Quartet join Usagi and company. After numerous battles and the death of Galaxia, Sailor Chibichibi reveals her true form as Sailor Cosmos. After defeating Chaos with the Silver Crystal, Usagi revives Mamoru and the Sailor Soldiers, before returning to Earth. The series ends with Usagi and Mamoru's wedding six years later. Naoko Takeuchi redeveloped "Sailor Moon" from her 1991 manga serial "Codename: Sailor V", which was first published on August 20, 1991, and featured Sailor Venus as the main protagonist. Takeuchi wanted to create a story with a theme about girls in outer space. While she was discussing the idea with her editor, Fumio Osano, he suggested the addition of sailor fuku. When "Codename: Sailor V" was proposed for adaptation into an anime by Toei Animation, Takeuchi redeveloped the concept so Sailor Venus became a member of a team. The resulting manga series became a fusion of the popular magical girl genre and the "Super Sentai" series, of which Takeuchi was a fan. Recurring motifs include astronomy, astrology, gemology, Greek and Roman mythology, Japanese elemental themes, teen fashions, and schoolgirl antics. Takeuchi said discussions with Kodansha originally envisaged a single story arc; the storyline was developed in meetings a year before serialization began. After completing the arc, Toei and Kodansha asked Takeuchi to continue the series. She wrote four more story arcs, which were often published simultaneously with the five corresponding seasons of the anime adaptation. The anime ran one or two months behind the manga. 
As a result, the anime follows the storyline of the manga fairly closely, although there are deviations. Takeuchi later said that because Toei's production staff were mostly male, she feels the anime has "a slight male perspective." Takeuchi also said she had planned to kill off the protagonists, but Osano rejected the notion and said, "["Sailor Moon"] is a shōjo manga!" When the anime adaptation was produced, the protagonists were killed in the final battle with the Dark Kingdom, although they were revived. Takeuchi resented that she was unable to do that in her version. Takeuchi also intended for the "Sailor Moon" anime adaptation to last for one season, but due to its immense popularity, Toei asked her to continue the series. At first, she struggled to develop another storyline to extend the series. While she was discussing the problem with Osano, he suggested the inclusion of Usagi's daughter from the future, Chibiusa. Written and illustrated by Naoko Takeuchi, "Sailor Moon" was serialized in the monthly manga anthology "Nakayoshi" from December 28, 1991, to February 3, 1997. The side-stories were serialized simultaneously in "RunRun"—another of Kodansha's manga magazines. The 52 individual chapters were published in 18 "tankōbon" volumes by Kodansha from July 6, 1992, to April 4, 1997. In 2003, the chapters were re-released in a collection of 12 "shinzōban" volumes to coincide with the release of the live-action series. The manga was retitled "Pretty Guardian Sailor Moon" and included new cover art and revised dialogue and illustrations. The ten individual short stories were also released in two volumes. In 2013, the chapters were once again re-released in 10 "kanzenban" volumes to commemorate the manga's 20th anniversary; these include digitally remastered artwork, new covers and color artwork from its "Nakayoshi" run. The books have been enlarged from the typical Japanese manga size to A5. The short stories were republished in two volumes, with the order of the stories shuffled. 
"Codename: Sailor V" was also included in the third edition. The "Sailor Moon" manga was initially licensed for an English release by Mixx (later Tokyopop) in North America. The manga was first published as a serial in "MixxZine" beginning in 1997, but was later removed from the magazine and made into a separate, monthly comic to finish the first, second and third arcs. At the same time, the fourth and fifth arcs were printed in a secondary magazine called "Smile". The series was later collected into three-part graphic novels spanning eighteen volumes, which were published from December 1, 1998, to September 18, 2001. Tokyopop's license expired in May 2005 and its edition went out of print. Daily pages from the Tokyopop version ran in the Japanimation Station, a service accessible to users of America Online. In 2011, Kodansha Comics announced it would publish the "Sailor Moon" manga and the lead-in series "Codename: Sailor V" in English, re-publishing the twelve volumes of "Sailor Moon" simultaneously with the two-volume edition of "Codename: Sailor V" from September 2011 to July 2013. The first of the two volumes of related short stories was published on September 10, 2013; the other was published on November 26. On July 1, 2019, Kodansha Comics released the Eternal editions digitally, following the announcement the day before that the series would be released digitally in ten different languages. The manga has also been licensed in other English-speaking countries. In the United Kingdom, the volumes are distributed by Turnaround Publisher Services. In Australia, the manga is distributed by Penguin Books Australia. The manga has been licensed in Russia and the CIS for distribution by XL-Media, a subdivision of the Eksmo publishing company. The date of release is unknown. 
Toei Animation produced an anime television series based on the 52 manga chapters, also titled "Pretty Soldier Sailor Moon". Junichi Sato directed the first season, Kunihiko Ikuhara took over for the second through fourth seasons, and Takuya Igarashi directed the fifth and final season. The series premiered in Japan on TV Asahi on March 7, 1992, and ran for 200 episodes until its conclusion on February 8, 1997. Most of the international versions, including the English adaptations, are titled "Sailor Moon". On July 6, 2012, Kodansha and Toei Animation announced that they would commence production of a new anime adaptation of "Sailor Moon", called "Pretty Guardian Sailor Moon Crystal", for a simultaneous worldwide release in 2013 as part of the series' 20th anniversary celebrations. "Crystal" premiered on July 5, 2014, with new episodes airing on the first and third Saturdays of each month. A new cast was announced, with Kotono Mitsuishi reprising her role as Sailor Moon. The first two seasons were released together, covering their corresponding arcs of the manga ("Dark Kingdom" and "Black Moon"). A third season (subtitled "Death Busters", based on the "Infinity" arc of the manga) premiered on Japanese television on April 4, 2016. The fourth season (subtitled "Dead Moon", based on the "Dream" arc of the manga) continued as a two-part theatrical anime film project under the title "Pretty Guardian Sailor Moon Eternal"; Part 1, originally scheduled for release on September 11, 2020, was postponed to January 8, 2021, and Part 2 was scheduled for February 11, 2021. Munehisa Sakai directed the first and second seasons, while Chiaki Kon directed the third season and the two films. Three animated theatrical feature films based on the original "Sailor Moon" series have been released in Japan: "" in 1993, followed by "" in 1994, and "" in 1995. The films are side-stories that do not correlate with the timeline of the original series. 
A one-hour television special aired on TV Asahi in Japan on April 8, 1995. Kunihiko Ikuhara directed the first film, while the latter two were directed by Hiroki Shibata. In 1997, an article in "Variety" stated that The Walt Disney Company was interested in acquiring the rights to "Sailor Moon" as a live-action film to be directed by Stanley Tong. In 2017, it was revealed that the "Pretty Guardian Sailor Moon Crystal" anime's fourth season would continue as a two-part theatrical anime film project adapting the "Dream" arc from the manga (subtitled "Dead Moon"). On June 30, 2019, it was announced that the title of the films would be "Pretty Guardian Sailor Moon Eternal". The first film, originally scheduled for release on September 11, 2020, was postponed to January 8, 2021, and the second film was scheduled for release on February 11, 2021. Chiaki Kon returned from the anime's third season to direct the two films. There have been numerous companion books to "Sailor Moon". Kodansha released some of these books for each of the five story arcs, collectively called the "Original Picture Collection". The books contain cover art, promotional material and other work by Takeuchi. Many of the drawings are accompanied by comments on how she developed her ideas and created each picture, and by commentary on the anime interpretation of her story. Another picture collection, "Volume Infinity", was released as a self-published, limited-edition artbook after the end of the series in 1997. This art book includes drawings by Takeuchi and her friends, her staff, and many of the voice actors who worked on the anime. In 1999, Kodansha published the "Materials Collection"; this contained development sketches and notes for nearly every character in the manga, and for some characters who never appeared. Each drawing includes notes by Takeuchi about costume pieces, the mentality of the characters and her feelings about them. 
It also includes timelines for the story arcs and for the real-life release of products and materials relating to the anime and manga. A short story, "Parallel Sailor Moon", celebrating the Year of the Rabbit, is also featured. "Sailor Moon" was also adapted into a series of novels, released in 1998. The first book was written by Stuart J. Levy and the following ones by Lianne Sentar. In mid-1993, the first musical theater production based on "Sailor Moon" premiered; Anza Ohyama starred as Sailor Moon. Thirty such musicals in all have been produced, with one in pre-production. The shows' stories include anime-inspired plotlines and original material. Music from the series has been released on about 20 memorial albums. The popularity of the musicals has been cited as a reason behind the production of the live-action television series, "Pretty Guardian Sailor Moon". During the original run, musicals were staged in the winter and summer of each year, with summer musicals staged at the Sunshine Theater in the Ikebukuro area of Tokyo. In the winter, musicals toured to other large cities in Japan, including Osaka, Fukuoka, Nagoya, Shizuoka, Kanazawa, Sendai, Saga, Oita, Yamagata and Fukushima. The final incarnation of the first run, , went on stage in January 2005, following which Bandai officially put the series on hiatus. On June 2, 2013, Fumio Osano announced on his Twitter page that the "Sailor Moon" musicals would begin again in September 2013. The 20th anniversary show "La Reconquista" ran from September 13 to 23 at Shibuya's AiiA Theater Tokyo, with Satomi Ōkubo as Sailor Moon. Ōkubo reprised the role in the 2014 production "Petite Étrangère", which ran from August 21 to September 7, 2014, again at AiiA Theater Tokyo. In 1993, Renaissance-Atlantic Entertainment, Bandai and Toon Makers, Inc. conceptualized their own version of "Sailor Moon", which was half live-action and half Western-style animation. 
Toon Makers produced a 17-minute proof of concept presentation video as well as a two-minute music video, both of which were directed by Rocky Sotoloff, for this concept. Renaissance-Atlantic presented the concept to Toei, but it was turned down as their concept would have cost significantly more than simply exporting and dubbing the anime adaptation. At the 1998 Anime Expo convention in Los Angeles, the music video was shown. It has since been copied numerous times and has been viewed on many streaming video sites. Because of the relatively poor quality of the source video and circulated footage, many anime fans thought that the music video was actually a leaked trailer for the project. Additional copies of the footage have since been uploaded to the Internet and served only to bolster the mistaken assumption, in addition to incorrectly attributing the production to Saban Entertainment, who became known for a similar treatment that created the "Power Rangers" series. In 1998, Frank Ward, along with his company Renaissance-Atlantic Entertainment, tried to revive the idea of doing a live-action series based on Sailor Moon, this time called "Team Angel", without the involvement of Toon Makers. A 2-minute reel was produced and sent to Bandai America, but was also rejected. In 2003, Toei Company produced a Japanese live-action "Sailor Moon" television series using the new translated English title of "Pretty Guardian Sailor Moon". Its 49 episodes were broadcast on Chubu-Nippon Broadcasting from October 4, 2003 to September 25, 2004. "Pretty Guardian Sailor Moon" featured Miyuu Sawai as Usagi Tsukino, Rika Izumi (credited as Chisaki Hama) as Ami Mizuno, Keiko Kitagawa as Rei Hino, Mew Azama as Makoto Kino, Ayaka Komatsu as Minako Aino, Jouji Shibue as Mamoru Chiba, Keiko Han reprising her voice role as Luna from the original anime and Kappei Yamaguchi voicing Artemis. 
The series was an alternate retelling of the Dark Kingdom arc, adding a storyline different from that in the manga and first anime series, with original characters and new plot developments. In addition to the main episodes, two direct-to-video releases appeared after the show ended its television broadcast. "Special Act" is set four years after the main storyline ends, and shows the wedding of the two main characters. "Act Zero" is a prequel showing the origins of and Tuxedo Mask. The "Sailor Moon" franchise has spawned several video games across various genres and platforms. Most were made by Bandai and its subsidiary Angel; others were produced by Banpresto. The early games were side-scrolling fighters; later ones were unique puzzle games or versus fighting games. "" was a turn-based role-playing video game. The only "Sailor Moon" game produced outside Japan, 3VR New Media's "The 3D Adventures of Sailor Moon", went on sale in North America in 1997. A video game called "Sailor Moon: La Luna Splende" ("Sailor Moon: The Shining Moon") was released on March 16, 2011, for the Nintendo DS. The Dyskami Publishing Company released "Sailor Moon Crystal Dice Challenge", created by James Ernest of Cheapass Games and based on the "Button Men" tabletop game, in 2017, and "Sailor Moon Crystal Truth or Bluff" in 2018. A Sailor Moon attraction, "Pretty Guardian Sailor Moon: The Miracle 4-D", was announced for Universal Studios Japan. It featured Sailor Moon and the Inner Guardians arriving at the theme park, only to discover and stop the Youma's plan to steal people's energy. The attraction ran from March 16 through July 24, 2018. The sequel attraction, "Pretty Guardian Sailor Moon: The Miracle 4-D: Moon Palace arc", was also announced; it featured all 10 Sailor Guardians and Super Sailor Moon, and ran from May 31 through August 25, 2019. An ice skating show of "Sailor Moon" was announced on June 30, 2019, starring Evgenia Medvedeva as the lead. 
The name of the ice-skating show was announced as "Pretty Guardian Sailor Moon: Prism on Ice", along with additional cast: Anza, from the first "Sailor Moon" musicals, will play Queen Serenity, and the main voice actresses of the "Sailor Moon Crystal" anime series will voice their respective characters. Takuya Hiramatsu from the musicals will write the screenplay, Yuka Sato and Benji Schwimmer are in charge of choreography, and Akiko Kosaka & Lunar Eclipse Meeting will write the music for the show. The show was set to debut in early June 2020 but was postponed to June 2021 due to COVID-19. "Sailor Moon" is one of the most popular manga series of all time and continues to enjoy high readership worldwide. More than one million copies of its "tankōbon" volumes had been sold in Japan by the end of 1995. By the series' 20th anniversary in 2012, the manga had sold over 35 million copies in over fifty countries, and the franchise has generated in worldwide merchandise sales as of 2014. The manga won the Kodansha Manga Award in 1993 for "shōjo". The English adaptations of both the manga and the anime series became the first successful shōjo title in the United States. The character of Sailor Moon is recognized as one of the most important and popular female superheroes of all time. "Sailor Moon" has also become popular internationally. "Sailor Moon" was broadcast in Spain and France beginning in December 1993; these became the first countries outside Japan to broadcast the series. It was later aired in Russia, South Korea, the Philippines, China, Italy, Taiwan, Thailand, Indonesia and Hong Kong, before North America picked up the franchise for adaptation. In the Philippines, "Sailor Moon" was one of its carrier network's main draws, helping it to become the third-biggest network in the country. In 2001, the "Sailor Moon" manga was Tokyopop's best-selling property, outselling the next-best-selling titles by at least a factor of 1.5. 
In Diamond Comic Distributors's May 1999 "Graphic Novel and Trade Paperback" category, "Sailor Moon" Volume 3 was the best-selling comic book in the United States. In his 2007 book "", Jason Thompson gave the manga series three stars out of four. He enjoyed the blending of "shōnen" and "shōjo" styles and said the combat scenes seemed heavily influenced by "Saint Seiya", but shorter and less bloody. He also said the manga itself appeared similar to "Super Sentai" television shows. Thompson found the series fun and entertaining, but said the repetitive plot lines were a detriment to the title, which the increasing quality of art could not make up for; even so, he called the series "sweet, effective entertainment." Thompson said although the audience for "Sailor Moon" is both male and female, Takeuchi does not use excessive fanservice for males, which would run the risk of alienating her female audience. Thompson said fight scenes are not physical and "boil down to their purest form of a clash of wills", which he says "makes thematic sense" for the manga. Comparing the manga and anime, Sylvain Durand said the manga artwork is "gorgeous", but its storytelling is more compressed and erratic and the anime has more character development. Durand said "the sense of tragedy is greater" in the manga's telling of the "fall of the Silver Millennium," giving more detail about the origins of the Shitennou and on Usagi's final battle with Beryl and Metaria. Durand said the anime omits information that makes the story easy to understand, but judges the anime more "coherent" with a better balance of comedy and tragedy, whereas the manga is "more tragic" and focused on Usagi and Mamoru's romance. For the week of September 11, 2011, to September 17, 2011, the first volume of the re-released "Sailor Moon" manga was the best-selling manga on "The New York Times" Manga Best Sellers list, with the first volume of "Codename: Sailor V" in second place. 
The first print run of the first volume sold out after four weeks. With their dynamic heroines and action-oriented plots, the manga and anime series are widely credited with reinvigorating the magical girl genre. After its success, many similar magical girl series, including "Magic Knight Rayearth", "Wedding Peach", "Nurse Angel Ririka SOS", "Saint Tail", and "Pretty Cure", emerged. "Sailor Moon" has been called "the biggest breakthrough" in English-dubbed anime until 1995, when it premiered on YTV, and "the pinnacle of little kid "shōjo" anime." Cultural anthropologist Matt Thorn said that soon after "Sailor Moon", "shōjo" manga started appearing in book shops instead of fandom-dominated comic shops. The series is credited with beginning a wider movement of girls taking up "shōjo" manga. Canadian librarian Gilles Poitras defines a generation of anime fans as those who were introduced to anime by "Sailor Moon" in the 1990s, noting they were both much younger than other fans and mostly female. Historian Fred Patten credits Takeuchi with popularizing the concept of a "Super Sentai"-like team of magical girls, and Paul Gravett credits the series with revitalizing the magical girl genre itself. A reviewer for "THEM Anime Reviews" also credited the anime series with changing the genre—its heroine must use her powers to fight evil, not simply have fun as previous magical girls had done. The series has also been compared to "Mighty Morphin Power Rangers", "Buffy the Vampire Slayer", and "Sabrina the Teenage Witch". "Sailor Moon" also influenced the development of "", "W.I.T.C.H.", "Winx Club", "LoliRock", "Star vs. the Forces of Evil", and "Totally Spies!" In western culture, "Sailor Moon" is sometimes associated with the feminist and Girl Power movements and with empowering its viewers, especially regarding the "credible, charismatic and independent" characterizations of the Sailor Soldiers, which were "interpreted in France as an unambiguously feminist position". 
Although "Sailor Moon" is regarded as empowering to women and feminism in concept, through the aggressive nature and strong personalities of the Sailor Soldiers, it embodies a specific type of feminist concept in which "traditional feminine ideals [are] incorporated into characters that act in traditionally male capacities". While the Sailor Soldiers are strong, independent fighters who thwart evil—which is generally a masculine stereotype—they are also ideally feminized in the transformation of the Sailor Soldiers from teenage girls into magical girls, with heavy emphasis on jewelry, make-up and their highly sexualized outfits with cleavage, short skirts, and accentuated waists. The most notable hyper-feminine features of the Sailor Soldiers—and of most other females in Japanese girls' comics—are the girls' thin bodies, long legs, and, in particular, round, orb-like eyes. Eyes are commonly treated as the primary conveyors of a character's emotion: sensitive characters have larger eyes than insensitive ones. Male characters generally have smaller eyes that lack the sparkle and shine of the female characters' eyes. The stereotypical role of women in Japanese culture is to embody romantic and loving feelings; the prevalence of hyper-feminine qualities like the openness of the female eye in Japanese girls' comics is therefore clearly exhibited in "Sailor Moon". Thus, "Sailor Moon" emphasizes a type of feminist model by combining traditional masculine action with traditional female affection and sexuality through the Sailor Soldiers. Its characters are often described with "catty stereotypes", with Sailor Moon's character in particular singled out as less than feminist. In English-speaking countries, "Sailor Moon" developed a cult following among anime fans and male university students. Patrick Drazen says the Internet was a new medium that fans used to communicate and played a role in the popularity of "Sailor Moon". 
Fans could use the Internet to communicate about the series, organize campaigns to return "Sailor Moon" to U.S. broadcast, to share information about episodes that had not yet aired, or to write fan fiction. In 2004, one study said there were 3,335,000 websites about Sailor Moon, compared to 491,000 for Mickey Mouse. Gemma Cox of "Neo" magazine said part of the series' allure was that fans communicated via the Internet about the differences between the dub and the original version. The "Sailor Moon" fandom was described in 1997 as being "small and dispersed." In a United States study, twelve children paid rapt attention to the fighting scenes in "Sailor Moon", although when asked whether they thought "Sailor Moon" was violent, only two said yes and the other ten described the episodes as "soft" or "cute."
https://en.wikipedia.org/wiki?curid=27931
SEUL Simple End User Linux is an advocacy group that promotes Linux programs in education and science. SEUL also hosts numerous free software projects and efforts, such as the WorldForge Project's website. The SEUL/Edu project seeks to further the use of Linux and open-source software in schools, and was one of the groups which laid the groundwork for the SchoolForge project. The SEUL/Sci project (now dormant) focused on the use of Linux and Free Software in research. Another project of note connected to SEUL is the Tor anonymity network.
https://en.wikipedia.org/wiki?curid=27932
Two-party system A two-party system is a party system where two major political parties dominate the political landscape. At any point in time, one of the two parties typically holds a majority in the legislature and is usually referred to as the "majority" or "governing party" while the other is the "minority" or "opposition party". Around the world, the term has different senses. For example, in the United States, Jamaica, Malta, and Zimbabwe, the sense of "two-party system" describes an arrangement in which all or nearly all elected officials belong to one of the only two major parties, and third parties rarely win any seats in the legislature. In such arrangements, two-party systems are thought to result from various factors like winner-takes-all election rules. In such systems, while chances for third-party candidates winning election to major national office are remote, it is possible for groups within the larger parties, or in opposition to one or both of them, to exert influence on the two major parties. In contrast, in Canada, the United Kingdom and Australia and in other parliamentary systems and elsewhere, the term "two-party system" is sometimes used to indicate an arrangement in which two major parties dominate elections but in which there are viable third parties which do win seats in the legislature, and in which the two major parties exert proportionately greater influence than their percentage of votes would suggest. Explanations for why a political system with free elections may evolve into a two-party system have been debated. A leading theory, referred to as Duverger's law, states that two parties are a natural result of a winner-take-all voting system. 
In countries such as Britain, two major parties emerge which have strong influence and tend to elect most of the candidates, but a multitude of lesser parties exist with varying degrees of influence, and sometimes these lesser parties are able to elect officials who participate in the legislature. In political systems based on the Westminster system, a particular style of parliamentary democracy based on the British model and found in many Commonwealth countries, a majority party will form the government and the minority party will form the opposition, and coalitions of lesser parties are possible; in the rare circumstance in which neither party holds a majority, a hung parliament arises. Sometimes these systems are described as "two-party systems", but they are usually referred to as "multi-party" systems. There is not always a sharp boundary between a two-party system and a multi-party system. Generally, a two-party system becomes a dichotomous division of the political spectrum with an ostensibly right-wing and left-wing party: the Nationalist Party vs. the Labour Party in Malta, the Liberal/National Coalition vs. Labor in Australia, Republicans vs. Democrats in the United States and the Conservative Party vs. the Labour Party in the United Kingdom. Other parties in these countries may have seen candidates elected to local or subnational office, however. In some governments, certain chambers may resemble a two-party system and others a multi-party system. For example, the politics of Australia are largely two-party (the Liberal/National Coalition is often considered a single party at a national level due to their long-standing alliance in forming government; they additionally rarely compete for the same seat) for the Australian House of Representatives, which is elected by instant-runoff voting, known within Australia as preferential voting. 
However, third parties are more common in the Australian Senate, which uses a proportional voting system more amenable to minor parties. In Canada, there is a multiparty system at the federal level and in the largest provinces of British Columbia, Ontario, Quebec and Manitoba, as well as the smaller New Brunswick, Newfoundland and Labrador, Nova Scotia and Yukon Territory. However, many of the provinces have effectively become two-party systems in which only two parties regularly get members elected. Examples include British Columbia (where the battles are between the New Democratic Party and the BC Liberals), Alberta (between the Alberta New Democratic Party and the United Conservative Party), Saskatchewan (between the Saskatchewan Party and the New Democratic Party), New Brunswick (between the Liberals and the Progressive Conservatives) and Prince Edward Island (between the Liberals and the Progressive Conservatives). The English-speaking countries of the Caribbean, while inheriting their basic political system from Great Britain, have become two-party systems. The politics of Jamaica are between the People's National Party and the Jamaica Labour Party. The politics of Guyana are between the People's Progressive Party and APNU, which is actually a coalition of smaller parties. The politics of Trinidad and Tobago are between the People's National Movement and the United National Congress. The politics of Belize are between the United Democratic Party and the People's United Party. The politics of the Bahamas are between the Progressive Liberal Party and the Free National Movement. The politics of Barbados are between the Democratic Labour Party and the Barbados Labour Party. The politics of Zimbabwe are effectively a two-party system, between the Zimbabwe African National Union–Patriotic Front, founded by Robert Mugabe, and the opposition coalition Movement for Democratic Change. 
India has a multi-party system but also shows characteristics of a two-party system, with the National Democratic Alliance (NDA) and the United Progressive Alliance (UPA) as the two main players. The NDA and UPA are not single political parties but alliances of several smaller parties. Other smaller parties not aligned with either the NDA or the UPA exist, and overall commanded about 20% of the seats in the Lok Sabha in 2009, a share which increased to 28% in the 2014 general election. Most Latin American countries also have presidential systems very similar to that of the US, often with winner-takes-all elections. Because power commonly accumulates in the presidential office, both the official party and the main opposition became important political protagonists, historically producing two-party systems. One of the first manifestations of this pattern was the rivalry between liberals and conservatives, who often fought for power throughout Latin America, creating the first two-party systems in most Latin American countries; these rivalries often led to civil wars in places like Colombia, Ecuador, Mexico, Venezuela, the Central American Republic and Peru, with fights focusing especially on opposing or defending the privileges of the Catholic Church and the creole aristocracy. Other examples of primitive two-party systems included the Pelucones vs Pipiolos in Chile, Federalists vs Unitarians in Argentina, Colorados vs Liberals in Paraguay and Colorados vs Nationals in Uruguay. However, as in other regions, the original rivalry between liberals and conservatives was overtaken by a rivalry between center-left (often social-democratic) parties and center-right liberal conservative parties, focusing more on economic differences than on the cultural and religious differences that characterized the liberal-versus-conservative period. 
Examples of this include the National Liberation Party vs the Social Christian Unity Party in Costa Rica, the Peronist Justicialist Party vs the Radical Civic Union in Argentina, Democratic Action vs COPEI in Venezuela, the Colombian Liberal Party vs the Colombian Conservative Party in Colombia, the Democratic Revolutionary Party vs the Panameñista Party in Panama and the Liberal Party vs the National Party in Honduras. After the democratization of Central America following the end of the Central American crisis in the 1990s, former far-left guerrillas and former right-wing authoritarian parties, now at peace, formed similar two-party systems in countries like Nicaragua, between the Sandinista National Liberation Front and the Liberals, and El Salvador, between the Farabundo Martí National Liberation Front and the Nationalist Republican Alliance. The traditional two-party dynamic began to break down, especially in the early 2000s, as alternative parties won elections that upset the traditional two-party systems, including Rafael Caldera's (National Convergence) victory in Venezuela in 1993, Álvaro Uribe's (Colombia First) victory in 2002, Tabaré Vázquez's (Broad Front) victory in Uruguay in 2004, Ricardo Martinelli's (Democratic Change) victory in 2009 in Panama, Luis Guillermo Solís's (Citizens' Action Party) victory in 2014 in Costa Rica, Mauricio Macri's (Republican Proposal) victory in 2015 in Argentina and Nayib Bukele's (Grand Alliance for National Unity) victory in 2019 in El Salvador, all from non-traditional third parties in their respective countries. In some countries, like Chile and Venezuela, the political system is now split into two large multi-party alliances or blocs, one on the left and one on the right of the spectrum (Concertación/New Majority vs the Alliance in Chile, the Democratic Unity Roundtable vs the Great Patriotic Pole in Venezuela). 
Malta is somewhat unusual in that while the electoral system is single transferable vote (STV), traditionally associated with proportional representation, minor parties have not had much success. Politics is dominated by the centre-left Labour Party and the centre-right Nationalist Party, with no third parties winning seats in Parliament between 1962 and 2017. The United States has two dominant political parties; historically, there have been few instances in which third-party candidates won an election. In the First Party System, only Alexander Hamilton's Federalist Party and Thomas Jefferson's Democratic-Republican Party were significant political parties. Toward the end of the First Party System, the Republicans dominated a one-party system (primarily under the Presidency of James Monroe). Under the Second Party System, the Democratic-Republican Party split during the election of 1824 into Adams' Men and Jackson's Men. In 1828, the modern Democratic Party formed in support of Andrew Jackson. The National Republicans were formed in support of John Quincy Adams. After the National Republicans collapsed, the Whig Party and the Free Soil Party quickly formed and collapsed. In 1854, the modern Republican Party formed from a loose coalition of former Whigs, Free Soilers and other anti-slavery activists. Abraham Lincoln became the first Republican president in 1860. During the Third Party System, from 1854 until the mid-1890s, the Republican Party was the dominant political faction, but the Democrats held a strong, loyal coalition in the Solid South. During the Fourth Party System, from about 1896 to 1932, the Republicans remained the dominant Presidential party, although Democrats Grover Cleveland and Woodrow Wilson were both elected to two terms. At the onset of the Fifth Party System in 1932, Democrats took firm control of national politics with the landslide victories of Franklin D. Roosevelt in four consecutive elections. 
Other than the two terms of Republican Dwight Eisenhower from 1953 to 1961, Democrats retained firm control of the Presidency until the mid-1960s. Since the mid-1960s, despite a number of landslides (such as Richard Nixon carrying 49 states and 61% of the popular vote over George McGovern in 1972, and Ronald Reagan carrying 49 states and 58% of the popular vote over Walter Mondale in 1984), Presidential elections have been competitive between the predominant Republican and Democratic parties, and no one party has been able to hold the Presidency for more than three consecutive terms. In the election of 2012, only 4% separated the popular vote between Barack Obama (51%) and Mitt Romney (47%), although Obama won the electoral vote (332–206). There was a significant change in U.S. politics in 1960, and this is seen by some as a transition to a sixth party system. Throughout every American party system, no third party has won a Presidential election or majorities in either house of Congress. Despite that, third parties and third-party candidates have gained traction and support. In the election of 1912, Theodore Roosevelt won 27% of the popular vote and 88 electoral votes running as a Progressive. In the 1992 Presidential election, Ross Perot won 19% of the popular vote but no electoral votes running as an Independent. Modern American politics, in particular the electoral college system, has been described as duopolistic, since the Republican and Democratic parties have dominated and framed policy debate, as well as the public discourse on matters of national concern, for about a century and a half. Third parties have encountered various obstacles in getting onto ballots at different levels of government, as well as other electoral barriers, such as denial of access to general-election debates. Since 1987, the Commission on Presidential Debates, established by the Republican and Democratic parties themselves, has supplanted the debates run since 1920 by the League of Women Voters. 
The League withdrew its support in protest in 1988 over objections to alleged stagecraft such as rules for camera placement, filling the audience with supporters, approved moderators, predetermined question selection, room temperature and others. The Commission maintains its own rules for admittance and has admitted only a single third-party candidate to a televised debate, Ross Perot in 1992. South Korea has a multi-party system that has sometimes been described as having characteristics of a two-party system. Furthermore, the Lebanese Parliament is mainly made up of two bipartisan alliances. Although both alliances are made up of several political parties on both ends of the political spectrum, the two-way political situation has mainly arisen due to strong ideological differences in the electorate. Once again, this can mainly be attributed to the winner-takes-all thesis. Historically, Brazil had a two-party system for most of its military dictatorship (1964–1985): the military junta banned all existing parties when it took power and created a pro-government party, the National Renewal Alliance, and an official opposition party, the Brazilian Democratic Movement. The two parties were dissolved in 1979, when the regime allowed other parties to form. A report in "The Christian Science Monitor" in 2008 suggested that Spain was moving towards a "greater two-party system", while acknowledging that Spain has "many small parties". However, a 2015 article published by "WashingtonPost.com", written by academic Fernando Casal Bértoa, noted the decline in support for the two main parties, the People's Party (PP) and the Spanish Socialist Workers' Party (PSOE), in recent years, with these two parties winning only 52 percent of the votes in that year's regional and local elections. He explained this as being due to the Spanish economic crisis, a series of political corruption scandals and broken campaign promises. 
He argued that the emergence of the new Citizens and Podemos parties would mean the political system would evolve into a two-bloc system, with an alliance of the PP and Citizens on the right facing a leftist coalition of PSOE, Podemos and the United Left. Two-party systems can be contrasted with: There are several reasons why, in some systems, two major parties dominate the political landscape. There has been speculation that a two-party system arose in the United States from early political battling between the federalists and anti-federalists in the first few decades after the ratification of the Constitution, according to several views. In addition, there has been more speculation that the winner-takes-all electoral system as well as particular state and federal laws regarding voting procedures helped to cause a two-party system. Political scientists such as Maurice Duverger and William H. Riker claim that there are strong correlations between voting rules and type of party system. Jeffrey D. Sachs agreed that there was a link between voting arrangements and the effective number of parties. Sachs explained how the first-past-the-post voting arrangement tended to promote a two-party system: Consider a system in which voters can vote for any candidate from any one of many parties. Suppose further that if a party gets 15% of votes, then that party will win 15% of the seats in the legislature. This is termed "proportional representation" or more accurately as "party-proportional representation". Political scientists speculate that proportional representation leads logically to multi-party systems, since it allows new parties to build a niche in the legislature: In contrast, a voting system that allows only a single winner for each possible legislative seat is sometimes termed a plurality voting system or single-winner voting system and is usually described under the heading of a "winner-takes-all" arrangement. 
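The vote-to-seat arithmetic behind party-proportional representation, as Sachs describes it, can be sketched with a short example. The parties, vote shares, and 100-seat legislature below are made up for illustration, and the largest-remainder (Hamilton) method is just one of several apportionment rules a real system might use:

```python
# Hypothetical national vote shares (percent) for three parties.
votes = {"A": 45, "B": 40, "C": 15}
seats_total = 100  # size of the illustrative legislature

# Each party's exact "quota" of seats is proportional to its vote share.
quotas = {p: v * seats_total / sum(votes.values()) for p, v in votes.items()}

# Award the whole-number part of each quota...
seats = {p: int(q) for p, q in quotas.items()}

# ...then hand any remaining seats to the largest fractional remainders.
leftover = seats_total - sum(seats.values())
for p in sorted(quotas, key=lambda p: quotas[p] - seats[p], reverse=True)[:leftover]:
    seats[p] += 1

print(seats)  # the 15%-vote party C holds roughly 15% of the seats
```

Under this rule a party polling 15% nationally ends up with about 15% of the legislature, which is why proportional representation lets new parties build a niche rather than being shut out.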
Each voter can cast a single vote for any candidate within any given legislative district, but the candidate with the most votes wins the seat, although variants, such as requiring a majority, are sometimes used. What happens is that in a general election, a party that consistently comes in third in every district is unlikely to win any legislative seats, even if there is a significant proportion of the electorate favoring its positions. This arrangement strongly favors large and well-organized political parties that are able to appeal to voters in many districts and hence win many seats, and discourages smaller or regional parties. Politically oriented people consider their only realistic way to capture political power is to run under the auspices of the two dominant parties. In the U.S., forty-eight states have a standard "winner-takes-all" electoral system for amassing presidential votes in the Electoral College system. The "winner-takes-all" principle applies in presidential elections, since if a presidential candidate gets the most votes in any particular state, "all" of the electoral votes from that state are awarded. In all but two states, Maine and Nebraska, the presidential candidate winning a plurality of votes wins all of the electoral votes, a practice called the unit rule. Duverger concluded that "plurality election single-ballot procedures are likely to produce two-party systems, whereas proportional representation and runoff designs encourage multipartyism." He suggested there were two reasons why "winner-takes-all" systems lead to a two-party system. First, the weaker parties are pressured to form an alliance, sometimes called a "fusion", to try to become big enough to challenge a large dominant party and, in so doing, gain political clout in the legislature. Second, voters learn, over time, not to vote for candidates outside of one of the two large parties, since their votes for third-party candidates are usually ineffectual. 
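The effect described above, where a party that finishes third everywhere wins nothing, can be sketched with a toy simulation. The districts and vote percentages below are invented for illustration; the only rule applied is single-winner plurality:

```python
# Four hypothetical districts; party C polls 15% in each but never first.
districts = [
    {"A": 48, "B": 37, "C": 15},
    {"A": 36, "B": 49, "C": 15},
    {"A": 44, "B": 41, "C": 15},
    {"A": 39, "B": 46, "C": 15},
]

seats = {"A": 0, "B": 0, "C": 0}
for votes in districts:
    winner = max(votes, key=votes.get)  # most votes takes the whole seat
    seats[winner] += 1

# C's share of the total national vote, for comparison with its seat count.
national_share_C = sum(d["C"] for d in districts) / sum(
    sum(d.values()) for d in districts
)

print(seats, national_share_C)  # C: 15% of the national vote, zero seats
```

Despite holding a steady 15% of the national vote, party C wins no seats at all, which is the structural pressure Duverger identifies pushing voters and minor parties toward the two dominant blocs.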
As a result, weaker parties are eliminated by voters over time. Duverger pointed to statistics and tactics to suggest that voters tended to gravitate towards one of the two main parties, a phenomenon which he called "polarization", and tend to shun third parties. For example, some analysts suggest that the Electoral College system in the United States, by favoring a system of winner-takes-all in presidential elections, is a structural choice favoring only two major parties. Gary Cox suggested that America's two-party system was closely related to economic prosperity in the country: An effort in 2012 by centrist groups to promote ballot access by third-party candidates, called Americans Elect, spent $15 million to get ballot access but failed to elect any candidates. The lack of choice in a two-party model in politics has often been compared to the variety of choices in the marketplace. Third parties, meaning a party other than one of the two dominant parties, are possible in two-party systems, but they are often unlikely to exert much influence by gaining control of legislatures or by winning elections. While there are occasional opinions in the media expressed about the possibility of third parties emerging in the United States, for example, political insiders such as the 1980 presidential candidate John Anderson think the chances of one appearing in the early twenty-first century are remote. A report in "The Guardian" suggested that American politics has been "stuck in a two-way fight between Republicans and Democrats" since the Civil War, and that third-party runs have had little meaningful success. Third parties in a two-party system can be: When third parties are built around an ideology which is at odds with the majority mindset, many members belong to such a party not for the purpose of expecting electoral success but rather for personal or psychological reasons. 
In the U.S., third parties include older ones such as the Libertarian Party and the Green Party and newer ones such as the Pirate Party. Many believe that third parties don't affect American politics by winning elections, but they can act as "spoilers" by taking votes from one of the two major parties. They act like barometers of change in the political mood, since they push the major parties to consider their demands. An analysis in "New York Magazine" by Ryan Lizza in 2006 suggested that third parties arose from time to time in the nineteenth century around single-issue movements such as abolition, women's suffrage, and the direct election of senators, but were less prominent in the twentieth century. The so-called "third party" in the United Kingdom is the Liberal Democrats. In the 2010 election, the Liberal Democrats received 23% of the votes but only 9% of the seats in the House of Commons. While electoral results do not necessarily translate into legislative seats, the Liberal Democrats can exert influence if there is a situation such as a hung parliament. In this instance, neither of the two main parties (at present, the Conservative Party and the Labour Party) has sufficient authority to run the government. Accordingly, the Liberal Democrats can in theory exert tremendous influence in such a situation, since they can ally with one of the two main parties to form a coalition. This happened in the Coalition government of 2010. Yet given that more than 13% of the seats in the British House of Commons were held in 2011 by representatives of political parties other than the two leading political parties of that nation, contemporary Britain is considered by some to be a multi-party system, and not a two-party system. The two-party system in the United Kingdom allows for other parties to exist, although the main two parties tend to dominate politics; in this arrangement, other parties are not excluded and can win seats in Parliament. 
In contrast, the two-party system in the United States has been described as a duopoly or an enforced two-party system, such that politics is almost entirely dominated by either the Republicans or the Democrats, and third parties rarely win seats in Congress. Some historians have suggested that two-party systems promote centrism and encourage political parties to find common positions which appeal to wide swaths of the electorate. This can lead to political stability which leads, in turn, to economic growth. Historian Patrick Allitt of the Teaching Company suggested that it is difficult to overestimate the long-term economic benefits of political stability. Sometimes two-party systems have been seen as preferable to multi-party systems because they are simpler to govern, with less fractiousness and greater harmony, since they discourage radical minor parties, while multi-party systems can sometimes lead to hung parliaments. Italy, with a multi-party system, has had years of divisive politics since 2000, although analyst Silvia Aloisi suggested in 2008 that the nation may be moving closer to a two-party arrangement. The two-party system has also been identified as simpler, since there are fewer voting choices. Two-party systems have been criticized for downplaying alternative views, being less competitive, encouraging voter apathy since there is a perception of fewer choices, and putting a damper on debate within a nation. In a proportional representation system, lesser parties can moderate policy, since they are not usually eliminated from government. One analyst suggested the two-party approach may not promote inter-party compromise but may encourage partisanship. In "The Tyranny of the Two-Party System", Lisa Jane Disch criticizes two-party systems for failing to provide enough options, since only two choices are permitted on the ballot. 
She wrote: There have been arguments that the winner-take-all mechanism discourages independent or third-party candidates from running for office or promulgating their views. Ross Perot's former campaign manager wrote that the problem with having only two parties is that the nation loses "the ability for things to bubble up from the body politic and give voice to things that aren't being voiced by the major parties." One analyst suggested that parliamentary systems, which typically are multi-party in nature, lead to a better "centralization of policy expertise" in government. Multi-party governments permit wider and more diverse viewpoints in government, and encourage dominant parties to make deals with weaker parties to form winning coalitions. Analyst Chris Weigant of "the Huffington Post" wrote that "the parliamentary system is inherently much more open to minority parties getting much better representation than third parties do in the American system". After an election in which the governing party changes, there can be a "polar shift in policy-making" as voters react to the change. Political analyst A. G. Roderick, writing in his book "Two Tyrants", argued in 2015 that the two American parties, the Republicans and Democrats, are highly unpopular, are not part of the political framework of state governments, and do not represent the 47% of the electorate who identify themselves as "independents". He makes a case that the American president should be elected on a non-partisan basis, and asserts that both political parties are "cut from the same cloth of corruption and corporate influence." Others have argued that the two-party system encourages an environment that stifles individual thought and analysis. In a two-party system, knowledge of a person's political leaning allows assumptions to be made about that individual's opinions on a wide variety of topics (e.g.
abortion, taxes, the space program, a viral pandemic, human sexuality, the environment, warfare, opinions on police, etc.) that have no causal connection with each other. "The more destructive problem is the way this skews the discussion of the issues facing the nation. The media – meaning news sources from Fox News to the New York Times and everything in between – seem largely incapable of dealing with any issue outside of the liberal versus conservative paradigm. Whether it's dealing with ISIS, the debt ceiling, or climate change, the media frames every issue as a simple debate between the Democratic and the Republican positions. This creates the ludicrous idea that every public policy problem has two, and only two, approaches. That's nonsense. Certainly some problems have only two resolutions, some have only one, but most have a range of possible solutions. But the 'national' debate presents every issue as a simplistic duality, which trivializes everything." —Michael Coblenz, 2016

The two-party system, in the sense of the looser definition, where two parties dominate politics but third parties can elect members and gain some representation in the legislature, can be traced to the development of political parties in the United Kingdom. There was a division in English politics at the time of the Civil War and Glorious Revolution in the late 17th century. The Whigs supported Protestant constitutional monarchy against absolute rule, while the Tories, originating in the Royalist (or "Cavalier") faction of the English Civil War, were conservative royalist supporters of a strong monarchy as a counterbalance to the republican tendencies of Parliament. In the following century, the Whig party's support base widened to include emerging industrial interests and wealthy merchants.
The basic matters of principle that defined the struggle between the two factions concerned the nature of constitutional monarchy, the desirability of a Catholic king, the extension of religious toleration to nonconformist Protestants, and other issues that had been put on the liberal agenda through the political concepts propounded by John Locke, Algernon Sidney and others. Vigorous struggle between the two factions characterised the period from the Glorious Revolution to the 1715 Hanoverian succession, over the legacy of the overthrow of the Stuart dynasty and the nature of the new constitutional state. This proto two-party system fell into relative abeyance after the accession to the throne of George I and the consequent period of Whig supremacy under Robert Walpole, during which the Tories were systematically purged from high positions in government. However, although the Tories were dismissed from office for half a century, they still retained a measure of party cohesion under William Wyndham and acted as a united, though unavailing, opposition to Whig corruption and scandals. At times they cooperated with the "Opposition Whigs", Whigs who were in opposition to the Whig government; however, the ideological gap between the Tories and the Opposition Whigs prevented them from coalescing as a single party. The old Whig leadership dissolved in the 1760s into a decade of factional chaos, with distinct "Grenvillite", "Bedfordite", "Rockinghamite", and "Chathamite" factions successively in power, all referring to themselves as "Whigs". Out of this chaos, the first distinctive parties emerged. The first such party was the Rockingham Whigs, under the leadership of Charles Watson-Wentworth and the intellectual guidance of the political philosopher Edmund Burke.
Burke laid out a philosophy that described the basic framework of the political party as "a body of men united for promoting by their joint endeavours the national interest, upon some particular principle in which they are all agreed". As opposed to the instability of the earlier factions, which were often tied to a particular leader and could disintegrate if removed from power, the two-party system was centred on a set of core principles held by both sides, which allowed the party out of power to remain as the Loyal Opposition to the governing party. A genuine two-party system began to emerge with the accession to power of William Pitt the Younger in 1783, leading the new Tories against a reconstituted "Whig" party led by the radical politician Charles James Fox. The two-party system matured in the early 19th-century era of political reform, when the franchise was widened and politics entered into the basic divide between conservatism and liberalism that has fundamentally endured up to the present. The modern Conservative Party was created out of the "Pittite" Tories by Robert Peel, who issued the Tamworth Manifesto in 1834, which set out the basic principles of Conservatism – the necessity in specific cases of reform in order to survive, but an opposition to unnecessary change that could lead to "a perpetual vortex of agitation". Meanwhile, the Whigs, along with free-trade Tory followers of Robert Peel and independent Radicals, formed the Liberal Party under Lord Palmerston in 1859, and transformed into a party of the growing urban middle class under the long leadership of William Ewart Gladstone. The two-party system had come of age at the time of Gladstone and his Conservative rival Benjamin Disraeli after the 1867 Reform Act.
Although the Founding Fathers of the United States did not originally intend for American politics to be partisan, early political controversies in the 1790s saw the emergence of a two-party political system, the Federalist Party and the Democratic-Republican Party, centred on the differing views of Secretary of the Treasury Alexander Hamilton and James Madison on the powers of the federal government. However, a consensus reached on these issues ended party politics in 1816 for a decade, a period commonly known as the Era of Good Feelings. Partisan politics revived in 1829 with the split of the Democratic-Republican Party into the Jacksonian Democrats, led by Andrew Jackson, and the Whig Party, led by Henry Clay. The former evolved into the modern Democratic Party, and the latter was replaced by the Republican Party as one of the two main parties in the 1850s.
The Day After The Day After is an American television film that first aired on November 20, 1983, on the ABC television network. More than 100 million people, in nearly 39 million households, watched the program during its initial broadcast. With a 46 rating and a 62% share of the viewing audience during its initial broadcast, it was the seventh-highest-rated non-sports show up to that time and set a record as the highest-rated television film in history—a record it still held as recently as a 2009 report. The film postulates a fictional war between NATO forces and the Warsaw Pact countries that rapidly escalates into a full-scale nuclear exchange between the United States and the Soviet Union. The action itself focuses on the residents of Lawrence, Kansas, and Kansas City, Missouri, and of several family farms near nuclear missile silos. The cast includes JoBeth Williams, Steve Guttenberg, John Cullum, Jason Robards, and John Lithgow. The film was written by Edward Hume, produced by Robert Papazian, and directed by Nicholas Meyer. It was released on DVD on May 18, 2004, by MGM. Uniquely for a Western movie made during the Cold War, it was broadcast on the Soviet Union's state TV in 1987. The story follows several citizens—and people they encounter—in and around Kansas City, Missouri, and the college town of Lawrence, Kansas, to its west. The film's narrative is structured as a before-during-after scenario of a nuclear attack: the first segment introduces the various characters and their stories; the second shows the nuclear disaster itself; and the third details the effects of the fallout on the characters. During the first segment, as the characters are introduced, the chronology of events leading up to the war is depicted entirely via television and radio news broadcasts, as well as communications among U.S. military personnel and hearsay, enhanced by characters' reactions and analysis of the events.
The Soviet Union is shown to have begun a military buildup in East Germany (which the Soviets insist is merely a Warsaw Pact exercise) with the goal of intimidating the United States, the United Kingdom, and France into withdrawing from West Berlin. When the United States does not back down, Soviet armored divisions are sent to the border between East and West Germany. During the late hours of Friday, September 15, news broadcasts report a "widespread rebellion among several divisions of the East German Army." As a result, the Soviets blockade West Berlin. Tensions mount, and the United States issues an ultimatum that the Soviets stand down from the blockade by 6:00 a.m. the next day; noncompliance will be interpreted as an act of war. The Soviets refuse, and the President of the United States orders all U.S. military forces around the world on DEFCON 2 alert. On Saturday, September 16, NATO forces in West Germany invade East Germany through the Helmstedt-Marienborn checkpoint to free Berlin. The Soviets hold the Marienborn corridor and inflict heavy casualties on NATO troops. Two Soviet MiG-25s cross into West German airspace and bomb a NATO munitions storage facility, also striking a school and a hospital. A subsequent radio broadcast states that Moscow is being evacuated. At this point, major U.S. cities begin mass evacuations as well. Unconfirmed reports soon follow that nuclear weapons were used in Wiesbaden and Frankfurt. Meanwhile, in the Persian Gulf, naval warfare erupts, as radio reports tell of ship sinkings on both sides. The Soviet Army eventually reaches the Rhine. Seeking to prevent Soviet forces from invading France and causing the rest of Western Europe to fall, NATO halts the Soviet advance by airbursting three low-yield tactical nuclear weapons over advancing Soviet troops. Soviet forces counter by launching a nuclear strike on NATO headquarters in Brussels. In response, the United States Strategic Air Command begins scrambling B-52 bombers.
The Soviet Air Force then destroys a BMEWS station at RAF Fylingdales in England, and another at Beale Air Force Base in California. Meanwhile, on board the EC-135 Looking Glass aircraft, the order comes in from the President for a full nuclear strike against the Soviet Union. A Minuteman missile crew launches ten missiles from their launch station at Whiteman Air Force Base. Dozens of other launch facilities do the same. Within minutes, over 1,000 U.S. missiles are launched. Almost simultaneously, an Air Force officer receives a report that a massive Soviet nuclear assault against the United States has been launched, further updated with a report that over 300 Soviet intercontinental ballistic missiles (ICBMs) are inbound. It is deliberately left unclear in the film whether the Soviet Union or the United States launches the main nuclear attack first. The first salvo of the Soviet nuclear attack on the Midwestern United States (as shown from the point of view of the residents of central Kansas and western Missouri) occurs when a large-yield nuclear weapon air-bursts at high altitude over Kansas City, Missouri. This generates an electromagnetic pulse (EMP) that shuts down the electric power grid to nearby Whiteman Air Force Base's operable Minuteman II missile silos and the surrounding areas. Thirty seconds later, incoming Soviet ICBMs begin to hit military and population targets. Higginsville, Kansas City, Sedalia, and towns as far south as El Dorado Springs, Missouri, are blanketed with ground-burst nuclear weapons. While the story provides no specifics, it strongly suggests that U.S. cities, military, and industrial bases are heavily damaged or destroyed. The aftermath depicts the Midwestern and Northwestern United States as a blackened wasteland of burned-out cities filled with burn, blast, and radiation victims. Eventually, the U.S.
President delivers a radio address in which he declares there is now a ceasefire between the United States and the Soviet Union (which, although not shown, has suffered the same devastating effects) and states there has not been any surrender by the United States. Dr. Russell Oakes lives in the upper-class Brookside neighborhood with his wife and works in a hospital in downtown Kansas City. He is scheduled to teach a hematology class at the University of Kansas (KU) hospital in nearby Lawrence, Kansas, and is en route when he hears an alarming Emergency Broadcast System alert on his car radio. The sine-wave attention signal sounds, and then a woman announces an advisory message. He exits the crowded freeway and attempts to contact his wife, but gives up due to the long line at a phone booth. Oakes attempts to return to his home via the K-10 freeway and is the only eastbound motorist. The nuclear attack begins, and Kansas City is gripped with panic as air raid sirens wail. Oakes' car is permanently disabled by the EMP from the first high-altitude detonation, as are all motor vehicles and electricity. Oakes is still some distance from downtown when the missiles hit. His family, many colleagues, and almost all of Kansas City's population are killed. He walks to Lawrence, which has been severely damaged by the blasts, and, at the university hospital, treats the wounded with Dr. Sam Hachiya and Nurse Nancy Bauer. Also at the university, science professor Joe Huxley and students use a Geiger counter to monitor the nuclear fallout outside. They build a makeshift radio to maintain contact with Dr. Oakes at the hospital, as well as to locate any other broadcasting survivors beyond their area. Airman Billy McCoy is stationed at a Minuteman missile silo near Whiteman Air Force Base, east-southeast of Kansas City, and is called to duty during the DEFCON 2 alert. His crew are among the first to witness the initial missile launches, indicating full-scale nuclear war.
After it becomes clear that a Soviet counterstrike is imminent, the airmen panic. Several stubbornly insist that they should stay at their post and take shelter in the silo, while others, including McCoy, point out that it is futile because the silo will not withstand a direct hit. McCoy tells them they have done their jobs and speeds away in an Air Force truck to retrieve his wife and child in Sedalia, east of Whiteman AFB, but the truck is permanently disabled by an EMP from an airburst detonation. McCoy abandons the truck and takes shelter inside an overturned semi-truck trailer, barely escaping the oncoming nuclear blast. After the attack, McCoy walks towards a town and finds an abandoned store, where he takes candy bars and other provisions, while gunfire is heard in the distance. While standing in line for a drink of water from a well pump, McCoy befriends a man who is mute and shares his provisions with him. McCoy asks another man along the road about Sedalia, and the man indicates that Sedalia and Windsor no longer exist. As McCoy and his companion both begin to suffer the effects of radiation sickness, they leave a refugee camp and head to the hospital at Lawrence, where McCoy ultimately succumbs to the radiation sickness. Farmer Jim Dahlberg and his family live in rural Harrisonville, Missouri, very close to a field of missile silos south-southeast of Kansas City. While the family is preparing for the wedding of their elder daughter, Denise, to KU senior Bruce Gallatin, Jim prepares for the impending attack by converting their basement into a makeshift fallout shelter. As the missiles are launched, he forcefully carries his wife Eve, who refuses to accept the reality of the escalating crisis and continues making wedding preparations, downstairs into the basement.
While running to the shelter, the Dahlbergs' son, Danny, inadvertently looks behind him just as a missile detonates in the distance; he is instantly blinded, and Dahlberg carries him back to the shelter. KU student Stephen Klein, while hitchhiking home to Joplin, Missouri, stumbles upon the farm and persuades the Dahlbergs to take him in. After several days in the basement, Denise, distraught over the situation and the unknown whereabouts of Bruce, escapes from the basement and runs about the field, which is cluttered with dead animals. She sees a clear blue sky and thinks the worst is over. However, the field is actually covered in radioactive fallout. Klein goes after her, attempting to warn her that the invisible nuclear radiation is passing through her cells like X-rays, but Denise, ignoring this warning, tries to run from him. Eventually, Klein is able to chase Denise back to safety in the basement, but not before she runs to the stairs to find her wedding dress. During a makeshift church service, while the minister tries to express how lucky they are to have survived, Denise begins to bleed externally from her groin due to radiation sickness from her run through the field. Klein takes Danny and Denise to Lawrence for treatment. Dr. Hachiya attempts to treat Danny, and Klein also develops radiation sickness. Dahlberg, upon returning from an emergency farmers' meeting, confronts a group of silent survivors squatting on his farm and attempts to persuade them to move somewhere else, only to be shot and killed mid-sentence by one of the squatters. Ultimately, the situation at the hospital becomes grim. Dr. Oakes collapses from exhaustion and, upon awakening several days later, learns that Nurse Bauer has died from meningitis. Oakes, suffering from terminal radiation sickness, decides to return to Kansas City to see his home for the last time, while Dr. Hachiya stays behind.
Oakes hitches a ride on an Army National Guard truck, where he witnesses U.S. military personnel blindfolding and executing looters. After managing to locate where his home once stood, he finds the charred remains of his wife's wristwatch and a family huddled in the ruins. Oakes angrily orders them to leave his home. The family silently offers Oakes food, causing him to collapse in despair, as a member of the family comforts him. As the scene fades to black, Professor Huxley calls into his makeshift radio: "Hello? Is anybody there? Anybody at all?" There is no response. "The Day After" was the idea of ABC Motion Picture Division president Brandon Stoddard, who, after watching "The China Syndrome", was so impressed that he envisioned creating a film exploring the effects of nuclear war on the United States. Stoddard asked his executive vice president of television movies and miniseries, Stu Samuels, to develop a script. Samuels created the title "The Day After" to emphasize that the story was not about a nuclear war itself, but the aftermath. Samuels suggested several writers, and eventually Stoddard commissioned veteran television writer Edward Hume to write the script in 1981. ABC, which financed the production, was concerned about the graphic nature of the film and how to appropriately portray the subject on a family-oriented television channel. Hume undertook a massive amount of research on nuclear war and went through several drafts until ABC finally deemed the plot and characters acceptable. Originally, the film was based more around and in Kansas City, Missouri. Kansas City was not bombed in the original script, although Whiteman Air Force Base was, so Kansas City suffered shock waves and a horde of survivors staggering into town. There was no Lawrence, Kansas, in the story, although there was a small Kansas town called "Hampton".
While Hume was writing the script, he and producer Robert Papazian, who had great experience in on-location shooting, took several trips to Kansas City to scout locations and met with officials from the Kansas film commission and the Kansas tourist offices to search for a suitable location for "Hampton." It came down to a choice between Warrensburg, Missouri, and Lawrence, Kansas, both college towns—Warrensburg was home to Central Missouri State University and was near Whiteman Air Force Base, while Lawrence was home to the University of Kansas and was near Kansas City. Hume and Papazian ended up selecting Lawrence, due to its access to a number of good locations: a university, a hospital, football and basketball venues, farms, and a flat countryside. Lawrence was also agreed upon as being the "geographic center" of the United States. People in Lawrence urged ABC to change the name "Hampton" to "Lawrence" in the script. Back in Los Angeles, the idea of making a TV movie showing the true effects of nuclear war on average American citizens was still stirring up controversy. ABC, Hume, and Papazian realized that for the scene depicting the nuclear blast, they would have to use state-of-the-art special effects, and they took the first step by hiring some of the best special-effects people in the business to draw up storyboards for the complicated blast scene. Then ABC hired Robert Butler to direct the project. For several months, this group worked on drawing up storyboards and revising the script again and again; then, in early 1982, Butler was forced to leave "The Day After" because of other contractual commitments. ABC then offered the project to two other directors, who both turned it down. Finally, in May, ABC hired feature film director Nicholas Meyer, who had just completed the blockbuster "Star Trek II: The Wrath of Khan". Meyer was apprehensive at first and doubted ABC would get away with making a television film on nuclear war without the censors diminishing its effect.
However, after reading the script, Meyer agreed to direct "The Day After." Meyer wanted to make sure he would film the script he was offered. He did not want the censors to censor the film, nor did he want it to become a typical Hollywood disaster movie; Meyer figured the more "The Day After" resembled such a film, the less effective it would be, and preferred to present the facts of nuclear war to viewers. He made it clear to ABC that no big TV or film stars should be in "The Day After." ABC agreed, although they wanted one star to help attract European audiences to the film when it would be shown theatrically there. Later, while flying to visit his parents in New York City, Meyer happened to be on the same plane as Jason Robards and asked him to join the cast. Meyer plunged into several months of nuclear research, which made him quite pessimistic about the future, to the point of becoming ill each evening when he came home from work. Meyer and Papazian also made trips to the ABC censors and to the United States Department of Defense during their research phase, and experienced conflicts with both. Meyer had many heated arguments over elements in the script that the network censors wanted cut out of the film. The Department of Defense said they would cooperate with ABC if the script made clear that the Soviet Union launched their missiles first—something Meyer and Papazian took pains not to do. In any case, Meyer, Papazian, Hume, and several casting directors spent most of July 1982 taking numerous trips to Kansas City. In between casting sessions in Los Angeles, where they relied mostly on unknowns, they would fly to the Kansas City area to interview local actors and view scenery. They were hoping to find some real Midwesterners for smaller roles.
Hollywood casting directors strolled through shopping malls in Kansas City, looking for local people to fill small and supporting roles, while the daily newspaper in Lawrence ran an advertisement calling for local residents of all ages to sign up as extras in the film, and a professor of theater and film at the University of Kansas was hired to head the local casting of the movie. Of the eighty or so speaking parts, only fifteen were cast in Los Angeles. The remaining roles were filled in Kansas City and Lawrence. While in Kansas City, Meyer and Papazian toured the Federal Emergency Management Agency offices there. When asked what their plans for surviving nuclear war were, a FEMA official replied that they were experimenting with putting evacuation instructions in telephone books in New England: "In about six years, everyone should have them." This meeting led Meyer to later refer to FEMA as "a complete joke." It was during this time that the decision was made to change "Hampton" in the script to "Lawrence." Meyer and Hume figured that since Lawrence was a real town, it would be more believable; besides, Lawrence was a perfect choice to represent Middle America. The town boasted a "socio-cultural mix," sat near the exact geographic center of the continental U.S., and Hume and Meyer's research told them that Lawrence was a prime missile target, because 150 Minuteman missile silos stood nearby. Lawrence had some great locations, and the people there were more supportive of the project. Less emphasis was put on Kansas City, the decision was made to have the city completely annihilated in the script, and Lawrence was made the primary location in the film. ABC originally planned to air "The Day After" as a four-hour "television event", spread over two nights, with a total running time of 180 minutes without commercials.
Director Nicholas Meyer felt the original script was padded, and suggested cutting an hour of material to present the whole film in one night. The network stuck with its two-night broadcast plan, and Meyer filmed the entire three-hour script, as evidenced by a 172-minute work print that has surfaced. Subsequently, the network found that it was difficult to find advertisers, given the subject matter. ABC relented and told Meyer he could edit the film for a one-night broadcast version. Meyer's original single-night cut ran two hours and twenty minutes, which he presented to the network. After this screening, many executives were deeply moved, and some even cried, leading Meyer to believe they approved of his cut. Nevertheless, a further six-month struggle ensued over the final shape of the film. Network censors had opinions about the inclusion of specific scenes, and ABC itself, eventually intent on "trimming the film to the bone", demanded cuts to many scenes Meyer strongly lobbied to keep. Finally, Meyer and his editor Bill Dornisch balked. Dornisch was fired, and Meyer walked away from the project. ABC brought in other editors, but the network ultimately was not happy with the results they produced. They finally brought Meyer back and reached a compromise, with Meyer paring down "The Day After" to a final running time of 120 minutes. "The Day After" was initially scheduled to premiere on ABC in May 1983, but the post-production work to reduce the film's length pushed back its initial airdate to November. Censors forced ABC to cut an entire scene of a child having a nightmare about nuclear holocaust and then sitting up, screaming; a psychiatrist had told ABC that this would disturb children. "This strikes me as ludicrous," Meyer wrote in "TV Guide" at the time, "not only in relation to the rest of the film, but also when contrasted with the huge doses of violence to be found on any average evening of TV viewing."
In any case, a few more cuts were made, including to a scene in which Denise possesses a diaphragm. Another scene, where a hospital patient abruptly sits up screaming, was excised from the original television broadcast but restored for home video releases. Meyer persuaded ABC to dedicate the film to the citizens of Lawrence, and also to place a disclaimer at the end of the film, following the credits, informing viewers that "The Day After" understated the true effects of nuclear war in order to be able to tell a story at all. The disclaimer also included a list of books providing more information on the subject. "The Day After" received a large promotional campaign prior to its broadcast. Commercials aired several months in advance, and ABC distributed half a million "viewer's guides" that discussed the dangers of nuclear war and prepared the viewer for the graphic scenes of mushroom clouds and radiation burn victims. Discussion groups were also formed nationwide. Composer David Raksin wrote original music and adapted music from "The River" (a documentary film score by concert composer Virgil Thomson), featuring an adaptation of the hymn "How Firm a Foundation". Although he recorded just under 30 minutes of music, much of it was edited out of the final cut. Music from the "First Strike" footage, conversely, was not edited out. Because the film was shortened from its original three-hour running time to two hours, several planned special-effects scenes were scrapped, although storyboards were made in anticipation of a possible "expanded" version. They included a bird's-eye view of Kansas City at the moment of two nuclear detonations as seen from a Boeing 737 airliner on approach to the city's airport, as well as simulated newsreel footage of U.S.
troops in West Germany taking up positions in preparation for advancing Soviet armored units, and the tactical nuclear exchange in Germany between NATO and the Warsaw Pact, which follows after the attacking Warsaw Pact force breaks through and overwhelms the NATO lines. ABC censors severely toned down scenes to reduce the depiction of the dead and of severe burn victims. Meyer refused to remove key scenes, but reportedly some eight and a half minutes of excised footage, significantly more graphic, still exist. Some footage was reinstated for the film's release on home video. Additionally, the nuclear attack scene was longer and was supposed to feature very graphic and accurate shots of what happens to a human body during a nuclear blast. Examples included people being set on fire, their flesh carbonizing, being burned to the bone, eyes melting, faceless heads, skin hanging, deaths from flying glass and debris, limbs torn off, people being crushed or blown from buildings by the shockwave, and people in fallout shelters suffocating during the firestorm. Also cut were images of radiation sickness, as well as graphic post-attack violence from survivors, such as food riots, looting, and general lawlessness as authorities attempted to restore order. One cut scene shows surviving students battling over food; the two sides were to be athletes versus the science students under the guidance of Professor Huxley. Another brief scene later cut involved a firing squad, in which two U.S. soldiers are blindfolded and executed; an officer reads the charges, verdict, and sentence as a bandaged chaplain reads the Last Rites. A similar sequence occurs in a 1965 UK-produced faux documentary, "The War Game". In the original broadcast of "The Day After", when the U.S. president addresses the nation, the voice was an imitation of Ronald Reagan; in subsequent broadcasts, that voice was overdubbed by a stock actor. Home video releases in the U.S.
and internationally come in at various running times, many listed at 126 or 127 minutes; full screen (4:3 aspect ratio) seems to be more common than widescreen. RCA videodiscs of the early 1980s were limited to two hours per disc, so the full-screen release appears to be closest to what originally aired on ABC in the US. A 2001 U.S. VHS version (Anchor Bay Entertainment, Troy, Michigan) lists a running time of 122 minutes. A 1995 double laser disc "director's cut" version (Image Entertainment) runs 127 minutes, includes commentary by director Nicholas Meyer and is "presented in its 1.75:1 European theatrical aspect ratio" (according to the LD jacket). Two different German DVD releases run 122 and 115 minutes; edits reportedly downplay the Soviet Union's role. On its original broadcast (Sunday, November 20, 1983), John Cullum warned viewers before the film premiered that it contained graphic and disturbing scenes, and encouraged parents with young children watching to view the film together and discuss the issues of nuclear warfare. ABC and local TV affiliates opened 1-800 hotlines with counselors standing by. There were no commercial breaks after the nuclear attack. ABC then aired a live debate on Viewpoint, hosted by "Nightline"s Ted Koppel, featuring scientist Carl Sagan, former Secretary of State Henry Kissinger, Elie Wiesel, former Secretary of Defense Robert McNamara, General Brent Scowcroft and conservative commentator William F. Buckley Jr. Sagan argued against nuclear proliferation, while Buckley promoted the concept of nuclear deterrence. Sagan described the arms race in the following terms: "Imagine a room awash in gasoline, and there are two implacable enemies in that room. One of them has nine thousand matches, the other seven thousand matches. Each of them is concerned about who's ahead, who's stronger."
The film and its subject matter were prominently featured in the news media both before and after the broadcast, including on the covers of "TIME", "Newsweek", "U.S. News & World Report" and "TV Guide". Critics tended to claim that the film either sensationalized nuclear war or was too tame. The special effects and realistic portrayal of nuclear war received praise. The film received 12 Emmy nominations and won two Emmy awards. It was rated "way above average" in "Leonard Maltin's Movie Guide" until all reviews for movies exclusive to TV were removed from the publication. In the United States, 38.5 million households, or an estimated 100 million people, watched "The Day After" on its first broadcast, a record audience for a made-for-TV movie. Producers Sales Organization released the film theatrically around the world, in the Eastern Bloc, China, North Korea and Cuba (this international version contained six minutes of footage not in the telecast edition). Since commercials are not sold in these markets, Producers Sales Organization lost an undisclosed sum of revenue. Years later this international version was released to tape by Embassy Home Entertainment. Commentator Ben Stein, critical of the movie's message (i.e. that the strategy of Mutual Assured Destruction would lead to a war), wrote in the Los Angeles "Herald-Examiner" about what life might be like in an America under Soviet occupation. Stein's idea was eventually dramatized in the miniseries "Amerika", also broadcast by ABC. The "New York Post" accused Meyer of being a traitor, writing, "Why is Nicholas Meyer doing Yuri Andropov's work for him?" Much press comment focused on the film's unanswered question of who started the war. Richard Grenier in the "National Review" accused "The Day After" of promoting "unpatriotic" and pro-Soviet attitudes.
Television critic Matt Zoller Seitz, in his 2016 book co-written with Alan Sepinwall, named "The Day After" the 4th-greatest American TV-movie of all time, writing: "Very possibly the bleakest TV-movie ever broadcast, "The Day After" is an explicitly antiwar statement dedicated entirely to showing audiences what would happen if nuclear weapons were used on civilian populations in the United States." President Ronald Reagan watched the film more than a month before its screening, on Columbus Day, October 10, 1983. He wrote in his diary that the film was "very effective and left me greatly depressed," and that it changed his mind on the prevailing policy on a "nuclear war". The film was also screened for the Joint Chiefs of Staff. A government advisor who attended the screening, a friend of Meyer's, told him, "If you wanted to draw blood, you did it. Those guys sat there like they were turned to stone." Four years later, the Intermediate-Range Nuclear Forces Treaty was signed, and in his memoirs Reagan drew a direct line from the film to the signing. Reagan supposedly later sent Meyer a telegram after the summit, saying, "Don't think your movie didn't have any part of this, because it did." However, in a 2010 interview, Meyer said that this telegram was a myth, and that the sentiment stemmed from a friend's letter to Meyer; he suggested the story had origins in editing notes received from the White House during the production, which "...may have been a joke, but it wouldn't surprise me, him being an old Hollywood guy." The film also had an impact outside the U.S. In 1987, during the era of Mikhail Gorbachev's "glasnost" and "perestroika" reforms, the film was shown on Soviet television. Four years earlier, Georgia Rep. Elliott Levitas and 91 co-sponsors introduced a resolution in the U.S. House of Representatives "[expressing] the sense of the Congress that the American Broadcasting Company, the Department of State, and the U.S.
Information Agency should work to have the television movie "The Day After" aired to the Soviet public." "The Day After" won two Emmy Awards and received 10 other Emmy nominations.
https://en.wikipedia.org/wiki?curid=31607
Tic-tac-toe Tic-tac-toe (American English), noughts and crosses (British English), or Xs and Os is a paper-and-pencil game for two players, "X" and "O", who take turns marking the spaces in a 3×3 grid. The player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal row wins the game. The following example game is won by the first player, X: Players soon discover that best play from both parties leads to a draw. Hence, tic-tac-toe is most often played by young children, who often have not yet discovered the optimal strategy. Because of the simplicity of tic-tac-toe, it is often used as a pedagogical tool for teaching the concepts of good sportsmanship and the branch of artificial intelligence that deals with the searching of game trees. It is straightforward to write a computer program to play tic-tac-toe perfectly or to enumerate the 765 essentially different positions (the state space complexity) or the 26,830 possible games up to rotations and reflections (the game tree complexity) on this space. The game can be generalized to an m,n,k-game, in which two players alternate placing stones of their own color on an "m"×"n" board, with the goal of getting "k" of their own color in a row. Tic-tac-toe is the (3,3,3)-game. Harary's generalized tic-tac-toe is an even broader generalization of tic-tac-toe. It can also be generalized as an "n"^"d" game; tic-tac-toe is the game where "n" equals 3 and "d" equals 2. If played optimally by both players, the game always ends in a draw, making tic-tac-toe a futile game. Games played on three-in-a-row boards can be traced back to ancient Egypt, where such game boards have been found on roofing tiles dating from around 1300 BCE. An early variation of tic-tac-toe was played in the Roman Empire, around the first century BC.
It was called "terni lapilli" ("three pebbles at a time") and instead of having any number of pieces, each player only had three, thus they had to move them around to empty spaces to keep playing. The game's grid markings have been found chalked all over Rome. Another closely related ancient game is three men's morris which is also played on a simple grid and requires three pieces in a row to finish, and Picaria, a game of the Puebloans. The different names of the game are more recent. The first print reference to "noughts and crosses" (nought being an alternative word for zero), the British name, appeared in 1858, in an issue of "Notes and Queries". The first print reference to a game called "tick-tack-toe" occurred in 1884, but referred to "a children's game played on a slate, consisting in trying with the eyes shut to bring the pencil down on one of the numbers of a set, the number hit being scored". "Tic-tac-toe" may also derive from "tick-tack", the name of an old version of backgammon first described in 1558. The US renaming of "noughts and crosses" as "tic-tac-toe" occurred in the 20th century. In 1952, "OXO" (or "Noughts and Crosses"), developed by British computer scientist Sandy Douglas for the EDSAC computer at the University of Cambridge, became one of the first known video games. The computer player could play perfect games of tic-tac-toe against a human opponent. In 1975, tic-tac-toe was also used by MIT students to demonstrate the computational power of Tinkertoy elements. The Tinkertoy computer, made out of (almost) only Tinkertoys, is able to play tic-tac-toe perfectly. It is currently on display at the Museum of Science, Boston. When considering only the state of the board, and after taking into account board symmetries (i.e. rotations and reflections), there are only 138 terminal board positions. 
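The figure of 138 terminal positions quoted above can be reproduced with a short brute-force enumeration. The following Python sketch (function names are illustrative, not taken from any published program) plays out every legal game, stops as soon as one side completes a line, and counts the distinct end-of-game boards up to the eight rotations and reflections of the square:

```python
# Enumerate all reachable terminal tic-tac-toe boards, deduplicated
# under the 8 symmetries of the square (4 rotations x optional flip).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def symmetries():
    m = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    perms = []
    for _ in range(4):
        m = [list(r) for r in zip(*m[::-1])]               # rotate 90 degrees
        perms.append([v for row in m for v in row])        # rotation only
        perms.append([v for row in m for v in row[::-1]])  # rotation + flip
    return perms

SYMS = symmetries()

def canon(b):
    # Canonical representative: lexicographic minimum over all 8 transforms.
    return min(''.join(b[i] for i in p) for p in SYMS)

terminal = set()

def explore(b, player):
    if winner(b) or ' ' not in b:   # game over: a win, or a full-board draw
        terminal.add(canon(b))
        return
    nxt = 'O' if player == 'X' else 'X'
    for i in range(9):
        if b[i] == ' ':
            explore(b[:i] + player + b[i + 1:], nxt)

explore(' ' * 9, 'X')
print(len(terminal))  # 138, matching the count stated above
```

Note that play must stop the moment a line is completed; without that cutoff the search would count impossible boards in which both players have three in a row.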
A combinatorics study of the game shows that when "X" makes the first move every time, the game outcomes are as follows: A player can play a perfect game of tic-tac-toe (to win or at least draw) if, each time it is their turn to play, they choose the first available move from the following list, as used in Newell and Simon's 1972 tic-tac-toe program. The first player, who shall be designated "X", has three strategically distinct positions to mark during the first turn. Superficially, it might seem that there are nine possible positions, corresponding to the nine squares in the grid. However, by rotating the board, we find that, in the first turn, every corner mark is strategically equivalent to every other corner mark; the same is true of every edge (side middle) mark. From a strategic point of view, there are therefore only three possible first marks: corner, edge, or center. Player X can win or force a draw from any of these starting marks; however, playing the corner gives the opponent the smallest choice of squares which must be played to avoid losing. This might suggest that the corner is the best opening move for X; however, another study shows that if the players are not perfect, an opening move in the center is best for X. The second player, who shall be designated "O", must respond to X's opening mark in such a way as to avoid the forced win. Player O must always respond to a corner opening with a center mark, and to a center opening with a corner mark. An edge opening must be answered either with a center mark, a corner mark next to the X, or an edge mark opposite the X. Any other response will allow X to force the win. Once the opening is completed, O's task is to follow the above list of priorities in order to force the draw, or else to gain a win if X makes a weak play.
In more detail, to guarantee a draw, O should adopt the following strategies: When X plays a corner first and O is not a perfect player, the following may happen. Consider a board with the nine positions numbered as follows: When X plays 1 as their opening move, O should take 5. X may then take 9 (in this situation, O should not take 3 or 7; O should take 2, 4, 6 or 8) or 6 (in this situation, O should not take 4 or 7; O should take 2, 3, 8 or 9; in fact, taking 9 is the best move, since a non-perfect player X may take 4, after which O can take 7 to win). In both of these situations (X takes 9 or 6 as the second move), X has a chance to win. If X is not a perfect player and takes 2 or 3 as the second move, the game will be a draw; X cannot win. If X plays 1 as the opening move and O is not a perfect player, the following may happen: although O takes the only good position (5) as the first move, O takes a bad position as the second move; although O takes good positions as the first two moves, O takes a bad position as the third move; or O takes a bad position as the first move (apart from 5, all other positions are bad). Many board games share the element of trying to be the first to get "n"-in-a-row, including three men's morris, nine men's morris, pente, gomoku, Qubic, Connect Four, Quarto, Gobblet, Order and Chaos, Toss Across, and Mojo. Tic-tac-toe is an instance of an m,n,k-game, where two players alternate taking turns on an "m"×"n" board until one of them gets "k" in a row. Harary's generalized tic-tac-toe is an even broader generalization. Other variations of tic-tac-toe include: One can play on a board of 4×4 squares, winning in several ways. Winning can include: 4 in a straight line, 4 in a diagonal line, 4 in a diamond, or 4 to make a square. Another variant, Qubic, is played on a 4×4×4 board; it was solved by Oren Patashnik in 1980 (the first player can force a win). Higher-dimensional variations are also possible. The game has a number of English names.
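The strategic analysis above rests on the claim that neither side can force a win with best play, and the game tree is small enough to check this directly. The Python sketch below (the `value` function and its conventions are mine, not Newell and Simon's program) runs a plain minimax over every position:

```python
from functools import lru_cache

# Plain minimax over the full game tree: +1 if the side to move can steer
# the game to an X win, -1 to an O win, 0 if best play yields a draw.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

@lru_cache(maxsize=None)
def value(b, player):
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in b:
        return 0  # full board, no completed line: draw
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(b[:i] + player + b[i + 1:], nxt)
            for i in range(9) if b[i] == ' ']
    # X maximizes, O minimizes
    return max(vals) if player == 'X' else min(vals)

print(value(' ' * 9, 'X'))  # 0: with best play, the game is a draw
```

Evaluating the empty board gives 0 regardless of whether X opens in a corner, an edge, or the center, which matches the statement that X's opening choice only affects how easily an imperfect O can go wrong, not the game's value.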
Sometimes, the games tic-tac-toe (where players keep adding "pieces") and three men's morris (where pieces start to move after a certain number have been placed) are confused with each other. Various game shows have been based on tic-tac-toe and its variants.
https://en.wikipedia.org/wiki?curid=31609
Tallinn Airport Tallinn Airport (, ) or Lennart Meri Tallinn Airport () is the largest airport in Estonia and serves as a hub for the national airline Nordica, as well as a secondary hub for AirBaltic and LOT Polish Airlines. It was also the home base of the now-defunct national airline Estonian Air. Tallinn Airport is open to both domestic and international flights. It is located southeast of the centre of Tallinn on the eastern shore of Lake Ülemiste, close enough to the city centre to be considered a city airport. It was formerly known as "Ülemiste Airport". The airport has a single asphalt/concrete runway, 08/26, large enough to handle wide-bodied aircraft such as the Boeing 747, five taxiways and fourteen terminal gates. Since 29 March 2009 the airport has officially been known as "Lennart Meri Tallinn Airport", in honour of the leader of the Estonian independence movement and second President of Estonia, Lennart Meri. The airport has also been used for military purposes: it served as an interceptor aircraft base, being home to the 384th Interceptor Aircraft Regiment (384 IAP), which operated MiG-23P aircraft. Prior to the establishment of the present airport in the Ülemiste area, Lasnamäe Airfield was the primary airport of Tallinn, serving as a base for the Aeronaut airline. After Aeronaut went bankrupt in 1928, air service was continued by Deruluft, which used Nehatu instead, some distance from the centre of Tallinn. The first seaplane harbour on the shores of Lake Ülemiste was built between 1928 and 1929 in order to serve Finnish seaplanes. The use of this harbour ended in World War II. On 26 March 1929 the Riigikogu passed an expropriation act in order to establish a public airport. 10 ha of land was expropriated from the Dvigatel joint-stock company and another 22 ha from the descendants of Vagner. 10 million sents were paid to landowners as indemnity. Land levelling and renovation works took another 5 million sents.
The building of Tallinn Airport started on 16 November 1931, and the first test landing was performed by Captain Reissar, piloting Estonian Air Force Avro 594 Avian, tail number 120. The airport was opened officially on 20 September 1936, although it had been operational a good while before the official opening: LOT Polish Airlines, which commenced its first passenger flight from Tallinn on 18 August 1932 with a Fokker F.VIIb/3m from Lasnamäe Airfield, later relocated its flights to Tallinn Airport, and in 1935 the airport averaged six arrivals and departures per day. In April 1935 a ramp for seaplanes was built on the shore of Lake Ülemiste, together with a small arch bridge and a customs office, which allowed seaplanes to be relocated from a sea port. The same year the airport administration building was erected, which also initially served as a waiting area for travellers. The total cost of the whole airport project, including the cost of building flight hangars, was 25 million sents. As the very first runways had a soft surface, they were unusable for takeoffs and landings during the spring and autumn seasons. Therefore, only seaplanes stationed at Lake Ülemiste were able to carry out flights, and during winter months it was possible to use the frozen surface of the lake as a runway for small airplanes. The concrete-paved runways of the first stage, inaugurated together with the opening of the airport, were about 40 metres wide and 300 metres long. As they were arranged in the form of a triangle, they allowed takeoffs and landings in six directions. These were the first concrete-paved runways in Estonia; constructing them required some 5,396 cubic metres of stone, 4,100 cubic metres of construction aggregate and 137 tons of cement. In addition, 3 km of pipework was laid for drainage purposes.
Before World War II, Tallinn Airport had regular international connections operated by at least Aerotransport (now part of the SAS Group), Deutsche Luft Hansa, LOT and the Finnish company Aero (now Finnair). On 5 April 1937 the Helsinki-Tallinn-Warsaw-Jerusalem route was inaugurated by Mr. Bobkowski, the assistant of the Polish Minister of Transport; the journey time was 34 hours. Passenger and cargo numbers grew quickly, from 4,100 passengers and 6,730 kg of cargo in 1933 to 11,892 passengers and 14,726 kg of cargo in 1937. Preparation and design works for a new passenger terminal started in 1938. 14 projects were submitted for the architectural contest for the new terminal building, with the one from the architect Artur Jürvetson winning the contest in February of the same year. The construction costs were estimated at 300 thousand Estonian kroons. The first airplane of AGO, then the flag carrier of Estonia, arrived at Tallinn Airport on 5 October 1939, flying the route Dessau - Königsberg - Tallinn. After Estonia was occupied by the Soviet Union, on 22 July 1940 Soviet occupation authorities ordered the airport transferred to the Soviet Air Forces. All aircraft at the airport at that time, including an interned Polish Lockheed 14, two Junkers Ju 52s of AGO and a PTO-4 trainer aircraft of the Estonian Airclub, were relocated to Lasnamäe Airfield. During the German occupation, regular international connections were announced on 16 October and restored on 15 November 1941, when Deutsche Lufthansa and Aero O/Y started the route Helsinki-Tallinn-Riga-Königsberg-Berlin. From 1942 to 1944 "Sonderstaffel Buschmann" was based at Tallinn Airport. Between 1945 and 1989, Aeroflot was the only airline that served Tallinn Airport. The construction of the new passenger terminal, which had been put on hold due to the war, resumed.
The building, redesigned in accordance with Stalinist architecture, was finished in 1954 and commissioned on 7 November 1955. Regular flights with jet aircraft began on 2 October 1962 with a maiden passenger flight from Moscow with a Tu-124, then the latest Soviet airliner. As the terminal built in 1954 became obsolete and unable to cope with growing airport traffic, the construction of the current terminal building began in 1976 and the terminal was opened in 1980, prior to the 1980 Summer Olympics sailing event, which was held in the city. The architect of the new terminal was Mihhail Piskov, who took visual inspiration from traditional Estonian housebarns, and the interior designer was Maile Grünberg. The runway was also lengthened at that time. The first foreign airline since World War II to operate regular flights from Tallinn was SAS, whose first flight to the airport took place on 25 November 1989. The construction works for the first cargo terminal (Cargo 1), located in the middle of the future cargo area on the north side of the airport, were carried out from September 1997 until March 1998. The passenger terminal building was completely modernised in 1999, increasing its capacity to 1.4 million passengers per year, and was greatly expanded in 2008. Growing demand for extra cargo space created the need for a cargo terminal expansion, Cargo 2; to meet the growing demand for new cargo facilities at Tallinn Airport, the number of cargo terminals was later expanded to four. In 2012 a new aircraft maintenance hangar was opened and the number of passengers passed the two-million mark for the first time in the history of the airport. On 11 January 2013 the airport was accepted into the Airport Carbon Accreditation emission management and reduction programme by ACI. The year 2013 saw the introduction of an automatic border control system and the start of construction of a new business aviation hangar complex.
The airport underwent a large expansion project between January 2006 and September 2008. The existing terminal was expanded, with Jean Marie Bonnard, Pia Tasa and Inge Sirkel-Suviste as the architects of the project. The terminal was expanded in three directions, resulting in 18 new gates, separate lounges for Schengen and non-Schengen passengers, 10 new check-in desks and a new restaurant and cafes. A gallery connecting all the gates was constructed in the middle of the terminal building, giving the terminal a T shape. The projecting terminal section enables two-level traffic for international passengers. The renewed terminal has nine passenger bridges. The extensions constructed at the ends of the terminal building provided additional rooms for flight check-in and for delivering arriving luggage. Outside the terminal, the apron was refurbished and expanded and a new taxiway was added. The new terminal allows the airport to handle twice as many passengers as before. The renovated terminal received the award "Concrete Building of the Year 2008" from the Estonian Concrete Association. After the death of former president of Estonia Lennart Meri on 14 March 2006, journalist Argo Ideon from Eesti Ekspress proposed honouring the president's memory by naming Tallinn Airport after him – "" (Lennart Meri International Airport) – drawing parallels with John F. Kennedy International Airport, Charles de Gaulle Airport, Sabiha Gökçen International Airport and others. Ideon's article also mentioned that Meri himself had shown concern about the condition of the then Soviet-era construction (in one memorable case Meri, having arrived from Japan, led the group of journalists awaiting him to the airport's toilets to do the interview there, in order to point out the shoddy condition of the facilities).
The name change was discussed at a board meeting on 29 March 2006, and at the opening of the new terminal on 19 September 2008, Prime Minister Andrus Ansip officially announced that the renaming would take place in March 2009. In 2011 a new project of cruise turnarounds was launched in cooperation with Tallinn Passenger Port and Happy Cruises. More than 7,000 Spanish passengers travelled that year on charter flights to and from Tallinn Airport. As the airport is located only 5 km from the city-centre cruise quay, transfer time from airport to cruise ship is under an hour. In 2012, Pullmantur Air started charter operations from Madrid–Barajas Airport with three Airbus A321s and two to three Boeing 747s. During the summer of 2012 about 16,000 tourists were transferred. The company continued operations in 2013, transferring 25,000 tourists in five turnarounds, and there was one partial turnaround operation for the cruise ship MS Deutschland, operated by Peter Deilmann Cruises. In 2015, cruise tourists were attended to by four airlines – Iberia, Iberia Express, Wamos Air, and Vueling. Some 5,000 passengers were expected during three turnarounds for the Pullmantur Cruises cruise line. Tallinn Airport served 9,369 cruise turnaround passengers in 2015. No cruise turnarounds were expected in summer 2016 due to construction works, but the airport planned to continue them in 2017. On 7 November 2015, Estonian Air was liquidated following an adverse decision by the European Commission. This meant a significant temporary loss of business for the airport, as Estonian Air had been the largest carrier, accounting for one third of all capacity in 2014.
According to Erik Sakkov, board member of Tallinn Airport, future plans include extending the runway by 600–700 metres to serve regular long-haul flights, building a brand-new taxiway, new storage facilities and a new point-to-point terminal, and expanding the existing passenger terminal so that it can serve arriving and departing passengers on two different levels. On 21 February 2013 the environmental impact assessment of the airport development project started. The project includes lengthening the runway by 720 metres, installing ILS Category II equipment, extending the existing northern taxiway to the end of the expanded runway, constructing a whole new taxiway and a new apron area on the southern side of the airport, installing new perimeter security systems, and constructing an engine test facility and dedicated snow storage and de-icing areas. Among other benefits, the extension would enable planes to fly higher above the city of Tallinn by moving the threshold of the runway further from Lake Ülemiste, thus reducing noise levels. The public discussion of the runway extension's environmental impact assessment report took place on 16 December 2013, and construction work to extend the runway began on 1 May 2016. The length of the renovated runway is 3,480 metres; the construction contract was concluded with Lemminkäinen Eesti. On 17 November 2016 the airport administration reported that the runway expansion works were completed, making the runway the longest in the Baltic states. The runway and the main taxiway were extended to the east and a new system of navigation lights was installed. In the summer and autumn of 2016 the construction work caused restrictions on nighttime flight operations but had no impact on scheduled operations. The soil of the safety area around the extended runway was reinforced to reduce potential risks to aircraft in the event of a runway overrun or excursion.
In the course of the expansion work in 2016, some 45,000 tons of asphalt and 4,000 m³ of concrete were laid down; 60 kilometres of new duct access was built; 100 kilometres of new cables and 400 new navigation lights were installed; and 10 kilometres of new rainwater removal infrastructure was built. The expansion of the airstrip increased the airport's safety area by 41 hectares, and five kilometres of new service roads were built. The whole expansion works were to be completed by the end of 2017. On 12 June 2013 the City Administration of Tallinn approved detailed planning for a 0.91 ha land plot on which a new maintenance hangar is to be built. The total five-year investment plan amounts to more than 100 million euros. The airport is investing €126 million during the 2015–2021 period. The most important project is the reconstruction of the runway infrastructure at a cost of €75 million. An additional investment of €2.5 million would be made in the flight terminal in order to change its layout and improve the terminal's security, capacity and VIP area. A multi-storey car park for 1,200 vehicles and 150 taxis would be built due to the consistently increasing need for parking spots around the airport. Work on the task and procurement conditions of the parking structure began in 2014. It will be located in front of the passenger terminal and should be completed in 2017 according to current plans. On 10 April 2019, Tallinn Airport announced plans to expand the airport terminal and build an airport city by 2035. The expanded terminal is planned to serve 6 to 8 million passengers per year, with an expanded area of 85,000 m² and 26 gates instead of 13. As the airport's current facilities could not serve more than 2.5 million passengers per year and the number of passengers was growing rapidly (38.2% in 2011), a new terminal dedicated to low-cost airlines was planned.
On 12 April 2012 Tallinn Airport announced that it would build, the following year, a new terminal with five stands for low-cost airlines, which would be easily removable and extendable. The new terminal would be intended for low-cost airlines such as Ryanair, Easyjet and Norwegian that do not want to pay as much to the airport and do not need many airport services. The new terminal is intended to serve one million passengers, and the space previously occupied by low-cost airlines would pass into the disposition of Nordica and other traditional airlines. There are one passenger terminal and four cargo terminals at the airport. Although it has a pier terminal rather than a linear terminal building, the airport heavily resembles Hong Kong's old Kai Tak Airport in that the terminal is to the right of the western end of the runway (08), and the other end (26) is connected by a long parallel taxiway. The Estonian EXPO Center's year-round permanent exhibition is located near Gate 3, acting as a live advertising space where promotion representatives introduce the companies taking part in the exhibition and help find cooperation partners in particular fields of business. The center was opened on 22 July 2010. VKG opened an oil shale-themed exposition at Gate 4 on 9 January 2013, showing the history and development of the Estonian oil shale industry. The Estonian Tourist Board opened a brand-new "Visit Estonia"-themed exposition at Gate 5 on 2 October 2013. The gate is divided into three parts: a children's territory with a Lotte-themed playhouse, an interactive, informative waiting area decorated with Estonian national patterns, and a bridge from the gate to the airplane that introduces travellers to Estonian nature. A lending library was opened on 9 May 2013 in a special area by Gate 1. All books were donated by the public, including Estonian president Toomas Hendrik Ilves and the First Lady of Estonia, Evelin Ilves.
The library will have books in ten different languages, the majority being in Estonian, Russian and English. There will also be a selection of children's books. On 16 August 2013 Tallinn Airport unveiled a gallery and started exhibiting artists' work in the Passenger Terminal. The gallery of rotating exhibitions on the 1st floor of the Passenger Terminal is open to all arriving and departing passengers as well as those seeing them off or meeting them. On 1 September 2013, the airport opened an automatic border control system that should accelerate procedures for passengers travelling out of the Schengen area. The fully automated border crossing system consists of two automated gates and six registration kiosks. The Nordea Lounge serves business class passengers of Aeroflot, Air Baltic, Finnair, Flybe, LOT Polish Airlines, Lufthansa and SAS, as well as Priority Pass holders and members of the Metropolis loyalty programme. An additional Tallinn Airport GH check-in terminal is located at the Radisson Blu Hotel Tallinn. Travellers can check in online and print boarding cards directly from the lobby. The system allows passengers to check in 24 hours before departure and choose a specific seat. The museum is located in a small building near the terminal; a relatively large area nearby will also be transformed into an open-air exhibition. Two ancient cult stones, which must be moved during the expansion of the runway, will be transferred to that exhibition. The whole museum plot will be separated from the airfield. The museum will have direct access from the E263 motorway (which shares the same route with Estonian main road 2). Additionally, a platform with a view onto the runway will be constructed, giving good possibilities for aircraft spotting. The activity centre opened in 2016. On 20 March 2013 the airport authorities announced a public procurement for constructing a new hangar complex. The cornerstone of the new complex was laid on 27 September 2013.
It has a surface area of , is located right next to the existing General Aviation Terminal and will be servicing aircraft within a distance of up to 3,000 kilometers from Tallinn. The complex is intended for accommodating a total of nine planes, eight of them are mid-size business jets and one aircraft the size of a large corporate aircraft. It consists of five hangars: the Hangar 1 for the large aircraft (such as Boeing 737, Airbus A318 or Airbus A319), hangars 2 to 5 are intended for smaller business jets (Bombardier Challenger 605, Learjet 60). The whole complex was opened on 15 April 2014 and its operator is Panaviatic, which is going to expand its business jet operations from Tallinn Airport. Apart from providing hangarage for business jets, the new complex also offers MRO services by Panaviatic's subsidiary AS Panaviatic Maintenance. The total investment was close to 5 million euros and the whole complex is the largest in the Baltic states. Magnetic MRO has its facilities and headquarters on the airport property. On 6 September 2012 the company opened a new column-free three-bay hangar for Base Maintenance works of narrow-body aircraft, such as Boeing 737 and Airbus A320. The company has in total three main Base Maintenance lines, and two additional lines for lighter checks and modification works. With the addition of the new hangar, the maximum annual line maintenance capacity of the company boosted to 72 aircraft from the present 24. Magnetic MRO said the new hangar will allow it carry out a planned doubling of its workforce. On 21 December 2015 Magnetic MRO announced a launch of the second painting hangar, which will be built in co-operation with Tallinn Airport, in response to growing demand for painting services. 
The new hangar with further expansion possibilities will be capable of housing aircraft in size up to Boeing 737 MAX 9 and Airbus A321neo, as well as regional aircraft, and according to the agreement, the hangar is planned to be finalized and ready for use by 1 June 2017. Tallinn Airport has 4 cargo terminals with total warehouse space of ca 11,600 m2. The size of warehouse in Cargo 1 is 3601 m2 and 2066 m2 are dedicated for the office area. Cargo terminal is operated by different operators (including integrators) and Tallinn Airport Ltd. only acts as a lessor. The size of Cargo 2 warehouse is 1255 m2 and 758 m2 are dedicated for office space. Cargo 2 is operated by TNT Express Worldwide. Other logistics operators include DHL, UPS and FedEx. The following airlines operate scheduled year-round or seasonal routes at Tallinn Airport: Total passengers using the airport has increased on average by 14.2% annually since 1998. On 16 November 2012 Tallinn Airport has reached two million passenger landmark for the first time in its history. Passenger data reflects international and domestic flights combined, share of domestic flights compared to international flights was marginal. Passenger and cargo numbers exclude direct transit. The best connection between downtown Tallinn and the airport is provided by tramline "4". The tram network extension to the airport terminal was opened on 1 September 2017. Trams mostly go with 6-minute intervals, the journey from downtown to the airport (and vice versa) takes 18–19 minutes. Trams run through the 150-metre long Ülemiste tram tunnel beneath the Tallinn-Narva railway. Like all public transportation in Tallinn, the tram is free to the city's residents. The line "2" offers a connection to Mõigu subdistrict of Tallinn(Mõigu is located 1–2 km southeast from airport towards Tartu). 
On the returning route from Mõigu to Tallinn downtown (and further to Tallinn Passenger Port) the line "2" stops in Tartu Road (on the other side of parking house, not in public transportation terminal (or tram terminal)). Therefore, when going to city centre it is more convenient (easier) to take tram than bus "2". The line "2" buses go mostly with 20-minute intervals. The line "49" provides connections to Viimsi Parish, as well as to Iru subdistrict, Iru village and Pirita and Lasnamäe districts. The line "65" provides a connection to Lasnamäe district. Tallinn AirportShuttle share taxi provides a connection from Tallinn Airport to any location in Tallinn. Long-distance services include: The nearest station is Ülemiste train station, which lies about 800 metres from the airport, near Ülemiste Keskus. It provides access to regional rail and commuter rail lines of Elron. The station and Tallinn Airport are connected through the bus lines "49" and"65" and the tram line "4". The airport is accessed by the E263 expressway (which shares the same route with the Estonian national road T2). The E20 expressway (which follows the T1) intersects with the E263 expressway away from the airport towards the city centre. The E67 expressway (Via Baltica, follows the Estonian national road T4) is easily accessible via the dual carriageway Järvevana Road, which provides a direct connection with E263 at the intersection.
https://en.wikipedia.org/wiki?curid=31611
Giant cell arteritis Giant cell arteritis (GCA), also called temporal arteritis, is an inflammatory disease of large blood vessels. Symptoms may include headache, pain over the temples, flu-like symptoms, double vision, and difficulty opening the mouth. Complication can include blockage of the artery to the eye with resulting blindness, aortic dissection, and aortic aneurysm. GCA is frequently associated with polymyalgia rheumatica. The cause is unknown. The underlying mechanism involves inflammation of the small blood vessels that occur within the walls of larger arteries. This mainly affects arteries around the head and neck, though some in the chest may also be affected. Diagnosis is suspected based on symptoms, blood tests, and medical imaging, and confirmed by biopsy of the temporal artery. However, in about 10% of people the temporal artery is normal. Treatment is typically with high doses of steroids such as prednisone or prednisolone. Once symptoms have resolved the dose is then decreased by about 15% per month. Once a low dose is reached, the taper is slowed further over the subsequent year. Other medications that may be recommended include bisphosphonates to prevent bone loss and a proton-pump inhibitor to prevent stomach problems. It affects about 1 in 15,000 people over the age of 50 per year. The condition typically only occurs in those over the age of 50, being most common among those in their 70s. Females are more often affected than males. Those of northern European descent are more commonly affected. Life expectancy is typically normal. The first description of the condition occurred in 1890. Common symptoms of giant cell arteritis include: The inflammation may affect blood supply to the eye; blurred vision or sudden blindness may occur. In 76% of cases involving the eye, the ophthalmic artery is involved, causing arteritic anterior ischemic optic neuropathy. Giant cell arteritis may present with atypical or overlapping features. 
Early and accurate diagnosis is important to prevent ischemic vision loss. Therefore, this condition is considered a medical emergency. While studies vary as to the exact relapse rate of giant cell arteritis, relapse of this condition can occur. It most often happens at low doses of prednisone (<20 mg/day), during the first year of treatment, and the most common signs of relapse are headache and polymyalgia rheumatica. The varicella-zoster virus (VZV) antigen was found in 74% of temporal artery biopsies that were GCA-positive, suggesting that the VZV infection may trigger the inflammatory cascade. The disorder may co-exist (in about half of cases) with polymyalgia rheumatica (PMR), which is characterized by sudden onset of pain and stiffness in muscles (pelvis, shoulder) of the body and is seen in the elderly. GCA and PMR are so closely linked that they are often considered to be different manifestations of the same disease process. PMR usually lacks the cranial symptoms, including headache, pain in the jaw while chewing, and vision symptoms, that are present in GCA. Giant cell arteritis can affect the aorta and lead to aortic aneurysm and aortic dissection. Up to 67% of people with GCA having evidence of an inflamed aorta, which can increase the risk of aortic aneurysm and dissection. There are arguments for the routine screening of each person with GCA for this possible life-threatening complication by imaging the aorta. Screening should be done on a case-by-case basis based on the signs and symptoms of people with GCA. The pathological mechanism is the result of an inflammatory cascade that is triggered by an as of yet determined cause resulting in dendritic cells in the vessel wall recruiting T cells and macrophages to form granulomatous infiltrates. These infiltrates erode the middle and inner layers of the arterial tunica media leading to conditions such as aneurysm and dissection. 
Activation of T helper 17 (Th17) cells involved with interleukin (IL) 6, IL-17, IL-21 and IL-23 play a critical part; specifically, Th17 activation leads to further activation of Th17 through IL-6 in a continuous, cyclic fashion. This pathway is suppressed with glucocorticoids, and more recently it has been found that IL-6 inhibitors also play a suppressive role. The gold standard for diagnosing temporal arteritis is biopsy, which involves removing a small part of the vessel under local anesthesia and examining it microscopically for giant cells infiltrating the tissue. However, a negative result does not definitively rule out the diagnosis; since the blood vessels are involved in a patchy pattern, there may be unaffected areas on the vessel and the biopsy might have been taken from these parts. Unilateral biopsy of a 1.5–3 cm length is 85-90% sensitive (1 cm is the minimum). A Characterised as intimal hyperplasia and medial granulomatous inflammation with elastic lamina fragmentation with a CD 4+ predominant T cell infiltrate, currently biopsy is only considered confirmatory for the clinical diagnosis, or one of the diagnostic criteria. Radiological examination of the temporal artery with ultrasound yields a halo sign. Contrast-enhanced brain MRI and CT is generally negative in this disorder. Recent studies have shown that 3T MRI using super high resolution imaging and contrast injection can non-invasively diagnose this disorder with high specificity and sensitivity. GCA is considered a medical emergency due to the potential of irreversible vision loss. Corticosteroids, typically high-dose prednisone (1 mg/kg/day), should be started as soon as the diagnosis is suspected (even before the diagnosis is confirmed by biopsy) to prevent irreversible blindness secondary to ophthalmic artery occlusion. 
Steroids do not prevent the diagnosis from later being confirmed by biopsy, although certain changes in the histology may be observed towards the end of the first week of treatment and are more difficult to identify after a couple of months. The dose of corticosteroids is generally slowly tapered over 12–18 months. Oral steroids are at least as effective as intravenous steroids, except in the treatment of acute visual loss where intravenous steroids appear to offer significant benefit over oral steroids. Short-term side effects of prednisone are uncommon but can include mood changes, avascular necrosis, and an increased risk of infection. Some of the side effects associated with long-term use include weight gain, diabetes mellitus, osteoporosis, avascular necrosis, glaucoma, cataracts, cardiovascular disease, and an increased risk of infection. It is unclear if adding a small amount of aspirin is beneficial or not as it has not been studied. Injections of tocilizumab may also be used. Tocilizumab is a humanized antibody that targets the interleukin-6 receptor, which is a key cytokine involved in the progression of GCA. Tocilizumab has been found to be effective at minimizing both recurrence, and flares of GCA when used both on its own and with corticosteroids. Long term use of tocilizumab requires further investigation. Tocilizumab may increase the risk of gastrointestinal perforation and infections, however it does not appear that there are more risks than using corticosteroids. Giant cell arteritis typically only occurs in those over the age of 50; particularly those in their 70s. It affects about 1 in 15,000 people over the age of 50 per year. It is more common in women than in men, by a ratio of 2:1, and more common in those of Northern European descent, as well as in those residing further from the Equator. The terms "giant cell arteritis" and "temporal arteritis" are sometimes used interchangeably, because of the frequent involvement of the temporal artery. 
However, other large vessels such as the aorta can be involved. Giant-cell arteritis is also known as "cranial arteritis" and "Horton's disease." The name (giant cell arteritis) reflects the type of inflammatory cell involved.
https://en.wikipedia.org/wiki?curid=31620