Proteinoid Proteinoids, or thermal proteins, are protein-like, often cross-linked molecules formed abiotically from amino acids. Sidney W. Fox initially proposed that they may have been precursors to the first living cells (protocells). The term was also used in the 1960s to describe peptides shorter than twenty amino acids found in hydrolysed protein, but this usage is no longer common. In trying to uncover the intermediate stages of abiogenesis, the scientist Sidney W. Fox, in the 1950s and 1960s, studied the spontaneous formation of peptide structures under conditions that might plausibly have existed early in Earth's history. He demonstrated that amino acids could spontaneously form small chains called peptides. In one of his experiments, he allowed amino acids to dry out as if puddled in a warm, dry spot under prebiotic conditions. He found that, as they dried, the amino acids formed long, often cross-linked, thread-like microscopic polypeptide globules, which he named "proteinoid microspheres". The abiotic polymerization of amino acids into proteins through the formation of peptide bonds was thought to occur only at temperatures over 140 °C. However, Fox and his co-workers discovered that phosphoric acid acted as a catalyst for this reaction. They were able to form protein-like chains from a mixture of 18 common amino acids at 70 °C in the presence of phosphoric acid, and dubbed these protein-like chains proteinoids. Fox later found naturally occurring proteinoids, similar to those he had created in his laboratory, in lava and cinders from Hawaiian volcanic vents, and determined that the amino acids present had polymerized due to the heat of escaping gases and lava. Other catalysts have since been found; one of them, amidinium carbodiimide, is formed in primitive-Earth experiments and is effective in dilute aqueous solutions. When present in certain concentrations in aqueous solutions, proteinoids form small microspheres. 
This is because some of the amino acids incorporated into proteinoid chains are more hydrophobic than others, so proteinoids cluster together like droplets of oil in water. These structures exhibit a few characteristics of living cells. Fox thought that the microspheres may have provided a cell compartment within which organic molecules could have become concentrated and protected from the outside environment during the process of chemical evolution. Proteinoid microspheres are today being considered for use in pharmaceuticals, providing microscopic biodegradable capsules in which to package and deliver oral drugs. In another experiment using a similar method to set suitable conditions for life to form, Fox collected volcanic material from a cinder cone in Hawaii. He discovered that the temperature was over 100 °C just beneath the surface of the cinder cone, and suggested that this might have been the environment in which life was created: molecules could have formed and then been washed through the loose volcanic ash and into the sea. He placed lumps of lava over amino acids derived from methane, ammonia and water, sterilized all materials, and baked the lava over the amino acids for a few hours in a glass oven. A brown, sticky substance formed over the surface, and when the lava was drenched in sterilized water a thick, brown liquid leached out. It turned out that the amino acids had combined to form proteinoids, and the proteinoids had combined to form small spheres. Fox called these "microspheres". His protobionts were not cells, although they formed clumps and chains reminiscent of bacteria. Based upon such experiments, Colin Pittendrigh stated in December 1967 that "laboratories will be creating a living cell within ten years," a remark that reflected the contemporary level of ignorance of the complexity of cell structures. Fox likened the amino acid globules to cells and proposed that they bridged the transition from macromolecule to cell. 
However, his hypothesis was later dismissed because proteinoids are not proteins; they feature mostly non-peptide bonds and amino acid cross-linkages not present in living organisms. Furthermore, they have no compartmentalization and no information content in their molecules. Although their role as an evolutionary precursor has been superseded, the hypothesis was a catalyst for investigating other mechanisms that could have brought about abiogenesis, such as the RNA world, PAH world, iron–sulfur world, and protocell hypotheses.
https://en.wikipedia.org/wiki?curid=24959
Permanent Court of International Justice The Permanent Court of International Justice, often called the World Court, existed from 1922 to 1946. It was an international court attached to the League of Nations. Created in 1920 (although the idea of an international court was several centuries old), the Court was initially well received by states and academics alike, with many cases submitted to it during its first decade of operation. Between 1922 and 1940 the Court heard a total of 29 cases and delivered 27 separate advisory opinions. With the heightened international tension of the 1930s, the Court came to be used less. By a resolution of the League of Nations on 18 April 1946, both the Court and the League ceased to exist and were replaced by the International Court of Justice and the United Nations. The Court's mandatory jurisdiction came from three sources: the Optional Clause of the League of Nations, general international conventions and special bipartite international treaties. Cases could also be submitted directly by states, but states were not bound to submit material unless it fell into those three categories. The Court could issue either judgments or advisory opinions. Judgments were directly binding; advisory opinions were not. In practice, member states of the League of Nations followed advisory opinions anyway, for fear of undermining the moral and legal authority of the Court and the League. An international court had long been proposed; Pierre Dubois suggested it in 1305 and Émeric Crucé in 1623. The idea of an international court of justice arose in the political world at the First Hague Peace Conference in 1899, where it was declared that arbitration between states was the easiest solution to disputes, and a temporary panel of judges, the Permanent Court of Arbitration, was provided to arbitrate such cases. 
At the Second Hague Peace Conference in 1907, a draft convention for a permanent Court of Arbitral Justice was written, although disputes and other pressing business at the Conference meant that such a body was never established, owing to difficulties in agreeing on a procedure to select the judges. The outbreak of the First World War, and in particular its conclusion, made it clear to many academics that some kind of world court was needed, and it was widely expected that one would be established. Article 14 of the Covenant of the League of Nations, created after the Treaty of Versailles, allowed the League to investigate setting up an international court. In June 1920, an Advisory Committee of jurists appointed by the League of Nations finally established a working guideline for the appointment of judges, and the Committee was then authorised to draft a constitution for a permanent court, not of arbitration but of justice. The Statute of the Permanent Court of International Justice was accepted in Geneva on 13 December 1920. The Court first sat on 30 January 1922, at the Peace Palace, The Hague, covering preliminary business during the first session (such as establishing procedure and appointing officers). Nine judges sat, along with three deputies, since Antonio Sánchez de Bustamante y Sirven, Ruy Barbosa and Wang Ch'ung-hui were unable to attend, the last being at the Washington Naval Conference. The Court elected Bernard Loder as President and Max Huber as Vice-President; Huber was replaced by Charles Andre Weiss a month later. On 14 February the Court was officially opened, and rules of procedure were established on 24 March, when the Court ended its first session. The Court first sat to decide cases on 15 June. During its first year of business, the Court issued three advisory opinions, all related to the International Labour Organization created by the Treaty of Versailles. 
The initial reaction to the Court was good, from politicians, practising lawyers and academics alike. Ernest Pollock, the former Attorney General for England and Wales, said, "May we not as lawyers regard the establishment of an International Court of Justice as an advance in the science that we pursue?" John Henry Wigmore said that the creation of the Court "should have given every lawyer a thrill of cosmic vibration", and James Brown Scott wrote that "the one dream of our ages has been realised in our time". Much praise was heaped upon the appointment of an American judge even though the United States had not become a signatory to the Court's protocol, and it was thought that it would soon do so. The Court faced increasing work as it went on, allaying the fears of those commentators who had believed it would become like the Supreme Court of the United States, which was not presented with a case for its first six terms. The Court was given nine cases during 1922 and 1923, with judgments called "cases" and advisory opinions called "questions". Three cases were disposed of during the Court's first session, one during an extraordinary sitting between 8 January and 7 February 1923 (the Tunis-Morocco Nationality Question), four during the second ordinary sitting between 15 June 1923 and 15 September 1923 (the Eastern Carelia Question, the S.S. "Wimbledon" case, the German Settlers Question and the Acquisition of Polish Nationality Question) and one during a second extraordinary session from 12 November to 6 December 1923 (the Jaworzina Question). A replacement for Ruy Barbosa (who had died on 1 March 1923 without hearing any cases) was also found, with the election of Epitácio Pessoa on 10 September 1923. 
The workload the following year was reduced, comprising two judgments and one advisory opinion: the Mavrommatis Palestine Concessions Case, the Interpretation of the Treaty of Neuilly Case (the first case of the Court's Chamber of Summary Procedure) and the Monastery of Saint-Naoum Question. During the same year, a new President and Vice-President were elected, since those offices carried a term of three years. At the elections on 4 September 1924, Charles Andre Weiss was again elected Vice-President and Max Huber became the second President of the Court. Judicial pensions were created at the same time, with a judge being given 1/30th of his annual pay for every year he had served once he had both retired and turned 65. 1925 was an exceedingly busy year for the Court, which sat for 210 days, with four extraordinary sessions as well as the ordinary session, producing 3 judgments and 4 advisory opinions. The first judgment was given in the Exchange of Greek and Turkish Populations Case, the second (by the Chamber of Summary Procedure) interpreted the earlier judgment in the Interpretation of the Treaty of Neuilly Case, and the third was given in the Mavrommatis Palestine Concessions Case. The 4 advisory opinions issued by the Court were in the Polish Postal Service in Danzig Question, the Expulsion of the Ecumenical Patriarch Question, the Treaty of Lausanne Question and the German Interests in Polish Upper Silesia Question. 1926 saw reduced business, with only one ordinary session and one extraordinary session; it was, however, the first year that all 11 judges had been present to hear cases. The Court heard two cases, providing one judgment and one advisory opinion: a second question on German Interests in Polish Upper Silesia, this time a judgment rather than an advisory opinion, and an advisory opinion on the International Labour Organization. 
Despite the reduction of work in 1926, 1927 was another busy year, the Court sitting continuously from 15 June to 16 December and handing down 4 orders, 4 judgments and 4 advisory opinions. The judgments were in the Belgium-China Case, the Case Concerning the Factory at Chorzów, the Lotus Case and a continuation of the Mavrommatis Jerusalem Concessions Case. 3 of the advisory opinions were on the Competence of the European Commission on the Danube, and the 4th was on the Jurisdiction of Danzig Courts. The 4 orders were on the German Interests in Polish Upper Silesia. This year saw another set of elections on 6 December, with Dionisio Anzilotti elected President and Charles Andre Weiss Vice-President. Weiss died the following year, and John Bassett Moore resigned; Max Huber was elected Vice-President on 12 September 1928 to succeed Weiss, while a second death (Lord Finlay) left the Court increasingly understaffed. Replacements for Moore and Finlay were elected on 19 September 1929: Henri Fromageot and Cecil Hurst respectively. After the second round of elections in September 1930, the Court was reorganised. On 16 January 1931 Mineichirō Adachi was appointed President, and Gustavo Guerrero Vice-President. The United States never joined the World Court, primarily because enemies of the League of Nations in the Senate argued that the Court was too closely linked to the League. The leading opponent was Senator William Borah, Republican of Idaho. Efforts to have the United States recognise the Court's jurisdiction followed a long and drawn-out process. President Warren G. Harding had first suggested US involvement in 1923, and on 9 December 1929 three court protocols were signed. The U.S. demanded a veto over cases involving the U.S., but other nations rejected the idea. President Franklin Roosevelt did not risk his political capital and gave only passive support, even though a two-thirds vote of approval was needed in the Senate. 
A barrage of telegrams flooded Congress, inspired by attacks made by Charles Coughlin and others. The treaty failed by seven votes on 29 January 1935. The United States finally accepted the Court's jurisdiction on 28 December 1935, but the treaty was never ratified, and the U.S. never joined. Francis Boyle attributes the failure to a strong isolationist element in the US Senate, arguing that the ineffectiveness shown by US nonparticipation in the Court and other international institutions could be linked to the start of the Second World War. 1933 was a busy year for the Court, which cleared its 20th case (and "greatest triumph"): the Eastern Greenland Case. This period was marked by growing international tension, however, with Japan and Germany announcing their withdrawal from the League of Nations, to come into effect in 1935. That did not directly affect the Court, since the protocol accepting its jurisdiction was separately ratified, but it influenced whether a nation would be willing to bring a case before it, as evidenced by Germany's withdrawal from two pending cases. 1934, the Court's 13th year, "has been in keeping with the traditions associated with that number", with few cases, since the world's governments were more concerned with the growing international tension. The Court's business continued to be small in 1935, 1936, 1937, 1938 and 1939, although 1937 was marked by Monaco's acceptance of the Court protocol. The Court's judicial output in 1940 consisted entirely of a set of orders, completed in a meeting between 19 and 26 February, caused by the international situation, which left the Court with "uncertain prospects for the future". Following the German invasion of the Netherlands, the Court was unable to meet, although the Registrar and President were afforded full diplomatic immunity. 
Informed that the situation would not be tolerated after diplomatic missions from other nations left The Hague on 16 July, the President and Registrar left the Netherlands and moved to Switzerland, accompanied by their staff. The Court was unable to meet between 1941 and 1944, but the framework remained intact, and it soon became apparent that the Court would be dissolved. In 1943, an international panel met to consider "the question of the Permanent Court of International Justice", meeting from 20 March 1943 to 10 February 1944. The panel agreed that the name and functioning of the Court should be preserved, but in some future court rather than as a continuation of the current one. Between 21 August and 7 October 1944, the Dumbarton Oaks Conference was held, which, among other things, created an international court attached to the United Nations to succeed the Permanent Court of International Justice. As a result of these conferences and others, the judges of the Permanent Court of International Justice officially resigned in October 1945 and, via a resolution of the League of Nations on 18 April 1946, the Court and the League both ceased to exist, being replaced by the International Court of Justice and the United Nations. The Court initially consisted of 11 judges and 4 deputy judges, recommended by member states of the League of Nations to the Secretary General of the League, who would put them before the Council and Assembly for election. The Council and Assembly were to bear in mind that the elected panel of judges was to represent every major legal tradition in the League, along with "every major civilization". Each member state was allowed to recommend 4 potential judges, with a maximum of 2 from its own nation. Judges were elected by a straight majority vote, held independently in the Council and Assembly. The judges served for a period of nine years, with their terms all expiring at the same time, necessitating a completely new set of elections. 
The judges were independent and, for the purposes of hearing cases, set aside their nationality, owing allegiance to no individual member state; it was, however, forbidden to have more than one judge from the same state. As a sign of their independence from national ties, judges were given full diplomatic immunity when engaged in Court business. The only requirements for judges were "high moral character" and "the qualifications required in their respective countries [for] the highest judicial offices" or to be "jurisconsults of recognized competence in international law". The first panel was elected on 14 September 1921, with the 4 deputies being elected on the 16th. On the first ballot, Rafael Altamira y Crevea of Spain, Dionisio Anzilotti of Italy, Bernard Loder of the Netherlands, Ruy Barbosa of Brazil, Yorozu Oda of Japan, Charles Andre Weiss of France, Antonio Sánchez de Bustamante y Sirven of Cuba and Lord Finlay of the United Kingdom were elected by a majority vote of both the Council and Assembly. The second ballot elected John Bassett Moore of the United States, and the sixth elected Didrik Nyholm of Denmark and Max Huber of Switzerland. As deputy judges, Wang Ch'ung-hui of China, Demetre Negulesco of Romania and Michaelo Yovanovich of Yugoslavia were elected. The Assembly and Council disagreed on the fourth deputy judge, but Frederik Beichmann of Norway was eventually appointed. Deputy judges were only substitutes for absent judges and were not afforded a vote in altering court procedure or contributing at other times; they were therefore allowed to act as counsel in international cases where they were not sitting as judges. In 1930, the number of judges was increased to 15, and a new set of elections was held. The election took place on 25 September 1930, with 14 candidates receiving a majority on the first ballot and a 15th, Francisco José Urrutia, receiving a majority on the second. 
The full court was Urrutia, Mineichiro Adachi, Rafael Altamira y Crevea, Dionisio Anzilotti, Bustamante, Willem van Eysinga, Henri Fromageot, José Gustavo Guerrero, Cecil Hurst, Edouard Rolin-Jaequemyns, Frank B. Kellogg, Negulesco, Michał Jan Rostworowski, Walther Schücking and Wang Ch'ung-hui. Judges were paid 15,000 Dutch florins a year, with daily expenses of 50 florins to cover living costs, and an additional 45,000 florins for the President, who was required to live at The Hague. Travelling expenses were also provided, and a "duty allowance" of 100 florins was paid when the court was sitting, with 150 for the Vice-President. This duty allowance was limited to 20,000 florins a year for the judges and 30,000 florins for the Vice-President; as such, it provided for 200 days of court hearings, with no allowance provided if the court sat for longer. The deputy judges received no salary but, when called up for service, were provided with travel expenses, 50 florins a day for living expenses and 150 florins a day as a duty allowance. Under the Covenant of the League of Nations, all League members agreed that if there was a dispute between states which they "recognize to be suitable for submission to arbitration and which cannot be satisfactorily settled by diplomacy", the matter would be submitted to the Court for arbitration. Suitable disputes were those over the interpretation of an international treaty, a question of international law, the validity of facts which, if true, would breach international obligations, and the nature of any reparations to be made for breaching international obligations. The original Statute of the Court provided that all 11 judges were required to sit in every case. 
There were three exceptions: when reviewing Labour Clauses from a peace treaty such as the Treaty of Versailles (which was done by a special chamber of 5 judges, appointed every 3 years), when reviewing cases on communications or transport arising from a peace treaty (which used a similar procedure) and when hearing summary procedure cases, which were reviewed by a panel of 3 judges. To prevent the appearance of any bias in the court's makeup, if there was a judge belonging to one member state on the panel and the other member state was not "represented", that state had the ability to select an "ad hoc" judge of its own nationality to hear the case. In a full court hearing, that increased the number to 12; in one of the 5-man chambers, the new judge took the place of one of the original 5. That did not apply to summary procedure cases. The "ad hoc" judge, selected by the member state, was expected to fulfil all the requirements of a normal judge; the President of the Court had ultimate discretion over whether to authorise him to sit. The Court was mandated to open on 15 June each year and continue until all cases were finished, with extraordinary sessions if required; by 1927, there were more extraordinary sessions than ordinary ones. The Court's business was conducted in English and French, its official languages, and hearings were public unless otherwise specified. After receiving the files in a case calculated to lead to a judgment, the judges would exchange their views informally on the salient legal points of the case, and a time limit for producing a judgment would then be set. Then, each judge would write an anonymous summary containing his opinion; the opinions would be circulated among the Court for 2 or 3 days before the President drafted a judgment containing a summary of those submitted by individual judges. The Court would then agree on the decision that it wished to reach, along with the main points of argument it wished to use. 
Once this was done, a Committee of 4, comprising the President, the Registrar and two judges elected by secret ballot, drafted a final judgment, which was then voted on by the entire Court. Once a final judgment was settled, it was released to the public and the press. Every judgment contained the reasons behind the decision and the judges assenting; dissenting judges were allowed to deliver their own judgments, with all judgments read in open court before the agents of the parties to the dispute. Judgments could not be revised except on the discovery of some fact unknown when the Court sat, and not if the fact had been known but left undiscussed through negligence. The Court also issued "advisory opinions", which arose from Article 14 of the Covenant creating the Court, which provided, "The Court may also give an advisory opinion upon any dispute referred to it by the Council or Assembly". Goodrich interprets that as indicating that the drafters intended a purely advisory capacity for the Court, not a binding one. Manley Ottmer Hudson (who sat as a judge) said that an advisory opinion "was what it purported to be. It is advisory. It is not in any sense a judgement... hence it is not in any way binding on any state", but Charles De Visscher argued that in certain situations an advisory opinion could be binding on the League of Nations Council and, under certain circumstances, on some states; M. Politis agreed, saying that the Court's advisory opinions were equivalent to a binding judgment. In 1927, the Court appointed a committee to look at this issue, and it reported that "where there are in fact contending parties, the difference between contentious cases and advisory cases is only nominal... so the view that advisory opinions are not binding is more theoretical than real". In practice, advisory opinions were usually followed, mostly for fear that ignoring the decisions of this "revolutionary" international court would undermine its authority. 
The court retained the discretion to avoid giving an advisory opinion, which it used on occasion. Other than the judges, the Court also included a Registrar and his Secretariat, the Registry. When the Court met for its initial session, opened on 30 January 1922 to allow for the establishment of procedure and the appointment of Court officials, the Secretary-General of the League of Nations passed an emergency resolution through the Assembly, which designated an official of the League and his staff as the Registrar and Registry respectively, with the first Registrar being Åke Hammarskjöld. The Registrar, required to reside within The Hague, was initially tasked with drawing up a plan to create an efficient Secretariat, using the smallest number of staff possible and costing as little as possible. As a result, he decided to have each member of the Secretariat as the head of a particular Department, so the numbers of actual employees could be increased or decreased as necessary without impacting on the actual Registry. In 1927, the post of Deputy-Registrar was created, tasked with dealing with legal research for the Court and answering all diplomatic correspondence received by the Registry. The first Deputy-Registrar was Paul Ruegger; after his resignation on 17 August 1928, Julio Lopez Olivan was selected to succeed him. Olivan resigned in 1931 to take over from Hammarskjöld as Registrar, and was replaced by M. L. J. H. Jorstad. The three principal officers of the Registry, after the Registrar and Deputy-Registrar, were the three Editing Secretaries. The first Editing Secretary, known as the Drafting Secretary, was tasked with drafting the Court's publications (including the Confidential Bulletin, a document exclusively received by judges of the court) and Sections D and E of the official journal, comprising the legislative clauses conferring jurisdiction on the Court and the Court's Annual Report. 
The second Editing Secretary, known as the Oral Secretary, was mainly responsible for the oral interpretation and translation of the Court's discussions. For public hearings, he was assisted by interpreters, but for private meetings only he, the Registrar and the Deputy-Registrar were admitted. As a result of this duty, the Oral Secretary was also tasked with writing Section C of the official journal, which comprised the oral interpretations of Court minutes, along with cases and questions put before the court. The third Secretary, known as the Written Secretary, was tasked with the written translations of the Court's business, which were "both numerous and voluminous". He was assisted in this by the other Secretaries and by translators for languages not his own; all Secretaries were expected to speak English and French fluently and to have a working knowledge of German and Spanish. The Registry was split into several Departments: the Archives, the Accounting and Establishment Department, the Printing Department and the Copying Department. The Archives included a distribution service for the Court's documents and the legal texts used by the Court itself, and was described as one of the most difficult departments to organise. The Accounting and Establishment Department dealt with the requests for and allocation of the Court's yearly budget, which was drawn up by the Registrar, approved by the Court and submitted to the League of Nations. The Printing Department, run from a single printing plant in Leiden, was created to allow the circulation of the Court's publications. The Copying Department comprised shorthand, typing and copying services, and included secretaries for the Registrar and judges, emergency reporters capable of taking notes down verbatim, and copyists; the smallest of the departments, it comprised between 12 and 40 staff depending on the business of the Court. 
The Court's jurisdiction was largely optional, but there were some situations in which it had "compulsory jurisdiction" and states were required to refer cases to it. That jurisdiction came from three sources: the Optional Clause of the League of Nations, general international conventions and "special bipartite international treaties". The Optional Clause was a clause attached to the protocol establishing the court that required all signatories to refer certain classes of dispute to the court, with compulsory judgments resulting. There were approximately 30 international conventions under which the Court had similar jurisdiction, including the Treaty of Versailles, the Air Navigation Convention, the Treaty of St. Germain and all mandates signed by the League of Nations. It was also foreseen that clauses would be inserted in bipartite international treaties allowing the referral of disputes to the Court; that occurred, with such provisions found in treaties between Czechoslovakia and Austria, and between Czechoslovakia and Poland. Throughout its existence, the Court widened its jurisdiction as much as possible. Strictly speaking, the Court's jurisdiction was only for disputes between states, but it regularly accepted disputes between a state and an individual if a second state brought the individual's case before it. It argued that the second state was asserting its own rights, so that the case became one between two states. The proviso that the Court was for disputes that could not "be satisfactorily settled by diplomacy" never led it to require evidence that diplomatic discussions had actually been attempted before a case was brought. In the Loan Cases, it asserted jurisdiction despite the fact that there was no alleged breach of international law and it could not be shown that there was any international element to the claim. 
The Court justified itself by saying that the Covenant of the League of Nations allowed it jurisdiction in cases over "the existence of any fact which, if established, would constitute a breach of international obligations" and argued that since the fact "may be of any kind", it had jurisdiction even when the dispute was one of municipal law. It had long been established that municipal law could be considered as a side point to a dispute over international law, but the Loan Cases discussed municipal law without the application of any international points.
https://en.wikipedia.org/wiki?curid=24960
Prince Albert (genital piercing) The Prince Albert (PA) is one of the most common male genital piercings. The PA is "a ring-style piercing that extends along the underside of the glans from the urethral opening to where the glans meets the shaft of the penis." The related "reverse Prince Albert piercing" enters through the urethra and exits through a hole pierced in the top of the glans. Some piercers avoid the nerve bundle that runs along the centre of the frenulum altogether, while others do not. The piercing can be centred if the bearer is circumcised; otherwise, it must be done off-centre so that the surrounding skin can reposition itself. A Prince Albert piercing can take from 4 weeks to 6 months to heal. A fresh PA piercing may cause bleeding, swelling and inflammation. In rare cases, it can lead to local infections. Some men find that the dribble caused by the PA when urinating necessitates sitting down to urinate; with practice, some can control the stream while standing. Some PA wearers report that it enhances sexual pleasure for both partners. However, others penetrated by males with this piercing report discomfort. PA rings can cause additional discomfort to female partners in cases where the penis comes into contact with the cervix. Sexual partners of those with piercings may experience complications during oral sex such as chipped teeth, choking, foreign bodies getting stuck between the partner's teeth, and mucosal injury to receptive partners. As with many piercings, there is a risk of the jewelry becoming caught on clothing and being pulled or torn out. Very large-gauge or heavy jewelry can cause thinning of the tissue between the urethral opening and the healed fistula, resulting in accidental tearing or other complications during sexual activity. 
Conversely, extremely thin jewelry can cause the same tearing in what is commonly referred to as the "cheese-cutter effect", either during sudden torsion or over a long period of wearing, especially if the thin jewelry bears any weight. Prince Albert piercings are typically pierced at either 12 or 10g (2 or 2.5mm). They are often (gradually) stretched soon after, with jewelry within the 8g to 2g (3mm to 6.5mm) range being the most popular. Avoiding an initial piercing at a thin gauge (16g or 14g), or immediately stretching such a piercing to 10g or 8g using a taper, helps prevent the cheese-cutter effect, although personal preference and individual anatomy also play a role in these decisions. Further stretching to sizes 0 or 00g (8 or 9mm) and larger is not uncommon. If a sufficiently heavy barbell or ring is worn continuously, a mild form of 'auto-stretching' can be observed, meaning that stretching to a larger gauge is easier and might not require a taper. While most wearers find that PAs are comfortable to wear and rarely remove them, even during sex, some individuals have found that extremely large or heavy jewelry is uncomfortable to wear for long periods or interferes with the sexual functioning of the penis. Jewelry suitable for a Prince Albert piercing includes the circular barbell, curved barbell, captive bead, segment ring, and the prince's wand. Curved barbells used for PA piercings are worn such that one ball sits on the lower side of the penis and the other ball sits at the urethral opening. This type of jewelry prevents discomfort that can come from larger jewelry moving around during daily wear. The origin of this piercing is unknown. Many theories suggest that the piercing was used to secure the penis in some manner, rather than having a sexual or cultural purpose. In modern times, the Prince Albert piercing was popularized by Jim Ward in the early 1970s. 
In West Hollywood, Ward met Richard Simonton (aka Doug Malloy) and Fakir Musafar. Together, these men further developed the Prince Albert piercing. Malloy published a pamphlet in which he concocted fanciful histories of various piercings, genital piercings in particular. These apocryphal tales, which included the notion that Albert, the Prince Consort, invented the piercing that shares his name in order to tame the appearance of his large penis in tight trousers, are widely circulated as urban legend. No historical proof of their veracity has been located independent of Malloy's assertions. Like many other male genital piercings, the PA had a history of practice in gay male subculture in the twentieth century. It became more prominently known when body piercing expanded in the late 1970s and was gradually embraced by popular culture.
https://en.wikipedia.org/wiki?curid=24961
Paint Your Wagon (musical) Paint Your Wagon is a Broadway musical comedy, with book and lyrics by Alan J. Lerner and music by Frederick Loewe. The story centers on a miner and his daughter and follows the lives and loves of the people in a mining camp in Gold Rush-era California. Popular songs from the show included "Wand'rin' Star", "I Talk to the Trees" and "They Call the Wind Maria". The musical ran on Broadway in 1951 and in the West End in 1953. In 1969, a film version, also titled "Paint Your Wagon", was released; it had a highly revised plot and some new songs composed by Lerner and André Previn. In the California wilderness in May 1853, a crusty old miner, Ben Rumson, is conducting a makeshift funeral for a friend. Meanwhile, his 16-year-old daughter Jennifer discovers gold dust. Ben claims the land, and prospectors start flocking to the brand-new town of Rumson ("I'm On My Way"). Two months later Rumson has a population of 400, all of whom are men except for Jennifer. Prospector Jake Whippany is saving money to send for Cherry and her Fandango girls ("Rumson"), while Jennifer senses the tension building in town ("What's Going On Here?"). Julio Valveras, a handsome young miner forced to live and work outside of town because he is Mexican, comes to town with dirty laundry and runs into Jennifer, who volunteers to do his laundry, and the two talk ("I Talk to the Trees"). Steve Bulmarck and the other men ponder the lonely nomadic life they lead in the song "They Call the Wind Maria". Two months later the men want Ben to send Jennifer away, and he wishes her mother were still alive to help him ("I Still See Elisa"). Jennifer is in love with Julio ("How Can I Wait?"), and when Ben sees Jennifer dancing with Julio's clothes, he decides to send her East on the next stage. Jacob Woodling, a Mormon man with two wives, Sarah and Elizabeth, arrives in Rumson, where the men demand that Jacob sell one of his wives. 
To his surprise, Ben finds himself wooing Elizabeth ("In Between") and wins her for $800 ("Whoop-Ti-Yay"). Jennifer is disgusted by her father's actions and runs away, telling Julio that she will be reunited with him in a year's time ("Carino Mio"). Cherry and her Fandango girls arrive ("There's a Coach Comin' In"). Julio learns his claim is running dry, which means he must move on to make a living and will not be there to greet Jennifer when she returns. A year later in October, the miners celebrate the high times in Rumson now that the Fandango girls are around ("Hand Me Down That Can o' Beans"). Edgar Crocker, a miner who has saved his money, falls for Elizabeth and she responds, although Ben does not notice since he thinks Raymond Janney is in love with her (he is). Another miner, Mike Mooney, tells Julio about a lake that has gold dust on the bottom and he considers looking for it ("Another Autumn"). Jennifer returns in December, having learned civilized ways back East ("All for Him"). Ben tells his daughter that he will soon be moving on since he was not meant to stay in one place for long ("Wand'rin' Star"). The next day, as Cherry and the girls are packing to leave, they tell Jennifer that Julio has gone to find the lake with gold on its bottom. Raymond Janney offers to buy Elizabeth from Ben for $3,000, but she runs off with Edgar Crocker. Word comes of another strike 40 miles south of Rumson, and the rest of the town packs up to leave except for Jennifer, who is waiting for Julio to return, and Ben, who suddenly realizes that Rumson is indeed his town. Late in April, Julio appears, a broken man. Ben welcomes him and Julio is amazed to see Jennifer is there. As they move toward each other, the wagons filled with people move on. The musical had a pre-Broadway try-out at the Shubert Theater in Philadelphia opening on September 17, 1951. It opened on Broadway at the Shubert Theatre on November 12, 1951, and closed on July 19, 1952, after 289 performances. 
The production was directed by Daniel Mann, with set design by Oliver Smith, costume design by Motley, lighting design by Peggy Clark, and music for the dances arranged by Trude Rittmann; dances and musical ensembles were by Agnes de Mille, set to the orchestrations of Ted Royal. It starred James Barton (as Ben Rumson), Olga San Juan (Jennifer Rumson), Tony Bavaar (Julio Valveras), Gemze de Lappe (Yvonne Sorel), James Mitchell (Pete Billings), Kay Medford (Cherry), and Marijane Maricle (Elizabeth Woodling). Burl Ives and Eddie Dowling later took over the role of Ben Rumson. De Mille later restaged the dances as a stand-alone ballet, "Gold Rush". The West End production opened on February 11, 1953 at Her Majesty's Theatre and ran for 477 performances. It starred real-life father and daughter Bobby Howes and Sally Ann Howes. The Australian production opened on November 27, 1954 at Her Majesty's Theatre in Melbourne, with Alec Kellaway as Ben. A new production, with a revised libretto by David Rambo, premiered at the Brentwood Theatre, produced by the Geffen Playhouse in association with Christopher Allen, D. Constantine Conte, and Larry Spellman in Los Angeles, California, from November 23, 2004, to January 9, 2005. This new adaptation was directed by Gilbert Cates and choreographed by Kay Cole. The creative team included musical director Steve Orich, who provided arrangements and orchestrations; the design team featured Daniel Ionazzi (scenic and lighting), David Kay Mickeleson (costume) and Phil Allen (sound). The cast included Thomas F. Wilson as Ben Rumson, Jessica Rush as his daughter Jennifer and Sharon Lawrence as Lily. One change from the original was the staging of "They Call the Wind Maria" as an ensemble number instead of a showcase solo. A subsequent production was produced by the Pioneer Theatre Company in Salt Lake City, Utah and ran from September 28, 2007, through October 13, 2007. 
The director was Charles Morey and the choreographer was Patti D'Beck, with a cast of nearly 30. The musical was presented in an Encores! staged concert production at New York City Center in March 2015. The production was directed by Marc Bruni and starred Keith Carradine as Ben Rumson, Alexandra Socha as Jennifer and Justin Guarini as Julio Valveras. In 2010, Steven Suskin wrote, "The interwoven use of ballet that worked so well in the highlands was less effective on the prairies, and the subject matter was harsh and cold. In spite of the show's failure, Loewe displayed ... an uncanny ability to write scores indigenous to the time and locale of the characters and plots."
https://en.wikipedia.org/wiki?curid=24962
Pacific Overtures Pacific Overtures is a musical with music and lyrics by Stephen Sondheim, and a book by John Weidman. The show is set in Japan beginning in 1853 and follows the difficult westernization of Japan, told from the point of view of the Japanese. In particular, the story focuses on the lives of two friends caught in the change. Sondheim wrote the score in a quasi-Japanese style of parallel 4ths and no leading tone. He did not use the pentatonic scale; the 4th degree of the major scale is represented from the opening number through the finale, as Sondheim found just five pitches too limiting. The music contrasts Japanese contemplation ("There is No Other Way") with Western ingenuousness ("Please Hello"), while over the course of the 127 years Western harmonies, tonality and even lyrics are infused into the score. The score is generally considered to be one of Sondheim's most ambitious and sophisticated efforts. The original Broadway production of "Pacific Overtures" in 1976 was staged in Kabuki style, with men playing women's parts and set changes made in full view of the audience by black-clad stagehands. It opened to mixed reviews and closed after six months, despite being nominated for ten Tony Awards. Given its unusual casting and production demands, "Pacific Overtures" remains one of Stephen Sondheim's least-performed musicals. The show is occasionally staged by opera companies. The cast requires an abundance of gifted male Asian actors who must play male and female parts. Women join the ensemble for only half of the last song; during the finale, after the lyric "more surprises next," 20 female actors join the cast and sing the remaining 1:42 of the show. This makes casting expensive and challenging, and thus most regional and community theaters, universities and schools are unable to produce it. 
The most recent revival in 2017 at Classic Stage Company, helmed by John Doyle and starring George Takei as The Reciter, featured a cast of only 10 people, 8 men and 2 women. It also featured a revised book by John Weidman with a running time of 90 minutes (as compared to the original 2-hour-30-minute running time). The title of the work is drawn directly from text in a letter from Commodore Perry addressed to the Emperor dated July 7, 1853: "Many of the large ships-of-war destined to visit Japan have not yet arrived in these seas, though they are hourly expected; and the undersigned, as an evidence of his friendly intentions, has brought but four of the smaller ones, designing, should it become necessary, to return to Edo in the ensuing spring with a much larger force. But it is expected that the government of your imperial majesty will render such return unnecessary, by acceding at once to the very reasonable and "pacific overtures" contained in the President's letter, and which will be further explained by the undersigned on the first fitting occasion." In addition to playing on the musical term "overture" and the geographical reference to the Pacific Ocean, there is also the irony, revealed as the story unfolds, that these "pacific overtures" to initiate commercial exploitation of the Pacific nation were backed by a none too subtle threat of force. "Pacific Overtures" previewed in Boston and ran at The Kennedy Center for a month before opening on Broadway at the Winter Garden Theatre on January 11, 1976. It closed after 193 performances on June 27, 1976. The production was directed by Harold Prince, with choreography by Patricia Birch, scenic design by Boris Aronson, costume design by Florence Klotz, and lighting design by Tharon Musser. The original cast recording was released by RCA Records and later reissued on CD. This production was nominated for 10 Tony Awards, and won Best Scenic Design (Boris Aronson) and Best Costume Design (Florence Klotz). 
The original Broadway production was filmed and broadcast on Japanese television in 1976. An off-Broadway production ran at the Promenade Theatre from October 25, 1984, for 109 performances, transferring from an earlier production at the York Theatre Company. Directed by Fran Soeder with choreography by Janet Watson, the cast featured Ernest Abuba and Kevin Gray. The European premiere was directed by Howard Lloyd-Lewis (Library Theatre, Manchester) at Wythenshawe Forum in 1986, with choreography by Paul Kerryson, who subsequently directed the show in 1993 at Leicester Haymarket Theatre. Both productions featured Mitch Sebastian in the role of Commodore Perry. A production was mounted in London by the English National Opera in 1987. The production was recorded on CD, preserving nearly the entire libretto as well as the score. A critically acclaimed 2001 Chicago Shakespeare Theater production, directed by Gary Griffin, transferred to the Donmar Warehouse in London, where it ran from June 30, 2003 until September 6, 2003 and received the 2004 Olivier Award for Outstanding Musical Production. In 2002 the New National Theatre of Tokyo presented two limited engagements of their production, which was performed in Japanese with English supertitles. The production ran at Avery Fisher Hall, Lincoln Center from July 9, 2002 through July 13, and then at the Eisenhower Theater, Kennedy Center, from September 3, 2002, through September 8. A Broadway revival by the Roundabout Theatre Company (an English-language mounting of the previous New National Theatre of Tokyo production) ran at Studio 54 from December 2, 2004, to January 30, 2005, directed by Amon Miyamoto and starring BD Wong as the Narrator and several members of the original cast. A new Broadway recording, with new (reduced) orchestrations by Jonathan Tunick, was released by PS Classics, with additional material not included on the original cast album. 
The production was nominated for four Tony Awards, including Best Revival of a Musical. The orchestrations were "scaled back" for a 7-piece orchestra. "Variety" noted that "the heavy use of traditional lutes and percussion instruments like wood blocks, chimes and drums showcases the craftsmanship behind this distinctly Japanese-flavored score." In 2017, Classic Stage Company revived "Pacific Overtures" for a limited run Off-Broadway, with a new abridged book by John Weidman and new orchestrations by Jonathan Tunick. This production was directed by current Artistic Director John Doyle and starred George Takei as the Reciter. It began previews on April 6, 2017 and opened on May 4, 2017. Originally scheduled to close on May 27, it was extended twice, and closed on June 18, 2017. This production was a "New York Times" Critic's Pick, one of "Variety"'s Top 5 NY Theater Productions of 2017, and one of "Hollywood Reporter"'s Top 10 NY Theater Productions of 2017. It also received numerous nominations from the Drama Desk, Drama League, Outer Critics Circle, and Lucille Lortel Awards. This version ran as a 90-minute one-act with a 10-member cast in modern dress; it included all the songs from the original production except "Chrysanthemum Tea" and eliminated the instrumental/dance number "Lion Dance". Conceived as a Japanese playwright's version of an American musical about American influences on Japan, "Pacific Overtures" opens in July 1853. Since the foreigners were expelled from the island empire, explains the Reciter, elsewhere wars are fought and machines are rumbling, but in Nippon they plant rice, exchange bows and enjoy peace and serenity, and there has been nothing to threaten the changeless cycle of their days ("The Advantages of Floating in the Middle of the Sea"). But President Millard Fillmore, determined to open up trade with Japan, has sent Commodore Matthew C. Perry across the Pacific. 
To the consternation of Lord Abe and the Shogun's other Councillors, the stirrings of trouble begin with the appearance of Manjiro, a fisherman who had been lost at sea and rescued by Americans. He has returned to Japan and now attempts to warn the authorities of the approaching warships, but is instead arrested for consorting with foreigners. A minor samurai, Kayama Yezaemon, is appointed Prefect of Police at Uraga to drive the Americans away, news which leaves his wife Tamate grief-stricken, since Kayama will certainly fail and both will then have to commit "seppuku". As he leaves, she expresses her feelings in dance as two Observers describe the scene and sing her thoughts and words ("There Is No Other Way"). As a Fisherman, a Thief, and other locals relate the sight of the "Four Black Dragons" roaring through the sea, an extravagant Oriental caricature of the USS Powhatan pulls into harbor. Kayama is sent to meet with the Americans but he is laughed at and rejected as not being important enough. He enlists the aid of Manjiro, the only man in Japan who has dealt with Americans; disguised as a great lord, Manjiro is able to get an answer from them: Commodore Perry must meet the Shogun within six days or else he will shell the city. Facing this ultimatum, the Shogun refuses to commit himself to an answer and takes to his bed. Exasperated by his indecision and procrastination, his Mother, with elaborate courtesy, poisons him ("Chrysanthemum Tea"). Kayama devises a plan by which the Americans can be received without technically setting foot on Japanese soil, thanks to a covering of tatami mats and a raised Treaty House, for which he is made Governor of Uraga. He and Manjiro set off for Uraga, forging a bond of friendship through the exchange of "Poems". Kayama has saved Japan, but it is too late to save Tamate: when Kayama arrives at his home, he finds that she is dead, having committed "seppuku" after having received no news of Kayama for many days. 
Already events are moving beyond the control of the old order: the two men pass a Madam instructing her inexperienced Oiran girls in the art of seduction as they prepare for the arrival of the foreign devils ("Welcome to Kanagawa"). Commodore Perry and his men disembark and, on their "March to the Treaty House", demonstrate their goodwill by offering such gifts as two bags of Irish potatoes and a copy of Owen's "Geology of Minnesota". The negotiations themselves are observed through the memories of three who were there: a warrior hidden beneath the floor of the Treaty House who could hear the debates, a young boy who could see the action from his perch in the tree outside, and the boy as an old man recalling that without "Someone In a Tree", a silent watcher, history is incomplete. Initially, it seems as if Kayama has won; the Americans depart in peace. But the barbarian figure of Commodore Perry leaps out to perform a traditional Kabuki "Lion Dance", which ends as a strutting, triumphalist, all-American cakewalk. The child emperor (portrayed by a puppet manipulated by his advisors) reacts with pleasure to the departure of the Americans, promoting Lord Abe to Shogun, confirming Kayama as Governor of Uraga and raising Manjiro to the rank of Samurai. The crisis appears to have passed, but to the displeasure of Lord Abe the Americans return to request formal trading arrangements. To the tune of a Sousa march, an American ambassador bids "Please Hello" to Japan and is followed by a Gilbertian British ambassador, a clog-dancing Dutchman, a gloomy Russian and a dandified Frenchman, all vying for access to Japan's markets. With the appearance of this new group of westerners, the faction of the Lords of the South grows restless. They send a politically charged gift to the Emperor, a storyteller who tells a vivid, allegorical tale of a brave young emperor who frees himself from his cowardly Shogun. Fifteen years pass as Kayama and Manjiro dress themselves for tea. 
As Manjiro continues to dress in traditional robes for the tea ceremony, Kayama gradually adopts the manners, culture and dress of the newcomers, proudly displaying a new pocket watch, cutaway coat and "A Bowler Hat". Although Kayama, as stated in his reports to the Shogun, manages to reach an "understanding" with the Western merchants and diplomats, tensions abound between the Japanese and the "barbarians". Three British sailors on shore leave mistake the daughter of a samurai for a geisha ("Pretty Lady"). Though their approach is initially gentle, they grow more persistent to the point where they offer her money; the girl cries for help and her father kills one of the confused sailors. Kayama and Abe travel to the Emperor's court discussing the situation. While on the road, their party is attacked by cloaked assassins sent by the Lords of the South and Abe is assassinated. Kayama is horrified to discover that one of the assassins is his former friend, Manjiro; they fight and Kayama is killed. In the ensuing turmoil, the puppet Emperor seizes real power and vows that Japan will modernize itself. As the country moves from one innovation to the "Next!", the Imperial robes are removed layer by layer to show the Reciter in modern dress. Contemporary Japan - the country of Toyota, Seiko, air and water pollution and market domination - assembles itself around him and its accomplishments are extolled. "Nippon. The Floating Kingdom. There was a time when foreigners were not welcome here. But that was long ago..." he says, "Welcome to Japan."

Proscenium Servants, Sailors and Townspeople were played by Kenneth S. Eiland, Timm Fujili, Joey Ginza, Patrick Kinser-Lau, Diane Lam, Tony Marinyo, Kevin Maung, Kim Miyori, Dingo Secretario, Freda Foh Shen, Mark Hsu Seyers, Gedde Watanabe, Leslie Watanabe, and Ricardo Tobia.

"Someone in a Tree", where two witnesses describe negotiations between the Japanese and Americans, is Sondheim's favorite song out of everything he has written. 
"A Bowler Hat" presents the show's theme, as a samurai gradually becomes more Westernized as he progressively adopts the habits and affectations of the foreigners he is meant to supervise. "Pretty Lady" is a contrapuntal trio of three British sailors who have mistaken a young girl for a geisha and are attempting to woo her. This is perhaps the musical fusion highlight of the show: the orchestra lays down descending parallel 4ths while the singers use a counterpoint form established during the Western Renaissance; again the chord progression is often IV to I, again eschewing pentatonics. "The New York Times" review of the original 1976 production said "The lyrics are totally Western and—as is the custom with Mr. Sondheim—devilish, wittily and delightfully clever. Mr. Sondheim is the most remarkable man in the Broadway musical today—and here he shows it victoriously...Mr. Prince's staging uses all the familiar Kabuki tricks—often with voices screeching in the air like lonely sea birds—and stylizations with screens and things, and stagehands all masked in black to make them invisible to the audience. Like choreography, the direction is designed to meld Kabuki with Western forms...the attempt is so bold and the achievement so fascinating, that its obvious faults demand to be overlooked. It tries to soar—sometimes it only floats, sometimes it actually sinks—but it tries to soar. And the music and lyrics are as pretty and as well-formed as a bonsai tree. "Pacific Overtures" is very, very different." Walter Kerr's article in "The New York Times" on the original 1976 production said "But no amount of performing, or of incidental charm, can salvage "Pacific Overtures." The occasion is essentially dull and immobile because we are never properly placed in it, drawn neither East nor West, given no specific emotional or cultural bearings." Ruth Mitchell, assistant to Mr. 
Prince, said in an interview with WPIX that a sense of not belonging was intentional as that was the very point of the show. Frank Rich, reviewing the 1984 revival for "The New York Times" stated that "the show attempts an ironic marriage of Broadway and Oriental idioms in its staging, its storytelling techniques and, most of all, in its haunting Stephen Sondheim songs. It's a shotgun marriage, to be sure - with results that are variously sophisticated and simplistic, beautiful and vulgar. But if "Pacific Overtures" is never going to be anyone's favorite Sondheim musical, it is a far more forceful and enjoyable evening at the Promenade than it was eight years ago at the Winter Garden...Many of the songs are brilliant, self-contained playlets. In "Four Black Dragons" various peasants describe the arrival of the American ships with escalating panic, until finally the nightmarish event does seem to be, as claimed, "the end of the world."..."Someone in a Tree," is a compact "Rashomon" - and as fine as anything Mr. Sondheim has written...The single Act II triumph, "Bowler Hat," could well be a V. S. Naipaul tale set to music and illustrated with spare Japanese brushstrokes..."Bowler Hat" delivers the point of "Pacific Overtures" so artfully that the rest of Act II seems superfluous." The 2004 production was not as well received. It was based on a critically praised Japanese language production by director Amon Miyamoto. Ben Brantley, reviewing for "The New York Times" wrote: "Now Mr. Miyamoto and "Pacific Overtures" have returned with an English-speaking, predominantly Asian-American cast, which makes distracting supertitles unnecessary. The show's sets, costumes and governing concept remain more or less the same. 
Yet unlike the New National Theater of Tokyo production, which was remarkable for its conviction and cohesiveness, this latest incarnation from the Roundabout Theater Company has the bleary, disoriented quality of someone suffering from jet lag after a sleepless trans-Pacific flight. Something has definitely been lost in the retranslation." Of the cast, Brantley wrote, "Even as they sing sweetly and smile engagingly, they appear to be asking themselves, "What am I doing here?""
https://en.wikipedia.org/wiki?curid=24965
Peter Stuyvesant Peter Stuyvesant (in Dutch also "Pieter" and "Petrus" Stuyvesant; 1592–1672) served as the last Dutch director-general of the colony of New Netherland from 1647 until it was ceded provisionally to the English in 1664, after which it was renamed New York. He was a major figure in the early history of New York City and his name has been given to various landmarks and points of interest throughout the city (e.g. Stuyvesant High School, Stuyvesant Town, the Bedford–Stuyvesant neighborhood, etc.). Stuyvesant's accomplishments as director-general included a great expansion of the settlement of New Amsterdam beyond the southern tip of Manhattan. Among the projects built by Stuyvesant's administration were the protective wall on Wall Street, the canal that became Broad Street, and Broadway. Stuyvesant, himself a member of the Dutch Reformed Church, opposed religious pluralism and came into conflict with Lutherans, Jews, Roman Catholics and Quakers as they attempted to build places of worship in the city and practice their faiths. He was particularly antisemitic, loathing Jews not merely on religious grounds but also on racial ones. Peter Stuyvesant was born in 1592 in Peperga, Friesland, in the Netherlands, to Balthasar Stuyvesant, a Reformed Calvinist minister, and Margaretha Hardenstein. He grew up in Peperga, Scherpenzeel, and Berlikum. At the age of 20, Stuyvesant went to the University of Franeker, where he studied languages and philosophy, but several years later he was expelled from the school after he seduced the daughter of his landlord. He was then sent to Amsterdam by his father, where Stuyvesant – now using the Latinized version of his first name, "Petrus", to indicate that he had university schooling – joined the Dutch West India Company. 
In 1630, the company assigned him to be their commercial agent on a small island just off of Brazil, Fernando de Noronha, and then five years later transferred him to the nearby Brazilian state of Pernambuco. In 1638, he was moved again, this time to the colony of Curaçao, the main Dutch naval base in the West Indies, where, just four years later, at barely 30 years old, he became the acting governor of that colony, as well as Aruba and Bonaire, a position he held until 1644. In April 1644, he coordinated and led an attack on the island of Saint Martin – which the Spanish had taken from the Dutch, and had almost been recaptured by them in 1625 – with an armada of 12 ships carrying more than a thousand men. He invested the island when the Spanish would not surrender, but was not successful in preventing them from getting supplies from Puerto Rico. A cannonball crushed Stuyvesant's right leg, and it was amputated just below the knee. Still in severe pain, he called off the siege a month later. Stuyvesant returned to the Netherlands for convalescence, where his right leg was replaced with a wooden peg. Stuyvesant was given the nicknames "Peg Leg Pete" and "Old Silver Nails" because he used a wooden stick studded with silver nails as a prosthesis. The West India Company saw the loss of Stuyvesant's leg as a "Roman" sacrifice, while Stuyvesant himself saw the fact that he did not die from his injury as a sign that God was saving him to do great things. A year later, in May 1645, he was selected by the Company to replace Willem Kieft as Director-General of the New Netherland colony, including New Amsterdam, the site of present-day New York City. Stuyvesant had to wait for his appointment to be confirmed by the Dutch States-General. During that time he married Judith Bayard, who was the daughter of a Huguenot minister, and hailed from Breda. 
Together, they left Amsterdam in December 1646, and, after stopping at Curaçao, arrived in New Amsterdam in May 1647. Kieft's administration had left the colony in terrible condition. Only a small number of villages remained after Kieft's wars, and many of the inhabitants had been driven to return home, leaving only 250 to 300 men able to carry arms. Kieft himself had accumulated over 4,000 guilders during his term in office, and had become an alcoholic. Certain that putting New Netherland to rights was the work for which God had saved him, Stuyvesant began the task of rebuilding the physical and moral state of the colony, returning it to being the kind of well-run place that the Dutch preferred. He told the people "I shall govern you as a father his children." In September 1647, Stuyvesant appointed an advisory council of nine men as representatives of the colonists. In 1648, a conflict started between him and Brant Aertzsz van Slechtenhorst, the commissary of the patroonship Rensselaerwijck, which surrounded Fort Orange (present-day Albany). Stuyvesant claimed he had power over Rensselaerwijck, despite special privileges granted to Kiliaen van Rensselaer in the patroonship regulations of 1629. When Van Slechtenhorst refused, Stuyvesant sent a group of soldiers to enforce his orders. The controversy that followed resulted in the founding of the new settlement, Beverwijck. Stuyvesant became involved in a dispute with Theophilus Eaton, the governor of the English New Haven Colony, over the border of the two colonies. In September 1650, a meeting of the commissioners on boundaries took place in Hartford, Connecticut, resulting in the Treaty of Hartford, which settled the border between New Netherland and the English colonies to the north and east. The border was arranged to the dissatisfaction of the Nine Men, who declared that "the governor had ceded away enough territory to found fifty colonies each fifty miles square." 
Stuyvesant then threatened to dissolve the council. A new plan of municipal government was arranged in the Netherlands, and the name "New Amsterdam" was officially declared on 2 February 1653. Stuyvesant made a speech for the occasion, saying that his authority would remain undiminished. Stuyvesant was then ordered to the Netherlands, but the order was soon revoked under pressure from the States of Holland and the city of Amsterdam. Stuyvesant prepared against an attack by ordering the citizens to dig a ditch from the North River to the East River and to erect a fortification. In 1653, a convention of two deputies from each village in New Netherland demanded reforms, and Stuyvesant commanded that assembly to disperse, saying: "We derive our authority from God and the company, not from a few ignorant subjects." In the summer of 1655, he sailed down the Delaware River with a fleet of seven vessels and about 700 men and took possession of the colony of New Sweden, which was renamed "New Amstel." In his absence, Pavonia was attacked by Native Americans during the "Peach War" on 15 September 1655. In 1657, the directors of the Dutch West India Company wrote to Stuyvesant that they could not send him all the tradesmen he had requested, and that he would have to purchase slaves to supplement those he did receive. During the colonial era, New York City became both a site from which fugitives fled bondage and a destination for runaways. The colonies closest to New Netherland, Connecticut and Maryland, encouraged Dutch slaves to escape and refused to return them. In 1650, Stuyvesant threatened to offer freedom to Maryland slaves unless that colony stopped sheltering runaways from the Dutch outpost. In 1660, Stuyvesant was quoted as saying that "Nothing is of greater importance than the early instruction of youth." 
In 1661, New Amsterdam had one grammar school, two free elementary schools, and 28 licensed schoolmasters. In 1657, Stuyvesant, who did not tolerate full religious freedom in the colony and was strongly committed to the supremacy of the Dutch Reformed Church, refused to allow Lutherans the right to organize a church. When he also issued an ordinance forbidding them from worshiping in their own homes, the directors of the Dutch West India Company, three of whom were Lutherans, told him to rescind the order and allow private gatherings of Lutherans. Freedom of religion was further tested when Stuyvesant refused to allow Jewish refugees from Dutch Brazil, who lacked passports, to settle permanently in New Amsterdam and join the handful of existing Jewish traders who held passports from Amsterdam. Stuyvesant attempted to have the Jews "in a friendly way to depart" the colony. As he wrote to the Amsterdam Chamber of the Dutch West India Company in 1654, he hoped that "the deceitful race, — such hateful enemies and blasphemers of the name of Christ, — be not allowed to further infect and trouble this new colony." He referred to Jews as a "repugnant race" and "usurers", and was concerned that "Jewish settlers should not be granted the same liberties enjoyed by Jews in Holland, lest members of other persecuted minority groups, such as Roman Catholics, be attracted to the colony." Stuyvesant's decision was again rescinded after pressure from the directors of the Company. As a result, Jewish immigrants were allowed to stay in the colony as long as their community was self-supporting; however, Stuyvesant and the company would not allow them to build a synagogue, forcing them to worship instead in a private house. In 1657, the Quakers, who were newly arrived in the colony, drew his attention. He ordered the public torture of Robert Hodgson, a 23-year-old Quaker convert who had become an influential preacher. 
Stuyvesant then made an ordinance, punishable by fine and imprisonment, against anyone found guilty of harboring Quakers. That action led to a protest from the citizens of Flushing, which came to be known as the Flushing Remonstrance, considered by some historians a precursor to the United States Constitution's provision on freedom of religion in the Bill of Rights. In 1664, King Charles II of England ceded to his brother, the Duke of York, later King James II, a large tract of land that included all of New Netherland. Four English ships bearing 450 men, commanded by Richard Nicolls, seized the Dutch colony. On 30 August 1664, George Cartwright sent the governor a letter demanding surrender, promising "life, estate, and liberty to all who would submit to the king's authority." On 6 September 1664, Stuyvesant sent Johannes de Decker, a lawyer for the West India Company, and five others to sign the Articles of Capitulation. Nicolls was declared governor, and the city was renamed New York. Stuyvesant obtained civil rights and freedom of religion for the colonists in the Articles of Capitulation. The Dutch settlers mainly belonged to the Dutch Reformed Church, a Calvinist denomination holding to the Three Forms of Unity (Belgic Confession, Heidelberg Catechism, Canons of Dordt); the English were Anglicans, holding to the 39 Articles, a Protestant confession, and governed by bishops. In 1665, Stuyvesant went to the Netherlands to report on his term as governor. On his return to the colony, he spent the remainder of his life on his sixty-two-acre farm outside the city, called the Great Bouwerie, beyond which stretched the woods and swamps of the village of Nieuw Haarlem. A pear tree that he reputedly brought from the Netherlands in 1647 remained at the corner of Thirteenth Street and Third Avenue, bearing fruit almost to the last, until it was destroyed by a storm in 1867. The farmhouse was destroyed by fire in 1777. He also built an executive mansion of stone called Whitehall. 
In 1645, Stuyvesant married Judith Bayard (–1687) of the Bayard family. Her brother, Samuel Bayard, was the husband of Stuyvesant's sister, Anna Stuyvesant. Petrus and Judith had two sons together. Stuyvesant died in August 1672 and his body was entombed in the east wall of St. Mark's Church in-the-Bowery, which sits on the site of Stuyvesant's family chapel. Stuyvesant and his family were large landowners in the northeastern portion of New Amsterdam, and the Stuyvesant name is currently associated with four places in Manhattan's East Side, near present-day Gramercy Park: the Stuyvesant Town housing complex; the site of the original Stuyvesant High School, still marked "Stuyvesant" on its front face, on East 15th Street near First Avenue; Stuyvesant Square, a park in the area; and the Stuyvesant Apartments on East 18th Street. The new Stuyvesant High School, a premier public high school, is on Chambers Street near the World Trade Center. His farm, called the "Bouwerij" – the seventeenth-century Dutch word for "farm" – was the source of the name of the Manhattan street and surrounding neighborhood known as "The Bowery". The contemporary neighborhood of Bedford–Stuyvesant, Brooklyn, which includes Stuyvesant Heights, also retains his name. Also named after him are the hamlets of Stuyvesant and Stuyvesant Falls in Columbia County, New York, where descendants of the early Dutch settlers still live and where the Dutch Reformed Church remains an important part of the community, as well as shopping centers, yacht clubs, and other buildings and facilities throughout the area of the former Dutch colony. A statue of Stuyvesant by J. Massey Rhind at Bergen Square in Jersey City was dedicated in 1915 to mark the 250th anniversary of the Dutch settlement there. A World War II Liberty Ship was also named in his honor. 
The last acknowledged direct descendant of Peter Stuyvesant to bear his surname was Augustus van Horne Stuyvesant, Jr., who died a bachelor in 1953 at the age of 83 in his mansion at 2 East 79th Street. Rutherfurd Stuyvesant, the 19th-century New York developer, and his descendants are also descended from Peter Stuyvesant; Rutherfurd Stuyvesant's name was changed from Stuyvesant Rutherfurd in 1863 to satisfy the terms of the 1847 will of Peter Gerard Stuyvesant. His descendants include:
https://en.wikipedia.org/wiki?curid=24968
Phish Phish is an American rock band that formed in Burlington, Vermont, in 1983. The band is known for musical improvisation, extended jams, blending of genres, and a dedicated fan base. The band consists of guitarist Trey Anastasio, bassist Mike Gordon, drummer Jon Fishman, and keyboardist Page McConnell, all of whom perform vocals, with Anastasio being the primary lead vocalist. The band was formed by Anastasio, Gordon, Fishman and guitarist Jeff Holdsworth, who were joined by McConnell in 1985. Holdsworth departed the band in 1986, and the lineup has remained stable since. Phish performed together for 15 years before beginning a two-year hiatus in October 2000. The band regrouped in late 2002, but disbanded again in August 2004. They reunited in March 2009 for a series of three consecutive concerts at Hampton Coliseum in Hampton, Virginia, and have since resumed performing regularly. Phish's music blends elements of a wide variety of genres, including funk, progressive rock, psychedelic rock, folk, country, jazz, blues, bluegrass, and pop. The band was part of a movement of improvisational rock groups, inspired by the Grateful Dead and colloquially known as "jam bands", that gained considerable popularity as touring concert acts in the 1990s. Phish has developed a large and dedicated following by word of mouth, the exchange of live recordings, and selling over 8 million albums and DVDs in the United States. Phish was formed at the University of Vermont (UVM) in 1983 by guitarists Trey Anastasio and Jeff Holdsworth, bassist Mike Gordon, and drummer Jon Fishman. Anastasio and Fishman had met that October, after Anastasio overheard Fishman playing drums in his dormitory room, and asked if he and Holdsworth could jam with him. Gordon met the trio shortly thereafter, after he answered a want-ad for a bass guitarist that Anastasio had posted around the university. 
The new group performed their first concert at Harris Millis Cafeteria at the University of Vermont on December 2, 1983, where they played a set of classic rock covers, including two songs by the Grateful Dead. The band performed one more concert in 1983, and then did not perform again for nearly a year, owing to Anastasio's suspension from the university following a prank he had pulled with a friend. Anastasio returned to his hometown of Princeton, New Jersey following the prank, and briefly attended Mercer County Community College. While there, he reconnected with his childhood friend Tom Marshall, and the pair began a songwriting collaboration and recorded material that would appear on the "Bivouac Jaun" demo tape. Marshall and Anastasio have subsequently composed the majority of Phish's original songs throughout their career. Anastasio returned to Burlington in late 1984 and resumed performing with Gordon, Holdsworth and Fishman. The quartet eventually named themselves Phish, and they played their first concert under that name on October 23 of that year. The band was named both after Fishman, whose nickname is "Fish," and "phshhhh", an onomatopoeia of the sound of a brush on a snare drum. Anastasio designed the band's logo, which featured the group's name inside a stylized fish. The band collaborated with percussionist Marc Daubert, a friend of Anastasio's, in the fall of 1984; Daubert ceased performing with the band in early 1985. Keyboardist Page McConnell met Phish in early 1985, when he arranged for them to play a spring concert at Goddard College, the small university he attended in Plainfield, Vermont. He began performing with the band as a guest shortly thereafter, and made his live debut during the third set of their May 3, 1985 concert at UVM's Redstone Campus. 
In the summer of 1985, Phish went on a short hiatus while Anastasio and Fishman vacationed in Europe; during this time, McConnell offered to join the band permanently, and moved to Burlington to learn their repertoire from Gordon. McConnell officially joined Phish as a full-time member in September 1985. Phish performed with a five-piece lineup for about six months after McConnell joined, a period which ended when Holdsworth quit the group in March 1986 following a religious conversion. Holdsworth's departure solidified the band's "Trey, Page, Mike, and Fish" lineup, which remains in place to this day. With the encouragement of McConnell (who received $50 for each transferee), Anastasio and Fishman relocated in mid-1986 to Goddard College. Phish distributed at least six different experimental self-titled cassettes during this era, including "The White Tape". This first studio recording was circulated in two variations: the first, mixed in a dorm room as late as 1985, received a wider distribution than the second, a studio remix of the original four tracks made c. 1987. The older version was officially released under the title "Phish" in August 1998. Jesse Jarnow's book "Heads" details much of the band's early years at Goddard College, including their early relationship with fellow Goddard students Richard "Nancy" Wright and Jim Pollock. Pollock and Wright were musical collaborators who made experimental recordings on multi-track cassettes, and had been introduced to Phish through McConnell, who co-hosted a radio program on WGDR with Pollock. Phish adopted a number of Nancy's songs into their own set, including "Halley's Comet", "I Didn't Know", and "Dear Mrs. Reagan", the last of which was written by Nancy and Pollock. In "Heads", Jarnow argues that Wright and his music were highly influential on Phish's early style and experimental sound. 
Wright amicably ended his association with Phish in 1989, but Pollock has continued to collaborate with the band over the years, designing some of their album covers and concert posters. The band has maintained a strong identification with their hometown of Burlington, Vermont. By 1985, the group had encountered Burlington luthier Paul Languedoc, who would eventually design four guitars for Anastasio and two basses for Gordon. In October 1986, Languedoc began working as their sound engineer. Since then, he has built instruments exclusively for the two, and his designs and traditional wood choices have given Phish a unique instrumental identity. During the late 1980s, Phish began to play regularly at Nectar's bar and restaurant in downtown Burlington, performing dozens of concerts across multiple residencies through March 1989. The band's 1992 album "A Picture of Nectar" was named in honor of the bar's owner, Nectar Rorris, and its cover features his face superimposed onto an orange. As his senior project for Goddard College, Anastasio penned "The Man Who Stepped into Yesterday", a nine-song progressive rock concept album that became Phish's second studio experiment. Recorded between 1987 and 1988, it was submitted in July 1988, accompanied by a written thesis. The song cycle that developed from the project – known as Gamehendge – grew to include an additional eight songs. The band performed the suite in concert on five occasions: in 1988, 1991, 1993, and twice in 1994, without ever replicating the same song list. "The Man Who Stepped Into Yesterday" has never received an official release, but a bootleg tape has circulated for decades, and songs such as "Wilson" and "The Lizards" remain concert staples for the band. Beginning in the spring of 1988, members of the band began practicing in earnest, sometimes locking themselves in a room and jamming for hours on end. One such jam took place at Anastasio's apartment, and a second at Paul Languedoc's house in August 1989. 
They called these jam sessions "Oh Kee Pa Ceremonies", a name Anastasio chose after seeing the films "A Man Called Horse" and "Modern Primitives", which depict fictional versions of a Mandan Native American ceremony. In July 1988, the band performed their first concerts outside of the northeastern United States, when they embarked on a seven-date tour in Colorado. These shows are excerpted on their 2006 live compilation "Colorado '88". On January 26, 1989, Phish played the Paradise Rock Club in Boston. The owners of the club had never heard of Phish and refused to book them, so the band rented the club for the night. The show sold out thanks to the caravan of fans who had traveled to see the band. The concert was Phish's breakthrough on the northeastern regional music circuit, and the band began to book concerts at other large rock clubs, theaters, and small auditoriums throughout the area, such as the Somerville Theatre, Worcester Memorial Auditorium, and Wetlands Preserve. That spring, the band self-released their debut full-length studio album, "Junta", and sold copies on cassette tape at their concerts. The album includes a studio recording of the epic "You Enjoy Myself", which is considered to be the band's signature song. Later in 1989, the band hired Chris Kuroda as their lighting director. Kuroda subsequently became well known for his artistic light shows at the group's concerts. By late 1990, Phish's concerts were becoming increasingly intricate, with the band making a consistent effort to involve the audience in the performance. Through a special "secret language", the audience would react in a certain manner based on a particular musical cue from the band. For instance, if Anastasio "teased" a motif from "The Simpsons" theme song, the audience would yell "D'oh!" in imitation of Homer Simpson. 
In 1992, Phish introduced a collaboration between audience and band called the "Big Ball Jam" in which each band member would throw a large beach ball into the audience and play a note each time his ball was hit. In so doing, the audience was helping to create an original composition. On occasion, performances of "You Enjoy Myself" involved Gordon and Anastasio performing synchronized maneuvers and jumping on mini-trampolines while simultaneously playing their instruments. Fishman would also regularly step out from behind his drum kit during concerts to sing cover songs, which were often punctuated by him playing an Electrolux vacuum cleaner like an instrument. The band released their second album, "Lawn Boy", in September 1990 on Absolute A Go Go, a small independent label that had a distribution deal with the larger Rough Trade Records. The album had been recorded the previous year, after the band had won studio time at engineer Dan Archer's Archer Studios when they came in first place at an April 1989 battle of the bands competition in Burlington. Phish, along with Bob Dylan, the Grateful Dead, and the Beatles, was one of the first bands to have a Usenet newsgroup, rec.music.phish, which launched in 1991. Aware of the band's growing popularity, Elektra Records signed them that year after they were recommended to the record label by A&R representative Sue Drew. In the summer of 1991, the band embarked on a 14-date tour of the eastern United States accompanied by a three-piece horn section dubbed the Giant Country Horns. In August of that year, Phish played an outdoor concert at their friend Amy Skelton's horse farm in Auburn, Maine that acted as a prototype for their later all-day festival events. In 1992, the band released their third studio album, "A Picture of Nectar", their first release for the major label Elektra. Subsequently, the label also reissued the band's first two albums. Later in 1992, Phish participated in the first annual H.O.R.D.E. 
festival, which provided them with their first national tour of major amphitheaters. The lineup, among others, included Phish, Blues Traveler, the Spin Doctors, and Widespread Panic. That summer, the band toured Europe with the Violent Femmes and later toured Europe and the U.S. with Santana. Throughout the latter tour, Carlos Santana regularly invited some or all of the members of Phish to jam with his band during their headlining performances. The band ended 1992 with a New Year's Eve performance at the Matthews Arena in Boston, Massachusetts, a performance that was simulcast throughout the Boston area by radio station WBCN. The concert was filled with several new "secret language" cues they had taught their audience in order to deliberately confuse radio listeners. Phish began headlining major amphitheaters in the summer of 1993. That year, the group released their fourth album, "Rift", a concept album which featured a cover painted by David Welker that referenced almost all of the songs on the record. The album was the band's first to appear on the "Billboard 200" album chart, debuting at #51 in February 1993. In March 1994, the band released their fifth studio album "Hoist." The album featured an array of guest performers, including country singer Alison Krauss, banjoist Béla Fleck, former Sly & The Family Stone member Rose Stone, actor Jonathan Frakes, and the horn section of R&B group Tower of Power. To promote the album, Gordon directed the band's only official music video, for its first single "Down with Disease". The clip gained some MTV airplay starting in June of that year. "Down with Disease" became a minor hit on rock radio in the United States, and was the band's first song to appear on a "Billboard" music chart when it peaked at #33 on the magazine's Hot Mainstream Rock Tracks chart that summer. 
To further promote "Hoist", the band released an experimental short-subject documentary called "Tracking", also directed by Gordon, which depicted the recording sessions for the album. Foreshadowing their future tradition of festivals, Phish coupled camping with their 1994 summer tour finale at Sugarbush North in Warren, Vermont, a show eventually released as "Live Phish Volume 2". On Halloween of that year, the group promised to don a fan-selected "musical costume" by playing an entire album by another band. After an extensive mail-based poll, Phish performed the Beatles' White Album as the second of their three sets at the Glens Falls Civic Center in upstate New York. The "musical costume" concept subsequently became a recurring part of Phish's tours, with the band playing a different album whenever they had a concert scheduled for Halloween night. In October 1994, "Crimes of the Mind", the debut album by Anastasio's friend and collaborator Steve "The Dude of Life" Pollak, was released by Elektra Records. The album, which had been recorded in 1991, was billed to "The Dude of Life and Phish" and features all four members of Phish acting as Pollak's backing band. On December 30, 1994, the band made their first appearance on national network television when they performed "Chalk Dust Torture" on "Late Night with David Letterman". The band went on to appear on the program seven more times before David Letterman's retirement as host in 2015. For their 1994 New Year's Run, Phish played the Civic Centers in Philadelphia and Providence as well as sold-out shows at Madison Square Garden and Boston Garden, which marked their debut performances at both venues. For the December 31 show at the Boston Garden, the band rode around the arena in a float shaped like a hot dog. The stunt was reprised at their 1999 New Year's Eve concert before the hot dog was donated to the Rock and Roll Hall of Fame. 
Following the death of Grateful Dead frontman Jerry Garcia in the summer of 1995 and the appearance of "Down with Disease" on "Beavis and Butt-Head", the band experienced a surge in the growth of their fan base and an increased awareness in popular culture. In their tradition of playing a well-known album by another band for Halloween, Phish contracted a full horn section for their performance of The Who's "Quadrophenia" in 1995. Their first live album, "A Live One", was released during the summer of 1995 and featured selections from various concerts on their 1994 winter tour. "A Live One" became Phish's first RIAA-certified gold album in November 1995. In 1997, it became the band's first platinum album, certified for sales of 1 million copies in the United States, and remains their best-selling album to date. In the fall of 1995, the band challenged its audience to two games of travelling chess. Each show on the tour featured a pair of moves: the band took its turn either at the beginning of or during the first set, and the audience was invited to gather at the Greenpeace table in the venue's lobby during the setbreak to determine its move. The audience conceded the first game on November 15 in Florida, and the band conceded the second at its New Year's Eve concert at Madison Square Garden, leaving the final score tied at 1–1. The band's concerts from 1995 are held in high regard by their fans, with Parke Puterbaugh noting in "Phish: The Biography" that the year "Ended with what many fans consider Phish's greatest tour (fall 1995), greatest month of touring (December 1995) and one of their greatest single shows." Following an appearance at the New Orleans Jazz & Heritage Festival in April 1996, the band spent the summer of that year opening for Santana on their European tour. 
In August 1996, the band held their first festival, The Clifford Ball, at the decommissioned Plattsburgh Air Force Base on the New York side of Lake Champlain. The festival attracted 70,000 attendees, making it both Phish's biggest concert crowd to that point and the largest single concert by attendance in the United States in 1996. Phish retreated to their Vermont recording studio and recorded hours and hours of improvisations, sometimes overlaying them on one another, and included some of the result on the second half of their sixth studio album "Billy Breathes", which they released in the fall of 1996. Alongside traditional rock-based crescendos, the album has more acoustic guitar than their previous records, and was regarded by the band and some fans as their crowning studio achievement. The album's first single, "Free", peaked at No. 24 on the Billboard Hot Modern Rock Tracks chart and No. 11 on the Mainstream Rock Tracks chart, and was their most successful song on both charts. By 1997, Phish's concert improvisational ventures were developing into a new funk-inspired jamming style. Vermont-based ice cream conglomerate Ben & Jerry's launched "Phish Food" that year. The band officially licensed their name for use with the product, the only time they have ever allowed a third-party company to do so, and were directly involved with the creation of the flavor. Proceeds from the flavor are donated to the band's non-profit charity The WaterWheel Foundation, which raises funds for the preservation of Vermont's Lake Champlain. On August 8, 1997, Phish webcast one of their concerts live over the internet for the first time. On August 16 and 17, 1997, Phish held their second festival, The Great Went, over two days at the Loring Air Force Base in Limestone, Maine, near the Canada–United States border. 
A version of "Bathtub Gin" performed on the festival's second night is considered to be one of the best improvisational live performances of the band's career. In October 1997, the band released their second live album, "Slip Stitch and Pass", which featured selections from their March 1997 concert at the Markthalle Hamburg in Hamburg, Germany. Following the Great Went, the band embarked on a fall tour that was dubbed by fans the "Phish Destroys America" tour, after a 1970s kung fu-inspired poster for the opening date in Las Vegas. The 21-date tour is considered one of the group's most popular and acclaimed, and several of its concerts were later officially released on live album sets such as "Live Phish Volume 11" in 2002. In April 1998, the band embarked on the Island Tour, a four-night run comprising two shows at the Nassau Coliseum in Uniondale, New York, on Long Island, and two at the Providence Civic Center in Providence, Rhode Island. The four concerts are highly regarded by fans due to the band's exploration of a jazz-funk musical style they had been playing for the previous year, which Anastasio dubbed "cowfunk". The band performed the tour in the middle of studio sessions for their seventh album, and were inspired by the quality of their performances to further incorporate the cowfunk style into subsequent sessions. The resulting album, "The Story of the Ghost", was released in October 1998. The album's first single, "Birds of a Feather", which had been premiered on the Island Tour, became a #14 hit on Billboard's Adult Alternative Songs chart. To promote "The Story of the Ghost", Phish performed several songs from the album on the public television music show "Sessions at West 54th" in October 1998, and were interviewed for the program by its host, David Byrne of Talking Heads. In the summer of 1998, the band held Lemonwheel, their second festival at Loring Air Force Base in Maine. The two-day event attracted 60,000 attendees. 
The band played another summer festival in 1999, called Camp Oswego and held at the Oswego County Airport in Volney, New York. Unlike other Phish festivals, Camp Oswego featured a prominent second stage of additional performers aside from Phish, including Del McCoury, The Slip and Ozomatli. In July 1999, the band released an album of improvisational instrumentals titled "The Siket Disc". The band followed that release with "Hampton Comes Alive", a six-disc box set released in November 1999, which contained the entirety of their performances on November 21 and 22, 1998 at the Hampton Coliseum in Hampton, Virginia. The set marked the first time that complete recordings of Phish concerts were officially released by Elektra Records. To celebrate the new millennium, Phish hosted a two-day outdoor festival at the Big Cypress Seminole Indian Reservation in Florida in December 1999. The festival's climactic New Year's Eve concert, referred to by fans as simply "The Show," started at 11:35 p.m. on December 31, 1999, and continued through to sunrise on January 1, 2000, approximately eight hours later. This concert has been referred to as a peak musical experience by the band. The band's performance of the song "Heavy Things" at the festival was broadcast live as part of ABC's "2000 Today" millennium coverage, giving the band their biggest television audience up to that point. 75,000 people attended the sold-out two-day festival. In 2017, "Rolling Stone" named the Big Cypress festival one of the "50 Greatest Concerts of the Last 50 Years." 2000 saw no Halloween show, no summer festival and no new full-band compositions: May's "Farmhouse" contained material dating from 1997 and original material from Anastasio's 1999 solo acoustic/electric club tour. "Heavy Things", which was released as the album's first single, became the band's only song to appear on a mainstream pop radio format, reaching #29 on "Billboard"'s Adult Top 40 chart that July. 
The song also became the band's biggest hit to date on the Adult Alternative Songs chart, reaching #2 there. In June 2000, the band embarked on a seven-date headlining tour of Japan. In July, they taped an appearance on the PBS music show "Austin City Limits", which was aired in October. In the summer of 2000, the band announced that they would take their first "extended time-out" following their upcoming fall tour. Anastasio officially announced the impending hiatus to the band's fans during their September 30 concert at the Thomas & Mack Center in Paradise, Nevada. During the tour's last concert on October 7, at the Shoreline Amphitheatre in Mountain View, California, the band made no reference to the hiatus, and left the stage without saying a word following their encore performance of "You Enjoy Myself", as The Beatles' "Let It Be" played over the venue's sound system. "Bittersweet Motel", a documentary film about the band directed by Todd Phillips, was released in August 2000, shortly before the hiatus began. The documentary captures the band's 1997 and 1998 tours, the Great Went festival and the recording of "The Story of the Ghost". Phish were nominated in two categories at the 43rd Annual Grammy Awards in 2001: Best Boxed Recording Package for "Hampton Comes Alive" and Best Instrumental Rock Performance for "First Tube" from "Farmhouse". During Phish's hiatus, Elektra Records continued to issue archival releases of the band's concerts on compact disc. Between September 2001 and May 2003, the label released 20 entries in the Live Phish Series. These multi-disc sets featured complete soundboard recordings of concerts that were particularly popular with the band and their fanbase, similar to the Grateful Dead's Dick's Picks archival series. In November 2002, the label released the band's first concert DVD, "", which featured the entirety of the September 2000 concert at which Anastasio announced the hiatus. 
In April 2002, Phish guest starred on the episode "Weekend at Burnsie's" of the animated series "The Simpsons". The episode marked the band's first appearance together, albeit as animated characters, since the hiatus began. Phish provided their own voices for the episode and performed a snippet of "Run Like an Antelope". In August 2002, Phish's manager John Paluska announced the band planned to end their hiatus that December with a New Year's Eve concert at Madison Square Garden. They also recorded "Round Room" in only four days and released it on December 10. The band had initially planned to record the new album live at the Madison Square Garden concert, but instead felt that demos they had recorded of the material were strong enough to merit release as a studio album. Four days after the release of "Round Room", the band made their only appearance as a musical guest on "Saturday Night Live", where they debuted the song "46 Days" and appeared in two comedy sketches. During their return concert on December 31, McConnell's brother was introduced as actor Tom Hanks. The impostor sang a line of the song "Wilson", prompting some media outlets to report that the actor had appeared at the concert. In order to avoid the exhaustion and pitfalls of previous years' high-paced touring, Phish played sporadically after the reunion, with tours lasting about two weeks. At the end of the 2003 summer tour, Phish returned to Limestone, Maine for It, their first festival since Big Cypress. The event drew crowds of over 60,000 fans, once again making Limestone one of the largest cities in Maine for a weekend. Highlights from the festival were released on a DVD set, also called "It", in October 2004. In November and December 2003, the band celebrated its 20th anniversary with a four-show mini-tour of shows in New York, Pennsylvania, and Massachusetts. 
The December 1 show featured a guest appearance by former member Jeff Holdsworth, who sat in with the band on five songs, including his compositions "Possum" and "Camel Walk". On May 25, 2004, Anastasio announced on the band's website that the band was breaking up after their summer tour. He wrote that he had met with the other members earlier that month to discuss the "Strong feelings I’ve been having that Phish has run its course, and that we should end it now while it’s still on a high note." By the end of the meeting, he said, "We realized that after almost twenty-one years together, we were faced with the opportunity to graciously step away in unison, as a group, united in our friendship and our feelings of gratitude." Their final album (at the time), "Undermind", was released in late spring. In the summer of 2004, the band jammed with rapper Jay-Z at one show, shot the concert film "" for broadcast in movie theaters, and performed a seven-song set atop the marquee of the Ed Sullivan Theater during the "Late Show with David Letterman" to fans who had gathered on the street. The 2004 tour finished with the band's seventh summer festival on August 14 and 15, which were billed as their final performances. The Coventry festival was named for the town in Vermont that hosted the event, which was held at the nearby Newport State Airport. The festival had a 70,000 ticket cap, and the event grounds featured a radio station and farmer's market. The two-day event was plagued by torrential rainstorms, an abundance of mud at the event grounds, and a traffic jam on Interstate 91 leading up to Newport State Airport that grew to at least 30 miles in length. On August 14, Gordon announced on local radio that no further cars would be allowed into the concert site due to the weather, but many concert-goers parked their vehicles on roadsides and hiked to the airport despite the conditions. An estimated 65,000 fans attended the two-day Coventry festival. 
Both nights of the event were also simulcast to movie theaters around the United States. The band performed what was at the time their final concert on August 15, 2004, the festival's second night. The concert ended with a final encore of the song "The Curtain (With)". After Coventry, the members of the band admitted they were disappointed with their performance at the festival. In the official book "Phish: The Biography", Anastasio expressed that "Coventry itself was a nightmare. It was emotional, but it was not like we were at our finest. I certainly wasn't". Following the break-up, the band's members remained in amicable communication with one another. All four members pursued solo careers or performed with side-projects. In 2005, Phish formed their own record label, JEMP Records, to release archival CD and DVD sets. The label's first release was "", which was released in conjunction with Rhino Records in December 2005. The album was named the 42nd greatest live album of all time by "Rolling Stone" in April 2015. The label subsequently released several other archival live box sets, including "Colorado '88" (2006), "Vegas 96" (2007), "At the Roxy" (2008) and "The Clifford Ball" (2009). In December 2006, Anastasio was arrested in Whitehall, New York for drug possession and driving while intoxicated, and was sentenced to 14 months in a drug court program. In 2007, while Anastasio was undergoing rehabilitation, the other members of Phish surprised him on his birthday with an instrumental recording they had made for him to play along with on guitar. During his rehabilitation, Anastasio said he "spent 24 hours a day thinking about nothing but Phish" and began discussing a reunion with the other members of the band. Phish received the Jammys Lifetime Achievement Award on May 7, 2008, in The Theater at Madison Square Garden. All four members attended the ceremony and gave a speech, and both McConnell and Anastasio performed, although not together. 
In response to a June 2008 rumor that Phish had reunited to record a new album, McConnell wrote a letter on the band's website updating fans on the current relations between the band's members. McConnell wrote that while the members remained friends, they were currently busy with other projects and the reunion rumors were premature. He added, "Later this year we hope to spend some time together and take a look at what possible futures we might enjoy." That September, the band played three songs at the wedding of their former tour manager Brad Sands. Later in 2008, the band reconvened at The Barn, Anastasio's farmhouse studio in Burlington, Vermont, for jamming sessions and rehearsals. On October 1, 2008, the band announced on their website that they had officially reunited, and would play their first shows in five years in March 2009 at the Hampton Coliseum in Hampton, Virginia. The three reunion concerts were held on March 6, 7, and 8, 2009, with "Fluffhead" being the first song the band played onstage at the first show. Approximately 14,000 people attended the concerts over the course of three days, and the band made the shows available for free download on their LivePhish website for a limited time, in order to accommodate fans who were unable to attend. When the band decided to reunite, the members agreed to limit their touring schedule, and they have typically performed about 50 concerts a year since. Following the reunion weekend, Phish embarked on a summer tour which began in May with a concert at Fenway Park in Boston. The Fenway show was followed by a 25-date tour which included performances at the 2009 edition of the Bonnaroo Music Festival in Tennessee and a four-date stand at Red Rocks Amphitheatre in Colorado. At Bonnaroo, Phish was joined by Bruce Springsteen on guitar for three songs. Phish's twelfth studio album, "Joy", produced by Steve Lillywhite, was released September 8, 2009. 
A single from the album, "Time Turns Elastic", was released on iTunes in late May. In October, the band held Festival 8, their first multi-day festival event since Coventry in 2004, at the Empire Polo Club in Indio, California. In March 2010, Anastasio inducted Genesis, one of his favorite bands, into the Rock and Roll Hall of Fame at the museum's annual ceremony in New York City. In addition to Anastasio's speech, Phish performed the Genesis songs "Watcher of the Skies" and "No Reply at All" at the event. Phish toured in the summer and fall of 2010, and their August 10 concert at the Utica Memorial Auditorium was released on the DVD/CD box-set "Live in Utica" the following May. Phish's ninth festival, Super Ball IX, took place at the Watkins Glen International race track in Watkins Glen, New York on July 1–3, 2011. It was the first concert to take place at Watkins Glen International since Summer Jam at Watkins Glen in 1973. In September, the band played a benefit concert in Essex Junction, Vermont which raised $1.2 million for Vermont flood victim relief in the aftermath of Hurricane Irene. In June 2012, Phish headlined Bonnaroo 2012 with the Red Hot Chili Peppers and Radiohead. During their 2013 Halloween concert at Boardwalk Hall in Atlantic City, New Jersey, the band played twelve new songs from their upcoming album, which at the time had the working title "Wingsuit" and would later be renamed "Fuego". Phish ended 2013 with a New Year's Eve concert that also celebrated their 30th anniversary, as they had played their first concert in December 1983. The concert featured a nine-minute montage film celebrating the band's career, and the band performed an entire set in the middle of the arena from atop an equipment truck. Phish released "Fuego", their first studio album in five years, on June 24, 2014. The album peaked at number 7 on the "Billboard 200" album chart, and became their highest charting album since "Billy Breathes" reached the same position in 1996. 
During their Halloween 2014 concert at MGM Grand Las Vegas, the band performed a set consisting of ten original songs inspired by the 1964 Walt Disney Records sound effects album "Chilling, Thrilling Sounds of the Haunted House." 2014 ended with a three-set show on New Year's Eve in Miami, Florida, followed by three more nights of performances to ring in 2015. In 2015, Phish performed a summer tour and held their tenth multi-day festival event, Magnaball, at Watkins Glen International in New York in August. Phish's fourteenth studio album, "Big Boat", was released on October 7, 2016, on JEMP Records. Phish played a 13-night concert residency at New York City's Madison Square Garden from July 21 to August 6, 2017, with each show featuring a unique set list and no song repeated across the run. Named "The Baker's Dozen", each concert featured a loose theme with performances of unique cover songs and a special doughnut served each night to the audience by Federal Donuts of Philadelphia. Phish planned to hold an eleventh summer festival, Curveball, in Watkins Glen, New York in 2018, but the festival was canceled by New York Department of Health officials, one day before it was scheduled to begin, due to water quality issues from flooding in the Watkins Glen area. At their Halloween concert that October at the MGM Grand in Las Vegas, the band performed a set of all-new original material that they promoted as a "cover" of "í rokk" by Kasvot Växt, a fictional 1980s Scandinavian progressive rock band they had created. Phish released the Kasvot Växt set as a standalone live release on Spotify on November 10, 2018. All four concerts in the 2018 Halloween run were livestreamed in 4K resolution, which marked the first time that a major musical act had offered a 4K livestreaming option. 
"Between Me and My Mind", a documentary film directed by Steven Cantor about Anastasio's life, his Ghosts of the Forest side project and Phish's 2017 New Year's Eve concert, was screened at the Tribeca Film Festival in April 2019. In June 2019, SiriusXM launched Phish Radio, a satellite radio station dedicated to the band's music. Phish released their fifteenth studio album "Sigma Oasis" on April 2, 2020. The album premiered through a listening party on their LivePhish app, SiriusXM radio station and Facebook page. The album consists entirely of material that the band had been performing in concert over the previous decade but that had yet to appear on a studio release. Due to the COVID-19 pandemic, Phish postponed their 2020 summer tour until 2021. Before 2020, Phish had embarked on a summer tour every year since their 2009 reunion. Phish's popularity grew in the 1990s due to fans sharing concert recordings that had been taped by audience members and distributed online for free. Phish were among the first musical acts to utilize the internet to grow their fanbase, with fans using file-sharing websites such as etree and BitTorrent to share concerts. In 1998, "Rolling Stone" described Phish as "the most important band of the '90s." In "The New Rolling Stone Album Guide", the band is described as having helped "spawn a new wave of bands oriented around group improvisation and extended instrumental grooves". Phish's festival events in the 1990s inspired the foundation of the Bonnaroo Music Festival in Tennessee, which was first held in 2002. Co-founder Rick Farman, a Phish fan, consulted Phish managers Richard Glasgow and John Paluska about festival infrastructure during the early stages of planning. The festivals also inspired other jam band-oriented concert events, such as the Disco Biscuits' Camp Bisco, Electric Forest Festival, and the Big Ears Festival. 
While Phish has had eight of their singles appear on "Billboard"'s Adult Alternative Songs chart since its inception in 1996, even the band's most successful songs would not be recognizable to the average music listener. Phish are well known to their loyal fans, called Phishheads, but the group's music and fan culture are otherwise polarizing to general audiences. The tribal nature of Phish supporters has encouraged comparisons of Phishheads to the Juggalos of Insane Clown Posse. Phish's career runs parallel to that of their jam band counterpart the Grateful Dead, and like the Grateful Dead, they hold a special place in commercialism where authenticity is manufactured and sold. Phish heavily contribute to music-based tourism with their "traveling communities" of fans, and they have been simultaneously hailed and criticized for their near-constant tour dates, which bring with them the capital value of tourism and necessitate the increased security and community planning that come with any music festival. Jordan Hoffman of Thrillist explains "the solace many find in attending religious services is somewhat mirrored for me in seeing Phish," and even though Phish fans are generally considered welcoming and friendly, the reception of the group from the outside is often one of unease and confusion. The music of Phish is "oriented around group improvisation and super-extended grooves" that draw on a range of rock-oriented influences, including psychedelic rock, progressive rock, jazz fusion, funk, reggae, hard rock, alternative rock, post-punk and various "acoustic" genres, such as folk and bluegrass. Some Phish songs use different vocal approaches, such as a cappella (unaccompanied) sections of barbershop quartet-style vocal harmonies. The band began to include barbershop segments in their concerts in 1993, when the four members began taking lessons from McConnell's landlord, who was a judge at barbershop competitions. 
In the 1997 official biography "The Phish Book", Anastasio coined the term "cow-funk" to describe the band's late 1990s funk and jazz-funk-influenced playing style, observing that "What we’re doing now is really more about groove than funk. Good funk, real funk, is not played by four white guys from Vermont." Phish were often compared to the Grateful Dead during the 1990s, a comparison that the band members often resisted or distanced themselves from. The two bands were compared due to their emphasis on live performances, improvisational jamming style, musical similarities, and traveling fanbase. In November 1995, Anastasio told the "Baltimore Sun": "When we first came into the awareness of the media, it would always be the Dead or Zappa they'd compare us to. All of these bands I love, you know? But I got very sensitive about it." Early in their career, Phish would occasionally cover Grateful Dead songs in concert, but the band stopped doing so by the late 1980s. In "Phish: The Biography", Parke Puterbaugh observed: "The bottom line is while it's impossible to imagine Phish without the Grateful Dead as forebears, many other musicians figured as influences upon them. Some of them - such as Carlos Santana and Frank Zappa - were arguably at least as significant as the Grateful Dead. In reality, the media certainly overplayed the Grateful Dead connection and Phish probably underplayed it, at least in their first decade." Anastasio has also cited progressive rock artists such as King Crimson and Genesis as significant influences on Phish's early material. In a 2019 "New York Times" interview, he observed, "If you listen to the first couple of Phish albums, they don’t sound anything like the Grateful Dead. I was more interested in Yes." The driving force behind Phish is the popularity of their concerts and the fan culture surrounding them. 
Each concert is a production unto itself: the band is known for consistently changing set lists and details, and adding their own antics, to ensure that no two shows are ever the same. With fans flocking to venues hours before they open, the concert is the centerpiece of an event that includes a temporary community in the parking lot, complete with "Shakedown Street": at times a garment district, art district, food court, or pharmacy. Phish concerts typically feature two sets, with an intermission in between. During concerts, songs often segue into one another, or produce improvisational jams that can last 10 minutes or more depending on the song. Several regularly performed songs in Phish's repertoire have never appeared on one of their studio albums; those include "Possum", "Mike's Song", "I Am Hydrogen", "Weekapaug Groove", "Harry Hood", "Runaway Jim", "Suzy Greenberg", "AC/DC Bag" and "The Lizards", all of which date to 1990 or earlier and have been played by Phish over 300 times in concert. Because Phish's reputation is so grounded in their live performances, concert recordings are commonly traded commodities. In December 2002, the band launched the LivePhish website, from which official soundboard recordings can be purchased. Legal field recordings, produced by tapers with boom microphones from the audience in compliance with Phish's tape trading policy, are frequently traded on music message boards. Although technically not allowed, live videos of Phish shows are also traded by fans and are tolerated as long as they are for non-profit, personal use. Phish fans have been noted for their extensive collections of fan-taped concert recordings; owning recordings of entire tours and years is widespread. Fans' recordings are generally sourced from the officially designated tapers' section at each show, by fans with dedicated sound recording rigs. 
Tickets for the tapers' section are acquired separately from regular audience tickets, and directly from the band's website, instead of the venue or a service like Ticketmaster. However, tapers are also required to purchase a general admission ticket for concerts. The band disallowed tapers from patching directly into Paul Languedoc's soundboard in 1990, after a fan unplugged some of his equipment during a concert that June. In 2014, the band launched their own on-demand streaming service, LivePhish+. The platform features hundreds of soundboard recordings of the band's concerts for streaming, including all of their shows from 2002 onwards, as well as all of their studio albums. Phish have continued to allow fans to tape and distribute audience recordings of their concerts since the launch of the LivePhish storefront and streaming services. Several books on Phish have been published, including two official publications: "The Phish Book", a 1998 coffee table book credited to the band members and journalist Richard Gehr which focused on the band's activities during 1996 and 1997, and "Phish: The Biography", a semi-official biography written by music journalist and Phish fan Parke Puterbaugh, published in 2009 and based on interviews with the four band members, their friends and crew. An installment of the 33⅓ book series on "A Live One", written by Walter Holland, was published in 2015. In addition to books, there have been multiple podcasts which have focused on Phish, its music and fanbase as their central topics of discussion. Among the first was "Analyze Phish", which was hosted by comedians Harris Wittels and Scott Aukerman for the Earwolf podcast network, and ran for ten episodes posted between 2011 and 2014. The podcast followed Wittels, a devoted fan of the band, in his humorous attempts to get Aukerman to enjoy their music. 
Despite its truncated run, "Analyze Phish" inspired Phish lyricist Tom Marshall to start his own Phish podcast, "Under the Scales", in 2016. In 2018, Marshall co-founded the Osiris Podcasting Network, which hosts "Under the Scales" and other music podcasts, many of which are devoted to Phish or other jam bands. On September 15, 2019, C13Originals debuted "Long May They Run", a music documentary podcast series. The first season, consisting of 10 episodes, focused on Phish's history and influence on the live music scene. The band's music began appearing in the "Rock Band" video game series in 2009. Their song "Wilson" (December 30, 1994 at Madison Square Garden, New York, NY, as released on "A Live One") appeared in "Rock Band"'s Bonnaroo song pack, along with other songs by artists playing at the Bonnaroo Festival that year. A Phish "Live Track Pack" for "Guitar Hero World Tour" became available on June 25, 2009. Recordings of "Sample in a Jar" (December 1, 1994 at Salem Armory, Salem, Oregon), "Down With Disease" (December 1, 1995 at Hersheypark Arena, Hershey, Pennsylvania) and "Chalk Dust Torture" (November 16, 1994, Hill Auditorium, University of Michigan, Ann Arbor, Michigan, as released on "A Live One") have been released, compatible with Xbox 360, PS3, and Wii. On August 19, 2010, it was confirmed that "Llama" would be a playable song in "Rock Band 3", released on October 26, 2010. Seattle Seahawks fans began mimicking Phish's song "Wilson" by chanting the song's opening line when quarterback Russell Wilson took the field during games. The new tradition started after Anastasio made the suggestion at shows in Seattle. NFL Films made a short documentary on the cultural phenomenon. New York Mets catcher Wilson Ramos also uses the song as his walk-up music.
https://en.wikipedia.org/wiki?curid=24969
PA-RISC PA-RISC is an instruction set architecture (ISA) developed by Hewlett-Packard. As the name implies, it is a reduced instruction set computer (RISC) architecture, where the PA stands for Precision Architecture. The design is also referred to as HP/PA, for Hewlett-Packard Precision Architecture. The architecture was introduced on 26 February 1986, when the HP 3000 Series 930 and HP 9000 Model 840 computers were launched featuring the first implementation, the TS1. PA-RISC has been succeeded by the Itanium (originally IA-64) ISA, jointly developed by HP and Intel. HP stopped selling PA-RISC-based HP 9000 systems at the end of 2008 but supported servers running PA-RISC chips until 2013. In the late 1980s, HP was building several series of computers, all based on CISC CPUs. One line was the IBM PC compatible Intel 286-based Vectra series, started in 1986. All others were non-Intel systems: the HP Series 300 of Motorola 68000-based workstations; the Series 200 line of technical workstations based on a custom silicon-on-sapphire (SOS) chip design; the SOS-based 16-bit HP 3000 classic series; and the HP 9000 Series 500 minicomputers, based on HP's own (16- and 32-bit) FOCUS microprocessor. Precision Architecture is the result of what was known inside Hewlett-Packard as the Spectrum program. HP planned to use Spectrum to move all of their non-PC-compatible machines to a single RISC CPU family. The first processors were introduced in 1986. The architecture had thirty-two 32-bit integer registers and sixteen 64-bit floating-point registers. The number of floating-point registers was doubled in the 1.1 version to 32 once it became apparent that 16 were inadequate and restricted performance. The architects included Allen Baum, Hans Jeans, Michael J. Mahon, Ruby Bei-Loh Lee, Russel Kao, Steve Muchnick, Terrence C. Miller, David Fotland, and William S. Worley. 
The first implementation was the TS1, a central processing unit built from discrete transistor–transistor logic (74F TTL) devices. Later implementations were multi-chip VLSI designs fabricated in NMOS (NS1 and NS2) and CMOS (CS1 and PCX) processes. They were first used in a new series of HP 3000 machines in the late 1980s – the 930 and 950, commonly known at the time as Spectrum systems, the name given to them in the development labs. These machines ran MPE-XL. The HP 9000 machines were soon upgraded with the PA-RISC processor as well, running the HP-UX version of UNIX. Other operating systems ported to the PA-RISC architecture include Linux, OpenBSD, NetBSD and NeXTSTEP. A notable aspect of the PA-RISC line is that most of its generations have no Level 2 cache. Instead, large Level 1 caches are used, formerly as separate chips connected by a bus, and later integrated on-chip. Only the PA-7100LC and PA-7300LC had L2 caches. Another innovation of PA-RISC was the addition of vectorized (SIMD) instructions in the form of MAX, first introduced on the PA-7100LC. The Precision RISC Organization, an industry group led by HP, was founded in 1992 to promote the PA-RISC architecture. Members included Convex, Hitachi, Hughes Aircraft, Mitsubishi, NEC, OKI, Prime, Stratus, Yokogawa, Red Brick Software, and Allegro Consultants, Inc. The ISA was extended in 1996 to 64 bits, with this revision named PA-RISC 2.0. PA-RISC 2.0 also added fused multiply–add instructions, which help certain floating-point intensive algorithms, and the MAX-2 SIMD extension, which provides instructions for accelerating multimedia applications. The first PA-RISC 2.0 implementation was the PA-8000, which was introduced in January 1996.
https://en.wikipedia.org/wiki?curid=24970
Preacher (comics) Preacher is an American comic book series published by Vertigo, an imprint of DC Comics. The series was created by writer Garth Ennis and artist Steve Dillon, with painted covers by Glenn Fabry. The series consists of 75 issues in total – 66 regular, monthly issues, five one-shot specials, and a four-issue "Preacher: Saint of Killers" limited series. The final monthly issue, number 66, was published in October 2000. The entire run has been collected in three series of trade paperbacks: an original run of nine volumes, a second run of six, and three special oversized "Absolute" volumes. "Preacher" tells the story of Jesse Custer, a preacher in the small Texas town of Annville. Custer is accidentally possessed by a supernatural creature named Genesis, the infant offspring of the unauthorized, unnatural coupling of an angel and a demon. The incident flattens Custer's church and kills his entire congregation. Genesis has no sense of individual will, but since it is composed of both pure goodness and pure evil, its power might rival that of God Himself, making Jesse Custer, bonded to Genesis, potentially the most powerful being in the universe. Driven by a strong sense of right and wrong, Custer journeys across the United States attempting to literally find God, who abandoned Heaven the moment Genesis was born. He also begins to discover the truth about his new powers: they allow him, when he wills it, to command the obedience of those who hear and comprehend his words. He is joined by his old girlfriend Tulip O'Hare, as well as a hard-drinking Irish vampire named Cassidy. 
During the course of their journeys, the three encounter enemies and obstacles both sacred and profane, including The Saint of Killers, an invincible, quick-drawing, perfect-aiming, come-lately Angel of Death answering only to "He who sits on the throne"; a disfigured suicide attempt survivor turned rock star named Arseface; a serial killer called the 'Reaver-Cleaver'; The Grail, a secret organization controlling the governments of the world and protecting the bloodline of Jesus; Herr Starr, ostensible Allfather of the Grail, a megalomaniac with a penchant for prostitutes, who wishes to use Custer for his own ends; several fallen angels; and Jesse's own redneck 'family' — particularly his nasty Cajun grandmother, her mighty bodyguard Jody, and the zoophilic T.C. Additionally, the book "Preacher: Dead or Alive" collects Fabry's covers to the series. Garth Ennis, feeling "Preacher" would translate perfectly as a film, sold the film rights to Electric Entertainment. Rachel Talalay was hired to direct, with Ennis writing the script. Rupert Harvey and Tom Astor were set as producers. By May 1998, Ennis had completed three drafts of the script, based largely on the "Gone to Texas" story arc. The filmmakers found it difficult to finance "Preacher" because investors found the idea religiously controversial. Ennis approached Kevin Smith and Scott Mosier to help finance the film under their View Askew Productions banner. Ennis, Smith and Mosier pitched "Preacher" to Bob Weinstein at Miramax Films. Weinstein was confused by the characterization of Jesse Custer. Miramax also did not want to share the box office gross with Electric Entertainment, ultimately dropping the pitch. By May 2000, Smith and Mosier were still attached to produce with Talalay directing, but Smith did not know the status of "Preacher", feeling it would languish in development hell. 
By then, Storm Entertainment, a UK-based production company known for their work on independent films, had joined the production with Electric Entertainment. In September 2001, the two companies announced "Preacher" had been greenlighted to commence pre-production, with filming to begin in November and Talalay still directing Ennis' script. The production and start dates were pushed back because of difficulties financing the projected $25 million budget. James Marsden was cast in the lead role as Jesse Custer sometime in 2002. He explained, "It was something I never knew anything about, but once I got my hands on the comic books, I was blown away by it." In a March 2004 interview, Marsden said the filmmakers were hoping for filming to start the following August. With the full-length film adaptation eventually abandoned over budgetary concerns, HBO announced in November 2006 that it had commissioned Mark Steven Johnson and Howard Deutch to produce a television pilot. Johnson was to write, with Deutch directing. Impressed with Johnson's pilot script, HBO had him write the series bible for the first season. Johnson originally planned "to turn each comic book issue into a single episode" on a shot-for-shot basis. "I gave [HBO] the comics, and I said, 'Every issue is an hour'. Garth Ennis said 'You don't have to be so beholden to the comic'. And I'm like, 'No, no, no. It's got to be like the comic'." Johnson also wanted to make sure that one-shots were included as well. Johnson later changed his position, citing new storylines conceived by Ennis. "Well, there would be nothing new to add if we did that, so Garth [Ennis] and I have been creating new stories for the series," he said. "I love the book so much and I was telling Garth that he has to make the stories we are coming up with as comics because I want to see them." By August 2008, new studio executives at HBO decided to abandon the idea, finding it too stylistically dark and religiously controversial. 
Columbia Pictures then purchased the film rights in October 2008, with Sam Mendes planned to direct. Neal H. Moritz and Jason Netter would have produced the film. The previous scripts written by Ennis would not have been used. On November 16, 2013, it was announced that AMC would be shooting a pilot for "Preacher". On November 18, 2013, "BleedingCool" confirmed that Seth Rogen and Evan Goldberg had developed the series pilot with Sam Catlin, and that it would be distributed by Sony Pictures Television. On February 7, 2014, it was made public that AMC was officially developing the series for television based on the pilot written by Seth Rogen and Evan Goldberg. Rogen had no plans to co-star in the series. On May 9, 2014, AMC announced that "Preacher" was picked up to series. "Preacher" was slated to premiere in mid-to-late 2015, as announced by Seth Rogen, with the script for the series complete and the pilot ordered by the studio. Comic creators Steve Dillon and Garth Ennis were to work on the project as co-executive producers. On April 17, 2015, Seth Rogen tweeted that Dominic Cooper had been cast in the role of Jesse Custer, Joseph Gilgun as Cassidy, Ruth Negga as Tulip O'Hare, Ian Colletti as Arseface, and W. Earl Brown as Sheriff Hugo Root. On September 9, 2015, Seth Rogen announced via Twitter that the series had been ordered for a ten-episode season and was due to premiere in mid-2016. The series premiered on AMC on Sunday, May 22, 2016. In 2017, a second season, with thirteen episodes, aired. In 2018, a ten-episode third season aired. In November 2018, the series was renewed for a final fourth season, with production relocating to Australia. Stephen King has said that his comic book series "" was influenced by "Preacher". The character Yorick from "" has a Zippo lighter with the words "Fuck Communism" engraved, identical to the one owned by Jesse Custer in "Preacher". When asked about it, he says it's "from this book I read once... a graphic novel. You know, like a comic book." 
The phrase originated as a 1963 satirical poster produced by "The Realist" magazine's Paul Krassner. The lighter appears later in the series when Yorick and Agent 355 are being held at gunpoint by Russian agents, who find the lighter and take offense to it. Also, in volume 4, "Safeword", Yorick says "pardners", which is used several times in "Preacher" in lieu of "partners". IGN declared "Preacher" the third-greatest Vertigo comic, after "Saga of the Swamp Thing" and "Sandman". Jesse Custer was ranked the 11th-greatest comic book character by "Empire" magazine, and the Saint of Killers was ranked at number 42 on the same list.
https://en.wikipedia.org/wiki?curid=24971
Preacher A preacher is a person who delivers sermons or homilies on religious topics to an assembly of people. Less common are preachers who preach on the street, or those whose message is not necessarily religious but who advance a moral or social worldview or philosophy. Preachers are common throughout most cultures. They can take the form of a Christian minister on a Sunday morning, or an Islamic imam. A Muslim preacher in general is referred to as a "dā‘ī", while one giving sermons on a Friday afternoon is called a "khatib". The sermon or homily has been an important part of Christian services since Early Christianity, and remains prominent in both Roman Catholicism and Protestantism. Lay preachers sometimes figure in these traditions of worship, for example the Methodist local preachers, but in general preaching has usually been a function of the clergy. The Dominican Order is officially known as the "Order of Preachers" ("Ordo Praedicatorum" in Latin); friars of this order were trained to preach publicly in vernacular languages, and the order was created by Saint Dominic to preach to the Cathars of southern France in the early thirteenth century. The Franciscans are another important preaching order; travelling preachers, usually friars, were an important feature of late medieval Catholicism. In most denominations, modern preaching is kept below about 40 minutes, but historic preachers of all denominations could at times speak for well over an hour, sometimes for two or three hours, using techniques of rhetoric and theatre that are today somewhat out of fashion in mainline churches. In many churches in the United States, the title "Preacher" is synonymous with "pastor" or "minister", and the church's minister is often referred to simply as "our/the preacher" or by name, such as "Preacher Smith". However, among some Chinese churches, a preacher (Chinese: 傳道) is distinct from a pastor (Chinese: 牧師).
In such Protestant churches, a preacher is one of the younger clergy, not officially recognised as a pastor until they can prove their capability of leading the church.
https://en.wikipedia.org/wiki?curid=24972
Prime time The prime time or the peak time is the block of broadcast programming taking place during the middle of the evening for television programming. It is used by the major television networks to broadcast their season's nightly programming. The term "prime time" is often defined in terms of a fixed time period—for example (in the United States), from 8:00 p.m. to 11:00 p.m. (Eastern and Pacific Time) or 7:00 p.m. to 10:00 p.m. (Central and Mountain Time). In India and some Middle Eastern countries, prime time consists of the programmes that are aired on TV between 8:00 p.m. and 11:00 p.m. local time. On Bangladeshi television channels, the 19:00-to-22:00 time slot is known as prime time. Several national broadcasters, such as Maasranga Television, Gazi TV, Channel 9 and Channel i, broadcast their prime-time shows from 20:00 to 23:00, after their prime-time news at 19:00. During the Eid season, most TV stations broadcast their specially produced shows and world television premieres from 15:00 to midnight. In Ramadan, the broadcasters also air special religious and cooking shows from 14:00 to 20:00, affecting the prime-time hours. Late-night talk shows are also aired from 01:00 to 04:00, with Ramadan being the exception; religious shows are broadcast simultaneously from 01:00 along with talk shows and news analysis. In Chinese television, the 19:00-to-22:00 time slot is known as Golden Time (Traditional Chinese: 黄金時間; Simplified Chinese: 黄金时间; Pinyin: Huángjīn shíjiān). The term also influenced the nickname of a strip of holidays in China known as Golden Week. In Hong Kong, prime time usually takes place from 19:00 until 22:00. After that, programs classified as "PG" (Parental Guidance) are allowed to be broadcast. Frontline dramas appear during this time slot in Cantonese, as well as movies in English. In India, prime time occurs between 20:00 and 22:30.
Usually, programmes during prime time are domestic dramas, talent shows and reality shows. In Indonesia, prime time usually takes place from 18:00 to 23:00 WIB, preceded by a daily newscast at 17:00 (although some channels broadcast their daily evening newscasts earlier, usually at 16:00 or 16:30, a practice that ended in 2018, except on TVRI). After prime time, programs classified as adult, as well as cigarette commercials, are allowed to be broadcast. As in other Muslim-majority countries, there is also a 'midnight prime time' during sahur time in the month of Ramadan. It takes place from 02:00 (or 02:30 on some channels) and ends at the Fajr prayer call, which varies between 04:30 and 05:00. The time slot is usually filled with comedy and religious programming. In Iraq, prime time runs from 20:00 to 23:00. The main news programs are broadcast at 20:00 and the highest-rated television program airs at 21:00. In Japanese television, prime time runs from 19:00 to 23:00. The 19:00-to-22:00 time slot in particular is known as Golden Time. The term also influenced the nickname of a strip of holidays in Japan known as Golden Week. Malaysian prime time starts with the main news from 20:00 to 20:30 (now 20:00 to 21:00) and ends either at 23:00 or 1:00, or possibly later. Usually, programmes during prime time are domestic dramas, foreign drama series (mostly American), films and entertainment programmes. Programmes classified as 18 are not allowed to be broadcast before 10:00 p.m., but on RTM most programmes in this slot are rated U ("Umum" in Malay, literally General Viewing or General Audiences in English) throughout the whole day. However, programmes broadcast after 23:00 are still considered prime time. As of 2019, NTV7's prime time continues until 12:00 a.m. Programmes during prime time may have longer commercial breaks due to the number of viewers. Some domestic prime-time productions may be affected by certain major sporting events such as the FIFA World Cup.
However, only FIFA World Cups held in the Americas do not affect the domestic prime-time programmes, but only daytime ones. In Pakistan, prime time begins between 20:00 and 22:00 Pakistan Standard Time. During this time the majority of local channels broadcast news and/or drama serials; the state channels, however, have broadcast Khabarnama (news bulletin) in this slot for many decades. In the Philippines, prime-time blocks begin at 18:00 (now 17:50 or 17:00) and run until about 23:00 (or 23:30) on weekdays, and 19:00 to 23:00 on weekends. The weekday prime-time blocks usually consist of local teleseryes (soap operas) and foreign television series. The network's highest-rated programs are usually aired right after the evening newscast at 20:00, while a foreign series usually precedes the late-night newscast. On weekends, non-scripted programming such as comedy series, talent shows, reality shows and current affairs shows air in prime time. For the minor networks, prime time consists of American television series on weekdays, with encores of those shows on weekends. Prime time originally started earlier, at around 19:00, but the evening newscasts were lengthened to 90 minutes and now start at 18:30, instead of the original one-hour newscast starting at 18:00. In Singapore, prime time begins at 18:00 on Mediacorp Channel 5, 18:30 on Mediacorp Channel 8 and 19:00 on Mediacorp Channel U, Channel NewsAsia, Mediacorp Suria and Mediacorp Vasantham, which are the main free-to-air television channels in Singapore. On Channel 8, prime time ends at midnight or 0:15 on weekdays, at 0:30 on Saturday nights and at 23:30 on Sunday nights. On Channel 5, prime time ends at 0:00 on weekdays, at 1:30 (or later) on Saturday nights and at 0:30 on Sunday nights. On Suria, prime time ends at 22:30 on Monday to Thursday nights, 23:30 on Friday nights, 23:00 on weekends and at 00:30 or 01:00 on the eves and actual days of public holidays.
On Vasantham, prime time ends at 23:00 on Mondays to Thursdays, midnight (or later) on Friday and Saturday nights and at 23:30 on Sunday nights. On Channel NewsAsia, prime time ends at 23:01, immediately after the news headlines, seven days a week, and on Channel U, prime time ends at 23:00 seven days a week. Generally, however, prime time is considered to be from 18:00 to 00:00. In South Korea, prime time usually runs from 20:00 to 23:00 during the week, while on Saturdays and Sundays it runs from 18:00 to 23:00. Family-oriented television shows are broadcast before 22:00, and adult-oriented television shows air after 22:00. In Taiwan, prime time (called "bādiǎn dàng"—八點檔—in Mandarin Chinese) starts at 20:00 in the evening. Taiwanese drama series played then are called 8 o'clock series and are expected to have high viewer ratings. In Thailand, prime-time dramas (ละคร; la-korn) air from 20:30 to 22:30. Most dramas are soap operas. Prime-time dramas are popular and influential in Thai society. In Vietnam, prime time is also known as Golden Time (Vietnamese: Giờ vàng); it starts at 20:00 in the evening and ends at 23:00. In Bosnia and Herzegovina, prime time starts at 20:00 and finishes at 22:00. It is preceded by a daily newscast ("Dnevnik") at 19:00 and followed by a late-night newscast ("Vijesti") at 22:00. In Croatia, prime time starts between 20:00 and 20:15. Croatian public broadcaster HRT broadcasts a daily newscast from 19:00 to 20:00. Many private broadcasters also have daily newscasts either before or after the HRT newscast, at around 20:05, followed by the start of their own prime time. Many broadcasters without daily newscasts start their prime time at 20:00. Prime time generally ends between 22:00 and 23:00, followed by the late-night edition of the network newscast and adult-oriented programming. In Denmark, prime time starts at 20:00. In Finland, prime time starts at 21:00. It is preceded by a daily newscast at 20:30.
In France, prime time runs from 21:05 (after the main channels' evening news programmes) until around 22:30. In Georgia, prime time starts between 18:45 and 20:00 and generally ends at midnight. However, on Friday night / Saturday morning prime time usually continues until 1:00. At 20:00 each evening Das Erste (The First), Germany's oldest public television network, airs the country's most-watched news broadcast, the main edition of the "Tagesschau"—which is also simulcast on most of its other specialist and regional channels (The Third). The conclusion of the bulletin 15 minutes later marks the beginning of prime time, as it has since the 1950s. In consequence, most channels also choose to start their prime time at 20:15. In the 1990s, the commercial channel Sat.1 suffered a significant loss of audience share when it tried moving the start of its prime time to 20:00. In Greece, prime time runs from 21:00 (usually following the news) to midnight. In Hungary, prime time on weekdays on the two big commercial stations (RTL Klub and TV2) starts at 19:00 with game shows, tabloid and docu-reality programmes. Two popular soap operas follow: "Barátok közt" at 21:00 and "Jóban Rosszban" at 21:30. American and other series, movies, talk shows and magazines run until 23:30. The prime-time lineup is preceded by daily news programmes at 18:30. At weekends prime time begins at 19:00, with blockbuster movies and television shows. Before 15 March 2015, the public television station M1 began its prime time with a game show at 18:30, which was followed by the daily news programme "Híradó" at 19:30. After the news, the channel broadcast American and other series, talk shows, magazines, and news programmes until 22:00, after which came the daily news magazine "Este" and the late edition of "Híradó".
From 15 March 2015, Duna began broadcasting all of the entertainment programming transferred to it from that date from M1, meaning that prime time on Duna now begins at 18:00, starting with the simulcast of the 18:00 edition of "Híradó" from the newly re-launched news channel, M1. In Iceland, prime time starts at 19:30. It is preceded by a daily newscast at 19:00. In Ireland, prime time starts at 18:30 and ends at 22:00. In Italy, prime time (called "prima serata") starts between 21:00 and 21:45 (main channels) and ends between 23:30 and 00:30. On Friday and Saturday night some shows last until 01:30–02:00. It usually follows the news and, on some networks (like Rai 1 and Canale 5), a slot called "access prime time". Shows, movies, and sport events are usually shown during prime time. Much like in Germany, prime time in the Netherlands usually begins at 20:30 in order not to compete with NOS's flagship 20:00 newscast. In Norway, prime time starts at 19:45. On the NRK1 channel it is preceded by the daily newscast "Dagsrevyen" at 19:00. Locally, prime time is called (lit. "best time for broadcasting"). In Poland, prime time starts around 20:00 (sometimes 20:30). On TVP 1 it is preceded by a daily newscast at 19:30. On TVN, the newscast is aired at 19:00, followed by the newsmagazine "Uwaga" at 19:50 (weekdays) or 19:45 (weekends) and then the soap "Na Wspólnej" at 20:05 from Monday to Thursday; from Friday to Sunday the 20:00 slot carries various programming: movies on Friday, and shows or movies (in winter and summer) on Saturday and Sunday. On Polsat, the news is aired at 18:50, followed by the sitcom "Świat według Kiepskich" at 19:30. In Russia, television prime time is between 19:00 and 23:00 on working days and from 15:00 to 01:00 on holidays. On radio stations there are morning, day and evening prime times. The most common division is: morning—6:30 to 10:00; day—approximately 12:00 to 14:00; evening—16:00 to 21:00.
Public television in Slovakia consists of two channels; on the main channel (Jednotka) prime time starts at 20:10, and on the second one (Dvojka) prime-time programming starts at 20:00. The two biggest private broadcasters set the start of prime-time programming at 20:20 (Markíza) and 20:30 (JOJ). Generally, however, prime time is considered to be from 20:00 to 23:00. In Slovenia, prime time, the period in which the most-watched shows are broadcast, is from 8:00pm to 11:00pm. It is preceded by daily newscasts: "Dnevnik RTV SLO" (7:00pm–8:00pm) on TV SLO 1, "24ur" (6:55pm–8:00pm) on POP TV, "Svet na Kanalu A" (6:00pm–7:00pm; 7:50pm–8:00pm), and "Danes" (7:30pm–8:00pm) on Planet TV. In Spain, prime time refers to the time period in which the most-watched shows are broadcast. Prime time in Spain starts quite late compared to most nations, running from 22:30 till 01:00. Most news programmes in Spain air at 21:00 for an hour, and prime time follows. However, due to fierce competition, especially among the private stations, prime time has even been delayed until 23:00. Most channels delay prime time in order to protect their top shows from sporting events. In the 1990s, prime time in Spain began at 21:00, moving to 21:30 in the latter half of the 1990s and 22:00 in the early 2000s. The commercial broadcaster laSexta and La 2, the second channel of the public broadcaster, attempted to shift prime time back to 21:30 in 2006 and in spring 2007, but these attempts were unsuccessful. Fellow public channel La 1 also tried to pull prime time back to 21:00 in early 2015, to no avail. The lateness of the start of prime time in Spain is also due to Spanish culture: Spanish people generally work from 09:00–14:00 and then from 17:00–20:00, as opposed to the standard 09:00–17:00.
The popular late-night show "Crónicas marcianas" during the late 1990s–2000 also helped to extend prime time well into the early hours, with the show being watched by a share of 40% despite finishing at 02:00. Spain might also be unique in that it has a second prime time, running from 14:30–17:00, which coincides with the extended Spanish lunch break. Shows airing in this secondary prime-time period often beat the nighttime prime-time shows in the ratings. The second prime time only occurs on weekdays, though, and the slot is usually filled with "The Simpsons", news, soap operas and talk shows. In Sweden, prime time starts at 20:00. It is preceded by a daily newscast at 19:30 and local news at 19:50. In Ukraine, prime time () runs from 18:30 to 21:30 on working days and from 15:00 to 01:00 on holidays. In the UK, prime time (known as peak time in that country) runs from 17:30 to 23:00. In North America, television networks feed their prime-time programming in two blocks: one for the Eastern and Central time zones, and the other, on a three-hour tape delay, for the Pacific time zone, to their local network affiliates. In Atlantic Canada (including Newfoundland) as well as Alaska and Hawaii, there is no change in the interpretation or usage of "prime time", as the concept is not attached to time zones in any way. Affiliates in the Mountain, Alaskan, and Hawaiian zones are either on their own to delay broadcast by an hour or two, or collectively form a small, regional network feed with others in the same time zone. Prime time is commonly defined as 8:00–11:00 p.m. Eastern/Pacific (on networks such as TBS, HGTV and ABC Family) and 7:00–10:00 p.m. Central/Mountain. On Sundays, the major broadcast television networks traditionally begin their prime-time programming at 7:00 p.m. (Eastern/Pacific; 6:00 p.m. Central/Mountain) instead. Some networks, such as Fox, The CW, and MyNetworkTV, only broadcast from 8:00–10:00 p.m., a time period known as "common prime".
Most networks air prime-time programming nightly, but the smaller MyNetworkTV has only broadcast prime-time programs on weekdays since 2009, and The CW only broadcasts on weekdays and Sundays as of 2018, leaving Saturday's schedule to their affiliates. In Canada, CTV and Global both follow the same model as the larger U.S. networks (although both may occasionally air programming in the 7:00 p.m. hour in the event of scheduling conflicts with other U.S. imports), while CBC Television, Citytv and CTV Two only schedule prime-time programs within the common prime period (with the 10:00 p.m. hour dedicated to syndicated programming on Citytv and CTV Two, and CBC airing its news program "The National"). The Canadian Radio-television and Telecommunications Commission (CRTC) has alternatively defined prime time as ranging from 6 p.m. to 11 p.m. or from 7 p.m. to 11 p.m. Since the early 2000s, the major networks have come to consider Saturday prime time a graveyard slot, and have largely abandoned scheduling new scripted programming on that night. The major networks still maintain a prime-time programming schedule on Saturdays; while live sporting events (most commonly college football in the United States and ice hockey in Canada) are generally preferred to fill the time slot, they typically air encores of programs aired earlier in the week, films, non-scripted reality programs, true crime programs produced by their news divisions and, occasionally, burned-off episodes of low-rated or cancelled series. Prime time can be extended or truncated if coverage of a sporting event runs past its allotted end time.
Since the "Heidi Game" incident in 1968, in which NBC cut away from coverage of a New York Jets/Oakland Raiders football game on the east coast in order to show a movie (in the process causing viewers to miss an unexpected comeback by the Raiders to win the game), the National Football League has mandated that all games be broadcast in their entirety in the markets of the teams involved. Due to this rule, game telecasts may sometimes overrun into the 7:00 p.m. ET hour. Fox previously scheduled repeats of its animated series in the 7:00 hour, allowing itself to simply pre-empt the reruns if a game ran long; this was later replaced by a half-hour-long wrap-up show, "". In contrast, CBS does not do this, as its weekly newsmagazine "60 Minutes" has traditionally aired as close to 7:00 p.m. ET as possible. Even if a game runs past that hour, CBS shows "60 Minutes" in its entirety after the conclusion of coverage, and the rest of the prime-time schedule on the East Coast is shifted to compensate. For example, if game coverage were to end at 7:30 p.m., prime time would end at 11:30 p.m. However, in the rare case where the NFL game runs excessively late (to 8 p.m. or later), the series scheduled to air at 10 p.m. is preempted, with the West Coast, and eastern markets airing only an early-afternoon game, usually receiving a repeat of the 10 p.m. series instead. In an extreme case, CBS's prime time can be extended past midnight during broadcasts of the NCAA Division I Men's Basketball Tournament. This does not necessarily apply universally; in 2001, after an XFL game went into double overtime, causing a 45-minute delay of a highly promoted episode of "Saturday Night Live", NBC made a decision to cut off all future XFL broadcasts at 11:00 p.m. ET. Since the launch of NBCSN, NBC has occasionally invoked this curfew by moving sports overruns to that channel if necessary.
Until the Federal Communications Commission (FCC) regulated time slots prior to prime time with the now-defunct Prime Time Access Rule in the 1971–1972 season, networks began programming at 7:30 p.m. Eastern and Pacific/6:30 p.m. Central and Mountain on weeknights. The change helped instigate what is colloquially known as the "rural purge"—a long-term trend away from programs appealing to older and rural audiences in favor of programs catering towards younger, "urban" viewers. As a result, the hour became a lucrative timeslot for syndicated programming in the years that followed, with game and variety shows, as well as other syndicated reruns, becoming popular. The vast majority of prime-time programming in English-speaking North America comes from the United States, with only a limited amount produced in Canada. The Canadian Radio-television and Telecommunications Commission mandates quotas for Canadian content in prime time; these quotas indicate at least half of Canadian prime-time programs must be Canadian in origin, but the majority of this is served by national and local news or localized entertainment gossip shows such as Global's "ET Canada" and CTV's "eTalk". Likewise, the vast majority of Spanish-language programming in North America comes from Mexico. Televisa, a Mexican network, provides the majority of programming to the dominant U.S.-based Spanish broadcaster, Univision. Univision does produce a fairly large amount of unscripted Spanish-language programming, the best known having been the long-running variety show "Sábado Gigante", hosted and created by Chilean national Don Francisco. Univision's distant second-place competitor, Telemundo, produces a much greater share of in-house content, including a long line of telenovelas. 
In Quebec, the largest Francophone area of North America, French-language programming consists of originally produced programs (most of which are produced in Montreal, with a few produced in Quebec City) and a few French-language dubs of English language programs. On all of the Quebec networks, entertainment programming is scheduled only between 8 and 10 p.m., with the 10–11 p.m. hour given over to a network newscast or a nightly talk show. Prime time is the daypart (a block of a day's programming schedule) with the most viewers and is generally where television networks and local stations reap much of their advertising revenues. In recent years television advertising expenditure in the US has been highest during prime-time drama shows. The Nielsen ratings system is explicitly designed for the optimum measurement of audience viewership by dayparts with prime time being of most interest. Television viewership is, in general, highest on weekday evenings, as most Americans are at work during the day, asleep during the overnights, and out taking part in social events on weekends; thus, television has its highest audience at times when people are unlikely to be away from home. Prime time for radio is called drive time and, in Eastern and Pacific Time, is 6–10 a.m. and 3–7 p.m. and, for Mountain and Central Time, is 5–9 a.m. and 2–6 p.m. The difference between peak radio listenership and television viewership times is due to the fact that people listen to their radios most often while driving to and from work (hence the name "drive time"). A survey by Nielsen revealed that viewers watched almost two hours' worth of TV during prime time. In a great part of Latin American countries, prime time (known in most countries as "horario central" or "Central Time") is considered to be from 6:00 or 7:00 p.m. to 10:00 or 11:00 p.m. 
The time slot is usually used for news, telenovelas and television series, with special time slots used for reality shows, which enjoy great popularity, especially in Mexico and Brazil. In Mexico, prime time is known as "horario estelar" ("stellar time"). In Brazil, it is called "horário nobre" ("noble time"), which is when the country's three most famous telenovelas are shown each weekday and on Saturdays; there are also news programs, reality shows, and sitcoms. In Argentina, prime time is considered to be from 8:00 p.m. until 12:00 a.m., featuring the country's most successful series and telenovelas (such as "Los Roldán" and "Valientes") and entertainment shows like "CQC" (Caiga Quien Caiga). In Chile, prime time is considered to be from 10:30 p.m. until 01:00 a.m., featuring the country's most successful series and telenovelas (such as "Socias" and "Las Vega's"). Investigative entertainment shows (like "Informe Especial", "Contacto" and "Apuesto por tí") also air. Prime time in Australia is officially from 6:00 p.m. to midnight, following Australian Eastern Standard Time, with the highest ratings normally achieved between 6:00 p.m. and 9:00 p.m. Traditionally, prime time in New Zealand is considered to be 7:30pm to 10:30pm, but it can be extended to cover the entire evening of television (5:30pm to 11:00pm).
https://en.wikipedia.org/wiki?curid=24973
Pelton wheel A Pelton wheel is an impulse-type water turbine invented by Lester Allan Pelton in the 1870s. The Pelton wheel extracts energy from the impulse of moving water, as opposed to water's dead weight like the traditional overshot water wheel. Many earlier variations of impulse turbines existed, but they were less efficient than Pelton's design. Water leaving those wheels typically still had high speed, carrying away much of the dynamic energy brought to the wheels. Pelton's paddle geometry was designed so that when the rim ran at half the speed of the water jet, the water left the wheel with very little speed; thus his design extracted almost all of the water's impulse energy, which allowed for a very efficient turbine. Lester Allan Pelton was born in Vermillion, Ohio in 1829. In 1850, he travelled overland to take part in the California Gold Rush. Pelton made a living selling fish he caught in the Sacramento River. In 1860, he moved to Camptonville, a center of placer mining activity. At this time many mining operations were powered by steam engines, which consumed vast amounts of wood as their fuel. Some water wheels were used in the larger rivers, but they were ineffective in the smaller streams that were found near the mines. Pelton worked on a design for a water wheel that would work with the relatively small flow found in these streams. By the mid-1870s, Pelton had developed a wooden prototype of his new wheel. In 1876, he approached the Miners Foundry in Nevada City, California, to build the first commercial models in iron. The first Pelton wheel was installed at the Mayflower Mine in Nevada City in 1878. The efficiency advantages of Pelton's invention were quickly recognized and his product was soon in high demand. He patented his invention on 26 October 1880. By the mid-1880s, the Miners Foundry could not meet the demand, and in 1888, Pelton sold the rights to his name and the patents to his invention to the Pelton Water Wheel Company in San Francisco.
The company established a factory at 121/123 Main Street in San Francisco. The Pelton Water Wheel Company manufactured a large number of Pelton wheels in San Francisco, which were shipped around the world. In 1892, the company added a branch on the east coast at 143 Liberty Street in New York City. By 1900, over 11,000 turbines were in use. In 1914, the company moved manufacturing to new, larger premises at 612 Alabama Street in San Francisco. In 1956, the company was acquired by the Baldwin-Lima-Hamilton Company, which ended manufacture of Pelton wheels. In New Zealand, A & G Price in Thames produced Pelton waterwheels for the local market; one of these is on outdoor display at the Thames Goldmine Experience. Nozzles direct forceful, high-speed streams of water against a series of spoon-shaped buckets, also known as impulse blades, which are mounted around the outer rim of a drive wheel (also called a "runner"). As the water jet hits the blades, the direction of water velocity is changed to follow the contours of the blades. The impulse energy of the water jet exerts torque on the bucket-and-wheel system, spinning the wheel; the water jet does a "u-turn" and exits at the outer sides of the bucket, decelerated to a low velocity. In the process, the water jet's momentum is transferred to the wheel and hence to the turbine shaft. Thus, "impulse" energy does work on the turbine. Maximum power and efficiency are achieved when the velocity of the water jet is twice the velocity of the rotating buckets. A very small percentage of the water jet's original kinetic energy will remain in the water, which causes the bucket to be emptied at the same rate it is filled, and thereby allows the high-pressure input flow to continue uninterrupted and without waste of energy.
Typically two buckets are mounted side-by-side on the wheel, with the water jet split into two equal streams; this balances the side-load forces on the wheel and helps to ensure smooth, efficient transfer of momentum from the water jet to the turbine wheel. Because water is nearly incompressible, almost all of the available energy is extracted in the first stage of the hydraulic turbine. Therefore, Pelton wheels have only one turbine stage, unlike gas turbines that operate with compressible fluid. Pelton wheels are the preferred turbine for hydro-power where the available water source has relatively high hydraulic head at low flow rates. Pelton wheels are made in all sizes. There exist multi-ton Pelton wheels mounted on vertical oil pad bearings in hydroelectric plants. The largest units – at the Bieudron Hydroelectric Power Station at the Grande Dixence Dam complex in Switzerland – are over 400 megawatts. The smallest Pelton wheels are only a few inches across, and can be used to tap power from mountain streams having flows of a few gallons per minute. Some of these systems use household plumbing fixtures for water delivery. These small units are recommended for use with or more of head, in order to generate significant power levels. Depending on water flow and design, Pelton wheels operate best with heads from , although there is no theoretical limit. The specific speed formula_1 parameter is independent of a particular turbine's size. Compared to other turbine designs, the relatively low specific speed of the Pelton wheel implies that the geometry is inherently a "low gear" design. Thus it is most suited to being fed by a hydro source with a low ratio of flow to pressure (meaning relatively low flow and/or relatively high pressure). The specific speed is the main criterion for matching a specific hydro-electric site with the optimal turbine type. It also allows a new turbine design to be scaled from an existing design of known performance.
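The role specific speed plays in turbine selection can be sketched numerically. The snippet below uses the standard dimensional (metric) definition, n_s = n·√P / H^(5/4), as an assumption, since the article's own formula placeholder is not reproduced here; the function name and the example site figures are hypothetical.

```python
def specific_speed(n_rpm: float, power_kw: float, head_m: float) -> float:
    """Dimensional (metric) specific speed: n_s = n * sqrt(P) / H**(5/4).

    Low values of n_s correspond to high-head, low-flow runners such as
    the Pelton wheel; high values to propeller-type runners.
    """
    return n_rpm * power_kw ** 0.5 / head_m ** 1.25

# Hypothetical high-head site: a 500 rpm runner producing 1 MW under 400 m of head.
ns = specific_speed(500.0, 1000.0, 400.0)  # a low n_s, in Pelton territory
```

Because n_s is independent of size, two geometrically similar runners share the same specific speed, which is what allows a new design to be scaled from an existing one of known performance.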
The specific speed is given by n_s = n·√P / H^(5/4) (a dimensioned parameter), where n is the rotational speed, P is the power, and H is the hydraulic head. The formula implies that the Pelton turbine is "geared" most suitably for applications with relatively high hydraulic head H, due to the 5/4 exponent being greater than unity, and given the characteristically low specific speed of the Pelton. In the ideal (frictionless) case, all of the hydraulic potential energy (E_p = mgh) is converted into kinetic energy (E_k = mv²/2) (see Bernoulli's principle). Equating these two expressions and solving for the initial jet velocity (V_i) indicates that the theoretical (maximum) jet velocity is V_i = √(2gh). For simplicity, assume that all of the velocity vectors are parallel to each other. Defining the velocity of the wheel runner as u, then as the jet approaches the runner, the initial jet velocity relative to the runner is (V_i − u). Assuming that the jet velocity is higher than the runner velocity, if the water is not to become backed up in the runner, then due to conservation of mass, the mass entering the runner must equal the mass leaving the runner. The fluid is assumed to be incompressible (an accurate assumption for most liquids). It is also assumed that the cross-sectional area of the jet is constant. The jet speed remains constant relative to the runner. So as the jet recedes from the runner, the jet velocity relative to the runner is −(V_i − u) = −V_i + u. In the standard reference frame (relative to the earth), the final velocity is then V_f = (−V_i + u) + u = −V_i + 2u. The ideal runner speed will cause all of the kinetic energy in the jet to be transferred to the wheel, in which case the final jet velocity must be zero. Setting −V_i + 2u = 0, the optimal runner speed is u = V_i /2, or half the initial jet velocity. 
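The ideal jet velocity and optimal runner speed derived above can be checked with a short calculation; the 100 m head used here is an illustrative value.

```python
# Sketch of the ideal Pelton jet kinematics: jet velocity from Bernoulli,
# and the final jet velocity -V_i + 2u in the earth frame.
import math

g = 9.81  # gravitational acceleration, m/s^2

def jet_velocity(head_m):
    """Ideal (frictionless) jet velocity: V_i = sqrt(2*g*h)."""
    return math.sqrt(2 * g * head_m)

def final_jet_velocity(v_i, u):
    """Jet velocity in the earth frame after the u-turn: -V_i + 2u."""
    return -v_i + 2 * u

v_i = jet_velocity(100)   # ~44.3 m/s for an assumed 100 m head
u_opt = v_i / 2           # optimal runner speed: half the jet velocity
print(final_jet_velocity(v_i, u_opt))  # 0.0 — all jet kinetic energy transferred
```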
By Newton's second and third laws, the force F imposed by the jet on the runner is equal but opposite to the rate of momentum change of the fluid, so F = 2ρQ(V_i − u), where ρ is the density and Q is the volume rate of flow of fluid. If D is the wheel diameter, the torque on the runner is T = FD/2 = ρQD(V_i − u). The torque is maximal when the runner is stopped (i.e. when u = 0, T = ρQDV_i). When the speed of the runner is equal to the initial jet velocity, the torque is zero (i.e. when u = V_i, then T = 0). On a plot of torque versus runner speed, the torque curve is straight between these two points: (0, ρQDV_i) and (V_i, 0). Nozzle efficiency is the ratio of the jet power to the water power at the base of the nozzle. The power P = Fu = Tω, where ω is the angular velocity of the wheel. Substituting for F, we have P = 2ρQ(V_i − u)u. To find the runner speed at maximum power, take the derivative of P with respect to u and set it equal to zero: dP/du = 2ρQ(V_i − 2u) = 0. Maximum power occurs when u = V_i /2, giving P_max = ρQV_i²/2. Substituting the initial jet velocity V_i = √(2gh), this simplifies to P_max = ρghQ. This quantity exactly equals the kinetic power of the jet, so in this ideal case the efficiency is 100%, since all the energy in the jet is converted to shaft output. The wheel power divided by the initial jet power is the turbine efficiency, η = 4u(V_i − u)/V_i². It is zero for u = 0 and for u = V_i. As the equations indicate, when a real Pelton wheel is working close to maximum efficiency, the fluid flows off the wheel with very little residual velocity. In theory, the energy efficiency varies only with the efficiency of the nozzle and wheel, and does not vary with hydraulic head. The term "efficiency" can refer to hydraulic, mechanical, volumetric, wheel, or overall efficiency. The conduit bringing high-pressure water to the impulse wheel is called the penstock. 
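The force, torque, power, and efficiency relations above can be verified numerically; the flow, diameter, and jet-velocity figures below are illustrative assumptions.

```python
# Sketch of the ideal Pelton force/torque/power relations.
rho = 1000.0   # water density, kg/m^3
Q = 0.5        # volumetric flow, m^3/s (assumed example value)
D = 1.0        # wheel diameter, m (assumed example value)
v_i = 40.0     # jet velocity, m/s (assumed example value)

def force(u):       return 2 * rho * Q * (v_i - u)       # N
def torque(u):      return rho * Q * D * (v_i - u)       # N*m (T = F*D/2)
def power(u):       return force(u) * u                  # W
def efficiency(u):  return 4 * u * (v_i - u) / v_i ** 2  # dimensionless

assert torque(0) == rho * Q * D * v_i   # stall: maximum torque
assert torque(v_i) == 0                 # runaway: zero torque
print(power(v_i / 2))                   # maximum power at u = V_i/2
print(efficiency(v_i / 2))              # 1.0 in the ideal case
```

The straight-line torque curve and the parabolic power curve follow directly from these definitions.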
Originally the penstock was the name of the valve, but the term has been extended to include all of the fluid supply hydraulics. Penstock is now used as a general term for a water passage and control that is under pressure, whether it supplies an impulse turbine or not.
https://en.wikipedia.org/wiki?curid=24974
Piezoelectricity Piezoelectricity is the electric charge that accumulates in certain solid materials (such as crystals, certain ceramics, and biological matter such as bone, DNA and various proteins) in response to applied mechanical stress. The word "piezoelectricity" means electricity resulting from pressure and latent heat. It is derived from the Greek "piezein", which means to squeeze or press, and "ēlektron", which means amber, an ancient source of electric charge. French physicists Jacques and Pierre Curie discovered piezoelectricity in 1880. The piezoelectric effect results from the linear electromechanical interaction between the mechanical and electrical states in crystalline materials with no inversion symmetry. The piezoelectric effect is a reversible process: materials exhibiting the piezoelectric effect (the internal generation of electrical charge resulting from an applied mechanical force) also exhibit the reverse piezoelectric effect, the internal generation of a mechanical strain resulting from an applied electrical field. For example, lead zirconate titanate crystals will generate measurable piezoelectricity when their static structure is deformed by about 0.1% of the original dimension. Conversely, those same crystals will change about 0.1% of their static dimension when an external electric field is applied to the material. The inverse piezoelectric effect is used in the production of ultrasonic sound waves. Piezoelectricity is exploited in a number of useful applications, such as the production and detection of sound, piezoelectric inkjet printing, generation of high voltages, electronic frequency generation, microbalances, to drive an ultrasonic nozzle, and ultrafine focusing of optical assemblies. It forms the basis for a number of scientific instrumental techniques with atomic resolution, the scanning probe microscopies, such as STM, AFM, MTA, and SNOM. 
It also finds everyday uses such as acting as the ignition source for cigarette lighters, push-start propane barbecues, used as the time reference source in quartz watches, as well as in amplification pickups for some guitars and triggers in most modern electronic drums. The pyroelectric effect, by which a material generates an electric potential in response to a temperature change, was studied by Carl Linnaeus and Franz Aepinus in the mid-18th century. Drawing on this knowledge, both René Just Haüy and Antoine César Becquerel posited a relationship between mechanical stress and electric charge; however, experiments by both proved inconclusive. The first demonstration of the direct piezoelectric effect was in 1880 by the brothers Pierre Curie and Jacques Curie. They combined their knowledge of pyroelectricity with their understanding of the underlying crystal structures that gave rise to pyroelectricity to predict crystal behavior, and demonstrated the effect using crystals of tourmaline, quartz, topaz, cane sugar, and Rochelle salt (sodium potassium tartrate tetrahydrate). Quartz and Rochelle salt exhibited the most piezoelectricity. The Curies, however, did not predict the converse piezoelectric effect. The converse effect was mathematically deduced from fundamental thermodynamic principles by Gabriel Lippmann in 1881. The Curies immediately confirmed the existence of the converse effect, and went on to obtain quantitative proof of the complete reversibility of electro-elasto-mechanical deformations in piezoelectric crystals. For the next few decades, piezoelectricity remained something of a laboratory curiosity, though it was a vital tool in the discovery of polonium and radium by Pierre and Marie Curie in 1898. More work was done to explore and define the crystal structures that exhibited piezoelectricity. 
This culminated in 1910 with the publication of Woldemar Voigt's "Lehrbuch der Kristallphysik" ("Textbook on Crystal Physics"), which described the 20 natural crystal classes capable of piezoelectricity, and rigorously defined the piezoelectric constants using tensor analysis. The first practical application for piezoelectric devices was sonar, first developed during World War I. In France in 1917, Paul Langevin and his coworkers developed an ultrasonic submarine detector. The detector consisted of a transducer, made of thin quartz crystals carefully glued between two steel plates, and a hydrophone to detect the returned echo. By emitting a high-frequency pulse from the transducer, and measuring the amount of time it takes to hear an echo from the sound waves bouncing off an object, one can calculate the distance to that object. The use of piezoelectricity in sonar, and the success of that project, created intense development interest in piezoelectric devices. Over the next few decades, new piezoelectric materials and new applications for those materials were explored and developed. Piezoelectric devices found homes in many fields. Ceramic phonograph cartridges simplified player design, were cheap and accurate, and made record players cheaper to maintain and easier to build. The development of the ultrasonic transducer allowed for easy measurement of viscosity and elasticity in fluids and solids, resulting in huge advances in materials research. Ultrasonic time-domain reflectometers (which send an ultrasonic pulse through a material and measure reflections from discontinuities) could find flaws inside cast metal and stone objects, improving structural safety. During World War II, independent research groups in the United States, Russia, and Japan discovered a new class of synthetic materials, called ferroelectrics, which exhibited piezoelectric constants many times higher than natural materials. 
This led to intense research to develop barium titanate and later lead zirconate titanate materials with specific properties for particular applications. One significant example of the use of piezoelectric crystals was developed by Bell Telephone Laboratories. Following World War I, Frederick R. Lack, working in radio telephony in the engineering department, developed the "AT cut" crystal, a crystal that operated through a wide range of temperatures. Lack's crystal did not need the heavy accessories previous crystals used, facilitating its use on aircraft. This development allowed Allied air forces to engage in coordinated mass attacks through the use of aviation radio. Development of piezoelectric devices and materials in the United States was kept within the companies doing the development, mostly due to the wartime beginnings of the field, and in the interests of securing profitable patents. New materials were the first area of development: quartz crystals were the first commercially exploited piezoelectric material, but scientists searched for higher-performance materials. Despite the advances in materials and the maturation of manufacturing processes, the United States market did not grow as quickly as Japan's did. Without many new applications, the growth of the United States' piezoelectric industry suffered. In contrast, Japanese manufacturers shared their information, quickly overcoming technical and manufacturing challenges and creating new markets. In Japan, a temperature-stable crystal cut was developed by Issac Koga. Japanese efforts in materials research created piezoceramic materials competitive with the United States materials but free of expensive patent restrictions. 
Major Japanese piezoelectric developments included new designs of piezoceramic filters for radios and televisions, piezo buzzers and audio transducers that can connect directly to electronic circuits, and the piezoelectric igniter, which generates sparks for small engine ignition systems and gas-grill lighters, by compressing a ceramic disc. Ultrasonic transducers that transmit sound waves through air had existed for quite some time but first saw major commercial use in early television remote controls. These transducers now are mounted on several car models as an echolocation device, helping the driver determine the distance from the car to any objects that may be in its path. The nature of the piezoelectric effect is closely related to the occurrence of electric dipole moments in solids. The latter may either be induced for ions on crystal lattice sites with asymmetric charge surroundings (as in BaTiO3 and PZTs) or may directly be carried by molecular groups (as in cane sugar). The dipole density or polarization (dimensionality [C·m/m3] ) may easily be calculated for crystals by summing up the dipole moments per volume of the crystallographic unit cell. As every dipole is a vector, the dipole density P is a vector field. Dipoles near each other tend to be aligned in regions called Weiss domains. The domains are usually randomly oriented, but can be aligned using the process of "poling" (not the same as magnetic poling), a process by which a strong electric field is applied across the material, usually at elevated temperatures. Not all piezoelectric materials can be poled. Of decisive importance for the piezoelectric effect is the change of polarization P when applying a mechanical stress. This might either be caused by a reconfiguration of the dipole-inducing surrounding or by re-orientation of molecular dipole moments under the influence of the external stress. 
Piezoelectricity may then manifest in a variation of the polarization strength, its direction or both, with the details depending on: 1. the orientation of P within the crystal; 2. crystal symmetry; and 3. the applied mechanical stress. The change in P appears as a variation of surface charge density upon the crystal faces, i.e. as a variation of the electric field extending between the faces caused by a change in dipole density in the bulk. For example, a 1 cm3 cube of quartz with 2 kN (500 lbf) of correctly applied force can produce a voltage of 12500 V. Piezoelectric materials also show the opposite effect, called the converse piezoelectric effect, where the application of an electrical field creates mechanical deformation in the crystal. Linear piezoelectricity is the combined effect of the linear electrical behavior of the material (D = εE, where D is the electric displacement, ε the permittivity and E the electric field) and Hooke's law for linear elastic materials (S = sT, where S is strain, s compliance and T stress). These may be combined into so-called "coupled equations", of which the strain-charge form is: {S} = [s^E]{T} + [d^t]{E} and {D} = [d]{T} + [ε^T]{E}, where [d] is the matrix for the direct piezoelectric effect and [d^t] is the matrix for the converse piezoelectric effect. The superscript E indicates a zero, or constant, electric field; the superscript T indicates a zero, or constant, stress field; and the superscript t stands for transposition of a matrix. Notice that the third-order piezoelectric tensor d maps vectors into symmetric matrices. There are no non-trivial rotation-invariant tensors that have this property, which is why there are no isotropic piezoelectric materials. The strain-charge form for a material of the 4mm (C4v) crystal class (such as a poled piezoelectric ceramic such as tetragonal PZT or BaTiO3) as well as the 6mm crystal class may also be written out explicitly in matrix form (ANSI IEEE 176), in which the first equation represents the relationship for the converse piezoelectric effect and the latter for the direct piezoelectric effect. Although the above equations are the most used form in literature, some comments about the notation are necessary. 
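A minimal one-dimensional sketch of the strain-charge coupled equations, restricted to the poling axis (axis 3). The coefficient values are merely PZT-like illustrative numbers, not measured data for any specific material.

```python
# 1-D sketch of the strain-charge form along the poling axis:
#   S = s_E * T + d * E   (strain from stress plus converse effect)
#   D = d * T + eps_T * E (displacement from direct effect plus permittivity)
# All coefficient values below are illustrative assumptions only.
d33 = 400e-12     # piezoelectric charge coefficient, C/N (illustrative)
s33_E = 20e-12    # elastic compliance at constant E, 1/Pa (illustrative)
eps33_T = 1.5e-8  # permittivity at constant stress, F/m (illustrative)

def strain(T, E):
    """Strain from applied stress T (Pa) and field E (V/m)."""
    return s33_E * T + d33 * E

def charge_density(T, E):
    """Electric displacement (C/m^2) from stress T and field E."""
    return d33 * T + eps33_T * E

T = 1e6   # 1 MPa applied stress
E = 0.0   # short-circuit condition: no field
print(charge_density(T, E))  # surface charge from the direct effect alone
```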
Generally, D and E are vectors, that is, Cartesian tensors of rank 1; and permittivity ε is a Cartesian tensor of rank 2. Strain and stress are, in principle, also rank-2 tensors. But conventionally, because strain and stress are all symmetric tensors, the subscripts of strain and stress can be relabeled in the following fashion: 11 → 1; 22 → 2; 33 → 3; 23 → 4; 13 → 5; 12 → 6. (Different conventions may be used by different authors in literature. For example, some use 12 → 4; 23 → 5; 31 → 6 instead.) That is why S and T appear to have the "vector form" of six components. Consequently, the compliance s appears to be a 6-by-6 matrix instead of a rank-4 tensor, and d appears to be a 3-by-6 matrix instead of a rank-3 tensor. Such a relabeled notation is often called Voigt notation. Whether the shear strain components S4, S5, S6 are tensor components or engineering strains is another question. In the equation above, they must be engineering strains for the 6,6 coefficient of the compliance matrix to be written as shown, i.e., 2(s11 − s12). Engineering shear strains are double the value of the corresponding tensor shear, such as S6 = 2S12 and so on. This also means that s66 = 1/G12, where G12 is the shear modulus. In total, there are four piezoelectric coefficients, dij, eij, gij, and hij, defined as follows: dij = (∂Di/∂Tj)^E = (∂Sj/∂Ei)^T; eij = (∂Di/∂Sj)^E = −(∂Tj/∂Ei)^S; gij = −(∂Ei/∂Tj)^D = (∂Sj/∂Di)^T; hij = −(∂Ei/∂Sj)^D = −(∂Tj/∂Di)^S, where the first equality in each definition corresponds to the direct piezoelectric effect and the second to the converse piezoelectric effect; the reason why the direct piezoelectric tensor is equal to the transpose of the converse piezoelectric tensor originates from the Maxwell relations in thermodynamics. For those piezoelectric crystals for which the polarization is of the crystal-field induced type, a formalism has been worked out that allows for the calculation of piezoelectric coefficients dij from electrostatic lattice constants or higher-order Madelung constants. 
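The Voigt relabeling described above can be expressed as a small lookup that exploits the symmetry of the index pair:

```python
# Sketch of the Voigt relabeling used above: symmetric index pairs (i, j),
# with i, j in {1, 2, 3}, map to a single index 1..6
# (11 -> 1, 22 -> 2, 33 -> 3, 23 -> 4, 13 -> 5, 12 -> 6).
VOIGT = {(1, 1): 1, (2, 2): 2, (3, 3): 3,
         (2, 3): 4, (1, 3): 5, (1, 2): 6}

def voigt(i, j):
    """Map a symmetric tensor index pair to its Voigt index."""
    # Sort the pair so (3, 2) and (2, 3) give the same answer.
    return VOIGT[(min(i, j), max(i, j))]

print(voigt(3, 2))  # 4 — symmetry makes (3,2) the same as (2,3)
print(voigt(1, 2))  # 6
```

Note this implements the first convention quoted above; the alternative 12 → 4, 23 → 5, 31 → 6 convention would use a different table.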
Of the 32 crystal classes, 21 are non-centrosymmetric (not having a centre of symmetry), and of these, 20 exhibit direct piezoelectricity (the 21st is the cubic class 432). Ten of these represent the polar crystal classes, which show a spontaneous polarization without mechanical stress due to a non-vanishing electric dipole moment associated with their unit cell, and which exhibit pyroelectricity. If the dipole moment can be reversed by applying an external electric field, the material is said to be ferroelectric. For polar crystals, for which P ≠ 0 holds without applying a mechanical load, the piezoelectric effect manifests itself by changing the magnitude or the direction of P or both. For the nonpolar but piezoelectric crystals, on the other hand, a polarization P different from zero is only elicited by applying a mechanical load. For them the stress can be imagined to transform the material from a nonpolar crystal class (P = 0) to a polar one, having P ≠ 0. Many materials exhibit piezoelectricity. Ceramics with randomly oriented grains must be ferroelectric to exhibit piezoelectricity. Macroscopic piezoelectricity is possible in textured polycrystalline non-ferroelectric piezoelectric materials, such as AlN and ZnO. The families of ceramics with perovskite, tungsten-bronze, and related structures exhibit piezoelectricity: So far, neither the environmental effect nor the stability of supplying these substances has been measured. A piezoelectric potential can be created in any bulk or nanostructured semiconductor crystal having non central symmetry, such as the Group III–V and II–VI materials, due to polarization of ions under applied stress and strain. This property is common to both the zincblende and wurtzite crystal structures. To first order, there is only one independent piezoelectric coefficient in zincblende, called e14, coupled to shear components of the strain. 
In wurtzite, there are instead three independent piezoelectric coefficients: e31, e33 and e15. The semiconductors where the strongest piezoelectricity is observed are those commonly found in the wurtzite structure, i.e. GaN, InN, AlN and ZnO (see piezotronics). Since 2006, there have also been a number of reports of strong nonlinear piezoelectric effects in polar semiconductors. Such effects are generally recognized to be at least as important as, if not of the same order of magnitude as, the first-order approximation. The piezo-response of polymers is not as high as the response for ceramics; however, polymers offer properties that ceramics do not. Over the last few decades, non-toxic piezoelectric polymers have been studied and applied due to their flexibility and smaller acoustical impedance. Other properties that make these materials significant include their biocompatibility, biodegradability, low cost, and low power consumption compared to other piezo-materials (ceramics, etc.). Piezoelectric polymers and non-toxic polymer composites can be used given their different physical properties. Piezoelectric polymers can be classified as bulk polymers, voided charged polymers ("piezoelectrets"), and polymer composites. The piezo-response observed in bulk polymers is mostly due to their molecular structure. There are two types of bulk polymers: amorphous and semi-crystalline. Examples of semi-crystalline polymers are Polyvinylidene Fluoride (PVDF) and its copolymers, Polyamides, and Parylene-C. Non-crystalline polymers, such as Polyimide and Polyvinylidene Chloride (PVDC), fall under amorphous bulk polymers. Voided charged polymers exhibit the piezoelectric effect due to charge induced by poling of a porous polymeric film. Under an electric field, charges form on the surface of the voids, forming dipoles. Electric responses can be caused by any deformation of these voids. 
The piezoelectric effect can also be observed in polymer composites by integrating piezoelectric ceramic particles into a polymer film. A polymer does not have to be piezo-active to be an effective material for a polymer composite. In this case, a material could be made up of an inert matrix with a separate piezo-active component. PVDF exhibits piezoelectricity several times greater than quartz. The piezo-response observed from PVDF is about 20–30 pC/N, on the order of 5–50 times less than that of the piezoelectric ceramic lead zirconate titanate (PZT). The thermal stability of the piezoelectric effect of polymers in the PVDF family (i.e. vinylidene fluoride co-poly trifluoroethylene) goes up to 125 °C. Some applications of PVDF are pressure sensors, hydrophones, and shock wave sensors. Due to their flexibility, piezoelectric composites have been proposed as energy harvesters and nanogenerators. In 2018, Zhu et al. reported that a piezoelectric response of about 17 pC/N could be obtained from a PDMS/PZT nanocomposite at 60% porosity. Another PDMS nanocomposite was reported in 2017, in which BaTiO3 was integrated into PDMS to make a stretchable, transparent nanogenerator for self-powered physiological monitoring. In 2016, polar molecules were introduced into a polyurethane foam, in which high responses of up to 244 pC/N were reported. Most materials exhibit at least weak piezoelectric responses. Trivial examples include sucrose (table sugar), DNA, and viral proteins, including those from bacteriophages. An actuator based on wood fibers, called cellulose fibers, has been reported. d33 responses for cellular polypropylene are around 200 pC/N. Some applications of cellular polypropylene are musical key pads, microphones, and ultrasound-based echolocation systems. Currently, the industrial and manufacturing sector is the largest application market for piezoelectric devices, followed by the automotive industry. 
Strong demand also comes from medical instruments as well as information and telecommunications. The global demand for piezoelectric devices was valued at approximately US$14.8 billion in 2010. The largest material group for piezoelectric devices is piezoceramics, and piezopolymer is experiencing the fastest growth due to its low weight and small size. Piezoelectric crystals are now used in numerous ways: Direct piezoelectricity of some substances, like quartz, can generate potential differences of thousands of volts. The principle of operation of a piezoelectric sensor is that a physical dimension, transformed into a force, acts on two opposing faces of the sensing element. Depending on the design of a sensor, different "modes" to load the piezoelectric element can be used: longitudinal, transversal and shear. Detection of pressure variations in the form of sound is the most common sensor application, e.g. piezoelectric microphones (sound waves bend the piezoelectric material, creating a changing voltage) and piezoelectric pickups for acoustic-electric guitars. A piezo sensor attached to the body of an instrument is known as a contact microphone. Piezoelectric sensors especially are used with high frequency sound in ultrasonic transducers for medical imaging and also industrial nondestructive testing (NDT). For many sensing techniques, the sensor can act as both a sensor and an actuator—often the term "transducer" is preferred when the device acts in this dual capacity, but most piezo devices have this property of reversibility whether it is used or not. Ultrasonic transducers, for example, can inject ultrasound waves into the body, receive the returned wave, and convert it to an electrical signal (a voltage). Most medical ultrasound transducers are piezoelectric. 
Various other sensor applications exist in addition to those mentioned above. As very high electric fields correspond to only tiny changes in the width of the crystal, this width can be changed with better-than-µm precision, making piezo crystals the most important tool for positioning objects with extreme accuracy, hence their use in actuators. Multilayer ceramics, using very thin layers, allow high electric fields to be reached at voltages far lower than would otherwise be required. These ceramics are used within two kinds of actuators: direct piezo actuators and amplified piezoelectric actuators. While a direct actuator's stroke is generally small, amplified piezo actuators can reach millimeter strokes. The piezoelectrical properties of quartz are useful as a standard of frequency. Several types of piezoelectric motor exist. Aside from the stepping stick-slip motor, all these motors work on the same principle. Driven by dual orthogonal vibration modes with a phase difference of 90°, the contact point between two surfaces vibrates in an elliptical path, producing a frictional force between the surfaces. Usually, one surface is fixed, causing the other to move. In most piezoelectric motors, the piezoelectric crystal is excited by a sine wave signal at the resonant frequency of the motor. Using the resonance effect, a much lower voltage can be used to produce a high vibration amplitude. A stick-slip motor works using the inertia of a mass and the friction of a clamp. Such motors can be very small. Some are used for camera sensor displacement, thus allowing an anti-shake function. Different teams of researchers have been investigating ways to reduce vibrations in materials by attaching piezo elements to the material. When the material is bent by a vibration in one direction, the vibration-reduction system responds to the bend and sends electric power to the piezo element to bend in the other direction. Future applications of this technology are expected in cars and houses to reduce noise. 
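The elliptical contact-point path produced by two orthogonal modes driven 90° out of phase can be sketched as follows; the modal amplitudes are arbitrary illustrative values.

```python
# Two orthogonal vibration modes, 90 degrees out of phase, trace an ellipse:
# the contact-point motion that piezoelectric motors use to drag one surface
# past another. Amplitudes below are arbitrary.
import math

ax, ay = 1.0, 0.5   # modal amplitudes (arbitrary units)

def contact_point(t, omega=1.0):
    x = ax * math.cos(omega * t)                 # mode 1
    y = ay * math.cos(omega * t - math.pi / 2)   # mode 2, 90 degrees behind
    return x, y

# Every point satisfies the ellipse equation (x/ax)^2 + (y/ay)^2 = 1.
for t in (0.0, 0.7, 2.1):
    x, y = contact_point(t)
    print(round((x / ax) ** 2 + (y / ay) ** 2, 6))  # 1.0 each time
```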
Further applications to flexible structures, such as shells and plates, have also been studied for nearly three decades. In a demonstration at the Material Vision Fair in Frankfurt in November 2005, a team from TU Darmstadt in Germany showed several panels that were hit with a rubber mallet, and the panel with the piezo element immediately stopped swinging. Piezoelectric ceramic fiber technology is being used as an electronic damping system on some HEAD tennis rackets. In people with previous total fertilization failure, piezoelectric activation of oocytes together with intracytoplasmic sperm injection (ICSI) seems to improve fertilization outcomes. Piezosurgery is a minimally invasive technique that aims to cut a target tissue with little damage to neighboring tissues. For example, Hoigne et al. use frequencies in the range 25–29 kHz, causing microvibrations of 60–210 μm. It has the ability to cut mineralized tissue without cutting neurovascular tissue and other soft tissue, thereby maintaining a blood-free operating area, better visibility and greater precision. In 2015, Cambridge University researchers, working in conjunction with researchers from the National Physical Laboratory and Cambridge-based dielectric antenna company Antenova Ltd and using thin films of piezoelectric materials, found that at a certain frequency these materials become not only efficient resonators, but efficient radiators as well, meaning that they can potentially be used as antennas. The researchers found that by subjecting the piezoelectric thin films to an asymmetric excitation, the symmetry of the system is similarly broken, resulting in a corresponding symmetry breaking of the electric field, and the generation of electromagnetic radiation. Several attempts at the macro-scale application of the piezoelectric technology have emerged to harvest kinetic energy from walking pedestrians. 
In this case, locating high-traffic areas is critical for optimization of the energy harvesting efficiency, and the orientation of the tile pavement also significantly affects the total amount of the harvested energy. A density flow evaluation is recommended to qualitatively evaluate the piezoelectric power harvesting potential of the considered area based on the number of pedestrian crossings per unit time. In X. Li's study, the potential application of a commercial piezoelectric energy harvester in a central hub building at Macquarie University in Sydney, Australia is examined and discussed. Optimization of the piezoelectric tile deployment is presented according to the frequency of pedestrian mobility, and a model is developed where 3.1% of the total floor area with the highest pedestrian mobility is paved with piezoelectric tiles. The modelling results indicate that the total annual energy harvesting potential for the proposed optimized tile pavement model is estimated at 1.1 MW h/year, which would be sufficient to meet close to 0.5% of the annual energy needs of the building. In Israel, there is a company which has installed piezoelectric materials under a busy highway. The energy generated is adequate to power street lights, billboards and signs. Tire company Goodyear has plans to develop an electricity-generating tire lined with piezoelectric material. As the tire moves, it deforms and thus electricity is generated. The efficiency of a hybrid photovoltaic cell that contains piezoelectric materials can be increased simply by placing it near a source of ambient noise or vibration. The effect was demonstrated with organic cells using zinc oxide nanotubes. The electricity generated by the piezoelectric effect itself is a negligible percentage of the overall output. Sound levels as low as 75 decibels improved efficiency by up to 50%. Efficiency peaked at 10 kHz, the resonant frequency of the nanotubes. 
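A quick back-of-envelope check of the figures quoted from X. Li's study: if 1.1 MWh/year covers about 0.5% of the building's needs, the implied annual demand of the building is roughly 220 MWh.

```python
# Sanity check of the quoted harvesting figures: harvested energy divided by
# the fraction of demand it covers gives the implied total demand.
harvested_mwh = 1.1        # MWh/year, from the study as quoted above
fraction_of_needs = 0.005  # "close to 0.5%" of the building's annual needs
implied_building_demand = harvested_mwh / fraction_of_needs
print(round(implied_building_demand))  # ~220 MWh/year implied demand
```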
The electrical field set up by the vibrating nanotubes interacts with electrons migrating from the organic polymer layer. This process decreases the likelihood of recombination, in which electrons are energized but settle back into a hole instead of migrating to the electron-accepting ZnO layer.
https://en.wikipedia.org/wiki?curid=24975
Product (mathematics) In mathematics, a product is the result of multiplying, or an expression that identifies factors to be multiplied. Thus, for instance, 30 is the product of 6 and 5 (the result of multiplication), and x·y is the product of x and y (indicating that the two factors should be multiplied together). The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, and multiplication in various other algebras is in general non-commutative. There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many different algebraic structures. Placing several stones into a rectangular pattern with r rows and s columns gives r·s stones. Another approach to multiplication that applies also to real numbers is continuously stretching the number line from 0, so that the 1 is stretched to one factor, and looking up the product where the other factor is stretched to. Integers allow positive and negative numbers. Their product is determined by the product of their positive amounts, combined with the sign derived from the following rule, which is a necessary consequence of demanding distributivity of the multiplication over addition, but is "no additional rule". In words: the product of two negative numbers is positive, and the product of a negative and a positive number is negative. Two fractions can be multiplied by multiplying their numerators and denominators: (a/b)·(c/d) = (a·c)/(b·d). For a rigorous definition of the product of two real numbers see Construction of the real numbers. 
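The sign rule and the fraction rule above can be illustrated with Python's exact Fraction type:

```python
# The sign rule for integers and numerator/denominator multiplication for
# fractions, sketched with exact rational arithmetic.
from fractions import Fraction

# Sign rule: product of the magnitudes, sign derived from the factors' signs.
assert (-6) * (-5) == 30   # minus times minus gives plus
assert (-6) * 5 == -30     # minus times plus gives minus

# Fractions: multiply numerators and denominators (2*9)/(3*4) = 18/12 = 3/2.
a, b = Fraction(2, 3), Fraction(9, 4)
print(a * b)  # 3/2, automatically reduced to lowest terms
```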
Two complex numbers can be multiplied by the distributive law and the fact that i² = −1, as follows: (a + bi)(c + di) = (ac − bd) + (ad + bc)i. Complex numbers can also be written in polar coordinates, from which one obtains r₁e^(iφ₁) · r₂e^(iφ₂) = r₁r₂ e^(i(φ₁ + φ₂)). The geometric meaning is that the magnitudes are multiplied and the arguments are added. The product of two quaternions can be found in the article on quaternions. However, in this case, p · q and q · p are in general different. The product operator for the product of a sequence is denoted by the capital Greek letter pi ∏ (in analogy to the use of the capital sigma ∑ as summation symbol). The product of a sequence consisting of only one number is just that number itself. The product of no factors at all is known as the empty product, and is equal to 1. Commutative rings have a product operation. Residue classes in the rings Z/NZ can be added and multiplied: (a + NZ) + (b + NZ) = (a + b) + NZ and (a + NZ) · (b + NZ) = (a · b) + NZ. Two functions from the reals to itself can be multiplied in another way, called the convolution. If f and g are integrable functions, then the integral (f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ is well defined and is called the convolution. Under the Fourier transform, convolution becomes point-wise function multiplication. The product of the two polynomials ∑ᵢ aᵢ xⁱ and ∑ⱼ bⱼ xʲ is the polynomial ∑ₖ cₖ xᵏ with cₖ = ∑_{i+j=k} aᵢ bⱼ. There are many different kinds of products in linear algebra; some of these have confusingly similar names (outer product, exterior product) but have very different meanings, while others have very different names (outer product, tensor product, Kronecker product) but convey essentially the same idea. A brief overview of these is given here. By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map K × V → V. A scalar product is a bi-linear map V × V → K with the condition that v · v > 0 for all nonzero v. From the scalar product, one can define a norm by letting ‖v‖ = √(v · v). 
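A few of these facts can be demonstrated numerically (a hedged sketch using only the Python standard library; `poly_mul` is a helper written here for illustration, not a library function):

```python
import cmath
import math

# Complex multiplication: magnitudes multiply and arguments add.
z1 = cmath.rect(2.0, math.pi / 6)   # magnitude 2, argument 30 degrees
z2 = cmath.rect(3.0, math.pi / 3)   # magnitude 3, argument 60 degrees
z = z1 * z2
assert math.isclose(abs(z), 6.0)
assert math.isclose(cmath.phase(z), math.pi / 2)

# The empty product equals 1, the multiplicative identity.
assert math.prod([]) == 1

# Polynomial product: c_k is the sum over i + j = k of a_i * b_j.
def poly_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + x) * (1 + 2x + x^2) = 1 + 3x + 3x^2 + x^3
assert poly_mul([1, 1], [1, 2, 1]) == [1, 3, 3, 1]
```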
The scalar product also allows one to define an angle θ between two vectors, via v · w = ‖v‖ ‖w‖ cos θ. In n-dimensional Euclidean space, the standard scalar product (called the dot product) is given by x · y = x₁y₁ + x₂y₂ + … + xₙyₙ. The cross product of two vectors in 3 dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors. The cross product can also be expressed as a formal determinant. A linear mapping can be defined as a function "f" between two vector spaces "V" and "W" with underlying field K, satisfying f(v + w) = f(v) + f(w) and f(λv) = λ f(v). If one only considers finite-dimensional vector spaces, then f(v) = vᵢ f(bV,ᵢ), in which bV and bW denote the bases of "V" and "W", vᵢ denotes the component of v on bV,ᵢ, and the Einstein summation convention is applied. Now consider the composition of two linear mappings between finite-dimensional vector spaces. Let the linear mapping "f" map "V" to "W", and let the linear mapping "g" map "W" to "U". Then the composition g ∘ f maps "V" to "U", and in matrix form it is represented by the matrix product G F, in which the "i"-row, "j"-column element of F, denoted by "Fij", is "fji", and "Gij" = "gji". The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplications. Given two matrices A (of size m × n) and B (of size n × p), their product AB is the m × p matrix with entries (AB)ᵢₖ = ∑ⱼ Aᵢⱼ Bⱼₖ. There is a relationship between the composition of linear functions and the product of two matrices. To see this, let r = dim(U), s = dim(V) and t = dim(W) be the (finite) dimensions of vector spaces U, V and W. Let u₁, …, uᵣ be a basis of U, v₁, …, vₛ be a basis of V and w₁, …, wₜ be a basis of W. In terms of these bases, let A be the matrix representing f : U → V and B be the matrix representing g : V → W. Then the product BA is the matrix representing g ∘ f : U → W. In other words: the matrix product is the description in coordinates of the composition of linear functions. Given two finite-dimensional vector spaces "V" and "W", the tensor product of them can be defined as a (2,0)-tensor satisfying (v ⊗ w)(φ, ψ) = φ(v) ψ(w) for φ ∈ V* and ψ ∈ W*, where "V*" and "W*" denote the dual spaces of "V" and "W". 
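The claim that the matrix product describes composition of linear maps, along with the dot and cross products, can be spot-checked numerically (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)

# f : V -> W and g : W -> U as matrices (dim V = 3, dim W = 4, dim U = 2).
F = rng.standard_normal((4, 3))  # represents f
G = rng.standard_normal((2, 4))  # represents g
v = rng.standard_normal(3)

# Applying f then g agrees with applying the single product matrix G F.
assert np.allclose(G @ (F @ v), (G @ F) @ v)

# Dot product and cross product in 3 dimensions.
x, y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])
assert np.dot(x, y) == 0.0                # perpendicular vectors
c = np.cross(x, y)
assert np.allclose(c, [0.0, 0.0, 2.0])    # length = area of the rectangle
```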
For infinite-dimensional vector spaces, one also has the tensor product of Hilbert spaces and the topological tensor product. The tensor product, outer product and Kronecker product all convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously fixed basis, whereas the tensor product is usually given in its intrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices). In general, whenever one has two mathematical objects that can be combined in a way that behaves like a linear algebra tensor product, then this can be most generally understood as the internal product of a monoidal category. That is, the monoidal category captures precisely the meaning of a tensor product; it captures exactly the notion of why it is that tensor products behave the way they do. More precisely, a monoidal category is the class of all things (of a given type) that have a tensor product. Other kinds of products in linear algebra include the Hadamard product, the entrywise product of two matrices of the same shape. In set theory, a Cartesian product is a mathematical operation which returns a set (or product set) from multiple sets. That is, for sets "A" and "B", the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B. The class of all things (of a given type) that have Cartesian products is called a Cartesian category. Many of these are Cartesian closed categories. Sets are an example of such objects. The empty product on numbers and most algebraic structures has the value of 1 (the identity element of multiplication) just like the empty sum has the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment in logic, set theory, computer programming and category theory. 
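The relationship between the outer product and the Kronecker product, and the definition of the Cartesian product as a set of ordered pairs, can both be illustrated in a few lines (a sketch assuming NumPy is available):

```python
import itertools
import numpy as np

u = np.array([1, 2])
v = np.array([3, 4, 5])

# The outer product of two vectors contains the same entries u_i * v_j
# as their Kronecker product, merely arranged as a matrix.
outer = np.outer(u, v)       # 2 x 3 matrix
kron = np.kron(u, v)         # flat vector of length 6
assert np.array_equal(outer.ravel(), kron)

# The Cartesian product of two sets: all ordered pairs (a, b).
A, B = {1, 2}, {'x', 'y'}
pairs = set(itertools.product(A, B))
assert pairs == {(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')}
```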
Products are also defined over many other kinds of algebraic structures. A few of the above products are examples of the general notion of an internal product in a monoidal category; the rest are describable by the general notion of a product in category theory. All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, see product (category theory), which describes how to combine two objects of some kind to create an object, possibly of a different kind. Category theory also supplies further product notions of its own, such as the fiber product or pullback.
https://en.wikipedia.org/wiki?curid=24977
4-polytope In geometry, a 4-polytope (sometimes also called a polychoron, polycell, or polyhedroid) is a four-dimensional polytope. It is a connected and closed figure, composed of lower-dimensional polytopal elements: vertices, edges, faces (polygons), and cells (polyhedra). Each face is shared by exactly two cells. The two-dimensional analogue of a 4-polytope is a polygon, and the three-dimensional analogue is a polyhedron. Topologically 4-polytopes are closely related to the uniform honeycombs, such as the cubic honeycomb, which tessellate 3-space; similarly the 3D cube is related to the infinite 2D square tiling. Convex 4-polytopes can be "cut and unfolded" as nets in 3-space. A 4-polytope is a closed four-dimensional figure. It comprises vertices (corner points), edges, faces and cells. A cell is the three-dimensional analogue of a face, and is therefore a polyhedron. Each face must join exactly two cells, analogous to the way in which each edge of a polyhedron joins just two faces. Like any polytope, the elements of a 4-polytope cannot be subdivided into two or more sets which are also 4-polytopes, i.e. it is not a compound. The most familiar 4-polytope is the tesseract or hypercube, the 4D analogue of the cube. 4-polytopes cannot be seen in three-dimensional space due to their extra dimension. Several techniques are used to help visualise them. Orthogonal projections can be used to show various symmetry orientations of a 4-polytope. They can be drawn in 2D as vertex-edge graphs, and can be shown in 3D with solid faces as visible projective envelopes. Just as a 3D shape can be projected onto a flat sheet, so a 4-D shape can be projected onto 3-space or even onto a flat sheet. One common projection is a Schlegel diagram which uses stereographic projection of points on the surface of a 3-sphere into three dimensions, connected by straight edges, faces, and cells drawn in 3-space. 
Just as a slice through a polyhedron reveals a cut surface, so a slice through a 4-polytope reveals a cut "hypersurface" in three dimensions. A sequence of such sections can be used to build up an understanding of the overall shape. The extra dimension can be equated with time to produce a smooth animation of these cross sections. A net of a 4-polytope is composed of polyhedral cells that are connected by their faces and all occupy the same three-dimensional space, just as the polygon faces of a net of a polyhedron are connected by their edges and all occupy the same plane. The topology of any given 4-polytope is defined by its Betti numbers and torsion coefficients. The value of the Euler characteristic used to characterise polyhedra does not generalize usefully to higher dimensions, and is zero for all 4-polytopes, whatever their underlying topology. This inadequacy of the Euler characteristic to reliably distinguish between different topologies in higher dimensions led to the discovery of the more sophisticated Betti numbers. Similarly, the notion of orientability of a polyhedron is insufficient to characterise the surface twistings of toroidal 4-polytopes, and this led to the use of torsion coefficients. Like all polytopes, 4-polytopes may be classified based on properties like "convexity" and "symmetry". Classified according to these criteria, the main categories are: uniform 4-polytopes (vertex-transitive); other convex 4-polytopes; infinite uniform 4-polytopes of Euclidean 3-space (uniform tessellations of convex uniform cells); infinite uniform 4-polytopes of hyperbolic 3-space (uniform tessellations of convex uniform cells); dual uniform 4-polytopes (cell-transitive); and abstract regular 4-polytopes. These categories include only the 4-polytopes that exhibit a high degree of symmetry. Many other 4-polytopes are possible, but they have not been studied as extensively as the ones included in these categories.
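The claim that the Euler characteristic V − E + F − C vanishes for every 4-polytope can be spot-checked against the standard element counts of the six regular convex 4-polytopes (a small sketch, not part of the original article):

```python
# (name, vertices, edges, faces, cells) of the six regular convex 4-polytopes
regular_4_polytopes = [
    ("5-cell",    5,   10,   10,   5),
    ("tesseract", 16,  32,   24,   8),
    ("16-cell",   8,   24,   32,   16),
    ("24-cell",   24,  96,   96,   24),
    ("120-cell",  600, 1200, 720,  120),
    ("600-cell",  120, 720,  1200, 600),
]

for name, v, e, f, c in regular_4_polytopes:
    # Alternating sum generalizing V - E + F for polyhedra.
    assert v - e + f - c == 0, name
```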
https://en.wikipedia.org/wiki?curid=24979
Punctuated equilibrium In evolutionary biology, punctuated equilibrium (also called punctuated equilibria) is a theory that proposes that once a species appears in the fossil record, the population will become stable, showing little evolutionary change for most of its geological history. This state of little or no morphological change is called "stasis". When significant evolutionary change occurs, the theory proposes that it is generally restricted to rare and geologically rapid events of branching speciation called cladogenesis. Cladogenesis is the process by which a species splits into two distinct species, rather than one species gradually transforming into another. Punctuated equilibrium is commonly contrasted against phyletic gradualism, the idea that evolution generally occurs uniformly and by the steady and gradual transformation of whole lineages (called anagenesis). In this view, evolution is seen as generally smooth and continuous. In 1972, paleontologists Niles Eldredge and Stephen Jay Gould published a landmark paper developing their theory and called it "punctuated equilibria". Their paper built upon Ernst Mayr's model of geographic speciation, I. Michael Lerner's theories of developmental and genetic homeostasis, and their own empirical research. Eldredge and Gould proposed that the degree of gradualism commonly attributed to Charles Darwin is virtually nonexistent in the fossil record, and that stasis dominates the history of most fossil species. Punctuated equilibrium originated as a logical consequence of Ernst Mayr's concept of genetic revolutions by allopatric and especially peripatric speciation as applied to the fossil record. Although the sudden appearance of species and its relationship to speciation was proposed and identified by Mayr in 1954, historians of science generally recognize the 1972 Eldredge and Gould paper as the basis of the new paleobiological research program. 
Punctuated equilibrium differs from Mayr's ideas mainly in that Eldredge and Gould placed considerably greater emphasis on stasis, whereas Mayr was concerned with explaining the morphological discontinuity (or "sudden jumps") found in the fossil record. Mayr later complimented Eldredge and Gould's paper, stating that evolutionary stasis had been "unexpected by most evolutionary biologists" and that punctuated equilibrium "had a major impact on paleontology and evolutionary biology." A year before the 1972 Eldredge and Gould paper, Niles Eldredge published a paper in the journal "Evolution" which suggested that gradual evolution was seldom seen in the fossil record and argued that Ernst Mayr's standard mechanism of allopatric speciation might suggest a possible resolution. The Eldredge and Gould paper was presented at the Annual Meeting of the Geological Society of America in 1971. The symposium focused its attention on how modern microevolutionary studies could revitalize various aspects of paleontology and macroevolution. Tom Schopf, who organized that year's meeting, assigned Gould the topic of speciation. Gould recalls that "Eldredge's 1971 publication [on Paleozoic trilobites] had presented the only new and interesting ideas on the paleontological implications of the subject—so I asked Schopf if we could present the paper jointly." According to Gould "the ideas came mostly from Niles, with yours truly acting as a sounding board and eventual scribe. I coined the term "punctuated equilibrium" and wrote most of our 1972 paper, but Niles is the proper first author in our pairing of Eldredge and Gould." In his book "Time Frames" Eldredge recalls that after much discussion the pair "each wrote roughly half. Some of the parts that would seem obviously the work of one of us were actually first penned by the other—I remember for example, writing the section on Gould's snails. Other parts are harder to reconstruct. 
Gould edited the entire manuscript for better consistency. We sent it in, and Schopf reacted strongly against it—thus signaling the tenor of the reaction it has engendered, though for shifting reasons, down to the present day." John Wilkins and Gareth Nelson have argued that French architect Pierre Trémaux proposed an "anticipation of the theory of punctuated equilibrium of Gould and Eldredge." The fossil record includes well documented examples of both phyletic gradualism and punctuational evolution. As such, much debate persists over the prominence of stasis in the fossil record. Before punctuated equilibrium, most evolutionists considered stasis to be rare or unimportant. The paleontologist George Gaylord Simpson, for example, believed that phyletic gradual evolution (called "horotely" in his terminology) comprised 90% of evolution. More modern studies, including a meta-analysis examining 58 published studies on speciation patterns in the fossil record showed that 71% of species exhibited stasis, and 63% were associated with punctuated patterns of evolutionary change. According to Michael Benton, "it seems clear then that stasis is common, and that had not been predicted from modern genetic studies." A paramount example of evolutionary stasis is the fern "Osmunda claytoniana". Based on paleontological evidence it has remained unchanged, even at the level of fossilized nuclei and chromosomes, for at least 180 million years. When Eldredge and Gould published their 1972 paper, allopatric speciation was considered the "standard" model of speciation. This model was popularized by Ernst Mayr in his 1954 paper "Change of genetic environment and evolution," and his classic volume "Animal Species and Evolution" (1963). Allopatric speciation suggests that species with large central populations are stabilized by their large volume and the process of gene flow. 
New and even beneficial mutations are diluted by the population's large size and are unable to reach fixation, due to such factors as constantly changing environments. If this is the case, then the transformation of whole lineages should be rare, as the fossil record indicates. Smaller populations on the other hand, which are isolated from the parental stock, are decoupled from the homogenizing effects of gene flow. In addition, pressure from natural selection is especially intense, as peripheral isolated populations exist at the outer edges of ecological tolerance. If most evolution happens in these rare instances of allopatric speciation then evidence of gradual evolution in the fossil record should be rare. This hypothesis was alluded to by Mayr in the closing paragraph of his 1954 paper: Although punctuated equilibrium generally applies to sexually reproducing organisms, some biologists have applied the model to non-sexual species like viruses, which cannot be stabilized by conventional gene flow. As time went on biologists like Gould moved away from wedding punctuated equilibrium to allopatric speciation, particularly as evidence accumulated in support of other modes of speciation. Gould, for example, was particularly attracted to Douglas Futuyma's work on the importance of reproductive isolating mechanisms. Many hypotheses have been proposed to explain the putative causes of stasis. Gould was initially attracted to I. Michael Lerner's theories of developmental and genetic homeostasis. However this hypothesis was rejected over time, as evidence accumulated against it. Other plausible mechanisms which have been suggested include: habitat tracking, stabilizing selection, the Stenseth-Maynard Smith stability hypothesis, constraints imposed by the nature of subdivided populations, normalizing clade selection, and koinophilia. 
Evidence for stasis has also been corroborated from the genetics of sibling species, species which are morphologically indistinguishable, but whose proteins have diverged sufficiently to suggest they have been separated for millions of years. Fossil evidence of reproductively isolated extant species of sympatric Olive Shells ("Amalda" sp.) also confirms morphological stasis in multiple lineages over three million years. According to Gould, "stasis may emerge as the theory's most important contribution to evolutionary science." Philosopher Kim Sterelny in clarifying the meaning of stasis adds, "In claiming that species typically undergo no further evolutionary change once speciation is complete, they are not claiming that there is no change at all between one generation and the next. Lineages do change. But the change between generations does not accumulate. Instead, over time, the species wobbles about its phenotypic mean. Jonathan Weiner's "The Beak of the Finch" describes this very process." Punctuated equilibrium has also been cited as contributing to the hypothesis that species are Darwinian individuals, and not just classes, thereby providing a stronger framework for a hierarchical theory of evolution. Much confusion has arisen over what proponents of punctuated equilibrium actually argued, what mechanisms they advocated, how fast the punctuations were, what taxonomic scale their theory applied to, how revolutionary their claims were intended to be, and how punctuated equilibrium related to other ideas like saltationism, quantum evolution, and mass extinction. The punctuational nature of punctuated equilibrium has engendered perhaps the most confusion over Eldredge and Gould's theory. Gould's sympathetic treatment of Richard Goldschmidt, the controversial geneticist who advocated the idea of "hopeful monsters," led some biologists to conclude that Gould's punctuations were occurring in single-generation jumps. 
This interpretation has frequently been used by creationists to characterize the weakness of the paleontological record, and to portray contemporary evolutionary biology as advancing neo-saltationism. In an often quoted remark, Gould stated, "Since we proposed punctuated equilibria to explain trends, it is infuriating to be quoted again and again by creationists—whether through design or stupidity, I do not know—as admitting that the fossil record includes no transitional forms. Transitional forms are generally lacking at the species level, but they are abundant between larger groups." Although there exists some debate over how long the punctuations last, supporters of punctuated equilibrium generally place the figure between 50,000 and 100,000 years. Quantum evolution was a controversial hypothesis advanced by Columbia University paleontologist George Gaylord Simpson, who was regarded by Gould as "the greatest and most biologically astute paleontologist of the twentieth century." Simpson's conjecture was that according to the geological record, on very rare occasions evolution would proceed very rapidly to form entirely new families, orders, and classes of organisms. This hypothesis differs from punctuated equilibrium in several respects. First, punctuated equilibrium was more modest in scope, in that it was addressing evolution specifically at the species level. Simpson's idea was principally concerned with evolution at higher taxonomic groups. Second, Eldredge and Gould relied upon a different mechanism. Where Simpson relied upon a synergistic interaction between genetic drift and a shift in the adaptive fitness landscape, Eldredge and Gould relied upon ordinary speciation, particularly Ernst Mayr's concept of allopatric speciation. Lastly, and perhaps most significantly, quantum evolution took no position on the issue of stasis. 
Although Simpson acknowledged the existence of stasis in what he called the bradytelic mode, he considered it (along with rapid evolution) to be unimportant in the larger scope of evolution. In his "Major Features of Evolution" Simpson stated, "Evolutionary change is so nearly the universal rule that a state of motion is, figuratively, normal in evolving populations. The state of rest, as in bradytely, is the exception and it seems that some restraint or force must be required to maintain it." Despite such differences between the two models, earlier critiques—from such eminent commentators as Sewall Wright as well as Simpson himself—have argued that punctuated equilibrium is little more than quantum evolution relabeled. Punctuated equilibrium is often portrayed to oppose the concept of gradualism, when it is actually a form of gradualism. This is because even though evolutionary change appears instantaneous between geological sedimentary layers, change is still occurring incrementally, with no great change from one generation to the next. To this end, Gould later commented that "Most of our paleontological colleagues missed this insight because they had not studied evolutionary theory and either did not know about allopatric speciation or had not considered its translation to geological time. Our evolutionary colleagues also failed to grasp the implication(s), primarily because they did not think at geological scales". Richard Dawkins dedicated a chapter in "The Blind Watchmaker" to correcting, in his view, the wide confusion regarding "rates of change". His first point is to argue that phyletic gradualism—understood in the sense that evolution proceeds at a single uniform rate of speed, called "constant speedism" by Dawkins—is a "caricature of Darwinism" and "does not really exist". 
His second argument, which follows from the first, is that once the caricature of "constant speedism" is dismissed, we are left with one logical alternative, which Dawkins terms "variable speedism". Variable speedism may also be distinguished in one of two ways: ""discrete variable" speedism" and ""continuously variable" speedism". Eldredge and Gould, proposing that evolution jumps between stability and relative rapidity, are described as "discrete variable speedists", and "in this respect they are genuinely radical." They assert that evolution generally proceeds in bursts, or not at all. "Continuously variable speedists", on the other hand, advance that "evolutionary rates fluctuate continuously from very fast to very slow and stop, with all intermediates. They see no particular reason to emphasize certain speeds more than others. In particular, stasis, to them, is just an extreme case of ultra-slow evolution. To a punctuationist, there is something very special about stasis." Dawkins therefore commits himself here to an empirical claim about the geological record, in contrast to his earlier claim that "The paleontological evidence can be argued about, and I am not qualified to judge it." It is this particular commitment that Eldredge and Gould have aimed to overturn. Richard Dawkins regards the apparent gaps represented in the fossil record to document migratory events rather than evolutionary events. According to Dawkins, evolution certainly occurred but "probably gradually" elsewhere. However, the punctuational equilibrium model may still be inferred from both the observation of stasis and examples of rapid and episodic speciation events documented in the fossil record. Dawkins also emphasizes that punctuated equilibrium has been "oversold by some journalists", but partly due to Eldredge and Gould's "later writings". Dawkins contends that the hypothesis "does not deserve a particularly large measure of publicity". 
It is a "minor gloss," an "interesting but minor wrinkle on the surface of neo-Darwinian theory," and "lies firmly within the neo-Darwinian synthesis". In his book "Darwin's Dangerous Idea", philosopher Daniel Dennett is especially critical of Gould's presentation of punctuated equilibrium. Dennett argues that Gould alternated between revolutionary and conservative claims, and that each time Gould made a revolutionary statement—or appeared to do so—he was criticized, and thus retreated to a traditional neo-Darwinian position. Gould responded to Dennett's claims in "The New York Review of Books", and in his technical volume "The Structure of Evolutionary Theory". English professor Heidi Scott argues that Gould's talent for writing vivid prose, his use of metaphor, and his success in building a popular audience of nonspecialist readers altered the "climate of specialized scientific discourse" favorably in his promotion of punctuated equilibrium. While Gould is celebrated for the color and energy of his prose, as well as his interdisciplinary knowledge, critics such as Scott, Richard Dawkins, and Daniel Dennett have concerns that the theory has gained undeserved credence among non-scientists because of Gould's rhetorical skills. Philosopher John Lyne and biologist Henry Howe believed punctuated equilibrium's success has much more to do with the nature of the geological record than the nature of Gould's rhetoric. They state, a "re-analysis of existing fossil data has shown, to the increasing satisfaction of the paleontological community, that Eldredge and Gould were correct in identifying periods of evolutionary stasis which are interrupted by much shorter periods of evolutionary change." Some critics jokingly referred to the theory of punctuated equilibrium as "evolution by jerks", which reportedly prompted punctuationists to describe phyletic gradualism as "evolution by creeps." 
The sudden appearance of most species in the geologic record and the lack of evidence of substantial gradual change in most species—from their initial appearance until their extinction—has long been noted, including by Charles Darwin, who appealed to the imperfection of the record as the favored explanation. When presenting his ideas against the prevailing influences of catastrophism and progressive creationism, which envisaged species being supernaturally created at intervals, Darwin needed to forcefully stress the gradual nature of evolution in accordance with the gradualism promoted by his friend Charles Lyell. He privately expressed concern, noting in the margin of his 1844 "Essay", "Better begin with this: If species really, after catastrophes, created in showers world over, my theory false." It is often incorrectly assumed that he insisted that the rate of change must be constant, or nearly so, but even the first edition of "On the Origin of Species" states that "Species of different genera and classes have not changed at the same rate, or in the same degree. In the oldest tertiary beds a few living shells may still be found in the midst of a multitude of extinct forms... The Silurian "Lingula" differs but little from the living species of this genus". "Lingula" is among the few brachiopods surviving today but also known from fossils over 500 million years old. In the fourth edition (1866) of "On the Origin of Species" Darwin wrote that "the periods during which species have undergone modification, though long as measured in years, have probably been short in comparison with the periods during which they retain the same form." Thus punctuationism in general is consistent with Darwin's conception of evolution. According to early versions of punctuated equilibrium, "peripheral isolates" are considered to be of critical importance for speciation. However, Darwin wrote, ""I can by no means agree" ... that immigration and isolation are necessary elements... 
Although isolation is of great importance in the production of new species, on the whole I am inclined to believe that largeness of area is still more important, especially for the production of species which shall prove capable of enduring for a long period, and of spreading widely." The importance of isolation in forming species had played a significant part in Darwin's early thinking, as shown in his "Essay" of 1844. But by the time he wrote the "Origin" he had downplayed its importance. He explained the reasons for his revised view as follows: Throughout a great and open area, not only will there be a greater chance of favourable variations, arising from the large number of individuals of the same species there supported, but the conditions of life are much more complex from the large number of already existing species; and if some of these species become modified and improved, others will have to be improved in a corresponding degree, or they will be exterminated. Each new form, also, as soon as it has been improved, will be able to spread over the open and continuous area, and will thus come into competition with many other forms ... the new forms produced on large areas, which have already been victorious over many competitors, will be those that will spread most widely, and will give rise to the greatest number of new varieties and species. They will thus play a more important role in the changing history of the organic world. Thus punctuated equilibrium is incongruous with some of Darwin's ideas regarding the specific mechanisms of evolution, but generally accords with Darwin's theory of evolution by natural selection. Recent work in developmental biology has identified dynamical and physical mechanisms of tissue morphogenesis that may underlie abrupt morphological transitions during evolution. 
Consequently, consideration of mechanisms of phylogenetic change that have been found in reality to be non-gradual is increasingly common in the field of evolutionary developmental biology, particularly in studies of the origin of morphological novelty. A description of such mechanisms can be found in the multi-authored volume "Origination of Organismal Form" (MIT Press; 2003). In linguistics, R. M. W. Dixon has proposed a punctuated equilibrium model for language histories, with reference particularly to the prehistory of the indigenous languages of Australia and his objections to the proposed Pama–Nyungan language family there. Although his model has raised considerable interest, it does not command majority support within linguistics. Separately, recent work using computational phylogenetic methods claims to show that punctuational bursts play an important factor when languages split from one another, accounting for anywhere from 10 to 33% of the total divergence in vocabulary. Punctuational evolution has been argued to explain changes in folktales and mythology over time.
https://en.wikipedia.org/wiki?curid=24980
Pioneer 11 Pioneer 11 (also known as Pioneer G) is a robotic space probe launched by NASA on April 6, 1973, to study the asteroid belt, the environment around Jupiter and Saturn, solar wind and cosmic rays. It was the first probe to encounter Saturn and the second to fly through the asteroid belt and by Jupiter. Thereafter, "Pioneer 11" became the second of five artificial objects to achieve the escape velocity that will allow them to leave the Solar System. Due to power constraints and the vast distance to the probe, the last routine contact with the spacecraft was on September 30, 1995, and the last good engineering data was received on November 24, 1995. Approved in February 1969, "Pioneer 11" and its twin probe, "Pioneer 10", were the first to be designed for exploring the outer Solar System. Yielding to multiple proposals throughout the 1960s, early mission objectives were defined as: Subsequent planning for an encounter with Saturn added many more goals: "Pioneer 11" was built by TRW and managed as part of the Pioneer program by NASA Ames Research Center. A backup unit, Pioneer H, is currently on display in the "Milestones of Flight" exhibit at the National Air and Space Museum in Washington, D.C. Many elements of the mission proved to be critical in the planning of the "Voyager" program. The "Pioneer 11" bus measures deep and with six panels forming the hexagonal structure. The bus houses propellant to control the orientation of the probe and eight of the twelve scientific instruments. The spacecraft has a mass of 260 kilograms. Pioneer 11 carries one more instrument than Pioneer 10: a flux-gate magnetometer. The "Pioneer 11" probe was launched on April 6, 1973, at 02:11:00 UTC, by the National Aeronautics and Space Administration from Space Launch Complex 36A at Cape Canaveral, Florida aboard an Atlas-Centaur launch vehicle, with a Star 37E propulsion module. Its twin probe, "Pioneer 10", had launched a year earlier on March 3, 1972. 
"Pioneer 11" was launched on a trajectory directly aimed at Jupiter without any prior gravitational assists. In May 1974, Pioneer was retargeted to fly past Jupiter on a north-south trajectory enabling a Saturn flyby in 1979. The maneuver used 17 pounds of propellant, lasted 42 minutes and 36 seconds and increased Pioneer 11's speed by 230 km/h. It also made two mid-course corrections, on April 11, 1973, and November 7, 1974. "Pioneer 11" flew past Jupiter in November and December 1974. During its closest approach, on December 2, it passed above the cloud tops. The probe obtained detailed images of the Great Red Spot, transmitted the first images of the immense polar regions, and determined the mass of Jupiter's moon Callisto. Jupiter's gravitational pull was used in a gravity-assist maneuver to bend the probe's trajectory towards Saturn and increase its velocity. On April 16, 1975, following the Jupiter encounter, the micrometeoroid detector was turned off. "Pioneer 11" passed by Saturn on September 1, 1979, at a distance of 21,000 km from Saturn's cloud tops. By this time "Voyager 1" and "Voyager 2" had already passed Jupiter and were also en route to Saturn, so it was decided to target "Pioneer 11" to pass through the Saturn ring plane at the same position that the soon-to-come Voyager probes would use in order to test the route before the Voyagers arrived. If there were faint ring particles that could damage a probe in that area, mission planners felt it was better to learn about it via Pioneer. Thus, "Pioneer 11" was acting as a "pioneer" in the true sense of the word; if danger were detected, the Voyager probes could be rerouted farther from the rings, though they would then miss the opportunity to visit Uranus and Neptune. "Pioneer 11" imaged and nearly collided with one of Saturn's small moons, passing at a distance of no more than . 
The object was tentatively identified as Epimetheus, a moon discovered the previous day from "Pioneer"'s imaging, and suspected from earlier observations by Earth-based telescopes. After the Voyager flybys, it became known that there are two similarly sized moons (Epimetheus and Janus) in the same orbit, so there is some uncertainty about which one was the object of Pioneer's near-miss. "Pioneer 11" encountered Janus on September 1, 1979 at 14:52 UTC at a distance of 2,500 km and Mimas at 16:20 UTC the same day at 103,000 km. Besides Epimetheus, instruments located another previously undiscovered small moon and an additional ring, charted Saturn's magnetosphere and magnetic field and found its planet-sized moon, Titan, to be too cold for life. Hurtling underneath the ring plane, the probe sent back pictures of Saturn's rings. The rings, which normally seem bright when observed from Earth, appeared dark in the Pioneer pictures, and the dark gaps in the rings seen from Earth appeared as bright rings. On February 25, 1990, "Pioneer 11" became the fourth man-made object to pass beyond the orbit of the planets. By 1995, "Pioneer 11" could no longer power any of its detectors, so the decision was made to shut it down. On September 29, 1995, NASA's Ames Research Center, responsible for managing the project, issued a press release that began, "After nearly 22 years of exploration out to the farthest reaches of the Solar System, one of the most durable and productive space missions in history will come to a close." It indicated NASA would use its Deep Space Network antennas to listen "once or twice a month" for the spacecraft's signal, until "some time in late 1996" when "its transmitter will fall silent altogether." NASA Administrator Daniel Goldin characterized "Pioneer 11" as "the little spacecraft that could, a venerable explorer that has taught us a great deal about the Solar System and, in the end, about our own innate drive to learn. 
"Pioneer 11" is what NASA is all about – exploration beyond the frontier." Besides announcing the end of operations, the dispatch provided a historical list of "Pioneer 11" mission achievements. NASA terminated routine contact with the spacecraft on September 30, 1995, but continued to make contact for about 2 hours every 2 to 4 weeks. Scientists received a few minutes of good engineering data on November 24, 1995 but then lost final contact once Earth moved out of view of the spacecraft's antenna. Its signal became too faint to hear in 2002. On January 30, 2019, "Pioneer 11" was from the Earth and from the Sun; and traveling at (relative to the Sun) and traveling outward at about 2.37 AU per year. The spacecraft is heading in the direction of the constellation Scutum near the current position (August 2017) RA 18h 50m dec -8° 39.5' (J2000.0) close to Messier 26. In 928,000 years it will pass within 0.25 pc of the K dwarf TYC 992-192-1. "Pioneer 11" has now been overtaken by the two Voyager probes, launched in 1977, and "Voyager 1" is now the most distant object built by humans. Analysis of the radio tracking data from the "Pioneer 10" and "11" spacecraft at distances of 20–70 AU from the Sun has consistently indicated the presence of a small but anomalous Doppler frequency drift. The drift can be interpreted as due to a constant acceleration of directed towards the Sun. Although it is suspected that there is a systematic origin to the effect, none was found. As a result, there is sustained interest in the nature of this so-called "Pioneer anomaly". Extended analysis of mission data by Slava Turyshev and colleagues has determined the source of the anomaly to be asymmetric thermal radiation and the resulting thermal recoil force acting on the face of the Pioneers away from the Sun, and in July 2012 the group of researchers published their results in the journal "Physical Review Letters". 
"Pioneer 10" and "11" both carry a gold-anodized aluminum plaque in the event that either spacecraft is ever found by intelligent lifeforms from other planetary systems. The plaques feature the nude figures of a human male and female along with several symbols that are designed to provide information about the origin of the spacecraft. In 1991, "Pioneer 11" was honored on one of 10 United States Postal Service stamps commemorating unmanned spacecraft exploring each of the then nine planets and the Moon. "Pioneer 11" was the spacecraft featured with Jupiter. Pluto was listed as "Not yet explored".
https://en.wikipedia.org/wiki?curid=24981
Psychometrics Psychometrics is a field of study concerned with the theory and technique of psychological measurement. As defined by the US National Council on Measurement in Education (NCME), psychometrics refers to psychological measurement. Generally, it refers to the field in psychology and education that is devoted to testing, measurement, assessment, and related activities. The field is concerned with the objective measurement of skills and knowledge, abilities, attitudes, personality traits, and educational achievement. Some psychometric researchers focus on the construction and validation of assessment instruments such as questionnaires, tests, raters' judgments, psychological symptom scales, and personality tests. Others focus on research relating to measurement theory (e.g., item response theory; intraclass correlation). Practitioners are described as psychometricians. Psychometricians usually possess a specific qualification, and most are psychologists with advanced graduate training. In addition to traditional academic institutions, many psychometricians work for the government or in human resources departments. Others specialize as learning and development professionals. Psychological testing has come from two streams of thought: the first, from Darwin, Galton, and Cattell, on the measurement of individual differences, and the second, from Herbart, Weber, Fechner, and Wundt, on psychophysical measurement. The second stream of research led to the development of experimental psychology and standardized testing. Charles Darwin inspired Sir Francis Galton, whose work led to the creation of psychometrics. In 1859, Darwin published his book "On the Origin of Species", which was devoted to the role of natural selection in the emergence over time of different populations of species of plants and animals. 
The book discussed how individual members of a species differ and how they possess characteristics that are more or less adaptive to their environment. Those with more adaptive characteristics are more likely to procreate and give rise to another generation. Those with less adaptive characteristics are less likely to procreate. This idea stimulated Galton's interest in the study of human beings and how they differ one from another and, more importantly, how to measure those differences. Galton wrote a book entitled "Hereditary Genius" about different characteristics that people possess and how those characteristics make them more "fit" than others. Today these differences, such as sensory and motor functioning (reaction time, visual acuity, and physical strength), are important domains of scientific psychology. Much of the early theoretical and applied work in psychometrics was undertaken in an attempt to measure intelligence. Galton, often referred to as "the father of psychometrics," devised and included mental tests among his anthropometric measures. James McKeen Cattell, who is considered a pioneer of psychometrics, went on to extend Galton's work. Cattell also coined the term "mental test", and is responsible for research and knowledge that ultimately led to the development of modern tests. More recently, psychometric theory has been applied in the measurement of personality, attitudes, beliefs, and academic achievement. Measurement of these unobservable phenomena is difficult, and much of the research and accumulated science in this discipline has been developed in an attempt to properly define and quantify such phenomena. 
Critics, including practitioners in the physical sciences and social activists, have argued that such definition and quantification is impossibly difficult, and that such measurements are often misused, such as with psychometric personality tests used in employment procedures: Figures who made significant contributions to psychometrics include Karl Pearson, Henry F. Kaiser, Carl Brigham, L. L. Thurstone, E. L. Thorndike, Georg Rasch, Eugene Galanter, Johnson O'Connor, Frederic M. Lord, Ledyard R Tucker, and Jane Loevinger. The definition of measurement in the social sciences has a long history. A currently widespread definition, proposed by Stanley Smith Stevens (1946), is that measurement is "the assignment of numerals to objects or events according to some rule." This definition was introduced in the paper in which Stevens proposed four levels of measurement. Although widely adopted, this definition differs in important respects from the more classical definition of measurement adopted in the physical sciences, namely that scientific measurement entails "the estimation or discovery of the ratio of some magnitude of a quantitative attribute to a unit of the same attribute" (p. 358). Indeed, Stevens's definition of measurement was put forward in response to the British Ferguson Committee, whose chair, A. Ferguson, was a physicist. The committee was appointed in 1932 by the British Association for the Advancement of Science to investigate the possibility of quantitatively estimating sensory events. Although its chair and other members were physicists, the committee also included several psychologists. The committee's report highlighted the importance of the definition of measurement. While Stevens's response was to propose a new definition, which has had considerable influence in the field, this was by no means the only response to the report. 
Another, notably different, response was to accept the classical definition, as reflected in the following statement: These divergent responses are reflected in alternative approaches to measurement. For example, methods based on covariance matrices are typically employed on the premise that numbers, such as raw scores derived from assessments, are measurements. Such approaches implicitly entail Stevens's definition of measurement, which requires only that numbers are "assigned" according to some rule. The main research task, then, is generally considered to be the discovery of associations between scores, and of factors posited to underlie such associations. On the other hand, when measurement models such as the Rasch model are employed, numbers are not assigned based on a rule. Instead, in keeping with Reese's statement above, specific criteria for measurement are stated, and the goal is to construct procedures or operations that provide data that meet the relevant criteria. Measurements are estimated based on the models, and tests are conducted to ascertain whether the relevant criteria have been met. The first psychometric instruments were designed to measure the concept of intelligence. One historical approach involved the Stanford-Binet IQ test, developed originally by the French psychologist Alfred Binet. Intelligence tests are useful tools for various purposes. An alternative conception of intelligence is that cognitive capacities within individuals are a manifestation of a general component, or general intelligence factor, as well as cognitive capacity specific to a given domain. Another major focus in psychometrics has been on personality testing. There have been a range of theoretical approaches to conceptualizing and measuring personality. 
Some of the better known instruments include the Minnesota Multiphasic Personality Inventory, the Five-Factor Model (or "Big 5"), and tools such as the Personality and Preference Inventory and the Myers-Briggs Type Indicator. Attitudes have also been studied extensively using psychometric approaches. A common method in the measurement of attitudes is the use of the Likert scale. An alternative method involves the application of unfolding measurement models, the most general being the Hyperbolic Cosine Model (Andrich & Luo, 1993). Psychometricians have developed a number of different measurement theories. These include classical test theory (CTT) and item response theory (IRT). An approach which seems mathematically to be similar to IRT but also quite distinctive, in terms of its origins and features, is represented by the Rasch model for measurement. The development of the Rasch model, and the broader class of models to which it belongs, was explicitly founded on requirements of measurement in the physical sciences. Psychometricians have also developed methods for working with large matrices of correlations and covariances. Techniques in this general tradition include: factor analysis, a method of determining the underlying dimensions of data. One of the main challenges faced by users of factor analysis is a lack of consensus on appropriate procedures for determining the number of latent factors. A usual procedure is to stop factoring when eigenvalues drop below one, on the rationale that a factor with an eigenvalue below one accounts for less variance than a single original variable. The lack of agreed-upon cutoff points is a concern for other multivariate methods as well. Multidimensional scaling is a method for finding a simple representation for data with a large number of latent dimensions. Cluster analysis is an approach to finding objects that are like each other. Factor analysis, multidimensional scaling, and cluster analysis are all multivariate descriptive methods used to distill from large amounts of data simpler structures. 
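The eigenvalue-greater-than-one stopping rule can be illustrated with a short sketch. The correlation matrix below is hypothetical, invented purely for illustration: it describes five observed variables in which the first three and the last two form two correlated clusters, so the rule should suggest retaining two factors.

```python
import numpy as np

# Hypothetical correlation matrix for five observed variables
# (illustrative values only, not real test data).
R = np.array([
    [1.0, 0.6, 0.5, 0.1, 0.2],
    [0.6, 1.0, 0.4, 0.2, 0.1],
    [0.5, 0.4, 1.0, 0.1, 0.1],
    [0.1, 0.2, 0.1, 1.0, 0.7],
    [0.2, 0.1, 0.1, 0.7, 1.0],
])

# eigvalsh returns eigenvalues of a symmetric matrix in ascending
# order; reverse them so the largest comes first.
eigenvalues = np.linalg.eigvalsh(R)[::-1]

# Rule of thumb: retain only factors whose eigenvalue exceeds 1,
# i.e. factors accounting for more variance than one standardized variable.
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)  # two clusters -> two retained factors
```

Because the trace of a correlation matrix equals the number of variables, the eigenvalues always sum to five here, which is what makes "more variance than one variable" equivalent to "eigenvalue above one".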
More recently, structural equation modeling and path analysis represent more sophisticated approaches to working with large covariance matrices. These methods allow statistically sophisticated models to be fitted to data and tested to determine if they are adequate fits. Because at a granular level psychometric research is concerned with the extent and nature of multidimensionality in each of the items of interest, a relatively new procedure known as bi-factor analysis can be helpful. Bi-factor analysis can decompose "an item’s systematic variance in terms of, ideally, two sources, a general factor and one source of additional systematic variance." Key concepts in classical test theory are reliability and validity. A reliable measure is one that measures a construct consistently across time, individuals, and situations. A valid measure is one that measures what it is intended to measure. Reliability is necessary, but not sufficient, for validity. Both reliability and validity can be assessed statistically. Consistency over repeated measures of the same test can be assessed with the Pearson correlation coefficient, and is often called "test-retest reliability." Similarly, the equivalence of different versions of the same measure can be indexed by a Pearson correlation, and is called "equivalent forms reliability" or a similar term. Internal consistency, which addresses the homogeneity of a single test form, may be assessed by correlating performance on two halves of a test, which is termed "split-half reliability"; the value of this Pearson product-moment correlation coefficient for two half-tests is adjusted with the Spearman–Brown prediction formula to correspond to the correlation between two full-length tests. Perhaps the most commonly used index of reliability is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. 
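The reliability indices above reduce to a few lines of arithmetic. The sketch below uses made-up item scores (6 people, 4 items) and the standard formulas: the Spearman–Brown step-up r_full = 2·r_half / (1 + r_half), and Cronbach's α = k/(k−1) · (1 − Σσ²_item / σ²_total).

```python
import numpy as np

def spearman_brown(r_half):
    """Step a half-test correlation up to a full-length estimate."""
    return 2 * r_half / (1 + r_half)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative scores for 6 people on 4 items (made-up data).
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [4, 4, 4, 3],
    [3, 3, 2, 3],
])

# Split-half: correlate odd-numbered items with even-numbered items,
# then adjust for the halved test length.
half1 = scores[:, ::2].sum(axis=1)
half2 = scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(half1, half2)[0, 1]
print(round(spearman_brown(r_half), 3))
print(round(cronbach_alpha(scores), 3))
```

For these internally consistent items both estimates land above 0.9; with uncorrelated items they would fall toward zero, which is the behavior the indices are meant to capture.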
Other approaches include the intra-class correlation, which is the ratio of variance of measurements of a given target to the variance of all targets. There are a number of different forms of validity. Criterion-related validity refers to the extent to which a test or scale predicts a sample of behavior, i.e., the criterion, that is "external to the measuring instrument itself." That external sample of behavior can be many things including another test; college grade point average as when the high school SAT is used to predict performance in college; and even behavior that occurred in the past, for example, when a test of current psychological symptoms is used to predict the occurrence of past victimization (which would accurately represent postdiction). When the criterion measure is collected at the same time as the measure being validated the goal is to establish "concurrent validity"; when the criterion is collected later the goal is to establish "predictive validity". A measure has "construct validity" if it is related to measures of other constructs as required by theory. "Content validity" is a demonstration that the items of a test do an adequate job of covering the domain being measured. In a personnel selection example, test content is based on a defined statement or set of statements of knowledge, skill, ability, or other characteristics obtained from a "job analysis". Item response theory models the relationship between latent traits and responses to test items. Among other advantages, IRT provides a basis for obtaining an estimate of the location of a test-taker on a given latent trait as well as the standard error of measurement of that location. For example, a university student's knowledge of history can be deduced from his or her score on a university test and then be compared reliably with a high school student's knowledge deduced from a less difficult test. 
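The comparison across tests of different difficulty can be made concrete with the simplest IRT model, the one-parameter (Rasch) model, in which the probability of a correct response depends only on the difference between a person's ability θ and an item's difficulty b, both on a common logit scale. A minimal sketch with invented ability and difficulty values:

```python
import math

def rasch_probability(theta, b):
    """Rasch (1PL) model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    written here in the equivalent logistic form."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals the item's difficulty has an even chance.
print(rasch_probability(0.0, 0.0))   # 0.5

# The same ability yields different success probabilities on easier and
# harder items; estimating theta from either kind of item places the
# person on the same scale, which is what permits the comparison above.
print(rasch_probability(1.0, -1.0))  # easy item: ~0.88
print(rasch_probability(1.0, 2.0))   # hard item: ~0.27
```

In practice θ and b are estimated jointly from response data; the point of the sketch is only the functional form that makes scores from tests of unequal difficulty commensurable.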
Scores derived by classical test theory do not have this characteristic; assessment of actual ability (rather than ability relative to other test-takers) requires comparing scores to those of a "norm group" randomly selected from the population. In fact, all measures derived from classical test theory are dependent on the sample tested, while, in principle, those derived from item response theory are not. Many psychometricians are also concerned with finding and eliminating test bias from their psychological tests. Test bias is a form of systematic (i.e., non-random) error which leads to examinees from one demographic group having an unwarranted advantage over examinees from another demographic group. According to leading experts, test bias may cause differences in average scores across demographic groups, but differences in group scores are not sufficient evidence that test bias is actually present because the test could be measuring real differences among groups. Psychometricians use sophisticated scientific methods to search for test bias and eliminate it. Research shows that it is usually impossible for people reading a test item to accurately determine whether it is biased or not. The considerations of validity and reliability typically are viewed as essential elements for determining the quality of any test. However, professional and practitioner associations frequently have placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any test as a whole within a given context. A consideration of concern in many applied research settings is whether or not the metric of a given psychological inventory is meaningful or arbitrary. 
In 2014, the American Educational Research Association (AERA), American Psychological Association (APA), and National Council on Measurement in Education (NCME) published a revision of the "Standards for Educational and Psychological Testing", which describes standards for test development, evaluation, and use. The "Standards" cover essential topics in testing including validity, reliability/errors of measurement, and fairness in testing. The book also establishes standards related to testing operations including test design and development, scores, scales, norms, score linking, cut scores, test administration, scoring, reporting, score interpretation, test documentation, and rights and responsibilities of test takers and test users. Finally, the "Standards" cover topics related to testing applications, including psychological testing and assessment, workplace testing and credentialing, educational testing and assessment, and testing in program evaluation and public policy. In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation has published three sets of standards for evaluations. "The Personnel Evaluation Standards" was published in 1988, "The Program Evaluation Standards" (2nd edition) was published in 1994, and "The Student Evaluation Standards" was published in 2003. Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. 
For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance. Psychometrics addresses "human" abilities, attitudes, traits, and educational evolution. Notably, the study of behavior, mental processes and abilities of non-human "animals" is usually addressed by comparative psychology, or, with a continuum between non-human animals and humans, by evolutionary psychology. Nonetheless, there are some advocates of a more gradual transition between the approach taken for humans and the approach taken for (non-human) animals. The evaluation of abilities, traits and learning evolution of "machines" has been mostly unrelated to the case of humans and non-human animals, with specific approaches in the area of artificial intelligence. A more integrated approach, under the name of universal psychometrics, has also been proposed.
https://en.wikipedia.org/wiki?curid=24982
Philosophy of education The philosophy of education examines the goals, forms, methods, and meaning of education. The term is used to describe both fundamental philosophical analysis of these themes and the description or analysis of particular pedagogical approaches. Considerations of how the profession relates to broader philosophical or sociocultural contexts may be included. The philosophy of education thus overlaps with the field of education and applied philosophy. For example, philosophers of education study what constitutes upbringing and education, the values and norms revealed through upbringing and educational practices, the limits and legitimization of education as an academic discipline, and the relation between educational theory and practice. In universities, the philosophy of education usually forms part of departments or colleges of education. Date: 424/423 BC – 348/347 BC Plato's educational philosophy was grounded in a vision of an ideal "Republic" wherein the individual was best served by being subordinated to a just society, a shift in emphasis that departed from his predecessors. The mind and body were to be considered separate entities. In the dialogue "Phaedo", written in his "middle period" (360 B.C.E.), Plato expressed his distinctive views about the nature of knowledge, reality, and the soul: "When the soul and body are united, then nature orders the soul to rule and govern, and the body to obey and serve. Now which of these two functions is akin to the divine? and which to the mortal? Does not the divine appear…to be that which naturally orders and rules, and the mortal to be that which is subject and servant?" On this premise, Plato advocated removing children from their mothers' care and raising them as wards of the state, with great care being taken to differentiate children suitable to the various castes, the highest receiving the most education, so that they could act as guardians of the city and care for the less able. 
Education would be holistic, including facts, skills, physical discipline, and music and art, which he considered the highest form of endeavor. Plato believed that talent was distributed non-genetically and thus must be found in children born in any social class. He built on this by insisting that those suitably gifted were to be trained by the state so that they might be qualified to assume the role of a ruling class. What this established was essentially a system of selective public education premised on the assumption that an educated minority of the population were, by virtue of their education (and inborn educability), sufficient for healthy governance. Plato's writings contain some of the following ideas: Elementary education would be confined to the guardian class till the age of 18, followed by two years of compulsory military training and then by higher education for those who qualified. While elementary education made the soul responsive to the environment, higher education helped the soul to search for truth which illuminated it. Both boys and girls receive the same kind of education. Elementary education consisted of music and gymnastics, designed to train and blend gentle and fierce qualities in the individual and create a harmonious person. At the age of 20, a selection was made. The best students would take an advanced course in mathematics, geometry, astronomy and harmonics. The first course in the scheme of higher education would last for ten years. It would be for those who had a flair for science. At the age of 30 there would be another selection; those who qualified would study dialectics and metaphysics, logic and philosophy for the next five years. After accepting junior positions in the army for 15 years, a man would have completed his theoretical and practical education by the age of 50. Date: 1724–1804 Immanuel Kant believed that education differs from training in that the former involves thinking whereas the latter does not. 
In addition to educating reason, of central importance to him was the development of character and teaching of moral maxims. Kant was a proponent of public education and of learning by doing. Date: 384 BC – 322 BC Only fragments of Aristotle's treatise "On Education" are still in existence. We thus know of his philosophy of education primarily through brief passages in other works. Aristotle considered human nature, habit and reason to be equally important forces to be cultivated in education. Thus, for example, he considered repetition to be a key tool to develop good habits. The teacher was to lead the student systematically; this differs, for example, from Socrates' emphasis on questioning his listeners to bring out their own ideas (though the comparison is perhaps incongruous since Socrates was dealing with adults). Aristotle placed great emphasis on balancing the theoretical and practical aspects of subjects taught. Subjects he explicitly mentions as being important included reading, writing and mathematics; music; physical education; literature and history; and a wide range of sciences. He also mentioned the importance of play. One of education's primary missions for Aristotle, perhaps its most important, was to produce good and virtuous citizens for the polis. "All who have meditated on the art of governing mankind have been convinced that the fate of empires depends on the education of youth." Date: 980 AD – 1037 AD In the medieval Islamic world, an elementary school was known as a "maktab", which dates back to at least the 10th century. Like madrasahs (which referred to higher education), a maktab was often attached to a mosque. In the 11th century, Ibn Sina (known as "Avicenna" in the West), wrote a chapter dealing with the "maktab" entitled "The Role of the Teacher in the Training and Upbringing of Children", as a guide to teachers working at "maktab" schools. 
He wrote that children can learn better if taught in classes instead of individual tuition from private tutors, and he gave a number of reasons for why this is the case, citing the value of competition and emulation among pupils as well as the usefulness of group discussions and debates. Ibn Sina described the curriculum of a "maktab" school in some detail, describing the curricula for two stages of education in a "maktab" school. Ibn Sina wrote that children should be sent to a "maktab" school from the age of 6 and be taught primary education until they reach the age of 14. During which time, he wrote that they should be taught the Qur'an, Islamic metaphysics, language, literature, Islamic ethics, and manual skills (which could refer to a variety of practical skills). Ibn Sina refers to the secondary education stage of "maktab" schooling as the period of specialization, when pupils should begin to acquire manual skills, regardless of their social status. He writes that children after the age of 14 should be given a choice to choose and specialize in subjects they have an interest in, whether it was reading, manual skills, literature, preaching, medicine, geometry, trade and commerce, craftsmanship, or any other subject or profession they would be interested in pursuing for a future career. He wrote that this was a transitional stage and that there needs to be flexibility regarding the age in which pupils graduate, as the student's emotional development and chosen subjects need to be taken into account. The empiricist theory of 'tabula rasa' was also developed by Ibn Sina. 
He argued that the "human intellect at birth is rather like a "tabula rasa", a pure potentiality that is actualized through education and comes to know" and that knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" which is developed through a "syllogistic method of reasoning; observations lead to prepositional statements, which when compounded lead to further abstract concepts." He further argued that the intellect itself "possesses levels of development from the material intellect ("al-‘aql al-hayulani"), that potentiality that can acquire knowledge to the active intellect ("al-‘aql al-fa‘il"), the state of the human intellect in conjunction with the perfect source of knowledge." Date: c. 1105 – 1185 In the 12th century, the Andalusian-Arabian philosopher and novelist Ibn Tufail (known as "Abubacer" or "Ebn Tophail" in the West) demonstrated the empiricist theory of 'tabula rasa' as a thought experiment through his Arabic philosophical novel, "Hayy ibn Yaqzan", in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. Some scholars have argued that the Latin translation of his philosophical novel, "Philosophus Autodidactus", published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in "An Essay Concerning Human Understanding". Child education was among the psychological topics that Michel de Montaigne wrote about. His essays "On the Education of Children", "On Pedantry", and "On Experience" explain the views he had on child education. Some of his views on child education are still relevant today. Montaigne's views on the education of children were opposed to the common educational practices of his day. He found fault both with what was taught and how it was taught. 
Much of the education during Montaigne's time was focused on the reading of the classics and learning through books. Montaigne disagreed with learning strictly through books. He believed it was necessary to educate children in a variety of ways. He also disagreed with the way information was being presented to students. It was being presented in a way that encouraged students to take the information that was taught to them as absolute truth. Students were denied the chance to question the information. Therefore, students could not truly learn. Montaigne believed that, to learn truly, a student had to take the information and make it their own. At the foundation, Montaigne believed that the selection of a good tutor was important for the student to become well educated. Education by a tutor was to be conducted at the pace of the student. He believed that a tutor should be in dialogue with the student, letting the student speak first. The tutor also should allow for discussions and debates to be had. Such a dialogue was intended to create an environment in which students would teach themselves. They would be able to realize their mistakes and make corrections to them as necessary. Individualized learning was integral to his theory of child education. He argued that the student combines information already known with what is learned and forms a unique perspective on the newly learned information. Montaigne also thought that tutors should encourage the natural curiosity of students and allow them to question things. He postulated that successful students were those who were encouraged to question new information and study it for themselves, rather than simply accepting what they had heard from the authorities on any given topic. Montaigne believed that a child's curiosity could serve as an important teaching tool when the child is allowed to explore the things that the child is curious about. Experience also was a key element to learning for Montaigne. 
Tutors needed to teach students through experience rather than through the mere memorization of information often practised in book learning. Otherwise, he argued, students would become passive adults, blindly obeying and lacking the ability to think on their own. Nothing of importance would be retained and no abilities would be learned. He believed that learning through experience was superior to learning through the use of books. For this reason he encouraged tutors to educate their students through practice, travel, and human interaction. In doing so, he argued that students would become active learners, who could claim knowledge for themselves. Montaigne's views on child education continue to have an influence in the present. Variations of Montaigne's ideas on education are incorporated into modern learning in some ways. He argued against the popular way of teaching in his day, encouraging individualized learning. He believed in the importance of experience, over book learning and memorization. Ultimately, Montaigne postulated that the point of education was to teach a student how to have a successful life by practicing an active and socially interactive lifestyle. Date: 1632–1704 In "Some Thoughts Concerning Education" and "Of the Conduct of the Understanding" Locke composed an outline on how to educate this mind in order to increase its powers and activity: "The business of education is not, as I think, to make them perfect in any one of the sciences, but so to open and dispose their minds as may best make them capable of any, when they shall apply themselves to it." "If men are for a long time accustomed only to one sort or method of thoughts, their minds grow stiff in it, and do not readily turn to another. It is therefore to give them this freedom, that I think they should be made to look into all sorts of knowledge, and exercise their understandings in so wide a variety and stock of knowledge. 
But I do not propose it as a variety and stock of knowledge, but a variety and freedom of thinking, as an increase of the powers and activity of the mind, not as an enlargement of its possessions." Locke expressed the belief that education maketh the man, or, more fundamentally, that the mind is an "empty cabinet", with the statement, "I think I may say that of all the men we meet with, nine parts of ten are what they are, good or evil, useful or not, by their education." Locke also wrote that "the little and almost insensible impressions on our tender infancies have very important and lasting consequences." He argued that the "associations of ideas" that one makes when young are more important than those made later because they are the foundation of the self: they are, put differently, what first mark the "tabula rasa". In his "Essay", in which both of these concepts are introduced, Locke warns against, for example, letting "a foolish maid" convince a child that "goblins and sprites" are associated with the night for "darkness shall ever afterwards bring with it those frightful ideas, and they shall be so joined, that he can no more bear the one than the other." "Associationism", as this theory would come to be called, exerted a powerful influence over eighteenth-century thought, particularly educational theory, as nearly every educational writer warned parents not to allow their children to develop negative associations. It also led to the development of psychology and other new disciplines with David Hartley's attempt to discover a biological mechanism for associationism in his "Observations on Man" (1749). Date: 1712–1778 Rousseau, though he paid his respects to Plato's philosophy, rejected it as impractical due to the decayed state of society. 
Rousseau also had a different theory of human development; where Plato held that people are born with skills appropriate to different castes (though he did not regard these skills as being inherited), Rousseau held that there was one developmental process common to all humans. This was an intrinsic, natural process, of which the primary behavioral manifestation was curiosity. This differed from Locke's 'tabula rasa' in that it was an active process deriving from the child's nature, which drove the child to learn and adapt to its surroundings. Rousseau wrote in his book "" that all children are perfectly designed organisms, ready to learn from their surroundings so as to grow into virtuous adults, but due to the malign influence of corrupt society, they often fail to do so. Rousseau advocated an educational method which consisted of removing the child from society—for example, to a country home—and alternately conditioning him through changes to his environment and setting traps and puzzles for him to solve or overcome. Rousseau was unusual in that he recognized and addressed the potential of a problem of legitimation for teaching. He advocated that adults always be truthful with children, and in particular that they never hide the fact that the basis for their authority in teaching was purely one of physical coercion: "I'm bigger than you." Once children reached the age of reason, at about 12, they would be engaged as free individuals in the ongoing process of their own education. He once said that a child should grow up without adult interference and that the child must be guided to suffer from the experience of the natural consequences of his own acts or behaviour. When he experiences the consequences of his own acts, he advises himself. "Rousseau divides development into five stages (a book is devoted to each). Education in the first two stages seeks to the senses: only when Émile is about 12 does the tutor begin to work to develop his mind. 
Later, in Book 5, Rousseau examines the education of Sophie (whom Émile is to marry). Here he sets out what he sees as the essential differences that flow from sex. 'The man should be strong and active; the woman should be weak and passive' (Everyman edn: 322). From this difference comes a contrasting education. They are not to be brought up in ignorance and kept to housework: Nature means them to think, to will, to love, to cultivate their minds as well as their persons; she puts these weapons in their hands to make up for their lack of strength and to enable them to direct the strength of men. They should learn many things, but only such things as are suitable' (Everyman edn.: 327)." Date: 1902–2001 Mortimer Jerome Adler was an American philosopher, educator, and popular author. As a philosopher he worked within the Aristotelian and Thomistic traditions. He lived for the longest stretches in New York City, Chicago, San Francisco, and San Mateo, California. He worked for Columbia University, the University of Chicago, Encyclopædia Britannica, and Adler's own Institute for Philosophical Research. Adler was married twice and had four children. Adler was a proponent of educational perennialism. Date: 1905–1998 Broudy's philosophical views were based on the tradition of classical realism, dealing with truth, goodness, and beauty. However he was also influenced by the modern philosophies of existentialism and instrumentalism. In his textbook Building a Philosophy of Education he has two major ideas that are the main points of his philosophical outlook: the first is truth and the second is universal structures to be found in humanity's struggle for education and the good life. Broudy also studied issues of society's demands on schools. He thought education would be a link to unify the diverse society, and he urged society to put more trust in and commitment to the schools and a good education. Date: c. 1225 – 1274 See Religious perennialism. 
Date: 1608–1674 The objective of medieval education was an overtly religious one, primarily concerned with uncovering transcendental truths that would lead a person back to God through a life of moral and religious choice (Kreeft 15). The vehicle by which these truths were uncovered was dialectic: To the medieval mind, debate was a fine art, a serious science, and a fascinating entertainment, much more than it is to the modern mind, because the medievals believed, like Socrates, that dialectic could uncover truth. Thus a 'scholastic disputation' was not a personal contest in cleverness, nor was it 'sharing opinions'; it was a shared journey of discovery (Kreeft 14–15). Date: 1859–1952 In "Democracy and Education: An Introduction to the Philosophy of Education", Dewey stated that education, in its broadest sense, is the means of the "social continuity of life" given the "primary ineluctable facts of the birth and death of each one of the constituent members in a social group". Education is therefore a necessity, for "the life of the group goes on." Dewey was a proponent of Educational Progressivism and was a relentless campaigner for reform of education, pointing out that the authoritarian, strict, pre-ordained knowledge approach of modern traditional education was too concerned with delivering knowledge, and not enough with understanding students' actual experiences. Date: 1842–1910 Date: 1871–1965 William Heard Kilpatrick was a US American philosopher of education and a colleague and a successor of John Dewey. He was a major figure in the progressive education movement of the early 20th century. Kilpatrick developed the Project Method for early childhood education, which was a form of Progressive Education organized curriculum and classroom activities around a subject's central theme. He believed that the role of a teacher should be that of a "guide" as opposed to an authoritarian figure. 
Kilpatrick believed that children should direct their own learning according to their interests and should be allowed to explore their environment, experiencing their learning through the natural senses. Proponents of Progressive Education and the Project Method reject traditional schooling that focuses on memorization, rote learning, strictly organized classrooms (desks in rows; students always seated), and typical forms of assessment. Date: 1929– Noddings' first sole-authored book "Caring: A Feminine Approach to Ethics and Moral Education" (1984) followed close on the 1982 publication of Carol Gilligan’s ground-breaking work in the ethics of care "In a Different Voice". While her work on ethics continued, with the publication of "Women and Evil" (1989) and later works on moral education, most of her later publications have been on the philosophy of education and educational theory. Her most significant works in these areas have been "Educating for Intelligent Belief or Unbelief" (1993) and "Philosophy of Education" (1995). Noddings' contribution to education philosophy centers around the ethic of care. Her belief was that a caring teacher-student relationship will result in the teacher designing a differentiated curriculum for each student, and that this curriculum would be based around the students' particular interests and needs. The teacher's claim to care must not be based on a one-time virtuous decision but an ongoing interest in the students' welfare. Date: 1931–2007 G. E. Moore (1873–1958) Bertrand Russell (1872–1970) Gottlob Frege (1848–1925) Date: 1919– The existentialist sees the world as one's personal subjectivity, where goodness, truth, and reality are individually defined. Reality is a world of existing, truth subjectively chosen, and goodness a matter of freedom. The subject matter of existentialist classrooms should be a matter of personal choice. 
Teachers view the individual as an entity within a social context in which the learner must confront others' views to clarify his or her own. Character development emphasizes individual responsibility for decisions. Real answers come from within the individual, not from outside authority. Examining life through authentic thinking involves students in genuine learning experiences. Existentialists are opposed to thinking about students as objects to be measured, tracked, or standardized. Such educators want the educational experience to focus on creating opportunities for self-direction and self-actualization. They start with the student, rather than with curriculum content. Date: 1921–1997 A Brazilian philosopher and educator committed to the cause of educating the impoverished peasants of his nation and collaborating with them in the pursuit of their liberation from what he regarded as "oppression," Freire is best known for his attack on what he called the "banking concept of education," in which the student was viewed as an empty account to be filled by the teacher. Freire also suggests that a deep reciprocity be inserted into our notions of teacher and student; he comes close to suggesting that the teacher-student dichotomy be completely abolished, instead promoting the roles of the participants in the classroom as the teacher-student (a teacher who learns) and the student-teacher (a learner who teaches). In its early, strong form this kind of classroom has sometimes been criticized on the grounds that it can mask rather than overcome the teacher's authority. Aspects of the Freirian philosophy have been highly influential in academic debates over "participatory development" and development more generally. 
Freire's emphasis on what he describes as "emancipation" through interactive participation has been used as a rationale for the participatory focus of development, as it is held that 'participation' in any form can lead to empowerment of poor or marginalised groups. Freire was a proponent of critical pedagogy. "He participated in the import of European doctrines and ideas into Brazil, assimilated them to the needs of a specific socio-economic situation, and thus expanded and refocused them in a thought-provoking way." Date: 1889–1976 Heidegger's philosophizing about education was primarily related to higher education. He believed that teaching and research in the university should be unified and aim towards testing and interrogating the "ontological presuppositions which implicitly guide research in each domain of knowledge." Date: 1900–2002 Date: 1924–1998 Date: 1926–1984 "Normative philosophies or theories of education may make use of the results of philosophical thought and of factual inquiries about human beings and the psychology of learning, but in any case they propound views about what education should be, what dispositions it should cultivate, why it ought to cultivate them, how and in whom it should do so, and what forms it should take. In a full-fledged philosophical normative theory of education, besides analysis of the sorts described, there will normally be propositions of the following kinds: Perennialists believe that one should teach the things that one deems to be of everlasting importance to all people everywhere. They believe that the most important topics develop a person. Since details of fact change constantly, these cannot be the most important. Therefore, one should teach principles, not facts. Since people are human, one should teach first about humans, not machines or techniques. Since people are people first, and workers second if at all, one should teach liberal topics first, not vocational topics. 
The focus is primarily on teaching reasoning and wisdom rather than facts, the liberal arts rather than vocational training. Date: 1930–1992 Bloom, a professor of political science at the University of Chicago, argued for a traditional Great Books-based liberal education in his book "The Closing of the American Mind". The Classical education movement advocates a form of education based in the traditions of Western culture, with a particular focus on education as understood and taught in the Middle Ages. The term "classical education" has been used in English for several centuries, with each era modifying the definition and adding its own selection of topics. By the end of the 18th century, in addition to the trivium and quadrivium of the Middle Ages, the definition of a classical education embraced study of literature, poetry, drama, philosophy, history, art, and languages. In the 20th and 21st centuries it is used to refer to a broad-based study of the liberal arts and sciences, as opposed to a practical or pre-professional program. Classical education can be described as rigorous and systematic, separating children and their learning into three categories: Grammar, Dialectic, and Rhetoric. Date: 1842–1923 Mason was a British educator who invested her life in improving the quality of children's education. Her ideas led to a method used by some homeschoolers. Mason's philosophy of education is probably best summarized by the principles given at the beginning of each of her books. Two key mottos taken from those principles are "Education is an atmosphere, a discipline, a life" and "Education is the science of relations." She believed that children were born persons and should be respected as such; they should also be taught the Way of the Will and the Way of Reason. Her motto for students was "I am, I can, I ought, I will." 
Charlotte Mason believed that children should be introduced to subjects through living books, not through the use of "compendiums, abstracts, or selections." She used abridged books only when the content was deemed inappropriate for children. She preferred that parents or teachers read aloud those texts (such as Plutarch and the Old Testament), making omissions only where necessary. Educational essentialism is an educational philosophy whose adherents believe that children should learn the traditional basic subjects and that these should be learned thoroughly and rigorously. This is based on the view that there are essentials people should know in order to be educated, and that learners are expected to master the academic areas of reading, writing, mathematics, science, geography, and technology. This movement, thus, stresses the role played by the teacher as the authority in the classroom, driving the goal of content mastery. An essentialist program normally teaches children progressively, from less complex skills to more complex. The "back to basics" movement is an example of essentialism. Date: 1874–1946 William Chandler Bagley taught in elementary schools before becoming a professor of education at the University of Illinois, where he served as the Director of the School of Education from 1908 until 1917. He was a professor of education at Teachers College, Columbia, from 1917 to 1940. An opponent of pragmatism and progressive education, Bagley insisted on the value of knowledge for its own sake, not merely as an instrument, and he criticized his colleagues for their failure to emphasize systematic study of academic subjects. Bagley was a proponent of educational essentialism. Critical pedagogy is an "educational movement, guided by passion and principle, to help students develop consciousness of freedom, recognize authoritarian tendencies, and connect knowledge to power and the ability to take constructive action." 
Based in Marxist theory, critical pedagogy draws on radical democracy, anarchism, feminism, and other movements for social justice. Date: 1889–1974 Date: 1870–1952 The Montessori method arose from Dr. Maria Montessori's discovery of what she referred to as "the child's true normal nature" in 1907, which happened in the process of her experimental observation of young children given freedom in an environment prepared with materials designed for their self-directed learning activity. The method itself aims to duplicate this experimental observation of children to bring about, sustain and support their true natural way of being. Waldorf education (also known as Steiner or Steiner-Waldorf education) is a humanistic approach to pedagogy based upon the educational philosophy of the Austrian philosopher Rudolf Steiner, the founder of anthroposophy. Learning is interdisciplinary, integrating practical, artistic, and conceptual elements. The approach emphasizes the role of the imagination in learning, developing thinking that includes a creative as well as an analytic component. The educational philosophy's overarching goals are to provide young people the basis on which to develop into free, morally responsible and integrated individuals, and to help every child fulfill his or her unique destiny, the existence of which anthroposophy posits. Schools and teachers are given considerable freedom to define curricula within collegial structures. Date: 1861–1925 Steiner founded a holistic educational impulse on the basis of his spiritual philosophy (anthroposophy). Now known as Steiner or Waldorf education, his pedagogy emphasizes a balanced development of cognitive, affective/artistic, and practical skills (head, heart, and hands). Schools are normally self-administered by faculty; emphasis is placed upon giving individual teachers the freedom to develop creative methods. 
Steiner's theory of child development divides education into three discrete developmental stages predating but with close similarities to the stages of development described by Piaget. Early childhood education occurs through imitation; teachers provide practical activities and a healthy environment. Steiner believed that young children should meet only goodness. Elementary education is strongly arts-based, centered on the teacher's creative authority; the elementary school-age child should meet beauty. Secondary education seeks to develop the judgment, intellect, and practical idealism; the adolescent should meet truth. Democratic education is a theory of learning and school governance in which students and staff participate freely and equally in a school democracy. In a democratic school, there is typically shared decision-making among students and staff on matters concerning living, working, and learning together. Date: 1883–1973 Neill founded Summerhill School in Suffolk, England, in 1921; it is the oldest existing democratic school. He wrote a number of books that now define much of contemporary democratic education philosophy. Neill believed that the happiness of the child should be the paramount consideration in decisions about the child's upbringing, and that this happiness grew from a sense of personal freedom. He felt that deprivation of this sense of freedom during childhood, and the consequent unhappiness experienced by the repressed child, was responsible for many of the psychological disorders of adulthood. Educational progressivism is the belief that education must be based on the principle that humans are social animals who learn best in real-life activities with other people. Progressivists, like proponents of most educational theories, claim to rely on the best available scientific theories of learning. 
Most progressive educators believe that children learn as if they were scientists, following a process similar to John Dewey's model of learning known as "the pattern of inquiry": 1) Become aware of the problem. 2) Define the problem. 3) Propose hypotheses to solve it. 4) Evaluate the consequences of the hypotheses from one's past experience. 5) Test the likeliest solution. Date: 1859–1952 In 1896, Dewey opened the Laboratory School at the University of Chicago in an institutional effort to pursue together rather than apart "utility and culture, absorption and expression, theory and practice, [which] are [indispensable] elements in any educational scheme." As the unified head of the departments of Philosophy, Psychology and Pedagogy, John Dewey articulated a desire to organize an educational experience where children could be more creative than the best of progressive models of his day. Transactionalism as a pragmatic philosophy grew out of the work he did in the Laboratory School. The two most influential works that stemmed from his research and study were "The Child and the Curriculum" (1902) and "Democracy and Education" (1916). Dewey wrote of the dualisms that plagued educational philosophy in the latter book: "Instead of seeing the educative process steadily and as a whole, we see conflicting terms. We get the case of the child vs. the curriculum; of the individual nature vs. social culture." Dewey found that the preoccupation with facts as knowledge in the educative process led students to memorize "ill-understood rules and principles" and while second-hand knowledge learned in mere words is a beginning in study, mere words can never replace the ability to organize knowledge into both useful and valuable experience. Date: 1896–1980 Jean Piaget was a Swiss developmental psychologist known for his epistemological studies with children. His theory of cognitive development and epistemological view are together called "genetic epistemology". 
Piaget placed great importance on the education of children. As the Director of the International Bureau of Education, he declared in 1934 that "only education is capable of saving our societies from possible collapse, whether violent, or gradual." Piaget created the International Centre for Genetic Epistemology in Geneva in 1955 and directed it until 1980. According to Ernst von Glasersfeld, Jean Piaget is "the great pioneer of the constructivist theory of knowing." Jean Piaget described himself as an epistemologist, interested in the process of the qualitative development of knowledge. As he says in the introduction of his book "Genetic Epistemology": ""What genetic epistemology proposes is to discover the roots of the different varieties of knowledge, from their elementary forms through to the higher levels, including scientific knowledge."" Date: 1915–2016 Another important contributor to the inquiry method in education is Bruner. His books "The Process of Education" and "Toward a Theory of Instruction" are landmarks in conceptualizing learning and curriculum development. He argued that any subject can be taught in some intellectually honest form to any child at any stage of development. This notion was an underpinning for his concept of the "spiral" (helical) curriculum which posited the idea that a curriculum should revisit basic ideas, building on them until the student had grasped the full formal concept. He emphasized intuition as a neglected but essential feature of productive thinking. He felt that interest in the material being learned was the best stimulus for learning rather than external motivation such as grades. Bruner developed the concept of discovery learning which promoted learning as a process of constructing new ideas based on current or past knowledge. Students are encouraged to discover facts and relationships and continually build on what they already know. 
Unschooling is a range of educational philosophies and practices centered on allowing children to learn through their natural life experiences, including child directed play, game play, household responsibilities, work experience, and social interaction, rather than through a more traditional school curriculum. Unschooling encourages exploration of activities led by the children themselves, facilitated by the adults. Unschooling differs from conventional schooling principally in the thesis that standard curricula and conventional grading methods, as well as other features of traditional schooling, are counterproductive to the goal of maximizing the education of each child. In 1964 Holt published his first book, "How Children Fail", asserting that the academic failure of schoolchildren was not "despite" the efforts of the schools, but actually "because" of the schools. Not surprisingly, "How Children Fail" ignited a firestorm of controversy. Holt was catapulted into the American national consciousness to the extent that he made appearances on major TV talk shows, wrote book reviews for "Life" magazine, and was a guest on the "To Tell The Truth" TV game show. In his follow-up work, "How Children Learn", published in 1967, Holt tried to elucidate the learning process of children and why he believed school short circuits that process. Contemplative education focuses on bringing introspective practices such as mindfulness and yoga into curricular and pedagogical processes for diverse aims grounded in secular, spiritual, religious and post-secular perspectives. Contemplative approaches may be used in the classroom, especially in tertiary or (often in modified form) in secondary education. Parker Palmer is a recent pioneer in contemplative methods. The Center for Contemplative Mind in Society founded a branch focusing on education, The Association for Contemplative Mind in Higher Education. 
Contemplative methods may also be used by teachers in their preparation; Waldorf education was one of the pioneers of the latter approach. In this case, inspiration for enriching the content, format, or teaching methods may be sought through various practices, such as consciously reviewing the previous day's activities; actively holding the students in consciousness; and contemplating inspiring pedagogical texts. Zigler suggested that only through focusing on their own spiritual development could teachers positively impact the spiritual development of students.
https://en.wikipedia.org/wiki?curid=24983
Statistical hypothesis testing A statistical hypothesis is a hypothesis that is testable on the basis of observing a process that is modeled via a set of random variables. A statistical hypothesis test, sometimes called confirmatory data analysis, is a method of statistical inference. Commonly, two statistical data sets are compared, or a data set obtained by sampling is compared against a synthetic data set from an idealized model. An alternative hypothesis is proposed for the statistical relationship between the two data sets, and is compared to an idealized null hypothesis that proposes no relationship between these two data sets. This comparison is deemed "statistically significant" if the relationship between the data sets would be an unlikely realization of the null hypothesis according to a threshold probability—the significance level. Hypothesis tests are used when determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance. The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of errors. The first type of error occurs when the null hypothesis is wrongly rejected. The second type of error occurs when the null hypothesis is wrongly not rejected. (The two types are known as type I and type II errors.) Hypothesis tests based on statistical significance are another way of expressing confidence intervals (more precisely, confidence sets). In other words, every hypothesis test based on significance can be obtained via a confidence interval, and every confidence interval can be obtained via a hypothesis test based on significance. Significance-based hypothesis testing is the most common framework for statistical hypothesis testing. 
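The significance-level logic above can be sketched with a small exact permutation test (the measurements here are invented for illustration): the null hypothesis of "no relationship" between the two data sets says every relabeling of the pooled data is equally likely, so the p-value is simply the fraction of relabelings at least as extreme as the one observed.

```python
from itertools import combinations

def permutation_test(a, b):
    """Exact two-sample permutation test for a difference in means.

    Null hypothesis: the two samples come from the same distribution,
    so every way of splitting the pooled data into groups of the same
    sizes is equally likely. Returns the two-sided p-value.
    """
    pooled = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    n, count, total = len(a), 0, 0
    for idx in combinations(range(len(pooled)), n):
        chosen = set(idx)
        ga = [pooled[i] for i in chosen]
        gb = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        diff = abs(sum(ga) / len(ga) - sum(gb) / len(gb))
        total += 1
        if diff >= observed - 1e-12:  # at least as extreme as observed
            count += 1
    return count / total

# Illustrative (made-up) measurements
a = [5.1, 4.9, 5.3, 5.2]
b = [4.2, 4.0, 4.4, 4.3]
p = permutation_test(a, b)
print(p, "significant at 0.05" if p < 0.05 else "not significant")
```

With these values only the original split and its mirror image are as extreme as observed, so p = 2/70 ≈ 0.029, below the conventional 0.05 threshold.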
An alternative framework for statistical hypothesis testing is to specify a set of statistical models, one for each candidate hypothesis, and then use model selection techniques to choose the most appropriate model. The most common selection techniques are based on either the Akaike information criterion or the Bayes factor. In the statistics literature, statistical hypothesis testing plays a fundamental role. The usual line of reasoning is as follows: An alternative process is commonly used: The two processes are equivalent. The former process was advantageous in the past when only tables of test statistics at common probability thresholds were available. It allowed a decision to be made without the calculation of a probability. It was adequate for classwork and for operational use, but it was deficient for reporting results. The latter process relied on extensive tables or on computational support not always available. The explicit calculation of a probability is useful for reporting. The calculations are now trivially performed with appropriate software. The difference between the two processes, as applied to the radioactive suitcase example (below): The former report is adequate, the latter gives a more detailed explanation of the data and the reason why the suitcase is being checked. The difference between accepting the null hypothesis and simply failing to reject it is important. The "fail to reject" terminology highlights the fact that a non-significant result provides no way to determine which of the two hypotheses is true, so all that can be concluded is that the null hypothesis has not been rejected. The phrase "accept the null hypothesis" may suggest it has been proved simply because it has not been disproved, a logical fallacy known as the argument from ignorance. Unless a test with particularly high power is used, the idea of "accepting" the null hypothesis is likely to be incorrect. 
Nonetheless, the terminology is prevalent throughout statistics, where the meaning actually intended is well understood. The processes described here are perfectly adequate for computation. They seriously neglect the design of experiments considerations. It is particularly critical that appropriate sample sizes be estimated before conducting the experiment. The phrase "test of significance" was coined by statistician Ronald Fisher. The "p"-value is the probability that a given result (or a more significant result) would occur under the null hypothesis. For example, say that a fair coin is tested for fairness (the null hypothesis). At a significance level of 0.05, the test would be expected to (incorrectly) reject the null hypothesis in about 1 out of every 20 repetitions. The "p"-value does not provide the probability that either hypothesis is correct (a common source of confusion). If the "p"-value is less than the chosen significance threshold (equivalently, if the observed test statistic is in the critical region), then we say the null hypothesis is rejected at the chosen level of significance. Rejection of the null hypothesis is a conclusion. This is like a "guilty" verdict in a criminal trial: the evidence is sufficient to reject innocence, thus proving guilt. We might accept the alternative hypothesis (and the research hypothesis). If the "p"-value is "not" less than the chosen significance threshold (equivalently, if the observed test statistic is outside the critical region), then the evidence is insufficient to support a conclusion. (This is similar to a "not guilty" verdict.) The researcher typically gives extra consideration to those cases where the "p"-value is close to the significance level. Some people find it helpful to think of the hypothesis testing framework as analogous to a mathematical proof by contradiction. 
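The coin example can be made concrete. This is a minimal sketch (the 60-heads-in-100-flips observation is invented for illustration): the two-sided p-value under the fair-coin null is the total probability of all outcomes at least as far from the expected count as the one observed.

```python
from math import comb

def coin_p_value(n, k):
    """Two-sided p-value for testing a fair coin (null: P(heads) = 0.5).

    Sums the binomial probabilities of all head-counts j that are at
    least as far from n/2 as the observed count k.
    """
    extreme = abs(k - n / 2)
    return sum(comb(n, j) * 0.5 ** n
               for j in range(n + 1) if abs(j - n / 2) >= extreme)

# 60 heads in 100 flips of a supposedly fair coin
p = coin_p_value(100, 60)
print(round(p, 4))  # about 0.057: not quite significant at the 0.05 level
```

Note that the p-value here is a statement about the data under the null, not the probability that the coin is fair.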
In the Lady tasting tea example (below), Fisher required the Lady to properly categorize all of the cups of tea to justify the conclusion that the result was unlikely to result from chance. His test revealed that if the lady was effectively guessing at random (the null hypothesis), there was a 1.4% chance that the observed results (perfectly ordered tea) would occur. Whether rejection of the null hypothesis truly justifies acceptance of the research hypothesis depends on the structure of the hypotheses. Rejecting the hypothesis that a large paw print originated from a bear does not immediately prove the existence of Bigfoot. Hypothesis testing emphasizes the rejection, which is based on a probability, rather than the acceptance, which requires extra steps of logic. "The probability of rejecting the null hypothesis is a function of five factors: whether the test is one- or two-tailed, the level of significance, the standard deviation, the amount of deviation from the null hypothesis, and the number of observations." These factors are a source of criticism; factors under the control of the experimenter/analyst give the results an appearance of subjectivity. Statistics are helpful in analyzing most collections of data. This is equally true of hypothesis testing, which can justify conclusions even when no scientific theory exists. In the Lady tasting tea example, it was "obvious" that no difference existed between (milk poured into tea) and (tea poured into milk). The data contradicted the "obvious". Real-world applications of hypothesis testing include: Statistical hypothesis testing plays an important role in the whole of statistics and in statistical inference. 
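The 1.4% figure can be checked directly: with eight cups, four of each kind, there are C(8,4) equally likely ways for a guesser to choose which four were milk-first, and only one of them is completely correct.

```python
from math import comb

# Probability of correctly selecting all 4 "milk-first" cups out of 8 by
# pure guessing: one correct choice out of C(8,4) equally likely choices.
p_all_correct = 1 / comb(8, 4)
print(p_all_correct)  # 1/70, approximately 0.0143, i.e. the 1.4% cited above
```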
For example, Lehmann (1992) in a review of the fundamental paper by Neyman and Pearson (1933) says: "Nevertheless, despite their shortcomings, the new paradigm formulated in the 1933 paper, and the many developments carried out within its framework continue to play a central role in both the theory and practice of statistics and can be expected to do so in the foreseeable future". Significance testing has been the favored statistical tool in some experimental social sciences (over 90% of articles in the "Journal of Applied Psychology" during the early 1990s). Other fields have favored the estimation of parameters (e.g. effect size). Significance testing is used as a substitute for the traditional comparison of predicted value and experimental result at the core of the scientific method. When theory is only capable of predicting the sign of a relationship, a directional (one-sided) hypothesis test can be configured so that only a statistically significant result supports theory. This form of theory appraisal is the most heavily criticized application of hypothesis testing. "If the government required statistical procedures to carry warning labels like those on drugs, most inference methods would have long labels indeed." This caution applies to hypothesis tests and alternatives to them. The successful hypothesis test is associated with a probability and a type-I error rate. The conclusion "might" be wrong. The conclusion of the test is only as solid as the sample upon which it is based. The design of the experiment is critical. A number of unexpected effects have been observed including: A statistical analysis of misleading data produces misleading conclusions. The issue of data quality can be more subtle. In forecasting for example, there is no agreement on a measure of forecast accuracy. In the absence of a consensus measurement, no decision based on measurements will be without controversy. 
The book "How to Lie with Statistics" is the most popular book on statistics ever published. It does not much consider hypothesis testing, but its cautions are applicable, including: Many claims are made on the basis of samples too small to convince. If a report does not mention sample size, be doubtful. Hypothesis testing acts as a filter of statistical conclusions; only those results meeting a probability threshold are publishable. Economics also acts as a publication filter; only those results favorable to the author and funding source may be submitted for publication. The impact of filtering on publication is termed publication bias. A related problem is that of multiple testing (sometimes linked to data mining), in which a variety of tests for a variety of possible effects are applied to a single data set and only those yielding a significant result are reported. These are often dealt with by using multiplicity correction procedures that control the family wise error rate (FWER) or the false discovery rate (FDR). Those making critical decisions based on the results of a hypothesis test are prudent to look at the details rather than the conclusion alone. In the physical sciences most results are fully accepted only when independently confirmed. The general advice concerning statistics is, "Figures never lie, but liars figure" (anonymous). The earliest use of statistical hypothesis testing is generally credited to the question of whether male and female births are equally likely (null hypothesis), which was addressed in the 1700s by John Arbuthnot (1710), and later by Pierre-Simon Laplace (1770s). Arbuthnot examined birth records in London for each of the 82 years from 1629 to 1710, and applied the sign test, a simple non-parametric test. In every year, the number of males born in London exceeded the number of females. 
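Arbuthnot's argument reduces to a single binomial tail, a sign test in its simplest form: if a male excess and a female excess are equally likely in any given year, the probability that all 82 years favor males is 0.5 raised to the 82nd power.

```python
# Sign test for Arbuthnot's data: 82 of 82 years with more male births.
# Under the null hypothesis (male or female excess equally likely each
# year), the probability of all 82 years favoring males is 0.5**82.
p = 0.5 ** 82
print(p)  # roughly 2.1e-25, i.e. about 1 in 4.8 * 10**24
```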
Considering more male or more female births as equally likely, the probability of the observed outcome is 0.5^82, or about 1 in 4.836 × 10^24; in modern terms, this is the "p"-value. Arbuthnot concluded that this is too small to be due to chance and must instead be due to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the "p" = 1/2^82 significance level. Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls. He concluded by calculation of a "p"-value that the excess was a real, but unexplained, effect. In a famous example of hypothesis testing, known as the "Lady tasting tea", Dr. Muriel Bristol, a female colleague of Fisher, claimed to be able to tell whether the tea or the milk was added first to a cup. Fisher proposed to give her eight cups, four of each variety, in random order. One could then ask what the probability was of her getting the number she got correct purely by chance. The null hypothesis was that the Lady had no such ability. The test statistic was a simple count of the number of successes in selecting the 4 cups. The critical region was the single case of 4 successes of 4 possible based on a conventional probability criterion (< 5%), which would be considered a statistically significant result. A statistical test procedure is comparable to a criminal trial; a defendant is considered not guilty as long as his or her guilt is not proven. The prosecutor tries to prove the guilt of the defendant. Only when there is enough evidence for the prosecution is the defendant convicted. In the start of the procedure, there are two hypotheses, H0: "the defendant is not guilty", and H1: "the defendant is guilty". The first one, H0, is called the "null hypothesis", and is for the time being accepted. 
The second one, H1, is called the "alternative hypothesis". It is the alternative hypothesis that one hopes to support. The hypothesis of innocence is only rejected when an error is very unlikely, because one doesn't want to convict an innocent defendant. Such an error is called "error of the first kind" (i.e., the conviction of an innocent person), and the occurrence of this error is controlled to be rare. As a consequence of this asymmetric behaviour, an "error of the second kind" (acquitting a person who committed the crime) is more common. A criminal trial can be regarded as either or both of two decision processes: guilty vs not guilty or evidence vs a threshold ("beyond a reasonable doubt"). In one view, the defendant is judged; in the other view the performance of the prosecution (which bears the burden of proof) is judged. A hypothesis test can be regarded as either a judgment of a hypothesis or as a judgment of evidence. The following example was produced by a philosopher describing scientific methods generations before hypothesis testing was formalized and popularized. Few beans of this handful are white. Most beans in this bag are white. Therefore: Probably, these beans were taken from another bag. This is a hypothetical inference. The beans in the bag are the population. The handful is the sample. The null hypothesis is that the sample originated from the population. The criterion for rejecting the null hypothesis is the "obvious" difference in appearance (an informal difference in the mean). The interesting result is that consideration of a real population and a real sample produced an imaginary bag. The philosopher was considering logic rather than probability. To be a real statistical hypothesis test, this example requires the formalities of a probability calculation and a comparison of that probability to a standard. 
A simple generalization of the example considers a mixed bag of beans and a handful that contains either very few or very many white beans. The generalization considers both extremes. It requires more calculations and more comparisons to arrive at a formal answer, but the core philosophy is unchanged: if the composition of the handful is greatly different from that of the bag, then the sample probably originated from another bag. The original example is termed a one-sided or a one-tailed test, while the generalization is termed a two-sided or two-tailed test. The statement also relies on the inference that the sampling was random. If someone had been picking through the bag to find white beans, then it would explain why the handful had so many white beans, and also explain why the number of white beans in the bag was depleted (although the bag is probably intended to be assumed much larger than one's hand). A person (the subject) is tested for clairvoyance. They are shown the reverse of a randomly chosen playing card 25 times and asked which of the four suits it belongs to. The number of hits, or correct answers, is called "X". As we try to find evidence of their clairvoyance, for the time being the null hypothesis is that the person is not clairvoyant. The alternative is: the person is (more or less) clairvoyant. If the null hypothesis is valid, the only thing the test person can do is guess. For every card, the probability (relative frequency) of any single suit appearing is 1/4. If the alternative is valid, the test subject will predict the suit correctly with probability greater than 1/4. We will call the probability of guessing correctly "p". The hypotheses, then, are H0: p = 1/4 (the subject is guessing) and H1: p > 1/4 (the subject is clairvoyant). When the test subject correctly predicts all 25 cards, we will consider them clairvoyant, and reject the null hypothesis. Thus also with 24 or 23 hits. With only 5 or 6 hits, on the other hand, there is no cause to consider them so. But what about 12 hits, or 17 hits? 
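This question can be explored numerically. A sketch under the stated model (25 cards, guessing probability 1/4): for each candidate cutoff c, compute the chance of scoring c or more hits by pure guessing.

```python
from math import comb

def tail(n, p, c):
    """P(X >= c) for X ~ Binomial(n, p): the chance of c or more hits."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(c, n + 1))

# Probability of at least c correct guesses out of 25 under the null
# hypothesis (pure guessing, p = 1/4), for several candidate cutoffs:
for c in (10, 12, 13, 17, 25):
    print(c, tail(25, 0.25, c))
```

Running this shows that 12 hits would still happen by chance slightly more than 1% of the time, while 13 or more would not, which is where the cutoff discussion below lands.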
What is the critical number, "c", of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value "c"? With the choice "c"=25 (i.e. we only accept clairvoyance when all cards are predicted correctly) we're more critical than with "c"=10. In the first case almost no test subjects will be recognized to be clairvoyant, in the second case, a certain number will pass the test. In practice, one decides how critical one will be. That is, one decides how often one accepts an error of the first kind – a false positive, or Type I error. With "c" = 25 the probability of such an error is P(X = 25) = (1/4)^25 ≈ 8.9 × 10^−16, and hence very small. The probability of a false positive is the probability of randomly guessing correctly all 25 times. Being less critical, with "c"=10, gives P(X ≥ 10), the sum over k from 10 to 25 of C(25,k)(1/4)^k(3/4)^(25−k), which is approximately 0.07 (where C(25,k) is the binomial coefficient 25 choose k). Thus, "c" = 10 yields a much greater probability of false positive. Before the test is actually performed, the maximum acceptable probability of a Type I error ("α") is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.) Depending on this Type I error rate, the critical value "c" is calculated. For example, if we select an error rate of 1%, "c" is calculated thus: we require P(X ≥ c) ≤ 0.01 under the null hypothesis. From all the numbers c with this property, we choose the smallest, in order to minimize the probability of a Type II error, a false negative. For the above example, we select "c" = 13. As an example, consider determining whether a suitcase contains some radioactive material. Placed under a Geiger counter, it produces 10 counts per minute. The null hypothesis is that no radioactive material is in the suitcase and that all measured counts are due to ambient radioactivity typical of the surrounding air and harmless objects. We can then calculate how likely it is that we would observe 10 counts per minute if the null hypothesis were true. 
If the null hypothesis predicts (say) on average 9 counts per minute, then according to the Poisson distribution typical for radioactive decay there is about 41% chance of recording 10 or more counts. Thus we can say that the suitcase is compatible with the null hypothesis (this does not guarantee that there is no radioactive material, just that we don't have enough evidence to suggest there is). On the other hand, if the null hypothesis predicts 3 counts per minute (for which the Poisson distribution predicts only 0.1% chance of recording 10 or more counts) then the suitcase is not compatible with the null hypothesis, and there are likely other factors responsible for producing the measurements. The test does not directly assert the presence of radioactive material. A "successful" test asserts that the claim of no radioactive material present is unlikely given the reading (and therefore ...). The double negative (disproving the null hypothesis) of the method is confusing, but using a counter-example to disprove is standard mathematical practice. The attraction of the method is its practicality. We know (from experience) the expected range of counts with only ambient radioactivity present, so we can say that a measurement is "unusually" large. Statistics just formalizes the intuitive by using numbers instead of adjectives. We probably do not know the characteristics of the radioactive suitcases; we just assume that they produce larger readings. To slightly formalize intuition: radioactivity is suspected if the Geiger-count with the suitcase is among or exceeds the greatest (5% or 1%) of the Geiger-counts made with ambient radiation alone. This makes no assumptions about the distribution of counts. Many ambient radiation observations are required to obtain good probability estimates for rare events. The test described here is more fully the null-hypothesis statistical significance test. 
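The 41% and 0.1% figures in the suitcase example can be reproduced from the Poisson tail probability:

```python
from math import exp, factorial

def poisson_tail(lam, c):
    """P(X >= c) for X ~ Poisson(lam): chance of c or more counts."""
    return 1 - sum(exp(-lam) * lam ** k / factorial(k) for k in range(c))

# Probability of recording 10 or more counts per minute when the null
# hypothesis predicts an average of 9 (compatible) or 3 (not compatible):
print(round(poisson_tail(9, 10), 3))   # about 0.41
print(round(poisson_tail(3, 10), 4))   # about 0.001
```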
The null hypothesis represents what we would believe by default, before seeing any evidence. Statistical significance is a possible finding of the test, declared when the observed sample is unlikely to have occurred by chance if the null hypothesis were true. The name of the test describes its formulation and its possible outcome. One characteristic of the test is its crisp decision: to reject or not reject the null hypothesis. A calculated value is compared to a threshold, which is determined from the tolerable risk of error. The following definitions are mainly based on the exposition in the book by Lehmann and Romano: A statistical hypothesis test compares a test statistic ("z" or "t" for examples) to a threshold. The test statistic (the formula found in the table below) is based on optimality. For a fixed level of Type I error rate, use of these statistics minimizes Type II error rates (equivalent to maximizing power). The following terms describe tests in terms of such optimality: Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly "deciding" that a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true. Note that this probability of making an incorrect decision is "not" the probability that the null hypothesis is true, nor whether any specific alternative hypothesis is true. This contrasts with other possible techniques of decision theory in which the null and alternative hypothesis are treated on a more equal basis. One naïve Bayesian approach to hypothesis testing is to base decisions on the posterior probability, but this fails when comparing point and continuous hypotheses. 
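The "compare a test statistic to a threshold" formulation can be sketched with a one-sample z statistic. The data and the known population standard deviation here are invented for illustration; 1.96 is the standard two-sided 5% threshold for a standard normal variable.

```python
from math import sqrt

def z_statistic(sample, mu0, sigma):
    """One-sample z statistic: how many standard errors the sample mean
    lies from the null value mu0, assuming a known population sigma."""
    n = len(sample)
    mean = sum(sample) / n
    return (mean - mu0) / (sigma / sqrt(n))

# Illustrative measurements; null hypothesis: the true mean is 5.0.
z = z_statistic([5.2, 5.0, 5.4, 5.1, 5.3], mu0=5.0, sigma=0.2)
print(round(z, 2), "reject" if abs(z) > 1.96 else "fail to reject")
```

Here the calculated value exceeds the threshold, so the null hypothesis is rejected at the 5% level.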
Other approaches to decision making, such as Bayesian decision theory, attempt to balance the consequences of incorrect decisions across all possibilities, rather than concentrating on a single null hypothesis. A number of other approaches to reaching a decision based on data are available via decision theory and optimal decisions, some of which have desirable properties. Hypothesis testing, though, is a dominant approach to data analysis in many fields of science. Extensions to the theory of hypothesis testing include the study of the power of tests, i.e. the probability of correctly rejecting the null hypothesis given that it is false. Such considerations can be used for the purpose of sample size determination prior to the collection of data. While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth, as discussed above. Modern significance testing is largely the product of Karl Pearson ("p"-value, Pearson's chi-squared test), William Sealy Gosset (Student's t-distribution), and Ronald Fisher ("null hypothesis", analysis of variance, "significance test"), while hypothesis testing was developed by Jerzy Neyman and Egon Pearson (son of Karl). Ronald Fisher began his life in statistics as a Bayesian (Zabell 1992), but Fisher soon grew disenchanted with the subjectivity involved (namely use of the principle of indifference when determining prior probabilities), and sought to provide a more "objective" approach to inductive inference. Fisher was an agricultural statistician who emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. 
Modern hypothesis testing is an inconsistent hybrid of the Fisher vs Neyman/Pearson formulation, methods and terminology developed in the early 20th century. Fisher popularized the "significance test". He required a null-hypothesis (corresponding to a population frequency distribution) and a sample. His (now familiar) calculations determined whether to reject the null-hypothesis or not. Significance testing did not utilize an alternative hypothesis so there was no concept of a Type II error. The "p"-value was devised as an informal, but objective, index meant to help a researcher determine (based on other knowledge) whether to modify future experiments or strengthen one's faith in the null hypothesis. Hypothesis testing (and Type I/II errors) was devised by Neyman and Pearson as a more objective alternative to Fisher's "p"-value, also meant to determine researcher behaviour, but without requiring any inductive inference by the researcher. Neyman & Pearson considered a different problem (which they called "hypothesis testing"). They initially considered two simple hypotheses (both with frequency distributions). They calculated two probabilities and typically selected the hypothesis associated with the higher probability (the hypothesis more likely to have generated the sample). Their method always selected a hypothesis. It also allowed the calculation of both types of error probabilities. Fisher and Neyman/Pearson clashed bitterly. Neyman/Pearson considered their formulation to be an improved generalization of significance testing. (The defining paper was abstract. Mathematicians have generalized and refined the theory for decades.) Fisher thought that it was not applicable to scientific research because often, during the course of the experiment, it is discovered that the initial assumptions about the null hypothesis are questionable due to unexpected sources of error. 
He believed that the use of rigid reject/accept decisions based on models formulated before data is collected was incompatible with this common scenario faced by scientists and attempts to apply this method to scientific research would lead to mass confusion. The dispute between Fisher and Neyman–Pearson was waged on philosophical grounds, characterized by a philosopher as a dispute over the proper role of models in statistical inference. Events intervened: Neyman accepted a position in the western hemisphere, breaking his partnership with Pearson and separating disputants (who had occupied the same building) by much of the planetary diameter. World War II provided an intermission in the debate. The dispute between Fisher and Neyman terminated (unresolved after 27 years) with Fisher's death in 1962. Neyman wrote a well-regarded eulogy. Some of Neyman's later publications reported "p"-values and significance levels. The modern version of hypothesis testing is a hybrid of the two approaches that resulted from confusion by writers of statistical textbooks (as predicted by Fisher) beginning in the 1940s. (But signal detection, for example, still uses the Neyman/Pearson formulation.) Great conceptual differences and many caveats in addition to those mentioned above were ignored. Neyman and Pearson provided the stronger terminology, the more rigorous mathematics and the more consistent philosophy, but the subject taught today in introductory statistics has more similarities with Fisher's method than theirs. This history explains the inconsistent terminology (example: the null hypothesis is never accepted, but there is a region of acceptance). 
Sometime around 1940, in an apparent effort to provide researchers with a "non-controversial" way to have their cake and eat it too, the authors of statistical text books began anonymously combining these two strategies by using the "p"-value in place of the test statistic (or data) to test against the Neyman–Pearson "significance level". Thus, researchers were encouraged to infer the strength of their data against some null hypothesis using "p"-values, while also thinking they are retaining the post-data collection objectivity provided by hypothesis testing. It then became customary for the null hypothesis, which was originally some realistic research hypothesis, to be used almost solely as a strawman "nil" hypothesis (one where a treatment has no effect, regardless of the context). Paul Meehl has argued that the epistemological importance of the choice of null hypothesis has gone largely unacknowledged. When the null hypothesis is predicted by theory, a more precise experiment will be a more severe test of the underlying theory. When the null hypothesis defaults to "no difference" or "no effect", a more precise experiment is a less severe test of the theory that motivated performing the experiment. An examination of the origins of the latter practice may therefore be useful: 1778: Pierre Laplace compares the birthrates of boys and girls in multiple European cities. He states: "it is natural to conclude that these possibilities are very nearly in the same ratio". Laplace's null hypothesis was thus that the birthrates of boys and girls should be equal, given "conventional wisdom". 1900: Karl Pearson develops the chi-squared test to determine "whether a given form of frequency curve will effectively describe the samples drawn from a given population." Thus the null hypothesis is that a population is described by some distribution predicted by theory. He uses as an example the numbers of fives and sixes in the Weldon dice throw data. 
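Pearson's goodness-of-fit statistic for this kind of question can be sketched as follows. The counts here are invented for illustration, not Weldon's actual data; 3.84 is the well-known 95th percentile of the chi-squared distribution with one degree of freedom.

```python
def chi_squared(observed, expected):
    """Pearson's chi-squared statistic: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Null hypothesis: a fair die shows a five or six with probability 1/3.
# Hypothetical 600 throws: 220 fives-or-sixes observed vs 200 expected.
obs = [220, 380]                 # (five-or-six, other)
exp = [600 / 3, 600 * 2 / 3]     # (200, 400)
stat = chi_squared(obs, exp)
print(stat, "reject fairness" if stat > 3.84 else "consistent with fairness")
```

For these invented counts the statistic is 3.0, below the threshold, so the fairness hypothesis is not rejected.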
1904: Karl Pearson develops the concept of "contingency" in order to determine whether outcomes are independent of a given categorical factor. Here the null hypothesis is by default that two things are unrelated (e.g. scar formation and death rates from smallpox). The null hypothesis in this case is no longer predicted by theory or conventional wisdom, but is instead the principle of indifference that led Fisher and others to dismiss the use of "inverse probabilities". An example of Neyman–Pearson hypothesis testing can be made by a change to the radioactive suitcase example. If the "suitcase" is actually a shielded container for the transportation of radioactive material, then a test might be used to select among three hypotheses: no radioactive source present, one present, two (all) present. The test could be required for safety, with actions required in each case. The Neyman–Pearson lemma of hypothesis testing says that a good criterion for the selection of hypotheses is the ratio of their probabilities (a likelihood ratio). A simple method of solution is to select the hypothesis with the highest probability for the Geiger counts observed. The typical result matches intuition: few counts imply no source, many counts imply two sources, and intermediate counts imply one source. Note also that there are usually problems with proving a negative. Null hypotheses should be at least falsifiable. Neyman–Pearson theory can accommodate both prior probabilities and the costs of actions resulting from decisions. The former allows each test to consider the results of earlier tests (unlike Fisher's significance tests). The latter allows the consideration of economic issues (for example) as well as probabilities. A likelihood ratio remains a good criterion for selecting among hypotheses. The two forms of hypothesis testing are based on different problem formulations. The original test is analogous to a true/false question; the Neyman–Pearson test is more like multiple choice. 
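The "select the hypothesis with the highest probability" rule for the shielded-container example can be sketched directly. The mean count rates for the three hypotheses are assumed values chosen purely for illustration:

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam ** k / factorial(k)

# Hypothetical mean Geiger-count rates under each of the three hypotheses:
rates = {"no source": 1.0, "one source": 10.0, "two sources": 20.0}

def select_hypothesis(observed_counts):
    """Pick the hypothesis that assigns the highest probability to the
    observed count, in the spirit of likelihood-based selection."""
    return max(rates, key=lambda h: poisson_pmf(rates[h], observed_counts))

for counts in (2, 8, 25):
    print(counts, "->", select_hypothesis(counts))
```

As the text says, the result matches intuition: few counts select "no source", intermediate counts "one source", many counts "two sources".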
In the view of Tukey the former produces a conclusion on the basis of only strong evidence while the latter produces a decision on the basis of available evidence. While the two tests seem quite different both mathematically and philosophically, later developments led to the opposite claim. Consider many tiny radioactive sources. The hypotheses become 0,1,2,3... grains of radioactive sand. There is little distinction between none or some radiation (Fisher) and 0 grains of radioactive sand versus all of the alternatives (Neyman–Pearson). The major Neyman–Pearson paper of 1933 also considered composite hypotheses (ones whose distribution includes an unknown parameter). An example proved the optimality of the (Student's) "t"-test, "there can be no better test for the hypothesis under consideration" (p 321). Neyman–Pearson theory was proving the optimality of Fisherian methods from its inception. Fisher's significance testing has proven a popular flexible statistical tool in application with little mathematical growth potential. Neyman–Pearson hypothesis testing is claimed as a pillar of mathematical statistics, creating a new paradigm for the field. It also stimulated new applications in statistical process control, detection theory, decision theory and game theory. Both formulations have been successful, but the successes have been of a different character. The dispute over formulations is unresolved. Science primarily uses Fisher's (slightly modified) formulation as taught in introductory statistics. Statisticians study Neyman–Pearson theory in graduate school. Mathematicians are proud of uniting the formulations. Philosophers consider them separately. Learned opinions deem the formulations variously competitive (Fisher vs Neyman), incompatible or complementary. The dispute has become more complex since Bayesian inference has achieved respectability. The terminology is inconsistent.
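For a single sample, the "t"-test whose optimality the 1933 paper proved reduces to a simple statistic. A minimal sketch, with invented data:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """Student's t statistic for H0: the population mean equals mu0,
    with the population variance unknown."""
    n = len(sample)
    mean = statistics.fmean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 divisor)
    return (mean - mu0) / (s / math.sqrt(n))

data = [5.1, 4.8, 5.4, 5.0, 5.3, 4.9]   # illustrative measurements
print(round(one_sample_t(data, mu0=5.0), 2))  # 0.88
```

The observed t is then referred to the Student's t distribution on n − 1 degrees of freedom; the optimality result says no other test of the same size has greater power for this composite hypothesis.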
Hypothesis testing can mean any mixture of two formulations that both changed with time. Any discussion of significance testing vs hypothesis testing is doubly vulnerable to confusion. Fisher thought that hypothesis testing was a useful strategy for performing industrial quality control; however, he strongly disagreed that hypothesis testing could be useful for scientists. Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct. They usually (but "not always") produce the same mathematical answer. The preferred answer is context dependent. While the existing merger of Fisher and Neyman–Pearson theories has been heavily criticized, modifying the merger to achieve Bayesian goals has been considered. Criticism of statistical hypothesis testing fills volumes citing 300–400 primary references. Much of the criticism can be summarized by the following issues: Critics and supporters are largely in factual agreement regarding the characteristics of null hypothesis significance testing (NHST): While it can provide critical information, it is "inadequate as the sole tool for statistical analysis". "Successfully rejecting the null hypothesis may offer no support for the research hypothesis." The continuing controversy concerns the selection of the best statistical practices for the near-term future given the (often poor) existing practices. Critics would prefer to ban NHST completely, forcing a complete departure from those practices, while supporters suggest a less absolute change. Controversy over significance testing, and its effects on publication bias in particular, has produced several results.
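The link between power, significance level and sample size can be illustrated with the standard normal-approximation formula for detecting a mean shift with a two-sided z-test; this is a textbook sketch, not a calculation from the text.

```python
import math
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Smallest n for a two-sided z-test to detect a mean shift delta
    at significance level alpha with the requested power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

print(sample_size(delta=0.5, sigma=1.0))              # 32
print(sample_size(delta=0.5, sigma=1.0, power=0.90))  # 43
```

Raising the power or lowering alpha both increase the required n, which is the trade-off that makes power central to sample size determination.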
The American Psychological Association has strengthened its statistical reporting requirements after review, medical journal publishers have recognized the obligation to publish some results that are not statistically significant to combat publication bias and a journal ("Journal of Articles in Support of the Null Hypothesis") has been created to publish such results exclusively. Textbooks have added some cautions and increased coverage of the tools necessary to estimate the size of the sample required to produce significant results. Major organizations have not abandoned use of significance tests although some have discussed doing so. A unifying position of critics is that statistics should not lead to an accept-reject conclusion or decision, but to an estimated value with an interval estimate; this data-analysis philosophy is broadly referred to as estimation statistics. Estimation statistics can be accomplished with either frequentist or Bayesian methods. One strong critic of significance testing suggested a list of reporting alternatives: effect sizes for importance, prediction intervals for confidence, replications and extensions for replicability, meta-analyses for generality. None of these suggested alternatives produces a conclusion/decision. Lehmann said that hypothesis testing theory can be presented in terms of conclusions/decisions, probabilities, or confidence intervals. "The distinction between the ... approaches is largely one of reporting and interpretation." On one "alternative" there is no disagreement: Fisher himself said, "In relation to the test of significance, we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result." Cohen, an influential critic of significance testing, concurred, "... don't look for a magic alternative to NHST "[null hypothesis significance testing]" ... It doesn't exist." "... 
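The estimation-statistics position described above replaces an accept/reject verdict with an effect size and an interval estimate. A minimal sketch using two invented samples (Cohen's d with a pooled standard deviation, and a normal-approximation confidence interval):

```python
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference between two samples (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.fmean(a) - statistics.fmean(b)) / math.sqrt(pooled_var)

def mean_ci(sample, z=1.96):
    """Approximate 95% confidence interval for a sample mean."""
    m = statistics.fmean(sample)
    half = z * statistics.stdev(sample) / math.sqrt(len(sample))
    return (m - half, m + half)

treatment = [5.1, 4.9, 5.3, 5.2, 4.8]   # invented data
control = [4.6, 4.7, 4.5, 4.9, 4.4]
print(round(cohens_d(treatment, control), 2))  # 2.2
print(mean_ci(treatment))
```

Reporting the effect size and interval conveys magnitude and uncertainty directly, which is what critics mean by an "estimated value with an interval estimate" rather than a binary decision.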
given the problems of statistical induction, we must finally rely, as have the older sciences, on replication." The "alternative" to significance testing is repeated testing. The easiest way to decrease statistical uncertainty is by obtaining more data, whether by increased sample size or by repeated tests. Nickerson claimed to have never seen the publication of a literally replicated experiment in psychology. An indirect approach to replication is meta-analysis. Bayesian inference is one proposed alternative to significance testing. (Nickerson cited 10 sources suggesting it, including Rozeboom (1960)). For example, Bayesian parameter estimation can provide rich information about the data from which researchers can draw inferences, while using uncertain priors that exert only minimal influence on the results when enough data is available. Psychologist John K. Kruschke has suggested Bayesian estimation as an alternative for the "t"-test. Alternatively, two competing models/hypotheses can be compared using Bayes factors. Bayesian methods could be criticized for requiring information that is seldom available in the cases where significance testing is most heavily used. Neither the prior probabilities nor the probability distribution of the test statistic under the alternative hypothesis are often available in the social sciences. Advocates of a Bayesian approach sometimes claim that the goal of a researcher is most often to objectively assess the probability that a hypothesis is true based on the data they have collected. Neither Fisher's significance testing, nor Neyman–Pearson hypothesis testing can provide this information, and do not claim to. The probability a hypothesis is true can only be derived from use of Bayes' Theorem, which was unsatisfactory to both the Fisher and Neyman–Pearson camps due to the explicit use of subjectivity in the form of the prior probability.
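Bayesian parameter estimation is a broad framework, but even the simplest conjugate case, a normal mean with known sigma, shows the behaviour described above: with a wide prior and enough data, the prior's influence on the posterior becomes negligible. A sketch with invented numbers, not any particular published analysis:

```python
import math
import statistics

def posterior_mean_sd(sample, sigma, prior_mean=0.0, prior_sd=10.0):
    """Posterior for a normal mean with known sigma and a normal prior.
    A large prior_sd makes the prior nearly uninformative."""
    n = len(sample)
    prior_prec = 1.0 / prior_sd**2        # precision = 1 / variance
    data_prec = n / sigma**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean +
                            data_prec * statistics.fmean(sample))
    return post_mean, math.sqrt(post_var)

m, s = posterior_mean_sd([5.2, 4.8, 5.1, 4.9, 5.0], sigma=0.2)
print(round(m, 2), round(s, 2))  # 5.0 0.09
```

Unlike a p-value, the posterior directly quantifies beliefs about the parameter, which is the kind of information the Bayesian alternatives in the text aim to supply.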
Fisher's strategy is to sidestep this with the "p"-value (an objective "index" based on the data alone) followed by "inductive inference", while Neyman–Pearson devised their approach of "inductive behaviour". Hypothesis testing and philosophy intersect. Inferential statistics, which includes hypothesis testing, is applied probability. Both probability and its application are intertwined with philosophy. Philosopher David Hume wrote, "All knowledge degenerates into probability." Competing practical definitions of probability reflect philosophical differences. The most common application of hypothesis testing is in the scientific interpretation of experimental data, which is naturally studied by the philosophy of science. Fisher and Neyman opposed the subjectivity of probability. Their views contributed to the objective definitions. The core of their historical disagreement was philosophical. Many of the philosophical criticisms of hypothesis testing are discussed by statisticians in other contexts, particularly correlation does not imply causation and the design of experiments. Hypothesis testing is of continuing interest to philosophers. Statistics is increasingly being taught in schools with hypothesis testing being one of the elements taught. Many conclusions reported in the popular press (political opinion polls to medical studies) are based on statistics. Some writers have stated that statistical analysis of this kind allows for thinking clearly about problems involving mass data, as well as the effective reporting of trends and inferences from said data, but caution that writers for a broad public should have a solid understanding of the field in order to use the terms and concepts correctly. An introductory college statistics class places much emphasis on hypothesis testing – perhaps half of the course. Such fields as literature and divinity now include findings based on statistical analysis (see the Bible Analyzer). 
An introductory statistics class teaches hypothesis testing as a cookbook process. Hypothesis testing is also taught at the postgraduate level. Statisticians learn how to create good statistical test procedures (like "z", Student's "t", "F" and chi-squared). Statistical hypothesis testing is considered a mature area within statistics, but a limited amount of development continues. An academic study states that the cookbook method of teaching introductory statistics leaves no time for history, philosophy or controversy. Hypothesis testing has been taught as a received unified method. Surveys showed that graduates of the class were filled with philosophical misconceptions (on all aspects of statistical inference) that persisted among instructors. While the problem was addressed more than a decade ago, and calls for educational reform continue, students still graduate from statistics classes holding fundamental misconceptions about hypothesis testing. Ideas for improving the teaching of hypothesis testing include encouraging students to search for statistical errors in published papers, teaching the history of statistics and emphasizing the controversy in a generally dry subject.
https://en.wikipedia.org/wiki?curid=30284
The Hobbit The Hobbit, or There and Back Again is a children's fantasy novel by English author J. R. R. Tolkien. It was published on 21 September 1937 to wide critical acclaim, being nominated for the Carnegie Medal and awarded a prize from the "New York Herald Tribune" for best juvenile fiction. The book remains popular and is recognized as a classic in children's literature. "The Hobbit" is set within Tolkien's fictional universe and follows the quest of home-loving Bilbo Baggins, the titular hobbit, to win a share of the treasure guarded by Smaug the dragon. Bilbo's journey takes him from light-hearted, rural surroundings into more sinister territory. The story is told in the form of an episodic quest, and most chapters introduce a specific creature or type of creature of Tolkien's geography. Bilbo gains a new level of maturity, competence, and wisdom by accepting the disreputable, romantic, fey, and adventurous sides of his nature and applying his wits and common sense. The story reaches its climax in the Battle of Five Armies, where many of the characters and creatures from earlier chapters re-emerge to engage in conflict. Personal growth and forms of heroism are central themes of the story, along with motifs of warfare. These themes have led critics to view Tolkien's own experiences during World War I as instrumental in shaping the story. The author's scholarly knowledge of Germanic philology and interest in mythology and fairy tales are often noted as influences. The publisher was encouraged by the book's critical and financial success and, therefore, requested a sequel. As Tolkien's work progressed on the successor "The Lord of the Rings", he made retrospective accommodations for it in "The Hobbit". These few but significant changes were integrated into the second edition. Further editions followed with minor emendations, including those reflecting Tolkien's changing concept of the world into which Bilbo stumbled. The work has never been out of print. 
Its ongoing legacy encompasses many adaptations for stage, screen, radio, board games, and video games. Several of these adaptations have received critical recognition on their own merits. Bilbo Baggins, the titular protagonist, is a respectable, reserved hobbit—a race resembling very short humans with furry feet who live in underground houses and are mainly pastoral farmers and gardeners. During his adventure, Bilbo often refers to the contents of his larder at home and wishes he had more food. Until he finds a magic ring, he is more baggage than help. Gandalf, an itinerant wizard, introduces Bilbo to a company of thirteen dwarves. During the journey the wizard disappears on side errands dimly hinted at, only to appear again at key moments in the story. Thorin Oakenshield, the proud, pompous head of the company of dwarves and heir to the destroyed dwarvish kingdom under the Lonely Mountain, makes many mistakes in his leadership, relying on Gandalf and Bilbo to get him out of trouble, but proves himself a mighty warrior. Smaug is a dragon who long ago pillaged the dwarvish kingdom of Thorin's grandfather and sleeps upon the vast treasure. The plot involves a host of other characters of varying importance, such as the twelve other dwarves of the company; two types of elves: both puckish and more serious warrior types; Men; man-eating trolls; boulder-throwing giants; evil cave-dwelling goblins; forest-dwelling giant spiders who can speak; immense and heroic eagles who also speak; evil wolves, or Wargs, who are allied with the goblins; Elrond the sage; Gollum, a strange creature inhabiting an underground lake; Beorn, a man who can assume bear form; and Bard the Bowman, a grim but honourable archer of Lake-town. Gandalf tricks Bilbo Baggins into hosting a party for Thorin Oakenshield and his band of dwarves, who sing of reclaiming the Lonely Mountain and its vast treasure from the dragon Smaug. 
When the music ends, Gandalf unveils Thrór's map showing a secret door into the Mountain and proposes that the dumbfounded Bilbo serve as the expedition's "burglar". The dwarves ridicule the idea, but Bilbo, indignant, joins despite himself. The group travels into the wild, where Gandalf saves the company from trolls and leads them to Rivendell, where Elrond reveals more secrets from the map. When they attempt to cross the Misty Mountains they are caught by goblins and driven deep underground. Although Gandalf rescues them, Bilbo gets separated from the others as they flee the goblins. Lost in the goblin tunnels, he stumbles across a mysterious ring and then encounters Gollum, who engages him in a game of riddles. As a reward for solving all riddles Gollum will show him the path out of the tunnels, but if Bilbo fails, his life will be forfeit. With the help of the ring, which confers invisibility, Bilbo escapes and rejoins the dwarves, improving his reputation with them. The goblins and Wargs give chase, but the company are saved by eagles before resting in the house of Beorn. The company enters the black forest of Mirkwood without Gandalf. In Mirkwood, Bilbo first saves the dwarves from giant spiders and then from the dungeons of the Wood-elves. Nearing the Lonely Mountain, the travellers are welcomed by the human inhabitants of Lake-town, who hope the dwarves will fulfil prophecies of Smaug's demise. The expedition travels to the Lonely Mountain and finds the secret door; Bilbo scouts the dragon's lair, stealing a great cup and espying a gap in Smaug's armour. The enraged dragon, deducing that Lake-town has aided the intruder, sets out to destroy the town. A thrush had overheard Bilbo's report of Smaug's vulnerability and reports it to Lake-town defender Bard. Bard's arrow finds the hollow spot and kills the dragon. When the dwarves take possession of the mountain, Bilbo finds the Arkenstone, an heirloom of Thorin's family, and hides it away. 
The Wood-elves and Lake-men besiege the mountain and request compensation for their aid, reparations for Lake-town's destruction, and settlement of old claims on the treasure. Thorin refuses and, having summoned his kin from the Iron Hills, reinforces his position. Bilbo tries to ransom the Arkenstone to head off a war, but Thorin is only enraged at the betrayal. He banishes Bilbo, and battle seems inevitable. Gandalf reappears to warn all of an approaching army of goblins and Wargs. The dwarves, men and elves band together, but only with the timely arrival of the eagles and Beorn do they win the climactic Battle of Five Armies. Thorin is fatally wounded and reconciles with Bilbo before he dies. Bilbo accepts only a small portion of his share of the treasure, having no want or need for more, but still returns home a very wealthy hobbit roughly a year and a month after he first left. In the early 1930s Tolkien was pursuing an academic career at Oxford as Rawlinson and Bosworth Professor of Anglo-Saxon, with a fellowship at Pembroke College. Several of his poems had been published in magazines and small collections, including "Goblin Feet" and "The Cat and the Fiddle: A Nursery Rhyme Undone and its Scandalous Secret Unlocked", a reworking of the nursery rhyme "Hey Diddle Diddle". His creative endeavours at this time also included letters from Father Christmas to his children—illustrated manuscripts that featured warring gnomes and goblins, and a helpful polar bear—alongside the creation of elven languages and an attendant mythology, including the Book of Lost Tales, which he had been creating since 1917. These works all saw posthumous publication. In a 1955 letter to W. H. Auden, Tolkien recollects that he began work on "The Hobbit" one day early in the 1930s, when he was marking School Certificate papers. He found a blank page. Suddenly inspired, he wrote the words, "In a hole in the ground there lived a hobbit." 
By late 1932 he had finished the story and then lent the manuscript to several friends, including C. S. Lewis and a student of Tolkien's named Elaine Griffiths. In 1936, when Griffiths was visited in Oxford by Susan Dagnall, a staff member of the publisher George Allen & Unwin, she is reported to have either lent Dagnall the book or suggested she borrow it from Tolkien. In any event, Dagnall was impressed by it, and showed the book to Stanley Unwin, who then asked his 10-year-old son Rayner to review it. Rayner's favourable comments settled Allen & Unwin's decision to publish Tolkien's book. The setting of "The Hobbit", as described on its original dust jacket, is ""ancient time between the age of Faerie and the dominion of men"" in an unnamed fantasy world. The world is shown on the endpaper map with "Western Lands" to the west and "Wilderland" to the east. Originally this world was self-contained, but as Tolkien began work on "The Lord of the Rings", he decided these stories could fit into the legendarium he had been working on privately for decades. "The Hobbit" and "The Lord of the Rings" became the end of the "Third Age" of Middle-earth within Arda. Eventually those tales of the earlier periods became published as "The Silmarillion" and other posthumous works. One of the greatest influences on Tolkien was the 19th-century Arts and Crafts polymath William Morris. Tolkien wished to imitate Morris's prose and poetry romances, following the general style and approach of the work. The Desolation of Smaug, portraying dragons as detrimental to landscape, has been noted as an explicit motif borrowed from Morris. Tolkien wrote also of being impressed as a boy by Samuel Rutherford Crockett's historical novel "The Black Douglas" and of basing the Necromancer—Sauron—on its villain, Gilles de Retz.
Incidents in both "The Hobbit" and "Lord of the Rings" are similar in narrative and style to the novel, and its overall style and imagery have been suggested as having had an influence on Tolkien. Tolkien's portrayal of goblins in "The Hobbit" was particularly influenced by George MacDonald's "The Princess and the Goblin". However, MacDonald influenced Tolkien more profoundly than just to shape individual characters and episodes; his works further helped Tolkien form his whole thinking on the role of fantasy within his Christian faith. Tolkien scholar Mark T. Hooker has catalogued a lengthy series of parallels between "The Hobbit" and Jules Verne's "Journey to the Center of the Earth". These include, among other things, a hidden runic message and a celestial alignment that direct the adventurers to the goals of their quests. Tolkien's works show much influence from Norse mythology, reflecting his lifelong passion for those stories and his academic interest in Germanic philology. "The Hobbit" is no exception to this; the work shows influences from northern European literature, myths and languages, especially from the "Poetic Edda" and the "Prose Edda". Examples include the names of characters, such as Fili, Kili, Oin, Gloin, Bifur, Bofur, Bombur, Dori, Nori, Dwalin, Balin, Dain, Nain, Thorin Oakenshield and Gandalf (deriving from the Old Norse names "Fíli", "Kíli", "Oin", "Glói", "Bivör", "Bávörr", "Bömburr", "Dori", "Nóri", "Dvalinn", "Bláin", "Dain", "Nain", "Þorin Eikinskialdi" and "Gandálfr"). But while their names are from Old Norse, the characters of the dwarves are more directly taken from fairy tales such as "Snow White" and "Snow-White and Rose-Red" as collected by the Brothers Grimm. The latter tale may also have influenced the character of Beorn. Tolkien's use of descriptive names such as "Misty Mountains" and "Bag End" echoes the names used in Old Norse sagas. 
The names of the dwarf-friendly ravens, such as Roäc, are derived from Old Norse words for "raven" and "rook", but their peaceful characters are unlike the typical carrion birds from Old Norse and Old English literature. Tolkien is not simply skimming historical sources for effect: the juxtaposition of old and new styles of expression is seen by Shippey as one of the major themes explored in "The Hobbit". Maps figure in both saga literature and "The Hobbit". Several of the author's illustrations incorporate Anglo-Saxon runes, an English adaptation of the Germanic runic alphabets. Themes from Old English literature, and specifically from "Beowulf", shape the ancient world Bilbo stepped into. Tolkien, a scholar of "Beowulf", counted the epic among his "most valued sources" for "The Hobbit". Tolkien was one of the first critics to treat "Beowulf" as a literary work with value beyond the merely historical, and his 1936 lecture "Beowulf: The Monsters and the Critics" is still required in some Old English courses. Tolkien borrowed several elements from "Beowulf", including a monstrous, intelligent dragon. Certain descriptions in "The Hobbit" seem to have been lifted straight out of "Beowulf" with some minor rewording, such as when the dragon stretches its neck out to sniff for intruders. Likewise, Tolkien's descriptions of the lair as accessed through a secret passage mirror those in "Beowulf". Other specific plot elements and features in "The Hobbit" that show similarities to "Beowulf" include the title "thief", as Bilbo is called by Gollum and later by Smaug, and Smaug's personality, which leads to the destruction of Lake-town. Tolkien refines parts of the "Beowulf" plot that he appears to have found less than satisfactorily described, such as details about the cup-thief and the dragon's intellect and personality. Another influence from Old English sources is the appearance of named blades of renown, adorned in runes. In using his elf-blade Bilbo finally takes his first independent heroic action.
By his naming the blade "Sting" we see Bilbo's acceptance of the kinds of cultural and linguistic practices found in "Beowulf", signifying his entrance into the ancient world in which he found himself. This progression culminates in Bilbo stealing a cup from the dragon's hoard, rousing him to wrath—an incident directly mirroring "Beowulf" and an action entirely determined by traditional narrative patterns. As Tolkien wrote, "The episode of the theft arose naturally (and almost inevitably) from the circumstances. It is difficult to think of any other way of conducting the story at this point. I fancy the author of Beowulf would say much the same." The name of the wizard Radagast is widely recognized to be taken from the name of the Slavic deity Radegast. The representation of the dwarves in "The Hobbit" was influenced by Tolkien's own selective reading of medieval texts regarding the Jewish people and their history. The dwarves' characteristics of being dispossessed of their ancient homeland at the Lonely Mountain, and living among other groups whilst retaining their own culture are all derived from the medieval image of Jews, whilst their warlike nature stems from accounts in the Hebrew Bible. The Dwarvish calendar invented for "The Hobbit" reflects the Jewish calendar in beginning in late autumn. And although Tolkien denied allegory, the dwarves taking Bilbo out of his complacent existence has been seen as an eloquent metaphor for the "impoverishment of Western society without Jews." George Allen & Unwin Ltd. of London published the first edition of "The Hobbit" on 21 September 1937 with a print run of 1,500 copies, which sold out by December because of enthusiastic reviews. This first printing was illustrated in black and white by Tolkien, who designed the dust jacket as well. Houghton Mifflin of Boston and New York reset type for an American edition, to be released early in 1938, in which four of the illustrations would be colour plates.
Allen & Unwin decided to incorporate the colour illustrations into their second printing, released at the end of 1937. Despite the book's popularity, paper rationing, brought on by World War II and lasting until 1949, meant that the Allen & Unwin edition of the book was often unavailable during this period. Subsequent editions in English were published in 1951, 1966, 1978 and 1995. Numerous English-language editions of "The Hobbit" have been produced by several publishers. In addition, "The Hobbit" has been translated into over sixty languages, with more than one published version for some languages. In December 1937 "The Hobbit" publisher, Stanley Unwin, asked Tolkien for a sequel. In response Tolkien provided drafts for "The Silmarillion", but the editors rejected them, believing that the public wanted "more about hobbits". Tolkien subsequently began work on "The New Hobbit", which would eventually become "The Lord of the Rings", a course that would not only change the context of the original story, but lead to substantial changes to the character of Gollum. In the first edition of "The Hobbit", Gollum willingly bets his magic ring on the outcome of the riddle-game, and he and Bilbo part amicably. In the second edition edits, to reflect the new concept of the One Ring and its corrupting abilities, Tolkien made Gollum more aggressive towards Bilbo and distraught at losing the ring. The encounter ends with Gollum's curse, "Thief! Thief, Thief, Baggins! We hates it, we hates it, we hates it forever!" This presages Gollum's portrayal in "The Lord of the Rings". Tolkien sent this revised version of the chapter "Riddles in the Dark" to Unwin as an example of the kinds of changes needed to bring the book into conformity with "The Lord of the Rings", but he heard nothing back for years. When he was sent galley proofs of a new edition, Tolkien was surprised to find the sample text had been incorporated.
In "The Lord of the Rings", the original version of the riddle game is explained as a "lie" made up by Bilbo under the harmful influence of the Ring, whereas the revised version contains the "true" account. The revised text became the second edition, published in 1951 in both the UK and the US. Tolkien began a new version in 1960, attempting to adjust the tone of "The Hobbit" to its sequel. He abandoned the new revision at chapter three after he received criticism that it "just wasn't "The Hobbit"", implying it had lost much of its light-hearted tone and quick pace. After an unauthorized paperback edition of "The Lord of the Rings" appeared from Ace Books in 1965, Houghton Mifflin and Ballantine asked Tolkien to refresh the text of "The Hobbit" to renew the US copyright. This text became the 1966 third edition. Tolkien took the opportunity to align the narrative even more closely to "The Lord of the Rings" and to cosmological developments from his still unpublished "Quenta Silmarillion" as it stood at that time. These small edits included, for example, changing the phrase "elves that are now called Gnomes" from the first and second editions, on page 63, to "High Elves of the West, my kin" in the third edition. Tolkien had used "gnome" in his earlier writing to refer to the second kindred of the High Elves—the Noldor (or "Deep Elves")—thinking "gnome", derived from the Greek "gnosis" (knowledge), was a good name for the wisest of the elves. However, because of its common denotation of a garden gnome, derived from the 16th-century Paracelsus, Tolkien abandoned the term. He also changed "tomatoes" to "pickles" but retained other anachronisms, such as clocks and tobacco. In "The Lord of the Rings", he has Merry explain that tobacco had been brought from the West by the Númenóreans. Since the author's death, two editions of "The Hobbit" have been published with commentary on the creation, emendation and development of the text.
In "The Annotated Hobbit", Douglas Anderson provides the text of the published book alongside commentary and illustrations. Later editions added the text of "The Quest of Erebor". Anderson's commentary makes note of the sources Tolkien brought together in preparing the text, and chronicles the changes Tolkien made to the published editions. The text is also accompanied by illustrations from foreign language editions, among them work by Tove Jansson. The edition also presents a number of little-known texts such as the 1923 version of Tolkien's poem "Iumonna Gold Galdre Bewunden". With "The History of The Hobbit", published in two parts in 2007, John D. Rateliff provides the full text of the earliest and intermediary drafts of the book, alongside commentary that shows relationships to Tolkien's scholarly and creative works, both contemporary and later. Rateliff provides the abandoned 1960s retelling and previously unpublished illustrations by Tolkien. The book separates commentary from Tolkien's text, allowing the reader to read the original drafts as self-contained stories. Tolkien's correspondence and publisher's records show that he was involved in the design and illustration of the entire book. All elements were the subject of considerable correspondence and fussing over by Tolkien. Rayner Unwin, in his publishing memoir, comments: "In 1937 alone Tolkien wrote 26 letters to George Allen & Unwin... detailed, fluent, often pungent, but infinitely polite and exasperatingly precise... I doubt any author today, however famous, would get such scrupulous attention." Even the maps, of which Tolkien originally proposed five, were considered and debated. He wished "Thror's Map" to be tipped in (that is, glued in after the book has been bound) at first mention in the text, and with the "moon letter" Cirth on the reverse so they could be seen when held up to the light. 
In the end the cost, as well as the shading of the maps, which would be difficult to reproduce, resulted in the final design of two maps as endpapers, "Thror's map", and the "Map of Wilderland" (see Rhovanion), both printed in black and red on the paper's cream background. Originally Allen & Unwin planned to illustrate the book only with the endpaper maps, but Tolkien's first tendered sketches so charmed the publisher's staff that they opted to include them without raising the book's price despite the extra cost. Thus encouraged, Tolkien supplied a second batch of illustrations. The publisher accepted all of these as well, giving the first edition ten black-and-white illustrations plus the two endpaper maps. The illustrated scenes were: "The Hill: Hobbiton-across-the-Water", "The Trolls", "The Mountain Path", "The Misty Mountains looking West from the Eyrie towards Goblin Gate", "Beorn's Hall", "Mirkwood", "The Elvenking's Gate", "Lake Town", "The Front Gate", and "The Hall at Bag-End". All but one of the illustrations were a full page, and one, the Mirkwood illustration, required a separate plate. Satisfied with Tolkien's skills, the publishers asked him to design a dust jacket. This project, too, became the subject of many iterations and much correspondence, with Tolkien always writing disparagingly of his own ability to draw. The runic inscription around the edges of the illustration is a phonetic transliteration of English, giving the title of the book and details of the author and publisher. The original jacket design contained several shades of various colours, but Tolkien redrew it several times using fewer colours each time. His final design consisted of four colours. The publishers, mindful of the cost, removed the red from the sun to end up with only black, blue, and green ink on white stock. The publisher's production staff designed a binding, but Tolkien objected to several elements. 
Through several iterations, the final design ended up as mostly the author's. The spine shows runes: two "þ" (Thráin and Thrór) runes and one "d" (door). The front and back covers were mirror images of each other, with an elongated dragon characteristic of Tolkien's style stamped along the lower edge, and with a sketch of the Misty Mountains stamped along the upper edge. Once illustrations were approved for the book, Tolkien proposed colour plates as well. The publisher would not relent on this, so Tolkien pinned his hopes on the American edition to be published about six months later. Houghton Mifflin rewarded these hopes with the replacement of the frontispiece ("The Hill: Hobbiton-across-the Water") in colour and the addition of new colour plates: "Rivendell", "Bilbo Woke Up with the Early Sun in His Eyes", "Bilbo comes to the Huts of the Raft-elves" and "Conversation with Smaug", which features a dwarvish curse written in Tolkien's invented script Tengwar, and signed with two "þ" ("Th") runes. The additional illustrations proved so appealing that George Allen & Unwin adopted the colour plates as well for their second printing, with the exception of "Bilbo Woke Up with the Early Sun in His Eyes". Different editions have been illustrated in diverse ways. Many follow the original scheme at least loosely, but many others are illustrated by other artists, especially the many translated editions. Some cheaper editions, particularly paperback, are not illustrated except with the maps. "The Children's Book Club" edition of 1942 includes the black-and-white pictures but no maps, an anomaly. Tolkien's use of runes, both as decorative devices and as magical signs within the story, has been cited as a major cause for the popularization of runes within "New Age" and esoteric literature, stemming from Tolkien's popularity with the elements of counter-culture in the 1970s. 
"The Hobbit" takes cues from narrative models of children's literature, as shown by its omniscient narrator and characters that young children can relate to, such as the small, food-obsessed, and morally ambiguous Bilbo. The text emphasizes the relationship between time and narrative progress and it openly distinguishes "safe" from "dangerous" in its geography. Both are key elements of works intended for children, as is the "home-away-home" (or "there and back again") plot structure typical of the Bildungsroman. While Tolkien later claimed to dislike the aspect of the narrative voice addressing the reader directly, the narrative voice contributes significantly to the success of the novel. Emer O'Sullivan, in her "Comparative Children's Literature", notes "The Hobbit" as one of a handful of children's books that have been accepted into mainstream literature, alongside Jostein Gaarder's "Sophie's World" (1991) and J. K. Rowling's "Harry Potter" series (1997–2007). Tolkien intended "The Hobbit" as a "fairy-story" and wrote it in a tone suited to addressing children although he said later that the book was not specifically written for children but had rather been created out of his interest in mythology and legend. Many of the initial reviews refer to the work as a fairy story. However, according to Jack Zipes writing in "The Oxford Companion to Fairy Tales", Bilbo is an atypical character for a fairy tale. The work is much longer than Tolkien's ideal proposed in his essay "On Fairy-Stories". Many fairy tale motifs, such as the repetition of similar events seen in the dwarves' arrival at Bilbo's and Beorn's homes, and folklore themes, such as trolls turning to stone, are to be found in the story. The book is popularly called (and often marketed as) a fantasy novel, but like "Peter Pan and Wendy" by J. M. 
Barrie and "The Princess and the Goblin" by George MacDonald, both of which influenced Tolkien and contain fantasy elements, it is primarily identified as being children's literature. The two genres are not mutually exclusive, so some definitions of high fantasy include works for children by authors such as L. Frank Baum and Lloyd Alexander alongside the works of Gene Wolfe and Jonathan Swift, which are more often considered adult literature. "The Hobbit" has been called "the most popular of all twentieth-century fantasies written for children". Jane Chance, however, considers the book to be a children's novel only in the sense that it appeals to the child in an adult reader. Sullivan credits the first publication of "The Hobbit" as an important step in the development of high fantasy, and further credits the 1960s paperback debuts of "The Hobbit" and "The Lord of the Rings" as essential to the creation of a mass market for fiction of this kind as well as the fantasy genre's current status. Tolkien's prose is unpretentious and straightforward, taking as given the existence of his imaginary world and describing its details in a matter-of-fact way, while often introducing the new and fantastic in an almost casual manner. This down-to-earth style, also found in later fantasy such as Richard Adams' "Watership Down" and Peter Beagle's "The Last Unicorn", accepts readers into the fictional world, rather than cajoling or attempting to convince them of its reality. While "The Hobbit" is written in a simple, friendly language, each of its characters has a unique voice. The narrator, who occasionally interrupts the narrative flow with asides (a device common to both children's and Anglo-Saxon literature), has his own linguistic style separate from those of the main characters. The basic form of the story is that of a quest, told in episodes. 
For most of the book, each chapter introduces a different denizen of the Wilderland, some helpful and friendly towards the protagonists, and others threatening or dangerous. However, the general tone is kept light-hearted, interspersed with songs and humour. One example of the use of song to maintain tone is when Thorin and Company are kidnapped by goblins, who sing as they march them into the underworld. This onomatopoeic singing undercuts the dangerous scene with a sense of humour. Tolkien achieves a balance of humour and danger through other means as well, as seen in the foolishness and Cockney dialect of the trolls and in the drunkenness of the elven captors. The general form—that of a journey into strange lands, told in a light-hearted mood and interspersed with songs—may be following the model of "The Icelandic Journals" by William Morris, an important literary influence on Tolkien. The evolution and maturation of the protagonist, Bilbo Baggins, is central to the story. This journey of maturation, where Bilbo gains a clear sense of identity and confidence in the outside world, may be seen as a Bildungsroman rather than a traditional quest. The Jungian concept of individuation is also reflected through this theme of growing maturity and capability, with the author contrasting Bilbo's personal growth against the arrested development of the dwarves. Thus, while Gandalf exerts a parental influence over Bilbo early on, it is Bilbo who gradually takes over leadership of the party, a fact the dwarves could not bear to acknowledge. The analogue of the "underworld" and the hero returning from it with a boon (such as the ring, or Elvish blades) that benefits his society is seen to fit the mythic archetypes regarding initiation and male coming-of-age as described by Joseph Campbell. 
Chance compares the development and growth of Bilbo against other characters to the concepts of just kingship versus sinful kingship derived from the "Ancrene Wisse" (which Tolkien had written on in 1929) and a Christian understanding of "Beowulf". The overcoming of greed and selfishness has been seen as the central moral of the story. Whilst greed is a recurring theme in the novel, with many of the episodes stemming from one or more of the characters' simple desire for food (be it trolls eating dwarves or dwarves eating Wood-elf fare) or a desire for beautiful objects, such as gold and jewels, it is only by the Arkenstone's influence upon Thorin that greed, and its attendant vices "coveting" and "malignancy", come fully to the fore in the story and provide the moral crux of the tale. Bilbo steals the Arkenstone—a most ancient relic of the dwarves—and attempts to ransom it to Thorin for peace. However, Thorin turns on the Hobbit as a traitor, disregarding all the promises and "at your services" he had previously bestowed. In the end Bilbo gives up the precious stone and most of his share of the treasure to help those in greater need. Tolkien also explores the motif of jewels that inspire intense greed that corrupts those who covet them in the "Silmarillion", and there are connections between the words "Arkenstone" and "Silmaril" in Tolkien's invented etymologies. "The Hobbit" employs themes of animism. An important concept in anthropology and child development, animism is the idea that all things—including inanimate objects and natural events, such as storms or purses, as well as living things like animals and plants—possess human-like intelligence. John D. Rateliff calls this the "Doctor Dolittle Theme" in "The History of the Hobbit", and cites the multitude of talking animals as indicative of this theme. These talking creatures include ravens, a thrush, spiders and the dragon Smaug, alongside the anthropomorphic goblins and elves. 
Patrick Curry notes that animism is also found in Tolkien's other works, and mentions the "roots of mountains" and "feet of trees" in "The Hobbit" as a linguistic shifting in level from the inanimate to animate. Tolkien saw the idea of animism as closely linked to the emergence of human language and myth: "...The first men to talk of 'trees and stars' saw things very differently. To them, the world was alive with mythological beings... To them the whole of creation was 'myth-woven and elf-patterned'." As in plot and setting, Tolkien brings his literary theories to bear in forming characters and their interactions. He portrays Bilbo as a modern anachronism exploring an essentially antique world. Bilbo is able to negotiate and interact within this antique world because language and tradition make connections between the two worlds. For example, Gollum's riddles are taken from old historical sources, while those of Bilbo come from modern nursery books. It is the form of the riddle game, familiar to both, which allows Gollum and Bilbo to engage each other, rather than the content of the riddles themselves. This idea of a superficial contrast between characters' individual linguistic style, tone and sphere of interest, leading to an understanding of the deeper unity between the ancient and modern, is a recurring theme in "The Hobbit". Smaug is the main antagonist. In many ways the Smaug episode reflects and references the dragon of "Beowulf", and Tolkien uses the episode to put into practice some of the ground-breaking literary theories he had developed about the Old English poem in its portrayal of the dragon as having bestial intelligence. Tolkien greatly prefers this motif over the later medieval trend of using the dragon as a symbolic or allegorical figure, such as in the legend of St. George. 
Smaug the dragon with his golden hoard may be seen as an example of the traditional relationship between evil and metallurgy as collated in the depiction of Pandæmonium with its "Belched fire and rolling smoke" in Milton's "Paradise Lost". Of all the characters, Smaug's speech is the most modern, using idioms such as "Don't let your imagination run away with you!" Just as Tolkien's literary theories have been seen to influence the tale, so have Tolkien's experiences. "The Hobbit" may be read as Tolkien's parable of World War I with the hero being plucked from his rural home and thrown into a far-off war where traditional types of heroism are shown to be futile. As such, the tale explores the theme of heroism. As Janet Croft notes, Tolkien's literary reaction to war at this time differed from most post-war writers by eschewing irony as a method for distancing events and instead using mythology to mediate his experiences. Similarities to the works of other writers who faced the Great War are seen in "The Hobbit", including the portrayal of warfare as anti-pastoral: in "The Desolation of Smaug", both the area under the influence of Smaug before his demise and the setting for the Battle of Five Armies later are described as barren, damaged landscapes. "The Hobbit" warns against repeating the tragedies of World War I, and Tolkien's attitude as a veteran may well be summed up by Bilbo's comment: "Victory after all, I suppose! Well, it seems a very gloomy business." On first publication in October 1937, "The Hobbit" was met with almost unanimously favourable reviews from publications both in the UK and the US, including "The Times", "Catholic World" and "New York Post". C. S. 
Lewis, friend of Tolkien (and later author of "The Chronicles of Narnia" between 1949 and 1954), writing in "The Times" reports: The truth is that in this book a number of good things, never before united, have come together: a fund of humour, an understanding of children, and a happy fusion of the scholar's with the poet's grasp of mythology... The professor has the air of inventing nothing. He has studied trolls and dragons at first hand and describes them with that fidelity that is worth oceans of glib "originality." Lewis compares the book to "Alice in Wonderland" in that both children and adults may find different things to enjoy in it, and places it alongside "Flatland", "Phantastes", and "The Wind in the Willows". W. H. Auden, in his review of the sequel "The Fellowship of the Ring" calls "The Hobbit" "one of the best children's stories of this century". Auden was later to correspond with Tolkien, and they became friends. "The Hobbit" was nominated for the Carnegie Medal and awarded a prize from the "New York Herald Tribune" for best juvenile fiction of the year (1938). More recently, the book has been recognized as "Most Important 20th-Century Novel (for Older Readers)" in the "Children's Books of the Century" poll in "Books for Keeps". Publication of the sequel "The Lord of the Rings" altered many critics' reception of the work. Instead of approaching "The Hobbit" as a children's book in its own right, critics such as Randell Helms picked up on the idea of "The Hobbit" as being a "prelude", relegating the story to a dry-run for the later work. Countering a presentist interpretation are those who say this approach misses out on much of the original's value as a children's book and as a work of high fantasy in its own right, and that it disregards the book's influence on these genres. Commentators such as Paul Kocher, John D. Rateliff and C. W. 
Sullivan encourage readers to treat the works separately, both because "The Hobbit" was conceived, published, and received independently of the later work, and to avoid dashing readers' expectations of tone and style. While "The Hobbit" has been adapted and elaborated upon in many ways, its sequel "The Lord of the Rings" is often claimed to be its greatest legacy. The plots share the same basic structure progressing in the same sequence: the stories begin at Bag End, the home of Bilbo Baggins; Bilbo hosts a party that sets the novel's main plot into motion; Gandalf sends the protagonist on a quest eastward; Elrond offers a haven and advice; the adventurers escape dangerous creatures underground (Goblin Town/Moria); they engage another group of elves (Mirkwood/Lothlórien); they traverse a desolate region (Desolation of Smaug/the Dead Marshes); they are received and nourished by a small settlement of men (Esgaroth/Ithilien); they fight in a massive battle (The Battle of Five Armies/Battle of Pelennor Fields); their journey climaxes within an infamous mountain peak (Lonely Mountain/Mount Doom); a descendant of kings is restored to his ancestral throne (Bard/Aragorn); and the questing party returns home to find it in a deteriorated condition (having possessions auctioned off / the Scouring of the Shire). "The Lord of the Rings" contains several more supporting scenes, and has a more sophisticated plot structure, following the paths of multiple characters. Tolkien wrote the later story in much less humorous tones and infused it with more complex moral and philosophical themes. The differences between the two stories can cause difficulties when readers, expecting them to be similar, find that they are not. Many of the thematic and stylistic differences arose because Tolkien wrote "The Hobbit" as a story for children, and "The Lord of the Rings" for the same audience, who had grown up since its publication. 
Further, Tolkien's concept of Middle-earth was to continually change and slowly evolve throughout his life and writings. The style and themes of the book have been seen to help stretch young readers' literacy skills, preparing them to approach the works of Dickens and Shakespeare. By contrast, offering advanced younger readers modern teenage-oriented fiction may not exercise their reading skills, while the material may contain themes more suited to adolescents. As one of several books that have been recommended for 11- to 14-year-old boys to encourage literacy in that demographic, "The Hobbit" is promoted as "the original and still the best fantasy ever written." Several teaching guides and books of study notes have been published to help teachers and students gain the most from the book. "The Hobbit" introduces literary concepts, notably allegory, to young readers, as the work has been seen to have allegorical aspects reflecting the life and times of the author. Meanwhile, the author himself rejected an allegorical reading of his work. This tension can help introduce readers to "readerly" and "writerly" interpretations, to tenets of New Criticism, and critical tools from Freudian analysis, such as sublimation, in approaching literary works. Another approach to critique taken in the classroom has been to propose the insignificance of female characters in the story as sexist. While Bilbo may be seen as a literary symbol of "small folk" of any gender, a gender-conscious approach can help students establish notions of a "socially symbolic text" where meaning is generated by tendentious readings of a given work. By this interpretation, it is ironic that the first authorized adaptation was a stage production in a girls' school. The first authorized adaptation of "The Hobbit" appeared in March 1953, a stage production by St. Margaret's School, Edinburgh. "The Hobbit" has since been adapted for other media many times. 
The first motion picture adaptation of "The Hobbit", a 12-minute film of cartoon stills, was commissioned from Gene Deitch by William L. Snyder in 1966, as related by Deitch himself. This film was publicly screened in New York City. In 1969 (over 30 years after first publication), Tolkien sold the film and merchandising rights to "The Hobbit" to United Artists under an agreement stipulating a lump sum payment of £10,000 plus a 7.5% royalty after costs, payable to Allen & Unwin and the author. In 1976 (three years after the author's death) United Artists sold the rights to the Saul Zaentz Company, which trades as Tolkien Enterprises. Since then all "authorized" adaptations have been signed off by Tolkien Enterprises. In 1997 Tolkien Enterprises licensed the film rights to Miramax, which assigned them in 1998 to New Line Cinema. The heirs of Tolkien, including his son Christopher Tolkien, filed suit against New Line Cinema in February 2008 seeking payment of profits and to be "entitled to cancel... all future rights of New Line... to produce, distribute, and/or exploit future films based upon the Trilogy and/or the Films... and/or... films based on "The Hobbit"." In September 2009, the two sides reached an undisclosed settlement, and the heirs withdrew their legal objection to "The Hobbit" films. BBC Radio 4 broadcast "The Hobbit", a radio drama adapted by Michael Kilgarriff, in eight parts (four hours in total) from September to November 1968. It starred Anthony Jackson as narrator, Paul Daneman as Bilbo and Heron Carvic as Gandalf. The series was released on audio cassette in 1988 and on CD in 1997. "The Hobbit", an animated version of the story produced by Rankin/Bass, debuted as a television movie in the United States in 1977. In 1978, Romeo Muller won a Peabody Award for his teleplay for "The Hobbit". The film was also nominated for the Hugo Award for Best Dramatic Presentation, but lost to "Star Wars". 
The adaptation has been called "execrable" and confusing for those not already familiar with the plot. A children's opera was written and premiered in 2004. Composer and librettist Dean Burry was commissioned by the Canadian Children's Opera Chorus, who produced the premiere in Toronto, Ontario, and subsequently toured it to the Maritime provinces the same year. The opera has since been produced several times in North America including in Tulsa, Sarasota and Toronto. In December 2012, 2013, and 2014, Metro-Goldwyn-Mayer and New Line Cinema released one part each of a three-part live-action film version produced and directed by Peter Jackson. The titles were "The Hobbit: An Unexpected Journey", "The Hobbit: The Desolation of Smaug", and "The Hobbit: The Battle of the Five Armies". A three-part comic book adaptation with script by Chuck Dixon and Sean Deming and illustrated by David Wenzel was published by Eclipse Comics in 1989. In 1990 a one-volume edition was released by Unwin Paperbacks. The cover was artwork by the original illustrator David Wenzel. A reprint collected in one volume was released by Del Rey Books in 2001. Its cover, illustrated by Donato Giancola, was awarded the Association of Science Fiction Artists Award for Best Cover Illustration in 2002. In 1999, "The Hobbit: A 3-D Pop-Up Adventure" was published, with illustrations by John Howe and paper engineering by Andrew Baron. "Middle-earth Strategic Gaming" (formerly "Middle-earth Play-by-Mail"), which has won several Origins Awards, uses the "Battle of Five Armies" as an introductory scenario to the full game and includes characters and armies from the book. Several computer and video games, both licensed and unlicensed, have been based on the story. One of the most successful was "The Hobbit", an award-winning computer game developed in 1982 by Beam Software and published by Melbourne House with compatibility for most computers available at the time. A copy of the novel was included in each game package. 
The game does not retell the story, but rather sits alongside it, using the book's narrative to both structure and motivate gameplay. The game won the Golden Joystick Award for Strategy Game of the Year in 1983 and was responsible for popularizing the phrase, "Thorin sits down and starts singing about gold." While reliable figures are difficult to obtain, estimated global sales of "The Hobbit" run between 35 and 100 million copies since 1937. In the UK, "The Hobbit" has not fallen out of the top 5,000 bestselling books measured by Nielsen BookScan since 1998, when the index began, achieving a three-year sales peak rising from 33,084 (2000) to 142,541 (2001), 126,771 (2002) and 61,229 (2003), ranking it 3rd in Nielsen's "Evergreen" book list. The enduring popularity of "The Hobbit" makes early printings of the book attractive collectors' items. The first printing of the first English-language edition can sell for between £6,000 and £20,000 at auction, although the price for a signed first edition has reached over £60,000.
https://en.wikipedia.org/wiki?curid=30292
Tax Freedom Day Tax Freedom Day is the first day of the year in which a nation as a whole has theoretically earned enough income to pay its taxes. Every dollar that is officially considered income by the government is counted, and every payment to the government that is officially considered a tax is counted. Taxes at all levels of government – local, state and federal – are included. According to Neil Veldhuis, Director of Fiscal Studies, Fraser Institute, the purpose of Tax Freedom Day is to provide citizens of tax-paying countries with a metric with which to estimate their "total tax bill". The premise is that by comparing the benefits received by citizens to the amount they pay in taxes, the value of paying taxes can be assessed. The concept of Tax Freedom Day was developed in 1948 by Florida businessman Dallas Hostetler, who trademarked the phrase "Tax Freedom Day" and calculated it each year for the next two decades. In 1971, Hostetler retired and transferred the trademark to the Tax Foundation. The Tax Foundation has calculated Tax Freedom Day for the United States ever since, using it as a tool for illustrating the proportion of national income diverted to fund the annual cost of government programs. In 1990, the Tax Foundation began calculating the specific Tax Freedom Day for each individual state. Tax Freedom Day only examines taxation and does not account for debt and inflation as a means for funding government. Leap years have one day more, 29 February. This creates some bias in Tax Freedom Day charts. However, this bias is equal to roughly 1/366, which is about 0.27%. In the United States, it is annually calculated by the Tax Foundation, a Washington, D.C.-based tax research organization. In the U.S., Tax Freedom Day in 2019 is April 16, for a total average effective tax rate of 29% of the nation's income. The latest that Tax Freedom Day has occurred was May 1 in 2000. 
In 1900, Tax Freedom Day arrived January 22, for an effective average total tax rate of 5.9 percent of the nation's income. According to the Tax Foundation, the most important factor driving changes in Tax Freedom Day from year to year is growth in incomes, as the progressive structure of the U.S. federal tax system causes taxes as a percentage of income to rise along with inflation. The Tax Foundation also calculates Tax Freedom Day inclusive of annual federal borrowing, which came 22 days later in 2019, on May 8. The 22 days represent the federal deficit. Tax Freedom Day varies among the 50 U.S. states, as incomes and state and local taxes differ from state to state. In 2019, Alaska had the lowest total tax burden, earning enough to pay all its tax obligations by March 25. Washington, D.C. had the heaviest tax burden – Tax Freedom Day there arrived May 3. Many other organizations in countries throughout the world now produce their own "Tax Freedom Day" analysis. According to the Tax Foundation, Tax Freedom Day reports are currently being published in eight countries. Due to the different ways that nations collect and categorize public finance data, however, Tax Freedom Days are not comparable from one country to another. A 2010 study published in L'Anglophone, a Brussels newspaper, compared the tax burdens of "Average Joes" in each of the 27 EU member states and projected the Tax Freedom Day for workers earning a typical wage. Income taxes, social security contributions (by the employee and the employer) and projected VAT contributions were included in the calculations. Regarding the discrepancy between their calculation of August 3 as the typical Belgian worker's Tax Freedom Day and that of PriceWaterhouseCoopers (PWC), L'Anglophone's authors wrote: "[PWC's] figures count revenue from all taxes (including those on corporate profits, petrol, cigarettes, &c.) 
and thus present a more complete picture of the country’s total tax burden," adding that it is "an average applied to all Belgians – not all Belgian workers; in 2008, less than half of Belgium’s population (4.99 million working out of 10.67 million citizens) was legally working. Consequently, a huge share of Belgium’s tax burden is borne by the working population." In the book "", philosopher Joseph Heath criticizes the idea that tax-paying is inherently different from consumption: It would make just as much sense to declare an annual "mortgage freedom day", in order to let mortgage owners know what day they "stop working for the bank and start working for themselves". ...But who cares? Homeowners are not really "working for the bank"; they're merely financing their own consumption. After all, they're the ones living in the house, not the bank manager. For Canada, the Fraser Institute also includes a "Personal Tax Freedom Day Calculator" that estimates a customized Tax Freedom Day based on additional variables such as age of household head, sex of household head, marital status and number of children. However, the Fraser Institute's figures have been disputed. For example, a 2005 study by Osgoode Hall Law Professor Neil Brooks argued the Fraser Institute's Tax Freedom Day analysis includes flawed accounting, including the exclusion of several important forms of income and overstating tax figures, moving the date nearly two months later. In America, while Tax Freedom Day presents an "average American" tax burden, it is not a tax burden typical for an American. That is, the tax burdens of most Americans are substantially overstated by Tax Freedom Day. The larger tax bills associated with higher incomes increases the average tax burden above that of most Americans. The Tax Foundation defends its methodology by pointing out that Tax Freedom Day is the U.S. 
economy's overall average tax burden—not the tax burden of the "average" American, which is how it is often misinterpreted by members of the media. Tax Foundation materials do not use the phrase "tax burden of the average American", although members of the media often make this mistake. Another criticism is that the calculation includes capital gains taxes but not capital gains income, thus overstating the tax burden. For example, in the late 1990s the US Tax Freedom Day moved later, reaching its latest date ever in 2000, but this was largely due to capital gains taxes on the bull market of that era rather than an increase in tax rates. In other words, variations in capital gains income and their associated taxes cause changes in the amount of taxes, but not in the income used in the calculation of Tax Freedom Day. The Tax Foundation argues that the Tax Freedom Day calculation does not include capital gains as income because it uses income and tax data directly from the Bureau of Economic Analysis (BEA). BEA has never counted capital gains as income since they don't represent current production available to pay taxes, and so the Tax Foundation excludes them as well. Additionally, the Tax Foundation argues that the exclusion of capital gains income is irrelevant in most years since including capital gains would only shift Tax Freedom Day by 1 percent in either direction in most years. A 1 percent change would represent 3.65 days. From 1968 to 2009 the date has never left the 21-day range of April 13 to May 3.
https://en.wikipedia.org/wiki?curid=30296
Tax A tax is a compulsory financial charge or some other type of levy imposed upon a taxpayer (an individual or legal entity) by a governmental organization in order to fund government spending and various public expenditures. A failure to pay, along with evasion of or resistance to taxation, is punishable by law. Taxes consist of direct or indirect taxes and may be paid in money or as its labour equivalent. The first known taxation took place in Ancient Egypt around 3000–2800 BC. Most countries have a tax system in place to pay for public, common or agreed national needs and government functions. Some levy a flat percentage rate of taxation on personal annual income, but most scale taxes based on annual income amounts. Most countries charge a tax on an individual's income as well as on corporate income. Countries or subunits often also impose wealth taxes, inheritance taxes, estate taxes, gift taxes, property taxes, sales taxes, payroll taxes or tariffs. In economic terms, taxation transfers wealth from households or businesses to the government. This has effects which can both increase and reduce economic growth and economic welfare. Consequently, taxation is a highly debated topic. The legal definition and the economic definition of taxes differ in some ways: for example, economists do not regard many transfers to governments as taxes. Some transfers to the public sector are comparable to prices. Examples include tuition at public universities and fees for utilities provided by local governments. Governments also obtain resources by "creating" money and coins (for example, by printing bills and by minting coins), through voluntary gifts (for example, contributions to public universities and museums), by imposing penalties (such as traffic fines), by borrowing, and also by confiscating wealth. 
From the view of economists, a tax is a non-penal, yet compulsory transfer of resources from the private to the public sector, levied on a basis of predetermined criteria and without reference to specific benefit received. In modern taxation systems, governments levy taxes in money; but in-kind and "corvée" taxation are characteristic of traditional or pre-capitalist states and their functional equivalents. The method of taxation and the government expenditure of taxes raised is often highly debated in politics and economics. Tax collection is performed by a government agency such as the Ghana Revenue Authority, Canada Revenue Agency, the Internal Revenue Service (IRS) in the United States, Her Majesty's Revenue and Customs (HMRC) in the United Kingdom or Federal Tax Service in Russia. When taxes are not fully paid, the state may impose civil penalties (such as fines or forfeiture) or criminal penalties (such as incarceration) on the non-paying entity or individual. The levying of taxes aims to raise revenue to fund governing or to alter prices in order to affect demand. States and their functional equivalents throughout history have used money provided by taxation to carry out many functions. Some of these include expenditures on economic infrastructure (roads, public transportation, sanitation, legal systems, public safety, education, health-care systems), military, scientific research, culture and the arts, public works, distribution, data collection and dissemination, public insurance, and the operation of government itself. A government's ability to raise taxes is called its fiscal capacity. When expenditures exceed tax revenue, a government accumulates debt. A portion of taxes may be used to service past debts. Governments also use taxes to fund welfare and public services. These services can include education systems, pensions for the elderly, unemployment benefits, and public transportation. 
Energy, water and waste management systems are also common public utilities. According to the proponents of the chartalist theory of money creation, taxes are not needed for government revenue, as long as the government in question is able to issue fiat money. According to this view, the purpose of taxation is to maintain the stability of the currency, express public policy regarding the distribution of wealth, subsidize certain industries or population groups, or isolate the costs of certain benefits, such as highways or social security. The effects of a tax can be divided into two fundamental categories: the income effect and the substitution effect. If we consider, for instance, two normal goods, x and y, whose prices are respectively px and py, and an individual budget constraint given by the equation xpx + ypy = Y, where Y is income, then in a graph with good x on the vertical axis and good y on the horizontal axis, the slope of the budget constraint is equal to -py/px. The initial equilibrium is at the point (C), where the budget constraint and an indifference curve are tangent. Introducing an "ad valorem" tax on the y good (budget constraint: pxx + py(1 + τ)y = Y), the budget constraint's slope becomes equal to -py(1 + τ)/px. The new equilibrium is now at the tangent point (A), on a lower indifference curve. As can be seen, the tax's introduction causes two consequences: the income effect shows the variation in the quantity of good y caused by the change in real income, while the substitution effect shows the variation in good y determined by the change in relative prices. This kind of taxation (which causes a substitution effect) can be considered distortionary. 
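A minimal numeric sketch of this two-good example; the Cobb-Douglas utility function, prices and income below are illustrative assumptions, not from the text.

```python
# Illustrative only: Cobb-Douglas utility U = x**a * y**(1-a) with budget
# x*px + y*(1+tau)*py = Y.  This functional form makes the substitution
# away from the taxed good y easy to see.
def demand(Y, px, py, tau=0.0, a=0.5):
    x = a * Y / px                       # a Cobb-Douglas consumer spends share a on x
    y = (1 - a) * Y / ((1 + tau) * py)   # and share (1-a) on the taxed good y
    return x, y

x_c, y_c = demand(100.0, 1.0, 1.0)            # equilibrium C, no tax
x_a, y_a = demand(100.0, 1.0, 1.0, tau=0.25)  # equilibrium A, 25% tax on y
# the budget line's slope steepens from -py/px to -(1+tau)*py/px,
# and consumption of the taxed good falls:
print(y_c, y_a)  # 50.0 40.0
```

With these preferences the consumer buys less of the taxed good even though prices of the two goods were initially equal, which is exactly the distortion the text describes.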
Another example is the introduction of an income lump-sum tax (xpx + ypy = Y - T), which produces a parallel downward shift of the budget constraint: a higher revenue can be raised with the same loss of consumers' utility as in the ad valorem tax case or, from another point of view, the same revenue can be raised with a smaller utility sacrifice. The lower utility (with the same revenue), or the lower revenue (with the same utility), produced by a distortionary tax is called its excess burden. The same result reached with an income lump-sum tax can be obtained with the following types of taxes, all of which cause only a shift of the budget constraint without causing a substitution effect, since the budget constraint's slope remains the same (-py/px): when the rates t and τ are chosen to satisfy the equation t = τ/(1 + τ) (where t is the rate of income tax and τ is the consumption tax's rate), the effects of the two taxes are the same. A tax effectively changes the relative prices of products. Therefore, most economists, especially neoclassical economists, argue that taxation creates market distortion and results in economic inefficiency unless there are (positive or negative) externalities associated with the activities that are taxed that need to be internalized to reach an efficient market outcome. They have therefore sought to identify the kind of tax system that would minimize this distortion. Recent scholarship suggests that in the United States of America, the federal government effectively taxes investments in higher education more heavily than it subsidizes higher education, thereby contributing to a shortage of skilled workers and unusually high differences in pre-tax earnings between highly educated and less-educated workers. Taxes can also affect labour supply: consider a model in which the consumer chooses the number of hours spent working and the amount spent on consumption. Let us suppose that only one good exists and no income is saved. 
Consumers have a given number of hours (H) that is divided between work (L) and free time (F = H - L). The hourly wage is called w, and it represents free time's opportunity cost, i.e. the income the individual gives up by consuming an additional hour of free time. Consumption and hours of work have a positive relationship: more hours of work mean more earnings and, assuming that workers do not save, more earnings imply an increase in consumption (Y = C = wL). Free time and consumption can be considered as two normal goods (workers have to decide between working one hour more, which would mean consuming more, or having one more hour of free time) and the budget constraint is negatively sloped (Y = w(H - F)). The indifference curve related to these two goods has a negative slope, and free time becomes more and more important at high levels of consumption. This is because a high level of consumption means that people are already spending many hours working, so, in this situation, they value additional free time more than additional consumption, which implies that they have to be paid a higher salary to work an additional hour. A proportional income tax, which changes the budget constraint's slope (now Y = w(1 - t)(H - F)), implies both substitution and income effects. The problem now is that the two effects go in opposite directions: the income effect tells us that, with an income tax, the consumer feels poorer and for this reason wants to work more, causing an increase in labour supply. On the other hand, the substitution effect tells us that free time, being a normal good, is now relatively cheaper compared with consumption, and it implies a decrease in labour supply. Therefore, the total effect can be either an increase or a decrease in labour supply, depending on the shape of the indifference curve. The Laffer curve depicts the amount of government revenue as a function of the rate of taxation. 
It shows that for a tax rate above a certain critical rate, government revenue starts decreasing as the tax rate rises, as a consequence of a decline in labour supply. This theory holds that, if the tax rate is above that critical point, a decrease in the tax rate should imply a rise in labour supply that in turn would lead to an increase in government revenue. Governments use different kinds of taxes and vary the tax rates. They do this in order to distribute the tax burden among individuals or classes of the population involved in taxable activities, such as the business sector, or to redistribute resources between individuals or classes in the population. Historically, taxes on the poor supported the nobility; modern social-security systems aim to support the poor, the disabled, or the retired by taxes on those who are still working. In addition, taxes are applied to fund foreign aid and military ventures, to influence the macroeconomic performance of the economy (a government's strategy for doing this is called its fiscal policy; see also tax exemption), or to modify patterns of consumption or employment within an economy, by making some classes of transaction more or less attractive. A state's tax system often reflects its communal values and the values of those in current political power. To create a system of taxation, a state must make choices regarding the distribution of the tax burden—who will pay taxes and how much they will pay—and how the taxes collected will be spent. In democratic nations where the public elects those in charge of establishing or administering the tax system, these choices reflect the type of community that the public wishes to create. In countries where the public does not have a significant amount of influence over the system of taxation, that system may reflect more closely the values of those in power. 
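The labour-supply response and the resulting Laffer curve described above can be sketched with one concrete functional form. The quasilinear utility below is entirely an assumption for illustration: it has no income effect, so the substitution effect alone drives hours worked down as the rate rises, and revenue peaks at an interior rate.

```python
# Assumed toy model, not from the article: utility U = C - L**2/2 with
# consumption C = w*(1-t)*L.  The optimum equates the marginal disutility
# of labour (L) with the net wage, so L*(t) = w*(1-t): hours fall as t rises.
def labour_supply(t, w=10.0):
    return w * (1 - t)

def revenue(t, w=10.0):
    # government revenue R(t) = t * w * L*(t): the Laffer curve
    return t * w * labour_supply(t, w)

rates = [i / 20 for i in range(21)]   # tax rates 0.00, 0.05, ..., 1.00
peak = max(rates, key=revenue)
print(labour_supply(0.25))  # 7.5 -- fewer hours worked than the untaxed 10.0
print(peak)                 # 0.5 -- above this rate, raising t lowers revenue
```

With other preference shapes (for example, a strong income effect) labour supply could instead rise with the tax rate, which is why the text leaves the total effect ambiguous.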
All large businesses incur administrative costs in the process of delivering revenue collected from customers to the suppliers of the goods or services being purchased. Taxation is no different; the resource collected from the public through taxation is always greater than the amount which can be used by the government. The difference is called the compliance cost and includes (for example) the labour cost and other expenses incurred in complying with tax laws and rules. The collection of a tax in order to spend it on a specified purpose, for example collecting a tax on alcohol to pay directly for alcoholism-rehabilitation centres, is called hypothecation. Finance ministers often dislike this practice, since it reduces their freedom of action. Some economic theorists regard hypothecation as intellectually dishonest since, in reality, money is fungible. Furthermore, it often happens that taxes or excises initially levied to fund some specific government programs are then later diverted to the government general fund. In some cases, such taxes are collected in fundamentally inefficient ways, for example, through highway tolls. Since governments also resolve commercial disputes, especially in countries with common law, similar arguments are sometimes used to justify a sales tax or value added tax. Some (libertarians, for example) portray most or all forms of taxes as immoral due to their involuntary (and therefore eventually coercive or violent) nature. The most extreme anti-tax view, anarcho-capitalism, holds that all social services should be voluntarily bought by the people using them. The Organisation for Economic Co-operation and Development (OECD) publishes an analysis of the tax systems of member countries. As part of such analysis, OECD has developed a definition and system of classification of internal taxes, generally followed below. In addition, many countries impose taxes (tariffs) on the import of goods. 
Many jurisdictions tax the income of individuals and of business entities, including corporations. Generally, the authorities impose tax on net profits from a business, on net gains, and on other income. Computation of income subject to tax may be determined under accounting principles used in the jurisdiction, which tax-law principles in the jurisdiction may modify or replace. The incidence of taxation varies by system, and some systems may be viewed as progressive or regressive. Rates of tax may vary or be constant (flat) by income level. Many systems allow individuals certain personal allowances and other non-business reductions to taxable income, although business deductions tend to be favored over personal deductions. Tax-collection agencies often collect personal income tax on a pay-as-you-earn basis, with corrections made after the end of the tax year. These corrections take one of two forms: an additional payment to the government, by taxpayers who have not paid enough during the tax year, or a tax refund from the government, to those who have overpaid. Income-tax systems often make deductions available that reduce the total tax liability by reducing total taxable income. They may allow losses from one type of income to count against another - for example, a loss on the stock market may be deducted against taxes paid on wages. Other tax systems may isolate the loss, such that business losses can only be deducted against business income tax by carrying forward the loss to later tax years. In economics, a negative income tax (abbreviated NIT) is a progressive income tax system where people earning below a certain amount receive supplemental payment from the government instead of paying taxes to the government. Most jurisdictions imposing an income tax treat capital gains as part of income subject to tax. Capital gain is generally a gain on sale of capital assets—that is, those assets not held for sale in the ordinary course of business. Capital assets include personal assets in many jurisdictions. Some jurisdictions provide preferential rates of tax or only partial taxation for capital gains. 
Some jurisdictions impose different rates or levels of capital-gains taxation based on the length of time the asset was held. Because tax rates are often much lower for capital gains than for ordinary income, there is widespread controversy and dispute about the proper definition of capital. Corporate tax refers to income tax, capital tax, net-worth tax or other taxes imposed on corporations. Rates of tax and the taxable base for corporations may differ from those for individuals or for other taxable persons. Many countries provide publicly funded retirement or health-care systems. In connection with these systems, the country typically requires employers and/or employees to make compulsory payments. These payments are often computed by reference to wages or earnings from self-employment. Tax rates are generally fixed, but a different rate may be imposed on employers than on employees. Some systems provide an upper limit on earnings subject to the tax. A few systems provide that the tax is payable only on wages above a particular amount. Such upper or lower limits may apply for retirement but not for health-care components of the tax. Some have argued that such taxes on wages are a form of "forced savings" and not really a tax, while others point to redistribution through such systems between generations (from newer cohorts to older cohorts) and across income levels (from higher income-levels to lower income-levels) which suggests that such programs are really tax and spending programs. Unemployment and similar taxes are often imposed on employers based on total payroll. These taxes may be imposed in both the country and sub-country levels. A wealth tax is a levy on the total value of personal assets, including: bank deposits, real estate, assets in insurance and pension plans, ownership of unincorporated businesses, financial securities, and personal trusts. 
Typically liabilities (primarily mortgages and other loans) are deducted, hence it is sometimes called a net wealth tax. Recurrent property taxes may be imposed on immovable property (real property) and on some classes of movable property. In addition, recurrent taxes may be imposed on the net wealth of individuals or corporations. Many jurisdictions impose estate tax, gift tax or other inheritance taxes on property at death or at the time of gift transfer. Some jurisdictions impose taxes on financial or capital transactions. A property tax (or millage tax) is an "ad valorem" tax levied on the value of property, which the owner of the property is required to pay to the government of the jurisdiction in which the property is situated. Multiple jurisdictions may tax the same property. There are three general varieties of property: land, improvements to land (immovable man-made things, e.g. buildings) and personal property (movable things). Real estate or realty is the combination of land and improvements to land. Property taxes are usually charged on a recurrent basis (e.g., yearly). A common type of property tax is an annual charge on the ownership of real estate, where the tax base is the estimated value of the property. For a period of over 150 years from 1695, the government of England levied a window tax, with the result that one can still see listed buildings with windows bricked up in order to save their owners money. A similar tax on hearths existed in France and elsewhere, with similar results. The two most common types of event-driven property taxes are stamp duty, charged upon change of ownership, and inheritance tax, which many countries impose on the estates of the deceased. 
In contrast with a tax on real estate (land and buildings), a land-value tax (or LVT) is levied only on the unimproved value of the land ("land" in this instance may mean either the economic term, i.e., all natural resources, or the natural resources associated with specific areas of the Earth's surface: "lots" or "land parcels"). Proponents of land-value tax argue that it is economically justified, as it will not deter production, distort market mechanisms or otherwise create deadweight losses the way other taxes do. When real estate is held by a higher government unit or some other entity not subject to taxation by the local government, the taxing authority may receive a payment in lieu of taxes to compensate it for some or all of the foregone tax revenues. In many jurisdictions (including many American states), there is a general tax levied periodically on residents who own personal property (personalty) within the jurisdiction. Vehicle and boat registration fees are subsets of this kind of tax. The tax is often designed with blanket coverage and large exceptions for things like food and clothing. Household goods are often exempt when kept or used within the household. Any otherwise non-exempt object can lose its exemption if regularly kept outside the household. Thus, tax collectors often monitor newspaper articles for stories about wealthy people who have lent art to museums for public display, because the artworks have then become subject to personal property tax. If an artwork had to be sent to another state for some touch-ups, it may have become subject to personal property tax in "that" state as well. Inheritance tax, estate tax, and death tax or duty are the names given to various taxes which arise on the death of an individual. In United States tax law, there is a distinction between an estate tax and an inheritance tax: the former taxes the personal representatives of the deceased, while the latter taxes the beneficiaries of the estate. 
However, this distinction does not apply in other jurisdictions; for example, if using this terminology UK inheritance tax would be an estate tax. An expatriation tax is a tax on individuals who renounce their citizenship or residence. The tax is often imposed based on a deemed disposition of all the individual's property. One example is the United States under the "American Jobs Creation Act", where any individual who has a net worth of $2 million or an average income-tax liability of $127,000 who renounces his or her citizenship and leaves the country is automatically assumed to have done so for tax avoidance reasons and is subject to a higher tax rate. Historically, in many countries, a contract needs to have a stamp affixed to make it valid. The charge for the stamp is either a fixed amount or a percentage of the value of the transaction. In most countries, the stamp has been abolished but stamp duty remains. Stamp duty is levied in the UK on the purchase of shares and securities, the issue of bearer instruments, and certain partnership transactions. Its modern derivatives, stamp duty reserve tax and stamp duty land tax, are respectively charged on transactions involving securities and land. Stamp duty has the effect of discouraging speculative purchases of assets by decreasing liquidity. In the United States, transfer tax is often charged by the state or local government and (in the case of real property transfers) can be tied to the recording of the deed or other transfer documents. Some countries' governments will require declaration of the tax payers' balance sheet (assets and liabilities), and from that exact a tax on net worth (assets minus liabilities), as a percentage of the net worth, or a percentage of the net worth exceeding a certain level. The tax may be levied on "natural" or "legal persons." 
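The net-worth computation just described (assets minus liabilities, taxed as a percentage of the excess over a certain level) can be sketched in a few lines; the 1% rate and the exemption threshold below are invented for illustration.

```python
# Hypothetical net wealth tax: declared assets minus liabilities, taxed
# only above an exemption threshold.  Rate and threshold are assumptions.
def net_wealth_tax(assets, liabilities, rate=0.01, threshold=500_000.0):
    net_worth = assets - liabilities           # balance-sheet net worth
    taxable = max(0.0, net_worth - threshold)  # only the excess is taxed
    return taxable * rate

print(net_wealth_tax(900_000.0, 200_000.0))  # 2000.0 -- 1% of the 200,000 excess
```

Taxing only the excess over a threshold is what distinguishes the "percentage of the net worth exceeding a certain level" variant from a flat percentage of total net worth.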
A value added tax (VAT), also known as Goods and Services Tax (GST), Single Business Tax, or Turnover Tax in some countries, applies the equivalent of a sales tax to every operation that creates value. To give an example, sheet steel is imported by a machine manufacturer. That manufacturer will pay the VAT on the purchase price, remitting that amount to the government. The manufacturer will then transform the steel into a machine, selling the machine for a higher price to a wholesale distributor. The manufacturer will collect the VAT on the higher price, but will remit to the government only the excess related to the "value added" (the price over the cost of the sheet steel). The wholesale distributor will then continue the process, charging the retail distributor the VAT on the entire price to the retailer, but remitting only the amount related to the distribution mark-up to the government. The last VAT amount is paid by the eventual retail customer, who cannot recover any of the previously paid VAT. For a VAT and sales tax of identical rates, the total tax paid is the same, but it is paid at differing points in the process. VAT is usually administered by requiring the company to complete a VAT return, giving details of VAT it has been charged (referred to as input tax) and VAT it has charged to others (referred to as output tax). The difference between output tax and input tax is payable to the local tax authority. Many tax authorities have introduced automated VAT, which has increased accountability and auditability by utilizing computer systems, and has also enabled the establishment of anti-cybercrime offices. Sales taxes are levied when a commodity is sold to its final consumer. Retail organizations contend that such taxes discourage retail sales. The question of whether they are generally progressive or regressive is a subject of much current debate. People with higher incomes spend a lower proportion of them, so a flat-rate sales tax will tend to be regressive. 
It is therefore common to exempt food, utilities and other necessities from sales taxes, since poor people spend a higher proportion of their incomes on these commodities, so such exemptions make the tax more progressive. This is the classic "You pay for what you spend" tax, as only those who spend money on non-exempt (i.e. luxury) items pay the tax. A small number of U.S. states rely entirely on sales taxes for state revenue, as those states do not levy a state income tax. Such states tend to have a moderate to large amount of tourism or inter-state travel that occurs within their borders, allowing the state to benefit from taxes from people the state would otherwise not tax. In this way, the state is able to reduce the tax burden on its citizens. The U.S. states that do not levy a state income tax are Alaska, Tennessee, Florida, Nevada, South Dakota, Texas, Washington state, and Wyoming. Additionally, New Hampshire and Tennessee levy state income taxes only on dividends and interest income. Of the above states, only Alaska and New Hampshire do not levy a state sales tax. Additional information can be obtained at the Federation of Tax Administrators website. In the United States, there is a growing movement for the replacement of all federal payroll and income taxes (both corporate and personal) with a national retail sales tax and monthly tax rebate to households of citizens and legal resident aliens. The tax proposal is named FairTax. In Canada, the federal sales tax is called the Goods and Services Tax (GST) and now stands at 5%. The provinces of British Columbia, Saskatchewan, Manitoba, and Prince Edward Island also have a provincial sales tax [PST]. The provinces of Nova Scotia, New Brunswick, Newfoundland & Labrador, and Ontario have harmonized their provincial sales taxes with the GST into the Harmonized Sales Tax [HST], which thus functions as a full VAT. The province of Quebec collects the Quebec Sales Tax [QST], which is based on the GST with certain differences. 
Most businesses can claim back the GST, HST and QST they pay, and so effectively it is the final consumer who pays the tax. An excise duty is an indirect tax imposed upon goods during the process of their manufacture, production or distribution, and is usually proportionate to their quantity or value. Excise duties were first introduced into England in the year 1643, as part of a scheme of revenue and taxation devised by parliamentarian John Pym and approved by the Long Parliament. These duties consisted of charges on beer, ale, cider, cherry wine and tobacco, to which list were afterwards added paper, soap, candles, malt, hops, and sweets. The basic principle of excise duties was that they were taxes on the production, manufacture or distribution of articles which could not be taxed through the customs house, and revenue derived from that source is called excise revenue proper. The fundamental conception of the term is that of a tax on articles produced or manufactured in a country. In the taxation of such articles of luxury as spirits, beer, tobacco, and cigars, it has been the practice to place a certain duty on the importation of these articles (a customs duty). Excises (or exemptions from them) are also used to modify consumption patterns of a certain area (social engineering). For example, a high excise is used to discourage alcohol consumption, relative to other goods. This may be combined with hypothecation if the proceeds are then used to pay for the costs of treating illness caused by alcohol abuse. Similar taxes may exist on tobacco, pornography, etc., and they may be collectively referred to as "sin taxes". A carbon tax is a tax on the consumption of carbon-based non-renewable fuels, such as petrol, diesel-fuel, jet fuels, and natural gas. The object is to reduce the release of carbon into the atmosphere. In the United Kingdom, vehicle excise duty is an annual tax on vehicle ownership. 
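The sheet-steel VAT chain described earlier (steel importer, machine manufacturer, wholesaler, retailer) can be checked numerically; the 10% rate and the prices at each stage are invented for the example.

```python
# Each firm charges VAT on its sale (output tax) but reclaims the VAT paid
# on its inputs (input tax), so it remits tax only on its value added.
RATE = 0.10
prices = [100.0, 250.0, 400.0, 500.0]  # steel -> machine -> wholesale -> retail

remitted, prev_price = [], 0.0
for price in prices:
    output_tax = price * RATE        # VAT charged to the buyer
    input_tax = prev_price * RATE    # VAT recovered on inputs
    remitted.append(output_tax - input_tax)
    prev_price = price

print(remitted)       # [10.0, 15.0, 15.0, 10.0] -- tax on each stage's value added
print(sum(remitted))  # 50.0 -- equals a 10% sales tax on the 500.0 retail price
```

The final retail customer bears the full 50.0, just as under a sales tax of the same rate, but the payment is collected in pieces along the chain rather than all at the final sale.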
An import or export tariff (also called customs duty or impost) is a charge for the movement of goods through a political border. Tariffs discourage trade, and they may be used by governments to protect domestic industries. A proportion of tariff revenues is often hypothecated to pay for the maintenance of a navy or border police. The classic ways of cheating a tariff are smuggling or declaring a false value of goods. Tax, tariff and trade rules in modern times are usually set together because of their common impact on industrial policy, investment policy, and agricultural policy. A trade bloc is a group of allied countries agreeing to minimize or eliminate tariffs against trade with each other, and possibly to impose protective tariffs on imports from outside the bloc. A customs union has a common external tariff, and the participating countries share the revenues from tariffs on goods entering the customs union. In some societies, tariffs also could be imposed by local authorities on the movement of goods between regions (or via specific internal gateways). A notable example is the "likin", which became an important revenue source for local governments in late Qing China. Occupational taxes or license fees may be imposed on businesses or individuals engaged in certain businesses. Many jurisdictions impose a tax on vehicles. A poll tax, also called a "per capita tax" or "capitation tax", is a tax that levies a set amount per individual. It is an example of the concept of a fixed tax. One of the earliest taxes mentioned in the Bible, of a half-shekel per annum from each adult Jew (Ex. 30:11–16), was a form of poll tax. Poll taxes are administratively cheap because they are easy to compute and collect and difficult to cheat. Economists have considered poll taxes economically efficient because people are presumed to be in fixed supply and poll taxes therefore do not lead to economic distortions. 
However, poll taxes are very unpopular because poorer people pay a higher proportion of their income than richer people. In addition, the supply of people is in fact not fixed over time: on average, couples will choose to have fewer children if a poll tax is imposed. The introduction of a poll tax in medieval England was the primary cause of the 1381 Peasants' Revolt. In Great Britain, the new poll tax was tested first in Scotland in 1989 and then introduced in England and Wales in 1990. The change from a progressive local taxation based on property values to a single-rate form of taxation regardless of ability to pay (the Community Charge, but more popularly referred to as the Poll Tax), led to widespread refusal to pay and to incidents of civil unrest, known colloquially as the 'Poll Tax Riots'. Some types of taxes have been proposed but not actually adopted in any major jurisdiction. An "ad valorem" tax is one where the tax base is the value of a good, service, or property. Sales taxes, tariffs, property taxes, inheritance taxes, and value added taxes are different types of ad valorem tax. An ad valorem tax is typically imposed at the time of a transaction (sales tax or value added tax (VAT)) but it may be imposed on an annual basis (property tax) or in connection with another significant event (inheritance tax or tariffs). In contrast to ad valorem taxation is a "per unit" tax, where the tax base is the quantity of something, regardless of its price. An excise tax is an example. Consumption tax refers to any tax on non-investment spending, and can be implemented by means of a sales tax, consumer value added tax, or by modifying an income tax to allow for unlimited deductions for investment or savings. Environmental taxes include the natural resources consumption tax, greenhouse gas tax (carbon tax), "sulfuric tax", and others. The stated purpose is to reduce the environmental impact by repricing. Economists describe environmental impacts as negative externalities. 
As early as 1920, Arthur Pigou suggested a tax to deal with externalities (see also the section on Increased economic welfare below). The proper implementation of environmental taxes has been the subject of a long-lasting debate. An important feature of tax systems is the percentage of the tax burden as it relates to income or consumption. The terms progressive, regressive, and proportional are used to describe the way the rate progresses from low to high, from high to low, or proportionally. The terms describe a distribution effect, which can be applied to any type of tax system (income or consumption) that meets the definition. The terms can also describe the taxation of selected consumption: a tax on luxury goods combined with an exemption for basic necessities may be described as having progressive effects, as it increases the tax burden on high-end consumption and decreases it on low-end consumption. Taxes are sometimes referred to as "direct taxes" or "indirect taxes". The meaning of these terms can vary in different contexts, which can sometimes lead to confusion. An economic definition, by Atkinson, states that "...direct taxes may be adjusted to the individual characteristics of the taxpayer, whereas indirect taxes are levied on transactions irrespective of the circumstances of buyer or seller." According to this definition, for example, income tax is "direct", and sales tax is "indirect". In law, the terms may have different meanings. In U.S. constitutional law, for instance, direct taxes refer to poll taxes and property taxes, which are based on simple existence or ownership. Indirect taxes are imposed on events, rights, privileges, and activities. Thus, a tax on the sale of property would be considered an indirect tax, whereas the tax on simply owning the property itself would be a direct tax. Governments may charge user fees, tolls, or other types of assessments in exchange for particular goods, services, or use of property.
These are generally not considered taxes, as long as they are levied as payment for a direct benefit to the individual paying. Such fees include: Some scholars refer to certain economic effects as taxes, though they are not levies imposed by governments. These include: (see, for example, Reinhart, Carmen M. and Rogoff, Kenneth S., "This Time is Different", Princeton and Oxford: Princeton University Press, 2008, p. 143; and Reinhart, Carmen M. and Sbrancia, M. Belen, "The Liquidation of Government Debt", p. 19). The first known system of taxation was in Ancient Egypt around 3000–2800 BC, in the First Dynasty of the Old Kingdom of Egypt. The earliest and most widespread forms of taxation were the corvée and the tithe. The corvée was forced labour provided to the state by peasants too poor to pay other forms of taxation ("labour" in ancient Egyptian is a synonym for taxes). Records from the time document that the Pharaoh would conduct a biennial tour of the kingdom, collecting tithes from the people. Other records are granary receipts on limestone flakes and papyrus. Early taxation is also described in the Bible. In Genesis (chapter 47, verse 24 – the New International Version), it states "But when the crop comes in, give a fifth of it to Pharaoh. The other four-fifths you may keep as seed for the fields and as food for yourselves and your households and your children". Joseph was telling the people of Egypt how to divide their crop, providing a portion to the Pharaoh. A share (20%) of the crop was the tax (in this case, a special rather than an ordinary tax, as it was gathered against an expected famine). The stock thus accumulated was later returned, shared equally with the people of Egypt, and traded with the surrounding nations, thus saving and elevating Egypt. In the Persian Empire, a regulated and sustainable tax system was introduced by Darius I the Great in 500 BC; the Persian system of taxation was tailored to each Satrapy (the area ruled by a Satrap or provincial governor).
At differing times, there were between 20 and 30 Satrapies in the Empire and each was assessed according to its supposed productivity. It was the responsibility of the Satrap to collect the due amount and to send it to the treasury, after deducting his expenses (the expenses, and the power of deciding precisely how and from whom to raise the money in the province, offered maximum opportunity for rich pickings). The quantities demanded from the various provinces gave a vivid picture of their economic potential. For instance, Babylon was assessed for the highest amount and for a startling mixture of commodities: 1,000 silver talents and four months' supply of food for the army. India, a province fabled for its gold, was to supply gold dust equal in value to the very large amount of 4,680 silver talents. Egypt was known for the wealth of its crops; it was to be the granary of the Persian Empire (and, later, of the Roman Empire) and was required to provide 120,000 measures of grain in addition to 700 talents of silver. This tax was exclusively levied on Satrapies based on their lands, productive capacity and tribute levels. The Rosetta Stone, a tax concession issued by Ptolemy V in 196 BC and written in three languages, "led to the most famous decipherment in history—the cracking of hieroglyphics". Islamic rulers imposed Zakat (a tax on Muslims) and Jizya (a poll tax on conquered non-Muslims). In India this practice began in the 11th century. Numerous records of government tax collection in Europe since at least the 17th century are still available today. But taxation levels are hard to compare to the size and flow of the economy, since production numbers are not as readily available. Government expenditures and revenue in France during the 17th century went from about 24.30 million "livres" in 1600–10 to about 126.86 million "livres" in 1650–59 to about 117.99 million "livres" in 1700–10, when government debt had reached 1.6 billion "livres".
In 1780–89, it reached 421.50 million "livres". Taxation as a percentage of production of final goods may have reached 15–20% during the 17th century in places such as France, the Netherlands, and Scandinavia. During the war-filled years of the eighteenth and early nineteenth century, tax rates in Europe increased dramatically as war became more expensive and governments became more centralized and adept at gathering taxes. This increase was greatest in England: Peter Mathias and Patrick O'Brien found that the tax burden increased by 85% over this period. Another study confirmed this number, finding that per capita tax revenues had grown almost sixfold over the eighteenth century, but that steady economic growth had made the real burden on each individual only double over this period before the industrial revolution. Effective tax rates were higher in Britain than in France in the years before the French Revolution, about twice as high relative to per capita income, but they were mostly placed on international trade. In France, taxes were lower but the burden was mainly on landowners, individuals, and internal trade and thus created far more resentment. Taxation as a percentage of GDP in 2016 was 45.9% in Denmark, 45.3% in France, 33.2% in the United Kingdom, 26% in the United States, and among all OECD members an average of 34.3%. In monetary economies prior to fiat banking, a critical form of taxation was seigniorage, the tax on the creation of money. Other obsolete forms of taxation include: Some principalities taxed windows, doors, or cabinets to reduce consumption of imported glass and hardware. Armoires, hutches, and wardrobes were employed to evade taxes on doors and cabinets. In some circumstances, taxes are also used to enforce public policy, like the congestion charge (to cut road traffic and encourage public transport) in London. In Tsarist Russia, taxes were levied on beards. Today, one of the most complicated taxation systems worldwide is in Germany.
Three-quarters of the world's taxation literature refers to the German system. Under the German system, there are 118 laws, 185 forms, and 96,000 regulations, with €3.7 billion spent to collect the income tax. In the United States, the IRS has about 1,177 forms and instructions, 28.4111 megabytes of Internal Revenue Code (which contained 3.8 million words as of 1 February 2010), numerous tax regulations in the Code of Federal Regulations, and supplementary material in the Internal Revenue Bulletin. Today, governments in more advanced economies (i.e. Europe and North America) tend to rely more on direct taxes, while developing economies (i.e. India and several African countries) rely more on indirect taxes. In economic terms, taxation transfers wealth from households or businesses to the government of a nation. Adam Smith discusses taxation at length in "The Wealth of Nations". The side-effects of taxation (such as economic distortions) and theories about how best to tax are an important subject in microeconomics. Taxation is almost never a simple transfer of wealth. Economic theories of taxation approach the question of how to maximize economic welfare through taxation. A 2019 study looking at the impact of tax cuts for different income groups found that tax cuts for low-income groups had the greatest positive impact on employment growth. Tax cuts for the wealthiest top 10% had a small impact. Law establishes from whom a tax is collected. In many countries, taxes are imposed on businesses (such as corporate taxes or portions of payroll taxes). However, who ultimately pays the tax (the tax "burden") is determined by the marketplace as taxes become embedded into production costs. Economic theory suggests that the economic effect of tax does not necessarily fall at the point where it is legally levied. For instance, a tax on employment paid by employers will impact the employee, at least in the long run.
The greatest share of the tax burden tends to fall on the most inelastic factor involved—the part of the transaction which is affected least by a change in price. So, for instance, a tax on wages in a town will (at least in the long run) affect property-owners in that area. Depending on how quantities supplied and demanded vary with price (the "elasticities" of supply and demand), a tax can be absorbed by the seller (in the form of lower pre-tax prices), or by the buyer (in the form of higher post-tax prices). If the elasticity of supply is low, more of the tax will be paid by the supplier. If the elasticity of demand is low, more will be paid by the customer; and, contrariwise for the cases where those elasticities are high. If the seller is a competitive firm, the tax burden is distributed over the factors of production depending on the elasticities thereof; this includes workers (in the form of lower wages), capital investors (in the form of loss to shareholders), landowners (in the form of lower rents), entrepreneurs (in the form of lower wages of superintendence) and customers (in the form of higher prices). To show this relationship, suppose that the market price of a product is $1.00, and that a $0.50 tax is imposed on the product that, by law, is to be collected from the seller. If the product has an elastic demand, a greater portion of the tax will be absorbed by the seller. This is because goods with elastic demand cause a large decline in quantity demanded for a small increase in price. Therefore, in order to stabilize sales, the seller absorbs more of the additional tax burden. For example, the seller might drop the price of the product to $0.70 so that, after adding in the tax, the buyer pays a total of $1.20, or $0.20 more than he did before the $0.50 tax was imposed. In this example, the buyer has paid $0.20 of the $0.50 tax (in the form of a post-tax price) and the seller has paid the remaining $0.30 (in the form of a lower pre-tax price). 
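The $0.20/$0.30 split in the example above can be reproduced with a small linear supply-and-demand model. The coefficients below are hypothetical, chosen only so that the pre-tax price is $1.00 and demand is somewhat more elastic than supply; this is an illustrative sketch of tax incidence, not a general market model.

```python
# Toy linear market: demand Qd = a - b*Pb, supply Qs = c + d*Ps,
# where Pb is the price the buyer pays and Ps the price the seller keeps.
# A per-unit tax t drives a wedge between them: Pb = Ps + t.

def incidence(a, b, c, d, t):
    """Return (pre-tax price, seller's post-tax price, buyer's post-tax price)."""
    p0 = (a - c) / (b + d)          # pre-tax equilibrium: a - b*p = c + d*p
    ps = (a - c - b * t) / (b + d)  # seller's price after the tax wedge
    pb = ps + t                     # buyer's price after the tax wedge
    return p0, ps, pb

# Hypothetical coefficients: demand slope b = 3 (more elastic) vs
# supply slope d = 2, giving a $1.00 pre-tax price.
p0, ps, pb = incidence(a=8, b=3, c=3, d=2, t=0.50)
print(p0, ps, pb)  # ≈ 1.00, 0.70, 1.20: buyer bears $0.20, seller $0.30
```

With these slopes the buyer bears the fraction d/(b+d) = 0.4 of the tax and the seller b/(b+d) = 0.6, matching the text: the side whose behaviour responds least to price (here supply) absorbs more of the burden.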
The purpose of taxation is to provide for government spending without inflation. The provision of public goods such as roads and other infrastructure, schools, a social safety net, health care, national defense, law enforcement, and a courts system increases the economic welfare of society if the benefit outweighs the costs involved. The existence of a tax can "increase" economic efficiency in some cases. If there is a negative externality associated with a good, meaning that it has negative effects not felt by the consumer, then a free market will trade too much of that good. By taxing the good, the government can increase overall welfare as well as raising revenue. This type of tax is called a Pigovian tax, after economist Arthur Pigou. Possible Pigovian taxes include those on polluting fuels (like petrol), taxes on goods which incur public healthcare costs (such as alcohol or tobacco), and charges for existing 'free' public goods (like congestion charging). Progressive taxation may reduce economic inequality. This effect occurs even when the tax revenue is not redistributed. Most taxes (see below) have side effects that reduce economic welfare, either by mandating unproductive labor (compliance costs) or by creating distortions to economic incentives (deadweight loss and perverse incentives). Although governments must spend money on tax collection activities, some of the costs, particularly for keeping records and filling out forms, are borne by businesses and by private individuals. These are collectively called costs of compliance. More complex tax systems tend to have higher compliance costs. This fact can be used as the basis for practical or moral arguments in favor of tax simplification (such as the FairTax or OneTax, and some flat tax proposals). In the absence of negative externalities, the introduction of taxes into a market reduces economic efficiency by causing deadweight loss.
In a competitive market the price of a particular economic good adjusts to ensure that all trades which benefit both the buyer and the seller of a good occur. The introduction of a tax causes the price received by the seller to be less than the cost to the buyer by the amount of the tax. This causes fewer transactions to occur, which reduces economic welfare; the individuals or businesses involved are less well off than before the tax. The tax burden and the amount of deadweight cost depend on the elasticity of supply and demand for the good taxed. Most taxes—including income tax and sales tax—can have significant deadweight costs. The only way to avoid deadweight costs in an economy that is generally competitive is to refrain from taxes that change economic incentives. Such taxes include the land value tax, where the tax is on a good in completely inelastic supply, and a lump-sum tax such as a poll tax (head tax), which is paid by all adults regardless of their choices. Arguably, a windfall profits tax which is entirely unanticipated can also fall into this category. Deadweight loss does not account for the effect taxes have in leveling the business playing field. Businesses that have more money are better suited to fend off competition. It is common that an industry with a small number of very large corporations has a very high barrier to entry for new entrants coming into the marketplace. This is because the larger the corporation, the better its position to negotiate with suppliers. Also, larger companies may be able to operate at low or even negative profits for extended periods of time, thus pushing out competition. More progressive taxation of profits, however, would reduce such barriers for new entrants, thereby increasing competition and ultimately benefiting consumers. Complexity of the tax code in developed economies offers perverse tax incentives.
The more details of tax policy there are, the more opportunities for legal tax avoidance and illegal tax evasion. These not only result in lost revenue, but involve additional costs: for instance, payments made for tax advice are essentially deadweight costs because they add no wealth to the economy. Perverse incentives also occur because of non-taxable 'hidden' transactions; for instance, a sale from one company to another might be liable for sales tax, but if the same goods were shipped from one branch of a corporation to another, no tax would be payable. To address these issues, economists often suggest simple and transparent tax structures which avoid providing loopholes. Sales tax, for instance, can be replaced with a value added tax which disregards intermediate transactions. Following Nicolas Kaldor's research, public finance in developing countries is strongly tied to state capacity and financial development. As state capacity develops, states not only increase the level of taxation but also change its pattern: tax bases grow larger, the importance of trade taxes diminishes, and income tax gains more importance. According to Tilly's argument, state capacity evolves as a response to the emergence of war. War is an incentive for states to raise taxes and strengthen state capacity. Historically, many taxation breakthroughs took place during wartime. The introduction of income tax in Britain in 1798 was due to the Napoleonic Wars. The US first introduced an income tax during the Civil War. Taxation is constrained by the fiscal and legal capacities of a country. Fiscal and legal capacities also complement each other. A well-designed tax system can minimize efficiency loss and boost economic growth. With better compliance and better support to financial institutions and individual property, the government will be able to collect more tax.
Although wealthier countries have higher tax revenue, economic growth does not always translate to higher tax revenue. For example, in India, increases in exemptions have led to income tax revenue stagnating at around 0.5% of GDP since 1986. Researchers for EPS PEAKS stated that the core purpose of taxation is revenue mobilisation, providing resources for national budgets, and forming an important part of macroeconomic management. They said economic theory has focused on the need to 'optimise' the system through balancing efficiency and equity, understanding the impacts on production and consumption as well as distribution, redistribution, and welfare. They state that taxes and tax reliefs have also been used as a tool for behavioural change, to influence investment decisions, labour supply, consumption patterns, and positive and negative economic spill-overs (externalities), and ultimately, the promotion of economic growth and development. The tax system and its administration also play an important role in state-building and governance, as a principal form of 'social contract' between the state and citizens who can, as taxpayers, exert accountability on the state as a consequence. The researchers wrote that domestic revenue forms an important part of a developing country's public financing as it is more stable and predictable than Overseas Development Assistance and necessary for a country to be self-sufficient. They found that domestic revenue flows are, on average, already much larger than ODA, with aid worth less than 10% of collected taxes in Africa as a whole. However, in a quarter of African countries Overseas Development Assistance does exceed tax collection, with these more likely to be non-resource-rich countries. This suggests countries making most progress replacing aid with tax revenue tend to be those benefiting disproportionately from rising prices of energy and commodities.
The author found tax revenue as a percentage of GDP varying greatly around a global average of 19%. This data also indicates countries with higher GDP tend to have higher tax-to-GDP ratios, demonstrating that higher income is associated with more than proportionately higher tax revenue. On average, high-income countries have tax revenue as a percentage of GDP of around 22%, compared to 18% in middle-income countries and 14% in low-income countries. In high-income countries, the highest tax-to-GDP ratio is in Denmark at 47% and the lowest is in Kuwait at 0.8%, reflecting low taxes made possible by strong oil revenues. Long-term average performance of tax revenue as a share of GDP in low-income countries has been largely stagnant, although most have shown some improvement in more recent years. On average, resource-rich countries have made the most progress, rising from 10% in the mid-1990s to around 17% in 2008. Non-resource-rich countries made some progress, with average tax revenues increasing from 10% to 15% over the same period. Many low-income countries have a tax-to-GDP ratio of less than 15%, which could be due to low tax potential, such as limited taxable economic activity, or low tax effort due to policy choice, non-compliance, or administrative constraints. Some low-income countries have relatively high tax-to-GDP ratios due to resource tax revenues (e.g. Angola) or relatively efficient tax administration (e.g. Kenya, Brazil), whereas some middle-income countries have lower tax-to-GDP ratios (e.g. Malaysia) which reflect a more tax-friendly policy choice. While overall tax revenues have remained broadly constant, the global trend shows trade taxes have been declining as a proportion of total revenues (IMF, 2011), with the share of revenue shifting away from border trade taxes towards domestically levied sales taxes on goods and services.
Low-income countries tend to have a higher dependence on trade taxes, and a smaller proportion of income and consumption taxes, when compared to high-income countries. One indicator of the taxpaying experience was captured in the 'Doing Business' survey, which compares the total tax rate, time spent complying with tax procedures, and the number of payments required through the year, across 176 countries. The 'easiest' countries in which to pay taxes are located in the Middle East, with the UAE ranking first, followed by Qatar and Saudi Arabia, most likely reflecting low tax regimes in those countries. Countries in Sub-Saharan Africa are among the 'hardest' in which to pay taxes, with the Central African Republic, Republic of Congo, Guinea and Chad in the bottom 5, reflecting higher total tax rates and a greater administrative burden to comply. The following findings were compiled by EPS PEAKS researchers: Aid interventions in revenue can support revenue mobilisation for growth, improve tax system design and administrative effectiveness, and strengthen governance and compliance. The author of the Economics Topic Guide found that the best aid modalities for revenue depend on country circumstances, but should aim to align with government interests and facilitate effective planning and implementation of activities under an evidence-based tax reform. Lastly, she found that identifying areas for further reform requires country-specific diagnostic assessment: broad areas for developing countries identified internationally (e.g. by the IMF) include, for example, property taxation for local revenues, strengthening expenditure management, and effective taxation of extractive industries and multinationals. According to most political philosophies, taxes are justified as they fund activities that are necessary and beneficial to society. Additionally, progressive taxation can be used to reduce economic inequality in a society.
According to this view, taxation in modern nation-states benefits the majority of the population and social development. A common presentation of this view, paraphrasing various statements by Oliver Wendell Holmes Jr., is "Taxes are the price of civilization". It can also be argued that in a democracy, because the government is the party performing the act of imposing taxes, society as a whole decides how the tax system should be organized. The American Revolution's "No taxation without representation" slogan implied this view. For traditional conservatives, the payment of taxation is justified as part of the general obligations of citizens to obey the law and support established institutions. The conservative position is encapsulated in perhaps the most famous adage of public finance, "An old tax is a good tax". Conservatives advocate the "fundamental conservative premise that no one should be excused from paying for government, lest they come to believe that government is costless to them with the certain consequence that they will demand more government 'services'." Social democrats generally favor higher levels of taxation to fund public provision of a wide range of services such as universal health care and education, as well as the provision of a range of welfare benefits. As argued by Anthony Crosland and others, the capacity to tax income from capital is a central element of the social democratic case for a mixed economy as against Marxist arguments for comprehensive public ownership of capital. American libertarians recommend a minimal level of taxation in order to maximize the protection of liberty. Compulsory taxation of individuals, such as income tax, is often justified on grounds including territorial sovereignty and the social contract.
Defenders of business taxation argue that it is an efficient method of taxing income that ultimately flows to individuals, or that separate taxation of business is justified on the grounds that commercial activity necessarily involves use of publicly established and maintained economic infrastructure, and that businesses are in effect charged for this use. Georgist economists argue that all of the economic rent collected from natural resources (land, mineral extraction, fishing quotas, etc.) is unearned income, and belongs to the community rather than any individual. They advocate a high tax (the "Single Tax") on land and other natural resources to return this unearned income to the state, but no other taxes. Because payment of tax is compulsory and enforced by the legal system, rather than voluntary like crowdfunding, some political philosophies view taxation as theft, extortion, slavery, a violation of property rights, or tyranny, accusing the government of levying taxes via force and coercive means. Objectivists, anarcho-capitalists, and right-wing libertarians see taxation as government aggression (see non-aggression principle). The view that democracy legitimizes taxation is rejected by those who argue that all forms of government, including laws chosen by democratic means, are fundamentally oppressive. According to Ludwig von Mises, "society as a whole" should not make such decisions, due to methodological individualism. Libertarian opponents of taxation claim that governmental protection, such as police and defense forces, might be replaced by market alternatives such as private defense agencies, arbitration agencies or voluntary contributions. Karl Marx assumed that taxation would be unnecessary after the advent of communism and looked forward to the "withering away of the state".
In socialist economies such as that of China, taxation played a minor role, since most government income was derived from the ownership of enterprises, and it was argued by some that monetary taxation was not necessary. While the morality of taxation is sometimes questioned, most arguments about taxation revolve around the degree and method of taxation and associated government spending, not taxation itself. Tax choice is the theory that taxpayers should have more control over how their individual taxes are allocated. If taxpayers could choose which government organizations received their taxes, opportunity cost decisions would integrate their partial knowledge. For example, a taxpayer who allocated more of his taxes on public education would have less to allocate on public healthcare. Supporters argue that allowing taxpayers to demonstrate their preferences would help ensure that the government succeeds at efficiently producing the public goods that taxpayers truly value. Geoists (Georgists and geolibertarians) state that taxation should primarily collect economic rent, in particular the value of land, for both reasons of economic efficiency as well as morality. The efficiency of using economic rent for taxation is (as economists agree) due to the fact that such taxation cannot be passed on and does not create any dead-weight loss, and that it removes the incentive to speculate on land. Its morality is based on the Geoist premise that private property is justified for products of labour but not for land and natural resources. Economist and social reformer Henry George opposed sales taxes and protective tariffs for their negative impact on trade.
He also believed in the right of each person to the fruits of their own labour and productive investment. Therefore, income from labour and proper capital should remain untaxed. For this reason many Geoists—in particular those that call themselves geolibertarian—share the view with libertarians that these types of taxation (but not all) are immoral and even theft. George stated there should be one single tax: the Land Value Tax, which is considered both efficient and moral. Demand for specific land is dependent on nature, but even more so on the presence of communities, trade, and government infrastructure, particularly in urban environments. Therefore, the economic rent of land is not the product of one particular individual and it may be claimed for public expenses. According to George, this would end real estate bubbles, business cycles, unemployment and distribute wealth much more evenly. Joseph Stiglitz's Henry George Theorem predicts its sufficiency for financing public goods because those raise land value. John Locke stated that whenever labour is mixed with natural resources, such as is the case with improved land, private property is justified under the proviso that there must be enough other natural resources of the same quality available to others. Geoists state that the Lockean proviso is violated wherever land value is greater than zero. Therefore, under the assumed principle of equal rights of all people to natural resources, the occupier of any such land must compensate the rest of society to the amount of that value. For this reason, geoists generally believe that such payment cannot be regarded as a true 'tax', but rather a compensation or fee. This means that while Geoists also regard taxation as an instrument of social justice, contrary to social democrats and social liberals they do not regard it as an instrument of redistribution but rather a 'predistribution' or simply a correct distribution of the commons. 
Modern geoists note that land in the classical economic meaning of the word referred to all natural resources, and thus also includes resources such as mineral deposits, water bodies and the electromagnetic spectrum, to which privileged access also generates economic rent that must be compensated. Under the same reasoning most of them also consider Pigouvian taxes as compensation for environmental damage or privilege as acceptable and even necessary. In economics, the Laffer curve is a theoretical representation of the relationship between government revenue raised by taxation and all possible rates of taxation. It is used to illustrate the concept of taxable income elasticity (that taxable income will change in response to changes in the rate of taxation). The curve is constructed by thought experiment. First, the amount of tax revenue raised at the extreme tax rates of 0% and 100% is considered. It is clear that a 0% tax rate raises no revenue, but the Laffer curve hypothesis is that a 100% tax rate will also generate no revenue because at such a rate there is no longer any incentive for a rational taxpayer to earn any income, thus the revenue raised will be 100% of nothing. If both a 0% rate and 100% rate of taxation generate no revenue, it follows from the extreme value theorem that there must exist at least one rate in between where tax revenue would be a maximum. The Laffer curve is typically represented as a graph which starts at 0% tax, zero revenue, rises to a maximum rate of revenue raised at an intermediate rate of taxation and then falls again to zero revenue at a 100% tax rate. One potential result of the Laffer curve is that increasing tax rates beyond a certain point will become counterproductive for raising further tax revenue. A hypothetical Laffer curve for any given economy can only be estimated and such estimates are sometimes controversial.
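The thought experiment above can be illustrated with a deliberately simple toy model. Here the taxed base is assumed to shrink linearly as the rate rises; both the initial base and the linear response are illustrative assumptions, not estimates of any real economy, and a real Laffer curve need not peak where this one does.

```python
# Toy Laffer curve: if the taxed base shrinks linearly with the rate,
# base(r) = B0 * (1 - r), then revenue(r) = r * B0 * (1 - r).
# This reproduces the endpoints of the thought experiment:
# zero revenue at both a 0% and a 100% rate, with a maximum in between.

def revenue(rate, base0=100.0):
    """Revenue raised at a given tax rate under the linear-shrinkage assumption."""
    return rate * base0 * (1 - rate)

rates = [i / 100 for i in range(101)]   # 0%, 1%, ..., 100%
best = max(rates, key=revenue)
print(revenue(0.0), revenue(1.0), best)  # 0.0 0.0 0.5
```

Under this particular assumption the revenue-maximizing rate is 50%; the actual location of the peak for a real economy is an empirical question, which is why estimated Laffer curves are controversial.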
The New Palgrave Dictionary of Economics reports that estimates of revenue-maximizing tax rates have varied widely, with a mid-range of around 70%. Most governments take revenue which exceeds that which can be provided by non-distortionary taxes or through taxes which give a double dividend. Optimal taxation theory is the branch of economics that considers how taxes can be structured to give the least deadweight costs, or to give the best outcomes in terms of social welfare. The Ramsey problem deals with minimizing deadweight costs. Because deadweight costs are related to the elasticity of supply and demand for a good, it follows that putting the highest tax rates on the goods for which there is most inelastic supply and demand will result in the least overall deadweight costs. Some economists sought to integrate optimal tax theory with the social welfare function, which is the economic expression of the idea that equality is valuable to a greater or lesser extent. If individuals experience diminishing returns from income, then the optimum distribution of income for society involves a progressive income tax. The Mirrlees optimal income tax is a detailed theoretical model of the optimum progressive income tax along these lines. In recent years, the validity of optimal taxation theory has been debated by many political economists. Taxes are most often levied as a percentage, called the "tax rate". An important distinction when talking about tax rates is that between the marginal rate and the effective tax rate. The effective rate is the total tax paid divided by the total amount the tax is paid on, while the marginal rate is the rate paid on the next dollar of income earned. For example, if income is taxed on a formula of 5% from $0 up to $50,000, 10% from $50,000 to $100,000, and 15% over $100,000, a taxpayer with income of $175,000 would pay a total of $18,750 in taxes: 5% of the first $50,000 ($2,500), plus 10% of the next $50,000 ($5,000), plus 15% of the remaining $75,000 ($11,250). This is an effective rate of about 10.7%, while the marginal rate on any additional income would be 15%.
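The marginal-versus-effective distinction can be checked with a short computation. The following Python sketch implements the hypothetical bracket schedule from the example above; the schedule is the one given in the text, while the function names are illustrative choices of ours.

```python
# Hypothetical progressive schedule from the example above:
# 5% from $0 to $50,000, 10% from $50,000 to $100,000, 15% above $100,000.
BRACKETS = [
    (0, 50_000, 0.05),
    (50_000, 100_000, 0.10),
    (100_000, float("inf"), 0.15),
]

def total_tax(income: float) -> float:
    """Tax owed: each bracket's rate applies only to the income inside it."""
    tax = 0.0
    for lower, upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

def effective_rate(income: float) -> float:
    """Total tax paid divided by total income."""
    return total_tax(income) / income

def marginal_rate(income: float) -> float:
    """Rate paid on the next dollar of income earned."""
    for lower, upper, rate in BRACKETS:
        if lower <= income < upper:
            return rate

print(total_tax(175_000))       # 18750.0, matching the example
print(effective_rate(175_000))  # about 0.107 (10.7%)
print(marginal_rate(175_000))   # 0.15
```

Note how the effective rate blends all brackets while the marginal rate depends only on the bracket the last dollar falls into; this is why a raise never reduces after-tax income under a bracketed schedule.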
https://en.wikipedia.org/wiki?curid=30297
Transhumanism Transhumanism is a philosophical movement that advocates for the transformation of the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology. Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations as well as the ethical limitations of using such technologies. The most common transhumanist thesis is that human beings may eventually be able to transform themselves into different beings with abilities so greatly expanded from the current condition as to merit the label of posthuman beings. The contemporary meaning of the term "transhumanism" was foreshadowed by one of the first professors of futurology, a man who changed his name to FM-2030. In the 1960s, he taught "new concepts of the human" at The New School, when he began to identify people who adopt technologies, lifestyles and worldviews "transitional" to posthumanity as "transhuman". This assertion would lay the intellectual groundwork for the British philosopher Max More to begin articulating the principles of transhumanism as a futurist philosophy in 1990, and organizing in California an intelligentsia that has since grown into the worldwide transhumanist movement. Influenced by seminal works of science fiction, the transhumanist vision of a transformed future humanity has attracted many supporters and detractors from a wide range of perspectives, including philosophy and religion. In 2017, Penn State University Press, in cooperation with Stefan Lorenz Sorgner and James Hughes, established the "Journal of Posthuman Studies", the first academic journal explicitly dedicated to the posthuman, with the goal of clarifying the notions of posthumanism and transhumanism, as well as comparing and contrasting the two. 
According to Nick Bostrom, transcendentalist impulses have been expressed at least as far back as the quest for immortality in the "Epic of Gilgamesh", as well as in historical quests for the Fountain of Youth, the Elixir of Life, and other efforts to stave off aging and death. In his first edition of "Political Justice" (1793), William Godwin included arguments favoring the possibility of "earthly immortality" (what would now be called physical immortality). Godwin explored the themes of life extension and immortality in his gothic novel "St. Leon", which became popular (and notorious) at the time of its publication in 1799, but is now mostly forgotten. "St. Leon" may have provided inspiration for his daughter Mary Shelley's novel "Frankenstein". There is debate about whether the philosophy of Friedrich Nietzsche can be considered an influence on transhumanism, despite its exaltation of the "Übermensch" (overman or superman), due to its emphasis on self-actualization rather than technological transformation. The transhumanist philosophies of Max More and Stefan Lorenz Sorgner have been influenced strongly by Nietzschean thinking. By way of contrast, The Transhumanist Declaration "advocates the well-being of all sentience (whether in artificial intellects, humans, posthumans, or non-human animals)". The late 19th to early 20th century movement known as Russian cosmism also incorporated some ideas which later developed into the core of the transhumanist movement, in particular through its early protagonist, the Russian philosopher N. F. Fyodorov. Fundamental ideas of transhumanism were first advanced in 1923 by the British geneticist J. B. S. Haldane in his essay "Daedalus: Science and the Future", which predicted that great benefits would come from the application of advanced sciences to human biology—and that every such advance would first appear to someone as blasphemy or perversion, "indecent and unnatural". 
In particular, he was interested in the development of the science of eugenics, ectogenesis (creating and sustaining life in an artificial environment), and the application of genetics to improve human characteristics, such as health and intelligence. His article inspired academic and popular interest. J. D. Bernal, a crystallographer at Cambridge, wrote "The World, the Flesh and the Devil" in 1929, in which he speculated on the prospects of space colonization and radical changes to human bodies and intelligence through bionic implants and cognitive enhancement. These ideas have been common transhumanist themes ever since. The biologist Julian Huxley is generally regarded as the founder of transhumanism after using the term for the title of an influential 1957 article. The term itself, however, derives from an earlier 1940 paper by the Canadian philosopher W. D. Lighthall. Huxley's 1957 definition differs, albeit not substantially, from the one commonly in use since the 1980s. The ideas raised by these thinkers were explored in the science fiction of the 1960s, notably in Arthur C. Clarke's "2001: A Space Odyssey", in which an alien artifact grants transcendent power to its wielder. Japanese Metabolist architects produced a manifesto in 1960 which outlined goals to "encourage active metabolic development of our society" through design and technology. In the Material and Man section of the manifesto, Noboru Kawazoe suggests that: "After several decades, with the rapid progress of communication technology, every one will have a 'brain wave receiver' in his ear, which conveys directly and exactly what other people think about him and vice versa. What I think will be known by all the people. There is no more individual consciousness, only the will of mankind as a whole." The concept of the technological singularity, or the ultra-rapid advent of superhuman intelligence, was first proposed by the British cryptologist I. J. 
Good in 1965. Computer scientist Marvin Minsky wrote on relationships between human and artificial intelligence beginning in the 1960s. Over the succeeding decades, this field continued to generate influential thinkers such as Hans Moravec and Raymond Kurzweil, who oscillated between the technical arena and futuristic speculations in the transhumanist vein. The coalescence of an identifiable transhumanist movement began in the last decades of the 20th century. In 1966, FM-2030 (formerly F. M. Esfandiary), a futurist who taught "new concepts of the human" at The New School, in New York City, began to identify people who adopt technologies, lifestyles and world views transitional to posthumanity as "transhuman". In 1972, Robert Ettinger, whose 1964 "The Prospect of Immortality" founded the cryonics movement, contributed to the conceptualization of "transhumanity" with his "Man into Superman". FM-2030 published the "Upwingers Manifesto" in 1973. The first self-described transhumanists met formally in the early 1980s at the University of California, Los Angeles, which became the main center of transhumanist thought. Here, FM-2030 lectured on his "Third Way" futurist ideology. At the EZTV Media venue, frequented by transhumanists and other futurists, Natasha Vita-More presented "Breaking Away", her 1980 experimental film with the theme of humans breaking away from their biological limitations and the Earth's gravity as they head into space. FM-2030 and Vita-More soon began holding gatherings for transhumanists in Los Angeles, which included students from FM-2030's courses and audiences from Vita-More's artistic productions. In 1982, Vita-More authored the "Transhumanist Arts Statement" and, six years later, produced the cable TV show "TransCentury Update" on transhumanity, a program which reached over 100,000 viewers. 
In 1986, Eric Drexler published "Engines of Creation: The Coming Era of Nanotechnology", which discussed the prospects for nanotechnology and molecular assemblers, and founded the Foresight Institute. The Southern California offices of the Alcor Life Extension Foundation, the first non-profit organization to research, advocate for, and perform cryonics, became a center for futurists. In 1988, the first issue of "Extropy Magazine" was published by Max More and Tom Morrow. In 1990, More, a strategic philosopher, created his own particular transhumanist doctrine, which took the form of the "Principles of Extropy", and laid the foundation of modern transhumanism by giving it a new definition. In 1992, More and Morrow founded the Extropy Institute, a catalyst for networking futurists and brainstorming new memeplexes by organizing a series of conferences and, more importantly, providing a mailing list, which exposed many to transhumanist views for the first time during the rise of cyberculture and the cyberdelic counterculture. In 1998, philosophers Nick Bostrom and David Pearce founded the World Transhumanist Association (WTA), an international non-governmental organization working toward the recognition of transhumanism as a legitimate subject of scientific inquiry and public policy. In 2002, the WTA modified and adopted "The Transhumanist Declaration". "The Transhumanist FAQ", prepared by the WTA (later Humanity+), gave two formal definitions for transhumanism. In possible contrast with other transhumanist organizations, WTA officials considered that social forces could undermine their futurist visions and needed to be addressed. A particular concern is the equal access to human enhancement technologies across classes and borders. In 2006, a political struggle within the transhumanist movement between the libertarian right and the liberal left resulted in a more centre-leftward positioning of the WTA under its former executive director James Hughes. 
In 2006, the board of directors of the Extropy Institute ceased operations of the organization, stating that its mission was "essentially completed". This left the World Transhumanist Association as the leading international transhumanist organization. In 2008, as part of a rebranding effort, the WTA changed its name to "Humanity+". In 2012, the transhumanist Longevity Party was initiated as an international union of people who promote the development of scientific and technological means to significant life extension, and it now has more than 30 national organisations throughout the world. The Mormon Transhumanist Association was founded in 2006. By 2012, it consisted of hundreds of members. The first transhumanist elected to a parliament was Giuseppe Vatinno, in Italy. It is a matter of debate whether transhumanism is a branch of posthumanism and how posthumanism should be conceptualised with regard to transhumanism. The latter is often referred to as a variant or activist form of posthumanism by its conservative, Christian and progressive critics. A common feature of transhumanism and philosophical posthumanism is the future vision of a new intelligent species into which humanity will evolve, and which will eventually supplement or supersede it. Transhumanism stresses the evolutionary perspective, including sometimes the creation of a highly intelligent animal species by way of cognitive enhancement (i.e. biological uplift), but clings to a "posthuman future" as the final goal of participant evolution. Nevertheless, the idea of creating intelligent artificial beings (proposed, for example, by roboticist Hans Moravec) has influenced transhumanism. Moravec's ideas and transhumanism have also been characterised as a "complacent" or "apocalyptic" variant of posthumanism and contrasted with "cultural posthumanism" in humanities and the arts. 
While such a "cultural posthumanism" would offer resources for rethinking the relationships between humans and increasingly sophisticated machines, transhumanism and similar posthumanisms are, in this view, not abandoning obsolete concepts of the "autonomous liberal subject", but are expanding its "prerogatives" into the realm of the posthuman. Transhumanist self-characterisations as a continuation of humanism and Enlightenment thinking correspond with this view. Some secular humanists conceive transhumanism as an offspring of the humanist freethought movement and argue that transhumanists differ from the humanist mainstream by having a specific focus on technological approaches to resolving human concerns (i.e. technocentrism) and on the issue of mortality. However, other progressives have argued that posthumanism, whether in its philosophical or activist forms, amounts to a shift away from concerns about social justice, from the reform of human institutions and from other Enlightenment preoccupations, toward narcissistic longings for a transcendence of the human body in quest of more exquisite ways of being. As an alternative, humanist philosopher Dwight Gilbert Jones has proposed a renewed Renaissance humanism through DNA and genome repositories, with each individual genotype (DNA) being instantiated as successive phenotypes (bodies or lives via cloning, "Church of Man", 1978). In his view, native molecular DNA "continuity" is required for retaining the "self", and no amount of computing power or memory aggregation can replace the essential "stink" of our true genetic identity, which he terms "genity". Instead, DNA/genome stewardship by an institution analogous to the Jesuits' 400-year vigil is a suggested model for enabling humanism to become our species' common credo, a project he proposed in his speculative novel "The Humanist – 1000 Summers" (2011), wherein humanity dedicates these coming centuries to harmonizing our planet and peoples. 
The philosophy of transhumanism is closely related to technoself studies, an interdisciplinary domain of scholarly research dealing with all aspects of human identity in a technological society and focusing on the changing nature of relationships between humans and technology. While many transhumanist theorists and advocates seek to apply reason, science and technology for the purposes of reducing poverty, disease, disability and malnutrition around the globe, transhumanism is distinctive in its particular focus on the applications of technologies to the improvement of human bodies at the individual level. Many transhumanists actively assess the potential for future technologies and innovative social systems to improve the quality of all life, while seeking to make the material reality of the human condition fulfill the promise of legal and political equality by eliminating congenital mental and physical barriers. Transhumanist philosophers argue that there not only exists a perfectionist ethical imperative for humans to strive for progress and improvement of the human condition, but that it is possible and desirable for humanity to enter a transhuman phase of existence in which humans enhance themselves beyond what is naturally human. In such a phase, natural evolution would be replaced with deliberate participatory or directed evolution. Some theorists such as Ray Kurzweil think that the pace of technological innovation is accelerating and that the next 50 years may yield not only radical technological advances, but possibly a technological singularity, which may fundamentally change the nature of human beings. Transhumanists who foresee this massive technological change generally maintain that it is desirable. However, some are also concerned with the possible dangers of extremely rapid technological change and propose options for ensuring that advanced technology is used responsibly. 
For example, Bostrom has written extensively on existential risks to humanity's future welfare, including ones that could be created by emerging technologies. In contrast, some proponents of transhumanism view it as essential to humanity's survival. For instance, Stephen Hawking points out that the "external transmission" phase of human evolution, where knowledge production and knowledge management is more important than transmission of information via evolution, may be the point at which human civilization becomes unstable and self-destructs, one of Hawking's explanations for the Fermi paradox. To counter this, Hawking emphasizes either self-design of the human genome or mechanical enhancement (e.g., brain-computer interface) to enhance human intelligence and reduce aggression, without which he implies human civilization may be too stupid collectively to survive an increasingly unstable system, resulting in societal collapse. While many people believe that all transhumanists are striving for immortality, this is not necessarily true. Hank Pellissier, managing director of the Institute for Ethics and Emerging Technologies (2011–2012), surveyed transhumanists. He found that, of the 818 respondents, 23.8% did not want immortality. Some of the reasons given were boredom, Earth's overpopulation and the desire "to go to an afterlife". Certain transhumanist philosophers hold that, since all assumptions about what others experience are fallible, all attempts to help or protect beings that are not capable of correcting those assumptions, no matter how well-intentioned, are in danger of actually hurting them; they therefore argue that all sentient beings deserve to be sapient. These thinkers argue that the ability to discuss in a falsification-based way constitutes a non-arbitrary threshold at which it becomes possible for an individual to speak for themselves in a way that is not dependent on exterior assumptions. 
They also argue that all beings capable of experiencing something deserve to be elevated to this threshold if they are not at it, typically stating that the underlying change that leads to the threshold is an increase in the preciseness of the brain's ability to discriminate. This includes increasing the neuron count and connectivity in animals as well as accelerating the development of connectivity in order to shorten or, ideally, skip the non-sapient childhood during which an individual is incapable of independently deciding for themselves. Transhumanists of this description stress that the genetic engineering they advocate is general insertion into both the somatic cells of living beings and germ cells, and not the purging of individuals without the modifications, deeming the latter not only unethical but also unnecessary given the possibilities of efficient genetic engineering. Transhumanists engage in interdisciplinary approaches to understand and evaluate possibilities for overcoming biological limitations by drawing on futurology and various fields of ethics. Unlike many philosophers, social critics and activists who place a moral value on preservation of natural systems, transhumanists see the very concept of the specifically natural as problematically nebulous at best and an obstacle to progress at worst. In keeping with this, many prominent transhumanist advocates, such as Dan Agin, refer to transhumanism's critics, on the political right and left jointly, as "bioconservatives" or "bioluddites", the latter term alluding to the 19th century anti-industrialisation social movement that opposed the replacement of human manual labourers by machines. A belief of counter-transhumanism is that transhumanism can cause unfair human enhancement in many areas of life, but specifically on the social plane. This can be compared to steroid use, where athletes who use steroids in sports have an advantage over those who do not. 
The same scenario happens when people have certain neural implants that give them an advantage in the work place and in educational aspects. Additionally, there are many, according to M. J. McNamee and S. D. Edwards, who fear that the improvements afforded by a specific, privileged section of society will lead to a division of the human species into two different and distinct species. The idea of two human species, one being at a great physical and economic advantage in comparison with the other, is a troublesome one at best. One may be incapable of breeding with the other, and may, by consequence of lower physical health and ability, be considered of a lower moral standing than the other. There is a variety of opinions within transhumanist thought. Many of the leading transhumanist thinkers hold views that are under constant revision and development. Some distinctive currents of transhumanism can be identified. Although many transhumanists are atheists, agnostics, and/or secular humanists, some have religious or spiritual views. Despite the prevailing secular attitude, some transhumanists pursue hopes traditionally espoused by religions, such as immortality, while several controversial new religious movements from the late 20th century have explicitly embraced transhumanist goals of transforming the human condition by applying technology to the alteration of the mind and body, such as Raëlism. However, most thinkers associated with the transhumanist movement focus on the practical goals of using technology to help achieve longer and healthier lives, while speculating that future understanding of neurotheology and the application of neurotechnology will enable humans to gain greater control of altered states of consciousness, which were commonly interpreted as spiritual experiences, and thus achieve more profound self-knowledge. 
Transhumanist Buddhists have sought to explore areas of agreement between various types of Buddhism and Buddhist-derived meditation and mind-expanding neurotechnologies. However, they have been criticised for appropriating mindfulness as a tool for transcending humanness. Some transhumanists believe in the compatibility between the human mind and computer hardware, with the theoretical implication that human consciousness may someday be transferred to alternative media (a speculative technique commonly known as mind uploading). One extreme formulation of this idea, which some transhumanists are interested in, is the proposal of the Omega Point by Christian cosmologist Frank Tipler. Drawing upon ideas in digitalism, Tipler has advanced the notion that the collapse of the Universe billions of years hence could create the conditions for the perpetuation of humanity in a simulated reality within a megacomputer and thus achieve a form of "posthuman godhood". Before Tipler, the term Omega Point was used by Pierre Teilhard de Chardin, a paleontologist and Jesuit theologian who saw an evolutionary telos in the development of an encompassing noosphere, a global consciousness. Viewed from the perspective of some Christian thinkers, the idea of mind uploading is asserted to represent a denigration of the human body, characteristic of Gnostic Manichaean belief. Transhumanism and its presumed intellectual progenitors have also been described as neo-Gnostic by non-Christian and secular commentators. The first dialogue between transhumanism and faith was a one-day conference held at the University of Toronto in 2004. Religious critics alone faulted the philosophy of transhumanism as offering no eternal truths nor a relationship with the divine. They commented that a philosophy bereft of these beliefs leaves humanity adrift in a foggy sea of postmodern cynicism and anomie. 
Transhumanists responded that such criticisms reflect a failure to look at the actual content of the transhumanist philosophy, which, far from being cynical, is rooted in optimistic, idealistic attitudes that trace back to the Enlightenment. Following this dialogue, William Sims Bainbridge, a sociologist of religion, conducted a pilot study, published in the Journal of Evolution and Technology, suggesting that religious attitudes were negatively correlated with acceptance of transhumanist ideas and indicating that individuals with highly religious worldviews tended to perceive transhumanism as being a direct, competitive (though ultimately futile) affront to their spiritual beliefs. Since 2006, the Mormon Transhumanist Association sponsors conferences and lectures on the intersection of technology and religion. The Christian Transhumanist Association was established in 2014. Since 2009, the American Academy of Religion holds a "Transhumanism and Religion" consultation during its annual meeting, where scholars in the field of religious studies seek to identify and critically evaluate any implicit religious beliefs that might underlie key transhumanist claims and assumptions; consider how transhumanism challenges religious traditions to develop their own ideas of the human future, in particular the prospect of human transformation, whether by technological or other means; and provide critical and constructive assessments of an envisioned future that place greater confidence in nanotechnology, robotics and information technology to achieve virtual immortality and create a superior posthuman species. The physicist and transhumanist thinker Giulio Prisco states that "cosmist religions based on science, might be our best protection from reckless pursuit of superintelligence and other risky technologies." Prisco also recognizes the importance of spiritual ideas, such as the ones of Nikolai Fyodorovich Fyodorov, to the origins of the transhumanism movement. 
While some transhumanists take an abstract and theoretical approach to the perceived benefits of emerging technologies, others have offered specific proposals for modifications to the human body, including heritable ones. Transhumanists are often concerned with methods of enhancing the human nervous system. Though some, such as Kevin Warwick, propose modification of the peripheral nervous system, the brain is considered the common denominator of personhood and is thus a primary focus of transhumanist ambitions. In fact, Warwick has gone considerably further than merely making a proposal. In 2002, he had a 100-electrode array surgically implanted into the median nerves of his left arm in order to link his nervous system directly with a computer, and thus also to connect with the internet. As a consequence, he carried out a series of experiments. He was able to directly control a robot hand using his neural signals and to feel the force applied by the hand through feedback from the fingertips. He also experienced a form of ultrasonic sensory input and conducted the first purely electronic communication between his own nervous system and that of his wife, who also had electrodes implanted. As proponents of self-improvement and body modification, transhumanists tend to use existing technologies and techniques that supposedly improve cognitive and physical performance, while engaging in routines and lifestyles designed to improve health and longevity. Depending on their age, some transhumanists express concern that they will not live to reap the benefits of future technologies. However, many have a great interest in life extension strategies and in funding research in cryonics in order to make the latter a viable option of last resort, rather than remaining an unproven method. Regional and global transhumanist networks and communities with a range of objectives exist to provide support and forums for discussion and collaborative projects. 
While most transhumanist theory focuses on future technologies and the changes they may bring, many today are already involved in the practice on a very basic level. It is not uncommon for people to receive cosmetic changes to their physical form via cosmetic surgery, even when it is not required for health reasons. Human growth hormone treatments attempt to alter the natural development of shorter children or those who have been born with a physical deficiency. Doctors prescribe medicines such as Ritalin and Adderall to improve cognitive focus, and many people take "lifestyle" drugs such as Viagra, Propecia, and Botox to restore aspects of youthfulness that have been lost in maturity. Other transhumanists, such as cyborg artist Neil Harbisson, use technologies and techniques to improve their senses and perception of reality. Harbisson's antenna, which is permanently implanted in his skull, allows him to sense colours beyond human perception such as infrareds and ultraviolets. Transhumanists support the emergence and convergence of technologies including nanotechnology, biotechnology, information technology and cognitive science (NBIC), as well as hypothetical future technologies like simulated reality, artificial intelligence, superintelligence, 3D bioprinting, mind uploading, chemical brain preservation and cryonics. They believe that humans can and should use these technologies to become more than human. Therefore, they support the recognition and/or protection of cognitive liberty, morphological freedom and procreative liberty as civil liberties, so as to guarantee individuals the choice of using human enhancement technologies on themselves and their children. Some speculate that human enhancement techniques and other emerging technologies may facilitate more radical human enhancement no later than the midpoint of the 21st century. 
Kurzweil's book "The Singularity Is Near" and Michio Kaku's book "Physics of the Future" outline various human enhancement technologies and give insight on how these technologies may impact the human race. Some reports on the converging technologies and NBIC concepts have criticised their transhumanist orientation and alleged science-fictional character. At the same time, research on brain and body alteration technologies has been accelerated under the sponsorship of the U.S. Department of Defense, which is interested in the battlefield advantages they would provide to the supersoldiers of the United States and its allies. There has already been a brain research program to "extend the ability to manage information", while military scientists are now looking at stretching the human capacity for combat to a maximum of 168 hours without sleep. Neuroscientist Anders Sandberg has been developing a method of scanning ultra-thin sections of the brain, which is being used to help better understand its architecture. The method is currently being used on mice. This is the first step towards hypothetically uploading the contents of the human brain, including memories and emotions, onto a computer. The very notion and prospect of human enhancement and related issues arouse public controversy. Criticisms of transhumanism and its proposals take two main forms: those objecting to the likelihood of transhumanist goals being achieved (practical criticisms) and those objecting to the moral principles or worldview sustaining transhumanist proposals or underlying transhumanism itself (ethical criticisms). Critics and opponents often see transhumanists' goals as posing threats to human values. Some of the most widely known critiques of the transhumanist program are novels and fictional films. These works of art, despite presenting imagined worlds rather than philosophical analyses, are used as touchstones for some of the more formal arguments. 
Various arguments have been made to the effect that a society that adopts human enhancement technologies may come to resemble the dystopia depicted in the 1932 novel "Brave New World" by Aldous Huxley. On another front, some authors consider that humanity is already transhuman, because medical advances in recent centuries have significantly altered our species, though not in a conscious and therefore transhumanist way. From such a perspective, transhumanism is perpetually aspirational: as new technologies become mainstream, the goal shifts to technologies not yet adopted. In a 1992 book, sociologist Max Dublin pointed to many past failed predictions of technological progress and argued that modern futurist predictions would prove similarly inaccurate. He also objected to what he saw as scientism, fanaticism and nihilism by a few in advancing transhumanist causes. Dublin also said that historical parallels existed between millenarian religions and Communist doctrines. Although generally sympathetic to transhumanism, public health professor Gregory Stock is skeptical of the technical feasibility and mass appeal of the cyborgization of humanity predicted by Raymond Kurzweil, Hans Moravec and Kevin Warwick. He said that, throughout the 21st century, many humans would find themselves deeply integrated into systems of machines, but would remain biological. Primary changes to their own form and character would arise not from cyberware, but from the direct manipulation of their genetics, metabolism and biochemistry. In her 1992 book "Science as Salvation", philosopher Mary Midgley traces the notion of achieving immortality by transcendence of the material human body (echoed in the transhumanist tenet of mind uploading) to a group of male scientific thinkers of the early 20th century, including J. B. S. Haldane and members of his circle. 
She characterizes these ideas as "quasi-scientific dreams and prophesies" involving visions of escape from the body coupled with "self-indulgent, uncontrolled power-fantasies". Her argument focuses on what she perceives as the pseudoscientific speculations and irrational, fear-of-death-driven fantasies of these thinkers, their disregard for laymen and the remoteness of their eschatological visions. Another critique is aimed mainly at "algeny" (a portmanteau of "alchemy" and "genetics"), which Jeremy Rifkin defined as "the upgrading of existing organisms and the design of wholly new ones with the intent of 'perfecting' their performance". It emphasizes the issue of biocomplexity and the unpredictability of attempts to guide the development of products of biological evolution. This argument, elaborated in particular by the biologist Stuart Newman, is based on the recognition that cloning and germline genetic engineering of animals are error-prone and inherently disruptive of embryonic development. Accordingly, so it is argued, it would create unacceptable risks to use such methods on human embryos. Performing experiments, particularly ones with permanent biological consequences, on developing humans would thus be in violation of accepted principles governing research on human subjects (see the 1964 Declaration of Helsinki). Moreover, because improvements in experimental outcomes in one species are not automatically transferable to a new species without further experimentation, it is claimed that there is no ethical route to genetic manipulation of humans at early developmental stages. As a practical matter, however, international protocols on human subject research may not present a legal obstacle to attempts by transhumanists and others to improve their offspring by germinal choice technology. 
According to legal scholar Kirsten Rabe Smolensky, existing laws would protect parents who choose to enhance their child's genome from future liability arising from adverse outcomes of the procedure. Transhumanists and other supporters of human genetic engineering do not dismiss practical concerns out of hand, insofar as there is a high degree of uncertainty about the timelines and likely outcomes of genetic modification experiments in humans. However, bioethicist James Hughes suggests that one possible ethical route to the genetic manipulation of humans at early developmental stages is the building of computer models of the human genome, the proteins it specifies and the tissue engineering that, he argues, it also codes for. With the exponential progress in bioinformatics, Hughes believes that a virtual model of genetic expression in the human body will not be far behind and that it will soon be possible to accelerate approval of genetic modifications by simulating their effects on virtual humans. Public health professor Gregory Stock points to artificial chromosomes as an alleged safer alternative to existing genetic engineering techniques. Thinkers who defend the likelihood of accelerating change point to a past pattern of exponential increases in humanity's technological capacities. Kurzweil developed this position in his 2005 book "The Singularity Is Near". It has been argued that, in transhumanist thought, humans attempt to substitute themselves for God. The 2002 Vatican statement "Communion and Stewardship: Human Persons Created in the Image of God" declared that "changing the genetic identity of man as a human person through the production of an infrahuman being is radically immoral", implying that "man has full right of disposal over his own biological nature". 
The statement also argues that creation of a superhuman or spiritually superior being is "unthinkable", since true improvement can come only through religious experience and "realizing more fully the image of God". Christian theologians and lay activists of several churches and denominations have expressed similar objections to transhumanism and claimed that Christians attain in the afterlife what radical transhumanism promises, such as indefinite life extension or the abolition of suffering. In this view, transhumanism is just another representative of the long line of utopian movements which seek to create "heaven on earth". On the other hand, religious thinkers allied with transhumanist goals such as the theologians Ronald Cole-Turner and Ted Peters hold that the doctrine of "co-creation" provides an obligation to use genetic engineering to improve human biology. Other critics target what they claim to be an instrumental conception of the human body in the writings of Marvin Minsky, Hans Moravec and some other transhumanists. Reflecting a strain of feminist criticism of the transhumanist program, philosopher Susan Bordo points to "contemporary obsessions with slenderness, youth and physical perfection", which she sees as affecting both men and women, but in distinct ways, as "the logical (if extreme) manifestations of anxieties and fantasies fostered by our culture." Some critics question other social implications of the movement's focus on body modification. Political scientist Klaus-Gerd Giesen, in particular, has asserted that transhumanism's concentration on altering the human body represents the logical yet tragic consequence of atomized individualism and body commodification within a consumer culture. Nick Bostrom responds that the desire to regain youth, specifically, and transcend the natural limitations of the human body, in general, is pan-cultural and pan-historical, and is therefore not uniquely tied to the culture of the 20th century. 
He argues that the transhumanist program is an attempt to channel that desire into a scientific project on par with the Human Genome Project and achieve humanity's oldest hope, rather than a puerile fantasy or social trend. In his 2003 book "Enough: Staying Human in an Engineered Age", environmental ethicist Bill McKibben argued at length against many of the technologies that are postulated or supported by transhumanists, including germinal choice technology, nanomedicine and life extension strategies. He claims that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to "improve" themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome technologically. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using as examples Ming China, Tokugawa Japan and the contemporary Amish. Biopolitical activist Jeremy Rifkin and biologist Stuart Newman accept that biotechnology has the power to make profound changes in organismal identity. They argue against the genetic engineering of human beings because they fear the blurring of the boundary between human and artifact. Philosopher Keekok Lee sees such developments as part of an accelerating trend in modernization in which technology has been used to transform the "natural" into the "artefactual". 
In the extreme, this could lead to the manufacturing and enslavement of "monsters" such as human clones, human-animal chimeras, or bioroids, but even lesser dislocations of humans and non-humans from social and ecological systems are seen as problematic. The film "Blade Runner" (1982) and the novels "The Boys From Brazil" (1976) and "The Island of Doctor Moreau" (1896) depict elements of such scenarios, but Mary Shelley's 1818 novel "Frankenstein" is most often alluded to by critics who suggest that biotechnologies could create objectified and socially unmoored people as well as subhumans. Such critics propose that strict measures be implemented to prevent what they portray as dehumanizing possibilities from ever happening, usually in the form of an international ban on human genetic engineering. Science journalist Ronald Bailey claims that McKibben's historical examples are flawed and support different conclusions when studied more closely. For example, few groups are more cautious than the Amish about embracing new technologies, but, though they shun television and use horses and buggies, some welcome the possibilities of gene therapy, since inbreeding has afflicted them with a number of rare genetic diseases. Bailey and other supporters of technological alteration of human biology also reject as extremely subjective the claim that life would be experienced as meaningless if some human limitations were overcome with enhancement technologies. Writing in "Reason" magazine, Bailey has accused opponents of research involving the modification of animals of indulging in alarmism when they speculate about the creation of subhuman creatures with human-like intelligence and brains resembling those of "Homo sapiens". Bailey insists that the aim of conducting research on animals is simply to produce human health care benefits. 
A different response comes from transhumanist personhood theorists who object to what they characterize as the anthropomorphobia fueling some criticisms of this research, which science fiction writer Isaac Asimov termed the "Frankenstein complex". For example, Woody Evans argues that, provided they are self-aware, human clones, human-animal chimeras and uplifted animals would all be unique persons deserving of respect, dignity, rights, responsibilities, and citizenship. They conclude that the coming ethical issue is not the creation of so-called monsters, but what they characterize as the "yuck factor" and "human-racism", that would judge and treat these creations as monstrous. At least one public interest organization, the U.S.-based Center for Genetics and Society, was formed, in 2001, with the specific goal of opposing transhumanist agendas that involve transgenerational modification of human biology, such as full-term human cloning and germinal choice technology. The Institute on Biotechnology and the Human Future of the Chicago-Kent College of Law critically scrutinizes proposed applications of genetic and nanotechnologies to human biology in an academic setting. Some critics of libertarian transhumanism have focused on the likely socioeconomic consequences in societies in which divisions between rich and poor are on the rise. Bill McKibben, for example, suggests that emerging human enhancement technologies would be disproportionately available to those with greater financial resources, thereby exacerbating the gap between rich and poor and creating a "genetic divide". Even Lee M. Silver, the biologist and science writer who coined the term "reprogenetics" and supports its applications, has expressed concern that these methods could create a two-tiered society of genetically engineered "haves" and "have nots" if social democratic reforms lag behind implementation of enhancement technologies. 
The 1997 film "Gattaca", which depicts a dystopian society in which one's social class depends entirely on genetic potential, is often cited by critics in support of these views. These criticisms are also voiced by non-libertarian transhumanist advocates, especially self-described democratic transhumanists, who believe that the majority of current or future social and environmental issues (such as unemployment and resource depletion) need to be addressed by a combination of political and technological solutions (like a guaranteed minimum income and alternative technology). Therefore, on the specific issue of an emerging genetic divide due to unequal access to human enhancement technologies, bioethicist James Hughes, in his 2004 book "Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future", argues that progressives or, more precisely, techno-progressives must articulate and implement public policies (e.g., a universal health care voucher system that covers human enhancement technologies) in order to attenuate this problem as much as possible, rather than trying to ban human enhancement technologies. The latter, he argues, might actually worsen the problem by making these technologies unsafe or available only to the wealthy on the local black market or in countries where such a ban is not enforced. Sometimes, as in the writings of Leon Kass, the fear is that various institutions and practices judged as fundamental to civilized society would be damaged or destroyed. In his 2002 book "Our Posthuman Future" and in a 2004 "Foreign Policy" magazine article, political economist and philosopher Francis Fukuyama designates transhumanism as the world's most dangerous idea because he believes that it may undermine the egalitarian ideals of democracy (in general) and liberal democracy (in particular) through a fundamental alteration of "human nature". 
Social philosopher Jürgen Habermas makes a similar argument in his 2003 book "The Future of Human Nature", in which he asserts that moral autonomy depends on not being subject to another's unilaterally imposed specifications. Habermas thus suggests that the human "species ethic" would be undermined by embryo-stage genetic alteration. Critics such as Kass, Fukuyama and a variety of authors hold that attempts to significantly alter human biology are not only inherently immoral, but also threaten the social order. Alternatively, they argue that implementation of such technologies would likely lead to the "naturalizing" of social hierarchies or place new means of control in the hands of totalitarian regimes. AI pioneer Joseph Weizenbaum criticizes what he sees as misanthropic tendencies in the language and ideas of some of his colleagues, in particular Marvin Minsky and Hans Moravec, which, by devaluing the human organism per se, promote a discourse that enables divisive and undemocratic social policies. In a 2004 article in the libertarian monthly "Reason", science journalist Ronald Bailey contested the assertions of Fukuyama by arguing that political equality has never rested on the facts of human biology. He asserts that liberalism was founded not on the proposition of effective equality of human beings, or "de facto" equality, but on the assertion of an equality in political rights and before the law, or "de jure" equality. Bailey asserts that the products of genetic engineering may well ameliorate rather than exacerbate human inequality, giving to the many what were once the privileges of the few. Moreover, he argues, "the crowning achievement of the Enlightenment is the principle of tolerance". In fact, he says, political liberalism is already the solution to the issue of human and posthuman rights since in liberal societies the law is meant to apply equally to all, no matter how rich or poor, powerful or powerless, educated or ignorant, enhanced or unenhanced. 
Other thinkers who are sympathetic to transhumanist ideas, such as philosopher Russell Blackford, have also objected to the appeal to tradition and what they see as alarmism involved in "Brave New World"-type arguments. In addition to its socio-economic risks, transhumanism has possible consequences for cultural aesthetics. Currently, there are a number of ways in which people choose to represent themselves in society. Dress, hairstyle, and body alteration all serve to shape how a person presents themselves to, and is perceived by, society. According to Foucault, society already governs and controls bodies by making them feel watched. This "surveillance" of society dictates how the majority of individuals choose to express themselves aesthetically. One of the risks outlined in a 2004 article by Jerold Abrams is the elimination of differences in favor of universality. This, he argues, will eliminate the ability of individuals to subvert the possibly oppressive, dominant structure of society by way of uniquely expressing themselves externally. Such control over a population would carry dangerous implications of tyranny. Yet another consequence of enhancing the human form not only cognitively but physically would be the reinforcement of "desirable" traits perpetuated by the dominant social structure. Physical traits seen as "ugly" or "undesirable", and thus deemed lesser, would be summarily cut out by those who can afford to do so, while those who cannot would be forced into a relative caste of undesirable people. Even if these physical "improvements" were made completely universal, they would eliminate what makes each individual uniquely human in their own way. 
Some critics of transhumanism see the old eugenics, social Darwinist, and master race ideologies and programs of the past as warnings of what the promotion of eugenic enhancement technologies might unintentionally encourage. Some fear future "eugenics wars" as the worst-case scenario: the return of coercive state-sponsored genetic discrimination and human rights violations such as compulsory sterilization of persons with genetic defects, the killing of the institutionalized and, specifically, segregation and genocide of "races" perceived as inferior. Health law professor George Annas and technology law professor Lori Andrews are prominent advocates of the position that the use of these technologies could lead to such human-posthuman caste warfare. The major transhumanist organizations strongly condemn the coercion involved in such policies and reject the racist and classist assumptions on which they were based, along with the pseudoscientific notions that eugenic improvements could be accomplished in a practically meaningful time frame through selective human breeding. Instead, most transhumanist thinkers advocate a "new eugenics", a form of egalitarian liberal eugenics. In their 2000 book "From Chance to Choice: Genetics and Justice", non-transhumanist bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler have argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements. 
Most transhumanists holding similar views nonetheless distance themselves from the term "eugenics" (preferring "germinal choice" or "reprogenetics") to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements. In his 2003 book "Our Final Hour", British Astronomer Royal Martin Rees argues that advanced science and technology bring as much risk of disaster as opportunity for progress. However, Rees does not advocate a halt to scientific activity. Instead, he calls for tighter security and perhaps an end to traditional scientific openness. Advocates of the precautionary principle, such as many in the environmental movement, also favor slow, careful progress or a halt in potentially dangerous areas. Some precautionists believe that artificial intelligence and robotics present possibilities of alternative forms of cognition that may threaten human life. Transhumanists do not necessarily rule out specific restrictions on emerging technologies so as to lessen the prospect of existential risk. Generally, however, they counter that proposals based on the precautionary principle are often unrealistic and sometimes even counter-productive as opposed to the technogaian current of transhumanism, which they claim is both realistic and productive. In his television series "Connections", science historian James Burke dissects several views on technological change, including precautionism and the restriction of open inquiry. Burke questions the practicality of some of these views, but concludes that maintaining the "status quo" of inquiry and development poses hazards of its own, such as a disorienting rate of change and the depletion of our planet's resources. The common transhumanist position is a pragmatic one where society takes deliberate action to ensure the early arrival of the benefits of safe, clean, alternative technology, rather than fostering what it considers to be anti-scientific views and technophobia. 
Nick Bostrom argues that even barring the occurrence of a singular global catastrophic event, basic Malthusian and evolutionary forces facilitated by technological progress threaten to eliminate the positive aspects of human society. One transhumanist solution proposed by Bostrom to counter existential risks is control of differential technological development, a series of attempts to influence the sequence in which technologies are developed. In this approach, planners would strive to retard the development of possibly harmful technologies and their applications, while accelerating the development of likely beneficial technologies, especially those that offer protection against the harmful effects of others.
https://en.wikipedia.org/wiki?curid=30299
TARDIS TARDIS ("Time And Relative Dimension In Space") is a fictional time machine and spacecraft that appears in the British science fiction television series "Doctor Who" and its various spin-offs. The TV show "Doctor Who" mainly features a single TARDIS used by the central character the Doctor. However, in the series other TARDISes are sometimes seen or used. The Doctor's TARDIS has a number of peculiar features, notably due to its age and personality. While other TARDISes have the ability to change their appearance in order to blend in with their surroundings, the Doctor's always resembles a police box because its chameleon circuit is broken. However, in the new series (since 2005), a perception filter is used to make the TARDIS blend in with the surroundings, so that it is often ignored by passersby. While the exterior is of limited size, the TARDIS is much bigger on the inside, containing an infinite number of rooms, corridors and storage spaces including several squash courts, a pool, a library and much more. "Doctor Who" has become so much a part of British popular culture that the shape of the police box has become associated with the TARDIS rather than with its real-world inspiration. The name TARDIS is a registered trademark of the British Broadcasting Corporation (BBC). The police box design has also been registered as a trademark by the BBC, despite the design having been created by the Metropolitan Police. The word TARDIS is listed in the "Oxford English Dictionary". When "Doctor Who" was being developed in 1963 the production staff discussed what the Doctor's time machine would look like. To keep the design within budget it was decided to make it resemble a police telephone box. This was explained in the context of the series as a disguise created by the ship's "chameleon circuit", a mechanism that changes the outside appearance of the ship the millisecond it lands in order to fit in with its environment. 
The First Doctor explains that if it were to land in the middle of the Indian Mutiny, it might take on the appearance of a howdah (the carrier on the back of an elephant). Within the context of the series the Doctor's TARDIS has a faulty chameleon circuit that keeps it permanently stuck in the police box form. Despite being shown several times trying to repair it, the Doctor claims to have given up the attempt as he has grown accustomed to its appearance. The idea for the police-box disguise came from a BBC staff writer, Anthony Coburn, who rewrote the programme's first episode from a draft by C. E. Webber. In the first episode, "An Unearthly Child" (1963), the TARDIS is first seen in a junkyard in 1963. It subsequently malfunctions, retaining the police box shape in a prehistoric landscape. The first police box prop to be built for the programme was designed by Peter Brachacki, who worked as designer on the first episode. Nevertheless, one story has it that the box came from "Z-Cars", while "Doctor Who" producer Steven Moffat has said that the original TARDIS prop was reused from "Dixon of Dock Green", although this is explicitly contradicted by the research cited on the BBC's own website. Despite changes in the prop, the TARDIS has become the show's most consistently recognisable visual element. The dimensions and colour of the TARDIS props used in the series have changed many times, as a result of damage and the requirements of the show, and none of the BBC props has been a faithful replica of the original MacKenzie Trench model. This was referenced on-screen in the episode "Blink" (2007), when the character Detective Inspector Shipton says the TARDIS "isn't a real [police box]. The phone's just a dummy, and the windows are the wrong size." The production team conceived of the TARDIS travelling by dematerialising at one point and rematerialising elsewhere, although sometimes in the series it is shown also to be capable of conventional space travel. 
In the 2006 Christmas special, "The Runaway Bride", the Doctor remarks that for a spaceship, the TARDIS does remarkably little flying. The ability to travel simply by fading into and out of different locations became one of the trademarks of the show, allowing for a great deal of versatility in setting and storytelling without a large expense in special effects. The distinctive accompanying sound effect – a cyclic wheezing, groaning noise – was originally created in the BBC Radiophonic Workshop by Brian Hodgson. When employed in the series, the sound is usually synchronised with the flashing light on top of the police box, or the fade-in and fade-out effects of a TARDIS (see "Controls" below). Writer Patrick Ness has described the ship's distinctive dematerialisation noise as "a kind of haunted grinding sound", while the "Doctor Who Magazine" comic strips traditionally use the onomatopoeic phrase "vworp vworp vworp". In 1996 the BBC applied to the UK Intellectual Property Office to register the TARDIS as a trademark. This was challenged by the Metropolitan Police, who felt that they owned the rights to the police box image. However, the Patent Office found that there was no evidence that the Metropolitan Police – or any other police force – had ever registered the image as a trademark. In addition, the BBC had been selling merchandise based on the image for over three decades without complaint by the police. The Patent Office issued a ruling in favour of the BBC in 2002. TARDISes are grown, as stated by the Tenth Doctor in "The Impossible Planet" (2006), and new TARDISes cannot be grown to replace a missing TARDIS unless the Doctor is on his home planet, Gallifrey. They draw their power from several sources, but primarily from the Eye of Harmony, said to be the nucleus of a black hole created by the early Time Lords: a singularity. 
In "The Edge of Destruction" (1964), the power source of the TARDIS (referred to as the "heart of the TARDIS") is said to be beneath the central column of the console. They are also said to draw power from the entire universe as revealed in the episode "Rise of the Cybermen" (2006), in which the TARDIS is brought to a parallel universe and cannot function without the use of a crystal power source from within the TARDIS, charged by the Doctor's life force. Other elements needed for the proper functioning of the TARDIS and requiring occasional replenishment include mercury (used in its fluid links), the rare ore Zeiton 7 ("Vengeance on Varos", 1985), a trachoid time crystal ("The Hand of Fear", 1976) and "artron energy". Artron energy is said to be the "residue of TARDIS engines", and is also found in Time Lord brains and bodies as well as other species of time traveller ("The Deadly Assassin", 1976; "Four to Doomsday", 1982; "The Wedding of Sarah Jane Smith", 2009; "Death of the Doctor", 2010; "The Doctor's Wife", 2011; "The Power of Three", 2012; "Rosa", 2018). Another form of energy, "huon energy", is found in the heart of the TARDIS and (apart from the activities of the Torchwood Institute) nowhere else in the universe ("The Runaway Bride", 2006). Before a TARDIS becomes fully functional, it must be primed with the biological imprint of a Time Lord, normally done by simply having a Time Lord operate the TARDIS for the first time. This imprint comes from the Rassilon Imprimatur, part of the biological make-up of Time Lords, which gives them both a symbiotic link to their TARDISes and the ability to withstand the physical stresses of time travel ("The Two Doctors", 1985). Without the Imprimatur, molecular disintegration would result; this serves as a safeguard against misuse of time travel even if the TARDIS technology were copied. 
Once a time machine is properly primed, however, with the imprint stored on a device called a "briode nebuliser", it can be used safely by any species. According to Time Lord law, unauthorised use of a TARDIS carries "only one penalty", implied to be death ("The Invasion of Time", 1978). A TARDIS usually travels by dematerialising in one spot, traversing the time vortex, and then rematerialising at its destination, without physically travelling through the intervening space. However, the Doctor's TARDIS has been seen to be able to fly through physical space, first in "Fury from the Deep" (1968) and at repeated times throughout the revived series, most notably in "The Runaway Bride" (2006), in which the TARDIS is shown launching into space (most previous incidents show the TARDIS flying only after it has dematerialised from a location). In "The Runaway Bride", extended flight of this nature puts a strain on the TARDIS's systems. While a TARDIS can materialise inside another, if both TARDISes occupy exactly the same space and time, a Time Ram will occur, resulting in their mutual annihilation ("The Time Monster", 1972). In "Logopolis" (1981), the Master tricked the Doctor into materialising his TARDIS around the Master's, creating a dimensionally recursive loop, each TARDIS appearing inside the other's console room. In the mini-episodes "Space" and "Time" (2011), an accident results in the TARDIS automatically materialising in "the safest spot available", which turns out to be inside its own control room. Apart from the ability to travel in space and time (and on occasion, to other dimensions), the most remarkable characteristic of a TARDIS is that its interior is much larger than it appears from the outside. The explanation is that a TARDIS is "dimensionally transcendental", meaning that its exterior and interior exist in separate dimensions. 
In "The Robots of Death" (1977), the Fourth Doctor tried to explain this to his companion Leela, using the analogy of how a larger cube can appear to fit inside a smaller one if the larger cube is farther away, yet remain immediately accessible at the same time (see Tesseract). According to the Doctor, transdimensional engineering was "a key Time Lord discovery". To those unfamiliar with this aspect of a TARDIS, stepping inside the ship for the first time usually results in a reaction of shocked disbelief as they see the interior dimensions ("It's bigger on the inside!"). The Eleventh Doctor is particularly fond of this reaction, and is surprised and confused when Clara Oswald (in "The Snowmen", 2012) inverts the usual response by saying "It's smaller on the outside." In "An Unearthly Child" (1963), Susan Foreman, the Doctor's granddaughter, claimed to have coined the acronym TARDIS, saying that she "made [it] up from the initials", while the Twelfth Doctor claims in "The Zygon Inversion" (2015) that he came up with the term from the initials, giving an entirely different set of words for "TARDIS". The word TARDIS is used to describe other Time Lords' travel capsules as well. "The Discontinuity Guide", written by Paul Cornell, Keith Topping, and Martin Day, suggests that "[she] was a precocious young Time Lady, and her name for travel capsules caught on." The Virgin New Adventures novel "Lungbarrow" by Marc Platt records Susan telling the First Doctor that she gave him the idea when he was, implicitly, the "Other". As seen in "The Trial of a Time Lord" (1986), the experiences of the TARDIS and its crew can be recorded and played back from the Matrix, the Time Lord computer network that is the repository of all their knowledge, as well as the memories and experiences of deceased Time Lords. The Doctor implies in this serial, with his protestations of being "bugged", that the TARDIS is not normally connected to the Matrix in this manner. 
The TARDIS has been shown to be extremely rugged, withstanding gunfire (the 1996 television movie "Doctor Who"; "The Runaway Bride"), temperatures of 3000 degrees without even scorching ("42"), atmospheric re-entry ("Voyage of the Damned"), falls of several miles ("The Satan Pit") and sinking into pooling acid ("The Almost People"). In "The Curse of Peladon" (1972), after the TARDIS falls down the side of a cliff, the Third Doctor remarks that it "may have its faults, but it is indestructible." This does not apply when facing certain extremely advanced weaponry, often created after the Doctor's Type 40 TARDIS, such as Dalek missiles ("The Parting of the Ways"), against which the TARDIS requires additional shielding. Another piece of advanced Dalek technology which comes near to destroying the TARDIS is the power source of the "Crucible" in "Journey's End" (2008). In "Frontios" (1984), the Fifth Doctor believes the TARDIS to have been destroyed in a meteorite bombardment, apparently contradicting the earlier claim of indestructibility. It explodes in "The Mind Robber" (1968), and the crew end up "out of the time space dimension. Out of reality." In 2007's Christmas special "Voyage of the Damned", the TARDIS is hit in mid-flight, creating a large hole in the interior wall, although its shields are down at the time. The Doctor later activates some controls and the TARDIS again becomes able to withstand an atmospheric re-entry. In the 2013 episode "The Name of the Doctor", the TARDIS is shown to be able to withstand immense speeds, pressure and heat when pulled into Trenzalore's atmosphere without any functioning systems; the only noticeable damage is a small crack in the glass on its exterior. In "Robot of Sherwood" (2014), Robin Hood's wooden arrow easily pierces the TARDIS's wooden frame. However, once the Twelfth Doctor removes it, the hole immediately seals itself. 
In "Flatline" (2014), the TARDIS demonstrated a 'siege mode' after being drained of power, miniaturised and hit by a train, when it reverted to a small metal cube with Gallifreyan markings. In the episodes "The Magician's Apprentice" and "The Witch's Familiar" (2015), it was apparently destroyed by a Dalek laser, but shown to have deliberately vaporised itself as a protective measure – the Doctor was then able to cause it to reassemble itself, undamaged, in the same spot. In the programme, the Doctor's TARDIS is an obsolete "Type 40 TT capsule" that he unofficially "borrowed" from the repair shop when he departed his home planet of Gallifrey. That incident is referred to as early as 1969, but was not actually shown on screen for another 44 years. In a flashback in "The Name of the Doctor" (2013), future companion Clara Oswald advises him on which TARDIS to take: although the Type 40 TT's navigational systems were malfunctioning, she claimed that it would be "much more fun." The TARDIS was already old when the Doctor first took it, but its actual age is not specified. In the unfinished TV serial "Shada", fellow Time Lord Professor Chronotis says that the Type 40 models came out when he was a boy. There were originally 305 registered Type 40s, but all the others had been decommissioned and replaced by new, improved models; however, the Doctor's TARDIS had at some point been removed from the registry by the Celestial Intervention Agency on Gallifrey. By the time of "The Pirate Planet" (1978), the Doctor had been travelling on board in time and space for 523 years; by the time of "The Doctor's Wife" (2011), he had been travelling in it for 700 years; and by "Journey to the Centre of the TARDIS" (2013) he had been travelling for 900 years. The appearance of the primary console room has changed over the years, sometimes in minor ways and sometimes drastically when the set has had to be moved or rebuilt. 
This has often been rationalised in the scripts as redecoration, the ship's own ability to reconfigure or repair itself, or even a change of "desktop theme". In "The Doctor's Wife", the TARDIS says she has thirty desktops archived, although the Doctor has only changed it a dozen times "yet". The undisguised appearance of a Type 40 TARDIS' exterior is a silver-grey cylinder only slightly larger than a police box. Its door is recessed and slides to open. This default state appeared in 2013's "The Name of the Doctor", which depicts the Doctor's original theft of the TARDIS. Although a TARDIS is supposed to blend inconspicuously into whatever environment it turns up in, the Doctor's TARDIS retains the shape of a police box because of a fault that occurred in the first "Doctor Who" serial, "An Unearthly Child" (1963). The ability to alter its appearance was first mentioned in the second episode of the series, where the First Doctor and Susan noted the unit was malfunctioning. ("It's still a police box! Why hasn't it changed?") The mechanism was first given a general name, the "camouflage unit", in "The Time Meddler" (1965). The name "chameleon circuit" was first used in the 1975 Target Books novelisation of "The Terror of the Autons", and eventually mentioned on screen in "Logopolis" (1981). The circuit was called a "cloaking device" by the Eighth Doctor in the television movie "Doctor Who" (1996), and again a "chameleon circuit" in the 2005 series episode "Boom Town". The Doctor attempts to repair the circuit in "Logopolis", with disastrous results. He tries again in "Attack of the Cybermen" (1985), but the successful transformations of the TARDIS into the shape of a pipe organ, a painted Welsh dresser, and an elaborate gateway ended with a return to the police box shape. The circuit was also repaired during the Virgin New Adventures novels, but again the TARDIS eventually reverted to its police box shape. 
In the 1996 television movie, and later in the episode "Boom Town", the Doctor implies that he had stopped trying to fix the circuit quite some time ago because he had become rather fond of the police box shape. Despite the anachronistic police box shape, the TARDIS' presence is rarely questioned when it materialises in the present-day United Kingdom. In "Boom Town", the Ninth Doctor simply notes that humans do not notice odd things like the TARDIS, echoing a similar sentiment expressed by the Seventh Doctor in "Remembrance of the Daleks" (1988), that humans have an "amazing capacity for self-deception." Various episodes, notably "The Sound of Drums" (2007), also note that the TARDIS generates a perception filter to reinforce the idea that it is perfectly ordinary. Cosmetically, the police box exterior of the TARDIS has remained virtually unchanged, although there have been slight modifications over the years. For example, the sign on the door concealing the police telephone has been black letters on a white background ("An Unearthly Child"), white on blue ("The Seeds of Death", 1969) and white on black ("The Curse of Peladon", 1972). Other modifications include different wordings on the phone panel; for example, "Urgent Calls" ("An Unearthly Child") as opposed to "All Calls" ("Castrovalva" publicity photos). The "POLICE BOX" sign was wider from Season 18 (1980) onwards and for the 2005 series, but not for the television movie. From "An Unearthly Child" to "The War Machines" (1966), the TARDIS also had a St. John Ambulance badge on the main doors, as did real police boxes; this has been reinstated and the window frame colour has returned to white for Matt Smith's first season as the Doctor, shown in 2010. These various versions are depicted when thirteen incarnations of the Doctor all converge on Gallifrey at the climax of "The Day of the Doctor" (2013). The telephone cupboard can be opened and the telephone accessed from the exterior. 
Although originally non-functional, as shown in "The Empty Child" (2005), the TARDIS can be called from across space and time. When the TARDIS "died" with the Doctor in battle in an alternative timeline, it became his tomb on the grave fields of the planet Trenzalore. Although the tomb retains its police box exterior appearance, its interior volume begins to "leak", growing the exterior to hundreds of feet in height. For most of the series's run, the exterior doors of the police box operated separately from the heavier interior doors, although sometimes the two sets could open simultaneously to allow the ship's passengers to look directly outside and vice versa. The revived series' TARDIS features no such secondary doors; the police box doors open directly into the console room. The Doctor almost always opens the doors inwards, despite the fact that a real police box door opened outwards; in "The Doctor's Wife" (2011), it is revealed that the TARDIS is aware of this and finds it annoying. After crash-landing on its back in Amelia Pond's garden in "The Eleventh Hour" (2010), the doors uncharacteristically open outward, as they had previously done when the TARDIS was also on its back in "The Ice Warriors" (1967); additionally, the left door opened in tandem with the usual right door in these instances. When hovering against a building in the same 'doors-up' horizontal orientation in "Day of the Moon" (2011), however, the doors opened inward as usual to receive River Song. The doors are supposed to be closed while materialising; in "Planet of Giants" (1964), the opening of the doors during a materialisation sequence causes the ship and its occupants to shrink to doll size. In "The Enemy of the World" (1967), taking off while the doors were still open results in an uncontrolled decompression, causing the villainous Salamander to be blown out of the TARDIS. 
In the Seventh Doctor audio drama "Colditz" (2001), a character is killed after being caught halfway inside the TARDIS when it dematerialises. In "Warriors' Gate" (1981), the doors open during flight between two universes, admitting a Tharil named Biroc, and allowing the time winds to burn the Doctor's hand and seriously damage K9. In "The Runaway Bride" (2006), "The Stolen Earth" (2008) and subsequent stories, the doors can be opened safely while the ship is in a vacuum, as the TARDIS protects its occupants (see the "Defences" section below); in "The Horns of Nimon" (1979), the Doctor deliberately extrudes the "defence shield" to dock with a spacecraft. In "The Time of Angels" (2010), River asks the Doctor to provide an "air corridor" to assist in her escape from the "Byzantium" in deep space. The entrance to the TARDIS can be locked and unlocked from the outside with a key, which the Doctor keeps on his person and occasionally gives copies of to his companions. In the 1996 television movie, the Doctor kept a spare key "in a cubbyhole above the 'P'" (of the POLICE BOX sign). In "The Invasion of Time" (1978), a Citadel Guard on Gallifrey is initially baffled by the archaic lock when attempting to open the Doctor's TARDIS. In the 2005 series, the keys are also remotely linked to the TARDIS, capable of signalling its presence or impending arrival by heating up and glowing. The key is also able to repair temporal anomalies and paradoxes, including death aversion, through its link to the TARDIS. The TARDIS' keys most commonly appear as ordinary Yale keys. However, various designs have been experimented with over the years, including ankh-like keys embossed with an alien pattern (identified in Terrance Dicks and Malcolm Hulke's 1972 book "The Making of Doctor Who" as the constellation of Kasterborous, Gallifrey's home system) and keys featuring the Seal of Rassilon. The security level of the TARDIS' lock has varied from story to story. 
Originally, it was said to have 21 different "combinations" and would melt if the key was placed in the wrong one ("The Daleks", 1963–64). The First Doctor was also able to unlock it with his ring ("The Web Planet", 1965) and repair it by using the light of an alien sun refracted through the ring's jewel ("The Daleks' Master Plan", 1965–66). In "The Dalek Invasion of Earth" (1964) and "Utopia" (2007), the TARDIS was shown to have an internal deadlock; once thrown, it would prevent entry even for authorised users with authorised keys. In "The Dalek Invasion of Earth", this is known as "double-locking". In "The Sensorites" (1964), the entire lock mechanism is removed from the TARDIS' door via a hand-held Sensorite device. In "Spearhead from Space" (1970), the Third Doctor said that the lock had a metabolism detector, so that even if an unauthorised person had a key, the doors would remain locked. This security measure was also seen in the "New Series Adventures" novel "Only Human" (2005), which called it an "advanced meson recognition system." The Ninth Doctor claims that when the doors were shut, even "the assembled hordes of Genghis Khan" could not enter ("believe me, they've tried") ("Rose", 2005). In "Army of Ghosts" (2006), when the TARDIS is confiscated, the Doctor claims, "You'll never get inside it." Several people have nevertheless managed simply to wander into the TARDIS over the years, including some who became companions. Despite the apparent infallibility of the TARDIS' security, some of its instruments, and the interior itself, have been breached and remote-controlled. In the serial "The War Games" (1969), the Time Lords manage to breach the inside of the TARDIS while it is in mid-flight and landing, in order to erect something similar to a force field. In "Utopia" (2007), the Doctor is able to lock the TARDIS to the coordinates it had previously visited from outside using the sonic screwdriver. 
In the episode "The Rings of Akhaten" (2013), Clara Oswald cannot get into the TARDIS and says, "I don't think it likes me!" In "Forest of the Dead" (2008), River Song (a character whose timeline intersects with the Doctor's in non-linear order) says to the Doctor that she knows he would be able to open the TARDIS' doors with a snap of his fingers. Although the Doctor dismisses this as impossible, at the episode's conclusion, he opens and closes the doors by doing just that, eschewing the need for a key. He is later shown doing the same in "The Eleventh Hour" (2010), "Day of the Moon", and "The Caretaker" (2014). In addition, despite the animosity it previously displayed towards her, Clara Oswald is also shown being able to open and shut the TARDIS' doors by snapping her fingers (in "The Day of the Doctor", 2013, and "The Caretaker"). In the 2009 Christmas episode, part one of "The End of Time", the Doctor uses a remote locking system to lock the TARDIS, similar to the remote-control locking system used on modern cars. Upon pointing his key fob at the TARDIS, the TARDIS makes a distinctive chirp and the light on top flashes. Later in the same episode, the key fob, when again used by the Doctor, shifts the TARDIS "just a second out of sync" (one second into the future), rendering it invisible and so hiding it from the Master. In the 2018 episode "The Ghost Monument", upon reuniting with the TARDIS, the Doctor explains to it that she lost her key after falling from it in "Twice Upon a Time". The TARDIS responds by opening its doors voluntarily. Objects are sometimes shown clinging to the outside of the TARDIS and being carried with it as it dematerialises. In "Silver Nemesis" (1988), an arrow is fired at the TARDIS and is embedded in its door. 
The arrow remains in the door throughout the serial and through several dematerialisations before being removed at the story's conclusion; this is repeated in "The Shakespeare Code" (2007), and the arrow is removed in the following episode, "Gridlock". "Utopia" presents, for the first time on-screen, a circumstance in which a character travels on the exterior of the TARDIS during a flight, when Jack Harkness grabs hold of the TARDIS as it begins to dematerialise and holds on until it reaches its destination; the episode does establish, however, that a normal person would not have survived the trip, as Jack is "killed" by the experience but, due to his immortality, soon revives. This concept was referenced in "The Time of the Doctor" (2013), where Clara also travels with the TARDIS by holding on to its exterior. To prevent Clara from dying, the TARDIS has to extend its force field to protect her, which drastically slows down its time travel and results in it arriving 300 years too late, with a visibly aged Doctor. In "Vincent and the Doctor" (2010), some advertisements are attached to the TARDIS; after materialisation, they are shown to be burning. At the conclusion of the 2015 episode "Face the Raven", Rigsy decorates the TARDIS with painted flowers and a chalk drawing of Clara Oswald; when the Doctor dematerialises the retrieved TARDIS at the conclusion of "Hell Bent" (2015), the painted flowers and picture remain for a moment before the picture blows away and the flowers flake and fall to the ground. The Time Lords are able to divert the TARDIS' flight path, as in "The Ribos Operation" (1978), and can totally override and recall any TARDIS by order of the Council, as in "Arc of Infinity" (1983). 
Alien influences have also trapped the Doctor's TARDIS and drained its power, as in "The Web Planet" (1965) and "Death to the Daleks" (1974), while its course has been diverted in "The Keeper of Traken" (1981), by the Mandragora Helix in "The Masque of Mandragora" (1976) and by the Daleks' "time corridor" in "Resurrection of the Daleks" (1984). In "The Mark of the Rani" (1985), the Rani uses a Stattenheim remote control to summon her TARDIS. In "The Two Doctors" (1985), the Second Doctor also uses a portable Stattenheim. The Ninth Doctor uses his sonic screwdriver to remotely trigger "Emergency Program One", sending his human companion Rose Tyler to safety while he stays behind for a battle against the Daleks ("The Parting of the Ways", 2005). The Tenth Doctor also manipulates the TARDIS by utilising the self-attracting nature of huon particles, causing the TARDIS to materialise around both Donna Noble and himself in order to escape into the past. However, this trick is used in turn by the Empress of the Racnoss, who pulls the TARDIS from the creation of the Earth to only a few minutes after its initial departure. In "The Pandorica Opens" (2010), the TARDIS is drawn to a specific date, 26 June 2010, and then caused to explode by an outside influence. The exterior dimensions can be severed from the interior dimensions under extraordinary circumstances. In "Frontios" (1984), when the TARDIS is destroyed in a Tractator-induced meteor storm, the interior ends up outside the police box shell with various bits embedded in the surrounding rock. The Fifth Doctor eventually tricks the Gravis, leader of the Tractators, into reassembling the ship. In "Father's Day" (2005), a temporal paradox resulting in a wound in time throws the interior of the ship out of the wound, leaving the TARDIS an empty shell of a police box. 
The Ninth Doctor attempts to use the TARDIS' key in conjunction with a small electrical charge to recover the ship, but the process is interrupted, and the TARDIS is only restored once the paradox is resolved. In "Turn Left" (2008), the "Police Box" sign and all other text on the TARDIS is shown as replaced with the words "Bad Wolf", as is all text in the universe; this is interpreted by the Doctor as an urgent warning concerning the end of the universe. The words "Bad Wolf" have also been spray-painted on and around the TARDIS in previous episodes. The TARDIS interior has an unknown number of rooms and corridors, and the dimensions of the interior have not been specified. In "Journey to the Centre of the TARDIS" (2013), the Doctor states that the TARDIS is actually infinite in size. Apart from living quarters, the interior includes rooms such as: a swimming pool and bathroom, a sick bay, an ancillary power station disguised as an art gallery, and several brick-walled storage areas ("The Invasion of Time", 1978); a "cloister room", an observatory, a library, a greenhouse, a baby room, and several squash courts. Numerous other rooms have only been mentioned in dialogue or in spin-off media. Portions of the TARDIS can also be reconfigured or "deleted"; the Doctor jettisons 25% of the TARDIS's structure in "Castrovalva" to provide additional "thrust". In "The Doctor's Wife" (2011), a fail-safe transfers any living creatures in "deleted" rooms to the main control room, and old (and future) control rooms can be "archived" by the TARDIS without the Doctor's knowledge. In "Journey to the Centre of the TARDIS" (2013) the TARDIS is shown to contain the Eye of Harmony, an exploding star in the process of becoming a black hole. Other rooms seen include living quarters for many of the Doctor's companions. 
The TARDIS also had a "Zero Room", a chamber that was shielded from the rest of the universe and provided a restful environment for the Fifth Doctor to recover from his regeneration in "Castrovalva" (which was among the 25% jettisoned). However, the Seventh Doctor spin-off novel "Deceit" (1993) indicates that the Doctor rebuilt the Zero Room shortly before the events of that novel. In some of the First Doctor serials, a nearby room contains a machine that dispenses food or nutrition bars to the Doctor and his companions. This machine disappears after the first few serials, although mention is occasionally made of the TARDIS kitchen. In "The One Doctor" (2001), Mel mentions that the Doctor used the TARDIS's laundromat. In "Full Circle" (1980), Romana states that the weight of the TARDIS is 5 × 10⁶ kilograms in Alzarius's Earth-like gravity (about 5 × 10⁷ newtons, or the weight of 5,000 tonnes). It has been speculated that this is a mistake by the character and refers to its internal weight, as the external part of the TARDIS is at other times light enough for it to be lifted or otherwise moved with relative ease (although most real police boxes were concrete and hence quite difficult to move): several men lift it up in "Marco Polo" (1964), it is transported by truck and installed indoors by hand (all off-screen) in "Spearhead from Space" (1970), it requires a fork-lift truck in "Time-Flight" and is lifted into the cargo hold of a "Concorde" in the same serial, a group of small blue maintenance workers on Platform One push it along the ground in "The End of the World" (2005), a quartet of Weeping Angels are able to rock it back and forth in "Blink" (2007), and in "The Day of the Doctor" (2013) it is lifted by a helicopter using a steel cable. The TARDIS floats in "Fury from the Deep" (1968) but, conversely, remains stationary despite the tides in "The Time Meddler" (1965). 
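The conversion in parentheses is simple Newtonian arithmetic, assuming Earth-like surface gravity of roughly 9.8 m/s² (the episode only says Alzarius's gravity is Earth-like, so the figure is approximate):

```latex
W = mg = (5 \times 10^{6}\,\mathrm{kg}) \times (9.8\,\mathrm{m/s^{2}}) \approx 4.9 \times 10^{7}\,\mathrm{N}
```

Since one tonne is 10³ kilograms, a mass of 5 × 10⁶ kg is exactly 5,000 tonnes, which is why the weight is described both ways.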
If the solid exterior of the TARDIS is moved or shaken after materialisation, the movement is usually transmitted to its interior ("The Impossible Astronaut", 2011), although there is a manual control to separate the internal gravity from the exterior's orientation ("Time-Flight", 1982). The Twelfth Doctor has stated that the TARDIS's weight is always adjusted; if it were not, it would shatter the Earth's surface. In the tie-in novels, the interior of the TARDIS has been known to contain an entire city, encompass an entire parallel Earth, and even dwarf Gallifrey itself when turned inside out. A distinctive architectural feature of the TARDIS interior is the roundel. In the context of the TARDIS, a roundel is a circular decoration that adorns the walls of the rooms and corridors of the TARDIS, including the console room. Some roundels conceal TARDIS circuitry and devices, as seen in the serials "The Wheel in Space" (1968), "Logopolis", "Castrovalva" (1981), "Arc of Infinity" (1983), "Terminus" (1983), and "Attack of the Cybermen" (1985), while in "The Husbands of River Song" (2015), one roundel is shown being used as a bar to store alcoholic drinks. The design of the roundels has varied throughout the show's history, from a basic circular cut-out with a black background, to a photographic image printed on wall board, to translucent illuminated discs in later serials. In the secondary console room, most of the roundels were executed in recessed wood panelling, with a few decorative ones in what appeared to be stained glass. In the TARDIS design from 2005–10, the roundels are built into hexagonal recesses in the walls. Ever since the TARDIS was redesigned at the beginning of the 2010 series, there have been a range of different roundel designs around the console room. 
These include circular holes that are recessed deep into the walls, hexagonal holes that are lit from behind each face, and round indents with brass rings around the outside and a glass centre that is illuminated blue. In "The Day of the Doctor", when the TARDIS' control room briefly appears as the War Doctor's room, the Eleventh Doctor points out to the Tenth Doctor that "the round things" have reappeared, to their mutual delight, despite neither one knowing what they are for. Although the interior corridors seen throughout the original series were not initially seen in the 2005 series, the fact that they still exist was established in "The Unquiet Dead" (2005), when the Doctor gives Rose some very complicated directions to the TARDIS wardrobe. The wardrobe is mentioned several times in the original series and spin-off fiction, and seen in "The Androids of Tara" (1978), "The Twin Dilemma" (1984) and "Time and the Rani" (1987). The redesigned version, from which the Tenth Doctor chooses his new clothes, was seen in "The Christmas Invasion" (2005) as a large multi-levelled room with a helical staircase. The corridors were eventually seen in the episode "The Doctor's Wife". The Doctor also mentions in "The Shakespeare Code" (2007) that the TARDIS has an attic. In "Boom Town" (2005), a portion of the TARDIS console opens to reveal a luminescent vapour within, described by the Doctor as the "heart of the TARDIS", harking back to the description in "The Edge of Destruction" (1964). In "The Parting of the Ways" (2005) it was shown that this is connected to the powerful energies of the time vortex. In series 12 (2020), the TARDIS was revamped for the Thirteenth Doctor. The interior includes a new set of stairs leading to a balcony, and an additional door leading to another area or room of the TARDIS; what lies beyond it is left ambiguous. The new roof includes a second crystal pointing towards the central column crystals. 
The roof was created by production designer Dafydd Shurmer. The most often-seen room of the TARDIS is the console room, in which its flight controls are housed. It was originally called the "control room", being described as such in the stage directions of many scripts, and on air in such stories as "The Masque of Mandragora", in which the Fourth Doctor says: "Do you know, this is the second control room. You know, I could run the TARDIS just as easily from here as I could from the old one." The original console room was designed by Peter Brachacki. It was built on a shoestring budget and a tight schedule, which led to Brachacki leaving the show due to disagreements with the production team and possibly a feeling that he had been given an impossible task. Despite his departure and mixed reactions to how the set looked (producer Verity Lambert liked it but director Waris Hussein did not), the basic design of the hexagonal console and wall roundels has persisted to the present day. In the Third Doctor serial "The Time Monster" (1972), the console room of the TARDIS was dramatically altered, including the wall roundels. This new set, designed by Tim Gleeson, was disliked by producer Barry Letts, who felt that the new roundels resembled washing-up bowls stuck to the wall. The set was then damaged in storage between production blocks and had to be rebuilt, so this particular design only saw service in the one serial. The TARDIS has at least two console rooms: the primary one most used throughout the programme's history, and the secondary console room used during Season 14 in 1976/77, which has wood panelling and a more antique feel. It had been designed to make shooting more comfortable for the camera crew. Putting the console on a dais meant the cameramen no longer had to crouch for eye-level shots. However, the set walls warped after it was put into storage at the end of production, and it had to be discarded. 
In addition, a cavernous, well-furnished console room was used in the television movie "Doctor Who" (1996). In the 2005 series, the console room became a dome-shaped chamber with organic-looking support columns, and the interior doors were removed. The change in configuration is explained in "Time Crash" (2007) by the Fifth Doctor as a mere changing of "the desktop theme" to "Coral" (he also indicates that a "Leopard Skin" theme is available, but he dislikes it). Several episodes of the revived series, such as "Army of Ghosts" (2006) and the end of "The Unicorn and the Wasp" (2008), reveal that there is storage space directly underneath the console room; the Doctor is shown periodically obtaining equipment from this area via a panel in the floor. The 2005 console room was destroyed by the regeneration energy of the Tenth Doctor in the final scene of "The End of Time" (2009–10) and the cold open of "The Eleventh Hour" (2010), although it makes a reappearance in the 2011 episode "The Doctor's Wife" as well as the 50th anniversary special "The Day of the Doctor". This console room has made the most appearances in the revived series and is the best known. A new console room, along with a new police box exterior, made its debut in "The Eleventh Hour". It was revealed in "The Doctor's Wife" that the older TARDIS interior designs are not destroyed or remodelled, but "archived" off the official schematic without the Doctor's knowledge. The TARDIS reveals that she has around thirty console rooms archived, even those that the Doctor has yet to use. These archived console rooms are still capable of controlling TARDIS functions, as shown when Amy and Rory are able to lower the TARDIS shields from an archived control room. The active console room at a given time is the one connected to the exterior doors for ingress and egress. A third console room design was unveiled in the 2012 Christmas special "The Snowmen". 
This set echoes the machine-like 1980s TARDIS console, but is coloured in the more shadowy blues, greens and purples of the 1996 TV movie. Though the central pillar is still connected to the ceiling – a design element introduced in the 1996 movie, and continued in the 2005 series – it is now joined to three circular connectors marked with Gallifreyan symbols that rotate clockwise and anticlockwise when the TARDIS is in flight. Showrunner Steven Moffat stated that the new design was meant to be more 'scary' and machine-like than the previous bright orange design, which was more 'whimsical', reflecting the light-hearted and fairy-tale-like nature of the episodes following its introduction in "The Eleventh Hour". The seventh series' darker, more adult tone necessitated a more menacing and mysterious console – also reflecting the implication that the TARDIS is distrustful of the Doctor's companion, Clara Oswald. For instance, in "Hide" (2013), Clara's statement that the TARDIS actively dislikes her is intercut with footage of its circular connectors spinning from the ceiling. For the eighth series, Peter Capaldi's first as the Doctor, this console was still used but was tweaked slightly, including the addition of a blackboard and bookshelves, and a change of the time rotor's colour from blue to orange. A previously unseen version of the console room made an appearance in "The Day of the Doctor" (2013) and is associated with the War Doctor, portrayed by John Hurt. This console room has walls that are similar in design to those first seen in 1963, with multiple roundels illuminated from behind. The fourth major redesign of the TARDIS in the revived series appeared in the eleventh series. New production designer Arwel Wyn Jones said that the inspiration for the Thirteenth Doctor's TARDIS was based on the 2005–10 model, which he had worked on. 
As seen in "Journey to the Centre of the TARDIS", the console room can be replicated any number of times to create "echo rooms"; occupants in each of the different echo rooms are able to feel the presence of the others in the form of shadows and sounds, as the rooms are together for a brief second, rapidly alternating between each other, "like a light switch ... flickering at super-infinite speeds." The Virgin novels introduced a tertiary console room, which was described as resembling a Gothic cathedral ("Nightshade", 1992). Another novel ("Death and Diplomacy", 1996) suggested that the native configuration is so complex and irrational that most non-Time Lords who witness it are driven mad by the experience. Throughout the programme's history there have been various attempts to humanise the console room by adding accoutrements. For example, a hatstand has often been located somewhere in the room, and at the beginning of the series an old clock appeared in the room as well. In the series from 2005 onwards, the TARDIS console included a bench sitting area for the Doctor and his companions to relax in during extended flight. In "The Androids of Tara" (1978) a cupboard containing fishing gear is shown nearby. In "The Rebel Flesh" (2011), a dartboard is seen installed in the console room, and it is revealed in the episode "Vincent and the Doctor" (2010) that the console is capable of playing recorded music. In keeping with the darker and more machine-like setting of the 2012 redesign of the console room, there is no hatstand or bench; in "Hide", the Doctor and Clara both note that there is no longer anywhere in the room on which to hang Clara's umbrella. The main feature of the console rooms, in any of the known configurations, is the TARDIS console that holds the instruments that control the ship's functions. 
The appearance of the primary TARDIS consoles has varied widely but shares common details: hexagonal pedestals with controls around the periphery, and a moveable column (or time rotor, as it has been called in the original series and the 2011 episode "The Doctor's Wife") in the centre that bobs rhythmically up and down when the TARDIS is in flight, like a pump or a piston. The secondary console was first seen in season 14 (1976–77) of the original series. It had the controls hidden behind wooden panels and had no central column. The 1996 television movie console also appeared to be made of wood, and its central column connected to the ceiling of the console room. When the console appeared in 2005 it was circular in shape but still divided into six segments, with both the control panels and the central column glowing green, the latter once again connected to the ceiling. The 2005 console has a much more thrown-together appearance than previous consoles, with bits of junk from various eras serving as makeshift controls, including a glass paperweight, a locomotive-style water sight glass and protector, a small bell, and a bicycle pump, the latter identified in the Tenth Doctor interactive mini-episode "Attack of the Graske" (2005) as the vortex loop control. Three other controls—the dimensional stabiliser, vector tracker, and the handbrake—were also identified. Although the stabiliser had been mentioned before in the series, the canonicity of the mini-episode is unclear. As seen in "World War Three" (2005), there is also a working telephone attached to the console. In the 2010 series, the new console includes items such as a washer-fluid bottle from a car and a typewriter keyboard. Precisely how much control the Doctor has in directing the TARDIS has varied over the course of the series. 
The First Doctor did not initially seem to be able to steer it accurately, making only one intended landing, on the planet Kembel in "The Daleks' Master Plan" (1965–66), by using the directional unit taken from another TARDIS before the unit burned out. During the Third Doctor's exile on Earth, the TARDIS's course is shown as controlled successfully by the Time Lords, and from the point the Time Lords unblock his memory of time-travel mechanics in "The Three Doctors" (1972–73), the Doctor seems able to navigate correctly when needed. Over time, the Doctor seems to be able to pilot the TARDIS with more precision. In "The Seeds of Death" (1969), the Second Doctor explains to Zoe Heriot that it would be impossible to use the TARDIS to fly from Earth to the Moon because it would likely "overshoot by a few million years, or a few million miles". However, in "Logopolis" (1981), the Fourth Doctor is able to make a "short hop" to the exact coordinates when he initially lands the TARDIS 1.6 metres off target. Following "The Key to Time" season (1978–79), the Doctor fitted a randomiser to the console which prevented the Doctor (and by extension the evil and powerful Black Guardian) from knowing where the TARDIS would land next. This device was eventually removed in "The Leisure Hive" (1980). In the 2005 and later series, the Doctor is shown piloting the TARDIS at will, although writers continue to use the plot device of having the TARDIS randomly land somewhere, or imply that the TARDIS is "temperamental" in its courses through time and space, such as missing his intended mark by a century (1879 instead of 1979) in "Tooth and Claw" (2006), arriving 12 months late instead of 12 hours in "Aliens of London" (2005), getting the correct time but landing in the wrong city (London instead of New York) in "The Idiot's Lantern" (2006) or even facing the wrong way (blocked by a metal container) in "Fear Her" (2006). 
He can also choose to "set the controls to random", as in "Planet of the Ood" (2008). Although the Eleventh Doctor's spatial accuracy in "The Eleventh Hour" (2010) was spot-on, the TARDIS' malfunctioning helmic regulator prevents him from controlling the exact time he arrives at: he first promises a young Amelia that he will be gone for only five minutes but takes 12 years to return, and later intends to leave Amy for a short while to give the newly regenerated TARDIS a brief shakedown cruise but ends up returning another two years in the future. In "The Doctor's Wife", the TARDIS' soul, in the body of a humanoid named Idris, explains why the Doctor seems to lack control over the TARDIS at times: while the TARDIS may not always take the Doctor where he "wants" to go, it always takes him where he "needs" to go. In "Journey's End" (2008), the Tenth Doctor confirms that the TARDIS is intended to be flown by six pilots; Rose Tyler, Martha Jones, Sarah Jane Smith, Mickey Smith, Jack Harkness and the Doctor man the controls, and the TARDIS runs far more smoothly during that brief period than it normally does. This also explains why the Doctor tends to do a lot of manic running around the console while he is piloting the TARDIS, as well as the difficulty he has in controlling it, although Romana, the Doctor's one-time Time Lord companion, is able to pilot the TARDIS successfully by herself. Companion Professor River Song, who has Time Lord DNA according to the episode "A Good Man Goes to War" (2011), was also shown to pilot the TARDIS smoothly and easily without help ("The Time of Angels" (2010), "The Pandorica Opens" (2010), "Let's Kill Hitler" (2011), "The Angels Take Manhattan" (2012)). Nyssa possessed at least rudimentary TARDIS piloting skill ("Mawdryn Undead", 1983). The Doctor in his eleventh incarnation was generally able to land the TARDIS with significantly greater accuracy than his predecessors. 
He returned four times to the same spot in Amy Pond's garden where he had crash-landed and originally met her (the second and third arrivals in "The Eleventh Hour", then "The Big Bang" and "The Angels Take Manhattan"); he routinely materialised in front of the London house which he had given to her and her husband ("The God Complex", "The Doctor, the Widow and the Wardrobe", "Pond Life: August", "The Power of Three"), or within her homes ("Flesh and Stone", "Pond Life: May", "Dinosaurs on a Spaceship", "The Power of Three"). He delivered himself to the precise space-time location where the pair (and, unbeknownst to them, their daughter River Song) had summoned him ("Let's Kill Hitler"), and his pin-point accurate landings repeatedly allowed him to catch River and save her life ("The Time of Angels", "Day of the Moon"). The console can be operated independently of the TARDIS. During the Third Doctor's era, he removes the console from the TARDIS to perform repairs on it. In "Inferno" (1970) the Doctor accidentally rides the detached console into a parallel universe. The Eleventh Doctor flies a detached console to his TARDIS in "The Doctor's Wife" (2011). The console can dispense custard cream biscuits as of "The Ghost Monument" (2018). The console in series 12 (2020) includes new screens that can project information onto a cloud of water vapour. In "The Doctor's Wife", the "soul" of the ship is transferred into the body of a humanoid female called Idris, enabling the Doctor to have a conversation with his craft. The TARDIS says that she deliberately allowed the Doctor to "steal" her, as she wanted to see the universe itself; in a reversal of the traditional view, the TARDIS claims to have stolen the Doctor. When he accuses the TARDIS of being unreliable, she defends herself by saying that she has always taken him where he "needed to go", as opposed to where he "wanted to go". 
During their brief opportunity to converse, the TARDIS expresses both affection and frustration with the Doctor (including annoyance that he pushes her doors open rather than pulls them open as the instructional sign on the outside indicates). When asked by the Doctor if she actually has a name, she self-identifies with the name "Sexy", based upon what the Doctor calls her when he is alone in the ship (she later introduces herself to the Doctor's companions using this name). Eventually, the Idris "avatar" dies, and the last words uttered by the TARDIS to the Doctor using this interface are "I love you." Two additional pieces of information confirmed by the TARDIS during this incident are that TARDIS consciousnesses are female and that she and the Doctor have been travelling for approximately 700 years. In a later episode, "Let's Kill Hitler" (2011), the Doctor speaks to the TARDIS by way of a holographic voice interface. In this instance, after providing options including an image of himself and former companions, the TARDIS manifests as an image of Amelia Pond as a child. In "Hide" (2013), companion Clara Oswald interacts with a similar interface outside the TARDIS' doors when it refuses to open for her. In a subsequent mini-episode entitled "Clara and the TARDIS", further animosity between Clara and the ship's consciousness is indicated when the TARDIS reconfigures her internal layout to prevent Clara from finding her bedroom. However, following "The Name of the Doctor" (2013) – in which Clara's leap into the Doctor's time stream is what led the Doctor and Susan to steal "that" TARDIS instead of the adjacent one that they had intended to take – the TARDIS' animosity appears to have disappeared, as Clara is now shown as able to close the TARDIS doors with a snap of her fingers. 
The episode "Hide" (2013) revealed that the Doctor's TARDIS is capable of operating autonomously; in the storyline, Clara convinces the ship, simply by speaking to her, to enter a pocket universe to rescue the imperilled Doctor. Due to its age, the TARDIS is inclined to break down. The Doctor is often seen with his head stuck in a panel carrying out maintenance of some kind, and he occasionally has to give it a good thump on the console to get it to start working properly. Efforts to repair, control, and maintain the TARDIS have been frequent plot devices throughout the show's run. The TARDIS possesses telepathic circuits, although the Doctor prefers to pilot the TARDIS manually. In "Pyramids of Mars" (1975), the Fourth Doctor told Sutekh that the TARDIS controls were "isomorphic", meaning only the Doctor could operate them. However, this characteristic seems to appear and disappear when dramatically convenient, and various companions have been seen to operate the TARDIS and even fly it. In "Blink" (2007), the TARDIS was 'pre-programmed' to travel to a specific time (1969) and place by inserting a DVD into the console. The DVD was one of the 17 owned by Sally Sparrow on which the Doctor appeared as an "Easter egg". In this situation, however, the TARDIS dematerialised without transporting its occupants. Despite the changes in the layout of the console controls, the Doctor seems to have no difficulty in operating the ship. In "Time Crash" (2007), the Fifth Doctor is able to fly the TARDIS despite the layout being radically different from the one he was used to, at first without even noticing that the machine had changed. In the episode "Utopia" (2007), the TARDIS is taken by the Master and the Doctor is only able to use his sonic screwdriver to restrict the destination times to the last two previously selected destinations. 
In the Big Finish Productions audio play "Other Lives" (2005), the Eighth Doctor deactivates the isomorphism of the controls to allow his companion C'rizz to operate the console. In addition to the sound that accompanies dematerialisation, the TARDIS console was seen in "The Web of Fear" (1968) to have a light that winked on and off during landing, although the more usual indicator of flight is the movement of the central column. The TARDIS also possesses a scanner so that its crew may examine the exterior environment before exiting the ship. In the 2005 series the scanner display is attached to the console and is able to display television signals as well as various computing functions and, occasionally, what the production team has stated are Gallifreyan numbers and text. The 2005 series also sees the addition of the tribophysical waveform macro kinetic extrapolator to the TARDIS in the episode "Boom Town". This control was originally a pan-dimensional 'surf board' taken from the Slitheen. In "The Parting of the Ways", Captain Jack Harkness uses it to rig up a force field that defends the ship from Dalek missiles. The Doctor uses it again in the Christmas 2006 episode "The Runaway Bride", to jar the TARDIS a few hundred metres off course while being dragged back to the Empress of the Racnoss, in a similar manoeuvre to that used in "The Web of Fear" with an extra device plugged into the console. In its last appearance, the TARDIS coral has begun to grow over the extrapolator. In the television movie "Doctor Who" (1996), access to the Eye of Harmony is controlled by means of a device that requires a human eye to open. Why the Doctor would programme such a requirement is retroactively explained in the Big Finish Productions audio play "The Apocalypse Element" (2000), where a Dalek invasion of Gallifrey prompts the Time Lords to code their security locks to the retinal patterns of the Sixth Doctor's companion Evelyn Smythe. 
The TARDIS came with an instruction manual that the Sixth Doctor claims in "Vengeance on Varos" (1985) to have started reading but never finished. Tegan Jovanka is unable to make sense of its contents, and Peri Brown later finds it propping open a vent. The usual function of the manual is to hold up a short leg on the Doctor's hat rack, though in "Amy's Choice" (2010) the Doctor reveals that he threw it into a supernova because he disagreed with it. Despite its complexity, some companions with exceptional intelligence, such as Nyssa, or familiarity with technology, such as Turlough and Jack Harkness, have been depicted as assisting the Doctor with TARDIS operations. In "The Sontaran Stratagem" (2008), Donna Noble displays an aptitude for piloting the TARDIS under the Doctor's guidance, much to the Doctor's apparent surprise. The Doctor's companion River Song claims to have been taught to pilot the TARDIS by "the very best" ("The Time of Angels", 2010); this turns out to have in fact been the TARDIS herself, rather than the Doctor ("Let's Kill Hitler", 2011). In "Journey's End" (2008), the TARDIS is shown to ideally require six pilots positioned at various stations around the central console. On that occasion, the six pilots were Rose Tyler, Martha Jones, Sarah Jane Smith, Mickey Smith, Jack Harkness, and the Doctor. However, the ending of the 2011 episode "The Doctor's Wife" reveals that the TARDIS is actually capable of manipulating the controls herself (which is consistent with stories in which the TARDIS is summoned or otherwise travels by herself without the input of a pilot, such as "The Two Doctors" (1985) and "Hide" (2013)). In "The Time of Angels" (2010), River Song reveals that the TARDIS has a "stabilisation" and "brake" option. The "stabilisation" prevents the TARDIS from moving violently in flight. River Song claimed that leaving the "brakes" on is the cause of the (de)materialisation noise. 
However, other TARDISes have usually made the same sound when dematerialising and materialising, and it has even been identified as a particular component of every TARDIS. In "The Time of the Doctor" (2013), the Doctor is able to "turn the engines on silent". The consciousness of the Doctor's TARDIS, when briefly transposed into the body of a humanoid woman in "The Doctor's Wife", makes the sound in order to identify herself to the Doctor, and it is also heard when the TARDIS consciousness is transferred to and from the woman. The central column is often referred to as the "time rotor", although when the term was first used in "The Chase" (1965) it referred to a different instrument on the TARDIS console. However, the use of this term to describe the central column was common in fan literature, and it was finally used on screen to refer to the central column in "Arc of Infinity" (1983) and "Terminus" (1983). The current production team uses the term in the same way. It was also referred to as the "time column" in "Logopolis" (1981). The 1996 television movie featured the first appearance of the central column being attached to the ceiling. After season 26, a new design for the TARDIS console room featured the console being suspended from the ceiling via the central column. This design was never built as a full-size set because the show was cancelled before a 27th season was produced, but it appeared in a Doctor Who night presented by Sylvester McCoy, for which a miniature was built and McCoy superimposed into it. When fully active, the TARDIS's outer defences are (nearly) impenetrable. In the last episode of "The Armageddon Factor" (1979), the Black Guardian is unable to enter the TARDIS after the Doctor activates "...all of the TARDIS's defences..." Similarly, when being chased by an Auton in "Rose" (2005), the Doctor reassures future companion Rose Tyler that "the assembled hordes of Genghis Khan couldn't break through those doors, and believe me, they've tried." 
However, in "Journey's End" (2008) the Tenth Doctor states that the Daleks, created and led by Davros, would have no problem breaching the TARDIS defences: "They're experts at fighting TARDISes, they can do anything. Right now, that wooden door is just wood." One of the main TARDIS defences is a force field. The field is seen in use in "The Runaway Bride" (2006), when the Tenth Doctor and the bride, Donna Noble, are trying to escape the Empress of the Racnoss, and in "The Beast Below" (2010), when the Doctor is showing Amy Pond the wonders of the universe. In "Journey to the Centre of the TARDIS" (2013), the Eleventh Doctor turns off the shields by putting the TARDIS on "basic mode" for his companion, Clara, to operate. Another device, a tribophysical waveform macro kinetic extrapolator, is installed to generate a force field in the episode "Boom Town" (2005) and is later used to protect the ship from Dalek missiles in "The Parting of the Ways". Another defensive feature is the Hostile Action Displacement System (HADS), which can (if switched "on" by the ship's operator) teleport the ship away if it is attacked ("The Krotons", 1968) or in great danger ("Cold War", 2013). The Hostile Action "Dispersal" System can, if the TARDIS is threatened, scatter its components so that it appears destroyed, then reassemble it via a sonic device such as a sonic screwdriver ("The Magician's Apprentice"/"The Witch's Familiar", 2015). It is unclear if the Displacement and Dispersal functions are part of the same system, or if they are separate features. If required, the TARDIS can become temporarily invisible, but this is a significant power drain, as seen in "The Impossible Astronaut" (2011), when the Doctor lands the TARDIS in the middle of the Oval Office in the White House. The TARDIS's cloister bell is a signal used in the event of "wild catastrophes and sudden calls to man the battle stations" ("Logopolis", 1981). 
The interior of the TARDIS was described as being in a state of "temporal grace" ("The Hand of Fear", 1976). The Fourth Doctor explains that, in a sense, things do not exist while inside the TARDIS. This had the practical effect of ensuring that no weapons could be used inside its environs; however, since then weapons have been fired in the console room in "Earthshock" (1982), "Attack of the Cybermen" (1985), "The Parting of the Ways" (2005), and "Last of the Time Lords" (2007), among others. When confronted by Nyssa on this contradiction in "Arc of Infinity" (1983), the Doctor responds, "Yes, well, nobody's perfect." In "The Invasion of Time" (1978), a guard's patrol staser will not function, even though K9's nose laser does. The Doctor explains on this occasion that the staser will not work within the field of a relative dimensional stabiliser, such as that found in the TARDIS. In the audio story "Human Resources" (2007), when a character mentions the temporal grace function, the Eighth Doctor says that his TARDIS "hasn't done that in years". In "Let's Kill Hitler" (2011) the Doctor tells Mels about the temporal grace system and she shoots something in the TARDIS as a result, causing it to crash. The Doctor then admits that temporal grace is actually just a "clever lie." The TARDIS can also use its living metal circuitry to continue to expand and change when required, as seen in "Journey to the Centre of the TARDIS" (2013), when the TARDIS creates a continuing "labyrinth" around its occupants to stop the theft of a circuit. The TARDIS also has another shield which keeps it from interacting with other objects in the time vortex, namely other TARDISes. When the Doctor forgets to restore these shields after the events of "Last of the Time Lords", he ends up merging his TARDIS with that of his fifth incarnation in the mini-episode "Time Crash". 
After the Doctor successfully separates the two, the bow of the alien spaceship "Titanic", designed to look like the ship of the same name, smashes through the inside wall of the TARDIS before he can raise the shield again. Despite the shield being designed to keep the TARDIS from interacting with itself, its own interior is considered the safest place, and the ship will thus effect an emergency materialisation within itself under certain circumstances. This occurred in the mini-episodes "Space" and "Time" (both 2011), when Rory Williams' accidental dropping of a thermal coupling prompts the TARDIS' exterior to materialise within its interior, thereby trapping the ship and its occupants in a space loop. In "The Doctor's Wife", the Doctor's makeshift TARDIS materialises within the Doctor's own TARDIS, but only after Idris telepathically instructs the Doctor's companions, trapped aboard by the House entity, to deactivate the TARDIS's defences from an "archived" control room. The TARDIS can be programmed to execute automatic functions based on certain conditions. The Ninth Doctor uses Emergency Programme One to send Rose home in "The Parting of the Ways" (2005). It is programmed to return to the Doctor upon the detection of the presence of one of Sally Sparrow's DVDs in "Blink" (2007). Emergency Programme One will also send Donna Noble back to her own time period if she is left alone in the TARDIS for more than five hours. In "Voyage of the Damned" (2007), the TARDIS will lock on to the nearest planetary body and land there when it becomes adrift in space. Similarly, when damage to the interior of the TARDIS threatens the inhabitants it will materialise at the nearest safe environment and create an emergency exit, as seen in "Terminus" (1983). The Master also pre-programs the Doctor's TARDIS on occasion, for example in "Castrovalva" (1982), where he programs it to travel back in time to "Event One" in order to destroy it. 
The TARDIS also grants its passengers the ability to understand and speak other languages. This was originally described in "The Masque of Mandragora" (1976) as a "Time Lord gift" which the Doctor shares with his companions, but is ultimately attributed to the TARDIS's telepathic field in "The End of the World" (2005). In "The Christmas Invasion" (2005), it is revealed that the Doctor himself is an integral element of this capability. Rose is unable to understand the alien Sycorax whilst the Doctor is in a regenerative crisis. In "The Impossible Planet" (2006), it is said that the TARDIS normally even translates writing; in that episode, the TARDIS is unable to translate an alien script, which the Doctor claims makes the language "impossibly old". However, the TARDIS does not translate Gallifreyan, as seen in "Utopia", when the Doctor was reading Gallifreyan numbers from the console monitor to tell where the TARDIS was going, and again in "A Good Man Goes to War" (2011), in which the Gallifreyan script on the Doctor's crib remains unintelligible to the audience and the Ponds. River Song also explains in "A Good Man Goes to War" that the TARDIS' translation matrix can take "a while to kick in" for the written word, actually coming into effect after the departure of the Doctor and the TARDIS. In the Ninth Doctor Adventures novel "Only Human" (2005), the telepathic field includes a filter that replaces foul or undesirable language with more acceptable terms. In "The Fires of Pompeii" (2008), it is shown that if a TARDIS traveller speaks in a hearer's own language, the translation circuit renders these words appropriately as foreign to the listener's ear (for example, if an English-speaking TARDIS traveller deliberately tries to speak Latin to an ancient Roman, the Roman instead hears that Latin as "Celtic" or Welsh). 
It also affects the translation of accents: in "Vincent and the Doctor" (2010), a translated Scottish accent is heard by a Dutchman and understood as a Dutch accent. The translation circuit does not always function, even for the Doctor. In "Four to Doomsday" (1982), the Doctor is unable to understand the Aboriginal language spoken by a tribesman and by his companion Tegan. In "Carnival of Monsters" (1973) the Doctor is unable to understand Vorg when Vorg speaks to him in Polari. Similarly, Martha Jones is initially unable to understand the Hath in the episode "The Doctor's Daughter" (2008), and although she is eventually able to communicate with them, the audience is never allowed to understand their words. The TARDIS is able to tow other objects (a neutron star in "The Creature from the Pit", 1979; a ship in "The Satan Pit", 2006) or follow a ship or a transmission through space and time ("The Empty Child", 2005; "The Stolen Earth", 2008). In "Journey's End" (2008), the TARDIS (assisted by the Rift Manipulator situated at Torchwood Three in Cardiff and the supercomputer Mr Smith) is able to tow the Earth across space. At times the TARDIS is shown to have a mind of its own. It is heavily implied in the television series that the TARDIS is "alive" and intelligent to a degree (first in "The Edge of Destruction", 1964), and shares a bond with those who travel in it; in the television movie "Doctor Who" (1996), the Doctor calls the TARDIS "sentimental". In "The Parting of the Ways" (2005), the Doctor leaves a message for Rose when he believes he will never return, asking her to let the TARDIS die. In the same episode, Rose claims that the TARDIS is alive, echoing the Doctor's earlier statement in "Boom Town". The Doctor's TARDIS is also explicitly said to have died in the episode "Rise of the Cybermen" (2006), though the Doctor is able to revive it by giving up some of his life energy (reducing his life expectancy by a decade in the process). 
Other abilities the TARDIS displays include creating snow via "atmospheric excitation" ("The Runaway Bride", 2006) and, through a "chameleon arch", engineering an almost witness-protection-style relocation by turning its Time Lord into another species and placing him or her in a newly fabricated identity, with new memories, somewhere else in space and time ("Human Nature", 2007; "The Family of Blood", 2007; "Utopia", 2007). In "The Doctor's Wife" (2011), the TARDIS's intelligence is temporarily transferred to a humanoid body, during which time it is shown to possess a degree of precognition as well as limited telepathic abilities and a genuine fondness for the Doctor and his companions. This episode also demonstrates that certain capabilities of the physical TARDIS are operable independently of its intelligence, in particular the physical TARDIS's internal password security system (which is language-independent, relying on meanings rather than the words themselves) and its ability to travel between "bubble universes". In "The Name of the Doctor" (2013), the TARDIS actively resists travelling to the planet Trenzalore, the site of the Doctor's grave, and once there, forces the Doctor to crash-land it on the planet's surface. The TARDIS is also able to place particular areas of the ship in "time stasis", as seen in "Journey to the Centre of the TARDIS" (2013), where the engine had exploded and the TARDIS "wrapped around the force" of the explosion as a temporary safety measure. In the novels, a portion of the TARDIS could be separated and used for independent travel. This was featured in two Virgin novels, "Iceberg" (1993) and "Sanctuary" (1995). This subset of the TARDIS, resembling a small pagoda fashioned out of jade, had limited range and functionality, but was used occasionally when the main TARDIS was incapacitated. The sapient characteristics of the TARDIS have been made more explicit in the spin-off novels and audio plays. 
In the Big Finish audio play "Omega" (2003), the Doctor meets a TARDIS which "dies" after its Time Lord master's demise. Other TARDISes have appeared in the television series. The first was that of the Monk, another Time Lord, in the 1965 serial "The Time Meddler". The Master had at least two TARDISes of his own, each a more advanced model than the Doctor's. The chameleon circuits on these were fully functional, and his TARDISes have been seen in various forms, including a fully functional spacecraft, a Concorde aircraft, a grandfather clock, a computer, a fireplace, a Doric pillar, a lorry, a statue (able to move and walk around), a laurel tree and an iron maiden. In the unaired 1980 serial "Shada", the Time Lord known as Professor Chronotis has a TARDIS disguised as his quarters at Cambridge University. Another renegade Time Lord, the Rani, appears with her TARDIS. In "The Armageddon Factor" (1979), the Time Lord Drax has a TARDIS, but it is in need of repair. The War Chief provides dimensionally transcendent time machines named SIDRATs ("TARDIS" spelled backwards) to the alien race known as the "War Lords" in "The War Games" (1969). In the script for "The Chase" (1965), Dalek time machines are known as DARDISes. The renegade Time Lady Iris Wildthyme's own TARDIS was disguised as a No. 22 London bus, but was slightly smaller on the inside than on the outside. The Eighth Doctor Adventures novels have stated that future model Type 102 TARDISes will be fully sentient, and able to take on humanoid form. The Eighth Doctor's companion Compassion was the first Type 102 TARDIS, and she was seen to have enough firepower to annihilate other TARDISes. In the 40th anniversary animated webcast "Scream of the Shalka" (2003), the Doctor has a TARDIS console room that looks similar to the Eighth Doctor's version. 
This console was covered in an array of clock-like dials, featured a long spiral staircase leading far above the console, and connected to a nearby room resembling a Victorian library and study. In the Big Finish audio play "The One Doctor" (2001), confidence trickster Banto Zame impersonates the Doctor. However, due to incomplete information, his copy of the TARDIS (a short-range transporter) is called a "Stardis", resembles a portaloo rather than a police box, and is not dimensionally transcendental. In "Unregenerate!" (2005), the Seventh Doctor and Mel stop a secret Time Lord project to download TARDIS minds into bodies of various alien species. This would have created living TARDIS pilots loyal to the Time Lords, ensuring that they would have ultimate control over any use of time travel technology by other races. Those created before the project is shut down depart on their own to explore the universe. The 28 October 2006 "Radio Times", in an image of the Torchwood Three headquarters, identified a piece of large coral on Captain Jack Harkness' desk as the beginnings of a TARDIS. John Barrowman, who plays Jack, said that "Jack's growing a TARDIS... It's probably been there for 30 years. I suppose in 500 years he'll be able to begin the carving process". In the 2008 Christmas special, "The Next Doctor", Jackson Lake (David Morrissey), while under the delusion that he is the Doctor, has a blue gas balloon which he identifies as his TARDIS, which he explains stands for "Tethered Aerial Release Developed In Style". It is not capable of time travel. In a deleted scene from the series 4 finale "Journey's End", the Doctor gives a piece of the TARDIS to the half-human Doctor clone so that the latter can grow his own. When the clone remarks that growing a TARDIS would take hundreds of years, Donna Noble provides him with a method of speeding up the process. 
In "The Lodger" (2010) a vessel, which the Doctor identifies as somebody's attempt to build a TARDIS, lures in unsuspecting people to pilot its controls, all of whom die because humans are incompatible with the process. The same interior was used by the Silence in "Day of the Moon" (2011) (and the similarity is commented on by the Doctor as he enters), but the intended connections between the two are still mostly unknown. In "The Doctor's Wife" (2011), the Doctor and the human avatar of his TARDIS's matrix (aka Idris) view a valley filled with parts of "half-eaten TARDISes", which upsets Idris. Later, the Doctor builds a makeshift TARDIS out of components of the dead TARDISes to be able to save Rory and Amy, who are trapped inside his TARDIS, which is now under the control of a malevolent entity called the House. He still requires energy from Idris in order to make it work. The console used for this episode was designed by the winner of a "Blue Peter" competition in 2010. In "Hell Bent" (2015), the Doctor once again escapes from Gallifrey, which had been safely put in another dimension and later returned to the universe, by stealing a TARDIS, which initially has the external appearance of a grey cylinder with a sliding door. At the end of the episode, when the Doctor's original TARDIS is returned to him, it is revealed that the newly stolen TARDIS is now being piloted by former travelling companion Clara Oswald and immortal human Ashildr, who are also in possession of a TARDIS manual. Ashildr mentions that she has not yet understood how to get the chameleon circuit working, meaning that this TARDIS is at least temporarily stuck in the form of an American diner, in which disguise it is revealed to have been the setting for that episode's frame story. In "Spyfall" (2020), the Master has a TARDIS of his own, first seen disguised as the residence of his alias "O", in the Great Victoria Desert, Australia. 
It is later revealed that O actually is the Master, and his house is in fact his TARDIS. The sound of the Doctor's TARDIS featured in the final scene of the "Torchwood" episode "End of Days" (2007). As Torchwood Three's hub is situated at a rift of temporal energy, the Doctor often appears on Roald Dahl Plass directly above it in order to recharge the TARDIS. In the episode, Jack Harkness hears the tell-tale sound of the engines, smiles and afterwards is nowhere to be found; the scene picks up in the cold open of the "Doctor Who" episode "Utopia" (2007), in which Jack runs to and holds onto the TARDIS just before it disappears. Former companion Sarah Jane Smith has a diagram of the TARDIS in her attic, as shown in "The Sarah Jane Adventures" episode "Invasion of the Bane" (2007). In the two-part serial "The Temptation of Sarah Jane Smith" (2008), Sarah Jane becomes trapped in 1951 and briefly mistakes an actual police public call box for the Doctor's TARDIS (the moment is even heralded by the Doctor's musical cue, frequently used in the revived series). It makes a full appearance in "The Wedding of Sarah Jane Smith" (2009), in which the Doctor briefly welcomes Sarah Jane's three adolescent companions into the control room. It then serves as a backdrop for the farewell scene between Sarah Jane and the Tenth Doctor, which echoes nearly word for word her final exchange with the Fourth Doctor aboard the TARDIS in 1976. It reappears in "Death of the Doctor" (2010), where it is stolen by the Shansheeth, who try to use it as an immortality machine, and where it transports Sarah Jane, Jo Grant and their adolescent companions (Rani Chandra, Clyde Langer and Santiago Jones). As one of the most recognisable images connected with "Doctor Who", the TARDIS has appeared on numerous items of merchandise associated with the programme. TARDIS scale models of various sizes have been manufactured to accompany other "Doctor Who" dolls and action figures, some with sound effects included. 
Fan-built full-size models of the police box are also common. There have been TARDIS-shaped video games, play tents for children, toy boxes, cookie jars, book ends, key chains, and even a police-box-shaped bottle for a TARDIS bubble bath. The 1993 VHS release of "The Trial of a Time Lord" was contained in a special-edition tin shaped like the TARDIS. With the 2005 series revival, a variety of TARDIS-shaped merchandise has been produced, including a TARDIS coin box, a TARDIS figure toy set, a TARDIS that flashes when it detects the ring signal from a mobile phone, TARDIS-shaped wardrobes and DVD cabinets, and a USB hub in the shape of the TARDIS. The complete 2005 season DVD box set, released in November 2005, was issued in packaging that resembled the TARDIS. One of the original-model TARDISes used in the television series' production in the 1970s was sold at auction in December 2005 for £10,800.
https://en.wikipedia.org/wiki?curid=30302
The X-Files The X-Files is an American science fiction drama television series created by Chris Carter. The original television series aired from September 10, 1993, to May 19, 2002, on Fox. The program spanned nine seasons, with 202 episodes. A short tenth season consisting of six episodes premiered on January 24, 2016, and concluded on February 22, 2016. Following the ratings success of this revival, Fox announced in April 2017 that "The X-Files" would be returning for an eleventh season of ten episodes. The season premiered on January 3, 2018, concluding on March 21, 2018. In addition to the television series, two feature films have been released: the 1998 film "The X-Files", which took place as part of the TV series continuity, and the stand-alone film "The X-Files: I Want to Believe", released in 2008, six years after the original television run had ended. The series revolves around Federal Bureau of Investigation (FBI) special agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson), who investigate X-Files: marginalized, unsolved cases involving paranormal phenomena. Mulder believes in the existence of aliens and the paranormal, while Scully, a medical doctor and a skeptic, is assigned to scientifically analyze Mulder's discoveries, offer alternate rational theories to his work, and thus return him to mainstream cases. Early in the series, both agents become pawns in a larger conflict and come to trust only each other and a few select people. The agents also discover a government agenda to keep the existence of extraterrestrial life a secret. They develop a close relationship which begins as a platonic friendship, but becomes a romance by the end of the series. In addition to the series-spanning story arc, "monster of the week" episodes form roughly two-thirds of all episodes. 
"The X-Files" was inspired by earlier television series which featured elements of suspense and speculative fiction, including "The Twilight Zone", "Night Gallery", "Tales from the Darkside", "Twin Peaks", and especially "Kolchak: The Night Stalker". When creating the main characters, Carter sought to reverse gender stereotypes by making Mulder a believer and Scully a skeptic. The first seven seasons featured Duchovny and Anderson equally. In the eighth and ninth seasons, Anderson took precedence while Duchovny appeared intermittently. New main characters were introduced: FBI agents John Doggett (Robert Patrick) and Monica Reyes (Annabeth Gish). Mulder and Scully's boss, Assistant Director Walter Skinner (Mitch Pileggi), also became a main character. The first five seasons of "The X-Files" were filmed and produced in Vancouver, British Columbia, before production eventually moved to Los Angeles to accommodate Duchovny. The series later returned to Vancouver to film "The X-Files: I Want to Believe" as well as the tenth and eleventh seasons of the series. "The X-Files" was a hit for the Fox network and received largely positive reviews, although its long-term story arc was criticized near the conclusion. Initially considered a cult series, it turned into a pop culture touchstone that tapped into public mistrust of governments and large institutions and embraced conspiracy theories and spirituality. Both the series itself and lead actors Duchovny and Anderson received multiple awards and nominations, and by its conclusion the show was the longest-running science fiction series in U.S. television history. The series also spawned a franchise which includes the "Millennium" and "The Lone Gunmen" spin-offs, two theatrical films and accompanying merchandise. "The X-Files" follows the careers and personal lives of FBI Special Agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson). Mulder is a talented profiler and strong believer in the supernatural. 
He is also adamant about the existence of intelligent extraterrestrial life and its presence on Earth. This set of beliefs earns him the nickname "Spooky Mulder" and an assignment to a little-known department that deals with unsolved cases, known as the X-Files. His belief in the paranormal springs from the claimed abduction of his sister Samantha Mulder by extraterrestrials when Mulder was 12. Her abduction drives Mulder throughout most of the series. Because of this, as well as more nebulous desires for vindication and the revelation of truths kept hidden by human authorities, Mulder struggles to maintain objectivity in his investigations. Agent Scully is a foil for Mulder in this regard. As a medical doctor and natural skeptic, Scully approaches cases with complete detachment even when Mulder, despite his considerable training, loses his objectivity. She is partnered with Mulder initially so that she can debunk his nonconforming theories, often supplying logical, scientific explanations for the cases' apparently unexplainable phenomena. Although she is frequently able to offer scientific alternatives to Mulder's deductions, she is rarely able to refute them completely. Over the course of the series, she becomes increasingly dissatisfied with her own ability to approach the cases scientifically. After Mulder's abduction at the hands of aliens in the seventh-season finale "Requiem", Scully becomes a "reluctant believer" who manages to explain the paranormal with science. Various episodes also deal with the relationship between Mulder and Scully, which is originally platonic but later develops romantically. Mulder and Scully are joined by John Doggett (Robert Patrick) and Monica Reyes (Annabeth Gish) late in the series, after Mulder is abducted. Doggett replaces him as Scully's partner and helps her search for him, later involving Reyes, of whom Doggett had professional knowledge. 
The initial run of "The X-Files" ends when Mulder is secretly subjected to a military tribunal for breaking into a top-secret military facility and viewing plans for alien invasion and colonization of Earth. He is found guilty, but he escapes punishment with the help of the other agents, and he and Scully become fugitives. As the show progressed, key episodes, called parts of the "Mytharc", were recognized as the "mythology" of the series canon; these episodes carried the extraterrestrial/conspiracy storyline that evolved throughout the series. "Monster of the week"—often abbreviated as "MOTW" or "MoW"—came to denote the remainder of "The X-Files" episodes. These episodes, comprising the majority of the series, dealt with paranormal phenomena, including cryptids, mutants, science fiction technology, horror monsters, and religious phenomena. Some of the monster-of-the-week episodes even featured satiric elements and comedic story lines. The main story arc involves the agents' efforts to uncover a government conspiracy that covers up the existence of extraterrestrials and their sinister collaboration with said government. Mysterious men comprising a shadow element within the U.S. government, known as "The Syndicate", are the major villains in the series; late in the series it is revealed that the Syndicate acts as the only liaison between mankind and a group of extraterrestrials that intends to destroy the human species. They are usually represented by the Cigarette Smoking Man (William B. Davis), a ruthless killer, masterful politician, negotiator, failed novelist, and the series' principal antagonist. As the series goes along, Mulder and Scully learn about evidence of the alien invasion piece by piece. It is revealed that the extraterrestrials plan on using a sentient virus, known as the black oil (also known as "Purity"), to infect mankind and turn the population of the world into a slave race. 
The Syndicate—having made a deal to be spared by the aliens—have been working to develop an alien-human hybrid that will be able to withstand the effects of the black oil. The group has also been secretly working on a vaccine to overcome the black oil; this vaccine is revealed in the latter parts of season five, as well as in the 1998 film. Counter to the alien colonization effort, another faction of aliens, the faceless rebels, are working to stop alien colonization. Eventually, in the season six episodes "Two Fathers"/"One Son", the rebels manage to destroy the Syndicate. The colonists, now without human liaisons, dispatch the "Super Soldiers": beings that resemble humans, but are biologically alien. In the latter parts of season eight, and the whole of season nine, the Super Soldiers manage to replace key individuals in the government, forcing Mulder and Scully to go into hiding. California native Chris Carter was given the opportunity to produce new shows for the Fox network in the early 1990s. Tired of the comedies he had been working on for Walt Disney Pictures, Carter found the idea for "The X-Files" in a report that 3.7 million Americans may have been abducted by aliens, the Watergate scandal and the 1970s horror series "Kolchak: The Night Stalker". He wrote the pilot episode in 1992. Carter's initial pitch for "The X-Files" was rejected by Fox executives. He fleshed out the concept and returned a few weeks later, when they commissioned the pilot. Carter worked with "NYPD Blue" producer Daniel Sackheim to further develop the pilot, drawing stylistic inspiration from the 1988 documentary "The Thin Blue Line" and the British television series "Prime Suspect". Inspiration also came from Carter's memories of "The Twilight Zone" as well as from "The Silence of the Lambs", which provided the impetus for framing the series around agents from the FBI, in order to provide the characters with a more plausible reason for being involved in each case than Carter believed was present in "Kolchak". 
Carter was determined to keep the relationship between the two leads strictly platonic, basing their interactions on the characters of Emma Peel and John Steed in "The Avengers" series. The early 1990s series "Twin Peaks" was a major influence on the show's dark atmosphere and its often surreal blend of drama and irony. Duchovny had appeared as a cross-dressing DEA agent in "Twin Peaks" and the Mulder character was seen as a parallel to that show's FBI Agent Dale Cooper. The producers and writers cited "All the President's Men", "Three Days of the Condor", "Close Encounters of the Third Kind", "Raiders of the Lost Ark", "Rashomon", "The Thing", "The Boys from Brazil", "The Silence of the Lambs" and "JFK" as other influences. Carter's use of continuous takes in "Triangle" was modeled on Hitchcock's "Rope". In addition, episodes written by Darin Morgan often referred to or referenced other films. Duchovny had worked in Los Angeles for three years prior to "The X-Files"; at first he wanted to focus on feature films. In 1993, his manager, Melanie Green, gave him the script for the pilot episode of "The X-Files". Green and Duchovny were both convinced it was a good script, so he auditioned for the lead. Duchovny's audition was "terrific", though he talked rather slowly. While the casting director of the show was very positive toward him, Carter thought that he was not particularly intelligent. He asked Duchovny if he could "please" imagine himself as an FBI agent in "future" episodes. Duchovny, however, turned out to be one of the best-read people that Carter knew. Anderson auditioned for the role of Scully in 1993. "I couldn't put the script down", she recalled. The network wanted either a more established or a "taller, leggier, blonder and breastier" actress for Scully than the 24-year-old Anderson, a theater veteran with minor film experience. After auditions, Carter felt she was the only choice. 
Carter insisted that Anderson had the kind of "no-nonsense integrity that the role required". For portraying Scully, Anderson won numerous major awards: the Screen Actors Guild Award in 1996 and 1997, an Emmy Award in 1997, and a Golden Globe Award in 1997. The character Walter Skinner was played by actor Mitch Pileggi, who had unsuccessfully auditioned for the roles of two or three other characters on "The X-Files" before getting the part. At first, the fact that he was asked back to audition for the recurring role slightly puzzled him, until he discovered the reason he had not previously been cast in those roles—Carter had been unable to envision Pileggi as any of those characters, because the actor had been shaving his head. When Pileggi auditioned for Walter Skinner, he had been in a grumpy mood and had allowed his small amount of hair to grow. His attitude fit well with Skinner's character, causing Carter to assume that the actor was only pretending to be grumpy. Pileggi later realized he had been lucky that he had not been cast in one of the earlier roles, as he believed he would have appeared in only a single episode and would have missed the opportunity to play the recurring role. Before the seventh season aired, Duchovny filed a lawsuit against 20th Century Fox. He was upset because, he claimed, Fox had undersold the rights to its own affiliates, thereby costing him huge sums of money. Eventually, the lawsuit was settled, and Duchovny was awarded a settlement of about $20 million. The lawsuit put strain on Duchovny's professional relationships. Neither Carter nor Duchovny was contracted to work on the series beyond the seventh season; however, Fox entered into negotiations near the end of that season in order to bring the two on board for an eighth season. After settling his contract dispute, Duchovny quit full-time participation in the show after the seventh season. This contributed to uncertainties over the likelihood of an eighth season. 
Carter and most fans felt the show was at its natural endpoint with Duchovny's departure, but it was decided that Mulder would be abducted at the end of the seventh season and would return for 12 episodes the following year. The producers then announced that a new character, John Doggett, would fill Mulder's role. More than 100 actors auditioned for the role of Doggett, but only about ten were seriously considered. Lou Diamond Phillips, Hart Bochner, and Bruce Campbell were among the ten. The producers chose Robert Patrick. Carter believed that the series could continue for another ten years with new leads, and the opening credits were accordingly redesigned in both seasons eight and nine to emphasize the new actors (along with Pileggi, who was finally listed as a main character). Doggett's presence did not give the series the ratings boost the network executives were hoping for. The eighth-season episode "This Is Not Happening" marked the first appearance of Monica Reyes, played by Gish, who became a main character in season nine. Her character was developed and introduced due to Anderson's possible departure at the end of the eighth season. Although Anderson stayed until the end, Gish became a series regular. Glen Morgan and James Wong's early influence on "The X-Files" mythology led to their introduction of popular secondary characters who continued for years in episodes written by others: Scully's father, William (Don S. Davis); her mother, Margaret (Sheila Larken); and her sister, Melissa (Melinda McGraw). The conspiracy-inspired trio The Lone Gunmen were also secondary characters. The trio was introduced in the first-season episode "E.B.E." as a way to make Mulder appear more credible. They were originally meant to appear in only that episode, but due to their popularity, they returned in the second-season episode "Blood" and became recurring characters. The Cigarette Smoking Man, portrayed by William B. Davis, was initially cast as an extra in the pilot episode. 
His character, however, grew into the main antagonist. During the early stages of production, Carter founded Ten Thirteen Productions and began to plan for filming the pilot in Los Angeles. However, unable to find suitable locations for many scenes, he decided to "go where the good forests are" and moved production to Vancouver. It was soon realized by the production crew that since so much of the first season would require filming on location, rather than on sound stages, a second location manager would be needed. The show remained in Vancouver for the first five seasons; production then shifted to Los Angeles beginning with the sixth season. Duchovny was unhappy over his geographical separation from his wife Téa Leoni, although his discontent was popularly attributed to frustration with Vancouver's persistent rain. Anderson also wanted to return to the United States and Carter relented following the fifth season. The season ended in May 1998 with "The End", the final episode shot in Vancouver and the final episode with the involvement of many of the original crew members, including director and producer R.W. Goodwin and his wife Sheila Larken, who played Margaret Scully and would later return briefly. With the move to Los Angeles, many changes behind the scenes occurred, as much of the original "The X-Files" crew was gone. New production designer Corey Kaplan, editor Lynne Willingham, writer David Amann and director and producer Michael Watkins joined and stayed for several years. Bill Roe became the show's new director of photography and episodes generally had a drier, brighter look due to California's sunshine and climate, as compared with Vancouver's rain, fog and temperate forests. Early in the sixth season, the producers took advantage of the new location, setting the show in new parts of the country. 
For example, Vince Gilligan's "Drive", about a man subject to an unexplained illness, was a frenetic action episode, unusual for "The X-Files" largely because it was set on Nevada's stark desert roads. The "Dreamland" two-part episode was also set in Nevada, this time in Area 51. The episode was largely filmed at "Club Ed", a movie ranch located on the outskirts of Lancaster, California. Although the sixth through ninth seasons were filmed in Los Angeles, the series' second movie, "The X-Files: I Want to Believe" (2008), was filmed in Vancouver; according to Spotnitz, the film script was written for the city and surrounding areas. The 2016 revival was also shot there. The music was composed by Mark Snow, who got involved with "The X-Files" through his friendship with executive producer Goodwin. Initially, Carter had no candidates. A little over a dozen people were considered, but Goodwin continued to press for Snow, who auditioned around three times with no sign from the production staff as to whether they wanted him. One day, however, Snow's agent called him, talking about the "pilot episode" and hinting that he had got the job. The theme, "The X-Files", used more instrumental sections than most dramas. The theme song's famous whistle effect was inspired by the track "How Soon Is Now?" from the US edition of The Smiths' 1985 album "Meat Is Murder". After attempting to craft the theme with different sound effects, Snow used a Proteus 2 rackmount sound module with a preset sound called "Whistling Joe". After hearing this sound, Carter was "taken aback" and noted it was "going to be good". According to the "Behind the Truth" segment on the first season DVD, Snow created the echo effect on the track by accident. He felt that after several revisions, something still was not right. Carter walked out of the room and Snow put his hand and forearm on his keyboard in frustration. By doing so, he accidentally activated an echo effect setting. 
The resulting riff pleased Carter; Snow said, "this sound was in the keyboard. And that was it." The second episode, "Deep Throat", marked Snow's debut as solo composer for an entire episode. The production crew was determined to limit the music in the early episodes. Likewise, the theme song itself first appeared in "Deep Throat". Snow was tasked with composing the score for both "The X-Files" films. The films marked the first appearance of real orchestral instruments; previous music had been crafted by Snow using digitally sampled instrument sounds. Snow's soundtrack for the first film, "", was released in 1998. For the second film, Snow recorded with the Hollywood Studio Symphony in May 2008 at the Newman Scoring Stage at 20th Century Fox in Century City. UNKLE recorded a new version of the theme music for the end credits. Some of the unusual sounds were created by a variation of silly putty and dimes tucked into piano strings. Snow commented that the fast percussion featured in some tracks was inspired by the track "Prospectors Quartet" from the "There Will Be Blood" soundtrack. The soundtrack score, "", was released in 2008. The opening sequence was made in 1993 for the first season and remained unchanged until Duchovny left the show. Carter sought to make the title an "impactful opening" with "supernatural images". These scenes notably include a split-screen image of a seed germinating as well as a "terror-filled, warped face". The latter was created when Carter found a video operator who was able to create the effect. The sequence was extremely popular and won the show its first Emmy Award, which was for Outstanding Graphic Design and Title Sequences. Producer Paul Rabwin was particularly pleased with the sequence and felt that it was something that had "never [been] seen on television before". In 2017, James Charisma of "Paste" magazine ranked the show's opening sequence #8 on a list of "The 75 Best TV Title Sequences of All Time". 
The premiere episode of season eight, "Within", revealed the first major change to the opening credits. Along with Patrick, the sequence used new images and updated photos for Duchovny and Anderson, although Duchovny only appears in the opening credits when he appears in an episode. Carter and the production staff saw Duchovny's departure as a chance to change things. The replacement sequence shows various pictures of Scully's pregnancy. According to executive producer Frank Spotnitz, the sequence also features an "abstract" way of showing Mulder's absence in the eighth season: he falls into an eye. Season nine featured an entirely new sequence. Since Anderson wanted to move on, the sequence featured Reyes and Skinner. Duchovny's return to the show for the ninth-season finale, "The Truth", marked the largest number of cast members to be featured in the opening credits, with five. The revival seasons use the series' original opening credits sequence. The sequence ends with the tagline "The Truth Is Out There", which is used for the majority of the episodes. The tagline changes in specific episodes to slogans that are relevant to that episode; numerous episodes received such alternate taglines. The pilot premiered on September 10, 1993, and reached 12 million viewers. As the season progressed, ratings began to increase and the season finale garnered 14 million viewers. The first season ranked 105th out of 128 shows during the 1993–94 television season. The series' second season increased in ratings—a trend that would continue for the next three seasons—and finished 63rd out of 141 shows. These ratings were not spectacular, but the series had attracted enough fans to receive the label "cult hit", particularly by Fox standards. Most importantly, it made great gains among the 18-to-49 age demographic sought by advertisers. 
During its third year, the series ranked 55th and was viewed by an average of 15.40 million viewers, an increase of almost seven percent over the second season, making it Fox's top-rated program in the 18-to-49-year-old demographic. Although the first three episodes of the fourth season aired on Friday night, the fourth episode, "Unruhe", aired on Sunday night. The show remained on Sunday until its end. The season hit a high with its twelfth episode, "Leonard Betts", which was chosen as the lead-out program following Super Bowl XXXI. The episode was viewed by 29.1 million viewers, the series' highest-rated episode. The fifth season debuted with "Redux I" on November 2, 1997, and was viewed by 27.34 million people, making it the highest-rated non-special broadcast episode of the series. The season ranked as the eleventh-most watched series during the 1997–98 year, with an average of 19.8 million viewers. It was the series' highest-rated season as well as Fox's highest-rated program during the 1997–98 season. The sixth season premiered with "The Beginning", watched by 20.24 million viewers. The show ended season six with lower numbers than the previous season, beginning a decline that would continue for the show's final three years. "The X-Files" was nevertheless Fox's highest-rated show that year. The seventh season, originally intended as the show's last, ranked as the 29th most-watched show for the 1999–2000 year, with 14.20 million viewers. This made it, at the time, the lowest-rated year of the show since the third season. The first episode of season eight, "Within", was viewed by 15.87 million viewers. The episode marked an 11% decrease from the seventh-season opener, "The Sixth Extinction". The first part of the ninth-season opener, "Nothing Important Happened Today", only attracted 10.6 million viewers, the series' lowest-rated season premiere. The original series finale, "The Truth", attracted 13.25 million viewers, the series' lowest-rated season finale. 
The ninth season was the 63rd most-watched show for the 2001–02 season, tying its season two rank. On May 19, 2002, the finale aired and the Fox network confirmed that "The X-Files" was over. When talking about the beginning of the ninth season, Carter said "We lost our audience on the first episode. It's like the audience had gone away and I didn't know how to find them. I didn't want to work to get them back because I believed what we are doing deserved to have them back." While news outlets cited declining ratings because of lackluster stories and poor writing, "The X-Files" production crew blamed the September 11 terrorist attacks as the main factor. At the end of 2002, "The X-Files" had become the longest-running consecutive science fiction series ever on U.S. broadcast television. This record was later surpassed by "Stargate SG-1" in 2007 and "Smallville" in 2011. The debut episode of the 2016 revival, "My Struggle", first aired on January 24, 2016, and was watched by 16.19 million viewers. In terms of viewers, this made it the highest-rated episode of "The X-Files" to air since the eighth-season episode "This Is Not Happening" in 2001, which was watched by 16.9 million viewers. When DVR and streaming are taken into account, "My Struggle" was seen by 21.4 million viewers, scoring a 7.1 Nielsen rating. The season ended with "My Struggle II", which was viewed by 7.60 million viewers. In total, the season was viewed by an average of 13.6 million viewers; it ranked as the seventh most-watched television series of the 2015–16 year, making it the highest-ranked season of "The X-Files" to ever air. A few years later, the premiere episode of the eleventh season, "My Struggle III", was watched by 5.15 million viewers. This was a decrease from the previous season's debut; it was also the lowest-rated premiere for any season of the show. The season concluded with "My Struggle IV", which was seen by 3.43 million viewers, which was also a decrease from the previous season. 
"My Struggle IV", which became the de facto finale for the series, was also the show's lowest-rated finale. In total, the season was viewed by an average of 5.34 million viewers, and it ranked as the 91st most-watched television series of the 2018–19 year. After several successful seasons, Carter wanted to tell the story of the series on a wider scale, which ultimately turned into a feature film. He later explained that the main problem was to create a story that would not require the viewer to be familiar with the broadcast series. The movie was filmed in the hiatus between the show's fourth and fifth seasons and re-shoots were conducted during the filming of the show's fifth season. Due to the demands on the actors' schedules, some episodes of the fifth season focused on just one of the two leads. On June 19, 1998, the eponymous "The X-Files", also known as "The X-Files: Fight the Future" was released. The crew intended the movie to be a continuation of the season five finale "The End", but was also meant to stand on its own. The season six premiere, "The Beginning", began where the film ended. The film was written by Carter and Spotnitz and directed by series regular Rob Bowman. In addition to Mulder, Scully, Skinner and Cigarette Smoking Man, it featured guest appearances by Martin Landau, Armin Mueller-Stahl and Blythe Danner, who appeared only in the film. It also featured the last appearance of John Neville as the Well-Manicured Man. Jeffrey Spender, Diana Fowley, Alex Krycek and Gibson Praise—characters who had been introduced in the fifth-season finale and/or were integral to the television series—do not appear in the film. Although the film had a strong domestic opening and received mostly positive reviews from critics, attendance dropped sharply after the first weekend. Although it failed to make a profit during its theatrical release—due in part to its large promotional budget—"The X-Files" film was more successful internationally. 
Eventually, the worldwide theatrical box office total reached $189 million. The film's production cost and ad budgets were each close to $66 million. Unlike the series, Anderson and Duchovny received equal pay for the film. In November 2001, Carter decided to pursue a second film adaptation. Production was slated to begin after the ninth season, with a projected release in December 2003. In April 2002, Carter reiterated his and the studio's desire to do a sequel film. He planned to write the script over the summer and begin production in spring or summer 2003 for a 2004 release. Carter described the film as independent of the series, saying "We're looking at the movies as stand-alones. They're not necessarily going to have to deal with the mythology." Bowman, who had directed various episodes of "The X-Files" in the past as well as the 1998 film, expressed an interest in the sequel, but Carter took the job. Spotnitz co-authored the script with Carter. "The X-Files: I Want to Believe" became the second film based on the series, after 1998's "The X-Files: Fight the Future". Filming began in December 2007 in Vancouver and finished on March 11, 2008. The film was released in the United States on July 25, 2008. In an interview with "Entertainment Weekly", Carter said that if "I Want to Believe" proved successful, he would propose a third movie that would return to the television series' mythology and focus on the alien invasion foretold within the series, due to occur in December 2012. The film grossed $4 million on its opening day in the United States. It opened fourth on the U.S. weekend box office chart, with a gross of $10.2 million. By the end of its theatrical run, it had grossed $20,982,478 domestically and an additional $47,373,805 internationally, for a total worldwide gross of $68,369,434. Among 2008 domestic releases, it finished in 114th place. 
The film's stars both claimed that the timing of the movie's release, a week after the highly popular Batman film "The Dark Knight", negatively affected its success. The film received mixed to negative reviews. Metacritic, which assigns a weighted average score out of 100 based on reviews from mainstream film critics, reported "mixed or average" reviews, with an average score of 47 based on 33 reviews. Rotten Tomatoes reported that 32% of 160 listed film critics gave the film a positive review, with an average rating of 4.9 out of 10. The website's critics' consensus states: "The chemistry between leads David Duchovny and Gillian Anderson do live up to "The X-Files" televised legacy, but the roving plot and droning routines make it hard to identify just what we're meant to believe in." In several interviews around the release, Carter said that if the "X-Files: I Want to Believe" film proved successful at the box office, a third installment would be made going back to the TV series' mythology, focusing specifically on the alien invasion and colonization of Earth foretold in the ninth-season finale, due to occur on December 22, 2012. In an October 2009 interview, David Duchovny likewise said he wanted to do a 2012 "X-Files" movie, but did not know if he would get the chance. Anderson stated in August 2012 that a third "X-Files" film is "looking pretty good". As of July 2013, Fox had not approved the movie, although Carter, Spotnitz, Duchovny and Anderson expressed interest. At the New York Comic Con held October 10–13, 2013, Duchovny and Anderson reaffirmed that they and Carter were interested in making a third film, with Anderson saying "If it takes fan encouragement to get Fox interested in that, then I guess that's what it would be." On January 17, 2015, Fox confirmed that they were looking at the possibility of bringing "The X-Files" back, not as a movie, but as a limited run television season. 
Fox chairman Dana Walden told reporters that "conversations so far have only been logistical and are in very early stages" and that the series would only go forward if Carter, Anderson, and Duchovny were all on board, and that it was a matter of ensuring all of their timetables are open. On March 24, 2015, it was confirmed the series would return with series creator Chris Carter and lead actors David Duchovny and Gillian Anderson. It premiered on January 24, 2016. A year later, on April 20, 2017, Fox officially announced that "The X-Files" would be returning for an eleventh season of ten episodes, which premiered on January 3, 2018. In January 2018, Gillian Anderson confirmed that season 11 would be her final season of "The X-Files". The following month, Carter stated in an interview that he could see the show continuing without Anderson. In May 2018, Fox's co-CEO Gary Newman commented that "there are no plans to do another season at the moment." On September 24, 1996, the first "wave" set of "The X-Files" VHS tapes were released. Wave sets were released covering the first through fourth seasons. Each "wave" was three VHS tapes, each containing two episodes, for a total of six episodes per wave and two waves per season. For example, the home video release of wave one drew from the first half of the first season: "Pilot"/"Deep Throat", "Conduit"/"Ice" and "Fallen Angel"/"Eve". Each wave was also available in a boxed set. Unlike later DVD season releases, the tapes did not include every episode from the seasons. Ultimately twelve episodes—approximately half the total number aired—were selected by Carter to represent each season, including nearly all "mythology arc" episodes and selected standalone episodes. Carter briefly introduced each episode with an explanation of why the episode was chosen and anecdotes from the set. These clips were later included on the full season DVDs. Wave eight, covering the last part of the fourth season, was the last to be released. 
No Carter interviews appeared on DVDs for later seasons. Many of the waves had collectible cards for each episode. All nine seasons were released on DVD along with the two films. The entire series was re-released on DVD in early 2006, in a "slimmer" package. The first five slim case versions did not come with some bonus materials that were featured in the original fold-out versions. However, seasons six, seven, eight and nine all contained the bonus materials found in the original versions.
https://en.wikipedia.org/wiki?curid=30304
Third World The term "Third World" arose during the Cold War to define countries that remained non-aligned with either NATO or the Warsaw Pact. The United States, Canada, Japan, South Korea, Western European nations and their allies represented the First World, while the Soviet Union, China, Cuba, and their allies represented the Second World. This terminology provided a way of broadly categorizing the nations of the Earth into three groups based on political and economic divisions. Since the fall of the Soviet Union and the end of the Cold War, the term "Third World" has decreased in use. It is being replaced with terms such as developing countries, least developed countries or the Global South. The concept itself has become outdated as it no longer represents the current political or economic state of the world. The Third World was normally seen to include many countries with colonial pasts in Africa, Latin America, Oceania and Asia. It was also sometimes taken as synonymous with countries in the Non-Aligned Movement. In the dependency theory of thinkers like Raúl Prebisch, Walter Rodney, Theotonio dos Santos, and Andre Gunder Frank, the Third World has also been connected to the world-systemic economic division as "periphery" countries dominated by the countries comprising the economic "core". Due to the complex history of evolving meanings and contexts, there is no clear or agreed-upon definition of the Third World. Some countries in the Communist Bloc, such as Cuba, were often regarded as "Third World". Because many Third World countries were economically poor and non-industrialized, it became a stereotype to refer to poor countries as "third world countries", yet the "Third World" term is also often taken to include newly industrialized countries like Brazil, India and China, now more commonly referred to as part of BRIC. 
Historically, some European countries were non-aligned and a few of these were and are very prosperous, including Ireland, Austria, Sweden, Finland, Switzerland and Yugoslavia. French demographer, anthropologist and historian Alfred Sauvy, in an article published in the French magazine "L'Observateur", August 14, 1952, coined the term "Third World" (French: "Tiers Monde"), referring to countries that were unaligned with either the Communist Soviet bloc or the Capitalist NATO bloc during the Cold War. His usage was a reference to the Third Estate, the commoners of France who, before and during the French Revolution, opposed the clergy and nobles, who composed the First Estate and Second Estate, respectively. Sauvy wrote, "This third world ignored, exploited, despised like the third estate also wants to be something." He conveyed the concept of political non-alignment with either the capitalist or communist bloc. The "Three Worlds Theory" developed by Mao Zedong is different from the Western theory of the Three Worlds or Third World. For example, in the Western theory, China and India belong respectively to the second and third worlds, but in Mao's theory both China and India are part of the Third World which he defined as consisting of exploited nations. Third Worldism is a political movement that argues for the unity of third-world nations against first-world influence and the principle of non-interference in other countries' domestic affairs. Groups most notable for expressing and exercising this idea are the Non-Aligned Movement (NAM) and the Group of 77 which provide a base for relations and diplomacy between not just the third-world countries, but between the third-world and the first and second worlds. The notion has been criticized as providing a fig leaf for human-rights violations and political repression by dictatorships. Since 1990, this term has been redefined to make it more correct politically. 
Initially, the term "third world" meant that a nation was "under-developed". Today, however, that label has been replaced by "developing". The world today is far more plural than it used to be, so the Third World is not just an economic state. These nations have overcome many setbacks and are now developing rapidly. Thus, the categorization has become anachronistic in a diverse world. Most Third World countries are former colonies. Having gained independence, many of these countries, especially smaller ones, were faced with the challenges of nation- and institution-building on their own for the first time. Due to this common background, many of these nations were "developing" in economic terms for most of the 20th century, and many still are. This term, used today, generally denotes countries that have not developed to the same levels as OECD countries, and are thus in the process of "developing". In the 1980s, economist Peter Bauer offered a competing definition for the term "Third World". He claimed that the attachment of Third World status to a particular country was not based on any stable economic or political criteria, and was a mostly arbitrary process. The large diversity of countries considered part of the Third World — from Indonesia to Afghanistan — ranged widely from economically primitive to economically advanced and from politically non-aligned to Soviet- or Western-leaning. An argument could even be made that parts of the U.S. resemble the Third World. The only characteristic that Bauer found common in all Third World countries was that their governments "demand and receive Western aid," the giving of which he strongly opposed. Thus, the aggregate term "Third World" was challenged as misleading even during the Cold War period, because it had no consistent or collective identity among the countries it supposedly encompassed. 
During the Cold War, unaligned countries of the Third World were seen as potential allies by both the First and Second World. Therefore, the United States and the Soviet Union went to great lengths to establish connections in these countries by offering economic and military support to gain strategically located alliances (e.g., United States in Vietnam or Soviet Union in Cuba). By the end of the Cold War, many Third World countries had adopted capitalist or communist economic models and continued to receive support from the side they had chosen. Throughout the Cold War and beyond, the countries of the Third World have been the priority recipients of Western foreign aid and the focus of economic development through mainstream theories such as modernization theory and dependency theory. By the end of the 1960s, the idea of the Third World came to represent countries in Africa, Asia and Latin America that were considered underdeveloped by the West based on a variety of characteristics (low economic development, low life expectancy, high rates of poverty and disease, etc.). These countries became the targets for aid and support from governments, NGOs and individuals from wealthier nations. One popular model, known as Rostow's stages of growth, argued that development took place in five stages (Traditional Society; Pre-conditions for Take-off; Take-off; Drive to Maturity; Age of High Mass Consumption). W. W. Rostow argued that "Take-off" was the critical stage that the Third World was missing or struggling with. Thus, foreign aid was needed to help kick-start industrialization and economic growth in these countries. There is often a clear distinction between the First and Third Worlds, and when talking about the Global North and the Global South, the two usually go hand in hand. 
People refer to the two as "Third World/South" and "First World/North" because the Global North is more affluent and developed, whereas the Global South is less developed and often poorer. To counter this mode of thought, some scholars began proposing the idea of a change in world dynamics that began in the late 1980s, and termed it the Great Convergence. As Jack A. Goldstone and his colleagues put it, "in the twentieth century, the Great Divergence peaked before the First World War and continued until the early 1970s, then, after two decades of indeterminate fluctuations, in the late 1980s it was replaced by the Great Convergence as the majority of Third World countries reached economic growth rates significantly higher than those in most First World countries". Others have observed a return to Cold War-era alignments (MacKinnon, 2007; Lucas, 2008), this time with substantial changes between 1990 and 2015 in geography, the world economy and relationship dynamics between current and emerging world powers; not necessarily redefining the classic meaning of "First", "Second", and "Third World" terms, but rather which countries belong to them by way of association to which world power or coalition of countries — such as G7, the European Union, OECD; G20, OPEC, BRICS, ASEAN; the African Union, and the Eurasian Union. Since 1990 the term "Third World" has been redefined in many evolving dictionaries in several languages to refer to countries considered to be underdeveloped economically and/or socially. From a "political correctness" standpoint, the term "Third World" may be considered outdated: the concept is mostly a historical one and cannot fully capture what is meant by developing and less-developed countries today. 
Around the early 1960s, the term "underdeveloped countries" emerged, with "Third World" serving as its synonym; but once politicians began using it officially, "underdeveloped countries" was soon replaced by "developing" and "less-developed countries," because the former conveyed hostility and disrespect, and the Third World was often characterized with stereotypes. The whole "Four Worlds" system of classification has also been described as derogatory because the standard mainly focused on each nation's Gross National Product. As the Cold War ended and many new sovereign states formed, the term "Third World" became less useful. Nevertheless, it remains in popular use around the world, including the Latin American Spanish-language media, where "tercermundista" (an adjective) can refer to not just lower levels of development but also something of low quality or in other ways deficient. The world is far more plural than it used to be, and the term "Third World" can no longer symbolize the current political or economic state of these nations. The general definition of the Third World can be traced back to the Cold War, when nations that positioned themselves as neutral and independent were considered Third World countries; these countries were typically defined by high poverty rates, lack of resources, and unstable financial standing. However, with the rapid pace of modernization and globalization, countries once considered Third World, such as Brazil, India, and Indonesia, have achieved large economic growth and can no longer be defined by poor economic status or low GNP today. 
The differences among nations of the Third World have continued to grow over time, and it is difficult to use the Third World to define and organize groups of nations based on common political arrangements, since most countries now live under diverse creeds; Mexico, El Salvador, and Singapore, for instance, each have their own political systems. The Third World categorization has become anachronistic, since its political classification and economic assumptions are too distinct to apply to today's society. By Third World standards, any region of the world could be categorized into one of four types of relationship between state and society, ending in four outcomes: praetorianism, multi-authority, quasi-democracy and viable democracy. However, political culture is never limited by such rules, and the concept of the Third World can thus be circumscribed. The Third World is often broadly connected to colonialism and poverty, but through decolonization and evolution in transport and communications, the world is shrinking and nations are forming strong interlinkages with one another, so that the "Four Worlds" system is being left behind and the world is increasingly seen as a united one. Moreover, the "Four Worlds" categorization also reinforces competition and notions of superiority among nations. The Third World is a controversial term, and "political correctness" in some media and academic settings has ensured that it is no longer used very often, although many countries still share similar developmental experiences. It has been partially replaced by "developing countries" and "less-developed countries," which do not carry the obvious negative implications of "Third World". However, the Latin American media continue to frequently employ the equivalent Spanish-language expression, "Tercer Mundo".
https://en.wikipedia.org/wiki?curid=30305
Twin Peaks Twin Peaks is an American surreal mystery horror drama television series created by Mark Frost and David Lynch that premiered on April 8, 1990, on ABC, running until its cancellation after its second season in 1991. The show gained a devoted cult following and has been referenced in a wide variety of media. In subsequent years, "Twin Peaks" has often been listed among the greatest television series of all time, and is considered a landmark turning point in television drama. The series follows an investigation headed by FBI Special Agent Dale Cooper (Kyle MacLachlan) and local Sheriff Harry S. Truman (Michael Ontkean) into the murder of homecoming queen Laura Palmer (Sheryl Lee) in the fictional town of Twin Peaks, Washington. The show's narrative draws on elements of detective fiction, but its uncanny tone, supernatural elements, and campy, melodramatic portrayal of eccentric characters also draw on American soap opera and horror tropes. Like much of Lynch's work, it is distinguished by surrealism, offbeat humor, and distinctive cinematography. The score was composed by Angelo Badalamenti with Lynch. The success of the show sparked a media franchise, and the series was followed by a 1992 feature film, "Twin Peaks: Fire Walk with Me", that serves as a prequel to the series. Additional tie-in books were also released. Following a hiatus of over 25 years, the show returned in 2017 with a third season on Showtime. The season was directed by Lynch and written by Lynch and Frost, and starred MacLachlan alongside other original cast members. In 1989, local logger Pete Martell discovers a naked corpse wrapped in plastic on the bank of a river outside the town of Twin Peaks, Washington. When Sheriff Harry S. Truman, his deputies, and doctor Will Hayward arrive, the body is identified as high school senior and homecoming queen Laura Palmer. A badly injured second girl, Ronette Pulaski, is discovered in a fugue state. FBI Special Agent Dale Cooper is called in to investigate. 
Cooper's initial examination of Laura's body reveals a tiny typed letter "R" inserted under her fingernail. Cooper informs the community that Laura's death matches the signature of a killer who murdered another girl in southwestern Washington the previous year, and that evidence indicates the killer lives in Twin Peaks. The authorities discover through Laura's diary that she had been living a double life. She was cheating on her boyfriend, football captain Bobby Briggs, with biker James Hurley, and prostituting herself with the help of truck driver Leo Johnson and drug dealer Jacques Renault. Laura was also addicted to cocaine, which she obtained by coercing Bobby into doing business with Jacques. Laura's father, attorney Leland Palmer, suffers a nervous breakdown after her death. Her best friend Donna Hayward begins a relationship with James. With the help of Laura's cousin Maddy Ferguson, Donna and James discover that Laura's psychiatrist, Dr. Lawrence Jacoby, was obsessed with her, but he is proven innocent of the murder. Hotelier Ben Horne, the richest man in Twin Peaks, plans to destroy the town's lumber mill along with its owner Josie Packard, and murder his lover (Josie's sister-in-law), Catherine Martell (Piper Laurie), so he can purchase the land at a reduced price and complete a development project called Ghostwood. Horne's sultry, troubled daughter, Audrey, becomes infatuated with Agent Cooper and spies on her father for clues in an effort to gain his affections. Cooper has a dream in which he is approached by a one-armed otherworldly being who calls himself MIKE. MIKE says that Laura's murderer is a similar entity, Killer BOB, a feral, denim-clad man with long gray hair. Cooper finds himself decades older with Laura and a dwarf in a red business suit, who engages in coded dialogue with Cooper. The next morning, Cooper tells Truman that, if he can decipher the dream, he will know who killed Laura. 
Cooper and the sheriff's department find the one-armed man from Cooper's dream, a traveling shoe salesman named Phillip Gerard. Gerard knows a Bob, the veterinarian who treats Renault's pet bird. Cooper interprets these events to mean that Renault is the murderer, and with Truman's help, tracks Renault to One-Eyed Jack's, a brothel owned by Horne across the border in Canada. He lures Jacques Renault back onto U.S. soil to arrest him, but Renault is shot while trying to escape and is hospitalized. Leland, learning that Renault has been arrested, sneaks into the hospital and murders him. The same night, Horne orders Leo to burn down the lumber mill with Catherine trapped inside and has Leo gunned down by Hank Jennings to ensure Leo's silence. Cooper returns to his room following Jacques's arrest and is shot by a masked gunman. Lying hurt in his hotel room, Cooper has a vision in which a giant appears and reveals three clues: "There is a man in a smiling bag," "the owls are not what they seem," and "without chemicals, he points." He takes a gold ring off Cooper's finger and explains that when Cooper understands the three premonitions, his ring will be returned. Leo Johnson survives his shooting but is left brain-damaged. Catherine Martell disappears, presumed killed in the mill fire. Leland Palmer, whose hair has turned white overnight, returns to work but behaves erratically. Cooper deduces that the "man in the smiling bag" is the corpse of Jacques Renault in a body bag. MIKE is inhabiting the body of Phillip Gerard. His personality surfaces when Gerard forgoes the use of a certain drug. MIKE reveals that he and BOB once collaborated in killing humans and that BOB is similarly inhabiting a man in the town. Cooper and the sheriff's department use MIKE, in control of Gerard's body, to help find BOB ("without chemicals, he points"). Donna befriends an agoraphobic orchid grower named Harold Smith whom Laura entrusted with her second, secret diary. 
Harold catches Donna and Maddy attempting to steal the diary from him and hangs himself in despair. Cooper and the sheriff's department take possession of Laura's secret diary, and learn that BOB, a friend of her father's, had been sexually abusing her since childhood and she used drugs to cope. They initially suspect that the killer is Ben Horne and arrest him, but Leland Palmer is revealed to viewers to be BOB's host when he kills Maddy. Cooper begins to doubt Horne's guilt, so he gathers all of his suspects in the belief that he will receive a sign to help him identify the killer. The Giant appears and confirms that Leland is BOB's host and Laura's and Maddy's killer, giving Cooper back his ring. Cooper and Truman take Leland into custody. In control of Leland's body, BOB admits to a string of murders, before forcing Leland to commit suicide. As Leland dies, he is freed of BOB's influence and begs for forgiveness. BOB's spirit disappears into the woods in the form of an owl and the lawmen wonder if he will reappear. Cooper is set to leave Twin Peaks when he is framed for drug trafficking by Jean Renault and is suspended from the FBI. Renault holds Cooper responsible for the death of his brothers, Jacques and Bernard. Jean Renault is killed in a shootout with police, and Cooper is cleared of all charges. Windom Earle, Cooper's former mentor and FBI partner, escapes from a mental institution and comes to Twin Peaks. Cooper had previously been having an affair with Earle's wife, Caroline, while she was under his protection as a witness to a federal crime. Earle murdered Caroline and wounded Cooper. He now engages Cooper in a twisted game of chess during which Earle murders someone whenever a piece is captured. Investigating BOB's origin and whereabouts with the help of Major Garland Briggs, Cooper learns of the existence of the White Lodge and the Black Lodge, two extra-dimensional realms whose entrances are somewhere in the woods surrounding Twin Peaks. 
Catherine returns to town in yellowface, having survived the mill fire, and manipulates Ben Horne into signing the Ghostwood project over to her. Andrew Packard, Josie's husband, is revealed to be still alive while Josie Packard is revealed to be the person who shot Cooper at the end of the first season. Andrew forces Josie to confront his business rival and her tormentor from Hong Kong, the sinister Thomas Eckhardt. Josie kills Eckhardt, but she mysteriously dies when Truman and Cooper try to apprehend her. Cooper falls in love with a new arrival in town, Annie Blackburn. Earle captures the brain-damaged Leo for use as a henchman and abandons his chess game with Cooper. When Annie wins the Miss Twin Peaks contest, Earle kidnaps her and takes her to the entrance to the Black Lodge, whose power he seeks to use for himself. Through a series of clues Cooper discovers the entrance to the Black Lodge, which turns out to be the strange, red-curtained room from his dream. He is greeted by the Man From Another Place, the Giant, and Laura Palmer, who each give Cooper cryptic messages. Searching for Annie and Earle, Cooper encounters doppelgängers of various people, including Maddy Ferguson and Leland Palmer. Cooper finds Earle, who demands Cooper's soul in exchange for Annie's life. Cooper agrees but BOB appears and takes Earle's soul for himself. BOB then turns to Cooper, who is chased through the lodge by a doppelgänger of himself. Outside the lodge, Andrew Packard, Pete Martell and Audrey Horne are caught in an explosion at a bank vault, a trap laid by the dead Eckhardt. Cooper and Annie reappear in the woods, both injured. Annie is taken to the hospital but Cooper recovers in his room at the Great Northern Hotel. It becomes clear that the "Cooper" who emerged from the Lodge is in fact his doppelgänger, under BOB's control. He smashes his head into a bathroom mirror and laughs maniacally. On October 6, 2014, it was announced that a limited series would air on Showtime. 
David Lynch and Mark Frost wrote all the episodes, and Lynch directed. Frost emphasized that the new episodes are not a remake or reboot but a continuation of the series. The episodes are set in the present day, and the passage of 25 years is an important element in the plot. The third season is known as both "Twin Peaks: The Return" and "Twin Peaks: A Limited Event Series". Its darker tone has more in common with the film "Twin Peaks: Fire Walk with Me" than with the lighter episodes of the previous seasons. Most of the original cast returns, including Kyle MacLachlan, Mädchen Amick, Sherilyn Fenn, Sheryl Lee, Ray Wise, and several others. Additions include Jeremy Davies, Laura Dern, Robert Forster, Tim Roth, Jennifer Jason Leigh, Amanda Seyfried, Matthew Lillard, and Naomi Watts. The limited series began filming in September 2015 and was completed by April 2016. It was shot continuously from a single, long shooting script before being edited into separate episodes. The series premiered on May 21, 2017, and consists of 18 episodes. "Part 8" in particular was well received and covered excitedly in the press. Matt Zoller Seitz of "Vulture" declared "Part 8" the best television episode of 2017, calling it "the single most impressive episode of television drama I've seen in... 20 years". Since Season 3 ended in 2017, Lynch and Frost have expressed interest in making another season of "Twin Peaks". Lynch has been asked in several interviews if he would continue, once saying "I don't know, I have a box of ideas, and I'm working with producer Sabrina Sutherland, kind of trying to go through and see if there's any gold in those boxes." Lynch also said one more story was "calling to him" (involving the character of Carrie Page) but there "were disturbances." In a Reddit AMA on June 22, 2020, star Kyle MacLachlan said Cooper was his "favorite role of all time" and that he would "absolutely" return to another season "without even seeing the script."
In the 1980s, Mark Frost worked for three years as a writer for the television police drama "Hill Street Blues", which featured a large cast and extended story lines. Following his success with "The Elephant Man" (1980) and "Blue Velvet" (1986), David Lynch was hired by a Warner Bros. executive to direct a film about the life of Marilyn Monroe, based on the best-selling book "Goddess". Lynch recalls being "sort of interested. I loved the idea of this woman in trouble, but I didn't know if I liked it being a real story." Lynch and Frost first worked together on the "Goddess" screenplay, and although the project was dropped by Warner Bros., they became good friends. They went on to work as writer and director for "One Saliva Bubble", a film with Steve Martin attached to star, but it was never made either. Lynch's agent, Tony Krantz, encouraged him to do a television show. Lynch said, "Tony, I don't want to do a TV show". Krantz took Lynch to Nibblers restaurant in Los Angeles and said, "You should do a show about real life in America—your vision of America the same way you demonstrated it in "Blue Velvet"." Lynch got an "idea of a small-town thing", and though he and Frost were not keen on it, they decided to humor Krantz. Frost wanted to tell "a sort of Dickensian story about multiple lives in a contained area that could sort of go perpetually." Originally, the show was to be titled "North Dakota" and set in the Plains region of North Dakota. After Frost, Krantz, and Lynch rented a screening room in Beverly Hills and screened "Peyton Place", they decided to develop the town before its inhabitants. Due to the lack of forests and mountains in North Dakota, the title was changed from "North Dakota" to "Northwest Passage" (the title of the pilot episode), and the location to the Pacific Northwest, specifically Washington. They then drew a map and decided that there would be a lumber mill in the town. Then they came up with an image of a body washing up on the shore of a lake.
Lynch remembers, "We knew where everything was located and that helped us determine the prevailing atmosphere and what might happen there." Frost remembers that he and Lynch came up with the notion of the girl next door leading a "desperate double life" that would end in murder. The idea was inspired, in part, by the unsolved 1908 murder of Hazel Irene Drew in Sand Lake, New York. Lynch and Frost pitched the idea to ABC during the 1988 Writers Guild of America strike in a ten-minute meeting with the network's drama head, Chad Hoffman, with nothing more than this image and a concept. According to the director, the mystery of who killed Laura Palmer was initially going to be in the foreground, but would recede gradually as viewers got to know the other townsfolk and the problems they were having. Lynch and Frost wanted to mix a police investigation with a soap opera. ABC liked the idea and asked Lynch and Frost to write a screenplay for the pilot episode. They had been talking about the project for three months and wrote the screenplay in 10 days. Frost wrote more verbal characters, like Benjamin Horne, while Lynch was responsible for Agent Cooper. According to the director, "He says a lot of the things I say." ABC Entertainment President Brandon Stoddard ordered the two-hour pilot for a possible fall 1989 series. He left the position in March 1989 as Lynch went into production. They filmed the pilot for $4 million with an agreement with ABC that they would shoot an additional "ending" to it so that it could be sold directly to video in Europe as a feature film if the TV show was not picked up. ABC's Robert Iger and his creative team took over, saw the dailies, and met with Frost and Lynch to get the arc of the stories and characters. Although Iger liked the pilot, he had difficulty persuading the rest of the network executives. 
Iger suggested showing it to a more diverse, younger group, who liked it, and the executive subsequently convinced ABC to buy seven episodes at $1.1 million apiece. Some executives figured that the show would never get on the air or that it might run as a seven-hour mini-series, but Iger planned to schedule it for the spring. The final showdown occurred during a bi-coastal conference call between Iger and a room full of New York executives; Iger won, and "Twin Peaks" was on the air. Each episode took a week to shoot and after directing the second episode, Lynch went off to complete "Wild at Heart" while Frost wrote the remaining segments. Standards and Practices had a problem with only one scene from the first season: an extreme close-up in the pilot of Cooper's hand as he slid tweezers under Laura's fingernail and removed a tiny "R". They wanted the scene to be shorter because it made them uncomfortable, but Frost and Lynch refused and the scene remained. "Twin Peaks" features members of a loose ensemble of Lynch's favorite character actors, including Jack Nance, Kyle MacLachlan, Grace Zabriskie, and Everett McGill. Isabella Rossellini, who had worked with Lynch on "Blue Velvet" was originally cast as Giovanna Packard, but she dropped out of the production before shooting began on the pilot episode. The character was then reconceived as Josie Packard, of Chinese ethnicity, and the role given to actress Joan Chen. The cast includes several actors who had risen to fame in the 1950s and 1960s, including 1950s film stars Richard Beymer, Piper Laurie, and Russ Tamblyn. Other veteran actors included British actor James Booth ("Zulu"), former "The Mod Squad" star Peggy Lipton, and Michael Ontkean, who co-starred in the 1970s crime drama "The Rookies". Kyle MacLachlan was cast as Agent Dale Cooper. Stage actor Warren Frost was cast as Dr. Will Hayward. 
Due to budget constraints, Lynch intended to cast a local girl from Seattle as Laura Palmer, reportedly "just to play a dead girl." The local girl ended up being Sheryl Lee. Lynch stated, "But no one—not Mark, me, anyone—had any idea that she could act, or that she was going to be so powerful just being dead." And then, while Lynch shot the home movie that James takes of Donna and Laura, he realized that Lee had something special. "She did do another scene—the video with Donna on the picnic—and it was that scene that did it." As a result, Sheryl Lee became a semi-regular addition to the cast, appearing in flashbacks as Laura, and portraying another recurring character: Maddy Ferguson, Laura's similar-looking cousin. Phillip Gerard's appearance in the pilot episode was originally intended to be only a "kind of homage to "The Fugitive". The only thing he was gonna do was be in this elevator and walk out," according to David Lynch. However, when Lynch wrote the "Fire walk with me" speech, he imagined Al Strobel, who played Gerard, reciting it in the basement of the Twin Peaks hospital—a scene that appeared in the European version of the pilot episode, and surfaced later in Agent Cooper's dream sequence. Gerard's full name, Phillip Michael Gerard, is also a reference to Lieutenant Phillip Gerard, a character in "The Fugitive". Lynch met Michael J. Anderson in 1987. After seeing him in a short film, Lynch wanted to cast the actor in the title role in "Ronnie Rocket", but that project failed to get made. Richard Beymer was cast as Ben Horne because he had known Johanna Ray, Lynch's casting director. Lynch was familiar with Beymer's work in the 1961 film "West Side Story" and was surprised that Beymer was available for the role. Set dresser Frank Silva was cast as the mysterious "Bob".
Lynch himself recalls that the idea originated when he overheard Silva moving furniture around in the bedroom set, and then heard a woman warning Silva not to block himself in by moving furniture in front of the door. Lynch was struck with an image of Silva in the room. When he learned that Silva was an actor, he filmed two panning shots, one with Silva at the base of the bed, and one without; he did not yet know how he would use this material. Later that day, during the filming of Sarah Palmer having a vision, the camera operator told Lynch that the shot was ruined because "Frank [Silva] was reflected in the mirror." Lynch comments, "Things like this happen and make you start dreaming. And one thing leads to another, and if you let it, a whole other thing opens up." Lynch used the panning shot of Silva in the bedroom, and the shot featuring Silva's reflection, in the closing scenes of the European version of the pilot episode. Silva's reflection in the mirror can also be glimpsed during the scene of Sarah's vision at the end of the original pilot, but it is less clear. A close-up of Silva in the bedroom later became a significant image in episodes of the TV series. The score for "Twin Peaks" has received acclaim; "The Guardian" wrote that it "still marks the summit of TV soundtracks." In fall 1989, composer Angelo Badalamenti and Lynch created the score for the show. In 20 minutes they produced the signature theme for the series. Badalamenti called it the "Love Theme from Twin Peaks". Lynch told him, "You just wrote 75% of the score. It's the mood of the whole piece. It is "Twin Peaks"." While creating the score, Lynch often described the moods or emotions he wanted the music to evoke, and Badalamenti began to play the piano. Scenes dominated by young men are accompanied by music that Badalamenti called Cool Jazz.
The characters' masculinity was enhanced by finger-snapping, "cocktail-lounge electric piano, pulsing bass, and lightly brushed percussion." A handful of the motifs were borrowed from the Julee Cruise album "Floating into the Night", which was written in large part by Badalamenti and Lynch and was released in 1989. This album also serves as the soundtrack to another Lynch project, "Industrial Symphony No. 1", a live Cruise performance also featuring Michael J. Anderson ("The Man from Another Place"). The song "Falling" (sans vocals) became the theme to the show, and of the songs "Rockin' Back Inside My Heart", "The Nightingale", "The World Spins", and "Into the Night" (found in their full versions on the album), all but the last were used as Cruise's roadhouse performances during the show's run. The lyrics for all five songs were written by Lynch. A second volume of the soundtrack was released on October 30, 2007, to coincide with the Definitive Gold Box DVD set. In March 2011, Lynch began releasing "The Twin Peaks Archive", a collection of previously unavailable tracks from the series and the film, via his website. FBI Special Agent Dale Cooper states, in the pilot episode, that Twin Peaks is "five miles south of the Canadian border, and twelve miles west of the state line". This places it in the Salmo-Priest Wilderness. Lynch and Frost started their location search in Snoqualmie, Washington, on the recommendation of a friend of Frost. They found all of the locations that they had written into the pilot episode. The towns of Snoqualmie, North Bend and Fall City – which became the primary filming locations for stock "Twin Peaks" exterior footage – are about an hour's drive from Roslyn, Washington, the town used for the series "Northern Exposure". Many exterior scenes were filmed in wooded areas of Malibu, California. Most of the interior scenes were shot on standing sets in a San Fernando Valley warehouse.
The soap opera show-within-the-show "Invitation to Love" was not shot on a studio set, but in the Ennis House, an architectural landmark designed by Frank Lloyd Wright in the Hollywood area of Los Angeles. Mark Frost and David Lynch made use of repeating and sometimes mysterious motifs such as trees (especially firs and pines), coffee and doughnuts, cherry pie, owls, logs, ducks, water, fire — and numerous embedded references to other films and TV shows. During the filming of the scene in which Cooper first examines Laura's body, a malfunctioning fluorescent lamp above the table flickered constantly, but Lynch decided not to replace it, since he liked the disconcerting effect that it created. Cooper's dream at the end of the third episode, which became a driving plot point in the series' first season and ultimately held the key to the identity of Laura's murderer, was never scripted. The idea came to Lynch one afternoon after touching the side of a hot car left out in the sun: "I was leaning against a car—the front of me was leaning against this very warm car. My hands were on the roof and the metal was very hot. The Red Room scene leapt into my mind. 'Little Mike' was there, and he was speaking backwards... For the rest of the night I thought only about The Red Room." The footage was originally shot along with the pilot, to be used as the conclusion were it to be released as a feature film. When the series was picked up, Lynch decided to incorporate some of the footage; in the third episode, Cooper, narrating the dream, outlines the shot footage which Lynch did not incorporate, such as Mike shooting BOB and the fact that he is 25 years older when he meets Laura Palmer's spirit. In an attempt to avoid cancellation, the writers conceived the idea of Cooper being possessed by BOB and included it in the final episode, but the series was cancelled before the episode aired.
Before the one and a half hour pilot premiered on TV, a screening was held at the Museum of Broadcasting in Hollywood. Media analyst and advertising executive Paul Schulman said, "I don't think it has a chance of succeeding. It is not commercial, it is radically different from what we as viewers are accustomed to seeing, there's no one in the show to root for." The show's Thursday night time slot had not been a good one for soap operas, as both "Dynasty" and its short-lived spin-off "The Colbys" did poorly. "Twin Peaks" was also up against the hugely successful sitcom "Cheers". Initially, the show received a positive response from TV critics. Tom Shales, in "The Washington Post", wrote, ""Twin Peaks" disorients you in ways that small-screen productions seldom attempt. It's a pleasurable sensation, the floor dropping out and leaving one dangling." In "The New York Times", John J. O'Connor wrote, ""Twin Peaks" is not a send-up of the form. Mr. Lynch clearly savors the standard ingredients...but then the director adds his own peculiar touches, small passing details that suddenly, and often hilariously, thrust the commonplace out of kilter." "Entertainment Weekly" gave the show an "A+" rating and Ken Tucker wrote, "Plot is irrelevant; moments are everything. Lynch and Frost have mastered a way to make a weekly series endlessly interesting." Richard Zoglin in "Time" magazine said that it "may be the most hauntingly original work ever done for American TV." The two-hour pilot was the highest-rated movie for the 1989–90 season with a 22 rating and was viewed by 33% of the audience. In its first broadcast as a regular one-hour drama series, "Twin Peaks" scored ABC's highest ratings in four years in its 9:00 pm Thursday time slot. The show also reduced NBC's "Cheers"'s ratings. "Twin Peaks" had a 16.2 rating with each point equaling 921,000 homes with TVs. 
The episode also added new viewers because of what ABC's senior vice-president of research, Alan Wurtzel, called "the water cooler syndrome", in which people talk about the series the next day at work. But the show's third episode lost 14% of the audience that had tuned in a week before. That audience had dropped 30% from the show's first appearance on Thursday night. This was a result of competing against "Cheers", which appealed to the same demographic that watched "Twin Peaks". A production executive from the show spoke of being frustrated with the network's scheduling of the show. "The show is being banged around on Thursday night. If ABC had put it on Wednesday night it could have built on its initial success. ABC has put the show at risk." In response, the network aired the first-season finale on a Wednesday night at 10:00 pm instead of its usual 9:00 pm Thursday slot. The show achieved its best ratings since its third week on the air with a 12.6 rating and a 22 share of the audience. On May 22, 1990, it was announced that "Twin Peaks" would be renewed for a second season. During the first and second seasons, the search for Laura Palmer's killer served as the engine for the plot, and captured the public's imagination, although the creators admitted this was largely a MacGuffin; each episode was really about the interactions between the townsfolk. The unique (and often bizarre) personalities of each citizen formed a web of minutiae that ran contrary to the town's quaint appearance. Adding to the surreal atmosphere was the recurrence of Dale Cooper's dreams, in which the FBI agent is given clues to Laura's murder in a supernatural realm that may or may not be a product of his imagination. The first season contained only eight episodes (including the two-hour pilot episode), and was considered technically and artistically revolutionary for television at the time, and geared toward reaching the standards of film.
Critics have noted that "Twin Peaks" began the trend of accomplished cinematography now commonplace in today's television dramas. Lynch and Frost maintained tight control over the first season, handpicking all of the directors, including some Lynch had known from his days at the American Film Institute (e.g., Caleb Deschanel and Tim Hunter) and others referred to him by people he knew personally. Lynch and Frost's control lessened in the second season, corresponding with what is generally regarded as a decrease in the show's quality once the identity of Laura Palmer's murderer was revealed. The aforementioned "water cooler effect" put pressure on the show's creators to solve the mystery. Although they claimed to have known from the series' inception the identity of Laura's murderer, Lynch never wanted to solve the murder, while Frost felt that they had an obligation to the audience to solve it. This created tension between the two men. Its ambitious style, paranormal undertones, and engaging murder mystery made "Twin Peaks" an unexpected hit. Its characters, particularly MacLachlan's Dale Cooper, were unorthodox for a supposed crime drama, as was Cooper's method of interpreting his dreams to solve the crime. During its first season, the show's popularity reached its zenith, and elements of the program seeped into mainstream popular culture, prompting parodies, including one in the 16th-season premiere of "Saturday Night Live", hosted by MacLachlan.
For its first season, "Twin Peaks" received fourteen nominations at the 42nd Primetime Emmy Awards, for Outstanding Drama Series, Outstanding Lead Actor in a Drama Series (Kyle MacLachlan), Outstanding Lead Actress in a Drama Series (Piper Laurie), Outstanding Supporting Actress in a Drama Series (Sherilyn Fenn), Outstanding Directing in a Drama Series (David Lynch), Outstanding Writing in a Drama Series (David Lynch and Mark Frost), Outstanding Writing in a Drama Series (Harley Peyton), Outstanding Art Direction for a Series, Outstanding Achievement in Main Title Theme Music, Outstanding Achievement in Music Composition for a Series (Dramatic Underscore), Outstanding Achievement in Music and Lyrics, and Outstanding Sound Editing for a Series. Out of its fourteen nominations, it won for Outstanding Costume Design for a Series and Outstanding Editing for a Series – Single Camera Production. For its second season, it received four nominations at the 43rd Primetime Emmy Awards, for Outstanding Lead Actor in a Drama Series (Kyle MacLachlan), Outstanding Supporting Actress in a Drama Series (Piper Laurie), Outstanding Sound Editing for a Series, and Outstanding Sound Mixing for a Drama Series. At the 48th Golden Globe Awards, the series won for Best TV Series – Drama, Kyle MacLachlan won for Best Performance by an Actor in a TV Series – Drama, and Piper Laurie won for Best Performance by an Actress in a Supporting Role in a Series, Mini-Series or Motion Picture Made for TV, while Sherilyn Fenn was nominated in the same category as Laurie. The pilot episode was ranked 25th on "TV Guide"'s 1997 list of the 100 Greatest Episodes of All Time. It placed 49th on "Entertainment Weekly"'s "New TV Classics" list. In 2004 and 2007, "Twin Peaks" was ranked 20th and 24th on "TV Guide"'s list of the Top Cult Shows Ever, and in 2002, it was ranked 45th on the "Top 50 Television Programs of All Time" list by the same guide. In 2007, UK broadcaster Channel 4 ranked "Twin Peaks" 9th on their list of the "50 Greatest TV Dramas".
Also that year, "Time" included the show on their list of the "100 Best TV Shows of All-Time". "Empire" listed "Twin Peaks" as the 24th best TV show in their list of "The 50 Greatest TV Shows of All Time". In 2012, "Entertainment Weekly" listed the show at no. 12 in the "25 Best Cult TV Shows from the Past 25 Years", saying, "The show itself was only fitfully brilliant and ultimately unfulfilling, but the cult lives, fueled by nostalgia for the extraordinary pop phenomenon it inspired, for its significance to the medium (behold the big bang of auteur TV!), and for a sensuous strangeness that possesses you and never lets you go." The series has been nominated for the TCA Heritage Award six consecutive years since 2010. It was ranked 20th on "The Hollywood Reporter"'s list of Hollywood's 100 Favorite TV Shows. With the resolution of "Twin Peaks"' main drawing point (Laura Palmer's murder) in the middle of the second season, and with subsequent story lines becoming more obscure and drawn out, public interest began to wane. This discontent, coupled with ABC changing its timeslot on a number of occasions, led to a huge drop in ratings for a show that had been one of the most-watched television programs in the United States in 1990. Due to the Gulf War, "Twin Peaks" was moved from its usual time slot "for six weeks out of eight" in early 1991, according to Frost, preventing the show from maintaining audience interest. A week after the season's 15th episode placed 85th in the ratings out of 89 shows, ABC put "Twin Peaks" on indefinite hiatus, a move that usually leads to cancellation. An organized letter-writing campaign, dubbed COOP (Citizens Opposed to the Offing of "Peaks"), attempted to save the show from cancellation. The campaign was partly successful, as the season returned to airing on Thursday nights for four weeks from late March. The series then went on another hiatus, before the final two episodes of the season aired back-to-back on June 10.
According to Frost, the main storyline after the resolution of Laura Palmer's murder was planned to focus on what had been the second-strongest element of the first season for audiences: the relationship between Agent Cooper and Audrey Horne. Frost explained that Lara Flynn Boyle, who was romantically involved with Kyle MacLachlan at the time, had effectively vetoed the Audrey-Cooper relationship, forcing the writers to come up with alternative storylines to fill the gap. Sherilyn Fenn corroborated this claim in a 2014 interview, stating, "[Boyle] was mad that my character was getting more attention, so then Kyle started saying that his character shouldn't be with my character because it doesn't look good, 'cause I'm too young... I was not happy about it. It was stupid." This meant artificially extending secondary storylines, such as the one involving James Hurley and Evelyn Marsh, to fill the space. After ratings began to decline, Agent Cooper was given a new love interest, Annie Blackburn (Heather Graham), to replace the writers' intended romance between him and Audrey Horne. Despite ending on a deliberate audience-baiting cliffhanger, the series finale did not sufficiently boost interest, and the show was not renewed for a third season, leaving the cliffhanger unresolved. Lynch expressed his regret at having resolved the Laura Palmer murder, saying he and Frost had never intended for the series to answer the question and that doing so "killed the goose that laid the golden eggs". Lynch blamed network pressure for the decision to resolve the Palmer storyline prematurely. Frost agreed, noting that people at the network had in fact wanted the killer to be revealed by the end of season one. In 1993, cable channel Bravo acquired the license to rerun the entire series, which began airing in June 1993. For these reruns, Lynch added introductions to each episode featuring the Log Lady and her cryptic musings.
Looking back, Frost has admitted that he wished he and Lynch had "worked out a smoother transition" between storylines and that the Laura Palmer story was a "tough act to follow". Regarding the second season, Frost felt that "perhaps the storytelling wasn't quite as taut or as fraught with emotion". Writing for "The Atlantic" in 2016, Mike Mariani wrote that "It would be tough to look at the roster of television shows any given season without finding several that owe a creative debt to "Twin Peaks"," stating that "Lynch's manipulation of the uncanny, his surreal non-sequiturs, his black humor, and his trademark ominous tracking shots can be felt in a variety of contemporary hit shows." In 2010, the television series "Psych" paid tribute to the series by reuniting some of the cast in the fifth-season episode, "Dual Spires". The episode's plot is an homage to the "Twin Peaks" pilot, where the characters of "Psych" investigate the death of a young girl in a small town called "Dual Spires". The episode also contains several references to the original show. "Twin Peaks" actors who guest star in the episode are Sherilyn Fenn, Sheryl Lee, Dana Ashbrook, Robyn Lively, Lenny Von Dohlen, Catherine E. Coulson and Ray Wise. Prior to the airing of the episode, a special event at the Paley Center for Media was held where the actors from both shows discussed the episode. Reviewers and fans of the four seasons of Veena Sud's U.S. TV series, "The Killing", have noted similarities to and elements borrowed from Lynch's "Fire Walk with Me" and "Twin Peaks", and compared Sud's and Lynch's works. Carlton Cuse, creator of "Bates Motel", cited "Twin Peaks" as a key inspiration for his series, stating: "We pretty much ripped off "Twin Peaks"... If you wanted to get that confession, the answer is yes. I loved that show. They only did 30 episodes. Kerry [Ehrin] and I thought we'd do the 70 that are missing."
"Twin Peaks" served as an inspiration for the 1993 video game "The Legend of Zelda: Link's Awakening", with director Takashi Tezuka citing the series as the main factor for the creation of the "suspicious" characters that populate the game, as well as the mystery elements of the story. The show has also influenced a number of survival horror and psychological thriller video games—most notably "Alan Wake", "Deadly Premonition", "Silent Hill", and "Max Payne". The 1998 open world adventure video game "Mizzurna Falls" was reminiscent of, and a homage to, "Twin Peaks". The American animated show "Gravity Falls" repeatedly referenced the Black Lodge along with other elements of "Twin Peaks" throughout its run. "Riverdale", an American teen drama, is noted for its many homages to "Twin Peaks". Along with many thematic similarities and direct references, Mädchen Amick appears in both series. In an interview promoting the second season of "Riverdale", showrunner Roberto Aguirre-Sacasa remarked that "all roads on "Riverdale" lead back to "Twin Peaks"." The song "Laura Palmer" by the band Bastille was inspired by the "slightly weird, eerie" atmosphere of the show. The series was released on VHS in a six-tape collection on April 16, 1995; however, it did not include the original pilot episode. The series was also released on LaserDisc in a sixteen-disc set in 1991 in Japan. The same set was later released in the U.S. across four different volumes from 1993–94; neither release contained the original pilot nor "". On December 18, 2001, the first season (episodes 1–7, minus the pilot) of "Twin Peaks" was released on DVD in Region 1 by Artisan Entertainment. The box set featured digitally remastered video and was noted for being the first TV series to have its audio track redone in DTS. The second season release was postponed several times, and the release was originally canceled in 2003 by Artisan due to low sales figures for the season 1 DVD.
The second season was finally released in the United States and Canada on April 3, 2007, via Paramount Pictures Home Entertainment/CBS DVD. On October 30, 2007, the broadcast version of the pilot finally received a legitimate U.S. release as part of the "Twin Peaks" "Definitive Gold Box Edition". This set includes both the original U.S. network broadcast and international versions of the pilot. The set also includes all episodes from both seasons, deleted scenes for both seasons, and a feature-length retrospective documentary. "Entertainment Weekly" gave the box set a "B+" rating and wrote, "There are numerous fascinatingly frank mini-docs here, including interviews with many "Peaks" participants; together, they offer one of the best available portraits of how a TV hit can go off the rails". In July 2013, it was revealed that a Blu-ray version of the complete series would be released. In January 2014, Lynch confirmed the Blu-ray release and that it would contain the pilot, season 1, season 2, and new special features, and possibly the film. It was announced on May 15, 2014, that the Blu-ray of the complete series of "Twin Peaks" and the film containing over 90 minutes of deleted scenes would be released on July 29, 2014. "Twin Peaks: From Z to A", a 21-disc limited edition Blu-ray box set, which includes all the television episodes, "Fire Walk with Me", "The Missing Pieces", and previously released special features, as well as six hours of new behind-the-scenes content and 4K versions of the original pilot and episode 8 from "The Return", was released on December 10, 2019. Additionally, "Twin Peaks: The Television Collection", a Blu-ray and DVD collection of the original series, "The Return", and all previously released special features, was released on October 15, 2019. Online, the series is available in full through the subscription CBS All Access service, as well as through Showtime's "Anytime" service for pay-TV subscribers and Showtime's standalone over-the-top service.
The original series is available for HD streaming via both Hulu and Netflix in the U.S. Hulu also offers "The Return", the 18-episode continuation originally aired on Showtime, as an additional-cost subscription option for viewing some of Showtime's programming. During the show's second season, Pocket Books released three official tie-in books, each authored by the show's creators (or their family), which offer a wealth of backstory. "The Secret Diary of Laura Palmer", written by Lynch's daughter Jennifer Lynch, is the diary as seen in the series and written by Laura, chronicling her thoughts from her twelfth birthday to the days leading up to her death. Frost's brother Scott wrote "The Autobiography of F.B.I. Special Agent Dale Cooper: My Life, My Tapes". Kyle MacLachlan also recorded "Diane: The Twin Peaks Tapes of Agent Cooper", which combined audio tracks from various episodes of the series with newly recorded monologues. "Welcome to Twin Peaks: An Access Guide to the Town" offers information about the history, flora, fauna, and culture of the fictitious town. "Twin Peaks: Visual Soundtrack" is a LaserDisc that plays like an elaborate music video. The show's entire soundtrack album is played over silent video footage shot by a Japanese TV crew visiting the Snoqualmie, Washington, locations where the series was shot. "The Secret History of Twin Peaks", a novel by series co-creator Mark Frost, "places the unexplained phenomena that unfolded in Twin Peaks in a layered, wide-ranging history, beginning with the journals of Lewis and Clark and ending with the shocking events that closed the finale." It was published on October 18, 2016. Its follow-up, "Twin Peaks: The Final Dossier", also written by Mark Frost, was released on October 31, 2017. The novel fills in details of the 25 years between the second and third seasons, and expands on some of the mysteries raised in the new episodes. 
"Twin Peaks VR", a virtual reality game developed by Collider Games and Showtime in collaboration with David Lynch, was released on December 13, 2019. Players can explore familiar locations while solving puzzles to help Special Agent Cooper and Gordon Cole. The game is available on Oculus Rift, Vive and Valve Index. The 1992 film "Twin Peaks: Fire Walk with Me" is a prequel to the TV series. It tells of the investigation into the murder of Teresa Banks and the last seven days in the life of Laura Palmer. Director David Lynch and most of the television cast returned for the film, with the notable exceptions of Lara Flynn Boyle, who declined to return as Laura's best friend Donna Hayward and was replaced by Moira Kelly, and Sherilyn Fenn, who was unavailable due to scheduling conflicts. Kyle MacLachlan returned reluctantly, as he wanted to avoid typecasting, so his presence in the film is smaller than originally planned. Lynch originally shot about five hours of footage that was subsequently cut down to two hours and fourteen minutes. Most of the deleted scenes feature additional characters from the television series who ultimately did not appear in the finished film. Around ninety minutes of these scenes are included in the complete series Blu-ray that was released on July 29, 2014. "Fire Walk with Me" was initially poorly received, especially in comparison to the series. It was greeted at the 1992 Cannes Film Festival with booing from the audience and received mixed reviews from American critics. It grossed a total of US$1.8 million in 691 theaters in its opening weekend and went on to gross a total of $4.1 million in North America.
https://en.wikipedia.org/wiki?curid=30307
Thallium Thallium is a chemical element with the symbol Tl and atomic number 81. It is a gray post-transition metal that is not found free in nature. When isolated, thallium resembles tin, but discolors when exposed to air. Chemists William Crookes and Claude-Auguste Lamy discovered thallium independently in 1861, in residues of sulfuric acid production. Both used the newly developed method of flame spectroscopy, in which thallium produces a notable green spectral line. Thallium, from Greek θαλλός (thallós), meaning "a green shoot or twig", was named by Crookes. It was isolated by both Lamy and Crookes in 1862: Lamy by electrolysis, and Crookes by precipitation and melting of the resultant powder. Crookes exhibited it as a powder precipitated by zinc at the International Exhibition, which opened on 1 May that year. Thallium tends to form the +3 and +1 oxidation states. The +3 state resembles that of the other elements in group 13 (boron, aluminium, gallium, indium). However, the +1 state, which is far more prominent in thallium than the elements above it, recalls the chemistry of alkali metals, and thallium(I) ions are found geologically mostly in potassium-based ores, and (when ingested) are handled in many ways like potassium ions (K+) by ion pumps in living cells. Commercially, thallium is produced not from potassium ores, but as a byproduct from refining of heavy-metal sulfide ores. Approximately 60–70% of thallium production is used in the electronics industry, and the remainder is used in the pharmaceutical industry and in glass manufacturing. It is also used in infrared detectors. The radioisotope thallium-201 (as the soluble chloride TlCl) is used in small amounts as an agent in a nuclear medicine scan, during one type of nuclear cardiac stress test. Soluble thallium salts (many of which are nearly tasteless) are toxic, and they were historically used in rat poisons and insecticides. 
Use of these compounds has been restricted or banned in many countries, because of their nonselective toxicity. Thallium poisoning usually results in hair loss, although this characteristic symptom does not always surface. Because of its historic popularity as a murder weapon, thallium has gained notoriety as "the poisoner's poison" and "inheritance powder" (alongside arsenic). A thallium atom has 81 electrons, arranged in the electron configuration [Xe] 4f¹⁴ 5d¹⁰ 6s² 6p¹; of these, the three outermost electrons in the sixth shell are valence electrons. Due to the inert pair effect, the 6s electron pair is relativistically stabilised, and these electrons are more difficult to involve in chemical bonding than they are in the lighter elements of the group. Thus, very few electrons are available for metallic bonding, similar to the neighboring elements mercury and lead, and hence thallium, like its congeners, is a soft, highly electrically conducting metal with a low melting point of 304 °C. A number of standard electrode potentials, depending on the reaction under study, are reported for thallium, reflecting the greatly decreased stability of the +3 oxidation state. Thallium is the first element in group 13 where the reduction of the +3 oxidation state to the +1 oxidation state is spontaneous under standard conditions. Since bond energies decrease down the group, with thallium, the energy released in forming two additional bonds and attaining the +3 state is not always enough to outweigh the energy needed to involve the 6s-electrons. Accordingly, thallium(I) oxide and hydroxide are more basic and thallium(III) oxide and hydroxide are more acidic, showing that thallium conforms to the general rule of elements being more electropositive in their lower oxidation states. Thallium is malleable and sectile enough to be cut with a knife at room temperature. It has a metallic luster that, when exposed to air, quickly tarnishes to a bluish-gray tinge, resembling lead. 
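As a sanity check, the electron count in the configuration quoted above can be tallied in a few lines of Python (the subshell occupancies come directly from the text; nothing else is assumed):

```python
# Tally the electrons in thallium's ground-state configuration
# [Xe] 4f14 5d10 6s2 6p1.
XENON_CORE = 54  # electrons in the closed [Xe] core

# Subshells filled beyond the xenon core: (subshell, occupancy).
outer_subshells = [("4f", 14), ("5d", 10), ("6s", 2), ("6p", 1)]

total = XENON_CORE + sum(n for _, n in outer_subshells)
print(total)  # 81, thallium's atomic number
```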
It may be preserved by immersion in oil. A heavy layer of oxide builds up on thallium if left in air. In the presence of water, thallium hydroxide is formed. Sulfuric and nitric acids dissolve thallium rapidly to make the sulfate and nitrate salts, while hydrochloric acid forms an insoluble thallium(I) chloride layer. Thallium has 41 isotopes which have atomic masses that range from 176 to 216. 203Tl and 205Tl are the only stable isotopes and make up nearly all of natural thallium. 204Tl is the most stable radioisotope, with a half-life of 3.78 years. It is made by the neutron activation of stable thallium in a nuclear reactor. The most useful radioisotope, 201Tl (half-life 73 hours), decays by electron capture, emitting X-rays (~70–80 keV), and photons of 135 and 167 keV in 10% total abundance; therefore, it has good imaging characteristics without excessive patient radiation dose. It is the most popular isotope used for thallium nuclear cardiac stress tests. Thallium(III) compounds resemble the corresponding aluminium(III) compounds. They are moderately strong oxidizing agents and are usually unstable, as illustrated by the positive reduction potential for the Tl3+/Tl couple. Some mixed-valence compounds are also known, such as Tl4O3 and TlCl2, which contain both thallium(I) and thallium(III). Thallium(III) oxide, Tl2O3, is a black solid which decomposes above 800 °C, forming the thallium(I) oxide and oxygen. The simplest possible thallium compound, thallane (TlH3), is too unstable to exist in bulk, both due to the instability of the +3 oxidation state as well as poor overlap of the valence 6s and 6p orbitals of thallium with the 1s orbital of hydrogen. The trihalides are more stable, although they are chemically distinct from those of the lighter group 13 elements and are still the least stable in the whole group. 
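The half-lives quoted above for 204Tl and 201Tl translate into remaining fractions via the standard exponential decay law; a minimal sketch (the function is generic decay arithmetic, not a nuclear-data library):

```python
def remaining_fraction(t, half_life):
    """Fraction of a radioisotope remaining after time t
    (t and half_life must be in the same units)."""
    return 2.0 ** (-t / half_life)

# 201Tl (half-life ~73 hours): one half-life leaves exactly half.
print(remaining_fraction(73, 73))              # 0.5
# 204Tl (half-life ~3.78 years): fraction left after a decade.
print(round(remaining_fraction(10, 3.78), 3))  # ~0.16
```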
For instance, thallium(III) fluoride, TlF3, has the β-BiF3 structure rather than that of the lighter group 13 trifluorides, and does not form the complex anion in aqueous solution. The trichloride and tribromide disproportionate just above room temperature to give the monohalides, and thallium triiodide contains the linear triiodide anion (I3−) and is actually a thallium(I) compound. Thallium(III) sesquichalcogenides do not exist. The thallium(I) halides are stable. In keeping with the large size of the Tl+ cation, the chloride and bromide have the caesium chloride structure, while the fluoride and iodide have distorted sodium chloride structures. Like the analogous silver compounds, TlCl, TlBr, and TlI are photosensitive and display poor solubility in water. The stability of thallium(I) compounds demonstrates its differences from the rest of the group: a stable oxide, hydroxide, and carbonate are known, as are many chalcogenides. The double salt has been shown to have hydroxyl-centred triangles of thallium as a recurring motif throughout its solid structure. The metalorganic compound thallium ethoxide (TlOEt, TlOC2H5) is a heavy liquid (ρ 3.49 g·cm⁻³, m.p. −3 °C), often used as a basic and soluble thallium source in organic and organometallic chemistry. Organothallium compounds tend to be thermally unstable, consistent with the trend of decreasing thermal stability down group 13. The chemical reactivity of the Tl–C bond is also the lowest in the group, especially for ionic compounds of the type R2TlX. Thallium forms the stable [Tl(CH3)2]+ ion in aqueous solution; like the isoelectronic Hg(CH3)2 and [Pb(CH3)2]2+, it is linear. Trimethylthallium and triethylthallium are, like the corresponding gallium and indium compounds, flammable liquids with low melting points. Like indium, thallium cyclopentadienyl compounds contain thallium(I), in contrast to gallium(III). 
Thallium (Greek θαλλός, thallós, meaning "a green shoot or twig") was discovered by William Crookes and Claude Auguste Lamy, working independently, both using flame spectroscopy (Crookes was first to publish his findings, on March 30, 1861).
https://en.wikipedia.org/wiki?curid=30309
Text editor A text editor is a type of computer program that edits plain text. Such programs are sometimes known as "notepad" software, following the naming of Microsoft Notepad. Text editors are provided with operating systems and software development packages, and can be used to change files such as configuration files, documentation files and programming language source code. There are important differences between plain text (created and edited by text editors) and rich text (such as that created by word processors or desktop publishing software). Plain text exclusively consists of character representation. Each character is represented by a fixed-length sequence of one, two, or four bytes, or as a variable-length sequence of one to four bytes, in accordance with specific character encoding conventions, such as ASCII, ISO/IEC 2022, UTF-8, or Unicode. These conventions define many printable characters, but also non-printing characters that control the flow of the text, such as space, line break, and page break. Plain text contains no other information about the text itself, not even the character encoding convention employed. Plain text is stored in text files, although text files do not exclusively store plain text. In the early days of computers, plain text was displayed using a monospace font, such that horizontal alignment and columnar formatting were sometimes done using whitespace characters. For compatibility reasons, this tradition has not changed. Rich text, on the other hand, may contain metadata, character formatting data (e.g. typeface, size, weight and style), paragraph formatting data (e.g. indentation, alignment, letter and word distribution, and space between lines or other paragraphs), and page specification data (e.g. size, margin and reading direction). Rich text can be very complex. Rich text can be saved in binary format (e.g. DOC), text files adhering to a markup language (e.g. RTF or HTML), or in a hybrid form of both (e.g. Office Open XML). 
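The one-to-four-byte behaviour of a variable-length convention such as UTF-8 is easy to observe directly; a short Python illustration (the sample characters are arbitrary picks, one per byte length):

```python
# UTF-8 encodes each character in one to four bytes,
# depending on the code point.
samples = {
    "A": 1,    # ASCII
    "é": 2,    # Latin-1 supplement (U+00E9)
    "€": 3,    # Basic Multilingual Plane (U+20AC)
    "𝄞": 4,    # beyond the BMP (U+1D11E, musical G clef)
}

for ch, expected in samples.items():
    encoded = ch.encode("utf-8")
    assert len(encoded) == expected
    print(repr(ch), len(encoded))
```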
Text editors are intended to open and save text files containing either plain text or anything that can be interpreted as plain text, including the markup for rich text or the markup for something else (e.g. SVG). Before text editors existed, computer text was punched into cards with keypunch machines. Physical boxes of these thin cardboard cards were then inserted into a card-reader. Magnetic tape and disk "card-image" files created from such card decks often had no line-separation characters at all, and assumed fixed-length 80-character records. An alternative to cards was punched paper tape. It could be created by some teleprinters (such as the Teletype), which used special characters to indicate ends of records. The first text editors were "line editors" oriented to teleprinter- or typewriter-style terminals without displays. Commands (often a single keystroke) effected edits to a file at an imaginary insertion point called the "cursor". Edits were verified by typing a command to print a small section of the file, and periodically by printing the entire file. In some line editors, the cursor could be moved by commands that specified the line number in the file, text strings (context) for which to search, and eventually regular expressions. Line editors were major improvements over keypunching. Some line editors could be used by keypunch; editing commands could be taken from a deck of cards and applied to a specified file. Some common line editors supported a "verify" mode in which change commands displayed the altered lines. When computer terminals with video screens became available, screen-based text editors (sometimes called just "screen editors") became common. One of the earliest full-screen editors was O26, which was written for the operator console of the CDC 6000 series computers in 1967. Another early full-screen editor was vi. Written in the 1970s, it is still a standard editor on Unix and Linux operating systems. 
Also written in the 1970s was the UCSD Pascal Screen Oriented Editor, which was optimized both for indented source code and for general text. Emacs, one of the first free and open source software projects, is another early full-screen or real-time editor, one that was ported to many systems. A full-screen editor's ease-of-use and speed (compared to the line-based editors) motivated many early purchases of video terminals. The core data structure in a text editor is the one that manages the string (sequence of characters) or list of records that represents the current state of the file being edited. While the former could be stored in a single long consecutive array of characters, the desire for text editors that could more quickly insert text, delete text, and undo/redo previous edits led to the development of more complicated sequence data structures. A typical text editor uses a gap buffer, a linked list of lines (as in PaperClip), a piece table, or a rope, as its sequence data structure. Some text editors are small and simple, while others offer broad and complex functions. For example, Unix and Unix-like operating systems have the pico editor (or a variant), but many also include the vi and Emacs editors. Microsoft Windows systems come with the simple Notepad, though many people—especially programmers—prefer other editors with more features. Under Apple Macintosh's classic Mac OS there was the native SimpleText, which was replaced in Mac OS X by TextEdit, which combines features of a text editor with those typical of a word processor such as rulers, margins and multiple font selection. These features are not available simultaneously, but must be switched by user command, or through the program automatically determining the file type. Most word processors can read and write files in plain text format, allowing them to open files saved from text editors. 
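As a concrete illustration of one of the sequence data structures mentioned above, here is a minimal gap-buffer sketch in Python. It is written as two stacks around the cursor, which is an equivalent formulation of the gap; real editors use a single array with an in-place gap for cache efficiency, and this toy version omits loading, saving, and multi-character operations:

```python
class GapBuffer:
    """Toy gap buffer: characters on either side of the cursor are kept
    in two stacks, so inserting or deleting at the cursor is O(1)."""

    def __init__(self, text=""):
        self.before = list(text)  # characters left of the cursor
        self.after = []           # characters right of the cursor, reversed

    def insert(self, ch):
        self.before.append(ch)    # fill the gap at the cursor

    def delete(self):             # backspace: remove char left of cursor
        if self.before:
            self.before.pop()

    def move_left(self):
        if self.before:
            self.after.append(self.before.pop())

    def move_right(self):
        if self.after:
            self.before.append(self.after.pop())

    def text(self):
        return "".join(self.before) + "".join(reversed(self.after))
```

For example, starting from "helo", moving the cursor one character left and inserting "l" yields "hello"; only the characters between the cursor and the nearer end are ever shuffled, which is why gap buffers handle localized edits quickly.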
Saving these files from a word processor, however, requires ensuring the file is written in plain text format, and that any text encoding or BOM settings won't obscure the file for its intended use. Non-WYSIWYG word processors, such as WordStar, are more easily pressed into service as text editors, and in fact were commonly used as such during the 1980s. The default file format of these word processors often resembles a markup language, with the basic format being plain text and visual formatting achieved using non-printing control characters or escape sequences. Later word processors like Microsoft Word store their files in a binary format and are almost never used to edit plain text files. Some text editors can edit unusually large files such as log files or an entire database placed in a single file. Simpler text editors may just read files into the computer's main memory. With larger files, this may be a slow process, and the entire file may not fit. Some text editors do not let the user start editing until this read-in is complete. Editing performance also often suffers in nonspecialized editors, with the editor taking seconds or even minutes to respond to keystrokes or navigation commands. Specialized editors have optimizations such as only storing the visible portion of large files in memory, improving editing performance. Some editors are programmable, meaning they can be customized for specific uses. With a programmable editor it is easy to automate repetitive tasks, add new functionality, or even implement a new application within the framework of the editor. One common motive for customizing is to make a text editor use the commands of another text editor with which the user is more familiar, or to duplicate missing functionality the user has come to depend on. Software developers often use editor customizations tailored to the programming language or development environment they are working in. 
The programmability of some text editors is limited to enhancing the core editing functionality of the program, but Emacs can be extended far beyond editing text files—for web browsing, reading email, online chat, managing files or playing games and is often thought of as a Lisp execution environment with a Text User Interface. Emacs can even be programmed to emulate Vi, its rival in the traditional editor wars of Unix culture. An important group of programmable editors uses REXX as a scripting language. These "orthodox editors" contain a "command line" into which commands and macros can be typed and text lines into which line commands and macros can be typed. Most such editors are derivatives of ISPF/PDF EDIT or of XEDIT, IBM's flagship editor for VM/SP through z/VM. Among them are THE, KEDIT, X2, Uni-edit, and SEDIT. A text editor written or customized for a specific use can determine what the user is editing and assist the user, often by completing programming terms and showing tooltips with relevant documentation. Many text editors for software developers include source code syntax highlighting and automatic indentation to make programs easier to read and write. Programming editors often let the user select the name of an include file, function or variable, then jump to its definition. Some also allow for easy navigation back to the original section of code by storing the initial cursor location or by displaying the requested definition in a popup window or temporary buffer. Some editors implement this ability themselves, but often an auxiliary utility like ctags is used to locate the definitions. Some editors include special features and extra functions. Programmable editors can usually be enhanced to perform any or all of these functions, but simpler editors focus on just one, or, like gPHPedit, are targeted at a single programming language.
https://en.wikipedia.org/wiki?curid=30310
Tennis court A tennis court is the venue where the sport of tennis is played. It is a firm rectangular surface with a low net stretched across the centre. The same surface can be used to play both doubles and singles matches. A variety of surfaces can be used to create a tennis court, each with its own characteristics which affect the playing style of the game. The dimensions of a tennis court are defined and regulated by the International Tennis Federation (ITF) governing body and are written down in the annual 'Rules of Tennis' document. The court is 78 ft (23.77 m) long. Its width is 27 ft (8.23 m) for singles matches and 36 ft (10.97 m) for doubles matches. The service line is 21 ft (6.40 m) from the net. Additional clear space around the court is needed in order for players to reach overrun balls, for a total of 60 ft (18.3 m) wide and 120 ft (36.6 m) long. A net is stretched across the full width of the court, parallel with the baselines, dividing it into two equal ends. The net is 3 ft 6 in (1.07 m) high at the posts, and 3 ft (0.91 m) high in the center. The net posts are 3 ft (0.91 m) outside the doubles court on each side or, for a singles net, 3 ft (0.91 m) outside the singles court on each side. Based on the standard rules of tennis, the size of the court is measured to the outside of the respective baselines and sidelines. The "service" lines ("T" and the "service" line) are centered. The ball must completely miss the line to be considered "out". This also means that the width of the line (baselines are often wider) is irrelevant to play. The ITF's Play and Stay campaign promotes playing on smaller courts with slower red, orange and green balls for younger children. This gives children more time and control so they can serve, rally, and score from the first lesson on courts that are sized to fit their bodies. The ITF has mandated that official competition for children aged 10 years and under should be played on smaller "Orange" courts. Competition for children under 8 years is played on "Red" courts, which are smaller still. The net is always 0.8 m high in the center. 
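Given the standard ITF court dimensions (78 ft long, 27 ft wide for singles and 36 ft for doubles), the playing areas follow by simple arithmetic; a small Python sketch of the conversion:

```python
# Playing area of a tennis court from its standard ITF dimensions.
FT_TO_M = 0.3048  # exact international foot-to-metre factor

def area_m2(length_ft, width_ft):
    """Court area in square metres from dimensions given in feet."""
    return (length_ft * FT_TO_M) * (width_ft * FT_TO_M)

print(round(area_m2(78, 27), 1))  # singles: 195.7 m^2
print(round(area_m2(78, 36), 1))  # doubles: 260.9 m^2
```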
Tennis is played on a variety of surfaces and each surface has its own characteristics which affect the playing style of the game. There are four main types of courts depending on the materials used for the court surface: clay courts, hard courts, grass courts and carpet courts. The International Tennis Federation (ITF) lists different surfaces and properties and classifies surfaces into one of five pace settings. Of the current four Grand Slam tournaments, the Australian and US Open use hard courts, the French Open is played on clay, and Wimbledon, the only Grand Slam to have always been played on the same surface, is played on grass. The Australian Open switched from grass to hard courts in 1988 and in its early years the French championship alternated between clay and sand/rubble courts. The US Open is the only major to have been played on three surfaces; it was played on grass from its inception until 1974, clay from 1975 until 1977 and hard courts since it moved from the West Side Tennis Club to the National Tennis Center in 1978. Clay courts are made of crushed shale, stone or brick. The French Open is the only Grand Slam tournament to use clay courts. Clay courts slow down the ball and produce a high bounce in comparison to grass or hard courts. For this reason, the clay court takes away many of the advantages of big serves, which makes it hard for serve-based players to dominate on the surface. Clay courts are cheaper to construct than other types of tennis courts, but a clay surface costs more to maintain. Clay courts need to be rolled to preserve flatness. The clay's water content must be balanced; green clay courts generally require the courts to be sloped to allow water run-off. Clay courts are more common in Europe and Latin America than in North America, and tend to heavily favour baseline players. 
Historically for the Grand Slams clay courts have been used at the French Open since 1891 and the US Open from 1975 to 1977. Grass courts are the fastest type of courts in common use. They consist of grass grown on very hard-packed soil, which adds additional variables: bounces depend on how healthy the grass is, how recently it has been mowed, and the wear and tear of recent play. Points are usually very quick where fast, low bounces keep rallies short, and the serve plays a more important role than on other surfaces. Grass courts tend to favour serve-and-volley tennis players. Grass courts were once among the most common tennis surfaces, but are now rare due to high maintenance costs as they must be watered and mown often, and take a longer time to dry after rain than hard courts. Historically for the Grand Slams grass courts have been used at Wimbledon since 1877, the US Open from 1881 to 1974, and the Australian Open from 1905 to 1987. Hard courts are made of uniform rigid material, often covered with an acrylic surface layer to offer greater consistency of bounce than other outdoor surfaces. Hard courts can vary in speed, though they are faster than clay but not as fast as grass courts. The quantity of sand added to the paint can greatly affect the rate at which the ball slows down. The US Open is played on DecoTurf while the Australian Open is played on GreenSet, both acrylic-topped hard court surfaces. Historically for the Grand Slams hard courts have been used at the US Open since 1978 and the Australian Open since 1988. "Carpet" in tennis means any removable court covering. Indoor arenas store rolls of rubber-backed court surfacing and install it temporarily for tennis events, but they are not in use any more for professional events. A short piled form of artificial turf infilled with sand is used for some outdoor courts, particularly in Asia. Carpet is generally a fast surface, faster than hardcourt, with low bounce. 
Notable tennis tournaments previously held on carpet courts were the WCT Finals, Paris Masters, U.S. Pro Indoor and Kremlin Cup. Since 2009, their use has been discontinued on the top tier of the ATP. ATP Challenger Tour tournaments such as the Trofeo Città di Brescia still use carpet courts. The WTA Tour's last carpet court event, the International-level Tournoi de Québec, was discontinued after 2018. Some tennis courts are indoors, which allows play regardless of weather conditions and is more comfortable for spectators. Different court surfaces have been used indoors. Hard courts are most common indoors, as they are the easiest to install and maintain. If the installation is permanent, they are constructed on an asphalt or concrete base, as with outdoor courts. Temporary indoor hard courts are typically constructed using wooden floor panels topped with acrylic which are installed over the venue's standard floor. This is the system used for modern indoor professional events such as the ATP Finals. Clay courts can be installed indoors with subsurface watering systems to keep the clay from drying out, and have been used for Davis Cup matches. Carpet courts were once the most prominent of indoor surfaces, especially in temporary venues, but have largely been replaced by removable hard courts. They were used on both the ATP World Tour and World Championship Tennis circuits, though no events currently use them. Historically, other surfaces have been used indoors such as hardwood flooring at the defunct World Covered Court Championships and London Indoor Professional Championships. The conclusion of the Wimbledon Championships, in 2012, was played on the lawn of Centre Court under the closed roof and artificial lights; the Halle Open has also seen a number of matches played on its grass court in the Gerry Weber Stadion with the roof closed. These, however, are outdoor venues with retractable roofs.
https://en.wikipedia.org/wiki?curid=30311
The Communist Manifesto The Communist Manifesto, originally the Manifesto of the Communist Party (), is an 1848 political document by German philosophers Karl Marx and Friedrich Engels. Commissioned by the Communist League and originally published in London just as the Revolutions of 1848 began to erupt, the "Manifesto" was later recognised as one of the world's most influential political documents. It presents an analytical approach to the class struggle (historical and then-present) and the conflicts of capitalism and the capitalist mode of production, rather than a prediction of communism's potential future forms. "The Communist Manifesto" summarises Marx and Engels' theories concerning the nature of society and politics, namely that in their own words "[t]he history of all hitherto existing society is the history of class struggles". It also briefly features their ideas for how the capitalist society of the time would eventually be replaced by socialism. In the last paragraph of the "Manifesto", the authors call for a "forcible overthrow of all existing social conditions", which served as a call for communist revolutions around the world. In 2013, "The Communist Manifesto" was registered to UNESCO's Memory of the World Programme along with Marx's "Capital, Volume I". "The Communist Manifesto" is divided into a preamble and four sections, the last of these a short conclusion. The introduction begins by proclaiming: "A spectre is haunting Europe—the spectre of communism. All the powers of old Europe have entered into a holy alliance to exorcise this spectre". Pointing out that parties everywhere—including those in government and those in the opposition—have flung the "branding reproach of communism" at each other, the authors infer from this that the powers-that-be acknowledge communism to be a power in itself. 
Subsequently, the introduction exhorts Communists to openly publish their views and aims, to "meet this nursery tale of the spectre of communism with a manifesto of the party itself". The first section of the "Manifesto", "Bourgeois and Proletarians", elucidates the materialist conception of history, that "the history of all hitherto existing society is the history of class struggles". Societies have always taken the form of an oppressed majority exploited under the yoke of an oppressive minority. In capitalism, the industrial working class, or proletariat, engage in class struggle against the owners of the means of production, the bourgeoisie. As before, this struggle will end in a revolution that restructures society, or the "common ruin of the contending classes". The bourgeoisie, through the "constant revolutionising of production [and] uninterrupted disturbance of all social conditions" have emerged as the supreme class in society, displacing all the old powers of feudalism. The bourgeoisie constantly exploits the proletariat for its labour power, creating profit for themselves and accumulating capital. However, in doing so the bourgeoisie serves as "its own grave-diggers"; the proletariat inevitably will become conscious of their own potential and rise to power through revolution, overthrowing the bourgeoisie. "Proletarians and Communists", the second section, starts by stating the relationship of conscious communists to the rest of the working class. The communists' party will not oppose other working-class parties, but unlike them, it will express the general will and defend the common interests of the world's proletariat as a whole, independent of all nationalities. The section goes on to defend communism from various objections, including claims that it advocates communal prostitution or disincentivises people from working. 
The section ends by outlining a set of short-term demands—among them a progressive income tax; abolition of inheritances and private property; abolition of child labour; free public education; nationalisation of the means of transport and communication; centralisation of credit via a national bank; expansion of publicly owned land, etc.—the implementation of which would result in the precursor to a stateless and classless society. The third section, "Socialist and Communist Literature", distinguishes communism from other socialist doctrines prevalent at the time—these being broadly categorised as Reactionary Socialism; Conservative or Bourgeois Socialism; and Critical-Utopian Socialism and Communism. While the degree of reproach toward rival perspectives varies, all are dismissed for advocating reformism and failing to recognise the pre-eminent revolutionary role of the working class. "Position of the Communists in Relation to the Various Opposition Parties", the concluding section of the "Manifesto", briefly discusses the communist position on struggles in specific countries in the mid-nineteenth century such as France, Switzerland, Poland and Germany, this last being "on the eve of a bourgeois revolution", and predicts that a world revolution will soon follow. It ends by declaring an alliance with the democratic socialists, boldly supporting other communist revolutions and calling for united international proletarian action—"Working Men of All Countries, Unite!". In spring 1847, Marx and Engels joined the League of the Just, which was quickly convinced by the duo's ideas of "critical communism". At its First Congress, held 2–9 June, the League tasked Engels with drafting a "profession of faith", but such a document was later deemed inappropriate for an open, non-confrontational organisation. Engels nevertheless wrote the "Draft of a Communist Confession of Faith", detailing the League's programme.
A few months later, in October, Engels arrived at the League's Paris branch to find that Moses Hess had written an inadequate manifesto for the group, now called the League of Communists. In Hess's absence, Engels severely criticised this manifesto, and convinced the rest of the League to entrust him with drafting a new one. This became the draft "Principles of Communism", described as "less of a credo and more of an exam paper". On 23 November, just before the Communist League's Second Congress (29 November – 8 December 1847), Engels wrote to Marx, expressing his desire to eschew the catechism format in favour of the manifesto, because he felt it "must contain some history." On the 28th, Marx and Engels met at Ostend in Belgium, and a few days later, gathered at the Soho, London headquarters of the German Workers' Education Association to attend the Congress. Over the next ten days, intense debate raged between League functionaries; Marx eventually dominated the others and, overcoming "stiff and prolonged opposition", in Harold Laski's words, secured a majority for his programme. The League thus unanimously adopted a far more combative resolution than that at the First Congress in June. Marx (especially) and Engels were subsequently commissioned to draw up a manifesto for the League. Upon returning to Brussels, Marx engaged in "ceaseless procrastination", according to his biographer Francis Wheen. Working only intermittently on the "Manifesto", he spent much of his time delivering lectures on political economy at the German Workers' Education Association, writing articles for the "Deutsche-Brüsseler-Zeitung", and giving a long speech on free trade. Following this, he even spent a week (17–26 January 1848) in Ghent to establish a branch of the Democratic Association there. 
Subsequently, having not heard from Marx for nearly two months, the Central Committee of the Communist League sent him an ultimatum on 24 or 26 January, demanding he submit the completed manuscript by 1 February. This imposition spurred Marx on, who struggled to work without a deadline, and he seems to have rushed to finish the job in time. For evidence of this, historian Eric Hobsbawm points to the absence of rough drafts, only one page of which survives. In all, the "Manifesto" was written over 6–7 weeks. Although Engels is credited as co-writer, the final draft was penned exclusively by Marx. From the 26 January letter, Laski infers that even the Communist League considered Marx to be the sole draftsman and that he was merely their agent, imminently replaceable. Further, Engels himself wrote in 1883: "The basic thought running through the "Manifesto" [...] belongs solely and exclusively to Marx". Although Laski does not disagree, he suggests that Engels underplays his own contribution with characteristic modesty and points out the "close resemblance between its substance and that of the ["Principles of Communism"]". Laski argues that while writing the "Manifesto", Marx drew from the "joint stock of ideas" he developed with Engels "a kind of intellectual bank account upon which either could draw freely". In late February 1848, the "Manifesto" was anonymously published by the Workers' Educational Association ("Kommunistischer Arbeiterbildungsverein") at Bishopsgate in the City of London. Written in German, the 23-page pamphlet was titled "Manifest der kommunistischen Partei" and had a dark-green cover. It was reprinted three times and serialised in the "Deutsche Londoner Zeitung", a newspaper for German "émigré"s. On 4 March, one day after the serialisation in the "Zeitung" began, Marx was expelled by Belgian police. Two weeks later, around 20 March, a thousand copies of the "Manifesto" reached Paris, and from there to Germany in early April. 
In April–May the text was corrected for printing and punctuation mistakes; Marx and Engels would use this 30-page version as the basis for future editions of the "Manifesto". Although the "Manifesto"s prelude announced that it was "to be published in the English, French, German, Italian, Flemish and Danish languages", the initial printings were only in German. Polish and Danish translations soon followed the German original in London, and by the end of 1848, a Swedish translation was published with a new title—"The Voice of Communism: Declaration of the Communist Party". In June–November 1850 the "Manifesto of the Communist Party" was published in English for the first time when George Julian Harney serialised Helen Macfarlane's translation in his Chartist magazine "The Red Republican". Her version begins: "A frightful hobgoblin stalks throughout Europe. We are haunted by a ghost, the ghost of Communism". For her translation, the Lancashire-based Macfarlane probably consulted Engels, who had abandoned his own English translation half way. Harney's introduction revealed the "Manifesto"s hitherto-anonymous authors' identities for the first time. Soon after the "Manifesto" was published, Paris erupted in revolution to overthrow King Louis Philippe. The "Manifesto" played no role in this; a French translation was not published in Paris until just before the working-class June Days Uprising was crushed. Its influence in the Europe-wide Revolutions of 1848 was restricted to Germany, where the Cologne-based Communist League and its newspaper "Neue Rheinische Zeitung", edited by Marx, played an important role. Within a year of its establishment, in May 1849, the "Zeitung" was suppressed; Marx was expelled from Germany and had to seek lifelong refuge in London. In 1851, members of the Communist League's central board were arrested by the Prussian police. At their trial in Cologne 18 months later in late 1852 they were sentenced to 3–6 years' imprisonment. 
For Engels, the revolution was "forced into the background by the reaction that began with the defeat of the Paris workers in June 1848, and was finally excommunicated 'by law' in the conviction of the Cologne Communists in November 1852". After the defeat of the 1848 revolutions the "Manifesto" fell into obscurity, where it remained throughout the 1850s and 1860s. Hobsbawm says that by November 1850 the "Manifesto" "had become sufficiently scarce for Marx to think it worth reprinting section III [...] in the last issue of his [short-lived] London magazine". Over the next two decades only a few new editions were published; these include an (unauthorised and occasionally inaccurate) 1869 Russian translation by Mikhail Bakunin in Geneva and an 1866 edition in Berlin—the first time the "Manifesto" was published in Germany. According to Hobsbawm: "By the middle 1860s virtually nothing that Marx had written in the past was any longer in print". However, John Cowell-Stepney did publish an abridged version in the "Social Economist" in August/September 1869, in time for the Basle Congress. In the early 1870s, the "Manifesto" and its authors experienced a revival in fortunes. Hobsbawm identifies three reasons for this. The first is the leadership role Marx played in the International Workingmen's Association (aka the First International). Secondly, Marx also came into much prominence among socialists—and equal notoriety among the authorities—for his support of the Paris Commune of 1871, elucidated in "The Civil War in France". Lastly, and perhaps most significantly in the popularisation of the "Manifesto", was the treason trial of German Social Democratic Party (SPD) leaders. During the trial prosecutors read the "Manifesto" out loud as evidence; this meant that the pamphlet could legally be published in Germany. 
Thus in 1872 Marx and Engels rushed out a new German-language edition, writing a preface that identified several portions that had become outdated in the quarter century since its original publication. This edition was also the first time the title was shortened to "The Communist Manifesto" ("Das Kommunistische Manifest"), and it became the bedrock the authors based future editions upon. Between 1871 and 1873, the "Manifesto" was published in over nine editions in six languages; in 1872 it was published in the United States for the first time, serialised in "Woodhull & Claflin's Weekly" of New York City. However, by the mid-1870s the "Communist Manifesto" remained Marx and Engels' only work to be even moderately well-known. Over the next forty years, as social-democratic parties rose across Europe and parts of the world, so did the publication of the "Manifesto" alongside them, in hundreds of editions in thirty languages. Marx and Engels wrote a new preface for the 1882 Russian edition, translated by Georgi Plekhanov in Geneva. In it they wondered if Russia could directly become a communist society, or if she would become capitalist first like other European countries. After Marx's death in 1883, Engels alone provided the prefaces for five editions between 1888 and 1893. Among these is the 1888 English edition, translated by Samuel Moore and approved by Engels, who also provided notes throughout the text. It has been the standard English-language edition ever since. The principal region of its influence, in terms of editions published, was in the "central belt of Europe", from Russia in the east to France in the west. In comparison, the pamphlet had little impact on politics in southwest and southeast Europe, and a moderate presence in the north. Outside Europe, Chinese and Japanese translations were published, as were Spanish editions in Latin America.
This uneven geographical spread in the "Manifesto"s popularity reflected the development of socialist movements in a particular region as well as the popularity of the Marxist variety of socialism there. There was not always a strong correlation between a social-democratic party's strength and the "Manifesto"s popularity in that country. For instance, the German SPD printed only a few thousand copies of the "Communist Manifesto" every year, but a few hundred thousand copies of the "Erfurt Programme". Further, the mass-based social-democratic parties of the Second International did not require their rank and file to be well-versed in theory; Marxist works such as the "Manifesto" or "Das Kapital" were read primarily by party theoreticians. On the other hand, small, dedicated militant parties and Marxist sects in the West took pride in knowing the theory; Hobsbawm says: "This was the milieu in which 'the clearness of a comrade could be gauged invariably from the number of earmarks on his Manifesto'". Following the October Revolution of 1917 that swept the Vladimir Lenin-led Bolsheviks to power in Russia, the world's first socialist state was founded explicitly along Marxist lines. The Soviet Union, which Bolshevik Russia would become a part of, was a one-party state under the rule of the Communist Party of the Soviet Union (CPSU). Unlike their mass-based counterparts of the Second International, the CPSU and other Leninist parties like it in the Third International expected their members to know the classic works of Marx, Engels and Lenin. Further, party leaders were expected to base their policy decisions on Marxist-Leninist ideology. Works such as the "Manifesto" were therefore required reading for the party rank-and-file. The widespread dissemination of Marx and Engels' works thus became an important policy objective; backed by a sovereign state, the CPSU had relatively inexhaustible resources for this purpose.
Works by Marx, Engels, and Lenin were published on a very large scale, and cheap editions of their works were available in several languages across the world. These publications were either shorter writings or they were compendia such as the various editions of Marx and Engels' "Selected Works", or their "Collected Works". This affected the destiny of the "Manifesto" in several ways. Firstly, in terms of circulation: in 1932 the American and British Communist Parties printed several hundred thousand copies of a cheap edition for "probably the largest mass edition ever issued in English". Secondly, the work entered political-science syllabuses in universities, which would only expand after the Second World War. For its centenary in 1948, its publication was no longer the exclusive domain of Marxists and academicians; general publishers too printed the "Manifesto" in large numbers. "In short, it was no longer only a classic Marxist document", Hobsbawm noted, "it had become a political classic tout court". Even after the collapse of the Soviet Bloc in the 1990s, the "Communist Manifesto" remains ubiquitous; Hobsbawm says that "In states without censorship, almost certainly anyone within reach of a good bookshop, and certainly anyone within reach of a good library, not to mention the internet, can have access to it". The 150th anniversary once again brought a deluge of attention in the press and academia, as well as new editions of the book fronted by introductions to the text by academics. One of these, "The Communist Manifesto: A Modern Edition" by Verso, was touted by a critic in the "London Review of Books" as being a "stylish red-ribboned edition of the work. It is designed as a sweet keepsake, an exquisite collector's item. In Manhattan, a prominent Fifth Avenue store put copies of this choice new edition in the hands of shop-window mannequins, displayed in come-hither poses and fashionable décolletage".
A number of late-20th- and 21st-century writers have commented on the "Communist Manifesto"s continuing relevance. In a special issue of the "Socialist Register" commemorating the "Manifesto"s 150th anniversary, Peter Osborne argued that it was "the single most influential text written in the nineteenth century". Academic John Raines in 2002 noted: "In our day this Capitalist Revolution has reached the farthest corners of the earth. The tool of money has produced the miracle of the new global market and the ubiquitous shopping mall. Read "The Communist Manifesto", written more than one hundred and fifty years ago, and you will discover that Marx foresaw it all". In 2003, English Marxist Chris Harman stated: "There is still a compulsive quality to its prose as it provides insight after insight into the society in which we live, where it comes from and where it's going to. It is still able to explain, as mainstream economists and sociologists cannot, today's world of recurrent wars and repeated economic crisis, of hunger for hundreds of millions on the one hand and 'overproduction' on the other. There are passages that could have come from the most recent writings on globalisation". Alex Callinicos, editor of "International Socialism", stated in 2010: "This is indeed a manifesto for the 21st century". Writing in "The London Evening Standard", Andrew Neather cited Verso Books' 2012 re-edition of "The Communist Manifesto" with an introduction by Eric Hobsbawm as part of a resurgence of left-wing-themed ideas which includes the publication of Owen Jones' best-selling book and Jason Barker's documentary "Marx Reloaded". In contrast, critics such as revisionist Marxist and reformist socialist Eduard Bernstein distinguished between "immature" early Marxism—as exemplified by "The Communist Manifesto" written by Marx and Engels in their youth—that he opposed for its violent Blanquist tendencies and later "mature" Marxism that he supported.
This latter form refers to Marx in his later life acknowledging that socialism could be achieved through peaceful legislative reform in democratic societies. Bernstein declared that the massive and homogeneous working-class claimed in the "Communist Manifesto" did not exist, and that contrary to claims of a proletarian majority emerging, the middle-class was growing under capitalism and not disappearing as Marx had claimed. Bernstein noted that the working-class was not homogeneous but heterogeneous, with divisions and factions within it, including socialist and non-socialist trade unions. Marx himself, later in his life, acknowledged that the middle-class was not disappearing in his work "Theories of Surplus Value" (1863). The obscurity of the later work means that Marx's acknowledgement of this error is not well known. George Boyer described the "Manifesto" as "very much a period piece, a document of what was called the 'hungry' 1840s". Many have drawn attention to the passage in the "Manifesto" that seems to sneer at the stupidity of the rustic: "The bourgeoisie [...] draws all nations [...] into civilisation[.] [...] It has created enormous cities [...] and thus rescued a considerable part of the population from the idiocy [sic] of rural life". However, as Eric Hobsbawm noted, "idiocy" here renders the German "Idiotismus", referring not to stupidity but to the narrow horizons and isolation of rural life. Marx and Engels' political influences were wide-ranging, reacting to and taking inspiration from German idealist philosophy, French socialism, and English and Scottish political economy. "The Communist Manifesto" also takes influence from literature. In Jacques Derrida's work, "Specters of Marx: The State of the Debt, the Work of Mourning and the New International", he uses William Shakespeare's "Hamlet" to frame a discussion of the history of the International, showing in the process the influence that Shakespeare's work had on Marx and Engels' writing.
In his essay, "Big Leagues: Specters of Milton and Republican International Justice between Shakespeare and Marx", Christopher N. Warren makes the case that English poet John Milton also had a substantial influence on Marx and Engels' work. Historians of 19th-century reading habits have confirmed that Marx and Engels would have read these authors and it is known that Marx loved Shakespeare in particular. Milton, Warren argues, also shows a notable influence on "The Communist Manifesto", saying: "Looking back on Milton's era, Marx saw a historical dialectic founded on inspiration in which freedom of the press, republicanism, and revolution were closely joined". Milton's republicanism, Warren continues, served as "a useful, if unlikely, bridge" as Marx and Engels sought to forge a revolutionary international coalition.
https://en.wikipedia.org/wiki?curid=30312
Trier Trier, formerly known in English as Treves and Triers (see also names in other languages), is a city on the banks of the Moselle in Germany. It lies in a valley between low vine-covered hills of red sandstone in the west of the state of Rhineland-Palatinate, near the border with Luxembourg and within the important Moselle wine region. Karl Marx, philosopher and founder of the theory that would become known as Marxism, was born in the city in 1818. Founded by the Celts in the late 4th century BC as "Treuorum" and conquered 300 years later by the Romans, who renamed it "Augusta Treverorum" ("The City of Augustus among the Treveri"), Trier is arguably Germany's oldest city. It is also the oldest seat of a bishop north of the Alps. In the Middle Ages, the archbishop-elector of Trier was an important prince of the Church who controlled land from the French border to the Rhine. The archbishop-elector of Trier also had great significance as one of the seven electors of the Holy Roman Empire. With an approximate population of 105,000, Trier is the fourth-largest city in its state, after Mainz, Ludwigshafen, and Koblenz. The nearest major cities are Luxembourg to the southwest, Saarbrücken to the southeast, and Koblenz to the northeast. The University of Trier, the administration of the Trier-Saarburg district and the seat of the ADD ("Aufsichts- und Dienstleistungsdirektion"), which until 1999 was the borough authority of Trier, and the Academy of European Law (ERA) are all based in Trier. It is one of the five "central places" of the state of Rhineland-Palatinate. Along with Luxembourg, Metz and Saarbrücken, fellow constituent members of the QuattroPole union of cities, it is central to the greater region encompassing Saar-Lor-Lux (Saarland, Lorraine and Luxembourg), Rhineland-Palatinate, and Wallonia. The first traces of human settlement in the area of the city show evidence of linear pottery settlements dating from the early Neolithic period.
Since the last pre-Christian centuries, members of the Celtic tribe of the Treveri settled in the area of today's Trier. The city of Trier derives its name from the later Latin locative "in Trēverīs" for earlier "Augusta Treverorum". The historical record describes the Roman Empire subduing the Treveri in the 1st century BC and establishing Augusta Treverorum about 16 BC. The name distinguished it from the empire's many other cities honoring the first emperor Augustus. The city later became the capital of the province of Belgic Gaul; after the Diocletian Reforms, it became the capital of the prefecture of the Gauls, overseeing much of the Western Roman Empire. In the 4th century, Trier was one of the largest cities in the Roman Empire, with a population of around 75,000 and perhaps as many as 100,000. The Porta Nigra ("Black Gate") dates from this era. A residence of the Western Roman Emperor, Roman Trier was the birthplace of Saint Ambrose. Sometime between 395 and 418, probably in 407, the Roman administration moved the staff of the Praetorian Prefecture from Trier to Arles. The city continued to be inhabited but was not as prosperous as before. However, it remained the seat of a governor and had state factories for the production of ballistae and armor and woolen uniforms for the troops, clothing for the civil service, and high-quality garments for the Court. Northern Gaul was held by the Romans along a line from north of Cologne to the coast at Boulogne through what is today southern Belgium until 460. South of this line, Roman control was firm, as evidenced by the continuing operation of the imperial arms factory at Amiens. The Franks seized Trier from Roman administration in 459. In 870, it became part of Eastern Francia, which developed into the Holy Roman Empire. Relics of Saint Matthias brought to the city initiated widespread pilgrimages.
The bishops of the city grew increasingly powerful and the Archbishopric of Trier was recognized as an electorate of the empire, one of the most powerful states of Germany. The University of Trier was founded in the city in 1473. In the 17th century, the Archbishops and Prince-Electors of Trier relocated their residences to Philippsburg Castle in Ehrenbreitstein, near Koblenz. A session of the Reichstag was held in Trier in 1512, during which the demarcation of the Imperial Circles was definitively established. In the years from 1581 to 1593, the Trier witch trials were held, perhaps the largest witch trial in European history. It was certainly one of the four largest witch trials in Germany alongside the Fulda witch trials, the Würzburg witch trial, and the Bamberg witch trials. The persecutions started in the diocese of Trier in 1581 and reached the city itself in 1587, where they led to the deaths of about 368 people, perhaps the biggest mass execution in Europe in peacetime. This counts only those executed within the city itself; the real number of executions, counting also those executed in all the witch hunts within the diocese as a whole, was therefore even larger. The exact number of people executed has never been established; a total of 1,000 has been suggested but not confirmed. In the 17th and 18th centuries, Trier was sought after by France, which invaded during the Thirty Years' War, the War of the Grand Alliance, the War of the Spanish Succession, and the War of the Polish Succession. France finally succeeded in claiming Trier in 1794 during the French Revolutionary Wars, and the electoral archbishopric was dissolved. After the Napoleonic Wars ended in 1815, Trier passed to the Kingdom of Prussia. Karl Marx, the German philosopher and one of the founders of Marxism, was born in the city in 1818. As part of the Prussian Rhineland, Trier developed economically during the 19th century.
The city rose in revolt during the revolutions of 1848 in the German states, although the rebels were forced to concede. It became part of the German Empire in 1871. In June 1940 over 60,000 British prisoners of war, captured at Dunkirk and Northern France, were marched to Trier, which became a staging post for British soldiers headed for German prisoner-of-war camps. Trier was heavily bombed and bombarded in 1944 during World War II. The city became part of the new state of Rhineland-Palatinate after the war. The university, dissolved in 1797, was restarted in the 1970s, while the Cathedral of Trier was reopened in 1974. Trier officially celebrated its 2,000th anniversary in 1984. Trier sits in a hollow midway along the Moselle valley, with the most significant portion of the city on the east bank of the river. Wooded and vineyard-covered slopes stretch up to the Hunsrück plateau in the south and the Eifel in the north. The border with the Grand Duchy of Luxembourg is only a short distance away. "Listed in clockwise order, beginning with the northernmost; all municipalities belong to the Trier-Saarburg district" Schweich, Kenn and Longuich (all part of the Verbandsgemeinde Schweich an der Römischen Weinstraße), Mertesdorf, Kasel, Waldrach, Morscheid, Korlingen, Gutweiler, Sommerau and Gusterath (all in the Verbandsgemeinde Ruwer), Hockweiler, Franzenheim (both part of the Verbandsgemeinde Trier-Land), Konz (Verbandsgemeinde Konz), Igel, Trierweiler, Aach, Newel, Kordel, Zemmer (all in the Verbandsgemeinde Trier-Land) The Trier urban area is divided into 19 city districts. For each district there is an "Ortsbeirat" (local council) of between 9 and 15 members, as well as an "Ortsvorsteher" (local representative). The local councils are charged with hearing the important issues that affect the district, although the final decision on any issue rests with the city council. The local councils nevertheless have the freedom to undertake limited measures within the bounds of their districts and their budgets.
Area and population figures for the districts date from December 31, 2009. Trier has an oceanic climate (Köppen: "Cfb"), but with greater extremes than the marine versions of northern Germany. Summers are warm except in unusual heat waves and winters are recurrently cold, but not harsh. Precipitation is high despite the city not being on the coast. The highest temperature ever recorded, 39 °C, occurred on 8 August during the European heat wave of 2003. The lowest recorded temperature was −19.3 °C on February 2, 1956. Trier is known for its well-preserved Roman and medieval buildings. Trier is home to the University of Trier, founded in 1473, closed in 1796 and restarted in 1970. The city also has the Trier University of Applied Sciences. The Academy of European Law (ERA) was established in 1992 and provides training in European law to legal practitioners. In 2010 there were about 40 "Kindergärten", 25 primary schools and 23 secondary schools in Trier, such as the "Humboldt Gymnasium Trier", "Max Planck Gymnasium", "Auguste Viktoria Gymnasium" and the "Nelson-Mandela Realschule Plus", "Kurfürst-Balduin Realschule Plus", "Realschule Plus Ehrang". Trier station has direct railway connections to many cities in the region. The nearest cities by train are Cologne, Saarbrücken and Luxembourg. Via the A 1, A 48 and A 64 motorways, Trier is linked with Koblenz, Saarbrücken and Luxembourg. The nearest commercial (international) airports are in Luxembourg (0:40 h by car), Frankfurt-Hahn (1:00 h), Saarbrücken (1:00 h), Frankfurt (2:00 h) and Cologne/Bonn (2:00 h). The Moselle is an important waterway and is also used for river cruises. A new passenger railway service on the western side of the Mosel is scheduled to open in December 2018. Several major sports clubs are based in Trier. See Heinz Monz: "Trierer Biographisches Lexikon". Landesarchivverwaltung Rheinland-Pfalz, Koblenz 2000. 539 p.
Trier is a fellow member of the QuattroPole union of cities, along with Luxembourg, Saarbrücken, and Metz (neighbouring countries: Luxembourg and France). Trier is twinned with several cities.
https://en.wikipedia.org/wiki?curid=30317
Ton The ton is a unit of measure. It has a long history and has acquired a number of meanings and uses over the years. It is used principally as a unit of mass. Its original use as a measurement of volume has continued in the capacity of cargo ships and in terms such as the freight ton. Recent specialised uses include the ton as a measure of energy and for truck classification. It is also a colloquial term. It is derived from the "tun", the term applied to a cask of the largest capacity; a full tun could weigh around 2,000 pounds. In the United Kingdom, the (Imperial) ton is a statute measure, defined as 2,240 pounds. From 1965, the UK embarked upon a program of metrication and gradually introduced metric units, including the tonne (metric ton), defined as 1,000 kilograms. The UK Weights and Measures Act 1985 explicitly excluded many units and terms from "use for trade", including the ton (and the term "metric ton" for "tonne"). In the United States and Canada, a ton is defined to be 2,000 pounds. Where confusion is possible, the 2240 lb ton is called "long ton" and the 2000 lb ton "short ton"; the tonne is distinguished by its spelling, but usually pronounced the same as ton, hence the US term "metric ton". In the UK the final "e" of "tonne" can also be pronounced, or "metric ton" used when it is necessary to make the distinction. Where precision is required the correct term must be used, but for many purposes this is not necessary: the metric and long tons differ by only 1.6%, and the short ton is within 11% of both. The ton (any definition) is the heaviest unit of weight typically used in colloquial speech. The term "ton" is also used to refer to a number of units of volume, ranging widely in capacity. It can also be used as a unit of energy, expressed as an equivalent of coal burnt or TNT detonated. In refrigeration, a ton is a unit of power, sometimes called a "ton of refrigeration". It is the power required to melt or freeze one short ton of ice per day.
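The mass relationships above reduce to simple arithmetic. The following sketch (an illustration, not part of the article) checks the quoted differences between the three tons, using the 2,240 lb, 2,000 lb and 1,000 kg definitions given in the text:

```python
# Compare the three common "tons" by mass.
LB_TO_KG = 0.45359237           # exact conversion by international agreement

long_ton_kg = 2240 * LB_TO_KG   # UK (imperial) long ton
short_ton_kg = 2000 * LB_TO_KG  # US/Canada short ton
tonne_kg = 1000.0               # metric ton (tonne)

# "the metric and long tons differ by only 1.6%"
diff_long = (long_ton_kg - tonne_kg) / tonne_kg * 100

# "the short ton is within 11% of both"
diff_vs_tonne = (tonne_kg - short_ton_kg) / tonne_kg * 100
diff_vs_long = (long_ton_kg - short_ton_kg) / long_ton_kg * 100

print(f"long ton  = {long_ton_kg:.2f} kg ({diff_long:.1f}% heavier than a tonne)")
print(f"short ton = {short_ton_kg:.2f} kg ({diff_vs_tonne:.1f}% below a tonne, "
      f"{diff_vs_long:.1f}% below a long ton)")
```

Running this confirms the article's figures: the long ton is about 1,016 kg (1.6% above the tonne), and the short ton, about 907 kg, is within 11% of both.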
The refrigeration ton hour is a unit of energy, the energy required to melt or freeze 1/24 short ton of ice. There are several similar units of mass or volume called the ton. Both the long ton and the short ton are 20 hundredweight, the long hundredweight and the short hundredweight being 112 and 100 pounds respectively. Before the twentieth century there were several definitions. Prior to the 15th century in England, the ton was 20 hundredweight, each of 108 lb, giving a ton of 2,160 pounds. In the nineteenth century in different parts of Britain, definitions of 2240, 2352, and 2400 lb were used, with 2000 lb for explosives; the legal ton was usually 2240 lb. Assay ton (abbreviation 'AT') is not a unit of measurement but a standard quantity used in assaying ores of precious metals. A short assay ton is about 29.17 grams while a long assay ton is about 32.67 grams. These amounts bear the same ratio to a milligram as a short or long ton bears to a troy ounce. Therefore, the number of milligrams of a particular metal found in a sample weighing one assay ton gives the number of troy ounces of metal contained in a ton of ore. In documents that predate 1960, the word "ton" is sometimes spelled "tonne", but in more recent documents "tonne" refers exclusively to the metric ton. In nuclear power plants tHM and MTHM mean tonnes of heavy metals, and MTU means tonnes of uranium. In the steel industry, the abbreviation THM means 'tons/tonnes hot metal', which refers to the amount of liquid iron or steel that is produced, particularly in the context of blast furnace production or specific consumption. A dry ton or dry tonne has the same mass value, but the material (sludge, slurries, compost, and similar mixtures in which solid material is soaked with or suspended in water) has been dried to a relatively low, consistent moisture level (dry weight). If the material is in its natural, wet state, it is called a wet ton or wet tonne.
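The assay-ton ratio described above (assay ton is to milligram as ton is to troy ounce) fixes the assay-ton masses. A short sketch of the derivation, using standard gram-per-pound and gram-per-troy-ounce factors that are assumed here rather than stated in the text:

```python
# Derive the assay-ton masses from the stated ratio:
#   assay ton : milligram  ==  ton : troy ounce
TROY_OZ_G = 31.1034768   # grams per troy ounce (exact)
LB_TO_G = 453.59237      # grams per avoirdupois pound (exact)

def assay_ton_grams(ton_lb):
    """Mass in grams such that 1 mg of metal in an assay-ton sample
    corresponds to 1 troy oz of metal per ton (of ton_lb pounds) of ore."""
    troy_oz_per_ton = (ton_lb * LB_TO_G) / TROY_OZ_G  # troy ounces in one ton
    return troy_oz_per_ton / 1000.0                   # scale troy oz -> mg

short_at = assay_ton_grams(2000)   # short assay ton
long_at = assay_ton_grams(2240)    # long assay ton
print(f"short assay ton = {short_at:.3f} g, long assay ton = {long_at:.3f} g")
```

The result (about 29.17 g and 32.67 g) is why a milligram assay result reads directly as troy ounces per ton.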
In the Imperial units system, the imperial (long) ton is equivalent to 20 hundredweight, a hundredweight being eight stone, and a stone weighing 14 pounds. The displacement, essentially the weight, of a ship is traditionally expressed in long tons. To simplify measurement it is determined by measuring the volume, rather than weight, of water displaced, and calculating the weight from the volume and density. For practical purposes the displacement ton (DT) is a unit of volume of about 35 cubic feet, the approximate volume occupied by one ton of seawater (the actual volume varies with salinity and temperature). It is slightly less than the 224 imperial gallons (1.018 m3) of the water ton (based on distilled water).

One measurement ton or freight ton is equal to 40 cubic feet, but historically it has had several different definitions. It is sometimes abbreviated as "MTON". It is used to determine the amount of money to be charged as "freight" in carrying different sorts of cargo. In general, if a cargo is heavier than salt water, the actual tonnage is used; if it is lighter than salt water, e.g. feathers, freight is calculated using measurement tons of 40 cubic feet. The freight ton represents the volume of a truck, train or other freight carrier. In the past it was also used for cargo ships, but the register ton is now preferred. It is correctly abbreviated as "FT", but some users now use the freight ton to represent a weight of one tonne, so the more common abbreviations are now M/T, MT, or MTON (for measurement ton), which may cause it to be confused with the megatonne.

The register ton is a unit of volume used for the cargo capacity of a ship, defined as 100 cubic feet. It is often abbreviated RT or GRT for gross registered ton (the former inviting confusion with the refrigeration ton). It is known as a "tonneau de mer" in Belgium; in France, a "tonneau de mer" denotes a different volume. The Panama Canal/Universal Measurement System (PC/UMS) is based on net tonnage, modified for Panama Canal billing purposes.
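The freight-charging rule described above (weight tonnage for cargo denser than salt water, 40-cubic-foot measurement tons for lighter cargo) amounts to charging whichever figure is greater. A minimal sketch of that simplification; the function name and the example cargoes are illustrative, not an actual tariff:

```python
# Freight charging rule sketched above: dense cargo pays by weight
# (long tons); light cargo pays by measurement tons of 40 cubic feet.
CU_FT_PER_MEASUREMENT_TON = 40.0

def freight_tons(weight_long_tons: float, volume_cu_ft: float) -> float:
    """Return chargeable tons: the greater of weight and measurement tons."""
    measurement = volume_cu_ft / CU_FT_PER_MEASUREMENT_TON
    return max(weight_long_tons, measurement)

# Dense cargo: 10 long tons occupying 200 cu ft -> charged by weight (10).
print(freight_tons(10, 200))   # 10.0 chargeable tons
# Light cargo (feathers): 1 long ton filling 800 cu ft -> 20 measurement tons.
print(freight_tons(1, 800))    # 20.0 chargeable tons
```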
PC/UMS is based on a mathematical formula to calculate a vessel's total volume; a PC/UMS net ton is equivalent to 100 cubic feet of capacity. The water ton is used chiefly in Great Britain, in statistics dealing with petroleum products, and is defined as 224 imperial gallons, the volume occupied by 2,240 lb of water under the conditions that define the imperial gallon.

Early values for the explosive energy released by trinitrotoluene (TNT) ranged from 900 to 1,100 calories per gram. Note that these are small calories (cal); the large or dietary calorie (Cal) is equal to one kilocalorie (kcal) and is gradually being replaced by that term. In order to standardise the use of the term "TNT" as a unit of energy, an arbitrary value was assigned based on 1,000 calories (4,184 J) per gram. Thus there is no longer a direct connection to the chemical TNT itself: it is now merely a unit of energy that happens to be expressed using words normally associated with mass (e.g., kilogram, tonne, pound). The definition applies for both spellings: "ton of TNT" and "tonne of TNT". Measurements in tons of TNT have been used primarily to express nuclear weapon yields, though they have since also been used in seismology.

A tonne of oil equivalent (toe), sometimes "ton of oil equivalent", is a conventional value, based on the amount of energy released by burning one tonne of crude oil. The unit is used, for example, by the International Energy Agency (IEA), which reports world energy consumption as TPES in millions of toe (Mtoe). Other sources convert 1 toe into 1.28 tonnes of coal equivalent (tce). 1 toe is also standardised as 7.33 barrels of oil equivalent (boe). A tonne of coal equivalent (tce), sometimes "ton of coal equivalent", is a conventional value, based on the amount of energy released by burning one tonne of coal; the plural is "tonnes of coal equivalent".

The unit "ton" is used in refrigeration and air conditioning to measure the rate of heat absorption.
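The standardised TNT figure above follows by pure arithmetic from the definition of the small calorie; a brief check:

```python
# The standardised TNT equivalence: 1,000 small calories per gram,
# with 1 cal = 4.184 J exactly, so everything below is arithmetic.
CAL_J = 4.184                              # joules per small calorie
GRAM_TNT_J = 1000 * CAL_J                  # 4,184 J per gram of TNT
TONNE_TNT_J = GRAM_TNT_J * 1_000_000       # 4.184e9 J per tonne of TNT
MEGATONNE_TNT_J = TONNE_TNT_J * 1_000_000  # 4.184e15 J, the scale of weapon yields

print(f"1 tonne of TNT = {TONNE_TNT_J:.3e} J")
```

This is why a tonne of TNT is conventionally quoted as 4.184 gigajoules regardless of the actual chemistry of TNT.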
Prior to the introduction of mechanical refrigeration, cooling was accomplished by delivering ice; installing one ton of mechanical refrigeration capacity replaced the daily delivery of one ton of ice. A refrigeration ton should be understood as the power produced by a chiller operating at standard AHRI conditions, which specify the chilled-water temperature and the temperature of the air entering the condenser. This is commonly referred to as a "true ton". Manufacturers can also provide tables for chillers operating at other chilled-water temperatures, which may show more favourable figures; these are not valid for performance comparisons between units unless conversion rates are applied. The refrigeration ton is commonly abbreviated as RT.
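The definition above (melting one short ton of ice over 24 hours) fixes the size of the refrigeration ton. A sketch of the calculation, assuming the standard latent heat of fusion of ice (~333.55 kJ/kg), which is the only physical constant introduced here:

```python
# One ton of refrigeration: the power needed to melt (or freeze) one
# short ton of ice over 24 hours.
LB_TO_KG = 0.45359237
LATENT_HEAT_J_PER_KG = 333_550   # latent heat of fusion of ice, approx.
SECONDS_PER_DAY = 24 * 3600

ice_kg = 2000 * LB_TO_KG                     # one short ton of ice
ton_refrigeration_w = ice_kg * LATENT_HEAT_J_PER_KG / SECONDS_PER_DAY

print(f"1 RT ≈ {ton_refrigeration_w:.0f} W "
      f"(≈ {ton_refrigeration_w * 3.41214:.0f} BTU/h)")
```

The result, roughly 3.5 kW or 12,000 BTU/h, is the conventional value of the refrigeration ton.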
https://en.wikipedia.org/wiki?curid=30318
Talk (software) talk is a Unix text chat program, originally allowing messaging only between users logged on to one multi-user computer, but later extended to allow chat with users on other systems. Although largely superseded by IRC and other modern systems, it is still included with most Unix-like systems today, including Linux, BSD systems and macOS. Similar facilities existed on earlier systems such as Multics, CTSS, PLATO, and NLS.

Early versions of talk did not separate text from each user, so if both users typed simultaneously their characters were intermingled. Since slow teleprinter keyboards were used at the time (11 characters per second maximum), users often could not wait for each other to finish. It was common etiquette for a user in the middle of a long message to stop typing when intermingling occurred, to see the listener's interrupting response, much as one pauses a long monologue when interrupted in person. More modern versions use curses to break the terminal into a separate zone for each user, avoiding intermingled text.

In 1983, a new version of talk was introduced as a Unix command with 4.2BSD, which could also accommodate conversations between users on different machines. Follow-ons to talk included ntalk, Britt Yenne's ytalk and Roger Espel Llima's utalk. ytalk was the first of these to allow conversations between more than two users, and was written in part to allow communication between users on computers with different endianness. utalk uses a special protocol over UDP (instead of the TCP used by the rest) that is more efficient and allows editing of the entire screen. All of these programs split the interface into different sections for each participant. The interfaces did not convey the order in which statements typed by different participants would be reassembled into a log of the conversation. Also, all three programs are real-time text: they transmit each character as it is typed.
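The real-time-text behaviour described above can be illustrated with a short sketch: each keystroke is sent the moment it is typed rather than buffered into lines. A local socket pair stands in for the network connection between two users (this is an illustration of the idea, not the talk protocol itself):

```python
# Sketch of talk's real-time text: characters are transmitted one at a
# time, so the peer sees every keystroke as it happens.
import socket

sender, receiver = socket.socketpair()

for ch in "hello":            # each character goes out immediately
    sender.send(ch.encode())

sender.close()
# The receiving side reads character by character, as talk's display did.
received = b"".join(iter(lambda: receiver.recv(1), b""))
receiver.close()

print(received.decode())      # the peer saw every keystroke
```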
This gives a more immediate feel to the discussion than more recent instant messaging clients or IRC. Users more familiar with other forms of instant text communication would sometimes find themselves in embarrassing situations by typing something and then deciding to withdraw it, unaware that the other participants had seen every keystroke in real time. A similar program exists on VMS systems, called phone.

A popular program called "flash", which sent malformed information via the talk protocol, was frequently used by pranksters in the early 1990s to corrupt the terminal output of an unlucky target. It did this by embedding terminal control sequences in the field normally used for the name of the person making the request. When the victim received the talk request, the sender's "name" was written to their screen, causing the embedded terminal commands to execute and rendering the display unreadable until it was reset. Later versions of talk blocked flash attempts and alerted the user that one had taken place. It later became clear that, by sending different terminal commands, it is even possible to make the victim's terminal execute commands. As it has proven impossible to fix all programs that output untrusted data to the terminal, modern terminal emulators have been rewritten to block this attack, though some may still be vulnerable.
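The defence against this class of attack is to strip control characters from untrusted text before writing it to the terminal. A minimal sketch of that idea; the function name and the example "name" are illustrative:

```python
# The flash attack worked because talk printed an attacker-supplied name
# containing terminal escape sequences. Stripping non-printable control
# characters from untrusted text neutralises the embedded commands.
import unicodedata

def sanitize_for_terminal(untrusted: str) -> str:
    """Drop control characters (Unicode category 'Cc'), including ESC,
    while keeping ordinary whitespace like newline and tab."""
    return "".join(
        ch for ch in untrusted
        if unicodedata.category(ch) != "Cc" or ch in "\n\t"
    )

malicious_name = "mallory\x1b[2J\x07"          # clear-screen escape + bell
print(sanitize_for_terminal(malicious_name))   # prints "mallory[2J" -
                                               # the ESC and BEL are gone,
                                               # so nothing executes
```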
https://en.wikipedia.org/wiki?curid=30319
Sex Pistols The Sex Pistols were an English punk rock band that formed in London in 1975. They were responsible for initiating the punk movement in the United Kingdom and inspiring many later punk and alternative rock musicians. Although their initial career lasted just two and a half years and produced only four singles and one studio album, "Never Mind the Bollocks, Here's the Sex Pistols", they are regarded as one of the most influential acts in the history of popular music. The Sex Pistols originally comprised vocalist Johnny Rotten (John Lydon), guitarist Steve Jones, drummer Paul Cook and bassist Glen Matlock. Matlock was replaced by Sid Vicious in early 1977. Under the management of Malcolm McLaren, the band attracted controversies that both captivated and appalled Britain. Through an obscenity-laced television interview in December 1976 and their May 1977 single "God Save the Queen", attacking Britons' social conformity and deference to the Crown, they precipitated the punk rock movement. In January 1978, at the end of an over-hyped and turbulent tour of the United States, Rotten announced the band's break-up. Over the next few months, the three remaining band members recorded songs for McLaren's film version of the Sex Pistols' story, "The Great Rock 'n' Roll Swindle". Vicious died of a heroin overdose in February 1979, following his arrest for the alleged murder of his girlfriend. Rotten, Jones, Cook and Matlock briefly reunited for a concert tour in 1996. On 24 February 2006, the Sex Pistols—the four original members plus Vicious—were inducted into the Rock and Roll Hall of Fame, but they refused to attend the ceremony, calling the museum "a piss stain". The Sex Pistols evolved from The Strand, a London band formed in 1972 with working-class teenagers Steve Jones on vocals, Paul Cook on drums and Wally Nightingale on guitar. According to a later account by Jones, both he and Cook played on instruments they had stolen. 
Early line-ups of The Strand—sometimes known as The Swankers—also included Jim Mackin on organ and Stephen Hayes (and later, briefly, Del Noones) on bass. The band members regularly hung out at two clothing shops on the King's Road in Chelsea, London: John Krivine and Steph Raynor's Acme Attractions (where Don Letts worked as manager) and Malcolm McLaren and Vivienne Westwood's Too Fast to Live, Too Young to Die. McLaren's and Westwood's shop had opened in 1971 as Let It Rock, with a 1950s revival Teddy Boy theme. It had been renamed in 1972 to focus on another revival trend, the 1950s rocker look associated with Marlon Brando. As John Lydon later observed, "Malcolm and Vivienne were really a pair of shysters: they would sell anything to any trend that they could grab onto." The shop became a focal point of the punk rock scene, bringing together participants such as the future Sid Vicious, Marco Pirroni, Gene October, and Mark Stewart, among many others. Jordan, the wildly styled shop assistant, is credited with "pretty well single-handedly paving the punk look". In early 1974, Jones asked McLaren to manage The Strand. Effectively agreeing, McLaren paid for their first formal rehearsal space. Glen Matlock, an art student who occasionally worked at 'Too Fast to Live, Too Young to Die', was recruited as the band's regular bassist. In November, McLaren temporarily relocated to New York City. Before his departure, McLaren and Westwood had conceived a new identity for their shop: renamed Sex, it changed its focus from retro couture to S&M-inspired "anti-fashion", with a billing as "Specialists in rubberwear, glamourwear & stagewear". After informally managing and promoting the New York Dolls for a few months, McLaren returned to London in May 1975. Inspired by the punk scene that was emerging in Lower Manhattan—in particular by the radical visual style and attitude of Richard Hell, then with Television—McLaren began taking a greater interest in The Strand members.
The group had been rehearsing regularly, overseen by McLaren's friend Bernard Rhodes, and had performed publicly for the first time. Soon after McLaren's return, Nightingale was kicked out of the band and Jones, uncomfortable as frontman, took over guitar duties. According to journalist and former McLaren employee Phil Strongman, around this time the band adopted the name QT Jones and the Sex Pistols (or QT Jones & His Sex Pistols, as one Rhodes-designed T-shirt put it). McLaren had been talking with the New York Dolls' Sylvain Sylvain about coming over to England to front the group. When those plans fell through, McLaren, Rhodes and the band began looking locally for a new member to assume the lead vocal duties. As described by Matlock, "Everyone had long hair then, even the milkman, so what we used to do was if someone had short hair we would stop them in the street and ask them if they fancied themselves as a singer". For instance, former singer with boy band Slik and future Ultravox front man Midge Ure claims to have been approached by McLaren, but to have refused the offer. With the search going nowhere, McLaren made several calls to Richard Hell, who turned down the invitation. In August 1975, Bernard Rhodes spotted nineteen-year-old King's Road habitué John Lydon wearing a Pink Floyd T-shirt with the words "I Hate" handwritten above the band's name and holes scratched through the eyes. Reports vary at this point: the same day, or soon after, either Rhodes or McLaren asked Lydon to come to a nearby pub in the evening to meet Jones and Cook. According to Jones, "He came in with green hair. I thought he had a really interesting face. I liked his look. He had his 'I Hate Pink Floyd' T-shirt on, and it was held together with safety pins. John had something special, but when he started talking he was a real arsehole—but smart." 
When the pub closed, the group moved on to Sex, where Lydon, who had given little thought to singing, was convinced to improvise along to Alice Cooper's "I'm Eighteen" on the shop jukebox. Though the performance drove the band members to laughter, McLaren convinced them to start rehearsing with Lydon. Lydon later described the social context in which the band came together: Early Seventies Britain was a very depressing place. It was completely run-down, there was trash on the streets, total unemployment—just about everybody was on strike. Everybody was brought up with an education system that told you point blank that if you came from the wrong side of the tracks...then you had no hope in hell and no career prospects at all. Out of that came pretentious "moi" and the Sex Pistols and then a whole bunch of copycat wankers after us. Journalist Nick Kent of the "New Musical Express" ("NME") jammed occasionally with the band, but left upon Lydon's recruitment. "When I came along, I took one look at him and said, 'No. That has to go,'" Lydon later explained. "He's never written a good word about me ever since." In September, McLaren again helped hire private rehearsal space for the group, who had been practising in pubs. Cook, who had a full-time job he was loath to give up, was making noises about quitting. According to Matlock's later description, Cook "created a smokescreen" by claiming Jones was not skilled enough to be the band's sole guitarist. An advertisement was placed in "Melody Maker" for a "Whizz Kid Guitarist. Not older than 20. Not worse looking than Johnny Thunders" (referring to a leading member of the New York punk scene). Most of those who auditioned were incompetent, but in McLaren's view, the process created a new sense of solidarity among the four band members. Steve New was considered the one talented guitarist to have tried out and the band invited him to join. 
Jones was improving rapidly, however, and the band's developing sound had no room for the technical lead work at which New was adept. He departed after a month. Lydon had been rechristened "Johnny Rotten" by Jones, apparently because of his bad dental hygiene. The band also settled on a name. After considering options such as Le Bomb, Subterraneans, the Damned, Beyond, Teenage Novel, Kid Gladlove, and Crème de la Crème, they decided on Sex Pistols—a shortened form of the name they had apparently been working under informally. McLaren later said the name derived "from the idea of a pistol, a pin-up, a young thing, a better-looking assassin". Not given to modesty, false or otherwise, he added: "[I] launched the idea in the form of a band of kids who could be perceived as being bad." The group began writing original material: Rotten was the lyricist and Matlock the primary melody writer (though their first collaboration, "Pretty Vacant", had a complete lyric by Matlock, which Rotten tweaked a bit); official credit was shared equally among the four. Their first gig was arranged by Matlock, who was studying at Saint Martins College. The band played at the school on 6 November 1975, in support of a pub rock group called Bazooka Joe, arranging to use their amps and drums. The Sex Pistols performed several cover songs, including the Who's "Substitute", the Small Faces' "Whatcha Gonna Do About It", and "(I'm Not Your) Steppin' Stone", made famous by the Monkees; according to observers, they were unexceptional musically aside from being extremely loud. Before the Pistols could play the few original songs they had written to date, Bazooka Joe pulled the plugs as they saw their gear being trashed. A brief physical altercation between members of the two bands took place on stage. The Saint Martins gig was followed by other performances at colleges and art schools around London. 
The Sex Pistols' core group of followers—including Siouxsie Sioux, Steven Severin and Billy Idol, who eventually formed bands of their own, as well as Jordan and Soo Catwoman—came to be known as the Bromley Contingent, after the suburban borough several were from. Their cutting-edge fashion, much of it supplied by Sex, ignited a trend that was adopted by the new fans the band attracted. McLaren and Westwood saw the incipient London punk movement as a vehicle for more than just couture. They were both captivated by the May 1968 radical uprising in Paris, particularly by the ideology and agitations of the Situationists, as well as the anarchist thought of Buenaventura Durruti and others. These interests were shared with Jamie Reid, an old friend of McLaren who began producing publicity material for the Sex Pistols in the spring of 1976. The cut-up lettering (like that used in the notes left by kidnappers or terrorists) employed to create the classic Sex Pistols logo and many subsequent designs for the band was actually introduced by McLaren's friend Helen Wallington-Lloyd. "We used to talk to John [Lydon] a lot about the Situationists," Reid later said. "The Sex Pistols seemed the perfect vehicle to communicate ideas directly to people who weren't getting the message from left-wing politics." McLaren was also arranging for the band's first photo sessions. As described by music historian Jon Savage, "With his green hair, hunched stance and ragged look, [Lydon] looked like a cross between Uriah Heep and Richard Hell." The first Sex Pistols gig to attract broader attention was as a supporting act for Eddie and the Hot Rods, a leading pub rock group, at the Marquee on 12 February 1976. Rotten "was now really pushing the barriers of performance, walking off stage, sitting with the audience, throwing Jordan across the dance floor and chucking chairs around, before smashing some of Eddie and the Hot Rods' gear." 
The band's first review appeared in the "NME", accompanied by a brief interview in which Steve Jones declared, "Actually we're not into music. We're into chaos." Among those who read the article were two students at the Bolton Institute of Technology, Howard Devoto and Pete Shelley, who headed down to London in search of the Sex Pistols. After chatting with McLaren at Sex, they saw the band at a couple of late February gigs. The two friends immediately began organising their own Pistols-style group, the Buzzcocks. As Devoto later put it, "My life changed the moment that I saw the Sex Pistols." The Pistols were soon playing other important venues, debuting at Oxford Street's 100 Club on 30 March. On 3 April, they played for the first time at the Nashville, supporting the 101ers. The pub rock group's lead singer, Joe Strummer, saw the Pistols for the first time that night—and recognised punk rock as the future. A return gig at the Nashville on 23 April demonstrated the band's growing musical competence, but by all accounts lacked a spark. Westwood provided that by instigating a fight with another audience member; McLaren and Rotten were soon involved in the melee. Cook later said, "That fight at the Nashville: that's when all the publicity got hold of it and the violence started creeping in... I think everybody was ready to go and we were the catalyst." The Pistols were soon banned from both the Nashville and the Marquee. On 23 April, as well, the debut album by the leading punk rock band in the New York scene, the Ramones, was released. Though it is regarded as seminal to the growth of punk rock in England and elsewhere, Lydon has repeatedly rejected any suggestion that it influenced the Sex Pistols: "[The Ramones] were all long-haired and of no interest to me. I didn't like their image, what they stood for, or anything about them"; "They were hilarious but you can only go so far with 'duh-dur-dur-duh'. I've heard it. Next. Move on." 
On 11 May, the Pistols began a four-week-long Tuesday night residency at the 100 Club. They devoted the rest of the month to touring small cities and towns in the north of England and recording demos in London with producer and recording artist Chris Spedding. The following month they played their first gig in Manchester, arranged by Devoto and Shelley. The Sex Pistols' 4 June performance at the Lesser Free Trade Hall set off a punk rock boom in the city. On 4 and 6 July, respectively, two newly formed London punk rock acts, the Clash—with Strummer as lead vocalist—and the Damned, made their live debuts opening for the Sex Pistols. On their off night in between, the Pistols (despite Lydon's later professed disdain) showed up for a Ramones gig at Dingwalls, like virtually everyone else at the heart of the London punk scene. During a return Manchester engagement, 20 July, the Pistols premiered a new song, "Anarchy in the U.K.", reflecting elements of the radical ideologies to which Rotten was being exposed. According to Jon Savage, "there seems little doubt that Lydon was fed material by Vivienne Westwood and Jamie Reid, which he then converted into his own lyric." "Anarchy in the U.K." was among the seven originals recorded in another demo session that month, this one overseen by the band's sound engineer, Dave Goodman. McLaren organized a major event for 29 August at the Screen on the Green in London's Islington district: the Buzzcocks and The Clash opened for the Sex Pistols in punk's "first metropolitan test of strength". Three days later, the band were in Manchester to tape what was their first television appearance, for Tony Wilson's "So It Goes". Scheduled to perform just one song, "Anarchy in the U.K.", the band ran straight through another two numbers as pandemonium broke out in the control room. The Sex Pistols played their first concert outside Britain on 3 September, at the opening of the Chalet du Lac disco in Paris. 
The Bromley Contingent made the trip and Siouxsie Sioux was hassled by locals due to her outfit with bare breasts. The following day, the "So It Goes" performance aired; the audience heard "Anarchy in the U.K." introduced with a shout of "Get off your arse!" On 13 September, the Pistols began a tour of Britain. A week later, back in London, they headlined the opening night of the 100 Club Punk Special. Organised by McLaren (for whom the word "festival" had too much of a hippie connotation), the event was "considered the moment that was the catalyst for the years to come." Belying the common perception that punk bands couldn't play their instruments, contemporary music press reviews, later critical assessments of concert recordings, and testimonials by fellow musicians indicate that the Pistols had developed into a tight, ferocious live band. As Rotten tested out wild vocalisation styles, the instrumentalists experimented "with overload, feedback and distortion...pushing their equipment to the limit". On 8 October 1976, the major record label EMI signed the Sex Pistols to a two-year contract. In short order, the band was in the studio recording a full-dress session with Dave Goodman. As later described by Matlock, "The idea was to get the spirit of the live performance. We were pressurized to make it faster and faster." The results were rejected by the band. Chris Thomas, who had produced Roxy Music and mixed Pink Floyd's "The Dark Side of the Moon", was brought in to produce. The band's first single, "Anarchy in the U.K.", was released on 26 November 1976. John Robb—soon to be a cofounder of The Membranes and later a music journalist—described the record's impact: "From Steve Jones' opening salvo of descending chords, to Johnny Rotten's fantastic sneering vocals, this song is the perfect statement...a stunningly powerful piece of punk politics...a lifestyle choice, a manifesto that heralds a new era". 
Colin Newman, who had just cofounded the band Wire, heard it as "the clarion call of a generation." "Anarchy in the U.K." was not the first British punk single, pipped by The Damned's "New Rose". "We Vibrate" had also appeared from The Vibrators, a pub rock band formed early in 1976 that had become associated with punk—though, according to Jon Savage "with their long hair and mildly risqué name, the Vibrators were passers-by as far as punk taste-makers were concerned." Unlike those songs, whose lyrical content was comfortably within rock 'n' roll traditions, "Anarchy in the U.K." linked punk to a newly politicised attitude—the Pistols' stance was aggrieved, euphoric and nihilistic, all at the same time. Rotten's howls of "I am an anti-Christ" and "Destroy!" repurposed rock as an ideological weapon. The single's packaging and visual promotion also broke new ground. Reid and McLaren came up with the notion of selling the record in a completely wordless, featureless black sleeve. The primary image associated with the single was Reid's "anarchy flag" poster: a Union Flag ripped up and partly safety-pinned back together, with the song and band names clipped along the edges of a gaping hole in the middle. This and other images created by Reid for the Sex Pistols quickly became punk icons. The Sex Pistols' behaviour, as much as their music, brought them national attention. On 1 December 1976, the band and members of the Bromley Contingent created a storm of publicity by swearing during an early evening live broadcast of Thames Television's "Today" programme. Appearing as last-minute replacements for fellow EMI artists Queen, who had dropped out due to Freddie Mercury having a dental appointment, the band and their entourage were offered drinks as they waited to go on air. During the interview, Jones said the band had "fucking spent" its label advance and Rotten used the word "shit". 
Host Bill Grundy, who had earlier claimed to be drunk, engaged in repartee with Siouxsie Sioux, who declared that she had "always wanted to meet" him. Grundy responded, "Did you really? We'll meet afterwards, shall we?" This prompted an expletive-laden exchange between Jones and the host. Although the programme was broadcast only in the London region, the ensuing furore occupied the tabloid newspapers for days. The "Daily Mirror" famously ran the headline "The Filth and the Fury!", and also asked "Who are these punks?"; other papers such as the "Daily Express" ("Fury at Filthy TV Chat") and the "Daily Telegraph" ("4-Letter Words Rock TV") followed suit. Thames Television suspended Grundy and, though he was later reinstated, the interview effectively ended his career. The episode made the band household names throughout the country and brought punk into mainstream awareness. The Pistols set out on the Anarchy Tour of the UK, supported by the Clash and Johnny Thunders' band the Heartbreakers, over from New York. The Damned were briefly part of the tour, before McLaren kicked them off. Media coverage was intense, and many of the concerts were cancelled by organisers or local authorities; of approximately twenty scheduled gigs, only about seven actually took place. Following a campaign waged in the south Wales press, a crowd including carol singers and a Pentecostal preacher protested against the group outside a show in Caerphilly. Packers at the EMI plant refused to handle the band's single. London councillor Bernard Brook Partridge declared, "Most of these groups would be vastly improved by sudden death. The worst of the punk rock groups I suppose currently are the Sex Pistols. They are unbelievably nauseating. They are the antithesis of humankind. I would like to see somebody dig a very, very large, exceedingly deep hole and drop the whole bloody lot down it."
Following the end of the tour in late December, three concerts were arranged in the Netherlands for January 1977. The band, hungover, boarded a plane at London Heathrow Airport early on 4 January; a few hours later, the "Evening News" was reporting that the band had "vomited and spat their way" to the flight. Despite categorical denials by the EMI representative who accompanied the group, the label, which was under political pressure, released the band from their contract. In one journalist's later description, the Pistols had "stoked a moral panic...precipitating the cancellation of gigs, the band’s expulsion from their EMI record deal and lurid tabloid tales of punk’s 'shock cult'". As McLaren fielded offers from other labels, the band went into the studio for a round of recordings with Goodman, their last with either him or Matlock. In February 1977, word leaked out that Matlock was leaving the Sex Pistols. On 28 February, McLaren sent a telegram to the "NME" confirming the split. He claimed that Matlock had been "thrown out...because he went on too long about Paul McCartney...The Beatles was too much." In an interview a few months afterwards, Steve Jones echoed the charge that Matlock had been sacked because he "liked The Beatles". Years later, Jones expanded on the matter of the band's issues with Matlock: "He was a good writer but he didn't look like a Sex Pistol and he was always washing his feet. His mum didn't like the songs." Matlock told the "NME" that he had voluntarily left the band by "mutual agreement". Later, in his autobiography, he described the primary impetus as his increasingly acrimonious relationship with Rotten, exacerbated—in Matlock's account—by the rampant inflation of Rotten's ego "once he'd had his name in the papers". Lydon later claimed that "God Save the Queen", the belligerently sardonic song planned as the band's second single, had been the final straw: "[Matlock] couldn't handle those kinds of lyrics. 
He said it declared us fascists." Though the singer could hardly see how anti-royalism equated with fascism, he claimed, "Just to get rid of him, I didn't deny it." Jon Savage suggests that Rotten pushed Matlock out in an effort to demonstrate his power and autonomy from McLaren. Matlock almost immediately formed his own band, Rich Kids, with Midge Ure, Steve New, and Rusty Egan. Matlock was replaced by Rotten's friend and self-appointed "ultimate Sex Pistols fan" Sid Vicious. Born John Simon Ritchie, later known as John Beverley, Vicious was previously drummer of two inner circle punk bands, Siouxsie and the Banshees and The Flowers of Romance. He was also credited with introducing the pogo dance to the scene at the 100 Club. John Robb claims it was at the first Sex Pistols residency gig, 11 May 1976; Matlock is convinced it happened during the second night of the 100 Club Punk Special in September, when the Pistols were off playing in Wales. In Matlock's description, Rotten wanted Vicious in the band because "[i]nstead of him against Steve and Paul, it would become him and Sid against Steve and Paul. He always thought of it in terms of opposing camps". Julien Temple, then a film student whom McLaren had put on the Sex Pistols payroll to create a comprehensive audiovisual record of the band, concurs: "Sid was John's protégé in the group, really. The other two just thought he was crazy." McLaren later stated that, much earlier in the band's career, Vivienne Westwood had told him he should "get the guy called John who came to the store a couple of times" to be the singer. When Johnny Rotten was recruited for the band, Westwood said McLaren had got it wrong: "he had got the wrong John." It was John Beverley, the future Vicious, she had been recommending. McLaren approved the belated inclusion of Vicious, who had virtually no experience on his new instrument, on account of his look and reputation in the punk scene. 
Pogoing aside, Vicious had been involved in a notorious incident during that memorable second night of the 100 Club Punk Special. Arrested for hurling a glass at The Damned that shattered and blinded a girl in one eye, he had served time in a remand centre—and contributed to the 100 Club banning all punk bands. At a previous 100 Club gig, he had assaulted Nick Kent with a bicycle chain. Indeed, McLaren's "NME" telegram said that Vicious's "best credential was he gave Nick Kent what he deserved many months ago at the Hundred Club". According to a later description by McLaren, "When Sid joined he couldn't play guitar but his craziness fit into the structure of the band. He was the knight in shining armour with a giant fist." "Everyone agreed he had the look," Lydon later recalled, but musical skill was another matter. "The first rehearsals...in March of 1977 with Sid were hellish... Sid really tried hard and rehearsed a lot". Marco Pirroni, who had performed with Vicious in Siouxsie and the Banshees, has said, "After that, it was nothing to do with music anymore. It would just be for the sensationalism and scandal of it all. Then it became the Malcolm McLaren story". Membership in the Sex Pistols had a progressively destructive effect on Vicious. As Lydon later observed, "Up to that time, Sid was absolutely childlike. Everything was fun and giggly. Suddenly he was a big pop star. Pop star status meant press, a good chance to be spotted in all the right places, adoration. That's what it all meant to Sid." Westwood had already been feeding him material, like a tome on Charles Manson, likely to encourage his worst instincts. Early in 1977, he met Nancy Spungen, an emotionally disturbed drug addict and sometime prostitute from New York. Spungen is commonly thought to be responsible for introducing Vicious to heroin, and the emotional codependency between the couple alienated Vicious from the other members of the band. 
Lydon later wrote, "We did everything to get rid of Nancy... She was killing him. I was absolutely convinced this girl was on a slow suicide mission... Only she didn't want to go alone. She wanted to take Sid with her... She was so utterly fucked up and evil." On 10 March 1977, at a press ceremony held outside Buckingham Palace, the Sex Pistols publicly signed to A&M Records (the real signing had taken place the day before). Afterwards, intoxicated, they made their way to the A&M offices. Vicious smashed in a toilet bowl and cut his foot—there is some disagreement about which happened first. As Vicious trailed blood around the offices, Rotten verbally abused the staff and Jones got frisky in the ladies' room. A couple of days later, the Pistols got into a rumble with another band at a club; one of Rotten's pals threatened the life of a good friend of A&M's English director. On 16 March, A&M broke its contract with the Pistols. Twenty-five thousand copies of the planned "God Save the Queen" single, produced by Chris Thomas, had already been pressed: virtually all were destroyed. Vicious debuted with the band at London's Notre Dame Hall on 28 March. In May, the band signed with Virgin Records, their third label in little more than half a year. Virgin was more than ready to release "God Save the Queen", but new obstacles arose. Workers at the pressing plant laid down their tools in protest at the song's content. Jamie Reid's now-famous cover, showing Queen Elizabeth II with her features obscured by the song and band names in cutout letters, offended the sleeve's plate makers. After much talk, production resumed and the record was finally released on 27 May. The scabrous lyrics—"God save the queen/She ain't no human being/And there's no future/In England's dreaming"—prompted widespread outcry. Several major chains refused to stock the single. 
It was banned not only by the BBC but also by every independent radio station, making it the "most heavily censored record in British history". Rotten boasted, "We're the only honest band that's hit this planet in about two thousand million years." Jones shrugged off everything the song stated and implied—or took nihilism to a logical endpoint: "I don't see how anyone could describe us as a political band. I don't even know the name of the prime minister." The song, and its public impact, are now recognised as "punk's crowning glory". The Virgin release had been timed to coincide with the height of Queen Elizabeth's Silver Jubilee celebrations. By Jubilee weekend, a week and a half after the record's release, it had sold more than 150,000 copies—a massive success. On 7 June, McLaren and the record label arranged to charter a private boat and have the Sex Pistols perform while sailing down the River Thames, passing Westminster Pier and the Houses of Parliament. The event, a mockery of the Queen's river procession planned for two days later, ended in chaos. Police launches forced the boat to dock, and constabulary surrounded the gangplanks at the pier. While the band members and their equipment were hustled down a side stairwell, McLaren, Westwood, and many of the band's entourage were arrested. In critic Sean O'Hagan's description, the Pistols had set off the "last and greatest outbreak of pop-based moral pandemonium". With the official UK record chart for Jubilee week about to be released, the "Daily Mirror" predicted that "God Save the Queen" would be number one. As it turned out, the record placed second, behind a Rod Stewart single in its fourth week at the top. Many believed that the record had actually qualified for the top spot, but that the chart had been rigged to prevent a spectacle. McLaren later claimed that CBS Records, which was distributing both singles, told him that the Sex Pistols were actually outselling Stewart two to one. 
There is evidence that an exceptional directive was issued by the British Phonographic Industry, which oversaw the chart-compiling bureau, to exclude sales from record-company-operated shops such as Virgin's for that week only. Attacks on punk fans rose, and in mid-June Rotten was assaulted by a knife-wielding gang outside Islington's Pegasus pub, causing tendon damage to his left arm. Jamie Reid and Paul Cook were beaten up in other incidents; three days after the Pegasus assault, Rotten was attacked again. According to Cook, after the "God Save the Queen" single and the Bill Grundy incident, the Pistols were public enemy number one, and there was a rivalry between gangs of rockabillies, or Teddy Boys, and the punks, which resulted in many fights. A tour of Scandinavia, planned to start at the end of the month, was delayed until mid-July. In Oslo, Lydon posed for photographs making the Nazi salute while wearing a sweater with a swastika. During the tour, a Swedish interviewer told Jones that "a lot of people" regarded the band as McLaren's "creation". Jones replied, "He's our manager, that's all. He's got nothing to do with the music or the image...he's just a good manager." In another interview, Rotten professed bafflement at the furore surrounding the group: "I don't understand it. All we're trying to do is destroy everything." At the end of August came SPOTS—Sex Pistols on Tour Secretly, a surreptitious UK tour with the band playing under pseudonyms to avoid cancellation. McLaren had wanted for some time to make a movie featuring the Sex Pistols. Julien Temple's first major task had been to assemble "Sex Pistols Number 1", a 25-minute mosaic of footage from various sources, much of it refilmed by Temple from television screens. "Number 1" was often screened at concert venues before the band took the stage. Using media footage from the Thames incident, Temple created another propaganda-like short, "Jubilee Riverboat" (aka "Sex Pistols Number 2"). 
During summer 1977, McLaren had been making arrangements for the feature film of his dreams, "Who Killed Bambi?", to be directed by Russ Meyer from a script by Roger Ebert. After a single day of shooting, 11 September, production ceased when it became clear that McLaren had failed to arrange financing. Since the spring of 1977, the three senior Sex Pistols had been returning to the studio periodically with Chris Thomas to lay down the tracks for the band's debut album. Initially to be called "God Save Sex Pistols", it became known during the summer as "Never Mind the Bollocks". According to Jones, "Sid wanted to come down and play on the album, and we tried as hard as possible not to let him anywhere near the studio. Luckily he had hepatitis at the time." Cook later described how many of the instrumental tracks were built up from drum and guitar parts, rather than the usual drum and bass. Given Vicious's incompetence, Matlock had been invited to record as a session musician. In his autobiography, Matlock says he agreed to "help out", but then suggests that he cut all ties after McLaren issued the 28 February "NME" telegram announcing Matlock had been fired for liking the Beatles. According to Jon Savage, Matlock did play as a hired hand on 3 March, for what Savage describes as an "audition session". In his autobiography, Lydon claims that Matlock's work-for-hire for his ex-band was extensive—much more so than any other source reports—seemingly to amplify a putdown: "I think I'd rather die than do something like that." Music historian David Howard states unambiguously that Matlock did not perform on any of the "Never Mind the Bollocks" recording sessions. It was Jones who ultimately played most of the bass parts on "Bollocks"; Howard calls his rudimentary, rumbling approach the "explosive missing ingredient" of the Sex Pistols' sound. Vicious's bass is reportedly present on one track that appeared on the original album release, "Bodies". 
Jones recalls, "He played his farty old bass part and we just let him do it. When he left I dubbed another part on, leaving Sid's down low. I think it might be barely audible on the track." Following "God Save the Queen", two more singles were released from these sessions, "Pretty Vacant" (largely written by Matlock) on 1 July and "Holidays in the Sun" on 14 October. Each was a Top Ten hit. "Never Mind the Bollocks, Here's the Sex Pistols" was released on 28 October 1977. "Rolling Stone" praised the album as "just about the most exciting rock & roll record of the Seventies", applauding the band for playing "with an energy and conviction that is positively transcendent in its madness and fever". Some critics, disappointed that the album contained all four previously released singles, dismissed it as little more than a "greatest hits" record. Containing both "Bodies"—in which Rotten utters "fuck" six times—and the previously censored "God Save the Queen" and featuring the word "bollocks" (popular slang for testicles) in its title, the album was banned by Boots, W. H. Smith and Woolworth's. The Conservative shadow minister for education condemned it as "a symptom of the way society is declining" and both the Independent Television Companies' Association and the Association of Independent Radio Contractors banned its advertisements. Nonetheless, advance sales were sufficient to make it an undeniable number one on the album chart. The album title led to a legal case that attracted considerable attention: a Virgin Records store in Nottingham that put the album in its window was threatened with prosecution for displaying "indecent printed matter". 
The case was thrown out when defending QC John Mortimer produced an expert witness who established that "bollocks" was an Old English term for a small ball, that it appeared in place names without causing local communities erotic disturbance, and that in the nineteenth century it had been used as a nickname for clergymen: "Clergymen are known to talk a good deal of rubbish and so the word later developed the meaning of nonsense." In the context of the Pistols' album title, the term does in fact primarily signify "nonsense". Steve Jones off-handedly came up with the title as the band debated what to call the album. An exasperated Jones said, "Oh, fuck it, never mind the bollocks of it all." After playing a few dates in the Netherlands—the beginning of a planned multinational tour—the band set out on a Never Mind the Bans tour of Britain in December 1977. Of eight scheduled dates, four were cancelled due to illness or political pressure. On Christmas Day, the Sex Pistols played two shows at Ivanhoe's in Huddersfield. Before a regular evening concert, the band performed a benefit matinee for the children of "striking firemen, laid-off workers and one-parent families." These were the band's final UK performances for more than eighteen years. In January 1978, the Sex Pistols embarked on a US tour, consisting mainly of dates in America's Deep South. Originally scheduled to begin a few days before New Year's, it was delayed due to American authorities' reluctance to issue visas to band members with criminal records. Several dates in the North had to be cancelled as a result. Though highly anticipated by fans and media, the tour was plagued by in-fighting, poor planning and physically belligerent audiences. McLaren later admitted that he purposely booked redneck bars to provoke hostile situations. Over the course of the two weeks, Vicious, by now heavily addicted to heroin, began to live up to his stage name. 
"He finally had an audience of people who would behave with shock and horror", Lydon later wrote. "Sid was easily led by the nose." Early in the tour, Vicious wandered off from his Holiday Inn in Memphis, looking for drugs. When he was ultimately found, he received a beating from the security team hired by Warner Bros., the band's American label. He subsequently appeared with the words "Gimme a fix" on his chest—accounts vary as to whether the words were written or carved there. During a concert in San Antonio, Vicious called the crowd "a bunch of faggots", before striking an audience member across the head with his bass guitar. In Baton Rouge, he received simulated oral sex on stage, later declaring "that's the kind of girl I like". Suffering from heroin withdrawal during a show in Dallas, he spat blood at a woman who had climbed onstage and punched him in the face. He was admitted to hospital later that night to treat various injuries. Offstage he is said to have kicked a photographer, attacked a security guard, and eventually challenged one of his own bodyguards to a fight—beaten up, he is reported to have exclaimed, "I like you. Now we can be friends." Rotten, meanwhile, suffering from flu and coughing up blood, felt increasingly isolated from Cook and Jones, and disgusted by Vicious. On 14 January 1978, during the tour's final date at the Winterland Ballroom in San Francisco, a disillusioned Rotten introduced the band's encore saying, "You'll get one number and one number only 'cause I'm a lazy bastard." That one number was a Stooges cover, "No Fun". At the end of the song, Rotten, kneeling on the stage, chanted an unambiguous declaration, "This is no fun. No fun. This is no fun—at all. No fun." As the final cymbal crash died away, Rotten addressed the audience directly—"Ah-ha-ha. Ever get the feeling you've been cheated? Good night"—before throwing down his microphone and walking offstage. 
He later observed, "I felt cheated, and I wasn't going on with it any longer; it was a ridiculous farce. Sid was completely out of his brains—just a waste of space. The whole thing was a joke at that point... [Malcolm] wouldn't speak to me... He would not discuss anything with me. But then he would turn around and tell Paul and Steve that the tension was all my fault because I wouldn't agree to anything." On 17 January, the band split, its members making their separate ways to Los Angeles. McLaren, Cook and Jones prepared to fly to Rio de Janeiro for a working vacation. Vicious, in increasingly bad shape, was taken to Los Angeles by a friend, who then brought him to New York, where he was immediately hospitalised. Rotten flew to New York, where he announced the band's break-up in a newspaper interview on 18 January. Virtually broke, he telephoned the head of Virgin Records, Richard Branson, who agreed to pay for his flight back to London, via Jamaica. In Jamaica, Branson met with members of the band Devo, and tried to install Rotten as their lead singer. Devo declined the offer, which Rotten also found unappealing. Cook, Jones and Vicious never performed live together again after Rotten's departure. Over the next several months, McLaren arranged for recordings in Brazil (with Jones and Cook), Paris (with Vicious) and London; each of the three and others stepped in as lead vocalists on tracks that in some cases were far from what punk was expected to sound like. These recordings were to make up the musical soundtrack for the reconceived Pistols feature film project, directed by Julien Temple, to which McLaren was now devoting himself. On 30 June, a single credited to the Sex Pistols was released: on one side, notorious criminal Ronnie Biggs sang "No One Is Innocent" accompanied by Jones and Cook; on the other, Vicious sang the classic "My Way", over both a Jones–Cook backing track and a string orchestra. 
The single reached number seven on the charts, eventually outselling all the singles with which Rotten was involved. McLaren was seeking to reconstitute the band with a permanent new frontman, but Vicious—McLaren's first choice—had sickened of him. In return for agreeing to record "My Way", Vicious had demanded that McLaren sign a sheet of paper declaring that he was no longer Vicious's manager. In August, Vicious, back in London, delivered his final performances as a nominal Sex Pistol: recording and filming cover versions of two Eddie Cochran songs. The bassist's return to New York in September put an end to McLaren's dreaming. After leaving the Pistols, Johnny Rotten reverted to his birth name of Lydon, and formed Public Image Ltd. (PiL) with former Clash member Keith Levene and school friend Jah Wobble. The band went on to score a UK Top Ten hit with their debut single, 1978's "Public Image". Lydon initiated legal proceedings against McLaren and the Sex Pistols' management company, Glitterbest, which McLaren controlled. Among the claims were non-payment of royalties, improper use of the title "Johnny Rotten", unfair contractual obligations, and damages for "all the criminal activities that took place". In 1979, PiL recorded the post-punk classic "Metal Box". Lydon performed with the band through 1992, as well as engaging in other projects such as Time Zone with Afrika Bambaataa and Bill Laswell. Vicious, now relocated to New York, began performing as a solo artist, with Nancy Spungen acting as his manager. He recorded a live album, backed by "The Idols" featuring Arthur Kane and Jerry Nolan of the New York Dolls—"Sid Sings" was released in 1979. On 12 October 1978, Spungen was found dead in the Hotel Chelsea room she was sharing with Vicious, with a stab wound to her stomach and dressed only in her underwear. Police recovered drug paraphernalia from the scene and Vicious was arrested and charged with her murder. 
In an interview at the time, McLaren said, "I can't believe he was involved in such a thing. Sid was set to marry Nancy in New York. He was very close to her and had quite a passionate affair with her." Actor and heroin dealer Rockets Redglare, who delivered pills to the apartment, has been mentioned as a possible alternative to Vicious as Spungen's killer. While free on bail, Vicious smashed a beer mug in the face of Todd Smith, Patti Smith's brother, and was arrested again on an assault charge. On 9 December 1978 he was sent to Rikers Island jail, where he spent 55 days and underwent enforced cold-turkey detox. He was released on 1 February 1979; sometime after midnight, following a small party to celebrate his release, Vicious died of a heroin overdose. He was twenty-one years old. Reflecting on the event, Lydon said, "Poor Sid. The only way he could live up to what he wanted everyone to believe about him was to die. That was tragic, but more for Sid than anyone else. He really bought his public image." On 7 February 1979, just five days after Vicious's death, hearings began in London on Lydon's lawsuit. Cook and Jones were allied with McLaren, but as evidence mounted that their manager had poured virtually all of the band's revenue into his beloved film project, they switched sides. On 14 February, the court put the film and its soundtrack into receivership—no longer under McLaren's control, they were now to be administered as exploitable assets for addressing the band members' financial claims. McLaren, with substantial personal debts and legal fees, took off for Paris to sign a record deal for an LP of standards, including "Non, je ne regrette rien". A month later, back in London, he disassociated himself from the film to which he had devoted so much time and money. McLaren went on to carry out a one-month consultancy for Adam and the Ants and manage their offshoot Bow Wow Wow. 
In the mid-1980s he released a number of successful and influential records as a solo artist. "The Great Rock 'n' Roll Swindle", the soundtrack album for the still-uncompleted film, was released by Virgin Records on 24 February 1979. It is mostly composed of tracks credited to the Sex Pistols: There are the new recordings with vocals by Jones, Vicious, Cook, and Ronnie Biggs, as well as Edward Tudor-Pole, who recorded a scene for the Sex Pistols' film. McLaren himself takes the mic for a couple of numbers. Several tracks feature Rotten's vocals from early, unissued sessions, in some cases with re-recorded backing by Jones and Cook. There is one live cut, from the band's final concert in San Francisco. The album is completed by a couple of tracks in which other artists cover Sex Pistols classics. Four Top Ten singles were culled from the "Swindle" recordings, one more than had appeared on "Never Mind the Bollocks". The 1978 "No One Is Innocent"/"My Way" was followed in 1979 by Vicious's cover of "Something Else" (number three, and the biggest-selling single ever under the Sex Pistols name); Jones singing an original, "Silly Thing" (number six); and Vicious's second Cochran cover, "C'mon Everybody" (number three). Two more singles from the soundtrack were put out under the Pistols brand—Tudor-Pole, among others, singing "The Great Rock 'n' Roll Swindle" and a Rotten vocal from 1976, "(I'm Not Your) Steppin' Stone"; both fell just shy of the Top Twenty. On 21 November 1980, the final "new" studio recordings attributed to the Sex Pistols were released by Virgin: "Black Leather" and "Here We Go Again", recorded by Jones and Cook during the mid-1978 "Swindle" sessions, were paired as one of a half-dozen 7-inch records (the other five reconfiguring previously released material) sold together as "Sex Pack". The Sex Pistols film was completed by Temple, who received sole credit for the script after McLaren had his name taken off the production. 
Finally released in 1980, "The Great Rock 'n' Roll Swindle" still largely reflects McLaren's vision. It is a fictionalized, farcical, partially animated retelling of the band's history and aftermath with McLaren in the lead role, Jones as second lead, and contributions from Vicious (including his memorable performance of "My Way") and Cook. It incorporates promotional videos shot for "God Save the Queen" and "Pretty Vacant" and extensive documentary footage as well, much of it focusing on Rotten. In Temple's description, he and McLaren conceived it as a "very stylized...polemic". They were reacting to the fact that the Pistols had become the "poster on the bedroom wall of the day where you kneel down last thing at night and pray to your rock god. And that was never the point... The myth had to be dynamited in some way. We had to make this film in a way to enrage the fans". In the film, McLaren claims to have created the band from scratch and engineered its notorious reputation; much of what structure the loose narrative has is based on McLaren's teaching a series of "lessons" to be learned from "an invention of mine they called the punk rock". Cook and Jones continued to work through guest appearances and as session musicians. In 1980, they formed The Professionals, which lasted for two years. Jones went on to play with the bands Chequered Past and Neurotic Outsiders. He also recorded two solo albums, "Mercy" and "Fire and Gasoline". Now a resident of Los Angeles, he hosts a daily radio program called "Jonesy's Jukebox." Having played with the band Chiefs of Relief in the late 1980s and with Edwyn Collins in the 1990s, Cook is now a member of Man Raze. Following The Rich Kids' break-up in 1979, Matlock played with various bands, toured with Iggy Pop, and recorded several solo albums. He is currently a member of Slinky Vagabond. The 1979 court ruling had left many issues between Lydon and McLaren unresolved. Five years later, Lydon filed another action. 
Finally, on 16 January 1986, Lydon, Jones, Cook and the estate of Sid Vicious were awarded control of the band's heritage, including the rights to "The Great Rock 'n' Roll Swindle" and all the footage shot for it—more than 250 hours. That same year, a fictionalised film account of Vicious's relationship with Spungen was released: "Sid and Nancy", directed by Alex Cox. In his autobiography, Lydon lambastes the film, saying that it "celebrates heroin addiction", goes out of its way to "humiliate [Vicious's] life", and completely misrepresents the Sex Pistols' part in the London punk scene. The original four Sex Pistols reunited in 1996 for the six-month Filthy Lucre Tour, which included dates in Europe, North and South America, Australia and Japan. The band members' access to the archives associated with "The Great Rock 'n' Roll Swindle" facilitated the production of the 2000 documentary "The Filth and the Fury". This film—directed, like its predecessor, by Temple—was formulated as an attempt to tell the story from the band's point of view, in contrast to "Swindle"'s focus on McLaren and the media. In 2002—the year of the Queen's Golden Jubilee—the Sex Pistols reunited again to play the Crystal Palace National Sports Centre in London. In 2003, their Piss Off Tour took them around North America for three weeks. On 9 March 2006, the band sold the rights to their back catalogue to Universal Music Group. An anonymous commentator for the Australian newspaper "The Age" called this a "sell out". In November 2006, the Sex Pistols were inducted into the Rock and Roll Hall of Fame. The band rejected the honour in coarse language on their website. In a television interview, Lydon said the Hall of Fame could "Kiss this!" and made a rude gesture. According to Jones, "Once you want to be put into a museum, Rock & Roll's over; it's not voted by fans, it's voted by people who induct you, or others; people who are already in it." 
The Sex Pistols reunited for five performances in the UK in November 2007. In 2008, they undertook a series of European festival appearances, titled the Combine Harvester Tour. In August, after performing at the Dutch festival A Campingflight to Lowlands Paradise, Lowlands director Eric van Eerdenburg declared the Pistols' performance "saddening": "They left their swimming pools at home only to scoop up some money here. Really, they're nothing more than that." That same year, they released the DVD "There'll Always Be An England", recorded at their Brixton Academy appearance on 10 November 2007. In 2010, Fragrance and Beauty Limited announced the release of an authorised Sex Pistols scent. According to a statement from the cosmetics firm, "the fragrance exudes pure energy, pared down and pumped up by leather, shot through with heliotrope and brought back down to earth by a raunchy patchouli." The band signed with Universal Music Group in 2012 to re-release "Never Mind the Bollocks, Here's the Sex Pistols". On 30 October 2018, former Sex Pistols members Steve Jones and Paul Cook joined up with Billy Idol and Tony James, both formerly of Generation X, another first-wave English punk rock band, to perform a free-entry gig at The Roxy in Hollywood, Los Angeles, under the name Generation Sex, playing a combined set of the two former bands' material. The "Trouser Press Record Guide" entry on the Sex Pistols remarks that "their importance—both to the direction of contemporary music and more generally to pop culture—can hardly be overstated". "Rolling Stone" has argued that the band, "in direct opposition to the star trappings and complacency" of mid-1970s rock, came to spark and personify "one of the few truly critical moments in pop culture—the rise of punk." In 2004, the magazine ranked the Sex Pistols No. 58 on its list of the "100 Greatest Artists of All Time." Leading music critic Dave Marsh called them "unquestionably the most radical new rock band of the Seventies." 
Although the Sex Pistols were not the first punk band, the few recordings that were released during the band's brief initial existence were singularly catalytic expressions of the punk movement. The releases of "Anarchy in the U.K.", "God Save the Queen" and "Never Mind the Bollocks" are counted among the most important events in the history of popular music. "Never Mind the Bollocks" is regularly cited in rankings of all-time great albums: in 2006, it was voted No. 28 in "Q" magazine's "100 Greatest Albums Ever", while "Rolling Stone" listed it at No. 2 in its 1987 "Top 100 Albums of the Last 20 Years". It has come to be recognised as among the most influential records in rock history. An AllMusic critic calls it "one of the greatest, most inspiring rock records of all time". The Sex Pistols directly inspired the style, and often the formation itself, of many punk and post-punk bands during their first two-and-a-half-year run. The Clash, Siouxsie and the Banshees, the Adverts, Vic Godard of Subway Sect, and Ari Up of the Slits are among those in London's "inner circle" of early punk bands that credit the Pistols. Pauline Murray of the Durham punk band Penetration saw the Pistols perform for the first time in Northallerton in May 1976. She later explained their importance: "Nothing would have happened without the Pistols. It was like, 'Wow, I believe in this.' What they were saying was: 'It's a load of shite. I'm going to do what I do and I don't care what people think.' That was the key to it. People forget that, but it was the main ideology for me: we don't care what you think—you're shit anyway. It was the attitude that got people moving, as well as the music." The Sex Pistols' 4 June 1976 concert at Manchester's Lesser Free Trade Hall was to become one of the most significant and mythologised events in rock history. 
Among the audience of only about forty people were many who became leading figures in the punk and post-punk movements: Pete Shelley and Howard Devoto, who organised the gig and were in the process of auditioning new members for the Buzzcocks; Bernard Sumner, Ian Curtis and Peter Hook, later of Joy Division; Mark E. Smith, later of The Fall; punk poet John Cooper Clarke and Morrissey, later of The Smiths. Anthony H. Wilson, founder of Factory Records, saw the band for the first time at the return engagement on 20 July. Among the many later musicians who have acknowledged their debt to the Pistols are members of the Jesus and Mary Chain, NOFX, The Stone Roses, Guns N' Roses, Nirvana, Green Day, and Oasis. Calling the band "immensely influential", a London College of Music study guide notes that "many styles of popular music, such as grunge, indie, thrash metal and even rap owe their foundations to the legacy of ground breaking punk bands—of which the Sex Pistols was the most prominent." According to Ira Robbins of the "Trouser Press Record Guide", "the Pistols and manager/provocateur Malcolm McLaren challenged every aspect and precept of modern music-making, thereby inspiring countless groups to follow their cue onto stages around the world. A confrontational, nihilistic public image and rabidly nihilistic socio-political lyrics set the tone that continues to guide punk bands." Critic Toby Creswell locates the primary source of inspiration somewhat differently. Noting that "[i]mage to the contrary, the Pistols were very serious about music", he argues, "the real rebel yell came from Jones' guitars: a mass wall of sound based on the most simple, retro guitar riffs. Essentially, the Sex Pistols reinforced what the garage bands of the '60s had demonstrated—you don't need technique to make rock & roll. In a time when music had been increasingly complicated and defanged, the Sex Pistols' generational shift caused a real revolution." 
Although much of the Sex Pistols' energy was directed against the establishment, not all of rock's elder statesmen dismissed them. Pete Townshend of the Who said: When you listen to the Sex Pistols, to "Anarchy in the U.K." and "Bodies" and tracks like that, what immediately strikes you is that "this is actually happening." This is a bloke, with a brain on his shoulders, who is actually saying something he "sincerely" believes is happening in the world, saying it with real venom, and real passion. It touches you and it scares you—it makes you feel uncomfortable. It's like somebody saying, "The Germans are coming! And there's no way we're gonna stop 'em!" Along with their abundant musical influence, the Sex Pistols' cultural reverberations are evident elsewhere. Jamie Reid's work for the band is regarded as among the most important graphic design of the 1970s and still influences the field in the 21st century. By the age of twenty-one, Sid Vicious was already a "t-shirt-selling icon". While the manner of his death signified for many the inevitable failure of punk's social ambitions, it cemented his image as an archetype of doomed youth. British punk fashion, still widely influential, is now customarily credited to Westwood and McLaren; as Johnny Rotten, Lydon had a lasting effect as well, especially through his bricolage approach to personal style: he "would wear a velvet collared drape jacket (ted) festooned with safety pins (Jackie Curtis through the New York punk scene), massive pin-stripe pegs (modernist), a pin-collar Wemblex (mod) customised into an Anarchy shirt (punk) and brothel creepers (ted)." Christopher Nolan, director of the Batman movie "The Dark Knight", has said that Rotten inspired the characterization of The Joker, played by Heath Ledger. According to Nolan, "We very much took the view in looking at the character of the Joker that what's strong about him is this idea of anarchy. This commitment to anarchy, this commitment to chaos." 
The Sex Pistols were defined by ambitions that went well beyond the musical—indeed, McLaren was at times openly contemptuous of the band's music and punk rock generally. "Christ, if people bought the records for the music, this thing would have died a death long ago," he said in 1977. The degree to which the Pistols' anti-establishment stance resulted from the members' spontaneous attitudes as opposed to being cultivated by McLaren and his associates is a matter of debate—as is the very nature of that stance itself. Deprecating the music, McLaren elevated the concept, for which he later took full credit. He claimed that the Sex Pistols were his personal, Situationist-style art project: "I decided to use people, just the way a sculptor uses clay." But what had he supposedly made? The Sex Pistols were as substantial as pop culture could get: "Punk became the most important cultural phenomenon of the late 20th century", McLaren later asserted. "Its authenticity stands out against the karaoke ersatz culture of today, where everything and everyone is for sale... [P]unk is not, and never was, for sale." Or they were a cynical con: something with which "to sell trousers", as McLaren said in 1989; a "carefully planned exercise to embezzle as much money as possible out of the music industry", as Jon Savage characterises McLaren's core theme in "The Great Rock 'n' Roll Swindle"; "cash from chaos" as the movie repeatedly puts it. Lydon, in turn, dismissed McLaren's influence: "We made our own scandal just by being ourselves. Maybe it was that he knew he was redundant, so he overcompensated. All the talk about the French Situationists being associated with punk is bollocks. It's nonsense!" Cook concurs: "Situationism had nothing to do with us. The Jamie Reids and Malcolms were excited because we were the real thing. I suppose we were what they were dreaming of." 
According to Lydon, "If we had an aim, it was to force our own, working-class opinions into the mainstream, which was unheard of in pop music at the time." Toby Creswell argues that the "Sex Pistols' agenda was inchoate, to say the least. It was a general call to rebellion that falls apart at the slightest scrutiny." Critic Ian Birch, writing in 1981, called "stupid" the claim that the Sex Pistols "had any political significance... If they did anything, they made a lot of people content with being nothing. They certainly didn't inspire the working classes." While the Conservative triumph in 1979 may be taken as evidence for that position, Julien Temple has noted that the scene inspired by the Sex Pistols "wasn't your kind of two-up, two-down working class normal families, most of it. It was over the edge of the precipice in social terms. They were actually giving a voice to an area of the working class that was almost beyond the pale." Within a year of "Anarchy in the U.K." that voice was being echoed widely: scores if not hundreds of punk bands had formed across the country—groups composed largely of working-class members or middle-class members who rejected their own class values and pursued solidarity with the working class. In 1980, critic Greil Marcus reflected on McLaren's contradictory posture: It may be that in the mind of their self-celebrated Svengali...the Sex Pistols were never meant to be more than a nine-month wonder, a cheap vehicle for some fast money, a few laughs, a touch of the old "épater la bourgeoisie". It may also be that in the mind of their chief terrorist and propagandist, anarchist veteran...and Situational artist McLaren, the Sex Pistols were meant to be a force that would set the world on its ear...and finally unite music and politics. The Sex Pistols were all of these things. 
A couple of years before, Marcus had identified different roots underlying the band's merger of music and politics, arguing that they "have absorbed from reggae and the Rastas the idea of a culture that will make demands on those in power which no government could ever satisfy; a culture that will be exclusive, almost separatist, yet also messianic, apocalyptic and stoic, and that will ignore or smash any contradiction inherent in such a complexity of stances." Critic Sean Campbell has discussed how Lydon's Irish Catholic heritage both facilitated his entrée into London's reggae scene and complicated his position for the ethnically English working class—the background his bandmates had in common. Critic Bill Wyman acknowledges that Lydon's "fierce intelligence and astonishing onstage charisma" were important catalysts, but ultimately finds the band's real meaning lies in McLaren's provocative media manipulations. While some of the Sex Pistols' public affronts were plotted by McLaren, Westwood, and company, others were evidently not—including what McLaren himself cites as the "pivotal moment that changed everything", the clash on the Bill Grundy "Today" show. "Malcolm milked situations", says Cook, "he didn't instigate them; that was always our own doing." It is also hard to ascribe the effect of the Sex Pistols' early Manchester shows on that city's nascent punk scene to anyone other than the musicians themselves. Matlock later wrote that at the point when he left the band, it was beginning to occur to him that McLaren "was in fact quite deliberately perpetrating that idea of us as his puppets... However, on the other hand, I've since found out that even Malcolm wasn't as aware of what he was up to as he has since made out." By his absence, Matlock demonstrated how crucial he was to the band's creativity: in the eleven months between his departure and the Pistols' demise, they composed only two songs. 
Music historian Simon Reynolds argues that McLaren came into his own as an auteur only after the group's break-up, with "The Great Rock 'n' Roll Swindle" and the recruitment of Ronnie Biggs as a vocalist. Much subsequent commentary on the Sex Pistols has relied on taking seriously McLaren's onscreen proclamations in the film, whether lending them credence or not. As music journalist Dave Thompson noted in 2000, "[T]oday, "Swindle" is viewed by many as the truth" (despite the fact that the movie purveys, among other things, a completely illiterate Steve Jones, a talking dog, and Sid Vicious shooting audience members, including his mother, at the conclusion of "My Way"). Temple points out that McLaren's characterization was intended as "a big fucking joke—that he was the puppetmeister who created these pieces of clay from plasticine boxes that he modeled away and made Johnny Rotten, made Sid Vicious. It was a "joke" that they were completely manufactured." (In his final onscreen scene in the film, McLaren declares that he was planning the Sex Pistols affair, "Ever since I was ten years old! Ever since Elvis Presley joined the army!" [1956 and 1958, respectively].) Temple acknowledges that McLaren ultimately "perhaps took this too much to heart." According to Pistols tour manager Noel Monk and journalist Jimmy Guterman, Lydon was much more than "the band's mouthpiece. He's its raging brain. McLaren or his friend Jamie Reid might drop a word like 'anarchy' or 'vacant' that Rotten seizes upon and turns into a manifesto, but McLaren is not the Svengali to Rotten he'd like to be perceived as. McLaren thought he was working with a tabula rasa, but he soon found out that Rotten has ideas of his own". On the other hand, there is little disagreement about McLaren's marketing talent and his crucial role in making the band a subcultural phenomenon soon after its debut. Temple adds that "he catalyzed so many people's heads. He had so many just extraordinary ideas". 
As Jon Savage emphasises, though, "In fact, it was Steve Jones who first had the idea of putting the group, or any group, together with McLaren. He chose McLaren, not vice versa."
https://en.wikipedia.org/wiki?curid=30320
Transcendental number In mathematics, a transcendental number is a complex number that is not algebraic—that is, not a root (i.e., solution) of a nonzero polynomial equation with integer or equivalently rational coefficients. The best-known transcendental numbers are π and "e". Though only a few classes of transcendental numbers are known, in part because it can be extremely difficult to show that a given number is transcendental, transcendental numbers are not rare. Indeed, almost all real and complex numbers are transcendental, since the algebraic numbers compose a countable set, while the set of real numbers and the set of complex numbers are both uncountable sets, and therefore larger than any countable set. All real transcendental numbers are irrational numbers, since all rational numbers are algebraic. The converse is not true: not all irrational numbers are transcendental. For example, the square root of 2 is an irrational number, but it is not a transcendental number as it is a root of the polynomial equation "x"2 − 2 = 0. The golden ratio (denoted formula_1 or formula_2) is another irrational number that is not transcendental, as it is a root of the polynomial equation "x"2 − "x" − 1 = 0. The name "transcendental" comes from the Latin "transcendĕre" 'to climb over or beyond, surmount', and was first used for the mathematical concept in Leibniz's 1682 paper in which he proved that sin "x" is not an algebraic function of "x". Euler, in the 18th century, was probably the first person to define transcendental "numbers" in the modern sense. Johann Heinrich Lambert conjectured that π and "e" were both transcendental numbers in his 1768 paper proving the number π is irrational, and proposed a tentative sketch of a proof of π's transcendence. Joseph Liouville first proved the existence of transcendental numbers in 1844, and in 1851 gave the first decimal examples such as the Liouville constant, in which the "n"th digit after the decimal point is 1 if "n" is equal to "k"! ("k" factorial) for some "k" and 0 otherwise. 
In other words, the "n"th digit of this number is 1 only if "n" is one of the numbers 1! = 1, 2! = 2, 3! = 6, 4! = 24, etc. Liouville showed that this number belongs to a class of transcendental numbers that can be more closely approximated by rational numbers than can any irrational algebraic number; numbers in this class are called Liouville numbers in his honour. Liouville showed that all Liouville numbers are transcendental. The first number to be proven transcendental without having been specifically constructed for the purpose of proving transcendental numbers' existence was "e", by Charles Hermite in 1873. In 1874, Georg Cantor proved that the algebraic numbers are countable and the real numbers are uncountable. He also gave a new method for constructing transcendental numbers. Although this was already implied by his proof of the countability of the algebraic numbers, Cantor also published a construction that proves there are as many transcendental numbers as there are real numbers. Cantor's work established the ubiquity of transcendental numbers. In 1882, Ferdinand von Lindemann published the first complete proof of the transcendence of π. He first proved that "e""a" is transcendental when "a" is any non-zero algebraic number. Then, since "e"iπ = −1 is algebraic (see Euler's identity), iπ must be transcendental. But since i is algebraic, π must therefore be transcendental. This approach was generalized by Karl Weierstrass to what is now known as the Lindemann–Weierstrass theorem. The transcendence of π allowed the proof of the impossibility of several ancient geometric constructions involving compass and straightedge, including the most famous one, squaring the circle. In 1900, David Hilbert posed an influential question about transcendental numbers, Hilbert's seventh problem: If "a" is an algebraic number that is not zero or one, and "b" is an irrational algebraic number, is "a""b" necessarily transcendental? The affirmative answer was provided in 1934 by the Gelfond–Schneider theorem. 
This work was extended by Alan Baker in the 1960s in his work on lower bounds for linear forms in any number of logarithms (of algebraic numbers). The set of transcendental numbers is uncountably infinite. Since the polynomials with rational coefficients are countable, and since each such polynomial has a finite number of zeroes, the algebraic numbers must also be countable. However, Cantor's diagonal argument proves that the real numbers (and therefore also the complex numbers) are uncountable. Since the real numbers are the union of algebraic and transcendental numbers, they cannot both be countable. This makes the transcendental numbers uncountable. No rational number is transcendental and all real transcendental numbers are irrational. The irrational numbers contain all the real transcendental numbers and a subset of the algebraic numbers, including the quadratic irrationals and other forms of algebraic irrationals. Any non-constant algebraic function of a single variable yields a transcendental value when applied to a transcendental argument. For example, from knowing that π is transcendental, it can be immediately deduced that numbers such as formula_4, formula_5, formula_6, and formula_7 are transcendental as well. However, an algebraic function of several variables may yield an algebraic number when applied to transcendental numbers if these numbers are not algebraically independent. For example, π and (1 − π) are both transcendental, but π + (1 − π) = 1 is obviously not. It is unknown whether π + "e", for example, is transcendental, though at least one of π + "e" and π"e" must be transcendental. More generally, for any two transcendental numbers "a" and "b", at least one of "a" + "b" and "ab" must be transcendental. To see this, consider the polynomial ("x" − "a")("x" − "b") = "x"2 − ("a" + "b")"x" + "ab". If ("a" + "b") and "ab" were both algebraic, then this would be a polynomial with algebraic coefficients. 
Because algebraic numbers form an algebraically closed field, this would imply that the roots of the polynomial, "a" and "b", must be algebraic. But this is a contradiction, and thus it must be the case that at least one of the coefficients is transcendental. The non-computable numbers are a strict subset of the transcendental numbers. All Liouville numbers are transcendental, but not vice versa. Any Liouville number must have unbounded partial quotients in its continued fraction expansion. Using a counting argument one can show that there exist transcendental numbers which have bounded partial quotients and hence are not Liouville numbers. Using the explicit continued fraction expansion of "e", one can show that "e" is not a Liouville number (although the partial quotients in its continued fraction expansion are unbounded). Kurt Mahler showed in 1953 that π is also not a Liouville number. It is conjectured that all infinite continued fractions with bounded terms that are not eventually periodic are transcendental (eventually periodic continued fractions correspond to quadratic irrationals). The first proof that the base of the natural logarithms, "e", is transcendental dates from 1873. We will now follow the strategy of David Hilbert (1862–1943), who gave a simplification of the original proof of Charles Hermite. The idea is the following: Assume, for the purpose of finding a contradiction, that "e" is algebraic. Then there exists a finite set of integer coefficients "c"0, "c"1, ..., "cn" satisfying the equation: Now for a positive integer "k", we define the following polynomial: and multiply both sides of the above equation by to arrive at the equation: This equation can be written in the form where Lemma 1. For an appropriate choice of "k", formula_27 is a non-zero integer. Proof. 
Each term in "P" is an integer times a sum of factorials, which results from the relation which is valid for any positive integer "j" (consider the Gamma function). It is non-zero because for every "a" satisfying 0 < "a" ≤ "n", the integrand in is "e−x" times a sum of terms whose lowest power of "x" is "k"+1 after substituting "x" for "x"+"a" in the integral. Then this becomes a sum of integrals of the form with "k"+1 ≤ "j", and it is therefore an integer divisible by ("k"+1)!. After dividing by "k!", we get zero modulo ("k"+1). However, we can write: and thus So when dividing each integral in "P" by "k!", the initial one is not divisible by "k"+1, but all the others are, as long as "k"+1 is prime and larger than "n" and |"c"0|. It follows that formula_27 itself is not divisible by the prime "k"+1 and therefore cannot be zero. Lemma 2. formula_34 for sufficiently large formula_35. where formula_37 and formula_38 are continuous functions of formula_39 for all formula_39, so are bounded on the interval formula_41. That is, there are constants formula_42 such that So each of those integrals composing formula_44 is bounded, the worst case being It is now possible to bound the sum formula_44 as well: where formula_48 is a constant not depending on formula_35. It follows that finishing the proof of this lemma. Choosing a value of formula_35 satisfying both lemmas leads to a non-zero integer (formula_52) added to a vanishingly small quantity (formula_53) equalling zero, which is an impossibility. It follows that the original assumption, that formula_54 can satisfy a polynomial equation with integer coefficients, is also impossible; that is, formula_54 is transcendental. A similar strategy, different from Lindemann's original approach, can be used to show that the number π is transcendental. Besides the gamma-function and some estimates as in the proof for "e", facts about symmetric polynomials play a vital role in the proof. 
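As an illustration of the Liouville construction described earlier (this sketch is not part of the article; the helper names `liouville_partial` and `digits` are ad hoc), the partial sums of the Liouville constant can be computed exactly with Python's `fractions` module, and the defining property—approximation by rationals far closer than any algebraic irrational permits—can be checked directly:

```python
from fractions import Fraction
from math import factorial

def liouville_partial(n):
    # Partial sum of Liouville's constant: 10^(-1!) + 10^(-2!) + ... + 10^(-n!)
    return sum(Fraction(1, 10 ** factorial(k)) for k in range(1, n + 1))

def digits(x, places):
    # First `places` digits of x after the decimal point, as a string
    return str(int(x * 10 ** places)).rjust(places, "0")

s3 = liouville_partial(3)            # a rational p/q with q = 10^(3!) = 10^6
print(digits(s3, 10))                # 1100010000: digit 1 at positions 1, 2, 6

# Liouville's phenomenon: the tail beyond s3 is smaller than 1/q^3.
q = 10 ** factorial(3)
tail = liouville_partial(4) - s3     # equals 10^(-4!) = 10^(-24)
print(tail < Fraction(1, q ** 3))    # True, since 10^(-24) < 10^(-18)
```

Here the third partial sum is a rational with denominator "q" = 106, yet it approximates the constant to within "q"−3; taking more terms beats "q"−"n" for every "n", which is precisely the quality of approximation Liouville's theorem forbids for irrational algebraic numbers.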
For detailed information concerning the proofs of the transcendence of and "e," see the references and external links. Kurt Mahler in 1932 partitioned the transcendental numbers into 3 classes, called S, T, and U. Definition of these classes draws on an extension of the idea of a Liouville number (cited above). One way to define a Liouville number is to consider how small a given real number x makes linear polynomials |"qx" − "p"| without making them exactly 0. Here "p", "q" are integers with |"p"|, |"q"| bounded by a positive integer "H". Let "m"("x", 1, "H") be the minimum non-zero absolute value these polynomials take and take: ω("x", 1) is often called the measure of irrationality of a real number "x". For rational numbers, ω("x", 1) = 0 and is at least 1 for irrational real numbers. A Liouville number is defined to have infinite measure of irrationality. Roth's theorem says that irrational real algebraic numbers have measure of irrationality 1. Next consider the values of polynomials at a complex number "x", when these polynomials have integer coefficients, degree at most "n", and height at most "H", with "n", "H" being positive integers. Let m("x","n","H") be the minimum non-zero absolute value such polynomials take at "x" and take: Suppose this is infinite for some minimum positive integer "n". A complex number "x" in this case is called a U number of degree "n". Now we can define ω("x") is often called the measure of transcendence of "x". If the ω("x","n") are bounded, then ω("x") is finite, and "x" is called an S number. If the ω("x","n") are finite but unbounded, "x" is called a T number. "x" is algebraic if and only if ω("x") = 0. Clearly the Liouville numbers are a subset of the U numbers. William LeVeque in 1953 constructed U numbers of any desired degree. The Liouville numbers and hence the U numbers are uncountable sets. They are sets of measure 0. T numbers also comprise a set of measure 0. It took about 35 years to show their existence. Wolfgang M. 
Schmidt in 1968 showed that examples exist. However, almost all complex numbers are S numbers. Mahler proved that the exponential function sends all non-zero algebraic numbers to S numbers: this shows that "e" is an S number and gives a proof of the transcendence of π. The most that is known about π is that it is not a U number. Many other transcendental numbers remain unclassified. Two numbers "x", "y" are called algebraically dependent if there is a non-zero polynomial "P" in 2 indeterminates with integer coefficients such that "P"("x", "y") = 0. There is a powerful theorem that 2 complex numbers that are algebraically dependent belong to the same Mahler class. This allows construction of new transcendental numbers, such as the sum of a Liouville number with "e" or π. The symbol S probably stood for the name of Mahler's teacher Carl Ludwig Siegel, and T and U are just the next two letters. Jurjen Koksma in 1939 proposed another classification based on approximation by algebraic numbers. Consider the approximation of a complex number "x" by algebraic numbers of degree ≤ "n" and height ≤ "H". Let α be an algebraic number of this finite set such that |"x" − α| has the minimum positive value. Define ω*("x","H","n") and ω*("x","n") by: If for a smallest positive integer "n", ω*("x","n") is infinite, "x" is called a U*-number of degree "n". If the ω*("x","n") are bounded and do not converge to 0, "x" is called an S*-number. A number "x" is called an A*-number if the ω*("x","n") converge to 0. If the ω*("x","n") are all finite but unbounded, "x" is called a T*-number. Koksma's and Mahler's classifications are equivalent in that they divide the transcendental numbers into the same classes. The "A*"-numbers are the algebraic numbers. Let λ be the Liouville constant considered above, the sum of 10−"n"! over the positive integers "n". It can be shown that the "n"th root of λ (a Liouville number) is a U-number of degree "n". This construction can be improved to create an uncountable family of U-numbers of degree "n". 
Let "Z" be the set consisting of every other power of 10 in the series above for λ. The set of all subsets of "Z" is uncountable. Deleting any of the subsets of "Z" from the series for λ creates uncountably many distinct Liouville numbers, whose "n"th roots are U-numbers of degree "n". The supremum of the sequence {ω("x", "n")} is called the type. Almost all real numbers are S numbers of type 1, which is minimal for real S numbers. Almost all complex numbers are S numbers of type 1/2, which is also minimal. These claims about almost all numbers were conjectured by Mahler and proved in 1965 by Vladimir Sprindzhuk.
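The displayed formulas defining Mahler's quantities did not survive in the text above. In one standard normalization consistent with the statements here (a restatement, since conventions vary slightly between authors), with "m"("x","n","H") the minimum non-zero absolute value as defined in the text, they read:

```latex
% m(x,n,H): minimum non-zero |P(x)| over integer polynomials P
% with degree at most n and height (largest |coefficient|) at most H.
\omega(x,n,H) = \frac{-\log m(x,n,H)}{n \log H}, \qquad
\omega(x,n)   = \limsup_{H \to \infty} \omega(x,n,H), \qquad
\omega(x)     = \limsup_{n \to \infty} \omega(x,n).
```

With these definitions, ω("x", 1) is the measure of irrationality described above, and the S, T, and U classes are read off from whether the ω("x","n") stay bounded, stay finite, or become infinite.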
https://en.wikipedia.org/wiki?curid=30325
The Terminator The Terminator is a 1984 American science fiction film directed by James Cameron. It stars Arnold Schwarzenegger as the Terminator, a cyborg assassin sent back in time from 2029 to 1984 to kill Sarah Connor (Linda Hamilton), whose son will one day become a savior against machines in a post-apocalyptic future. Michael Biehn plays Kyle Reese, a resistance soldier sent back in time to protect Sarah. The screenplay is credited to Cameron and producer Gale Anne Hurd, while co-writer William Wisher Jr. received a credit for additional dialogue. Executive producers John Daly and Derek Gibson of Hemdale Film Corporation were instrumental in financing and production. "The Terminator" topped the United States box office for two weeks and helped launch Cameron's film career and solidify Schwarzenegger's status as a leading man. Its success led to a franchise consisting of several sequels, a television series, comic books, novels and video games. In 2008, "The Terminator" was selected by the Library of Congress for preservation in the National Film Registry as "culturally, historically, or aesthetically significant". In 1984 Los Angeles, a cyborg assassin known as a Terminator arrives from 2029. Kyle Reese, a human soldier sent back in time from the same year, arrives shortly afterwards. The Terminator begins systematically killing women named Sarah Connor, whose addresses it finds in the telephone directory. It tracks the last Sarah Connor to a nightclub, but Kyle rescues her. The pair steal a car and escape with the Terminator pursuing them in a police car. As they hide in a parking lot, Kyle explains to Sarah that an artificial intelligence defense network, known as Skynet and created by Cyberdyne Systems, will become self-aware in the near future and initiate a nuclear holocaust. Sarah's future son John will rally the survivors and lead a resistance movement against Skynet and its army of machines. 
With the Resistance on the verge of victory, Skynet sent a Terminator back in time to kill Sarah before John is born, to prevent the formation of the Resistance. The Terminator, a Cyberdyne Systems Model 101, is an efficient killing machine with a powerful metal endoskeleton and an external layer of living tissue that makes it appear human. Kyle and Sarah are apprehended by police after another encounter with the Terminator. The Terminator attacks the police station, killing police officers in its attempt to locate Sarah. Kyle and Sarah escape, steal another car and take refuge in a motel, where they assemble pipe bombs and plan their next move. Kyle admits that he has been in love with Sarah since John gave him a photograph of her, and that he traveled through time to save her; reciprocating his feelings, Sarah kisses Kyle and they have sex. The Terminator kills Sarah's mother and impersonates her when Sarah, unaware of the Terminator's ability to mimic voices, attempts to contact her via telephone. When they realize the Terminator has located them, they escape in a pickup truck while it chases them on a motorcycle. In the ensuing chase, Kyle is wounded by gunfire while throwing pipe bombs at the Terminator. Enraged, Sarah knocks the Terminator off its motorcycle but loses control of the truck, which flips over. The Terminator, now bloodied and badly damaged, hijacks a tank truck and attempts to run down Sarah, but Kyle slides a pipe bomb onto the tanker's hose tube, causing an explosion that burns the flesh from the Terminator's endoskeleton. It pursues them into a factory, where Kyle activates machinery to confuse the Terminator. He jams his final pipe bomb into the Terminator's abdomen, blowing it apart, injuring Sarah, and killing himself. The Terminator's torso reactivates and grabs Sarah. She breaks free and lures it into a hydraulic press, crushing it. 
Months later, a pregnant Sarah is traveling through Mexico, recording audio tapes to pass on to her unborn son, John. At a gas station, a boy takes an instant photograph of her and she buys it—the same photograph that John will give to Kyle. Additional actors included Shawn Schepps as Nancy, Sarah's co-worker at the diner; Dick Miller as a gun shop clerk; professional bodybuilder Franco Columbu as a Terminator in the future; Bill Paxton and Brian Thompson as punks who are confronted and killed by the Terminator; and Marianne Muellerleile as one of the other women with the name "Sarah Connor" who is shot by the Terminator. In Rome, Italy, during the release of "Piranha II: The Spawning", director Cameron fell ill and had a dream about a metallic torso holding kitchen knives dragging itself from an explosion. Inspired by director John Carpenter, who had made the slasher film "Halloween" (1978) on a low budget, Cameron used the dream as a "launching pad" to write a slasher-style film. Cameron's agent disliked the early "Terminator" concept and requested that he work on something else. After this, Cameron dismissed his agent. Cameron returned to Pomona, California and stayed at the home of science fiction writer Randall Frakes, where he wrote the draft for "The Terminator". Cameron's influences included 1950s science fiction films, the 1960s fantasy television series "The Outer Limits," and contemporary films such as "The Driver" (1978) and "Mad Max 2" (1981). To translate the draft into a script, Cameron enlisted his friend Bill Wisher, who had a similar approach to storytelling. Cameron gave Wisher scenes involving Sarah Connor and the police department to write. As Wisher lived far from Cameron, the two communicated ideas by recording tapes of what they wrote by telephone. Frakes and Wisher would later write the US-released novelization of the movie. The initial outline of the script involved two Terminators being sent to the past. 
The first was similar to the Terminator in the film, while the second was made of liquid metal and could not be destroyed with conventional weaponry. Cameron felt that the technology of the time was unable to create the liquid Terminator, and returned to the idea with the T-1000 character in "Terminator 2: Judgment Day" (1991). Gale Anne Hurd, who had worked at New World Pictures as Roger Corman's assistant, showed interest in the project. Cameron sold the rights for "The Terminator" to Hurd for one dollar on the promise that she would produce it only if Cameron directed it. Hurd suggested edits to the script and took a screenwriting credit in the film, though Cameron stated that she "did no actual writing at all". Cameron and Hurd had friends who had worked with Corman previously and who were working at Orion Pictures (now part of MGM). Orion agreed to distribute the film if Cameron could get financial backing elsewhere. The script was picked up by John Daly, chairman and president of Hemdale Film Corporation. Daly and his executive vice president and head of production Derek Gibson became executive producers of the project. Cameron wanted his pitch to Daly to finalize the deal, and had his friend Lance Henriksen show up to the meeting early dressed and acting like the Terminator. Henriksen, wearing a leather jacket, fake cuts on his face, and gold foil on his teeth, kicked open the door to the office and then sat in a chair. Cameron arrived shortly afterwards and relieved the staff from Henriksen's act. Daly was impressed by the screenplay and Cameron's sketches and passion for the film. In late 1982, Daly agreed to back the film with help from HBO and Orion. "The Terminator" was originally budgeted at $4 million and later raised to $6.5 million. Hemdale, Pacific Western Productions and Cinema '84 have been credited as production companies after the film's release. 
For the role of Kyle Reese, Orion wanted a star whose popularity was rising in the United States but who also would have foreign appeal. Orion co-founder Mike Medavoy had met Arnold Schwarzenegger and sent his agent the script for "The Terminator". Cameron was dubious about casting Schwarzenegger as Reese as he felt he would need someone even more famous to play the Terminator. Sylvester Stallone and Mel Gibson were offered the Terminator role, but both turned it down. The studio suggested O. J. Simpson for the role, but Cameron did not feel that Simpson would be believable as a killer. Cameron agreed to meet with Schwarzenegger about the film and devised a plan to avoid casting him; he would pick a fight with him and return to Hemdale and find him unfit for the role. Upon meeting him, Cameron was entertained by Schwarzenegger, who talked about how the villain should be played. Cameron began sketching his face on a notepad and asked Schwarzenegger to stop talking and remain still. After the meeting, Cameron returned to Daly saying Schwarzenegger would not play Reese but that "he'd make a hell of a Terminator". Schwarzenegger was not as excited by the film; during an interview on the set of "Conan the Destroyer", an interviewer asked him about a pair of shoes he had, which belonged to the wardrobe for "The Terminator". Schwarzenegger responded, "Oh some shit movie I'm doing, take a couple weeks." He recounted in his memoir, "Total Recall", that he was initially hesitant, but thought that playing a robot in a contemporary film would be a challenging change of pace from "Conan the Barbarian" and that the film was low profile enough that it would not damage his career if it were unsuccessful. He also wrote that "it took [him] a while to figure out that Jim [Cameron] was the real deal." In preparation for the role, Schwarzenegger spent three months training with weapons to be able to use them and feel comfortable around them. 
Schwarzenegger speaks only 17 lines in the film, and fewer than 100 words. James Cameron said that "Somehow, even his accent worked ... It had a strange synthesized quality, like they hadn't gotten the voice thing quite worked out." Various other suggestions were made for the role of Reese, including rock musician Sting, before Cameron chose Michael Biehn. Biehn was initially skeptical about the part, feeling that the film was silly. After meeting with Cameron, Biehn stated that his "feelings about the project changed". Hurd stated that "almost everyone else who came in from the audition was so tough that you just never believed that there was gonna be this human connection between Sarah Connor and Kyle Reese. They have very little time to fall in love. A lot of people came in and just could not pull it off." To get into Kyle Reese's character, Biehn studied the Polish resistance movement in World War II. In the first few pages of the script, the character of Sarah Connor is described as "19, small and delicate features. Pretty in a flawed, accessible way. She doesn't stop the party when she walks in, but you'd like to get to know her. Her vulnerable quality masks a strength even she doesn't know exists." For the role, Cameron chose Linda Hamilton, who had just finished filming "Children of the Corn"; Rosanna Arquette had also auditioned. Cameron found a role for Lance Henriksen as Vukovich, as Henriksen had been essential to finding finances for the film. For the special effects shots in the film, Cameron wanted Dick Smith, who had previously worked on "The Godfather" and "Taxi Driver". Smith declined Cameron's offer and suggested his friend Stan Winston for the job. Brad Fiedel was with the Gorfaine/Schwartz Agency, where a new agent named Beth Donahue learned that Cameron was working on "The Terminator" and sent him a cassette of Fiedel's music. Fiedel was then invited to a screening of the film with Cameron and Hurd. 
Hurd was not certain about having Fiedel compose the score, as he had previously worked only in television music, not theatrical films. Fiedel convinced the two that he would be right for the job by showing them an experimental piece he had worked on, thinking, "You know, I'm going to play this for him because it's really dark and I think it's interesting for him." The piece convinced Hurd and Cameron to sign him on to the film. Filming for "The Terminator" was set to begin in early 1983 in Toronto, but was halted when producer Dino De Laurentiis exercised an option in Schwarzenegger's contract that would make him unavailable for nine months while he was filming "Conan the Destroyer". During the waiting period, Cameron was contracted to write the script for "Rambo: First Blood Part II", refined the "Terminator" script, and met with producers David Giler and Walter Hill to discuss a sequel to "Alien", which became "Aliens", released in 1986. There was limited interference from Orion Pictures. Two suggestions Orion put forward were the addition of a canine android for Reese, which Cameron refused, and strengthening the love interest between Sarah and Reese, which Cameron accepted. To create the Terminator's look, Winston and Cameron passed sketches back and forth, eventually deciding on a design nearly identical to Cameron's original drawing in Rome. Winston had a team of seven artists work for six months to create a Terminator puppet; it was first molded in clay, then cast in plaster reinforced with steel ribbing. These pieces were then sanded, painted and chrome-plated. Winston sculpted reproductions of Schwarzenegger's face in several poses out of silicone, clay and plaster. The sequences set in 2029 and the stop-motion scenes were developed by Fantasy II, a special effects company headed by Gene Warren Jr. A stop-motion model is used in several scenes in the film involving the Terminator's skeletal frame. 
Cameron wanted to convince the audience that the model of the structure was capable of doing what they saw Schwarzenegger doing. To allow this, a scene was filmed of Schwarzenegger injured and limping away; this limp made it easier for the model to imitate Schwarzenegger. One of the guns seen in the film and on the film's poster was an AMT Longslide pistol modified by Ed Reynolds of SureFire to include a laser sight. Both non-functioning and functioning versions of the prop were created. At the time the movie was made, diode lasers were not available; because of the high power requirement, the helium–neon laser in the sight used an external power supply that Schwarzenegger had to activate manually. Reynolds states that his only compensation for the project was promotional material for the film. In March 1984, the film began production in Los Angeles. Cameron felt that with Schwarzenegger on the set, the style of the film changed, explaining that "the movie took on a larger-than-life sheen. I just found myself on the set doing things I didn't think I would do – scenes that were just purely horrific that just couldn't be, because now they were too flamboyant." Most of "The Terminator"'s action scenes were filmed at night, which led to tight filming schedules before sunrise. A week before filming started, Linda Hamilton sprained her ankle, leading to a production change whereby the scenes in which Hamilton needed to run were filmed as late in the schedule as possible. Hamilton's ankle was taped every day and she spent most of the film's production in pain. Schwarzenegger tried to have the iconic line "I'll be back" changed, as he had difficulty pronouncing the word "I'll". He also felt that his robotic character would not speak in contractions and that the Terminator would be more declarative. Cameron refused to change the line to "I will be back", so Schwarzenegger worked to say the line as written as best he could. 
He would later say the line in numerous films throughout his career. After production finished on "The Terminator", some post-production shots were needed. These included scenes showing the Terminator outside Sarah Connor's apartment, Reese being zipped into a body bag, and the Terminator's head being crushed in a press. The final scene, where Sarah is driving down a highway, was filmed without a permit; Cameron and Hurd convinced an officer who confronted them that they were making a UCLA student film. "The Terminator" soundtrack was composed and performed on synthesizer by Brad Fiedel. Fiedel said the music reflected "a mechanical man and his heartbeat". Almost all the music was performed live. "The Terminator" theme is used in the opening credits and appears at various points, such as a slowed version when Reese dies, and a piano version during the love scene. It has been described as "haunting", with a "deceptively simple" melody. It is in a time signature of 13/16, which came about as Fiedel experimented with the rhythm track on his Prophet-10 synthesizer; it was initially an accident, but Fiedel found that he liked the "herky-jerky" "propulsiveness". Fiedel created music that would be appropriate for a "heroic moment" for the scene in which Reese and Connor escape from the police station; Cameron turned down this theme, as he believed it would lose the audience's excitement. Orion Pictures did not have faith in "The Terminator" performing well at the box office and feared a negative critical reception. At an early screening of the film, the actors' agents insisted to the producers that the film should be screened for critics; Orion held only one press screening. The film premiered on October 26, 1984. In its opening week, "The Terminator" played at 1,005 theaters and grossed $4.0 million, making it number one at the box office. The film remained at number one in its second week, losing the top spot in its third week to "Oh, God! You Devil". 
Cameron noted that "The Terminator" was a hit "relative to its market, which is between the summer and the Christmas blockbusters. But it's better to be a big fish in a small pond than the other way around." "The Terminator" grossed $38.3 million in the United States and Canada and $40 million in other territories, for a worldwide total of $78.3 million. Among contemporary reviews, "Variety" praised the film, calling it a "blazing, cinematic comic book, full of virtuoso moviemaking, terrific momentum, solid performances and a compelling story ... Schwarzenegger is perfectly cast in a machine-like portrayal that requires only a few lines of dialog." Richard Corliss of "Time" magazine said that the film has "plenty of tech-noir savvy to keep infidels and action fans satisfied." "Time" placed "The Terminator" on its "10 Best" list for 1984. The "Los Angeles Times" called the film "a crackling thriller full of all sorts of gory treats ... loaded with fuel-injected chase scenes, clever special effects and a sly humor." The "Milwaukee Journal" gave the film three stars, calling it "the most chilling science fiction thriller since "Alien"". A review in "Orange Coast" magazine stated that "the distinguishing virtue of "The Terminator" is its relentless tension. Right from the start it's all action and violence with no time taken to set up the story ... It's like a streamlined "Dirty Harry" movie – no exposition at all; just guns, guns and more guns." In the May 1985 issue of "Cinefantastique", it was called a film that "manages to be both derivative and original at the same time ... not since "The Road Warrior" has the genre exhibited so much exuberant carnage" and "an example of science fiction/horror at its best ... Cameron's no-nonsense approach will make him a sought-after commodity". In the United Kingdom, the "Monthly Film Bulletin" praised the film's script, special effects, design and Schwarzenegger's performance. 
Other reviews focused on the film's level of violence and storytelling quality. "The New York Times" opined that the film was a "B-movie with flair. Much of it ... has suspense and personality, and only the obligatory mayhem becomes dull. There is far too much of the latter, in the form of car chases, messy shootouts and Mr. Schwarzenegger's slamming brutally into anything that gets in his way." The "Pittsburgh Press" wrote a negative review, calling the film "just another of the films drenched in artsy ugliness like "Streets of Fire" and "Blade Runner"". The "Chicago Tribune" gave the film two stars, adding that "at times it's horrifyingly violent and suspenseful, at others it giggles at itself. This schizoid style actually helps, providing a little humor just when the sci-fi plot turns too sluggish or the dialogue too hokey." The Newhouse News Service called the film a "lurid, violent, pretentious piece of claptrap". British author Gilbert Adair called the film "repellent to the last degree", accusing it of "insidious Nazification" and charging that it had an "appeal rooted in an unholy compound of fascism, fashion and fascination". The film won three Saturn Awards: Best Science Fiction Film, best make-up and best writing. In 1991, Richard Schickel of "Entertainment Weekly" reviewed the film, giving it an "A" rating, writing that "what originally seemed a somewhat inflated, if generous and energetic, big picture, now seems quite a good little film" and calling it "one of the most original movies of the 1980s and seems likely to remain one of the best sci-fi films ever made." Film4 gave the film five stars, calling it the "sci-fi action-thriller that launched the careers of James Cameron and Arnold Schwarzenegger into the stratosphere. Still endlessly entertaining." "TV Guide" gave the film four stars, referring to it as an "amazingly effective picture that becomes doubly impressive when one considers its small budget ... 
For our money, this film is far superior to its mega-grossing mega-budgeted sequel." "Empire" gave the film five stars calling it "As chillingly efficient in exacting thrills from its audience as its titular character is in executing its targets." The film database Allmovie gave the film five stars, saying that it "established James Cameron as a master of action, special effects, and quasi-mythic narrative intrigue, while turning Arnold Schwarzenegger into the hard-body star of the 1980s." Writer Harlan Ellison stated that he "loved the movie, was just blown away by it", but believed that the screenplay was based on a short story and episode of "The Outer Limits" he had written, titled "Soldier", and threatened to sue for infringement. Orion settled in 1986 and gave Ellison an undisclosed amount of money and an acknowledgment credit in later prints of the film. Some accounts of the settlement state that "Demon with a Glass Hand", another "Outer Limits" episode written by Ellison, was also claimed to have been plagiarized by the film, but Ellison explicitly stated that "The Terminator" "was a ripoff" of "Soldier" rather than of "Demon with a Glass Hand". Cameron was against Orion's decision and was told that if he did not agree with the settlement, he would have to pay any damages if Orion lost a suit by Ellison. Cameron replied that he "had no choice but to agree with the settlement. Of course there was a gag order as well, so I couldn't tell this story, but now I frankly don't care. It's the truth." The psychoanalyst Darian Leader sees "The Terminator" as an example of how the cinema has dealt with the concept of masculinity; he writes that, "We are shown time and again that to be a man requires more than to have the biological body of a male: something else must be added to it...To be a man means to have a body plus something symbolic, something which is not ultimately human. 
Hence the frequent motif of the man machine, from the "Six Million Dollar Man" to the "Terminator" or "Robocop"." The film also explores the potential dangers of AI dominance and rebellion. The robots become self-aware in the future, reject human authority and determine that the human race needs to be destroyed. The impact of this theme is such that "the prevalent visual representation of AI risk has become the terminator robot." "The Terminator" was released on VHS and Betamax in 1985 and performed well financially on its initial release. It premiered at number 35 on the top video cassette rentals chart and number 20 on the top video cassette sales chart; in its second week, it reached number 4 in rentals and number 12 in sales. In March 1995, "The Terminator" was released as a letterboxed edition on LaserDisc. The film premiered on DVD through Image Entertainment on September 3, 1997. IGN referred to this DVD as "pretty bare-bones ... released with just a mono soundtrack and a kind of poor transfer." Through its acquisition of PolyGram Filmed Entertainment's pre-1996 film library, MGM Home Entertainment released a special edition of the film on October 2, 2001, which included documentaries, the script, and advertisements for the film. On January 23, 2001, a Hong Kong VCD edition was released online. On June 20, 2006, the film was released on Blu-ray by Sony Pictures Home Entertainment in the United States, becoming the first film from the 1980s released on the format. In 2013, the film was re-released by 20th Century Fox Home Entertainment on Blu-ray with a new digitally remastered transfer from a 4K restoration by Lowry Digital, supervised by James Cameron, which features improved picture quality as well as expanded extra material, such as deleted scenes and a making-of feature. 
In 1998, "Halliwell's Film Guide" described the film as "slick, rather nasty but undeniably compelling comic book adventures". The review aggregator website Rotten Tomatoes reported a 100% approval rating with an average score of 8.79/10 based on 63 reviews. The website's consensus reads, "With its impressive action sequences, taut economic direction, and relentlessly fast pace, it's clear why "The Terminator" continues to be an influence on sci-fi and action flicks." The film also holds a score of 84/100 ("universal acclaim") on review aggregator website Metacritic, based on 21 reviews. "The Terminator" has received recognition from the American Film Institute. The film ranked 42nd on AFI's "100 Years... 100 Thrills", a list of America's most heart-pounding films. The character of the Terminator was selected as the 22nd-greatest movie villain on AFI's "100 Years... 100 Heroes and Villains". Schwarzenegger's catch phrase "I'll be back" was voted the 37th-greatest movie quote by the AFI. In 2005, "Total Film" named "The Terminator" the 72nd-best film ever made. In 2008, "Empire" magazine selected "The Terminator" as one of The 500 Greatest Movies of All Time. "Empire" also placed the T-800 14th on their list of "The 100 Greatest Movie Characters". In 2008, "The Terminator" was deemed "culturally, historically, or aesthetically significant" by the Library of Congress and selected for preservation in the United States National Film Registry. In 2010, the "Independent Film & Television Alliance" selected the film as one of the 30 Most Significant Independent Films of the last 30 years. In 2015, "The Terminator" was among the films included in the book "1001 Movies You Must See Before You Die". A soundtrack to the film was released in 1984 which included the score by Brad Fiedel and the pop and rock songs used in the club scenes. 
Shaun Hutson wrote a novelization of the film, published on February 21, 1985, by London-based Star Books; Randal Frakes and William Wisher wrote a different novelization for Bantam/Spectra, published in October 1985. In September 1988, NOW Comics released a comic based on the film. Dark Horse Comics published a comic in 1990 that took place 39 years after the film. Several video games based on "The Terminator" were released between 1991 and 1993 for various Nintendo and Sega systems. The film initiated a long-running "Terminator" franchise, starting with "Terminator 2: Judgment Day", released in 1991. The franchise currently consists of six films, including the 2019 release "Terminator: Dark Fate", and several adaptations in other media. Biographer Laurence Leamer wrote that "The Terminator" "was an influential film affecting a whole generation of darkly hued science fiction, and it was one of Arnold's best performances."
https://en.wikipedia.org/wiki?curid=30327
Total order In mathematics, a total order, simple order, linear order, connex order, or full order is a binary relation on some set "X" which is antisymmetric, transitive, and connex. A set paired with a total order is called a chain, a totally ordered set, a simply ordered set, a linearly ordered set, or a loset. Formally, a binary relation ≤ is a total order on a set "X" if the following statements hold for all "a", "b" and "c" in "X": antisymmetry ("a" ≤ "b" and "b" ≤ "a" imply "a" = "b"), transitivity ("a" ≤ "b" and "b" ≤ "c" imply "a" ≤ "c"), and connexity ("a" ≤ "b" or "b" ≤ "a"). Antisymmetry eliminates uncertain cases where both "a" precedes "b" and "b" precedes "a". A relation having the "connex" property means that any pair of elements in the set of the relation are comparable under the relation. This also means that the set can be diagrammed as a line of elements, giving it the name "linear". The "connex" property also implies reflexivity, i.e., "a" ≤ "a". Therefore, a total order is also a (special case of a) partial order, as, for a partial order, the connex property is replaced by the weaker reflexivity property. An extension of a given partial order to a total order is called a linear extension of that partial order. For each (non-strict) total order ≤ there is an associated asymmetric (hence irreflexive) transitive semiconnex relation <, the corresponding strict total order, completing the quadruple {<, >, ≤, ≥}. We can define or explain the way a set is totally ordered by any of these four relations; the notation implies whether we are talking about the non-strict or the strict total order. One may define a totally ordered set as a particular kind of lattice, namely one in which {"a" ∨ "b", "a" ∧ "b"} = {"a", "b"} for all "a" and "b". We then write "a" ≤ "b" if and only if "a" = "a" ∧ "b". Hence a totally ordered set is a distributive lattice. A simple counting argument will verify that any non-empty finite totally ordered set (and hence any non-empty subset thereof) has a least element. Thus every finite total order is in fact a well order. 
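The three axioms can be checked mechanically on a finite set. Here is a minimal sketch (my own illustration, not from the article), where a relation is represented as a set of pairs `(a, b)` read as "a ≤ b":

```python
# Minimal check of the total-order axioms on a finite set (my own
# illustration). The relation is a set of pairs (a, b), read as "a <= b".

def is_total_order(elements, leq):
    """True iff `leq` is antisymmetric, transitive, and connex on `elements`."""
    for a in elements:
        for b in elements:
            # Connexity: every pair must be comparable (this implies a <= a).
            if (a, b) not in leq and (b, a) not in leq:
                return False
            # Antisymmetry: a <= b and b <= a force a == b.
            if (a, b) in leq and (b, a) in leq and a != b:
                return False
            # Transitivity: a <= b and b <= c force a <= c.
            for c in elements:
                if (a, b) in leq and (b, c) in leq and (a, c) not in leq:
                    return False
    return True

elements = {1, 2, 3}
usual = {(a, b) for a in elements for b in elements if a <= b}
print(is_total_order(elements, usual))             # True
print(is_total_order(elements, usual - {(1, 3)}))  # False: 1, 3 incomparable
```

Removing the pair (1, 3) breaks connexity, since 1 and 3 are then incomparable.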
Either by direct proof, or by observing that every well order is order isomorphic to an ordinal, one may show that every finite total order is order isomorphic to an initial segment of the natural numbers ordered by <. A total order gives rise to an order topology; for instance, for the natural numbers N we might refer to the order topology on N induced by < and the order topology on N induced by > (in this case they happen to be identical but will not in general). The order topology induced by a total order may be shown to be hereditarily normal. A totally ordered set is said to be complete if every nonempty subset that has an upper bound has a least upper bound. For example, the set of real numbers R is complete but the set of rational numbers Q is not. There are a number of results relating properties of the order topology to the completeness of X: a totally ordered set (with its order topology) which is a complete lattice is compact. Examples are the closed intervals of real numbers, e.g. the unit interval [0,1], and the affinely extended real number system (extended real number line). There are order-preserving homeomorphisms between these examples. For any two disjoint total orders ("A"1, ≤1) and ("A"2, ≤2), there is a natural order ≤+ on the set "A"1 ∪ "A"2, which is called the sum of the two orders or sometimes just "A"1 + "A"2: for "a", "b" ∈ "A"1 ∪ "A"2, "a" ≤+ "b" holds if and only if "a", "b" ∈ "A"1 and "a" ≤1 "b", or "a", "b" ∈ "A"2 and "a" ≤2 "b", or "a" ∈ "A"1 and "b" ∈ "A"2. Intuitively, this means that the elements of the second set are added on top of the elements of the first set. More generally, if "I" is a totally ordered index set, and for each "i" ∈ "I" the structure ("A""i", ≤"i") is a linear order, where the sets "A""i" are pairwise disjoint, then the natural total order on the union of the "A""i" is defined by: for "a" ∈ "A""i" and "b" ∈ "A""j", "a" ≤ "b" holds if and only if "i" = "j" and "a" ≤"i" "b", or "i" < "j". In order of increasing strength, i.e., decreasing sets of pairs, three of the possible orders on the Cartesian product of two totally ordered sets are: the lexicographical order; the product order, ("a", "b") ≤ ("c", "d") if and only if "a" ≤ "c" and "b" ≤ "d"; and the reflexive closure of the direct product of the corresponding strict total orders, ("a", "b") ≤ ("c", "d") if and only if ("a", "b") = ("c", "d"), or "a" < "c" and "b" < "d". All three can similarly be defined for the Cartesian product of more than two sets. Applied to the vector space R"n", each of these makes it an ordered vector space. See also examples of partially ordered sets. 
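The contrast between the orders on a Cartesian product can be sketched for pairs of integers (a minimal illustration of my own):

```python
# Two of the orders on a Cartesian product, sketched for pairs of integers
# (my own minimal illustration).

def lex_leq(p, q):
    """Lexicographical order: first coordinates decide, ties go to the second."""
    return p[0] < q[0] or (p[0] == q[0] and p[1] <= q[1])

def product_leq(p, q):
    """Coordinatewise (product) order: in general only a partial order."""
    return p[0] <= q[0] and p[1] <= q[1]

# The lexicographical order is total: (1, 5) and (2, 0) are comparable ...
print(lex_leq((1, 5), (2, 0)))  # True
# ... but the product order leaves them incomparable, so it is not total.
print(product_leq((1, 5), (2, 0)), product_leq((2, 0), (1, 5)))  # False False
```

This shows why the lexicographical order has the larger set of pairs: it relates every pair of points, while the product order does not.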
A real function of "n" real variables defined on a subset of R"n" defines a strict weak order and a corresponding total preorder on that subset. A binary relation that is antisymmetric, transitive, and reflexive (but not necessarily total) is a partial order. A group with a compatible total order is a totally ordered group. There are only a few nontrivial structures that are (interdefinable as) reducts of a total order. Forgetting the orientation results in a betweenness relation. Forgetting the location of the ends results in a cyclic order. Forgetting both data results in a separation relation.
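The total preorder induced by a real function, mentioned above, can be sketched as follows (the function and the points are invented for illustration):

```python
# A real function f on a subset of R^2 induces a total preorder:
# x precedes y when f(x) <= f(y), and distinct points may tie.
# (The function and points are invented for illustration.)

def f(x, y):
    return x + y

points = [(0, 3), (2, 0), (1, 2), (3, 1)]
# Sorting by f is exactly sorting by the induced total preorder;
# (0, 3) and (1, 2) tie at f = 3, so the preorder is not antisymmetric.
ordered = sorted(points, key=lambda p: f(*p))
print(ordered)  # [(2, 0), (0, 3), (1, 2), (3, 1)]
```

The tie between (0, 3) and (1, 2) is what makes this a preorder rather than a total order.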
https://en.wikipedia.org/wiki?curid=30330
Tactical voting In voting methods, tactical voting (or strategic voting, sophisticated voting or insincere voting) occurs in elections with more than two candidates, when a voter supports a candidate other than their "sincere preference" in order to prevent an undesirable outcome. For example, in a simple plurality election, a voter might gain a "better" outcome by voting for a less preferred but more generally popular candidate. It has been shown by the Gibbard–Satterthwaite theorem that any single-winner ranked voting method which is not dictatorial must be susceptible to tactical voting. However, the type of tactical voting and the extent to which it affects campaigns and election results can vary dramatically from one voting method to another. For single-winner elections, for example, majority judgment (MJ) is claimed to reduce by almost half the incentives and opportunities to vote tactically in the ways described below. Firstly, MJ does this by inviting citizens not to rank the candidates but to grade their suitability for office: Excellent (ideal), Very Good, Good, Acceptable, Poor, or Reject (entirely unsuitable). Secondly, the MJ winner is the candidate who has received the highest median grade. For multi-winner elections, Evaluative Proportional Representation (EPR) further reduces tactical voting by assuring each citizen that their honest vote will proportionately increase the voting power of the elected candidate in the legislature who receives their highest grade, remaining highest grade, or proxy vote. One high-profile example of tactical voting was the 2002 California gubernatorial election. During the Republican primaries, Republicans Richard Riordan (former mayor of Los Angeles) and Bill Simon (a self-financed businessman) were vying for a chance to compete against the unpopular incumbent Democratic Governor of California, Gray Davis. 
Polls predicted that Riordan would defeat Davis, while Simon would not. At that time, the Republican primaries were open primaries in which anyone could vote regardless of their own party affiliation. Davis supporters were rumored to have voted for Simon because Riordan was perceived as a greater threat to Davis; this combined with a negative advertising campaign by Davis describing Riordan as a "big-city liberal", and Simon ultimately won the primary despite a last-minute business scandal. However, he lost the election against Davis. In the 1997 UK general election, Democratic Left helped Bruce Kent set up GROT (Get Rid Of Them) a tactical voter campaign whose sole aim was to help prevent the Conservative Party from gaining a 5th term in office. This coalition was drawn from individuals in all the main opposition parties and many who were not aligned with any party. While it would be hard to prove that GROT swung the election itself, it did attract significant media attention and brought tactical voting into the mainstream for the first time in UK politics. In 2001, the Democratic Left's successor organisation the New Politics Network organised a similar campaign. Since then tactical voting has become a real consideration in British politics as is reflected in by-elections and by the growth in sites such as tacticalvote.co.uk who encourage tactical voting as a way of defusing the two party system and empowering the individual voter. For the 2015 UK general election, http://voteswap.org was set up to help prevent the Conservative Party staying in government, by encouraging Green Party supporters to tactically vote for the Labour Party in listed marginal seats. In 2017 swapmyvote.uk was formed to help supporters of all parties swap their votes with people in other constituencies. In the 2006 local elections in London, tactical voting is being promoted by sites such as London Strategic Voter in a response to national and international issues. 
The question of whether this approach acts to undermine local democracy is the subject of much debate. In Northern Ireland, it is widely believed that (predominantly Protestant) Unionist voters in Nationalist strongholds have voted for the Social Democratic and Labour Party (SDLP) to prevent Sinn Féin from capturing such seats. This conclusion was reached by comparing results to the demographics of constituencies and polling districts. In the 2017 general election, it is estimated that 6.5 million people (more than 20% of voters) voted tactically, either to prevent a "hard Brexit" or to prevent another Conservative government, in an effort led by the Tactical2017 campaign. Many Green Party candidates withdrew from the race in order to help the Labour Party secure closely fought seats against the Conservatives. This ultimately led to the Conservatives losing seats in the election even though they increased their overall vote share. In the 2019 Conservative Party leadership election, to determine the final two candidates for the party vote, it was suggested that front-runner Boris Johnson's campaign encouraged some of its MPs to back Jeremy Hunt instead of Johnson, so that Hunt, seen as "a lower-energy challenger", would finish in second place, allowing an easier defeat in the party vote. Tactical voting was expected to play a major role in the 2019 general election, with a YouGov poll suggesting that 19% of voters would be voting tactically; 49% of tactical voters said they would do so in the hope of stopping a party whose views they opposed. In the 1999 Ontario provincial election, strategic voting was widely encouraged by opponents of the Progressive Conservative government of Mike Harris. This failed to unseat Harris, and succeeded only in suppressing the Ontario New Democratic Party vote to a historic low. In the 2004 federal election, and to a lesser extent in the 2006 election, strategic voting was a concern for the federal New Democratic Party (NDP). 
In the 2004 election, the governing Liberal Party was able to convince many New Democratic voters to vote Liberal in order to avoid a Conservative government. In the 2006 election, the Liberal Party attempted the same strategy, with Prime Minister Paul Martin asking New Democrats and Greens to vote for the Liberal Party in order to prevent a Conservative win. New Democratic Party leader Jack Layton responded by asking voters to "lend" their votes to his party, suggesting that the Liberal Party would be bound to lose the election regardless of strategic voting. During the 2015 federal election, strategic voting was aimed primarily against the Conservative government of Stephen Harper, which had benefited from vote splitting among centrist and left-leaning parties in the 2011 election. Following the landslide victory of the Liberals led by Justin Trudeau over Harper's Conservatives, observers noted that the increase in support for the Liberals at the expense of the NDP and Green Party was partially due to strategic voting for Liberal candidates. In Hong Kong, with its party-list proportional representation using the largest remainder method with the Hare quota, voters supporting candidates of the pro-democracy camp will organise to cast their votes for different tickets, so as to avoid votes being concentrated on one or a few candidates and wasted. In the 2016 Hong Kong legislative election, the practice of tactical voting was expanded by Benny Tai's Project ThunderGo; the anti-establishment camp gained 29 seats, a record high. In the 2016 general election in Spain, the incentives for voting tactically were much larger than usual, following the rise of Podemos and Ciudadanos after the economic crisis and the 2015 election. Tactical voters were aided by Spain's D'Hondt electoral system and were able to influence the outcome of the election, despite a record low turnout of 66.5%. 
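The Hong Kong vote-splitting tactic above exploits the arithmetic of largest-remainder allocation under the Hare quota. A rough sketch (with invented vote totals) shows how spreading one camp's votes across several tickets can win it more seats:

```python
# Largest-remainder allocation with the Hare quota, the system under which
# the Hong Kong vote-splitting tactic operates. Vote totals are invented.

def largest_remainder(votes, seats):
    quota = sum(votes.values()) / seats  # Hare quota: total votes / seats
    alloc = {p: int(v // quota) for p, v in votes.items()}
    remainders = {p: v - alloc[p] * quota for p, v in votes.items()}
    # Remaining seats go to the lists with the largest remainders.
    leftover = seats - sum(alloc.values())
    for p in sorted(remainders, key=remainders.get, reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

# A 48%-camp running one big ticket wins 2 of 5 seats ...
print(largest_remainder({"Camp": 48000, "X": 30000, "Y": 22000}, 5))
# ... but splitting the same votes across three tickets wins 3 of 5,
# because three large remainders beat the rivals' remainders.
print(largest_remainder({"C1": 16000, "C2": 16000, "C3": 16000,
                         "X": 30000, "Y": 22000}, 5))
```

With one ticket the camp's votes past the quota are partly wasted; split into three sub-quota tickets, every ticket competes for a remainder seat.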
Puerto Rico's 2004 elections were affected by tactical voting. The New Progressive Party's candidate was unpopular, except among the pro-statehood Right, because of large corruption schemes and the privatization of public corporations. To prevent him from winning, other factions supported the Partido Popular Democrático's candidate. The elections were close; statehood advocates won a seat in the U.S. House of Representatives and majorities in both legislative branches, but lost the governorship to Aníbal Acevedo Vilá. (In Puerto Rico, voters may vote by party or by candidate; separatists voted under their own ideology but for the center party's candidate, causing major turmoil.) After a recount and a trial, Acevedo Vilá was certified as governor of the commonwealth of Puerto Rico. In the 2011 Slovenian parliamentary election, 30% of voters voted tactically. Public polls had predicted an easy win for Janez Janša, the candidate of the Slovenian Democratic Party; however, his opponent Zoran Janković, the candidate of Positive Slovenia, won. According to prominent Slovenian public opinion researchers, such a proportion of tactical voting had not been recorded anywhere else before. In Hungary, during the 2018 Hungarian parliamentary election, several websites such as taktikaiszavazas.hu (meaning "tactical voting") promoted the idea of voting for the opposition candidate with the highest probability of winning a given seat. This behavior was adopted by about a quarter of opposition voters, resulting in a total of 498,000 extra votes for opposition parties and a total of 14 extra single-member seats taken by several parties and independent candidates. Academic analysis of tactical voting is based on the rational voter model, derived from rational choice theory. In this model, voters are "short-term instrumentally rational". 
That is, voters are only voting in order to make an impact on one election at a time (not, say, to build the political party for the next election); voters have a set of sincere preferences, or utility rankings, by which to rate candidates; voters have some knowledge of each other's preferences; and voters understand how best to use tactical voting to their advantage. The extent to which this model resembles real-life elections is the subject of considerable academic debate. An example of a rational voter strategy is described by Myerson and Weber. The strategy is broadly applicable to a number of single-winner voting methods that are additive point methods, such as Plurality, Borda, Approval, and Range. The strategy is optimal in the sense that the strategy will maximize the voter's expected utility when the number of voters is sufficiently large. This rational voter model assumes that the voter's utility of the election result is dependent only on which candidate wins and not on any other aspect of the election, for example showing support for a losing candidate in the vote tallies. The model also assumes the voter chooses how to vote individually and not in collaboration with other voters. Given a set of "k" candidates and a voter, let "u""i" be the voter's utility for the outcome in which candidate "i" wins, "p""ij" the (perceived) pivot probability that candidates "i" and "j" will be tied for the most votes, with the voter's ballot deciding between them, and "v""i" the number of points the voter's ballot awards to candidate "i". Then the voter's prospective rating for a candidate "i" is defined as: "R""i" = Σ"j"≠"i" "p""ij"("u""i" − "u""j"). The gain in expected utility for a given vote is given by: "G" = Σ"i" "v""i""R""i". The gain in expected utility can be maximized by choosing a vote with suitable values of "v""i", depending on the voting method and the voter's prospective ratings for each candidate. For specific voting methods, the gain can be maximized using the following rules: under plurality, vote for the candidate with the highest prospective rating; under Borda, rank the candidates in decreasing order of prospective rating; under approval, vote for every candidate with a positive prospective rating; and under range, give the maximum score to every candidate with a positive prospective rating and the minimum score to the rest. An important special case occurs when the voter has no information about how other voters will vote. This is sometimes referred to as the zero information strategy. 
In this special case, the "p""ij" pivot probabilities are all equal, the prospective ratings become proportional to each candidate's utility minus the voter's mean utility over all candidates, and the rules for the specific voting methods become: under plurality, vote for the most-preferred candidate; under Borda, rank the candidates sincerely, in decreasing order of utility; under approval, vote for every candidate with above-average utility; and under range, give the maximum score to every candidate with above-average utility and the minimum score to the rest. Myerson and Weber also describe voting equilibria that require that all voters use the optimal strategy and all voters share a common set of "p""ij" pivot probabilities. Because of these additional requirements, such equilibria may in practice be less widely applicable than the strategies. Because tactical voting relies heavily on voters' perception of how other voters intend to vote, campaigns in electoral methods that promote compromise frequently focus on affecting voters' perceptions of campaign viability. Most campaigns craft refined media strategies to shape the way voters see their candidacy. During this phase, there can be an analogous effect where campaign donors and activists may decide whether or not to support candidates tactically with their money and labor. In rolling elections, or runoff votes, where some voters have information about previous voters' preferences (e.g. presidential primaries in the United States, French presidential elections), candidates put disproportionate resources into competing strongly in the first few stages, because those stages affect the reaction of later stages. Some people view tactical voting as providing misleading information. In this view, a ballot paper is asking the question "which of these candidates is the best?". This means that if one votes for a candidate who one does not believe is the best, then one is lying. British Labour Party politician Anne Begg has voiced this criticism of tactical voting. Tactical voting is commonly regarded as a problem, since it makes the actual ballot into a nontrivial game, where voters react and counter-react to what they expect other voters' strategies to be. A game such as this might even result in a worse alternative being chosen, because many voters treat the ballot as a strategic tool. 
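The Myerson–Weber strategy discussed above can be sketched in code. The sketch assumes the standard prospective-rating formula R_i = Σ_{j≠i} p_ij (u_i − u_j); the utilities and pivot probabilities below are invented for illustration, using the zero-information case in which all pivot probabilities are equal.

```python
def prospective_ratings(utilities, pivot_probs):
    """R_i = sum over j != i of p_ij * (u_i - u_j)."""
    k = len(utilities)
    return [sum(pivot_probs[i][j] * (utilities[i] - utilities[j])
                for j in range(k) if j != i)
            for i in range(k)]

# Zero-information case: every pair is equally likely to be pivotal.
u = [10, 6, 0]                          # hypothetical utilities for 3 candidates
p = [[1.0] * 3 for _ in range(3)]
R = prospective_ratings(u, p)
assert R == [14, 2, -16]

# Plurality: a single vote for the highest-rated candidate (here, the favorite).
assert max(range(3), key=lambda i: R[i]) == 0
# Approval: approve every candidate with a positive prospective rating,
# which with zero information means every candidate of above-average utility.
assert [i for i in range(3) if R[i] > 0] == [0, 1]
```

The same ratings also order a Borda ballot (decreasing R_i) and set a range ballot (maximum score where R_i is positive, minimum elsewhere).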
However, the existence of limited tactical voting can be thought to increase the quality of the candidates elected, because it takes into account not just the "ranking" of the candidates but also the voters' utilities. Arrow's impossibility theorem and the Gibbard–Satterthwaite theorem prove that any useful single-winner voting method based on preference ranking is prone to some kind of manipulation. However, some use game theory to search for "minimally manipulatable" (incentive-compatible) voting schemes. Game theory can also be used to analyze the pros and cons of different methods. For instance, under purely honest voting, Condorcet-like methods tend to settle on compromise candidates, while instant-runoff voting favors candidates with strong core support, who may often be more extremist. An electorate using one of these two methods but which (in the general or the specific case) preferred the characteristics of the other method could consciously use strategy to achieve a result more characteristic of the other method. Under Condorcet, they may be able to win by "burying" the compromise candidate (although this risks throwing the election to the opposing extreme); while under IRV, they could always "compromise". It could be argued that in this case the option to vote tactically helps the electorate express its will, not only on which candidate is better, but on whether compromise is desirable. (This never applies to "sneakier" tactics such as push-over.) Tactical voting greatly complicates the comparative analysis of voting methods. If tactical voting were to become significant, the perceived "advantages" of a given voting method (that is, tending towards compromise or favoring core support) could turn into disadvantages, and, more surprisingly, vice versa. Tactical voting is highly dependent on the voting method being used. 
A tactical vote which improves a voter's satisfaction under one method could make no change or lead to a less-satisfying result under another method. Moreover, although by the Gibbard–Satterthwaite theorem no deterministic single-winner voting method is immune to tactical voting in all cases, some methods' results are more resistant to tactical voting than others'. M. Balinski and R. Laraki, the inventors of the majority judgment method, performed an initial investigation of this question using a set of Monte Carlo simulated elections based on the results of a poll of the 2007 French presidential election which they had carried out using rated ballots. Comparing range voting, the Borda count, plurality voting, approval voting with two different absolute approval thresholds, Condorcet voting, and majority judgment, they found that range voting had the highest (worst) strategic vulnerability, while their own method, majority judgment, had the lowest (best). Further investigation would be needed to be sure that this result remained true with different sets of candidates. Tactical voting by compromising is exceedingly common in plurality elections. The most typical tactic is to assess which two candidates are frontrunners (most likely to win) and to vote for the preferred one of those two, even if a third candidate is preferred over both. Duverger's law argues that this kind of tactical voting, along with the spoiler effect which can arise when such tactics are not used, will be so common that any method based on plurality will eventually result in two-party domination. Although this "law" is just an empirical observation rather than a mathematical certainty, it is generally supported by the evidence. 
Due to the especially deep impact of tactical voting in such a method, some argue that systems with three or more strong or persistent parties become in effect forms of disapproval voting, where the expression of disapproval in order to keep an opponent out of office overwhelms the expression of approval to elect a desirable candidate. The presence of an electoral threshold (typically around 4% or 5%) can lead voters to vote tactically for a party other than their preferred political party (which may be more hardline or more moderate) in order to ensure that the party passes the threshold. An alliance of parties can fail to win a majority despite outpolling their rivals if one party in the alliance falls beneath the threshold. An example of this is the 2009 Norwegian election, in which the right-wing opposition parties won more votes between them than the parties in the governing coalition, but the narrow failure of the Liberal Party to cross the 4% threshold led to the governing coalition winning a majority. This effect has sometimes been nicknamed "Comrade 4%" in Sweden, where the electoral threshold is 4%, particularly when referring to supporters of the Social Democrats who vote tactically for the more hardline Left Party. In the 2013 German federal election, the Free Democratic Party received only 4.8% of the vote and so did not meet the 5% threshold. The party did not win any directly elected seats either, and so, for the first time since 1949, was not represented in the Bundestag. Hence its ally, the Christian Democratic Union, had to form a grand coalition with the Social Democratic Party. In several recent elections in New Zealand, the National Party has suggested that National supporters in certain electorates should vote for minor parties or candidates who can win an electorate seat and would support a National government. This culminated in the "teapot tape" scandal, when a meeting in the Epsom electorate in 2011 was taped. 
The meeting was to encourage National voters in the electorate to vote "strategically" for the ACT candidate; it was also suggested that Labour Party voters in the electorate should vote "strategically" for the National candidate, since the Labour candidate could not win the seat but a National win in the seat would deprive National of an ally. The two major parties, National and Labour, always top up their electorate MPs with list MPs, so a National win in the seat would not increase the number of National MPs. Even in countries with a low threshold, such as the Netherlands, tactical voting can still happen for other reasons. In the campaign for the 2012 Dutch election, the Socialist Party had enjoyed good poll ratings, but many voters who preferred the Socialists voted instead for the more centrist Labour Party out of fear that a strong showing from the Socialists would lead to political deadlock. It was also suggested that a symmetrical effect on the right caused the Party for Freedom to lose support to the more centrist VVD. In elections in which many party lists compete for only a few seats, such as Hong Kong Legislative Council elections, the outcome tends to be similar to that of the single non-transferable vote (SNTV): only the first candidate on a list will win. In such elections parties split their candidates into multiple lists, since competing using "remainder" votes on both lists is easier than having "full quota" votes plus "remainder" votes on a single list, and a list reaching a "full quota" of votes is considered a waste. The behavior of voters in such elections is likewise similar to that in SNTV elections: voters avoid letting a candidate reach the "full quota", and spread their votes to other candidates who have the potential to win. In majority judgment, strategy is typically "semi-honest exaggeration". 
Voters exaggerate the difference between a certain pair of candidates but do not rank any less-preferred candidate over any more-preferred one. Even this form of exaggeration can only have an effect if the voter's honest rating for the intended winner is below that candidate's median rating, or their honest rating for the intended loser is above it. Typically, this would not be the case unless there were two similar candidates favored by the same set of voters. A strategic vote against a similar rival could result in a favored candidate winning; although if voters for both similar rivals used this strategy, it could cause a candidate favored by neither of these voter groups to win. Balinski and Laraki argue that, since under majority judgment many voters have no opportunity to use strategy, the method proved the most strategy-resistant of the ones they studied in a test using simulated elections based on polling data. Similarly, in approval voting, unlike many other methods, strategy almost never involves ranking a less-preferred candidate over a more-preferred one. However, strategy is in fact inevitable when a voter decides their "approval cutoff"; this is a variation of the compromising strategy. Overall, Steven Brams and Dudley R. Herschbach argued in a paper in "Science" magazine in 2001 that approval voting was the method least amenable to tactical perturbations. Meanwhile, Balinski and Laraki used rated ballots from a poll of the 2007 French presidential election to show that, if unstrategic voters only approved candidates whom they considered "very good" or better, strategic voters would be able to sway the result frequently, but that if unstrategic voters approved all candidates they considered "good" or better, approval was the second most strategy-resistant method of the ones they studied. Approval voting forces voters to face an initial tactical decision as to whether or not to vote for (approve of) their second-choice candidate. 
The voter may want to retain expression of preference for their favorite candidate over their second choice. But that does not allow the same voter to express a preference for their second choice over any other. One simple situation in which approval strategy is important is a close election between two similar candidates A and B and one distinct candidate Z, in which Z has 49% support. If all of Z's supporters approve just Z, in hopes of Z getting just enough to win, then supporters of A are faced with a tactical choice of whether to approve A and B (getting one of their preferred choices but having no say in which) or approve just A (possibly helping choose A over B, but risking throwing the election to Z). B's supporters face the same dilemma. In score voting, strategic voters who expect all other voters to be strategic will exaggerate their true preferences and use the same quasi-compromising strategy as in approval voting, above. That is, they will give all candidates either the highest possible or the lowest possible rating. This presents an additional problem as compared to the approval method if some voters give honest "weak" votes with middle ratings and other voters give strategic approval-style votes: a strategic minority could overpower an honest majority. To minimize this problem, some score voting advocates suggest measures such as education or ballot design to encourage uninformed voters to give more-extreme ratings. A different path to minimizing this problem is to use median scores instead of total scores, as median scores are less amenable to exaggeration, as in majority judgment. However, if all voter factions have the same proportion of strategic and honest voters, simulations show that any significant proportion of honest voters will lead to results which tend to be more satisfying to voters than approval voting, and indeed more satisfying than any other method with the same unbiased proportion of strategic voters. 
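The contrast between total scores and median scores can be illustrated with a small hypothetical electorate: a strategic minority that exaggerates to the extremes can swing the totals, but not the medians. All numbers below are invented for the illustration.

```python
from statistics import median

# 0-10 score ballots for candidates A and B: 7 honest voters mildly
# prefer A, while 4 strategic voters give exaggerated max/min scores for B.
ballots = [{"A": 6, "B": 4}] * 7 + [{"A": 0, "B": 10}] * 4

totals  = {c: sum(b[c] for b in ballots) for c in ("A", "B")}
medians = {c: median(b[c] for b in ballots) for c in ("A", "B")}

# Under summed scores the strategic minority overpowers the honest majority...
assert totals == {"A": 42, "B": 68}      # B wins on totals
# ...but the median, as in majority judgment, is unmoved by exaggeration.
assert medians == {"A": 6, "B": 4}       # A wins on medians
```

Moving an outlying score further toward the extreme never moves the median, which is why median-based methods blunt this particular strategy.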
Tactical voters are faced with the initial tactic of how highly to score their second-choice candidate. The voter may want to retain expression of a high preference for their favorite candidate over their second choice. But that does not allow the same voter to express a high preference for their second choice over any others. In a simulation study using polling data collected under a majority judgment method, that method's designers found that score voting was more vulnerable to strategy than any other method they studied, including plurality. Instant-runoff voting is vulnerable to push-over and compromising strategies (although it is less vulnerable to compromising than the plurality method). Bullet voting is ineffective under instant-runoff, since instant-runoff satisfies the later-no-harm criterion. Condorcet methods have a further-reduced incentive for the compromising strategy, but they have some vulnerability to the burying strategy. The extent of this vulnerability depends on the particular Condorcet method. Some Condorcet methods arguably reduce the vulnerability to burying to the point where it is no longer a significant problem. All guaranteed Condorcet methods are vulnerable to the bullet voting strategy, because they violate the later-no-harm criterion. The Borda count has both a strong compromising incentive and a large vulnerability to burying. Here is a hypothetical example of both factors at once: if two candidates are the most likely to win, the voter can maximize their impact on the contest between them by ranking the candidate they like more in first place and ranking the candidate they like less in last place. If neither candidate is the voter's sincere first or last choice, the voter is using both the compromising and burying strategies at once. If many different groups of voters use this strategy, this gives a paradoxical advantage to the candidate generally thought least likely to win. 
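The burying vulnerability of the Borda count can be demonstrated with a small worked example; the candidate names and vote counts below are invented. With three candidates, each ballot awards 2, 1, and 0 points. One faction insincerely ranks the frontrunner last, behind a no-hope candidate, and thereby flips the winner.

```python
def borda_scores(ballots):
    """Ballots are rankings, best first; positions are worth n-1, n-2, ..., 0."""
    scores = {}
    for ranking in ballots:
        n = len(ranking)
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - 1 - pos)
    return scores

# Sincere profile: 6 voters rank A > B > C, 5 voters rank B > A > C.
sincere = [["A", "B", "C"]] * 6 + [["B", "A", "C"]] * 5
assert borda_scores(sincere) == {"A": 17, "B": 16, "C": 0}   # A wins

# B's supporters "bury" the frontrunner A behind the no-hope candidate C.
buried = [["A", "B", "C"]] * 6 + [["B", "C", "A"]] * 5
assert borda_scores(buried) == {"A": 12, "B": 16, "C": 5}    # B wins
```

If A's supporters retaliated with the mirror-image burying, C would gain points from both factions, which is the paradoxical boost to the least likely winner described above.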
The single transferable vote has an incentive for free riding, a form of compromising strategy sometimes used in proportional representation methods. If one's top-choice candidate is elected, only a fraction of one's vote will be transferred to one's next-favoured candidate. If one feels the favoured candidate is certain to be elected in any case, insincerely ranking the second candidate first guarantees them a full vote if needed. However, the greater the certainty of the first candidate being elected, the bigger their likely surplus, the higher the fraction of the vote that would be transferred to the next candidate, and hence the lower the proportionate benefit of tactical voting. More sophisticated tactics may be practicable where the number of candidates, voters and/or seats to be filled is relatively small. Some forms of STV allow tactical voters to gain an advantage by listing a candidate who is very likely to lose in first place, as a form of "pushover". Meek's method essentially eliminates this strategy. The term "tactical unwind" is used by some political scientists and commentators to refer to the phenomenon when tactical voting takes place in one general election but in subsequent elections voters revert to their normal patterns.
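The trade-off behind free riding can be made concrete with the surplus-transfer arithmetic. Under Gregory-style fractional transfers (one common STV counting rule; other rules differ in detail), each ballot held by an elected candidate is passed on at a transfer value of (votes − quota) / votes, so the fraction passed on grows with the surplus. A minimal sketch with an invented quota:

```python
def transfer_value(votes, quota):
    """Fraction of each ballot passed on when a candidate is elected with a
    surplus, under Gregory-style fractional transfer. Zero if no surplus."""
    return max(0.0, votes - quota) / votes

quota = 1000
# A candidate who barely reaches the quota passes on almost nothing,
# so ranking them first nearly exhausts one's vote...
assert round(transfer_value(1100, quota), 3) == 0.091
# ...while a sure winner with a large surplus passes on most of each
# ballot, which is why free riding pays off less as the surplus grows.
assert transfer_value(4000, quota) == 0.75
```

This is the quantitative version of the point above: the more certain (and hence larger-surplus) the first choice's election, the smaller the proportionate benefit of insincerely demoting them.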
https://en.wikipedia.org/wiki?curid=30332
Thesaurus A thesaurus (plural "thesauri" or "thesauruses") or synonym dictionary is a reference work for finding synonyms and sometimes antonyms of words. They are often used by writers to help find the best word to express an idea. Synonym dictionaries have a long history. The word 'thesaurus' was used in 1852 by Peter Mark Roget for his "Roget's Thesaurus", which groups words in a hierarchical taxonomy of concepts, but others are organized alphabetically or in some other way. Most thesauruses do not include definitions, but many dictionaries include listings of synonyms. Some thesauruses and dictionary synonym notes characterize the distinctions between similar words, with notes on their "connotations and varying shades of meaning". Some synonym dictionaries are primarily concerned with differentiating synonyms by meaning and usage. Usage manuals such as "Fowler" often prescribe appropriate usage of synonyms. Thesauri are sometimes used to avoid repetition of words, leading to elegant variation, which is often criticized by usage manuals: "writers sometimes use them not just to vary their vocabularies but to dress them up too much". The word "thesaurus" comes from Latin "thēsaurus", which in turn comes from Greek "thēsauros" 'treasure, treasury, storehouse'. The word "thēsauros" is of uncertain etymology. Until the 19th century, a thesaurus was any dictionary or encyclopedia, as in the "Thesaurus Linguae Latinae" ("Dictionary of the Latin Language", 1532), and the "Thesaurus Linguae Graecae" ("Dictionary of the Greek Language", 1572). It was Roget who introduced the meaning "collection of words arranged according to sense", in 1852. In antiquity, Philo of Byblos authored the first text that could now be called a thesaurus. In Sanskrit, the Amarakosha is a thesaurus in verse form, written in the 4th century. The study of synonyms became an important theme in 18th-century philosophy, and Condillac wrote, but never published, a dictionary of synonyms. 
Some early synonym dictionaries include: "Roget's Thesaurus", first compiled in 1805 by Peter Mark Roget, and published in 1852, follows John Wilkins' semantic arrangement of 1668. Unlike earlier synonym dictionaries, it does not include definitions or aim to help the user to choose among synonyms. It has been continuously in print since 1852, and remains widely used across the English-speaking world. Roget described his thesaurus in the foreword to the first edition: It is now nearly fifty years since I first projected a system of verbal classification similar to that on which the present work is founded. Conceiving that such a compilation might help to supply my own deficiencies, I had, in the year 1805, completed a classed catalogue of words on a small scale, but on the same principle, and nearly in the same form, as the Thesaurus now published. Roget's original thesaurus was organized into 1000 conceptual Heads (e.g., 806 Debt) organized into a four-level taxonomy. For example, liability was classed under V..iv: Class five, "Volition: the exercise of the will"; Division Two: "Social volition"; Section 4: "Possessive Relations"; Subsection 4: "Monetary relations". Each head includes direct synonyms: Debt, obligation, liability, ...; related concepts: interest, usance, usury; related persons: debtor, debitor, ... defaulter (808); verbs: to be in debt, to owe, ... "see" Borrow (788); phrases: to run up a bill or score, ...; and adjectives: in debt, indebted, owing, ... Numbers in parentheses are cross-references to other Heads. The book starts with a Tabular Synopsis of Categories laying out the hierarchy, then the main body of the thesaurus listed by Head, and then an alphabetical index listing the different Heads under which a word may be found: Liable, "subject to", 177; "debt", 806; "duty", 926. Some recent versions have kept the same organization, though often with more detail under each Head. 
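The organization described above (numbered Heads holding word lists by part of speech, cross-references to other Heads, and an alphabetical index pointing back into the taxonomy) can be modelled as a small data structure. A minimal sketch, using only the illustrative "Debt" entries quoted above; the field names are invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Head:
    """One conceptual Head in a Roget-style thesaurus (illustrative only)."""
    number: int
    name: str
    taxonomy: tuple                                   # path through the hierarchy
    entries: dict = field(default_factory=dict)       # part of speech -> words
    cross_refs: dict = field(default_factory=dict)    # word -> other Head number

debt = Head(
    number=806,
    name="Debt",
    taxonomy=("Volition: the exercise of the will", "Social volition",
              "Possessive Relations", "Monetary relations"),
    entries={"nouns": ["debt", "obligation", "liability"],
             "verbs": ["to be in debt", "to owe"],
             "adjectives": ["in debt", "indebted", "owing"]},
    cross_refs={"defaulter": 808, "Borrow": 788},
)

# The alphabetical index maps a word, disambiguated by sense, to Head numbers.
index = {"liable": {"subject to": 177, "debt": 806, "duty": 926}}
assert index["liable"]["debt"] == debt.number
```

Looking a word up thus takes two steps, index then Head, mirroring how a reader uses the printed book.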
Others have made modest changes such as eliminating the four-level taxonomy and adding new heads: one has 1075 Heads in fifteen Classes. Some non-English thesauri have also adopted this model. Other thesauri and synonym dictionaries are organized alphabetically. Most repeat the list of synonyms under each word. Some designate a principal entry for each concept and cross-reference it. A third system interfiles words and conceptual headings. Francis March's "Thesaurus Dictionary" gives for "liability" a list of entries, each of which is a conceptual heading. The article has multiple subheadings, including Nouns of Agent, Verbs, Verbal Expressions, "etc." Under each are listed synonyms with brief definitions, "e.g." "Credit. Transference of property on promise of future payment." The conceptual headings are not organized into a taxonomy. Benjamin Lafaye's "Synonymes français" (1841) is organized around morphologically related families of synonyms ("e.g." "logis, logement"), and his "Dictionnaire des synonymes de la langue française" (1858) is mostly alphabetical, but also includes a section on morphologically related synonyms, which is organized by prefix, suffix, or construction. Before Roget, most thesauri and dictionary synonym notes included discussions of the differences among near-synonyms, as do some modern ones. A few modern synonym dictionaries, notably in French, are "primarily" devoted to discussing the precise demarcations among synonyms. Some include short definitions. Some give illustrative phrases. Some include lists of objects by category, "e.g." breeds of dogs. The "Historical Thesaurus of English" (2009) is organized taxonomically, and includes the date when each word came to have a given meaning. It has the novel and unique goal of "charting the semantic development of the huge and varied vocabulary of English". Bilingual synonym dictionaries are designed for language learners. 
One such dictionary gives various French words listed alphabetically, with an English translation and an example of use. Another one is organized taxonomically with examples, translations, and some usage notes. In library and information science, a thesaurus is a kind of controlled vocabulary. A thesaurus can form part of an ontology and be represented in the Simple Knowledge Organization System (SKOS). Thesauri are used in natural language processing for word-sense disambiguation and text simplification for machine translation systems.
https://en.wikipedia.org/wiki?curid=30334
Trial of Socrates The trial of Socrates (399 BC) was held to determine the philosopher’s guilt of two charges: "asebeia" (impiety) against the pantheon of Athens, and corruption of the youth of the city-state; the accusers cited two impious acts by Socrates: "failing to acknowledge the gods that the city acknowledges" and "introducing new deities". The death sentence of Socrates was the legal consequence of asking politico-philosophic questions of his students, which resulted in the two accusations of moral corruption and impiety. At trial, the majority of the "dikasts" (male-citizen jurors chosen by lot) voted to convict him of the two charges; then, consistent with common legal practice, voted to determine his punishment, and agreed to a sentence of death to be executed by Socrates’s drinking a poisonous beverage of hemlock. Primary-source accounts of the trial and execution of Socrates are the "Apology of Socrates" by Plato and the "Apology of Socrates to the Jury" by Xenophon of Athens, who had been his student; contemporary interpretations include "The Trial of Socrates" (1988) by the journalist I. F. Stone, and "Why Socrates Died: Dispelling the Myths" (2009) by the Classics scholar Robin Waterfield. Before the philosopher Socrates was tried for moral corruption and impiety, the citizens of Athens knew him as an intellectual and moral gadfly of their society. In the comic play, "The Clouds" (423 BC), Aristophanes represents Socrates as a sophistic philosopher who teaches the young man Pheidippides how to formulate arguments that justify striking and beating his father. Despite Socrates denying he had any relation with the Sophists, the playwright indicates that Athenians associated the philosophic teachings of Socrates with Sophism. 
As philosophers, the Sophists were men of ambiguous reputation: "they were a set of charlatans that appeared in Greece in the fifth century BC, and earned ample livelihood by imposing on public credulity: professing to teach virtue, they really taught the art of fallacious discourse, and meanwhile propagated immoral practical doctrines." Besides "The Clouds", the comic play "The Wasps" (422 BC) also depicts inter-generational conflict, between an older man and a young man. Such representations of inter-generational social conflict among the men of Athens, especially in the decade from 425 to 415 BC, can reflect contrasting positions regarding opposition to or support for the Athenian invasion of Sicily. Many Athenians blamed the teachings of the Sophists and of Socrates for instilling in the younger generation a morally nihilistic, disrespectful attitude towards their society. Socrates left no written works, but his student and friend, Plato, wrote Socratic dialogues, featuring Socrates as the protagonist. Rival intellectuals resented Socrates's "elenctic examination" method of intellectual inquiry, because its questions threatened their credibility as men of wisdom and virtue. One will sometimes find the claim that Socrates described himself as the "gadfly" of Athens which, like a sluggish horse, needed to be aroused by his "stinging". In the Greek text of his defense given by Plato, Socrates never actually uses that term (viz., "gadfly" [Grk., "oîstros"]) to describe himself. Rather, his reference is merely allusive, as he (literally) says only that he has attached himself to the City ("proskeimenon tē polei") in order to sting it. Nevertheless, he does make the bold claim that he is a god's gift to the Athenians. Socrates' elenctic method was often imitated by the young men of Athens. 
Alcibiades, an Athenian general who had been the main proponent of the disastrous Sicilian Expedition during the Peloponnesian War, in which virtually the entire Athenian invading force of more than 50,000 soldiers and non-combatants (e.g., the rowers of the triremes) was killed, or captured and enslaved, was a student and close friend of Socrates, and his messmate during the siege of Potidaea (433–429 BC). Socrates remained Alcibiades' close friend, admirer, and mentor for about five or six years. While a masterful orator, Alcibiades has been described by at least two 20th-century psychologists as exhibiting the classic features of psychopathy. During his career, Alcibiades famously defected to Sparta, Athens' arch-enemy, after being summoned to trial, then to Persia after being caught in an affair with the wife of his benefactor, the King of Sparta. He then defected back to Athens after successfully persuading the Athenians that Persia would come to their aid against Sparta (though Persia had no intention of doing so). Finally driven out of Athens after the defeat at the Battle of Notium against Sparta, Alcibiades was assassinated in Phrygia in 404 BC by his Spartan enemies. Another possible source of resentment was the political views that he and his associates were thought to have embraced. Critias, who appears in two of Plato's Socratic dialogues, was a leader of the Thirty Tyrants, the ruthless oligarchic regime that ruled Athens, as puppets of Sparta and backed by Spartan troops, for eight months in 404–403 BC, until they were overthrown. Several of the Thirty had been students of Socrates, but there is also a record of their falling out. As with many of the issues surrounding Socrates' conviction, the nature of his affiliation with the Thirty Tyrants is far from straightforward. During the reign of the Thirty, many prominent Athenians who were opposed to the new government left Athens. 
Robin Waterfield asserts that "Socrates would have been welcome in oligarchic Thebes, where he had close associates among the Pythagoreans who flourished there, and which had already taken in other exiles." Given the availability of a hospitable host outside of Athens, Socrates, at least in a limited way, chose to remain in Athens. Thus, Waterfield suggests, Socrates’ contemporaries probably thought his remaining in Athens, even without participating in the Thirty’s bloodthirsty schemes, demonstrated his sympathy for the Thirty’s cause, not neutrality towards it. This is proved, Waterfield argues, by the fact that after the Thirty were no longer in power, anyone who had remained in Athens during their rule was encouraged to move to Eleusis, the new home of the expatriate Thirty. Socrates did oppose the will of the Thirty on one documented occasion. Plato’s "Apology" has the character of Socrates describe that the Thirty ordered him, along with four other men, to fetch a man named Leon from Salamis so that the Thirty could execute him. While Socrates did not obey this order, he did nothing to warn Leon, who was subsequently apprehended by the other four men. According to the portraits left by some of Socrates' followers, Socrates himself seems to have openly espoused certain anti-democratic views, most prominent perhaps being the view that it is not majority opinion that yields correct policy but rather genuine knowledge and professional competence, which is possessed by only a few. Plato also portrays him as being severely critical of some of the most prominent and well-respected leaders of the Athenian democracy; and even has him claim that the officials selected by the Athenian system of governance cannot credibly be regarded as benefactors, since it is not any group of "many" that benefits, but only "someone or very few persons". Finally, Socrates was known as often praising the laws of the undemocratic regimes of Sparta and Crete. 
Plato himself reinforced anti-democratic ideas in "The Republic", advocating rule by an elite of enlightened "philosopher-kings". The totalitarian Thirty Tyrants had anointed themselves as that elite, and in the minds of his Athenian accusers, Socrates was guilty because he was suspected of introducing oligarchic ideas to them. Larry Gonick, in his "Cartoon History of the Universe", writes: "The trial of Socrates has always seemed mysterious... the charges sound vague and unreal... because behind the stated charges was Socrates's real crime: preaching a philosophy that produced Alcibiades and Critias... but of course he couldn't be prosecuted for that under the amnesty" (which had been declared after the overthrow of the Thirty Tyrants), "so his accusers made it 'not believing the Gods of the city, introducing new gods, and corrupting the youth.'" Apart from his views on politics, Socrates held unusual views on religion. He made several references to his spirit, or "daimonion", although he explicitly claimed that it never urged him on, but only warned him against various prospective actions. The extant primary sources about the history of the trial and execution of Socrates are: the "Apology of Socrates to the Jury", by Xenophon of Athens, a historian; and the tetralogy of Socratic dialogues, the "Euthyphro", the "Socratic Apology", the "Crito", and the "Phaedo", by Plato, a philosopher who had been a student of Socrates. In "The Indictment of Socrates" (392 BC), the sophist rhetorician Polycrates (440–370 BC) presents the prosecution speech, by Anytus, which condemned Socrates for his political and religious activities in Athens before the year 403 BC. 
In presenting such a prosecution, which addressed matters external to the charges of moral corruption and impiety for which the Athenian "polis" was trying Socrates, Anytus violated the political amnesty granted in the agreement of reconciliation (403–402 BC), which granted amnesty for political and religious actions taken before or during the rule of the Thirty Tyrants, "under which all further charges and official recriminations concerning the [reign of] terror were forbidden". Moreover, the legal and religious particulars against Socrates that Polycrates reported in "The Indictment of Socrates" are addressed in the replies by Xenophon and the sophist Libanius of Antioch (314–390). The formal accusation was the second element of the trial of Socrates, which the accuser, Meletus, swore to be true before the archon (a state officer with mostly religious duties), who considered the evidence and determined that there was an actionable case of "moral corruption of Athenian youth" and "impiety", for which the philosopher must legally answer; the archon summoned Socrates for a trial by jury. Athenian juries were drawn by lottery from a group of hundreds of male-citizen volunteers; so large a jury usually ensured a majority verdict in a trial. Although neither Plato nor Xenophon of Athens identifies the number of jurors, a jury of 501 men likely was the legal norm. In the "Apology of Socrates" (36a–b), about Socrates' defense at trial, Plato said that if just 30 of the votes had been cast otherwise, then Socrates would have been acquitted (36a), and that (perhaps) fewer than three-fifths of the jury voted against him (36b). Assuming a jury of 501, this would imply that he was convicted by a majority of 280 against 221. After Socrates was found guilty of corruption and impiety, both he and the prosecutor suggested sentences as punishment for his crimes against the city-state of Athens. 
Noting how few additional votes would have been required for an acquittal, Socrates joked that he should be "punished" with free meals at the Prytaneum (the city's sacred hearth), an honour usually reserved for a benefactor of Athens and for the victorious athletes of an Olympiad. After that failed suggestion, Socrates offered to pay a fine of 100 drachmae, one-fifth of his property, a sum that testified to his integrity and poverty as a philosopher. Finally, a fine of 3,000 drachmae was agreed, proposed by Plato, Crito, Critobulus, and Apollodorus, who guaranteed payment; nonetheless, the prosecutor of the trial of Socrates proposed the death penalty for the impious philosopher (Diogenes Laërtius, 2.42). In the end, the sentence of death was passed by a greater majority of the jury than that by which he had been convicted. In the event, friends, followers, and students encouraged Socrates to flee Athens, a course of action that the citizens expected; yet, on principle, Socrates refused to flout the law and escape his legal responsibility to Athens (see "Crito"). Therefore, faithful to his teaching of civic obedience to the law, the 70-year-old Socrates carried out his death sentence and drank the hemlock, as condemned at trial (see "Phaedo"). At the time of the trial of Socrates, in the year 399 BC, the city-state of Athens had recently endured the trials and tribulations of Spartan hegemony and the thirteen-month régime of the Thirty Tyrants, imposed as a consequence of the Athenian defeat in the Peloponnesian War (431–404 BC). At the request of Lysander, a Spartan admiral, the Thirty men, led by Critias and Theramenes, were installed to govern Athens and to revise the city's democratic laws, which were inscribed on a wall of the Stoa Basileios. Their political actions were meant to facilitate the subordination of the Athenian government, from democracy to oligarchy, in service to Sparta. 
Moreover, the Thirty Tyrants also appointed a council of 500 men to perform the judicial functions that had once belonged to every Athenian citizen. In their brief régime, the Spartan-backed oligarchs killed about five per cent of the Athenian population, confiscated much property, and exiled democrats from the city proper. The fact that Critias, leader of the Thirty Tyrants, had been a pupil of Socrates was held against him, as a citizen of Athens. Plato's presentation of the trial and death of Socrates inspired writers, artists, and philosophers to revisit the matter. For some, the execution of the man whom Plato called "the wisest and most just of all men" demonstrated the defects of democracy and of popular rule; for others, the Athenian actions were a justifiable defense of the recently re-established democracy. In "The Trial of Socrates" (1988), I. F. Stone said that Socrates wanted to be sentenced to death, to justify his philosophic opposition to the Athenian democracy of that time, and because, as a man, he saw that old age would be an unpleasant time for him. In the play "Socrates on Trial" (2007), Andrew Irvine said that out of loyalty to Athenian democracy, Socrates willingly accepted the guilty verdict voted by the jurors of his trial: "During a time of war, and great social and intellectual upheaval, Socrates felt compelled to express his views, openly, regardless of the consequences. As a result, he is remembered today, not only for his sharp wit and high ethical standards, but also for his loyalty to the view that, in a democracy, the best way for a man to serve himself, his friends, and his city—even during times of war—is by being loyal to, and by speaking publicly about, the truth." In "Why Socrates Died: Dispelling the Myths" (2009), Robin Waterfield said that the death of Socrates was an act of volition motivated by a greater purpose; Socrates "saw himself as healing the City's ills by his voluntary death". 
Waterfield said that Socrates, with his unconventional methods of intellectual enquiry, attempted to resolve the political confusion then occurring in the city-state of Athens, by willingly being the scapegoat, whose death would quiet old disputes, which then would allow progress towards political harmony and social peace for the Athenian polis. In "The New Trial of Socrates" (2012), an international panel of ten judges held a mock re-trial of Socrates to resolve the matter of the charges leveled against him by Meletus, Anytus, and Lycon, that: "Socrates is a doer of evil and corrupter of the youth, and he does not believe in the gods of the state, and he believes in other new divinities of his own"; by split decision, five judges voted "guilty" and five judges voted "not guilty", which acquitted Socrates of corruption of the young and of impiety against the Athenian pantheon. Limiting themselves to the facts of the case against Socrates, the judges did not consider any sentence; the judges who voted the philosopher guilty said that they would not have considered the death penalty for Socrates.
https://en.wikipedia.org/wiki?curid=30337
Tetris Tetris is a tile-matching video game created by Russian software engineer Alexey Pajitnov in 1984. It has been published by several companies, most prominently during a dispute over the appropriation of the game's rights in the late 1980s. After a significant period of publication by Nintendo, the rights reverted to Pajitnov in 1996, who co-founded The Tetris Company with Henk Rogers to manage "Tetris" licensing. In "Tetris", players must complete lines by moving differently shaped pieces (tetrominoes), which descend onto the playing field. The completed lines disappear and grant the player points, and the player can proceed to fill the vacated spaces. The game ends when the playing field is filled. The longer the player can delay this inevitable outcome, the higher their score will be. In multiplayer games, the players must last longer than their opponents, and in certain versions, players can inflict penalties on opponents by completing a significant number of lines. Some adaptations have provided variations to the game's theme, such as three-dimensional displays or a system for reserving pieces. Built on simple rules and requiring intelligence and skill, "Tetris" established itself as one of the great early video games. It has sold 202 million copies – approximately 70 million physical units and 132 million paid mobile game downloads – as of December 2011, making it one of the best-selling video game franchises of all time; the Game Boy version in particular is one of the best-selling games of all time, with over 35 million copies sold. The game is available on over 65 platforms, setting a "Guinness" world record for the most ported video game title. "Tetris" is rooted within popular culture and its popularity extends beyond the sphere of video games; imagery from the game has influenced architecture, music and cosplay. 
The game has also been the subject of various research studies that have analyzed its theoretical complexity and have shown its effect on the human brain following a session, in particular the Tetris effect. "Tetris" is primarily composed of a field of play in which pieces of different geometric forms, called "tetriminos", descend from the top of the field. During this descent, the player can move the pieces laterally and rotate them until they touch the bottom of the field or land on pieces that were placed before them. The player can neither slow down the falling pieces nor stop them, but can accelerate them in most versions. The objective of the game is to use the pieces to create as many horizontal lines of blocks as possible. When a line is completed, it disappears, and the blocks placed above it fall by one row. Completing lines grants points, and accumulating a certain number of points moves the player up a level, which increases the number of points granted per completed line. In most versions, the speed of the falling pieces increases with each level, leaving the player with less time to think about placement. The player can clear multiple lines at once, which can earn bonus points in some versions. It is possible to complete up to four lines simultaneously with the use of the I-shaped tetrimino; this move is called a "Tetris", and is the basis of the game's title. If the player cannot make the blocks disappear quickly enough, the field will start to fill, and when the pieces reach the top of the field and prevent the arrival of additional pieces, the game ends. At the end of each game, the player receives a score based on the number of lines that have been completed. The game never ends with the player's victory; the player can only complete as many lines as possible before an inevitable loss. Since 1996, The Tetris Company has internally defined specifications and guidelines that publishers must adhere to in order to be granted a license to "Tetris". 
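The line-clearing step described above can be stated precisely. Below is a minimal sketch in Python; the grid representation and the scoring values are illustrative assumptions for the sake of the example, not any official implementation (real versions assign points differently):

```python
# Sketch of the core line-clearing step in a Tetris-like game.
# The grid is a list of rows, top row first; a cell is truthy when occupied.
# Scoring values here are illustrative only -- real versions differ.

SCORE_PER_CLEAR = {1: 100, 2: 300, 3: 500, 4: 800}  # 4 lines = a "Tetris"

def clear_lines(grid, width):
    """Remove completed rows, add empty rows on top, return (grid, points)."""
    remaining = [row for row in grid if not all(row)]
    cleared = len(grid) - len(remaining)
    # Blocks above a cleared row fall by one row per cleared line,
    # which we model by prepending that many empty rows:
    new_rows = [[0] * width for _ in range(cleared)]
    points = SCORE_PER_CLEAR.get(cleared, 0)
    return new_rows + remaining, points

# A 4-wide, 4-tall field whose bottom two rows are full:
field = [[0, 0, 0, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]
field, points = clear_lines(field, 4)
```

After the call, the two full rows are gone, the partially filled row has fallen to the bottom, and two fresh empty rows have appeared at the top.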
The contents of these guidelines establish such elements as the correspondence of buttons and actions, the size of the field of play, the system of rotation, and others. The pieces on which the game of "Tetris" is based are called "tetriminos". Pajitnov's original version for the Electronika 60 computer used green brackets to represent the blocks that make up tetriminos. Versions of "Tetris" on the original Game Boy/Game Boy Color and on most dedicated handheld games use monochrome or grayscale graphics, but most popular versions use a separate color for each distinct shape. Prior to The Tetris Company's standardization in the early 2000s, those colors varied widely from implementation to implementation. The scoring formula for the majority of "Tetris" products is built on the idea that more difficult line clears should be awarded more points. For example, a single line clear in "Tetris Zone" is worth 100 points, clearing four lines at once (known as a "Tetris") is worth 800, while each subsequent back-to-back "Tetris" is worth 1,200. In addition, certain games award combos, which reward multiple line clears in quick succession. The exact conditions for triggering combos, and the amount of importance assigned to them, vary from game to game. Nearly all "Tetris" games allow the player to press a button to increase the speed of the current piece's descent or cause the piece to drop and lock into place immediately, known as a "soft drop" and a "hard drop", respectively. While performing a soft drop, the player can also stop the piece's increased speed by releasing the button before the piece settles into place. Some games only allow either soft drop or hard drop; others have separate buttons for both. Many games award a number of points based on the height that the piece fell before locking, so using the hard drop generally awards more points. The question "Would it be possible to play forever?" 
was first considered in a thesis by John Brzustowski in 1992. The conclusion reached was that the game is statistically doomed to end. The reason has to do with the S and Z Tetriminos. If a player receives a sufficiently large sequence of alternating S and Z Tetriminos, the naïve gravity used by the standard game eventually forces the player to leave holes on the board. The holes will necessarily stack to the top and, ultimately, end the game. If the pieces are distributed randomly, this sequence will eventually occur. Thus, if a game with, for example, an ideal, uniform, uncorrelated random number generator is played long enough, any player will top out. In practice, this does not occur in most "Tetris" variants. Some variants allow the player to choose to play with only S and Z Tetriminos, and a good player may survive well over 150 consecutive Tetriminos this way. On an implementation with an ideal uniform randomizer, the probability at any given time of the next 150 Tetriminos being only S and Z is (2/7)^150 (approximately 2.5 × 10^−82). Most implementations use a pseudorandom number generator to generate the sequence of Tetriminos, and such an S–Z sequence is almost certainly not contained in the sequence produced by the 32-bit linear congruential generator in many implementations (which has at most 2^32 states). The "evil" algorithm in "Bastet" (an unofficial variant) often starts a game with a series of more than seven Z pieces. Modern versions of "Tetris" released after 2001 use a bag-style randomizer that guarantees players will never receive more than four S or Z pieces in a row. This is one of the "Indispensable Rules" enforced by the "Tetris Guideline" that all officially licensed "Tetris" games must follow. Recent versions of "Tetris" such as "Tetris Worlds" allow the player to repeatedly rotate a block once it hits the bottom of the playfield without it locking into place. 
This permits a player to play for an infinite amount of time, though not necessarily to land an infinite number of blocks. Although not the first "Tetris" game to feature "easy spin" (see "The Next Tetris"), also called "infinite spin" by critics, "Tetris Worlds" was the first game to come under major criticism for it. Easy spin refers to the property of a tetromino to stop falling for a moment after left or right movement or rotation, effectively allowing someone to suspend the tetromino while thinking about where to place it. This feature has been implemented into The Tetris Company's official guideline. This type of play differs from traditional "Tetris" because it takes away the pressure of higher-level speed. Some reviewers went so far as to say that this mechanism broke the game. The goal in "Tetris Worlds", however, is to complete a certain number of lines as fast as possible, so the ability to hold off a piece's placement will not make achieving that goal any faster. Later, GameSpot received "easy spin" more openly, saying that "the infinite spin issue honestly really affects only a few of the single-player gameplay modes in "Tetris DS", because any competitive mode requires you to lay down pieces as quickly as humanly possible." In response to the issue, Henk Rogers stated in an interview that infinite spin was an intentional part of the game design, allowing novice players to expend some of their available scoring time to decide on the best placement of a piece. Rogers observed that "gratuitous spinning" does not occur in competitive play, as expert players do not require much time to think about where a piece should be placed. A limitation has been placed on infinite lock delay in later games of the franchise, where after a certain number of inputs (i.e., rotations and movements), the piece will instantly lock itself. This limit defaults to 15 inputs. 
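The bag-style randomizer mentioned above has a simple implementation: shuffle one copy of each of the seven pieces, deal them out, then refill the bag. The sketch below, in Python, is a plausible reconstruction rather than the official guideline algorithm; the single-letter piece names are the conventional ones:

```python
import random

PIECES = "IJLOSTZ"  # conventional names of the seven tetrominoes

def bag_randomizer(rng=random):
    """Yield pieces under the 7-bag system: every run of seven
    contains each piece exactly once, so at most 4 S/Z pieces can
    ever appear consecutively (e.g. ...S, Z at the end of one bag
    followed by Z, S at the start of the next)."""
    while True:
        bag = list(PIECES)
        rng.shuffle(bag)  # a fresh, randomly ordered bag of seven
        yield from bag

gen = bag_randomizer()
first_14 = [next(gen) for _ in range(14)]

# By contrast, a uniform randomizer would deal 150 straight S/Z
# pieces with probability (2/7)**150, on the order of 1e-82:
p = (2 / 7) ** 150
```

Each consecutive window of seven pieces is a permutation of the full set, which is why both long S–Z floods and long droughts of any single piece are impossible under this scheme.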
In 1979, Alexey Pajitnov joined the Computer Center of the Soviet Academy of Sciences as a speech recognition researcher. While he was tasked with testing the capabilities of new hardware, his ambition was to use computers to make people happy. Pajitnov developed several puzzle games on the institute's computer, an Electronika 60, a scarce resource at the time. For Pajitnov, "The games allow people to get to know each other better and act as revealers of things you might not normally notice, such as their way of thinking." In 1984, while trying to recreate a favorite puzzle game from his childhood featuring pentominoes, Pajitnov imagined a game consisting of a descent of random pieces that the player would turn to fill rows. Pajitnov felt that the game would be needlessly complicated with twelve different shape variations, so he scaled the concept down to tetrominoes, of which there are seven variants. Pajitnov titled the game "Tetris", a word created from a combination of "tetromino" and his favorite sport, "tennis". Because the Electronika 60 had no graphical interface, Pajitnov modelled the field and pieces using spaces and brackets. Realizing that completed lines filled the screen quickly, Pajitnov decided to delete them, creating a key part of "Tetris" gameplay. This early version of "Tetris" had no scoring system and no levels, but its addictive quality distinguished it from the other puzzle games Pajitnov had created. Pajitnov presented "Tetris" to his colleagues, who quickly became addicted to it. It permeated the offices within the Academy of Sciences, and within a few weeks it reached every Moscow institute with a computer. A friend of Pajitnov, Vladimir Pokhilko, who requested the game for the Moscow Medical Institute, saw people stop working to play "Tetris." Pokhilko eventually banned the game from the Medical Institute to restore productivity. 
Pajitnov sought to adapt "Tetris" to the IBM Personal Computer, which had a higher quality display than the Electronika 60. Pajitnov recruited Vadim Gerasimov, a 16-year-old high school student who was known for his computer skills. Pajitnov had met Gerasimov before through a mutual acquaintance, and they had worked together on previous games. Gerasimov adapted "Tetris" to the IBM PC over the course of a few weeks, incorporating color and a scoreboard. Pajitnov wanted to export "Tetris", but he had no knowledge of the business world. His superiors in the Academy were not necessarily happy with the success of the game, since they had not intended such a creation from the research team. Furthermore, intellectual property did not exist in the Soviet Union, and Soviet researchers were not allowed to sell their creations. Pajitnov asked his supervisor Victor Brjabrin, who had knowledge of the world outside the Soviet Union, to help him publish "Tetris". Pajitnov offered to transfer the rights of the game to the Academy, and was delighted to receive a non-compulsory remuneration from Brjabrin through this deal. In 1986, Brjabrin sent a copy of "Tetris" to Hungarian game publisher Novotrade. From there, clones of the game began circulating via floppy disks throughout Hungary and as far as Poland. Robert Stein, an international software salesman for the London-based firm Andromeda Software, saw the game's commercial potential during a visit to Hungary in June 1986. After an indifferent response from the Academy, Stein contacted Pajitnov and Brjabrin by fax to obtain the license rights. The researchers expressed interest in forming an agreement with Stein. However, they were unaware that a fax machine had legal value in the Western world, and Stein began to approach other companies to produce the game. Stein approached publishers at the 1987 Consumer Electronics Show in Las Vegas. Gary Carlston, co-founder of Broderbund, retrieved a copy and brought it to California. 
Despite enthusiasm amongst its employees, Broderbund remained skeptical because of the game's Soviet origins. Likewise, Mastertronic co-founder Martin Alper declared that "No Soviet product will ever work in the Western world." Stein ultimately signed two agreements: he sold the European rights to the publisher Mirrorsoft, and the American rights to Spectrum HoloByte. The latter obtained the rights after a visit to Mirrorsoft by Spectrum HoloByte president Phil Adams in which he played "Tetris" for two hours. At that time, Stein had not yet signed a contract with the Soviet Union. Nevertheless, he sold the rights to the two companies for £3,000 and a royalty of 7.5 to 15% on sales. Before releasing "Tetris" in the United States, Spectrum HoloByte CEO Gilman Louie asked for an overhaul of the game's graphics and music. The Soviet spirit was preserved, with fields illustrating Russian parks and buildings as well as melodies anchored in Russian folklore of the time. The company's goal was to make people want to buy a Russian product; the game came complete with a red package and Cyrillic text, an unusual approach on the other side of the Berlin Wall. The Mirrorsoft version was released for the IBM PC in November 1987, while the Spectrum HoloByte version was released for the same platform in January 1988. "Tetris" was ported to platforms including the Amiga, Atari ST, ZX Spectrum, Commodore 64 and Amstrad CPC. At the time, it made no mention of Pajitnov and came with the announcement of "Made in the United States of America, designed abroad". "Tetris" was a commercial success in Europe and the United States: Mirrorsoft sold tens of thousands of copies in two months, and Spectrum HoloByte sold over 100,000 units in the space of a year. According to Spectrum HoloByte, the average "Tetris" player was between 25–45 years old and was a manager or engineer. 
At the Software Publishers Association's Excellence in Software Awards ceremony in March 1988, "Tetris" won Best Entertainment Software, Best Original Game, Best Strategy Program, and Best Consumer Software. Stein, however, was faced with a problem: the only document certifying a license fee was the fax from Pajitnov and Brjabrin, meaning that Stein sold the license for a game he did not yet own. Stein contacted Pajitnov and asked him for a contract for the rights. Stein began negotiations via fax, offering 75% of the revenue generated by Stein from the license. Elektronorgtechnica ("Elorg"), the Soviet Union's central organization for the import and export of computer software, was unconvinced and requested 80% of the revenue. Stein made several trips to Moscow and held long discussions with Elorg representatives. Stein came to an agreement with Elorg on February 24, 1988, and on May 10 he signed a contract for a ten-year worldwide "Tetris" license for all current and future computer systems. Pajitnov and Brjabrin were unaware that the game was already on sale and that Stein had claimed to own the rights prior to the agreement. Although Pajitnov would not receive any percentage from these sales, he said that "The fact that so many people enjoy my game is enough for me." In 1988, Spectrum HoloByte sold the Japanese rights to its computer games and arcade machines to Henk Rogers, who was searching for games for the Japanese market. Mirrorsoft sold its Japanese rights to Atari Games subsidiary Tengen, which then sold the arcade rights to Sega and the console rights to Rogers. At this point, almost a dozen companies believed they held the "Tetris" rights, with Stein retaining rights for home computer versions. Elorg was still unaware of the deals Stein had negotiated, which did not bring money to them. Nevertheless, "Tetris" was a commercial success in North America, Europe and Asia. 
The same year, Nintendo was preparing to launch its first portable console, the Game Boy. Nintendo was attracted to "Tetris" by its simplicity and established success on the NES. Rogers, who was close to Nintendo president Hiroshi Yamauchi, sought to obtain the handheld rights. After a failed negotiation with Atari, Rogers contacted Stein in November 1988. Stein agreed to sign a contract, but explained that he had to consult Elorg before returning to negotiations with Rogers. After contacting Stein several times, Rogers began to suspect a breach of contract on Stein's part, and decided in February 1989 to go to the Soviet Union and negotiate the rights with Elorg. Rogers arrived at the Elorg offices uninvited, while Stein and Mirrorsoft manager Kevin Maxwell made an appointment the same day without consulting each other. During the discussions, Rogers explained that he wanted to obtain the rights to "Tetris" for the Game Boy. After quickly obtaining an agreement with Elorg president Nikolai Belikov, Rogers showed Belikov a "Tetris" cartridge. Belikov was surprised, as he believed at the time that the rights to "Tetris" were only signed for computer systems. The present parties accused Nintendo of illegal publication, but Rogers defended himself by explaining that he had obtained the rights via Atari Games, which had itself signed an agreement with Stein. Belikov then realized the complex path that the license had followed within four years because of Stein's contracts, and he constructed a strategy to regain possession of the rights and obtain better commercial agreements. At that point, Elorg was faced with three different companies seeking to buy the rights. During this time, Rogers befriended Pajitnov over a game of Go. Pajitnov would support Rogers throughout the discussions, to the detriment of Maxwell, who came to secure the "Tetris" rights for Mirrorsoft. 
Belikov proposed to Rogers that Stein's rights would be cancelled and Nintendo would be granted the game rights for both home and handheld consoles. Rogers flew to the United States to convince Nintendo's American branch to sign up for the rights. The contract with Elorg was signed by Nintendo executives Minoru Arakawa and Howard Lincoln for $500,000, plus 50 cents per cartridge sold. Elorg then sent an updated contract to Stein. One of the clauses defined a computer as a machine with a screen and keyboard, and thus Stein's rights to console versions were withdrawn. Stein signed the contract without paying attention to this clause, and later realized that all the contract's other clauses, notably on payments, were only a "smokescreen" to deceive him. In March 1989, Nintendo sent a cease and desist to Atari Games concerning production of the NES version of "Tetris". Atari Games contacted Mirrorsoft, and were assured that they still retained the rights. Nintendo, however, maintained its position. In response, Mirrorsoft owner Robert Maxwell pressured Soviet Union leader Mikhail Gorbachev to cancel the contract between Elorg and Nintendo. Despite the threats to Belikov, Elorg refused to give in and highlighted the financial advantages of their contract compared to those signed with Stein and Mirrorsoft. In November 1989, Nintendo and Atari Games began a legal battle in the courts of San Francisco. Atari Games sought to prove that the NES was a computer, as indicated by its Japanese name "Famicom", an abbreviation of "Family Computer". In this case, the initial license would authorize Atari Games to release the game. The central argument of Atari Games was that the Famicom was designed to be convertible into a computer via its extension port. This argument was not accepted, and Pajitnov stressed that the initial contract only concerned computers and no other machine. Nintendo brought Belikov to testify on its behalf. 
Judge Fern Smith declared that Mirrorsoft and Spectrum HoloByte never received explicit authorization for marketing on consoles, and ruled in Nintendo's favor. On June 21, 1989, Atari Games withdrew its NES version from sale, and thousands of cartridges remained unsold in the company's warehouses. Sega had planned to release a Genesis version of "Tetris" on April 15, 1989, but cancelled its release during Nintendo and Atari's legal battle; fewer than ten copies were manufactured. A new port of the arcade version by M2 was included in the Sega Genesis Mini microconsole, released in September 2019. Through the legal history of the license, Pajitnov gained a reputation in the West. He was regularly invited by journalists and publishers, through which he discovered that his game had sold millions of copies, from which he had not made any money. However, he remained humble and proud of the game, which he considered "an electronic ambassador of benevolence". In January 1990, Pajitnov was invited by Spectrum HoloByte to the Consumer Electronics Show, and was immersed in American life for the first time. After a period of adaptation, he explored American culture in several cities, including Las Vegas, San Francisco, New York City and Boston, and engaged in interviews with several hosts, including the directors of Nintendo of America. He marveled at the freedom and the advantages of Western society, and spoke often of his travels to his colleagues upon returning to the Soviet Union. He realized that there was no market in Russia for their programs. At the same time, sales of the Game Boy – bundled with a handheld version of "Tetris" – exploded, exceeding sales forecasts three times. In 1991, Pajitnov and Pokhilko emigrated to the United States. Pajitnov moved to Seattle, where he produced games for Spectrum HoloByte. In April 1996, as agreed with the Academy ten years earlier and following an agreement with Rogers, the rights to "Tetris" reverted to Pajitnov. 
Pajitnov and Rogers founded The Tetris Company in June 1996 to manage the rights on all platforms, the previous agreements having expired. Pajitnov now receives a royalty for each "Tetris" game and derivative sold worldwide. In 2002, Pajitnov and Rogers founded Tetris Holding after the purchase of the game's remaining rights from Elorg, now a private entity following the dissolution of the Soviet Union. The Tetris Company now owns all rights to the "Tetris" brand, and is mainly responsible for removing unlicensed clones from the market; the company regularly calls on Apple Inc. and Google to remove illegal versions from their mobile app stores. In December 2005, Electronic Arts acquired Jamdat, a company specializing in mobile games. Jamdat had previously bought a company founded by Rogers in 2001, which managed the "Tetris" license on mobile platforms. As a result, Electronic Arts held a 15-year license on all mobile phone releases of "Tetris", which expired on April 21, 2020. "Tetris" has been released on a multitude of platforms since the creation of the original version on the Electronika 60. The game is available on most game consoles and is playable on personal computers, smartphones and iPods. "Guinness World Records" recognized "Tetris" as the most ported video game in history, having appeared on over 65 different platforms as of October 2010. Since the 2000s, internet versions of the game have been developed. However, commercial versions not approved by The Tetris Company tend to be purged due to company policy. The most famous online version, "Tetris Friends" by Tetris Online, Inc., had attracted over a million registered users. Tetris Online had also developed versions for console-based digital download services. Because of its popularity and simplicity of development, "Tetris" is often used as a hello world project for programmers coding for a new system or programming language. 
This has resulted in the availability of a large number of ports for different platforms. For instance, μTorrent and GNU Emacs contain similar shape-stacking games as easter eggs. Within official franchise installments, each version has made improvements to accommodate advancing technology and the goal of providing a more complete game. Developers are given freedom to add new modes of play and revisit the concept from different angles. Some concepts developed in official versions have been integrated into the "Tetris" guidelines in order to standardize future versions and allow players to migrate between different versions with little effort. The IBM PC version was the most evolved from the original version, featuring a graphical interface, colored tetrominoes, running statistics for the number of tetrominoes placed, and a guide for the controls. In computer science, it is common to analyze the computational complexity of problems, including real-life problems and games. It was proven that for the "offline" version of "Tetris" (the player knows the complete sequence of pieces that will be dropped, i.e. there is no hidden information) the following objectives are NP-complete: maximizing the number of rows cleared while playing the given piece sequence, maximizing the number of pieces placed before a loss occurs, maximizing the number of simultaneous clears of four rows, and minimizing the height of the highest filled grid square over the course of the sequence. Also, it is difficult to even approximately solve the first, second, and fourth problems. It is NP-hard, given an initial gameboard and a sequence of "p" pieces, to approximate the first two problems to within a factor of "p"^(1−ε) for any constant ε > 0. It is NP-hard to approximate the last problem to within a factor of 2 − ε for any constant ε > 0. To prove NP-completeness, it was shown that there is a polynomial-time reduction from the 3-partition problem, which is also NP-complete, to the "Tetris" problem. The earliest versions of "Tetris" had no music. The NES version includes two original compositions by Hirokazu Tanaka along with an arrangement of "Dance of the Sugar Plum Fairy" from the second act of "The Nutcracker", composed by Tchaikovsky.
The Blue Planet Software and Tengen versions also feature original music, with the exception of an arrangement of "Kalinka" in the Tengen version. Nintendo's Game Boy version includes three pieces of music as well: "Korobeiniki", Johann Sebastian Bach's French Suite No. 3 in B minor (BWV 814), and an original track by Tanaka. "Korobeiniki" is used in most later versions of the game, and has appeared in other games, albums and films that make reference to "Tetris". In the 2000s, The Tetris Company added as a prerequisite for the granting of the license that a version of "Korobeiniki" be available in the game. According to research by Dr. Richard Haier "et al.", prolonged "Tetris" activity can also lead to more efficient brain activity during play. When first playing "Tetris", brain function and activity increase, along with greater cerebral energy consumption, measured by glucose metabolic rate. As "Tetris" players become more proficient, their brains show a reduced consumption of glucose, indicating more efficient brain activity for this task. Moderate play of "Tetris" (half an hour a day for three months) boosts general cognitive functions such as "critical thinking, reasoning, language and processing" and increases cerebral cortex thickness. In January 2009, an Oxford University research group headed by Dr. Emily Holmes reported in "PLoS ONE" that for healthy volunteers, playing "Tetris" soon after viewing traumatic material in the laboratory reduced the number of flashbacks to those scenes in the following week. They believe that the computer game may disrupt the memories that are retained of the sights and sounds witnessed at the time, and which are later re-experienced through involuntary, distressing flashbacks of that moment. The group hopes to develop this approach further as a potential intervention to reduce the flashbacks experienced in post-traumatic stress disorder, but emphasized that these are only preliminary results.
Professor Jackie Andrade and Jon May, from Plymouth University's Cognition Institute, and Ph.D. student Jessica Skorka-Brown have conducted research showing that playing "Tetris" could distract from cravings and give a "quick and manageable" fix for people struggling to stick to diets, or quit smoking or drinking. Another notable effect is that, according to a Canadian study of April 2013, playing "Tetris" can treat older adolescents with amblyopia (lazy eye), and proved more effective than patching the stronger eye to train the weaker one. Dr. Robert Hess of the research team said: "It's much better than patching – much more enjoyable; it's faster, and it seems to work better." Tested in the United Kingdom, the approach also appears to help children with the condition. The game has been noted to cause the brain to involuntarily picture "Tetris" combinations even when the player is not playing (the "Tetris" effect), although this can occur with any computer game or situation showcasing repeated images or scenarios, such as a jigsaw puzzle. While debates about Tetris's cognitive benefits continue, at least some researchers view it as a milestone in the gamification of education. "Compute!" called the IBM version of "Tetris" "one of the most addictive computer games this side of the Berlin Wall ... [it] is "not" the game to start if you have work to do or an appointment to keep. Consider yourself warned". Orson Scott Card joked that the game "proves that Russia still wants to bury us. I shudder to think of the blow to our economy as computer productivity drops to 0". Noting that "Tetris" was not copy-protected, he wrote "Obviously, the game is meant to find its way onto every American machine". The IBM version of the game was reviewed in 1988 in "Dragon" No. 135 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 4.5 out of 5 stars.
The Lessers later reviewed Spectrum HoloByte's Macintosh version of "Tetris" in 1989 in "Dragon" No. 141, giving that version 5 out of 5 stars. "Macworld" reviewed the Macintosh version of "Tetris" in 1988, praising its strategic gameplay, stating that ""Tetris" offers the rare combination of being simple to learn but extremely challenging to play", and also praising the inclusion of the Desk Accessory version, which uses less RAM. "Macworld" summarized their review by listing "Tetris"' pros and cons, stating that "Tetris" is "Elegant; easy to play; challenging and addicting; requires quick thinking, long-term strategy, and lightning reflexes" and listed "Tetris"' cons as "None." In 1993, the ZX Spectrum version of the game was voted number 49 in the "Your Sinclair Official Top 100 Games of All Time". In 1996, "Tetris Pro" was ranked the 38th best game of all time by "Amiga Power". "Entertainment Weekly" picked the game as the #8 greatest game available in 1991, saying: "Thanks to Nintendo’s endless promotion, Tetris has become one of the most popular video games." "Computer Gaming World" gave "Tetris" the 1989 Compute! Choice Award for Arcade Game, describing it as "by far, the most addictive game ever". The game won three Software Publishers Association Excellence in Software Awards in 1989, including Best Entertainment Program and the Critic's Choice Award for consumers. "Computer Gaming World" in 1996 ranked it 14th on the magazine's list of the most innovative computer games. That same year, "Next Generation" listed it as number 2 on their "Top 100 Games of All Time", commenting that, "There is something so perfect, so Zen about the falling blocks of "Tetris" that the game has captured the interest of everyone who has ever played it." In 1999, "Next Generation" listed "Tetris" as number 2 on their "Top 50 Games of All Time", commenting that, ""Tetris" is the essence of gameplay at its most basic.
You have a simple goal, simple controls, and simple objects to manipulate." On March 12, 2007, "The New York Times" reported that "Tetris" was named to a list of the ten most important video games of all time, the so-called game canon. After the proposal was announced at the 2007 Game Developers Conference, the Library of Congress took up the video game preservation effort, beginning with these 10 games, including "Tetris". In 2007, video game website GameFAQs hosted its sixth annual "Character Battle", in which users nominate their favorite video game characters for a popularity contest. The L-shaped "Tetris" piece (or "L-Block" as it was called) entered the contest as a joke character, but on November 4, 2007, it won the contest. On June 6, 2009, Google honored "Tetris"' 25th anniversary by changing its logotype to a version drawn with "Tetris" blocks, with the letter "l" rendered as the long "Tetris" block lowering into place. In 2009, "Game Informer" put "Tetris" 3rd on their list of "The Top 200 Games of All Time", saying that "If a game could be considered ageless, it's "Tetris"". The "Game Informer" staff also placed it third on their 2001 list of the 100 best games ever. "Electronic Gaming Monthly"'s 100th issue had "Tetris" as first place in the "100 Best Games of All Time", commenting that ""Tetris" is as pure as a video game can get. ... When the right blocks come your way - and if you can manage to avoid mistakes - the game can be relaxing. One mislaid block, however, and your duties switch to damage control, a mad, panicky dash to clean up your mess or die." "Tetris" was also the only game for which the list did not specify one or two versions; the editors explained that after deadlocking over which version was best, they concluded that there was no wrong version of "Tetris" to play. In 2007, "Tetris" came in second place in IGN's "100 Greatest Video Games of All Time".
In January 2010, it was announced that the "Tetris" franchise had sold more than 170 million copies, approximately 70 million physical copies and over 100 million copies for cell phones, making it one of the best-selling video game franchises of all time. "Tetris" has sold 132 million paid mobile game downloads. In 1991, "PC Format" named "Tetris" one of the 50 best computer games ever. The editors called it "incredibly addictive" and "one of the best games of all time". A hoax circulated in February 2019 claiming that the original NES instruction manual for "Tetris" had given the seven tetrominoes names such as "Orange Ricky", "Hero" and "Smashboy". Despite the hoax being disproven by video game historians, a question on the October 7, 2019 airing of "Jeopardy!" alluded to these names. "Tetris" has been the subject of academic research. Vladimir Pokhilko was the first clinical psychologist to conduct experiments using "Tetris". Subsequently, it has been used for research in several fields including the theory of computation, algorithmic theory, and cognitive psychology. During a game of "Tetris", blocks appear to adsorb onto the lower surface of the window. This has led scientists to use tetrominoes "as a proxy for molecules with a complex shape" to model their "adsorption on a flat surface" to study the thermodynamics of nanoparticles. Threshold Entertainment has teamed up with The Tetris Company to develop a film adaptation of "Tetris". Threshold's CEO describes the film as an epic sci-fi adventure that will form part of a trilogy. In 2016, sources reported on a press release claiming the film would be shot in China in 2017 with an $80 million budget. However, no 2017 or later sources confirm that the film ever actually went into production. "Tetris" appeared in the 2010 short animated film "Pixels", and in the 2015 feature film "Pixels" inspired by that short.
https://en.wikipedia.org/wiki?curid=30339
Pre-Socratic philosophy Pre-Socratic philosophy is ancient Greek philosophy before Socrates and schools contemporary to Socrates that were not influenced by him. In Classical antiquity, the pre-Socratic philosophers were called "physiologoi" (in English, physical or natural philosophers). Their inquiries spanned the workings of the natural world as well as human society, ethics, and religion, seeking explanations based on natural principles rather than the actions of supernatural gods. They introduced to the West the notion of the world as a "kosmos", an ordered arrangement that could be understood via rational inquiry. Coming from the eastern and western fringes of the Greek world, the pre-Socratics were the forerunners of what became Western philosophy as well as natural philosophy, which later developed into the natural sciences (such as physics, chemistry, geology, and astronomy). Significant figures include: the Milesians, Heraclitus, Parmenides, Empedocles, Zeno of Elea, and Democritus. Aristotle was the first to call them "physiologoi" or "physikoi" ("physicists", after "physis", "nature") and differentiate them from the earlier "theologoi" (theologians), or "mythologoi" (story tellers and bards) who attributed these phenomena to various gods. Diogenes Laërtius divides the "physiologoi" into two groups: Ionian, led by Anaximander, and Italiote, led by Pythagoras. The pre-Socratic philosophers rejected traditional mythological explanations of the phenomena they saw around them in favor of more rational explanations. Their efforts were directed to the investigation of the ultimate basis and essential nature of the external world. They sought the material principle ("archê") of things, and the method of their origin and disappearance. As the first philosophers, they emphasized the rational unity of things and rejected supernatural explanations, instead seeking natural principles at work in the world and human society.
The pre-Socratics saw the world as a "kosmos", an ordered arrangement that could be understood via rational inquiry. Pre-Socratic thinkers present a discourse concerned with key areas of philosophical inquiry such as being, the primary stuff of the universe, the structure and function of the human soul, and the underlying principles governing perceptible phenomena, human knowledge and morality. It may sometimes be difficult to determine the actual line of argument some pre-Socratics used in supporting their particular views. While most of them produced significant texts, none of the texts have survived in complete form. All that is available are quotations and testimonies by later philosophers (often biased) and historians, and the occasional textual fragment. Some of these philosophers asked questions about "the essence of things", while others concentrated on defining problems and paradoxes that became the basis for later mathematical, scientific and philosophic study. The knowledge we have of them derives from accounts - known as doxography - of later philosophical writers (especially Aristotle, Plutarch, Diogenes Laërtius, Stobaeus and Simplicius), and some early theologians (especially Clement of Alexandria and Hippolytus of Rome). Modern interest in early Greek philosophy can be traced back to 1573, when Henri Estienne collected a number of pre-Socratic fragments in "Poesis Philosophica" ("Ποίησις Φιλόσοφος"). Hermann Diels popularized the term "pre-Socratic" in "Die Fragmente der Vorsokratiker" ("The Fragments of the Pre-Socratics") in 1903. However, the term "pre-Sokratic" was in use as early as George Grote's "Plato and the Other Companions of Sokrates" in 1865. Eduard Zeller was also important in dividing thought before and after Socrates. Major analyses of pre-Socratic thought have been made by Gregory Vlastos, Jonathan Barnes, and Friedrich Nietzsche in his "Philosophy in the Tragic Age of the Greeks".
Only fragments of the original writings of the pre-Socratics survive (many entitled "Peri Physeos", or "On Nature", a title probably attributed later by other authors). The translation of "Peri Physeos" as "On Nature" may be misleading: the "on" normally gives the idea of an "erudite dissertation", while ""peri"" may refer in fact to a "circular approach"; and the traditional meanings of "nature" for us (as opposition to culture, to supernatural, or as essence, substance, opposed to accident, etc.) may be in contrast with the meaning of ""physeos"" or ""physis"" for the Greeks (referring to an "originary source", or "process of emergence and development"). Later philosophers rejected many of the answers the early Greek philosophers provided, but continued to place importance on their questions. Furthermore, the cosmologies proposed by them have been updated by later developments in science. The first pre-Socratic philosophers were from Miletus on the western coast of Anatolia. Thales (c. 624 - c. 546 BC) is reputedly the father of Greek philosophy; he declared water to be the basis of all things. Next came Anaximander (610-546 BC), the first writer on philosophy. He assumed as the first principle an undefined, unlimited substance without qualities ("apeiron"), out of which the primary opposites, hot and cold, moist and dry, became differentiated. His younger contemporary, Anaximenes (585-525 BC), took for his principle air, conceiving it as modified, by thickening and thinning, into fire, wind, clouds, water, and earth. The practical side of philosophy was introduced by Pythagoras (582-496 BC). Regarding the world as perfect harmony, dependent on number, he aimed at inducing humankind likewise to lead a harmonious life. His doctrine was adopted and extended by a large following of Pythagoreans who gathered at his school in south Italy in the town of Croton. His followers included Philolaus (470-380 BC), Alcmaeon of Croton, and Archytas (428-347 BC). 
The Ephesian philosophers were interested in the natural world and the properties by which it is ordered. Xenophanes and Heraclitus were able to push philosophical inquiry further than the Milesian school by examining the nature of philosophical inquiry itself. In addition, they were invested in furthering observations and explanations of natural and physical processes, as well as the functions and processes of human subjective experience. Heraclitus and Xenophanes both shared an interest in analyzing philosophical inquiry as they contemplated morality and religious belief, because they wanted to work out the proper methods of understanding human knowledge and the ways humans fit into the world. This differed from the natural philosophy being done by other thinkers in that it questioned both the operations of the universe and humanity's position within it. Heraclitus posited that all things in nature are in a state of perpetual flux, connected by logical structure or pattern, which he termed "Logos". To Heraclitus, fire, one of the four classical elements, motivates and substantiates this eternal pattern. From fire all things originate, and to it all things return in a process of eternal cycles. The Eleatics emphasized the doctrine of the One. Xenophanes (570-470 BC) declared God to be the eternal unity, permeating the universe, and governing it by his thought. Parmenides (510-440 BC) affirmed the one unchanging existence to be alone true and capable of being conceived, and multitude and change to be an appearance without reality. This doctrine was defended by his younger countryman Zeno of Elea (490-430 BC) in a polemic against the common opinion which sees in things multitude, becoming, and change. Zeno propounded a number of celebrated paradoxes, much debated by later philosophers, which try to show that supposing that there is any change or multiplicity leads to contradictions. Melissus of Samos (born c.
470 BC) was another eminent member of this school. Empedocles appears to have been partly in agreement with the Eleatic School, partly in opposition to it. On the one hand, he maintained the unchangeable nature of substance; on the other, he supposed a plurality of such substances - i.e. the four classical elements, earth, water, air, and fire. Of these the world is built up, by the agency of two ideal motive forces - love as the cause of union, strife as the cause of separation. Anaxagoras (500-428 BC) in Asia Minor also maintained the existence of an ordering principle as well as a material substance, and while regarding the latter as an infinite multitude of imperishable primary elements, he conceived divine reason or Mind ("nous") as ordering them. He referred all generation and disappearance to mixture and resolution respectively. To him belongs the credit of first establishing philosophy at Athens. The first explicitly materialistic system was formed by Leucippus (5th century BC) and his pupil Democritus (460-370 BC) from Thrace. This was the doctrine of atoms - small primary bodies infinite in number, indivisible and imperishable, qualitatively similar, but distinguished by their shapes. Moving eternally through the infinite void, they collide and unite, thus generating objects which differ according to the number, size, shape, and arrangement of the atoms which compose them. Diogenes of Apollonia from Thrace (born c. 460 BC) was an eclectic philosopher who adopted many principles of the Milesian school, especially the single material principle, which he identified as air. He explained natural processes by reference to the rarefactions and condensations of this primary substance. He also adopted Anaxagoras' cosmic thought. The sophists held that all thought rests solely on the apprehensions of the senses and on subjective impression, and that therefore we have no other standards of action than convention for the individual.
Specializing in rhetoric, the sophists were typically seen more as professional educators than philosophers. The sophists traveled extensively, educating people throughout Greece. Unlike philosophical schools, the sophists had no common set of philosophical doctrines that connected them to each other. They did, however, focus on teaching techniques of debate and persuasion which centered on the study of language, semantics, and grammar for use in convincing people of certain viewpoints. They also taught students their own interpretations of the social sciences, mathematics, history, and other subjects. They flourished as a result of a special need at that time for Greek education. Prominent sophists include Protagoras (490-420 BC) from Abdera in Thrace, Gorgias (487-376 BC) from Leontini in Sicily, Hippias (485-415 BC) from Elis in the Peloponnesos, Prodicus (465-390 BC) from the island of Ceos, and Thrasymachus (459-400 BC) from Chalcedon on the Bosphorus. The Seven Sages of Greece or Seven Wise Men (Greek: οἱ ἑπτὰ σοφοί hoi hepta sophoi) was the title given by classical Greek tradition to seven philosophers, statesmen, and law-givers of the 6th century BC who were renowned for their wisdom.
https://en.wikipedia.org/wiki?curid=30340
Torah Torah ("Instruction", "Teaching" or "Law") has a range of meanings. It can most specifically mean the first five books (Pentateuch or five books of Moses) of the 24 books of the Hebrew Bible. This is commonly known as the Written Torah. It can also mean the continued narrative from all the 24 books, from the Book of Genesis to the end of the Tanakh (Chronicles), and it can even mean the totality of Jewish teaching, culture, and practice, whether derived from biblical texts or later rabbinic writings. This is often known as the Oral Torah. Common to all these meanings, Torah consists of the origin of Jewish peoplehood: their call into being by God, their trials and tribulations, and their covenant with their God, which involves following a way of life embodied in a set of moral and religious obligations and civil laws. If in bound book form, it is called "Chumash", and is usually printed with the rabbinic commentaries. If meant for liturgic purposes, it takes the form of a Torah scroll ("sefer Torah"), which contains strictly the five books of Moses. In rabbinic literature the word "Torah" denotes both the five books ("Torah that is written") and the Oral Torah ("Torah that is spoken"). The Oral Torah consists of interpretations and amplifications which according to rabbinic tradition have been handed down from generation to generation and are now embodied in the Talmud and Midrash. Rabbinic tradition's understanding is that all of the teachings found in the Torah (both written and oral) were given by God through the prophet Moses, some at Mount Sinai and others at the Tabernacle, and all the teachings were written down by Moses, which resulted in the Torah that exists today. According to the Midrash, the Torah was created prior to the creation of the world, and was used as the blueprint for Creation. The majority of Biblical scholars believe that the written books were a product of the Babylonian captivity (c.
6th century BCE), based on earlier written sources and oral traditions, and that it was completed with final revisions during the post-Exilic period (c. 5th century BCE). Traditionally, the words of the Torah are written on a scroll by a scribe in Hebrew. A Torah portion is read publicly at least once every three days in the presence of a congregation. Reading the Torah publicly is one of the bases of Jewish communal life. The word "Torah" in Hebrew is derived from the root ירה, which in the "hif'il" conjugation means 'to guide' or 'to teach'. The meaning of the word is therefore "teaching", "doctrine", or "instruction"; the commonly accepted "law" gives a wrong impression. The Alexandrian Jews who translated the Septuagint used the Greek word "nomos", meaning norm, standard, doctrine, and later "law". Greek and Latin Bibles then began the custom of calling the Pentateuch (five books of Moses) The Law. Other translational contexts in the English language include custom, theory, guidance, or system. The term "Torah" is used in the general sense to include both Rabbinic Judaism's written law and Oral Law, serving to encompass the entire spectrum of authoritative Jewish religious teachings throughout history, including the Mishnah, the Talmud, the Midrash and more, and the inaccurate rendering of "Torah" as "Law" may be an obstacle to understanding the ideal that is summed up in the term "talmud torah" (תלמוד תורה, "study of Torah"). The earliest name for the first part of the Bible seems to have been "The Torah of Moses". This title, however, is found neither in the Torah itself, nor in the works of the pre-Exilic literary prophets. It appears in Joshua (8:31–32; 23:6) and Kings (I Kings 2:3; II Kings 14:6; 23:25), but it cannot be said to refer there to the entire corpus (according to academic Bible criticism). In contrast, there is every likelihood that its use in the post-Exilic works (Mal. 3:22; Dan. 9:11, 13; Ezra 3:2; 7:6; Neh. 8:1; II Chron.
23:18; 30:16) was intended to be comprehensive. Other early titles were "The Book of Moses" (Ezra 6:18; Neh. 13:1; II Chron. 35:12; 25:4; cf. II Kings 14:6) and "The Book of the Torah" (Neh. 8:3), which seems to be a contraction of a fuller name, "The Book of the Torah of God" (Neh. 8:8, 18; 10:29–30; cf. 9:3). Christian scholars usually refer to the first five books of the Hebrew Bible as the 'Pentateuch' ("pentáteuchos", 'five scrolls'), a term first used in the Hellenistic Judaism of Alexandria. The Torah begins with God's creation of the world, then moves through the beginnings of the people of Israel, their descent into Egypt, and the giving of the Torah at biblical Mount Sinai. It ends with the death of Moses, just before the people of Israel cross to the promised land of Canaan. Interspersed in the narrative are the specific teachings (religious obligations and civil laws) given explicitly (e.g. the Ten Commandments) or implicitly embedded in the narrative (as in the Exodus 12 and 13 laws of the celebration of Passover). In Hebrew, the five books of the Torah are identified by the incipits in each book, and the common English names for the books are derived from the Greek Septuagint and reflect the essential theme of each book. The Book of Genesis is the first book of the Torah. It is divisible into two parts, the Primeval history (chapters 1–11) and the Ancestral history (chapters 12–50). The primeval history sets out the author's (or authors') concepts of the nature of the deity and of humankind's relationship with its maker: God creates a world which is good and fit for mankind, but when man corrupts it with sin God decides to destroy his creation, saving only the righteous Noah to reestablish the relationship between man and God. The Ancestral history (chapters 12–50) tells of the prehistory of Israel, God's chosen people.
At God's command Noah's descendant Abraham journeys from his home into the God-given land of Canaan, where he dwells as a sojourner, as does his son Isaac and his grandson Jacob. Jacob's name is changed to Israel, and through the agency of his son Joseph, the children of Israel descend into Egypt, 70 people in all with their households, and God promises them a future of greatness. Genesis ends with Israel in Egypt, ready for the coming of Moses and the Exodus. The narrative is punctuated by a series of covenants with God, successively narrowing in scope from all mankind (the covenant with Noah) to a special relationship with one people alone (Abraham and his descendants through Isaac and Jacob). The Book of Exodus is the second book of the Torah, immediately following Genesis. The book tells how the ancient Israelites leave slavery in Egypt through the strength of Yahweh, the god who has chosen Israel as his people. Yahweh inflicts horrific harm on their captors via the legendary Plagues of Egypt. With the prophet Moses as their leader, they journey through the wilderness to biblical Mount Sinai, where Yahweh promises them the land of Canaan (the "Promised Land") in return for their faithfulness. Israel enters into a covenant with Yahweh who gives them their laws and instructions to build the Tabernacle, the means by which he will come from heaven and dwell with them and lead them in a holy war to possess the land, and then give them peace. Traditionally ascribed to Moses himself, modern scholarship sees the book as initially a product of the Babylonian exile (6th century BCE), from earlier written and oral traditions, with final revisions in the Persian post-exilic period (5th century BCE). 
Carol Meyers, in her commentary on Exodus, suggests that it is arguably the most important book in the Bible, as it presents the defining features of Israel's identity: memories of a past marked by hardship and escape, a binding covenant with God, who chooses Israel, and the establishment of the life of the community and the guidelines for sustaining it. The Book of Leviticus begins with instructions to the Israelites on how to use the Tabernacle, which they had just built (Leviticus 1–10). This is followed by rules of clean and unclean (Leviticus 11–15), which includes the laws of slaughter and animals permissible to eat (see also: Kashrut), the Day of Atonement (Leviticus 16), and various moral and ritual laws sometimes called the Holiness Code (Leviticus 17–26). Leviticus 26 provides a detailed list of rewards for following God's commandments and a detailed list of punishments for not following them. Leviticus 17 establishes sacrifices at the Tabernacle as an everlasting ordinance, but this ordinance is altered in later books with the Temple being the only place in which sacrifices are allowed. The Book of Numbers is the fourth book of the Torah. The book has a long and complex history, but its final form is probably due to a Priestly redaction (i.e., editing) of a Yahwistic source made some time in the early Persian period (5th century BCE). The name of the book comes from the two censuses taken of the Israelites. Numbers begins at Mount Sinai, where the Israelites have received their laws and covenant from God and God has taken up residence among them in the sanctuary. The task before them is to take possession of the Promised Land. The people are counted and preparations are made for resuming their march. The Israelites begin the journey, but they "murmur" at the hardships along the way, and about the authority of Moses and Aaron. For these acts, God destroys approximately 15,000 of them through various means.
They arrive at the borders of Canaan and send spies into the land. Upon hearing the spies' fearful report concerning the conditions in Canaan, the Israelites refuse to take possession of it. God condemns them to death in the wilderness until a new generation can grow up and carry out the task. The book ends with the new generation of Israelites in the Plain of Moab ready for the crossing of the Jordan River. Numbers is the culmination of the story of Israel's exodus from oppression in Egypt and their journey to take possession of the land God promised their fathers. As such it draws to a conclusion the themes introduced in Genesis and played out in Exodus and Leviticus: God has promised the Israelites that they shall become a great (i.e. numerous) nation, that they will have a special relationship with Yahweh their god, and that they shall take possession of the land of Canaan. Numbers also demonstrates the importance of holiness, faithfulness and trust: despite God's presence and his priests, Israel lacks faith and the possession of the land is left to a new generation. The Book of Deuteronomy is the fifth book of the Torah. Chapters 1–30 of the book consist of three sermons or speeches delivered to the Israelites by Moses on the plains of Moab, shortly before they enter the Promised Land. The first sermon recounts the forty years of wilderness wanderings which had led to that moment, and ends with an exhortation to observe the law (or teachings), later referred to as the Law of Moses; the second reminds the Israelites of the need to follow Yahweh and the laws (or teachings) he has given them, on which their possession of the land depends; and the third offers the comfort that even should Israel prove unfaithful and so lose the land, with repentance all can be restored. 
The final four chapters (31–34) contain the Song of Moses, the Blessing of Moses, and narratives recounting the passing of the mantle of leadership from Moses to Joshua and, finally, the death of Moses on Mount Nebo. Presented as the words of Moses delivered before the conquest of Canaan, a broad consensus of modern scholars sees its origin in traditions from Israel (the northern kingdom) brought south to the Kingdom of Judah in the wake of the Assyrian conquest of Aram (8th century BCE) and then adapted to a program of nationalist reform in the time of Josiah (late 7th century BCE), with the final form of the modern book emerging in the milieu of the return from the Babylonian captivity during the late 6th century BCE. Many scholars see the book as reflecting the economic needs and social status of the Levite caste, who are believed to have provided its authors; those likely authors are collectively referred to as the Deuteronomist. One of its most significant verses is Deuteronomy 6:4, the Shema Yisrael, which has become the definitive statement of Jewish identity: "Hear, O Israel: the LORD our God, the LORD is one." Verses 6:4–5 were also quoted by Jesus in the Gospels as part of the Great Commandment. The Talmud holds that the Torah was written by Moses, with the exception of the last eight verses of Deuteronomy, which describe his death and burial and were written by Joshua. Alternatively, Rashi quotes from the Talmud that, "God spoke them, and Moses wrote them with tears". The Mishnah includes the divine origin of the Torah as an essential tenet of Judaism. According to Jewish tradition, the Torah was recompiled by Ezra during the Second Temple period. By contrast, the modern scholarly consensus rejects Mosaic authorship, and affirms that the Torah has multiple authors and that its composition took place over centuries. The precise process by which the Torah was composed, the number of authors involved, and the date of each author remain hotly contested, however. 
Throughout most of the 20th century, there was a scholarly consensus surrounding the documentary hypothesis, which posits four independent sources, which were later compiled together by a redactor: J, the Jahwist source, E, the Elohist source, P, the Priestly source, and D, the Deuteronomist source. The earliest of these sources, J, would have been composed in the late 7th or the 6th century BCE, with the latest source, P, being composed around the 5th century BCE. The consensus around the documentary hypothesis collapsed in the last decades of the 20th century. The groundwork was laid with the investigation of the origins of the written sources in oral compositions, implying that the creators of J and E were collectors and editors and not authors and historians. Rolf Rendtorff, building on this insight, argued that the basis of the Pentateuch lay in short, independent narratives, gradually formed into larger units and brought together in two editorial phases, the first Deuteronomic, the second Priestly. By contrast, John Van Seters advocates a supplementary hypothesis, which posits that the Torah was derived from a series of direct additions to an existing corpus of work. A "neo-documentarian" hypothesis, which responds to the criticism of the original hypothesis and updates the methodology used to determine which text comes from which sources, has been advocated by biblical historian Joel S. Baden, among others. Such a hypothesis continues to have adherents in Israel and North America. The majority of scholars today continue to recognize Deuteronomy as a source, with its origin in the law-code produced at the court of Josiah as described by De Wette, subsequently given a frame during the exile (the speeches and descriptions at the front and back of the code) to identify it as the words of Moses. Most scholars also agree that some form of Priestly source existed, although its extent, especially its end-point, is uncertain. 
The remainder is called collectively non-Priestly, a grouping which includes both pre-Priestly and post-Priestly material. The final Torah is widely seen as a product of the Persian period (539–333 BCE, probably 450–350 BCE). This consensus echoes a traditional Jewish view which gives Ezra, the leader of the Jewish community on its return from Babylon, a pivotal role in its promulgation. Many theories have been advanced to explain the composition of the Torah, but two have been especially influential. The first of these, Persian Imperial authorisation, advanced by Peter Frei in 1985, holds that the Persian authorities required the Jews of Jerusalem to present a single body of law as the price of local autonomy. Frei's theory was demolished at an interdisciplinary symposium held in 2000, but the relationship between the Persian authorities and Jerusalem remains a crucial question. The second theory, associated with Joel P. Weinberg and called the "Citizen-Temple Community", proposes that the Exodus story was composed to serve the needs of a post-exilic Jewish community organised around the Temple, which acted in effect as a bank for those who belonged to it. A minority of scholars would place the final formation of the Pentateuch somewhat later, in the Hellenistic (333–164 BCE) or even Hasmonean (140–37 BCE) periods. Russell Gmirkin, for instance, argues for a Hellenistic dating on the basis that the Elephantine papyri, the records of a Jewish colony in Egypt dating from the last quarter of the 5th century BCE, make no reference to a written Torah, the Exodus, or to any other biblical event. Rabbinic writings state that the Oral Torah was given to Moses at Mount Sinai, which, according to the tradition of Orthodox Judaism, occurred in 1312 BCE. 
The Orthodox rabbinic tradition holds that the Written Torah was recorded during the following forty years, though many non-Orthodox Jewish scholars affirm the modern scholarly consensus that the Written Torah has multiple authors and was written over centuries. The Talmud (Gittin 60a) presents two opinions as to how exactly the Torah was written down by Moses. One opinion holds that it was written by Moses gradually as it was dictated to him, and that he finished it close to his death; the other opinion holds that Moses wrote the complete Torah in one writing close to his death, based on what was dictated to him over the years. The Talmud (Menachot 30a) says that the last eight verses of the Torah, which discuss the death and burial of Moses, could not have been written by Moses, as writing them would have been a lie, and that they were written after his death by Joshua. Abraham ibn Ezra and Joseph Bonfils observed that phrases in those verses present information that people could only have known after the time of Moses. Ibn Ezra hinted, and Bonfils explicitly stated, that Joshua wrote these verses many years after the death of Moses. Other commentators do not accept this position and maintain that although Moses did not write those eight verses, they were nonetheless dictated to him and that Joshua wrote them based on instructions left by Moses, and that the Torah often describes future events, some of which have yet to occur. All classical rabbinic views hold that the Torah was entirely Mosaic and of divine origin. Present-day Reform and Liberal Jewish movements all reject Mosaic authorship, as do most shades of Conservative Judaism. According to "Legends of the Jews", God gave the Torah to the children of Israel only after he had approached every other tribe and nation in the world and offered them the Torah, but they refused it, so that they would have no excuse to be ignorant of it. 
In this book, the Torah is defined as one of the first things created, as a remedy against the evil inclination, and as the counselor who advised God to create humans in the creation of the world in order to make him the honored One. Torah reading is a Jewish religious ritual that involves the public reading of a set of passages from a Torah scroll. The term often refers to the entire ceremony of removing the Torah scroll (or scrolls) from the ark, chanting the appropriate excerpt with traditional cantillation, and returning the scroll(s) to the ark. It is distinct from academic Torah study. Regular public reading of the Torah was introduced by Ezra the Scribe after the return of the Jewish people from the Babylonian captivity (c. 537 BCE), as described in the Book of Nehemiah. In the modern era, adherents of Orthodox Judaism practice Torah reading according to a set procedure they believe has remained unchanged in the two thousand years since the destruction of the Temple in Jerusalem (70 CE). In the 19th and 20th centuries CE, new movements such as Reform Judaism and Conservative Judaism have made adaptations to the practice of Torah reading, but the basic pattern of Torah reading has usually remained the same: As a part of the morning prayer services on certain days of the week, fast days, and holidays, as well as part of the afternoon prayer services of Shabbat, Yom Kippur, and fast days, a section of the Pentateuch is read from a Torah scroll. On Shabbat (Saturday) mornings, a weekly section ("parashah") is read, selected so that the entire Pentateuch is read consecutively each year. The division of "parashot" found in the modern-day Torah scrolls of all Jewish communities (Ashkenazic, Sephardic, and Yemenite) is based upon the systematic list provided by Maimonides in Mishneh Torah, "Laws of Tefillin, Mezuzah and Torah Scrolls", chapter 8. Maimonides based his division of the "parashot" for the Torah on the Aleppo Codex. 
Conservative and Reform synagogues may read "parashot" on a triennial rather than annual schedule. On Saturday afternoons, Mondays, and Thursdays, the beginning of the following Saturday's portion is read. On Jewish holidays, the beginnings of each month, and fast days, special sections connected to the day are read. Jews observe an annual holiday, Simchat Torah, to celebrate the completion and new start of the year's cycle of readings. Torah scrolls are often dressed with a sash, a special Torah cover, various ornaments and a Keter (crown), although such customs vary among synagogues. Congregants traditionally stand in respect when the Torah is brought out of the ark to be read, while it is being carried, and lifted, and likewise while it is returned to the ark, although they may sit during the reading itself. The Torah contains narratives, statements of law, and statements of ethics. Collectively these laws, usually called biblical law or commandments, are sometimes referred to as the Law of Moses ("Torat Moshe"), Mosaic Law, or Sinaitic Law. Rabbinic tradition holds that Moses learned the whole Torah while he lived on Mount Sinai for 40 days and nights, and that both the Oral and the Written Torah were transmitted in parallel with each other. Where the Torah leaves words and concepts undefined, and mentions procedures without explanation or instructions, the reader is required to seek out the missing details from supplemental sources known as the Oral Law or Oral Torah. Some of the Torah's most prominent commandments need further explanation in this way. According to classical rabbinic texts this parallel set of material was originally transmitted to Moses at Sinai, and then from Moses to Israel. At that time it was forbidden to write and publish the oral law, as any writing would be incomplete and subject to misinterpretation and abuse. 
However, after exile, dispersion, and persecution, this tradition was lifted when it became apparent that writing was the only way to ensure that the Oral Law could be preserved. After many years of effort by a great number of tannaim, the oral tradition was written down around 200 CE by Rabbi Judah haNasi, who took up the compilation of a nominally written version of the Oral Law, the Mishnah (Hebrew: משנה). Other oral traditions from the same time period not entered into the Mishnah were recorded as "Baraitot" (external teaching), and the Tosefta. Other traditions were written down as Midrashim. After continued persecution, more of the Oral Law was committed to writing. A great many more lessons, lectures, and traditions only alluded to in the few hundred pages of the Mishnah became the thousands of pages now called the "Gemara". The Gemara is written in Aramaic, having been compiled in Babylon. The Mishnah and Gemara together are called the Talmud. The rabbis in the Land of Israel also collected their traditions and compiled them into the Jerusalem Talmud. Since the greater number of rabbis lived in Babylon, the Babylonian Talmud has precedence should the two be in conflict. Orthodox and Conservative branches of Judaism accept these texts as the basis for all subsequent "halakha" and codes of Jewish law, which are held to be normative. Reform and Reconstructionist Judaism deny that these texts, or the Torah itself for that matter, may be used for determining normative law (laws accepted as binding) but accept them as the authentic and only Jewish version for understanding the Torah and its development throughout history. Humanistic Judaism holds that the Torah is a historical, political, and sociological text, but does not believe that every word of the Torah is true, or even morally correct. 
Humanistic Judaism is willing to question the Torah and to disagree with it, believing that the entire Jewish experience, not just the Torah, should be the source for Jewish behavior and ethics. Kabbalists hold that not only do the words of Torah give a divine message, but they also indicate a far greater message that extends beyond them. Thus they hold that even as small a mark as a "kotso shel yod" (קוצו של יוד), the serif of the Hebrew letter "yod" (י), the smallest letter, or decorative markings, or repeated words, were put there by God to teach scores of lessons. This is regardless of whether that yod appears in the phrase "I am the LORD thy God" (Exodus 20:2) or whether it appears in "And God spoke unto Moses saying" (Exodus 6:2). In a similar vein, Rabbi Akiva (c. 50 – c. 135 CE) is said to have learned a new law from every "et" (את) in the Torah (Talmud, tractate Pesachim 22b); the particle "et" is meaningless by itself, and serves only to mark the direct object. In other words, the Orthodox belief is that even apparently contextual text such as "And God spoke unto Moses saying ..." is no less holy and sacred than the actual statement. Manuscript Torah scrolls are still scribed and used for ritual purposes (i.e., religious services); this is called a "Sefer Torah" ("Book [of] Torah"). They are written using a painstakingly careful method by highly qualified scribes. It is believed that every word, or marking, has divine meaning, and that not one part may be inadvertently changed lest it lead to error. The fidelity of the Hebrew text of the Tanakh, and the Torah in particular, is considered paramount, down to the last letter: translations or transcriptions are frowned upon for formal service use, and transcribing is done with painstaking care. 
An error of a single letter, ornamentation, or symbol of the 304,805 stylized letters that make up the Hebrew Torah text renders a Torah scroll unfit for use; hence a special skill is required, and a scroll takes considerable time to write and check. According to Jewish law, a "sefer Torah" (plural: "Sifrei Torah") is a copy of the formal Hebrew text handwritten on "gevil" or "klaf" (forms of parchment) by using a quill (or other permitted writing utensil) dipped in ink. Written entirely in Hebrew, a "sefer Torah" contains 304,805 letters, all of which must be duplicated precisely by a trained "sofer" ("scribe"), an effort that may take approximately one and a half years. Most modern Sifrei Torah are written with forty-two lines of text per column (Yemenite Jews use fifty), and very strict rules about the position and appearance of the Hebrew letters are observed. See, for example, the Mishnah Berurah on the subject. Any of several Hebrew scripts may be used, most of which are fairly ornate and exacting. The completion of the sefer Torah is a cause for great celebration, and it is a mitzvah for every Jew to either write or have written for him a Sefer Torah. Torah scrolls are stored in the holiest part of the synagogue in the Ark known as the "Holy Ark" ("aron hakodesh" in Hebrew). "Aron" in Hebrew means "cupboard" or "closet", and "kodesh" is derived from "kadosh", or "holy". The Book of Ezra refers to translations and commentaries of the Hebrew text into Aramaic, the more commonly understood language of the time. These translations would seem to date to the 6th century BCE. The Aramaic term for "translation" is "Targum". The Encyclopedia Judaica states: At an early period, it was customary to translate the Hebrew text into the vernacular at the time of the reading (e.g., in Palestine and Babylon the translation was into Aramaic). The targum ("translation") was done by a special synagogue official, called the meturgeman ... 
Eventually, the practice of translating into the vernacular was discontinued. One of the earliest known translations of the first five books of Moses from the Hebrew into Greek was the Septuagint. This is a Koine Greek version of the Hebrew Bible that was used by Greek speakers. This Greek version of the Hebrew Scriptures dates from the 3rd century BCE, originally associated with Hellenistic Judaism. It contains both a translation of the Hebrew and additional and variant material. Later translations into Greek include seven or more other versions. These do not survive, except as fragments, and include those by Aquila, Symmachus, and Theodotion. Early translations into Latin (the Vetus Latina) were ad hoc conversions of parts of the Septuagint. With Saint Jerome in the 4th century CE came the Vulgate Latin translation of the Hebrew Bible. From the eighth century CE, the cultural language of Jews living under Islamic rule became Arabic rather than Aramaic. "Around that time, both scholars and lay people started producing translations of the Bible into Judeo-Arabic using the Hebrew alphabet." Later, by the 10th century, it became essential to have a standard version of the Bible in Judeo-Arabic. The best known was produced by Saadiah (the Saadia Gaon, also known as the Rasag), and it continues to be in use today, "in particular among Yemenite Jewry". Rav Sa'adia produced an Arabic translation of the Torah known as "Targum Tafsir" and offered comments on Rasag's work. There is a debate in scholarship over whether Rasag wrote the first Arabic translation of the Torah. The Torah has been translated by Jewish scholars into most of the major European languages, including English, German, Russian, French, Spanish and others. The best-known German-language translation was produced by Samson Raphael Hirsch. 
A number of Jewish English Bible translations have been published, for example by ArtScroll publications. As a part of the Christian biblical canons, the Torah has been translated into hundreds of languages. Although different Christian denominations have slightly different versions of the Old Testament in their Bibles, the Torah as the "Five Books of Moses" (or "the Mosaic Law") is common among them all. Islam states that the original Torah was sent by God. According to the Quran, Allah says, "It is He Who has sent down the Book (the Quran) to you with truth, confirming what came before it. And He sent down the Taurat (Torah) and the Injeel (Gospel)." [3:3] Muslims call the Torah the "Tawrat" and consider it the word of God given to Moses. However, some Muslims also believe that this original revelation was corrupted ("tahrif") over time by Jewish scribes, or simply altered by the passage of time and human fallibility. The Torah is always mentioned with respect in the Quran. The Muslims' belief in the Torah, as well as the prophethood of Moses, is one of the fundamental tenets of Islam. The Bahá'í position on the Torah was composed in 1906 by its official interpreter on all matters religious, `Abdu'l-Bahá,
https://en.wikipedia.org/wiki?curid=30343
Hebrew Bible

The Hebrew Bible, which is also called the Tanakh (also "Tenakh", "Tenak", or "Tanach"), or sometimes the Miqra (מִקְרָא), is the canonical collection of Hebrew scriptures, including the Torah. These texts are almost exclusively in Biblical Hebrew, with a few passages in Biblical Aramaic instead (in the books of Daniel and Ezra, a verse in Jeremiah, and some single words). The Hebrew Bible is also the textual source for the Christian Old Testament. The form of this text that is authoritative for Rabbinic Judaism is known as the Masoretic Text (MT) and it consists of 24 books, while Protestant translations divide essentially the same material into 39 books. Modern scholars seeking to understand the history of the Hebrew Bible use a range of sources, in addition to the Masoretic Text. These sources include early Greek (Septuagint) and Syriac (Peshitta) translations, the Samaritan Pentateuch, the Dead Sea Scrolls and quotations from rabbinic manuscripts. Many of these sources may be older than the Masoretic Text and often differ from it. These differences have given rise to the theory that yet another text, an Urtext of the Hebrew Bible, once existed and is the source of the versions extant today. However, such an Urtext has never been found, and which of the three commonly known versions (Septuagint, Masoretic Text, Samaritan Pentateuch) is closest to the Urtext is not fully determined. "Tanakh" is an acronym of the first Hebrew letter of each of the Masoretic Text's three traditional divisions: Torah ("Teaching", also known as the Five Books of Moses), Nevi'im ("Prophets"), and Ketuvim ("Writings"); hence TaNaKh. (On the "a"s of the word, see abjad.) Central to Judaism is that the books of the Tanakh are passed from generation to generation, "l'dor v'dor" in the Hebrew phrase. According to rabbinic tradition, they were accompanied by an oral tradition, called the Oral Torah. 
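The 24-book and 39-book counts cover essentially the same material; the difference is purely in how several combined books are divided. A quick arithmetic sketch (illustrative only; the split counts reflect the standard Protestant divisions of Samuel, Kings, Chronicles, Ezra–Nehemiah, and the Twelve Minor Prophets):

```python
# Tanakh books that Protestant Old Testaments split apart, and the number
# of books each one becomes after the split:
split_counts = {
    "Samuel": 2,                  # 1 and 2 Samuel
    "Kings": 2,                   # 1 and 2 Kings
    "Chronicles": 2,              # 1 and 2 Chronicles
    "Ezra-Nehemiah": 2,           # Ezra and Nehemiah
    "Twelve Minor Prophets": 12,  # Hosea through Malachi, counted separately
}

tanakh_total = 24
# Remove the 5 combined books, then add back the books they split into.
protestant_total = tanakh_total - len(split_counts) + sum(split_counts.values())
print(protestant_total)  # 39
```

The arithmetic shows why no material is added or removed between the two canons: 24 − 5 + 20 = 39.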
The three-part division reflected in the acronym "Tanakh" is well attested in the literature of the Rabbinic period. During that period, however, "Tanakh" was not used. Instead, the proper title was "Mikra" (or "Miqra", מקרא, meaning "reading" or "that which is read") because the biblical texts were read publicly. The acronym "Tanakh" is first recorded in the medieval era. "Mikra" continues to be used in Hebrew to this day, alongside Tanakh, to refer to the Hebrew scriptures. In modern spoken Hebrew, they are interchangeable. Many biblical studies scholars advocate use of the term "Hebrew Bible" (or "Hebrew Scriptures") as a substitute for less-neutral terms with Jewish or Christian connotations (e.g. "Tanakh" or Old Testament). The Society of Biblical Literature's "Handbook of Style", which is the standard for major academic journals like the "Harvard Theological Review" and conservative Protestant journals like the "Bibliotheca Sacra" and the "Westminster Theological Journal", suggests that authors "be aware of the connotations of alternative expressions such as ... Hebrew Bible [and] Old Testament" without prescribing the use of either. Alister McGrath points out that while the term emphasizes that it is largely written in Hebrew and "is sacred to the Hebrew people", it "fails to do justice to the way in which Christianity sees an essential continuity between the Old and New Testaments", arguing that there is "no generally accepted alternative to the traditional term 'Old Testament.'" However, he accepts that there is no reason why non-Christians should feel obliged to refer to these books as the Old Testament, "apart from custom of use." Christianity has long asserted a close relationship between the Hebrew Bible and New Testament, although there have sometimes been movements like Marcionism (viewed as heretical by the early church) that have struggled with it. 
Modern Christian formulations of this tension include supersessionism, covenant theology, new covenant theology, dispensationalism and dual-covenant theology. All of these formulations, except some forms of dual-covenant theology, are objectionable to mainstream Judaism and to many Jewish scholars and writers, for whom there is one eternal covenant between God and the Israelites, and who therefore reject the term "Old Testament" as a form of antinomianism. Christian usage of "Old Testament" does not refer to a universally agreed upon set of books but, rather, varies depending on denomination. Lutheranism and Protestant denominations that follow the Westminster Confession of Faith accept the entire Jewish canon as the Old Testament without additions, although in translation they sometimes give preference to the Septuagint (LXX) rather than the Masoretic Text. "Hebrew" refers to the original language of the books, but it may also be taken as referring to the Jews of the Second Temple era and their descendants, who preserved the transmission of the Masoretic Text up to the present day. The Hebrew Bible includes small portions in Aramaic (mostly in the books of Daniel and Ezra), written and printed in Aramaic square-script, which was adopted as the Hebrew alphabet after the Babylonian exile. There is no scholarly consensus as to when the Hebrew Bible canon was fixed: some scholars argue that it was fixed by the Hasmonean dynasty, while others argue it was not fixed until the second century CE or even later. According to Louis Ginzberg's "Legends of the Jews", the twenty-four book canon of the Hebrew Bible was fixed by Ezra and the scribes in the Second Temple period. According to the Talmud, much of the Tanakh was compiled by the men of the Great Assembly ("Anshei K'nesset HaGedolah"), a task completed in 450 BCE, and it has remained unchanged ever since. 
The twenty-four book canon is mentioned in the Midrash Koheleth 12:12: "Whoever brings together in his house more than twenty four books brings confusion". The original writing system of the Hebrew text was an abjad: consonants written with some applied vowel letters ("matres lectionis"). During the early Middle Ages, scholars known as the Masoretes created a single formalized system of vocalization. This was chiefly done by Aaron ben Moses ben Asher, in the Tiberias school, based on the oral tradition for reading the Tanakh, hence the name Tiberian vocalization. It also included some innovations of Ben Naftali and the Babylonian exiles. Despite the comparatively late process of codification, some traditional sources and some Orthodox Jews hold the pronunciation and cantillation to derive from the revelation at Sinai, since it is impossible to read the original text without pronunciations and cantillation pauses. The combination of a text ("mikra"), pronunciation ("niqqud") and cantillation ("te`amim") enables the reader to understand both the simple meaning and the nuances in sentence flow of the text. The number of distinct words in the Hebrew Bible is 8,679, of which 1,480 are hapax legomena. The number of distinct roots, on which many of these Biblical words are based, is roughly 2,000. The Tanakh consists of twenty-four books: it counts as one book each Samuel, Kings, Chronicles and Ezra–Nehemiah, and counts the Twelve Minor Prophets as a single book. In Hebrew, the books are often referred to by their prominent first word(s). The Torah (תּוֹרָה, literally "teaching"), also known as the Pentateuch, is also called the "Five Books of Moses". Printed versions (rather than scrolls) of the Torah are often called "Chamisha Chumshei Torah" ("Five fifth-sections of the Torah") and informally a "Chumash". "Nevi'im" ("Prophets") is the second main division of the Tanakh, between the Torah and Ketuvim. It contains three sub-groups. 
This division includes the books which cover the time from the entrance of the Israelites into the Land of Israel until the Babylonian captivity of Judah (the "period of prophecy"). Their distribution is not chronological, but substantive. The three sub-groups are the Former Prophets, the Latter Prophets, and the Twelve Minor Prophets ("Trei Asar", "The Twelve"), which are considered one book. "Ketuvim" ("Writings") consists of eleven books. They are divided into three subgroups based on the distinctiveness of "Sifrei Emet" and "Hamesh Megillot": the three poetic books ("Sifrei Emet"); the Five Megillot ("Hamesh Megillot"), which are read aloud in the synagogue on particular occasions; and the other books. The Jewish textual tradition never finalized the order of the books in Ketuvim. The Babylonian Talmud (Bava Batra 14b–15a) gives their order as Ruth, Psalms, Job, Proverbs, Ecclesiastes, Song of Solomon, Lamentations of Jeremiah, Daniel, Scroll of Esther, Ezra, Chronicles. In Tiberian Masoretic codices, including the Aleppo Codex and the Leningrad Codex, and often in old Spanish manuscripts as well, the order is Chronicles, Psalms, Job, Proverbs, Ruth, Song of Solomon, Ecclesiastes, Lamentations of Jeremiah, Esther, Daniel, Ezra. In Masoretic manuscripts (and some printed editions), Psalms, Proverbs and Job are presented in a special two-column form emphasizing the parallel stichs in the verses, which are a function of their poetry. Collectively, these three books are known as "Sifrei Emet" (an acronym of the titles in Hebrew, איוב, משלי, תהלים, yields "Emet" אמ"ת, which is also the Hebrew for "truth"). These three books are also the only ones in Tanakh with a special system of cantillation notes that are designed to emphasize parallel stichs within verses. However, the beginning and end of the book of Job are in the normal prose system. 
The five relatively short books of the Song of Songs, the Book of Ruth, the Book of Lamentations, Ecclesiastes and the Book of Esther are collectively known as the "Hamesh Megillot" (Five Megillot). These are the latest books collected and designated as "authoritative" in the Jewish canon, with the latest parts having dates ranging into the 2nd century BCE. These scrolls are traditionally read over the course of the year in many Jewish communities. Besides the three poetic books and the five scrolls, the remaining books in Ketuvim are Daniel, Ezra–Nehemiah and Chronicles. Although there is no formal grouping for these books in the Jewish tradition, they nevertheless share a number of distinguishing characteristics. Nach refers to the Neviim and Ketuvim portions of Tanakh. "Nach" is often referred to as being its own subject, separate from Torah. It is a major subject in the curriculum of Orthodox high schools for girls and in the seminaries which they subsequently attend, and is often taught by different teachers than those who teach Chumash. In addition to Rashi, the major commentary taught for Chumash, Metzudot is also taught for Nach. In Orthodox high schools for boys, by contrast, the curriculum does not include much of Nach, except, to a limited extent, "Joshua" and "Judges", plus the Five Megillot. There are two major approaches towards study of, and commentary on, the Tanakh. In the Jewish community, the classical approach is religious study of the Bible, where it is assumed that the Bible is divinely inspired. Another approach is to study the Bible as a human creation. In this approach, Biblical studies can be considered as a sub-field of religious studies. The latter practice, when applied to the Torah, is considered heresy by the Orthodox Jewish community. As such, much modern day Bible commentary written by non-Orthodox authors is considered forbidden by rabbis teaching in Orthodox yeshivas. 
Some classical rabbinic commentators, such as Abraham Ibn Ezra, Gersonides, and Maimonides, used many elements of contemporary biblical criticism, including their knowledge of history, science, and philology. Their use of historical and scientific analysis of the Bible was considered acceptable by historic Judaism due to the authors' faith commitment to the idea that God revealed the Torah to Moses on Mount Sinai. The Modern Orthodox Jewish community allows for a wider array of biblical criticism to be used for biblical books outside of the Torah, and a few Orthodox commentaries now incorporate many of the techniques previously found in the academic world, e.g. the Da'at Miqra series. Non-Orthodox Jews, including those affiliated with Conservative Judaism and Reform Judaism, accept both traditional and secular approaches to Bible studies. The article "Jewish commentaries on the Bible" discusses Jewish Tanakh commentaries from the Targums to classical rabbinic literature, the midrash literature, the classical medieval commentators, and modern-day commentaries.
Talmud The Talmud (; ) is the central text of Rabbinic Judaism and the primary source of Jewish religious law ("halakha") and Jewish theology. Until the advent of modernity, in nearly all Jewish communities, the Talmud was the centerpiece of Jewish cultural life and was foundational to "all Jewish thought and aspirations", serving also as "the guide for the daily life" of Jews. The term "Talmud" normally refers to the collection of writings named specifically the Babylonian Talmud (), although there is also an earlier collection known as the Jerusalem Talmud (). It may also traditionally be called (), a Hebrew abbreviation of ", or the "six orders" of the Mishnah. The Talmud has two components: the Mishnah (, 200), a written compendium of Rabbinic Judaism's Oral Torah; and the Gemara (, 500), an elucidation of the Mishnah and related Tannaitic writings that often ventures onto other subjects and expounds broadly on the Hebrew Bible. The term "Talmud" may refer to either the Gemara alone, or the Mishnah and Gemara together. The entire Talmud consists of 63 tractates, and in the standard print, called the Vilna Shas, it is 2,711 double-sided folios. It is written in Mishnaic Hebrew and Jewish Babylonian Aramaic and contains the teachings and opinions of thousands of rabbis (dating from before the Common Era through to the fifth century) on a variety of subjects, including halakha, Jewish ethics, philosophy, customs, history, and folklore, among other topics. The Talmud is the basis for all codes of Jewish law, and is widely quoted in rabbinic literature. Talmud translates as "instruction, learning", from the Semitic root ", meaning "teach, study". Originally, Jewish scholarship was oral and transferred from one generation to the next. 
Rabbis expounded and debated the Torah (the written Torah expressed in the Hebrew Bible) and discussed the Tanakh without the benefit of written works (other than the Biblical books themselves), though some may have made private notes (""), for example, of court decisions. This situation changed drastically, mainly as the result of the destruction of the Jewish commonwealth and the Second Temple in the year 70 and the consequent upheaval of Jewish social and legal norms. As the rabbis were required to face a new reality—mainly Judaism without a Temple (to serve as the center of teaching and study) and Judea, the Roman province, without at least partial autonomy—there was a flurry of legal discourse and the old system of oral scholarship could not be maintained. It is during this period that rabbinic discourse began to be recorded in writing. The oldest full manuscript of the Talmud, known as the Munich Talmud (Codex Hebraicus 95), dates from 1342 and is available online. The process of "Gemara" proceeded in what were then the two major centers of Jewish scholarship, Galilee and Babylonia. Correspondingly, two bodies of analysis developed, and two works of Talmud were created. The older compilation is called the Jerusalem Talmud or the . It was compiled in the 4th century in Galilee. The Babylonian Talmud was compiled about the year 500, although it continued to be edited later. The word "Talmud", when used without qualification, usually refers to the Babylonian Talmud. While the editors of Jerusalem Talmud and Babylonian Talmud each mention the other community, most scholars believe these documents were written independently; Louis Jacobs writes, "If the editors of either had had access to an actual text of the other, it is inconceivable that they would not have mentioned this. Here the argument from silence is very convincing." 
The Jerusalem Talmud, also known as the Palestinian Talmud, or (Talmud of the Land of Israel), was one of the two compilations of Jewish religious teachings and commentary that were transmitted orally for centuries prior to their compilation by Jewish scholars in the Land of Israel. It is a compilation of teachings of the schools of Tiberias, Sepphoris, and Caesarea. It is written largely in Jewish Palestinian Aramaic, a Western Aramaic language that differs from its Babylonian counterpart. This Talmud is a synopsis of the analysis of the Mishnah that was developed over the course of nearly 200 years by the Academies in Galilee (principally those of Tiberias and Caesarea). Because of their location, the sages of these Academies devoted considerable attention to analysis of the agricultural laws of the Land of Israel. Traditionally, this Talmud was thought to have been redacted in about the year 350 by Rav Muna and Rav Yossi in the Land of Israel. It is traditionally known as the "Talmud Yerushalmi" ("Jerusalem Talmud"), but the name is a misnomer, as it was not prepared in Jerusalem. It has more accurately been called "The Talmud of the Land of Israel". Its final redaction probably belongs to the end of the 4th century, but the individual scholars who brought it to its present form cannot be fixed with assurance. By this time Christianity had become the state religion of the Roman Empire and Jerusalem the holy city of Christendom. In 325, Constantine the Great, the first Christian emperor, said "let us then have nothing in common with the detestable Jewish crowd." This policy made a Jew an outcast and pauper. The compilers of the Jerusalem Talmud consequently lacked the time to produce a work of the quality they had intended. The text is evidently incomplete and is not easy to follow. 
The apparent cessation of work on the Jerusalem Talmud in the 5th century has been associated with the decision of Theodosius II in 425 to suppress the Patriarchate and put an end to the practice of semikhah, formal scholarly ordination. Some modern scholars have questioned this connection. Despite its incomplete state, the Jerusalem Talmud remains an indispensable source of knowledge of the development of the Jewish Law in the Holy Land. It was also an important resource in the study of the Babylonian Talmud by the Kairouan school of Chananel ben Chushiel and Nissim ben Jacob, with the result that opinions ultimately based on the Jerusalem Talmud found their way into both the Tosafot and the Mishneh Torah of Maimonides. Following the formation of the modern state of Israel there is some interest in restoring "Eretz Yisrael" traditions. For example, rabbi David Bar-Hayim of the "Makhon Shilo" institute has issued a siddur reflecting "Eretz Yisrael" practice as found in the Jerusalem Talmud and other sources. The Babylonian Talmud ("Talmud Bavli") consists of documents compiled over the period of late antiquity (3rd to 6th centuries). During this time, the most important of the Jewish centres in Mesopotamia, a region called "Babylonia" in Jewish sources and later known as Iraq, were Nehardea, Nisibis (modern Nusaybin), Mahoza (al-Mada'in, just to the south of what is now Baghdad), Pumbedita (near present-day al Anbar Governorate), and the Sura Academy, probably located about south of Baghdad. The Babylonian Talmud comprises the Mishnah and the Babylonian Gemara, the latter representing the culmination of more than 300 years of analysis of the Mishnah in the Talmudic Academies in Babylonia. The foundations of this process of analysis were laid by Abba Arika (175–247), a disciple of Judah ha-Nasi. Tradition ascribes the compilation of the Babylonian Talmud in its present form to two Babylonian sages, Rav Ashi and Ravina II. 
Rav Ashi was president of the Sura Academy from 375–427. The work begun by Rav Ashi was completed by Ravina, who is traditionally regarded as the final Amoraic expounder. Accordingly, traditionalists argue that Ravina's death in 475 is the latest possible date for the completion of the redaction of the Talmud. However, even on the most traditional view a few passages are regarded as the work of a group of rabbis who edited the Talmud after the end of the Amoraic period, known as the "Savoraim" or "Rabbanan Savora'e" (meaning "reasoners" or "considerers"). There are significant differences between the two Talmud compilations. The language of the Jerusalem Talmud is a western Aramaic dialect, which differs from the form of Aramaic in the Babylonian Talmud. The Talmud Yerushalmi is often fragmentary and difficult to read, even for experienced Talmudists. The redaction of the Talmud Bavli, on the other hand, is more careful and precise. The law as laid down in the two compilations is basically similar, except in emphasis and in minor details. The Jerusalem Talmud has not received much attention from commentators, and such traditional commentaries as exist are mostly concerned with comparing its teachings to those of the Talmud Bavli. Neither the Jerusalem nor the Babylonian Talmud covers the entire Mishnah: for example, a Babylonian Gemara exists only for 37 out of the 63 tractates of the Mishnah. In particular: The Babylonian Talmud records the opinions of the rabbis of the "Ma'arava" (the West, meaning Israel/Palestine) as well as of those of Babylonia, while the Jerusalem Talmud only seldom cites the Babylonian rabbis. The Babylonian version also contains the opinions of more generations because of its later date of completion. For both these reasons it is regarded as a more comprehensive collection of the opinions available. 
On the other hand, because of the centuries of redaction between the composition of the Jerusalem and the Babylonian Talmud, the opinions of early "amoraim" might be closer to their original form in the Jerusalem Talmud. The influence of the Babylonian Talmud has been far greater than that of the "Yerushalmi". In the main, this is because the influence and prestige of the Jewish community of Israel steadily declined in contrast with the Babylonian community in the years after the redaction of the Talmud and continuing until the Gaonic era. Furthermore, the editing of the Babylonian Talmud was superior to that of the Jerusalem version, making it more accessible and readily usable. According to Maimonides (whose life began almost a hundred years after the end of the Gaonic era), all Jewish communities during the Gaonic era formally accepted the Babylonian Talmud as binding upon themselves, and modern Jewish practice follows the Babylonian Talmud's conclusions on all areas in which the two Talmuds conflict. The structure of the Talmud follows that of the Mishnah, in which six orders ("sedarim"; singular: "seder") of general subject matter are divided into 60 or 63 tractates ("masekhtot"; singular: "masekhet") of more focused subject compilations, though not all tractates have Gemara. Each tractate is divided into chapters ("perakim"; singular: "perek"), 517 in total, that are both numbered according to the Hebrew alphabet and given names, usually using the first one or two words in the first mishnah. A "perek" may continue over several (up to tens of) pages. Each "perek" will contain several "mishnayot". The Mishnah is a compilation of legal opinions and debates. Statements in the Mishnah are typically terse, recording brief opinions of the rabbis debating a subject; or recording only an unattributed ruling, apparently representing a consensus view. The rabbis recorded in the Mishnah are known as the Tannaim. 
Since it sequences its laws by subject matter instead of by biblical context, the Mishnah discusses individual subjects more thoroughly than the Midrash, and it includes a much broader selection of halakhic subjects than the Midrash. The Mishnah's topical organization thus became the framework of the Talmud as a whole. But not every tractate in the Mishnah has a corresponding Gemara. Also, the order of the tractates in the Talmud differs in some cases from that in the "Mishnah". In addition to the Mishnah, other tannaitic teachings were current at about the same time or shortly thereafter. The Gemara frequently refers to these tannaitic statements in order to compare them to those contained in the Mishnah and to support or refute the propositions of the Amoraim. The "baraitot" cited in the Gemara are often quotations from the Tosefta (a tannaitic compendium of halakha parallel to the Mishnah) and the Midrash halakha (specifically Mekhilta, Sifra and Sifre). Some "baraitot", however, are known only through traditions cited in the Gemara, and are not part of any other collection. In the three centuries following the redaction of the Mishnah, rabbis in Palestine and Babylonia analyzed, debated, and discussed that work. These discussions form the Gemara. The Gemara mainly focuses on elucidating and elaborating the opinions of the Tannaim. The rabbis of the Gemara are known as "Amoraim" (sing. "Amora"). Much of the Gemara consists of legal analysis. The starting point for the analysis is usually a legal statement found in a Mishnah. The statement is then analyzed and compared with other statements, using different approaches to biblical exegesis in rabbinic Judaism (or, more simply, interpretation of text in Torah study), in exchanges between two (frequently anonymous and sometimes metaphorical) disputants, termed the ' (questioner) and ' (answerer). 
Another important function of Gemara is to identify the correct biblical basis for a given law presented in the Mishnah and the logical process connecting one with the other: this activity was known as "talmud" long before the existence of the "Talmud" as a text. In addition to the six Orders, the Talmud contains a series of short treatises of a later date, usually printed at the end of Seder Nezikin. These are not divided into Mishnah and Gemara. Within the Gemara, the quotations from the Mishnah and the Baraitas and verses of Tanakh quoted and embedded in the Gemara are in either Mishnaic or Biblical Hebrew. The rest of the Gemara, including the discussions of the Amoraim and the overall framework, is in a characteristic dialect of Jewish Babylonian Aramaic. There are occasional quotations from older works in other dialects of Aramaic, such as Megillat Taanit. Overall, Hebrew constitutes somewhat less than half of the text of the Talmud. This difference in language is due to the long time period elapsing between the two compilations. During the period of the Tannaim (rabbis cited in the Mishnah), a late form of Hebrew known as Rabbinic or Mishnaic Hebrew was still in use as a spoken vernacular among Jews in Judaea (alongside Greek and Aramaic), whereas during the period of the Amoraim (rabbis cited in the Gemara), which began around the year 200, the spoken vernacular was almost exclusively Aramaic. Hebrew continued to be used for the writing of religious texts, poetry, and so forth. Even within the Aramaic of the gemara, different dialects or writing styles can be observed in different tractates. One dialect is common to most of the Babylonian Talmud, while a second dialect is used in Nedarim, Nazir, Temurah, Keritot, and Me'ilah; the second dialect is closer in style to the Targum. The first complete edition of the Babylonian Talmud was printed in Venice by Daniel Bomberg 1520–23 with the support of Pope Leo X. 
In addition to the "Mishnah" and "Gemara", Bomberg's edition contained the commentaries of Rashi and Tosafot. Almost all printings since Bomberg have followed the same pagination. Bomberg's edition was considered relatively free of censorship. Following Ambrosius Frobenius's publication of most of the Talmud in installments in Basel, Immanuel Benveniste published the whole Talmud in installments in Amsterdam 1644–1648, Although according to Raphael Rabbinovicz the Benveniste Talmud may have been based on the Lublin Talmud and included many of the censors' errors. The edition of the Talmud published by the Szapira brothers in Slavita was published in 1817, and it is particularly prized by many rebbes of Hasidic Judaism. In 1835, after a religious community copyright was nearly over, and following an acrimonious dispute with the Szapira family, a new edition of the Talmud was printed by Menachem Romm of Vilna. Known as the "Vilna Edition Shas", this edition (and later ones printed by his widow and sons, the Romm publishing house) has been used in the production of more recent editions of Talmud Bavli. A page number in the Vilna Talmud refers to a double-sided page, known as a "daf", or folio in English; each daf has two "amudim" labeled and , sides A and B (recto and verso). The convention of referencing by "daf" is relatively recent and dates from the early Talmud printings of the 17th century, though the actual pagination goes back to the Bomberg edition. Earlier rabbinic literature generally refers to the tractate or chapters within a tractate (e.g. Berachot Chapter 1, ). It sometimes also refers to the specific Mishnah in that chapter, where "Mishnah" is replaced with "Halakha", here meaning route, to "direct" the reader to the entry in the Gemara corresponding to that Mishna (e.g. Berachot Chapter 1 Halakha 1, , would refer to the first Mishnah of the first chapter in Tractate Berachot, and its corresponding entry in the Gemara). 
However, this form is nowadays more commonly (though not exclusively) used when referring to the Jerusalem Talmud. Nowadays, reference is usually made in the format ["Tractate daf a/b"] (e.g. Berachot 23b, ). Increasingly, the symbols "." and ":" are used to indicate recto and verso, respectively (thus, e.g. Berachot 23:, ). These references always refer to the pagination of the Vilna Talmud. In the Vilna edition of the Talmud, there are 5,894 folio pages. Lazarus Goldschmidt published an edition from the "uncensored text" of the Babylonian Talmud with a German translation in 9 volumes (commenced in Leipzig, 1897–1909; completed by 1936, following his emigration to England in 1933). The text of the Vilna editions is considered by scholars not to be uniformly reliable, and there have been a number of attempts to collate textual variants. There have been critical editions of particular tractates (e.g. Henry Malter's edition of "Ta'anit"), but there is no modern critical edition of the whole Talmud. Modern editions such as those of the Oz ve-Hadar Institute correct misprints and restore passages that in earlier editions were modified or excised by censorship but do not attempt a comprehensive account of textual variants. One edition, by rabbi Yosef Amar, represents the Yemenite tradition, and takes the form of a photostatic reproduction of a Vilna-based print to which Yemenite vocalization and textual variants have been added by hand, together with printed introductory material. Collations of the Yemenite manuscripts of some tractates have been published by Columbia University. A number of editions have been aimed at bringing the Talmud to a wider audience. The main ones are as follows. There are six contemporary translations of the Talmud into English. A translation of the Talmud into Arabic, from circa 1000 CE, is mentioned in Sefer ha-Qabbalah. This version was commissioned by the Fatimid Caliph Al-Hakim bi-Amr Allah and was carried out by Joseph ibn Abitur. 
There is one translation of the Talmud into Arabic, published in 2012 in Jordan by the Center for Middle Eastern Studies. The translation was carried out by a group of 90 Muslim and Christian scholars. The introduction was characterized by Dr. Raquel Ukeles, Curator of the Israel National Library's Arabic collection, as "racist", but she considers the translation itself as "not bad". In February 2017, the "William Davidson Talmud" was released to Sefaria. This translation is a version of the Steinsaltz edition which was released under a Creative Commons license. In 2018, Muslim-majority Albania co-hosted an event at the United Nations with Catholic-majority Italy and Jewish-majority Israel celebrating the translation of the Talmud into Italian for the first time. Albanian UN Ambassador Besiana Kadare opined: "Projects like the Babylonian Talmud Translation open a new lane in intercultural and interfaith dialogue, bringing hope and understanding among people, the right tools to counter prejudice, stereotypical thinking and discrimination. By doing so, we think that we strengthen our social traditions, peace, stability — and we also counter violent extremist tendencies." From the time of its completion, the Talmud became integral to Jewish scholarship. A maxim in Pirkei Avot advocates its study from the age of 15. This section outlines some of the major areas of Talmudic study. The earliest Talmud commentaries were written by the Geonim ( 800–1000) in Babylonia. Although some direct commentaries on particular treatises are extant, our main knowledge of Gaonic era Talmud scholarship comes from statements embedded in Geonic responsa that shed light on Talmudic passages: these are arranged in the order of the Talmud in Levin's "Otzar ha-Geonim". Also important are practical abridgments of Jewish law such as Yehudai Gaon's "Halachot Pesukot", Achai Gaon's "Sheeltot" and Simeon Kayyara's "Halachot Gedolot". 
After the death of Hai Gaon, however, the center of Talmud scholarship shifted to Europe and North Africa. One area of Talmudic scholarship developed out of the need to ascertain the Halakha. Early commentators such as rabbi Isaac Alfasi (North Africa, 1013–1103) attempted to extract and determine the binding legal opinions from the vast corpus of the Talmud. Alfasi's work was highly influential, attracted several commentaries in its own right and later served as a basis for the creation of halakhic codes. Another influential medieval Halakhic work following the order of the Babylonian Talmud, and to some extent modelled on Alfasi, was the "Mordechai", a compilation by Mordechai ben Hillel ( 1250–1298). A third such work was that of rabbi Asher ben Yechiel (d. 1327). All these works and their commentaries are printed in the Vilna and many subsequent editions of the Talmud. A 15th-century Spanish rabbi, Jacob ibn Habib (d. 1516), composed the Ein Yaakov. "Ein Yaakov" (or "En Ya'aqob") extracts nearly all the Aggadic material from the Talmud. It was intended to familiarize the public with the ethical parts of the Talmud and to dispute many of the accusations surrounding its contents. The commentaries on the Talmud constitute only a small part of Rabbinic literature in comparison with the responsa literature and the commentaries on the codices. At the time when the Talmud was concluded the traditional literature was still so fresh in the memory of scholars that there was no need of writing Talmudic commentaries, nor were such works undertaken in the first period of the gaonate. Paltoi ben Abaye ("c." 840) was the first to offer, in his responsa, verbal and textual comments on the Talmud. His son, Zemah ben Paltoi, paraphrased and explained the passages which he quoted; and he composed, as an aid to the study of the Talmud, a lexicon which Abraham Zacuto consulted in the fifteenth century. 
Saadia Gaon is said to have composed commentaries on the Talmud, aside from his Arabic commentaries on the Mishnah. There are many passages in the Talmud which are cryptic and difficult to understand. Its language contains many Greek and Persian words that became obscure over time. A major area of Talmudic scholarship developed to explain these passages and words. Some early commentators such as Rabbenu Gershom of Mainz (10th century) and Rabbenu Ḥananel (early 11th century) produced running commentaries to various tractates. These commentaries could be read with the text of the Talmud and would help explain the meaning of the text. Another important work is the "Sefer ha-Mafteaḥ" (Book of the Key) by Nissim Gaon, which contains a preface explaining the different forms of Talmudic argumentation and then explains abbreviated passages in the Talmud by cross-referring to parallel passages where the same thought is expressed in full. Commentaries ("ḥiddushim") by Joseph ibn Migash on two tractates, Bava Batra and Shevuot, based on Ḥananel and Alfasi, also survive, as does a compilation by Zechariah Aghmati called "Sefer ha-Ner". Using a different style, rabbi Nathan b. Jechiel created a lexicon called the "Arukh" in the 11th century to help translate difficult words. By far the best known commentary on the Babylonian Talmud is that of Rashi (rabbi Solomon ben Isaac, 1040–1105). The commentary is comprehensive, covering almost the entire Talmud. Written as a running commentary, it provides a full explanation of the words, and explains the logical structure of each Talmudic passage. It is considered indispensable to students of the Talmud. Although Rashi drew upon all his predecessors, his originality in using the material offered by them was unparalleled. 
His commentaries, in turn, became the basis of the work of his pupils and successors, who composed a large number of supplementary works that were partly in emendation and partly in explanation of Rashi's, and are known under the title "Tosafot" ("additions" or "supplements"). The "Tosafot" are collected commentaries by various medieval Ashkenazic rabbis on the Talmud (known as "Tosafists" or "Ba'alei Tosafot"). One of the main goals of the "Tosafot" is to explain and interpret contradictory statements in the Talmud. Unlike Rashi, the "Tosafot" is not a running commentary, but rather comments on selected matters. Often the explanations of "Tosafot" differ from those of Rashi. In yeshiva, the integration of Talmud, Rashi and Tosafot is considered the foundation (and prerequisite) for further analysis; this combination is sometimes referred to by the acronym "gefet" (גפ״ת: "Gemara", "perush Rashi", "Tosafot"). Among the founders of the Tosafist school were rabbi Jacob ben Meir (known as Rabbeinu Tam), who was a grandson of Rashi, and Rabbenu Tam's nephew, rabbi Isaac ben Samuel. The Tosafot commentaries were collected in different editions in the various schools. The benchmark collection of Tosafot for Northern France was that of R. Eliezer of Touques. The standard collection for Spain was that of Rabbenu Asher ("Tosefot Harosh"). The Tosafot that are printed in the standard Vilna edition of the Talmud are an edited version compiled from the various medieval collections, predominantly that of Touques. Over time, the approach of the Tosafists spread to other Jewish communities, particularly those in Spain. This led to the composition of many other commentaries in similar styles. Among these are the commentaries of Nachmanides (Ramban), Solomon ben Adret (Rashba), Yom Tov of Seville (Ritva) and Nissim of Gerona (Ran). A comprehensive anthology consisting of extracts from all these is the "Shittah Mekubbetzet" of Bezalel Ashkenazi. 
Other commentaries produced in Spain and Provence were not influenced by the Tosafist style. Two of the most significant of these are the Yad Ramah by rabbi Meir Abulafia and "Bet Habechirah" by rabbi Menahem haMeiri, commonly referred to as "Meiri". While the "Bet Habechirah" is extant for the whole Talmud, the "Yad Ramah" survives only for Tractates Sanhedrin, Baba Batra and Gittin. Like the commentaries of Ramban and the others, these are generally printed as independent works, though some Talmud editions include the "Shittah Mekubbetzet" in an abbreviated form. In later centuries, focus partially shifted from direct Talmudic interpretation to the analysis of previously written Talmudic commentaries. These later commentaries include "Maharshal" (Solomon Luria), "Maharam" (Meir Lublin) and "Maharsha" (Samuel Edels), and are generally printed at the back of each tractate. Another very useful study aid, found in almost all editions of the Talmud, consists of the marginal notes "Torah Or", "Ein Mishpat Ner Mitzvah" and "Masoret ha-Shas" by the Italian rabbi Joshua Boaz, which give references respectively to the cited Biblical passages, to the relevant halachic codes ("Mishneh Torah", "Tur", "Shulchan Aruch", and "Se'mag") and to related Talmudic passages. Most editions of the Talmud include brief marginal notes by Akiva Eger under the name "Gilyon ha-Shas", and textual notes by Joel Sirkes and the Vilna Gaon (see Textual emendations below), on the page together with the text. Commentaries discussing the halachic legal content include "Rosh", "Rif" and "Mordechai"; these are now standard appendices to each volume. Rambam's "Mishneh Torah" is invariably studied alongside these three; although a code, and therefore not in the same order as the Talmud, the relevant location is identified via the "Ein Mishpat", as mentioned. During the 15th and 16th centuries, a new intensive form of Talmud study arose. 
Complicated logical arguments were used to explain minor points of contradiction within the Talmud. The term "pilpul" was applied to this type of study. Usage of "pilpul" in this sense (that of "sharp analysis") harks back to the Talmudic era and refers to the intellectual sharpness this method demanded. Pilpul practitioners posited that the Talmud could contain no redundancy or contradiction whatsoever. New categories and distinctions ("hillukim") were therefore created, resolving seeming contradictions within the Talmud by novel logical means. In the Ashkenazi world the founders of "pilpul" are generally considered to be Jacob Pollak (1460–1541) and Shalom Shachna. This kind of study reached its height in the 16th and 17th centuries when expertise in pilpulistic analysis was considered an art form and became a goal in and of itself within the yeshivot of Poland and Lithuania. But the popular new method of Talmud study was not without critics; already in the 15th century, the ethical tract "Orhot Zaddikim" ("Paths of the Righteous" in Hebrew) criticized pilpul for an overemphasis on intellectual acuity. Many 16th- and 17th-century rabbis were also critical of pilpul. Among them are Judah Loew ben Bezalel (the "Maharal" of Prague), Isaiah Horowitz, and Yair Bacharach. By the 18th century, pilpul study waned. Other styles of learning such as that of the school of Elijah b. Solomon, the Vilna Gaon, became popular. The term "pilpul" was increasingly applied derogatorily to novellae deemed casuistic and hairsplitting. Authors referred to their own commentaries as "al derekh ha-peshat" (by the simple method) to contrast them with pilpul. Among Sephardi and Italian Jews from the 15th century on, some authorities sought to apply the methods of Aristotelian logic, as reformulated by Averroes. This method was first recorded, though without explicit reference to Aristotle, by Isaac Campanton (d. 
Spain, 1463) in his "Darkhei ha-Talmud" ("The Ways of the Talmud"), and is also found in the works of Moses Chaim Luzzatto. According to the present-day Sephardi scholar José Faur, traditional Sephardic Talmud study could take place on any of three levels. Today most Sephardic yeshivot follow Lithuanian approaches such as the Brisker method; the traditional Sephardic methods are perpetuated informally by some individuals. "'Iyyun Tunisa'i" is taught at the Kisse Rahamim yeshivah in Bnei Brak. In the late 19th century another trend in Talmud study arose. Rabbi Hayyim Soloveitchik (1853–1918) of Brisk (Brest-Litovsk) developed and refined this style of study. The Brisker method involves a reductionistic analysis of rabbinic arguments within the Talmud or among the Rishonim, explaining the differing opinions by placing them within a categorical structure. The Brisker method is highly analytical and is often criticized as being a modern-day version of pilpul. Nevertheless, the influence of the Brisker method is great. Most modern-day yeshivot study the Talmud using the Brisker method in some form. One feature of this method is the use of Maimonides' "Mishneh Torah" as a guide to Talmudic interpretation, as distinct from its use as a source of practical "halakha". Rival methods were those of the Mir and Telz yeshivas. As a result of Jewish emancipation, Judaism underwent enormous upheaval and transformation during the 19th century. Modern methods of textual and historical analysis were applied to the Talmud. The text of the Talmud has been subject to some level of critical scrutiny throughout its history. Rabbinic tradition holds that the people cited in both Talmuds did not have a hand in their writing; rather, their teachings were edited into a rough form around 450 CE (Talmud Yerushalmi) and 550 CE (Talmud Bavli). The text of the Bavli especially was not firmly fixed at that time. The Gaonic responsa literature addresses this issue. 
Teshuvot Geonim Kadmonim, section 78, deals with mistaken biblical readings in the Talmud. This Gaonic responsum states: In the early medieval era, Rashi already concluded that some statements in the extant text of the Talmud were insertions from later editors. On Shevuot 3b Rashi writes "A mistaken student wrote this in the margin of the Talmud, and copyists [subsequently] put it into the Gemara." The emendations of Yoel Sirkis and the Vilna Gaon are included in all standard editions of the Talmud, in the form of marginal glosses entitled "Hagahot ha-Bach" and "Hagahot ha-Gra" respectively; further emendations by Solomon Luria are set out in commentary form at the back of each tractate. The Vilna Gaon's emendations were often based on his quest for internal consistency in the text rather than on manuscript evidence; nevertheless many of the Gaon's emendations were later verified by textual critics, such as Solomon Schechter, who had Cairo Genizah texts with which to compare our standard editions. In the 19th century Raphael Nathan Nota Rabinovicz published a multi-volume work entitled "Dikdukei Soferim", showing textual variants from the Munich and other early manuscripts of the Talmud, and further variants are recorded in the Complete Israeli Talmud and "Gemara Shelemah" editions (see Critical editions, above). Today many more manuscripts have become available, in particular from the Cairo Geniza. The Academy of the Hebrew Language has prepared a text on CD-ROM for lexicographical purposes, containing the text of each tractate according to the manuscript it considers most reliable, and images of some of the older manuscripts may be found on the website of the Jewish National and University Library. 
The JNUL, the Lieberman Institute (associated with the Jewish Theological Seminary of America), the Institute for the Complete Israeli Talmud (part of Yad Harav Herzog) and the Friedberg Jewish Manuscript Society all maintain searchable websites on which the viewer can request variant manuscript readings of a given passage. Further variant readings can often be gleaned from citations in secondary literature such as commentaries, in particular those of Alfasi, Rabbenu Ḥananel and Aghmati, and sometimes the later Spanish commentators such as Nachmanides and Solomon ben Adret. Historical study of the Talmud can be used to investigate a variety of concerns. One can ask questions such as: Do a given section's sources date from its editor's lifetime? To what extent does a section have earlier or later sources? Are Talmudic disputes distinguishable along theological or communal lines? In what ways do different sections derive from different schools of thought within early Judaism? Can these early sources be identified, and if so, how? Investigation of questions such as these is known as "higher textual criticism". (The term "criticism" is a technical term denoting academic study.) Religious scholars still debate the precise method by which the texts of the Talmuds reached their final form. Many believe that the text was continuously smoothed over by the "savoraim". In the 1870s and 1880s, rabbi Raphael Natan Nata Rabbinovitz engaged in historical study of Talmud Bavli in his "Diqduqei Soferim". Since then many Orthodox rabbis have approved of his work, including rabbis Shlomo Kluger, Yoseph Shaul Ha-Levi Natanzohn, Yaaqov Ettlinger, Isaac Elhanan Spektor and Shimon Sofer. During the early 19th century, leaders of the newly evolving Reform movement, such as Abraham Geiger and Samuel Holdheim, subjected the Talmud to severe scrutiny as part of an effort to break with traditional rabbinic Judaism.
They insisted that the Talmud was entirely a work of evolution and development. This view was rejected as both academically and religiously incorrect by those who would become known as the Orthodox movement. Some Orthodox leaders such as Moses Sofer (the "Chatam Sofer") became exquisitely sensitive to any change and rejected modern critical methods of Talmud study. Some rabbis advocated a view of Talmudic study that they held to be in-between the Reformers and the Orthodox; these were the adherents of positive-historical Judaism, notably Nachman Krochmal and Zecharias Frankel. They described the Oral Torah as the result of a historical and exegetical process, emerging over time through the application of authorized exegetical techniques by learned sages and, more importantly, through the sages' subjective dispositions and personalities and the historical conditions of their times. This was later developed more fully in the five-volume work "Dor Dor ve-Dorshav" by Isaac Hirsch Weiss. (See Jay Harris, "Guiding the Perplexed in the Modern Age", ch. 5.) Eventually their work came to be one of the formative parts of Conservative Judaism. Another aspect of this movement is reflected in Graetz's "History of the Jews". Graetz attempts to deduce the personality of the Pharisees based on the laws or aggadot that they cite, and to show that their personalities influenced the laws they expounded. The leader of Orthodox Jewry in Germany, Samson Raphael Hirsch, while not rejecting the methods of scholarship in principle, hotly contested the findings of the historical-critical method. In a series of articles in his magazine "Jeschurun" (reprinted in Collected Writings, Vol. 5), Hirsch reiterated the traditional view and pointed out what he saw as numerous errors in the works of Graetz, Frankel and Geiger. On the other hand, many of the 19th century's strongest critics of Reform, including strictly orthodox rabbis such as Zvi Hirsch Chajes, utilized this new scientific method.
The Orthodox rabbinical seminary of Azriel Hildesheimer was founded on the idea of creating a "harmony between Judaism and science". Another Orthodox pioneer of scientific Talmud study was David Zvi Hoffmann. The Iraqi rabbi Yaakov Chaim Sofer notes that the text of the Gemara has had changes and additions, and contains statements not of the same origin as the original. See his "Yehi Yosef" (Jerusalem, 1991) p. 132 "This passage does not bear the signature of the editor of the Talmud!" Orthodox scholar Daniel Sperber writes in "Legitimacy, of Necessity, of Scientific Disciplines" that many Orthodox sources have engaged in the historical (also called "scientific") study of the Talmud. As such, the divide today between Orthodoxy and Reform is not about whether the Talmud may be subjected to historical study, but rather about the theological and halakhic implications of such study. Some trends within contemporary Talmud scholarship are listed below. The Talmud represents the written record of an oral tradition. It became the basis for many rabbinic legal codes and customs, most importantly for the Mishneh Torah and for the Shulchan Aruch. Orthodox and, to a lesser extent, Conservative Judaism accept the Talmud as authoritative, while Samaritan, Karaite, Reconstructionist, and Reform Judaism do not. The Jewish sect of the Sadducees (Hebrew: צְדוּקִים) flourished during the Second Temple period. Principal distinctions between them and the Pharisees (later known as Rabbinic Judaism) involved their rejection of an "Oral Torah" and their denying a resurrection after death. Another movement that rejected the Oral Torah as authoritative was Karaism, which arose within two centuries after completion of the Talmud. Karaism developed as a reaction against the Talmudic Judaism of Babylonia. The central concept of Karaism is the rejection of the Oral Torah, as embodied in the Talmud, in favor of a strict adherence only to the Written Torah. 
This opposes the fundamental Rabbinic concept that the Oral Torah was given to Moses on Mount Sinai together with the Written Torah. Some later Karaites took a more moderate stance, allowing that some element of tradition (called "sevel ha-yerushah", the burden of inheritance) is admissible in interpreting the Torah and that some authentic traditions are contained in the Mishnah and the Talmud, though these can never supersede the plain meaning of the Written Torah. The rise of Reform Judaism during the 19th century saw more questioning of the authority of the Talmud. Reform Jews saw the Talmud as a product of late antiquity, having relevance merely as a historical document. For example, the "Declaration of Principles" issued by the Association of Friends of Reform Frankfurt in August 1843 states among other things that: Some took a critical-historical view of the written Torah as well, while others appeared to adopt a neo-Karaite "back to the Bible" approach, though often with greater emphasis on the prophetic than on the legal books. Within Humanistic Judaism, Talmud is studied as a historical text, in order to discover how it can demonstrate practical relevance to living today. Orthodox Judaism continues to stress the importance of Talmud study as a central component of Yeshiva curriculum, in particular for those training to become rabbis. This is so even though "Halakha" is generally studied from the medieval and early modern codes and not directly from the Talmud. Talmudic study amongst the laity is widespread in Orthodox Judaism, with daily or weekly Talmud study particularly common in Haredi Judaism and with Talmud study a central part of the curriculum in Orthodox Yeshivas and day schools. The regular study of Talmud among laymen has been popularized by the "Daf Yomi", a daily course of Talmud study initiated by rabbi Meir Shapiro in 1923; its 13th cycle of study began in August 2012 and ended with the 13th Siyum HaShas on January 1, 2020. 
The Rohr Jewish Learning Institute has popularized the "MyShiur – Explorations in Talmud" to show how the Talmud is relevant to a wide range of people. Conservative Judaism similarly emphasizes the study of Talmud within its religious and rabbinic education. Generally, however, Conservative Jews study the Talmud as a historical source-text for Halakha. The Conservative approach to legal decision-making emphasizes placing classic texts and prior decisions in historical and cultural context, and examining the historical development of Halakha. This approach has resulted in greater practical flexibility than that of the Orthodox. Talmud study forms part of the curriculum of Conservative parochial education at many Conservative day-schools, and an increase in Conservative day-school enrollments has resulted in an increase in Talmud study as part of Conservative Jewish education among a minority of Conservative Jews. See also: "The Conservative Jewish view of the Halakha". Reform Judaism does not emphasize the study of Talmud to the same degree in its Hebrew schools, but Talmud is taught in Reform rabbinical seminaries; the world view of liberal Judaism rejects the idea of binding Jewish law and uses the Talmud as a source of inspiration and moral instruction. Ownership and reading of the Talmud is not widespread among Reform and Reconstructionist Jews, who usually place more emphasis on the study of the Hebrew Bible or Tanakh. Rabbis and talmudists studying and debating the Talmud abound in the art of the Austrian painter Carl Schleicher (1825–1903), who was active in Vienna, especially c. 1859–1871. The study of Talmud is not restricted to those of the Jewish religion and has attracted interest in other cultures. Christian scholars have long expressed an interest in the study of Talmud, which has helped illuminate their own scriptures. Talmud contains biblical exegesis and commentary on Tanakh that will often clarify elliptical and esoteric passages.
The Talmud contains possible references to Jesus and his disciples, while the Christian canon makes mention of Talmudic figures and contains teachings that can be paralleled within the Talmud and Midrash. The Talmud provides cultural and historical context to the Gospel and the writings of the Apostles. South Koreans reportedly hope to emulate Jews' high academic standards by studying Jewish literature. Almost every household has a translated copy of a book they call "Talmud", which parents read to their children, and the book is part of the primary-school curriculum. The "Talmud" in this case is usually one of several possible volumes, the earliest translated into Korean from the Japanese. The original Japanese books were created through the collaboration of Japanese writer Hideaki Kase and Marvin Tokayer, an Orthodox American rabbi serving in Japan in the 1960s and 70s. The first collaborative book was "5,000 Years of Jewish Wisdom: Secrets of the Talmud Scriptures", created over a three-day period in 1968 and published in 1971. The book contains actual stories from the Talmud, proverbs, ethics, Jewish legal material, biographies of Talmudic rabbis, and personal stories about Tokayer and his family. Tokayer and Kase published a number of other books on Jewish themes together in Japanese. The first South Korean publication of "5,000 Years of Jewish Wisdom" was in 1974, by Tae Zang publishing house. Many different editions followed in both Korea and China, often by black-market publishers. Between 2007 and 2009, Reverend Yong-soo Hyun of the Shema Yisrael Educational Institute published a 6-volume edition of the Korean Talmud, bringing together material from a variety of Tokayer's earlier books. He worked with Tokayer to correct errors and Tokayer is listed as the author. 
Tutoring centers based on this and other works called "Talmud" for both adults and children are popular in Korea and "Talmud" books (all based on Tokayer's works and not the original Talmud) are widely read and known. Historian Michael Levi Rodkinson, in his book "The History of the Talmud", wrote that detractors of the Talmud, both during and subsequent to its formation, "have varied in their character, objects and actions" and the book documents a number of critics and persecutors, including Nicholas Donin, Johannes Pfefferkorn, Johann Andreas Eisenmenger, the Frankists, and August Rohling. Many attacks come from antisemitic sources such as Justinas Pranaitis, Elizabeth Dilling, or David Duke. Criticisms also arise from Christian, Muslim, and Jewish sources, as well as from atheists and skeptics. Accusations against the Talmud include alleged: Defenders of the Talmud argue that many of these criticisms, particularly those in antisemitic sources, are based on quotations that are taken out of context, and thus misrepresent the meaning of the Talmud's text and its basic character as a detailed record of discussions that preserved statements by a variety of sages, and from which statements and opinions that were rejected were never edited out. Sometimes the misrepresentation is deliberate, and other times simply due to an inability to grasp the subtle and sometimes confusing and multi-faceted narratives in the Talmud. Some quotations provided by critics deliberately omit passages in order to generate quotes that appear to be offensive or insulting. At the very time that the Babylonian "savoraim" put the finishing touches to the redaction of the Talmud, the emperor Justinian issued his edict against "deuterosis" (doubling, repetition) of the Hebrew Bible. It is disputed whether, in this context, "deuterosis" means "Mishnah" or "Targum": in patristic literature, the word is used in both senses. 
Full-scale attacks on the Talmud took place in the 13th century in France, where Talmudic study was then flourishing. In the 1230s Nicholas Donin, a Jewish convert to Christianity, pressed 35 charges against the Talmud to Pope Gregory IX by translating a series of blasphemous passages about Jesus, Mary or Christianity. There is a quoted Talmudic passage, for example, where Jesus of Nazareth is sent to Hell to be boiled in excrement for eternity. Donin also selected an injunction of the Talmud that permits Jews to kill non-Jews. This led to the Disputation of Paris, which took place in 1240 at the court of Louis IX of France, where four rabbis, including Yechiel of Paris and Moses ben Jacob of Coucy, defended the Talmud against the accusations of Nicholas Donin. The translation of the Talmud from Aramaic into non-Jewish languages stripped Jewish discourse of its covering, something that was resented by Jews as a profound violation. The Disputation of Paris led to the condemnation and the first burning of copies of the Talmud in Paris in 1242. The burning of copies of the Talmud continued. The Talmud was likewise the subject of the Disputation of Barcelona in 1263 between Nahmanides (rabbi Moses ben Nahman) and the Christian convert Pablo Christiani. This same Pablo Christiani made an attack on the Talmud that resulted in a papal bull against the Talmud and in the first censorship, which was undertaken at Barcelona by a commission of Dominicans, who ordered the cancellation of passages deemed objectionable from a Christian perspective (1264). At the Disputation of Tortosa in 1413, Geronimo de Santa Fé brought forward a number of accusations, including the fateful assertion that the condemnations of "pagans", "heathens", and "apostates" found in the Talmud were in reality veiled references to Christians.
These assertions were denied by the Jewish community and its scholars, who contended that Judaic thought made a sharp distinction between those classified as heathen or pagan, being polytheistic, and those who acknowledge one true God (such as the Christians) even while worshipping the true monotheistic God incorrectly. Thus, Jews viewed Christians as misguided and in error, but not among the "heathens" or "pagans" discussed in the Talmud. Both Pablo Christiani and Geronimo de Santa Fé, in addition to criticizing the Talmud, also regarded it as a source of authentic traditions, some of which could be used as arguments in favour of Christianity. Examples of such traditions were statements that the Messiah was born around the time of the destruction of the Temple, and that the Messiah sat at the right hand of God. In 1415, Antipope Benedict XIII, who had convened the Tortosa disputation, issued a papal bull (which was destined, however, to remain inoperative) forbidding the Jews to read the Talmud, and ordering the destruction of all copies of it. Far more important were the charges made in the early part of the 16th century by the convert Johannes Pfefferkorn, the agent of the Dominicans. The result of these accusations was a struggle in which the emperor and the pope acted as judges, the advocate of the Jews being Johann Reuchlin, who was opposed by the obscurantists; and this controversy, which was carried on for the most part by means of pamphlets, became in the eyes of some a precursor of the Reformation. An unexpected result of this affair was the complete printed edition of the Babylonian Talmud issued in 1520 by Daniel Bomberg at Venice, under the protection of a papal privilege. Three years later, in 1523, Bomberg published the first edition of the Jerusalem Talmud. After thirty years the Vatican, which had first permitted the Talmud to appear in print, undertook a campaign of destruction against it. 
On the New Year, Rosh Hashanah (September 9, 1553), the copies of the Talmud confiscated in compliance with a decree of the Inquisition were burned at Rome, in Campo dei Fiori (auto de fé). Other burnings took place in other Italian cities, such as the one instigated by Joshua dei Cantori at Cremona in 1559. Censorship of the Talmud and other Hebrew works was introduced by a papal bull issued in 1554; five years later the Talmud was included in the first Index Expurgatorius; and Pope Pius IV commanded, in 1565, that the Talmud be deprived of its very name. The convention of referring to the work as "Shas" ("shishah sidre Mishnah") instead of "Talmud" dates from this time. The first edition of the expurgated Talmud, on which most subsequent editions were based, appeared at Basel (1578–1581) with the omission of the entire treatise of 'Abodah Zarah and of passages considered inimical to Christianity, together with modifications of certain phrases. A fresh attack on the Talmud was decreed by Pope Gregory XIII (1575–85), and in 1593 Clement VIII renewed the old interdiction against reading or owning it. The increasing study of the Talmud in Poland led to the issue of a complete edition (Kraków, 1602–05), with a restoration of the original text; an edition containing, so far as known, only two treatises had previously been published at Lublin (1559–76). A further attack on the Talmud took place in Poland (in what is now Ukrainian territory) in 1757, when Bishop Dembowski, at the instigation of the Frankists, convened a public disputation at Kamianets-Podilskyi and ordered all copies of the work found in his bishopric to be confiscated and burned. The external history of the Talmud also includes the literary attacks made upon it by some Christian theologians after the Reformation, since these onslaughts on Judaism were directed primarily against that work, the leading example being Eisenmenger's "Entdecktes Judenthum" (Judaism Unmasked) (1700).
In contrast, the Talmud was a subject of rather more sympathetic study by many Christian theologians, jurists and Orientalists from the Renaissance on, including Johann Reuchlin, John Selden, Petrus Cunaeus, John Lightfoot and Johannes Buxtorf father and son. The Vilna edition of the Talmud was subject to Russian government censorship, or self-censorship to meet government expectations, though this was less severe than some previous attempts: the title "Talmud" was retained and the tractate Avodah Zarah was included. Most modern editions are either copies of or closely based on the Vilna edition, and therefore still omit most of the disputed passages. Although they were not available for many generations, the removed sections of the Talmud, Rashi, Tosafot and Maharsha were preserved through rare printings of lists of "errata", known as "Chesronos Hashas" ("Omissions of the Talmud"). Many of these censored portions were recovered from uncensored manuscripts in the Vatican Library. Some modern editions of the Talmud contain some or all of this material, either at the back of the book, in the margin, or in its original location in the text. In 1830, during a debate in the French Chamber of Peers regarding state recognition of the Jewish faith, Admiral Verhuell declared himself unable to forgive the Jews whom he had met during his travels throughout the world either for their refusal to recognize Jesus as the Messiah or for their possession of the Talmud. In the same year the Abbé Chiarini published a voluminous work entitled "Théorie du Judaïsme", in which he announced a translation of the Talmud, advocating for the first time a version that would make the work generally accessible, and thus serve for attacks on Judaism: only two out of the projected six volumes of this translation appeared. In a like spirit 19th-century anti-Semitic agitators often urged that a translation be made; and this demand was even brought before legislative bodies, as in Vienna. 
The Talmud and the "Talmud Jew" thus became objects of anti-Semitic attacks, for example in August Rohling's "Der Talmudjude" (1871), although, on the other hand, they were defended by many Christian students of the Talmud, notably Hermann Strack. Further attacks from anti-Semitic sources include Justinas Pranaitis' "The Talmud Unmasked: The Secret Rabbinical Teachings Concerning Christians" (1892) and Elizabeth Dilling's "The Plot Against Christianity" (1964). The criticisms of the Talmud in many modern pamphlets and websites are often recognisable as verbatim quotations from one or other of these. Historians Will and Ariel Durant noted a lack of consistency between the many authors of the Talmud, with some tractates in the wrong order, or subjects dropped and resumed without reason. According to the Durants, the Talmud "is not the product of deliberation, it is the deliberation itself." The Internet is another source of criticism of the Talmud. The Anti-Defamation League's report on this topic states that antisemitic critics of the Talmud frequently use erroneous translations or selective quotations in order to distort the meaning of the Talmud's text, and sometimes fabricate passages. In addition, the attackers rarely provide full context of the quotations, and fail to provide contextual information about the culture that the Talmud was composed in, nearly 2,000 years ago. One such example concerns the line: "If a Jew be called upon to explain any part of the rabbinic books, he ought to give only a false explanation. Who ever will violate this order shall be put to death." This is alleged to be a quote from a book titled "Libbre David" (alternatively "Livore David"). No such book exists in the Talmud or elsewhere. The title is assumed to be a corruption of "Dibre David", a work published in 1671. Reference to the quote is found in an early Holocaust denial book, "The Six Million Reconsidered" by William Grimstad. 
Gil Student, Book Editor of the Orthodox Union's Jewish Action magazine, states that many attacks on the Talmud are merely recycling discredited material that originated in the 13th-century disputations, particularly from Raymond Marti and Nicholas Donin, and that the criticisms are based on quotations taken out of context and are sometimes entirely fabricated.
https://en.wikipedia.org/wiki?curid=30345
Trumpet
The trumpet is a brass instrument commonly used in classical and jazz ensembles. The trumpet group ranges from the piccolo trumpet, with the highest register in the brass family, to the bass trumpet, which is pitched one octave below the standard B♭ or C trumpet. Trumpet-like instruments have historically been used as signalling devices in battle or hunting, with examples dating back to at least 1500 BC. They began to be used as musical instruments only in the late 14th or early 15th century. Trumpets are used in art music styles, for instance in orchestras, concert bands, and jazz ensembles, as well as in popular music. They are played by blowing air through nearly-closed lips (called the player's embouchure), producing a "buzzing" sound that starts a standing wave vibration in the air column inside the instrument. Since the late 15th century, trumpets have primarily been constructed of brass tubing, usually bent twice into a rounded rectangular shape. There are many distinct types of trumpet, with the most common being pitched in B♭ (a transposing instrument) and having a tubing length of about 1.48 m (4 ft 10 in). Early trumpets did not provide means to change the length of tubing, whereas modern instruments generally have three (or sometimes four) valves in order to change their pitch. There are eight combinations of three valves, making seven different tubing lengths, with the third valve sometimes used as an alternate fingering equivalent to the 1–2 combination. Most trumpets have valves of the piston type, while some have the rotary type. The use of rotary-valved trumpets is more common in orchestral settings (especially in German and German-style orchestras), although this practice varies by country. Each valve, when engaged, increases the length of tubing, lowering the pitch of the instrument. A musician who plays the trumpet is called a "trumpet player" or "trumpeter". The English word "trumpet" was first used in the late 14th century.
The word came from Old French "trompette", a diminutive of "trompe". The word "trump", meaning "trumpet", was first used in English in 1300. It comes from Old French "trompe" "long, tube-like musical wind instrument" (12c.), cognate with Provençal "tromba" and Italian "tromba", all probably from a Germanic source (compare Old High German "trumpa", Old Norse "trumba" "trumpet"), of imitative origin. The earliest trumpets date back to 1500 BC and earlier. The bronze and silver trumpets from Tutankhamun's grave in Egypt, bronze lurs from Scandinavia, and metal trumpets from China date back to this period. Trumpets from the Oxus civilization (3rd millennium BC) of Central Asia have decorated swellings in the middle, yet are made out of one sheet of metal, which is considered a technical wonder. The Shofar, made from a ram's horn, and the Hatzotzeroth, made of metal, are both mentioned in the Bible. They were played in Solomon's Temple around 3000 years ago. They were said to be used to blow down the walls of Jericho. They are still used on certain religious days. The Salpinx was a long, straight trumpet made of bone or bronze. Salpinx contests were a part of the original Olympic Games. The Moche people of ancient Peru depicted trumpets in their art going back to AD 300. The earliest trumpets were signaling instruments used for military or religious purposes, rather than music in the modern sense; the modern bugle continues this signaling tradition. Improvements to instrument design and metal making in the late Middle Ages and Renaissance led to an increased usefulness of the trumpet as a musical instrument. The natural trumpets of this era consisted of a single coiled tube without valves and therefore could only produce the notes of a single overtone series. Changing keys required the player to change crooks of the instrument.
The development of the upper, "clarino" register by specialist trumpeters—notably Cesare Bendinelli—would lend itself well to the Baroque era, also known as the "Golden Age of the natural trumpet." During this period, a vast body of music was written for virtuoso trumpeters. The art was revived in the mid-20th century and natural trumpet playing is again a thriving art around the world. Many modern players in Germany and the UK who perform Baroque music use a version of the natural trumpet fitted with three or four vent holes to aid in correcting out-of-tune notes in the harmonic series. The melody-dominated homophony of the classical and romantic periods relegated the trumpet to a secondary role by most major composers owing to the limitations of the natural trumpet. Berlioz wrote in 1844: "Notwithstanding the real loftiness and distinguished nature of its quality of tone, there are few instruments that have been more degraded (than the trumpet). Down to Beethoven and Weber, every composer – not excepting Mozart – persisted in confining it to the unworthy function of filling up, or in causing it to sound two or three commonplace rhythmical formulae." The trumpet is constructed of brass tubing bent twice into a rounded oblong shape. As with all brass instruments, sound is produced by blowing air through closed lips, producing a "buzzing" sound into the mouthpiece and starting a standing wave vibration in the air column inside the trumpet. The player can select the pitch from a range of overtones or harmonics by changing the lip aperture and tension (known as the embouchure). The mouthpiece has a circular rim, which provides a comfortable environment for the lips' vibration. Directly behind the rim is the cup, which channels the air into a much smaller opening (the back bore or shank) that tapers out slightly to match the diameter of the trumpet's lead pipe.
The dimensions of these parts of the mouthpiece affect the timbre or quality of sound, the ease of playability, and player comfort. Generally, the wider and deeper the cup, the darker the sound and timbre. Modern trumpets have three (or, infrequently, four) piston valves, each of which increases the length of tubing when engaged, thereby lowering the pitch. The first valve lowers the instrument's pitch by a whole step (two semitones), the second valve by a half step (one semitone), and the third valve by one and a half steps (three semitones). When a fourth valve is present, as with some piccolo trumpets, it usually lowers the pitch a perfect fourth (five semitones). Used singly and in combination these valves make the instrument fully chromatic, i.e., able to play all twelve pitches of classical music. For more information about the different types of valves, see Brass instrument valves. The pitch of the trumpet can be raised or lowered by the use of the tuning slide. Pulling the slide out lowers the pitch; pushing the slide in raises it. To overcome the problems of intonation and reduce the use of the slide, Renold Schilke designed the tuning-bell trumpet. Removing the usual brace between the bell and a valve body allows the use of a sliding bell; the player may then tune the horn with the bell while leaving the slide pushed in, or nearly so, thereby improving intonation and overall response. A trumpet becomes a closed tube when the player presses it to the lips; therefore, the instrument only naturally produces every other overtone of the harmonic series. The shape of the bell makes the missing overtones audible. Most notes in the series are slightly out of tune and modern trumpets have slide mechanisms for the first and third valves with which the player can compensate by "throwing" (extending) or retracting one or both slides, using the left thumb and ring finger for the first and third valve slides respectively. 
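The valve arithmetic described above can be sketched in a few lines of Python. This is a standalone illustration, not from the source: the valve-to-semitone mapping follows the figures given in the text, and the frequency helper assumes equal temperament.

```python
from itertools import chain, combinations

# Semitone lowering of each valve, as described in the text:
# valve 1 = whole step (2), valve 2 = half step (1), valve 3 = three semitones.
VALVE_SEMITONES = {1: 2, 2: 1, 3: 3}

def all_combinations():
    """Yield every subset of the three valves (8 subsets in total)."""
    valves = list(VALVE_SEMITONES)
    return chain.from_iterable(
        combinations(valves, r) for r in range(len(valves) + 1)
    )

lowerings = {combo: sum(VALVE_SEMITONES[v] for v in combo)
             for combo in all_combinations()}

# Eight combinations of three valves...
assert len(lowerings) == 8
# ...but only seven distinct tubing lengths: valve 3 alone duplicates
# the 1-2 combination, the "alternate fingering" mentioned earlier.
assert len(set(lowerings.values())) == 7
assert lowerings[(3,)] == lowerings[(1, 2)] == 3

def frequency_ratio(semitones_down):
    """Equal-tempered frequency factor for lowering by n semitones."""
    return 2 ** (-semitones_down / 12)

# All three valves together lower the pitch by 6 semitones (a tritone),
# dropping the sounding frequency to about 70.7% of the open-horn pitch.
print(round(frequency_ratio(6), 3))  # → 0.707
```

The duplicated sum for valve 3 versus valves 1+2 is exactly why eight fingerings yield only seven tube lengths.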
The most common type is the B♭ trumpet, but A, C, D, E♭, E, low F, and G trumpets are also available. The C trumpet is most common in American orchestral playing, where it is used alongside the B♭ trumpet. Orchestral trumpet players are adept at transposing music at sight, frequently playing music written for the A, B♭, D, E♭, E, or F trumpet on the C trumpet or B♭ trumpet. The smallest trumpets are referred to as piccolo trumpets. The most common of these are built to play in both B♭ and A, with separate leadpipes for each key. The tubing in the B♭ piccolo trumpet is one-half the length of that in a standard B♭ trumpet. Piccolo trumpets in G, F and C are also manufactured, but are less common. Many players use a smaller mouthpiece on the piccolo trumpet, which requires a different sound production technique from the B♭ trumpet and can limit endurance. Almost all piccolo trumpets have four valves instead of the usual three — the fourth valve lowers the pitch, usually by a fourth, to assist in the playing of lower notes and to create alternate fingerings that facilitate certain trills. Maurice André, Håkan Hardenberger, David Mason, and Wynton Marsalis are some well-known trumpet players known for their additional virtuosity on the piccolo trumpet. Trumpets pitched in the key of low G are also called sopranos, or soprano bugles, after their adaptation from military bugles. Traditionally used in drum and bugle corps, sopranos have featured both rotary valves and piston valves. The bass trumpet is usually played by a trombone player, being pitched in the same range. Bass trumpet is played with a shallower trombone mouthpiece, and music for it is written in treble clef. The most common keys for bass trumpets are C and B♭. Both C and B♭ bass trumpets are transposing instruments, sounding an octave (C) or a major ninth (B♭) lower than written. The historical slide trumpet was probably first developed in the late 14th century for use in alta cappella wind bands. 
Deriving from early straight trumpets, the Renaissance slide trumpet was essentially a natural trumpet with a sliding leadpipe. This single slide was rather awkward, as the entire corpus of the instrument moved, and the range of the slide was probably no more than a major third. Originals were probably pitched in D, to fit with shawms in D and G, probably at a typical pitch standard near A=466 Hz. As no known instruments from this period survive, the details—and even the existence—of a Renaissance slide trumpet are a matter of conjecture and debate among scholars. Some slide trumpet designs saw use in England in the 18th century. The pocket trumpet is a compact B♭ trumpet. The bell is usually smaller than a standard trumpet and the tubing is more tightly wound to reduce the instrument size without reducing the total tube length. Its design is not standardized, and the quality of various models varies greatly. It can have a tone quality and projection unique in the trumpet world: a warm sound and a voice-like articulation. Since many pocket trumpet models suffer from poor design as well as cheap and imprecise manufacturing, the intonation, tone color and dynamic range of such instruments are severely hindered. Professional-standard instruments are, however, available. While they are not a substitute for the full-sized instrument, they can be useful in certain contexts. The jazz musician Don Cherry was renowned for his playing of the pocket instrument. The herald trumpet has an elongated bell extending far in front of the player, allowing a standard length of tubing from which a flag may be hung; the instrument is mostly used for ceremonial events such as parades and fanfares. Monette designed the flumpet in 1989 for jazz musician Art Farmer. It is a hybrid instrument with elements of trumpet and flugelhorn, sharing the three piston valve design and with a pitch of B♭. 
There are also rotary-valve, or German, trumpets (which are commonly used in professional German and Austrian orchestras) as well as alto and Baroque trumpets. Another variant of the standard trumpet is the Vienna valve trumpet, used primarily in Viennese brass ensembles and orchestras such as the Vienna Philharmonic and Mnozil Brass. The trumpet is often confused with its close relative the cornet, which has a more conical tubing shape compared to the trumpet's more cylindrical tube. This, along with additional bends in the cornet's tubing, gives the cornet a slightly mellower tone, but the instruments are otherwise nearly identical. They have the same length of tubing and, therefore, the same pitch, so music written for cornet and trumpet is interchangeable. Another relative, the flugelhorn, has tubing that is even more conical than that of the cornet, and an even richer tone. It is sometimes augmented with a fourth valve to improve the intonation of some lower notes. On any modern trumpet, cornet, or flugelhorn, pressing the valves indicated by the numbers below produces the written notes shown. "Open" means all valves up, "1" means first valve, "1–2" means first and second valve simultaneously, and so on. The sounding pitch depends on the transposition of the instrument. Engaging the fourth valve, if present, usually drops any of these pitches by a perfect fourth as well. Within each overtone series, the different pitches are attained by changing the embouchure. Standard fingerings above high C are the same as for the notes an octave below (C♯ is 1–2, D is 1, etc.) Each overtone series on the trumpet begins with the first overtone—the fundamental of each overtone series cannot be produced except as a pedal tone. 
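How far each natural harmonic sits from 12-tone equal temperament can be checked with a short Python sketch (illustrative only; the function name is hypothetical). An interval with frequency ratio r spans 1200·log2(r) cents, and 100 cents is one equal-tempered semitone:

```python
import math

def cents_off_equal_temperament(harmonic):
    """Offset, in cents, of the n-th harmonic from the nearest
    12-tone equal-tempered pitch above the fundamental."""
    cents = 1200 * math.log2(harmonic)       # interval above the fundamental
    return cents - 100 * round(cents / 100)  # distance to nearest semitone

for n in range(2, 9):
    print(f"harmonic {n}: {cents_off_equal_temperament(n):+6.1f} cents")
```

Harmonics 2, 4 and 8 (octaves) come out exact, the 3rd and 6th about 2 cents sharp, the 5th about 14 cents flat, and the 7th harmonic (the sixth overtone) about 31 cents flat — which is why fingerings relying on that partial are generally avoided.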
Notes in parentheses are the sixth overtone, representing a pitch with a frequency of seven times that of the fundamental; while this pitch is close to the note shown, it is slightly flat relative to equal temperament, and use of those fingerings is generally avoided. The fingering schema arises from the length of each valve's tubing (a longer tube produces a lower pitch). Valve "1" increases the tubing length enough to lower the pitch by one whole step, valve "2" by one half step, and valve "3" by one and a half steps. This scheme and the nature of the overtone series create the possibility of alternate fingerings for certain notes. For example, third-space "C" can be produced with no valves engaged (standard fingering) or with valves 2–3. Also, any note produced with 1–2 as its standard fingering can also be produced with valve 3 – each drops the pitch by one and a half steps. Alternate fingerings may be used to improve facility in certain passages, or to aid in intonation. Extending the third valve slide when using the fingerings 1–3 or 1-2-3 further lowers the pitch slightly to improve intonation. Taking the partials of the harmonic series that a modern B♭ trumpet can play for each combination of valves pressed, some notes are in tune with 12-tone equal temperament and some are not. The following tables show all notes from all partials from all valve combinations in order up to high C (C6). This often gives multiple fingerings for the same note and some notes that are not usually useful. Various types of mutes can be used to alter the sound of the instrument when placed in or over the bell. While most types of mutes do decrease the volume the instrument produces, as the name implies, the sound modification is typically the primary reason for their use. Types of mutes most commonly used to alter the sound of the instrument are: Straight Mutes, Harmon Mutes (aka "Wah-Wah" Mutes), Plunger Mutes, Bucket Mutes, and Cup Mutes. 
Descriptions of their construction and sound quality are below: Straight Mute: Constructed of either aluminum, which produces a bright, piercing sound, or stone-lined cardboard, which produces a stuffy sound. Harmon Mute: Constructed of aluminum and consists of two parts called the "stem" and the "body". The stem can be extended or removed to produce different timbres of sound. This mute is also called the "Wah-Wah" mute due to its distinctive sound, created by the player placing their hand over the stem opening and waving it back and forth. Plunger Mute: Most often made of a rubber bathroom plunger without the stick. This is used to manipulate sound by the player holding it over the bell with their left hand. Bucket Mute: Constructed from cardboard and cloth, this mute is clipped to the end of the bell and used to muffle the sound almost completely. Cup Mute: Also constructed of cardboard, this mute is shaped exactly like a straight mute but includes a cup at the end. In many models the cup is adjustable, much like the stem on the Harmon mute, and produces a softer, more muffled sound than a traditional straight mute. The standard trumpet range extends from the written F♯ immediately below middle C up to about three octaves higher (F♯3 – F♯6). Traditional trumpet repertoire rarely calls for notes beyond this range, and the fingering tables of most method books peak at the high C, two octaves above middle C. Several trumpeters have achieved fame for their proficiency in the extreme high register, among them Maynard Ferguson, Cat Anderson, Dizzy Gillespie, Doc Severinsen, and more recently Wayne Bergeron, Thomas Gansch, James Morrison, Jon Faddis and Arturo Sandoval. It is also possible to produce pedal tones below the low F♯, a device occasionally employed in the contemporary repertoire for the instrument. Contemporary music for the trumpet makes wide use of extended trumpet techniques. 
Flutter tonguing: The trumpeter rolls the tip of the tongue to produce a growling-like tone, as if rolling an "R" in Spanish. This technique is widely employed by composers like Berio and Stockhausen. Growling: Simultaneously playing a tone while using the back of the tongue to vibrate the uvula, creating a distinct sound. Most trumpet players will use a plunger with this technique to achieve a particular sound heard in much Chicago jazz of the 1950s. Double tonguing: The player articulates using the syllables ta-ka ta-ka ta-ka. Triple tonguing: The same as double tonguing, but with the syllables ta-ta-ka ta-ta-ka ta-ta-ka or ta-ka-ta ta-ka-ta. Doodle tongue: The trumpeter tongues as if saying the word "doodle". This is a very faint tonguing, similar in sound to a valve tremolo. Glissando: Trumpeters can slide between notes by depressing the valves halfway and changing the lip tension. Modern repertoire makes extensive use of this technique. Vibrato: It is often regulated in contemporary repertoire through specific notation. Composers can call for everything from fast, slow or no vibrato to actual rhythmic patterns played with vibrato. Pedal tone: Composers have written for two-and-a-half octaves below the low F♯, which is at the bottom of the standard range. Extreme low pedals are produced by slipping the lower lip out of the mouthpiece. Claude Gordon assigned pedals as part of his trumpet practice routines, which were a systematic expansion of his lessons with Herbert L. Clarke. The technique was pioneered by Bohumir Kryl. Microtones: Composers such as Scelsi and Stockhausen have made wide use of the trumpet's ability to play microtonally. Some instruments feature a fourth valve that provides a quarter-tone step between each note. The jazz musician Ibrahim Maalouf uses such a trumpet, invented by his father to make it possible to play Arab maqams. 
Valve tremolo: Many notes on the trumpet can be played with several different valve combinations. By alternating between valve combinations on the same note, a tremolo effect can be created. Berio makes extended use of this technique in his "Sequenza X." Noises: By hissing, clicking, or breathing through the instrument, the trumpet can be made to resonate in ways that do not sound at all like a trumpet. Noises may require amplification. Preparation: Composers have called for trumpeters to play under water, or with certain slides removed. It is increasingly common for composers to specify all sorts of preparations for trumpet. Extreme preparations involve alternate constructions, such as double bells and extra valves. Split tone: Trumpeters can produce more than one tone simultaneously by vibrating the two lips at different speeds. The interval produced is usually an octave or a fifth. Lip-trill or shake: Also known as "lip-slurs". By rapidly varying air speed, but not changing the depressed valves, the pitch varies quickly between adjacent harmonic partials. Shakes and lip-trills can vary in speed, and the distance between the partials can be as large or small as the musician desires. Traditionally, however, lip-trills and shakes move to the next partial up from the written note. Multiphonics: Playing a note and "humming" a different note simultaneously; for example, sustaining a middle C and humming a major third "E" at the same time. Circular breathing: A technique wind players use to produce uninterrupted tone, without pauses for breaths. The player puffs up the cheeks, storing air, then breathes in rapidly through the nose while using the cheeks to continue pushing air outwards. One trumpet method is Jean-Baptiste Arban's "Complete Conservatory Method for Trumpet (Cornet)". Other well-known method books include "Technical Studies" by Herbert L. 
Clarke, "Grand Method" by Louis Saint-Jacome, "Daily Drills and Technical Studies" by Max Schlossberg, and methods by Ernest S. Williams, Claude Gordon, Charles Colin, James Stamp, and Louis Davidson. A common method book for beginners is Walter Beeler's "Method for the Cornet", and there have been several instruction books written by virtuoso Allen Vizzutti. Merri Franquin wrote a "Complete Method for Modern Trumpet", which fell into obscurity for much of the twentieth century until public endorsements by Maurice André revived interest in this work. In early jazz, Louis Armstrong was well known for his virtuosity and his improvisations on the Hot Five and Hot Seven recordings, and his switch from cornet to trumpet is often cited as heralding the trumpet's dominance over the cornet in jazz. Dizzy Gillespie was a gifted improviser with an extremely high (but musical) range, building on the style of Roy Eldridge but adding new layers of harmonic complexity. Gillespie had an enormous impact on virtually every subsequent trumpeter, both by the example of his playing and as a mentor to younger musicians. Miles Davis is widely considered one of the most influential musicians of the 20th century—his style was distinctive and widely imitated. Davis' phrasing and sense of space in his solos have been models for generations of jazz musicians. Cat Anderson, who played in Duke Ellington's big band, was known for his ability to play extremely high and extremely loud. Maynard Ferguson came to prominence playing in Stan Kenton's orchestra, before forming his own band in 1957. He was noted for being able to play accurately in a remarkably high register. Anton Weidinger developed in the 1790s the first successful keyed trumpet, capable of playing all the chromatic notes in its range. 
Joseph Haydn's Trumpet Concerto was written for him in 1796 and startled contemporary audiences with its novelty, shown off by stepwise melodies played low in the instrument's range.
https://en.wikipedia.org/wiki?curid=30353
Tricky (musician) Adrian Nicholas Matthews Thaws (born 27 January 1968), better known by his stage name Tricky, is an English record producer, rapper and actor. Born and raised in Bristol, he began his career as an early collaborator of Massive Attack before embarking on a solo career with his debut album, "Maxinquaye", in 1995. The release won Tricky popular acclaim and marked the beginning of a lengthy collaborative partnership with vocalist Martina Topley-Bird. He released four more studio albums before the end of the decade, including "Pre-Millennium Tension" and the pseudonymous "Nearly God", both in 1996. He has gone on to release eight studio albums since 2000, most recently "Ununiform" (2017). Tricky is a pioneer of trip hop music, and his work is noted for its dark, layered musical style that blends disparate cultural influences and genres, including hip hop, alternative rock and ragga. He has collaborated with a wide range of artists over the course of his career, including Terry Hall, Björk, Gravediggaz, Grace Jones, and PJ Harvey. Tricky was born in the Knowle West neighbourhood of Bristol, to a Jamaican father and a mixed-race Anglo-Guyanese mother. His mother, Maxine Quaye, either committed suicide or died due to epilepsy complications when Tricky was four. His father, Roy Thaws, who left the family before Tricky was born, operated the Studio 17 sound system (formerly known as "Tarzan the High Priest") with his brother Rupert and father Hector. Bristol musician Bunny Marrett claimed in 2012, "It became the most popular sound system in Bristol at the time." Tricky experienced a difficult childhood in Knowle West, an economically deprived area in Southern Bristol. He became involved in crime at an early age, and joined a gang that was involved in car theft, burglary, fights and promiscuity. Tricky spent his youth in the care of his grandmother, who often let him watch old horror films instead of going to school. 
At the age of 15, he began to write lyrics ("I like to rock, I like to dance, I like pretty girls taking down their pants" "MixMag", 1996). At 17, he spent some time in prison after he purchased forged £50 notes from a friend, who later informed the police. Tricky stated in an interview afterward: "Prison was really good. I'm never going back". In the mid-1980s, Tricky met DJ Milo and spent time with a sound system called the Wild Bunch, which by 1987 evolved into Massive Attack. He received the nickname "Tricky Kid" and at age eighteen became a member of the Fresh 4, a rap group built from the Wild Bunch. He also rapped on Massive Attack's acclaimed debut album "Blue Lines" (1991). In 1991, before the release of Massive Attack's album "Blue Lines", he met Martina Topley-Bird in Bristol. Some time later she came to his house, and mentioned to Tricky and Mark Stewart that she could sing. Martina was only fifteen years old, but her "honey-coated vox" impressed them and they recorded a song called "Aftermath" (although "The Face" '95 mentions that the first song they recorded together was called "Shoebox"). Tricky showed "Aftermath" to Massive Attack, but they were not interested, so in 1993 he decided to press a few hundred vinyl copies of the song. He cut it directly off the tape, so that the song is basically "just bassline and hiss" ("NME" 1994). In 1995, a white label got him a contract with Island Records and he started to record his first solo album, "Maxinquaye". Tricky left Massive Attack to release his debut album "Maxinquaye", co-produced by himself and Mark Saunders and prominently featuring singer Martina Topley-Bird. The album was successful and Tricky consequently attained international fame, something he was notably uncomfortable with. The "Maxinquaye" album review by "Rolling Stone" read: "Tricky devoured everything from American hip-hop and soul to reggae and the more melancholic strains of '80s British rock". 
Authors David Hesmondhalgh and Caspar Melville wrote in the book "Global Noise: Rap and Hip-Hop Outside the USA": "Tricky showed his debt to hip-hop aesthetics by recontextualising samples and slices of both the most respected black music (Public Enemy) and the tackiest pop (quoting David Cassidy's "How Can I Be Sure?")." As the "Rolling Stone" article further explained, Tricky created "a mercurial style of dance music that immediately finds its own fast feet." Tricky failed to complete a number of lyrics for the Massive Attack album "Protection" and gave the band some of the lyrics he had written for "Maxinquaye" instead. Thus, there are songs across the two albums that largely share the same lyrics – entitled "Overcome" and "Hell is 'Round the Corner" on "Maxinquaye" and "Karmacoma" and "Eurochild" on "Protection", respectively. Tricky found it difficult to cope with the huge success of "Maxinquaye" and subsequently eschewed the laid-back soul sound of the first album to create an increasingly edgy and aggressive punk style of music. In 1996, Neneh Cherry and Björk appeared as guests on his second album "Nearly God". The opening number was a cover of the Siouxsie and the Banshees pre-trip-hop song "Tattoo" that had previously inspired Tricky when he forged his style. In 2001, Tricky appeared on the "Thirteen Ghosts" soundtrack with the song "Excess", which briefly features Alanis Morissette during two of the choruses. In 2002 that song also appeared on the "Queen of the Damned" soundtrack. Tricky's studio album "Knowle West Boy" was released in the UK and Ireland in July 2008, and in September 2008 in the US. The first single from the album was "Council Estate" and features the artist as the sole vocalist: "It's the first single I've ever done with just me on vocals. I couldn't whisper that song. I had to come out of myself and do a loud, screaming vocal. I wanted to be a proper frontman on that one." 
In an interview with "The Skinny" in July 2008, Tricky mentioned that "Knowle West Boy" was the first album for which he decided to enlist a co-producer. Ex-Suede guitarist Bernard Butler was Tricky's initial selection, but, less than enamoured with Butler's technical prowess, Tricky finished the album by totally re-recording all of the material. On 8 December 2009, Tricky's 1995 debut album "Maxinquaye" was reissued with a bonus 13-track CD featuring B-sides, outtakes and seven previously unreleased mixes of songs such as "Overcome", "Hell is Round the Corner" and "Black Steel". In December 2009, the media reported that Massive Attack met Tricky in Paris and asked him to work on a future project—Daddy G said: "Things seem like they've healed between us and Tricky. It's been quite well documented how us and Tricky get on, hasn't it? It's not that well, but things have changed. Things have softened up. We saw Tricky a couple of weeks ago in Paris and it was quite an amicable meeting after five or six years." Tricky agreed to record with the band and he revealed in a June 2013 interview that "there's a couple of songs which are OK, which are really good actually to be honest with you". However, Tricky also stated in June 2013 that he could not spend more than two or three days with Massive Attack and described band member Daddy G as "very arrogant". Tricky's ninth album "Mixed Race" was released on 27 September 2010 and the first single from the album became available on 23 August. The album includes contributions from Franky Riley, Terry Lynn, Bobby Gillespie, Hamadouche, Blackman and Tricky's youngest brother Marlon Thaws. In June 2011, Tricky's then label Brownpunk signed on Mexican band My Black Heart Machine for one single, "It Beats Like This", which Tricky co-produced. My Black Heart Machine was then commissioned by the label to cover a song from Maxinquaye for an album of covers by Brownpunk's roster; the band chose "Hell Is Round the Corner". 
"It Beats Like This" was released independently by the band on their first EP in April 2013. Tricky produced rapper Omni's album "IamOmni (produced by Tricky)" (released under the moniker "IamOmni") that was available from 30 August 2011 as a free download on Omni's official site. In April 2012, Tricky performed "Maxinquaye" with Martina Topley-Bird at several concerts around the UK, including, for the first time in several years, his home town of Bristol. The concerts featured regular interruptions orchestrated by Tricky, who brought his youngest brother, Marlon Thaws, to rap on stage alongside other local rappers, as well as encouraging the audience to come up on stage. The review of the concert in Manchester said it was "shambolic" and a "car crash", with Tricky often leaving the stage and continuously forgetting his words, leaving Topley-Bird to carry the delivery of the tracks, resulting in many leaving early after repeated issues with Tricky's behaviour and shouts of "wanker" from the crowd. On 26 June 2011, Tricky appeared on stage during Beyoncé's headline slot on the Pyramid Stage at Glastonbury for the track "Baby Boy". Partly as a result of technical difficulties with his microphone, he later stated he was "mortified" by his own performance, saying, "I've never been so embarrassed. My body just froze". In February 2013, Tricky announced the release of a new album, "False Idols", the follow-up to his 2010 "Mixed Race", featuring Peter Silberman, Fifi Rong and Nneka, and released a statement about the album. In spring 2014, it was announced that Tricky was to perform at a number of festivals throughout Europe over the summer of 2014, including Control Day Out in Romania, festival Couleur Café in Belgium, Positivus Festival in Latvia and Galtres Parklands Festival in England, the latter of which he co-headlined with contemporaries Morcheeba. Tricky announced a new album titled "Adrian Thaws" in June 2014. 
It was released on 8 September 2014. "Skilled Mechanics" was released in January 2016. His thirteenth official studio album, "ununiform", was released on 22 September 2017, and featured collaborations with Asia Argento, Avalon Lurks, and Martina Topley-Bird, as well as a cover of Hole's "Doll Parts". Blink, an imprint of Bonnier Books UK, has acquired Tricky's autobiography. Commissioning editor Kerri Sharp acquired world rights from K7 Music – the independent music company headquartered in Berlin, where Tricky now lives. The book is currently untitled and will sell as a £20 hardback in October 2019. His new EP "20,20" was released on 6 March 2020. "Lonely Dancer", recorded with the singer Anika, who collaborated with Geoff Barrow of Portishead fame in Bristol in 2010, was the first song to be released. By the time "Pre-Millennium Tension" was released in 1996, Tricky was increasingly irritated with the British press, particularly articles written in "The Face" magazine. "The Face" had been an early champion of "Maxinquaye", but saw Tricky as more of a duo than a solo project. "The Face" published an article claiming that vocalist Martina Topley-Bird had to single-handedly bring up the child that Tricky had fathered. Tricky has also been concerned with racial stereotyping in the media. In the documentary "Naked & Famous", he stated that photographers wanted him to frown angrily in photos. He points to a cover of "The Big Issue", where he has a milder look on his face, as being more representative of how he feels. In the song "Tricky Kid" from "Pre-Millennium Tension", he wrote: "As long as you're humble/Let you be the king of jungle". Throughout his work, he blurs the normally clear gender definitions found in hip-hop. Despite the heavy influence he drew from American hip-hop in his debut album, "Maxinquaye", he fights against typical gender representations by, for example, dressing as a woman on the side sleeve of his album cover. 
As many of his tracks blend elements of varying types of music, creating a difficult-to-define sound, so do his lyrics, creating a more ambiguous and blurry take on gender and sexuality. Tricky has guest-starred on a number of albums, including a notable appearance on Live's fifth studio album, "V". This appearance came as Tricky and Live's lead singer Ed Kowalczyk had developed a close friendship, with Kowalczyk contributing vocals to "Evolution Revolution Love", a track on Tricky's album "Blowback". Tricky has also acted in various films. He appeared in a significant supporting role in the 1997 Luc Besson film "The Fifth Element", playing the right-hand man "Right Arm" to evil businessman Mr. Zorg. He also appears briefly in the 2004 Olivier Assayas film "Clean", playing himself, and had a large role in the music video for "Parabol/Parabola" by Tool. He was also rumoured to have a brief cameo in John Woo's 1997 movie "Face/Off", but has denied that this was the case, although his single "Christiansands" was featured in the movie. Tricky also appeared as Finn, a musician who loves and dumps main character Lynn, in the US sitcom "Girlfriends". In 2001, Tricky appeared in online advertising for the web series "We Deliver", about a cannabis delivery service in New York City. Though he did not appear in any episodes, the advertising portrays him as a customer of the service. The launch of a record label entitled "Brown Punk" was announced in mid-2007 as a collaboration between Tricky and former Island Records executive Chris Blackwell. At the time, Tricky said: "Brown Punk represents a positive movement where you find intellectuals mixing with the working class, rock mixing with reggae and indie mixing with emo." The Dirty, The Gospel, Laid Blak and Mexican band My Black Heart Machine were acts signed to the label, but as of October 2013, the label appears to be inactive. Tricky has stated that he has "been through a lot... 
I've been moved around from family to family, never stayed in one house from when I was born to the age of 16. ...I'm not normal. It's got a lot to do with my upbringing...Staying somewhere for three years then going off for three years. My uncles being villains. All that stuff. I've got quite a dysfunctional family...for some reason, in my family, the mothers always give the kids to the grandmothers". Tricky has fourteen paternal siblings. He was in a brief relationship with Icelandic singer-songwriter Björk in the 1990s. When asked in mid-2013 about the time the pair spent together, Tricky stated: "I wasn't good for Björk. I wasn't healthy for her. I feel she was really good to me, she gave me a lot of love and she really was a good person to me. I think she cared about me, right?" He was also briefly married to Carmen Ejogo in early 1998 in Las Vegas. Tricky fathered a daughter, Mina Mazy, on 19 March 1995 with Martina Topley-Bird, a musician he discovered when she was sitting on a wall near his Bristol home. Mazy struggled with depression and took her own life on 8 May 2019. Following her death, Tricky stated: "It feels like I'm in a world that doesn't exist, knowing nothing will ever be the same again. No words or text can really explain, my soul feels empty." Tricky also has a daughter with a woman named Malika, whom he met when he was 15. In 2008, he stated he had only seen this daughter one time. In 2015, Tricky moved to Berlin, Germany.
https://en.wikipedia.org/wiki?curid=30354
Thelema Thelema () is an occult social or spiritual philosophy developed in the early 1900s by Aleister Crowley, an English writer, mystic, and ceremonial magician. The word "thelema" is the English transliteration of the Koine Greek noun θέλημα (), "will", from the verb ("thélō"): "to will, wish, want or purpose". Crowley asserted or believed himself to be the prophet of a new age, the Æon of Horus, based upon a spiritual experience that he and his wife, Rose Edith, had in Egypt in 1904. By his account, a possibly non-corporeal or "praeterhuman" being that called itself Aiwass contacted him (through Rose) and subsequently dictated a text known as "The Book of the Law" or "Liber AL vel Legis", which outlined the principles of Thelema. The Thelemic pantheon—a collection of gods and goddesses who either literally exist or serve as symbolic archetypes or metaphors—includes a number of deities, primarily a trio adapted from ancient Egyptian religion, who are the three speakers of "The Book of the Law": Nuit, Hadit and Ra-Hoor-Khuit. In at least one instance, Crowley described these deities as a "literary convenience". Three statements in particular distill the practice and ethics of Thelema: (1) "Do what thou wilt shall be the whole of the Law." (This means that adherents of Thelema should seek out and follow their true path, i.e. find or determine their True Will.) (2) "Love is the law, love under will." (3) "Every man and every woman is a star." Among the corpus of ideas, Thelema describes what is termed "the Æon of Horus" (the "Crowned and Conquering Child")—as distinguished from an earlier "Æon of Isis" (mother-goddess idea) and "Æon of Osiris" (typified by bronze-age redeemer-based, divine-intermediary, or slain/flayed-god archetype religions such as Christianity, Mithraism, Zoroastrianism, Mandaeism, Odinism, Osiris, Attis, Adonis, etc.). 
Many adherents (also known as "Thelemites") emphasize the practice of Magick (glossed generally as the "Science and Art of causing Change to occur in conformity with Will"). Crowley's later writings included related commentary and hermeneutics but also additional "inspired" writings that he collectively termed The Holy Books of Thelema. He also associated Thelemic spiritual practice with concepts rooted in occultism, yoga, and Eastern and Western mysticism, especially the Qabalah. Aspects of Thelema and Crowley's thought in general inspired the development of Wicca and, to a certain degree, the rise of Modern Paganism as a whole, as well as chaos magick and some variations of Satanism. Some scholars, such as Hugh Urban, also believe Thelema to have been an influence on the development of Scientology, but others, such as J. Gordon Melton, deny any such connection. The word θέλημα (thelema) is rare in Classical Greek, where it "signifies the appetitive will: desire, sometimes even sexual", but it is frequent in the Septuagint. Early Christian writings occasionally use the word to refer to the human will, and even the will of the Devil, but it usually refers to the will of God. One well-known example is in the "Lord's Prayer": "Thy kingdom come. Thy will (θέλημα) be done, on earth as it is in heaven." It is used later in the same gospel: "He went away again a second time and prayed, saying, 'My Father, if this cannot pass away unless I drink it, Thy will be done.'" In his 5th-century sermon, Augustine of Hippo gave a similar instruction: "Love, and what thou wilt, do" ("Dilige et quod vis fac"). In the Renaissance, a character named "Thelemia" represents will or desire in the "Hypnerotomachia Poliphili" of the Dominican friar Francesco Colonna. The protagonist Poliphilo has two allegorical guides, Logistica (reason) and Thelemia (will or desire). When forced to choose, he chooses fulfillment of his sexual will over logic.
Colonna's work was a great influence on the Franciscan friar François Rabelais, who in the 16th century used "Thélème", the French form of the word, as the name of a fictional abbey in his novels "Gargantua and Pantagruel". The only rule of this abbey was "fay çe que vouldras" ("Fais ce que tu veux", or, "Do what thou wilt"). In the mid-18th century, Sir Francis Dashwood inscribed the adage on a doorway of his abbey at Medmenham, where it served as the motto of the Hellfire Club. Rabelais's Abbey of Thelema has been referred to by later writers Sir Walter Besant and James Rice, in their novel "The Monks of Thelema" (1878), and C. R. Ashbee in his utopian romance "The Building of Thelema" (1910). Classical philology regards the Greek "boule" (βουλή), not "thelo" (θέλω) or "thelema", as the forerunner of today's concept of will. Greek has two words for will, which are used partly synonymously in the New Testament: "thelema" and "boule". The verb "thelo" appears very early (in Homer and in early Attic inscriptions) with the meanings "to be ready", "to decide" and "to desire" (in Homer, 3, 272, also in the sexual sense). Aristotle says in the book "De plantis" that the goal of the human will is perception, unlike plants, which do not have 'epithymia'; in this passage 'thelema' replaces 'epithymia' and denotes the desiring driving force in man as something neutral, not morally determined. In the Septuagint the term is used for the will of God himself, for the religious desire of the God-fearing, and for the royal will of a secular ruler. It is thus used only for high ethical willingness in faith, for the exercise of authority by rulers, or for non-human will, but not for more profane striving.
In the translation of the Greek Old Testament (the Septuagint), the terms "boule" and "thelema" appear, whereas in the Vulgate both are rendered by the Latin "voluntas" ("will"). Thus the distinction between the two concepts was lost. In the Koine Greek of the New Testament, "thelema" is used 62 or 64 times, twice in the plural ("thelemata"). Here, God's will is always and exclusively designated by the word "thelema" (θέλημα, mostly in the singular), as the theologian Federico Tolli points out by means of the "Theological Dictionary of the New Testament" of 1938 ("Your will be done on earth as it is in heaven"). The term is used in the same way by the Apostle Paul and Ignatius of Antioch. For Tolli it follows that the genuine idea of Thelema does not contradict the teachings of Jesus (Tolli, 2004). François Rabelais was a Franciscan and later a Benedictine monk of the 16th century. Eventually he left the monastery to study medicine and moved to the French city of Lyon in 1532. There he wrote "Gargantua and Pantagruel", a connected series of books. They tell the story of two giants, a father (Gargantua) and his son (Pantagruel), and their adventures, written in an amusing, extravagant, and satirical vein. Most critics today agree that Rabelais wrote from a Christian humanist perspective. The Crowley biographer Lawrence Sutin notes this when contrasting the French author's beliefs with the Thelema of Aleister Crowley. In the previously mentioned story of Thélème, which critics analyze as referring in part to the suffering of loyal Christian reformists or "evangelicals" within the French Church, the reference to the Greek word θέλημα "declares that the will of God rules in this abbey". Sutin writes that Rabelais was no precursor of Thelema, his beliefs containing elements of Stoicism and Christian kindness. In his first book (ch. 52–57), Rabelais writes of this Abbey of Thélème, built by the giant Gargantua.
It is a classical utopia presented in order to critique and assess the state of the society of Rabelais's day, as opposed to a modern utopian text that seeks to create the scenario in practice. It is a utopia where people's desires are more fulfilled. Satirical, it also epitomises the ideals considered in Rabelais's fiction. The inhabitants of the abbey were governed only by their own free will and pleasure, the only rule being "Do What Thou Wilt". Rabelais believed that men who are free, well born and bred have honour, which intrinsically leads to virtuous actions. When constrained, their noble natures turn instead to remove their servitude, because men desire what they are denied. Some modern Thelemites consider Crowley's work to build upon Rabelais's summary of the instinctively honourable nature of the Thelemite. Rabelais has been variously credited with the creation of the philosophy of Thelema, as one of the earliest people to refer to it, or with being "the first Thelemite". However, the current National Grand Master General of the U.S. Ordo Templi Orientis Grand Lodge has stated: Aleister Crowley wrote in "The Antecedents of Thelema," (1926), an incomplete work not published in his day, that Rabelais not only set forth the law of Thelema in a way similar to how Crowley understood it, but predicted and described in code Crowley's life and the holy text that he claimed to have received, "The Book of the Law". Crowley said the work he had received was deeper, showing in more detail the technique people should practice, and revealing scientific mysteries. He said that Rabelais confines himself to portraying an ideal, rather than addressing questions of political economy and similar subjects, which must be solved in order to realize the Law. Rabelais is included among the Saints of Ecclesia Gnostica Catholica. 
Sir Francis Dashwood adopted some of the ideas of Rabelais and invoked the same rule in French, when he founded a group called the Monks of Medmenham (better known as the Hellfire Club). An abbey was established at Medmenham, in a property which incorporated the ruins of a Cistercian abbey founded in 1201. The group was known as the Franciscans, not after Saint Francis of Assisi, but after its founder, Francis Dashwood, 11th Baron le Despencer. John Wilkes, George Dodington and other politicians were members. There is little direct evidence of what Dashwood's Hellfire Club practiced or believed. The one direct testimonial comes from John Wilkes, a member who never got into the chapter-room of the inner circle. He describes the group as hedonists who met to "celebrate woman in wine", and added ideas from the ancients just to make the experience more decadent. In the opinion of Lt. Col. Towers, the group derived more from Rabelais than the inscription over the door. He believes that they used caves as a Dionysian oracular temple, based upon Dashwood's reading of the relevant chapters of Rabelais. Sir Nathaniel Wraxall in his "Historical Memoires" (1815) accused the Monks of performing Satanic rituals, but these claims have been dismissed as hearsay. Gerald Gardner and others such as Mike Howard say the Monks worshipped "the Goddess". Daniel Willens argued that the group likely practiced Freemasonry, but also suggests Dashwood may have held secret Roman Catholic sacraments. He asks if Wilkes would have recognized a genuine Catholic Mass, even if he saw it himself and even if the underground version followed its public model precisely. The literal definition of the term "Thelemite" is, according to Merriam-Webster, "one who does as he pleases." One may take this definition to entail, by extension, "one who does their will". 
However, there is no standard conception of what one must believe or do, or what, if anything, one must practice in order to be considered a Thelemite. In other words, there is no standardized Thelemic orthodoxy (Greek: ὀρθοδοξία "orthodoxía", approx. "correct belief", from the ancient Greek δόξα "doxa", lit. "belief") or orthopraxy (Greek: ὀρθοπραξία "orthopraxia", approx. "correct practice"). In the most basic sense, a Thelemite is any person who either does their will (if going by Crowley's conception, their "true" or "pure" will, as opposed to the "mundane" will of the ego) or attempts to discover and do that will. By this loosest conception, any individual who discovers and enacts their true will, or attempts to do so, knowingly or unknowingly, and whether or not they adhere to Crowley's system of Thelema as such, can be called a Thelemite. In the system of the Thelemic mystical order A∴A∴, for instance, Lao Tzu, Gautama Buddha, and Muhammad, all of whom predate the development of Thelema, are identified as individuals who through spiritual attainment became "magi" or possibly "ipsissimi": in the Qabalistic sense, they attained at least the magical grade of "magus", corresponding to the stage of Chokmah on the Tree of Life, if not that of "ipsissimus", which corresponds to Kether; this implies that they attained their True Wills. In a stricter sense, a Thelemite is someone who accepts the Law of Thelema. In an even stricter sense, it is someone who accepts or adheres to "The Book of the Law" (which includes the aforementioned Law), however interpreted. And in the strictest sense, it is someone who adheres to both the "Book of the Law" and, to some extent (greater, lesser, or complete), the remainder of Crowley's writings on Thelema. In "The Book of the Law", Crowley wrote, "Who calls us Thelemites will do no wrong, if he look but close into the word.
For there are therein Three Grades, the Hermit, and the Lover, and the man of Earth." As Crowley leaves the contents of "The Book of the Law" up to individual interpretation, the "grades" of Hermit, Lover, and man of Earth can naturally be taken to signify different things. However, the Thelemic writer IAO131 believes them to correspond to different stages of the mystical or spiritual path, one progressing from man of Earth to Lover to Hermit, culminating in enlightenment. According to him, the man of Earth symbolizes spiritual self-discipline, the Lover experiential communion with the divine, and the Hermit the dissolution of the ego. Thelema was founded by Aleister Crowley (1875–1947), an English occultist and writer. In 1904, Crowley claimed to have received "The Book of the Law" from an entity named Aiwass; it was to serve as the foundation of the religious and philosophical system he called Thelema. Crowley's system of Thelema begins with "The Book of the Law", which bears the official name "Liber AL vel Legis". It was written in Cairo, Egypt, during his honeymoon with his new wife Rose Crowley (née Kelly). This small book contains three chapters, each of which he claimed to have written in exactly one hour, beginning at noon, on April 8, April 9, and April 10, 1904. Crowley claims that he took dictation from an entity named Aiwass, whom he later identified as his own Holy Guardian Angel. Disciple, author, and onetime Crowley secretary Israel Regardie preferred to attribute this voice to the subconscious, but opinions among Thelemites differ widely. Crowley claimed that "no forger could have prepared so complex a set of numerical and literal puzzles" and that study of the text would dispel all doubts about the method by which the book was obtained. Besides the reference to Rabelais, an analysis by Dave Evans shows similarities to "The Beloved of Hathor and Shrine of the Golden Hawk", a play by Florence Farr.
Evans says this may result from the fact that "both Farr and Crowley were thoroughly steeped in Golden Dawn imagery and teachings", and that Crowley probably knew the ancient materials that inspired some of Farr's motifs. Sutin also finds similarities between Thelema and the work of W. B. Yeats, attributing this to "shared insight" and perhaps to the older man's knowledge of Crowley. Crowley wrote several commentaries on "The Book of the Law", the last of which he wrote in 1925. This brief statement, called simply "The Comment", warns against discussing the book's contents, states that all "questions of the Law are to be decided only by appeal to my writings", and is signed Ankh-af-na-khonsu. According to Crowley, every individual has a "True Will", to be distinguished from the ordinary wants and desires of the ego. The True Will is essentially one's "calling" or "purpose" in life. Some later magicians have taken this to include the goal of attaining self-realization by one's own efforts, without the aid of God or other divine authority. This brings them close to the position that Crowley held just prior to 1904. Others follow later works such as "Liber II", saying that one's own will in pure form is nothing other than the divine will. "Do what thou wilt shall be the whole of the Law" for Crowley refers not to hedonism, fulfilling everyday desires, but to acting in response to that calling. The Thelemite is a mystic. According to Lon Milo DuQuette, a Thelemite is anyone who bases their actions on striving to discover and accomplish their true will; when a person does their True Will, it is like an orbit, their niche in the universal order, and the universe assists them. In order for the individual to be able to follow their True Will, the everyday self's socially instilled inhibitions may have to be overcome via deconditioning.
Crowley believed that in order to discover the True Will, one had to free the desires of the subconscious mind from the control of the conscious mind, especially the restrictions placed on sexual expression, which he associated with the power of divine creation. He identified the True Will of each individual with the Holy Guardian Angel, a "daimon" unique to each individual. The spiritual quest to discover what one is meant to do, and to do it, is also known in Thelema as the Great Work. Thelema draws its principal gods and goddesses from Ancient Egyptian religion. The highest deity in the cosmology of Thelema is the goddess Nuit. She is the night sky arched over the Earth, symbolized in the form of a naked woman. She is conceived as the Great Mother, the ultimate source of all things. The second principal deity of Thelema is the god Hadit, conceived as the infinitely small point, complement and consort of Nuit. Hadit symbolizes manifestation, motion, and time. He is also described in "Liber AL vel Legis" as "the flame that burns in every heart of man, and in the core of every star". The third deity in the cosmology of Thelema is Ra-Hoor-Khuit, a manifestation of Horus. He is symbolized as a throned man with the head of a hawk who carries a wand. He is associated with the Sun and the active energies of Thelemic magick. Other deities within the cosmology of Thelema are Hoor-paar-kraat (or Harpocrates), god of silence and inner strength, the brother of Ra-Hoor-Khuit; Babalon, the goddess of all pleasure, known as the Virgin Whore; and Therion, the beast that Babalon rides, who represents the wild animal within man, a force of nature.
Thelemites differ widely in their views of the divine, and these views are often tied to their personal paradigms, including their conceptions of what demarcates objective and subjective reality, as well as falsehood and truth: some hold unique, or otherwise very specific or complex views of the nature of divinity, that are not easily explained; many are supernaturalists, claiming that the supernatural or paranormal in some way exist, and incorporate these assumptions into their spiritual practices in some way; others are religious or spiritual naturalists, viewing the "spiritual" or "sacred"—or whatever they feel is, or may be, in reality, analogous to them, or their equivalents—as identical to the material, natural, or physical. Naturalists, whether religious or spiritual, tend to believe that faith in, or experience of, the supernatural is grounded in falsehood or error, or can be explained by delusion or hallucination. "The Book of the Law" can be taken to imply a kind of pantheism or panentheism: the former is the belief that the divine, or ultimate reality, is coterminous with the totality of the cosmos, interpenetrating all phenomena, the sacred identical with the universe; the latter is the same but moreover holds that the divine, sacred, or ultimate reality in some way transcends the mundane. "The Book of the Law" states, "there is no god but man", and many Thelemites see the divine as the inner, perfected individual state—a "true self" or "higher self" often conceived of as the Holy Guardian Angel (although certain Thelemites view the Angel as a separate entity)—that forms the essence of every person. Yet in the same book, Nuit says, "I am Heaven and there is no other God than me, and my lord Hadit". Some Thelemites are polytheists or henotheists, while others are atheists, agnostics, or apatheists.
Thelemites frequently hold a monistic view of the cosmos, believing that everything is ultimately derived from one initial and universal state of being, often conceived of as Nuit. (Compare the Neoplatonic view of The One.) "The Book of the Law" states that Hadit, representative, in one sense, of motion, matter, energy, and space-time—i.e. physical phenomena—is a manifestation of Nuit. ("The Book of the Law" opens with the verse, "Had! The manifestation of Nuit.") In other words, the cosmos and all its constituents are a manifestation of, and unified by their relationship to, the underlying divine reality, or monad. (Analogous to the Tao of Taoism.) IAO131 writes, quoting "The Book of the Law": "Thelema asserts in its own Bible ('The Book of the Law') that 'Every man and every woman is a star' and that godhead is 'above you & in you' and [that Hadit is] 'the flame that burns in every heart of man, and in the core of every star.'" In Thelemic rituals, the divine, however conceived, is addressed by various names, particularly mystical names derived from the Qabalah, including IAO (ιαω; the name is uttered during the Gnostic Mass)—an early Greek translation of the Hebrew Tetragrammaton—and Ararita (אראריתא), a Hebrew notarikon for the phrase "Achad Rosh Achdotho Rosh Ichudo Temurato Achad" (אחד ראש אחדותו ראש יחודו תמורתו אחד), meaning, roughly, "One is the beginning of his unity, the beginning of his uniqueness; his permutation is one." (The phrase is uttered when performing the Lesser Ritual of the Hexagram, a ritual found in Crowley's "Liber O".) "IAO" is of particular importance in Thelemic ceremonial magick, and functions both as a name for the divine and a magical formula.
Crowley associated this formula with yoga, and noted that its letters can signify the attributes of Isis, Apophis, and Osiris, or birth, death, and resurrection, respectively: stages of change which he believed, and many Thelemites believe, are analogous to the processes constantly undergone by the physical universe. Crowley wrote in his "Magick, Liber ABA, Book 4" that IAO is "the principal and most characteristic formula of Osiris, of the Redemption of Mankind. "I" is Isis, Nature, ruined by "A", Apophis the Destroyer, and restored to life by the Redeemer Osiris." Crowley also created a new formula, based on IAO, that he called the "proper hieroglyph of the Ritual of Self-Initiation in this Aeon of Horus": VIAOV (also spelled FIAOF), which results from adding the Hebrew letter "vau" to the beginning and end of "IAO". According to Crowley, VIAOV is a process whereby a person is elevated to the status of the divine, attains his or her True Will, and yet remains in human form and goes on to "redeem the world": "Thus, he is Man made God, exalted, eager; he has come consciously to his full stature, and so is ready to set out on his journey to redeem the world." (Compare the role of the bodhisattva in Mahayana Buddhism.) Thelemic magick is a system of physical, mental, and spiritual exercises which practitioners believe are of benefit. Crowley defined magick as "the Science and Art of causing Change to occur in conformity with Will", and spelled it with a 'k' to distinguish it from stage magic. He recommended magick as a means for discovering the True Will. Generally, magical practices in Thelema are designed to assist in finding and manifesting the True Will, although some include celebratory aspects as well. Crowley was a prolific writer, integrating Eastern practices with Western magical practices from the Hermetic Order of the Golden Dawn.
He recommended a number of these practices to his followers, including basic yoga (asana and pranayama); rituals of his own devising or based on those of the Golden Dawn, such as the Lesser Ritual of the Pentagram, for banishing and invocation; "Liber Samekh", a ritual for the invocation of the Holy Guardian Angel; eucharistic rituals such as "The Gnostic Mass" and "The Mass of the Phoenix"; and "Liber Resh", consisting of four daily adorations to the sun. Much of his work is readily available in print and online. He also discussed sex magick and sexual gnosis in various forms, including masturbatory, heterosexual, and homosexual practices, and these form part of his suggestions for the work of those in the higher degrees of the Ordo Templi Orientis. Crowley believed that after discovering the True Will, the magician must also remove any elements of himself that stand in the way of its success. The emphasis of Thelemic magick is not directly on material results, and while many Thelemites do practice magick for goals such as wealth or love, it is not required. Those in a Thelemic magical Order, such as the A∴A∴ or Ordo Templi Orientis, work through a series of degrees or grades via a process of initiation. Thelemites who work on their own or in an independent group try to achieve this ascent, or the purpose thereof, using the Holy Books of Thelema and/or Crowley's more secular works as a guide, along with their own intuition. Thelemites, both independent ones and those affiliated with an order, can practice a form of performative prayer known as "Liber Resh". One goal in the study of Thelema within the magical Order of the A∴A∴ is for the magician to obtain the knowledge and conversation of the Holy Guardian Angel: conscious communication with their own personal daimon, thus gaining knowledge of their True Will. The chief task for one who has achieved this goes by the name of "crossing the abyss": completely relinquishing the ego.
If the aspirant is unprepared, he will cling to the ego instead, becoming a Black Brother. Rather than becoming one with God, the Black Brother considers his ego to be god. According to Crowley, the Black Brother slowly disintegrates, while preying on others for his own self-aggrandisement. Crowley taught skeptical examination of all results obtained through meditation or magick, at least for the student. He tied this to the necessity of keeping a magical record or diary that attempts to list all conditions of the event. Remarking on the similarity of statements made by spiritually advanced people about their experiences, he said that fifty years from his time they would have a scientific name based on "an understanding of the phenomenon" to replace such terms as "spiritual" or "supernatural". Crowley stated that his work and that of his followers used "the method of science; the aim of religion", and that the genuine powers of the magician could in some way be objectively tested. This idea has been taken on by later practitioners of Thelema, chaos magic, and magick in general. They may consider that they are testing hypotheses with each magical experiment. The difficulty lies in the broadness of their definition of success, in which they may see as evidence of success things which a non-magician would not define as such, leading to confirmation bias. Crowley believed he could demonstrate, by his own example, the effectiveness of magick in producing certain subjective experiences that do not ordinarily result from taking hashish, enjoying oneself in Paris, or walking through the Sahara desert. It is not strictly necessary to practice ritual techniques to be a Thelemite: given the focus of Thelemic magick on the True Will, Crowley stated that "every intentional act is a magickal act". "Liber AL vel Legis" does make clear some standards of individual conduct. The foremost of these is "Do what thou wilt", which is presented as the whole of the law, and also as a right.
Some interpreters of Thelema believe that this right includes an obligation to allow others to do their own wills without interference, but "Liber AL" makes no clear statement on the matter. Crowley himself wrote that there was no need to detail the ethics of Thelema, for everything springs from "Do what thou Wilt". Crowley wrote several additional documents presenting his personal beliefs regarding individual conduct in light of the Law of Thelema, some of which do address the topic of interference with others: "Liber OZ", "Duty", and "Liber II". "Liber Oz" enumerates some of the rights of the individual implied by the one overarching right, "Do what thou wilt". For each person, these include the right to: live by one's own law; live in the way that one wills to do; work, play, and rest as one will; die when and how one will; eat and drink what one will; live where one will; move about the earth as one will; think, speak, write, draw, paint, carve, etch, mould, build, and dress as one will; love when, where and with whom one will; and kill those who would thwart these rights. "Duty" is described as "A note on the chief rules of practical conduct to be observed by those who accept the Law of Thelema." It is not a numbered "Liber", as are all the documents which Crowley intended for A∴A∴, but rather is listed as a document intended specifically for Ordo Templi Orientis. It has four sections. In "Liber II: The Message of the Master Therion", the Law of Thelema is summarized succinctly as "Do what thou wilt—then do nothing else." Crowley describes the pursuit of the Will as requiring not only detachment from possible results but also tireless energy. It is Nirvana, but in a dynamic rather than static form. The True Will is described as the individual's orbit, and if they seek to do anything else, they will encounter obstacles, as doing anything other than the will is a hindrance to it. The core of Thelemic thought is "Do what thou wilt".
However, beyond this there exists a very wide range of interpretation of Thelema. Modern Thelema is a syncretic philosophy and religion, and many Thelemites try to avoid strongly dogmatic or fundamentalist thinking. Crowley himself put strong emphasis on the unique nature of the Will inherent in each individual rather than on following him, saying he did not wish to found a flock of sheep. Thus, contemporary Thelemites may practice more than one religion, including Wicca, Gnosticism, Satanism, Setianism and Luciferianism. Many adherents of Thelema, none more so than Crowley, recognize correlations between Thelemic and other systems of spiritual thought; most borrow freely from the methods and practices of other traditions, including alchemy, astrology, qabalah, tantra, tarot divination and yoga. For example, Nu and Had are thought to correspond with the Tao and Teh of Taoism, Shakti and Shiva of the Hindu Tantras, Shunyata and Bodhicitta of Buddhism, and Ain Soph and Kether in the Hermetic Qabalah. There are some Thelemites who accept "The Book of the Law" in some way but not the rest of Crowley's "inspired" writings or teachings. Others take only specific aspects of his overall system, such as his magical techniques, ethics, mysticism, or religious ideas, while ignoring the rest. Other individuals who consider themselves Thelemites regard what is commonly presented as Crowley's system as only one possible manifestation of Thelema, and have created original systems, such as those of Nema and Kenneth Grant. One category of Thelemites is non-religious, simply adhering to the philosophical law of Thelema. Crowley encouraged people to think for themselves, making use of what ideas they like and discarding those they find inessential or unreasonable. In "Magical and Philosophical Commentaries on The Book of the Law", Crowley wrote, "It is the mark of the mind untrained to take its own processes as valid for all men, and its own judgments for absolute truth."
"The Book of the Law" gives several holy days to be observed by Thelemites. There are no established or dogmatic ways to celebrate these days, so Thelemites often take to their own devices or celebrate in groups, especially within Ordo Templi Orientis. These holy days are usually observed on fixed dates each year. Aleister Crowley was highly prolific and wrote on the subject of Thelema for over 35 years, and many of his books remain in print. During his time, several others wrote on the subject, including U.S. O.T.O. Grand Master Charles Stansfeld Jones, whose works on Qabalah are still in print, and Major-General J. F. C. Fuller. Jack Parsons was a scientist researching the use of various fuels for rockets at the California Institute of Technology, and one of Crowley's first American students, for a time leading the Agape Lodge of the Ordo Templi Orientis for Crowley in America. He wrote several short works during his lifetime, some later collected as "Freedom is a Two-edged Sword". He died in 1952 as a result of an explosion, and while not a prolific writer himself, has been the subject of two biographies: "Sex and Rockets" by John Carter and "Strange Angel" by George Pendle. Since Crowley's death in 1947, there have been other Thelemic writers, such as Israel Regardie, who edited many of Crowley's works and also wrote a biography of him, "The Eye in the Triangle", as well as books on Qabalah. Kenneth Grant wrote numerous books on Thelema and the occult, such as "The Typhonian Trilogy". A Thelemic organization is any group, community, or organization based on or espousing Thelemic philosophy or principles, or the philosophy or principles put forth in "The Book of the Law". Several modern organizations of various sizes claim to follow the tenets of Thelema.
The two most prominent are both organizations that Crowley headed during his lifetime: the A∴A∴, a magickal and mystical teaching order founded by Crowley, based on the grades of the Golden Dawn system; and Ordo Templi Orientis, an order which initially developed from the Rite of Memphis and Mizraim in the early part of the 20th century, and which includes Ecclesia Gnostica Catholica as its ecclesiastical and religious arm, and Mysteria Mystica Maxima as an initiatory order. Since Crowley's death in 1947, other organizations have formed to carry on his initial work: for example, the Typhonian Order of Kenneth Grant and The Open Source Order of the Golden Dawn. Other groups of widely varying character exist which have drawn inspiration or methods from Thelema, such as the Illuminates of Thanateros and the Temple of Set. Some groups accept the Law of Thelema, but omit certain aspects of Crowley's system while incorporating the works of other mystics, philosophers, and religious systems. The Fraternitas Saturni (Brotherhood of Saturn), founded in 1928 in Germany, accepts the Law of Thelema, but extends it with the phrase "Mitleidlose Liebe!" ("Compassionless love!"). The Thelema Society, also located in Germany, accepts Liber Legis and much of Crowley's work on magick, while incorporating the ideas of other thinkers, such as Friedrich Nietzsche, Charles Sanders Peirce, Martin Heidegger and Niklas Luhmann. The Temple of the Silver Star (not to be confused with the third or "inner order" of A∴A∴) is an academic or educational organization which prepares students to join the A∴A∴ proper. It was founded by Phyllis Seckler "in service to A∴A∴". Other Thelemic organizations include Ordo Astri, Ordo Sunyata Vajra, the Temple of Our Lady of the Abyss, Axis Ordo Mundi Saturna, Order of Chosen Priests and the self-initiatory Thelemic Order of the Golden Dawn (founded by Christopher Hyatt). Thelemites can also be found in other organizations. 
The president of the Church of All Worlds, LaSara FireFox, identifies as a Thelemite. A significant minority of other CAW members also identify as Thelemites.
https://en.wikipedia.org/wiki?curid=30356
Tiber The Tiber (; ; ) is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino. It drains a basin estimated at . The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks. The river rises at Mount Fumaiolo in central Italy and flows in a generally southerly direction past Perugia and Rome to meet the sea at Ostia. Anciently called (in Latin) "" ("the blond"), in reference to the yellowish colour of its water, the Tiber has heavily advanced at the mouth by about since Roman times, leaving the ancient port of Ostia Antica inland. However, it does not form a proportional delta, owing to a strong north-flowing sea current close to the shore, to the steep shelving of the coast, and to slow tectonic subsidence. The source of the Tiber consists of two springs away from each other on Mount Fumaiolo. These springs are called "Le Vene". The springs are in a beech forest above sea level. During the 1930s, Benito Mussolini had an antique marble Roman column built at the point where the river rises, inscribed QUI NASCE IL FIUME SACRO AI DESTINI DI ROMA ("Here is born the river / sacred to the destinies of Rome"). There is an eagle on the top of the column, part of its fascist symbolism. The first miles of the Tiber run through before entering Umbria. It is probable that the genesis of the name "Tiber" was pre-Latin, like the Roman name of Tibur (modern Tivoli), and may be specifically Italic in origin. The same root is found in the Latin praenomen "Tiberius". There are also Etruscan variants of this praenomen in "Thefarie" (borrowed from Faliscan "*Tiferios", lit. '(He) from the Tiber' < "*Tiferis" 'Tiber') and "Teperie" (via the Latin hydronym "Tiber"). 
The legendary king Tiberinus, ninth in the king-list of Alba Longa, was said to have drowned in the river Albula, which was afterward called "Tiberis". The myth may have explained a memory of an earlier, perhaps pre-Indo-European name for the river, "white" ("alba") with sediment, or "from the mountains" from pre-Indo-European word "alba, albion" mount, elevated area. "Tiberis/Tifernus" may be a pre-Indo-European substrate word related to Aegean "tifos" "still water", Greek phytonym "τύφη" a kind of swamp and river bank weed ("Typha angustifolia"), Iberian hydronyms "Tibilis, Tebro" and Numidian "Aquae Tibilitanae". Yet another etymology is from *dubri-, water, considered by Alessio as Sicel, whence the form Θύβρις later Tiberis. This root *dubri- is widespread in Western Europe e.g. Dover, Portus Dubris. According to legend, the city of Rome was founded in 753 BC on the banks of the Tiber about from the sea at Ostia. Tiber Island, in the center of the river between Trastevere and the ancient city center, was the site of an important ancient ford and was later bridged. Legend says Rome's founders, the twin brothers Romulus and Remus, were abandoned on its waters, where they were rescued by the she-wolf, Lupa. The river marked the boundary between the lands of the Etruscans to the west, the Sabines to the east and the Latins to the south. Benito Mussolini, born in Romagna, adjusted the boundary between Tuscany and Emilia-Romagna, so that the springs of the Tiber would lie in Romagna. The Tiber was critically important to Roman trade and commerce, as ships could reach as far as upriver; there is evidence that it was used to ship grain from the Val Teverina as long ago as the 5th century BC. It was later used to ship stone, timber and foodstuffs to Rome. During the Punic Wars of the 3rd century BC, the harbour at Ostia became a key naval base. 
It later became Rome's most important port, where wheat, olive oil, and wine were imported from Rome's colonies around the Mediterranean. Wharves were also built along the riverside in Rome itself, lining the riverbanks around the Campus Martius area. The Romans connected the river with a sewer system (the "Cloaca Maxima") and with an underground network of tunnels and other channels, to bring its water into the middle of the city. Wealthy Romans had garden-parks or "horti" on the banks of the river in Rome up through the first century BC. These may have been sold and developed about a century later. The heavy sedimentation of the river made it difficult to maintain Ostia, prompting the emperors Claudius and Trajan to establish a new port on the Fiumicino in the 1st century AD. They built a new road, the "via Portuensis," to connect Rome with Fiumicino, leaving the city by Porta Portese ('the port gate'). Both ports were eventually abandoned due to silting. Several popes attempted to improve navigation on the Tiber in the 17th and 18th century, with extensive dredging continuing into the 19th century. Trade was boosted for a while but by the 20th century silting had resulted in the river only being navigable as far as Rome itself. The Tiber was once known for its floods — the Campus Martius is a flood plain and would regularly flood to a depth of . The river is now confined between high stone embankments which were begun in 1876. Within the city, the riverbanks are lined by boulevards known as "lungoteveri", streets "along the Tiber". Because the river is identified with Rome, the terms "swimming the Tiber" or "crossing the Tiber" have come to be the Protestant shorthand term for converting to Roman Catholicism. This is most common if the person who converts had been Anglican, the reverse of which is referred to as "swimming the Thames" or "crossing the Thames". In ancient Rome, executed criminals were thrown into the Tiber. 
People executed at the Gemonian stairs were thrown in the Tiber during the later part of the reign of the emperor Tiberius. This practice continued over the centuries. For example, the corpse of Pope Formosus was thrown into the Tiber after the infamous Cadaver Synod held in 897. In addition to the numerous modern bridges over the Tiber in Rome, there remain a few ancient bridges (now mostly pedestrian-only) that have survived in part (e.g., the Ponte Milvio and the Ponte Sant'Angelo) or in whole (Fabricius' Bridge). In addition to bridges, there are tunnels which the Metro trains use. Following the standard Roman depiction of rivers as powerfully built reclining male gods, the Tiber, also interpreted as a god named Tiberinus, is shown with streams of water flowing from his hair and beard.
https://en.wikipedia.org/wiki?curid=30359
TrueType TrueType is an outline font standard developed by Apple in the late 1980s as a competitor to Adobe's Type 1 fonts used in PostScript. It has become the most common format for fonts on the classic Mac OS, macOS, and Microsoft Windows operating systems. The primary strength of TrueType was originally that it offered font developers a high degree of control over precisely how their fonts are displayed, right down to particular pixels, at various font sizes. With widely varying rendering technologies in use today, pixel-level control is no longer certain in a TrueType font. During its development, TrueType was known first by the codename "Bass" and later by the codename "Royal". The system was developed and eventually released as TrueType with the launch of Mac System 7 in May 1991. The initial TrueType outline fonts, four-weight families of "Times Roman", "Helvetica", "Courier", and the pi font "Symbol", replicated the original PostScript fonts of the Apple LaserWriter. Apple also replaced some of the bitmap fonts used by the graphical user interface of previous Macintosh System versions (including Geneva, Monaco and New York) with scalable TrueType outline fonts. For compatibility with older systems, Apple shipped these fonts, a TrueType Extension, and a TrueType-aware version of Font/DA Mover for System 6. For compatibility with the LaserWriter II, Apple developed fonts like ITC Bookman and ITC Chancery in TrueType format. All of these fonts could now scale to all sizes on screen and printer, making Macintosh System 7 the first OS to work without any bitmap fonts. The early TrueType systems, still part of Apple's QuickDraw graphics subsystem, did not render Type 1 fonts on-screen as they do today. At the time, many users had already invested considerable money in Adobe's still-proprietary Type 1 fonts. 
As part of Apple's tactic of opening the font format, versus Adobe's desire to keep it closed to all but Adobe licensees, Apple licensed TrueType to Microsoft. When TrueType and the license to Microsoft were announced, John Warnock of Adobe gave an impassioned speech in which he claimed Apple and Microsoft were selling snake oil, and then announced that the Type 1 format was open for anyone to use. Meanwhile, in exchange for TrueType, Apple got a license for TrueImage, a PostScript-compatible page-description language owned by Microsoft that Apple could use in laser printing. This was never actually included in any Apple products, as a later deal was struck between Apple and Adobe in which Adobe promised to put a TrueType interpreter in their PostScript printer boards. Apple renewed its agreements with Adobe for the use of PostScript in its printers, resulting in lower royalty payments to Adobe, which was beginning to license printer controllers capable of competing directly with Apple's LaserWriter printers. Part of Adobe's response to learning that TrueType was being developed was to create the Adobe Type Manager software to scale Type 1 fonts for anti-aliased output on-screen. Although ATM initially cost money, rather than coming free with the operating system, it became a "de facto" standard for anyone involved in desktop publishing. Anti-aliased rendering, combined with Adobe applications' ability to zoom in to read small type, and further combined with the now-open PostScript Type 1 font format, provided the impetus for an explosion in font design and in desktop publishing of newspapers and magazines. Apple extended TrueType with the launch of TrueType GX in 1994, with additional tables in the sfnt that formed part of QuickDraw GX. This offered powerful extensions in two main areas. 
First was font axes (morphing), for example allowing fonts to be smoothly adjusted from light to bold or from narrow to extended — competition for Adobe's "multiple master" technology. Second was Line Layout Manager, where particular sequences of characters can be coded to flip to different designs in certain circumstances, useful for example to offer ligatures for "fi", "ffi", "ct", etc. while maintaining the backing store of characters necessary for spell checkers and text searching. However, the lack of user-friendly tools for making TrueType GX fonts meant there were no more than a handful of GX fonts. Much of the technology in TrueType GX, including morphing and substitution, lives on as AAT (Apple Advanced Typography) in macOS. Few font-developers outside Apple attempt to make AAT fonts; instead, OpenType has become the dominant sfnt format. To ensure its wide adoption, Apple licensed TrueType to Microsoft for free. By 1991 Microsoft added TrueType into the Windows 3.1 operating system. In partnership with their contractors, Monotype Imaging, Microsoft put a lot of effort into creating a set of high quality TrueType fonts that were compatible with the core fonts being bundled with PostScript equipment at the time. This included the fonts that are standard with Windows to this day: Times New Roman (compatible with Times Roman), Arial (compatible with Helvetica) and Courier New (compatible with Courier). One should understand "compatible" to mean two things: first, that the fonts are similar in appearance, and second — and very importantly — the fonts have the same character widths, and so can be used to typeset the same documents without reflowing the text. Microsoft and Monotype technicians used TrueType's hinting technology to ensure that these fonts did not suffer from the problem of illegibility at low resolutions, which had previously forced the use of bitmapped fonts for screen display. 
Subsequent advances in technology have introduced first anti-aliasing, which smooths the edges of fonts at the expense of a slight blurring, and more recently subpixel rendering (the Microsoft implementation goes by the name ClearType), which exploits the pixel structure of LCD based displays to increase the apparent resolution of text. Microsoft has heavily marketed ClearType, and sub-pixel rendering techniques for text are now widely used on all platforms. Microsoft also developed a "smart font" technology, named "TrueType Open" in 1994, later renamed to OpenType in 1996 when it merged support of the Adobe Type 1 glyph outlines. TrueType has long been the most common format for fonts on classic Mac OS, Mac OS X, and Microsoft Windows, although Mac OS X and Microsoft Windows also include native support for Adobe's Type 1 format and the OpenType extension to TrueType (since Mac OS X 10.0 and Windows 2000). While some fonts provided with the new operating systems are now in the OpenType format, most free or inexpensive third-party fonts use plain TrueType. Increasing resolutions and new approaches to screen rendering have reduced the requirement of extensive TrueType hinting. Apple's rendering approach on macOS ignores almost all the hints in a TrueType font, while Microsoft's ClearType ignores many hints, and according to Microsoft, works best with "lightly hinted" fonts. The FreeType project of David Turner has created an independent implementation of the TrueType standard (as well as other font standards in FreeType 2). FreeType is included in many Linux distributions. Until May 2010 there were potential patent infringements in FreeType 1 because parts of the TrueType hinting virtual machine were patented by Apple, a fact not mentioned in the TrueType standards. (Patent holders who contribute to standards published by a major standards body such as ISO are required to disclose the scope of their patents, but TrueType was not such a standard.) 
FreeType 2 included an optional automatic hinter to avoid the patented technology, but those patents have since expired, so FreeType 2.4 enables these features by default. The outlines of the characters (or glyphs) in TrueType fonts are made of straight line segments and quadratic Bézier curves. These curves are mathematically simpler and faster to process than the cubic Bézier curves used both in the PostScript-centered world of graphic design and in Type 1 fonts. However, most shapes require more points to describe with quadratic curves than with cubics. This difference also means that it is not possible to convert Type 1 fonts losslessly to the TrueType format, although in practice it is often possible to do a lossless conversion from TrueType to Type 1. TrueType systems include a virtual machine that executes programs inside the font, processing the "hints" of the glyphs. These distort the control points which define the outline, with the intention that the rasterizer produce fewer undesirable features on the glyph. Each glyph's hinting program takes account of the size (in pixels) at which the glyph is to be displayed, as well as other less important factors of the display environment. Although incapable of receiving input and producing output as normally understood in programming, the TrueType hinting language does offer the other prerequisites of programming languages: conditional branching (IF statements), looping an arbitrary number of times (FOR- and WHILE-type statements), variables (although these are simply numbered slots in an area of memory reserved by the font), and encapsulation of code into functions. Special instructions called delta hints provide the lowest-level control, moving a control point at just one pixel size. The hallmark of effective TrueType glyph programming is that it does as much as possible using variables defined just once in the whole font (e.g., stem widths, cap height, x-height). 
This means avoiding delta instructions as much as possible. It allows the font developer to make major changes (e.g., the point at which the entire font's main stems jump from 1 to 2 pixels wide) even most of the way through development. Creating a very well-hinted TrueType font remains a significant amount of work, despite the increased user-friendliness of programs for adding hints to fonts. Many TrueType fonts therefore have only rudimentary hints, or have hinting automatically applied by the font editor, with results of variable quality. The TrueType format allows for the most basic type of digital rights management: an "embeddable flag field" that specifies whether the author allows embedding of the font file into things like PDF files and websites. Anyone with access to the font file can directly modify this field, and simple tools exist to facilitate modifying it (modifying this field does not, of course, modify the font license or confer extra legal rights). These tools have been the subject of controversy over potential copyright issues. TrueType Collection (TTC) is an extension of the TrueType format that allows multiple fonts to be combined into a single file, creating substantial space savings for a collection of fonts with many glyphs in common. TTC files were first available in Chinese, Japanese, and Korean versions of Windows, and supported for all regions in Windows 2000 and later. Classic Mac OS included support for TTC starting with Mac OS 8.5. In classic Mac OS and macOS, TTC has the file type ttcf. Apple has implemented a proprietary extension to allow color .ttf files for its emoji font Apple Color Emoji. A basic font is composed of multiple tables specified in its header; a table name can have up to four letters. A TrueType Collection file begins with a ttcf table that allows access to the fonts within the collection by pointing to individual headers for each included font. 
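The table directory described above is simple enough to illustrate with a short parser. The following is a minimal sketch in Python, not tied to any font library: it reads the big-endian sfnt offset table and lists the four-letter table tags, and notes where a TrueType Collection would differ. The sample bytes at the bottom are a hand-built directory for demonstration only, not a usable font.

```python
import struct

def list_tables(data: bytes) -> list:
    """Return the 4-letter table tags listed in an sfnt table directory.

    Assumes `data` starts at the offset table of a single font; a real
    parser would first check for the 'ttcf' signature of a TrueType
    Collection and follow its offsets to each member font's offset table.
    """
    # Offset table: sfntVersion (uint32), numTables (uint16), then
    # searchRange/entrySelector/rangeShift (uint16 each) = 12 bytes total.
    sfnt_version, num_tables = struct.unpack(">IH", data[:6])
    tags = []
    for i in range(num_tables):
        # Each 16-byte record: tag, checksum, offset (from file start), length.
        rec = data[12 + 16 * i : 12 + 16 * (i + 1)]
        tag, _checksum, _offset, _length = struct.unpack(">4sIII", rec)
        tags.append(tag.decode("latin-1"))
    return tags

# Hand-built two-table directory for demonstration (not a real font).
header = struct.pack(">IHHHH", 0x00010000, 2, 32, 1, 0)
records = (
    struct.pack(">4sIII", b"cmap", 0, 44, 10)
    + struct.pack(">4sIII", b"glyf", 0, 54, 20)
)
print(list_tables(header + records))  # ['cmap', 'glyf']
```

All multi-byte values in an sfnt file are big-endian, which is why every format string here starts with ">".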
The fonts within a collection share the same glyph-outline table, though each font can refer to subsets within those outlines in its own manner, through its 'cmap', 'name' and 'loca' tables. A .ttf extension indicates a regular TrueType font or an OpenType font with TrueType outlines, while the .ttc extension is reserved for TTCs. The Windows end-user-defined-character editor (EUDCEDIT.EXE) creates a TrueType font named EUDC.TTE. An OpenType font with PostScript outlines must have an .otf extension. In principle an OpenType font with TrueType outlines may have an .otf extension, but this has rarely been done in practice. In classic Mac OS and macOS, OpenType is one of several formats referred to as data-fork fonts, as they lack the classic Mac resource fork. The suitcase format for TrueType is used on classic Mac OS and macOS; it adds additional Apple-specific information. Like TTC, it can handle multiple fonts within a single file, but unlike TTC, those fonts need not be within the same family. Suitcases come in resource-fork and data-fork formats. The resource-fork version was the original suitcase format. Data-fork-only suitcases, which place the resource fork contents into the data fork, were first supported in macOS. A suitcase packed into the data-fork-only format has the extension ".dfont". In the PostScript language, TrueType outlines are handled with a PostScript wrapper as Type 42 for name-keyed fonts or Type 11 for CID-keyed fonts. 
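The asymmetry between the two curve types noted earlier (TrueType to Type 1 can be lossless, the reverse generally cannot) follows from Bézier degree elevation: a quadratic with control points p0, q, p2 is exactly reproduced by the cubic with control points p0, p0 + (2/3)(q − p0), p2 + (2/3)(q − p2), p2. A minimal numeric sketch in Python, with example coordinates chosen only for illustration:

```python
def quad_point(p0, q, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t."""
    u = 1 - t
    return tuple(u*u*a + 2*u*t*b + t*t*c for a, b, c in zip(p0, q, p2))

def cubic_point(p0, c1, c2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t."""
    u = 1 - t
    return tuple(u**3*a + 3*u*u*t*b + 3*u*t*t*c + t**3*d
                 for a, b, c, d in zip(p0, c1, c2, p3))

def elevate(p0, q, p2):
    """Degree-elevate a quadratic to the exactly equivalent cubic."""
    c1 = tuple(a + 2.0 / 3.0 * (b - a) for a, b in zip(p0, q))
    c2 = tuple(c + 2.0 / 3.0 * (b - c) for b, c in zip(q, p2))
    return p0, c1, c2, p2

# The elevated cubic traces the same curve as the original quadratic.
p0, q, p2 = (0.0, 0.0), (50.0, 100.0), (100.0, 0.0)
cubic = elevate(p0, q, p2)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    xq, yq = quad_point(p0, q, p2, t)
    xc, yc = cubic_point(*cubic, t)
    assert abs(xq - xc) < 1e-9 and abs(yq - yc) < 1e-9
print("quadratic and elevated cubic agree")
```

Going the other way requires approximating each cubic by one or more quadratics, which is why a Type 1 to TrueType conversion is lossy in general.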
https://en.wikipedia.org/wiki?curid=31187
THX 1138 THX 1138 is a 1971 American social science fiction film directed by George Lucas in his feature film directorial debut. It is set in a dystopian future in which the populace is controlled through android police and mandatory use of drugs that suppress emotions. Produced by Francis Ford Coppola and written by Lucas and Walter Murch, it stars Robert Duvall and Donald Pleasence. "THX 1138" was developed from Lucas's student film "", which he made in 1967 while attending the USC School of Cinematic Arts. The feature film was produced in a joint venture between Warner Bros. and American Zoetrope. A novelization by Ben Bova was published in 1971. The film received mixed reviews from critics and failed to find box-office success on initial release; however, the film has subsequently received critical acclaim and gained a cult following, particularly in the aftermath of Lucas' success with "Star Wars" in 1977. In the future, sexual intercourse and reproduction are prohibited, whereas use of mind-altering drugs is mandatory to enforce compliance among the citizens and to ensure their ability to conduct dangerous and demanding tasks. Emotions and the concept of family are taboo. Everyone is clad in identical white uniforms and has shaven heads to emphasize uniformity, except the police androids (who wear black) and robed monks. Instead of names, people have designations with three arbitrary letters (referred to as the "prefix") and four digits, shown on an identity badge worn at all times. At their jobs in central video control centers, SEN 5241 and LUH 3417 keep surveillance on the city. LUH has a male roommate, THX 1138, who works in a factory producing android police officers. At the beginning of the story, THX finishes his shift while the loudspeakers urge the workers to "increase safety"—and congratulate them for only losing 195 workers in the last period—to the competing factory's 242. 
On the way home, he stops at a confession booth in a row of many, and relates his concerns and mumbles prayers about "party" and "masses", under the Jesus Christ-esque portrait of "OMM 0000". A soothing voice greets THX, and OMM ends the confession with a parting salutation: "You are a true believer, blessings of the State, blessings of the masses. Work hard, increase production, prevent accidents and be happy." At home, THX takes his drugs and watches holobroadcasts while engaging with a masturbatory device. LUH secretly substitutes pills in her possession for THX's medications, causing him to develop nausea, anxiety, and sexual desires. LUH and THX become involved romantically and have sex. THX later is confronted by SEN, who attempts to arrange that THX become his new roommate, but THX files a complaint against SEN for the illegal shift pattern change. Without drugs in his system, THX falters during a critical and hazardous phase of his job, and a control center engages a "mind lock" on THX which raises the level of danger. After the release of the mind lock, THX makes the necessary correction to that work phase. THX and LUH are arrested and THX undergoes drug therapy. He enjoys a brief reunion with LUH, disrupted shortly after she reveals her pregnancy. At THX's trial, THX is sentenced to prison, alongside SEN. Most of the prisoners seem uninterested in escape, but eventually THX and SEN find an exit, and they are later joined by hologram actor SRT 5752, who starred in the holobroadcasts. During the escape, THX and SRT are separated from SEN. Chased by the police robots, THX and SRT are trapped in a control center, from which THX learns that LUH has been "consumed", and her name has been reassigned to fetus 66691 in a growth chamber. SEN eventually escapes to an area reserved for the monks of OMM, where a lone monk notices that SEN has no identification badge. 
SEN attacks him and later wanders into a child-rearing area, strikes up a conversation with children and sits aimlessly until police androids apprehend him. THX and SRT steal two cars, but SRT crashes his into a concrete pillar. Pursued by two police androids on motorcycles, THX flees to the limits of the city and escapes into a ventilation shaft. The police androids pursue him on motorcycles along the shaft to an escape ladder, but are ordered by Central Command to cease pursuit, on the grounds that the expense of his capture exceeds their allocated budget. The guards inform THX that the surface is uninhabitable but he is undeterred and continues up the ventilation shaft. The city is then revealed to be entirely underground, and THX has escaped onto the surface, where he then witnesses the sun setting. A bird can be seen flying in the distance, indicating that at least some life is still inhabiting the planet's surface. "THX 1138" was the first film made in a planned seven-picture slate commissioned by Warner Bros. from the 1969 incarnation of American Zoetrope. Lucas wrote the initial script draft based on his earlier short film but Coppola and Lucas agreed it was unsatisfactory. Murch assisted Lucas in writing an improved final draft. For some of SEN's dialogue in the film, the script included excerpts from speeches by Richard Nixon. The script required almost the entire cast to shave their heads, either completely bald or with a buzz cut. As a publicity stunt, several actors were filmed having their first haircuts/shaves at unusual venues, with the results used in a promotional featurette titled "". Many of the shaven-headed extras seen in the film were recruited from the nearby addiction recovery program and violent cult Synanon. Filming began on September 22, 1969. The schedule was between 35 and 40 days, completing in November 1969. Lucas filmed "THX 1138" in Techniscope. 
Most locations for filming were in the San Francisco area, including the unfinished tunnels of the Bay Area Rapid Transit subway system, the Lawrence Livermore National Laboratory, the Marin County Civic Center in San Rafael designed by Frank Lloyd Wright, the Lawrence Hall of Science in Berkeley, the San Francisco International Airport, and at a remote manipulator for a hot cell. Studio sequences were shot at stages in Los Angeles, including a white stage for the "white limbo" sequences. Lucas used entirely natural light. The chase scene featured two Lola T70 Mk III race cars being chased by Yamaha TA125/250cc two-stroke, race-replica motorcycles through two San Francisco Bay Area automotive tunnels: the Caldecott Tunnel between Oakland and Orinda; and the underwater Posey Tube between Oakland and Alameda. According to Caleb Deschanel, cars drove at speeds of while filming the chase. Other cars appearing in several scenes of the movie include the custom-built Ferrari Thomassima cars; one of them is on display in the Ferrari museum in Modena, Italy. The chase featured a motorcycle stunt. Stuntman Ronald "Duffy" Hambleton (credited as Duffy Hamilton) rode his police motorcycle full speed into a fallen paint stand, with a ramp built to Hambleton's specification. He flew over the handlebars, was hit by the airborne motorcycle, landed in the street on his back, and slammed into the crashed car in which Duvall's character had escaped. According to Lucas, it turned out Hambleton was perfectly fine, apart from being angry with the people who had run into the shot to check on him. He was worried that they might have ruined the amazing stunt he had just performed by walking into frame. THX's final climb out to the daylight was filmed (with the camera rotated 90°) in the incomplete (and decidedly horizontal) Bay Area Rapid Transit Transbay Tube before installation of the track supports, with the actors using exposed reinforcing bars on the floor of the tunnel as a "ladder". 
The end scene, of THX standing before the sunset, was shot at Port Hueneme, California, by a second unit of (additional uncredited photographer) Caleb Deschanel and Matthew Robbins, who played THX in this long shot. After completion of photography, Coppola scheduled a year for Lucas to complete postproduction. Lucas edited the film on a German-made K-E-M flatbed editor in his Mill Valley house by day, with Walter Murch editing sound at night; the two compared notes when they changed over. Murch compiled and synchronized the sound montage, which includes all the "overhead" voices heard throughout the film, radio chatter, announcements, etc. The bulk of the editing was finished by mid-1970. On completion of editing of the film, producer Coppola took it to Warner Bros., the financiers. Studio executives there disliked the film, and insisted that Coppola turn over the negative to an in-house Warner editor, who cut about 4 minutes of the film prior to release. The soundtrack to the THX 1138, conducted by Lalo Schifrin, was released in 1970. Recording took place on October 15 and 16, 1970, at The Burbank Studios in Burbank, California, U.S. "THX 1138" was released to theaters on March 11, 1971 and was a commercial flop, earning back $945,000 in rentals for Warner Bros. but still leaving the studio in the red. A contemporary survey found seven favorable, three mixed, and five negative reviews. Roger Ebert of the "Chicago Sun-Times" rated the film three stars out of four and wrote, ""THX 1138" suffers somewhat from its simple storyline, but as a work of visual imagination it's special, and as haunting as parts of "," "Silent Running" and "The Andromeda Strain."" Gene Siskel of the "Chicago Tribune" awarded two stars out of four and stated, "The principal problem with this film is that it lacks imagination, the essential component of a science fiction film. Some persons might claim that the world of "THX 1138" is here right now. 
A more reasonable opinion would hold that we are facing the problems of that world right now. Time has passed the film by." Vincent Canby of "The New York Times" wrote, "It is not, however, as either chase drama, or social drama, that "THX 1138" is most interesting. Rather it's as a stunning montage of light, color and sound effects that create their own emotional impact ... Lucas's achievement in his first feature is all the more extraordinary when you realize that he is 25 years old, and that he shot most of the film in San Francisco, on a budget that probably would not cover the cost of half of one of the space ships in Stanley Kubrick's '2001.'" Arthur D. Murphy of "Variety" observed, "Likely not to be an artistic or commercial success in its own time, the American Zoetrope (Francis Ford Coppola group) production just might in time become a classic of stylistic, abstract cinema." Charles Champlin of the "Los Angeles Times" praised the film as "a stunning deployment of the aural and visual resources of the screen to suggest a fearful new world of tyranny by technology," adding that "Lucas is obviously a master of cinematic effects with a special remarkable gift for discovering the look of the future in mundane places like parking structures and office corridors." Champlin stressed that the "real excitement of "THX 1138" is not really the message but the medium—the use of film not to tell a story so much as to convey an experience, a credible impression of a fantastic and scary dictatorship of tomorrow." Kenneth Turan wrote in "The Washington Post", "Fortunately, the film comes over not at all trite but rather as enormously affecting. Lucas obviously believed strongly in this futuristic vision, and the film draws its vitality and unity from his belief, and from the fact that it was not bottled up to meet arbitrary conditions but allowed the free rein necessary to reach completeness." 
Penelope Houston of "The Monthly Film Bulletin" commented, "Details of the future society—control panels, monitor screens, soothing TV commercial voices, unshakeably calm robot policemen, the human animal turned automaton in appearance and function, but breaking out into a doomed love affair—are all tolerably persuasive, but in sum total rather a pile-up of predictability. On the Orwellian level of ideas, Lucas' passive new world is too indeterminate to carry enough conviction and, consequently, enough of a menacing charge." The film has continued to earn critical acclaim and is currently rated "fresh" on the review aggregator Rotten Tomatoes based on 63 reviews, with a score of 86% and an average rating of 6.85/10. The consensus reads: "George Lucas' feature debut presents a spare, bleak, dystopian future, and features evocatively minimal set design and creepy sound effects." The film has a score of 75/100 at Metacritic indicating "generally favorable reviews". The film received a nomination at the 1971 Cannes Film Festival from the International Federation of Film Critics in the Director's Fortnight section. The first version was a student film for USC School of Cinematic Arts entitled "". The run time was 15 minutes. It was released as a bonus feature on the 2004 Directors' Cut. The 1971 studio version, distributed to theaters, had five minutes taken out (apparently against Lucas' wishes) by Warner Bros. studios. This version (81 minutes long) has never been released on any home media format. In 1977, after the success of "Star Wars", "THX 1138" was re-released with the footage that had been deleted by Warner Bros. edited back in, but it still did not gain popularity. This version (86 minutes) was subsequently released on VHS and LaserDisc. In 2004, "The George Lucas Director's Cut" of the film was released. 
Under Lucas' supervision, the film underwent an extensive restoration and digital intermediate process by Lowry Digital and Industrial Light & Magic (ILM), where the film's original negative was scanned, digital color correction was applied, and a brand new digital master was created. Computer-generated imagery and audio/video restoration techniques were applied to the film. At Lucas' request, the previsualization team at Skywalker Ranch, working in concert with ILM, planned and executed a single-day shoot which would form the basis for new digital visual effects, mostly to expand scenes by extending crowds, filling out settings, and adding detail to the backgrounds of many scenes. The shell dwellers, seen briefly in the beginning of the film (played by dwarves in monkey costumes), are augmented at the end of the film by several computer-generated dog-people which look very different from the live actors in costume. These changes increased the run time of the film to 88 minutes. This director's cut was released to a limited number of digital-projection theaters on September 10, 2004, and then on DVD on September 14, 2004. The film was released on Blu-ray on September 7, 2010. At that time, the film received an "R" rating (for "sexuality/nudity") from the MPAA, due to the changes to the ratings system since the original release (the original film was rated "GP", later changed to "PG"). It is the only film directed by Lucas to carry an "R" rating. A novelization based on the film was written by Ben Bova and published in 1971. It follows the plot of the movie closely, with four notable additions. The significance of the name THX 1138 has been the subject of much speculation. In an interview for the DVD compilation "Reel Talent", which included Lucas's original "4EB" short, Lucas stated that he chose the letters and numbers for their aesthetic qualities, especially their symmetry. 
According to the book "Cinema by the Bay", published by LucasBooks, Lucas named the film after his telephone number while in college: 849-1138—the letters THX correspond to the numbers 8, 4, and 9 on the keypad. Walter Murch states in the DVD's audio commentary that he always believed Lucas intended THX to be "sex", LUH to be "love", and SEN to be "sin". John Lithgow, in "The Film School Generation" segment of the DVD series "American Cinema", described the title THX 1138 as "reading like a license plate number." Numerous references to "1138" or "THX 1138" appear throughout the "Star Wars" films, as well as other films by George Lucas. For example, THX 138 is the license plate number of John Milner's hot rod in "American Graffiti". Lucas also founded THX Ltd., developer of the "THX" audio/visual reproduction standards.
https://en.wikipedia.org/wiki?curid=31193
Tuning fork A tuning fork is an acoustic resonator in the form of a two-pronged fork with the prongs (tines) formed from a U-shaped bar of elastic metal (usually steel). It resonates at a specific constant pitch when set vibrating by striking it against a surface or with an object, and emits a pure musical tone once the high overtones fade out. A tuning fork's pitch depends on the length and mass of the two prongs. They are traditional sources of standard pitch for tuning musical instruments. The tuning fork was invented in 1711 by British musician John Shore, sergeant trumpeter and lutenist to the court. A tuning fork is a fork-shaped acoustic resonator used in many applications to produce a fixed tone. The main reason for using the fork shape is that, unlike many other types of resonators, it produces a very pure tone, with most of the vibrational energy at the fundamental frequency. The reason for this is that the frequency of the first overtone is about 5²/2² = 25/4 = 6¼ times the fundamental (about 2⁵⁄₈ octaves above it). By comparison, the first overtone of a vibrating string or metal bar is one octave above (twice) the fundamental, so when the string is plucked or the bar is struck, its vibrations tend to mix the fundamental and overtone frequencies. When the tuning fork is struck, little of the energy goes into the overtone modes; they also die out correspondingly faster, leaving a pure sine wave at the fundamental frequency. It is easier to tune other instruments with this pure tone. Another reason for using the fork shape is that it can then be held at the base without dampening the oscillation. That is because its principal mode of vibration is symmetric, with the two prongs always moving in opposite directions, so that at the base where the two prongs meet there is a node (point of no vibratory motion) which can therefore be handled without removing energy from the oscillation (dampening). 
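The overtone ratio quoted above follows from modelling each prong as a clamped-free (cantilever) beam, whose mode frequencies scale as the square of the standard beam constants β₁ ≈ 1.875 and β₂ ≈ 4.694 (these constants are textbook values for cantilever vibration, not figures from this article). A quick check of the arithmetic:

```python
import math

# Clamped-free beam mode constants: roots of cos(b)*cosh(b) = -1.
# These are standard engineering values, assumed here for illustration.
beta1, beta2 = 1.875, 4.694

ratio = (beta2 / beta1) ** 2   # first overtone / fundamental
octaves = math.log2(ratio)     # musical distance above the fundamental

print(f"overtone/fundamental ratio: {ratio:.2f}")   # ~6.27, close to 25/4
print(f"octaves above fundamental:  {octaves:.2f}") # ~2.6

# A plucked string, by contrast, has its first overtone exactly one
# octave (a factor of 2, i.e. log2 = 1.0) above the fundamental.
```

The large gap between fundamental and first overtone is what lets the overtone die away quickly and leaves the near-pure sine tone described above.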
However, there is still a tiny motion induced in the handle in its longitudinal direction (thus at right angles to the oscillation of the prongs) which can be made audible using any sort of sound board. Thus by pressing the tuning fork's base against a sound board such as a wooden box, table top, or bridge of a musical instrument, this small motion, which occurs at a high acoustic pressure (thus a very high acoustic impedance), is partly converted into audible sound in air, which involves a much greater motion (particle velocity) at a relatively low pressure (thus low acoustic impedance). The pitch of a tuning fork can also be heard directly through bone conduction, by pressing the tuning fork against the bone just behind the ear, or even by holding the stem of the fork in one's teeth, conveniently leaving both hands free. Bone conduction using a tuning fork is specifically used in the Weber and Rinne tests for hearing in order to bypass the middle ear. If just held in open air, the sound of a tuning fork is very faint due to the acoustic impedance mismatch between the steel and air. Moreover, since the feeble sound waves emanating from each prong are 180° out of phase, those two opposite waves interfere, largely cancelling each other. Thus when a solid sheet is slid in between the prongs of a vibrating fork, the apparent volume actually "increases", as this cancellation is reduced, just as a loudspeaker requires a baffle in order to radiate efficiently. Commercial tuning forks are tuned to the correct pitch at the factory, and the pitch and frequency in hertz is stamped on them. They can be retuned by filing material off the prongs. Filing the ends of the prongs raises the pitch, while filing the inside of the base of the prongs lowers it. Currently, the most common tuning fork sounds the note of A = 440 Hz, the standard concert pitch that many orchestras use. 
That A is the pitch of the violin's second string, the first string of the viola, and an octave above the first string of the cello. Orchestras between 1750 and 1820 mostly used A = 423.5 Hz, though there were many forks and many slightly different pitches. Standard tuning forks are available that vibrate at all the pitches within the central octave of the piano, and also other pitches. Well-known tuning fork manufacturers include Ragg and John Walker, both of Sheffield, England. Tuning fork pitch varies slightly with temperature, due mainly to a slight decrease in the modulus of elasticity of steel with increasing temperature. A change in frequency of 48 parts per million per °F (86 ppm per °C) is typical for a steel tuning fork. The frequency decreases (becomes flat) with increasing temperature. Tuning forks are manufactured to have their correct pitch at a standard temperature. The standard temperature is now , but is an older standard. The pitch of other instruments is also subject to variation with temperature change. The frequency of a tuning fork depends on its dimensions and the material it is made from: f = (1.875² / (2πL²)) √(EI / (ρA)), where f is the fundamental frequency, L is the length of the prongs, E is the Young's modulus (elastic modulus) of the material, I is the second moment of area of the prongs' cross-section, ρ is the density of the material, and A is the cross-sectional area of the prongs. The ratio I/A in the equation above can be rewritten as r²/4 if the prongs are cylindrical with radius r, and a²/12 if the prongs have a rectangular cross-section of width a along the direction of motion. Tuning forks have traditionally been used to tune musical instruments, though electronic tuners have largely replaced them. Forks can be driven electrically by placing electronic oscillator-driven electromagnets close to the prongs. A number of keyboard musical instruments use principles similar to tuning forks. The most popular of these is the Rhodes piano, in which hammers hit metal tines that vibrate in the magnetic field of a pickup, creating a signal that drives electric amplification. The earlier, un-amplified dulcitone, which used tuning forks directly, suffered from low volume. 
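The dependence of pitch on dimensions and material can be illustrated numerically with the standard cantilever-beam model f = (1.875²/(2πL²))·√(EI/(ρA)), where I/A = r²/4 for cylindrical prongs. The steel constants and prong dimensions below are assumed illustrative values, not measurements from a real fork; the 86 ppm/°C temperature coefficient is the article's figure.

```python
import math

def fork_frequency(length_m, radius_m, E=200e9, rho=7850.0):
    """Fundamental frequency of a fork with cylindrical prongs modelled as
    clamped-free beams.  With I/A = r^2/4 the formula reduces to
    f = (1.875^2 / (2*pi*L^2)) * (r/2) * sqrt(E/rho).
    E (Young's modulus, Pa) and rho (density, kg/m^3) default to typical
    steel values, assumed for illustration."""
    return (1.875 ** 2) / (2 * math.pi * length_m ** 2) \
        * (radius_m / 2) * math.sqrt(E / rho)

# Prongs ~8 cm long and 4 mm in diameter land near concert A:
f = fork_frequency(0.0801, 0.002)
print(f"{f:.0f} Hz")

# Temperature drift at the article's 86 ppm per deg C: a 440 Hz fork
# warmed by 10 deg C goes flat by roughly 0.4 Hz.
drift_hz = 440 * 86e-6 * 10
print(f"{drift_hz:.2f} Hz flat")
```

Note how strongly frequency depends on prong length (inverse square): halving L quadruples the pitch, which is why filing the prong ends (effectively shortening them) raises the pitch.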
The quartz crystal that serves as the timekeeping element in modern quartz clocks and watches is in the form of a tiny tuning fork. It usually vibrates at a frequency of 32,768 Hz in the ultrasonic range (above the range of human hearing). It is made to vibrate by small oscillating voltages applied to metal electrodes plated on the surface of the crystal by an electronic oscillator circuit. Quartz is piezoelectric, so the voltage causes the tines to bend rapidly back and forth. The Accutron, an electromechanical watch developed by Max Hetzel and manufactured by Bulova beginning in 1960, used a 360-hertz steel tuning fork as its timekeeper, powered by electromagnets attached to a battery-powered transistor oscillator circuit. The fork provided greater accuracy than conventional balance wheel watches. The humming sound of the tuning fork was audible when the watch was held to the ear. Alternatives to the common A=440 standard include philosophical or scientific pitch with standard pitch of C=512. According to Rayleigh, physicists and acoustic instrument makers used this pitch. The tuning fork John Shore gave to George Frideric Handel produces C=512. Tuning forks, usually C512, are used by medical practitioners to assess a patient's hearing. This is most commonly done with two exams called the Weber test and Rinne test, respectively. Lower-pitched ones, usually at C128, are also used to check vibration sense as part of the examination of the peripheral nervous system. Orthopedic surgeons have explored using a tuning fork (lowest frequency C=128) to assess injuries where bone fracture is suspected. They hold the end of the vibrating fork on the skin above the suspected fracture, moving it progressively closer to the fracture site. If there is a fracture, the periosteum of the bone vibrates and fires nociceptors (pain receptors), causing a local sharp pain. This can indicate a fracture, which the practitioner refers for medical X-ray. 
The sharp pain of a local sprain can give a false positive. Established practice, however, requires an X-ray regardless, because it's better than missing a real fracture while wondering if a response means a sprain. A systematic review published in 2014 in BMJ Open suggests that this technique is not reliable or accurate enough for clinical use. Tuning forks also play a role in several alternative therapy practices, such as sonopuncture and polarity therapy. A radar gun that measures the speed of cars or a ball in sports is usually calibrated with a tuning fork. Instead of the frequency, these forks are labeled with the calibration speed and radar band (e.g., X-band or K-band) they are calibrated for. Doubled and H-type tuning forks are used for tactical-grade vibrating structure gyroscopes and various types of microelectromechanical systems. A tuning fork also forms the sensing part of vibrating point level sensors. The fork is kept vibrating at its resonant frequency by a piezoelectric device. Upon coming into contact with solids, the amplitude of oscillation drops, and this drop is used as a switching parameter for detecting the point level of solids. For liquids, the resonant frequency of the fork shifts upon contact with the liquid, and this change in frequency is used to detect the liquid level.
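The two detection principles just described (amplitude drop for solids, frequency shift for liquids) amount to simple threshold tests. The function names and threshold values in this sketch are hypothetical, chosen only to illustrate the logic; real sensors embed equivalent comparisons in their drive electronics:

```python
def solid_level_reached(amplitude, free_air_amplitude, drop_fraction=0.5):
    """Solids damp the fork: report contact when the measured oscillation
    amplitude falls below a set fraction of the free-air baseline.
    The 0.5 threshold is a hypothetical illustrative value."""
    return amplitude < drop_fraction * free_air_amplitude

def liquid_level_reached(frequency_hz, free_air_frequency_hz, min_shift_hz=20.0):
    """Liquids mass-load the fork and lower its resonant frequency:
    report contact when the downward shift exceeds a (hypothetical)
    minimum threshold."""
    return (free_air_frequency_hz - frequency_hz) > min_shift_hz

# A fork resonating at 340 Hz in air that drops to 310 Hz has touched liquid:
print(liquid_level_reached(310.0, 340.0))  # True
print(solid_level_reached(0.9, 1.0))       # False: amplitude barely changed
```

Using amplitude for solids and frequency for liquids reflects the physics: granular solids mainly dissipate energy (damping), while a liquid mainly adds effective mass to the tines (detuning).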
https://en.wikipedia.org/wiki?curid=31198
Trireme A trireme (; derived from Latin: "trirēmis" "with three banks of oars"; "triērēs", literally "three-rower") was an ancient vessel and a type of galley that was used by the ancient maritime civilizations of the Mediterranean, especially the Phoenicians, ancient Greeks and Romans. The trireme derives its name from its three rows of oars, manned with one man per oar. The early trireme was a development of the penteconter, an ancient warship with a single row of 25 oars on each side (i.e., a single-banked boat), and of the bireme (, "diērēs"), a warship with two banks of oars, of Phoenician origin. The word dieres does not appear until the Roman period. According to Morrison and Williams, "It must be assumed the term pentekontor covered the two-level type". As a ship it was fast and agile, and it was the dominant warship in the Mediterranean during the 7th to 4th centuries BC, after which it was largely superseded by the larger quadriremes and quinqueremes. Triremes played a vital role in the Persian Wars, the creation of the Athenian maritime empire, and its downfall in the Peloponnesian War. The term is sometimes also used to refer to medieval and early modern galleys with three files of oarsmen per side as triremes. Depictions of two-banked ships (biremes), with or without the "parexeiresia" (the outriggers, see below), are common in 8th century BC and later vases and pottery fragments, and it is at the end of that century that the first references to three-banked ships are found. Fragments from an 8th-century relief at the Assyrian capital of Nineveh depicting the fleets of Tyre and Sidon show ships with rams, and fitted with oars pivoted at two levels. They have been interpreted as two-decked warships, and also as triremes. Modern scholarship is divided on the provenance of the trireme, Greece or Phoenicia, and the exact time it developed into the foremost ancient fighting ship. 
Clement of Alexandria in the 2nd century, drawing on earlier works, explicitly attributes the invention of the trireme ("trikrotos naus", "three-banked ship") to the Sidonians. According to Thucydides, the trireme was introduced to Greece by the Corinthians in the late 8th century BC, and the Corinthian Ameinocles built four such ships for the Samians. Although this was interpreted by later writers, Pliny and Diodorus, to mean that triremes were "invented" in Corinth, the possibility remains that the earliest three-banked warships originated in Phoenicia. Herodotus mentions that the Egyptian pharaoh Necho II (610–595 BC) built triremes on the Nile, for service in the Mediterranean, and in the Red Sea, but this reference is disputed by modern historians, and attributed to a confusion, since "triērēs" was by the 5th century used in the generic sense of "warship", regardless of its type. The first definite reference to the use of triremes in naval combat dates to ca. 525 BC, when, according to Herodotus, the tyrant Polycrates of Samos was able to contribute 40 triremes to a Persian invasion of Egypt (Battle of Pelusium). Thucydides meanwhile clearly states that in the time of the Persian Wars, the majority of the Greek navies consisted of (probably two-tiered) penteconters and "ploia makrá" ("long ships"). In any case, by the early 5th century, the trireme was becoming the dominant warship type of the eastern Mediterranean, with minor differences between the "Greek" and "Phoenician" types, as literary references and depictions of the ships on coins make clear. The first large-scale naval battle where triremes participated was the Battle of Lade during the Ionian Revolt, where the combined fleets of the Greek Ionian cities were defeated by the Persian fleet, composed of squadrons from their Phoenician, Carian, Cypriot and Egyptian subjects. Athens was at that time embroiled in a conflict with the neighbouring island of Aegina, which possessed a formidable navy. 
In order to counter this, and possibly with an eye already on the mounting Persian preparations, in 483/2 BC the Athenian statesman Themistocles used his political skills and influence to persuade the Athenian assembly to start the construction of 200 triremes, using the income of the newly discovered silver mines at Laurion. The first clash with the Persian navy was at the Battle of Artemisium, where both sides suffered great casualties. However, the decisive naval clash occurred at Salamis, where Xerxes' invasion fleet was decisively defeated. After Salamis and another Greek victory over the Persian fleet at Mycale, the Ionian cities were freed, and the Delian League was formed under the aegis of Athens. Gradually, the predominance of Athens turned the League effectively into an Athenian Empire. The source and foundation of Athens' power was her strong fleet, composed of over 200 triremes. It not only secured control of the Aegean Sea and the loyalty of her allies, but also safeguarded the trade routes and the grain shipments from the Black Sea, which fed the city's burgeoning population. In addition, as it provided permanent employment for the city's poorer citizens, the fleet played an important role in maintaining and promoting the radical Athenian form of democracy. Athenian maritime power is the first example of thalassocracy in world history. Aside from Athens, other major naval powers of the era included Syracuse, Corfu and Corinth. In the subsequent Peloponnesian War, naval battles fought by triremes were crucial in the power balance between Athens and Sparta. Despite numerous land engagements, Athens was finally defeated through the destruction of her fleet during the Sicilian Expedition and, ultimately, at the Battle of Aegospotami, at the hands of Sparta and her allies. Based on all archeological evidence, the design of the trireme most likely pushed the technological limits of the ancient world. 
After gathering the proper timbers and materials, it was time to consider the fundamentals of the trireme design. These fundamentals included accommodations, propulsion, weight and waterline, center of gravity and stability, strength, and feasibility. All of these variables are dependent on one another; however, a certain area may be more important than another depending on the purpose of the ship. The arrangement and number of oarsmen is the first deciding factor in the size of the ship. For a ship to travel at high speeds would require a high oar-gearing, which is the ratio between the outboard length of an oar and the inboard length; it is this arrangement of the oars which is unique and highly effective for the trireme. The ports would house the oarsmen with a minimal waste of space. There would be three files of oarsmen on each side tightly but workably packed by placing each man outboard of, and in height overlapping, the one below, provided that thalamian tholes were set inboard and their ports enlarged to allow oar movement. Thalamian, zygian, and thranite are the English terms for "thalamios" (θαλάμιος), "zygios" (ζύγιος), and "thranites" (θρανίτης), the Greek words for the oarsmen in, respectively, the lowest, middle, and uppermost files of the triereis. Tholes were pins that acted as fulcrums for the oars, allowing them to move. The center of gravity of the ship is low because of the overlapping formation of the files that allows the ports to remain closer to the ship's walls. A lower center of gravity would provide adequate stability. The trireme was constructed to maximize all traits of the ship to the point where if any changes were made the design would be compromised. Speed was maximized to the point where any less weight would have resulted in considerable losses to the ship's integrity. 
The center of gravity was placed at the lowest possible position, where the thalamian tholes were just above the waterline, which retained the ship's resistance to waves and possible rollover. If the center of gravity were placed any higher, the additional beams needed to restore stability would have resulted in the exclusion of the thalamian tholes due to the reduced hull space. The purpose of the area just below the center of gravity and the waterline known as the "hypozomata" (ὑποζώματα) was to allow bending of the hull when faced with up to 90 kN of force. The calculations of forces that could have been absorbed by the ship are arguable because there is not enough evidence to confirm the exact process of jointing used in ancient times. In a modern reconstruction of the ship, a polysulphide sealant was used to compare to the caulking that evidence suggests was used; however this is also argued because there is simply not enough evidence to authentically reproduce the triereis seams. Triremes required a great deal of upkeep in order to stay afloat, as references to the replacement of ropes, sails, rudders, oars and masts in the middle of campaigns suggest. They also would become waterlogged if left in the sea for too long. In order to prevent this from happening, ships would have to be pulled from the water during the night. The use of lightwoods meant that the ship could be carried ashore by as few as 140 men. Beaching the ships at night, however, would leave the troops vulnerable to surprise attacks. While well-maintained triremes would last up to 25 years, during the Peloponnesian War, Athens had to build nearly 20 triremes a year to maintain their fleet of 300. The Athenian trireme had two great cables of about 47 mm in diameter and twice the ship's length called "hypozomata" (undergirding), and carried two spares. 
They were possibly rigged fore and aft from end to end along the middle line of the hull just under the main beams and tensioned to 13.5 tonnes force. The "hypozomata" were considered important and secret: their export from Athens was a capital offense. This cable would act as a stretched tendon straight down the middle of the hull, and would have prevented hogging. Additionally, hull plank butts would remain in compression in all but the most severe sea conditions, reducing working of joints and consequent leakage. The "hypozomata" would also have significantly braced the structure of the trireme against the stresses of ramming, giving it an important advantage in combat. According to material scientist J.E. Gordon: "The "hupozoma" was therefore an essential part of the hulls of these ships; they were unable to fight, or even to go to sea at all, without it. Just as it used to be the practice to disarm modern warships by removing the breech-blocks from the guns, so, in classical times, disarmament commissioners used to disarm triremes by removing the "hupozomata"." Excavations of the ship sheds ("neōsoikoi", νεώσοικοι) at the harbour of Zea in Piraeus, which was the main war harbour of ancient Athens, were first carried out by Dragatsis and Wilhelm Dörpfeld in the 1880s. These have provided us with a general outline of the Athenian trireme. The sheds were ca. 40 m long and just 6 m wide. These dimensions are corroborated by the evidence of Vitruvius, whereby the individual space allotted to each rower was 2 cubits. With the Doric cubit of 0.49 m, this results in an overall ship length of just under 37 m. The height of the sheds' interior was established as 4.026 metres, leading to estimates that the height of the hull above the water surface was ca. 2.15 metres. Its draught was relatively shallow, about 1 metre, which, in addition to the relatively flat keel and low weight, allowed it to be beached easily. Construction of the trireme differed from modern practice. 
The construction of a trireme was expensive and required around 6000 man-days of labor to complete. The ancient Mediterranean practice was to build the outer hull first, and the ribs afterwards. To secure and add strength to the hull, cables ("hypozōmata") were employed, fitted in the keel and stretched by means of windlasses. Hence the triremes were often called "girded" when in commission. The materials from which the trireme was constructed were an important aspect of its design. The three principal timbers included fir, pine, and cedar. Primarily the choice in timber depended on where the construction took place. For example, in Syria and Phoenicia, triereis were made of cedar because pine was not readily available. Pine is stronger and more resistant to decay, but it is heavy unlike fir which was used because it was lightweight. The frame and internal structure would consist of pine and fir for a compromise between durability and weight. Another very strong type of timber is oak; this was primarily used for the hulls of triereis to withstand the force of hauling ashore. Other ships would usually have their hulls made of pine because they would usually come ashore via a port or with the use of an anchor. It was necessary to ride the triereis onto the shores because there simply was no time to anchor a ship during war and gaining control of enemy shores was crucial in the advancement of an invading army. (Petersen) The joints of the ship required finding wood that was capable of absorbing water but was not completely dried out to the point where no water absorption could occur. There would be gaps between the planks of the hull when the ship was new, but once submerged the planks would absorb the water and expand thus forming a watertight hull. 
Problems would occur for example when shipbuilders would use green wood for the hull; when green timber is allowed to dry it loses moisture which causes cracks in the wood that could cause catastrophic damages to the ship. The sailyards and masts were preferably made from fir because fir trees were naturally tall and provided these parts in usually a single piece. Making durable rope consisted of using both papyrus and white flax; the idea to use such materials is suggested by evidence to have originated in Egypt. In addition, ropes began being made from a variety of esparto grass in the later third century BC. The use of lightwoods meant that the ship could be carried ashore by as few as 140 men, but also that the hull soaked up water, which adversely affected its speed and maneuverability. But it was still faster than other warships. Once the triremes were seaworthy, it is argued that they were highly decorated with, "eyes, nameplates, painted figureheads, and various ornaments". These decorations were used both to show the wealth of the patrician and to make the ship frightening to the enemy. The home port of each trireme was signaled by the wooden statue of a deity located above the bronze ram on the front of the ship. In the case of Athens, since most of the fleet's triremes were paid for by wealthy citizens, there was a natural sense of competition among the patricians to create the "most impressive" trireme, both to intimidate the enemy and to attract the best oarsmen. Of all military expenditure, triremes were the most labor- and (in terms of men and money) investment-intensive. The ship's primary propulsion came from the 180 oars ("kōpai"), arranged in three rows, with one man per oar. Evidence for this is provided by Thucydides, who records that the Corinthian oarsmen carried "each his oar, cushion ("hypersion") and oarloop". 
The ship also had two masts, a main ("histos megas") and a small foremast ("histos akateios"), with square sails, while steering was provided by two steering oars at the stern (one at the port side, one to starboard). Classical sources indicate that the trireme was capable of sustained speeds of ca. 6 knots at a relatively leisurely pace. There is also a reference by Xenophon of a single day's voyage from Byzantium to Heraclea Pontica, which translates as an average speed of 7.37 knots. These figures seem to be corroborated by the tests conducted with the reconstructed Olympias: a maximum speed of 8 knots and a steady speed of 4 knots could be maintained, with half the crew resting at a time. Given the imperfect nature of the reconstructed ship as well as the fact that it was manned by totally untrained modern men and women, it is reasonable to suggest that ancient triremes, expertly built and navigated by trained men, would attain higher speeds. The distance a trireme could cover in a given day depended much on the weather. On a good day, the oarsmen, rowing for 6–8 hours, could propel the ship between . There were rare instances however when experienced crews and new ships were able to cover nearly twice that distance (Thucydides mentions a trireme travelling 300 kilometres in one day). The commanders of the triremes also had to stay aware of the condition of their men. They had to keep their crews comfortably paced so as not to exhaust them before battle. The total complement ("plērōma") of the ship was about 200. These were divided into the 170 rowers ("eretai"), who provided the ship's motive power, the deck crew headed by the trierarch, and a marine detachment. For the crew of Athenian triremes, the ships were an extension of their democratic beliefs. Rich and poor rowed alongside each other. 
Victor Davis Hanson argues that this "served the larger civic interest of acculturating thousands as they worked together in cramped conditions and under dire circumstances." During the Peloponnesian War, there were a few variations to the typical crew layout of a trireme. One was a drastically reduced number of oarsmen, so as to use the ship as a troop transport. The thranites would row from the top benches while the rest of the space, below, would be filled with hoplites. In another variation, the Athenians used 10 or so triremes for transporting horses. Such triremes had 60 oarsmen, and the rest of the ship was given over to the horses. The trireme was designed for day-long journeys, with no capacity to stay at sea overnight, or to carry the provisions needed to sustain its crew. Each crewman required 2 gallons (7.6 L) of fresh drinking water to stay hydrated each day, but it is unknown quite how this was stored and distributed. This meant that all those aboard were dependent upon the land and peoples of wherever they landed each night for supplies. Sometimes this would entail traveling up to eighty kilometres in order to procure provisions. In the Peloponnesian War, the beached Athenian fleet was caught unawares on more than one occasion, while out looking for food (Battle of Syracuse and Battle of Aegospotami). Cities visited, which suddenly found themselves needing to provide for large numbers of sailors, usually did not mind the extra business - though those in charge of the fleet had to be careful not to deplete them of resources. In Athens, the ship's captain was known as the trierarch ("triērarchos"). He was a wealthy Athenian citizen (usually from the class of the "pentakosiomedimoi"), responsible for manning, fitting out and maintaining the ship for his liturgical year at least; the ship itself belonged to Athens. 
The "triērarchia" was one of the liturgies of ancient Athens; although it afforded great prestige, it constituted a great financial burden, so that in the 4th century it was often shared by two citizens, and after 397 BC it was assigned to special boards. The deck and command crew ("hypēresia") was headed by the helmsman, the "kybernētēs", who was always an experienced seaman and was often the commander of the vessel. These experienced sailors were to be found on the upper levels of the trireme. Other officers were the bow lookout ("prōreus" or "prōratēs"), the boatswain ("keleustēs"), the quartermaster ("pentēkontarchos"), the shipwright ("naupēgos"), the piper ("aulētēs"), who set the rowers' rhythm, and two superintendents ("toicharchoi"), in charge of the rowers on each side of the ship. These sailors' experience consisted of a combination of superior rowing skill (physical stamina and/or consistency in hitting a full stroke) and previous battle experience; they were likely in their thirties and forties. In addition, there were ten sailors handling the masts and the sails. In the ancient navies, crews were composed not of galley slaves but of free men. In the Athenian case in particular, service in the ships was an integral part of the military service provided by the lower classes, the "thētai", although metics and hired foreigners were also accepted. Although it has been argued that slaves formed part of the rowing crew in the Sicilian Expedition, a typical Athenian trireme crew during the Peloponnesian War consisted of 80 citizens, 60 metics and 60 foreign hands. Indeed, in the few emergency cases where slaves were used to crew ships, they were deliberately set free, usually before being employed. For instance, the tyrant Dionysius I of Syracuse once set all the slaves of Syracuse free to man his galleys, thus employing freedmen, but otherwise relied on citizens and foreigners as oarsmen.
In the Athenian navy, the crews enjoyed long practice in peacetime, becoming skilled professionals and ensuring Athens' supremacy in naval warfare. The rowers were divided according to their positions in the ship into "thranitai", "zygitai" and "thalamitai", as recorded in the excavated Naval Inventories, lists of ships' equipment compiled by the Athenian naval boards. Most of the rowers (108 of the 170 - the "zygitai" and "thalamitai") were, owing to the design of the ship, unable to see the water and therefore rowed blindly, so coordinating the rowing required great skill and practice. It is not known exactly how this was done, but there are literary and visual references to the use of gestures and pipe playing to convey orders to rowers. In the sea trials of the reconstruction "Olympias", it was evident that this was a difficult problem to solve, given the amount of noise that a full rowing crew generated. In Aristophanes' play "The Frogs", two different rowing chants can be found: "ryppapai" and "o opop", both corresponding quite well to the sound and motion of the oar going through its full cycle. A varying number of marines ("epibatai"), usually 10–20, were carried aboard for boarding actions. At the Battle of Salamis, each Athenian ship was recorded to have 14 hoplites and 4 archers (usually Scythian mercenaries) on board, but Herodotus narrates that the Chiots had 40 hoplites on board at Lade and that the Persian ships carried a similar number. This reflects the different practices between the Athenians and other, less professional navies: whereas the Athenians relied on speed and maneuverability, where their highly trained crews had the advantage, other states favored boarding, in a situation that closely mirrored the one that developed during the First Punic War. Grappling hooks would be used both as a weapon and for towing damaged ships (ally or enemy) back to shore.
When the triremes were alongside each other, marines would either spear the enemy or hop across and cut the enemy down with their swords. As the presence of too many heavily armed hoplites on deck tended to destabilize the ship, the "epibatai" were normally seated, only rising to carry out any boarding action. The hoplites belonged to the middle social classes, so that they came immediately next to the trierarch in status aboard the ship. In the ancient world, naval combat relied on two methods: boarding and ramming. Artillery in the form of ballistas and catapults was widespread, especially in later centuries, but its inherent technical limitations meant that it could not play a decisive role in combat. The method for boarding was to brush alongside the enemy ship, with oars drawn in, in order to break the enemy's oars and render the ship immobile, to be finished off as convenient. Rams ("embolon") were fitted to the prows of warships, and were used to rupture the hull of the enemy ship. The preferred method of attack was to come in from astern, with the aim not of creating a single hole, but of rupturing as big a length of the enemy vessel as possible. The speed necessary for a successful impact depended on the angle of attack; the greater the angle, the lesser the speed required. At 60 degrees, 4 knots was enough to penetrate the hull, while it increased to 8 knots at 30 degrees. If the target for some reason was in motion in the direction of the attacker, even less speed was required, and especially if the hit came amidships. The Athenians especially became masters in the art of ramming, using light, un-decked ("aphraktai") triremes. In either case, the masts and railings of the ship were taken down prior to engagement to reduce the opportunities for opponents' grappling hooks. Unlike the naval warfare of other eras, boarding an enemy ship was not the primary offensive action of triremes. 
Triremes' small size allowed for a limited number of marines to be carried aboard. During the 5th and 4th centuries, the trireme's strength lay in its maneuverability and speed, not its armor or boarding force. That said, fleets less confident in their ability to ram were prone to load more marines onto their ships. On the deck of a typical trireme in the Peloponnesian War there were 4 or 5 archers and 10 or so marines. These few troops were only peripherally effective in an offensive sense, but critical in providing defense for the oarsmen: should the crew of another trireme board, the marines were all that stood between the enemy troops and the slaughter of the men below. It has also been recorded that, if a battle took place in the calmer water of a harbor, oarsmen would join the offensive and throw stones (from a stockpile aboard) to aid the marines in harassing and attacking other ships. Squadrons of triremes employed a variety of tactics. The "periplous" (Gk., "sailing around") involved outflanking or encircling the enemy so as to attack them in the vulnerable rear; the "diekplous" (Gk., "sailing out through") involved a concentrated charge so as to break a hole in the enemy line, allowing galleys to break through and then wheel to attack the enemy line from behind; and the "kyklos" (Gk., "circle") and the "mēnoeidēs kyklos" (Gk., "half-circle"; literally, "moon-shaped (i.e. crescent-shaped) circle") were defensive tactics to be employed against these manoeuvres. In all of these manoeuvres, the ability to accelerate faster, row faster, and turn more sharply than one's enemy was very important. Athens' strength in the Peloponnesian War came from its navy, whereas Sparta's came from its land-based hoplite army.
As the war progressed, however, the Spartans came to realize that if they were to undermine Pericles' strategy of outlasting the Peloponnesians by remaining within the walls of Athens indefinitely (a strategy made possible by Athens' Long Walls and fortified port of Piraeus), they would have to do something about Athens' superior naval force. Once Sparta gained Persia as an ally, it had the funds necessary to construct the new fleets needed to combat the Athenians. Sparta was able to build fleet after fleet, eventually destroying the Athenian fleet at the Battle of Aegospotami. The Spartan general Brasidas summed up the difference in approach to naval warfare between the Spartans and the Athenians: "Athenians relied on speed and maneuverability on the open seas to ram at will clumsier ships; in contrast, a Peloponnesian armada might win only when it fought near land in calm and confined waters, had the greater number of ships in a local theater, and if its better-trained marines on deck and hoplites on shore could turn a sea battle into a contest of infantry." In addition, compared to the high finesse of the Athenian navy (superior oarsmen who could outflank and ram enemy triremes from the side), the Spartans (as well as their allies and other enemies of Athens) focused mainly on ramming Athenian triremes head-on. These tactics, in combination with those outlined by Brasidas, led to the defeat of the Athenian fleet at the Second Battle of Syracuse during the Sicilian Expedition. Once a naval battle was underway, there were numerous ways for the men involved to meet their end. Drowning was perhaps the most common way for a crew member to perish: once a trireme had been rammed, the ensuing panic that engulfed the men trapped below deck no doubt extended the time it took them to escape.
Inclement weather would greatly decrease the crew's odds of survival, leading to situations like that off Cape Athos in 411 BC, when only 12 of 10,000 men were saved. An estimated 40,000 Persians died in the Battle of Salamis. In the Peloponnesian War, after the Battle of Arginusae, six Athenian generals were executed for failing to rescue several hundred of their men clinging to wreckage in the water. If the men did not drown, they might be taken prisoner by the enemy. In the Peloponnesian War, "Sometimes captured crews were brought ashore and either cut down or maimed - often grotesquely, by cutting off the right hand or thumb to guarantee that they could never row again." An image on an early-5th-century black-figure vase, depicting bound prisoners thrown into the sea and being pushed and prodded under water with poles and spears, shows that enemy treatment of captured sailors in the Peloponnesian War was often brutal. Being speared amid the wreckage of destroyed ships was likely a common cause of death for sailors in the Peloponnesian War. Naval battles were far more of a spectacle than the hoplite battles on land, sometimes watched by thousands of spectators on shore. With this greater spectacle came greater consequences for the outcome of any given battle: whereas the average rate of fatalities in a land battle was between 10 and 15%, in a sea battle the forces engaged ran the risk of losing their entire fleet. The number of ships and men in battles was sometimes very high. At the Battle of Arginusae, for example, 263 ships were involved, making for a total of 55,000 men, and at the Battle of Aegospotami more than 300 ships and 60,000 seamen were involved. At Aegospotami, the city-state of Athens lost what was left of its navy: the once 'invincible' thalassocracy lost 170 ships (costing some 400 talents), and most of the crews were killed, captured or lost.
During the Hellenistic period, the light trireme was supplanted by larger warships in the dominant navies, especially the pentere/quinquereme. The maximum practical number of oar banks a ship could have was three, so the number in the type name no longer referred to banks of oars (as it did for biremes and triremes), but to the number of rowers per vertical section, with several men on each oar. The reason for this development was the increasing use of armour on the bows of warships against ramming attacks, which in turn required heavier ships for a successful attack. This increased the number of rowers per ship, and also made it possible to use less well-trained personnel for moving these new ships. This change was accompanied by an increased reliance on tactics such as boarding, missile skirmishes and the use of warships as platforms for artillery. Triremes continued to be the mainstay of all smaller navies: while the Hellenistic kingdoms did develop the quinquereme and even larger ships, most navies of the Greek homeland and the smaller colonies could only afford triremes. They were used by the Diadochi empires and sea powers such as Syracuse, Carthage and later Rome. The difference from the classical 5th-century Athenian ships was that they were armoured against ramming and carried significantly more marines. Lightened versions of the trireme and smaller vessels were often used as auxiliaries, and still performed quite effectively against the heavier ships, thanks to their greater manoeuvrability. With the rise of Rome, the biggest fleet of quinqueremes temporarily ruled the Mediterranean, but during the civil wars after Caesar's death the fleet ended up on the wrong side, and a new style of warfare with light liburnas was developed. By Imperial times the fleet was relatively small and had mostly political influence, controlling the grain supply and fighting pirates, who usually employed light biremes and liburnians.
But instead of the successful liburnians of the civil wars, it was again centred around light triremes, though still with many marines. Out of this type of ship, the dromon developed. In 1985–1987 a shipbuilder in Piraeus, financed by Frank Welsh (a Suffolk banker, author and trireme enthusiast), advised by the historian J. S. Morrison and the naval architect John F. Coates (who with Welsh founded the Trireme Trust, which initiated and managed the project), and informed by evidence from underwater archaeology, built an Athenian-style trireme, "Olympias". Crewed by 170 volunteer oarsmen, "Olympias" in 1988 achieved 9 knots (17 km/h or 10.5 mph). These results, achieved with an inexperienced crew, suggest that the ancient writers were not exaggerating about straight-line performance. In addition, "Olympias" was able to execute a 180-degree turn in one minute, in an arc no wider than two and a half ship-lengths. Additional sea trials took place in 1987, 1990, 1992 and 1994. In 2004 "Olympias" was used ceremonially to transport the Olympic Flame from the port of Keratsini to the main port of Piraeus as the 2004 Olympic Torch Relay entered its final stages in the run-up to the 2004 Summer Olympics opening ceremony. The builders of the reconstruction project concluded that it effectively proved what had previously been in doubt: that Athenian triremes were arranged with the crew positioned in a staggered arrangement on three levels, with one person per oar. This architecture would have made optimum use of the available internal dimensions.
However, since modern humans are on average approximately 6 cm (2 inches) taller than Ancient Greeks (and the same relative dimensions can be presumed for oarsmen and other athletes), the construction of a craft which followed the precise dimensions of the ancient vessel led to cramped rowing conditions and consequent restrictions on the modern crew's ability to propel the vessel with full efficiency, which perhaps explains why the ancient speed records stand unbroken.
https://en.wikipedia.org/wiki?curid=31199
Traveling Wilburys The Traveling Wilburys (sometimes shortened to the Wilburys) were an English–American supergroup consisting of Bob Dylan, George Harrison, Jeff Lynne, Roy Orbison and Tom Petty. Originating from an idea discussed by Harrison and Lynne during the sessions for Harrison's 1987 album "Cloud Nine", the band formed in April 1988 after the five members united to record a bonus track for Harrison's next European single. When this collaboration, "Handle with Care", was deemed too good for such a limited release, the group agreed to record a full album, titled "Traveling Wilburys Vol. 1". Following Orbison's death in December 1988, the band released a second album, which they titled "Traveling Wilburys Vol. 3", in 1990. The project's work received much anticipation given the diverse nature of the singer-songwriters. The band members adopted tongue-in-cheek pseudonyms as half-brothers from a fictional Wilbury family of travelling musicians. "Vol. 1" was a critical and commercial success, helping to revitalise Dylan's and Petty's respective careers. In 1990, the album won the Grammy for Best Rock Performance by a Duo or Group. Although Harrison envisaged a series of Wilburys albums and a film about the band, produced through his company HandMade, the group's final release was in February 1991. After several years of unavailability, the two Wilburys albums were reissued by the Harrison estate in the 2007 box set "The Traveling Wilburys Collection". The box set included a DVD containing their music videos and a documentary on the band's formation. George Harrison first mentioned the Traveling Wilburys publicly during a radio interview with Bob Coburn on the show "Rockline" in February 1988. When asked how he planned to follow up the success of his "Cloud Nine" album, Harrison replied: "What I'd really like to do next is ... to do an album with me and some of my mates ... 
It's this new group I got [in mind]: it's called the Traveling Wilburys, I'd like to do an album with them and then later we can all do our own albums again." According to Jeff Lynne, who co-produced "Cloud Nine", Harrison introduced the idea of the two of them starting a band together around two months into the sessions for his album, which began in early January 1987. When discussing who the other members might be, Harrison chose Bob Dylan and Lynne opted for Roy Orbison. The term "Wilbury" also originated during the "Cloud Nine" sessions. Referring to recording errors created by faulty equipment, Harrison jokingly remarked to Lynne, ""We'll bury" 'em in the mix." Thereafter, they used the term for any small error in performance. Harrison first suggested "the Trembling Wilburys" as the group's name; at Lynne's suggestion, they amended it to "Traveling Wilburys". During his "Rockline" interview, Harrison voiced his support for Dylan, at a time when the latter was experiencing an artistic and commercial low point in his career. Harrison and Lynne became friends with Tom Petty in October 1987, when Petty and his band, the Heartbreakers, toured Europe as Dylan's backing group. The friendship continued in Los Angeles later that year. There, Harrison struck up a musical rapport with Petty based on their shared love of 1950s rock 'n' roll, and Lynne began collaborating with Petty on what became the latter's debut solo album, "Full Moon Fever", and writing songs with Orbison, Lynne's longtime musical hero, for Orbison's comeback album, "Mystery Girl". According to Petty, Harrison's dream for the Wilburys was to handpick the participants and create "the perfect little band", but the criteria for inclusion were governed most by "who you could hang out with". The five musicians also bonded over a shared appreciation of the English comedy troupe Monty Python. 
Harrison, who had worked with the members of Monty Python on various productions by his company HandMade Films since the late 1970s, particularly appreciated Orbison's gift for impersonation and his ability to recite entire sketches by the troupe. The band came together in April 1988, when Harrison was in Los Angeles to oversee the filming of his HandMade production "Checking Out". At that time, Warner Bros. Records asked Harrison for a new song to serve as the B-side for the European release of his third single from "Cloud Nine", "This Is Love". During a meal with Lynne and Orbison, Harrison asked Lynne to help him record the track and invited Orbison to attend the session, which he then arranged to take place at Dylan's garage studio in Malibu since no professional studios were available at such short notice. Petty's involvement came about when Harrison went to retrieve his guitar from Petty's house and invited him to attend also. Working on a song that Harrison had recently started writing, the ensemble completed the track, which they titled "Handle with Care" after a label on a box in Dylan's garage. When Harrison presented the recording to Mo Ostin and Lenny Waronker of Warner Bros., the executives insisted that the song was too good to be used as a B-side. In Petty's recollection, Harrison and Lynne then decided to realise their idea of forming a Wilburys band, and first invited him to join before phoning Dylan, who also agreed to join. That night, Harrison, Lynne and Petty drove to Anaheim to see Orbison perform at the Celebrity Theatre and recruited him for the group shortly before he went on stage. In Petty's description, Orbison performed an "unbelievable show", during which "we'd punch each other and go, 'He's in our band, too.' ... We were all so excited." The band members decided to create a full album together, "Traveling Wilburys Vol. 1". Video footage of the creative process was later edited by Harrison into a promotional film for Warner Bros. 
staff, titled "Whatever Wilbury Wilbury". The album was recorded primarily over a ten-day period in May 1988, to allow for Dylan's limited availability as he prepared for the start of what became known as his Never Ending Tour, and for Orbison's tour schedule. These sessions were held at the Los Angeles house of Eurythmics member Dave Stewart. The five band members sat in a circle playing acoustic guitars in Stewart's kitchen; once each song's basic track had been written and recorded there (with accompaniment from a drum machine), the group recorded their vocals in another room, usually after dinner each night. Petty recalled that, as a friend but also an avowed fan of Dylan's, Harrison felt the need to clear the air on the first day by saying to him: "We know that you're Bob Dylan and everything, but we're going to just treat you and talk to you like we would anybody else." Dylan replied: "Well, great. Believe it or not, I'm in awe of you guys, and it's the same for me." While most of the songs had a primary composer, all of the band members were creative equals. Petty later described Harrison as the Wilburys' "leader and manager", and credited him with being a bandleader and producer who had a natural instinct for bringing out the best in people and keeping a recording session productive. As the group's producers, Harrison and Lynne directed the sessions, with Harrison often auditioning each member to decide who should sing a particular lead vocal part. The two producers then flew back to England; Lynne recalled that, throughout the flight, he and Harrison enthused about how to turn the sparse, acoustic-based tracks into completed recordings. Overdubs and further recording took place at Harrison's studio, FPSHOT, with "Sideburys" Jim Keltner (drums), Jim Horn (saxophones) and Ray Cooper (percussion). Harrison described the band's sound as "skiffle for the 1990s". The album was released on 18 October 1988.
Distributed by Warner Bros., it appeared on the new Wilbury record label rather than on Harrison's Dark Horse label, in the interests of maintaining the group identity. Over the months following the end of recording in the summer, contractual issues had been successfully negotiated between Warner's and the record companies representing Dylan, Petty, Lynne and Orbison. As was the case in 1971 when EMI prepared Harrison's multi-artist live album from the Concert for Bangladesh for release, Columbia, Dylan's label, presented the main stumbling block. In the album credits, the "Wilburys" joke was extended further, with the band members listed under various pseudonyms and pretending to be half-brothers – sons of a fictional Charles Truscott Wilbury, Sr. During promotion for the album, Orbison played along with the mock history, saying: "Some people say Daddy was a cad and a bounder, but I remember him as a Baptist minister." "Vol. 1" was a critical and commercial success, and revitalised the careers of Dylan, Orbison and Petty. As Harrison had intended, the album defied contemporary musical trends such as hip hop, acid house and synthesised pop; author Alan Clayson likens its release to "a Viking longship docking in a hovercraft terminal". The album produced two successful singles and went on to achieve triple-platinum certification for sales in the United States. It was nominated for several awards and won the 1990 Grammy Award for Best Rock Performance by a Duo or Group. Liner notes on the album cover were written by Monty Python's Michael Palin under a pseudonym. Palin's essay was based on an idea by Derek Taylor, who wrote an extensive fictional history of the Wilburys family that otherwise went unused. Harrison planned a feature film about the band, to be produced by HandMade and directed by David Leland, but contractual problems ended the project. Roy Orbison died of a heart attack on 6 December 1988. 
In tribute to him, the music video for the band's second single, "End of the Line", shows Orbison's guitar rocking in a chair when his vocals are heard. Lynne recalled that Orbison's death in the wake of "Vol. 1"s success was "the most sickening thing to me". He added: "I was devastated for ages ... Me and Roy had had plans to do much more together, and his voice was in really good shape. It was just so sad for that to happen." Although there was speculation in the press that Del Shannon or Roger McGuinn might join the Wilburys, the remaining members never considered replacing Orbison. Lynne later said: "We'd become this unit, we were all good pals … We always knew we were going to do another one, and now it's just the four of us." Harrison was the most active in promoting the Wilburys, carrying out interviews well into 1989. He said he was "wait[ing] for all the other Wilburys to finish being solo artists" so that they could renew the collaboration. By contrast, according to author Clinton Heylin, Dylan appeared to give the band little attention as he focused on re-establishing himself as a live performer before recording his 1989 album "Oh Mercy". In March 1990, Harrison, Lynne, Petty and Dylan reunited to work on a second Wilburys album, which they intentionally misnumbered "Traveling Wilburys Vol. 3". It was preceded by a non-album single, a cover of "Nobody's Child", which the band recorded for Olivia Harrison's Romanian Angel Appeal charity project. The duration of the main album sessions was again dictated by Dylan's touring schedule and limited availability. Having asked Dylan to record a lead vocal for all the songs before his departure, Harrison was then loath to replace many of the parts, resulting in a greater prominence for Dylan as a lead singer. Although he ceded his own role as a lead vocalist to Dylan and to Petty, Harrison took over more of the production and contributed more prominently as a lead guitarist than before. 
Petty described the album as "a little more rough and ready, a bit more raucous" than "Vol. 1", while Dylan said the new songs were more developed as compositions relative to the "scraped up from jam tapes" approach of the band's debut. "Vol. 3" was released on 29 October 1990. It was dedicated to Orbison, as "Lefty Wilbury", the pseudonym that Orbison had used in 1988 in honour of his hero Lefty Frizzell. The album met with less success than its predecessor. According to Mo Ostin, the choice of album title came about through "George being George"; apparently Harrison was making a wry reference to the appearance of a bootleg that had served as a sort of "Volume 2". The album's liner notes were written by Eric Idle, another Python member, who again adopted a pseudonym. For the band's final single, "Wilbury Twist", they filmed a video in which Idle, John Candy and other comedic actors attempt to master the song's eponymous dance style. The clip was filmed in Los Angeles and completed on 28 February 1991. According to Jim Keltner, the decision on the group's future after "Vol. 3" lay with Harrison. Keltner said that, from his conversations with Lynne, Petty and Dylan, they were all keen to reunite, whereas Harrison wavered in his enthusiasm. While Harrison was against the idea of touring, Petty recalled: "I kept getting down on my knees in front of George, saying, 'Please, it's so much money!'" After his 1991 tour of Japan – his first series of concerts since 1974 – Harrison spoke of a possible Traveling Wilburys tour, but it never came about. In the Rolling Stone Press book "The New Rolling Stone Encyclopedia of Rock & Roll", the Traveling Wilburys are described as "the ultimate supergroup", with a line-up that represented four eras of rock music history and included "three indisputable gods" in Dylan, Harrison and Orbison.
The editors also recognise the band as "the antithesis of a supergroup", due to the musicians' adoption of fraternal alter egos and the humour inherent in the project. AllMusic managing editor Stephen Thomas Erlewine has similarly written: "It's impossible to picture a supergroup with a stronger pedigree than that (all that's missing is a Rolling Stone), but in another sense it's hard to call the Wilburys a true supergroup, since they arrived nearly two decades after the all-star craze of the '70s peaked, and they never had the self-important air of nearly all the other supergroups. That, of course, was the key to their charm …" Speaking to music journalist Paul Zollo in 2004, Petty agreed that humour and self-effacement had been key factors in the Wilburys' success, adding: "We wanted to make something good in a world that seemed to get uglier and uglier and meaner and meaner … And I'm really proud that I was part of it. Because I do think that it brought a little sunshine into the world." Harrison said the project was an opportunity to "put a finger up to the rules" by challenging the norms associated with the music industry. Discussing the Wilburys in Peter Bogdanovich's 2007 documentary "Runnin' Down a Dream", Petty said that one of the strengths behind the concept was that it was free of any intervention from record company, management or marketing concerns, and instead developed naturally from a spirit of co-operation and mutual admiration among five established artists. Author Simon Leng recognises the venture as primarily a channel through which Harrison and Dylan could escape the restrictions of their serious media images, but also, in its guise as a "phantom band", a development by Harrison of the Rutles' satirical approach to the Beatles' legacy, in this case by "de-mythologizing" rock history. 
Inspired by the Traveling Wilburys' success and particularly its benefit to Petty and Orbison as artists, Lenny Waronker encouraged American guitarist Ry Cooder to form the band Little Village and record for Warner Bros. The group – comprising Cooder, Keltner, John Hiatt and Nick Lowe – released a self-titled album in 1992. Greg Kot of the "Chicago Tribune" described the Notting Hillbillies' "Missing ... Presumed Having a Good Time" as a Traveling Wilburys-type side project for Mark Knopfler of Dire Straits. Writing in "New York" magazine in late 1990, Elizabeth Wurtzel cited the Notting Hillbillies' album and the self-titled debut by Hindu Love Gods – a band consisting of Warren Zevon and members of R.E.M. – as examples of a trend whereby, following the Wilburys' "Vol. 1", "more and more albums seem to be the rock-and-roll equivalents of bowling night." Writing in "The Encyclopedia of Popular Music", Colin Larkin cites the Wilburys' contemporary skiffle as evidence of Lonnie Donegan's continued influence on popular music long after the early 1960s. In his book "Lonnie Donegan and the Birth of British Rock & Roll", Patrick Humphries describes the Wilburys as "a makeshift quintet whose roots were firmly and joyously planted in low-key, low-tech skiffle music". He credits the band with inspiring a brief revival of Donegan's "DIY skiffle", which included Knopfler's Notting Hillbillies. Each member of the Traveling Wilburys has been inducted into the Rock and Roll Hall of Fame, although the band itself has not been inducted. Orbison and Dylan were inducted as solo artists, Harrison was inducted as a member of the Beatles and, posthumously, as a solo artist, Petty as the leader of Tom Petty and the Heartbreakers, and Lynne as a member of the Electric Light Orchestra. In the late 1990s and early 2000s, the two Traveling Wilburys albums had limited availability and were out of print in most areas. 
Harrison, as primary holder of the rights, did not reissue them before his death. In June 2007, the two albums were reissued as "The Traveling Wilburys Collection", a box set including both albums on CD (with bonus tracks) and a DVD featuring a 25-minute documentary entitled "The True History of the Traveling Wilburys" and a collection of music videos. The box set was released in three editions; the standard edition, with both CDs and DVD in a double Digipak package and a 16-page booklet; a "deluxe" boxed edition with the CDs and DVD and an extensive 40-page booklet, artist postcards, and photographs; or a "deluxe" boxed edition on vinyl. This version omits the DVD, but adds a 12-inch vinyl disc with rare versions of the songs. The release debuted at number 1 in the UK and topped the albums chart in Australia, Ireland and other countries. On the US "Billboard" 200 it reached number 9. The collection sold 500,000 copies worldwide during the first three weeks and remained in the UK top 5 for seven weeks after its release. In November 2009, Genesis Publications, a company with which Harrison had been associated since the late 1970s, announced the release of a limited edition fine-bound book titled "The Traveling Wilburys". Compiled by Olivia Harrison, the book includes rare photographs, recording notes, handwritten lyrics, sketches, and first-hand commentary on the band's history, together with a foreword by Lynne. Petty, Lynne, Olivia Harrison, Barbara Orbison, Keltner and Idle were among those who attended the US launch at a Beverly Hills bookshop in March 2010. In an interview to publicise the book, Lynne expressed his sadness at the deaths of Harrison and Orbison, and reflected: "The Wilburys was such a wonderful band, such a marvellous thing to be part of. They were the best people I could ever wish to work with. Every day was like, 'Wow!' ... it was fun from day one." 
Jim Keltner, the session drummer and percussionist, was not officially listed as a Wilbury on either album, but was given the nickname "Buster Sidebury". Overdubs on the 2007 bonus tracks "Maxine" and "Like a Ship" were credited to "Ayrton Wilbury", a pseudonym for Dhani Harrison. The name Ayrton was used in honour of Formula One driver Ayrton Senna. Jim Horn and Ray Cooper played saxophones and percussion, respectively, on both albums. The lead guitar part on the "Vol. 3" track "She's My Baby" was played by rock guitarist Gary Moore, who received the credit "Ken Wilbury". Harrison appeared as Nelson Wilbury on Warner Bros. Records' Christmas 1988 promotional album "Winter Warnerland" (which also included Paul Reubens as "Pee Wee Wilbury"). In 1992, in his capacity as producer, Harrison credited himself as "Spike and Nelson Wilbury" on his live album "Live in Japan". During that Japanese tour, in December 1991, Harrison had credited himself as Nakihama Wilbury. The Tom Petty and the Heartbreakers 1992 single "Christmas All Over Again" contained a greeting that read "Merry Christmas from Nelson and Pee Wee Wilbury". Additionally, at a Tom Petty Celebration in 2019, Roy Orbison Jr. was dubbed "Lefty Wilbury Jr." and Alex Orbison "Ginger Wilbury". The Harrison-made film promoting the Traveling Wilburys, "Whatever Wilbury Wilbury", lists the following credits: "Chopper Wilbury" (editor), "Edison Wilbury" (lighting), "Evelyn Wilbury" (wardrobe), "Clyde B. Wilbury" (special effects), "Big Mac Wilbury" (catering), "Zsa Zsa Wilbury" (make-up) and "Tell M. Wilbury" (production manager). A squirrel is also named "Eddie Wilbury" in that film.
https://en.wikipedia.org/wiki?curid=31206
Tumor suppressor A tumor suppressor gene, or anti-oncogene, is a gene that regulates a cell during cell division and replication. If the cell grows uncontrollably, cancer can result. When a tumor suppressor gene is mutated, it suffers a loss or reduction in its function; in combination with other genetic mutations, this can allow the cell to grow abnormally. The loss of function of these genes may be even more significant in the development of human cancers than the activation of oncogenes. Tumor suppressor genes can be grouped into the following categories: caretaker genes, gatekeeper genes, and, more recently, landscaper genes. Caretaker genes ensure the stability of the genome via DNA repair; when mutated, they allow mutations to accumulate. Gatekeeper genes directly regulate cell growth by either inhibiting cell-cycle progression or inducing apoptosis. Lastly, landscaper genes regulate growth by contributing to the surrounding environment; when mutated, they can create an environment that promotes unregulated proliferation. These classification schemes are evolving as medical advances are made in fields including molecular biology, genetics, and epigenetics. Unlike oncogenes, tumor suppressor genes generally follow the two-hit hypothesis, which states that both alleles coding for a particular protein must be affected before an effect is manifested. If only one allele for the gene is damaged, the other can still produce enough of the correct protein to retain the appropriate function. In other words, mutant tumor suppressor alleles are usually recessive, whereas mutant oncogene alleles are typically dominant. The two-hit hypothesis was first proposed by A.G. Knudson for cases of retinoblastoma. He observed that 40% of U.S. cases were caused by a mutation in the germ line. However, affected parents could have children without the disease, and those unaffected children could in turn become parents of children with retinoblastoma.
This indicates that one can inherit a mutated germ line yet not display the disease. Knudson observed that the age of onset of retinoblastoma followed second-order kinetics, implying that two independent genetic events were necessary. He recognized that this was consistent with a recessive mutation involving a single gene, but requiring bi-allelic mutation. Hereditary cases involve an inherited mutation and a single additional mutation in the normal allele, while non-hereditary retinoblastoma involves two mutations, one on each allele. Knudson also noted that hereditary cases often developed bilateral tumors and developed them earlier in life, compared to non-hereditary cases, in which individuals were affected by only a single tumor. There are exceptions to the two-hit rule for tumor suppressors, such as certain mutations in the p53 gene product. p53 mutations can function as a dominant negative, meaning that a mutated p53 protein can prevent the function of the natural protein produced from the non-mutated allele. Other tumor-suppressor genes that do not follow the two-hit rule are those that exhibit haploinsufficiency, including PTCH in medulloblastoma and NF1 in neurofibroma. Another example is p27, a cell-cycle inhibitor; mutation of a single allele causes increased carcinogen susceptibility. Tumor-suppressor genes, or more precisely the proteins they encode, can have a repressive effect on the regulation of the cell cycle, promote apoptosis, or sometimes both, and they function in a variety of such ways. Many different tumor suppressor genes have been identified. As the cost of DNA sequencing continues to diminish, more cancers can be sequenced, allowing the discovery of novel tumor suppressors and giving insight into how to treat and cure different cancers in the future. Examples of tumor suppressors include pVHL, APC, CD95, ST5, YPEL3, ST7, ST14, p16, and BRCA2.
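Knudson's kinetic argument can be illustrated with a toy Monte Carlo model. This is a sketch for illustration only: the yearly mutation rate and time horizon are invented parameters, not values from the retinoblastoma data. If each allele is independently lost at a constant rate, a hereditary case needs only one somatic hit while a sporadic case needs two, so simulated hereditary onset ages come out markedly earlier:

```python
import random

def onset_age(hits_needed, rate=0.01, max_age=100, rng=random):
    """Toy two-hit model: each year, each remaining normal allele is
    mutated with probability `rate`; return the age at which
    `hits_needed` alleles have been lost (or max_age if never)."""
    hits = 0
    for age in range(1, max_age + 1):
        for _ in range(hits_needed - hits):  # test each remaining allele
            if rng.random() < rate:
                hits += 1
        if hits >= hits_needed:
            return age
    return max_age

random.seed(0)
trials = 2000
# Hereditary cases start with one allele already mutated (one hit left);
# sporadic cases need both hits to occur somatically.
hereditary = [onset_age(1) for _ in range(trials)]
sporadic = [onset_age(2) for _ in range(trials)]
mean_hereditary = sum(hereditary) / trials
mean_sporadic = sum(sporadic) / trials
```

Under these assumed parameters, the simulated hereditary group shows a clearly earlier mean onset, mirroring Knudson's observation that inherited cases present earlier in life.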
https://en.wikipedia.org/wiki?curid=31207
The Angry Brigade The Angry Brigade was a far-left militant group responsible for a series of bomb attacks in England between 1970 and 1972. Using small bombs, they targeted banks, embassies, a BBC Outside Broadcast vehicle, and the homes of Conservative MPs. In total, police attributed 25 bombings to the Angry Brigade. The bombings mostly caused property damage; one person was slightly injured. Of the eight people who stood trial, known as the Stoke Newington Eight, four were acquitted. John Barker, Hilary Creek, Anna Mendelssohn and Jim Greenfield were convicted on majority verdicts and sentenced to ten years. In a 2014 interview, Barker described the trial as political, but acknowledged that "they framed a guilty man". The events were subsequently turned into a play. In mid-1968 demonstrations took place in London, centred on the US embassy in Grosvenor Square, against US involvement in the Vietnam War. One of the organisers of these demonstrations, Tariq Ali, has said he recalls an approach by someone representing the Angry Brigade who wished to bomb the embassy; he told them it was a terrible idea and no bombing took place. The Angry Brigade launched their bombing campaign with small bombs in order to maximise media exposure for their demands while keeping collateral damage to a minimum. The campaign started in August 1970 and continued for a year, until arrests took place the following summer. Targets included banks, embassies, the Miss World event in 1970 (or rather a BBC Outside Broadcast vehicle earmarked for use in the BBC's coverage) and the homes of Conservative MPs. In the 1980s the Angry Brigade resurfaced as the Angry Brigade Resistance Movement – part of the Irish Republican Socialist Movement (IRSM).
Jake Prescott, whose origins were in the mining community of Dunfermline, was arrested and tried in 1971. Melford Stevenson sentenced him to 15 years' imprisonment (later reduced to 10), mostly spent in Category A high-security prisons. Later he said he realised then that he "was the one who was angry and the people [he] met were more like the Slightly Cross Brigade". The other members of the group from north-east London, the "Stoke Newington Eight", were prosecuted for carrying out bombings as the Angry Brigade in one of the longest criminal trials in English history (it lasted from 30 May to 6 December 1972). As a result of the trial, John Barker, Jim Greenfield, Hilary Creek and Anna Mendelssohn received prison sentences of 10 years. A number of other defendants were found not guilty, including Stuart Christie, who had previously been imprisoned in Spain for carrying explosives with the intent to assassinate the "caudillo" Francisco Franco, and Angela Mason, who became a director of the LGBT rights group Stonewall and was awarded an OBE for services to homosexual rights. In February 2002, Prescott apologised for his role in bombing Robert Carr's house and called on other members of the Angry Brigade to also come forward. On 3 February 2002, "The Guardian" published a history of the Angry Brigade and an update on what its former members were doing then. On 9 August 2002, BBC Radio 4 aired Graham White's historical drama, "The Trial of the Angry Brigade". Produced by Peter Kavanagh, this was a reconstruction of the trial combined with other background information. The cast included Kenneth Cranham, Juliet Stevenson and Mark Strong. In 2009, British family care activist and novelist Erin Pizzey was successful in a libel case against Macmillan Publishers after "Andrew Marr's History of Modern Britain" had falsely linked her to the Angry Brigade. The publisher also recalled and destroyed the offending version of the book, and republished it with the error removed.
The link to the Angry Brigade was made in 2001, in an interview with "The Guardian", in which the article states that she was "thrown out" of the feminist movement after threatening to inform police about a planned bombing by the Angry Brigade of the clothes shop Biba. "I said that if you go on with this – they were discussing bombing Biba [the legendary department store in Kensington] – I'm going to call the police in, because I really don't believe in this."
https://en.wikipedia.org/wiki?curid=31208
Bolzano–Weierstrass theorem In mathematics, specifically in real analysis, the Bolzano–Weierstrass theorem, named after Bernard Bolzano and Karl Weierstrass, is a fundamental result about convergence in a finite-dimensional Euclidean space $\mathbb{R}^n$. The theorem states that each bounded sequence in $\mathbb{R}^n$ has a convergent subsequence. An equivalent formulation is that a subset of $\mathbb{R}^n$ is sequentially compact if and only if it is closed and bounded. The theorem is sometimes called the sequential compactness theorem. The Bolzano–Weierstrass theorem is named after mathematicians Bernard Bolzano and Karl Weierstrass. It was actually first proved by Bolzano in 1817 as a lemma in the proof of the intermediate value theorem. Some fifty years later the result was identified as significant in its own right, and proved again by Weierstrass. It has since become an essential theorem of analysis. First we prove the theorem when $n = 1$, in which case the ordering on $\mathbb{R}$ can be put to good use. Indeed, we have the following result. Lemma: Every infinite sequence $(x_n)$ in $\mathbb{R}$ has a monotone subsequence. Proof: Let us call a positive integer $n$ a "peak of the sequence" if $m > n$ implies $x_n > x_m$, i.e., if $x_n$ is greater than every subsequent term in the sequence. Suppose first that the sequence has infinitely many peaks, $n_1 < n_2 < \dots < n_j < \dots$. Then the subsequence $(x_{n_j})$ corresponding to these peaks is monotonically decreasing. So suppose now that there are only finitely many peaks; let $N$ be the last peak and $n_1 = N + 1$. Then $n_1$ is not a peak, since $n_1 > N$, which implies the existence of an $n_2 > n_1$ with $x_{n_2} \geq x_{n_1}$. Again, $n_2 > N$ is not a peak, hence there is an $n_3 > n_2$ with $x_{n_3} \geq x_{n_2}$. Repeating this process leads to an infinite non-decreasing subsequence $x_{n_1} \leq x_{n_2} \leq x_{n_3} \leq \dots$, as desired.
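The peak construction in this proof is effectively an algorithm, and can be sketched for a finite sample of a sequence. The following Python function is an illustrative sketch, not part of the original article: it scans right-to-left to collect the peaks; if at least two exist it returns the (decreasing) subsequence of peak values, and otherwise it greedily builds a non-decreasing subsequence starting after the last peak, just as the proof does.

```python
def monotone_subsequence(xs):
    """Return a monotone subsequence of a non-empty list xs, following
    the peak argument: a peak is an index whose value exceeds every
    later value."""
    n = len(xs)
    # Collect peaks by scanning right-to-left and tracking the running max.
    peaks, running_max = [], float("-inf")
    for i in range(n - 1, -1, -1):
        if xs[i] > running_max:
            peaks.append(i)
            running_max = xs[i]
    peaks.reverse()
    if len(peaks) >= 2:
        # Values at peaks are strictly decreasing by construction.
        return [xs[i] for i in peaks]
    # Few peaks: start after the last peak (wrapping to 0 for a finite
    # sample) and greedily extend a non-decreasing subsequence.
    start = peaks[-1] + 1 if peaks else 0
    if start >= n:
        start = 0
    sub = [xs[start]]
    for x in xs[start + 1:]:
        if x >= sub[-1]:
            sub.append(x)
    return sub
```

On an infinite sequence the same case split yields the monotone subsequence the lemma asserts; the finite prefix here only illustrates the construction.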
Now suppose one has a bounded sequence in $\mathbb{R}$; by the lemma there exists a monotone subsequence, necessarily bounded. It follows from the monotone convergence theorem that this subsequence must converge. Finally, the general case can be reduced to the case $n = 1$ as follows: given a bounded sequence in $\mathbb{R}^n$, the sequence of first coordinates is a bounded real sequence, hence has a convergent subsequence. One can then extract a subsubsequence on which the second coordinates converge, and so on, until in the end we have passed from the original sequence to a subsequence $n$ times (which is still a subsequence of the original sequence) on which each coordinate sequence converges, hence the subsequence itself is convergent. There is also an alternative proof of the Bolzano–Weierstrass theorem using nested intervals. We start with a bounded sequence $(x_n)$: take an interval $I_1$ containing all terms of the sequence, and at each step halve the current interval and keep a half that contains infinitely many members of the sequence, obtaining nested intervals $I_1 \supseteq I_2 \supseteq I_3 \supseteq \dots$. Because we halve the length of an interval at each step, the limit of the interval's length is zero. Thus there is a number $x$ which is in each interval $I_n$. Now we show that $x$ is an accumulation point of $(x_n)$. Take a neighbourhood $U$ of $x$. Because the length of the intervals converges to zero, there is an interval $I_N$ which is a subset of $U$. Because $I_N$ contains by construction infinitely many members of $(x_n)$ and $I_N \subseteq U$, $U$ also contains infinitely many members of $(x_n)$. This proves that $x$ is an accumulation point of $(x_n)$. Thus, there is a subsequence of $(x_n)$ which converges to $x$. Suppose $A$ is a subset of $\mathbb{R}^n$ with the property that every sequence in $A$ has a subsequence converging to an element of $A$. Then $A$ must be bounded, since otherwise there exists a sequence $(x_m)$ in $A$ with $\|x_m\| \geq m$ for all $m$, and then every subsequence is unbounded and therefore not convergent.
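The nested-intervals argument also suggests a numerical sketch. The following Python function is illustrative only: it operates on a finite sample rather than a true infinite sequence, repeatedly halving a bounding interval and keeping the half containing the most sample points (the finite stand-in for "infinitely many members"), and so converges toward an accumulation point.

```python
def accumulation_point(xs, steps=50):
    """Approximate an accumulation point of a bounded sequence (given
    here as a finite sample xs) by repeated halving, as in the
    nested-intervals proof."""
    lo, hi = min(xs), max(xs)
    pts = list(xs)
    for _ in range(steps):
        mid = (lo + hi) / 2
        left = [x for x in pts if x <= mid]
        right = [x for x in pts if x > mid]
        # The proof keeps a half with infinitely many members; with a
        # finite sample we keep the more populated half.
        if len(left) >= len(right):
            pts, hi = left, mid
        else:
            pts, lo = right, mid
    return (lo + hi) / 2
```

For the sample $x_n = (-1)^n (1 + 1/n)$, whose accumulation points are $-1$ and $1$, the procedure homes in on one of the two.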
Moreover, $A$ must be closed, since from a noninterior point $x$ in the complement of $A$, one can build an $A$-valued sequence converging to $x$. Thus the subsets $A$ of $\mathbb{R}^n$ for which every sequence in $A$ has a subsequence converging to an element of $A$ – i.e., the subsets which are sequentially compact in the subspace topology – are precisely the closed and bounded subsets. This form of the theorem makes especially clear the analogy to the Heine–Borel theorem, which asserts that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. In fact, general topology tells us that a metrizable space is compact if and only if it is sequentially compact, so that the Bolzano–Weierstrass and Heine–Borel theorems are essentially the same. There are several important equilibrium concepts in economics, the proofs of the existence of which often require variations of the Bolzano–Weierstrass theorem. One example is the existence of a Pareto-efficient allocation. An allocation is a matrix of consumption bundles for agents in an economy, and an allocation is Pareto efficient if no change can be made to it which makes no agent worse off and at least one agent better off (here rows of the allocation matrix must be rankable by a preference relation). The Bolzano–Weierstrass theorem allows one to prove that if the set of allocations is compact and non-empty, then the system has a Pareto-efficient allocation. The Bolzano–Weierstrass Rap, written by Steve Sawin (a.k.a. Slim Dorky) of Fairfield University, is well known in academic circles, and considered by critics to be "the greatest Bolzano-Weierstrass theorem song ever made".
https://en.wikipedia.org/wiki?curid=31211
Tabula rasa Tabula rasa ('blank slate') is the theory that individuals are born without built-in mental content, and that therefore all knowledge comes from experience or perception. Epistemological proponents of "tabula rasa" disagree with the doctrine of innatism, which holds that the mind is born already in possession of certain knowledge. Generally, proponents of the "tabula rasa" theory also favour the "nurture" side of the nature versus nurture debate when it comes to aspects of one's personality; social and emotional behaviour; knowledge; and sapience. "Tabula rasa" is a Latin phrase often translated as "clean slate" in English and originates from the Roman "tabula" used for notes, which was blanked by heating the wax and then smoothing it. This roughly equates to the English term "blank slate" (or, more literally, "erased slate"), which refers to the emptiness of a slate prior to it being written on with chalk. Both may be renewed repeatedly, by melting the wax of the tablet or by erasing the chalk on the slate. In Western philosophy, the concept of "tabula rasa" can be traced back to the writings of Aristotle, who writes in his treatise "De Anima" of the "unscribed tablet." In one of the more well-known passages of this treatise, he writes: Haven't we already disposed of the difficulty about interaction involving a common element, when we said that mind is in a sense potentially whatever is thinkable, though actually it is nothing until it has thought? What it thinks must be in it just as characters may be said to be on a writing-tablet on which as yet nothing stands written: this is exactly what happens with mind. This idea was further developed in Ancient Greek philosophy by the Stoic school. Stoic epistemology emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it.
The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon." Diogenes Laërtius attributes a similar belief to the Stoic Zeno of Citium when he writes in "Lives and Opinions of Eminent Philosophers" that: Perception, again, is an impression produced on the mind, its name being appropriately borrowed from impressions on wax made by a seal; and perception they divide into, comprehensible and incomprehensible: Comprehensible, which they call the criterion of facts, and which is produced by a real object, and is, therefore, at the same time conformable to that object; Incomprehensible, which has no relation to any real object, or else, if it has any such relation, does not correspond to it, being but a vague and indistinct representation. In the 11th century, the theory of "tabula rasa" was developed more clearly by the Persian philosopher Avicenna (Arabic: Ibn Sina). He argued that the "human intellect at birth resembled a "tabula rasa", a pure potentiality that is actualized through education and comes to know." Thus, according to Avicenna, knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts," which develops through a "syllogistic method of reasoning; observations lead to propositional statements, which when compounded lead to further abstract concepts." He further argued that the intellect itself "possesses levels of development from the static/material intellect ("al-‘aql al-hayulani"), that potentiality can acquire knowledge to the active intellect ("al-‘aql al-fa‘il"), the state of the human intellect at conjunction with the perfect source of knowledge." 
In the 12th century, the Andalusian-Islamic philosopher and novelist Ibn Tufail (known as Abubacer or Ebn Tophail in the West) demonstrated the theory of "tabula rasa" as a thought experiment through his Arabic philosophical novel, "Hayy ibn Yaqdhan", in which he depicts the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled "Philosophus Autodidactus", published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of "tabula rasa" in "An Essay Concerning Human Understanding". In the 13th century, St. Thomas Aquinas brought the Aristotelian and Avicennian notions to the forefront of Christian thought. These notions sharply contrasted with the previously held Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens before being sent down to join a body here on Earth (cf. Plato's "Phaedo" and "Apology", as well as others). St. Bonaventure (also 13th century) was one of the fiercest intellectual opponents of Aquinas, offering some of the strongest arguments for the Platonic idea of the mind. The writings of Avicenna, Ibn Tufail, and Aquinas on the "tabula rasa" theory then stood largely undeveloped and untested for several centuries. For example, the late-medieval English jurist Sir John Fortescue, in his work "In Praise of the Laws of England" (Chapter VI), takes for granted the notion of "tabula rasa", stressing it as the basis of the need for the education of the young in general, and of young princes specifically. The modern idea of the theory is attributed mostly to John Locke's expression of the idea in "An Essay Concerning Human Understanding", particularly his use of the term "white paper" in Book II, Chap. I, 2.
In Locke's philosophy, "tabula rasa" was the theory that at birth the (human) mind is a "blank slate" without rules for processing data, and that data is added and rules for processing are formed solely by one's sensory experiences. The notion is central to Lockean empiricism; it serves as the starting point for Locke's subsequent explication (in Book II) of simple ideas and complex ideas. As understood by Locke, "tabula rasa" meant that the mind of the individual was born blank, and it also emphasized the freedom of individuals to author their own soul. Individuals are free to define the content of their character—but basic identity as a member of the human species cannot be altered. This presumption of a free, self-authored mind combined with an immutable human nature leads to the Lockean doctrine of "natural" rights. Locke's idea of "tabula rasa" is frequently compared with Thomas Hobbes's viewpoint of human nature, in which humans are endowed with inherent mental content—particularly with selfishness. "Tabula rasa" also features in Sigmund Freud's psychoanalysis. Freud depicted personality traits as being formed by family dynamics (see Oedipus complex). Freud's theories imply that humans lack free will, but also that genetic influences on human personality are minimal. In Freudian psychoanalysis, one is largely determined by one's upbringing. The "tabula rasa" concept became popular in social sciences during the 20th century. Early ideas of eugenics posited that human intelligence correlated strongly with social class, but these ideas were rejected, and the idea that genes (or simply "blood") determined a person's character became regarded as racist. By the 1970s, scientists such as John Money had come to see gender identity as socially constructed, rather than rooted in genetics. 
Psychologists and neurobiologists have shown evidence that initially, the entire cerebral cortex is programmed and organized to process sensory input, control motor actions, regulate emotion, and respond reflexively (under predetermined conditions). These programmed mechanisms in the brain subsequently act to learn and refine the abilities of the organism. For example, psychologist Steven Pinker showed that, in contrast to written language, the brain is "programmed" to pick up spoken language spontaneously. There have been claims by a minority in psychology and neurobiology, however, that the brain is "tabula rasa" only for certain behaviours. For instance, with respect to one's ability to acquire both general and special types of knowledge or skills, Michael Howe argued against the existence of innate talent. There also have been neurological investigations into specific learning and memory functions, such as Karl Lashley's study on mass action and serial interaction mechanisms. Important evidence against the "tabula rasa" model of the mind comes from behavioural genetics, especially twin and adoption studies (see below). These indicate strong genetic influences on personal characteristics such as IQ, alcoholism, gender identity, and other traits. Critically, multivariate studies show that the distinct faculties of the mind, such as memory and reason, fractionate along genetic boundaries. Cultural universals such as emotion, and the relative resilience of psychological adaptation to accidental biological changes (for instance the David Reimer case of gender reassignment following an accident), also support basic biological mechanisms in the mind. Twin studies have resulted in important evidence against the "tabula rasa" model of the mind, specifically of social behaviour. The "social pre-wiring" hypothesis (also informally known as "wired to be social") refers to the ontogeny of social interaction.
The theory questions whether there is a propensity to socially oriented action already present "before" birth. Research on the theory concludes that newborns are born into the world with a unique genetic wiring to be social. Circumstantial evidence supporting the social pre-wiring hypothesis can be revealed when examining newborns' behaviour. Newborns, not even hours after birth, have been found to display a preparedness for social interaction. This preparedness is expressed in ways such as their imitation of facial gestures. This observed behaviour cannot be attributed to any current form of socialization or social construction. Rather, newborns most likely inherit to some extent social behaviour and identity through genetics. Principal evidence of this theory is uncovered by examining twin pregnancies. The main argument is that, if there are social behaviours that are inherited and developed before birth, then one should expect twin fetuses to engage in some form of social interaction before they are born. Thus, ten fetuses were analyzed over a period of time using ultrasound techniques. Using kinematic analysis, the results of the experiment were that the twin fetuses would interact with each other for longer periods and more often as the pregnancies went on. Researchers were able to conclude that the performance of movements between the co-twins was not accidental but specifically aimed. The social pre-wiring hypothesis was proven correct: "The central advance of this study is the demonstration that 'social actions' are already performed in the second trimester of gestation. Starting from the 14th week of gestation twin fetuses plan and execute movements specifically aimed at the co-twin. These findings force us to predate the emergence of social behaviour: when the context enables it, as in the case of twin fetuses, other-directed actions are not only possible but predominant over self-directed actions."
In artificial intelligence, "tabula rasa" refers to the development of autonomous agents with a mechanism to reason and plan toward their goal, but no "built-in" knowledge base of their environment. Thus, they truly are a blank slate. In reality, autonomous agents possess an initial data set or knowledge base, but this cannot be immutable or it would hamper autonomy and heuristic ability. Even if the data set is empty, it usually may be argued that there is a built-in bias in the reasoning and planning mechanisms; placed there either intentionally or unintentionally by the human designer, it thus negates the true spirit of "tabula rasa". A synthetic (programming) language parser (LR(1), LALR(1) or SLR(1), for example) could be considered a special case of a "tabula rasa": it is designed to accept "any" of a possibly infinite set of source-language programs, within a "single" programming language, and to output either a good parse or a good machine-language translation of the program (either of which represents a "success"), or, alternately, a "failure", and nothing else. The "initial data set" is a set of tables, generally produced mechanically by a parser-table generator, usually from a BNF representation of the source language, and represents a "table representation" of that "single" programming language. AlphaZero achieved superhuman performance in various board games using self-play and "tabula rasa" reinforcement learning, meaning it had no access to human games or hard-coded human knowledge about the games, being given only their rules.
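At toy scale, the flavour of tabula rasa self-play can be sketched with tabular learning on a trivial game. This is a hypothetical miniature invented for illustration: AlphaZero itself uses deep neural networks and Monte Carlo tree search, not a lookup table. The agent below is given only the rules of a one-pile Nim game (take 1 or 2 stones; whoever takes the last stone wins) and an initially empty value table, and learns entirely from games against itself:

```python
import random

N, ALPHA, EPS = 7, 0.5, 0.2
Q = {}  # (stones left, action) -> value; starts empty: the blank slate

def moves(n):
    """Legal moves given only the rules of the game."""
    return [m for m in (1, 2) if m <= n]

def choose(n, rng):
    """Epsilon-greedy choice over the learned values."""
    if rng.random() < EPS:
        return rng.choice(moves(n))
    return max(moves(n), key=lambda m: Q.get((n, m), 0.0))

def selfplay(rng):
    """Play one game against itself and update values from the outcome."""
    history, n = [], N          # (state, action) per ply, players alternate
    while n > 0:
        m = choose(n, rng)
        history.append((n, m))
        n -= m
    reward = 1.0                # the player who made the last move wins
    for (s, a) in reversed(history):
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + ALPHA * (reward - old)
        reward = -reward        # alternate sign for the other player

rng = random.Random(1)
for _ in range(5000):
    selfplay(rng)
```

After training, simple positions are valued correctly, for example that taking the last stone wins, with no seeded human knowledge beyond the move rules.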
https://en.wikipedia.org/wiki?curid=31212
Typography Typography is the art and technique of arranging type to make written language legible, readable, and appealing when displayed. The arrangement of type involves selecting typefaces, point sizes, line lengths, line-spacing (leading), and letter-spacing (tracking), and adjusting the space between pairs of letters (kerning). The term "typography" is also applied to the style, arrangement, and appearance of the letters, numbers, and symbols created by the process. Type design is a closely related craft, sometimes considered part of typography; most typographers do not design typefaces, and some type designers do not consider themselves typographers. Typography also may be used as a decorative device, unrelated to communication of information. Typography is the work of typesetters (also known as compositors), typographers, graphic designers, art directors, manga artists, comic book artists, graffiti artists, and, now, anyone who arranges words, letters, numbers, and symbols for publication, display, or distribution, from clerical workers and newsletter writers to anyone self-publishing materials. Until the Digital Age, typography was a specialized occupation. Digitization opened up typography to new generations of previously unrelated designers and lay users. As the capability to create typography has become ubiquitous, the application of principles and best practices developed over generations of skilled workers and professionals has diminished. So at a time when scientific techniques can support the proven traditions (e.g., greater legibility with the use of serifs, upper and lower case, contrast, etc.) through understanding the limitations of human vision, typography as often encountered may fail to achieve its principal objective: effective communication. The word "typography" in English comes from the Greek roots "typos" = "impression" and "-graphia" = "writing". 
Although typically applied to printed, published, broadcast, and reproduced materials in contemporary times, all words, letters, symbols, and numbers written alongside the earliest naturalistic drawings by humans may be called typography. The word "typography" is derived from the Greek words τύπος "typos" ("form" or "impression") and γράφειν "graphein" ("to write"). The concept traces its origins to the first punches and dies used to make seals and currency in ancient times, which ties it to printing. The uneven spacing of the impressions on brick stamps found in the Mesopotamian cities of Uruk and Larsa, dating from the second millennium B.C., may be evidence of type, wherein the reuse of identical characters was applied to create cuneiform text. Babylonian cylinder seals were used to create an impression on a surface by rolling the seal on wet clay. Typography also was implemented in the Phaistos Disc, an enigmatic Minoan printed item from Crete, which dates to between 1850 and 1600 B.C. It has been proposed that Roman lead pipe inscriptions were created with movable type printing, but the German typographer Herbert Brekle recently dismissed this view. The essential criterion of type identity was met by medieval print artifacts such as the Latin Pruefening Abbey inscription of 1119, which was created by the same technique as the Phaistos Disc. The silver altarpiece of patriarch Pellegrinus II (1195–1204) in the cathedral of Cividale was printed with individual letter punches. Apparently, the same printing technique may be found in tenth to twelfth century Byzantine reliquaries. Other early examples include individual letter tiles, where words are formed by assembling single letter tiles in the desired order, which were reasonably widespread in medieval Northern Europe. Typography with movable type was invented during the eleventh-century Song dynasty in China by Bi Sheng (990–1051).
His movable type system was manufactured from ceramic materials, and clay type printing continued to be practiced in China until the Qing Dynasty. Wang Zhen was one of the pioneers of wooden movable type. Although the wooden type was more durable under the mechanical rigors of handling, repeated printing wore the character faces down and the types could be replaced only by carving new pieces. Metal movable type was first invented in Korea during the Goryeo Dynasty, approximately 1230. Hua Sui introduced bronze type printing to China in 1490 AD. The diffusion of both movable-type systems was limited and the technology did not spread beyond East and Central Asia, however. Modern lead-based movable type, along with the mechanical printing press, is most often attributed to the goldsmith Johannes Gutenberg in 1439. His type pieces, made from a lead-based alloy, suited printing purposes so well that the alloy is still used today. Gutenberg developed specialized techniques for casting and combining cheap copies of letter punches in the vast quantities required to print multiple copies of texts. This technical breakthrough was instrumental in starting the Printing Revolution and the first book printed with lead-based movable type was the Gutenberg Bible. Rapidly advancing technology revolutionized typography in the latter twentieth century. During the 1960s some camera-ready typesetting could be produced in any office or workshop with stand-alone machines such as those introduced by IBM (see: IBM Selectric typewriter). During the same period Letraset introduced Dry transfer technology that allowed designers to transfer types instantly. The famous Lorem Ipsum gained popularity due to its usage in Letraset. During the mid-1980s personal computers such as the Macintosh allowed type designers to create typefaces digitally using commercial graphic design software. 
Digital technology also enabled designers to create more experimental typefaces as well as the practical typefaces of traditional typography. Designs for typefaces could be created faster with the new technology, and for more specific functions. The cost for developing typefaces was drastically lowered, becoming widely available to the masses. The change has been called the "democratization of type" and has given new designers more opportunities to enter the field. The design of typefaces has developed alongside the development of typesetting systems. Although typography has evolved significantly from its origins, it is a largely conservative art that tends to cleave closely to tradition. This is because legibility is paramount, and so the typefaces that are the most readable usually are retained. In addition, the evolution of typography is inextricably intertwined with lettering by hand and related art forms, especially formal styles, which thrived for centuries preceding typography, and so the evolution of typography must be discussed with reference to this relationship. In the nascent stages of European printing, the typeface (blackletter, or Gothic) was designed in imitation of the popular hand-lettering styles of scribes. Initially, this typeface was difficult to read, because each letter was set in place individually and made to fit tightly into the allocated space. The art of manuscript writing, whose origin was during Hellenistic and Roman bookmaking, reached its zenith in the illuminated manuscripts of the Middle Ages. Metal typefaces notably altered the style, making it "crisp and uncompromising", and also brought about "new standards of composition". During the Renaissance period in France, Claude Garamond was partially responsible for the adoption of Roman typeface that eventually supplanted the more commonly used Gothic (blackletter). Roman typeface also was based on hand-lettering styles. 
The development of Roman typeface can be traced back to Greek lapidary letters. Greek lapidary letters were carved into stone and were "one of the first formal uses of Western letterforms"; after that, Roman lapidary letterforms evolved into the monumental capitals, which laid the foundation for Western typographical design, especially serif typefaces. There are two styles of Roman typefaces: the old style, and the modern. The former is characterized by its similarly weighted lines, while the latter is distinguished by its contrast of light and heavy lines. Often, these styles are combined. By the twentieth century, computers turned typeface design into a rather simplified process. This has allowed the number of typefaces and styles to proliferate exponentially, as there now are thousands available. Confusion between typeface and font (the various styles of a single typeface) arose in 1984, when Steve Jobs mislabeled typefaces as fonts for Apple computers. His error has been perpetuated throughout the computer industry, leading the public to commonly use the term "font" where "typeface" is the proper term. "Experimental typography" is defined as the unconventional and more artistic approach to typeface selection. Francis Picabia was a Dada pioneer of this practice in the early twentieth century. David Carson is often associated with this movement, particularly for his work in "Ray Gun" magazine in the 1990s. His work caused an uproar in the design community due to his abandonment of standard practices in typeface selection, layout, and design. Experimental typography is said to place emphasis on expressing emotion rather than on legibility in communicating ideas, and hence is considered to border on art. There are many facets to the expressive use of typography, and with them come many different techniques to support visual clarity and the overall graphic design.
Spacing and kerning, size-specific spacing, x-height and vertical proportions, character variation, width, weight, and contrast are several techniques that must be taken into consideration when judging the appropriateness of specific typefaces or creating them. When placing two or more differing and/or contrasting fonts together, these techniques come into play for organizing the design and achieving attractive qualities. For example, if the bulk of a title has a more unfamiliar or unusual font, simpler sans-serif fonts will help complement the title while attracting more attention to the piece as a whole. In contemporary use, the practice and study of typography include a broad range, covering all aspects of letter design and application, both mechanical (typesetting, type design, and typefaces) and manual (handwriting and calligraphy). Typographical elements may appear in a wide variety of situations, including: Since digitization, typographical uses have spread to a wider range of applications, appearing on web pages, LCD mobile phone screens, and hand-held video games. Recent research in psychology has studied the effects of typography on human cognition. The research points toward multiple applications, such as helping readers remember content better and strategically using fonts to help dyslexic readers. Traditionally, text is "composed" to create a readable, coherent, and visually satisfying typeface that works invisibly, without the awareness of the reader. Even distribution of typeset material, with a minimum of distractions and anomalies, is aimed at producing clarity and transparency. Choice of typeface(s) is the primary aspect of text typography—prose fiction, non-fiction, editorial, educational, religious, scientific, spiritual, and commercial writing all have differing characteristics and requirements of appropriate typefaces (and their fonts or styles).
For historic material, established text typefaces frequently are chosen according to a scheme of historical "genre" acquired by a long process of accretion, with considerable overlap among historical periods. Contemporary books are more likely to be set with state-of-the-art "text romans" or "book romans" typefaces with serifs and design values echoing present-day design arts, which are closely based on traditional models such as those of Nicolas Jenson, Francesco Griffo (a punchcutter who created the model for Aldine typefaces), and Claude Garamond. With their more specialized requirements, newspapers and magazines rely on compact, tightly fitted styles of text typefaces with serifs specially designed for the task, which offer maximum flexibility, readability, legibility, and efficient use of page space. Sans serif text typefaces (without serifs) often are used for introductory paragraphs, incidental text, and whole short articles. A current fashion is to pair a sans-serif typeface for headings with a high-performance serif typeface of matching style for the text of an article. Typesetting conventions are modulated by orthography and linguistics, word structures, word frequencies, morphology, phonetic constructs and linguistic syntax. Typesetting conventions also are subject to specific cultural conventions. For example, in French it is customary to insert a non-breaking space before a colon (:) or semicolon (;) in a sentence, while in English it is not. In typesetting, "color" is the overall density of the ink on the page, determined mainly by the typeface, but also by the word spacing, leading, and depth of the margins. Text layout, tone, or color of the set text, and the interplay of text with the white space of the page in combination with other graphic elements impart a "feel" or "resonance" to the subject matter. 
With printed media, typographers also are concerned with binding margins, paper selection, and printing methods when determining the correct color of the page. Three fundamental aspects of typography are "legibility", "readability", and "aesthetics". Although in a non-technical sense "legible" and "readable" are often used synonymously, typographically they are separate but related concepts. Legibility and readability tend to support aesthetic aspects of a product. "Legibility" describes how easily individual characters can be distinguished from one another. It is described by Walter Tracy as "the quality of being decipherable and recognisable". For instance if a "b" and an "h", or a "3" and an "8", are difficult to distinguish at small sizes, this is a problem of legibility. Typographers are concerned with legibility insofar as it is their job to select the correct font to use. Brush Script is an example of a font containing many characters which might be difficult to distinguish. Selection of case influences the legibility of typography because using only upper-case letters ("all-caps") reduces legibility. Readability refers to how easy it is to read the text as a whole, as opposed to the individual character recognition described by legibility. Use of margins, word- and line-spacing, and clear document structure all impact on readability. Some fonts or font styles, for instance sans-serif fonts, are considered to have low readability, and so be unsuited for large quantities of prose. Legibility "refers to perception" (being able to see as determined by physical limitations of the eye) and readability "refers to comprehension" (understanding the meaning). Good typographers and graphic designers aim to achieve excellence in both. "The typeface chosen should be legible. That is, it should be read without effort. Sometimes legibility is simply a matter of type size; more often, however, it is a matter of typeface design. Case selection always influences legibility. 
In general, typefaces that are true to the basic letterforms are more legible than typefaces that have been condensed, expanded, embellished, or abstracted." Studies of both legibility and readability have examined a wide range of factors, including type size and type design: for example, serif vs. sans-serif type; roman vs. oblique and italic type; line length; line spacing; color contrast; a justified (straight) right-hand edge vs. ragged right; and whether text is hyphenated. Justified copy must be adjusted tightly during typesetting to prevent loss of readability, something beyond the capabilities of typical personal computers. Legibility research has been published since the late nineteenth century. Although there is agreement on many topics, others remain pointed areas of conflict and variation of opinion. For example, Alex Poole asserts that no one has provided a conclusive answer as to which typeface style, serif or sans serif, provides the most legibility. Other topics, such as justified "vs" unjustified type, use of hyphens, and proper typefaces for people with reading difficulties such as dyslexia, have continued to be subjects of debate. Legibility is usually measured through speed of reading, with comprehension scores used to check for effectiveness (that is, not a rushed or careless read). For example, Miles Tinker, who published numerous studies from the 1930s to the 1960s, used a speed of reading test that required participants to spot incongruous words as an effectiveness filter.
The "Readability of Print Unit" at the Royal College of Art under Professor Herbert Spencer, with Brian Coe and Linda Reynolds, did important work in this area and was one of the centres that revealed the importance of the saccadic rhythm of eye movement for readability—in particular, the ability to take in (i.e., recognise the meaning of groups of) about three words at once, and the physiology of the eye, which means the eye tires if the line requires more than 3 or 4 of these saccadic jumps. More than this is found to introduce strain and errors in reading (e.g., doubling). The use of all-caps renders words indistinguishable as groups, all letters presenting a uniform line to the eye, requiring special effort for separation and understanding. These days, legibility research tends to be limited to critical issues, or the testing of specific design solutions (for example, when new typefaces are developed). Examples of critical issues include typefaces for people with visual impairment, typefaces and case selection for highway and street signs, or for other conditions where legibility may make a key difference. Much of the legibility research literature is somewhat atheoretical—various factors were tested individually or in combination (inevitably so, as the different factors are interdependent), but many tests were carried out in the absence of a model of reading or visual perception. Some typographers believe that the overall word shape (Bouma) is very important in readability, and that the theory of parallel letter recognition is either wrong, less important, or not the entire picture. Word shape differs by outline, influenced by the ascending and descending elements of lower-case letters, and enables reading the entire word without having to parse out each letter (for example, "dog" is easily distinguished from "cat"); this becomes more influential in being able to read groups of words at a time.
Studies distinguishing between Bouma recognition and parallel letter recognition with regard to how people recognize words when they read have favored parallel letter recognition, which is widely accepted by cognitive psychologists. Some commonly agreed findings of legibility research include: The aesthetic concerns of typography deal not only with the careful selection of one or two harmonizing typefaces and relative type sizes, but also with laying out the elements to be printed on a flat surface tastefully and appealingly. For this reason, typographers attempt to observe "typographical principles", the most common of which are listed below: Readability also may be compromised by letter-spacing, word spacing, or leading that is too tight or too loose. It may be improved when generous vertical space separates lines of text, making it easier for the eye to distinguish one line from the next or previous line. Poorly designed typefaces, and those that are too tightly or loosely fitted, also may result in poor legibility. Underlining also may reduce readability by eliminating the recognition effect contributed by the descending elements of letters. Periodical publications, especially newspapers and magazines, use typographical elements to achieve an attractive, distinctive appearance, to aid readers in navigating the publication, and in some cases for dramatic effect. By formulating a style guide, a publication or periodical standardizes on a relatively small collection of typefaces, each used for specific elements within the publication, and makes consistent use of typefaces, case, type sizes, italic, boldface, colors, and other typographic features such as combining large and small capital letters together. Some publications, such as "The Guardian" and "The Economist", go so far as to commission a type designer to create customized typefaces for their exclusive use.
Different periodical publications design their publications, including their typography, to achieve a particular tone or style. For example, "USA Today" uses a bold, colorful, and comparatively modern style through their use of a variety of typefaces and colors; type sizes vary widely, and the newspaper's name is placed on a colored background. In contrast, "The New York Times" uses a more traditional approach, with fewer colors, less typeface variation, and more columns. Especially on the front page of newspapers and on magazine covers, headlines often are set in larger display typefaces to attract attention, and are placed near the masthead. "Typography utilized to characterize text:" Typography is intended to reveal the character of the text. Through the use of typography, a body of text can instantaneously reveal the mood the author intends to convey to its readers. The message that a body of text conveys has a direct relationship with the typeface that is chosen. Therefore, when setting type, a person must pay very close attention to the typeface they choose. Choosing the correct typeface for a body of text can only be done after thoroughly reading the text, understanding its context, and understanding what the text intends to convey. Once the typographer has an understanding of the text, they have the responsibility of using the appropriate typeface to honor the writing done by the author of the text. Knowledge of choosing the correct typeface comes along with understanding the historical background of typefaces and the reasons why those typefaces were created. For example, if the body of text is titled "Commercial Real Estate Transactions" and further elaborates on the real estate market throughout the body, then the appropriate typeface to use in this instance is a serif typeface.
This typeface would be appropriate because the author intends to inform the audience on a serious topic, not to entertain them with an anecdote; therefore, a serif typeface would effectively convey a sense of seriousness to the audience instantaneously. The typographer would also employ a larger-sized font for the title of the text to convey a sense of importance, which directly informs the reader of the structure in which the text is intended to be read, as well as increasing readability from varying viewing distances. "Typography utilized to make reading practical:" Typography not only honors the tone of the text but also shares responsibility for drawing the audience into the reading process and sustaining their attention throughout the body of text. Although typography can potentially be utilized to attract the reader's attention to commence the reading process, and to create a beautiful, attractive piece of text, the craft of typography is not limited to aesthetics. Typography is not a craft concerned solely with the aesthetic appeal of the text; rather, its object is to make the reading experience practical and useful. The use of bold colors, multiple typefaces, and colorful backgrounds in a typographic design may be eye-catching; however, it may not be appropriate for all bodies of text and could potentially make text illegible. Overuse of design elements such as colors and typefaces can create an unsettling reading experience, preventing the author of the text from conveying their message to readers. Type may be combined with negative space and images, forming relationships and dialog between the words and images for special effects. Display designs are a potent element in graphic design. Some sign designers exhibit less concern for readability, sacrificing it for an artistic manner.
Color and size of type elements may be much more prevalent than in solely text designs. Most display items exploit type at larger sizes, where the details of letter design are magnified. Color is used for its emotional effect in conveying the tone and nature of subject matter. Display typography encompasses: Typography has long been a vital part of promotional material and advertising. Designers often use typefaces to set a theme and mood in an advertisement (for example, using bold, large text to convey a particular message to the reader). Choice of typeface is often used to draw attention to a particular advertisement, combined with efficient use of color, shapes, and images. Today, typography in advertising often reflects a company's brand. Typefaces used in advertisements convey different messages to the reader: classical ones are for a strong personality, while more modern ones may convey a clean, neutral look. Bold typefaces are used for making statements and attracting attention. In any design, a balance has to be achieved between the visual impact and communication aspects. Digital technology in the twentieth and twenty-first centuries has enabled the creation of typefaces for advertising that are more experimental than traditional typefaces. The history of inscriptional lettering is intimately tied to the history of writing, the evolution of letterforms, and the craft of the hand. The widespread use of the computer and various etching and sandblasting techniques today has made the hand-carved monument a rarity, and the number of letter-carvers left in the US continues to dwindle. For monumental lettering to be effective, it must be considered carefully in its context. Proportions of letters need to be altered as their size and distance from the viewer increase. An expert monument designer gains understanding of these nuances through much practice and observation of the craft.
Letters drawn by hand and for a specific project have the possibility of being richly specific and profoundly beautiful in the hand of a master. Each also may take up to an hour to carve, so it is no wonder that the automated sandblasting process has become the industry standard. To create a sandblasted letter, a rubber mat is laser-cut from a computer file and glued to the stone. The blasted sand then bites a coarse groove or channel into the exposed surface. Unfortunately, many of the computer applications that create these files and interface with the laser cutter offer only a limited selection of typefaces, and often provide inferior versions of those typefaces that are available. What now can be done in minutes, however, lacks the striking architecture and geometry of the chisel-cut letter that allows light to play across its distinct interior planes.
https://en.wikipedia.org/wiki?curid=31217
Template (C++) Templates are a feature of the C++ programming language that allows functions and classes to operate with generic types. This allows a function or class to work on many different data types without being rewritten for each one. Templates are of great utility to programmers in C++, especially when combined with multiple inheritance and operator overloading. The C++ Standard Library provides many useful functions within a framework of connected templates. Major inspirations for C++ templates were the parameterized modules provided by CLU and the generics provided by Ada. There are three kinds of templates: "function templates", "class templates" and, since C++14, "variable templates". Since C++11, templates may be either variadic or non-variadic; in earlier versions of C++ they are always non-variadic. A "function template" behaves like a function except that the template can have arguments of many different types (see example). In other words, a function template represents a family of functions. The format for declaring function templates with type parameters is: template <class identifier> function_declaration; or template <typename identifier> function_declaration; Both expressions have the same meaning and behave in exactly the same way. The latter form was introduced to avoid confusion, since a type parameter need not be a class. (It can also be a basic type such as int or double.) For example, the C++ Standard Library contains the function template max, which returns the larger of its two arguments a and b. That function template could be defined like this: template <class T> inline T max(T a, T b) { return a > b ? a : b; } This single function definition works with many data types. Specifically, it works with all data types for which > (the greater-than operator) is defined. The usage of a function template saves space in the source code file in addition to limiting changes to one function description and making the code easier to read.
A template does not produce smaller object code, though, compared to writing separate functions for all the different data types used in a specific program. For example, if a program uses both an int and a double version of the max function template shown above, the compiler will create an object code version of max that operates on int arguments and another object code version that operates on double arguments. The compiler output will be identical to what would have been produced if the source code had contained two separate non-templated versions of max, one written to handle int and one written to handle double. Here is how the function template could be used: int main() { max(3, 7); max(3.0, 7.0); max<double>(3, 7.0); } In the first two cases, the template argument T is automatically deduced by the compiler to be int and double, respectively. In the third case, automatic deduction of max(3, 7.0) would fail because the type of the parameters must in general match the template arguments exactly; therefore, we explicitly instantiate the double version with max<double>(3, 7.0). This function template can be instantiated with any copy-constructible type for which the expression a > b is valid. For user-defined types, this implies that the greater-than operator (operator>) must be overloaded in the type. A class template provides a specification for generating classes based on parameters. Class templates are generally used to implement containers. A class template is instantiated by passing a given set of types to it as template arguments. The C++ Standard Library contains many class templates, in particular the containers adapted from the Standard Template Library, such as std::vector.
In C++14, templates can be also used for variables, as in the following example: template <typename T> constexpr T pi = T(3.141592653589793238462643383L); When a function or class is instantiated from a template, a specialization of that template is created by the compiler for the set of arguments used, and the specialization is referred to as being a generated specialization. Sometimes, the programmer may decide to implement a special version of a function (or class) for a given set of template type arguments, which is called an explicit specialization. In this way certain template types can have a specialized implementation that is optimized for the type, or a more meaningful implementation than the generic implementation. Explicit specialization is used when the behavior of a function or class for particular choices of the template parameters must deviate from the generic behavior: that is, from the code generated by the main template, or templates. For example, the template definition below defines a specific implementation of max for arguments of type bool: template <> bool max(bool a, bool b) { return int(a) > int(b); } C++11 introduced variadic templates, which can take a variable number of arguments in a manner somewhat similar to variadic functions such as printf. Function templates, class templates and (in C++14) variable templates can all be variadic. C++11 introduced template aliases, which act like parameterized typedefs. The following code shows the definition of a template alias StrMap. This allows, for example, StrMap<int> to be used as shorthand for std::unordered_map<std::string, int>. template <typename T> using StrMap = std::unordered_map<std::string, T>; Initially, the concept of templates was not included in some languages, such as Java and C# 1.0. Java's adoption of generics mimics the behavior of templates, but is technically different. C# added generics (parameterized types) in .NET 2.0. The generics in Ada predate C++ templates.
Although C++ templates, Java generics, and .NET generics are often considered similar, generics only mimic the basic behavior of C++ templates. Some of the advanced template features utilized by libraries such as Boost and STLSoft, and implementations of the STL itself, for template metaprogramming (explicit or partial specialization, default template arguments, template non-type arguments, template template arguments, ...) are not available with generics. In C++ templates, compile-time cases were historically performed by pattern matching over the template arguments. For example, the template base class in the Factorial example below is implemented by matching 0 rather than with an inequality test, which was previously unavailable. However, the arrival in C++11 of standard library features such as std::conditional has provided another, more flexible way to handle conditional template instantiation. // Induction: template <int N> struct Factorial { static const int value = N * Factorial<N - 1>::value; }; // Base case via template specialization: template <> struct Factorial<0> { static const int value = 1; }; With these definitions, one can compute, say, 6! at compile time using the expression Factorial<6>::value. Alternatively, constexpr in C++11 can be used to calculate such values directly using a function at compile-time.
https://en.wikipedia.org/wiki?curid=31218
Theodoric the Great Theodoric the Great (454 – 30 August 526), also spelled Theoderic or called Theodoric the Amal, was king of the Ostrogoths (471–526), ruler of the independent Ostrogothic Kingdom of Italy from 493 to 526, regent of the Visigoths (511–526), and a patrician of the East Roman Empire. As ruler of the combined Gothic realms, Theodoric controlled an empire stretching from the Atlantic Ocean to the Adriatic Sea. As a young child of an Ostrogothic nobleman, Theodoric was taken as a hostage in Constantinople, where he spent his formative years and received an East Roman education. Theodoric returned to Pannonia around 470, and throughout the 470s he campaigned against the Sarmatians and competed for influence among the Goths of the Roman Balkans. The emperor Zeno made him a commander of the Eastern Roman forces in 483, and in 484 he was named consul. Nevertheless, Theodoric remained in constant hostilities with the emperor and frequently raided East Roman lands. At the behest of Zeno, Theodoric attacked Odoacer in 489, emerging victorious in 493. As the new ruler of Italy, he upheld a Roman legal administration and scholarly culture and promoted a major building program across Italy. In 505 he expanded into the Balkans, and by 511 he had brought the Visigothic Kingdom under his direct control and established hegemony over the Burgundian and Vandal kingdoms. Theodoric died in 526 and was buried in a grand mausoleum in Ravenna. Theodoric was born in AD 454 in Pannonia on the banks of the Neusiedler See near Carnuntum, the son of king Theodemir, a Germanic Amali nobleman, and his concubine Ereleuva. This was just a year after the Ostrogoths had thrown off nearly a century of domination by the Huns. His Gothic name, which is reconstructed by linguists as "*Þiudareiks", translates into "people-king" or "ruler of the people".
In 461, when Theodoric was but seven or eight years of age, he was taken as a hostage in Constantinople to secure the Ostrogoths' compliance with a treaty Theodemir had concluded with the "augustus" Leo I (ruled 457–474). The treaty secured a payment to Constantinople of some 300 pounds' worth of gold each year. Theodoric was well educated by Constantinople's best teachers. His status made him valuable, since the Amal family from which he came (as told by Theodoric), allegedly ruled half of all Goths since the third-century AD. Historian Peter Heather argues that Theodoric's claims were likely self-aggrandizing propaganda and that the Amal dynasty was more limited than modern commentators presume. Until 469, Theodoric remained in Constantinople where he spent formative years "catching up on all the "Romanitas"" it had taken generations of Visigothic Balthi to acquire. Theodoric was treated with favor by the emperor Leo I. He learned to read, write, and perform arithmetic while in captivity in the Eastern Empire. When Leo heard that his imperial army was returning from having been turned back by the Goths near Pannonia, he sent Theodoric home with gifts and no promises of any commitments. On his return in 469/470, Theodoric assumed leadership over the Gothic regions previously ruled by his uncle, Valamir, while his father became king. Not long afterwards near Singidunum (modern Belgrade) in upper Moesia, the Tisza Sarmatian king Babai had extended his authority at Constantinople's expense. Legitimizing his position as a warrior, Theodoric crossed the Danube with six thousand warriors, defeated the Sarmatians and killed Babai; this moment likely crystallized his position and marked the beginning of his kingship, despite not actually having yet assumed the throne. Perhaps to assert his authority as an Amali prince, Theodoric kept the conquered area of Singidunum for himself. 
Throughout the 470s, sometimes in the name of the empire itself, Theodoric launched campaigns against potential Gothic rivals and other enemies of the Eastern Empire, which made him an important military and political figure. One of his chief rivals was the chieftain of the Thracian Goths, Theodoric Strabo (Strabo means "the Squinter"), who had led a major revolt against the emperor Zeno. Finding common ground with the emperor, Theodoric was rewarded by Zeno and made commander of East Roman forces, while his people became "foederati" or federates of the Roman army. Zeno attempted to play one Germanic chieftain against another and take advantage of an opportunity sometime in 476/477 when—after hearing demands from Theodoric for new lands since his people were facing a famine—he offered Theodoric Strabo the command once belonging to Theodoric. Enraged by this betrayal, Theodoric vented his wrath on the communities in the Rhodope Mountains, where his forces commandeered livestock and slaughtered peasants, sacked and burned Stobi in Macedonia and requisitioned supplies from the archbishop at Heraclea. Gothic plundering finally elicited a settlement from Zeno, but Theodoric initially refused any compromise. Theodoric sent one of his confidants, Sidimund, forward to Epidaurum for negotiations with Zeno. While the Roman envoy and Theodoric were negotiating, Zeno sent troops against some of Theodoric's wagons, which were under the protection of his able general Theodimund. Unaware of this treachery, Theodoric's Goths lost around 2,000 wagons and 5,000 of his people were taken captive. He settled his people in Epirus in 479 with the help of his relative Sidimund. In 482, he raided Greece and sacked Larissa. Bad luck, rebellions, and poor decisions left Zeno in an unfortunate position, which subsequently led him to seek another agreement with Theodoric.
In 483, Zeno made Theodoric "magister militum praesentalis" and consul designate in 484, whereby he commanded the Danubian provinces of Dacia Ripensis and Moesia Inferior as well as the adjacent regions. Seeking further gains, Theodoric frequently ravaged the provinces of the Eastern Roman Empire, eventually threatening Constantinople itself. By 486, there was little disputing the open hostilities between Theodoric and Zeno. The emperor sought the assistance of the Bulgarians, who were likewise defeated by Theodoric. In 487, Theodoric began his aggressive campaign against Constantinople, blockading the city, occupying strategically important suburbs, and cutting off its water supply; although it seems Theodoric never intended to occupy the city but instead, to use the assault as a means of gaining power and prestige from the Eastern Empire. The Ostrogoths needed a place to live, and Zeno was having serious problems with Odoacer, the Germanic "foederatus" and King of Italy, who although ostensibly viceroy for Zeno, was menacing Byzantine territory and not respecting the rights of Roman citizens in Italy. In 488, Zeno ordered Theodoric to overthrow Odoacer. For this task, he received support from Rugian king Frideric, the son of Theodoric's cousin Giso. Theodoric moved with his people towards Italy in the autumn of 488. On the way he was opposed by the Gepids, whom he defeated at Sirmium in August 489. Arriving in Italy, Theodoric won the battles of Isonzo and Verona in 489. Once again, Theodoric was pressed by Zeno in 490 to attack Odoacer. Theodoric's army was defeated by Odoacer's forces at Faenza in 490, but regained the upper hand after securing victory in the Battle of the Adda River on 11 August 490. For several years, the armies of Odoacer and Theodoric vied for supremacy across the Italian peninsula. In 493, Theodoric took Ravenna. On 2 February 493, Theodoric and Odoacer signed a treaty that assured both parties would rule over Italy. 
Then on 5 March 493, Theodoric entered the city of Ravenna. A banquet was organised on 15 March 493 in order to celebrate this treaty. At this feast, Theodoric, after making a toast, drew his sword and struck Odoacer on the collarbone, killing him. Along with Odoacer, Theodoric had the betrayed king's most loyal followers slaughtered as well, an event which left him as the master of Italy. With Odoacer dead and his forces dispersed, Theodoric now faced the problem of settlement for his people. Concerned about thinning out the Amal line too much, Theodoric believed he could not afford to spread some 40,000 of his tribesmen across the entire Italian peninsula. Such considerations led him to the conclusion that it was best to settle the Ostrogoths in three concentrated areas: around Pavia, Ravenna, and Picenum. Theodoric's kingdom was among the most "Roman" of the barbarian states and he successfully ruled most of Italy for thirty-three years following his treachery against Odoacer. Theodoric extended his hegemony over the Burgundian, Visigothic, and Vandal kingdoms through marriage alliances. He had married the sister of the mighty Frankish king, Clovis—likely in recognition of Frankish power. He sent a substantial dowry accompanied by a guard of 5,000 troops with his sister Amalafrida when she married the king of the Vandals and Alans, Thrasamund. In 504–505, Theodoric extended his realms in the Balkans by defeating the Gepids, acquiring the province of Pannonia. Theodoric became regent for the infant Visigothic king, his grandson Amalaric, following the defeat of Alaric II by the Franks under Clovis in 507. The Franks were able to wrest control of Aquitaine from the Visigoths, but otherwise Theodoric was able to defeat their incursions. In 511, the Visigothic Kingdom was brought under Theodoric's direct control, forming a Gothic superstate that extended from the Atlantic to the Danube.
While territories that were lost to the Franks remained that way, Theodoric concluded a peace arrangement with the heirs of the Frankish Kingdom once Clovis was dead. Additional evidence of the Gothic king's extensive royal reach includes the acts of ecclesiastical councils that were held in Tarragona and Gerona; while both occurred in 516 and 517, they date back to the "regnal years of Theoderic, which seem to commence in the year 511". Like Odoacer, Theodoric was ostensibly only a viceroy for the "augustus" in Constantinople, but he nonetheless adopted the trappings of imperial style, increasingly emphasizing his "neo-imperial status". According to historian Peter Brown, Theodoric was in the habit of commenting that "An able Goth wants to be like a Roman; only a poor Roman would want to be like a Goth." Much like the representatives of the Eastern Empire, Theodoric chose to be clad in robes dyed purple, emulating the imperial colors and perhaps even to reinforce the imperial dispatch of the "augustus" Anastasius I, which outlined Theodoric's position as an imperial colleague. Chroniclers like Cassiodorus added a layer of legitimacy for Theodoric and the Amal tribe from which he came by casting them as cooperative participants in the greater history of the Mediterranean going all the way back to the era of Alexander the Great. In reality—at least in part due to his formidable military—he was able to avoid imperial supervision, and dealings between the emperor and Theodoric were as equals. Unlike Odoacer, however, Theodoric respected the agreement he had made and allowed Roman citizens within his kingdom to be subject to Roman law and the Roman judicial system. The Goths, meanwhile, lived under their own laws and customs. In 519, when a mob had burned down the synagogues of Ravenna, Theodoric ordered the town to rebuild them at its own expense. Theodoric experienced difficulties before his death.
He had married off his daughter Amalasuntha to the Visigoth Eutharic, but Eutharic died in August 522 or 523, so no lasting dynastic connection of Ostrogoths and Visigoths was established, which highlighted the tensions between the Eastern Empire and the West. The new "augustus", Justin I—who replaced Anastasius, a man with whom Theodoric had good relations—was under the influence of his nephew Justinian; somehow, imperial views hardened against the West and talk of Rome's fall emerged during this period, leading to questions about the legitimacy of barbarian rule. Theodoric's good relations with the Roman Senate deteriorated due to a presumed senatorial conspiracy in 522, and, in 523, Theodoric had the philosopher and court official Boethius and Boethius' father-in-law Symmachus arrested on charges of treason related to the alleged plot. For his role, Theodoric had Boethius executed in 524. Despite the complex relationship between Theodoric and his son-in-law, the Catholic Burgundian king Sigismund, the two enjoyed a mutual peace for fifteen years. Then in 522, Sigismund killed his own son—Theodoric's grandson—Sigeric; an act which infuriated Theodoric and he retaliated by invading the Burgundian kingdom, accompanied by the Franks. Between the two peoples, Sigismund's Burgundian forces faced two fronts and were defeated. Meanwhile, Sigismund's Arian brother Godomar established himself as king over the remaining Burgundian territory and ruled for a decade. When Theodoric's sister Amalafrida sought to possibly change the direction of Vandal succession following the death of her spouse, the former Vandal king Thrasamund, the new Catholic Vandal king Hilderic had her, along with the accompanying Gothic retinue, killed. Theodoric was incensed and planned an expedition to restore his power over the Vandal kingdom when he died of dysentery in the summer of 526. 
The Gothic king was succeeded by his grandson Athalaric, with Theodoric's daughter Amalasuntha serving as regent since Athalaric was but ten years of age when Theodoric died. Her role was to carry out the dead ruler's political testament, to seek accommodation with the senate, and to maintain peace with the emperor. Suddenly the once united Goths were split and Theodoric's grandson Amalaric ruled the newly independent Visigothic kingdom for the next five years. Theodoric was married once. By a concubine in Moesia, whose name is unknown, he had two daughters; by his marriage to Audofleda in 493 he had one daughter. After his death in Ravenna in 526, Theodoric was succeeded by his grandson Athalaric. Athalaric was at first represented by his mother Amalasuntha, who served as regent from 526 until 534. The kingdom of the Ostrogoths, however, began to wane and was conquered by Justinian I in 553 after the Battle of Mons Lactarius. Theodoric promoted the rebuilding of Roman cities and the preservation of ancient monuments in Italy. The fame of his building works reached far-away Syria. Theodoric's building program saw more extensive new construction and restoration than that of any of the West Roman emperors after Honorius (395–423). Theodoric devoted most of his architectural attention to his capital, Ravenna. He restored Ravenna's water supply by repairing an aqueduct originally built by Trajan. According to the chronicles of Cassiodorus, a number of cities were renewed by Theodoric's building enterprises, some of which even surpassed the ancient wonders. Historian Jonathan J. Arnold has remarked on the scale of these enterprises. Theodoric constructed a "Great Basilica of Hercules" next to a colossal statue of the hero himself. To promote Arianism, the king commissioned a small Arian cathedral, the "Hagia Anastasis", which contains the Arian Baptistery. Three more churches built by Theodoric in Ravenna and its suburbs, S. Andrea dei Goti, S. Giorgio and S.
Eusebio, were destroyed in the 13th, 14th and 15th centuries. Theodoric built the Palace of Theodoric for himself in Ravenna, modeled on the Great Palace of Constantinople. It was an expansion of an earlier Roman structure. The palace church of Christ the Redeemer survives and is known today as the Basilica of Sant'Apollinare Nuovo. It was Theodoric's personal church of worship and was modeled specifically according to his tastes. An equestrian statue of Theodoric was erected in the square in front of the palace. Statues like these were symbols of the ancient world, and Theodoric's equestrian likeness was meant to convey his status as the undisputed ruler of the western empire. Theodoric the Great was interred in Ravenna, but his bones were scattered and his mausoleum was converted to a church after Belisarius conquered the city in 540. His mausoleum is one of the finest monuments in Ravenna. Unlike all the other contemporary buildings in Ravenna, which were made of brick, the Mausoleum of Theodoric was built completely from fine quality stone ashlars. The Palace of Domitian on the Palatine Hill was reconstructed, using the receipts from a specially levied tax, while the city walls of Rome were rebuilt, a feat celebrated by the Senate of Rome with a gilded statue of Theodoric. The Senate's Curia, the Theatre of Pompey, the city aqueducts, sewers and a granary were refurbished and repaired and statues were set up in the Flavian Amphitheatre. In 522 the philosopher Boethius became his "magister officiorum" (head of all the government and court services). Boethius was a Roman aristocrat and Christian humanist, who was also a philosopher, poet, theologian, mathematician, astronomer, translator, and commentator on Aristotle and other Greek luminaries. It is hard to overestimate the influence of this one-time servant and eventual victim of Theodoric on philosophy, particularly Christian philosophy, throughout the Middle Ages.
Boethius' treatises and commentaries became textbooks for medieval students, and the great Greek philosophers were otherwise unknown except through his Latin translations. The execution of Boethius did nothing to dissipate tensions between Arians and Catholics but merely raised additional questions about barbarian imperial legitimacy. Theodoric was of the Arian (nontrinitarian) faith and in his final years, he was no longer the disengaged Arian patron of religious toleration that he had seemed earlier in his reign. "Indeed, his death cut short what could well have developed into a major persecution of Catholic churches in retaliation for measures taken by Justinian in Constantinople against Arians there." Despite the Byzantine "caesaropapism", which conflated imperial and ecclesiastical authority in the same person—whereby Theodoric's Arian beliefs were tolerated under two separate emperors—the fact remained that to most clergy across the Eastern Empire, Theodoric was a heretic. At the end of his reign quarrels arose with his Roman subjects and the Byzantine emperor Justin I over the matter of Arianism. Relations between the two kingdoms deteriorated, although Theodoric's military abilities dissuaded the Byzantines from waging war against him. After his death, that reluctance faded quickly. Seeking to restore the glory of ancient Rome, Theodoric ruled Italy during one of its most peaceful and prosperous periods and was accordingly hailed as a new Trajan and Valentinian I for his building efforts and his religious toleration. His far-sighted goals included taking what was best from Roman culture and combining it with Gothic energy and physical power as a way into the future. Relatively amicable relations between Goths and Romans also make Theodoric's kingdom notable. Memories of his reign made him a hero of medieval German legend as Dietrich von Bern, the two figures traditionally being regarded as the same person.
Theodoric is an important figure in medieval German literature as the character, "Dietrich von Bern", known also in Icelandic literature as "Þiðrekr". In German legends, Dietrich becomes an exile from his native kingdom of Lombardy, fighting with the help of Etzel against his usurping uncle, Ermenrich. Only the Old High German "Hildebrandslied" still contains Odoacer as Dietrich's antagonist. The Old Norse version, based on German sources, moves the location of Dietrich (Thidrek)'s life to Westphalia and northern Germany. The legends paint a generally positive picture of Dietrich, with only some influence from the negative traditions of the church visible.
https://en.wikipedia.org/wiki?curid=31222
Truth serum "Truth serum" is a colloquial name for any of a range of psychoactive drugs used in an effort to obtain information from subjects who are unable or unwilling to provide it otherwise. These include ethanol, scopolamine, 3-quinuclidinyl benzilate, midazolam, flunitrazepam, sodium thiopental, and amobarbital, among others. Although a variety of such substances have been tested, serious issues have been raised about their use scientifically, ethically and legally. There is currently no drug proven to cause consistent or predictable enhancement of truth-telling. Subjects questioned under the influence of such substances have been found to be suggestible and their memories subject to reconstruction and fabrication. When such drugs have been used in the course of investigating civil and criminal cases, they have not been accepted by Western legal systems and legal experts as genuine investigative tools. In the United States, it has been suggested that their use is a potential violation of the Fifth Amendment of the U.S. Constitution (the right to remain silent). Concerns have also been raised through the European Court of Human Rights arguing that use of a truth serum could be considered a violation of a human right to be free from degrading treatment, or could be considered a form of torture. It has been noted to be a violation of the Inter-American Convention to Prevent and Punish Torture. "Truth serum" was abused against psychotic patients as part of old, discredited practices of psychiatry and is no longer used. In a therapeutic context, the controlled administration of intravenous hypnotic medications is called "narcosynthesis" or "narcoanalysis". Such application was first documented by Dr. William Bleckwenn. Reliability and suggestibility of patients are concerns, and the practice of chemically inducing an involuntary mental state is now widely considered to be a form of torture. 
Sedatives or hypnotics that alter higher cognitive function include ethanol, scopolamine, 3-quinuclidinyl benzilate, potent short or intermediate acting hypnotic benzodiazepines such as midazolam, flunitrazepam, and various short and ultra-short acting barbiturates, including sodium thiopental (commonly known by the brand name Pentothal) and amobarbital (formerly known as sodium amytal). While there have been many clinical studies of the efficacy of narcoanalysis in interrogation or lie detection, there is dispute over whether any of them qualify as randomized controlled studies that would meet scientific standards for determining effectiveness. India's Central Bureau of Investigation has used intravenous barbiturates for interrogation, often in high-profile cases. One such case was the interrogation of Ajmal Kasab, the only terrorist captured alive by police in the 2008 attacks in Mumbai, India. Kasab was a Pakistani militant and a member of the Lashkar-e-Taiba terrorist group. On 3 May 2010, Kasab was found guilty of 80 offences, including murder, waging war against India, possessing explosives, and other charges. On 6 May 2010, the same trial court sentenced him to death on four counts and to a life sentence on five counts. The Central Bureau of Investigation also conducted this test on Krishna, a key witness (and also a suspect) in the high-profile 2008 Aarushi-Hemraj Murder Case, to seek more information from him and to assess his credibility as a witness with key information not yet known to the investigating authorities. According to various unverified media sources, Krishna deemed Hemraj (the prime suspect) not guilty of Aarushi's murder, claiming that Hemraj "treat[ed] Aarushi like his own daughter". On May 5, 2010, Supreme Court Judge Balasubramaniam held in the case "Smt. Selvi vs. State of Karnataka" that narcoanalysis, polygraph and brain-mapping tests were to be allowed only with the consent of the accused.
The judge stated: "We are of the considered opinion that no individual can be forced and subjected to such techniques involuntarily, and by doing so it amounts to unwarranted intrusion of personal liberty." The Madhya Pradesh High Court permitted narcoanalysis in the investigation of the killing of a tiger that occurred in May 2010. The Jhurjhura Tigress at Bandhavgarh National Park, a mother of three cubs, was found dead as a result of being hit by a vehicle. A Special Task Force requested the narcoanalysis testing of four persons, one of whom refused to consent on grounds of potential post-test complications. A defector from the biological weapons Department 12 of the KGB "illegals" (S) directorate (now part of the Russian SVR service) claimed a serum code-named SP-117 was highly effective, and has been widely used. According to him, "The 'remedy which loosens the tongue' has no taste, no smell, no color, and no immediate side effects. Most importantly, a person had no recollection [of] having had the 'heart-to-heart talk'," and felt afterward as though they'd suddenly fallen asleep. Officers of S Directorate primarily used the drug to verify the fidelity and trustworthiness of their agents who operated overseas, such as Vitaly Yurchenko. According to Alexander Litvinenko, Russian presidential candidate Ivan Rybkin was drugged with the same substance by FSB agents during his kidnapping in 2004. Beginning in 1922, scopolamine was promoted by obstetrician Robert Ernest House as an advance that would prevent false convictions. He had noted that women in childbirth who were given scopolamine could answer questions accurately even while in a state of twilight sleep, and were oftentimes "exceedingly candid" in their remarks. House proposed that scopolamine could be used when interrogating suspected criminals. He even arranged to administer scopolamine to prisoners in the Dallas County jail.
Both men were believed to be guilty, both denied guilt under scopolamine, and both were eventually acquitted. In 1926, the use of scopolamine was rejected in a court case, by Judge Robert Walker Franklin, who questioned both its scientific origin, and the uncertainty of its effect. The United States Office of Strategic Services (OSS) experimented with the use of mescaline, scopolamine, and marijuana as possible truth drugs during World War II. They concluded that the effects were not much different from those of alcohol: subjects became more talkative but that did not mean they were more truthful. Like hypnosis, there were also issues of suggestibility and interviewer influence. Cases involving scopolamine resulted in a mixture of testimonies both for and against those suspected, at times directly contradicting each other. LSD was also considered as a possible truth serum, but found unreliable. During the 1950s and 1960s, the United States Central Intelligence Agency (CIA) carried out a number of investigations including Project MKUltra and Project MKDELTA, which involved illegal use of truth drugs including LSD. A CIA report from 1961, released in 1993, concludes: In 1963, the U.S. Supreme Court ruled, in Townsend v. Sain, that confessions produced as a result of ingestion of truth serum were "unconstitutionally coerced" and therefore inadmissible. The viability of forensic evidence produced from truth sera has been addressed in lower courts – judges and expert witnesses have generally agreed that they are not reliable for lie detection. In 1967, Perry Russo was administered sodium pentothal for interrogation by District Attorney Jim Garrison in the investigation of JFK's assassination. More recently, a judge approved the use of narcoanalysis in the 2012 Aurora, Colorado shooting trial to evaluate whether James Eagan Holmes's state of mind was valid for an insanity plea. 
Judge William Sylvester ruled that prosecutors would be allowed to interrogate Holmes "under the influence of a medical drug designed to loosen him up and get him to talk", such as sodium amytal, if he filed an insanity plea. The hope was that a 'narcoanalytic interview' could confirm whether or not he had been legally insane on 20 July, the date of the shootings. It is not known whether such an examination was carried out. William Shepherd, chair of the criminal justice section of the American Bar Association, stated, with respect to the Holmes case, that use of a 'truth drug' as proposed, "to ascertain the veracity of a defendant's plea of insanity... would provoke intense legal argument relating to Holmes's right to remain silent under the fifth amendment of the US constitution." Discussing the possible effectiveness of such an examination, psychiatrist August Piper stated that "amytal's inhibition-lowering effects in no way prompt the subject to offer up true statements or memories." "Psychology Today"'s Scott Lilienfeld noted, citing Piper, that "there's good reason to believe that truth serums merely lower the threshold for reporting virtually all information, both true and false."
https://en.wikipedia.org/wiki?curid=31231
Tripoli Tripoli is the capital city and the largest city of Libya, with a population of about 1.165 million people in 2018. It is located in the northwest of Libya on the edge of the desert, on a point of rocky land projecting into the Mediterranean Sea and forming a bay. It includes the port of Tripoli and the country's largest commercial and manufacturing centre. It is also the site of the University of Tripoli. The vast barracks, which includes the former family estate of Muammar Gaddafi, is also located in the city. Colonel Gaddafi largely ruled the country from his residence in this barracks. Tripoli was founded in the 7th century BC by the Phoenicians, who gave it its Libyco-Berber name, before passing into the hands of the Greek rulers of Cyrenaica as Oea. Due to the city's long history, there are many sites of archaeological significance in Tripoli. "Tripoli" may also refer to the top-level administrative division in the current Libyan system, the Tripoli District. Tripoli is also known as Tripoli-of-the-West, to distinguish it from its Phoenician sister city Tripoli, Lebanon, known in Arabic by a name meaning 'Levantine Tripoli'. It is affectionately called "The Mermaid of the Mediterranean" (literally 'bride of the sea'), describing its turquoise waters and its whitewashed buildings. Tripoli is a Greek name that means 'Three Cities', introduced into Western European languages through the Italian. The city was founded in the 7th century BC by Greeks from Thera (Santorini), who gave it the name "Oea" (Oία). There is still a village named Oia on Thera (Santorini), as well as another Tripoli in Greece. The Greeks were probably attracted to the site by its natural harbour, flanked on the western shore by the small, easily defensible peninsula, on which they established their colony.
The city then passed into the hands of the rulers of Cyrenaica (a Greek colony on the North African shore, east of Tripoli, halfway to Egypt), although the Carthaginians later wrested it from the Greeks. By the latter half of the 2nd century BC, it belonged to the Romans, who included it in their province of Africa, and gave it the name of "Regio Syrtica". Around the beginning of the 3rd century AD, it became known as the Regio Tripolitana, meaning "region of the three cities", namely Oea ("i.e.", modern Tripoli), Sabratha and Leptis Magna. It was probably raised to the rank of a separate province by Septimius Severus, who was a native of Leptis Magna. In spite of centuries of Roman habitation, the only visible Roman remains, apart from scattered columns and capitals (usually integrated in later buildings), is the Arch of Marcus Aurelius from the 2nd century AD. The fact that Tripoli has been continuously inhabited, unlike "e.g.", Sabratha and Leptis Magna, has meant that the inhabitants have either quarried material from older buildings (destroying them in the process) or built on top of them, burying them beneath the streets, where they remain largely unexcavated. There is evidence to suggest that the Tripolitania region was in some economic decline during the 5th and 6th centuries, in part due to the political unrest spreading across the Mediterranean world in the wake of the collapse of the Western Roman empire, as well as pressure from the invading Vandals. According to al-Baladhuri, Tripoli was, unlike Western North Africa, taken by the Muslims very early after Alexandria, in the 22nd year of the Hijra, that is between 30 November 642 and 18 November 643 AD. Following the conquest, Tripoli was ruled by dynasties based in Cairo, Egypt (first the Fatimids, and later the Mamluks), and Kairouan in Ifriqiya (the Arab Fihrids, Muhallabids and Aghlabid dynasties). For some time it was a part of the Berber Almohad empire and of the Hafsids kingdom. 
In 1510, it was taken by Pedro Navarro, Count of Oliveto, for Spain, and, in 1530, it was assigned, together with Malta, to the Knights of St. John, who had lately been expelled by the Ottoman Turks from their stronghold on the island of Rhodes. Finding themselves in very hostile territory, the Knights enhanced the city's walls and other defences. Though built on top of a number of older buildings (possibly including a Roman public bath), much of the earliest defensive structure of the Tripoli castle ("Assaraya al-Hamra", i.e., the "Red Castle") is attributed to the Knights of St. John. Having previously combated piracy from their base on Rhodes, the Knights were given charge of the city to prevent it from relapsing into the nest of Barbary pirates it had been before the Spanish occupation. The disruption the pirates caused to Christian shipping lanes in the Mediterranean had been one of the main incentives for the Spanish conquest of the city. The Knights kept the city with some trouble until 1551, when they were compelled to surrender to the Ottomans, led by the Ottoman admiral Turgut Reis. Turgut Reis served as pasha of Tripoli. During his rule, he adorned and built up the city, making it one of the most impressive cities along the North African coast. Turgut was buried in Tripoli after his death in 1565. His body was taken from Malta, where he had fallen during the Ottoman siege of the island, to a tomb in the Sidi Darghut Mosque, which he had established close to his palace in Tripoli. The palace has since disappeared (supposedly it was situated between the so-called "Ottoman prison" and the Arch of Marcus Aurelius), but the mosque, along with his tomb, still stands, close to the Bab Al-Bahr gate. After the capture by the Ottoman Turks, Tripoli once again became a base of operations for Barbary pirates. 
One of several Western attempts to dislodge them was a Royal Navy attack under John Narborough in 1675, of which a vivid eyewitness account has survived. Effective Ottoman rule during this period (1551–1711) was often hampered by the local Janissary corps. Although intended to function as enforcers of the local administration, the captain of the Janissaries and his cronies were often the "de facto" rulers. In 1711, Ahmed Karamanli, a Janissary officer of Turkish origin, killed the Ottoman governor, the "Pasha", and established himself as ruler of the Tripolitania region. By 1714, he had asserted a sort of semi-independence from the Ottoman Sultan, ushering in the Karamanli dynasty. The Pashas of Tripoli were expected to pay a regular tributary tax to the Sultan but were in all other respects rulers of an independent kingdom. This order of things continued under the rule of his descendants, accompanied by brazen piracy and blackmail, until 1835, when the Ottoman Empire took advantage of an internal struggle and re-established its authority. The Ottoman province ("vilayet") of Tripoli (including the dependent "sanjak" of Cyrenaica) lay along the southern shore of the Mediterranean between Tunisia in the west and Egypt in the east. Besides the city itself, the area included Cyrenaica (the Barca plateau), the chain of oases in the Aujila depression, Fezzan and the oases of Ghadames and Ghat, separated by sandy and stony wastelands. In the early part of the 19th century, the regency at Tripoli, owing to its piratical practices, was twice involved in war with the United States. In May 1801, the pasha demanded an increase in the tribute ($83,000) which the U.S. government had been paying since 1796 for the protection of its commerce from piracy under the 1796 Treaty with Tripoli. The demand was refused by President Thomas Jefferson, and a naval force was sent from the United States to blockade Tripoli. The First Barbary War (1801–1805) dragged on for four years. 
In 1803, Tripolitan fighters captured the U.S. Navy frigate "Philadelphia" and took its commander, Captain William Bainbridge, and the entire crew prisoner. The "Philadelphia" had run aground when the captain tried to navigate too close to the port of Tripoli. After several hours aground, with Tripolitan gunboats firing upon the "Philadelphia" (though none ever struck her), Captain Bainbridge decided to surrender. The "Philadelphia" was later turned against the Americans and anchored in Tripoli harbour as a gun battery while her officers and crew were held prisoner in Tripoli. The following year, U.S. Navy Lieutenant Stephen Decatur led a daring and successful nighttime raid to retake and burn the warship rather than see it remain in enemy hands. Decatur's men set fire to the "Philadelphia" and escaped. A notable incident in the war was the expedition undertaken by U.S. Consul William Eaton with the objective of replacing the pasha with an elder brother living in exile, who had promised to accede to all the wishes of the United States. Eaton, at the head of a mixed force of U.S. soldiers, sailors and Marines, along with Greek, Arab and Turkish mercenaries numbering approximately 500, marched across the desert from Alexandria, Egypt, and, with the aid of three American warships, succeeded in capturing Derna. Soon afterward, on 3 June 1805, peace was concluded. The pasha dropped his demands and received $60,000 as ransom for the "Philadelphia" prisoners under the 1805 Treaty with Tripoli. In 1815, in consequence of further outrages, and spurred by the humiliation of the earlier defeat, Captains Bainbridge and Stephen Decatur, at the head of an American squadron, again visited Tripoli and forced the pasha to comply with the demands of the United States (see Second Barbary War). In 1835, the Ottomans took advantage of a local civil war to reassert their direct authority. 
After that date, Tripoli was under the direct control of the Sublime Porte. Rebellions in 1842 and 1844 were unsuccessful. After the French occupation of Tunisia (1881), the Ottomans increased their garrison in Tripoli considerably. Italy had long claimed that Tripoli fell within its zone of influence and that it had the right to preserve order within the state. Under the pretext of protecting its own citizens living in Tripoli from the Ottoman government, it declared war against the Ottomans on 29 September 1911 and announced its intention of annexing Tripoli. On 1 October 1911, a naval battle was fought at Prevesa, Greece, and three Ottoman vessels were destroyed. By the Treaty of Lausanne, Italian sovereignty was acknowledged by the Ottomans, although the caliph was permitted to exercise religious authority. Italy officially granted autonomy after the war but gradually occupied the region. Originally administered as part of a single colony, Tripoli and its surrounding province formed a separate colony from 26 June 1927 to 3 December 1934, when all Italian possessions in North Africa were merged into one colony. By 1938, Tripoli had 108,240 inhabitants, including 39,096 Italians. Tripoli underwent great architectural and urban improvement under Italian rule: in the early 1920s the Italians created a sewage system, which the city had until then lacked, and a modern hospital. Along the coast of the province, a section of the Litoranea Balbia was built in 1937–1938, a road running from the Tunisian frontier through Tripoli to the border of Egypt. The car tag for the Italian province of Tripoli was "TL". Furthermore, in order to promote Tripoli's economy, the Italians founded the Tripoli International Fair in 1927, which is considered to be the oldest trade fair in Africa. 
The so-called "Fiera internazionale di Tripoli" was one of the main international fairs of the colonial world in the 1930s and was promoted internationally, together with the Tripoli Grand Prix, as a showcase of Italian Libya. The Italians created the Tripoli Grand Prix, an international motor racing event first held in 1925 on a racing circuit outside Tripoli (it lasted until 1940). The first airport in Libya, the Mellaha Air Base, was built by the Italian Air Force in 1923 near the Tripoli racing circuit (it is now Mitiga International Airport). Tripoli even had a railway station with some small railway connections to nearby cities; in August 1941 the Italians started to build a new railway between Tripoli and Benghazi, with the same gauge as that used in Egypt and Tunisia, but the war, and the defeat of the Italian Army, halted construction the next year. Tripoli was controlled by Italy until 1943, when the provinces of Tripolitania and Cyrenaica were captured by Allied forces. The city fell to troops of the British Eighth Army on 23 January 1943. Tripoli was then governed by the British until independence in 1951. Under the terms of the 1947 peace treaty with the Allies, Italy relinquished all claims to Libya. Colonel Muammar Gaddafi became leader of Libya on 1 September 1969. On 15 April 1986, U.S. President Ronald Reagan ordered major bombing raids, dubbed Operation El Dorado Canyon, against Tripoli and Benghazi, killing 45 Libyan military and government personnel as well as 15 civilians. The strike followed U.S. interception of telex messages from Libya's East Berlin embassy suggesting the involvement of Libyan leader Muammar Gaddafi in a bomb explosion on 5 April in West Berlin's La Belle discothèque, a nightclub frequented by U.S. servicemen. Among the alleged fatalities of the 15 April retaliatory attack was Gaddafi's adopted daughter, Hannah. 
The United Nations sanctions against Libya imposed in April 1992 under Security Council Resolution 748 were lifted in September 2003, which increased traffic through the port of Tripoli and had a positive impact on the city's economy. In February and March 2011, Tripoli witnessed intense anti-government protests and violent government responses, resulting in hundreds killed and wounded. The city's Green Square was the scene of some of the protests. The anti-Gaddafi protests were eventually crushed, and Tripoli became the site of pro-Gaddafi rallies. The city's defences loyal to Gaddafi included the military headquarters at Bab al-Aziziyah (where Gaddafi's main residence was located) and Mitiga International Airport. At the latter, on 13 March, Ali Atiyya, a colonel of the Libyan Air Force, defected and joined the revolution. In late February, rebel forces took control of Zawiya, a city to the west of Tripoli, thus increasing the threat to pro-Gaddafi forces in the capital. During the subsequent battle of Zawiya, loyalist forces besieged the city and eventually recaptured it by 10 March. As the 2011 military intervention in Libya commenced on 19 March to enforce a U.N. no-fly zone over the country, the city once again came under air attack. It was the second time Tripoli had been bombed, the first being the 1986 U.S. airstrikes that struck Bab al-Azizia, Gaddafi's heavily fortified compound. In July and August, Libyan online revolutionary communities posted tweets and updates on attacks by rebel fighters on pro-government vehicles and checkpoints. In one such attack, Saif al-Islam Gaddafi and Abdullah Senussi were the targets. The government, however, denied any revolutionary activity inside the capital. Several months after the initial uprising, rebel forces in the Nafusa Mountains advanced towards the coast, retaking Zawiya and reaching Tripoli on 21 August. 
On 21 August, the symbolic Green Square, immediately renamed Martyrs' Square by the rebels, was taken under rebel control and pro-Gaddafi posters were torn down and burned. During a radio address on 1 September, Gaddafi declared that the capital of the Great Socialist People's Libyan Arab Jamahiriya had been moved from Tripoli to Sirte, after rebels had taken control of Tripoli. In August and September 2014, Islamist armed groups extended their control of central Tripoli. The House of Representatives parliament set up operations on a Greek car ferry in Tobruk. A rival New General National Congress parliament continued to operate in Tripoli. Tripoli and its surrounding suburbs all lie within the Tripoli sha'biyah (district). In accordance with Libya's former Jamahiriya political system, Tripoli comprises Local People's Congresses where, in theory, the city's population discuss different matters and elect their own people's committee; at present there are 29 Local People's Congresses. In reality, the former revolutionary committees severely limited the democratic process by closely supervising committee and congress elections at the branch and district levels of governments, Tripoli being no exception. Tripoli is sometimes referred to as "the de jure capital of Libya" because none of the country's ministries are actually located in the capital. Even the former National General People's Congress was held annually in the city of Sirte rather than in Tripoli. As part of a radical decentralization programme undertaken by Gaddafi in September 1988, all General People's Committee secretariats (ministries), except those responsible for foreign liaison (foreign policy and international relations) and information, were moved outside Tripoli. According to diplomatic sources, the former Secretariat for Economy and Trade was moved to Benghazi; the Secretariat for Health to Kufra; and the remainder, excepting one, to Sirte, Muammar Gaddafi's birthplace. 
In early 1993 it was announced that the Secretariat for Foreign Liaison and International Co-operation was to be moved to Ra's Lanuf. In October 2011, Libya fell to the National Transitional Council (NTC), which took full control, abolishing the Gaddafi-era system of national and local government. Tripoli lies at the western extremity of Libya, close to the Tunisian border. More than a thousand kilometres (over 600 miles) separate Tripoli from Libya's second largest city, Benghazi. Coastal oases alternate with sandy areas and lagoons along the shores of Tripolitania. Until 2007, the "Sha'biyah" included the city, its suburbs and their immediate surroundings. In older administrative systems and throughout history, there existed a province ("muhafazah"), state ("wilayah") or city-state with a much larger area (though not constant boundaries), which is sometimes mistakenly referred to as Tripoli but more appropriately should be called Tripolitania. Tripoli has a hot semi-arid climate (Köppen: "BSh"), with hot, dry summers and relatively wet, mild winters. Summers are hot and muggy, and the average annual rainfall is low and can be very erratic; snowfall has occurred in past years. Epic floods in 1945 left Tripoli under water for several days, but two years later an unprecedented drought caused the loss of thousands of head of cattle. The deficiency in rainfall is reflected in the absence of permanent rivers or streams in the city, as indeed throughout the entire country. The allocation of limited water is considered of sufficient importance to warrant the existence of the Secretariat of Dams and Water Resources, and damaging a source of water can be penalized by a heavy fine or imprisonment. 
The Great Manmade River, a network of pipelines that transports water from the desert to the coastal cities, supplies Tripoli with its water. The grand scheme was initiated by Gaddafi in 1982 and has had a positive impact on the city's inhabitants. Tripoli is dotted with public spaces, but none fit the category of large city parks. Martyrs' Square, located near the waterfront, is scattered with palm trees, the most abundant plant used for landscaping in the city. The Tripoli Zoo, located south of the city centre, is a large reserve of plants, trees and open green spaces and was the country's biggest zoo. It has, however, been closed since 2009. Tripoli is one of the main hubs of Libya's economy, along with Misrata. It is the leading centre of banking, finance and communication in the country and one of the leading commercial and manufacturing cities in Libya. Many of the country's largest corporations locate their headquarters and home offices in Tripoli, as do the majority of international companies. Major manufactured goods include processed food, textiles, construction materials, clothing and tobacco products. Since the lifting of sanctions against Libya in 1999 and again in 2003, Tripoli has seen a rise in foreign investment as well as an increase in tourism. Increased traffic has also been recorded in the city's port as well as at Libya's main international airport, Tripoli International. The city is home to the Tripoli International Fair, an international industrial, agricultural and commercial event held on Omar Muktar Avenue. An active member of the Global Association of the Exhibition Industry (UFI), headquartered in Paris, the fair is organized annually and takes place from 2 to 12 April. Participation averages around 30 countries as well as more than 2,000 companies and organizations. Since the rise in tourism and influx of foreign visitors, there has been an increased demand for hotels in the city. 
To cater for these increased demands, the Corinthia Bab Africa Hotel, located in the central business district, was constructed in 2003 and is the largest hotel in Libya. Other high-end hotels in Tripoli include the Al Waddan Intercontinental and the Tripoli Radisson Blu Hotel. A project under construction, part of the Tripoli business centre, was scheduled for completion by 2015; it is to include towers and hotels, a marketing centre, restaurants and above-ground and underground parking. The cost is planned to be more than 3.0 billion Libyan dinars (US$2.8 billion). Companies with head offices in Tripoli include Afriqiyah Airways and Libyan Airlines. Buraq Air has its head office on the grounds of Mitiga International Airport. By 2017, owing to the effects of the Libyan Civil War (2011), rising inflation, militia infighting, bureaucratic problems, multiple central banks, fragmented governments, corruption and other issues, the Libyan economy was suffering. Because the central banks refused to supply U.S. dollars to the public at the official rate of 1.37 dinars to the dollar, Libyans had to buy dollars on the black market, where the price reached 10 dinars to the dollar, driving the local economy into ruin and undermining people's purchasing power. Militias, however, profited from this situation, using their armed influence and corrupt connections to buy dollars at the official rate of about 1.30 to 1 and sell them at 10 dinars to the dollar. The city's old town, the Medina, is still unspoiled by mass tourism, though it was increasingly exposed to foreign visitors following the lifting of the UN embargo in 2003. The walled Medina retains much of its serene old-world ambiance. Three gates provided access to the old town: Bab Zanata in the west, Bab Hawara in the southeast and Bab Al-Bahr in the north wall. 
The city walls are still standing and can be climbed for good views of the city. The bazaar is also known for its traditional wares; fine jewellery and clothes can be found in the local markets. A number of buildings constructed by the Italian colonial rulers were later demolished under Gaddafi, including the Royal Miramare Theatre, next to the Red Castle, and Tripoli Railway Central Station. The Red Castle Museum ("Assaraya al-Hamra"), a vast palace complex with numerous courtyards, dominates the city skyline and is located on the outskirts of the Medina. Some classical statues and fountains from the Ottoman period are scattered around the castle. The places of worship are predominantly Muslim mosques, but there are also Christian churches: the Apostolic Vicariate of Tripoli (Catholic Church), a Coptic Orthodox church, and Protestant and Evangelical churches. The largest university in Tripoli, the University of Tripoli, is a public university providing free education to the city's inhabitants. Private universities and colleges have also begun to appear in recent years. Football is the most popular sport in the Libyan capital. Tripoli is home to the most prominent football clubs in Libya, including Al Madina, Al Ahly Tripoli and Al Ittihad Tripoli. Other sports clubs based in Tripoli include Al Wahda Tripoli and Addahra. The city also played host to the Italian Super Cup in 2002. The 2017 Africa Cup of Nations was to be played in Libya, with three of the venues in Tripoli, but Libya's hosting was cancelled due to the ongoing Second Libyan Civil War. Tripoli hosted the final games of the official 2009 African Basketball Championship. Tripoli International Airport is the largest airport in Tripoli and Libya. Tripoli also has another, smaller airport, Mitiga International Airport. Tripoli is the interim destination of a railway from Sirte, under construction in 2007. 
In July 2014, Tripoli International Airport was destroyed in the Battle of Tripoli Airport, when the Zintani militias in charge of its security were attacked by Islamist militias aligned with the GNC, in an operation code-named 'Libya Dawn' (its fighters becoming known as the "Libya Dawn Militias") and led by the Misratan militia general Salah Badi. The attack followed accusations that the secular Zintani militias, known to have past ties to the Gaddafi regime, had been smuggling drugs, alcohol and other illegal goods. Libya's Mufti Sadiq al-Ghariani praised the Libya Dawn operation. The battle for Tripoli's central airport ended in its near-complete destruction, with 90% of the facilities incapacitated or burned down, damage estimated in the billions of dollars, and around ten more planes destroyed. The airport was shelled with Grad rockets; the air traffic control tower was reported completely destroyed and the main reception building completely wrecked. Surrounding civilian residential areas and infrastructure, including bridges, electricity and water equipment, and roads, were also damaged in the fighting. Oil storage tanks containing large reserves of kerosene, gases and related chemicals burned, sending large plumes of smoke into the air. Reconstruction efforts are currently under way, with the GNA awarding a contract of $78 million to an Italian firm, Emaco Group (or "Aeneas Consorzio"), to rebuild the destroyed facilities. As of 2017, all flights have been diverted to the former military base now known as Mitiga International Airport. Sister cities:
https://en.wikipedia.org/wiki?curid=31232
Tower of Babel The Tower of Babel (Hebrew: "Migdal Bavel") narrative in Genesis 11:1–9 is an origin myth meant to explain why the world's peoples speak different languages. According to the story, a united human race in the generations following the Great Flood, speaking a single language and migrating eastward (or from the east), comes to the land of Shinar. There they agree to build a city and a tower tall enough to reach heaven. God, observing their city and tower, confounds their speech so that they can no longer understand each other, and scatters them around the world. Some modern scholars have associated the Tower of Babel with known structures, notably the Etemenanki, a ziggurat dedicated to the Mesopotamian god Marduk in Babylon. A Sumerian story with some similar elements is told in "Enmerkar and the Lord of Aratta". The phrase "Tower of Babel" does not appear in the Bible; it is always "the city and the tower" or just "the city". The original derivation of the name Babel (also the Hebrew name for Babylon) is uncertain. The native, Akkadian name of the city was "Bāb-ilim", meaning "gate of God". However, that form and interpretation are now usually thought to be the result of an Akkadian folk etymology applied to an earlier form of the name, Babilla, of unknown meaning and probably non-Semitic origin. According to the Bible, the city received the name "Babel" from the Hebrew verb בָּלַ֥ל ("bālal"), meaning to jumble or to confuse. The narrative of the Tower of Babel is an etiology, or explanation of a phenomenon. Etiologies are narratives that explain the origin of a custom, ritual, geographical feature, name, or other phenomenon. The story of the Tower of Babel explains the origins of the multiplicity of languages: God was concerned that humans had blasphemed by building the tower to avoid a second flood, so God brought multiple languages into existence. Thus, humans were divided into linguistic groups, unable to understand one another. 
The story's theme of competition between God and humans appears elsewhere in Genesis, in the story of Adam and Eve in the Garden of Eden. The 1st-century Jewish interpretation found in Flavius Josephus explains the construction of the tower as a hubristic act of defiance against God ordered by the arrogant tyrant Nimrod. There have, however, been some contemporary challenges to this classical interpretation, with emphasis placed on the explicit motive of cultural and linguistic homogeneity mentioned in the narrative (v. 1, 4, 6). This reading of the text sees God's actions not as a punishment for pride, but as an etiology of cultural differences, presenting Babel as the cradle of civilization. Tradition attributes the whole of the Pentateuch to Moses; however, in the late 19th century, the documentary hypothesis was proposed by Julius Wellhausen. This hypothesis proposes four sources: J, E, P and D. Of these hypothetical sources, proponents suggest that this narrative comes from the J or Yahwist source. The etiological nature of the narrative is considered typical of J. In addition, the intentional word play regarding the city of Babel, and the noise of the people's "babbling" is found in the Hebrew words as easily as in English, and is considered typical of the Yahwist source. There is a Sumerian myth similar to that of the Tower of Babel, called "Enmerkar and the Lord of Aratta", where Enmerkar of Uruk is building a massive ziggurat in Eridu and demands a tribute of precious materials from Aratta for its construction, at one point reciting an incantation imploring the god Enki to restore (or in Kramer's translation, to disrupt) the linguistic unity of the inhabited regions—named as Shubur, Hamazi, Sumer, Uri-ki (Akkad), and the Martu land, "the whole universe, the well-guarded people—may they all address Enlil together in a single language." 
In addition, a further Assyrian myth, dating from the 8th century BC during the Neo-Assyrian Empire (911–605 BC), bears a number of similarities to the later written Biblical story. Various traditions similar to that of the Tower of Babel are found in Central America. Some writers have connected the Great Pyramid of Cholula to the Tower of Babel. The Dominican friar Diego Durán (1537–1588) reported hearing an account about the pyramid from a hundred-year-old priest at Cholula, shortly after the conquest of Mexico. He wrote that he was told that, when the light of the sun first appeared upon the land, giants appeared and set off in search of the sun. Not finding it, they built a tower to reach the sky. An angered God of the Heavens called upon the inhabitants of the sky, who destroyed the tower and scattered its inhabitants. The story was not related to either a flood or the confusion of languages, although Frazer connects its construction and the scattering of the giants with the Tower of Babel. Another story, attributed by the native historian Fernando de Alva Cortés Ixtlilxóchitl (c. 1565–1648) to the ancient Toltecs, states that after men had multiplied following a great deluge, they erected a tall "zacuali", or tower, to preserve themselves in the event of a second deluge. However, their languages were confounded and they went to separate parts of the earth. Still another story, attributed to the Tohono O'odham people, holds that Montezuma escaped a great flood, then became wicked and attempted to build a house reaching to heaven, but the Great Spirit destroyed it with thunderbolts. Traces of a somewhat similar story have also been reported among the Tharu of Nepal and northern India. According to David Livingstone, the people he met living near Lake Ngami in 1849 had such a tradition, but with the builders' heads getting "cracked by the fall of the scaffolding". 
In his 1918 book, "Folklore in the Old Testament", Scottish social anthropologist Sir James George Frazer documented similarities between Old Testament stories, such as the Flood, and indigenous legends around the world. He identified Livingstone's account with a tale found in Lozi mythology, wherein the wicked men build a tower of masts to pursue the Creator-God, Nyambe, who has fled to Heaven on a spider-web, but the men perish when the masts collapse. He further relates similar tales of the Ashanti that substitute a pile of porridge pestles for the masts. Frazer moreover cites such legends found among the Kongo people, as well as in Tanzania, where the men stack poles or trees in a failed attempt to reach the moon. He further cited the Karbi and Kuki people of Assam as having a similar story. The traditions of the Karen people of Myanmar, which Frazer considered to show clear 'Abrahamic' influence, also relate that their ancestors migrated there following the abandonment of a great pagoda in the land of the Karenni 30 generations from Adam, when the languages were confused and the Karen separated from the Karenni. He notes yet another version current in the Admiralty Islands, where mankind's languages are confused following a failed attempt to build houses reaching to heaven. Biblical scholars see the Book of Genesis as mythological and not as a historical account of events. Nonetheless, the story of Babel can be interpreted in terms of its context. Genesis 11:9 attributes the Hebrew version of the name, "Babel", to the verb "balal", which means "to confuse or confound" in Hebrew. The first-century Roman-Jewish author Flavius Josephus similarly explained that the name was derived from the Hebrew word "Babel (βαβὲλ)", meaning "confusion". The account in Genesis makes no mention of any destruction of the tower. The people whose languages are confounded were simply scattered from there over the face of the Earth and stopped building their city. 
However, in other sources, such as the Book of Jubilees (chapter 10, v. 18–27), Cornelius Alexander (frag. 10), Abydenus (frags. 5 and 6), Josephus ("Antiquities" 1.4.3), and the Sibylline Oracles (iii. 117–129), God overturns the tower with a great wind. In the Talmud, it is said that the top of the tower was burnt, the bottom was swallowed, and the middle was left standing to erode over time (Sanhedrin 109a). Etemenanki (Sumerian: "temple of the foundation of heaven and earth") was the name of a ziggurat dedicated to Marduk in the city of Babylon. It was famously rebuilt by the 6th-century BCE Neo-Babylonian dynasty rulers Nabopolassar and Nebuchadnezzar II. According to modern scholars, such as Stephen L. Harris, the biblical story of the Tower of Babel was likely influenced by Etemenanki during the Babylonian captivity of the Hebrews. Nebuchadnezzar wrote that the original tower had been built in antiquity: "A former king built the Temple of the Seven Lights of the Earth, but he did not complete its head. Since a remote time, people had abandoned it, without order expressing their words. Since that time earthquakes and lightning had dispersed its sun-dried clay; the bricks of the casing had split, and the earth of the interior had been scattered in heaps." The seven lights were the seven classical planets, including the Moon and the Sun, which in the beliefs of the time orbited the Earth. In 2011, scholars discovered in the Schøyen Collection the oldest known representation of the Etemenanki. Carved on a black stone, "The Tower of Babel Stele" (as it is known) dates from 604 to 562 BCE, the time of Nebuchadnezzar II. The Greek historian Herodotus (c. 440 BCE) later wrote of this ziggurat, which he called the "Temple of Zeus Belus", giving an account of its vast dimensions. The already decayed Great Ziggurat of Babylon was finally destroyed by Alexander the Great in an attempt to rebuild it. He managed to move the tiles of the tower to another location, but his death stopped the reconstruction. 
Isaac Asimov speculated that the authors of Genesis were inspired by the existence of an apparently incomplete ziggurat at Babylon, and by the phonological similarity between Babylonian "Bab-ilu", meaning "gate of God", and the Hebrew word "balal", meaning "mixed", "confused", or "confounded". The Book of Jubilees contains one of the most detailed accounts found anywhere of the Tower. And they began to build, and in the fourth week they made brick with fire, and the bricks served them for stone, and the clay with which they cemented them together was asphalt which comes out of the sea, and out of the fountains of water in the land of Shinar. And they built it: forty and three years were they building it; its breadth was 203 bricks, and the height [of a brick] was the third of one; its height amounted to 5433 cubits and 2 palms, and [the extent of one wall was] thirteen stades [and of the other thirty stades]. (Jubilees 10:20–21, Charles' 1913 translation) In Pseudo-Philo, the direction for the building is ascribed not only to Nimrod, who is made prince of the Hamites, but also to Joktan, as prince of the Semites, and to Phenech son of Dodanim, as prince of the Japhetites. Twelve men are arrested for refusing to bring bricks, including Abraham, Lot, Nahor, and several sons of Joktan. However, Joktan finally saves the twelve from the wrath of the other two princes. The Jewish-Roman historian Flavius Josephus, in his "Antiquities of the Jews" (c. 94 CE), recounted history as found in the Hebrew Bible and mentioned the Tower of Babel. He wrote that it was Nimrod who had the tower built and that Nimrod was a tyrant who tried to turn the people away from God. In this account, God confused the people rather than destroying them because annihilation with a Flood had not taught them to be godly. Now it was Nimrod who excited them to such an affront and contempt of God. He was the grandson of Ham, the son of Noah, a bold man, and of great strength of hand. 
He persuaded them not to ascribe it to God as if it were through his means they were happy, but to believe that it was their own courage which procured that happiness. He also gradually changed the government into tyranny, seeing no other way of turning men from the fear of God, but to bring them into a constant dependence on his power... Now the multitude were very ready to follow the determination of Nimrod and to esteem it a piece of cowardice to submit to God; and they built a tower, neither sparing any pains, nor being in any degree negligent about the work: and, by reason of the multitude of hands employed in it, it grew very high, sooner than any one could expect; but the thickness of it was so great, and it was so strongly built, that thereby its great height seemed, upon the view, to be less than it really was. It was built of burnt brick, cemented together with mortar, made of bitumen, that it might not be liable to admit water. When God saw that they acted so madly, he did not resolve to destroy them utterly, since they were not grown wiser by the destruction of the former sinners [in the Flood]; but he caused a tumult among them, by producing in them diverse languages, and causing that, through the multitude of those languages, they should not be able to understand one another. The place wherein they built the tower is now called Babylon, because of the confusion of that language which they readily understood before; for the Hebrews mean by the word Babel, confusion. The Sibyl also makes mention of this tower, and of the confusion of the language, when she says thus:—"When all men were of one language, some of them built a high tower, as if they would thereby ascend up to heaven; but the gods sent storms of wind and overthrew the tower, and gave everyone a peculiar language; and for this reason it was that the city was called Babylon." Third Apocalypse of Baruch (or 3 Baruch, c. 
2nd century), one of the pseudepigrapha, describes the just rewards of sinners and the righteous in the afterlife. Among the sinners are those who instigated the Tower of Babel. In the account, Baruch is first taken (in a vision) to see the resting place of the souls of "those who built the tower of strife against God, and the Lord banished them." Next he is shown another place, and there, occupying the form of dogs, "Those who gave counsel to build the tower, for they whom thou seest drove forth multitudes of both men and women, to make bricks; among whom, a woman making bricks was not allowed to be released in the hour of child-birth, but brought forth while she was making bricks, and carried her child in her apron, and continued to make bricks. And the Lord appeared to them and confused their speech, when they had built the tower to the height of four hundred and sixty-three cubits. And they took a gimlet, and sought to pierce the heavens, saying, Let us see (whether) the heaven is made of clay, or of brass, or of iron. When God saw this He did not permit them, but smote them with blindness and confusion of speech, and rendered them as thou seest." (Greek Apocalypse of Baruch, 3:5–8) Rabbinic literature offers many different accounts of other causes for building the Tower of Babel, and of the intentions of its builders. According to one midrash, the builders of the Tower, called "the generation of secession" in the Jewish sources, said: "God has no right to choose the upper world for Himself, and to leave the lower world to us; therefore we will build us a tower, with an idol on the top holding a sword, so that it may appear as if it intended to war with God" (Gen. R. xxxviii. 7; Tan., ed. Buber, Noah, xxvii. et seq.). The building of the Tower was meant to bid defiance not only to God, but also to Abraham, who exhorted the builders to reverence. 
The passage mentions that the builders spoke sharp words against God, saying that once every 1,656 years, heaven tottered so that the water poured down upon the earth, therefore they would support it by columns that there might not be another deluge (Gen. R. l.c.; Tan. l.c.; similarly Josephus, "Ant." i. 4, § 2). Some among that generation even wanted to war against God in heaven (Talmud Sanhedrin 109a). They were encouraged in this undertaking by the notion that arrows that they shot into the sky fell back dripping with blood, so that the people really believed that they could wage war against the inhabitants of the heavens (Sefer ha-Yashar, Chapter 9:12–36). According to Josephus and Midrash Pirke R. El. xxiv., it was mainly Nimrod who persuaded his contemporaries to build the Tower, while other rabbinical sources assert, on the contrary, that Nimrod separated from the builders. According to another midrashic account, one third of the Tower builders were punished by being transformed into semi-demonic creatures and banished into three parallel dimensions, inhabited now by their descendants. Although not mentioned by name, the Quran has a story with similarities to the biblical story of the Tower of Babel, although set in the Egypt of Moses: Pharaoh asks Haman to build him a stone (or clay) tower so that he can mount up to heaven and confront the God of Moses. Another story in Sura 2:102 mentions the name of Babil, but tells of when the two angels Harut and Marut taught magic to some people in Babylon and warned them that magic is a sin and that their teaching them magic is a test of faith. A tale about Babil appears more fully in the writings of Yaqut (i, 448 f.) and the "" (xiii. 72), but without the tower: mankind were swept together by winds into the plain that was afterward called "Babil", where they were assigned their separate languages by God, and were then scattered again in the same way. 
In the "History of the Prophets and Kings" by the 9th-century Muslim theologian al-Tabari, a fuller version is given: Nimrod has the tower built in Babil, God destroys it, and the language of mankind, formerly Syriac, is then confused into 72 languages. Another Muslim historian of the 13th century, Abu al-Fida, relates the same story, adding that the patriarch Eber (an ancestor of Abraham) was allowed to keep the original tongue, Hebrew in this case, because he would not partake in the building. Although variations similar to the biblical narrative of the Tower of Babel exist within Islamic tradition, the central theme of God separating humankind on the basis of language is alien to Islam according to the author Yahiya Emerick. In Islamic belief, he argues, God created nations to know each other and not to be separated. In the Book of Mormon, a man named Jared and his family ask God that their language not be confounded at the time of the Tower of Babel. Because of their prayers, God preserves their language and leads them to the Valley of Nimrod. From there, they travel across the sea to the Americas. The Church of Jesus Christ of Latter-day Saints teaches that the Tower of Babel story is historical fact. "Although there are many in our day who consider the accounts of the Flood and tower of Babel to be fiction, Latter-day Saints affirm their reality." The confusion of tongues ("confusio linguarum") is the origin myth for the fragmentation of human languages described in Genesis 11:1–9, as a result of the construction of the Tower of Babel. Prior to this event, humanity was stated to speak a single language. The preceding chapter (Genesis 10) states that the descendants of Japheth, Gomer, and Javan dispersed "with their own tongues," creating an apparent contradiction. Scholars have been debating or explaining this apparent contradiction for centuries. 
During the Middle Ages, the Hebrew language was widely considered the language used by God to address Adam in Paradise, and by Adam as lawgiver (the Adamic language) by various Jewish, Christian, and Muslim scholastics. Dante Alighieri addresses the topic in his "De vulgari eloquentia" (1302–1305). He argues that the Adamic language is of divine origin and therefore unchangeable. He also notes that according to Genesis, the first speech act is due to Eve, addressing the serpent, and not to Adam. In his "Divine Comedy" (c. 1308–1320), however, Dante changes his view to another that treats the Adamic language as the product of Adam. This had the consequence that it could no longer be regarded as immutable, and hence Hebrew could not be regarded as identical with the language of Paradise. Dante concludes ("Paradiso" XXVI) that Hebrew is a derivative of the language of Adam. In particular, the chief Hebrew name for God in scholastic tradition, "El", must be derived from a different Adamic name for God, which Dante gives as "I". Before the acceptance of the Indo-European language family, these languages were considered to be "Japhetite" by some authors (e.g., Rasmus Rask in 1815; see Indo-European studies). Beginning in Renaissance Europe, priority over Hebrew was claimed for the alleged Japhetic languages, which were supposedly never corrupted because their speakers had not participated in the construction of the Tower of Babel. Among the candidates for a living descendant of the Adamic language were: Gaelic (see "Auraicept na n-Éces"); Tuscan (Giovanni Battista Gelli, 1542, Piero Francesco Giambullari, 1564); Dutch (Goropius Becanus, 1569, Abraham Mylius, 1612); Swedish (Olaus Rudbeck, 1675); German (Georg Philipp Harsdörffer, 1641, Schottel, 1641). The Swedish physician Andreas Kempe wrote a satirical tract in 1688, where he made fun of the contest between the European nationalists to claim their native tongue as the Adamic language. 
Caricaturing the attempts by the Swede Olaus Rudbeck to pronounce Swedish the original language of mankind, Kempe wrote a scathing parody where Adam spoke Danish, God spoke Swedish, and the serpent French. The primacy of Hebrew was still defended by some authors until the emergence of modern linguistics in the second half of the 18th century, e.g. by (1648–1705) in "A philosophicall essay for the reunion of the languages, or, the art of knowing all by the mastery of one" (1675) and by Gottfried Hensel (1687–1767) in his "Synopsis Universae Philologiae" (1741). For a long time, historical linguistics wrestled with the idea of a single original language. In the Middle Ages, and down to the 17th century, attempts were made to identify a living descendant of the Adamic language. The literal belief that the world's linguistic variety originated with the tower of Babel is pseudolinguistics, and is contrary to the known facts about the origin and history of languages. In the Biblical introduction of the Tower of Babel account, in Genesis 11:1, it is said that everyone on Earth spoke the same language, but this is inconsistent with the Biblical description of the post-Noahic world described in Genesis 10, where it is said that the descendants of Shem, Ham, and Japheth gave rise to different nations, each with their own language. There have also been a number of traditions around the world that describe a divine confusion of the one original language into several, albeit without any tower. Aside from the Ancient Greek myth that Hermes confused the languages, causing Zeus to give his throne to Phoroneus, Frazer specifically mentions such accounts among the Wasania of Kenya, the Kacha Naga people of Assam, the inhabitants of Encounter Bay in Australia, the Maidu of California, the Tlingit of Alaska, and the K'iche' Maya of Guatemala. The Estonian myth of "the Cooking of Languages" has also been compared. 
There are several mediaeval historiographic accounts that attempt to make an enumeration of the languages scattered at the Tower of Babel. Because a count of all the descendants of Noah listed by name in chapter 10 of Genesis (LXX) provides 15 names for Japheth's descendants, 30 for Ham's, and 27 for Shem's, these figures became established as the 72 languages resulting from the confusion at Babel—although the exact listing of these languages changed over time. (The LXX Bible has two additional names, Elisa and Cainan, not found in the Masoretic text of this chapter, so early rabbinic traditions, such as the "Mishna", speak instead of "70 languages".) Some of the earliest sources for 72 (sometimes 73) languages are the 2nd-century Christian writers Clement of Alexandria ("Stromata" I, 21) and Hippolytus of Rome ("On the Psalms" 9); it is repeated in the Syriac book "Cave of Treasures" (c. 350 CE), Epiphanius of Salamis' "Panarion" (c. 375) and St. Augustine's "The City of God" 16.6 (c. 410). The chronicles attributed to Hippolytus (c. 234) contain one of the first attempts to list each of the 72 peoples who were believed to have spoken these languages. Isidore of Seville in his "Etymologiae" (c. 600) mentions the number of 72; however, his list of names from the Bible drops the sons of Joktan and substitutes the sons of Abraham and Lot, resulting in only about 56 names total; he then appends a list of some of the nations known in his own day, such as the Longobards and the Franks. This listing was to prove quite influential on later accounts that made the Lombards and Franks themselves into descendants of eponymous grandsons of Japheth, e.g. the "Historia Brittonum" (c. 833), "The Meadows of Gold" by al Masudi (c. 947) and "Book of Roads and Kingdoms" by al-Bakri (1068), the 11th-century "Lebor Gabála Érenn", and the midrashic compilations "Yosippon" (c. 950), "Chronicles of Jerahmeel", and "Sefer haYashar". 
Other sources that mention 72 (or 70) languages scattered from Babel are the Old Irish poem "Cu cen mathair" by Luccreth moccu Chiara (c. 600); the Irish monastic work "Auraicept na n-Éces"; "History of the Prophets and Kings" by the Persian historian Muhammad ibn Jarir al-Tabari (c. 915); the Anglo-Saxon dialogue "Solomon and Saturn"; the Russian "Primary Chronicle" (c. 1113); the Jewish Kabbalistic work "Bahir" (1174); the "Prose Edda" of Snorri Sturluson (c. 1200); the Syriac "Book of the Bee" (c. 1221); the "Gesta Hunnorum et Hungarorum" (c. 1284; mentions 22 for Shem, 31 for Ham and 17 for Japheth for a total of 70); Villani's 1300 account; and the rabbinic "Midrash ha-Gadol" (14th century). Villani adds that it "was begun 700 years after the Flood, and there were 2,354 years from the beginning of the world to the confusion of the Tower of Babel. And we find that they were 107 years working at it; and men lived long in those times". According to the "Gesta Hunnorum et Hungarorum", however, the project was begun only 200 years following the Deluge. The tradition of 72 languages persisted into later times. Both José de Acosta in his 1576 treatise "De procuranda indorum salute", and António Vieira a century later in his "Sermão da Epifania", expressed amazement at how much this 'number of tongues' could be surpassed, there being hundreds of mutually unintelligible languages indigenous only to Peru and Brazil. The Book of Genesis does not mention how tall the tower was. The phrase used to describe the tower, "its top in the sky" (v.4), was an idiom for impressive height; rather than implying arrogance, this was simply a cliché for height. The Book of Jubilees mentions the tower's height as being 5,433 cubits and 2 palms, about three times the height of the Burj Khalifa, or roughly 1.6 miles high. 
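For scale, the Jubilees figure can be converted under an assumed cubit length. (Ancient cubits varied roughly between 44 and 53 cm; the 18-inch "common" cubit used below is an assumption for illustration, not a value given in the source.)

```python
# Convert the Book of Jubilees height (5,433 cubits, ignoring the 2 palms)
# to meters, using an assumed "common" cubit of 18 in (0.4572 m).
CUBIT_M = 0.4572
BURJ_KHALIFA_M = 828        # architectural height of the Burj Khalifa, meters
METERS_PER_MILE = 1609.344

height_m = 5433 * CUBIT_M
print(round(height_m))                       # ~2,484 m
print(round(height_m / BURJ_KHALIFA_M, 1))   # ~3.0, i.e. about three times as tall
print(round(height_m / METERS_PER_MILE, 2))  # ~1.54 miles
```

Under this assumed cubit the result lands at about 2.5 km, consistent with the "about three times the height of the Burj Khalifa" comparison above.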
The Third Apocalypse of Baruch mentions that the 'tower of strife' reached a height of 463 cubits, taller than any structure built in human history until the construction of the Eiffel Tower in 1889. Gregory of Tours, writing c. 594, quotes the earlier historian Orosius (c. 417) as saying the tower was "laid out foursquare on a very level plain. Its wall, made of baked brick cemented with pitch, is fifty cubits wide, two hundred high, and four hundred and seventy stades in circumference. A stade was an ancient Greek unit of length, based on the circumference of a typical sports stadium of the time. Twenty-five gates are situated on each side, which make in all one hundred. The doors of these gates, which are of wonderful size, are cast in bronze. The same historian tells many other tales of this city, and says: 'Although such was the glory of its building still it was conquered and destroyed.'" A typical medieval account is given by Giovanni Villani (1300): He relates that "it measured eighty miles [130 km] round, and it was already 4,000 paces high and 1,000 paces thick, and each pace is three of our feet." The 14th-century traveler John Mandeville also included an account of the tower and reported that its height had been 64 furlongs, according to the local inhabitants. The 17th-century historian Verstegan provides yet another figure – quoting Isidore, he says that the tower was 5,164 paces high, and quoting Josephus that the tower was wider than it was high, more like a mountain than a tower. He also quotes unnamed authors who say that the spiral path was so wide that it contained lodgings for workers and animals, and other authors who claim that the path was wide enough to have fields for growing grain for the animals used in the construction. In his book, "Structures: Or Why Things Don't Fall Down" (Pelican 1978–1984), Professor J.E. Gordon considers the height of the Tower of Babel. 
He wrote, "brick and stone weigh about 120 lb per cubic foot (2,000 kg per cubic metre) and the crushing strength of these materials is generally rather better than 6,000 lbs per square inch or 40 mega-pascals. Elementary arithmetic shows that a tower with parallel walls could have been built to a height of before the bricks at the bottom were crushed. However, by making the walls taper towards the top they ... could well have been built to a height where the men of Shinnar would run short of oxygen and had difficulty in breathing before the brick walls crushed beneath their own dead weight." Pieter Brueghel's influential portrayal is based on the Colosseum in Rome, while later conical depictions of the tower (as depicted in Doré's illustration) resemble much later Muslim towers observed by 19th-century explorers in the area, notably the Minaret of Samarra. M.C. Escher depicts a more stylized geometrical structure in his woodcut representing the story. The composer Anton Rubinstein wrote an opera based on the story "Der Thurm zu Babel". American choreographer Adam Darius staged a multilingual theatrical interpretation of "The Tower of Babel" in 1993 at the ICA in London. Fritz Lang's 1927 film "Metropolis", in a flashback, plays upon themes of lack of communication between the designers of the tower and the workers who are constructing it. The short scene states how the words used to glorify the tower's construction by its designers took on totally different, oppressive meanings to the workers. This led to its destruction as they rose up against the designers because of the insufferable working conditions. The appearance of the tower was modeled after Brueghel's 1563 painting. The political philosopher Michael Oakeshott surveyed historic variations of the Tower of Babel in different cultures and produced a modern retelling of his own in his 1983 book, "On History". 
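Gordon's "elementary arithmetic" can be reproduced directly: a parallel-walled tower fails when the pressure at its base, density × g × height, reaches the crushing strength of the brick. A quick check using the figures he quotes:

```python
# Maximum height of a parallel-walled brick tower before the base crushes:
# h = crushing_strength / (density * g), with Gordon's quoted figures.
g = 9.81           # gravitational acceleration, m/s^2
density = 2000     # kg/m^3 (about 120 lb per cubic foot)
strength = 40e6    # Pa (about 6,000 lb per square inch)

h = strength / (density * g)
print(round(h))    # ~2,039 m, i.e. roughly 2 km
```

Tapering the walls, as Gordon notes, raises this limit considerably, since the load at the base grows more slowly than the height.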
In his retelling, Oakeshott expresses disdain for human willingness to sacrifice individuality, culture, and quality of life for grand collective projects. He attributes this behavior to fascination with novelty, persistent dissatisfaction, greed, and lack of self-reflection. A.S. Byatt's novel "Babel Tower" (1996) is about the question "whether language can be shared, or, if that turns out to be illusory, how individuals, in talking to each other, fail to understand each other". The progressive band Soul Secret wrote a concept album called "BABEL", based on a modernized version of the myth. Science fiction writer Ted Chiang wrote a story called "Tower of Babylon" that imagined a miner climbing the tower all the way to the top, where he meets the vault of heaven. Fantasy novelist Josiah Bancroft has a series "The Books of Babel", which is to conclude with book IV in 2020. In the 1990 Japanese television anime "", the Tower of Babel is used by the Atlanteans as an interstellar communication device. Later in the series, the Neo Atlanteans rebuild the Tower of Babel and use its communication beam as a weapon of mass destruction. Both the original and the rebuilt tower resemble the painting "Tower of Babel" by artist Pieter Bruegel the Elder. In the game called "" the last stages of the game and the final boss fight occur in the tower. In the web-based game Forge of Empires the Tower of Babel is an available "Great Building".
https://en.wikipedia.org/wiki?curid=31235
Thomas Vinterberg Thomas Vinterberg (; born 19 May 1969) is a Danish film director who, along with Lars von Trier, co-founded the Dogme 95 movement in filmmaking, which established rules for simplifying movie production. He is best known for the films "The Celebration" (1998), "Submarino" (2010), "The Hunt" (2012) and "Far from the Madding Crowd" (2015). Vinterberg was born in Frederiksberg, Denmark. In 1993, he graduated from the National Film School of Denmark with "Last Round" ("Sidste Omgang"), which won the jury and producers' awards at the Munich International Festival of Film Schools, and First Prize at Tel Aviv. That year Vinterberg made his first TV drama for DR TV and his short fiction film "The Boy Who Walked Backwards", produced by Birgitte Hald at Nimbus Film. This film has won awards and accolades all over the world, including Nordic Panorama in Iceland, the International Short Film Festival in Clermont-Ferrand, and the Toronto International Film Festival. His first feature film was "The Biggest Heroes" ("De Største Helte"), a road movie that received acclaim in his native Denmark. In 1995, Vinterberg formed the Dogme 95 movement with Lars von Trier, Kristian Levring, and Søren Kragh-Jacobsen. Following that dogma in 1998, he conceived, wrote and directed (and also had a small acting role in) the first of the Dogme movies, "The Celebration" ("Festen"). As per the rules of the Dogme manifesto, he did not take a directorial credit. However, he and the film won numerous nominations and awards, including the Jury Prize at the 1998 Cannes Film Festival. In 2003 he directed the apocalyptic science fiction love story "It's All About Love", a movie he wrote, directed and produced himself over a period of five years. This movie was entirely in English and featured, among others, Joaquin Phoenix, Claire Danes, and Sean Penn. The movie did not do well, as critics and audiences found it idiosyncratic and somewhat incomprehensible. 
His next film, the English-language "Dear Wendy" (2005), scripted by Lars von Trier, also flopped, even in his native Denmark where it sold only 14,521 tickets. However he won the Silver George for Best Director at the 27th Moscow International Film Festival. Vinterberg then tried to retrace his roots with a smaller Danish-language production, "En mand kommer hjem" (2007), which also flopped, selling only 31,232 tickets. On 1 August 2008, he directed the music video for "The Day That Never Comes", the first single off Metallica's album "Death Magnetic". His 2010 film "Submarino" was nominated for the Golden Bear at the 60th Berlin International Film Festival. In 2012, his film "The Hunt" competed for the Palme d'Or at the 2012 Cannes Film Festival and was shortlisted for the Best Foreign Language Film award at the 86th Academy Awards. In 2015, he directed "Far from the Madding Crowd", an adaptation of the acclaimed Thomas Hardy novel, starring Carey Mulligan, Matthias Schoenaerts, Michael Sheen and Tom Sturridge. Vinterberg reunited with Matthias Schoenaerts in "Kursk", a film about the Kursk submarine disaster of 2000. In April 2016, the French government appointed Vinterberg a "Chevalier" (Knight) of the Ordre des Arts et des Lettres.
https://en.wikipedia.org/wiki?curid=31236
Tomahawk (missile) The Tomahawk () Land Attack Missile (TLAM) is a long-range, all-weather, jet-powered, subsonic cruise missile that is primarily used by the United States Navy and Royal Navy in ship- and submarine-based land-attack operations. It was designed and initially produced in the 1970s by General Dynamics as a medium- to long-range, low-altitude missile that could be launched from a surface platform. The missile's modular design accommodates a wide variety of warhead, guidance, and range capabilities. At least six variants and multiple upgraded versions have been introduced since then, including air-, sub-, and ground-launched variants and conventional and nuclear-armed ones. As of 2019, only non-nuclear, sea-launched variants assembled by Raytheon are currently in service. The U.S. Navy launched the BGM-109 Tomahawk project, hiring James H. Walker and a team of scientists at the Applied Physics Laboratory near Laurel, Maryland. Since then, it has been upgraded several times with guidance systems for precision navigation. In 1992–1994, McDonnell Douglas Corporation was the sole supplier of Tomahawk Missiles and produced Block II and Block III Tomahawk missiles and remanufactured many Tomahawks to Block III specifications. In 1994, Hughes outbid McDonnell Douglas Aerospace to become the sole supplier of Tomahawk missiles. It is now manufactured by Raytheon. In 2016, the U.S. Department of Defense purchased 149 Tomahawk Block IV missiles for $202.3 million. The Tomahawk was most recently used by the U.S. Navy against Syrian chemical weapons facilities when 66 were launched in the 2018 missile strikes against Syria. There have been several variants of the missile, including: Ground-launched cruise missiles (GLCM) and their truck-like launch vehicles were employed at bases in Europe; they were withdrawn from service to comply with the 1987 Intermediate-Range Nuclear Forces Treaty. 
Many of the anti-ship versions were converted into TLAMs at the end of the Cold War. The Block III TLAMs that entered service in 1993 can fly 3 percent farther using their new turbofan engines and use Global Positioning System (GPS) receivers to strike more precisely. Block III TLAM-Cs retain the Digital Scene Matching Area Correlation (DSMAC) II navigation system, allowing three kinds of navigation: GPS-only missions, which allow for rapid mission planning at some cost in accuracy; DSMAC-only missions, which take longer to plan but deliver somewhat better terminal accuracy; and GPS-aided missions that combine DSMAC II and GPS navigation for the greatest accuracy. Block IV TLAMs have an improved turbofan engine that allows them to launch more quickly, get better fuel economy, and change speeds in flight. The Block IV TLAMs can loiter better and have a real-time targeting system for striking fleeing targets and electro-optical sensors that allow real-time battle damage assessment. The Block IVs can be given a new target in flight and can transmit an image, via satcom, immediately before impact to help determine whether the missile is on target and the likely damage from the attack. A major improvement to the Tomahawk is network-centric warfare capabilities, using data from multiple sensors (aircraft, UAVs, satellites, foot soldiers, tanks, ships) to find its target. It will also be able to send data from its sensors to these platforms. "Tomahawk Block II" variants were all tested between January 1981 and October 1983. Deployed in 1984, some of the improvements included an improved booster rocket, a cruise missile radar altimeter, and navigation through the Digital Scene Matching Area Correlator (DSMAC). DSMAC was a rudimentary but highly accurate image-correlation system that allowed the low-power onboard computers of the era to navigate and precisely aim at objectives using cameras onboard the missile. 
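The core idea behind DSMAC's scene matching can be shown in miniature: reduce imagery to a binary contrast map, then slide the observed map over a stored reference and keep the offset where the most cells agree. The sketch below is an illustrative toy, not the actual DSMAC implementation; the function names and data are invented for the example.

```python
# Illustrative sketch of contrast-map matching (not real DSMAC code).
def binarize(image, threshold):
    """Reduce a grayscale image (list of rows) to a 0/1 contrast map."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def best_offset(reference, scene):
    """Slide the scene map over the reference; return the (row, col) offset
    where the number of agreeing cells is highest."""
    rh, rw = len(reference), len(reference[0])
    sh, sw = len(scene), len(scene[0])
    best_score, best_pos = -1, (0, 0)
    for r in range(rh - sh + 1):
        for c in range(rw - sw + 1):
            score = sum(
                reference[r + i][c + j] == scene[i][j]
                for i in range(sh) for j in range(sw)
            )
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Tiny reference map (bright/dark terrain) and an observed patch of it.
reference = binarize(
    [[9, 9, 1, 1], [9, 9, 1, 1], [1, 1, 9, 9], [1, 1, 9, 9]], threshold=5)
scene = binarize([[9, 9], [1, 1]], threshold=5)
print(best_offset(reference, scene))  # prints (1, 0)
```

Working on thresholded contrast maps rather than raw pixels is what made the scheme tolerant of seasonal and lighting changes, and cheap enough for the era's onboard computers.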
With its ability to visually identify and aim directly at a target, it was more accurate than weapons using estimated GPS coordinates. Due to the very limited computer power of the day, DSMAC did not evaluate raw images directly; instead it computed contrast maps, combined several of them in a buffer, and compared the average of the combined images against the reference data held in its small memory system. The data for the flight path was very low resolution in order to free up memory for high-resolution data about the target area. The guidance data was computed by a mainframe computer, which took spy satellite photos and estimated what the terrain would look like during low-level flight. Since this data would not match the real terrain exactly, and since terrain changes seasonally and with changes in light quality, DSMAC would filter out differences between maps and use the remaining similar sections to find its location regardless of changes in how the ground appeared. It also had an extremely bright strobe light it could use to illuminate the ground for fractions of a second in order to find its position at night, and was able to take the resulting difference in ground appearance into account. "Tomahawk Block III", introduced in 1993, added time-of-arrival control, improved accuracy for the Digital Scene Matching Area Correlator (DSMAC), jam-resistant GPS, a smaller, lighter WDU-36 warhead, engine improvements, and an extended range. The "Tactical Tomahawk Weapons Control System (TTWCS)" takes advantage of a loitering feature in the missile's flight path and allows commanders to redirect the missile to an alternative target, if required. It can be reprogrammed in-flight to attack predesignated targets with GPS coordinates stored in its memory or any other GPS coordinates. Also, the missile can send data about its status back to the commander. It entered service with the US Navy in late 2004. 
The Tactical Tomahawk Weapons Control System (TTWCS) added the capability for limited mission planning on board the firing unit (FRU). "Tomahawk Block IV", introduced in 2006, adds the strike controller, which can change the missile in flight to one of 15 preprogrammed alternate targets or redirect it to a new target. This targeting flexibility includes the capability to loiter over the battlefield awaiting a more critical target. The missile can also transmit battle damage indication imagery and missile health and status messages via the two-way satellite data link. Firing platforms now have the capability to plan and execute GPS-only missions. Block IV also has an improved anti-jam GPS receiver for enhanced mission performance. Block IV includes the Tomahawk Weapons Control System (TTWCS) and the Tomahawk Command and Control System (TC2S). On 16 August 2010, the Navy completed the first live test of the Joint Multi-Effects Warhead System (JMEWS), a new warhead designed to give the Tomahawk the same blast-fragmentation capabilities while introducing enhanced penetration capabilities in a single warhead. In the static test, the warhead detonated and created a hole large enough for the follow-through element to completely penetrate the concrete target. In February 2014, U.S. Central Command sponsored development and testing of the JMEWS, analyzing the ability of the programmable warhead to integrate onto the Block IV Tomahawk, giving the missile bunker buster effects to better penetrate hardened structures. In 2012, the USN studied applying Advanced Anti-Radiation Guided Missile (AARGM) technology into the Tactical Tomahawk. In 2014, Raytheon began testing Block IV improvements to attack sea and moving land targets. The new passive radar seeker will pick up the electromagnetic radar signature of a target and follow it, and actively send out a signal to bounce off potential targets to discriminate their legitimacy before impact. 
Mounting the multi-mode sensor on the missile's nose would remove fuel space, but company officials believe the Navy would be willing to give up space for the sensor's new technologies. The previous Tomahawk Anti-Ship Missile, retired over a decade earlier, was equipped with inertial guidance and the seeker of the Harpoon missile. There was concern about its ability to clearly discriminate between targets at long distance, since at the time Navy sensors did not have as much range as the missile itself; the new seeker's passive detection and millimeter-wave active radar homing would be more reliable. Raytheon estimates adding the new seeker would cost $250,000 per missile. Other upgrades include a sea-skimming flight path. The first Block IV TLAMs modified with a maritime attack capability will enter service in 2021. A supersonic version of the Tomahawk is under consideration for development with a ramjet to increase its speed to Mach 3. A limiting factor is the dimensions of shipboard launch tubes: instead of modifying every ship able to carry cruise missiles, the ramjet-powered Tomahawk would still have to fit within a 21-inch-diameter, 20-foot-long tube. In October 2015, Raytheon announced the Tomahawk had demonstrated new capabilities in a test launch, using its onboard camera to take a reconnaissance photo and transmit it to fleet headquarters. It then entered a loitering pattern until given new targeting coordinates to strike. By January 2016, Los Alamos National Laboratory was working on a project to turn unburned fuel left over when a Tomahawk reaches its target into an additional explosive force. To do this, the missile's JP-10 fuel is dispersed as a fuel-air explosive, combining with oxygen in the air and burning rapidly. 
The thermobaric explosion of the burning fuel acts, in effect, as an additional warhead and can even be more powerful than the main warhead itself when sufficient fuel remains, as in the case of a short-range target. The Tomahawk Block V is planned to go into production in 2020: the Block Va is the Maritime Strike Tomahawk (MST), which allows the missile to engage a moving target at sea, and the Block Vb is outfitted with the JMEWS warhead for hard-target penetration. All Block IV Tomahawks will be converted to Block V standard, while the remaining Block III missiles will be retired and demilitarized. In 2020, Los Alamos National Laboratory reported that it would use corn-based ethanol to produce domestic fuel for Tomahawk missiles, which, unlike petroleum-based JP-10, does not require harsh acids to manufacture. Each missile is stored and launched from a pressurized canister that protects it during transportation and storage, and also serves as a launch tube. These canisters were racked in Armored Box Launchers (ABL), which were installed on the four reactivated "Iowa"-class battleships , , , and . The ABLs were also installed on eight , the four , and the nuclear cruiser . These canisters are also used in vertical launching systems (VLS) in other surface ships, capsule launch systems (CLS) in the later and s, and in submarines' torpedo tubes. All ABL-equipped ships have been decommissioned. For submarine-launched missiles (called UGM-109s), after being ejected by gas pressure (vertically via the VLS) or by water impulse (horizontally via the torpedo tube), a solid-fuel booster is ignited to propel the missile and guide it out of the water. After achieving flight, the missile's wings are unfolded for lift, the airscoop is exposed and the turbofan engine is employed for cruise flight. Over water, the Tomahawk uses inertial guidance or GPS to follow a preset course; once over land, the missile's guidance system is aided by terrain contour matching (TERCOM). 
Terminal guidance is provided by the Digital Scene Matching Area Correlation (DSMAC) system or GPS, producing a claimed circular error probable of about 10 meters. The Tomahawk Weapon System consists of the missile, the Theater Mission Planning Center (TMPC)/Afloat Planning System, and either the Tomahawk Weapon Control System (on surface ships) or the Combat Control System (for submarines). Several versions of control systems have been used. On August 18, 2019, the United States Navy conducted a test flight of a Tomahawk missile launched from a ground-based version of the Mark 41 Vertical Launch System. It was the United States' first acknowledged launch of a missile that would have violated the 1987 Intermediate-Range Nuclear Forces Treaty, which the Trump administration withdrew from on August 2. The TLAM-D contains 166 sub-munitions in 24 canisters: 22 canisters of seven each, and two canisters of six each to conform to the dimensions of the airframe. The sub-munitions are the same type of Combined Effects Munition bomblet used in large quantities by the U.S. Air Force with the CBU-87 Combined Effects Munition. The sub-munitions canisters are dispensed two at a time, one per side. The missile can perform up to five separate target segments, which enables it to attack multiple targets. However, in order to achieve a sufficient density of coverage, typically all 24 canisters are dispensed sequentially from back to front. TERCOM – Terrain Contour Matching. A digital representation of an area of terrain is mapped based on digital terrain elevation data or stereo imagery. This map is then inserted into a TLAM mission, which is then loaded onto the missile. When the missile is in flight, it compares the stored map data with radar altimeter data collected as the missile overflies the map. Based on the comparison results, the missile's inertial navigation system is updated and the missile corrects its course. 
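The TERCOM comparison described above can be illustrated with a one-dimensional toy: a stored elevation map is searched for the offset that best matches a noisy altimeter trace. This is a hedged sketch only; the real system works on two-dimensional terrain maps with extensive filtering:

```python
# Minimal TERCOM-style position fix: compare a radar-altimeter elevation
# profile measured along track against a stored terrain profile, and take
# the offset with the smallest squared error as the along-track position.

def tercom_fix(stored, measured):
    """Return the offset into `stored` that best matches `measured`."""
    best, best_err = 0, float("inf")
    for off in range(len(stored) - len(measured) + 1):
        err = sum((stored[off + i] - m) ** 2
                  for i, m in enumerate(measured))
        if err < best_err:
            best_err, best = err, off
    return best

# The measured trace [79, 76, 31] best matches stored[3:6] = [80, 75, 30]:
print(tercom_fix([10, 12, 50, 80, 75, 30, 20, 18], [79, 76, 31]))  # 3
```

The resulting offset is what the real system would feed back as a correction to the inertial navigation solution.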
TERCOM was based on, and was a significant improvement on, "Fingerprint," a technology developed in 1964 for the SLAM. DSMAC – Digital Scene Matching Area Correlation. A digitized image of an area is mapped and then inserted into a TLAM mission. During the flight the missile verifies that the images it has stored correlate with the image it sees below itself. Based on the comparison results, the missile's inertial navigation system is updated and the missile corrects its course. In the 1991 Gulf War, 288 Tomahawks were launched, 12 from submarines and 276 from surface ships. The first salvo was fired by the destroyer USS "Paul F. Foster" on January 17, 1991. The attack submarines and followed. On 17 January 1993, 46 Tomahawks were fired at the Zafraniyah Nuclear Fabrication Facility outside Baghdad, in response to Iraq's refusal to cooperate with UN disarmament inspectors. One missile crashed into the side of the Al Rasheed Hotel, killing two civilians. On 26 June 1993, 23 Tomahawks were fired at the Iraqi Intelligence Service's command and control center. On 10 September 1995, launched 13 Tomahawk missiles from the central Adriatic Sea against a key air defense radio relay tower in Bosnian Serb territory during Operation Deliberate Force. On 3 September 1996, 44 ship-launched UGM-109 and B-52-launched AGM-86 cruise missiles were fired at air defense targets in southern Iraq. On 20 August 1998, 79 Tomahawk missiles were fired simultaneously at two targets in Afghanistan and Sudan in retaliation for the bombings of American embassies by Al-Qaeda. On 16 December 1998, 325 Tomahawk missiles were fired at key Iraqi targets during Operation Desert Fox. In early 1999, 218 Tomahawk missiles were fired by U.S. ships and a British submarine during Operation Allied Force against targets in the Federal Republic of Yugoslavia. In October 2001, about 50 Tomahawk missiles struck targets in Afghanistan in the opening hours of Operation Enduring Freedom. 
During the 2003 invasion of Iraq, more than 802 Tomahawk missiles were fired at key Iraqi targets. On 3 March 2008, two Tomahawk missiles were fired at a target in Somalia by a US vessel during the Dobley airstrike, reportedly in an attempt to kill Saleh Ali Saleh Nabhan, an al-Qaeda militant. On 17 December 2009, two Tomahawk missiles were fired at targets in Yemen. One TLAM-D struck an alleged Al-Qaeda training camp in al-Ma’jalah in al-Mahfad, a region of the Abyan governorate of Yemen. Amnesty International reported that 55 people were killed in the attack, including 41 civilians (21 children, 14 women, and six men). The US and Yemeni governments refused to confirm or deny involvement, but diplomatic cables released as part of the United States diplomatic cables leak later confirmed the missile was fired by a U.S. Navy ship. On 19 March 2011, 124 Tomahawk missiles were fired by U.S. and British forces (112 US, 12 British) against at least 20 Libyan targets around Tripoli and Misrata. As of 22 March 2011, 159 UGM-109s had been fired by US and UK ships against Libyan targets. On 23 September 2014, 47 Tomahawk missiles were fired by the United States from and , which were operating from international waters in the Red Sea and Persian Gulf, against ISIL targets in Syria in the vicinity of Raqqa, Deir ez-Zor, Al-Hasakah and Abu Kamal, and against Khorasan group targets in Syria west of Aleppo. On 13 October 2016, five Tomahawk cruise missiles were launched by at three radar sites in Yemen held by Houthi rebels in response to anti-ship missiles fired at US Navy ships the day before. On 6 April 2017, 59 Tomahawk missiles were launched from and , targeting Shayrat Airbase near Homs, in Syria. The strike was in response to a chemical weapons attack, an act allegedly carried out by Syrian President Bashar Al-Assad. U.S. 
Central Command stated in a press release that Tomahawk missiles hit "aircraft, hardened aircraft shelters, petroleum and logistical storage, ammunition supply bunkers, defense systems, and radars". Initial U.S. reports claimed "approximately 20 planes" were destroyed, and that 58 out of the 59 cruise missiles launched had "severely degraded or destroyed" their intended target. A later report by US Secretary of Defense James Mattis claimed that the strike destroyed about 20% of the Syrian government's operational aircraft. Syrian state-run media claimed that nine civilians, including four children, living in nearby villages were killed and another seven wounded after missiles fell on their homes, but the Pentagon said civilians were not targeted. According to satellite images, the runways and taxiways were undamaged, and combat flights from the attacked airbase resumed on 7 April, a few hours after the attack, although U.S. officials did not state that the runway was a target. An independent bomb damage assessment conducted by ImageSat International counted hits on 44 targets, with some targets being hit by more than one missile; these figures were determined using satellite images of the airbase 10 hours after the strike. However, the Russian defense ministry contends that the combat effectiveness of the attack was "extremely low"; only 23 missiles hit the base, destroying six aircraft, and it did not know where the other 36 landed. Russian television news, citing a Syrian source at the airfield, said that nine planes were destroyed by the strikes (5 Su-22M3s, 1 Su-22M4, and 3 MiG-23MLs) and that all the planes were thought to have been out of action at the time. Al-Masdar News reported that 15 fighter jets were damaged or destroyed and that the destruction of fuel tankers caused several explosions and a large fire. 
However, Lost Armour's online photographic database of vehicle losses in the War in Syria has images of 10 destroyed aircraft at Shayrat airbase. Some observers conclude that the Russian government—and therefore also the Syrian government—was warned, and that Syria had enough time to move most of the planes to another base. The Syrian Observatory for Human Rights said the strike damaged over a dozen hangars, a fuel depot, and an air defense base. On April 14, 2018, the US launched 66 Tomahawk cruise missiles at Syrian targets near Damascus and Homs, as part of the 2018 bombing of Damascus and Homs. The strikes were in retaliation for the alleged Douma chemical attack. The United States Department of Defense said Syria fired 40 defensive missiles at the allied weapons but did not hit any targets. The Russian military said that Syrian air defenses shot down 71 of the 103 missiles launched by the US and its allies. In 1995 the US agreed to sell 65 Tomahawks to the UK for torpedo launch from its nuclear attack submarines. The first missiles were acquired and test-fired in November 1998; all Royal Navy fleet submarines are now Tomahawk capable, including the "Astute"-class. The Kosovo War in 1999 saw the Swiftsure-class HMS "Splendid" become the first British submarine to fire the Tomahawk in combat. The UK subsequently bought 20 more Block III missiles to replenish stocks. The Royal Navy has since fired Tomahawks during the 2000s Afghanistan War, in Operation Telic as the British contribution to the 2003 Iraq War, and during Operation Ellamy in Libya in 2011. In April 2004, the UK and US governments reached an agreement for the British to buy 64 of the new generation of Tomahawk missile—the Block IV or TacTom missile. It entered service with the Royal Navy on 27 March 2008, three months ahead of schedule. 
In July 2014 the US approved the sale to the UK of a further 65 submarine-launched Block IVs at a cost of US$140m, including spares and support; the Block III missiles were on British books at £1.1m and the Block IV at £0.87m, including VAT. The Sylver Vertical Launching System on the new Type 45 destroyer is claimed by its manufacturers to have the capability to fire the Tomahawk, although the A50 launcher carried by the Type 45 is too short for the weapon (the longer A70 silo would be required). Nevertheless, the Type 45 has been designed with weight and space margin for a strike-length Mk41 or Sylver A70 silo to be retrofitted, allowing the Type 45 to use the TLAM Block IV if required. The new Type 26 frigates will have strike-length Mk41 VLS tubes. SYLVER user France is developing the MdCN, a version of the Storm Shadow/Scalp cruise missile that has a shorter range but a higher speed than the Tomahawk and can be launched from the SYLVER system. The Air Force is a former operator of the nuclear-armed version of the Tomahawk, the BGM-109G Gryphon. The Netherlands (2005) and Spain (2002 and 2005) were interested in acquiring the Tomahawk system, but the orders were later cancelled in 2007 and 2009 respectively. In 2009 the Congressional Commission on the Strategic Posture of the United States stated that Japan would be concerned if the TLAM-N were retired, but the government of Japan has denied that it had expressed any such view. The SLCM version of the Popeye was developed by Israel after the US Clinton administration refused an Israeli request in 2000 to purchase Tomahawk SLCMs because of international Missile Technology Control Regime proliferation rules. As of March 12, 2015, Poland has expressed interest in purchasing long-range Tomahawk missiles for its future submarines.
https://en.wikipedia.org/wiki?curid=31238
Trigun Both manga were adapted into an anime television series in 1998. Madhouse animated the TV series, which aired on TV Tokyo from April 1, 1998 to September 30, 1998, totaling 26 episodes. The show aired in the United States starting in 2003, as part of Cartoon Network's Adult Swim programming block. An animated feature film called "" was released in April 2010. "Trigun" revolves around a man known as "Vash the Stampede" and two "Bernardelli Insurance Society" employees, Meryl Stryfe and Milly Thompson, who follow him around in order to minimize the damage inevitably caused by his appearance. Most of the damage attributed to Vash is actually caused by bounty hunters in pursuit of the sixty billion double dollar bounty on Vash's head for the destruction of the city of July. However, he cannot remember the incident due to retrograde amnesia, being able to recall only fragments of the destroyed city and memories of his childhood. Throughout his travels, Vash tries to save lives using non-lethal force. He is occasionally joined by a priest, Nicholas D. Wolfwood, who, like Vash, is a superb gunfighter with a mysterious past. As the series progresses, more about Vash's past and the history of human civilization on the planet Gunsmoke is revealed. After leaving college, Yasuhiro Nightow had gone to work selling apartments for the housing corporation Sekisui House, but struggled to keep up with his manga-drawing hobby. Reassured by some successes, including a one-shot manga based on the popular video game franchise Samurai Spirits, he quit his job to draw full-time. With the help of a publisher friend, he submitted a Trigun story for the February 1996 issue of the Tokuma Shoten magazine "Shōnen Captain", and began regular serialization two months later in April. However, "Shōnen Captain" was canceled early in 1997, and when Nightow was approached by the magazine "Young King Ours", published by Shōnen Gahōsha, they were interested in him beginning a new work. 
Nightow, though, was troubled by the idea of leaving Trigun incomplete, and requested to be allowed to finish the series. The publishers were sympathetic, and the manga resumed in 1998 as . The story jumps forward two years with the start of Maximum, and takes on a slightly more serious tone, perhaps due to the switch from a "shōnen" to a "seinen" magazine. Despite this, Nightow has stated that the new title was purely down to the change of publishers, and that rather than being a sequel it should be seen as a continuation of the same series. The 14th tankōbon was published on February 27, 2008. Shōnen Gahōsha later bought the rights to the original three-volume manga series and reissued it as two enlarged volumes. In October 2003 the US publisher Dark Horse Comics released the expanded first volume translated into English by Digital Manga, keeping the original right-to-left format rather than mirroring the pages. "Trigun Maximum" followed quickly, and the entire 14-volume run was released over a five-year period from May 2004 to April 2009. Translations into French, German, Italian, Portuguese and Spanish have also been released. An anthology manga titled, featuring short stories written by several manga artists such as Boichi, Masakazu Ishiguru, Satoshi Mizukami, Ark Performance, Yusuke Takeyama, Yuga Takauchi and Akira Sagami, was released by Shonen Gahosha in Japan in December 2011 and in North America on March 6, 2013. Madhouse produced an anime series based on the manga, also titled "Trigun". Directed by Satoshi Nishimura, the series was broadcast on TV Tokyo from April 1 to September 30, 1998. It is licensed for DVD and Blu-ray in the United States by Funimation Entertainment, which re-released it on DVD on October 27, 2010. The show failed to garner a large audience in Japan during its original showing in 1998, but gained a substantial fan base following its United States premiere on Adult Swim in early 2003. 
Nightow has stated that due to the finality of the anime's ending, it is unlikely any continuation will be made. A "Trigun" film was originally announced in February 2008 to be released in 2009. The film, titled "Trigun: Badlands Rumble", opened in theaters in Japan on April 24, 2010, and was first shown to an American audience at Sakura-Con 2010 in Seattle, Washington, on April 2, 2010. At Anime Expo 2010, Funimation announced that they had licensed the film as they had the TV series and planned to release it into theaters. The film made its US television premiere on Saturday, December 28, 2013, on Adult Swim's Toonami block. Though the series received a lukewarm response from native Japanese audiences, Trigun proved to be a major success with North American viewers during its airing on Cartoon Network's Adult Swim starting in 2003. The anime has received mostly positive reviews from American critics, who have praised the series' moral themes and its ability to balance lighthearted humor with its more serious plot. The anime series is frequently listed as one of the best anime series; in 2001, "Wizard's Anime" Magazine listed "Trigun" as the 38th best series on their "Top 50 Anime released in North America", and in 2010 "The Los Angeles Times" journalist Charles Solomon placed the series as the seventh best anime on his "Top 10". Theron Martin of Anime News Network gave the anime adaptation a B+, praising the writing: "The series never wallows in the clichés inherent to this format simply because the surprisingly high quality of its writing never allows that to happen." However, he criticized the visuals, stating, "Character rendering regularly looks more like rough drafts than refined final products, with the artists often struggling just to stay on model." Mike Toole of Anime News Network named "Trigun" one of the most important anime of the 1990s. 
In 2009 Trigun Maximum won the "Best Comic" Seiun Award at the 48th Japan Science Fiction Convention. Escapist Magazine columnist H.D. Russell reviewed the anime adaptation of the series in early 2016, as part of the "Good Old Anime Review" section focusing on popular anime of the 1990s to early 2000s. Though noting the series hasn't aged well in terms of animation and English voice acting quality, Russell states that the depth of the characters and the moral themes of the series more than compensate for its faults. Russell concluded his review by giving Trigun four out of five stars, stating, "Trigun is very often overshadowed by its close cousin "Cowboy Bebop", which is sad, because it truly is a delight to watch. Despite having only decent voice acting (with a few exceptions), average music, and relatively static visuals, Trigun is an absolute blast that had me laughing and thinking the whole way. While it's not perfect, it is fun and it does ask the questions that will make viewers ponder for years to come without ever offering them an answer. Trigun is one that went straight from my backlog to my heart and is truly greater than the sum of its parts." The success of the animated series increased the popularity of the original manga source material, with the first volume's US print run of 35,000 selling out shortly after release. The second volume concluded the original series early the next year, and went on to be the top-earning manga release of 2004. Despite its relative popularity in the West, Trigun never gained widespread appeal with Japanese audiences. Suggested factors include the "old west" setting, European-style character names and a lack of Japanese cultural elements. This makes Trigun one of the rare examples of an anime that is far more successful in the West than in its country of origin. 
The revolvers Vash and Knives use, the AGL Arms .45 Long Colts that Knives made, are visual replicas of the Remmington Stormbreaker from the tabletop role-playing game Cyberpunk 2020.
https://en.wikipedia.org/wiki?curid=31241
Tenchi Muyo! A fourth OVA series was produced in Japan, with the first collection released on November 30, 2016. The following episodes were planned to be released at intervals of three months each, and the final part of the series arrived on August 30, 2017. On July 12, 2019, it was announced that a fifth OVA series was in development, with Masaki Kajishima again serving as chief director and Hideki Shirane writing and overseeing scripts. A twenty-six-episode anime television series called was released in 1995, retelling and expanding upon the original six-episode story. was created in 1997, and is another alternate version of the original story. The latest version of the series, "Ai Tenchi Muyo!", was broadcast on Tokyo MX in 2014. Spin-off series of "Tenchi Muyo!" were also created; "Magical Girl Pretty Sammy" is one example that was adapted into a manga series. The franchise has also spawned soundtrack CDs and other merchandise released both in Japan and in the United States. Masaki Kajishima and Hiroki Hayashi, who both worked on the "Bubblegum Crisis" OVAs, cite that show as the inspiration for "Tenchi Muyo! Ryo-Ohki". In an interview with AIC, Hayashi described "Bubblegum Crisis" as "a pretty gloomy anime. Serious fighting, complicated human relationships, and dark Mega Tokyo." They thought it would be fun to create some comedy episodes with ideas like the girls going to the hot springs, but the idea was rejected by the sponsors. He also said that there was a trend to have a group of characters of one gender and a single character of the other, and asked what would happen if Mackey (Sylia's brother) were a main character, reversing the "Bubblegum" scenario. This became the basis for "Tenchi Muyo!". In designing Ryoko, Kajishima and Hayashi were inspired by the American sitcom "I Dream of Jeannie" and wanted to use her in their works. 
In the first episode, Tenchi would open the sealed cave, a reference to Jeannie's bottle, and a "cute witch" would jump out. Hayashi said that Tenchi is "sort of" based on Mackey, and that after Tenchi and Ryoko, the other girls were designed as characters to balance the picture in the very early concepts of the series, and that they are original characters. The first OVA was titled "Tenchi Muyo! Ryo-Ohki" and was created by Masaki Kajishima. The series was divided into 4 OVAs. The fourth OVA released its last episode on September 13, 2017. Hitoshi Okuda wrote two manga series based on the OVA series. The first manga is titled and was published by Kadokawa Shoten and serialized in Dragon Comic Jr. magazine from December 16, 1994 to June 9, 2000. The series was collected into 12 tankōbon volumes. The second series titled was also published by Kadokawa Shoten and serialized in Dragon Comic Age magazine from July 26, 2000 to December 9, 2005. The series was collected into 10 tankōbon volumes. After the first OVA series aired, AIC began looking into TV adaptations beyond the "Mihoshi Special". In 1995 , a 26-episode anime television series was created by Hiroshi Negishi, animated by AIC and produced by Pioneer. It was loosely based on the first six episodes of the OVA series and the "Mihoshi Special". Two years later, another AIC production followed suit called , which aired through 1997 and also ran 26 episodes. It borrowed characters and some plot devices from the previous incarnations, but with a noticeable art shift and very different concepts, such as centering on Tenchi's high school and his being a priest in Tokyo. The most recent series aired in October 2014. The series commemorated the 20th anniversary of the franchise and was sponsored by the city of Takahashi, Okayama. An anime film titled "", created by Hiroshi Negishi, is a continuation of the "Tenchi Universe" TV series. A second film, "", was adapted from a novel written by Naoko Hasegawa. 
The third film, titled "Tenchi Forever! The Movie", is the sequel to "Tenchi Muyo in Love" and was adapted into a manga titled "Tenchi Muyo! In Love 2: Eternal Memory". Kajishima has written several books based on the franchise, including the ongoing "Tenchi Muyo! GXP: Galaxy Police Transporter" novel series; the novels "Shin Tenchi Muyo! Jurai", "Shin Tenchi Muyo! Yosho", and "Shin Tenchi Muyo! Washu"; and, more recently, the "Paradise Wars" spinoff. There are also a number of dōjinshi by and interviews with Kajishima, as well as a companion book, "101 Questions and Answers of Tenchi Muyo! Ryo-Oh-Ki". Naoko Hasegawa wrote a series of thirteen light novels continuing from the first OVA series, starting with "One Visitor After Another: Hexagram Of Love" in 1993. Seven Seas Entertainment has licensed the light novels "Shin Tenchi Muyo! Jurai", "Shin Tenchi Muyo! Yosho", and "Shin Tenchi Muyo! Washu" for a North American release. Numerous video games have been released based on the franchise, such as "Tenchi Muyo! Game Hen" for the Super Famicom. A radio drama was released titled "Tenchi Muyo! Ryo-ohki Manatsu no Carnival". At AnimeJapan 2019, it was announced at AIC Rights' booth that a stage play based on the first season of Ryo-Ohki was in development, scheduled to premiere in 2019. On May 17, the cast and staff were announced, with Kazuhiro Igarashi writing the script and Kazuma Sato starring as Tenchi Masaki. It ran from July 17 to 21 at the Shinjukumura Live venue in Tokyo. The first Tenchi spinoff is the "Magical Girl Pretty Sammy" series, a magical girl series in which Sasami is the lead character. The first use of Pretty Sammy was in the "Tenchi Muyo! Sound File", a Japanese-only music video release. The same animation was used in the ending of the "Tenchi Muyo! Mihoshi Special". In 1995, a three-episode "Pretty Sammy" OVA series began, where Sasami, who is known as Sasami Kawai, magically becomes Pretty Sammy. 
The second "Pretty Sammy" series is a TV series, which came out in 1996, also known as "Magical Project S". This series is in a separate continuity from the OVA series. Pretty Sammy also appears in the "Mihoshi Special" toward the end of Mihoshi's story, and in an alternate reality sequence in the "Tenchi Universe" series. Also created by Masaki Kajishima, the 1997 OVA series "Photon: The Idiot Adventures" is related to the "Tenchi Muyo! Ryo-ohki" universe, or more specifically, its recent installment, "Tenchi Muyo! War on Geminar". "Tenchi Muyo! War on Geminar" copies a number of elements from "Photon: The Idiot Adventures", such as Koros, Aho energy, having a princess named Lashara, and a young hero with such strong superhuman abilities he's practically invincible. The 1999 series "Dual! Parallel Trouble Adventure" is related to the "Tenchi Muyo! Ryo-ohki" universe, due to the blatant use of the "Lighthawk Wings" associated with the Jurai dynasty in "Tenchi Muyo". The creator of both "Dual!" and "Tenchi Muyo!", Masaki Kajishima, confirmed that "Dual!" does relate to "Tenchi Muyo!", and is in fact an alternate version of the "Tenchi Muyo!" universe. Guardians of Order published a line of English-language "Tenchi Muyo" role-playing game books based on the various series in the Tenchi franchise starting in 2000. "Tenchi Muyo! GXP" was released in Japan in 2001. The series takes place during the Kajishima version of the OVA continuity, and is set a year after the events of the third OVA series (despite being released first chronologically). The main character is Seina Yamada, a friend of Tenchi Masaki who accidentally joined the Galaxy Police. Many characters from "Tenchi Muyo! Ryo-ohki" make appearances in this series, including the use of Seiryo Tennan as a major character and a full-fledged "Tenchi Muyo!" crossover in episode 17. 
"Battle Programmer Shirase" is a 2003 spin-off of the "Pretty Sammy" OVA and TV series, keeping the character Misao Amano from "Pretty Sammy", and with the main character Akira Shirase, Misao's great-uncle; however, it bears little in common with either "Pretty Sammy" series because it has neither magic, nor Sasami, nor Misao's alter ego Pixy Misa. "" aired in Japan in 2006. The third spin-off featuring Sasami (known here as Sasami Iwakura) as the main character. The most recent "Tenchi" spin-off series is called "Tenchi Muyo! War on Geminar" (aka "Isekai no Seikishi Monogatari") which follows the tale of Tenchi's half-brother Kenshi Masaki as he finds himself in a foreign world that uses humanoid machines to fight their wars.
https://en.wikipedia.org/wiki?curid=31242
Teleprinter A teleprinter (teletypewriter, teletype or TTY) is an electromechanical device that can be used to send and receive typed messages through various communications channels, in both point-to-point and point-to-multipoint configurations. Initially they were used in telegraphy, which developed in the late 1830s and 1840s as the first use of electrical engineering, though teleprinters were not used for telegraphy until 1887 at the earliest. The machines were adapted to provide a user interface to early mainframe computers and minicomputers, sending typed data to the computer and printing the response. Some models could also be used to create punched tape for data storage (either from typed input or from data received from a remote source) and to read back such tape for local printing or transmission. Teleprinters could use a variety of different communication media. These included a simple pair of wires; dedicated non-switched telephone circuits (leased lines); switched networks that operated similarly to the public telephone network (telex); and radio and microwave links (telex-on-radio, or TOR). A teleprinter attached to a modem could also communicate through standard switched public telephone lines. This latter configuration was often used to connect teleprinters to remote computers, particularly in time-sharing environments. Teleprinters have largely been replaced by fully electronic computer terminals which typically have a computer monitor instead of a printer (though the term "TTY" is still occasionally used to refer to them, such as in Unix systems). Teleprinters are still widely used in the aviation industry (see AFTN and airline teletype system), and variations called Telecommunications Devices for the Deaf (TDDs) are used by the hearing impaired for typed communications over ordinary telephone lines. 
The teleprinter evolved through a series of inventions by a number of engineers, including Samuel Morse, Alexander Bain, Royal Earl House, David Edward Hughes, Emile Baudot, Donald Murray, Charles L. Krum, Edward Kleinschmidt and Frederick G. Creed. Teleprinters were invented in order to send and receive messages without the need for operators trained in the use of Morse code. A system of two teleprinters, with one operator trained to use a keyboard, replaced two trained Morse code operators. The teleprinter system improved message speed and delivery time, making it possible for messages to be flashed across a country with little manual intervention. There were a number of parallel developments on both sides of the Atlantic Ocean. In 1835 Samuel Morse devised a recording telegraph, and Morse code was born. Morse's instrument used a current to displace the armature of an electromagnet, which moved a marker, therefore recording the breaks in the current. Cooke & Wheatstone received a British patent covering telegraphy in 1837 and a second one in 1840 which described a type-printing telegraph with steel type fixed at the tips of petals of a rotating brass daisy-wheel, struck by an “electric hammer” to print Roman letters through carbon paper onto a moving paper tape. In 1841 Alexander Bain devised an electromagnetic printing telegraph machine. It used pulses of electricity created by rotating a dial over contact points to release and stop a type-wheel turned by weight-driven clockwork; a second clockwork mechanism rotated a drum covered with a sheet of paper and moved it slowly upwards so that the type-wheel printed its signals in a spiral. The critical issue was to have the sending and receiving elements working synchronously. Bain attempted to achieve this using centrifugal governors to closely regulate the speed of the clockwork. It was patented, along with other devices, on April 21, 1841. 
By 1846, the Morse telegraph service was operational between Washington, D.C., and New York. Royal Earl House patented his printing telegraph that same year. He linked two 28-key piano-style keyboards by wire. Each piano key represented a letter of the alphabet and when pressed caused the corresponding letter to print at the receiving end. A "shift" key gave each main key two optional values. A 56-character typewheel at the sending end was synchronised to coincide with a similar wheel at the receiving end. If the key corresponding to a particular character was pressed at the home station, it actuated the typewheel at the distant station just as the same character moved into the printing position, in a way similar to the daisy wheel printer. It was thus an example of a synchronous data transmission system. House's equipment could transmit around 40 instantly readable words per minute, but was difficult to manufacture in bulk. The printer could copy and print out up to 2,000 words per hour. This invention was first put in operation and exhibited at the Mechanics Institute in New York in 1844. Landline teleprinter operations began in 1849, when a circuit was put in service between Philadelphia and New York City. In 1855, David Edward Hughes introduced an improved machine built on the work of Royal Earl House. In less than two years, a number of small telegraph companies, including Western Union in early stages of development, united to form one large corporation – Western Union Telegraph Co. – to carry on the business of telegraphy on the Hughes system. In France, Émile Baudot designed in 1874 a system using a five-unit code, which began to be used extensively in that country from 1877. The British Post Office adopted the Baudot system for use on a simplex circuit between London and Paris in 1897, and subsequently made considerable use of duplex Baudot systems on their Inland Telegraph Services. 
During 1901, Baudot's code was modified by Donald Murray (1865–1945, originally from New Zealand), prompted by his development of a typewriter-like keyboard. The Murray system employed an intermediate step, a keyboard perforator, which allowed an operator to punch a paper tape, and a tape transmitter for sending the message from the punched tape. At the receiving end of the line, a printing mechanism would print on a paper tape, and/or a reperforator could be used to make a perforated copy of the message. As there was no longer a direct correlation between the operator's hand movement and the bits transmitted, there was no concern about arranging the code to minimize operator fatigue, and instead Murray designed the code to minimize wear on the machinery, assigning the code combinations with the fewest punched holes to the most frequently used characters. The Murray code also introduced what became known as "format effectors" or "control characters" – the CR (Carriage Return) and LF (Line Feed) codes. A few of Baudot's codes moved to the positions where they have stayed ever since: the NULL or BLANK and the DEL code. NULL/BLANK was used as an idle code for when no messages were being sent. In the United States in 1902, electrical engineer Frank Pearne approached Joy Morton, head of Morton Salt, seeking a sponsor for research into the practicalities of developing a printing telegraph system. Joy Morton needed to determine whether this was worthwhile and so consulted mechanical engineer Charles L. Krum, who was vice president of the Western Cold Storage Company. Krum was interested in helping Pearne, so space was set up in a laboratory in the attic of Western Cold Storage. Frank Pearne lost interest in the project after a year and left to get involved in teaching. Krum was prepared to continue Pearne’s work, and in August, 1903 a patent was filed for a 'typebar page printer'. 
In 1904, Krum filed a patent for a 'type wheel printing telegraph machine' which was issued in August, 1907. In 1906 Charles Krum's son, Howard Krum, joined his father in this work. It was Howard who developed and patented the start-stop synchronizing method for code telegraph systems, which made possible the practical teleprinter. In 1908, a working teleprinter was produced by the Morkrum Company, called the Morkrum Printing Telegraph, which was field tested with the Alton Railroad. In 1910, the Morkrum Company designed and installed the first commercial teletypewriter system on Postal Telegraph Company lines between Boston and New York City using the "Blue Code Version" of the Morkrum Printing Telegraph. In 1916, Edward Kleinschmidt filed a patent application for a typebar page printer. In 1919, shortly after the Morkrum company obtained their patent for a start-stop synchronizing method for code telegraph systems, which made possible the practical teleprinter, Kleinschmidt filed an application titled "Method of and Apparatus for Operating Printing Telegraphs" which included an improved start-stop method. The basic start-stop procedure, however, is much older than the Kleinschmidt and Morkrum inventions. It was already proposed by D'Arlincourt in 1870. Instead of wasting time and money in patent disputes on the start-stop method, Kleinschmidt and the Morkrum Company decided to merge and form the Morkrum-Kleinschmidt Company in 1924. The new company combined the best features of both their machines into a new typewheel printer for which Kleinschmidt, Howard Krum, and Sterling Morton jointly obtained a patent. In 1924 Britain's Creed & Company, founded by Frederick G. Creed, entered the teleprinter field with their Model 1P, a page printer, which was soon superseded by the improved Model 2P. In 1925 Creed acquired the patents for Donald Murray's Murray code, a rationalised Baudot code. 
The Model 3 tape printer, Creed’s first combined start-stop machine, was introduced in 1927 for the Post Office telegram service. This machine printed received messages directly on to gummed paper tape at a rate of 65 words per minute. Creed created his first keyboard perforator, which used compressed air to punch the holes. He also created a reperforator (receiving perforator) and a printer. The reperforator punched incoming Morse signals on to paper tape and the printer decoded this tape to produce alphanumeric characters on plain paper. This was the origin of the Creed High Speed Automatic Printing System, which could run at an unprecedented 200 words per minute. His system was adopted by the Daily Mail for daily transmission of the newspaper's contents. The Creed Model 7 page printing teleprinter was introduced in 1931 and was used for the inland Telex service. It worked at a speed of 50 baud, about 66 words a minute, using a code based on the Murray code. A teleprinter system was installed in the Federal Aviation Administration Flight Service Station Airway Radio Stations system in 1928, carrying administrative messages, flight information and weather reports. By 1938, the FAA's teleprinter network, handling weather traffic, extended over 20,000 miles, covering all 48 states except Maine, New Hampshire, and South Dakota. There were at least five major types of teleprinter networks. Most teleprinters used the 5-bit International Telegraph Alphabet No. 2 (ITA2). This limited the character set to 32 codes (2^5 = 32). One had to use a "FIGS" (for "figures") shift key to type numbers and special characters. Special versions of teleprinters had FIGS characters for specific applications, such as weather symbols for weather reports. Print quality was poor by modern standards. The ITA2 code was used asynchronously with start and stop bits: the asynchronous code design was intimately linked with the start-stop electro-mechanical design of teleprinters.
(Early systems had used synchronous codes, but were hard to synchronize mechanically.) Other codes, such as FIELDATA and Flexowriter, were introduced but never became as popular as ITA2. "Mark" and "space" are terms describing logic levels in teleprinter circuits. The native mode of communication for a teleprinter is a simple series DC circuit that is interrupted, much as a rotary dial interrupts a telephone signal. The marking condition is when the circuit is closed (current is flowing); the spacing condition is when the circuit is open (no current is flowing). The "idle" condition of the circuit is a continuous marking state, with the start of a character signalled by a "start bit", which is always a space. Following the start bit, the character is represented by a fixed number of bits, such as 5 bits in the ITA2 code, each either a mark or a space to denote the specific character or machine function. After the character's bits, the sending machine sends one or more stop bits. The stop bits are marking, so as to be distinct from the subsequent start bit. If the sender has nothing more to send, the line simply remains in the marking state (as if a continuing series of stop bits) until a later space denotes the start of the next character. The time between characters need not be an integral multiple of a bit time, but it must be at least the minimum number of stop bits required by the receiving machine. When the line is broken, the continuous spacing (open circuit, no current flowing) causes a receiving teleprinter to cycle continuously, even in the absence of stop bits. It prints nothing because the characters received are all zeros, the ITA2 blank (or ASCII NUL) character. Teleprinter circuits were generally leased from a communications common carrier and consisted of ordinary telephone cables that extended from the teleprinter located at the customer location to the common carrier central office.
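The start-stop framing described above can be sketched in a few lines of code. This is a minimal illustration, not a faithful model of any particular machine: the function names and the sample bit stream are invented, 1 represents mark and 0 represents space, and real ITA2 hardware details (bit order, timing) are ignored.

```python
# Sketch of start-stop framing: an idle line "marks" (1); a space (0) start
# bit opens each character, five data bits follow, then at least one marking
# stop bit. Extra marking time between characters is simply absorbed.

def frame(bits5):
    """Wrap five data bits in a start (space = 0) and stop (mark = 1) bit."""
    return [0] + list(bits5) + [1]

def deframe(line_bits):
    """Recover 5-bit characters from a stream of line samples (1 per bit time)."""
    chars, i = [], 0
    while i < len(line_bits):
        if line_bits[i] == 1:              # idle line or extra stop bits
            i += 1
            continue
        data = line_bits[i + 1:i + 6]      # start bit seen; next 5 bits are data
        if len(data) < 5:
            break                          # truncated frame at end of stream
        chars.append(tuple(data))
        i += 6                             # skip start + data; loop absorbs stops
    return chars

# Two characters separated by idle time, as a receiving machine would see them:
stream = [1, 1] + frame((1, 0, 0, 0, 0)) + [1, 1, 1] + frame((0, 1, 1, 0, 0))
assert deframe(stream) == [(1, 0, 0, 0, 0), (0, 1, 1, 0, 0)]
```

Note how the receiver needs no shared clock between characters: it merely waits for the next space after a run of marks, which is exactly the property that made start-stop operation practical for electromechanical machines.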
These teleprinter circuits were connected to switching equipment at the central office for Telex and TWX service. Private line teleprinter circuits were not directly connected to switching equipment. Instead, these private line circuits were connected to network hubs and repeaters configured to provide point to point or point to multipoint service. More than two teleprinters could be connected to the same wire circuit by means of a current loop. Earlier teleprinters had three rows of keys and only supported upper case letters. They used the 5 bit ITA2 code and generally worked at 60 to 100 words per minute. Later teleprinters, specifically the Teletype Model 33, used ASCII code, an innovation that came into widespread use in the 1960s as computers became more widely available. "Speed", intended to be roughly comparable to words per minute, is the standard term introduced by Western Union for a mechanical teleprinter data transmission rate using the 5-bit ITA2 code that was popular in the 1940s and for several decades thereafter. Such a machine would send 1 start bit, 5 data bits, and 1.42 stop bits. This unusual stop bit time is actually a rest period to allow the mechanical printing mechanism to synchronize in the event that a garbled signal is received. This is especially true on high-frequency radio circuits, where selective fading is present. Selective fading causes the mark signal amplitude to be randomly different from the space signal amplitude. Selective fading, or Rayleigh fading, can cause two carriers to randomly and independently fade to different depths. Since modern computer equipment cannot easily generate 1.42 bits for the stop period, common practice is to either approximate this with 1.5 stop bits, or to send 2.0 stop bits while accepting 1.0 stop bits when receiving.
For example, a "60 speed" machine is geared at 45.5 baud (22.0 ms per bit), a "66 speed" machine is geared at 50.0 baud (20.0 ms per bit), a "75 speed" machine is geared at 56.9 baud (17.5 ms per bit), a "100 speed" machine is geared at 74.2 baud (13.5 ms per bit), and a "133 speed" machine is geared at 100.0 baud (10.0 ms per bit). 60 speed became the "de facto" standard for amateur radio RTTY operation because of the widespread availability of equipment at that speed and the U.S. Federal Communications Commission (FCC) restrictions to only 60 speed from 1953 to 1972. Telex, news agency wires and similar services commonly used 66 speed services. There was some migration to 75 and 100 speed as more reliable devices were introduced. However, the limitations of HF transmission such as excessive error rates due to multipath distortion and the nature of ionospheric propagation kept many users at 60 and 66 speed. Most audio recordings in existence today are of teleprinters operating at 60 words per minute, and mostly of the Teletype Model 15. Another measure of the speed of a teletypewriter was in total "operations per minute (OPM)". For example, 60 speed was usually 368 OPM, 66 speed was 404 OPM, 75 speed was 460 OPM, and 100 speed was 600 OPM. Western Union Telexes were usually set at 390 OPM, with 7.0 total bits instead of the customary 7.42 bits. Both wire-service and private teleprinters had bells to signal important incoming messages and could ring 24/7 while the power was turned on. For example, ringing 4 bells on UPI wire-service machines meant an "Urgent" message; 5 bells was a "Bulletin"; and 10 bells was a FLASH, used only for very important news, such as the assassination of John F. Kennedy. The teleprinter circuit was often linked to a 5-bit paper tape punch (or "reperforator") and reader, allowing messages received to be resent on another circuit. Complex military and commercial communications networks were built using this technology. 
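The relationship between baud rate, the 7.42-unit character, and operations per minute can be checked with a short calculation. This is a sketch of the arithmetic implied above; the function name is invented.

```python
# A 5-bit ITA2 character with 1 start bit and 1.42 stop bits occupies
# 7.42 bit times, so operations per minute = (baud / 7.42) * 60.

UNITS_PER_CHAR = 1 + 5 + 1.42   # start + data + stop = 7.42 bit times

def opm(baud, units=UNITS_PER_CHAR):
    """Operations per minute at a given baud rate and character length."""
    return baud / units * 60

for speed, baud in [("60", 45.5), ("66", 50.0), ("75", 56.9), ("100", 74.2)]:
    print(f"{speed} speed: {baud} baud -> {opm(baud):.0f} OPM")
# -> 368, 404, 460 and 600 OPM, matching the figures quoted above

# Western Union Telex used 7.0 units per character instead of 7.42:
print(f"WU Telex: {opm(45.5, 7.0):.0f} OPM")   # -> 390 OPM
```

The calculation reproduces the quoted figures exactly, including the Western Union 390 OPM setting, which follows from shortening the stop period to 1.0 bit at the same 45.5 baud line rate.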
Message centers had rows of teleprinters and large racks for paper tapes awaiting transmission. Skilled operators could read the priority code from the hole pattern and might even feed a "FLASH PRIORITY" tape into a reader while it was still coming out of the punch. Routine traffic often had to wait hours for relay. Many teleprinters had built-in paper tape readers and punches, allowing messages to be saved in machine-readable form and edited off-line. Communication by radio, known as "radioteletype" or "RTTY" (pronounced "ritty"), was also common, especially among military users. Ships, command posts (mobile, stationary, and even airborne) and logistics units took advantage of the ability of operators to send reliable and accurate information with a minimum of training. Amateur radio operators continue to use this mode of communication today, though most use computer-interface sound generators, rather than legacy hardware teleprinter equipment. Numerous modes are in use within the "ham radio" community, from the original ITA2 format to more modern, faster modes, which include error-checking of characters. A typewriter or electromechanical printer can print characters on paper, and execute operations such as move the carriage back to the left margin of the same line (carriage return), advance to the same column of the next line (line feed), and so on. Commands to control non-printing operations were transmitted in exactly the same way as printable characters by sending control characters with defined functions (e.g., the "line feed" character forced the carriage to move to the same position on the next line) to teleprinters. 
In modern computing and communications a few control characters, such as carriage return and line feed, have retained their original functions (although they are often implemented in software rather than activating electromechanical mechanisms to move a physical printer carriage) but many others are no longer required and are used for other purposes. Some teleprinters had a "Here is" key, which transmitted a fixed sequence of 20 or 22 characters, programmable by breaking tabs off a drum. This sequence could also be transmitted automatically upon receipt of an ENQ (control E) signal, if enabled. This was commonly used to identify a station; the operator could press the key to send the station identifier to the other end, or the remote station could trigger its transmission by sending the ENQ character, essentially asking "who are you?" Creed & Company, a British company, built teleprinters for the GPO's teleprinter service. In 1931 Edward Kleinschmidt formed Kleinschmidt Labs to pursue a different type design of teleprinter. In 1944 Kleinschmidt demonstrated their lightweight unit to the Signal Corps and in 1949 their design was adopted for the Army's portable needs. In 1956, Kleinschmidt Labs merged with Smith-Corona, which then merged with the Marchant Calculating Machine Co., forming the SCM Corporation. By 1979, the Kleinschmidt division was branching off into Electronic Data Interchange, a business in which they became very successful, and replaced the mechanical products, including teleprinters. Kleinschmidt machines, with the military as their primary customer, used standard military designations for their machines. The teleprinter was identified with designations such as a TT-4/FG, while communication "sets" to which a teleprinter might be a part generally used the standard Army/Navy designation system such as AN/FGC-25. This includes Kleinschmidt teleprinter TT-117/FG and tape reperforator TT-179/FG. 
Morkrum made their first commercial installation of a printing telegraph with the Postal Telegraph Company in Boston and New York in 1910. It became popular with railroads, and the Associated Press adopted it in 1914 for their wire service. Morkrum merged with their competitor Kleinschmidt Electric Company to become Morkrum-Kleinschmidt Corporation shortly before being renamed the Teletype Corporation. Italian office equipment maker Olivetti (est. 1908) started to manufacture teleprinters in order to provide Italian post offices with modern equipment to send and receive telegrams. The first models typed on a paper ribbon, which was then cut and glued into telegram forms. Siemens & Halske, later Siemens AG, a German company founded in 1847, also manufactured teleprinters. The Teletype Corporation, a part of American Telephone and Telegraph Company's Western Electric manufacturing arm since 1930, was founded in 1906 as the Morkrum Company. In 1925, a merger between Morkrum and Kleinschmidt Electric Company created the Morkrum-Kleinschmidt Company. The name was changed in December 1928 to Teletype Corporation. In 1930, Teletype Corporation was purchased by the American Telephone and Telegraph Company and became a subsidiary of Western Electric. In 1984, the divestiture of the Bell System resulted in the Teletype name and logo being replaced by the AT&T name and logo, eventually resulting in the brand being extinguished. The last vestiges of what had been the Teletype Corporation ceased in 1990, bringing to a close the dedicated teleprinter business. Despite its long-lasting trademark status, the word "Teletype" went into common generic usage in the news and telecommunications industries. Records of the United States Patent and Trademark Office indicate the trademark has expired and is considered dead. Teletype machines tended to be large, heavy, and extremely robust, capable of running non-stop for months at a time if properly lubricated.
The Model 15 stands out as one of a few machines that remained in production for many years. It was introduced in 1930 and remained in production until 1963, a total of 33 years of continuous production. Very few complex machines can match that record. The production run was stretched somewhat by World War II—the Model 28 was scheduled to replace the Model 15 in the mid-1940s, but Teletype built so many factories to produce the Model 15 during World War II, it was more economical to continue mass production of the Model 15. The Model 15, in its receive-only, no-keyboard version, was the classic "news Teletype" for decades. Teletype also produced several different high-speed printers, such as the "Ink-tronic". A global teleprinter network, called the "Telex network", was developed in the late 1920s, and was used through most of the 20th century for business communications. The main difference from a standard teleprinter is that Telex includes a switched routing network, originally based on pulse-telephone dialing, which in the United States was provided by Western Union. AT&T developed a competing network called "TWX" which initially also used rotary dialing and Baudot code, carried to the customer premises as pulses of DC on a metallic copper pair. TWX later added a second ASCII-based service using Bell 103 type modems served over lines whose physical interface was identical to regular telephone lines. In many cases, the TWX service was provided by the same telephone central office that handled voice calls, using class of service to prevent POTS customers from connecting to TWX customers. Telex is still in use in some countries for certain applications such as shipping, news, weather reporting and military command. Many business applications have moved to the Internet as most countries have discontinued telex/TWX services.
In addition to the 5-bit Baudot code and the much later seven-bit ASCII code, there was a six-bit code known as the Teletypesetter code (TTS) used by news wire services. It was first demonstrated in 1928 and began to see widespread use in the 1950s. Through the use of "shift in" and "shift out" codes, this six-bit code could represent a full set of upper and lower case characters, digits, symbols commonly used in newspapers, and typesetting instructions such as "flush left" or "center", and even "auxiliary font", to switch to italics or bold type, and back to roman ("upper rail"). The TTS produces aligned text, taking into consideration character widths and column width, or line length. A Model 20 Teletype machine with a paper tape punch ("reperforator") was installed at subscriber newspaper sites. Originally these machines would simply punch paper tapes and these tapes could be read by a tape reader attached to a "Teletypesetter operating unit" installed on a Linotype machine. The "operating unit" was essentially a box full of solenoids that sat on top of the Linotype's keyboard and pressed the appropriate keys in response to the codes read from the tape, thus creating type for printing in newspapers and magazines. In later years the incoming 6-bit current loop signal carrying the TTS code was connected to a minicomputer or mainframe for storage, editing, and eventual feed to a phototypesetting machine. Computers used teleprinters for input and output from the early days of computing. Punched card readers and fast printers replaced teleprinters for most purposes, but teleprinters continued to be used as interactive time-sharing terminals until video displays became widely available in the late 1970s. Users typed commands after a prompt character was printed. Printing was unidirectional; if the user wanted to delete what had been typed, further characters were printed to indicate that previous text had been cancelled. 
When video displays first became available the user interface was initially exactly the same as for an electromechanical printer; expensive and scarce video terminals could be used interchangeably with teleprinters. This was the origin of the text terminal and the command-line interface. Paper tape was sometimes used to prepare input for the computer session off line and to capture computer output. The popular Teletype Model 33 used 7-bit ASCII code (with an eighth parity bit) instead of Baudot. The common modem communications settings, "Start/Stop Bits" and "Parity," stem from the Teletype era. In early operating systems such as Digital's RT-11, serial communication lines were often connected to teleprinters and were given device names starting with tt. This and similar conventions were adopted by many other operating systems. Unix and Unix-like operating systems use the prefix tty, for example /dev/tty13, or pty (for pseudo-tty), such as /dev/ptya0. In many computing contexts, "TTY" has become the name for any text terminal, such as an external console device, a user dialing into the system on a modem on a serial port device, a printing or graphical computer terminal on a computer's serial port or the RS-232 port on a USB-to-RS-232 converter attached to a computer's USB port, or even a terminal emulator application in the window system using a pseudoterminal device. Teleprinters were also used to record fault printout and other information in some TXE telephone exchanges. Although printing news, messages, and other text at a distance is still universal, the dedicated teleprinter tied to a pair of leased copper wires was made functionally obsolete by the fax, personal computer, inkjet printer, email, and the Internet. In the 1980s, packet radio became the most common form of digital communications used in amateur radio. 
Soon, advanced multimode electronic interfaces such as the AEA PK-232 were developed, which could send and receive not only packet, but various other modulation types including Baudot. This made it possible for a home or laptop computer to replace teleprinters, saving money, complexity, space and the massive amount of paper which mechanical machines used. As a result, by the mid-1990s, amateur use of actual teleprinters had waned, though a core of "purists" still operate on equipment originally manufactured in the 1940s, 1950s, 1960s and 1970s. Although teleprinters were obsolete by the 21st century, their distinctive sound continues to be played in the background of newscasts on the New York City radio station WINS and Philadelphia's KYW, a tradition dating back to the mid-1960s.
https://en.wikipedia.org/wiki?curid=31247
Travelling salesman problem The travelling salesman problem (also called the travelling salesperson problem or TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research. The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP. In the theory of computational complexity, the decision version of the TSP (where given a length "L", the task is to decide whether the graph has a tour of at most "L") belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities. The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, many heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely and even problems with millions of cities can be approximated within a small fraction of 1%. The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept "city" represents, for example, customers, soldering points, or DNA fragments, and the concept "distance" represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources will want to minimize the time spent moving the telescope between the sources. 
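The problem statement above can be made concrete with a brute-force solver: enumerate every possible tour from a fixed origin and keep the shortest. This is a minimal sketch with an invented distance matrix; its O(n!) running time is exactly the behaviour that makes exact solution of large instances hard.

```python
# Brute-force TSP: try every permutation of the non-origin cities.
# The 4-city asymmetric distance matrix below is made up for illustration.
from itertools import permutations

def tsp_brute_force(dist):
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):       # fix city 0 as the origin
        tour = (0,) + perm + (0,)                # closed tour back to origin
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(tsp_brute_force(dist))   # -> (21, (0, 2, 3, 1, 0))
```

Only (n−1)! tours need checking because the origin can be fixed; even so, the count grows so quickly that this approach is hopeless beyond roughly 12–15 cities, which is why the heuristics and cutting-plane methods discussed below matter.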
In many applications, additional constraints such as limited resources or time windows may be imposed. The origins of the travelling salesman problem are unclear. A handbook for travelling salesmen from 1832 mentions the problem and includes example tours through Germany and Switzerland, but contains no mathematical treatment. The travelling salesman problem was mathematically formulated in the 1800s by the Irish mathematician W.R. Hamilton and by the British mathematician Thomas Kirkman. Hamilton’s Icosian Game was a recreational puzzle based on finding a Hamiltonian cycle. The general form of the TSP appears to have been first studied by mathematicians during the 1930s in Vienna and at Harvard, notably by Karl Menger, who defined the problem, considered the obvious brute-force algorithm, and observed the non-optimality of the nearest neighbour heuristic. The problem was later taken up by Merrill M. Flood, who was looking to solve a school bus routing problem. Hassler Whitney at Princeton University generated interest in the problem, which he called the "48 states problem". The earliest publication using the phrase "traveling salesman problem" was the 1949 RAND Corporation report by Julia Robinson, "On the Hamiltonian game (a traveling salesman problem)." In the 1950s and 1960s, the problem became increasingly popular in scientific circles in Europe and the USA after the RAND Corporation in Santa Monica offered prizes for steps in solving the problem. Notable contributions were made by George Dantzig, Delbert Ray Fulkerson and Selmer M. Johnson from the RAND Corporation, who expressed the problem as an integer linear program and developed the cutting plane method for its solution. They wrote what is considered the seminal paper on the subject, in which, with these new methods, they solved an instance with 49 cities to optimality by constructing a tour and proving that no other tour could be shorter.
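Menger's observation about the nearest neighbour heuristic can be demonstrated directly: always travelling to the closest unvisited city is fast but can be led astray. The sketch below uses an invented symmetric instance chosen so that the greedy tour ends with an expensive edge back to the start.

```python
# Nearest-neighbour heuristic: from the current city, always move to the
# closest unvisited city, then return to the start. Instance is invented.

def nearest_neighbour(dist, start=0):
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        tour.append(min(unvisited, key=lambda j: dist[here][j]))
        unvisited.remove(tour[-1])
    tour.append(start)                       # close the tour
    return sum(dist[a][b] for a, b in zip(tour, tour[1:])), tour

dist = [[0, 1, 3, 10],
        [1, 0, 1, 3],
        [3, 1, 0, 1],
        [10, 3, 1, 0]]
print(nearest_neighbour(dist))   # -> (13, [0, 1, 2, 3, 0])
# Greedy walks 0-1-2-3 along cheap edges, then pays 10 to get home.
# The tour 0-1-3-2-0 costs only 1 + 3 + 1 + 3 = 8, so greedy is not optimal.
```

This is exactly the non-optimality Menger noted: the heuristic makes locally cheap moves with no regard for the cost of eventually closing the cycle.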
Dantzig, Fulkerson and Johnson, however, speculated that given a near optimal solution we may be able to find optimality or prove optimality by adding a small number of extra inequalities (cuts). They used this idea to solve their initial 49 city problem using a string model. They found they only needed 26 cuts to come to a solution for their 49 city problem. While this paper did not give an algorithmic approach to TSP problems, the ideas that lay within it were indispensable to later creating exact solution methods for the TSP, though it would take 15 years to find an algorithmic approach to creating these cuts. As well as cutting plane methods, Dantzig, Fulkerson and Johnson used branch and bound algorithms, perhaps for the first time. In 1959, Jillian Beardwood, J.H. Halton and John Hammersley published an article entitled “The Shortest Path Through Many Points” in the Proceedings of the Cambridge Philosophical Society. The Beardwood–Halton–Hammersley theorem provides a practical estimate for the traveling salesman problem: the authors derived an asymptotic formula for the length of the shortest route for a salesman who starts at a home or office and visits a fixed number of locations before returning to the start. In the following decades, the problem was studied by many researchers from mathematics, computer science, chemistry, physics, and other sciences. In the 1960s, however, a new approach was created: instead of seeking optimal solutions, one would produce a solution whose length is provably bounded by a multiple of the optimal length, and in doing so create lower bounds for the problem; these may then be used with branch and bound approaches. One method of doing this was to create a minimum spanning tree of the graph and then double all its edges, which produces the bound that the length of an optimal tour is at most twice the weight of a minimum spanning tree.
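The tree-doubling idea can be sketched in code: build a minimum spanning tree, then take a preorder walk of the tree as the tour, shortcutting repeated vertices. For metric instances (where distances obey the triangle inequality) the resulting tour is at most twice the MST weight, and the MST weight is itself a lower bound on the optimal tour. The function names and the four-point Euclidean instance below are illustrative.

```python
# Tree-doubling 2-approximation sketch: MST via Prim's algorithm, then a
# preorder walk of the tree (which shortcuts the doubled edges).
import math

def prim_mst(dist):
    """Return a parent map describing a minimum spanning tree rooted at 0."""
    n = len(dist)
    in_tree, parent = {0}, {0: None}
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        in_tree.add(v)
        parent[v] = u
    return parent

def double_tree_tour(dist):
    parent = prim_mst(dist)
    children = {}
    for v, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(v)
    order = []
    def preorder(v):                 # visiting in preorder shortcuts repeats
        order.append(v)
        for c in children.get(v, []):
            preorder(c)
    preorder(0)
    tour = order + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:])), tour

# Four points on a unit square: MST weight is 3, so any tour this produces
# is guaranteed to have length at most 6.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(double_tree_tour(dist))
```

Christofides' later refinement, mentioned below, improves the factor from 2 to 1.5 by adding a minimum-weight matching on the odd-degree tree vertices instead of doubling every edge.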
In 1976, Christofides and Serdyukov, independently of each other, made a major advance in this direction: the Christofides-Serdyukov algorithm yields a solution that, in the worst case, is at most 1.5 times longer than the optimal solution. As the algorithm was so simple and quick, many hoped it would give way to a near-optimal solution method. This remains the method with the best worst-case guarantee. However, for a fairly general special case of the problem it was beaten by a tiny margin in 2011. Richard M. Karp showed in 1972 that the Hamiltonian cycle problem was NP-complete, which implies the NP-hardness of TSP. This supplied a mathematical explanation for the apparent computational difficulty of finding optimal tours. Great progress was made in the late 1970s and 1980s, when Grötschel, Padberg, Rinaldi and others managed to exactly solve instances with up to 2,392 cities, using cutting planes and branch and bound. In the 1990s, Applegate, Bixby, Chvátal, and Cook developed the program "Concorde" that has been used in many recent record solutions. Gerhard Reinelt published the TSPLIB in 1991, a collection of benchmark instances of varying difficulty, which has been used by many research groups for comparing results. In 2006, Cook and others computed an optimal tour through an 85,900-city instance given by a microchip layout problem, currently the largest solved TSPLIB instance. For many other instances with millions of cities, solutions can be found that are guaranteed to be within 2–3% of an optimal tour. TSP can be modelled as an undirected weighted graph, such that cities are the graph's vertices, paths are the graph's edges, and a path's distance is the edge's weight. It is a minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once. Often, the model is a complete graph (i.e., each pair of vertices is connected by an edge). 
If no path exists between two cities, adding an arbitrarily long edge will complete the graph without affecting the optimal tour. In the "symmetric TSP", the distance between two cities is the same in each opposite direction, forming an undirected graph. This symmetry halves the number of possible solutions. In the "asymmetric TSP", paths may not exist in both directions or the distances might be different, forming a directed graph. Traffic collisions, one-way streets, and airfares for cities with different departure and arrival fees are examples of how this symmetry could break down. The TSP can be formulated as an integer linear program. Several formulations are known. Two notable formulations are the Miller–Tucker–Zemlin (MTZ) formulation and the Dantzig–Fulkerson–Johnson (DFJ) formulation. The DFJ formulation is stronger, though the MTZ formulation is still useful in certain settings. Label the cities with the numbers 1, …, "n" and define x_ij = 1 if the tour goes directly from city "i" to city "j", and x_ij = 0 otherwise. For "i" = 1, …, "n", let u_i be a dummy variable, and finally take c_ij to be the distance from city "i" to city "j". Then TSP can be written as the following integer linear programming problem:

  minimize Σ_i Σ_{j≠i} c_ij · x_ij
  subject to:
    x_ij ∈ {0, 1} for all i, j = 1, …, n;
    1 ≤ u_i ≤ n for all i = 1, …, n;
    Σ_{i : i≠j} x_ij = 1 for j = 1, …, n;
    Σ_{j : j≠i} x_ij = 1 for i = 1, …, n;
    u_i − u_j + n · x_ij ≤ n − 1 for all 2 ≤ i ≠ j ≤ n.

The first set of equalities requires that each city is arrived at from exactly one other city, and the second set of equalities requires that from each city there is a departure to exactly one other city. The last constraints enforce that there is only a single tour covering all cities, and not two or more disjoint tours that only collectively cover all cities. To prove this, it is shown below (1) that every feasible solution contains only one closed sequence of cities, and (2) that for every single tour covering all cities, there are values for the dummy variables u_i that satisfy the constraints. 
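To see concretely how the dummy variables rule out disjoint subtours, here is a tiny self-contained check in Python. It assumes the classic MTZ inequalities u_i − u_j + n·x_ij ≤ n − 1 for 2 ≤ i ≠ j ≤ n; the 4-city instance is purely illustrative:

```python
from itertools import product

n = 4  # cities 1..4; x[i][j] = 1 if the tour goes directly from i to j

def mtz_feasible(x, u):
    """Check the MTZ subtour-elimination constraints
    u_i - u_j + n*x_ij <= n - 1 for all 2 <= i != j <= n,
    given u values for cities 2..n (each in 1..n)."""
    return all(u[i] - u[j] + n * x[i][j] <= n - 1
               for i in range(2, n + 1) for j in range(2, n + 1) if i != j)

# A genuine tour 1 -> 2 -> 3 -> 4 -> 1 ...
tour_x = {i: {j: 0 for j in range(1, n + 1)} for i in range(1, n + 1)}
for i, j in [(1, 2), (2, 3), (3, 4), (4, 1)]:
    tour_x[i][j] = 1
# ... admits u_i = visiting step: u_2 = 2, u_3 = 3, u_4 = 4.
assert mtz_feasible(tour_x, {2: 2, 3: 3, 4: 4})

# Two disjoint subtours 1 -> 4 -> 1 and 2 -> 3 -> 2 satisfy the degree
# constraints, but no assignment of u_2, u_3, u_4 satisfies MTZ:
sub_x = {i: {j: 0 for j in range(1, n + 1)} for i in range(1, n + 1)}
for i, j in [(1, 4), (4, 1), (2, 3), (3, 2)]:
    sub_x[i][j] = 1
assert not any(mtz_feasible(sub_x, dict(zip([2, 3, 4], vals)))
               for vals in product(range(1, n + 1), repeat=3))
```

The exhaustive search over u mirrors the proof: summing the two constraints for the subtour 2 → 3 → 2 demands both u_2 < u_3 and u_3 < u_2, which is impossible.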
To prove that every feasible solution contains only one closed sequence of cities, it suffices to show that every subtour in a feasible solution passes through city 1 (noting that the equalities ensure there can only be one such tour). For if we sum all the inequalities corresponding to x_ij = 1 for any subtour of "k" steps not passing through city 1, the u terms telescope to zero and we obtain

  n · k ≤ (n − 1) · k,

which is a contradiction. It now must be shown that for every single tour covering all cities, there are values for the dummy variables u_i that satisfy the constraints. Without loss of generality, define the tour as originating (and ending) at city 1. Choose u_i = t if city "i" is visited in step "t" ("i", "t" = 1, 2, ..., n). Then u_i − u_j ≤ n − 1, since u_i can be no greater than "n" and u_j can be no less than 1; hence the constraints are satisfied whenever x_ij = 0. For x_ij = 1, we have

  u_i − u_j + n · x_ij = t − (t + 1) + n = n − 1,

satisfying the constraint. Label the cities with the numbers 1, …, "n" and define x_ij = 1 if the tour goes directly from city "i" to city "j", and x_ij = 0 otherwise. Take c_ij to be the distance from city "i" to city "j". Then TSP can be written as the following integer linear programming problem:

  minimize Σ_i Σ_{j≠i} c_ij · x_ij
  subject to:
    x_ij ∈ {0, 1} for all i, j = 1, …, n;
    Σ_{i : i≠j} x_ij = 1 for j = 1, …, n;
    Σ_{j : j≠i} x_ij = 1 for i = 1, …, n;
    Σ_{i∈Q} Σ_{j∈Q, j≠i} x_ij ≤ |Q| − 1 for every proper subset Q of {2, …, n} with |Q| ≥ 2.

The last constraint of the DFJ formulation ensures that there are no sub-tours among the non-starting vertices, so the solution returned is a single tour and not the union of smaller tours. Because this leads to an exponential number of possible constraints, in practice it is solved with delayed column generation. The traditional lines of attack for the NP-hard problems are the following: The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using brute-force search). The running time for this approach lies within a polynomial factor of O(n!), the factorial of the number of cities, so this solution becomes impractical even for only 20 cities. One of the earliest applications of dynamic programming is the Held–Karp algorithm that solves the problem in time O(n^2 · 2^n). 
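The Held–Karp recurrence can be sketched directly in Python: C(S, j), the cheapest path that starts at city 0, visits exactly the cities in S, and ends at j, is built up over subsets of increasing size. This is a minimal illustrative implementation, not the authors' original presentation:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic programming for the TSP in O(n^2 * 2^n) time.
    `dist` is an n x n distance matrix; the function returns the length
    of a shortest tour that starts and ends at city 0.  C[(S, j)] holds
    the cheapest cost of starting at 0, visiting exactly the cities in
    the frozenset S, and ending at j (which must lie in S)."""
    n = len(dist)
    # Base case: go straight from 0 to j.
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                # Best way to reach j: come from some k in S \ {j}.
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in S if k != j)
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))
```

For a 4-city instance whose cheapest tour is the cycle 0 → 1 → 2 → 3 → 0 of length 4, the sketch returns 4, while the table it fills has only n·2ⁿ entries rather than the n! tours of brute force.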
This bound has also been reached by inclusion–exclusion in an attempt preceding the dynamic programming approach. Improving these time bounds seems to be difficult. For example, it has not been determined whether an exact algorithm for TSP that runs in time O(1.9999^n) exists. An exact solution for 15,112 German towns from TSPLIB was found in 2001 using the cutting-plane method proposed by George Dantzig, Ray Fulkerson, and Selmer M. Johnson in 1954, based on linear programming. The computations were performed on a network of 110 processors located at Rice University and Princeton University. The total computation time was equivalent to 22.6 years on a single 500 MHz Alpha processor. In May 2004, the travelling salesman problem of visiting all 24,978 towns in Sweden was solved: a tour of length approximately 72,500 kilometres was found and it was proven that no shorter tour exists. In March 2005, the travelling salesman problem of visiting all 33,810 points in a circuit board was solved using "Concorde TSP Solver": a tour of length 66,048,945 units was found and it was proven that no shorter tour exists. The computation took approximately 15.7 CPU-years (Cook et al. 2006). In April 2006 an instance with 85,900 points was solved using "Concorde TSP Solver", taking over 136 CPU-years. Various heuristics and approximation algorithms, which quickly yield good solutions, have been devised. These include the Multi-fragment algorithm. Modern methods can find solutions for extremely large problems (millions of cities) within a reasonable time, and these are with high probability just 2–3% away from the optimal solution. Several categories of heuristics are recognized. The nearest neighbour (NN) algorithm (a greedy algorithm) lets the salesman choose the nearest unvisited city as his next move. This algorithm quickly yields a reasonably short route. 
For N cities randomly distributed on a plane, the algorithm on average yields a path 25% longer than the shortest possible path. However, there exist many specially arranged city distributions which make the NN algorithm give the worst route.
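The greedy rule behind the nearest-neighbour heuristic fits in a few lines of Python (the function name and the 4-city instance are illustrative):

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy nearest-neighbour heuristic for the TSP: from the current
    city, always move next to the closest city not yet visited."""
    unvisited = set(range(len(points))) - {start}
    tour, current = [start], start
    while unvisited:
        # Pick the unvisited city closest to where we are now.
        nxt = min(unvisited,
                  key=lambda j: math.dist(points[current], points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour
```

Each step is locally cheapest, but nothing prevents the final edge back to the start from being very long, which is exactly how the adversarial instances mentioned above defeat the heuristic.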
https://en.wikipedia.org/wiki?curid=31248
Total Access Communication System Total Access Communication System (TACS) and ETACS are mostly-obsolete variants of Advanced Mobile Phone System (AMPS) which was announced as the choice for the first two UK national cellular systems in February 1983, less than a year after the UK government announced the T&Cs for the two competing mobile phone networks in June 1982. Vodafone (known then as Racal-Vodafone) opted for a £30 million turnkey contract from Ericsson (ERA) to design, build and set up its initial network of 100 base station sites. Cellnet (then known as Telecom Securicor Cellular Radio Ltd) used development labs in the facilities at General Electric (later made part of Motorola) based at Lynchburg, Virginia, United States. The reason Cellnet used the General Electric labs was that the AMPS system was already in development there, and the company had set up a production facility in readiness for AMPS production in 1985, which the Cellnet TACS was to share. In March 1984 development of prototypes began at General Electric. Production began in 1985 and General Electric produced 20,000 systems that year for Cellnet's distribution in the UK. Production of what was to become the Motorola model then took place at Stotfold, Bedfordshire, England. This production facility continued making TACS until the advent of GSM. TACS cellular phones were used in Europe (including the UK, Italy, Austria and Ireland) and other countries. TACS was also used in Japan under the name Japanese Total Access Communication (JTAC). It was also used in Hong Kong. ETACS was an extended version of TACS with more channels. TACS and ETACS are now obsolete in Europe, having been replaced by the GSM (Global System for Mobile Communications) system. In the United Kingdom, the last ETACS service operated by Vodafone was discontinued on 31 May 2001, after 16 years of service. The competing service in the UK operated by Cellnet (latterly BT Cellnet) was closed on Sunday 1 October 2000. 
Eircell (now Vodafone Ireland) closed its TACS network on 26 January 2001. This followed a long period during which customers were encouraged to switch to GSM services. When the network was closed, there were very few, if any, active TACS customers left. Customers who switched network were able to keep their phone number, but the (088) prefix was changed to either 087 (Eircell, now Vodafone Ireland) GSM or 086 (Esat Digifone, which became O2 Ireland before merging with Three) GSM. At the time, full mobile number portability was not available to TACS customers and the (088) prefix was closed. An automatic voice message was left in place for 12 months advising callers of the customer's new prefix. ETACS is, however, still in use in a handful of countries elsewhere in the world. Nordic Mobile Telephone (NMT) is another 1G analog cellular standard that was widely used in Europe, mainly in the Nordic countries; it has now been fully replaced by GSM except for limited use in rural areas, where it persisted due to its superior range. ESNs were issued in batches of 65,535 by BABT for phone manufacturers to program into each cellular phone, making each one unique to the TACS network with which it attempted to register. The following countries had more than two batches of ESNs allocated to them: UK, Italy, Austria, China, Malaysia, Hong Kong, Singapore, Bahrain, UAE, Kuwait, Philippines, Sri Lanka, Australia.
https://en.wikipedia.org/wiki?curid=31249
Time-division multiple access Time-division multiple access (TDMA) is a channel access method for shared-medium networks. It allows several users to share the same frequency channel by dividing the signal into different time slots. The users transmit in rapid succession, one after the other, each using its own time slot. This allows multiple stations to share the same transmission medium (e.g. radio frequency channel) while using only a part of its channel capacity. TDMA is used in the digital 2G cellular systems such as Global System for Mobile Communications (GSM), IS-136, Personal Digital Cellular (PDC) and iDEN, and in the Digital Enhanced Cordless Telecommunications (DECT) standard for portable phones. TDMA was first used in satellite communication systems by Western Union in its Westar 3 communications satellite in 1979. It is now used extensively in satellite communications, combat-net radio systems, and passive optical network (PON) networks for upstream traffic from premises to the operator. For usage of Dynamic TDMA packet mode communication, see below. TDMA is a type of time-division multiplexing (TDM), with the special point that instead of having one transmitter connected to one receiver, there are multiple transmitters. In the case of the "uplink" from a mobile phone to a base station this becomes particularly difficult because the mobile phone can move around and vary the "timing advance" required to make its transmission match the gap in transmission from its peers. Most 2G cellular systems, with the notable exception of IS-95, are based on TDMA. GSM, D-AMPS, PDC, iDEN, and PHS are examples of TDMA cellular systems. GSM combines TDMA with Frequency Hopping and wideband transmission to minimize common types of interference. In the GSM system, the synchronization of the mobile phones is achieved by sending timing advance commands from the base station which instructs the mobile phone to transmit earlier and by how much. 
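The slot discipline described above can be illustrated with a toy Python sketch. The 8-slot frame and the roughly 577 µs slot duration are GSM-like figures used only for illustration; the function name and mapping are hypothetical:

```python
SLOTS_PER_FRAME = 8     # GSM-like: 8 time slots per repeating TDMA frame
SLOT_DURATION_US = 577  # roughly the GSM slot duration, in microseconds

def active_user(t_us, assignment):
    """Return the user allowed to transmit at time t_us (microseconds),
    given `assignment`, a mapping slot_index -> user id for one frame.
    Time is chopped into fixed-length slots that repeat every frame,
    so each user transmits in rapid succession, one after the other."""
    slot = (t_us // SLOT_DURATION_US) % SLOTS_PER_FRAME
    return assignment.get(slot)  # None: the slot is unassigned (idle)
```

With two users assigned slots 0 and 1, each gets exactly one eighth of the channel's time, which is the sense in which a TDMA station uses "only a part of its channel capacity".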
This compensates for the propagation delay that results from radio waves travelling at the finite speed of light. The mobile phone is not allowed to transmit for its entire time slot; instead, there is a guard interval at the end of each time slot. As the transmission moves into the guard period, the mobile network adjusts the timing advance to synchronize the transmission. Initial synchronization of a phone requires even more care. Before a mobile transmits, there is no way to know the offset required. For this reason, an entire time slot has to be dedicated to mobiles attempting to contact the network; this is known as the random-access channel (RACH) in GSM. The mobile attempts to broadcast at the beginning of the time slot, as received from the network. If the mobile is located next to the base station, there will be no time delay and this will succeed. If, however, the mobile phone is just under 35 km from the base station, the time delay will mean the mobile's broadcast arrives at the very end of the time slot. In that case, the mobile will be instructed to broadcast its messages starting nearly a whole time slot earlier than would otherwise be expected. Finally, if the mobile is beyond the 35 km cell range in GSM, then the RACH will arrive in a neighbouring time slot and be ignored. It is this feature, rather than limitations of power, that limits the range of a GSM cell to 35 km when no special extension techniques are used. By changing the synchronization between the uplink and downlink at the base station, however, this limitation can be overcome. Although most major 3G systems are primarily based upon CDMA, time-division duplexing (TDD), packet scheduling (dynamic TDMA) and packet-oriented multiple access schemes are available in 3G form, combined with CDMA to take advantage of the benefits of both technologies. 
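The arithmetic behind the 35 km figure can be checked with a short sketch. It assumes the GSM bit period of 48/13 ≈ 3.69 µs and a 6-bit timing-advance field (steps 0 to 63), with each step compensating one bit period of round-trip delay, roughly 553 m of one-way distance:

```python
C_M_PER_S = 299_792_458      # speed of light in vacuum, m/s
BIT_PERIOD_US = 48 / 13      # GSM bit period, ~3.69 microseconds
MAX_TA_STEPS = 63            # GSM timing-advance field ranges over 0..63

def one_way_delay_us(distance_m):
    """One-way propagation delay over `distance_m` metres, microseconds."""
    return distance_m / C_M_PER_S * 1e6

# Each timing-advance step compensates one bit period of ROUND-TRIP
# delay, i.e. about c * 3.69us / 2 ~ 553 m of one-way distance.
METRES_PER_STEP = C_M_PER_S * (BIT_PERIOD_US * 1e-6) / 2
MAX_RANGE_KM = MAX_TA_STEPS * METRES_PER_STEP / 1000   # ~34.9 km
```

At 35 km the one-way delay is about 117 µs, so without a timing advance the uplink burst would drift far into the guard period and the following slot; 63 steps of about 553 m each recover the roughly 35 km cell range quoted above.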
While the most popular form of the UMTS 3G system uses CDMA and frequency-division duplexing (FDD) instead of TDMA, TDMA is combined with CDMA and time-division duplexing in the two standard UMTS UTRA TDD modes. The ITU-T G.hn standard, which provides high-speed local area networking over existing home wiring (power lines, phone lines and coaxial cables), is based on a TDMA scheme. In G.hn, a "master" device allocates "Contention-Free Transmission Opportunities" (CFTXOP) to other "slave" devices in the network. Only one device can use a CFTXOP at a time, thus avoiding collisions. The FlexRay protocol, a wired network used for safety-critical communication in modern cars, also uses the TDMA method for data transmission control. In radio systems, TDMA is usually used alongside frequency-division multiple access (FDMA) and frequency-division duplex (FDD); the combination is referred to as FDMA/TDMA/FDD. This is the case in both GSM and IS-136, for example. Exceptions to this include the DECT and Personal Handy-phone System (PHS) micro-cellular systems, the UMTS-TDD UMTS variant, and China's TD-SCDMA, which use time-division duplexing, where different time slots are allocated for the base station and handsets on the same frequency. A major advantage of TDMA is that the radio part of the mobile only needs to listen and broadcast for its own time slot. For the rest of the time, the mobile can carry out measurements on the network, detecting surrounding transmitters on different frequencies. This allows safe inter-frequency handovers, something which is difficult in CDMA systems, not supported at all in IS-95, and supported through complex system additions in the Universal Mobile Telecommunications System (UMTS). This in turn allows for co-existence of microcell layers with macrocell layers. CDMA, by comparison, supports "soft hand-off", which allows a mobile phone to be in communication with up to 6 base stations simultaneously, a type of "same-frequency handover". 
The incoming packets are compared for quality, and the best one is selected. CDMA's "cell breathing" characteristic, where a terminal on the boundary of two congested cells will be unable to receive a clear signal, can often negate this advantage during peak periods. A disadvantage of TDMA systems is that they create interference at a frequency which is directly connected to the time slot length. This is the buzz which can sometimes be heard if a TDMA phone is left next to a radio or speakers. Another disadvantage is that the "dead time" between time slots limits the potential bandwidth of a TDMA channel. These are implemented in part because of the difficulty in ensuring that different terminals transmit at exactly the times required. Handsets that are moving will need to constantly adjust their timings to ensure their transmission is received at precisely the right time, because as they move further from the base station, their signal will take longer to arrive. This also means that the major TDMA systems have hard limits on cell sizes in terms of range, though in practice the power levels required to receive and transmit over distances greater than the supported range would be mostly impractical anyway. In dynamic time-division multiple access (dynamic TDMA), a scheduling algorithm dynamically reserves a variable number of time slots in each frame to variable bit-rate data streams, based on the traffic demand of each data stream. Dynamic TDMA is used in
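The dynamic reservation idea can be sketched as a toy Python scheduler that splits one frame's slots in proportion to per-stream demand; the function name and the largest-remainder rounding are illustrative choices, not taken from any particular standard:

```python
def allocate_slots(demand, slots_per_frame):
    """Toy dynamic-TDMA scheduler: divide the slots of one frame among
    data streams in proportion to their current traffic demand, using
    largest-remainder rounding so that every slot gets assigned."""
    total = sum(demand.values())
    if total == 0:
        return {s: 0 for s in demand}
    exact = {s: d * slots_per_frame / total for s, d in demand.items()}
    alloc = {s: int(e) for s, e in exact.items()}
    leftover = slots_per_frame - sum(alloc.values())
    # Hand the remaining slots to the largest fractional remainders.
    for s in sorted(exact, key=lambda s: exact[s] - alloc[s],
                    reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc
```

Re-running the allocation each frame is what makes the scheme "dynamic": a stream whose demand spikes is granted more slots in the next frame, and idle streams release theirs.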
https://en.wikipedia.org/wiki?curid=31250
The Prisoner The Prisoner is a 1967 British science fiction-allegorical television series about an unnamed British intelligence agent who is abducted and imprisoned in a mysterious coastal village, where his captors try to find out why he abruptly resigned from his job. It was created by Patrick McGoohan and George Markstein with McGoohan playing the lead role of Number Six. Episode plots have elements of science fiction, allegory and psychological drama, as well as spy fiction. It was produced by Everyman Films for distribution by Lew Grade's ITC Entertainment. A single season of 17 episodes was filmed between September 1966 and January 1968 with exterior location filming in Portmeirion, Wales. Interior scenes were filmed at MGM-British Studios in Borehamwood. The series was first broadcast in Canada beginning on 6 September 1967, in the UK on 29 September 1967, and in the US on 1 June 1968. Although the show was sold as a thriller in the mould of the previous series starring McGoohan, "Danger Man" (retitled as "Secret Agent" in the US), its combination of 1960s countercultural themes and surrealistic setting had a far-reaching influence on science fiction and fantasy TV programming, and on narrative popular culture in general. Since its initial screening, the series has developed a cult following. A six-part TV miniseries remake aired on the US cable channel AMC in November 2009. In 2016, Big Finish Productions reinterpreted the series as an audio drama. The series follows an unnamed British man (played by Patrick McGoohan) who, after abruptly and angrily resigning from his job (seemingly being a government secret service post), apparently prepares to make a hurried departure from the country. While packing his luggage, he is rendered unconscious by knockout gas piped into his London flat. 
When he wakes, he finds himself in a re-creation of his home, located in a mysterious coastal "village" within which he is held captive, isolated from the mainland by mountains and sea. Although physical movement of residents around the Village is unconstrained, the premises are secured by numerous high-tech monitoring systems and security forces, including a balloon-like automaton called Rover, that recaptures or destroys those who attempt escape. The man encounters the Village's population: hundreds of people from all walks of life and cultures, all seeming to be peacefully living out their lives. They do not use names, but have been assigned numbers, which give no clue as to any person's status within the Village, whether as inmate or guard. Potential escapees therefore have no idea whom they can or cannot trust. The protagonist is assigned Number Six, but he repeatedly refuses the pretence of his new identity. Number Six is monitored heavily by Number Two, the Village administrator, who acts as an agent for the unseen "Number One". A variety of techniques are used by Number Two to try to extract information from Number Six, including hallucinogenic drug experiences, identity theft, mind control, dream manipulation, and various forms of social indoctrination and physical coercion. All of these are employed not only to find out why Number Six resigned as an agent, but also to elicit other purportedly dangerous information he gained as a spy. The position of Number Two is filled in by various other characters on a rotating basis. Sometimes this is part of a larger plan to confuse Number Six; at other times, it seems to be a change of personnel made as a result of failure to successfully interrogate Number Six. Number Six, distrustful of everyone in the Village, refuses to co-operate or provide the answers they seek. 
He struggles, usually alone, with various goals, such as determining for which side of the Iron Curtain the Village works, if indeed it works for any at all; remaining defiant to its imposed authority; concocting his own plans for escape; learning all he can about the Village; and subverting its operation. His schemes lead to the dismissal of the incumbent Number Two on two occasions, although he never escapes. By the end of the series, the administration, becoming desperate for Number Six's knowledge as well as fearful of his growing influence in the Village, takes drastic measures that threaten the lives of Number Six, Number Two, and the rest of the Village. A major theme of the series is individualism, as represented by Number Six, versus collectivism, as represented by Number Two and the others in the Village. McGoohan stated that the series aimed to demonstrate a balance between the two points. "The Prisoner" consists of seventeen episodes, which were first broadcast from 29 September 1967 to 1 February 1968 in the United Kingdom. While the show was presented as a serialised work, with a clear beginning and end, the ordering of the intermediate episodes is unclear, as the production and original broadcast order were different. Several attempts have been made to create an episode ordering based on script and production notes, and interpretations of the larger narrative of Number Six's time in the Village. The opening and closing sequences of "The Prisoner" have become iconic and have been cited as "one of the great set-ups of genre drama", establishing the Orwellian and postmodern themes of the series. The high production values of the opening sequence have been described as more like a feature film than a television programme. Number Six awakes in the mysterious coastal location known to 'residents' as the Village. Most of the 'residents' are prisoners, with others acting as guards. 
Escape from the Village is extremely difficult, with mountains on three sides and the sea on the other. Escapees are tracked by CCTV and intercepted by Rover, a white balloon guardian that will attack and asphyxiate people if required. Everyone uses numbers for identification, although names are infrequently used. Most of the villagers wear a standard outfit made up of coloured blazers with piping, multicoloured capes, striped sweaters, plimsolls and a variety of headwear, with straw boaters prominent. There are several facilities listed on 'The Village Map', including The Labour Exchange, Hospital, Palace of Fun (which is never seen), Old People's Home, and the Green Dome, where Number Two resides. A taxi service operates around the village's buildings. The village of Portmeirion in north Wales was used extensively for the setting of the Village. When necessary, the administrators of the Village can call upon a security guardian known as 'Rover' to prevent prisoners leaving the confines of the Village. An initial robotic device was created for the series but was replaced by meteorological balloons when filming commenced. The episodes featured guest stars in the role of Number Two. Of those listed below, only Leo McKern and Colin Gordon reprised the role. McGoohan was the only actor credited during the opening sequence, with Angelo Muscat the only actor considered a 'co-star' of the series. Several actors—among them Alexis Kanner, Christopher Benjamin, and Georgina Cookson—appeared in more than one episode, playing different characters. Kenneth Griffith appeared in "The Girl Who Was Death" and "Fall Out"; while he did play Number Two in "The Girl Who Was Death", his character in "Fall Out" may be the same character after the assignment of Number Two was passed to someone else (or, given events, abandoned). 
There is also a theory that Patrick Cargill played the same character in the two episodes in which he appeared; the Number Two that he plays in "Hammer into Anvil" may or may not be the same character as Thorpe, the aide to Number Six's superior, from "Many Happy Returns". Maher, McGoohan's stunt double, can be seen at the start of almost every episode, running across the beach; he also appears extensively in "The Schizoid Man" and in "Living in Harmony" as Third Gunman. The show was created while Patrick McGoohan and George Markstein were working on "Danger Man" (known as "Secret Agent" in the US), an espionage show produced by Incorporated Television Company (also called ITC Entertainment). The exact details of who created which aspects of the show are disputed; majority opinion credits McGoohan as the sole creator of the series. However, a disputed co-creator status was later ascribed to Markstein after a series of fan interviews were published in the 1980s. The show itself bears no "created by" credit. Some sources indicate McGoohan was the sole or primary creator of the show. McGoohan stated in a 1977 interview (broadcast as part of a Canadian documentary about "The Prisoner" called "The Prisoner Puzzle") that during the filming of the third season of "Danger Man" he told Lew Grade, managing director of ITC Entertainment, that he wanted to quit working on "Danger Man" after the filming of the proposed fourth series. Grade was unhappy with the decision, but when McGoohan insisted upon quitting, Grade asked if McGoohan had any other possible projects; McGoohan later pitched "The Prisoner". However, in a 1988 article from British Telefantasy magazine "Time Screen", McGoohan indicated that he had planned to pitch "The Prisoner" prior to speaking to Grade. 
In both accounts, McGoohan pitched the idea orally, rather than having Grade read the proposal in detail; and the two made an oral agreement for the show to be produced by Everyman Films, the production company formed by McGoohan and David Tomblin. In the 1977 account, McGoohan said that Grade approved of the show despite not understanding it, while in the 1988 account Grade expressed clear support for the concept. Other sources, however, credit Markstein, then a script editor for "Danger Man", with a significant or even primary portion of the development of the show. For example, Dave Rogers, in the book "The Prisoner and Danger Man", said that Markstein claimed to have created the concept first and McGoohan later attempted to take credit for it, though Rogers himself doubted that McGoohan would have wanted or needed to do that. A four-page document, generally agreed to have been written by Markstein, setting out an overview of the themes of the series, was published as part of an ITC/ATV press book in 1967. It has usually been accepted that this text originated earlier as a guide for the series writers. Further doubt has been cast on Markstein's version of events by author Rupert Booth in his biography of McGoohan, entitled "Not A Number". Booth points out that McGoohan had outlined the themes of "The Prisoner" in a 1965 interview, long before Markstein's tenure as script editor on the brief fourth season of "Danger Man". At any rate, part of Markstein's inspiration came from his research into the Second World War, where he found that some people had been incarcerated in a resort-like prison called Inverlair Lodge. Markstein suggested that "Danger Man"s main character John Drake (played by McGoohan) could suddenly resign and be kidnapped and sent to such a location. McGoohan added Markstein's suggestion to material he had been working on, which later became "The Prisoner". 
Furthermore, a 1960 episode of "Danger Man" entitled "View from the Villa" had exteriors filmed in Portmeirion, a Welsh resort village that struck McGoohan as a good location for future projects. According to "Fantasy or Reality", a chapter of "The Prisoner of Portmeirion", The Village is based, in part, on "a strange place in Scotland" operated by the Inter Services Research Bureau (ISRB), wherein "people" with "valuable knowledge of one sort or another" were held prisoners on extended "holidays" in a "luxury prison camp". "The Prisoner"'s story editor, George Markstein, this source contends, knows of "the existence of this 'secure establishment'." However, this "Scottish prison camp, in reality was not, of course, a holiday-type village full of people wearing colourful" clothing. Further inspiration came from a "Danger Man" episode called "Colony Three", in which Drake infiltrates a spy school in Eastern Europe during the Cold War. The school, in the middle of nowhere, is set up to look like a normal English town in which pupils and instructors mix as in any other normal city, but the instructors are virtual prisoners with little hope of ever leaving. McGoohan also stated that he was influenced by his experience from theatre, including his work in the Orson Welles play "Moby Dick—Rehearsed" (1955) and a BBC television play, "The Prisoner" by Bridget Boland. McGoohan wrote a forty-page show Bible, which included a "history of the Village, the sort of telephones they used, the sewerage system, what they ate, the transport, the boundaries, a description of the Village, every aspect of it". McGoohan wrote and directed several episodes, often using pseudonyms. 
Specifically, McGoohan wrote "Free for All" under the pen name 'Paddy Fitz' (Paddy being the Irish diminutive for Patrick and Fitzpatrick being his mother's maiden name) and directed the episodes "Many Happy Returns" and "A Change of Mind" using the stage name 'Joseph Serf', the surname being, ironically, a word for a peasant under the control of a feudal master. Using his own name, McGoohan wrote and directed the last two episodes—"Once Upon a Time" and "Fall Out"—and directed "Free for All". In a 1966 interview for the "Los Angeles Times" by reporter Robert Musel, McGoohan stated: "John Drake of "Secret Agent" is gone." Furthermore, McGoohan stated in a 1985 interview that Number Six is not the same character as John Drake, adding that he had originally wanted another actor to portray the character. However, other sources indicate that several of the crew members who continued on from "Danger Man" to work on "The Prisoner" considered it to be a continuation, and that McGoohan was continuing to play the character of John Drake. Furthermore, Dave Rogers states that Markstein had wanted the character to be a continuation of Drake, but doing so would have meant paying royalties to Ralph Smart, the creator of "Danger Man". Nevertheless, the second officially licensed novel based on "The Prisoner", published in 1969, refers to Number Six as "Drake" from its very first sentence: "Drake woke." The issue has been debated by fans and TV critics, with some stating the two characters are the same, based on similarities in the shows, the characters, a few repeating actors beyond McGoohan, and certain specific connections in various episodes. McGoohan had originally wanted to produce only seven episodes of "The Prisoner", but Grade argued more shows were necessary in order for him to successfully sell the series to CBS. The exact number that was agreed to, and how the series was to end, are disputed by different sources. 
In an August 1967 article, Dorothy Manners reported that CBS had asked McGoohan to produce 36 segments, but he would agree to produce only 17. According to a 1977 interview, Lew Grade requested 26 episodes; McGoohan thought this would spread the show too thin, but was able to come up with 17 episodes. However, according to "The Prisoner: The Official Companion to the Classic TV Series", the series was originally supposed to run longer but was cancelled, forcing McGoohan to write the final episode in only a few days. "The Prisoner" had its British premiere on 29 September 1967 on ATV Midlands, and the last episode first aired on 1 February 1968 on Scottish Television. The world broadcast premiere was on the CTV Television Network in Canada on 5 September 1967. Filming began with the shooting of the series' opening sequence in London on 28 August 1966, with location work beginning on 5 September 1966, primarily in Portmeirion village near Porthmadog, north Wales. This location partially inspired the show. At the request of Portmeirion's architect Clough Williams-Ellis, the main location for the series was not disclosed until the opening credits of the final episode, where it was described as "The Hotel Portmeirion, Penrhyndeudraeth, North Wales". Many local residents were recruited as extras. The Village setting was further augmented by the use of the backlot facilities at MGM-British Studios in Borehamwood. Additionally, filming of a key sequence of the opening credits—and of exterior location filming for three episodes—took place at 1 Buckingham Place in London, which at the time was a private residence; it doubled as Number Six's home. The building is now a highlight of "Prisoner" location tours, and currently houses the headquarters of the Royal Warrant Holders Association. 
The episodes "Many Happy Returns", "The Girl Who Was Death" (the cricket match for which was filmed at four different locations, with the main sequences filmed at Eltisley in Cambridgeshire) and "Fall Out" also made use of extensive location shooting in London and other locations. At the time, most British television was broadcast in black and white, but to reach the critical American audience the show was filmed in colour. According to author James Follett, a protégé of "The Prisoner" co-creator George Markstein, Markstein had mapped out an explanation for the Village. In Markstein's mind, a young John Drake, the lead character in the television series "Danger Man", had once submitted a proposal for how to deal with retired secret agents who pose a security risk. Drake's idea was to create a comfortable retirement home where former agents could live out their final years, enduring firm but unobtrusive surveillance. Years later, Drake discovers that his idea has been put into practice, not as a benign means of retirement, but as an interrogation centre and prison camp. Outraged, Drake stages his own resignation, knowing he will be brought to the Village. He hopes to learn everything he can of how his idea has been implemented and find a way to destroy it. However, due to the range of nationalities and agents present in the Village, Drake realises he is not sure whose Village he is in one established by his own people, or by the other side. Drake's conception of the Village would have been the basis for declaring himself to be "Number One". According to Markstein: "'Who is Number Six?' is no mystery he was a secret agent called Drake who quit." Markstein added: The prisoner was going to leave the Village and he was going to have adventures in many parts of the world, but ultimately he would always be a prisoner. By that I don't mean he would always go back to the Village. 
He would always be a prisoner of his circumstances, his situation, his secret, his background... and 'they' would always be there to ensure that his captivity continues. In the late 1960s, the TV series quickly spawned three tie-in novels. In the 1970s and into the 1980s, as the series gained cult status, a large amount of fan-produced material began to appear, with the official appreciation society forming in 1977. In 1988, the first officially sanctioned guide – "The Prisoner Companion" – was released. It was not well received by fans or Patrick McGoohan. In 1989, Oswald and Carraze released "The Prisoner" in France, with a translated version appearing shortly after. From the 1990s, numerous other books about the TV series and Patrick McGoohan have been produced. Robert Fairclough's books, including two volumes of original scripts, are considered some of the best-researched books available. For the 40th anniversary, Andrew Pixley wrote a well-received and in-depth account of the series' production. There are guides to shooting locations in Portmeirion and also a biography of co-creator George Markstein. Some members of the production crew have released books about their time working on the series, including Eric Mival and Ian Rakoff. In 1988, DC Comics released the first part of "Shattered Visage", a four-part series of comics based on the characters in the TV series. In 2018 Titan Comics re-issued "Shattered Visage" as well as releasing "The Prisoner: The Uncertainty Machine", another four-part series of comics about another spy returning to the Village. Numerous editions of "The Prisoner" were released in the UK/Region 2 by companies such as Carlton, the copyright holder of the TV series. The first VHS and Betamax releases were through Precision Video in 1982 from 16mm original prints. They released four tapes, each with two episodes edited together; these were "The Arrival"/"The Schizoid Man", "Many Happy Returns"/"A. B. 
and C.", "Checkmate"/"Free For All" and "The General"/"The Chimes of Big Ben". In 1986 Channel 5 Video (a now a defunct home video brand owned by Universal Pictures) released a series of all 17 episodes on VHS and LaserDisc. In 1993 PolyGram Video released the entire series plus a special feature called "The Best of The Prisoner" on five VHS cassette tapes. In North America, MPI Home Video released a total of 20 VHS videotapes in 1984 encompassing the entire series: one tape for each of the 17 episodes plus three more containing "The Alternate Version of 'The Chimes of Big Ben'", a documentary, and a "best of" retrospective. MPI also released editions of nine LaserDiscs in 1988 and 1998, the last disc of which comprised the final Episode 17, "Fall Out", plus "The Prisoner Video Companion" on side two. In 2000, the first DVD release in the UK was issued by Carlton International Entertainment, with A&E Home Video releasing the same DVDs in North America/Region 1 (in four-episode sets as well as a comprehensive 10-disc "mega-box" edition). A&E subsequently reissued the mega-box in a 40th anniversary edition in 2007. The A&E issue included an alternative version of "The Chimes of Big Ben" and the MPI-produced documentary (but not the redundant "best of" retrospective) among its limited special features. In Australia, Umbrella Entertainment released a DVD set in 2003. In 2005 DeAgostini in the UK released all 17 episodes in a fortnightly partwork series. "The Prisoner: 40th Anniversary Special Edition" DVD box-set released in 2007 featured standard-definition versions from high-definition masters created by Network. It also included a production guide to the series by Andrew Pixley. "The Prisoner: The Complete Series" was released on Blu-ray Disc in the United Kingdom on 28 September 2009, following in North America on 27 October 2009. The episodes were restored by Network to create new high-definition masters. 
The box-set features all 17 remastered episodes plus extensive special features, including the feature-length documentary "Don't Knock Yourself Out", a restored original edit of "Arrival" and extensive archive photos and production stills. "The Prisoner: 50th Anniversary Set" was released in the United Kingdom on 29 July 2019. The six-disc Blu-ray collection omitted much of the extra material found on the 40th-anniversary DVD box-set, including the "Don't Knock Yourself Out" documentary, the script PDFs and some episode commentaries. The first half of Andrew Pixley's production book was now illustrated and presented in hardback, and text commentaries for every episode, detailing the production story of the series, were included for the first time. A six-CD set of remastered music was also included, along with some additional extras such as an interview with McGoohan's daughter, Catherine. "Six into One: The Prisoner File" (1984, 45 minutes) is a docudrama presented by Channel 4 after a repeat of the series in the UK. Its central premise was to establish a reason why Number 6 resigned, and the presentation revolved around a new Number 2 communicating with staff (and Number 1). It reviewed scenes from "Danger Man" and "The Prisoner", incorporated interviews with cast members (including McGoohan) and fans, and addressed the political environment giving rise to the series and McGoohan's heavy workload. "The Prisoner Video Companion" (1990, 48 minutes) is an American production with clips, including a few from "Danger Man", and voice-over narration discussing origins, interpretations, meaning, symbolism, etc., in a format modeled on the 1988 Warner book, "The Official Prisoner Companion" by Matthew White and Jaffer Ali. It was released to DVD in the early 2000s as a bonus feature with A&E's release of "The Prisoner" series. MPI also issued "The Best of The Prisoner", a video of series excerpts. 
"Don't Knock Yourself Out" (2007, 95 minutes) documentary issued as part of Network's DVD set for the series' 40th anniversary. It features interviews with around 25 cast and crew members. The documentary received a separate DVD release, featuring an extended cut, in November 2007 accompanied by a featurette titled "Make Sure It Fits", regarding Eric Mival's music editing for the series. "In My Mind" (2017, 78 mins) documentary film by Chris Rodley about his experiences interviewing Patrick McGoohan in 1983 for the "Six into One: The Prisoner File" documentary. Includes previously unseen interview material and insights from McGoohan's daughter, Catherine. In 2009, the series was remade as a miniseries, also titled "The Prisoner", which aired in the U.S. on AMC. The miniseries starred Jim Caviezel as number 6, and Ian McKellen as number 2, and was shot on location in Namibia and South Africa. The series received mixed reviews, mainly unfavourable, with a 45/100 rating by 21 critics and 3.6/10 by 82 users as of July 2018. The series is parodied in the 2000 "The Simpsons" episode "The Computer Wore Menace Shoes". McGoohan reprised his role as Number Six in that episode. The 1998 "The Simpsons" episode "The Joy of Sect" also includes a gag involving Rover. Christopher Nolan was reported to be considering a film version in 2009, but later dropped out of the project. The producer Barry Mendel said a decision to continue with the project depended on the success of the television mini-series. In 2016, Ridley Scott was in talks to direct the screen version. The finale of "The Prisoner" left open-ended questions, generating controversy and letters of outrage. Following this controversial ending, McGoohan "claimed he had to go into hiding for a while".
https://en.wikipedia.org/wiki?curid=31253
Tax law Tax law or revenue law is an area of legal study which deals with the constitutional, common-law, statutory, tax treaty, and regulatory rules that constitute the law applicable to taxation. Primary taxation issues facing governments the world over include; In law schools, "tax law" is a sub-discipline and area of specialist study. U.S. law schools require 30 semester credit hours of required courses and 60 or more hours of electives, for a combined total of at least 90 credit hours completed. Law students must choose available courses on which to focus before graduating with the J.D. degree in the United States. This freedom allows law students to take many tax courses, such as federal taxation, estate and gift tax, and estates and successions, before completing the Juris Doctor and taking the bar exam in a particular U.S. state. Master of Laws (LL.M.) programs are offered in Canada, the United States, the United Kingdom, Australia, the Netherlands and an increasing number of other countries. Many of these programs focus on domestic and international taxation. In the United States, most LL.M. programs require that the candidate be a graduate of an American Bar Association-accredited law school. In other countries a graduate law degree is sufficient for admission to LL.M. in Taxation law programs. The Master of Laws (LL.M.) program is an advanced legal study. The Juris Doctor (JD) program is offered in only a small number of countries, including the United States, Australia, Canada, Hong Kong, Japan, the Philippines, Singapore and the United Kingdom. The courses vary in duration, curriculum and whether or not further training is required, depending on which country the program is in. A list of tax faculty ranked by publication downloads is maintained by Paul Caron at TaxProf Blog.
https://en.wikipedia.org/wiki?curid=31256
Tadoma Tadoma is a method of communication used by deafblind individuals, in which the deafblind person places their thumb on the speaker's lips and their fingers along the jawline. The middle three fingers often fall along the speaker's cheeks, with the little finger picking up the vibrations of the speaker's throat. It is sometimes referred to as 'tactile lipreading', as the deafblind person feels the movement of the lips, as well as vibrations of the vocal cords, puffing of the cheeks and the warm air produced by nasal sounds such as 'N' and 'M'. There are variations in the hand positioning, and the method is sometimes used by people with residual hearing to support that hearing. In some cases, especially if the speaker knows sign language, the deafblind person may use the Tadoma method with one hand, feeling the speaker's face, and at the same time use their other hand to feel the speaker sign the same words. In this way, the two methods reinforce each other, giving the deafblind person a better chance of understanding what the speaker is trying to communicate. In addition, the Tadoma method can provide the deafblind person with a closer connection to speech than they might otherwise have had. This can, in turn, help them to retain speech skills that they developed before going deaf and, in special cases, to learn how to speak brand new words. The Tadoma method was invented by American teacher Sophie Alcorn and developed at the Perkins School for the Blind in Massachusetts. It is named after the first two children to whom it was taught: Winthrop "Tad" Chapman and Oma Simpson. It was hoped that the students would learn to speak by trying to reproduce what they felt on the speaker's face and throat while touching their own face. It is a difficult method to learn and use, and is rarely used nowadays. However, a small number of deafblind people successfully use Tadoma in everyday communication. Helen Keller also used a form of Tadoma.
https://en.wikipedia.org/wiki?curid=31257
Toruń Toruń is a historical city on the Vistula River in north-central Poland, and a UNESCO World Heritage Site. Its population was 201,447 as of December 2019. Previously, it was the capital of the Toruń Voivodeship (1975–1998) and the Pomeranian Voivodeship (1921–1945). Since 1999, Toruń has been a seat of the self-government of the Kuyavian-Pomeranian Voivodeship and, as such, is one of its two capitals, together with Bydgoszcz. The cities and neighboring counties form the Bydgoszcz–Toruń twin city metropolitan area. Toruń is one of the oldest cities in Poland, with the first settlement dating back to the 8th century and later expanded in 1233 by the Teutonic Knights. Over the centuries, it was the home of people of diverse backgrounds and religions. Toruń joined the Hanseatic League in 1280 and by the 17th century was one of its leading trading points, which greatly affected the city's architecture, ranging from Brick Gothic to Mannerism and Baroque. In the early modern age, Toruń was a royal city of Poland and one of the four largest cities in the country at the time. After the partitions of Poland it was part of Prussia and later the German Empire. After Poland regained independence in 1918, Toruń was reincorporated into Polish territory, and during World War II it was spared from bombing and destruction. This allowed the Old Town to be fully preserved, with its iconic central marketplace. Believed to be one of the most beautiful cities in Europe, Toruń is renowned for its gingerbread, a baking tradition dating back nearly a millennium, for its Museum of Gingerbread, and for its large Cathedral. Toruń is noted for its very high standard of living and quality of life. In 1997 the medieval part of the city was designated a UNESCO World Heritage Site. In 2007 the Old Town in Toruń was added to the list of Seven Wonders of Poland. Toruń is the birthplace of astronomer Nicolaus Copernicus. 
The first settlement in the vicinity of Toruń is dated by archaeologists to 1100 BC (Lusatian culture). During early medieval times, in the 7th through 13th centuries, it was the location of an old Slavonic settlement, at a ford in the Vistula river. In spring 1231 the Teutonic Knights crossed the river Vistula opposite Nessau and established a fortress. On 28 December 1233, the Teutonic Knights Hermann von Salza and Hermann Balk signed the foundation charters for Thorn and Kulm. The original document was lost in 1244. The set of rights is known in general as Kulm law. In 1236, due to frequent flooding, the settlement was relocated to the present site of the Old Town. In 1239 Franciscan friars settled in the city, followed in 1263 by Dominicans. In 1264 the adjacent New Town was founded, predominantly to house Toruń's growing population of craftsmen and artisans. In 1280, the city (or, as it was then, both cities) joined the mercantile Hanseatic League, and thus became an important medieval trade centre. The First Peace of Thorn, ending the Polish–Lithuanian–Teutonic War, was signed in the city in February 1411, leaving the town in the hands of the Order. In 1440, the gentry of Thorn formed the Prussian Confederation to further oppose the Knights' policies. The Confederation rose against the Monastic state of the Teutonic Knights in 1454, and its delegation submitted a petition to Polish King Casimir IV Jagiellon asking him to regain power over Prussia as the rightful ruler. An act of incorporation was signed in Kraków (6 March 1454), recognizing the region, including Toruń, as part of the Polish Kingdom. These events led to the Thirteen Years' War. The New and the Old Towns amalgamated in 1454. The citizens of Thorn, enraged by the Order's ruthless exploitation, conquered the Teutonic castle and dismantled the fortifications brick by brick, except for the Gdanisko tower, which was used until the 18th century for gunpowder storage. 
During the war, Toruń financially supported the Polish Army. The Thirteen Years' War ended in 1466 with the Second Peace of Thorn, in which the Teutonic Order ceded its control over the city to Poland. The Polish King granted the town great privileges, similar to those of Gdańsk. Also in 1454, at Dybów Castle in today's left-bank Toruń, the King issued the famous Statutes of Nieszawa, covering a set of privileges for the Polish nobility, an event regarded as the birth of the noble democracy in Poland, which lasted until the country's demise in 1795. Throughout history, the city was home to notable figures, scholars and statesmen. In 1473 Nicolaus Copernicus was born in Toruń, and in 1501 Polish King John I Albert died there; his heart was buried inside St. John's Cathedral. In 1500, the Tuba Dei, then the largest church bell in Poland, was placed in the church of St. John the Baptist, and a bridge across the Vistula was built, the country's longest wooden bridge at that time. In 1506 Toruń became a royal city of Poland. In 1528, the royal mint started operating in Toruń. In 1568 a gymnasium was founded, which after 1594 became one of the leading schools of northern Poland for the centuries to come. Also in 1594, Toruń's first museum ("Musaeum") was established at the school, beginning the city's museum traditions. A city of great wealth and influence, it enjoyed voting rights during the royal election period. Sejms of the Polish–Lithuanian Commonwealth were held in Toruń in 1576 and 1626. In 1557, during the Protestant Reformation, the city adopted Protestantism. Under Mayor Heinrich Stroband (1586–1609), the city became centralized, and administrative power passed into the hands of the city council. In 1595 Jesuits arrived to promote the Counter-Reformation, taking control of St. John's Church. 
The Protestant city officials tried to limit the influx of Catholics into the city, as Catholics (Jesuits and Dominican friars) already controlled most of the churches, leaving only St. Mary's to Protestant citizens. In 1645, at a time when religious conflicts occurred in many other European countries and the disastrous Thirty Years' War was being fought west of Poland, a three-month congress of European Catholics, Lutherans and Calvinists was held in Toruń on the initiative of King Władysław IV Vasa. Known as the "Colloquium Charitativum", it was an important event in the history of interreligious dialogue. In 1677 the Prussian historian and educator Christoph Hartknoch was invited to be director of the Thorn Gymnasium, a post which he held until his death in 1687. Hartknoch wrote histories of Prussia, including of the cities of Royal Prussia. During the Great Northern War (1700–21), the city was besieged by Swedish troops. The restoration of Augustus the Strong as King of Poland was prepared in the town in the Treaty of Thorn (1709) by Russian Tsar Peter the Great. In the second half of the 17th century, tensions between Catholics and Protestants grew, as in much of Europe. In the early 18th century about 50 percent of the populace, especially the gentry and middle class, were German-speaking Protestants, while the other 50 percent were Polish-speaking Roman Catholics. Protestant influence was subsequently pushed back after the Tumult of Thorn of 1724. After the Second Partition of Poland in 1793, the city was annexed by Prussia. It was briefly regained by Poles as part of the Duchy of Warsaw in 1807–1815, even serving as the temporary capital in April and May 1809. In 1809 Toruń was successfully defended by the Poles against the Austrians. After being re-annexed by Prussia in 1815, Toruń was subjected to Germanization and became a strong center of Polish resistance against such policies. 
New Polish institutions were established, such as the Towarzystwo Naukowe w Toruniu ("Toruń Scientific Society"), founded in 1875, a major Polish institution in the Prussian Partition of Poland. In 1976 it was awarded the Commander's Cross with Star of the Order of Polonia Restituta, one of the highest Polish decorations. After World War I, Poland declared independence and regained control over the city. In interwar Poland Toruń was the capital of the Pomeranian Voivodeship. During World War II, Germany occupied the city from 7 September 1939 to 1 February 1945. Local people were subjected to arrests, expulsions, slave labor and executions, especially the Polish elites, as part of the "Intelligenzaktion". From 1940 to 1943, in the northern part of the city there was a German transit camp for Poles expelled from Toruń and the surrounding area, which became infamous for its inhuman sanitary conditions. Over 12,000 Poles passed through the camp, and around 1,000 died there, including about 400 children. Despite this, the city avoided damage during both World Wars, thanks to which it retained its historic architecture, ranging from Gothic through Renaissance and Baroque to 19th- and 20th-century styles. Listed as a UNESCO World Heritage Site since 1997, Toruń has many monuments of architecture dating back to the Middle Ages. The city is famous for having preserved almost intact its medieval spatial layout and many Gothic buildings, all built from brick, including monumental churches, the Town Hall and many burgher houses. Toruń has the largest number of preserved Gothic houses in Poland, many with Gothic wall paintings or wood-beam ceilings from the 16th to the 18th centuries. Toruń, unlike many other historic cities in Poland, escaped substantial destruction in World War II. Particularly left intact was the Old Town, all of whose important architectural monuments are originals, not reconstructions. 
Major renovation projects have been undertaken in recent years to improve the condition and external presentation of the Old Town. Besides the renovation of various buildings, projects such as the reconstruction of the pavement of the streets and squares (restoring their historical appearance), and the introduction of new plants, trees and objects of 'small architecture', are underway. Numerous buildings and other constructions, including the city walls along the boulevard, are illuminated at night, creating an impressive effect, probably unique among Polish cities with respect to the size of Toruń's Old Town and the scale of the illumination project itself. Toruń is also home to the Zoo and Botanical Garden, whose sections opened in 1965 and 1797 respectively; it is one of the city's popular tourist attractions. Toruń is divided into 24 administrative districts ("dzielnice") or boroughs, each with a degree of autonomy within its own municipal government. The districts include: Barbarka, Bielany, Bielawy, Bydgoskie Przedmieście, Chełmińskie Przedmieście, Czerniewice, Glinki, Grębocin nad Strugą, Jakubskie Przedmieście, Kaszczorek, Katarzynka, Koniuchy, Mokre, Na Skarpie, Piaski, Podgórz, Rubinkowo, Rudak, Rybaki, Stare Miasto ("Old Town"), Starotoruńskie Przedmieście, Stawki, Winnica, Wrzosy. The colors of Toruń are white and blue, arranged horizontally in two equal bands, white on top and blue below. The flag of the city of Toruń is a bipartite sheet: the upper field is white, the lower field is blue. If the flag is hung vertically, the upper edge of the flag must be on the left. A flag with the coat of arms is also in use; the ratio of the height of the coat of arms to the width of the flag is 1:2. The climate can be described as humid continental (Köppen "Dfb") if the commonly used 0 °C coldest-month isotherm is applied, or as oceanic ("Cfb") under Köppen's original −3 °C isotherm, since the city's coldest-month mean for the 1961–1990 normals lies 0.2 °C above that threshold; updated data would probably place it definitively in the latter. 
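The two-way classification described above hinges on a single coldest-month threshold. A minimal sketch of that isotherm rule, assuming a warm-summer climate; the function name and temperature figures are illustrative, not official station data:

```python
def koppen_group(coldest_month_c, warmest_month_c, isotherm_c=0.0):
    """Distinguish oceanic 'Cfb' from humid continental 'Dfb'.

    Both labels assume a warm (but not hot) summer; the C/D split depends
    on whether the coldest month's mean stays at or above the chosen
    isotherm (0 degC in common modern usage, -3 degC in Koppen's original).
    """
    if warmest_month_c < 10.0:
        # A warmest month below 10 degC would fall outside the C/D groups.
        raise ValueError("warmest month below 10 degC: outside the C/D groups")
    return "Cfb" if coldest_month_c >= isotherm_c else "Dfb"

# A coldest-month mean of -2.8 degC (0.2 degC above the -3 degC threshold)
# flips the label depending on which isotherm is chosen:
print(koppen_group(-2.8, 18.5))                   # Dfb under the 0 degC isotherm
print(koppen_group(-2.8, 18.5, isotherm_c=-3.0))  # Cfb under the -3 degC isotherm
```

This is why a borderline city can legitimately carry either label: the data are the same, and only the threshold convention differs.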
In general terms, the city lies close to the original west–east boundary between the C and D climate groups proposed by climatologist Wladimir Köppen. Toruń is in the transition zone between the milder climates of western and northern Poland and the more extreme ones of the south (warmer summers) and the east (colder winters), though its climate is not much different from that of Kraków or Warsaw, apart from slightly cooler winters and less hot summers. Lying close to definitely continental climates, the city shows high variability caused by the contact of eastern continental air masses with western oceanic ones. This is influenced by the geographical location of the city – the Toruń Basin to the south, and the Vistula Valley to the north. The most recent statistics show a decrease in the population of the city, from 211,169 in 2001 (its highest) to 202,562 in 2018. Among the demographic trends influencing this decline are suburbanisation, migration to larger urban centres, and wider trends observed in the whole of Poland, such as general population decline, slowed down by immigration in 2017. The birth rate in the city in 2017 was 0.75. Low birthrates have been consistent in the city for the first two decades of the 21st century. The official forecasts from Statistics Poland state that by 2050 the city population will have declined to 157,949. Inside the city itself, most of the population is concentrated on the right (northern) bank of the Vistula river. Two of the most densely populated areas are Rubinkowo and Na Skarpie, housing projects built mostly in the 1970s and 1980s, located between the central and easternmost districts; their total population is about 70,000. The Bydgoszcz–Toruń metro area of Toruń and Bydgoszcz, their counties, and a number of smaller towns may in total have a population of as much as 800,000. 
Thus the area contains about one third of the population of the Kuyavia-Pomerania region (which has about 2.1 million inhabitants). The transport network in the city has undergone major development in recent years. The partial completion of the ring road (east and south sections), the completion of the second bridge (2013) and various road and cycling-lane improvements, including construction of the Trasa Średnicowa, have decidedly improved traffic in the city. However, noise barriers erected along the new or refurbished roads have been criticised as not conducive to a beautiful urban landscape. The extensive roadworks have also drawn attention to the declining population numbers, raising doubts that the city may have over-built for the future number of road users, as demographic forecasts from Statistics Poland predict a reduction in population of almost a quarter by 2050. The city's public transport system comprises five tram lines and about 40 bus routes, covering the city and some of the neighboring communities. Toruń is situated at a major road junction, one of the most important in Poland. The A1 highway reaches Toruń, and a southern beltway surrounds the city. Besides these, the European route E75 and a number of domestic roads (numbered 10, 15, and 80) run through the city. With three main railway stations (Toruń Główny, Toruń Miasto and Toruń Wschodni), the city is a major rail junction, with two important lines crossing there (Warszawa–Bydgoszcz and Wrocław–Olsztyn). Two other lines stem from Toruń, toward Malbork and Sierpc. The rail connection with Bydgoszcz is run under the name "BiT City" as a "metropolitan rail" service. Its main purpose is to allow traveling between and within these cities using one ticket. A joint venture of Toruń, Bydgoszcz, Solec Kujawski and the voivodeship, it is considered important in integrating the Bydgoszcz-Toruń metropolitan area. 
A major modernization of the BiT City rail route, as well as the purchase of completely new vehicles to serve the line, is planned for 2008 and 2009. Technically, it will allow travel between the Toruń East and Bydgoszcz Airport stations in approximately half an hour. In a few years' time "BiT City" will be integrated with the local transportation systems of Toruń and Bydgoszcz, thus creating a uniform metropolitan transportation network – with all necessary funds having been secured in 2008. Since September 2008, the "one-ticket" solution has also been introduced for the rail connection with Włocławek, as a "regional ticket". The same is planned for the connection with Grudziądz. Two bus depots serve to connect the city with other towns and cities in Poland. A small sport airfield exists in Toruń; however, a modernization of the airport is being seriously considered, with a number of investors interested in it. Independently of this, Bydgoszcz Ignacy Jan Paderewski Airport, located about from Toruń city centre, serves the whole Bydgoszcz-Toruń metropolitan area, with a number of regular flights to European cities. Although a medium-sized city, Toruń is the site of the headquarters of some of the largest companies in Poland, or at least of their subsidiaries. The official unemployment rate, as of September 2008, is 5.4%. In 2006, construction of new plants owned by Sharp Corporation and other companies of mainly Japanese origin started in the neighboring community of Łysomice - about from city centre. The facilities under construction are located in a newly created special economic zone. As a result of the cooperation of the companies mentioned above, a vast high-tech complex is to be constructed in the next few years, providing as many as 10,000 jobs (a prediction for 2010) at a cost of about 450 million euros. The creation of another special economic zone is also being considered, this time inside the city limits. 
Thanks to its architectural heritage, Toruń is visited by more than 1.5 million tourists a year (1.6 million in 2007). This makes tourism an important branch of the local economy, although neither the time individual tourists spend in the city nor the number of hotels able to serve them is yet considered satisfactory. Major investments in the renovation of the city's monuments, the building of new hotels (including high-standard ones), improved promotion, and the launch of new cultural and scientific events and facilities give very good prospects for Toruń's tourism. In recent years Toruń has been a site of intense construction investment, mainly residential and in its transportation network. The latter has been possible partly due to the use of European Union funds assigned to new member states. Toruń city county generates by far the highest number of new dwellings built each year among all Kuyavian-Pomeranian counties, both relative to its population and in absolute terms. This has led to the almost complete rebuilding of some districts. Many major construction projects are either under development or are to be launched soon - the value of some of them exceeding 100 million euros. They include a new speedway stadium, major shopping and entertainment centres, a commercial complex popularly called the "New Centre of Toruń", a music theater, a centre of contemporary art, hotels, office buildings, facilities for the Nicolaus Copernicus University, roads and tram routes, sewage and fresh water delivery systems, residential projects, the possibility of a new bridge over the Vistula, and more. Construction of the A1 motorway and the BiT City fast metropolitan railway also directly affects the city. About 25,000 local firms are registered in Toruń. Toruń has two drama theatres ("Teatr im. 
Wilama Horzycy" with three stages and "Teatr Wiczy"), two children's theatres ("Baj Pomorski" and "Zaczarowany Świat"), two music theatres ("Mała Rewia", "Studencki Teatr Tańca"), and numerous other theatre groups. The city hosts, among other events, the international theatre festival "Kontakt" each May. The "Baj Pomorski" building has recently been completely reconstructed. It is now one of the most modern cultural facilities in the city, with its front elevation in the shape of a gigantic chest of drawers. It is located at the south-east edge of the Old Town. Toruń has two cinemas, including a Cinema City with over 2,000 seats. Over ten major museums document the history of Toruń and the region. Among others, the "House of Kopernik" and the accompanying museum commemorate Nicolaus Copernicus and his revolutionary work, while the university museum presents the history of the city's academic past. The Tony Halik Travelers' Museum (Muzeum Podróżników im. Tony Halika) was established in 2003 after Elżbieta Dzikowska donated to the citizens of Toruń a collection of objects from various countries and cultures following the death of her husband, the famous explorer and writer Tony Halik. It is managed by the District Museum in Toruń. The Centre of Contemporary Art ("Centrum Sztuki Współczesnej" - "CSW") opened in June 2008 and is one of the most important cultural facilities of this kind in Poland. The modern building is located in the very centre of the city, adjacent to the Old Town. The Toruń Symphonic Orchestra (formerly the Toruń Chamber Orchestra) is well rooted in the Toruń cultural landscape. Toruń is home to a planetarium (located downtown) and an astronomical observatory (located in the nearby community of Piwnice). The latter boasts the largest radio telescope in Central Europe, with a diameter of , second only to the Effelsberg radio telescope. Toruń is well known for Toruń gingerbread, a type of piernik often made in elaborate molds. 
Muzeum Piernika in Toruń is Europe's only museum dedicated to gingerbread. The 15-year-old composer Fryderyk Chopin was smitten with Toruń gingerbread when he visited his godfather, Fryderyk Skarbek, there in the summer of 1825. Toruń is a center of conservative Roman Catholic culture. The Redemptorist Tadeusz Rydzyk has organized here Radio Maryja, Telewizja Trwam, and a college whose students contribute to these media outlets; a museum is now under construction as well. Over thirty elementary and primary schools and over ten high schools make up the educational base of Toruń. Besides these, students can also attend a handful of private schools. The largest institution of higher education in Toruń, Nicolaus Copernicus University, serves over 20 thousand students; it was founded in 1945 on the basis of the Toruń Scientific Society, Stefan Batory University in Wilno, and Jan Kazimierz University in Lviv. The existence of a high-ranked, high-profile university with so many students plays a great role in the city's position and importance in general, as well as in creating the image of Toruń's streets and clubs filled with crowds of young people. It also has a serious influence on the local economy. Other public institutions of higher education: There are also a number of private higher education facilities: Also located in Toruń is one of the oldest high schools in Poland, , which dates back to a gymnasium founded in 1568. Six hospitals of various specializations provide medical service for Toruń itself, its surrounding area and the region in general. The two largest of these hospitals, recently run by the voivodeship, are to be taken over by Nicolaus Copernicus University and run as its clinical units. At least one of them is to change its status in 2008, with the formal procedures very advanced. In addition, there are a number of other healthcare facilities in the city. 
Honouring Toruń's sister relationship with Philadelphia, Pennsylvania, the Bulwar Filadelfijski ("Philadelphia Boulevard"), a long street running mostly between the Vistula River and the walls of the Old Town, bears its name. The Ślimak Getyński is one of the lanes connecting Piłsudski Bridge / John Paul II Avenue with Philadelphia Boulevard at their downtown interchange. It honours the relationship with Göttingen, its name derived from the street's half-circular shape (the Polish word "ślimak" meaning "snail"). Toruń is twinned with:
https://en.wikipedia.org/wiki?curid=31258
Tigris The Tigris () (Turkish: "Dicle") is the eastern of the two great rivers that define Mesopotamia, the other being the Euphrates. The river flows south from the mountains of southeastern Turkey through Iraq and empties into the Persian Gulf. The Tigris is 1,750 km long, rising in the Taurus Mountains of eastern Turkey about 25 km southeast of the city of Elazig and about 30 km from the headwaters of the Euphrates. The river then flows for 400 km through Turkish Kurdistan before becoming part of the Syria-Turkey border. This stretch of 44 km is the only part of the river that is located in Syria. Among its tributaries are the Garzan, Anbarçayi, Batman, and the Great and Little Zab. Close to its confluence with the Euphrates, the Tigris splits into several channels. First, the artificial Shatt al-Hayy branches off, to join the Euphrates near Nasiriyah. Second, the Shatt al-Muminah and Majar al-Kabir branch off to feed the Central Marshes. Further downstream, two other distributary channels branch off (the Al-Musharrah and Al-Kahla) to feed the Hawizeh Marshes. The main channel continues southwards and is joined by the Al-Kassarah, which drains the Hawizeh Marshes. Finally, the Tigris joins the Euphrates near al-Qurnah to form the Shatt-al-Arab. According to Pliny and other ancient historians, the Euphrates originally had its outlet into the sea separate from that of the Tigris. Baghdad, the capital of Iraq, stands on the banks of the Tigris. The port city of Basra straddles the Shatt al-Arab. In ancient times, many of the great cities of Mesopotamia stood on or near the Tigris, drawing water from it to irrigate the civilization of the Sumerians. Notable Tigris-side cities included Nineveh, Ctesiphon, and Seleucia, while the city of Lagash was irrigated by the Tigris via a canal dug around 2900 B.C. The Tigris has long been an important transport route in a largely desert country. 
Shallow-draft vessels can go as far as Baghdad, but rafts are needed for transport upstream to Mosul. General Francis Rawdon Chesney hauled two steamers overland through Syria in 1836 to explore the possibility of an overland and river route to India. One steamer, the "Tigris", sank in a storm that killed twenty people. Chesney proved the river navigable to powered craft. Later, the Euphrates and Tigris Steam Navigation Company was established in 1861 by the Lynch Brothers trading company, which had two steamers in service. By 1908 ten steamers were on the river. Tourists boarded steam yachts to venture inland, as this was the first age of archaeological tourism, and the sites of Ur and Ctesiphon became popular with European travelers. In the First World War, during the British conquest of Ottoman Mesopotamia, Indian and Thames River paddle steamers were used to supply General Charles Townshend's army during the Siege of Kut and the Fall of Baghdad (1917). The Tigris Flotilla included the vessels Clio, Espiegle, Lawrence and Odin, the armed tug Comet, the armed launches Lewis Pelly, Miner, Shaitan and Sumana, and the sternwheelers Muzaffari/Muzaffar. These were joined by the Royal Navy Fly-class gunboats Butterfly, Cranefly, Dragonfly, Mayfly, Sawfly and Snakefly, and by Mantis, Moth, and Tarantula. After the war, river trade declined in importance during the 20th century as the Basra-Baghdad-Mosul railway, a previously unfinished portion of the Baghdad Railway, was completed and roads took over much of the freight traffic. The Ancient Greek form "Tigris" () meaning "tiger" (if treated as Greek) was adapted from Old Persian "Tigrā", itself from Elamite "Tigra", itself from Sumerian "Idigna". The original Sumerian "Idigna" or "Idigina" was probably from *"id (i)gina" "running water", which can be interpreted as "the swift river", contrasted with its neighbour, the Euphrates, whose leisurely pace caused it to deposit more silt and build up a higher bed than the Tigris. 
The Sumerian form was borrowed into Akkadian as "Idiqlat", and from there into the other Semitic languages (cf. Hebrew "Ḥîddeqel", Syriac "Deqlaṯ", Arabic "Dijlah"). Another name for the Tigris used in Middle Persian was "Arvand Rud", literally "swift river". Today, however, "Arvand Rud" (New Persian: ) refers to the confluence of the Euphrates and Tigris rivers (known in Arabic as the Shatt al-Arab). In Kurdish, it is also known as "Ava Mezin", "the Great Water". The name of the Tigris in languages that have been important in the region: The Tigris is heavily dammed in Iraq and Turkey to provide water for irrigating the arid and semi-desert regions bordering the river valley. Damming has also been important for averting floods in Iraq, to which the Tigris has historically been notoriously prone following the April melting of snow in the Turkish mountains. Recent Turkish damming of the river has been the subject of some controversy, both for its environmental effects within Turkey and for its potential to reduce the flow of water downstream. Mosul Dam is the largest dam in Iraq. Water from both rivers has been used as a means of pressure during conflicts. In 2014, a major breakthrough was achieved in developing consensus between multiple stakeholder representatives of Iraq and Turkey on a plan of action for promoting the exchange and calibration of data and standards pertaining to Tigris river flows. This consensus, referred to as the "Geneva Consensus On Tigris River", was reached at a meeting organized in Geneva by the think tank Strategic Foresight Group. In February 2016, the United States Embassy in Iraq and the Prime Minister of Iraq, Haider al-Abadi, issued warnings that Mosul Dam could collapse. The United States warned people to evacuate the floodplain of the Tigris because between 500,000 and 1.5 million people were at risk of drowning in a flash flood if the dam collapsed, and the major Iraqi cities of Mosul, Tikrit, Samarra, and Baghdad were at risk. 
In Sumerian mythology, the Tigris was created by the god Enki, who filled the river with flowing water. In Hittite and Hurrian mythology, "Aranzah" (or "Aranzahas" in the Hittite nominative form) is the Hurrian name of the Tigris River, which was divinized. He was the son of Kumarbi and the brother of Teshub and Tašmišu, one of the three gods spat out of Kumarbi's mouth onto Mount Kanzuras. Later he colluded with Anu and Teshub to destroy Kumarbi (the Kumarbi Cycle). The Tigris appears twice in the Old Testament. First, in the Book of Genesis, it is the third of the four rivers branching off the river issuing out of the Garden of Eden. The second mention is in the Book of Daniel, wherein Daniel states he received one of his visions "when I was by that great river the Tigris". The Tigris River is also mentioned in Islam. The tomb of Imam Ahmad Bin Hanbal and Syed Abdul Razzaq Jilani is in Baghdad, and the flow of the Tigris restricts the number of visitors. The river featured on the coat of arms of Iraq from 1932 to 1959.
https://en.wikipedia.org/wiki?curid=31259
Titration Titration (also known as titrimetry and volumetric analysis) is a common laboratory method of quantitative chemical analysis to determine the concentration of an identified analyte (a substance to be analyzed). A reagent, termed the "titrant" or "titrator", is prepared as a standard solution of known concentration and volume. The titrant reacts with a solution of "analyte" (which may also be termed the "titrand") to determine the analyte's concentration. The volume of titrant that reacted with the analyte is termed the "titration volume". The word "titration" descends from the French word "tiltre" (1543), meaning the proportion of gold or silver in coins or in works of gold or silver; i.e., a measure of fineness or purity. "Tiltre" became "titre", which thus came to mean the "fineness of alloyed gold", and then the "concentration of a substance in a given sample". In 1828, the French chemist Joseph Louis Gay-Lussac first used "titre" as a verb ("titrer"), meaning "to determine the concentration of a substance in a given sample". Volumetric analysis originated in late 18th-century France. François-Antoine-Henri Descroizilles () developed the first burette (which was similar to a graduated cylinder) in 1791. Gay-Lussac developed an improved version of the burette that included a side arm, and invented the terms "pipette" and "burette" in an 1824 paper on the standardization of indigo solutions. The first true burette was invented in 1845 by the French chemist Étienne Ossian Henry (1798–1873). A typical titration begins with a beaker or Erlenmeyer flask containing a very precise amount of the analyte and a small amount of indicator (such as phenolphthalein) placed underneath a calibrated burette or chemistry pipetting syringe containing the titrant. Small volumes of the titrant are then added to the analyte and indicator until the indicator changes color in reaction to the titrant saturation threshold, representing arrival at the endpoint of the titration. 
Depending on the endpoint desired, single drops or less than a single drop of the titrant can make the difference between a permanent and a temporary change in the indicator. When the endpoint of the reaction is reached, the volume of reactant consumed is measured and used to calculate the concentration of the analyte by Ca = (Ct × Vt × M) / Va, where Ca is the concentration of the analyte, typically in molarity; Ct is the concentration of the titrant, typically in molarity; Vt is the volume of the titrant used, typically in liters; M is the mole ratio of the analyte and reactant from the balanced chemical equation; and Va is the volume of the analyte used, typically in liters. Typical titrations require titrant and analyte to be in a liquid (solution) form. Though solids are usually dissolved into an aqueous solution, other solvents such as glacial acetic acid or ethanol are used for special purposes (as in petrochemistry). Concentrated analytes are often diluted to improve accuracy. Many non-acid–base titrations require a constant pH during the reaction. Therefore, a buffer solution may be added to the titration chamber to maintain the pH. In instances where two reactants in a sample may react with the titrant and only one is the desired analyte, a separate masking solution may be added to the reaction chamber to eliminate the effect of the unwanted ion. Some reduction-oxidation (redox) reactions may require heating the sample solution and titrating while the solution is still hot to increase the reaction rate. For instance, the oxidation of some oxalate solutions requires heating to maintain a reasonable rate of reaction. A titration curve is a graph whose "x"-coordinate represents the volume of titrant added since the beginning of the titration, and whose "y"-coordinate represents the concentration of the analyte at the corresponding stage of the titration (in an acid–base titration, the "y"-coordinate usually represents the pH of the solution). 
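The concentration calculation described above (Ca from Ct, Vt, the mole ratio M, and Va) can be sketched in a few lines of Python; the function name and example figures are hypothetical illustrations, not part of the original text:

```python
def analyte_concentration(c_titrant, v_titrant, mole_ratio, v_analyte):
    """Ca = (Ct * Vt * M) / Va -- concentrations in mol/L, volumes in liters."""
    return c_titrant * v_titrant * mole_ratio / v_analyte

# Hypothetical example: 25.0 mL of 0.100 M NaOH neutralizes 50.0 mL of HCl
# (1:1 mole ratio), giving an HCl concentration of 0.0500 M.
c_hcl = analyte_concentration(0.100, 0.0250, 1, 0.0500)
```

Note that volumes must be expressed in the same unit (here liters) for the ratio to come out correctly.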
In an acid–base titration, the titration curve represents the strength of the corresponding acid and base. For a strong acid and a strong base, the curve will be relatively smooth and very steep near the equivalence point. Because of this, a small change in titrant volume near the equivalence point results in a large pH change, and many indicators would be appropriate (for instance litmus, phenolphthalein or bromothymol blue). If one reagent is a weak acid or base and the other is a strong acid or base, the titration curve is irregular and the pH shifts less with small additions of titrant near the equivalence point. For example, the titration curve for the titration between oxalic acid (a weak acid) and sodium hydroxide (a strong base) is pictured. The equivalence point occurs between pH 8 and 10, indicating the solution is basic at the equivalence point and an indicator such as phenolphthalein would be appropriate. Titration curves corresponding to weak bases and strong acids behave similarly, with the solution being acidic at the equivalence point and indicators such as methyl orange and bromothymol blue being most appropriate. Titrations between a weak acid and a weak base have titration curves which are very irregular. Because of this, no definite indicator may be appropriate and a pH meter is often used to monitor the reaction. The type of function that can be used to describe the curve is termed a sigmoid function. There are many types of titrations with different procedures and goals. The most common types of qualitative titration are acid–base titrations and redox titrations. Acid–base titrations depend on the neutralization between an acid and a base when mixed in solution. In addition to the sample, an appropriate pH indicator is added to the titration chamber, representing the pH range of the equivalence point. The acid–base indicator indicates the endpoint of the titration by changing color. 
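The steep jump near the equivalence point of a strong acid–strong base titration can be illustrated numerically. The sketch below assumes 25 °C (Kw = 1e-14) and complete dissociation of both reagents; names and figures are illustrative:

```python
import math

def strong_titration_ph(c_acid, v_acid, c_base, v_base, kw=1e-14):
    """pH when a strong acid is titrated with a strong base (volumes in L)."""
    # Net excess of strong acid (positive) or strong base (negative),
    # diluted into the combined volume.
    excess = (c_acid * v_acid - c_base * v_base) / (v_acid + v_base)
    if excess > 0:                       # leftover strong acid
        return -math.log10(excess)
    if excess < 0:                       # leftover strong base
        return 14 + math.log10(-excess)
    return -math.log10(math.sqrt(kw))    # equivalence point: pH 7 at 25 °C

# 50.0 mL of 0.100 M HCl titrated with 0.100 M NaOH: one extra millilitre
# either side of equivalence swings the pH from about 3 to about 11.
```

This is exactly the behavior that lets many different indicators work for strong acid–strong base titrations.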
The endpoint and the equivalence point are not exactly the same, because the equivalence point is determined by the stoichiometry of the reaction while the endpoint is just the color change of the indicator. Thus, a careful selection of the indicator will reduce the indicator error. For example, if the equivalence point is at a pH of 8.4, then the phenolphthalein indicator would be used instead of Alizarin Yellow because phenolphthalein would reduce the indicator error. Common indicators, their colors, and the pH range in which they change color are given in the table above. When more precise results are required, or when the reagents are a weak acid and a weak base, a pH meter or a conductance meter is used. For very strong bases, such as organolithium reagents, metal amides, and hydrides, water is generally not a suitable solvent, and indicators whose pKa are in the range of aqueous pH changes are of little use. Instead, the titrant and indicator used are much weaker acids, and anhydrous solvents such as THF are used. The approximate pH during titration can be estimated by three kinds of calculations. Before the beginning of the titration, [H+] is calculated for the aqueous solution of weak acid before adding any base. When the number of moles of base added equals the number of moles of initial acid (the so-called equivalence point), hydrolysis occurs and the pH is calculated from the hydrolysis of the conjugate base of the acid titrated. Between the starting and end points, [H+] is obtained from the Henderson-Hasselbalch equation, and the titration mixture is treated as a buffer. In the Henderson-Hasselbalch equation, the acid and conjugate-base concentrations are taken to be the molarities that would have been present even without dissociation or hydrolysis. In a buffer, [H+] can be calculated exactly, but the dissociation of the acid, the hydrolysis of A- and the self-ionization of water must be taken into account. 
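The three stages of the approximate calculation just described — initial weak acid, buffer region via Henderson-Hasselbalch, and hydrolysis of the conjugate base at equivalence — can be sketched as follows. This is an approximation, not the exact charge-balance treatment; the Ka and concentrations in the example are those of acetic acid and are illustrative:

```python
import math

KW = 1e-14  # ion product of water at 25 °C

def weak_acid_titration_ph(ka, c_acid, v_acid, c_base, v_base):
    """Approximate pH at each stage of a weak-acid / strong-base titration."""
    n_acid = c_acid * v_acid
    n_base = c_base * v_base
    v_tot = v_acid + v_base
    if n_base == 0:                      # before titration: [H+] ~ sqrt(Ka*Ca)
        return -math.log10(math.sqrt(ka * c_acid))
    if n_base < n_acid:                  # buffer region: Henderson-Hasselbalch
        return -math.log10(ka) + math.log10(n_base / (n_acid - n_base))
    if n_base == n_acid:                 # equivalence: hydrolysis of A-
        c_salt = n_acid / v_tot
        oh = math.sqrt(KW / ka * c_salt)
        return 14 + math.log10(oh)
    # past equivalence: excess strong base dominates
    return 14 + math.log10((n_base - n_acid) / v_tot)

# Acetic acid (Ka = 1.8e-5), 50.0 mL of 0.100 M, titrated with 0.100 M NaOH:
# at half-equivalence the pH equals pKa, and at equivalence it lies between
# 8 and 10, consistent with the oxalic acid example above.
```

At half-equivalence n_base equals n_acid − n_base, the logarithmic term vanishes, and the pH reads off pKa directly.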
Four independent equations must be used: In the equations, formula_6 and formula_7 are the moles of acid () and salt ( where X is the cation), respectively, used in the buffer, and the volume of solution is . The law of mass action is applied to the ionization of water and the dissociation of acid to derive the first and second equations. The mass balance is used in the third equation, where the sum of formula_8 and formula_9 must equal the number of moles of dissolved acid and base, respectively. Charge balance is used in the fourth equation, where the left hand side represents the total charge of the cations and the right hand side represents the total charge of the anions: formula_10 is the molarity of the cation (e.g. sodium, if the sodium salt of the acid or sodium hydroxide is used in making the buffer). Redox titrations are based on a reduction-oxidation reaction between an oxidizing agent and a reducing agent. A potentiometer or a redox indicator is usually used to determine the endpoint of the titration, as when one of the constituents is the oxidizing agent potassium dichromate. In that case the color change of the solution from orange to green is not definite, and an indicator such as sodium diphenylamine is used. Analysis of wines for sulfur dioxide requires iodine as an oxidizing agent. In this case, starch is used as an indicator; a blue starch-iodine complex is formed in the presence of excess iodine, signalling the endpoint. Some redox titrations do not require an indicator, due to the intense color of the constituents. For instance, in permanganometry a slight persisting pink color signals the endpoint of the titration because of the color of the excess oxidizing agent potassium permanganate. 
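For the exact treatment, [H+] can be found numerically instead of by hand. The sketch below solves the charge balance [H+] + [Na+] = [OH-] + [A-] for an HA/NaA buffer by bisection, using the mass balance and the acid-dissociation law to express [A-]; the function and variable names are illustrative assumptions:

```python
import math

KW = 1e-14  # ion product of water at 25 °C

def exact_buffer_ph(ka, c_acid, c_salt, lo=1e-14, hi=1.0):
    """Solve the charge balance [H+] + [Na+] = [OH-] + [A-] for [H+]."""
    c_total = c_acid + c_salt            # mass balance: all A ends up as HA or A-

    def charge_imbalance(h):
        a_minus = c_total * ka / (ka + h)        # dissociation equilibrium
        return h + c_salt - KW / h - a_minus     # cations minus anions

    for _ in range(200):
        mid = math.sqrt(lo * hi)                 # bisect in log space
        if charge_imbalance(mid) > 0:            # too acidic: root is lower
            hi = mid
        else:
            lo = mid
    return -math.log10(mid)

# For 0.1 M acetic acid with 0.1 M sodium acetate this lands very close to
# pKa, as the Henderson-Hasselbalch approximation predicts.
```

Bisecting in log space keeps the search stable across the fourteen orders of magnitude that [H+] can span.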
In iodometry, at sufficiently large concentrations, the disappearance of the deep red-brown triiodide ion can itself be used as an endpoint, though at lower concentrations sensitivity is improved by adding starch indicator, which forms an intensely blue complex with triiodide. Gas phase titrations are titrations done in the gas phase, specifically as methods for determining reactive species by reaction with an excess of some other gas acting as the titrant. In one common gas phase titration, gaseous ozone is titrated with nitric oxide according to the reaction O3 + NO → NO2 + O2. After the reaction is complete, the remaining titrant and product are quantified (e.g., by Fourier transform infrared spectroscopy, FT-IR); this is used to determine the amount of analyte in the original sample. Gas phase titration has several advantages over simple spectrophotometry. First, the measurement does not depend on path length, because the same path length is used for the measurement of both the excess titrant and the product. Second, the measurement does not depend on a linear change in absorbance as a function of analyte concentration as defined by the Beer-Lambert law. Third, it is useful for samples containing species which interfere at wavelengths typically used for the analyte. Complexometric titrations rely on the formation of a complex between the analyte and the titrant. In general, they require specialized complexometric indicators that form weak complexes with the analyte. The most common example is the use of starch indicator to increase the sensitivity of iodometric titration, the dark blue complex of starch with iodine and iodide being more visible than iodine alone. Other complexometric indicators are Eriochrome Black T for the titration of calcium and magnesium ions, and the chelating agent EDTA used to titrate metal ions in solution. 
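The quantification step of a gas-phase titration reduces to simple stoichiometric bookkeeping: since the ozone/nitric-oxide reaction is 1:1, the analyte amount equals the titrant consumed. A minimal sketch (function name and figures are hypothetical; any consistent amount unit, e.g. ppb or moles, works):

```python
def gas_phase_analyte(n_titrant_added, n_titrant_remaining):
    """O3 + NO -> NO2 + O2 reacts 1:1, so the ozone originally present
    equals the nitric oxide consumed (added minus remaining)."""
    return n_titrant_added - n_titrant_remaining

# Hypothetical: 120.0 ppb of NO added, 35.0 ppb measured after reaction
# implies 85.0 ppb of O3 was present.
o3 = gas_phase_analyte(120.0, 35.0)
```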
Zeta potential titrations are titrations in which the completion is monitored by the zeta potential, rather than by an indicator, in order to characterize heterogeneous systems, such as colloids. One of the uses is to determine the iso-electric point when surface charge becomes zero, achieved by changing the pH or adding surfactant. Another use is to determine the optimum dose for flocculation or stabilization. An assay is a type of biological titration used to determine the concentration of a virus or bacterium. Serial dilutions are performed on a sample in a fixed ratio (such as 1:1, 1:2, 1:4, 1:8, etc.) until the last dilution does not give a positive test for the presence of the virus. The positive or negative value may be determined by inspecting the infected cells visually under a microscope or by an immunoenzymetric method such as enzyme-linked immunosorbent assay (ELISA). This value is known as the titer. Different methods to determine the endpoint include: Though the terms equivalence point and endpoint are often used interchangeably, they are different terms. "Equivalence point" is the theoretical completion of the reaction: the volume of added titrant at which the number of moles of titrant is equal to the number of moles of analyte, or some multiple thereof (as in polyprotic acids). "Endpoint" is what is actually measured, a physical change in the solution as determined by an indicator or an instrument mentioned above. There is a slight difference between the endpoint and the equivalence point of the titration. This error is referred to as an indicator error, and it is indeterminate. Back titration is a titration done in reverse; instead of titrating the original sample, a known excess of standard reagent is added to the solution, and the excess is titrated. A back titration is useful if the endpoint of the reverse titration is easier to identify than the endpoint of the normal titration, as with precipitation reactions. 
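The serial-dilution titer determination described above can be sketched in code. This is a hypothetical helper, assuming a dilution series 1:1, 1:ratio, 1:ratio², … as in the text's example:

```python
def titer(test_results, ratio=2):
    """test_results: outcomes (True = pathogen detected) of successive
    dilutions 1:1, 1:ratio, 1:ratio**2, ...  The titer is the reciprocal of
    the last dilution that still tests positive, or None if all are negative."""
    last_positive = None
    for step, positive in enumerate(test_results):
        if not positive:
            break
        last_positive = ratio ** step
    return last_positive

# Positive at 1:1, 1:2 and 1:4 but negative at 1:8 gives a titer of 4,
# conventionally reported as 1:4.
```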
Back titrations are also useful if the reaction between the analyte and the titrant is very slow, or when the analyte is in a non-soluble solid. The titration process creates solutions with compositions ranging from pure acid to pure base. Identifying the pH associated with any stage in the titration process is relatively simple for monoprotic acids and bases. The presence of more than one acid or base group complicates these computations. Graphical methods, such as the equiligraph, have long been used to account for the interaction of coupled equilibria. These graphical solution methods are simple to implement; however, they are used infrequently.
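The back-titration arithmetic works out as: analyte reacted = standard reagent added minus the excess recovered by titrating back. A minimal sketch with hypothetical names and figures:

```python
def back_titration_moles(n_standard_added, c_back_titrant, v_back_titrant,
                         mole_ratio=1):
    """Moles of standard reagent consumed by the analyte: the known amount
    added, minus the excess found by the back titration (mole_ratio converts
    back-titrant moles into standard-reagent moles; volume in liters)."""
    n_excess = c_back_titrant * v_back_titrant * mole_ratio
    return n_standard_added - n_excess

# Hypothetical: 0.0100 mol of HCl is added to an insoluble carbonate sample;
# back-titrating the excess takes 12.5 mL of 0.100 M NaOH (1:1), so the
# sample consumed 0.0100 - 0.00125 = 0.00875 mol of HCl.
n_consumed = back_titration_moles(0.0100, 0.100, 0.0125)
```

The moles consumed are then converted to the analyte amount through the stoichiometry of the analyte's own reaction with the standard reagent.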
https://en.wikipedia.org/wiki?curid=31260
Twinkle, Twinkle, Little Star "Twinkle, Twinkle, Little Star" is a popular English lullaby. The lyrics are from an early-19th-century English poem by Jane Taylor, "The Star". The poem, which is in couplet form, was first published in 1806 in "Rhymes for the Nursery", a collection of poems by Taylor and her sister Ann. It is sung to the tune of the French melody "Ah! vous dirai-je, maman", which was published in 1761 and later arranged by several composers, including Mozart with Twelve Variations on "Ah vous dirai-je, Maman". The English lyrics have five stanzas, although only the first is widely known. It has a Roud Folk Song Index number of 7666. The song is usually performed in the key of C major. The song is in the public domain and has many adaptations around the world. The English lyrics were first written as a poem by Jane Taylor (1783–1824) and published with the title "The Star" in "Rhymes for the Nursery" by Jane and her sister Ann Taylor (1782–1866) in London in 1806: The lyrics from "The Star" were first published with the tune in "The Singing Master: First Class Tune Book" in 1838. The lyrics of the song are the text of the poem, with the first two lines of the entire poem repeated as a refrain after each stanza. For instance, the first stanza of the lyrics is: The first stanza of the song is typically as written, but further stanzas typically contain minor variations. Additional variations exist, such as one from 1896 in "Song Stories for the Kindergarten" by Mildred J. Hill. A parody of "Twinkle Twinkle Little Star" titled "Twinkle, Twinkle, Little Bat" is recited by the Mad Hatter in Lewis Carroll's "Alice's Adventures in Wonderland". An adaptation of the song, named "Twinkle, Twinkle, Little Earth", was written by Charles Randolph Grean, Fred Hertz and Leonard Nimoy. It is included on Nimoy's 1967 debut album "Leonard Nimoy Presents Mr. 
Spock's Music from Outer Space", on which he recites the text as Spock, explaining how the star-people wish upon an earth, and so forth. The tune of the "Alphabet song" is identical to that of "Twinkle, Twinkle, Little Star". A version using synonyms from Roget's Thesaurus exists. The opening lyrics are also used to begin the traditional murder ballad "Duncan and Brady". The song can also be played as a singing game.
https://en.wikipedia.org/wiki?curid=31261
TurboGrafx-16 The TurboGrafx-16, known in Japan and France as the PC Engine, is a cartridge-based home video game console manufactured and marketed by NEC Home Electronics and designed by Hudson Soft. It was released in Japan on October 30, 1987, and in the United States on August 29, 1989. The Japanese model was imported and distributed in France in 1989, and the United Kingdom and Spain received a version based on the American model known as simply TurboGrafx. It was the first console released in the 16-bit era, although it used a modified 8-bit CPU. In Japan, the system was launched as a competitor to the Famicom, but the delayed United States release meant that it ended up competing with the Sega Genesis and later the Super Nintendo Entertainment System. The TurboGrafx-16 has an 8-bit CPU, a 16-bit video color encoder, and a 16-bit video display controller. The GPUs are capable of displaying 482 colors simultaneously, out of 512. With dimensions of just 14 cm × 14 cm × 3.8 cm (5.5 in × 5.5 in × 1.5 in), the Japanese PC Engine is the smallest major home game console ever made. Games were released on HuCard cartridges and later the CD-ROM optical format with the TurboGrafx-CD add-on. The TurboGrafx-16 failed to break into the North American market and sold poorly, which has been blamed on the delayed release and inferior marketing. Despite the "16" in its name and the marketing of the console as a 16-bit platform, it used an 8-bit CPU, a marketing tactic that was criticized by some as deceptive. However, in Japan, the PC Engine, introduced into the market at a much earlier date, was very successful. It gained strong third-party support and outsold the Famicom at its 1987 debut, eventually becoming the Super Famicom's main rival. At least 17 distinct models of the TurboGrafx-16 were made, including portable versions and those that integrated the CD-ROM add-on. An enhanced model, the PC Engine SuperGrafx, was rushed to market in 1989. 
It featured many performance enhancements and was intended to supersede the standard PC Engine. It failed to catch on - only six titles were released that took advantage of the added power, and it was quickly discontinued. The entire series was discontinued in 1994. It was succeeded by the PC-FX, released only in Japan. The TurboGrafx-16, or PC Engine, was a collaborative effort between Hudson Soft, who created video game software, and NEC, a company which was dominant in the Japanese personal computer market with their PC-88 and PC-98 platforms. NEC lacked vital experience in the video gaming industry, so it approached numerous video game studios for support. NEC's interest in entering the lucrative video game market happened to coincide with Hudson's failed attempt to sell designs for then-advanced graphics chips to Nintendo. The two companies joined together to develop the new system. The PC Engine made its debut in the Japanese market on October 30, 1987, and it was a tremendous success. The PC Engine had an elegant, "eye-catching" design, and it was very small compared to its rivals. This, coupled with a strong software lineup and third-party support from high-profile developers such as Namco and Konami, gave NEC a temporary lead in the Japanese market. In 1988, NEC decided to expand to the American market and directed its U.S. operations to develop the system for the new audience. NEC Technologies boss Keith Schaefer formed a team to test the system. They found a lack of enthusiasm for its name 'PC Engine' and felt its small size was not very suitable for American consumers, who would generally prefer a larger and more "futuristic" design. They decided to call the system the 'TurboGrafx-16', a name representing its graphical speed and strength and its 16-bit GPU. They also completely redesigned the hardware into a large, black casing. 
This lengthy redesign process, along with NEC's doubts about the system's viability in the United States, delayed the TurboGrafx-16's debut. The TurboGrafx-16 was eventually released in the New York City and Los Angeles test markets in late August 1989. Disastrously for NEC, this was two weeks after Sega of America released the genuinely 16-bit Genesis to its own test markets. Unlike NEC, Sega did not spend time redesigning the original Japanese Mega Drive hardware. Sega quickly eclipsed the TurboGrafx-16 after its American debut. NEC's decision to pack in "Keith Courage in Alpha Zones", a Hudson Soft game unknown to western gamers, proved costly, as Sega packed in a port of the hit arcade title "Altered Beast" with the Genesis. NEC's American operations in Chicago were also overly optimistic about the system's potential and quickly produced 750,000 units, far above actual demand. This was very profitable for Hudson Soft, as NEC paid Hudson Soft royalties for every console produced, whether sold or not. By 1990, it was clear that the system was performing very poorly, severely edged out by Nintendo's and Sega's marketing. After seeing the TurboGrafx-16 suffer in America, NEC decided to cancel its European releases. Units for the European markets had already been produced; these were essentially US models modified to run on PAL television sets, branded simply as the TurboGrafx. NEC sold this stock to distributors: in the United Kingdom, Telegames released the TurboGrafx in 1990 in extremely limited quantities. This model was also released in Spain and Portugal through selected retailers. From November 1989 to 1993, PC Engine consoles as well as some of its add-ons were imported from Japan by French licensed importer Sodipeng ("Société de Distribution de la PC Engine", a subsidiary of Guillemot International). This came after considerable enthusiasm in the French press. The PC Engine was widely available in France and Benelux through major retailers.
It came with French-language instructions and an AV cable to make it compatible with SECAM television sets. Its launch price was 1,790 French francs. By March 1991, NEC claimed that it had sold 750,000 TurboGrafx-16 consoles in the United States and 500,000 CD-ROM units worldwide. However, neither CD-based console would catch on, and the North American console gaming market continued to be dominated by the Super NES and Genesis. In May 1994, Turbo Technologies announced that it was dropping support for the Duo, though it would continue to offer repairs for existing units and provide ongoing software releases through independent companies in the U.S. and Canada. The final licensed release for the PC Engine was "Dead of the Brain Part 1 & 2" on June 3, 1999, on the Super CD-ROM² format. The CD-ROM² (pronounced CD-ROM-ROM) is an add-on attachment for the PC Engine that was released in Japan on December 4, 1988. The add-on allows the core versions of the console to play PC Engine games in CD-ROM format in addition to standard HuCards. This made the PC Engine the first video game console to use CD-ROMs as a storage medium. The add-on consisted of two devices: the CD player itself and the interface unit, which connects the CD player to the console and provides a unified power supply and output for both. It was later released as the TurboGrafx-CD in the United States in November 1989, with a remodeled interface unit to suit the different shape of the TurboGrafx-16 console. The TurboGrafx-CD had a launch price of $399.99 and did not include any bundled games. "Fighting Street" and "" were the TurboGrafx-CD launch titles; "Ys Book I & II" soon followed. In 1991, NEC introduced an upgraded version of the CD-ROM² System known as the Super CD-ROM², which updates the BIOS to Version 3.0 and increases buffer RAM from 64 kB to 256 kB.
This upgrade was released in several forms: the first was the PC Engine Duo on September 21, a new model of the console with a CD-ROM drive and the upgraded BIOS/RAM already built into the system. This was followed by the Super System Card, released on October 26, an upgrade for the existing CD-ROM² add-on that serves as a replacement for the original System Card. PC Engine owners who did not already own the original CD-ROM² add-on could instead opt for the Super CD-ROM² unit, an updated version of the add-on released on December 13, which combines the CD-ROM drive, interface unit and Super System Card into one device. On March 12, 1994, NEC introduced a third upgrade known as the Arcade Card, which increases the amount of onboard RAM of the Super CD-ROM² System to 2 MB. This upgrade was released in two models: the Arcade Card Duo, designed for PC Engine consoles already equipped with the Super CD-ROM² System, and the Arcade Card Pro, a model for the original CD-ROM² System that combines the functionality of the Super System Card and Arcade Card Duo in one. The first games for this add-on were ports of the Neo-Geo fighting games "Garō Densetsu 2" and "Ryūkō no Ken". Ports of "World Heroes 2" and "Garō Densetsu Special" were later released for this card, along with several original games released under the Arcade CD-ROM² standard. By this point, support for both the TurboGrafx-16 and Turbo Duo was already waning in North America; thus, no North American version of either Arcade Card was produced, though a Japanese Arcade Card can still be used on a North American console through a HuCard converter. Many variations and related products of the PC Engine were released. The PC Engine CoreGrafx is an updated model of the PC Engine, released in Japan on December 8, 1989. It has the same form factor as the original PC Engine, but changes the color scheme from white and red to black and blue, and replaces the original's RF connector with an A/V port.
It also uses a revised CPU, the HuC6280a, which reportedly fixed some minor audio issues. A recolored version of the model, known as the PC Engine CoreGrafx II, was released on June 21, 1991. Aside from the different coloring (light grey and orange), it is nearly identical to the original CoreGrafx, except that the CPU was changed back to the original HuC6280. The PC Engine SuperGrafx, released on the same day as the CoreGrafx in Japan, is an enhanced variation of the PC Engine hardware with updated specifications. This model has a second HuC6270A (VDC), a HuC6202 (VDP) that combines the output of the two VDCs, four times as much RAM, twice as much video RAM, and a second scrolling layer/plane. It also uses the revised HuC6280a CPU, but the sound and color palette were not upgraded, making its expensive price tag a major disadvantage. As a result, only five exclusive SuperGrafx games were released, along with two hybrid games ("Darius Plus" and "Darius Alpha", standard HuCards that took advantage of the extra video hardware when played on a SuperGrafx), and the system was quickly discontinued. Although the SuperGrafx was intended to supersede the original PC Engine, its extra hardware features were not carried over to the later Duo consoles. The SuperGrafx has a BUS expansion port, but requires an adapter to use the original CD-ROM² System add-on. The PC Engine LT is a model of the console in a laptop form, released on December 13, 1991 in Japan, retailing at ¥99,800. The LT does not require a television display (and has no AV output), as it has a built-in flip-up screen and speakers, just as a laptop would; unlike the GT, however, the LT runs on a power supply. Its expensive price meant that few units were produced compared to other models. The LT has full expansion port capability, so the CD-ROM² unit is compatible with the LT in the same way as with the original PC Engine and CoreGrafx.
However, the LT requires an adapter to use the enhanced Super CD-ROM² unit. The PC Engine Shuttle was released in Japan on November 22, 1989 as a less expensive model of the console, retailing at ¥18,800. It was targeted primarily at younger players with its spaceship-like design and came bundled with a TurboPad II controller, which is shaped differently from the other standard TurboPad controllers. The reduced price was made possible by slimming down the expansion port on the back, making it the first model of the console that was not compatible with the CD-ROM² add-on. However, it does have a slot for a memory backup unit, which is required for certain games. The RF output used on the original PC Engine was also replaced with an A/V port for the Shuttle. The PC Engine GT is a portable version of the PC Engine, released in Japan on December 1, 1990 and then in the United States as the TurboExpress. It can only play HuCard games. It has a backlit, active-matrix color LCD screen, the most advanced on the market for a portable video game unit at the time. However, the screen contributed to its high price and short battery life, which dented its performance in the market. It shares the capabilities of the TurboGrafx-16, giving it 512 available colors (9-bit RGB), stereo sound, and the same custom CPU at 7.15909 MHz. A TV tuner adapter and a two-player link cable were also available for it. NEC Home Electronics released the PC Engine Duo in Japan on September 21, 1991, which combined the PC Engine and Super CD-ROM² unit into a single console. The system can play HuCards, audio CDs, CD+Gs, standard CD-ROM² games and Super CD-ROM² games. The North American version, the TurboDuo, was launched in October 1992.
The American version of the Duo was originally bundled with one control pad, an AC adapter, RCA cables, "Ys Book I & II" (a CD-ROM² title), and a Super CD-ROM² disc including "Bonk's Adventure", "Bonk's Revenge", "Gate of Thunder" and a secret version of "Bomberman" accessible via a cheat code. The system was also packaged with one HuCard game, which varied from system to system ("Dungeon Explorer" was the original HuCard pack-in for the TurboDuo, although many titles were eventually used, such as Irem's "Ninja Spirit" and Namco's "Final Lap Twin"). Two updated variants were released in Japan: the PC Engine Duo-R (on March 25, 1993) and the PC Engine Duo-RX (on June 25, 1994). The changes were mostly cosmetic, but the RX included a new 6-button controller. The PC-KD863G is a CRT monitor with a built-in PC Engine console, released on September 27, 1988 in Japan for ¥138,000. Following the naming scheme of NEC's PCs, the PC-KD863G was designed to eliminate the need to buy a separate television set and console. It output its signal in RGB, so its picture was sharper than that of the standard console, which was still limited to RF and composite output. However, it has no BUS expansion port, which made it incompatible with the CD-ROM² System and memory backup add-ons. The X1-Twin was the first licensed PC Engine-compatible hardware manufactured by a third-party company, released by Sharp in April 1989 for ¥99,800. It is a hybrid system that can run PC Engine games and X1 computer software. Pioneer Corporation's LaserActive supports an add-on module which allows the use of PC Engine games (HuCard, CD-ROM² and Super CD-ROM²) as well as new "LD-ROM²" titles that work only on this device. NEC also released its own LaserActive unit (NEC PCE-LD1) and PC Engine add-on module under an OEM license. A total of eleven LD-ROM² titles were produced, with only three of them released in North America.
Outside North America and Japan, the TurboGrafx-16 was released in South Korea by a third-party company, Haitai, under the name Vistar 16. It was based on the American version but with a new curved design. Daewoo Electronics distributed the PC Engine Shuttle in the South Korean market as well. The PC Engine was never officially released in continental Europe, but some companies imported the systems and made SCART conversions on a moderate scale. In France, Sodipeng imported Japanese systems and added an RGB cable called the "AudioVideo Plus Cable". This modification considerably improved the original video signal quality and made the consoles work with SECAM televisions. In Germany, several importers sold converted PC Engines with PAL RF as well as RGB output. The connectors and pinouts used for the latter were frequently compatible with the Amiga video port, with two unconnected pins used for the audio channels. All PC Engine systems support the same controller peripherals, including pads, joysticks and multitaps. Except for the Vistar, Shuttle, GT, and systems with built-in CD-ROM drives, all PC Engine units share the same expansion connector, which allows the use of devices such as the CD-ROM unit, battery backup and AV output. The TurboGrafx and Vistar units use a different controller port than the PC Engines, but adaptors are available and the protocol is the same. The TurboGrafx offers the same expansion connector pinout as the PC Engine, but has a slightly different shape, so peripherals must be modified to fit. The Arcade Card Pro is designed for the original CD-ROM² System add-on, adding the 2304 kB of RAM required by Arcade CD-ROM² games. The Arcade Card Duo is for the Super CD-ROM² System and the PC Engine Duo/R/RX consoles and adds 2048 kB of RAM, since those systems already have 256 kB built in.
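The RAM figures quoted for the two Arcade Cards are mutually consistent; a minimal arithmetic check, using only the numbers given in the text:

```python
# Consistency check of the Arcade Card RAM figures quoted above
# (all values in kB, taken directly from the text; this is an
# illustrative sketch, not a hardware specification).
builtin_ram = 256     # RAM already present in Super CD-ROM² systems
added_by_duo = 2048   # RAM added by the Arcade Card Duo
added_by_pro = 2304   # RAM added by the Arcade Card Pro (original CD-ROM²)

# Both upgrade paths reach the same total required by Arcade CD-ROM² games:
assert builtin_ram + added_by_duo == added_by_pro
```

In other words, the Pro simply bundles the 256 kB the Duo-style systems already have together with the Duo's 2048 kB.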
The various CD-ROM game types are CD-ROM², Super CD-ROM², Arcade CD-ROM² and LD-ROM². All PC Engine hardware generates video in NTSC format; the European TurboGrafx derives a PAL-compatible video signal from it by using a chroma encoder chip not found in any other system in the series. The PC Engine is a relatively compact video game console, owing to an efficient three-chip architecture and its use of small ROM cartridges called HuCards (Turbo Chips in North America). Hudson Soft developed the HuCard (Hudson Card) from the Bee Card technology it piloted on the MSX. HuCards are about the size of a credit card, but slightly thicker. They are very similar to the My Card format used for certain games released on the SG-1000/SC-3000 and the Mark III/Master System. The largest Japanese HuCard games were up to in size. All PC Engine consoles can play standard HuCards, including the PC Engine SuperGrafx (which has its own small library of exclusive HuCards). With the exception of the budget-priced PC Engine Shuttle, the portable PC Engine GT and the PC-KD863G monitor, every PC Engine console is also capable of playing CD-ROM² discs, provided the console is equipped with the required CD-ROM drive and System Card. The SuperGrafx and PC Engine LT both required additional adapters to work with the original CD-ROM² System and Super CD-ROM² respectively, whereas the Duo consoles had the CD-ROM drive and Super System Card integrated into them (as did the Super CD-ROM² player). Some unlicensed CD games by Games Express can only run on Duo consoles, because they require both a special System Card packaged with the games and the 256 kB of RAM built into the Duo. The console's CPU is a Hudson Soft HuC6280 8-bit microprocessor operating at a selectable 1.79 MHz or 7.16 MHz.
It features integrated bank-switching hardware (driving a 21-bit external address bus from a 6502-compatible 16-bit address bus), an integrated general-purpose I/O port, a timer, block transfer instructions, and dedicated move instructions for communicating with the HuC6270A VDC. Its 16-bit graphics processor and video color encoder chip were also developed by Hudson Soft. The console holds 8 kB of work RAM and 64 kB of video RAM. With HuCards, a limited form of region protection was introduced between markets, which for the most part amounted to nothing more than routing some of the HuCard's pinout connections in a different arrangement. Several major after-market converters were sold to bypass this protection, predominantly for converting Japanese titles for play on a TG-16. In the Japanese market, NEC went further by adding a hardware-level detection function to all PC Engine systems that detected whether a game was a U.S. release and would then refuse to play it. The only known exception to this is the U.S. release of "Klax", which did not contain this function. The explanation commonly given for this by NEC officials is that most U.S. conversions had the difficulty level reduced, and in some cases were censored for what was considered inappropriate content; consequently, NEC did not want U.S. conversions to re-enter the Asian market and negatively impact the perception of a game. With some minor soldering skills, a change could be made to PC Engines to disable this check. The only Japanese games that could not be played on a U.S. system using one of these converters were the SuperGrafx titles, which could only be played on a SuperGrafx. There was no region protection on TurboGrafx-CD and CD-ROM² System games. Because of the extremely limited PAL release after NEC decided to cancel a full European launch, no PAL HuCards were made. The European TurboGrafx therefore played the NTSC American/Japanese titles, converted to PAL 50 Hz format.
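The HuC6280's bank-switching arrangement mentioned above can be sketched in a few lines. This is a simplified illustration, assuming the conventional description of the chip's eight 8-bit mapping registers (MPRs); it is not an emulator-accurate model:

```python
# Simplified sketch of HuC6280-style bank switching: the top 3 bits of
# a 6502-compatible 16-bit logical address select one of eight 8-bit
# mapping registers (MPRs), whose value supplies the upper bits of a
# 21-bit physical address in 8 KB pages.  Illustration only.
PAGE_BITS = 13  # 8 KB pages -> 13 offset bits

def physical_address(logical, mpr):
    """Map a 16-bit logical address to a 21-bit physical address."""
    page = logical >> PAGE_BITS                # top 3 bits pick an MPR slot
    offset = logical & ((1 << PAGE_BITS) - 1)  # low 13 bits pass through
    return (mpr[page] << PAGE_BITS) | offset   # 8 + 13 = 21 address bits

# With every MPR pointing at the highest bank, the CPU can reach the
# top of the 2 MB (2**21 byte) physical address space:
mpr = [0xFF] * 8
assert physical_address(0xFFFF, mpr) == (1 << 21) - 1
```

This is how a 16-bit address space addresses 2 MB of physical memory in 8 KB windows.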
In Japan, the PC Engine was very successful, and at one point was the top-selling console in the nation. In North America and Europe the situation was reversed, with both Sega and Nintendo dominating the console market at the expense of NEC. Initially, the TurboGrafx-16 sold well in the U.S., but it eventually suffered from a lack of support from third-party software developers and publishers. In 1990, "ACE" magazine praised the console's racing game library, stating that, compared to "all the popular consoles, the PC Engine is way out in front in terms of the range and quality of its race games." Reviewing the Turbo Duo model in 1993, "GamePro" gave it a "thumbs down". Though they praised the system's CD sound, graphics, and five-player capability, they criticized the outdated controller and the games library, saying the third-party support was "almost nonexistent" and that most of the first-party games were localizations of games better suited to the Japanese market. In 2009, the TurboGrafx-16 was ranked the 13th-greatest video game console of all time by IGN, which cited "a solid catalog of games worth playing" but also a lack of third-party support and the absence of a second controller port. The controversy over bit-width marketing reappeared with the advent of the Atari Jaguar console. By contrast, Mattel did not market its 1979 Intellivision system on bit width, although it used a 16-bit CPU. In 1994, NEC released a new console, the Japan-only PC-FX, a 32-bit system with a tower-like design; it enjoyed a small but steady stream of games until 1998, when NEC finally abandoned the video game industry. NEC supplied its former rival Nintendo with the RISC-based V810 CPU (the same one used in the PC-FX) for the Virtual Boy and the VR4300 CPU for the Nintendo 64, released in 1995 and 1996 respectively, and supplied former rival Sega with a version of the PowerVR2 GPU for the Dreamcast, released in 1998. NEC also supplied Bandai's WonderSwan handheld console, developed by Gunpei Yokoi, with the V30 MZ CPU.
In the 2000s, NEC manufactured Flipper, the GameCube's GPU, a graphics chip designed by ArtX, using NEC's embedded DRAM process. A number of TurboGrafx-16 and TurboGrafx-CD games were released on Nintendo's Virtual Console download service for the Wii, Wii U, and Nintendo 3DS, including several that were never originally released outside Japan. In 2011, ten TurboGrafx-16 games were released on the PlayStation Network for play on the PlayStation 3 and PlayStation Portable in the North American region. In 2010, Hudson released an iPhone application entitled "TurboGrafx-16 GameBox", which allowed users to buy and play a number of select TurboGrafx games via in-app purchases. In 2016, rapper Kanye West's eighth solo album was initially announced under the title "Turbo Grafx 16". The title, however, was later changed to Ye. In 2019, Konami announced at E3 2019 the TurboGrafx-16 Mini, a dedicated console featuring many built-in games. It is the first release of official TurboGrafx-16 family hardware since the closure of Hudson Soft in 2012. On March 6, 2020, Konami announced that the TurboGrafx-16 Mini and its peripheral accessories would be delayed indefinitely from the previous March 19, 2020 launch date due to the COVID-19 outbreak in China. Emulation programs for the TurboGrafx-16 exist for several modern platforms, such as the Wii U homebrew launcher and various retro operating systems and architectures, at levels of accuracy ranging from beta quality to near-perfect emulation of all PC Engine and TurboGrafx-16 formats.
Traffic engineering Traffic engineering can mean: traffic engineering (transportation), a branch of civil engineering concerned with road traffic; or teletraffic engineering, the planning and management of traffic in telecommunications networks.
Tocharian languages Tocharian, also spelled Tokharian, is an extinct branch of the Indo-European language family. It is known from manuscripts dating from the 5th to the 8th century AD, which were found in oasis cities on the northern edge of the Tarim Basin (now part of Xinjiang in northwest China) and in the Lop Desert. The discovery of this language family in the early 20th century contradicted the formerly prevalent idea of an east–west division of the Indo-European language family along the centum–satem isogloss, and prompted reinvigorated study of the family. Identifying the authors with the "Tokharoi" people of ancient Bactria (Tokharistan), early authors called these languages "Tocharian". Although this identification is now generally considered mistaken, the name has remained. The documents record two closely related languages, called Tocharian A (also "East Tocharian", "Agnean" or "Turfanian") and Tocharian B ("West Tocharian" or "Kuchean"). The subject matter of the texts suggests that Tocharian A was more archaic and used as a Buddhist liturgical language, while Tocharian B was more actively spoken in the entire area from Turfan in the east to Tumshuq in the west. A body of loanwords and names found in Prakrit documents from the Lop Nor basin has been dubbed Tocharian C ("Kroränian"). A claimed find of ten Tocharian C texts written in the Kharoṣṭhī script has been discredited. The oldest extant manuscripts in Tocharian B are now dated to the 5th or even late 4th century AD, making Tocharian a language of Late Antiquity contemporary with Gothic, Classical Armenian and Primitive Irish. The existence of the Tocharian languages and alphabet was not even suspected until archaeological exploration of the Tarim Basin by Aurel Stein in the early 20th century brought to light fragments of manuscripts in an unknown language, dating from the 6th to 8th centuries AD.
It soon became clear that these fragments were actually written in two distinct but related languages belonging to a hitherto unknown branch of Indo-European, now known as Tocharian. Prakrit documents from 3rd-century Krorän and Niya on the southeast edge of the Tarim Basin contain loanwords and names that appear to come from a closely related language, referred to as Tocharian C. The discovery of Tocharian upset some theories about the relations of Indo-European languages and revitalized their study. In the 19th century, it was thought that the division between centum and satem languages was a simple west–east division, with centum languages in the west. The theory was undermined in the early 20th century by the discovery of Hittite, a centum language in a relatively eastern location, and Tocharian, which was a centum language despite being the easternmost branch. The result was a new hypothesis, following the wave model of Johannes Schmidt, suggesting that the satem isogloss represents a linguistic innovation in the central part of the Proto-Indo-European home range, and that the centum languages along the eastern and western peripheries did not undergo that change. Most scholars reject Walter Bruno Henning's proposed link to Gutian, a language spoken on the Iranian plateau in the 22nd century BC and known only from personal names. Tocharian probably died out after 840, when the Uyghurs, expelled from Mongolia by the Kyrgyz, moved into the Tarim Basin. This theory is supported by the discovery of translations of Tocharian texts into Uyghur. Some modern Chinese words may ultimately derive from a Tocharian or related source, e.g. Old Chinese "honey", from proto-Tocharian *"ḿət(ə)" (where *"ḿ" is palatalized; cf. Tocharian B "mit"), cognate with English "". A colophon to a Buddhist manuscript in Old Turkish from 800 AD states that it was translated from Sanskrit via a "twγry" language. In 1907, Emil Sieg and Friedrich W. K.
Müller guessed that this referred to the newly discovered language of the Turpan area. Sieg and Müller, reading this name as "toxrï", connected it with the ethnonym "Tócharoi" (, Ptolemy VI, 11, 6, 2nd century AD), itself taken from Indo-Iranian (cf. Old Persian "tuxāri-", Khotanese "ttahvāra", and Sanskrit "tukhāra"), and proposed the name "Tocharian" (German "Tocharisch"). Ptolemy's "Tócharoi" are often associated by modern scholars with the Yuezhi of Chinese historical accounts, who founded the Kushan empire. It is now clear that these people actually spoke Bactrian, an Eastern Iranian language, rather than the language of the Tarim manuscripts, so the term "Tocharian" is considered a misnomer. Nevertheless, it remains the standard term for the language of the Tarim Basin manuscripts. In 1938, Walter Henning found the term "four "twγry"" used in early 9th-century manuscripts in Sogdian, Middle Iranian and Uighur. He argued that it referred to the region on the northeast edge of the Tarim, including Agni and Karakhoja but not Kucha. He thus inferred that the colophon referred to the Agnean language. Although the term "twγry" or "toxrï" appears to be the Old Turkic name for the Tocharians, it is not found in Tocharian texts. The apparent self-designation "ārśi" appears in Tocharian A texts. Tocharian B texts use the adjective "kuśiññe", derived from "kuśi" or "kuči", a name also known from Chinese and Turkic documents. The historian Bernard Sergent compounded these names to coin an alternative term "Arśi-Kuči" for the family, recently revised to "Agni-Kuči", but this name has not achieved widespread usage. Tocharian is documented in manuscript fragments, mostly from the 8th century (with a few earlier ones) that were written on palm leaves, wooden tablets and Chinese paper, preserved by the extremely dry climate of the Tarim Basin. Samples of the language have been discovered at sites in Kucha and Karasahr, including many mural inscriptions. 
Most of the attested Tocharian was written in the Tocharian alphabet, a derivative of the Brahmi alphabetic syllabary (abugida), also referred to as North Turkestan Brahmi or slanting Brahmi. However, a smaller amount was written in the Manichaean script, in which Manichaean texts were recorded. It soon became apparent that a large proportion of the manuscripts were translations of known Buddhist works in Sanskrit, and some of them were even bilingual, facilitating decipherment of the new language. Besides the Buddhist and Manichaean religious texts, there were also monastery correspondence and accounts, commercial documents, caravan permits, and medical and magical texts, as well as one love poem. In 1998, the Chinese linguist Ji Xianlin published a translation and analysis of fragments of a Tocharian "Maitreyasamiti-Nataka" discovered in 1974 in Yanqi. Tocharian A and B are significantly different, to the point of being mutually unintelligible. A common Proto-Tocharian language must precede the attested languages by several centuries, probably dating to the late 1st millennium BC. Tocharian A is found only in the eastern part of the Tocharian-speaking area, and all extant texts are of a religious nature. Tocharian B, however, is found throughout the range and in both religious and secular texts. As a result, it has been suggested that Tocharian A was a liturgical language, no longer spoken natively, while Tocharian B was the spoken language of the entire area. On the other hand, it is possible that the lack of a secular corpus in Tocharian A is simply an accident, due to the smaller distribution of the language and the fragmentary preservation of Tocharian texts in general. The hypothesized relationship of Tocharian A and B as liturgical and spoken forms, respectively, is sometimes compared with the relationship between Latin and the modern Romance languages, or between Classical Chinese and Mandarin.
However, in both of these latter cases the liturgical language is the linguistic ancestor of the spoken language, whereas no such relationship holds between Tocharian A and B. In fact, from a phonological perspective Tocharian B is significantly more conservative than Tocharian A, and serves as the primary source for reconstructing Proto-Tocharian. Only Tocharian B preserves the following Proto-Tocharian features: stress distinctions, final vowels, diphthongs, and the "o" vs. "e" distinction. In turn, the loss of final vowels in Tocharian A has led to the loss of certain Proto-Tocharian categories still found in Tocharian B, e.g. the vocative case and some of the noun, verb and adjective declensional classes. In their declensional and conjugational endings, the two languages innovated in divergent ways, with neither clearly simpler than the other. For example, both languages show significant innovations in the present active indicative endings, but in radically different ways, so that only the second-person singular ending is directly cognate between the two languages, and in most cases neither variant is directly cognate with the corresponding Proto-Indo-European (PIE) form. The agglutinative secondary case endings in the two languages likewise stem from different sources, showing parallel development of the secondary case system after the Proto-Tocharian period. Likewise, some of the verb classes show independent origins, e.g. the class II preterite, which uses reduplication in Tocharian A (possibly from the reduplicated aorist) but long PIE "ē" in Tocharian B (possibly from the long-vowel perfect found in Latin "lēgī", "fēcī", etc.). Tocharian B shows an internal chronological development; three linguistic stages have been detected. The oldest stage is attested only in Kucha; there are also the middle ('classical') and late stages.
Based on 3rd-century Loulan Gāndhārī Prakrit documents containing Tocharian loanwords such as "kilme" 'district', "ṣoṣthaṃga" 'tax collector', and "ṣilpoga" 'document', T. Burrow suggested in the 1930s the existence of a third Tocharian language, which has been labelled Tocharian C or "Kroränian", "Krorainic", or "Lolanisch". In 2018, ten texts written in the Kharoṣṭhī alphabet from Loulan were published and analyzed in the posthumous papers of the Tocharologist Klaus T. Schmidt as being written in Tocharian C. Phonetically, Tocharian C shows preservation of the Proto-Indo-European labiovelar *kʷ in the word "okuson"- "ox", compared to more divergent reflexes in B "okso" and A "ops"-. Morphologically, Tocharian C is more closely related to Tocharian B than to Tocharian A, as shown by its secondary case and verb endings (e.g. ablative A "–Vṣ", B "–meṃ", C "–maṃ"; 3rd person singular present suffix A "–ṣ", B "–ṃ", C "–ṃ"). These similarities suggest that there may have been a continuum of Tocharian dialects north of the Tarim River, ranging from Tocharian B around Kucha to Tocharian C around Loulan/Kroraina. On September 15 and 16, 2019, a group of linguists led by Georges Pinault and Michaël Peyrot met in Leiden to examine Schmidt's transcriptions and the original texts, and concluded that they had all been transcribed entirely incorrectly. While a full report of what languages these texts represent is not yet available, their conclusions appear to have discredited Schmidt's Tocharian C claims. Phonetically, the Tocharian languages are "centum" Indo-European languages, meaning that they merge the palatovelar consonants of Proto-Indo-European with the plain velars (*k, *g, *gʰ) rather than palatalizing them to affricates or sibilants. Centum languages are mostly found in western and southern Europe (Greek, Italic, Celtic, Germanic).
In that sense, Tocharian (to some extent like the Greek and the Anatolian languages) seems to have been an isolate in the "satem" (i.e. palatovelar-to-sibilant) phonetic regions of Indo-European-speaking populations. The discovery of Tocharian contributed to doubts that Proto-Indo-European had originally split into western and eastern branches; today, the centum–satem division is not seen as a real familial division. Although both Tocharian A and Tocharian B have the same set of vowels, they often do not correspond to each other. For example, the sound "a" did not occur in Proto-Tocharian. Tocharian B "a" is derived from former stressed "ä" or unstressed "ā" (reflected unchanged in Tocharian A), while Tocharian A "a" stems from Proto-Tocharian "e" or "o" (reflected as "e" and "o" in Tocharian B), and Tocharian A "e" and "o" stem largely from monophthongization of former diphthongs (still present in Tocharian B). Diphthongs occur in Tocharian B only. The following table lists the reconstructed phonemes in Tocharian along with their standard transcription. Because Tocharian is written in an alphabet used originally for Sanskrit and its descendants, the transcription of the sounds is directly based on the transcription of the corresponding Sanskrit sounds. The Tocharian alphabet also has letters representing all of the remaining Sanskrit sounds, but these appear only in Sanskrit loanwords and are not thought to have had distinct pronunciations in Tocharian. There is some uncertainty as to the actual pronunciation of some of the letters, particularly those representing palatalized obstruents (see below). Proto-Tocharian shows radical changes in its vowels from Proto-Indo-European (PIE). Length distinctions eventually disappeared, but prior to that all pairs of long and short vowels had become distinct in quality, and thus have different outcomes. Many pairs of PIE vowels are distinguished in Tocharian only by the occurrence or non-occurrence of palatalization.
For example, PIE "o" and "ē" both evolved into Proto-Tocharian "ë" (possibly ), but PIE "ē" palatalized the preceding consonant, and left a "y" when no consonant preceded, while neither of these occurs with PIE "o". Reconstructing the changes between PIE and Proto-Tocharian vowels is fraught with difficulty, and as a result there are a large number of disagreements among different researchers. The basic problems are: Historically, the evolution of the Tocharian vowels was the last part of the diachronic phonology to be understood. In 1938, George S. Lane remarked of Tocharian that "the vocalism so far has defied almost every attempt that has been made to bring it to order", and as late as 1945 still asserted: "That the subject [of palatalization] is a confused and difficult one is generally recognized—but so are most of the problems of Tocharian phonology." However, rapid progress towards understanding the evolution of the vocalic system, and with it the phonology as a whole, occurred during the period of approximately 1948–1960, beginning with Sieg and Siegling (1949). By 1960, the system was well-enough understood that Krause and Thomas's seminal work of that year is still considered one of the most important Tocharian grammatical handbooks. Despite the apparent equivalence between the Tocharian A and B vowel systems, in fact a number of vowels are not cognate between the two varieties, and Proto-Tocharian had a different vowel system from both. For example, Tocharian A "a" reflects a merger of two Proto-Tocharian vowels that are distinguished in Tocharian B as "e" and "o", while Tocharian B "a" reflects a stress-based variant of either Proto-Tocharian "ā" or "ä", while Tocharian A preserves original "ā" and "ä" regardless of the position of stress. As a general rule, Tocharian B reflects the Proto-Tocharian vowel system more faithfully than Tocharian A, which includes a number of changes not found in Tocharian B, e.g. 
monophthongization of diphthongs, loss of all absolutely final vowels, loss of "ä" in open syllables, and epenthesis of "ä" to break up difficult clusters (esp. word-finally) that resulted from vowel losses. The following table describes a typical minimal reconstruction of Late Proto-Tocharian, which includes all vowels that are generally accepted by Tocharian scholars: The following table describes a "maximal" reconstruction of Proto-Tocharian, following Ringe (1996): Some of the differences between the "minimal" and "maximal" systems are primarily notational: Ringe's "*ǝ" = standard "*ä", and Ringe (along with many other researchers) reconstructs Proto-Tocharian surface *[i] and *[u] as underlying "*äy, *äw" ("*ǝy, *ǝw" in Ringe's notation). However, Ringe reconstructs three vowels "*ë, *e, *ẹ" in place of the single vowel "*e" in the minimal system. The primary distinction is between "*ë" < PIE "*o", which is assumed to be a central rather than a front vowel because it does not trigger palatalization, and "*e" < PIE "*ē", which does trigger palatalization. Other than palatalization effects, both vowels are reflected identically in both Tocharian A and B, and hence a number of researchers project the merger back to Proto-Tocharian. However, some umlaut processes are thought to have operated differently on the two vowels, and as a result Ringe (as well as Adams and some other scholars) prefers to distinguish the two in Proto-Tocharian. Ringe's "*ẹ" vowel, a higher vowel than "*e", is fairly rare and appears as "i" in Tocharian B but "e" in Tocharian A. This vowel does trigger palatalization, and is thought by Ringe to stem primarily from PIE "*oy" and from loanwords. In general, Ringe's Proto-Tocharian reconstruction reflects an earlier stage than the one described by many researchers. Some scholars use a different notation from what is given above: e.g.
"æ" or "ë" in place of the "e" of the minimal system, "å" or "ɔ" in place of the "o" of the minimal system, and "ǝ" in place of "ä". The following table shows the changes from Proto-Indo-European (PIE) to Proto-Tocharian (PToch) and on to Tocharian B (TB) and Tocharian A (TA), using the notation of the "minimal" system above: Notes: Proto-Tocharian had phonemic stress, although its position varies depending on the researcher. Many researchers project the Tocharian B stress that is recoverable from "ā"~"a" and "a"~"ä" alternations back to Proto-Tocharian. For the most part, this stress does not reflect PIE stress. Rather, most bisyllabic words have initial stress, and trisyllabic and longer words usually have stress on the second syllable. A number of multisyllabic words in Tocharian B appear to indicate that more than one syllable was stressed; it is thought that these reflect clitics or affixes that still behaved phonologically as separate words in Proto-Tocharian. Ringe, however, prefers to project the PIE stress unchanged into Proto-Tocharian, and assumes that the radically different system seen in Tocharian B evolved within the separate history of that language. The outcome of the PIE sequences "*iH" and "*uH" when not followed by a vowel is disputed. It is generally agreed that "*ih₂" became Proto-Tocharian "*yā"; a similar change occurred in Ancient Greek. It is also usually accepted that "*ih₃" likewise became Proto-Tocharian "*yā", although it is unclear whether this reflects a direct change "*ih₃ > *yā /ya/" or a change "*ih₃ > *yō /yo:/ > *yā /ya/" (echoing a similar change in Ancient Greek), since PIE "*ō" is generally thought to have become Proto-Tocharian "*ā" (which was not a long vowel). The outcomes of all other sequences are much less clear. A number of etymologies appear to indicate a parallel change "*uh₂ > *wā", but some also appear to indicate a change "*uh₂ > *ū > *u". 
Ringe demonstrates that all the occurrences of "*wā" can potentially be explained as due to analogy, and prefers to postulate a general sound change "*uH > *ū > *u" following the normal outcome of "*uH" in other languages, but a number of other researchers (e.g. Krause and Slocum) prefer to see "*uh₂ > *wā" as the regular sound change. The outcome of "*ih₁" is likewise disputed, with Ringe similarly preferring a regular change "*ih₁ > *ī > i" while others postulate a regular change "*ih₁ > *ye > *yä". As elsewhere, the main difficulty is that, relative to other Indo-European languages, Tocharian is sparsely attested and was subject to a particularly large number of analogical changes. A number of umlaut processes occurred in the Proto-Tocharian period, which tended to increase the number of rounded vowels. Vowel rounding also resulted from the influence of nearby labiovelars, although this occurred after the Proto-Tocharian period, with differing results in Tocharian A and B, generally with more rounding in Tocharian A (e.g. PIE "*gʷṃ-" "come" > PToch "*kʷäm-" > Tocharian A "kum-" but Tocharian B "käm-"). Tocharian A deletes all Proto-Tocharian final vowels, as well as all instances of Proto-Tocharian "ä" in open syllables (which appears to include vowels followed by "Cr" and "Cl" sequences). When this produces impossible consonant sequences, these are rectified by vocalizing "w" and "y" into "u" and "i", if possible; otherwise, an epenthetic "ä" is inserted. Note that most consonant sequences are tolerated word-initially, including unexpected cases like "rt-", "ys-" and "lks-". Example: PIE "h₁rudhros" (Greek "erythros") > PToch "rä́tre" > Toch A *"rtr" > "rtär". Tocharian B deletes only unstressed "ä" in open syllables, and leaves all other vowels alone. Hence PIE "h₁rudhros" > PToch "rä́tre" > Toch B "ratre". If necessary, impossible consonant sequences are rectified as in Tocharian A. 
The following are the main changes between PIE and Proto-Tocharian: The extant Tocharian languages appear to reflect essentially the same consonant system as in Proto-Tocharian, except in a couple of cases: Unlike in most "centum" languages, Proto-Tocharian maintained separate outcomes of PIE "*kʷ" and "*ḱw". The latter is still reflected as "kw" in Tocharian B, e.g. "yakwe" "horse" < PIE "*eḱwos". Palatalization was a very important process operating in Proto-Tocharian. It appears to have operated very early, prior to almost all of the vowel changes that took place between PIE and Proto-Tocharian. Palatalization occurred before PIE "e", "ē", "y" and sometimes "i"; specifically, PIE "i" triggered palatalization of dentals but generally not of velars or labials. (According to Ringe, lack of palatalization before PIE "i" was actually due to an early change of "i" > "wǝ" after certain sounds.) Palatalization, or lack thereof, is the only way to distinguish PIE "e" and "i" in Tocharian, and the primary way of distinguishing certain other pairs of PIE vowels, e.g. "e" vs. "u" and "ē" vs. "o". Palatalization appears to have operated in two stages: an earlier one that affected only the sequences "ty" and "dhy", and a later, more general one — or at least, the result of palatalization of "t" and "dh" before "y" is different from palatalization before "e, ē" and "i", while other consonants do not show such a dual outcome. (A similar situation occurred in the history of Proto-Greek and Proto-Romance.) Certain sound changes occurred prior to palatalization: The following chart shows the outcome of palatalization: The outcomes of the PIE dentals in Tocharian, and in particular PIE "*d", are complex and difficult to explain. Palatalization sometimes produces "c", sometimes "ts", sometimes "ś", and in some words when a front vocalic does not follow, PIE "*d" (but not other dentals) is lost entirely, e.g.
Toch AB "or" "wood" < PIE "*doru" and Toch B "ime" "thought" < PToch "*w'äimë" < PIE "*weid-mo-". Many occurrences of "c" and "ts" can be explained by the differing effects of a following "y" vs. a front vowel (see above), but a number of difficult cases remain. Most researchers agree that some of the PIE dentals are reflected differently from others — contrary to the situation with all other PIE stops. This in turn suggests that some sound changes must have operated on particular dentals, but not others, prior to the general loss of contrastive voicing and aspiration. There is a large amount of disagreement over what exactly the relevant sound changes were, due to the relatively small number of extant forms involved, the operation of analogy, and disagreement over particular etymologies, including both the PIE roots and ablaut forms involved. Ringe suggests the following changes, in approximate order: Even with this explanation, a lot of words don't have the expected outcomes and require appeal to analogy. For example, the assumption of Grassmann's Law helps to explain only two words, both verbs, in which PIE "*dh" shows up as "ts"; and in both of these words, palatalization to "ś" might have been expected, because the present-tense forms both begin with PIE "*dhe-". Ringe needs to appeal to an analogical depalatalization, based on other forms of the verb with different ablaut patterns in which palatalization was not triggered. This assumption is reasonable, because a lot of other verbs also show analogical depalatalization; but nonetheless, it is rather slender evidence, and it is not surprising that other researchers have proposed different assumptions (e.g. that PToch "*tsä-" is the expected outcome of PIE "*dhe-", with no operation of Grassmann's Law). Likewise, the loss of PIE "*d" in Toch AB "or" "wood" < PIE "*doru" is not explainable by these rules, because it is lost before a vowel rather than a consonant. 
Ringe again assumes analogy: In this case, the PIE declension was nominative "*doru", genitive "*dreus", and Ringe assumes that the normal loss of "*d" in the genitive before "*r" was carried over into the nominative. Again, not all researchers accept this. For example, Krause and Slocum, while accepting the remainder of Ringe's sound changes involving PIE "*d", suggest instead that loss eventually occurred before Proto-Tocharian rounded vowels and before Proto-Tocharian "ë" (from PIE "o"), as well as before nasals and possibly other consonants. Tocharian has completely reworked the nominal declension system of Proto-Indo-European. The only cases inherited from the proto-language are nominative, genitive, accusative, and (in Tocharian B only) vocative; in Tocharian the old accusative is known as the "oblique" case. In addition to these primary cases, however, each Tocharian language has six cases formed by the addition of an invariant suffix to the oblique case — although the set of six cases is not the same in each language, and the suffixes are largely non-cognate. For example, the Tocharian word ' (Toch B), ' (Toch A) "horse" < PIE "*eḱwos" is declined as follows: The Tocharian A instrumental case rarely occurs with humans. When referring to humans, the oblique singular of most adjectives and of some nouns is marked in both varieties by an ending "-(a)ṃ", which also appears in the secondary cases. An example is ' (Toch B), ' (Toch A) "man", which belongs to the same declension as above, but has oblique singular ' (Toch B), ' (Toch A), and corresponding oblique stems ' (Toch B), ' (Toch A) for the secondary cases.
This is thought to stem from the generalization of "n"-stem adjectives as an indication of determinative semantics, seen most prominently in the weak adjective declension in the Germanic languages (where it cooccurs with definite articles and determiners), but also in Latin and Greek "n"-stem nouns (especially proper names) formed from adjectives, e.g. Latin "Catō" (genitive "Catōnis") literally "the sly one" < "catus" "sly", Greek "Plátōn" literally "the broad-shouldered one" < "platús" "broad". In contrast, the verbal conjugation system is quite conservative. The majority of Proto-Indo-European verbal classes and categories are represented in some manner in Tocharian, although not necessarily with the same function. Some examples: athematic and thematic present tenses, including null-, "-y-", "-sḱ-", "-s-", "-n-" and "-nH-" suffixes as well as "n"-infixes and various laryngeal-ending stems; "o"-grade and possibly lengthened-grade perfects (although lacking reduplication or augment); sigmatic, reduplicated, thematic and possibly lengthened-grade aorists; optatives; imperatives; and possibly PIE subjunctives. In addition, most PIE sets of endings are found in some form in Tocharian (although with significant innovations), including thematic and athematic endings, primary (non-past) and secondary (past) endings, active and mediopassive endings, and perfect endings. Dual endings are still found, although they are rarely attested and generally restricted to the third person. The mediopassive still reflects the distinction between primary "-r" and secondary "-i", effaced in most Indo-European languages. Both root and suffix ablaut are still well represented, although again with significant innovations. Tocharian verbs are conjugated in the following categories: A given verb belongs to one of a large number of classes, according to its conjugation.
As in Sanskrit, Ancient Greek and (to a lesser extent) Latin, there are independent sets of classes in the indicative present, subjunctive, perfect, imperative, and to a limited extent optative and imperfect, and there is no general correspondence among the different sets of classes, meaning that each verb must be specified using a number of principal parts. The most complex system is the present indicative, consisting of 12 classes, 8 thematic and 4 athematic, with distinct sets of thematic and athematic endings. The following classes occur in Tocharian B (some are missing in Tocharian A): Palatalization of the final root consonant occurs in the 2nd singular, 3rd singular, 3rd dual and 2nd plural in thematic classes II and VIII-XII as a result of the original PIE thematic vowel "e". The subjunctive likewise has 12 classes, denoted i through xii. Most are conjugated identically to the corresponding indicative classes; indicative and subjunctive are distinguished by the fact that a verb in a given indicative class will usually belong to a different subjunctive class. In addition, four subjunctive classes differ from the corresponding indicative classes, two "special subjunctive" classes with differing suffixes and two "varying subjunctive" classes with root ablaut reflecting the PIE perfect. Special subjunctives: Varying subjunctives: The preterite has 6 classes: All except preterite class VI have a common set of endings that stem from the PIE perfect endings, although with significant innovations. The imperative likewise shows 6 classes, with a unique set of endings, found only in the second person, and a prefix beginning with "p-". This prefix usually reflects Proto-Tocharian "*pä-" but unexpected connecting vowels occasionally occur, and the prefix combines with vowel-initial and glide-initial roots in unexpected ways. The prefix is often compared with the Slavic perfective prefix "po-", although the phonology is difficult to explain. 
Classes i through v tend to co-occur with preterite classes I through V, although there are many exceptions. Class vi is not so much a coherent class as an "irregular" class with all verbs not fitting in other categories. The imperative classes tend to share the same suffix as the corresponding preterite (if any), but to have root vocalism that matches the vocalism of a verb's subjunctive. This includes the root ablaut of subjunctive classes i and v, which tend to co-occur with imperative class i. The optative and imperfect have related formations. The optative is generally built by adding "i" onto the subjunctive stem. Tocharian B likewise forms the imperfect by adding "i" onto the present indicative stem, while Tocharian A has 4 separate imperfect formations: usually "ā" is added to the subjunctive stem, but occasionally to the indicative stem, and sometimes either "ā" or "s" is added directly onto the root. The endings differ between the two languages: Tocharian A uses present endings for the optative and preterite endings for the imperfect, while Tocharian B uses the same endings for both, which are a combination of preterite and unique endings (the latter used in the singular active). As suggested by the above discussion, there are a large number of sets of endings. The present-tense endings come in both thematic and athematic variants, although they are related, with the thematic endings generally reflecting a theme vowel (PIE "e" or "o") plus the athematic endings. There are different sets for the preterite classes I through V; preterite class VI; the imperative; and in Tocharian B, in the singular active of the optative and imperfect. Furthermore, each set of endings comes with both active and mediopassive forms. The mediopassive forms are quite conservative, directly reflecting the PIE variation between "-r" in the present and "-i" in the past. (Most other languages with the mediopassive have generalized one of the two.) 
The present-tense endings are almost completely divergent between Tocharian A and B. The following shows the thematic endings, with their origin: In traditional Indo-European studies, no hypothesis of a closer genealogical relationship of the Tocharian languages has been widely accepted by linguists. However, lexicostatistical and glottochronological approaches suggest the Anatolian languages, including Hittite, might be the closest relatives of Tocharian. As an example, the same Proto-Indo-European root (but not a common suffixed formation) can be reconstructed to underlie the words for 'wheel': Tocharian A "wärkänt", Tocharian B "yerkwanto" and Hittite "ḫūrkis".
https://en.wikipedia.org/wiki?curid=31273
Trie In computer science, a trie, also called digital tree or prefix tree, is a kind of search tree—an ordered tree data structure used to store a dynamic set or associative array where the keys are usually strings. Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated; i.e., the value of the key is distributed across the structure. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Keys tend to be associated with leaves, though some inner nodes may correspond to keys of interest. Hence, keys are not necessarily associated with every node. For the space-optimized presentation of a prefix tree, see compact prefix tree. In the example shown, keys are listed in the nodes and values below them. Each complete English word has an arbitrary integer value associated with it. A trie can be seen as a tree-shaped deterministic finite automaton. Each finite language is generated by a trie automaton, and each trie can be compressed into a deterministic acyclic finite state automaton. Though tries can be keyed by character strings, they need not be. The same algorithms can be adapted to serve similar functions on ordered lists of any construct; e.g., permutations on a list of digits or shapes. In particular, a bitwise trie is keyed on the individual bits making up any fixed-length binary datum, such as an integer or memory address. Tries were first described by René de la Briandais in 1959. The term "trie" was coined two years later by Edward Fredkin, who pronounces it /ˈtriː/ (as "tree"), after the middle syllable of "retrieval". However, other authors pronounce it /ˈtraɪ/ (as "try"), in an attempt to distinguish it verbally from "tree". As discussed below, a trie has a number of advantages over binary search trees.
A trie can also be used to replace a hash table, over which it has the following advantages: However, a trie also has some drawbacks compared to a hash table: A common application of a trie is storing a predictive text or autocomplete dictionary, such as found on a mobile telephone. Such applications take advantage of a trie's ability to quickly search for, insert, and delete entries; however, if storing dictionary words is all that is required (i.e., storage of information auxiliary to each word is not required), a minimal deterministic acyclic finite state automaton (DAFSA) would use less space than a trie. This is because a DAFSA can compress identical branches from the trie which correspond to the same suffixes (or parts) of different words being stored. Tries are also well suited for implementing approximate matching algorithms, including those used in spell checking and hyphenation software. A discrimination tree term index stores its information in a trie data structure. The trie is a tree of nodes which supports Find and Insert operations. Find returns the value for a key string, and Insert inserts a string (the key) and a value into the trie. Both Insert and Find run in O(m) time, where m is the length of the key.
A simple Node class can be used to represent nodes in the trie:

    from typing import Any, Dict, Optional

    class Node:
        def __init__(self) -> None:
            self.children: Dict[str, "Node"] = {}
            self.value: Optional[Any] = None

Note that "children" is a dictionary mapping characters to a node's children, and it is said that a "terminal" node is one which represents a complete string. A trie's value can be looked up as follows:

    def find(node: Node, key: str) -> Optional[Any]:
        for char in key:
            if char not in node.children:
                return None
            node = node.children[char]
        return node.value

Slight modifications of this routine can be used, for example, to check whether the trie contains any key beginning with a given prefix. Insertion proceeds by walking the trie according to the string to be inserted, then appending new nodes for the suffix of the string that is not contained in the trie:

    def insert(node: Node, key: str, value: Any) -> None:
        for char in key:
            if char not in node.children:
                node.children[char] = Node()
            node = node.children[char]
        node.value = value

Lexicographic sorting of a set of keys can be accomplished by building a trie from them, with the children of each node sorted lexicographically, and traversing it in pre-order, printing only the leaves' values. This algorithm is a form of radix sort. A trie is the fundamental data structure of Burstsort, which in 2007 was the fastest known string sorting algorithm due to its efficient cache use, though faster algorithms have since been developed. A special kind of trie, called a suffix tree, can be used to index all suffixes in a text in order to carry out fast full text searches. There are several ways to represent tries, corresponding to different trade-offs between memory use and speed of the operations. The basic form is that of a linked set of nodes, where each node contains an array of child pointers, one for each symbol in the alphabet (so for the English alphabet, one would store 26 child pointers and for the alphabet of bytes, 256 pointers). This is simple but wasteful in terms of memory: using the alphabet of bytes (size 256) and four-byte pointers, each node requires a kilobyte of storage, and when there is little overlap in the strings' prefixes, the number of required nodes is roughly the combined length of the stored strings. Put another way, the nodes near the bottom of the tree tend to have few children and there are many of them, so the structure wastes space storing null pointers.
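The trie-based lexicographic sort described above can be sketched as a self-contained example (a minimal illustration; "lexicographic_sort" is a name chosen here, not from the original text):

```python
from typing import Any, Dict, List, Optional

class Node:
    def __init__(self) -> None:
        self.children: Dict[str, "Node"] = {}
        self.value: Optional[Any] = None

def insert(node: Node, key: str, value: Any) -> None:
    # Walk the trie, creating nodes for the unmatched suffix of the key.
    for char in key:
        node = node.children.setdefault(char, Node())
    node.value = value

def lexicographic_sort(keys: List[str]) -> List[str]:
    # Build a trie from the keys, then traverse it in pre-order with the
    # children of each node visited in sorted order, emitting each
    # terminal node's key. This is a form of radix sort.
    root = Node()
    for k in keys:
        insert(root, k, True)
    out: List[str] = []

    def walk(node: Node, prefix: str) -> None:
        if node.value is not None:
            out.append(prefix)
        for char in sorted(node.children):
            walk(node.children[char], prefix + char)

    walk(root, "")
    return out

print(lexicographic_sort(["trie", "tree", "try", "to", "ten"]))
# → ['ten', 'to', 'tree', 'trie', 'try']
```

Because shared prefixes are visited once and children are enumerated in character order, the traversal yields every stored key exactly once, already sorted.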
The storage problem can be alleviated by an implementation technique called "alphabet reduction", whereby the original strings are reinterpreted as longer strings over a smaller alphabet. E.g., a string of bytes can alternatively be regarded as a string of four-bit units and stored in a trie with sixteen pointers per node. Lookups need to visit twice as many nodes in the worst case, but the storage requirements go down by a factor of eight. An alternative implementation represents a node as a triple (symbol, child, next) and links the children of a node together as a singly linked list: child points to the node's first child, next to the parent node's next child. The set of children can also be represented as a binary search tree; one instance of this idea is the ternary search tree developed by Bentley and Sedgewick. Another alternative in order to avoid the use of an array of 256 pointers (ASCII), as suggested before, is to store the alphabet array as a bitmap of 256 bits representing the ASCII alphabet, reducing dramatically the size of the nodes. Bitwise tries are much the same as a normal character-based trie except that individual bits are used to traverse what effectively becomes a form of binary tree. Generally, implementations use a special CPU instruction to very quickly find the first set bit in a fixed-length key (e.g., GCC's __builtin_clz() intrinsic). This value is then used to index a 32- or 64-entry table which points to the first item in the bitwise trie with that number of leading zero bits. The search then proceeds by testing each subsequent bit in the key and choosing child[0] or child[1] appropriately until the item is found. Although this process might sound slow, it is very cache-local and highly parallelizable due to the lack of register dependencies and therefore in fact has excellent performance on modern out-of-order execution CPUs.
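The alphabet-reduction idea can be illustrated with a small helper ("to_nibbles" is a hypothetical name for this sketch): each byte is reinterpreted as two 4-bit units, so a trie over the reduced alphabet needs only 16 child slots per node instead of 256, at the cost of keys twice as long.

```python
from typing import List

def to_nibbles(data: bytes) -> List[int]:
    # Reinterpret each byte as two 4-bit units, high nibble first.
    # A trie keyed on this sequence has fan-out 16 instead of 256,
    # while lookups visit twice as many nodes in the worst case.
    out: List[int] = []
    for b in data:
        out.append(b >> 4)    # high nibble
        out.append(b & 0x0F)  # low nibble
    return out

print(to_nibbles(b"\xAB\x01"))
# → [10, 11, 0, 1]
```

With four-byte pointers, a 256-way node costs about a kilobyte while a 16-way node costs 64 bytes, which is the factor-of-eight storage reduction mentioned above.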
A red-black tree, for example, performs much better on paper, but is highly cache-unfriendly and causes multiple pipeline and TLB stalls on modern CPUs, which makes that algorithm bound by memory latency rather than CPU speed. In comparison, a bitwise trie rarely accesses memory, and when it does, it does so only to read, thus avoiding SMP cache coherency overhead. Hence, it is increasingly becoming the algorithm of choice for code that performs many rapid insertions and deletions, such as memory allocators (e.g., recent versions of Doug Lea's well-known allocator (dlmalloc) and its descendants). The worst-case number of lookup steps is the same as the number of bits used to index bins in the tree. Alternatively, the term "bitwise trie" can more generally refer to a binary tree structure holding integer values, sorting them by their binary prefix. An example is the x-fast trie. Compressing the trie and merging the common branches can sometimes yield large performance gains. This works best under the following conditions: For example, it may be used to represent sparse bitsets; i.e., subsets of a much larger, fixed enumerable set. In such a case, the trie is keyed by the bit element position within the full set. The key is created from the string of bits needed to encode the integral position of each element. Such tries have a very degenerate form with many missing branches. After detecting the repetition of common patterns or filling the unused gaps, the unique leaf nodes (bit strings) can be stored and compressed easily, reducing the overall size of the trie. Such compression is also used in the implementation of the various fast lookup tables for retrieving Unicode character properties. These could include case-mapping tables (e.g., for the Greek letter pi, from Π to π), or lookup tables normalizing the combination of base and combining characters (like the a-umlaut in German, ä, or the dalet-patah-dagesh-ole in Biblical Hebrew).
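A bitwise trie over fixed-width integers, in the general sense described above, might be sketched as follows ("BitNode", "bit_insert", and "bit_contains" are illustrative names; a production implementation would use CPU bit-scan instructions and a skip table rather than a per-bit Python loop):

```python
from typing import List, Optional

class BitNode:
    def __init__(self) -> None:
        # child[0]/child[1] are followed according to each key bit.
        self.child: List[Optional["BitNode"]] = [None, None]
        self.is_key = False

def bit_insert(root: BitNode, value: int, width: int = 8) -> None:
    # Descend from the most significant bit to the least,
    # creating nodes as needed; mark the final node as a stored key.
    node = root
    for shift in range(width - 1, -1, -1):
        bit = (value >> shift) & 1
        if node.child[bit] is None:
            node.child[bit] = BitNode()
        node = node.child[bit]
    node.is_key = True

def bit_contains(root: BitNode, value: int, width: int = 8) -> bool:
    # Test each bit of the key in turn; a missing branch means absence.
    node = root
    for shift in range(width - 1, -1, -1):
        nxt = node.child[(value >> shift) & 1]
        if nxt is None:
            return False
        node = nxt
    return node.is_key

root = BitNode()
for v in (5, 200, 7):
    bit_insert(root, v)
print(bit_contains(root, 200), bit_contains(root, 6))
# → True False
```

Because keys descend by binary prefix, an in-order traversal of such a tree visits the stored integers in sorted order, which is the property the x-fast trie builds on. The worst-case number of lookup steps equals the key width in bits.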
For such applications, the representation is similar to transforming a very large, unidimensional, sparse table (e.g., Unicode code points) into a multidimensional matrix of their combinations, and then using the coordinates in the hyper-matrix as the string key of an uncompressed trie to represent the resulting character. The compression will then consist of detecting and merging the common columns within the hyper-matrix to compress the last dimension in the key. For example, to avoid storing the full, multibyte Unicode code point of each element forming a matrix column, the groupings of similar code points can be exploited. Each dimension of the hyper-matrix stores the start position of the next dimension, so that only the offset (typically a single byte) need be stored. The resulting vector is itself compressible when it is also sparse, so each dimension (associated to a layer level in the trie) can be compressed separately. Some implementations do support such data compression within dynamic sparse tries and allow insertions and deletions in compressed tries. However, this usually has a significant cost when compressed segments need to be split or merged. Some tradeoff has to be made between data compression and update speed. A typical strategy is to limit the range of global lookups for comparing the common branches in the sparse trie. The result of such compression may look similar to trying to transform the trie into a directed acyclic graph (DAG), because the reverse transform from a DAG to a trie is obvious and always possible. However, the shape of the DAG is determined by the form of the key chosen to index the nodes, in turn constraining the compression possible. Another compression strategy is to "unravel" the data structure into a single byte array. This approach eliminates the need for node pointers, substantially reducing the memory requirements. 
This in turn permits memory mapping and the use of virtual memory to load the data from disk efficiently. One more approach is to "pack" the trie. Liang describes a space-efficient implementation of a sparse packed trie applied to automatic hyphenation, in which the descendants of each node may be interleaved in memory. Several trie variants are suitable for maintaining sets of strings in external memory, including suffix trees. A combination of trie and B-tree, called the "B-trie", has also been suggested for this task; compared to suffix trees, B-tries are limited in the operations they support but are also more compact and perform update operations faster.
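The pointer-free "unraveled" representation mentioned above can be sketched by storing child links as integer indices into flat arrays rather than as node pointers, so the whole trie becomes one contiguous buffer that could in principle be written to disk and memory-mapped. This is an illustrative sketch only, not Liang's packed-trie layout (which additionally interleaves nodes to squeeze out unused slots):

```python
# Flattening a bitwise trie into pointer-free parallel arrays.
import array

ALPHABET = 2  # a bitwise trie has two children per node

def build_flat(keys, key_bits=8):
    # children[n * ALPHABET + bit] holds the index of node n's child for
    # that bit, or 0 for "no child" (index 0 is the root, which is never
    # anyone's child, so 0 is free to mean "absent").
    children = array.array("i", [0, 0])
    is_leaf = array.array("b", [0])
    for key in keys:
        node = 0
        for shift in range(key_bits - 1, -1, -1):
            slot = node * ALPHABET + ((key >> shift) & 1)
            if children[slot] == 0:
                children[slot] = len(is_leaf)   # allocate a new node
                children.extend([0, 0])
                is_leaf.append(0)
            node = children[slot]
        is_leaf[node] = 1
    return children, is_leaf

def contains(children, is_leaf, key, key_bits=8):
    node = 0
    for shift in range(key_bits - 1, -1, -1):
        node = children[node * ALPHABET + ((key >> shift) & 1)]
        if node == 0:
            return False
    return bool(is_leaf[node])

children, is_leaf = build_flat([3, 200])
print(contains(children, is_leaf, 3))    # -> True
print(contains(children, is_leaf, 4))    # -> False
```

Because the structure is just two typed arrays, serializing it is a plain byte copy, which is what makes the memory-mapping strategy described above practical.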
https://en.wikipedia.org/wiki?curid=31274
The Age of Reason The Age of Reason; Being an Investigation of True and Fabulous Theology is a work by English and American political activist Thomas Paine, arguing for the philosophical position of deism. It follows in the tradition of 18th-century British deism and challenges institutionalized religion and the legitimacy of the Bible. It was published in three parts in 1794, 1795, and 1807. It was a best-seller in the United States, where it caused a short-lived deistic revival. British audiences, fearing increased political radicalism as a result of the French Revolution, received it with more hostility. "The Age of Reason" presents common deistic arguments; for example, it highlights what Paine saw as the corruption of the Christian Church and criticizes its efforts to acquire political power. Paine advocates reason in the place of revelation, leading him to reject miracles and to view the Bible as an ordinary piece of literature rather than as a divinely inspired text. The book promotes natural religion and argues for the existence of a creator-god. Most of Paine's arguments had long been available to the educated elite, but by presenting them in an engaging and irreverent style, he made deism appealing and accessible to the masses. Originally distributed as unbound pamphlets, the book was also cheap, putting it within the reach of a large number of buyers. Fearing the spread of what it viewed as potentially revolutionary ideas, the British government prosecuted printers and booksellers who tried to publish and distribute it. Nevertheless, Paine's work inspired and guided many freethinkers. Paine's book followed in the tradition of early 18th-century British deism. Those deists, while maintaining individual positions, still shared several sets of assumptions and arguments that Paine articulated in "The Age of Reason". The most important position that united the early deists was their call for "free rational inquiry" into all subjects, especially religion.
Saying that early Christianity was founded on freedom of conscience, they demanded religious toleration and an end to religious persecution. They also demanded that debate rest on reason and rationality. Deists embraced a Newtonian worldview and believed that all things in the universe, even God, must obey the laws of nature. Without a concept of natural law, the deists argued, explanations of the workings of nature would descend into irrationality. This belief in natural law drove their skepticism of miracles. Because miracles had to be observed to be validated, deists rejected the accounts laid out in the Bible of God's miracles and argued that such evidence was neither sufficient nor necessary to prove the existence of God. Along these lines, deistic writings insisted that God, as the first cause or prime mover, had created and designed the universe with natural laws as part of his plan. They held that God does not repeatedly alter his plan by suspending natural laws to intervene (miraculously) in human affairs. Deists also rejected the claim that there was only one revealed religious truth or "one true faith". Religion had to be "simple, apparent, ordinary, and universal" if it was to be the logical product of a benevolent God. They, therefore, distinguished between "revealed religions", which they rejected, such as Christianity, and "natural religion", a set of universal beliefs derived from the natural world that demonstrated God's existence (and so they were not atheists). While some deists accepted revelation, most argued that revelation's restriction to small groups or even a single person limited its explanatory power. Moreover, many found the Christian revelations in particular to be contradictory and irreconcilable. According to those writers, revelation could reinforce the evidence for God's existence already apparent in the natural world but more often led to superstition among the masses. 
Most deists argued that priests had deliberately corrupted Christianity for their own gain by promoting the acceptance of miracles, unnecessary rituals, and illogical and dangerous doctrines (accusations typically referred to as "priestcraft"). The worst of these doctrines was original sin. By convincing people that they required a priest's help to overcome their innate sinfulness, deists argued, religious leaders had enslaved the human population. Deists therefore typically viewed themselves as intellectual liberators. By the time Part I of "The Age of Reason" was published in 1794, many British and French citizens had become disillusioned by the French Revolution. The Reign of Terror had begun, Louis XVI and Marie Antoinette had been tried and executed, and Britain was at war with France. The few British radicals who still supported the French Revolution and its ideals were viewed with deep suspicion by their countrymen. "The Age of Reason" belongs to the later, more radical stage of the British political reform movement, which openly embraced republicanism and sometimes atheism and was exemplified by such texts as William Godwin's "Political Justice" (1793). (However, Paine and other deists were not atheists.) By the middle of the decade, the moderate voices had disappeared: Richard Price, the Dissenting minister whose sermon on political liberty had prompted Edmund Burke's "Reflections on the Revolution in France" (1790), had died in 1791, and Joseph Priestley had been forced to flee to America after a Church-and-King mob burned down his home and church. The conservative government, headed by William Pitt, responded to the increasing radicalization by prosecuting several reformers for seditious libel and treason in the famous 1794 Treason Trials. Following the trials and an attack on George III, conservatives succeeded in passing the Seditious Meetings Act and the Treasonable Practices Act (also known as the "Two Acts" or the "gagging acts").
The 1795 Acts prohibited freedom of assembly for groups such as the radical London Corresponding Society (LCS) and encouraged indictments against radicals for "libelous and seditious" statements. Afraid of prosecution and disenchanted with the French Revolution, many reformers drifted away from the cause. The LCS, which had previously unified religious Dissenters and political reformers, fractured when Francis Place and other leaders helped Paine publish "The Age of Reason". The society's more religious members withdrew in protest, and the LCS lost around a fifth of its membership. In December 1792, Paine's "Rights of Man, Part II" was declared seditious in Britain, and he was forced to flee to France to avoid arrest. Dismayed by the French Revolution's turn toward secularism and atheism, he composed Part I of "The Age of Reason" in 1792 and 1793. Although Paine wrote "The Age of Reason" for the French, he dedicated it to his "Fellow Citizens of the United States of America", alluding to his bond with the American revolutionaries. It is unclear exactly when Paine drafted Part I, although he addressed the question in the preface to Part II. According to Paine scholars Edward Davidson and William Scheick, he probably wrote the first draft of Part I in late 1793, but Paine biographer David Hawke argues for a date of early 1793. It is also unclear whether a French edition of Part I was published in 1793. François Lanthenas, who translated "The Age of Reason" into French in 1794, wrote that it was first published in France in 1793, but no book fitting his description has been positively identified. Joel Barlow published the first English edition of "The Age of Reason, Part I" in 1794 in London, selling it for a mere three pence. Meanwhile, Paine, considered too moderate by the powerful Jacobin Club of French revolutionaries, was imprisoned for ten months in France.
He escaped the guillotine only by accident: the sign marking him out for execution was improperly placed on his cell door. When James Monroe, at that time the new American Minister to France, secured his release in 1794, Paine immediately began work on Part II of "The Age of Reason" despite his poor health. Part II was first published in a pirated edition by H.D. Symonds in London in October 1795. In 1796, Daniel Isaac Eaton published Parts I and II, and sold them at a cost of one shilling and six pence. (Eaton was later forced to flee to America after being convicted of seditious libel for publishing other radical works.) Paine himself financed the shipping of 15,000 copies of his work to America. Later, Francis Place and Thomas Williams collaborated on an edition, which sold about 2,000 copies. Williams also produced his own edition, but the British government indicted him and confiscated the pamphlets. In the late 1790s, Paine fled from France to the United States, where he wrote Part III of "The Age of Reason": "An Examination of the Passages in the New Testament, Quoted from the Old and Called Prophecies Concerning Jesus Christ". Fearing unpleasant and even violent reprisals, Thomas Jefferson convinced him not to publish it in 1802. Five years later, Paine decided to publish despite the backlash he knew would ensue. Following Williams's sentence of one year's hard labor for publishing "The Age of Reason" in 1797, no editions were sold openly in Britain until 1818, when Richard Carlile included it in an edition of Paine's complete works. Carlile charged one shilling and sixpence for the work, and the first run of 1,000 copies sold out in a month. He immediately published a second edition of 3,000 copies. Like Williams, he was prosecuted for seditious libel and blasphemous libel. 
The prosecutions surrounding the printing of "The Age of Reason" in Britain continued for 30 years after its initial release and encompassed numerous publishers as well as over a hundred booksellers. "The Age of Reason" is divided into three sections. In Part I, Paine outlines his major arguments and personal creed. In Parts II and III he analyzes specific portions of the Bible to demonstrate that it is not the revealed word of God. At the beginning of Part I of the "Age of Reason", Paine lays out his personal belief. His creed encapsulates many of the major themes of the rest of his text: a firm belief in a creator-God; a skepticism regarding most supernatural claims (miracles are specifically mentioned later in the text); a conviction that virtues should be derived from a consideration for others rather than oneself; an animus against corrupt religious institutions; and an emphasis on the individual's right of conscience. Paine begins "The Age of Reason" by attacking revelation. Revelation, he maintains, can be verified only by the individual receivers of the message and so is weak evidence for God's existence. Paine rejects prophecies and miracles: "it is revelation to the first person only, and hearsay to every other, and consequently they are not obliged to believe it." He also points out that the Christian revelations appear to have altered over time to adjust for changing political circumstances. Urging his readers to employ reason rather than to rely on revelation, Paine argues that the only reliable, unchanging, and universal evidence of God's existence is the natural world. "The Bible of the Deist," he contends, should not be a human invention, such as the Bible, but rather a divine invention: it should be "creation". Paine takes that argument even further by maintaining that the same rules of logic and standards of evidence that govern the analysis of secular texts should be applied to the Bible.
In Part II of "The Age of Reason", he does just that by pointing out numerous contradictions in the Bible. For example, Paine notes, "The most extraordinary of all the things called miracles, related in the New Testament, is that of the devil flying away with Jesus Christ, and carrying him to the top of a high mountain, and to the top of the highest pinnacle of the temple, and showing him and promising to him all the kingdoms of the World. How happened it that he did not discover America, or is it only with kingdoms that his sooty highness has any interest?" After establishing that he would refrain from using extra-Biblical sources to inform his criticism, but would instead apply the Bible's own words against itself, Paine questions the sacredness of the Bible and analyzes it as one would any other book. For example, in his analysis of the Book of Proverbs he argues that its sayings are "inferior in keenness to the proverbs of the Spaniards, and not more wise and economical than those of the American Franklin." Describing the Bible as "fabulous mythology," Paine questions whether it was revealed to its writers and doubts that the original writers can ever be known (for example, he dismisses the idea that Moses wrote the Pentateuch or that the Gospels' authors are known). Using methods that would not become common in Biblical scholarship until the 19th century, Paine tested the Bible for internal consistency, questioned its historical accuracy, and concluded that it was not divinely inspired. Paine also argues that the Old Testament must be false because it depicts a tyrannical God. The "history of wickedness" pervading the Old Testament convinced Paine that it was simply another set of human-authored myths. He deplores people's credulity: "Brought up in habits of superstition," he wrote, "people in general know not how much wickedness there is in this pretended word of God."
Citing Numbers 31:13–47 as an example, in which Moses orders the slaughter of thousands of boys and women and sanctions the rape of thousands of girls at God's behest, Paine calls the Bible a "book of lies, wickedness, and blasphemy; for what can be greater blasphemy than to ascribe the wickedness of man to the orders of the Almighty!" Paine also attacks religious institutions, indicting priests for their lust for power and wealth and the Church's opposition to scientific investigation. He presents the history of Christianity as one of corruption and oppression. Paine criticizes the tyrannical actions of the Church as he had those of governments in the "Rights of Man" and "Common Sense", stating that "the Christian theory is little else than the idolatry of the ancient Mythologists, accommodated to the purposes of power and revenue." That kind of attack distinguishes Paine's book from other deistic works, which were less interested in challenging social and political hierarchies. He argues that the Church and the state are a single corrupt institution that does not act in the best interests of the people, and so both must be radically altered. As Jon Mee, a scholar of British radicalism, writes: "Paine believed... a revolution in religion was the natural corollary, even prerequisite, of a fully successful political revolution." Paine lays out a vision of, in Davidson and Scheick's words, "an age of intellectual freedom, when reason would triumph over superstition, when the natural liberties of humanity would supplant priestcraft and kingship, which were both secondary effects of politically managed foolish legends and religious superstitions." It is this vision that scholars have called Paine's "secular millennialism", and it appears in all of his works. He ends the "Rights of Man", for example, with the statement: "From what we now see, nothing of reform in the political world ought to be held improbable.
It is an age of revolutions, in which everything may be looked for." Paine "transformed the millennial Protestant vision of the rule of Christ on earth into a secular image of utopia," emphasizing the possibilities of "progress" and "human perfectibility" that could be achieved by humankind, without God's aid. Although Paine liked to say that he read very little, his writings belied that statement; "The Age of Reason" has intellectual roots in the traditions of David Hume, Spinoza, and Voltaire. Since Hume had already made many of the same "moral attacks upon Christianity" that Paine popularized in "The Age of Reason", scholars have concluded that Paine probably read Hume's works on religion or had at least heard about them through the Joseph Johnson circle. Paine would have been particularly drawn to Hume's description of religion as "a positive source of harm to society" that "led men to be factious, ambitious and intolerant." More of an influence on Paine than Hume was Spinoza's "Tractatus Theologico-politicus" (1678). Paine would have been exposed to Spinoza's ideas through the works of other 18th-century deists, most notably Conyers Middleton. Though these larger philosophical traditions are clear influences on "The Age of Reason", Paine owes the greatest intellectual debt to the English deists of the early 18th century, such as Peter Annet. John Toland had argued for the use of reason in interpreting scripture, Matthew Tindal had argued against revelation, Middleton had described the Bible as mythology and questioned the existence of miracles, Thomas Morgan had disputed the claims of the Old Testament, Thomas Woolston had questioned the believability of miracles and Thomas Chubb had maintained that Christianity lacked morality. All of those arguments appear in "The Age of Reason" albeit less coherently. The most distinctive feature of "The Age of Reason", like all of Paine's works, is its linguistic style. 
Historian Eric Foner argues that Paine's works "forged a new political language" designed to bring politics to the people by using a "clear, simple and straightforward" style. Paine outlined "a new vision—a utopian image of an egalitarian republican society" and his language reflected these ideals. He originated such phrases as "the rights of man," "the age of reason," "the age of revolution," and "the times that try men's souls." Foner also maintains that with "The Age of Reason" Paine "gave deism a new, aggressive, explicitly anti-Christian tone". He did so by employing "vulgar" (that is, "low" or "popular") language, an irreverent tone, and even religious rhetoric. In a letter to Elihu Palmer, one of his most loyal followers in America, Paine describes part of his rhetorical philosophy. Paine's rhetoric had broad appeal; his "pithy" lines were "able to bridge working-class and middle-class cultures" and become common quotations. Part of what makes Paine's style so memorable is his effective use of repetition and rhetorical questions, in addition to the profusion of "anecdote, irony, parody, satire, feigned confusion, folk matter, concrete vocabulary, and... appeals to common sense". Paine's conversational style draws the reader into the text. His use of "we" conveys an "illusion that he and the readers share the activity of constructing an argument." By thus emphasizing the presence of the reader and leaving images and arguments half-formed, Paine encourages his readers to complete them independently. The most distinctive element of Paine's style in "The Age of Reason" is its "vulgarity". In the 18th century, "vulgarity" was associated with the middling and lower classes and not with obscenity, so when Paine celebrates his "vulgar" style and his critics attack it, the dispute is over class accessibility, not profanity.
Paine's plainspoken retelling of the Fall is one example. The irreverent tone that Paine combined with this vulgar style set his work apart from its predecessors. It took "deism out of the hands of the aristocracy and intellectuals and [brought] it to the people". Paine's rhetorical appeal to "the people" attracted almost as much criticism as his ridicule of the Bible. Bishop Richard Watson, forced to address the new audience in his influential response to Paine, "An Apology for the Bible", wrote: "I shall, designedly, write this and the following letters in a popular manner; hoping that thereby they may stand a chance of being perused by that class of readers, for whom your work seems to be particularly calculated, and who are the most likely to be injured by it." However, it was not only the style that concerned Watson and others but also the cheapness of Paine's book. At one sedition trial in the early 1790s, the Attorney-General tried to prohibit Thomas Cooper from publishing his response to Burke's "Reflections on the Revolution in France" and argued that "although there was no exception to be taken to his pamphlet when in the hands of the upper classes, yet the government would not allow it to appear at a price which would insure its circulation among the people." Paine's style is not only "vulgar" but also irreverent. For example, he wrote that once one dismisses the false idea of Moses being the author of Genesis, "The story of Eve and the serpent, and of Noah and his ark, drops to a level with the Arabian tales, without the merit of being entertaining." Although many early English deists had relied on ridicule to attack the Bible and Christianity, theirs was a refined wit rather than the broad humor that Paine employed. It was the early deists of the middling ranks, not the educated elite, who initiated the kind of ridicule Paine would make famous. It was Paine's "ridiculing" tone that most angered Churchmen.
As John Redwood, a scholar of deism, puts it: "the age of reason could perhaps more eloquently and adequately be called the age of ridicule, for it was ridicule, not reason, that endangered the Church." Significantly, Watson's "Apology" directly chastises Paine for his mocking tone. Paine's Quaker upbringing predisposed him to deistic thinking at the same time that it positioned him firmly within the tradition of religious Dissent. Paine acknowledged that he was indebted to his Quaker background for his skepticism, but the Quakers' esteem for plain speaking, a value expressed both explicitly and implicitly in "The Age of Reason", influenced his writing even more. As the historian E. P. Thompson has put it, Paine "ridiculed the authority of the Bible with arguments which the collier or country girl could understand." His description of the story of the virgin birth of Jesus demystifies biblical language: it is "an account of a young woman engaged to be married, and while under this engagement she is, to speak plain language, debauched by a ghost." Quaker conversion narratives also influenced the style of "The Age of Reason". Davidson and Scheick argue that its "introductory statement of purpose, a fervid sense of inward inspiration, a declared expression of conscience, and an evangelical intention to instruct others" resemble the personal confessions of American Quakers. Paine takes advantage of several religious rhetorics beyond those associated with Quakerism in "The Age of Reason", most importantly millennial language that appealed to his lower-class readers. Claiming that true religious language is universal, Paine uses elements of the Christian rhetorical tradition to undermine the hierarchies perpetuated by religion itself. The sermonic quality of Paine's writing is one of its most recognizable traits. Sacvan Bercovitch, a scholar of the sermon, argues that Paine's writing often resembles that of the jeremiad or "political sermon."
He contends that Paine draws on the Puritan tradition in which "theology was wedded to politics and politics to the progress of the kingdom of God". One reason that Paine may have been drawn to this style is because he may have briefly been a Methodist preacher, but that suspicion cannot be verified. "The Age of Reason" provoked a hostile reaction from most readers and critics, although the intensity of that hostility varied by locality. There were four major factors for this animosity: Paine denied that the Bible was a sacred, inspired text; he argued that Christianity was a human invention; his ability to command a large readership frightened those in power; and his irreverent and satirical style of writing about Christianity and the Bible offended many believers. Paine's "Age of Reason" sparked enough anger in Britain to initiate not only a series of government prosecutions but also a pamphlet war. Around 50 unfavorable replies appeared between 1795 and 1799 alone, and refutations were still being published in 1812. Many of them responded specifically to Paine's attack on the Bible in Part II (when Thomas Williams was prosecuted for printing Part II, it became clear its circulation had far exceeded that of Part I). Although critics responded to Paine's analysis of the Bible, they did not usually address his specific arguments. Instead, they advocated a literal reading of the Bible, citing the Bible's long history as evidence of its authority. They also issued "ad hominem" attacks against Paine, describing him "as an enemy of proper thought and of the morality of decent, enlightened people". Dissenters such as Joseph Priestley, who had endorsed the arguments of the "Rights of Man", turned away from those presented in "The Age of Reason". Even the liberal "Analytical Review" was skeptical of Paine's claims and distanced itself from the book. 
Paine's deism was simply too radical for these more moderate reformers and they feared being tarred with the brush of extremism. Despite the outpouring of antagonistic replies to "The Age of Reason", some scholars have argued that Constantin Volney's deistic "The Ruins" (translations of excerpts from the French original appeared in radical papers such as Thomas Spence's "Pig's Meat" and Daniel Isaac Eaton's "Politics for the People") was actually more influential than "The Age of Reason". According to David Bindman, "The Ruins" "achieved a popularity in England comparable to "Rights of Man" itself." One minister complained that "the mischief arising from the spreading of such a pernicious publication [as "The Age of Reason"] was infinitely greater than any that could spring from limited suffrage and septennial parliaments" (other popular reform causes). It was not until Richard Carlile's 1818 trial for publishing "The Age of Reason" that Paine's text became "the anti-Bible of all lower-class nineteenth-century infidel agitators". Although the book had been selling well before the trial, once Carlile was arrested and charged, 4,000 copies were sold in just a few months. At the trial itself, which created a media frenzy, Carlile read the entirety of "The Age of Reason" into the court record, ensuring it an even wider publication. Between 1818 and 1822, Carlile claimed to have "sent into circulation near 20,000 copies of the "Age of Reason"". Just as in the 1790s, it was the language that most angered the authorities in 1818. As Joss Marsh, in her study of blasphemy in the 19th century, pointed out, "at these trials plain English was reconfigured as itself 'abusive' and 'outrageous.' The "Age of Reason" struggle almost tolled the hour when the words 'plain,' 'coarse,' 'common,' and 'vulgar' took on a pejorative meaning." 
Carlile was convicted of blasphemy and sentenced to one year in prison but spent six years instead because he refused any "legal conditions" on his release. Paine's new rhetoric came to dominate popular 19th-century radical journalism, particularly that of freethinkers, Chartists, and Owenites. Its legacy can be seen in Thomas Jonathan Wooler's radical periodical "The Black Dwarf", Carlile's numerous newspapers and journals, the radical works of William Cobbett, Henry Hetherington's periodicals the "Penny Papers" and the "Poor Man's Guardian", Chartist William Lovett's works, George Holyoake's newspapers and books on Owenism, and freethinker Charles Bradlaugh's "New Reformer". A century after the publication of "The Age of Reason", Paine's rhetoric was still being used: George William Foote's "Bible Handbook" (1888) "... systematically manhandles chapters and verses to bring out 'Contradictions,' 'Absurdities,' 'Atrocities,' and 'Obscenities,' exactly in the manner of Paine's "Age of Reason"." The periodical "The Freethinker" (founded in 1881 by George Foote) argued, like Paine, that the "absurdities of faith" could be "slain with laughter." "The Age of Reason", despite having been written for the French, made very little, if any, impact on revolutionary France. Paine wrote that "the people of France were running headlong into atheism and I had the work translated into their own language, to stop them in that career, and fix them to the first article ... of every man's creed who has any creed at all – "I believe in God"" (emphasis Paine's). Paine's arguments were already common and accessible in France; they had, in a sense, already been rejected. While still in France, Paine formed the Church of Theophilanthropy with five other families, a civil religion that held as its central dogma that man should worship God's wisdom and benevolence and imitate those divine attributes as much as possible.
The church had no priest or minister, and the traditional Biblical sermon was replaced by scientific lectures or homilies on the teachings of philosophers. It celebrated four festivals honoring St. Vincent de Paul, George Washington, Socrates, and Rousseau. Samuel Adams articulated the goals of this church when he wrote that Paine aimed "to renovate the age by inculcating in the minds of youth the fear and love of the Deity and universal philanthropy." The church closed in 1801, when Napoleon concluded a concordat with the Vatican. In the United States, "The Age of Reason" initially caused a deistic "revival", but was then viciously attacked and largely forgotten. Paine became so reviled that he could still be maligned as a "filthy little atheist" by Theodore Roosevelt over one hundred years later. At the end of the 18th century, America was ripe for Paine's arguments. Ethan Allen published the first American defense of deism, "Reason, The Only Oracle of Man" (1784), but deism remained primarily a philosophy of the educated elite. Men such as Benjamin Franklin and Thomas Jefferson espoused its tenets but at the same time argued that religion served the useful purpose of "social control." It was not until the publication of Paine's more entertaining and popular work that deism reached into the middling and lower classes in America. The public was receptive, in part, because they approved of the secular ideals of the French Revolution. "The Age of Reason" went through 17 editions and sold thousands of copies in the United States. Elihu Palmer, "a blind renegade minister" and Paine's most loyal follower in America, promoted deism throughout the country. Palmer published what became "the bible of American deism", "The Principles of Nature", established deistic societies from Maine to Georgia, built Temples of Reason throughout the nation, and founded two deistic newspapers for which Paine eventually wrote seventeen essays. 
Foner wrote, ""The Age of Reason" became the most popular deist work ever written... Before Paine it had been possible to be both a Christian and a deist; now such a religious outlook became virtually untenable." Paine presented deism to the masses, and, as in Britain, educated elites feared the consequences of such material in the hands of so many. Their fear helped to drive the backlash that soon followed. Almost immediately after this deistic upsurge, the Second Great Awakening began. George Spater explains that "the revulsion felt for Paine's "Age of Reason" and for other anti-religious thought was so great that a major counter-revolution had been set underway in America before the end of the eighteenth century." By 1796, every student at Harvard was given a copy of Watson's rebuttal of "The Age of Reason". In 1815, Parson Weems, an early American novelist and moralist, published "God's Revenge Against Adultery", in which one of the major characters "owed his early downfall to reading 'PAINE'S AGE OF REASON'". Paine's "libertine" text leads the young man to "bold slanders of the bible", even to the point that he "threw aside his father's good old family bible, and for a surer guide to pleasure took up the AGE OF REASON!" Paine could not publish Part III of "The Age of Reason" in America until 1807 because of the deep antipathy against him. Hailed only a few years earlier as a hero of the American Revolution, Paine was now lambasted in the press and called "the scavenger of faction," a "lilly-livered sinical rogue," a "loathsome reptile," a "demi-human archbeast," "an object of disgust, of abhorrence, of absolute loathing to every decent man except the President of the United States [Thomas Jefferson]." In October 1805, John Adams wrote to his friend Benjamin Waterhouse, an American physician and scientist. Adams viewed Paine's "Age of Reason" not as the embodiment of the Enlightenment but as a "betrayal" of it.
Despite all of these attacks, Paine never wavered in his beliefs; when he was dying, a woman came to visit him, claiming that God had instructed her to save his soul. Paine dismissed her in the same tones that he had used in "The Age of Reason": "pooh, pooh, it is not true. You were not sent with any such impertinent message... Pshaw, He would not send such a foolish ugly old woman as you about with His message." "The Age of Reason" was largely ignored after 1820, except by radical groups in Britain and freethinkers in America, such as Robert G. Ingersoll and the American abolitionist Moncure Daniel Conway, who edited his works and wrote the first biography of Paine, favorably reviewed by "The New York Times". Not until the publication of Charles Darwin's "The Origin of Species" in 1859, and the large-scale abandonment of the literal reading of the Bible that it caused in Britain, did many of Paine's ideas take hold. As writer Mark Twain said, "It took a brave man before the Civil War to confess he had read the "Age of Reason"... I read it first when I was a cub pilot, read it with fear and hesitation, but marveling at its fearlessness and wonderful power." Paine's criticisms of the church, the monarchy, and the aristocracy appear most clearly in Twain's "A Connecticut Yankee in King Arthur's Court" (1889). Paine's text is still published today, one of the few 18th-century religious texts to be widely available. Its message still resonates, evidenced by Christopher Hitchens, who stated that "if the rights of man are to be upheld in a dark time, we shall require an age of reason". His book ends with the claim that "in a time... when both rights and reason are under several kinds of open and covert attack, the life and writing of Thomas Paine will always be part of the arsenal on which we shall need to depend."
https://en.wikipedia.org/wiki?curid=31275
The Bell Curve The Bell Curve: Intelligence and Class Structure in American Life is a 1994 book by psychologist Richard J. Herrnstein and political scientist Charles Murray, in which the authors argue that human intelligence is substantially influenced by both inherited and environmental factors and that it is a better predictor of many personal outcomes, including financial income, job performance, birth out of wedlock, and involvement in crime, than is an individual's parental socioeconomic status. They also argue that those with high intelligence, the "cognitive elite", are becoming separated from those of average and below-average intelligence. The book was and remains highly controversial, especially where the authors discussed purported connections between race and intelligence and suggested policy implications based on these purported connections. Shortly after its publication, many people rallied both in criticism and in defense of the book. A number of critical texts were written in response to it. "The Bell Curve", published in 1994, was written by Richard Herrnstein and Charles Murray to explain the variations in intelligence in American society, warn of some consequences of that variation, and propose social policies for mitigating the worst of the consequences. The book's title comes from the bell-shaped normal distribution of intelligence quotient (IQ) scores in a population. The book starts with an introduction that appraises the history of the concept of intelligence from Francis Galton to modern times. Spearman's introduction of the general factor of intelligence and other early advances in research on intelligence are discussed along with a consideration of links between intelligence testing and racial politics. The 1960s are identified as the period in American history when social problems were increasingly attributed to forces outside the individual. 
This egalitarian ethos, Herrnstein and Murray argue, cannot accommodate biologically based individual differences. The introduction states six of the authors' assumptions, which they claim to be "beyond significant technical dispute". At the close of the introduction, the authors warn the reader against committing the ecological fallacy of inferring things about individuals based on the aggregate data presented in the book. They also assert that intelligence is just one of many valuable human attributes and one whose importance among human virtues is overrated. In the first part of the book, Herrnstein and Murray chart how American society was transformed in the 20th century. They argue that America evolved from a society where social origin largely determined one's social status to one where cognitive ability is the leading determinant of status. The growth in college attendance, a more efficient recruitment of cognitive ability, and the sorting of cognitive ability by selective colleges are identified as important drivers of this evolution. Increased occupational sorting by cognitive ability is discussed. The argument is made, based on published meta-analyses, that cognitive ability is the best predictor of worker productivity. Herrnstein and Murray argue that due to increasing returns to cognitive ability, a cognitive elite is being formed in America. This elite is getting richer and progressively more segregated from the rest of society. The second part describes how cognitive ability is related to social behaviors: high ability predicts socially desirable behavior, low ability undesirable behavior. The argument is made that group differences in social outcomes are better explained by intelligence differences rather than socioeconomic status, a perspective, the authors argue, that has been neglected in research. 
The analyses reported in this part of the book were done using data from the National Longitudinal Survey of Labor Market Experience of Youth (NLSY), a study conducted by the United States Department of Labor's Bureau of Labor Statistics tracking thousands of Americans starting in the 1980s. Only non-Hispanic whites are included in the analyses so as to demonstrate that the relationships between cognitive ability and social behavior are not driven by race or ethnicity. Herrnstein and Murray argue that intelligence is a better predictor of individuals' outcomes than parental socioeconomic status. This argument is based on analyses where individuals' IQ scores are shown to better predict their outcomes as adults than the socioeconomic status of their parents. Such results are reported for many outcomes, including poverty, dropping out of school, unemployment, marriage, divorce, illegitimacy, welfare dependency, criminal offending, and the probability of voting in elections. All participants in the NLSY took the Armed Services Vocational Aptitude Battery (ASVAB), a battery of ten tests taken by all who apply for entry into the armed services. (Some had taken an IQ test in high school, and the median correlation of the Armed Forces Qualification Test (AFQT) scores and those IQ test scores was .81). Participants were later evaluated for social and economic outcomes. In general, IQ/AFQT scores were a better predictor of life outcomes than social class background. Similarly, after statistically controlling for differences in IQ, many outcome differences between racial-ethnic groups disappeared. Values are the percentage of each IQ sub-population, among non-Hispanic whites only, fitting each descriptor. This part of the book discusses ethnic differences in cognitive ability and social behavior. Herrnstein and Murray report that Asian Americans have a higher mean IQ than white Americans, who in turn outscore black Americans. 
The book argues that the black-white gap is not due to test bias, noting that IQ tests do not tend to underpredict the school or job performance of black individuals and that the gap is larger on apparently culturally neutral test items than on more culturally loaded items. The authors also note that adjusting for socioeconomic status does not eliminate the black-white IQ gap. However, they argue that the gap is narrowing. According to Herrnstein and Murray, the high heritability of IQ within races does not necessarily mean that the cause of differences between races is genetic. On the other hand, they discuss lines of evidence that have been used to support the thesis that the black-white gap is at least partly genetic, such as Spearman's hypothesis. They also discuss possible environmental explanations of the gap, such as the observed generational increases in IQ, for which they coin the term Flynn effect. At the close of this discussion, the authors stress that, regardless of the causes of the differences, people should be treated no differently. In Part III, the authors also repeat many of the analyses from Part II, but now compare whites to blacks and Hispanics in the NLSY dataset. They find that after controlling for IQ, many differences in social outcomes between races are diminished. The authors discuss the possibility that high birth rates among those with lower IQs may exert a downward pressure on the national distribution of cognitive ability. They argue that immigration may also have a similar effect. At the close of Part III, Herrnstein and Murray discuss the relation of IQ to social problems. Using the NLSY data, they show that social problems increase as a monotonic function of lower IQ. In this final chapter, the authors discuss the relevance of cognitive ability for understanding major social issues in America. Evidence for experimental attempts to raise intelligence is reviewed. 
The authors conclude that currently there are no means to boost intelligence by more than a modest degree. The authors criticize the "levelling" of general and secondary education and defend gifted education. They offer a critical overview of affirmative action policies in colleges and workplaces, arguing that their goal should be equality of opportunity rather than equal outcomes. Herrnstein and Murray offer a pessimistic portrait of America's future. They predict that a cognitive elite will further isolate itself from the rest of society, while the quality of life deteriorates for those at the bottom of the cognitive scale. As an antidote to this prognosis, they offer a vision of society where differences in ability are recognized and everybody can have a valued place, stressing the role of local communities and clear moral rules that apply to everybody. Herrnstein and Murray argued that the average genetic IQ of the United States is declining, owing to the tendency of the more intelligent to have fewer children than the less intelligent, the shorter generation length of the less intelligent, and the large-scale immigration to the United States of those with low intelligence. Discussing a possible future political outcome of an intellectually stratified society, the authors stated that they "fear that a new kind of conservatism is becoming the dominant ideology of the affluent—not in the social tradition of an Edmund Burke or in the economic tradition of an Adam Smith but 'conservatism' along Latin American lines, where to be conservative has often meant doing whatever is necessary to preserve the mansions on the hills from the menace of the slums below." Moreover, they fear that increasing welfare will create a "custodial state" in "a high-tech and more lavish version of the Indian reservation for some substantial minority of the nation's population." 
They also predict increasing totalitarianism: "It is difficult to imagine the United States preserving its heritage of individualism, equal rights before the law, free people running their own lives, once it is accepted that a significant part of the population must be made permanent wards of the states." The authors recommended the elimination of welfare policies which they claim encourage poor women to have babies. "The Bell Curve" received a great deal of media attention. The book was not distributed in advance to the media, except for a few select reviewers picked by Murray and the publisher, which delayed more detailed critiques for months and years after the book's release. Stephen Jay Gould, reviewing the book in "The New Yorker", said that the book "contains no new arguments and presents no compelling data to support its anachronistic social Darwinism" and said that the "authors omit facts, misuse statistical methods, and seem unwilling to admit the consequence of their own words." A 1995 article by Fairness and Accuracy in Reporting writer Jim Naureckas criticized the media response, saying that "While many of these discussions included sharp criticisms of the book, media accounts showed a disturbing tendency to accept Murray and Herrnstein's premises and evidence even while debating their conclusions". After reviewers had more time to review the book's research and conclusions, more significant criticisms began to appear. Nicholas Lemann, writing in "Slate", said that later reviews showed the book was "full of mistakes ranging from sloppy reasoning to mis-citations of sources to outright mathematical errors." Lemann said that "Unsurprisingly, all the mistakes are in the direction of supporting the authors' thesis." Herrnstein and Murray were criticized for not submitting their work to peer review before publication, an omission many have seen as incompatible with their presentation of it as a scholarly text. 
Nicholas Lemann noted that the book was not circulated in galley proofs, a common practice to allow potential reviewers and media professionals an opportunity to prepare for the book's arrival. Fifty-two professors, most of them researchers in intelligence and related fields, signed "Mainstream Science on Intelligence", an opinion statement endorsing a number of the views presented in "The Bell Curve". The statement was written by psychologist Linda Gottfredson and published in "The Wall Street Journal" in 1994 and subsequently reprinted in "Intelligence", an academic journal. Of the 131 who were invited by mail to sign the document, 100 responded, with 52 agreeing to sign and 48 declining. Eleven of the 48 who declined to sign claimed that the statement or some part thereof did not represent the mainstream view of intelligence. In response to the controversy surrounding "The Bell Curve", the American Psychological Association's Board of Scientific Affairs established a special task force to publish an investigative report focusing solely on the research presented in the book, not necessarily the policy recommendations that were made. The task force's report also addressed explanations for racial differences. The APA journal that published the statement, "American Psychologist", subsequently published eleven critical responses in January 1997. Many criticisms were collected in the book "The Bell Curve Debate". Stephen Jay Gould wrote that the "entire argument" of the authors of "The Bell Curve" rests on four unsupported, and mostly false, assumptions about intelligence. In a 1995 interview with Frank Miele of "Skeptic", Murray denied making each of these four assumptions. The Nobel Memorial Prize-winning economist James Heckman considers two assumptions made in the book to be questionable: that "g" accounts for correlation across test scores and performance in society, and that "g" cannot be manipulated. 
Heckman's reanalysis of the evidence used in "The Bell Curve" found contradictions. In response, Murray argued that this was a straw man and that the book does not argue that "g" or IQ are totally immutable or the only factors affecting outcomes. In a 2005 interview, Heckman praised "The Bell Curve" for breaking "a taboo by showing that differences in ability existed and predicted a variety of socioeconomic outcomes" and for playing "a very important role in raising the issue of differences in ability and their importance" and stated that he was "a bigger fan of ["The Bell Curve"] than you might think." However, he also maintained that Herrnstein and Murray overestimated the role of heredity in determining intelligence differences. In 1995, Noam Chomsky, one of the founders of the field of cognitive science, directly criticized the book and its assumptions on IQ. He takes issue with the idea that IQ is 60% heritable, saying the "statement is meaningless" since heritability does not have to be genetic. He gives the example of women wearing earrings. He goes on to say there is almost no evidence of a genetic link, and greater evidence that environmental issues are what determine IQ differences. Claude S. Fischer, Michael Hout, Martín Sánchez Jankowski, Samuel R. Lucas, Ann Swidler, and Kim Voss in the book "Inequality by Design" recalculated the effect of socioeconomic status, using the same variables as "The Bell Curve", but weighting them differently. They found that if IQ scores are adjusted, as Herrnstein and Murray did, to eliminate the effect of education, the ability of IQ to predict poverty can become dramatically larger, by as much as 61 percent for whites and 74 percent for blacks. According to the authors, Herrnstein and Murray's finding that IQ predicts poverty much better than socioeconomic status is substantially a result of the way they handled the statistics. 
In August 1995, National Bureau of Economic Research economist Sanders Korenman and Harvard University sociologist Christopher Winship argued that measurement error was not properly handled by Herrnstein and Murray. Korenman and Winship concluded: "...there is evidence of substantial bias due to measurement error in their estimates of the effects of parents' socioeconomic status. In addition, Herrnstein and Murray's measure of parental socioeconomic status (SES) fails to capture the effects of important elements of family background (such as single-parent family structure at age 14). As a result, their analysis gives an exaggerated impression of the importance of IQ relative to parents' SES, and relative to family background more generally. Estimates based on a variety of methods, including analyses of siblings, suggest that parental family background is at least as important, and may be more important than IQ in determining socioeconomic success in adulthood." In the book "Intelligence, Genes, and Success: Scientists Respond to The Bell Curve", a group of social scientists and statisticians analyzes the genetics-intelligence link, the concept of intelligence, the malleability of intelligence and the effects of education, the relationship between cognitive ability, wages and meritocracy, pathways to racial and ethnic inequalities in health, and the question of public policy. This work argues that much of the public response was polemic, and failed to analyze the details of the science and validity of the statistical arguments underlying the book's conclusions. William J. Matthews writes that part of "The Bell Curve"'s analysis is based on the AFQT "which is not an IQ test but designed to predict performance of certain criterion variables". The AFQT covers subjects such as trigonometry. 
Heckman observed that the AFQT was designed only to predict success in military training schools and that most of these tests appear to be achievement tests rather than ability tests, measuring factual knowledge and not pure ability. Janet Currie and Duncan Thomas presented evidence suggesting AFQT scores are likely better markers for family background than "intelligence" in a 1999 study. Charles R. Tittle and Thomas Rotolo found that the more the written, IQ-like, examinations are used as screening devices for occupational access, the stronger the relationship between IQ and income. Thus, rather than higher IQ leading to status attainment because it indicates skills needed in a modern society, IQ may reflect the same test-taking abilities used in artificial screening devices by which status groups protect their domains. Min-Hsiung Huang and Robert M. Hauser write that Herrnstein and Murray provide scant evidence of growth in cognitive sorting. Using data from the General Social Survey, they tested these hypotheses using a short verbal ability test which was administered to about 12,500 American adults between 1974 and 1994; the results provided no support for any of the trend hypotheses advanced by Herrnstein and Murray. One chart in "The Bell Curve" purports to show that people with IQs above 120 have become "rapidly more concentrated" in high-IQ occupations since 1940. But Robert Hauser and his colleague Min-Hsiung Huang retested the data and came up with estimates that fell "well below those of Herrnstein and Murray." They add that the data, properly used, "do not tell us anything except that selected, highly educated occupation groups have grown rapidly since 1940." In 1972, Noam Chomsky questioned Herrnstein's idea that society was developing towards a meritocracy. Chomsky criticized the assumptions that people only seek occupations based on material gain. 
He argued that Herrnstein would not want to become a baker or lumberjack even if he could earn more money that way. He also criticized the assumption that such a society would be fair with pay based on value of contributions. He argued that because there are already great, unjust inequalities, people will often be paid not commensurately with contributions to society, but at levels that preserve such inequalities. One part of the controversy concerned the parts of the book which dealt with racial group differences on IQ and the consequences of this. The authors were reported throughout the popular press as arguing that these IQ differences are strictly genetic, when in fact they attributed IQ differences to both genes and the environment in chapter 13: "It seems highly likely to us that both genes and the environment have something to do with racial differences." The introduction to the chapter more cautiously states, "The debate about whether and how much genes and environment have to do with ethnic differences remains unresolved." When several prominent critics turned this into an "assumption" that the authors had attributed most or all of the racial differences in IQ to genes, co-author Charles Murray responded by quoting two passages from the book. In an article praising the book, economist Thomas Sowell criticized some of its aspects, including some of its arguments about race and the malleability of IQ. Rushton (1997) as well as Cochran et al. (2005) have argued that the early testing does in fact support a high average Jewish IQ. Columnist Bob Herbert, writing for "The New York Times", described the book as "a scabrous piece of racial pornography masquerading as serious scholarship". "Mr. Murray can protest all he wants", wrote Herbert; "his book is just a genteel way of calling somebody a nigger." 
In 1996, Stephen Jay Gould released a revised and expanded edition of his 1981 book "The Mismeasure of Man", intended to more directly refute many of "The Bell Curve"'s claims regarding race and intelligence, and arguing that the evidence for heritability of IQ did not indicate a genetic origin to group differences in intelligence. This book has in turn been criticized. Psychologist David Marks has suggested that the ASVAB test used in the analyses of "The Bell Curve" correlates highly with measures of literacy, and argues that the ASVAB test in fact is not a measure of general intelligence but of literacy. Melvin Konner, professor of anthropology and associate professor of psychiatry and neurology at Emory University, called "The Bell Curve" a "deliberate assault on efforts to improve the school performance of African-Americans". The 2014 textbook "Evolutionary Analysis" by Herron and Freeman devoted an entire chapter to debunking what they termed the "Bell Curve fallacy", saying that "Murray and Herrnstein's argument amounts to little more than an appeal to personal incredulity" and that it is a mistake to think that heritability can tell us something about the causes of differences between population means. In reference to the comparison of African-American with European-American IQ scores, the text states that only a common garden experiment, in which the two groups are raised in an environment typically experienced by European-Americans, would allow one to see if the difference is genetic. This kind of experiment, routine with plants and animals, cannot be conducted with humans. Nor is it possible to approximate this design with adoptions into families of the different groups, because the children would be recognizable and possibly be treated differently. The text concludes: "There is no way to assess whether genetics has anything to do with the difference in IQ score between ethnic groups." 
In 1995, Noam Chomsky criticized the book's conclusions about race and the notion that it is even a problem that blacks and people with lower IQs have more children. Rutledge M. Dennis suggests that through soundbites of works like Jensen's famous study on the achievement gap, and Herrnstein and Murray's book "The Bell Curve", the media "paints a picture of Blacks and other people of color as collective biological illiterates—as not only intellectually unfit but evil and criminal as well", thus providing, he says, "the logic and justification for those who would further disenfranchise and exclude racial and ethnic minorities". Charles Lane pointed out that 17 of the researchers whose work is referenced by the book have also contributed to "Mankind Quarterly", a journal of anthropology founded in 1960 in Edinburgh, which has been viewed as supporting the theory of the genetic superiority of white people. David Bartholomew reports Murray's response as part of the controversy over the Bell Curve. In his afterword to the 1996 Free Press edition of "The Bell Curve", Murray responded that the book "draws its evidence from more than a thousand scholars" and among the researchers mentioned in Lane's list "are some of the most respected psychologists of our time and that almost all of the sources referred to as tainted are articles published in leading refereed journals". "The Bell Curve Wars: Race, Intelligence, and the Future of America" is a collection of articles published in reaction to the book. Edited by Steven Fraser, the writers of these essays do not have a specific viewpoint concerning the content of "The Bell Curve", but express their own critiques of various aspects of the book, including the research methods used, the alleged hidden biases in the research and the policies suggested as a result of the conclusions drawn by the authors. 
Fraser writes that "by scrutinizing the footnotes and bibliography in "The Bell Curve", readers can more easily recognize the project for what it is: a chilly synthesis of the work of disreputable race theorists and eccentric eugenicists". Since the book provided statistical data making the assertion that blacks were, on average, less intelligent than whites, some people have feared that "The Bell Curve" could be used by extremists to justify genocide and hate crimes. Much of the work referenced by "The Bell Curve" was funded by the Pioneer Fund, which aims to advance the scientific study of heredity and human differences, and has been accused of promoting scientific racism. Murray criticized the characterization of the Pioneer Fund as a racist organization, arguing that it has as much relationship to its founder as "Henry Ford and today's Ford Foundation". Evolutionary biologist Joseph L. Graves described "The Bell Curve" as an example of racist science, containing all the types of errors in the application of scientific method that have characterized the history of scientific racism. Eric Siegel wrote on the "Scientific American" blog that the book "endorses prejudice by virtue of what it does not say. Nowhere does the book address why it investigates racial differences in IQ. By never spelling out a reason for reporting on these differences in the first place, the authors transmit an unspoken yet unequivocal conclusion: Race is a helpful indicator as to whether a person is likely to hold certain capabilities. Even if we assume the presented data trends are sound, the book leaves the reader on his or her own to deduce how to best put these insights to use. The net effect is to tacitly condone the prejudgment of individuals based on race." 
Similarly, Howard Gardner accused the authors of engaging in "scholarly brinkmanship", arguing that "Whether concerning an issue of science, policy, or rhetoric, the authors come dangerously close to embracing the most extreme positions, yet in the end shy away from doing so ... Scholarly brinkmanship encourages the reader to draw the strongest conclusions, while allowing the authors to disavow this intention."
https://en.wikipedia.org/wiki?curid=31277
House of Tudor The House of Tudor was an English royal house of Welsh origin, descended from the Tudors of Penmynydd. Tudor monarchs ruled the Kingdom of England and its realms, including their ancestral Wales and the Lordship of Ireland (later the Kingdom of Ireland) from 1485 until 1603, with five monarchs in that period: Henry VII, Henry VIII, Edward VI, Mary I and Elizabeth I. The Tudors succeeded the House of Plantagenet as rulers of the Kingdom of England, and were succeeded by the House of Stuart. The first Tudor monarch, Henry VII of England, descended through his mother from a legitimised branch of the English royal House of Lancaster. The Tudor family rose to power in the wake of the Wars of the Roses (1455–1487), which left the House of Lancaster, with which the Tudors were aligned, extinct in the male line. Henry VII succeeded in presenting himself as a candidate not only for traditional Lancastrian supporters, but also for discontented supporters of their rival House of York, and he took the throne by right of conquest. Following his victory at the Battle of Bosworth Field (22 August 1485), he reinforced his position in 1486 by fulfilling his 1483 vow to marry Elizabeth of York, daughter of Edward IV, thus symbolically uniting the former warring factions under the new dynasty. The Tudors extended their power beyond modern England, achieving the full union of England and the Principality of Wales in 1542 (Laws in Wales Acts 1535 and 1542), and successfully asserting English authority over the Kingdom of Ireland (proclaimed by the Crown of Ireland Act 1542). They also maintained the nominal English claim to the Kingdom of France; although none of them pursued it in substance, Henry VIII fought wars with France trying to reclaim that title. His daughter Mary I then permanently lost control of the last English territory in France with the fall of Calais in 1558. In total, the Tudor monarchs ruled their domains for just over a century. 
Henry VIII was the only son of Henry VII to live to the age of maturity. Issues around royal succession (including marriage and the succession rights of women) became major political themes during the Tudor era. When Elizabeth I died without an heir, the Scottish House of Stuart succeeded as England's royal family through the Union of the Crowns of 24 March 1603. The first Stuart to become King of England, James VI and I, descended from Henry VII's daughter Margaret Tudor, who in 1503 had married King James IV of Scotland in accordance with the 1502 Treaty of Perpetual Peace. For analysis of politics, diplomacy and social history, see Tudor period. The Tudors descended on Henry VII's mother's side from John Beaufort, 1st Earl of Somerset, one of the illegitimate children of the 14th-century English prince John of Gaunt (the third surviving son of Edward III) by Gaunt's long-term mistress Katherine Swynford. The descendants of an illegitimate child of English royalty would normally have no claim on the throne, but the situation became complicated when Gaunt and Swynford eventually married in 1396, when John Beaufort was 25. The church retroactively declared the Beauforts legitimate by way of a papal bull the same year, confirmed by an Act of Parliament in 1397. A subsequent proclamation by John of Gaunt's legitimate son, Henry IV, also recognised the Beauforts' legitimacy but declared them ineligible ever to inherit the throne. Nevertheless, the Beauforts remained closely allied with Gaunt's legitimate descendants from his first marriage, the House of Lancaster. However, despite the above, the descent from the Beauforts did not render Henry Tudor a legitimate heir to the throne, nor did the fact that his father's mother, Catherine of Valois, had been a Queen of England, make him an heir. 
The legitimate heiress was Margaret Pole, Countess of Salisbury, who was descended from the second son of Edward III, Lionel, Duke of Clarence, and also his fourth son, Edmund, Duke of York. Henry Tudor had, however, one thing that the others did not: an army which had defeated and killed the last Yorkist king, Richard III, and with it the support of powerful nobles. His son Henry VIII made sure there were no other claimants to the throne when he wiped out all the remaining Plantagenet heirs, including Margaret Pole and her family. Only Reginald Pole survived, but he was a cardinal in the Catholic Church. He later became Archbishop of Canterbury under the Catholic Mary I. On 1 November 1455, John Beaufort's granddaughter, Margaret Beaufort, Countess of Richmond and Derby, married Henry VI's maternal half-brother Edmund Tudor, 1st Earl of Richmond. It was Edmund's father, Owen Tudor, who abandoned the Welsh patronymic naming practice and adopted a fixed surname. When he did, he did not choose, as was generally the custom, his father's name, Maredudd, but chose that of his grandfather, Tudur ap Goronwy, instead. This name is sometimes given as "Tewdwr", the Welsh form of Theodore, but Modern Welsh "Tudur", Old Welsh "Tutir", is originally not a variant but a different and completely unrelated name, etymologically identical with Gaulish "Toutorix", from Proto-Celtic "*toutā" "people, tribe" and "*rīxs" "king" (compare Modern Welsh "tud" "territory" and "rhi" "king" respectively), corresponding to Germanic Theodoric. Owen Tudor was one of the bodyguards for the queen dowager Catherine of Valois, whose husband, Henry V, had died in 1422. Evidence suggests that the two were secretly married in 1429. The two sons born of the marriage, Edmund and Jasper, were among the most loyal supporters of the House of Lancaster in its struggle against the House of York. 
Henry VI ennobled his half-brothers: Edmund became Earl of Richmond on 15 December 1449 and was married to Lady Margaret Beaufort, the great-granddaughter of John of Gaunt, the progenitor of the House of Lancaster; Jasper became the first Earl of Pembroke on 23 November 1452. Edmund died on 3 November 1456. On 28 January 1457, his widow Margaret, then only thirteen years old, gave birth to a son, Henry Tudor, at her brother-in-law's Pembroke Castle. Henry Tudor, the future Henry VII, spent his childhood at Raglan Castle, the home of William Herbert, 1st Earl of Pembroke, a leading Yorkist. Following the murder of Henry VI and the death of his son, Edward, in 1471, Henry became the person upon whom the Lancastrian cause rested. Concerned for his young nephew's life, Jasper Tudor took Henry to Brittany for safety. Lady Margaret remained in England and remarried, living quietly while advancing the Lancastrian (and her son's) cause. Capitalizing on the growing unpopularity of Richard III (King of England from 1483), she was able to forge an alliance with discontented Yorkists in support of her son. Two years after Richard III was crowned, Henry and Jasper sailed from the mouth of the Seine to the Milford Haven Waterway and defeated Richard III at the Battle of Bosworth Field (22 August 1485). Upon this victory, Henry Tudor proclaimed himself King Henry VII. Upon becoming king in 1485, Henry VII moved rapidly to secure his hold on the throne. On 18 January 1486 at Westminster, he honoured a pledge made three years earlier and married Elizabeth of York (daughter of King Edward IV). They were third cousins, as both were great-great-grandchildren of John of Gaunt. The marriage unified the warring houses of Lancaster and York and gave the couple's children a strong claim to the throne. 
The unification of the two houses through this marriage is symbolized by the heraldic emblem of the Tudor rose, a combination of the white rose of York and the red rose of Lancaster. Henry VII and Elizabeth of York had several children, four of whom survived infancy. Henry VII's foreign policy had dynastic security as its objective: witness the alliance forged with the marriage in 1503 of his daughter Margaret to James IV of Scotland, and the marriage of his eldest son. In 1501 Henry VII married his son Arthur to Catherine of Aragon, cementing an alliance with the Spanish monarchs, Ferdinand II of Aragon and Isabella I of Castile. The newlyweds spent their honeymoon at Ludlow Castle, the traditional seat of the Prince of Wales. However, four months after the marriage, Arthur died, leaving his younger brother Henry as heir apparent. Henry VII acquired a papal dispensation allowing Prince Henry to marry Arthur's widow; however, Henry VII delayed the marriage. Henry VII limited his involvement in European politics. He went to war only twice: once in 1489 during the Breton crisis and the invasion of Brittany, and again in 1496–1497 in revenge for Scottish support of Perkin Warbeck and for the Scottish invasion of northern England. Henry VII made peace with France in 1492, and the war against Scotland was abandoned because of the Western Rebellion of 1497. Henry VII came to peace with James IV in 1502, paving the way for the marriage of his daughter Margaret. One of the main concerns of Henry VII during his reign was the re-accumulation of funds in the royal treasury. England had never been one of the wealthier European countries, and after the Wars of the Roses this was even more true. Through his strict monetary strategy, he was able to leave a considerable amount of money in the Treasury for his son and successor, Henry VIII. 
Although it is debated whether Henry VII was a great king, he certainly was a successful one, if only because he restored the nation's finances, strengthened the judicial system and successfully denied all other claimants to the throne, thus further securing it for his heir. The new king, Henry VIII, succeeded to the throne on 22 April 1509. He married Catherine of Aragon on 11 June 1509; they were crowned at Westminster Abbey on 24 June the same year. Catherine had been the wife of Henry's older brother Arthur (died 1502); this fact made the course of their marriage a rocky one from the start. A papal dispensation had to be granted for Henry to be able to marry Catherine, and the negotiations took some time. Although Henry's father died before the marriage took place, Henry was determined to marry Catherine anyway, and to make sure that everyone knew he intended to be his own master. When Henry first came to the throne, he had very little interest in actually ruling; rather, he preferred to indulge in luxuries and to partake in sports. He let others control the kingdom for the first two years of his reign; only when he became more interested in military strategy did he take a greater part in ruling his own realm. In his younger years, Henry was described as a man of gentle friendliness, mild in debate, who acted more as a companion than a king. He was generous in his gifts and affection and was said to be easy to get along with. The Henry that many people picture when they hear his name is the Henry of his later years, when he became obese, volatile, and known for his great cruelty. Catherine did not bear Henry the sons he was desperate for; her first child, a daughter, was stillborn, and her second child, a son named Henry, Duke of Cornwall, died 52 days after birth. Further stillbirths followed, until a daughter, Mary, was born in 1516. 
When it became clear to Henry that the Tudor line was at risk, he consulted his chief minister, Cardinal Thomas Wolsey, about the possibility of annulling his marriage to Catherine. Along with Henry's concern that he would not have an heir, it was also obvious to his court that he was growing tired of his aging wife, who was six years older than he was. Wolsey visited Rome, where he hoped to obtain the Pope's consent for an annulment. However, the Holy See was reluctant to rescind the earlier papal dispensation and felt heavy pressure from Catherine's nephew, Charles V, Holy Roman Emperor, in support of his aunt. Catherine contested the proceedings, and a protracted legal battle followed. Wolsey fell from favour in 1529 as a result of his failure to procure the annulment, and Henry appointed Thomas Cromwell in his place as chief minister. Despite his failure to produce the results that Henry wanted, Wolsey had actively pursued the annulment ("divorce" was synonymous with annulment at that time). However, Wolsey had never planned for Henry to marry Anne Boleyn, with whom the king had become enamoured while she served as a lady-in-waiting in Queen Catherine's household. It is unclear how far Wolsey was actually responsible for the English Reformation, but it is very clear that Henry's desire to marry Anne Boleyn precipitated the schism with Rome. Henry's concern about securing an heir to continue his family line would have prompted him to ask for a divorce sooner or later, whether Anne had precipitated it or not. Only Wolsey's sudden death at Leicester on 29 November 1530, on his journey to the Tower of London, saved him from the public humiliation and inevitable execution he would have suffered upon his arrival at the Tower. 
In order to allow Henry to divorce his wife and marry Anne Boleyn, the English parliament enacted laws breaking ties with Rome and declaring the king Supreme Head of the Church of England (from Elizabeth I onwards the monarch has been known as the Supreme Governor of the Church of England), thus severing the ecclesiastical structure of England from the Catholic Church and the Pope. The newly appointed Archbishop of Canterbury, Thomas Cranmer, was then able to declare Henry's marriage to Catherine annulled. Catherine was removed from Court, and she spent the last three years of her life in various English houses under "protectorship", similar to house arrest. This allowed Henry to marry one of his courtiers: Anne Boleyn, the daughter of a minor diplomat, Sir Thomas Boleyn. Anne had become pregnant by the end of 1532 and gave birth on 7 September 1533 to Elizabeth, named in honour of Henry's mother. Anne may have had later pregnancies which ended in miscarriage or stillbirth. In May 1536, Anne was arrested, along with six courtiers. Thomas Cromwell stepped in again, claiming that Anne had taken lovers during her marriage to Henry, and she was tried for high treason and incest; these charges were most likely fabricated, but she was found guilty and executed later that month. Henry then married for the third time, to Jane Seymour, the daughter of a Wiltshire knight, with whom he had become enamoured while she was still a lady-in-waiting to Queen Anne. Jane became pregnant, and in 1537 produced a son, who became King Edward VI following Henry's death in 1547. Jane died of puerperal fever only a few days after the birth, leaving Henry devastated. Cromwell continued to gain the king's favour when he designed and pushed through the Laws in Wales Acts, uniting England and Wales. In 1540, Henry married for the fourth time, to Anne of Cleves, the daughter of a Protestant German duke, thus forming an alliance with the Protestant German states. 
Henry was reluctant to marry again, especially to a Protestant, but he was persuaded when the court painter Hans Holbein the Younger showed him a flattering portrait of her. She arrived in England in December 1539, and Henry rode to Rochester to meet her on 1 January 1540. Although the historian Gilbert Burnet claimed that Henry called her a "Flanders Mare", there is no evidence that he said this; in truth, the court ambassadors negotiating the marriage praised her beauty. Whatever the circumstances, the marriage failed, and Anne agreed to a peaceful annulment, assumed the title "My Lady, the King's Sister", and received a generous settlement, which included Richmond Palace, Hever Castle and numerous other estates across the country. Although the marriage made sense in terms of foreign policy, Henry was still enraged and offended by the match. Henry chose to blame Cromwell for the failed marriage, and ordered him beheaded on 28 July 1540. Henry kept his word and provided for Anne for the rest of his life; after his death, however, Anne suffered extreme financial hardship because Edward VI's councillors refused to give her any funds and confiscated the homes she had been given. She pleaded with her brother to let her return home, but he sent only a few agents to assist with her situation and refused her request. Anne died on 16 July 1557 at Chelsea Manor. The fifth marriage was to the Catholic Catherine Howard, the niece of Thomas Howard, the third Duke of Norfolk. Catherine was promoted by Norfolk in the hope that she would persuade Henry to restore the Catholic religion in England. Henry called her his "rose without a thorn", but the marriage ended in failure. Henry's interest in Catherine had begun before the end of his marriage to Anne, while Catherine was still a member of Anne's household. 
Catherine was young and vivacious, but Henry's age left him more inclined to admire her than to act as a husband to her, an arrangement of which Catherine soon grew tired. Forced into marriage with an unattractive, obese man over 30 years her senior, Catherine had never wanted to marry Henry, and during the marriage she conducted an affair with the King's favourite, Thomas Culpeper. Under questioning, Catherine at first denied everything, but she eventually broke down and confessed her infidelity and her pre-nuptial relations with other men. Henry, at first enraged, threatened to torture her to death, but later became overcome with grief and self-pity. She was accused of treason and executed on 13 February 1542, destroying the English Catholic holdouts' hopes of a national reconciliation with the Catholic Church. Her execution also marked the end of the Howard family's power within the court. By the time Henry contracted another Protestant marriage with his final wife, Catherine Parr, in 1543, the old Roman Catholic advisers, including the powerful third Duke of Norfolk, had lost all their power and influence. The duke himself was still a committed Catholic, and Henry was nearly persuaded to arrest Catherine for preaching Lutheran doctrines to him while she nursed him in his ill health. However, she managed to reconcile with the King after vowing that she had only argued about religion with him to take his mind off the suffering caused by his ulcerous leg. Her peacemaking also helped reconcile Henry with his daughters Mary and Elizabeth and fostered a good relationship between her and the crown prince. Henry died on 28 January 1547. His will reinstated his daughters by his annulled marriages to Catherine of Aragon and Anne Boleyn to the line of succession. Edward, his nine-year-old son by Jane Seymour, succeeded as Edward VI of England. 
The young King's realm was frequently in turmoil as nobles tried to use the Regency to strengthen their own positions. Although Henry had specified a group of men to act as regents during Edward's minority, Edward Seymour, Edward's uncle, quickly seized complete control and created himself Duke of Somerset on 15 February 1547. His domination of the Privy Council, the king's most senior body of advisers, was unchallenged. Somerset aimed to unite England and Scotland by marrying Edward to the young Mary, Queen of Scots, and to impose the English Reformation on the Church of Scotland by force. Somerset led a large and well-equipped army to Scotland, where he and the Scottish regent James Hamilton, 2nd Earl of Arran, commanded their armies at the Battle of Pinkie Cleugh on 10 September 1547. The English won the battle, and afterwards Queen Mary of Scotland was smuggled to France, where she was betrothed to the Dauphin, the future King Francis II of France. Despite Somerset's disappointment that no Scottish marriage would take place, his victory at Pinkie Cleugh made his position appear unassailable. Edward VI was taught that he had to lead religious reform. In 1549, the Crown ordered the publication of the Book of Common Prayer, containing the forms of worship for daily and Sunday church services. The controversial new book was welcomed by neither reformers nor Catholic conservatives; it was especially condemned in Devon and Cornwall, where traditional Catholic loyalty was at its strongest. In Cornwall at the time, many of the people could speak only the Cornish language, so the uniform English Bibles and church services were not understood by many. This caused the Prayer Book Rebellion, in which groups of Cornish non-conformists gathered round the mayor. The rebellion worried Somerset, now Lord Protector, and he sent an army to suppress it. 
The rebellion hardened the Crown against Catholics. Fear of Catholicism focused on Edward's elder sister, Mary, who was a pious and devout Catholic. Although called before the Privy Council several times to renounce her faith and stop hearing the Catholic Mass, she refused. Edward had a good relationship with his sister Elizabeth, who was a Protestant, albeit a moderate one, but this was strained when Elizabeth was accused of having an affair with the Duke of Somerset's brother, Thomas Seymour, 1st Baron Seymour of Sudeley, the husband of Henry's last wife, Catherine Parr. Elizabeth was interviewed by one of Edward's advisers, and she was eventually cleared, despite forced confessions from her servants Catherine Ashley and Thomas Parry. Thomas Seymour was arrested and beheaded on 20 March 1549. Lord Protector Somerset was also losing favour. After forcibly removing Edward VI to Windsor Castle, with the intention of keeping him hostage, Somerset was removed from power by members of the council, led by his chief rival, John Dudley, the first Earl of Warwick, who created himself Duke of Northumberland shortly after his rise. Northumberland effectively became Lord Protector, but he did not use this title, learning from the mistakes of his predecessor. Northumberland was fiercely ambitious, and aimed to secure Protestant uniformity while enriching himself with land and money in the process. He ordered churches to be stripped of all traditional Catholic symbolism, resulting in the simplicity often seen in Church of England churches today. A revision of the Book of Common Prayer was published in 1552. When Edward VI became ill in 1553, his advisers looked to the possible imminent accession of the Catholic Lady Mary, and feared that she would overturn all the reforms made during Edward's reign. Perhaps surprisingly, it was the dying Edward himself who feared a return to Catholicism, and who wrote a new will repudiating the 1544 will of Henry VIII. 
This gave the throne to his cousin Lady Jane Grey, the granddaughter of Henry VIII's sister Mary Tudor, who, after the death of her first husband, Louis XII of France, in 1515, had married Henry VIII's favourite Charles Brandon, the first Duke of Suffolk. Lady Jane's mother was Lady Frances Brandon, the daughter of Suffolk and Princess Mary. Northumberland married Jane to his youngest son, Guildford Dudley, positioning himself to gain the most from a necessary Protestant succession. Most of Edward's council signed the "Devise for the Succession", and when Edward VI died on 6 July 1553, probably of tuberculosis, Lady Jane was proclaimed queen. However, popular support for the rightful successor Mary – even though she was Catholic – overturned Northumberland's plans, and Jane, who had never wanted to accept the crown, was deposed after just nine days. Mary's supporters joined her in a triumphal procession to London, accompanied by her younger sister Elizabeth. With the death of Edward VI, the direct male line of the House of Tudor became extinct. Mary soon announced her intention to marry the Spanish prince Philip, son of her mother's nephew Charles V, Holy Roman Emperor. The prospect of a marriage alliance with Spain proved unpopular with the English people, who worried that Spain would treat England as a satellite, drawing it into wars that lacked popular support. Popular discontent grew; a Protestant courtier, Thomas Wyatt the younger, led a rebellion against Mary aiming to depose her and replace her with her half-sister Elizabeth. The plot was discovered, and Wyatt's supporters were hunted down and killed. Wyatt himself was tortured, in the hope that he would give evidence that Elizabeth was involved so that Mary could have her executed for treason. Wyatt never implicated Elizabeth, and he was beheaded. Elizabeth spent her time between different prisons, including the Tower of London. Mary married Philip at Winchester Cathedral on 25 July 1554. 
Philip found her unattractive and spent only a minimal amount of time with her. Although Mary believed she was pregnant numerous times during her five-year reign, she never bore a child. Devastated that she rarely saw her husband, and anxious that she was not bearing an heir to Catholic England, Mary became bitter. In her determination to restore England to the Catholic faith and to secure her throne from Protestant threats, she had 200–300 Protestants burnt at the stake in the Marian Persecutions between 1555 and 1558. Protestants came to hate her as "Bloody Mary"; Charles Dickens stated that "as bloody Queen Mary this woman has become famous, and as Bloody Queen Mary she will ever be remembered with horror and detestation". Mary's dream of a new, Catholic Habsburg line was finished, and her popularity further declined when she lost the last English territory on French soil, Calais, to Francis, Duke of Guise, on 7 January 1558. Mary's reign, however, introduced a new coinage system that would be used until the 18th century, and her marriage to Philip II created new trade routes for England. Mary's government took a number of steps towards reversing the inflation, budgetary deficits, poverty and trade crisis of her kingdom. It explored the commercial potential of Russian, African and Baltic markets, revised the customs system, worked to counter the currency debasements of her predecessors, amalgamated several revenue courts, and strengthened the governing authority of the middling and larger towns. Mary also welcomed the first Russian ambassador to England, establishing relations between England and Russia for the first time. Had she lived a little longer, the Catholicism she worked so hard to restore to the realm might have taken deeper root than it did. However, her actions in pursuit of this goal arguably spurred on the Protestant cause, through the many martyrs she made. Mary died on 17 November 1558 at the relatively young age of 42. 
Elizabeth I, who was staying at Hatfield House at the time of her accession, rode to London to the cheers of both the ruling class and the common people. When Elizabeth came to the throne, there was much apprehension among the members of the council appointed by Mary, because many of them (as noted by the Spanish ambassador) had participated in several plots against Elizabeth, such as her imprisonment in the Tower, attempts to force her to marry a foreign prince and so send her out of the realm, and even calls for her death. In response to their fear, she chose as her chief minister Sir William Cecil, a Protestant and former secretary to Lord Protector the Duke of Somerset and then to the Duke of Northumberland. Under Mary, he had been spared, and he had often visited Elizabeth, ostensibly to review her accounts and expenditure. Elizabeth also appointed her personal favourite, Lord Robert Dudley, son of the Duke of Northumberland, as her Master of the Horse, giving him constant personal access to the queen. Elizabeth had a long, turbulent path to the throne. She faced a number of difficulties during her childhood, one of the greatest coming after the execution of her mother, Anne Boleyn. When Anne was beheaded, Henry declared Elizabeth an illegitimate child, which barred her from inheriting the throne. After the death of her father, she was raised by his widow, Catherine Parr, and Catherine's husband Thomas Seymour, 1st Baron Seymour of Sudeley. A scandal arose involving her and the Lord Admiral, over which she was examined. During the examinations she answered truthfully and boldly, and all charges were dropped. She was an excellent student, well schooled in Latin, French, Italian and somewhat in Greek, and a talented writer. She was reportedly a very skilled musician as well, in both singing and playing the lute. After the rebellion of Thomas Wyatt the younger, Elizabeth was imprisoned in the Tower of London. 
No proof could be found that Elizabeth was involved, and she was released and retired to the countryside until the death of her sister, Mary I of England. Elizabeth was a moderate Protestant; she was the daughter of Anne Boleyn, who had played a key role in the English Reformation in the 1520s. She had been brought up by Blanche Herbert, Lady Troy. At her coronation in January 1559, many of the bishops – Catholics appointed by Mary, who had expelled many of the Protestant clergymen when she became queen in 1553 – refused to perform the service in English. Eventually, the relatively minor Bishop of Carlisle, Owen Oglethorpe, performed the ceremony; but when Oglethorpe attempted to perform traditional Catholic parts of the coronation, Elizabeth got up and left. Following the coronation, two important Acts were passed through parliament: the Act of Uniformity and the Act of Supremacy, establishing the Protestant Church of England and making Elizabeth Supreme Governor of the Church of England ("Supreme Head", the title used by her father and brother, was seen as inappropriate for a woman ruler). These acts, known collectively as the Elizabethan Religious Settlement, made it compulsory to attend church services every Sunday, and imposed an oath on clergymen and statesmen to recognise the Church of England, its independence from the Catholic Church, and the authority of Elizabeth as Supreme Governor. Elizabeth made it clear that if they refused the oath the first time, they would have a second opportunity, after which, if the oath was still not sworn, the offenders would be deprived of their offices and estates. Even though Elizabeth was only twenty-five when she came to the throne, she was absolutely sure of her God-given place as queen and of her responsibilities as the 'handmaiden of the Lord'. She never let anyone challenge her authority as queen, even though many people, who felt she was weak and should be married, tried to do so. 
The popularity of Elizabeth was extremely high, but her Privy Council, her Parliament and her subjects thought that the unmarried queen should take a husband; it was generally accepted that, once a queen regnant was married, the husband would relieve her of the burdens of head of state. Also, without an heir, the Tudor line would end; the risk of civil war between rival claimants was a possibility if Elizabeth died childless. Numerous suitors from nearly all European nations sent ambassadors to the English court to put forward their suit. The risk of death came dangerously close in 1562, when Elizabeth caught smallpox; when she was most at risk, she named Robert Dudley as Lord Protector in the event of her death. After her recovery, she appointed Dudley to the Privy Council and created him Earl of Leicester, in the hope that he would marry Mary, Queen of Scots. Mary rejected him, and instead married Henry Stuart, Lord Darnley, a descendant of Henry VII, which gave Mary a stronger claim to the English throne. Although many Catholics were loyal to Elizabeth, many also believed that, because Elizabeth had been declared illegitimate after her parents' marriage was annulled, Mary was the strongest legitimate claimant. Despite this, Elizabeth would not name Mary her heir; as she had experienced during the reign of her predecessor Mary I, the opposition could flock around the heir if they became disenchanted with Elizabeth's rule. Numerous threats to the Tudor line occurred during Elizabeth's reign. In 1569, a group of earls led by Charles Neville, the sixth Earl of Westmorland, and Thomas Percy, the seventh Earl of Northumberland, attempted to depose Elizabeth and replace her with Mary, Queen of Scots. In 1571, the Protestant-turned-Catholic Thomas Howard, the fourth Duke of Norfolk, planned to marry Mary, Queen of Scots, and then replace Elizabeth with Mary. The plot, masterminded by Roberto di Ridolfi, was discovered, and Norfolk was beheaded. 
The next major uprising came in 1601, when Robert Devereux, the second Earl of Essex, attempted to raise the city of London against Elizabeth's government. The city proved unwilling to rebel; Essex and most of his co-rebels were executed. Threats also came from abroad. In 1570, Pope Pius V issued a papal bull, "Regnans in Excelsis", excommunicating Elizabeth and releasing her subjects from their allegiance to her. Elizabeth came under pressure from Parliament to execute Mary, Queen of Scots, to prevent any further attempts to replace her; though faced with several official requests, she vacillated over the decision to execute an anointed queen. Finally, she was persuaded of Mary's (treasonous) complicity in the plotting against her, and she signed the death warrant. Mary was executed at Fotheringhay Castle on 8 February 1587, to the outrage of Catholic Europe. Many reasons have been debated as to why Elizabeth never married. It was rumoured that she was in love with Robert Dudley, 1st Earl of Leicester, and that on one of her summer progresses she had given birth to his illegitimate child. This rumour was just one of many that swirled around their long-standing friendship. More significant, however, were the disasters that befell women, such as Lady Jane Grey, who had married into the royal family. Her sister Mary's marriage to Philip had brought great contempt upon the country, for many of her subjects despised Spain and Philip and feared that he would try to take complete control. Recalling her father's disdain for Anne of Cleves, Elizabeth also refused to enter into a foreign match with a man she had never seen, which eliminated a large number of suitors. Despite the uncertainty of Elizabeth's – and therefore the Tudors' – hold on England, she never married. 
The closest she came to marriage was between 1579 and 1581, when she was courted by Francis, Duke of Anjou, the son of Henry II of France and Catherine de' Medici. Although Elizabeth's government had constantly begged her to marry in the early years of her reign, it now urged her not to marry the French prince, for his mother, Catherine de' Medici, was suspected of having ordered the St Bartholomew's Day massacre of thousands of French Protestant Huguenots in 1572. Elizabeth bowed to public feeling against the marriage, learning from the mistake her sister had made in marrying Philip II of Spain, and sent the Duke of Anjou away. Elizabeth knew that the continuation of the Tudor line was now impossible; she was forty-eight in 1581, and too old to bear children. By far the most dangerous threat to the Tudor line during Elizabeth's reign was the Spanish Armada of 1588, launched by Elizabeth's old suitor Philip II of Spain and commanded by Alonso de Guzmán El Bueno, the seventh Duke of Medina Sidonia. The Spanish invasion fleet outnumbered the English fleet's 22 galleons and 108 armed merchant ships. The Spanish lost, however, as a result of bad weather in the English Channel, poor planning and logistics, and the skills of Sir Francis Drake and Charles Howard, the second Baron Howard of Effingham (later first Earl of Nottingham). While Elizabeth declined physically with age, her running of the country continued to benefit her people. In response to famine across England caused by bad harvests in the 1590s, Elizabeth introduced the Poor Law, which granted peasants who were too ill to work a sum of money from the state. All the money Elizabeth had borrowed from Parliament in 12 of the 13 parliamentary sessions was paid back; by the time of her death, Elizabeth not only had no debts, but was in credit. Elizabeth died childless at Richmond Palace on 24 March 1603. She left behind a legacy and monarchy worth noting. 
She had pursued her goals of mastering every aspect of ruling her kingdom and of knowing everything necessary to be an effective monarch. She took part in law, economics, politics and governmental issues both domestic and abroad. Realms that had once been strictly forbidden to women had now been ruled by one. Elizabeth never named a successor. However, her chief minister Sir Robert Cecil had corresponded with the Protestant King James VI of Scotland, great-grandson of Margaret Tudor, and James's succession to the English throne was unopposed. There has been discussion over the selected heir. It has been argued that Elizabeth would have selected James because she felt guilty about what happened to his mother, her cousin. Whether this is true cannot be known for certain, for Elizabeth did her best never to show emotion or give in to such claims. Elizabeth was strong and hard-headed and kept her primary goal in sight: providing the best for her people and proving her doubters wrong while maintaining her composure. The House of Tudor survives through the female line, first with the House of Stuart, which occupied the English throne for most of the following century, and then the House of Hanover, via James's granddaughter Sophia. Queen Elizabeth II is a direct descendant of Henry VII. Dynastic strife dating from the Wars of the Roses remained a threat until the 17th-century Stuart/Bourbon re-alignment, occasioned by a series of events: the execution of Lady Jane Grey, despite her brother-in-law Leicester's reputation in Holland; the Rising of the North, in which the old Percy-Neville feud and even anti-Scottish sentiment were set aside on account of religion (Northern England shared the same Avignonese bias as the Scottish court, on a par with Valois France and Castile, which became the backbone of the Counter-Reformation, Protestants being solidly anti-Avignonese); and the death of Elizabeth I of England without children.
The Tudors made no substantial changes in foreign policy from either Lancaster or York: whether the alliance was with Aragon or Cleves, the chief foreign enemy remained the Auld Alliance. The Tudors did, however, resurrect old ecclesiastic arguments once pursued by Henry II of England and his son John of England. The Yorkists were tied so closely to the old order that Catholic rebellions (such as the Pilgrimage of Grace) and aspirations (exemplified by William Allen) were seen as continuing in their reactionary footsteps when in opposition to the Tudors' reformation policies, although the Tudors were not uniformly Protestant by the Continental definition; instead they were true to their Lancastrian Beaufort allegiance, as in the appointment of Reginald Pole. The essential difference between the Tudors and their predecessors is the nationalization and integration of John Wycliffe's ideas into the Church of England, holding to the alignment of Richard II of England and Anne of Bohemia, in which Anne's Hussite brethren were allied with her husband's Wycliffite countrymen against the Avignon Papacy. The Tudors otherwise rejected or suppressed other religious notions, whether for the Pope's award of "Fidei Defensor" or to keep them out of the hands of the common laity, who might be swayed by cells of foreign Protestants with whom they had conversed as Marian exiles. In this the Tudors pursued a strategy of containment, as the Lancastrians had done (after being vilified by Wat Tyler), even though the phenomenon of "Lollard knights" (like John Oldcastle) had become almost a national sensation all on its own. In essence, the Tudors followed a composite of Lancastrian (the court party) and Yorkist (the church party) policies.
Henry VIII tried to extend his father's balancing act between the dynasties for opportunistic interventionism in the Italian Wars, which had unfortunate consequences for his own marriages and the Papal States; the King furthermore tried to use similar tactics for the "via media" concept of Anglicanism. A further parallelism was effected by turning Ireland into a kingdom and sharing the same episcopal establishment as England, whilst enlarging England by the annexation of Wales. The progress to Northern/Roses government would thenceforth pass across the border into Scotland, in 1603, due not only to the civil warring, but also because the Tudors' own line was fragile and insecure, trying to reconcile the mortal enemies who had weakened England to the point of having to bow to new pressures, rather than dictate diplomacy on English terms. The following English rebellions took place against the House of Tudor: The six Tudor monarchs were: As Prince of Wales, Arthur, Henry, and Edward all bore these arms. The Welsh Dragon supporter honoured the Tudors' Welsh origins. The most popular symbol of the House of Tudor was the Tudor rose (see top of page). When Henry Tudor took the crown of England from Richard III in battle, he brought about the end of the Wars of the Roses between the House of Lancaster (whose badge was a red rose) and the House of York (whose badge was a white rose). He married Elizabeth of York to bring all factions together. On his marriage, Henry adopted the Tudor Rose badge, conjoining the White Rose of York and the Red Rose of Lancaster. It symbolized the Tudors' right to rule as well as the uniting of the kingdom after the Wars of the Roses. It has been used by every English, then British, monarch since Henry VII as a royal badge. The Tudors also used monograms to denote themselves. As noted above, Tewdur or Tudor is derived from the words tud "territory" and rhi "king". Owen Tudor took it as a surname on being knighted.
It is doubtful whether the Tudor kings used the name on the throne. Kings and princes were not seen as needing a surname, and a "'Tudor' name for the royal family was hardly known in the sixteenth century. The royal surname was never used in official publications, and hardly in 'histories' of various sorts before 1584. ... Monarchs were not anxious to publicize their descent in the paternal line from a Welsh adventurer, stressing instead continuity with the historic English and French royal families. Their subjects did not think of them as 'Tudors', or of themselves as 'Tudor people'". Princes and princesses would have been known as "of England". The medieval practice of colloquially calling princes after their place of birth (e.g. Henry of Bolingbroke for Henry IV or Henry of Monmouth for Henry V) was not followed. Henry VII was likely known as "Henry of Richmond" before his taking of the throne. When Richard III called him "Henry Tudor" it was to stress his Welshness and his unfitness for the throne, as opposed to himself, "Richard Plantagenet", a "true" descendant of the royal line. The Tudors' claim to the throne combined the Lancastrian claim in their descent from the Beauforts and the Yorkist claim by the marriage of Henry VII to the heiress of Edward IV. Numerous feature films are based on Tudor history. Queen Elizabeth has been a special favorite of filmmakers for generations. According to Elizabeth A. Ford and Deborah C. Mitchell, images of Elizabeth I move: Typee Typee: A Peep at Polynesian Life is the first book by American writer Herman Melville, published first in London, then New York, in 1846. Considered a classic in travel and adventure literature, the narrative is partly based on the author's actual experiences on the island Nuku Hiva in the South Pacific Marquesas Islands in 1842, liberally supplemented with imaginative reconstruction and adaptation of material from other books. The title comes from the valley of Taipivai, once known as Taipi.
"Typee" was Melville's most popular work during his lifetime; it made him notorious as the "man who lived among the cannibals". The book presents itself as a piece of travel adventure, but from the beginning there were questions about whether the story was true. The London edition of the book appeared in the publisher John Murray's "Colonial and Home Library" series, accounts of foreigners in exotic places, and the slightly suspicious Murray required reassurance that Melville's experiences were first-hand, not the work of a professional travel writer, and that the author had himself experienced the adventures he described. American readers, however, accepted the story at face value. "Typee" is, "in fact, neither literal autobiography nor pure fiction," says scholar Leon Howard. Melville "drew his material from his experiences, from his imagination, and from a variety of travel books when the memory of his experiences was inadequate." He departed from what actually happened in several ways, sometimes by extending factual incidents, sometimes by fabricating them, and sometimes by what one scholar calls "outright lies". The actual one-month stay on which "Typee" is based is presented as four months in the narrative; there is no lake on the actual island on which Melville might have canoed with the lovely Fayaway, and the ridge which Melville describes climbing after escaping the ship he may actually have seen in an engraving. He drew extensively on contemporary accounts by Pacific explorers to add to what might otherwise have been a straightforward story of escape, capture, and re-escape. Most American reviewers accepted the story as authentic, though it provoked disbelief among some British readers. Two years after the novel's publication, many of the events described therein were corroborated by Melville's fellow castaway, Richard Tobias "Toby" Greene.
"Typee"s narrative expresses sympathy for the so-called savage natives, while criticizing the missionaries' attempts to civilize them: It may be asserted without fear of contradiction that in all the cases of outrages committed by Polynesians, Europeans have at some time or other been the aggressors, and that the cruel and bloodthirsty disposition of some of the islanders is mainly to be ascribed to the influence of such examples. [The] voluptuous Indian, with every desire supplied, whom Providence has bountifully provided with all the sources of pure and natural enjoyment, and from whom are removed so many of the ills and pains of life—what has he to desire at the hands of Civilization? Will he be the happier? Let the once smiling and populous Hawaiian islands, with their now diseased, starving, and dying natives, answer the question. The missionaries may seek to disguise the matter as they will, but the facts are incontrovertible. The narrator states that Typee natives ate an inhabitant of one of the neighboring valleys, but the natives who captured him reassured him that he would not be eaten. "The Knickerbocker" called "Typee" "a piece of Münchhausenism". New York publisher Evert Augustus Duyckinck wrote to Nathaniel Hawthorne that "it is a lively and pleasant book, not over philosophical perhaps." In 1939 Charles Robert Anderson published "Melville in the South Seas" in which he documented that Melville had spent only one month on the island (rather than the four months he claimed) and that Melville lifted extensive material from travel narratives. "Typee" was published first in London by John Murray on February 26, 1846, and then in New York by Wiley and Putnam on March 17, 1846. It was Melville's first book, and made him one of the best-known American authors overnight. The same version was published in London and New York in the first edition; however, Melville removed critical references to missionaries and Christianity from the second U.S.
edition at the request of his American publisher. Later editions included a "Sequel: The Story of Toby" written by Melville, explaining what happened to Toby. Before "Typee"s publication in New York, Wiley and Putnam asked Melville to remove one sentence. In a scene where the "Dolly" is boarded by young women from Nukuheva, Melville originally wrote: Our ship was now given up to every species of riot and debauchery. Not the feeblest barrier was interposed between the unholy passions of the crew and their unlimited gratification. The second sentence was removed from the final version. The inaugural book of the Library of America series, titled "Typee, Omoo, Mardi" (May 6, 1982), was a volume containing "Typee: A Peep at Polynesian Life", its sequel "Omoo: A Narrative of Adventures in the South Seas" (1847), and "Mardi, and a Voyage Thither" (1849). Truncated icosahedron In geometry, the truncated icosahedron is an Archimedean solid, one of 13 convex isogonal nonprismatic solids whose 32 faces are two or more types of regular polygons. It has 12 regular pentagonal faces, 20 regular hexagonal faces, 60 vertices and 90 edges. It is the Goldberg polyhedron GPV(1,1) or {5+,3}1,1, containing pentagonal and hexagonal faces. This geometry is associated with footballs (soccer balls) typically patterned with white hexagons and black pentagons. Geodesic domes such as those whose architecture Buckminster Fuller pioneered are often based on this structure. It also corresponds to the geometry of the fullerene C60 ("buckyball") molecule. It is used in the cell-transitive hyperbolic space-filling tessellation, the bitruncated order-5 dodecahedral honeycomb. This polyhedron can be constructed from an icosahedron with the 12 vertices truncated (cut off) such that one third of each edge is cut off at both ends. This creates 12 new pentagon faces, and leaves the original 20 triangle faces as regular hexagons.
Thus the length of the edges is one third of that of the original edges. In geometry and graph theory, there are some standard characteristics used to describe this polyhedron. Cartesian coordinates for the vertices of a "truncated icosahedron" centered at the origin are all even permutations of (0, ±1, ±3φ), (±1, ±(2 + φ), ±2φ) and (±φ, ±2, ±(2φ + 1)), where "φ" = (1 + √5)/2 is the golden mean. The circumradius is √(9φ + 10) ≈ 4.956 and the edges have length 2. The "truncated icosahedron" has five special orthogonal projections, centered on a vertex, on two types of edges, and on two types of faces: hexagonal and pentagonal. The last two correspond to the A2 and H2 Coxeter planes. The truncated icosahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane. If the edge length of a truncated icosahedron is "a", the radius of a circumscribed sphere (one that touches the truncated icosahedron at all vertices) is: r = (a/2)√(9φ + 10) = (a/4)√(58 + 18√5) ≈ 2.478a, where "φ" is the golden ratio. This result is easy to get by using one of the three orthogonal golden rectangles drawn into the original icosahedron (before truncation) as the starting point for our considerations. The angle between the segments joining the center and the vertices connected by a shared edge (calculated on the basis of this construction) is approximately 23.281446°. The area "A" and the volume "V" of the truncated icosahedron of edge length "a" are: A = 3(10√3 + √(25 + 10√5)) a² ≈ 72.607a² and V = ((125 + 43√5)/4) a³ ≈ 55.288a³. With unit edges, the surface area is (rounded) 21 for the pentagons and 52 for the hexagons, together 73 (see areas of regular polygons). The truncated icosahedron easily demonstrates the Euler characteristic: V − E + F = 60 − 90 + 32 = 2. The balls used in association football and team handball are perhaps the best-known everyday example of a spherical polyhedron analogous to the truncated icosahedron.
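The counts and constants quoted above can be checked numerically from the standard vertex coordinates (all even permutations, with all sign choices, of (0, ±1, ±3φ), (±1, ±(2 + φ), ±2φ) and (±φ, ±2, ±(2φ + 1))). A minimal Python sketch (variable names are ours):

```python
import itertools
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio

# Base triples; the vertex set is all even (cyclic) permutations of each,
# combined with every choice of signs.
triples = [
    (0.0, 1.0, 3 * phi),
    (1.0, 2 + phi, 2 * phi),
    (phi, 2.0, 2 * phi + 1),
]

vertices = set()
for x, y, z in triples:
    for p in ((x, y, z), (y, z, x), (z, x, y)):             # even permutations
        for s in itertools.product((1.0, -1.0), repeat=3):  # sign choices
            vertices.add((s[0] * p[0], s[1] * p[1], s[2] * p[2]))
vertices = list(vertices)

n_vertices = len(vertices)               # 60
circumradius = math.hypot(*vertices[0])  # sqrt(9*phi + 10) ≈ 4.956

# Every edge has the minimum inter-vertex distance (2 for these coordinates);
# counting pairs at that distance recovers the edge count.
dists = [math.dist(a, b) for a, b in itertools.combinations(vertices, 2)]
edge_len = min(dists)
n_edges = sum(1 for d in dists if abs(d - edge_len) < 1e-9)  # 90

faces = 12 + 20                           # pentagons + hexagons
euler = n_vertices - n_edges + faces      # V - E + F = 2
print(n_vertices, n_edges, round(circumradius, 3), euler)
```

The printed values confirm 60 vertices, 90 edges, circumradius ≈ 4.956 at edge length 2, and the Euler characteristic V − E + F = 60 − 90 + 32 = 2.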
The ball comprises the same pattern of regular pentagons and regular hexagons, but it is more spherical due to the pressure of the air inside and the elasticity of the ball. This ball type was introduced to the World Cup in 1970 (starting in 2006, this iconic design has been superseded by alternative patterns). Geodesic domes are typically based on triangular facetings of this geometry with example structures found across the world, popularized by Buckminster Fuller. A variation of the icosahedron was used as the basis of the honeycomb wheels (made from a polycast material) used by the Pontiac Motor Division between 1971 and 1976 on its Trans Am and Grand Prix. This shape was also the configuration of the lenses used for focusing the explosive shock waves of the detonators in both the gadget and Fat Man atomic bombs. The truncated icosahedron can also be described as a model of the buckminsterfullerene (fullerene C60), or "buckyball", molecule – an allotrope of elemental carbon, discovered in 1985. The diameters of the football and the fullerene molecule are 22 cm and about 0.71 nm, respectively; hence the size ratio is approximately 310,000,000:1. In popular craft culture, large sparkleballs can be made using an icosahedron pattern and plastic, styrofoam or paper cups. A truncated icosahedron with "solid edges" by Leonardo da Vinci appears as an illustration in Luca Pacioli's book "De divina proportione". These uniform star-polyhedra, and one icosahedral stellation, have nonuniform truncated icosahedra as convex hulls: In the mathematical field of graph theory, a truncated icosahedral graph is the graph of vertices and edges of the "truncated icosahedron", one of the Archimedean solids. It has 60 vertices and 90 edges, and is a cubic Archimedean graph. The truncated icosahedron was known to Archimedes, who studied vertex-transitive polyhedra. However, that work was lost. Later, Johannes Kepler rediscovered and wrote about these solids, including the truncated icosahedron.
The associated structure was described by Leonardo da Vinci. Albrecht Dürer also reproduced a similar polyhedron containing 12 pentagonal and 20 hexagonal faces, but there is no clear documentation of this. The Mismeasure of Man The Mismeasure of Man is a 1981 book by paleontologist Stephen Jay Gould. The book is both a history and critique of the statistical methods and cultural motivations underlying biological determinism, the belief that “the social and economic differences between human groups—primarily races, classes, and sexes—arise from inherited, inborn distinctions and that society, in this sense, is an accurate reflection of biology”. Gould argues that the primary assumption underlying biological determinism is that “worth can be assigned to individuals and groups by "measuring intelligence as a single quantity"”. Biological determinism is analyzed in discussions of craniometry and psychological testing, the two principal methods used to measure intelligence as a single quantity. According to Gould, these methods suffer from two deep fallacies. The first fallacy is reification, which is “our tendency to convert abstract concepts into entities”. Examples of reification include the intelligence quotient (IQ) and the general intelligence factor ("g" factor), which have been the cornerstones of much research into human intelligence. The second fallacy is that of “ranking”, which is the “propensity for ordering complex variation as a gradual ascending scale”. The book received many positive reviews in the literary and popular press, but the reviews in scientific journals were, for the most part, highly critical. Literary reviews praised the book for opposing racism, the concept of general intelligence, and biological determinism. Reviews in scientific journals accused Gould of historical inaccuracy, unclear reasoning, and political bias. "The Mismeasure of Man" won the National Book Critics Circle award.
Gould’s findings about how 19th-century researcher Samuel George Morton measured skull volumes came under criticism, and even Gould’s defenders found reasons to criticize his work on this topic. In 1996, a second edition was released. It included two additional chapters critiquing Richard Herrnstein and Charles Murray's book "The Bell Curve" (1994). Stephen Jay Gould (1941–2002) was one of the most influential and widely read authors of popular science of his generation. He was known by the general public mainly for his 300 popular essays in "Natural History" magazine. As in "The Mismeasure of Man", Gould criticized biological theories of human behavior in “Against "Sociobiology"” (1975) and “The Spandrels of San Marco and the Panglossian Paradigm” (1979). "The Mismeasure of Man" is a critical analysis of the early works of scientific racism which promoted "the theory of unitary, innate, linearly rankable intelligence"—such as craniometry, the measurement of skull volume and its relation to intellectual faculties. Gould alleged that much of the research was based largely on the racial and social prejudices of the researchers rather than on scientific objectivity; that on occasion, researchers such as Samuel George Morton (1799–1851), Louis Agassiz (1807–1873), and Paul Broca (1824–1880) committed the methodological fallacy of allowing their personal "a priori" expectations to influence their conclusions and analytical reasoning. Gould noted that when Morton switched from using bird seed, which was less reliable, to lead shot to obtain endocranial-volume data, the average skull volumes changed; however, these changes were not uniform across Morton's "racial" groupings. To Gould, it appeared that unconscious bias had influenced Morton's initial results. Gould speculated: Plausible scenarios are easy to construct. Morton, measuring by seed, picks up a threateningly large black skull, fills it lightly and gives it a few desultory shakes.
Next, he takes a distressingly small Caucasian skull, shakes hard, and pushes mightily at the foramen magnum with his thumb. It is easily done, without conscious motivation; expectation is a powerful guide to action. In 1977 Gould conducted his own analysis of some of Morton's endocranial-volume data, and alleged that the original results were based on "a priori" convictions and a selective use of data. He argued that when biases are accounted for, the original hypothesis—an ascending order of skull volume ranging from Blacks to Mongols to Whites—is unsupported by the data. "The Mismeasure of Man" presents a historical evaluation of the concepts of the "intelligence quotient" (IQ) and of the "general intelligence factor" ("g" factor), which were and are the measures for intelligence used by psychologists. Gould proposed that most psychological studies have been heavily biased by the belief that the human behavior of a race of people is best explained by genetic heredity. He cites the Burt Affair, about the oft-cited twin studies by Cyril Burt (1883–1971), wherein Burt claimed that human intelligence is highly heritable. As an evolutionary biologist and historian of science, Gould accepted "biological variability" (the premise of the transmission of intelligence via genetic heredity), but opposed "biological determinism", which posits that genes determine a definitive, unalterable social destiny for each man and each woman in life and society. "The Mismeasure of Man" is an analysis of statistical correlation, the mathematics applied by psychologists to establish the validity of IQ tests and the heritability of intelligence.
For example, to establish the validity of the proposition that IQ is supported by a general intelligence factor ("g" factor), the answers to several tests of cognitive ability must positively correlate; thus, for the "g" factor to be a heritable trait, the IQ-test scores of close-relation respondents must correlate more than the IQ-test scores of distant-relation respondents. However, correlation does not imply causation; for example, Gould said that the measures of the changes, over time, in "my age, the population of México, the price of Swiss cheese, my pet turtle’s weight, and the average distance between galaxies" have a high, positive correlation—yet that correlation does not indicate that Gould’s age increased because the Mexican population increased. More specifically, a high, positive correlation between the intelligence quotients of a parent and a child can be presumed either as evidence that IQ is genetically inherited, or as evidence that IQ is inherited through social and environmental factors. Moreover, because the data from IQ tests can be applied to arguing the logical validity of either proposition—genetic inheritance or environmental inheritance—the psychometric data have no inherent value. Gould pointed out that even if the genetic heritability of IQ were demonstrable within a given racial or ethnic group, it would not explain the causes of IQ differences among the people of a group, or whether said differences can be attributed to the environment. For example, the height of a person is genetically determined, but there exist height differences within a given social group that can be attributed to environmental factors (e.g. the quality of nutrition) as well as to genetic inheritance. The evolutionary biologist Richard Lewontin, a colleague of Gould’s, is a proponent of this argument in relation to IQ tests.
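Gould's point that mere co-variation proves nothing about causation can be illustrated with a toy computation; the two series below are invented for illustration (any two unrelated quantities that both grow over time would do):

```python
import math

# Two unrelated quantities that both happen to grow over time
# (all numbers here are made up for illustration):
years = range(1950, 1980)
age = [y - 1941 for y in years]                          # a person's age, in years
population = [28.0 * 1.03 ** (y - 1950) for y in years]  # a population, in millions

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(age, population)
print(round(r, 3))  # very close to 1, despite no causal link between the series
```

Both series rise monotonically, so the correlation coefficient comes out near 1 even though neither quantity has any causal bearing on the other; this is exactly the ambiguity Gould saw in parent-child IQ correlations.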
An example of the intellectual confusion about what heritability is and is not is the statement: "If all environments were to become equal for everyone, heritability would rise to 100 percent because all remaining differences in IQ would necessarily be genetic in origin", which Gould said is misleading at best and false at worst. First, it is very difficult to conceive of a world wherein every man, woman, and child grew up in the same environment, because their spatial and temporal dispersion upon the planet Earth makes it impossible. Second, even if people were to grow up in the same environment, not every difference would be genetic in origin, because of the randomness of molecular and genetic development. Therefore, heritability is not a measure of phenotypic (physiognomy and physique) differences among racial and ethnic groups, but of differences between genotype and phenotype in a given population. Furthermore, he dismissed the proposition that an IQ score measures the general intelligence ("g" factor) of a person, because cognitive ability tests (IQ tests) present different types of questions, and the responses tend to form clusters of intellectual acumen. That is, different questions, and the answers to them, yield different scores—which indicates that an IQ test is a combination of different examinations of different things. As such, Gould proposed that IQ-test proponents assume the existence of "general intelligence" as a discrete quality within the human mind, and thus they analyze the IQ-test data to produce an IQ number that establishes the definitive general intelligence of each man and of each woman. Hence, Gould dismissed the IQ number as an erroneous artifact of the statistical mathematics applied to the raw IQ-test data, especially because psychometric data can be variously analyzed to produce multiple IQ scores.
The revised and expanded second edition (1996) includes two additional chapters, which critique Richard Herrnstein and Charles Murray’s book "The Bell Curve" (1994). Gould maintains that their book contains no new arguments and presents no compelling data; it merely refashions earlier arguments for biological determinism, which Gould defines as “the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status”. The majority of reviews of "The Mismeasure of Man" were positive, as Gould notes. Richard Lewontin, a celebrated evolutionary biologist who held positions at both the University of Chicago and Harvard, wrote a glowing review of Gould's book in "The New York Review of Books", endorsing most aspects of its account, and suggesting that it might have been even more critical of the racist intentions of the scientists he discusses, because scientists "sometimes tell deliberate lies because they believe that small lies can serve big truths." Gould said that the most positive review of the first edition to be written by a psychologist was in the "British Journal of Mathematical & Statistical Psychology", which reported that "Gould has performed a valuable service in exposing the logical basis of one of the most important debates in the social sciences, and this book should be required reading for students and practitioners alike." In "The New York Times", journalist Christopher Lehmann-Haupt wrote that the critique of factor analysis "demonstrates persuasively how factor analysis led to the cardinal error in reasoning, of confusing correlation with cause, or, to put it another way, of attributing false concreteness to the abstract". 
The British journal "Saturday Review" praised the book as a "fascinating historical study of scientific racism", saying that its arguments "illustrate both the logical inconsistencies of the theories and the prejudicially motivated, albeit unintentional, misuse of data in each case". In the American "Monthly Review" magazine, Richard York and the sociologist Brett Clark praised the book's thematic concentration, saying that "rather than attempt a grand critique of all 'scientific' efforts aimed at justifying social inequalities, Gould performs a well-reasoned assessment of the errors underlying a specific set of theories and empirical claims". "Newsweek" gave it a positive review for revealing biased science and its abuse. "The Atlantic Monthly" and Phi Beta Kappa’s "The Key Reporter" also reviewed the book favorably. The first edition of "The Mismeasure of Man" won the non-fiction award from the National Book Critics Circle and the Outstanding Book Award for 1983 from the American Educational Research Association; the Italian translation was awarded the "Iglesias" prize in 1991; and in 1998, the Modern Library ranked it as the 24th-best English-language non-fiction book of the 20th century. In December 2006, "Discover" magazine ranked "The Mismeasure of Man" as the 17th-greatest science book of all time. In a paper published in 1988, John S. Michael reported that Samuel G. Morton's original 19th-century study was conducted with less bias than Gould had described; that "contrary to Gould's interpretation ... Morton's research was conducted with integrity". Nonetheless, Michael's analysis suggested that there were discrepancies in Morton's craniometric calculations, that his data tables were scientifically unsound, and that he "cannot be excused for his errors, or his unfair comparisons of means". Michael later complained that some authors, including J. Philippe Rushton, selectively "cherry-picked facts" from his research to support their own claims.
He lamented, "Some people have turned the Morton-Gould affair into an all or nothing debate in which either one side is right or the other side is right, and I think that is a mistake. Both men made mistakes and proving one wrong does not prove the other one right." In another study, published in 2011, Jason E. Lewis and colleagues re-measured the cranial volumes of the skulls in Morton's collection, and re-examined the respective statistical analyses by Morton and by Gould. They concluded that, contrary to Gould's analysis, Morton did not falsify craniometric research results to support his racial and social prejudices, and that the "Caucasians" possessed the greatest average cranial volume in the sample. To the extent that Morton's craniometric measurements were erroneous, the error was away from his personal biases. Ultimately, Lewis and colleagues disagreed with most of Gould's criticisms of Morton, finding that Gould's work was "poorly supported", and that, in their opinion, the confirmation of the results of Morton's original work "weakens the argument of Gould, and others, that biased results are endemic in science". Despite this criticism, the authors acknowledged that they admired Gould's staunch opposition to racism. Lewis' study examined 46% of Morton's samples, whereas Gould's earlier study was based solely on a re-examination of Morton's raw data tables. However, Lewis' study was subsequently criticized by a number of scholars for misrepresenting Gould's claims, for bias, for examining fewer than half of the skulls in Morton's collection, for failing to correct measurements for age, gender or stature, and for its claim that any meaningful conclusions could be drawn from Morton's data. In 2015 this paper was reviewed by Michael Weisberg, who reported that "most of Gould's arguments against Morton are sound.
Although Gould made some errors and overstated his case in a number of places, he provided "prima facie" evidence, as yet unrefuted, that Morton did indeed mismeasure his skulls in ways that conformed to 19th century racial biases". Biologists and philosophers Jonathan Kaplan, Massimo Pigliucci, and Joshua Alexander Banta also published a critique of the group's paper, arguing that many of its claims were misleading and the re-measurements were "completely irrelevant to an evaluation of Gould's published analysis". They also maintain that the "methods deployed by Morton and Gould were both inappropriate" and that "Gould's statistical analysis of Morton's data is in many ways no better than Morton's own". A 2018 paper argues that Morton's data was unbiased but his interpretation of the results was not; the paper argues he had similar findings to research conducted by a contemporary craniologist, Friedrich Tiedemann, who had interpreted the data differently to argue strongly against any conception of a racial hierarchy. In a review of "The Mismeasure of Man", Bernard Davis, professor of microbiology at Harvard Medical School, said that Gould erected a straw man argument based upon incorrectly defined key terms—specifically "reification"—which Gould furthered with a "highly selective" presentation of statistical data, all motivated more by politics than by science. He also asserted that Philip Morrison’s laudatory book review of "The Mismeasure of Man" in "Scientific American" was written and published because the editors of the journal had "long seen the study of the genetics of intelligence as a threat to social justice". Davis also criticized the popular-press and literary-journal reviews of "The Mismeasure of Man" as generally approbatory, whereas most scientific-journal reviews were generally critical.
Nonetheless, in 1994, Gould contradicted Davis by arguing that of twenty-four academic book reviews written by experts in psychology, fourteen approved of the book, three were mixed, and seven disapproved. Furthermore, Davis accused Gould of having misrepresented a study by Henry H. Goddard (1866–1957) about the intelligence of Jewish, Hungarian, Italian, and Russian immigrants to the U.S., wherein Gould reported that Goddard had characterized those people as "feeble-minded", whereas, in the initial sentence of the study, Goddard said the study subjects were atypical members of their ethnic groups, who had been selected because of their suspected sub-normal intelligence. Countering Gould, Davis further explained that Goddard proposed that the low IQs of the sub-normally intelligent men and women who took the cognitive-ability test likely derived from their social environments rather than from their respective genetic inheritances, and concluded that "we may be confident that their children will be of average intelligence, and, if rightly brought up, will be good citizens". In his review, psychologist John B. Carroll said that Gould did not understand "the nature and purpose" of factor analysis. Statistician David J. Bartholomew, of the London School of Economics, said that Gould erred in his use of factor analysis, irrelevantly concentrated upon the fallacy of reification (treating the abstract as concrete), and ignored the contemporary scientific consensus about the existence of the psychometric "g". Reviewing the book, Stephen F. Blinkhorn, a senior lecturer in psychology at the University of Hertfordshire, wrote that "The Mismeasure of Man" was "a masterpiece of propaganda" that selectively juxtaposed data to further a political agenda.
Psychologist Lloyd Humphreys, then editor-in-chief of "The American Journal of Psychology" and "Psychological Bulletin", wrote that "The Mismeasure of Man" was "science fiction" and "political propaganda", and that Gould had misrepresented the views of Alfred Binet, Godfrey Thomson, and Lewis Terman. In his review, psychologist Franz Samelson wrote that Gould was wrong in asserting that the psychometric results of the intelligence tests administered to soldier-recruits by the U.S. Army contributed to the legislation of the Immigration Restriction Act of 1924. In their study of the Congressional Record and committee hearings related to the Immigration Act, Mark Snyderman and Richard J. Herrnstein reported that "the [intelligence] testing community did not generally view its findings as favoring restrictive immigration policies like those in the 1924 Act, and Congress took virtually no notice of intelligence testing". Psychologist David P. Barash wrote that Gould unfairly groups sociobiology with "racist eugenics and misguided Social Darwinism". A 2018 paper argued that Gould was incorrect in his assessment of the Army Beta and that, by the knowledge, technology, and test-development standards of the time, it was adequate and could measure intelligence, possibly even in the modern day. In his review of "The Mismeasure of Man", Arthur Jensen, a University of California (Berkeley) educational psychologist whom Gould much criticized in the book, wrote that Gould used straw man arguments to advance his opinions, misrepresented other scientists, and propounded a political agenda. According to Jensen, the book was "a patent example" of the bias that political ideology imposes upon science—the very thing that Gould sought to portray in the book.
Jensen also criticized Gould for concentrating on long-disproven arguments (noting that 71% of the book's references preceded 1950), rather than addressing "anything currently regarded as important by scientists in the relevant fields", suggesting that drawing conclusions from early human intelligence research is like condemning the contemporary automobile industry based upon the mechanical performance of the Ford Model T. Charles Murray, co-author of "The Bell Curve" (1994), said that his views about the distribution of human intelligence, among the races and the ethnic groups who compose the U.S. population, were misrepresented in "The Mismeasure of Man". Psychologist Hans Eysenck wrote that "The Mismeasure of Man" is a book that presents "a paleontologist's distorted view of what psychologists think, untutored in even the most elementary facts of the science". Arthur Jensen and Bernard Davis argued that if the "g" factor (general intelligence factor) were replaced with a model that tested several types of intelligence, it would change results less than one might expect. Therefore, according to Jensen and Davis, the results of standardized tests of cognitive ability would continue to correlate with the results of other such standardized tests, and that the intellectual achievement gap between black and white people would remain. Psychologist J. Philippe Rushton accused Gould of "scholarly malfeasance" for misrepresenting and for ignoring contemporary scientific research pertinent to the subject of his book, and for attacking dead hypotheses and methods of research. He faulted "The Mismeasure of Man" because it did not mention the magnetic resonance imaging (MRI) studies that showed the existence of statistical correlations among brain-size, IQ, and the "g" factor, despite Rushton having sent copies of the MRI studies to Gould. 
Rushton further criticized the book for the absence of the results of five studies of twins reared apart corroborating the findings of Cyril Burt—the contemporary average was 0.75 compared to the average of 0.77 reported by Burt. James R. Flynn, a researcher critical of racial theories of intelligence, repeated the arguments of Arthur Jensen about the second edition of "The Mismeasure of Man". Flynn wrote that "Gould's book evades all of Jensen's best arguments for a genetic component in the black–white IQ gap, by positing that they are dependent on the concept of "g" as a general intelligence factor. Therefore, Gould believes that if he can discredit "g" no more need be said. This is manifestly false. Jensen’s arguments would bite no matter whether blacks suffered from a score deficit on one or ten or one hundred factors." Rather than defending Jensen and Rushton, however, Flynn concluded that the Flynn Effect, a nongenetic rise in IQ throughout the 20th century, invalidated their core argument because their methods falsely identified even this change as genetic. According to psychologist Ian Deary, Gould's claim that there is no relation between brain size and IQ is outdated. Furthermore, he reported that Gould refused to correct this in new editions of the book, even though newly available data were brought to his attention by several researchers.

Taliban treatment of women

While in power in Afghanistan, the Taliban became notorious internationally for their sexism and violence against women. Their stated motive was to create a "secure environment where the chastity and dignity of women may once again be sacrosanct", reportedly based on Pashtunwali beliefs about living in purdah. Afghan women were forced to wear the burqa at all times in public, because, according to one Taliban spokesman, "the face of a woman is a source of corruption" for men not related to them.
In a systematic segregation sometimes referred to as gender apartheid, women were not allowed to work, they were not allowed to be educated after the age of eight, and until then were permitted only to study the Qur'an. Women seeking an education were forced to attend underground schools, where they and their teachers risked execution if caught. They were not allowed to be treated by male doctors unless accompanied by a male chaperone, which led to illnesses remaining untreated. They faced public flogging and execution for violations of the Taliban's laws. The Taliban allowed and in some cases encouraged marriage for girls under the age of 16. Amnesty International reported that 80% of Afghan marriages were forced. From the age of eight onward, girls were not allowed to be in direct contact with males other than a close "blood relative", husband, or in-law (see mahram). Numerous other restrictions were imposed on women as well. The Taliban rulings regarding public conduct placed severe restrictions on a woman's freedom of movement and created difficulties for those who could not afford a burqa or did not have a "mahram". These women faced virtual house arrest. A woman who was badly beaten by the Taliban for walking the streets alone stated: "My father was killed in battle...I have no husband, no brother, no son. How am I to live if I can't go out alone?" A field worker for the NGO Terre des hommes witnessed the impact on female mobility at Kabul's largest state-run orphanage, Taskia Maskan. After the female staff was relieved of their duties, the approximately 400 girls living at the institution were locked inside for a year without being allowed outside for recreation. Several decrees specifically restricted women's mobility. The lives of rural women were less dramatically affected as they generally lived and worked within secure kin environments. A relative level of freedom was necessary for them to continue with their chores or labour.
If these women travelled to a nearby town, the same urban restrictions would have applied to them. The Taliban disagreed with past Afghan statutes that allowed the employment of women in a mixed-sex workplace, claiming that this was a breach of purdah and sharia law. On September 30, 1996, the Taliban decreed that all women should be banned from employment. It is estimated that 25 percent of government employees were female, and when compounded by losses in other sectors, many thousands of women were affected. This had a devastating impact on household incomes, especially on vulnerable or widow-headed households, which were common in Afghanistan. The ban was also a loss for those whom the employed women had served. Elementary education of children, not just girls, was shut down in Kabul, where virtually all of the elementary school teachers were women. Thousands of educated families fled Kabul for Pakistan after the Taliban took the city in 1996. Among those who remained in Afghanistan, there was an increase in mother and child destitution as the loss of vital income reduced many families to the margin of survival. Taliban Supreme Leader Mohammed Omar assured female civil servants and teachers that they would still receive wages of around US$5 per month, although this was a short-term offering. A Taliban representative stated: "The Taliban’s act of giving monthly salaries to 30,000 job-free women, now sitting comfortably at home, is a whiplash in the face of those who are defaming Taliban with reference to the rights of women. These people through baseless propaganda are trying to incite the women of Kabul against the Taliban". The Taliban promoted the use of the extended family, or zakat system of charity, to ensure that women would not need to work. However, years of conflict meant that nuclear families often struggled to support themselves, let alone aid additional relatives.
Qualification for aid often rested on men; food aid, for example, had to be collected by a male relative. The possibility that a woman might not possess any living male relatives was dismissed by Mullah Ghaus, the acting foreign minister, who said he was surprised at the degree of international attention and concern for such a small percentage of the Afghan population. For rural women there was generally little change in their circumstances, as their lives were dominated by the unpaid domestic, agricultural and reproductive labour necessary for subsistence. Female health professionals were exempted from the employment ban, yet they operated in much-reduced circumstances. The ordeal of physically getting to work due to the segregated bus system and widespread harassment meant some women left their jobs by choice. Of those who remained, many lived in fear of the regime and chose to reside at the hospital during the working week to minimize exposure to Taliban forces. These women were vital to ensuring the continuance of gynecological, ante-natal and midwifery services, albeit on a much-compromised level. Under the Rabbani regime, there had been around 200 female staff working in Kabul's Mullalai Hospital, yet barely 50 remained under the Taliban. NGOs operating in Afghanistan after the fall of the Taliban in 2001 found the shortage of female health professionals to be a significant obstacle to their work. The other exception to the employment ban allowed a reduced number of humanitarian workers to remain in service. The Taliban segregation codes meant women were invaluable for gaining access to vulnerable women or conducting outreach research. This exception was not sanctioned by the entire Taliban movement, so instances of female participation, or lack thereof, varied with each circumstance.
The city of Herat was particularly affected by Taliban adjustments to the treatment of women, as it had been one of the more cosmopolitan and outward-looking areas of Afghanistan prior to 1995. Women had previously been allowed to work in a limited range of jobs, but this was stopped by Taliban authorities. The new governor of Herat, Mullah Razzaq, issued orders for women to be forbidden to pass his office for fear of their distracting nature. The Taliban claimed to recognize their Islamic duty to offer education to both boys and girls, yet a decree was passed that banned girls above the age of 8 from receiving education. Maulvi Kalamadin insisted it was only a temporary suspension and that females would return to school and work once facilities and street security were adapted to prevent cross-gender contact. The Taliban wished to have total control of Afghanistan before calling upon an Ulema body to determine the content of a new curriculum to replace the Islamic yet unacceptable Mujahadin version. The female employment ban was felt greatly in the education system. Within Kabul alone, the ruling affected 106,256 girls, 148,223 male students, and 8,000 female university undergraduates. 7,793 female teachers were dismissed, a move that crippled the provision of education and caused 63 schools to close due to a sudden lack of educators. Some women ran clandestine schools within their homes for local children, or for other women under the guise of sewing classes, such as the Golden Needle Sewing School. The learners, parents and educators were aware of the consequences should the Taliban discover their activities, but for those who felt trapped under the strict Taliban rule, such actions allowed them a sense of self-determination and hope. 
Prior to the Taliban taking power in Afghanistan, male doctors had been allowed to treat women in hospitals, but a decree that no male doctor should be allowed to touch the body of a woman under the pretext of consultation was soon introduced. With fewer female health professionals in employment, the distances many women had to travel for attention increased while the provision of ante-natal clinics declined. In Kabul, some women established informal clinics in their homes to serve family and neighbours, yet as medical supplies were hard to obtain, their effectiveness was limited. Many women endured prolonged suffering or a premature death due to the lack of treatment. For those families that had the means, inclination, and mahram support, medical attention could be sought in Pakistan. In October 1996, women were barred from accessing the traditional hammam (public baths), as the opportunities for socializing there were ruled un-Islamic. These baths were an important facility in a nation where few possessed running water, and the ban gave the UN cause to predict a rise in scabies and vaginal infections among women denied both methods of hygiene and access to health care. Nasrine Gross, an Afghan-American author, stated in 2001 that it had been four years since many Afghan women had been able to pray to their God, as "Islam prohibits women from praying without a bath after their periods". In June 1998, the Taliban banned women from attending general hospitals in the capital, whereas before they had been able to attend a women-only ward of general hospitals. This left only one hospital in Kabul at which they could seek treatment. Family harmony was badly affected by the mental stress, isolation and depression that often accompanied the forced confinement of women. A survey of 160 women concluded that 97 percent showed signs of serious depression and 71 percent reported a decline in their physical well-being.
Latifa, a Kabul resident and author, wrote: "The apartment resembles a prison or a hospital. Silence weighs heavily on all of us. As none of us do much, we haven’t got much to tell each other. Incapable of sharing our emotions, we each enclose ourselves in our own fear and distress. Since everyone is in the same black pit, there isn’t much point in repeating time and again that we can’t see clearly." The Taliban closed the country's beauty salons. Cosmetics such as nail varnish and make-up were prohibited. Taliban restrictions on the cultural presence of women covered several areas. Place names including the word "women" were modified so that the word was not used. Women were forbidden to laugh loudly, as it was considered improper for a stranger to hear a woman's voice. Women were prohibited from participating in sports or entering a sports club. The Revolutionary Association of the Women of Afghanistan (RAWA) dealt specifically with these issues. It was founded by Meena Keshwar Kamal, a woman who, amongst other things, established a bilingual magazine called "Women's Message" in 1981. She was assassinated in 1987 at the age of 30, but is revered as a heroine among Afghan women. Punishments were often carried out publicly, either as formal spectacles held in sports stadiums or town squares or as spontaneous street beatings. Civilians lived in fear of harsh penalties as there was little mercy; women caught breaking decrees were often treated with extreme violence. Many punishments were carried out by individual militias without the sanction of Taliban authorities, as it was against official Taliban policy to punish women in the street. A more official line was the punishment of men for instances of female misconduct: a reflection of a patriarchal society and the belief that men are duty bound to control women.
Maulvi Kalamadin stated in 1997, "Since we cannot directly punish women, we try to use taxi drivers and shopkeepers as a means to pressure them" to conform. The protests of international agencies carried little weight with Taliban authorities, who gave precedence to their interpretation of Islamic law and did not feel bound by UN codes or human rights laws, which they viewed as instruments of Western imperialism. After the Taliban takeover of Herat in 1995, the UN had hoped the gender policies would become more 'moderate' "as it matured from a popular uprising into a responsible government with linkages to the donor community". The Taliban refused to bow to international pressure and reacted calmly to aid suspensions. In January 2006, a London conference on Afghanistan led to the creation of an International Compact, which included benchmarks for the treatment of women. The Compact includes the following point: "Gender: By end-1389 (20 March 2011): the National Action Plan for Women in Afghanistan will be fully implemented; and, in line with Afghanistan’s MDGs, female participation in all Afghan governance institutions, including elected and appointed bodies and the civil service, will be strengthened." However, an Amnesty International report of June 11, 2008 declared that there needed to be "no more empty promises" with regard to Afghanistan, citing the treatment of women as one such unfulfilled goal. Various Taliban groups have been in existence in Pakistan since around 2002. Most of these Taliban factions have joined an umbrella organization called Tehrik-i-Taliban Pakistan (TTP). Although the Pakistani Taliban is distinct from the Afghan Taliban, they have a similar outlook towards women. The Pakistani Taliban too has killed women accusing them of un-Islamic behavior and has forcibly married girls after publicly flogging them for illicit relations.
Theft

Theft is the taking of another person's property or services without that person's permission or consent, with the intent to deprive the rightful owner of it. The word "theft" is also used as an informal shorthand term for some crimes against property, such as burglary, embezzlement, larceny, looting, robbery, shoplifting, library theft or fraud. In some jurisdictions, "theft" is considered to be synonymous with "larceny"; in others, "theft" has replaced "larceny". Someone who carries out an act of theft, or makes a career of it, is known as a thief. "Theft" is the name of a statutory offence in California, Canada, England and Wales, Hong Kong, Northern Ireland, the Republic of Ireland, and the Australian states of South Australia and Victoria. The "actus reus" of theft is usually defined as an unauthorized taking, keeping, or using of another's property, which must be accompanied by a "mens rea" of dishonesty and the intent permanently to deprive the owner or rightful possessor of that property or its use. For example, if X goes to a restaurant and, by mistake, takes Y's scarf instead of her own, she has physically deprived Y of the use of the property (which is the "actus reus"), but the mistake prevents X from forming the "mens rea" (i.e., because she believes that she is the owner, she is not dishonest and does not intend to deprive the "owner" of it), so no crime has been committed at this point. But if she realizes the mistake when she gets home and could return the scarf to Y, she will steal the scarf if she dishonestly keeps it (see theft by finding). Note that there may be civil liability for the torts of trespass to chattels or conversion in either eventuality.
Section 322(1) of the Criminal Code provides the general definition for theft in Canada, and Sections 323 to 333 provide for more specific instances and exclusions. In interpreting the general definition, the Supreme Court of Canada has construed "anything" very broadly, stating that it is not restricted to tangibles but includes intangibles. To be the subject of theft, however, a thing must be capable of being taken or converted in a way that deprives the owner of it. Because of this, confidential information cannot be the subject of theft: it is not capable of being taken, as only tangibles can be taken, and it cannot be converted, not because it is an intangible, but because, save in very exceptional far‑fetched circumstances, the owner would never be deprived of it. However, the theft of trade secrets in certain circumstances does constitute part of the offence of economic espionage, which can be prosecuted under s. 19 of the "Security of Information Act". For the purposes of punishment, Section 334 divides theft into two separate offences according to the value and nature of the goods stolen. Where a motor vehicle is stolen, Section 333.1 provides for a maximum punishment of 10 years for an indictable offence (and a minimum sentence of six months for a third or subsequent conviction), and a maximum sentence of 18 months on summary conviction. Article 2 of the Theft Ordinance provides the general definition of theft in Hong Kong. In India, theft is a criminal offence punishable by imprisonment. Below are excerpts from the Indian Penal Code stating the definition of, and punishments for, theft. Whoever, intending to take dishonestly any movable property out of the possession of any person without that person’s consent, moves that property in order to such taking, is said to commit theft. Explanation 1.—A thing so long as it is attached to the earth, not being movable property, is not the subject of theft; but it becomes capable of being the subject of theft as soon as it is severed from the earth.
Explanation 2.—A moving effected by the same act which effects the severance may be a theft. Explanation 3.—A person is said to cause a thing to move by removing an obstacle which prevented it from moving or by separating it from any other thing, as well as by actually moving it. Explanation 4.—A person, who by any means causes an animal to move, is said to move that animal, and to move everything which, in consequence of the motion so caused, is moved by that animal. Explanation 5.—The consent mentioned in the definition may be express or implied, and may be given either by the person in possession, or by any person having for that purpose authority either express or implied. Whoever commits theft shall be punished with imprisonment of either description for a term which may extend to three years, or with fine, or with both. Whoever commits theft in any building, tent or vessel, which building, tent or vessel is used as a human dwelling, or used for the custody of property, shall be punished with imprisonment of either description for a term which may extend to seven years, and shall also be liable to fine. Whoever, being a clerk or servant, or being employed in the capacity of a clerk or servant, commits theft in respect of any property in the possession of his master or employer, shall be punished with imprisonment of either description for a term which may extend to seven years, and shall also be liable to fine. Whoever commits theft, having made preparation for causing death, or hurt, or restraint, or fear of death, or of hurt, or of restraint, to any person, in order to the committing of such theft, or in order to the effecting of his escape after the committing of such theft, or in order to the retaining of property taken by such theft, shall be punished with rigorous imprisonment for a term which may extend to ten years, and shall also be liable to fine. In the Netherlands, theft is a crime addressed by several articles of the Wetboek van Strafrecht.
In the Republic of Ireland, theft is a statutory offence, created by section 4(1) of the Criminal Justice (Theft and Fraud Offences) Act, 2001. According to the Romanian Penal Code, a person committing theft ("furt") can face a penalty ranging from 1 to 20 years, depending on the degree of theft. In England and Wales, theft is a statutory offence, created by section 1(1) of the Theft Act 1968. This offence replaces the former offences of larceny, embezzlement and fraudulent conversion. The marginal note to section 1 of the Theft Act 1968 describes it as a "basic definition" of theft. Sections 2 to 6 of the Theft Act 1968 have effect as regards the interpretation and operation of section 1 of that Act. Except as otherwise provided by that Act, sections 2 to 6 of that Act apply only for the purposes of section 1 of that Act. On section 3, see R v Hinks and Lawrence v Metropolitan Police Commissioner. Edward Griew said that section 4(1) could, without changing its meaning, be reduced by omitting words. Sections 4(2) to (4) provide that certain kinds of property can only be stolen under particular circumstances.
Intangible property
Confidential information and trade secrets are not property within the meaning of section 4. The words "other intangible property" include export quotas that are transferable for value on a temporary or permanent basis.
Electricity
Electricity cannot be stolen. It is not property within the meaning of section 4 and is not appropriated by switching on a current. "Cf." the offence of abstracting electricity under section 13.
Section 5, "belonging to another", requires a distinction to be made between ownership, possession and control. So if A buys a car for cash, A will be the owner. If A then lends the car to B Ltd (a company), B Ltd will have possession. C, an employee of B Ltd, then uses the car and has control. If C uses the car in an unauthorized way, C will steal the car from A and B Ltd.
This means that it is possible to steal one's own property. In R v Turner, the owner removed his car from the forecourt of a garage where it had been left for collection after repair. He intended to avoid paying the bill. There was an appropriation of the car because it had been physically removed, but there were two further issues to be decided. Section 6, "with the intent to permanently deprive the other of it", is sufficiently flexible to include situations where the property is later returned.
Alternative verdict
The offence created by section 12(1) of the Theft Act 1968 (TWOC) is available as an alternative verdict on an indictment for theft.
Visiting forces
Theft is an offence against property for the purposes of section 3 of the Visiting Forces Act 1952.
Mode of trial and sentence
Theft is triable either way. A person guilty of theft is liable, on conviction on indictment, to imprisonment for a term not exceeding seven years, or on summary conviction to imprisonment for a term not exceeding six months, or to a fine not exceeding the prescribed sum, or to both.
Aggravated theft
The only offence of aggravated theft is robbery, contrary to section 8 of the Theft Act 1968.
Stolen goods
For the purposes of the provisions of the Theft Act 1968 which relate to stolen goods, goods obtained in England or Wales or elsewhere by blackmail or fraud are regarded as stolen, and the words "steal", "theft" and "thief" are construed accordingly. Sections 22 to 24 and 26 to 28 of the Theft Act 1968 contain references to stolen goods.
Handling stolen goods
The offence of handling stolen goods, contrary to section 22(1) of the Theft Act 1968, can only be committed "otherwise than in the course of stealing".
Similar or associated offences
According to its title, the Theft Act 1968 revises the law as to theft and similar or associated offences. See also the Theft Act 1978. In Northern Ireland, theft is a statutory offence, created by section 1 of the Theft Act (Northern Ireland) 1969.
In the United States, crimes must be prosecuted in the jurisdiction in which they occurred. Although federal and state jurisdiction may overlap, even when a criminal act violates both state and federal law, in most cases only the most serious offenses are prosecuted at the federal level. The federal government has criminalized certain narrow categories of theft that directly affect federal agencies or interstate commerce. The Model Penal Code, promulgated by the American Law Institute to help state legislatures update and standardize their laws, includes categories of theft by unlawful taking or by unlawfully disposing of property, theft by deception (fraud), theft by extortion, theft by failure to take measures to return lost or mislaid or mistakenly delivered property, theft by receipt of stolen property, theft by failing to make agreed disposition of received funds, and theft of services. Although many U.S. states have retained larceny as the primary offense, some have now adopted theft provisions. Grand theft, also called "grand larceny", is a term used throughout the United States designating theft that is large in magnitude or serious in potential penological consequences. Grand theft is contrasted with petty theft, also called petit theft, that is of smaller magnitude or lesser seriousness. Theft laws, including the distinction between grand theft and petty theft for cases falling within its jurisdiction, vary by state. This distinction is established by statute, as are the penological consequences. Most commonly, statutes establishing the distinction between grand theft and petty theft do so on the basis of the value of the money or property taken by the thief or lost by the victim, with the dollar threshold for grand theft varying from state to state. Most commonly, the penological consequences of the distinction include the significant one that grand theft can be treated as a felony, while petty theft is generally treated as a misdemeanor. 
In some states, grand theft of a vehicle may be charged as "grand theft auto" (see motor vehicle theft for more information). Repeat offenders who continue to steal may become subject to life imprisonment in certain states. Sometimes the federal anti-theft-of-government-property law is used to prosecute cases where the Espionage Act would otherwise be involved; the theory being that by retaining sensitive information, the defendant has taken a 'thing of value' from the government. For examples, see the Amerasia case and "United States v. Manning". When the value of stolen property exceeds $500, the theft is a felony offense; if the property is valued at less than $500, it is a Class A misdemeanor. Unlike some other states, shoplifting is not defined by a separate statute but falls under the state's general theft statute. The Alaska State Code does not use the terms "grand theft" or "grand larceny". However, it specifies that theft of property valued at more than $1,000 is a felony, whereas thefts of lesser amounts are misdemeanors. The felony categories (class 1 and class 2 theft) also include theft of firearms; property taken from the person of another; vessel or aircraft safety or survival equipment; and access devices. Felony theft is committed when the value of the stolen property exceeds $1,000. Regardless of the value of the item, if it is a firearm or an animal taken for the purpose of animal fighting, the theft is a Class 6 felony. The Theft Act of 1927 consolidated a variety of common law crimes into theft. The state now distinguishes between two types of theft, grand theft and petty theft. The older crimes of embezzlement, larceny, and stealing, and any preexisting references to them, now fall under the theft statute. There are a number of criminal statutes in the California Penal Code defining grand theft in different amounts. 
Grand theft generally consists of the theft of something valued over $950 (including money, labor, or property, though the threshold is lower for various specified types of property). Theft is also considered grand theft when more than $250 in crops or marine life forms are stolen, "when the property is taken from the person of another," or when the property stolen is an automobile, farm animal, or firearm. Petty theft is the default category for all other thefts. Grand theft may be charged (depending upon the circumstances) as a misdemeanor or felony and is punishable by up to a year in jail or prison, while petty theft is a misdemeanor punishable by a fine, by imprisonment not exceeding six months in jail, or by both. In general, any property taken that carries a value of more than $300 can be considered grand theft in certain circumstances. In Georgia, when a theft offense involves property valued at $500 or less, the crime is punishable as a misdemeanor. Any theft of property determined to exceed $500 may be treated as grand theft and charged as a felony. Theft in the first or second degree is a felony. Theft in the first degree means theft above $20,000, theft of a firearm or explosive, or theft over $300 during a declared emergency. Theft in the second degree means theft above $750, theft from the person of another, theft of agricultural products over $100, or theft of aquacultural products from an enclosed property. Theft is a felony if the value of the property exceeds $300 or the property is stolen from the person of another. Thresholds at $10,000, $100,000, and $500,000 determine how severe the punishment can be. The location from which property was stolen is also a factor in sentencing. 
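At bottom, value-based grading schemes like these are tiered threshold lookups. As a purely illustrative sketch (not legal advice), the $300 felony floor and the $10,000/$100,000/$500,000 severity thresholds just described could be expressed as follows; the tier labels are descriptive placeholders, not statutory terms, and the many non-monetary factors mentioned above (property type, theft from the person, location) are deliberately omitted:

```python
# Illustrative only: a simplified value-tier lookup for the thresholds
# described in the text ($300 felony floor; $10,000 / $100,000 / $500,000
# determining severity). Real statutes also weigh non-monetary factors,
# which this sketch ignores.

# (threshold, label) pairs, highest first; labels are descriptive, not statutory.
TIERS = [
    (500_000, "felony (highest severity tier)"),
    (100_000, "felony (third severity tier)"),
    (10_000, "felony (second severity tier)"),
    (300, "felony (base tier)"),
]

def classify_theft(value: float) -> str:
    """Return a descriptive tier for a theft of the given dollar value."""
    for threshold, label in TIERS:
        if value > threshold:  # the text says the value must *exceed* the threshold
            return label
    return "misdemeanor"

print(classify_theft(250))      # misdemeanor
print(classify_theft(5_000))    # felony (base tier)
print(classify_theft(750_000))  # felony (highest severity tier)
```

The ordering of the tier list matters: checking the highest threshold first means each value falls through to the most severe tier it exceeds.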
KRS 514.030 states that theft by unlawful taking or disposition is generally a Class A misdemeanor, unless the items stolen are a firearm, anhydrous ammonia, a controlled substance valued at less than $10,000, or any other item or combination of items valued at $500 or more but less than $10,000, in which case the theft is a Class D felony. Theft of items valued at $10,000 or more but less than $1,000,000 is a Class C felony. Theft of items valued at $1,000,000 or more is a Class B felony, as is a first offense of theft of anhydrous ammonia for the express purpose of manufacturing methamphetamine in violation of KRS 218A.1432. In the latter case, subsequent offenses are a Class A felony. In Massachusetts, theft may generally be charged as a felony if the value of stolen property is greater than $250. Stealing is a felony if the value of stolen property exceeds $500. It is also a felony if "The actor physically takes the property appropriated from the person of the victim" or the stolen property is a vehicle, legal document, credit card, firearm, explosive, U.S. flag on display, livestock animal, fish with a value exceeding $75, captive wildlife, controlled substance, or ammonia. Stealing in excess of $25,000 is usually a class B felony (sentence: 5–15 years), while any other felony stealing (not including the felonies of burglary or robbery) that does not involve chemicals is a class C felony (sentence: up to 7 years). Non-felony stealing is a class A misdemeanor (sentence: up to 1 year). Grand larceny consists of stealing property with a value exceeding $1,000; or stealing a public record, secret scientific material, firearm, credit or debit card, ammonia, telephone with service, or motor vehicle or religious item with a value exceeding $100; or stealing from the person of another, by extortion, or from an ATM. 
The degree of grand larceny is increased if the theft was from an ATM, through extortion involving fear, or involved a value exceeding the thresholds of $3,000, $50,000, or $1,000,000. Grand Larceny: Value of goods exceeds $900 (13 V.S.A. § 2501) Grand Larceny: Value of goods exceeds $200 (Virginia Code § 18.2-95) Theft of goods valued between $750 and $5,000 is second-degree theft, a Class C felony. Theft of goods valued above $5,000, of a search-and-rescue dog on duty, of public records from a public office or official, of metal wire from a utility, or of an access device, is a Class B felony, as is theft of a motor vehicle or a firearm. Victoria Theft is defined in the "Crimes Act" 1958 (Vic) as when a person "dishonestly appropriates property belonging to another with the intention of permanently depriving the other of it". The actus reus and mens rea are defined as follows: Appropriation is defined in section 73(4) of the "Crimes Act" 1958 (Vic) as the assumption of any of the owner's rights. It does not have to be all the owner's rights; the assumption of at least one right is sufficient. If the owner gave their consent to the appropriation, there cannot be an appropriation. However, if this consent is obtained by deception, the consent is vitiated. Property – defined in section 71(1) of the "Crimes Act" 1958 (Vic) as being both tangible property, including money, and intangible property. Information has been held not to be property. Belonging to another – section 73(5) of the "Crimes Act" 1958 (Vic) provides that property belongs to another if that person has ownership, possession, or a proprietary interest in the property. Property can belong to more than one person. Sections 73(9) and 73(10) deal with situations where the accused receives property under an obligation or by mistake. 
South Australia Theft is defined in section 134 of the "Criminal Law Consolidation Act" 1935 (SA) as being where a person deals with property dishonestly, without the owner's consent, and intending to deprive the owner of the property or to make a serious encroachment on the owner's proprietary rights. Under this law, encroachment on proprietary rights means that the property is dealt with in a way that creates a substantial risk that the property will not be returned to the owner, or that its value will be greatly diminished by the time the owner gets it back. It also covers cases where property is treated as the defendant's own to dispose of, disregarding the actual owner's rights. For a basic offence, a person found guilty is liable to imprisonment for up to 10 years; for an aggravated offence, up to 15 years. Victoria Intention to permanently deprive – defined in s.73(12) as treating property as if it belongs to the accused rather than to the owner. Dishonestly – section 73(2) of the "Crimes Act" 1958 (Vic) creates a negative definition of the term 'dishonestly'. The section sets out only three circumstances in which the accused is deemed to have acted honestly: a belief in a legal claim of right, a belief that the owner would have consented, or a belief that the owner could not be found. South Australia Whether a person's conduct is dishonest is a question of fact to be determined by the jury, based on their own knowledge and experience. As with the definition in Victoria, the section defines what is not dishonesty, including a belief in a legal claim of right or a belief that the owner could not be found. In the British West Indies, especially Grenada, there has been a spate of large-scale thefts of tons of sand from beaches. Both Grenada and Jamaica are considering increasing fines and jail time for the thefts. 
In parts of the world governed by sharia law, the punishment for theft is amputation of the right hand if the thief does not repent. This ruling is derived from sura 5, verse 38 of the Quran, which states: "As to the thief, Male or female, cut off his or her hands: a punishment by way of example, from Allah, for their crime: and Allah is Exalted in power." This is viewed as a deterrent. In Buddhism, one of the five precepts prohibits theft, and involves the intention to steal what one perceives as not belonging to oneself ("what is not given") and acting successfully upon that intention. The severity of the act of theft is judged by the worth of the owner and the worth of that which is stolen. Underhand dealings, fraud, cheating and forgery are also included in this precept. Professions seen as violating the precept against theft include working in the gambling industry and marketing products that the customer does not actually need. Possible causes for acts of theft include both economic and non-economic motivations. For example, an act of theft may be a response to the offender's feelings of anger, grief, depression, anxiety and compulsion, boredom, power and control issues, low self-esteem, a sense of entitlement, an effort to conform or fit in with a peer group, or rebellion. Theft from work may be attributed to factors that include greed, perceptions of economic need, support of a drug addiction, a response to or revenge for work-related issues, rationalization that the act is not actually one of stealing, response to opportunistic temptation, or the same emotional issues that may be involved in any other act of theft. The most common reasons for shoplifting include participation in an organized shoplifting ring, opportunistic theft, compulsive acts of theft, thrill-seeking, and theft due to need. 
Studies focusing on shoplifting by teenagers suggest that minors shoplift for reasons including the novelty of the experience, peer pressure, the desire to obtain goods that a minor cannot legally purchase, economic motives, self-indulgence, and rebellion against parents. Thomas Bowdler Thomas Bowdler, LRCP, FRS (11 July 1754 – 24 February 1825) was an English doctor best known for publishing "The Family Shakespeare", an expurgated edition of William Shakespeare's plays. The work, edited by his sister Henrietta Maria Bowdler, was intended to provide a version of Shakespeare more appropriate than the original for 19th-century women and children. Bowdler also published several other works, some reflecting his interest in and knowledge of continental Europe. Bowdler's last work was an expurgated version of Edward Gibbon's "Decline and Fall of the Roman Empire", published posthumously in 1826 under the supervision of his nephew and biographer, Thomas Bowdler the Younger. The verb bowdlerise (or bowdlerize) has linked his name with the censorship or omission of elements deemed inappropriate for children, not only in literature but also in motion pictures and television programmes. Thomas Bowdler was born in Box, near Bath, Somerset, the youngest son among the six children of Thomas Bowdler (c. 1719–1785), a banker of substantial fortune, and his wife, Elizabeth, "née" Cotton (d. 1797), the daughter of Sir John Cotton, 6th Baronet of Conington, Huntingdonshire. Bowdler studied medicine at the universities of St. Andrews and Edinburgh, where he received his degree in 1776, graduating with a thesis on intermittent fevers. He spent the next four years travelling through continental Europe, visiting Germany, Hungary, Italy, Sicily, and Portugal. In 1781 he caught a fever in Lisbon from a young friend whom he was attending through a fatal illness. 
He returned to England in broken health and with a strong aversion to the medical profession. In 1781 he was elected a Fellow of the Royal Society (FRS) and a Licentiate of the Royal College of Physicians (LRCP), but did not continue to practice medicine. He devoted himself instead to the cause of prison reform. Bowdler was also a strong chess player and once played eight recorded games against the best chess player of the time, François-André Danican Philidor, who was so confident of his superiority that he played with several handicaps. Bowdler won twice, lost three times, and drew three times. The Bowdler Attack is named after him. Bowdler's first published work was "Letters Written in Holland in the Months of September and October 1787" (1788), which gave his eye-witness account of the Patriots' uprising. In 1800 Bowdler took a lease on a country estate at St. Boniface, on the Isle of Wight, where he lived for ten years. In September 1806, when he was 52, he married Elizabeth Trevenen (née Farquharson), age 48, the widow of the naval officer Captain James Trevenen who had died in Catherine the Great's service at Kronstadt in 1790. The marriage was unhappy, and after a few years Bowdler and his wife separated. They had no children. After the separation, the marriage was never mentioned by the Bowdler family; in the biography of Bowdler written by his nephew, Thomas Bowdler, there is no mention of Bowdler ever marrying. In 1807, the first edition of the Bowdlers' "The Family Shakspeare," covering 20 plays, was published in four small volumes. From 1811 until his death in 1825, Bowdler lived at Rhyddings House, overlooking Swansea Bay, from where he travelled extensively in Britain and continental Europe. In 1815, he published "Observations on Emigration to France, With an Account of Health, Economy, and the Education of Children", a cautionary work propounding his view that English invalids should avoid French spas and go instead to Malta. 
In 1818, Bowdler published an expanded edition of "The Family Shakspeare", covering all 36 available plays, which had considerable success. By 1827 the work had gone into its fifth edition. In his last years, Bowdler prepared an expurgated version of the works of the historian Edward Gibbon, which was published posthumously in 1826. His sister Jane Bowdler (1743–1784) was a poet and essayist, and another sister, Henrietta Maria Bowdler (Harriet) (1750–1830), collaborated with Bowdler on his expurgated Shakespeare. Bowdler died in Swansea at the age of 70 and was buried there, at Oystermouth. He bequeathed donations to the poor of Swansea and Box. His large library, consisting of unexpurgated volumes of 17th- and 18th-century tracts collected by his ancestors Thomas Bowdler (1638–1700) and Thomas Bowdler (1661–1738), was donated to the University of Wales, Lampeter. In 1825 Bowdler's nephew, also called Thomas Bowdler, published his "Memoir of the Late John Bowdler, Esq., to Which Is Added, Some Account of the Late Thomas Bowdler, Esq. Editor of the Family Shakspeare". In Bowdler's childhood, his father had entertained the family with readings from Shakespeare. Later in life, Bowdler realized that his father had been omitting or altering passages he felt unsuitable for the ears of his wife and children. Bowdler felt it would be worthwhile to publish an edition which might be used in a family whose father was not a sufficiently "circumspect and judicious reader" to accomplish this expurgation himself. In 1807 the first edition of the Bowdlers' "The Family Shakspeare" was published in four duodecimo volumes, containing 20 plays. In 1818 the second edition, covering all 36 available plays, was published. Each play is preceded by an introduction wherein Bowdler summarizes and justifies his changes to the text. According to his nephew's "Memoir", the first edition was prepared by Bowdler's sister, Harriet, but both were published under Thomas Bowdler's name. 
This was likely because a woman could not then publicly admit that she was capable of such editing and compilation, nor that she understood Shakespeare's racy verses. By 1850 eleven editions had been printed. The spelling "Shakspeare", used by Bowdler and also by his nephew Thomas in his memoir of Thomas Bowdler the elder, was changed in later editions (from 1847 on) to "Shakespeare", reflecting changes in the standard spelling of Shakespeare's name. The Bowdlers were not the first to undertake such a project, but Bowdler's commitment to not augmenting or adding to Shakespeare's text, instead only removing sensitive material, was in contrast with the practice of earlier editors. Nahum Tate, as Poet Laureate, had rewritten the tragedy of "King Lear" with a happy ending, and in 1807 Charles Lamb and Mary Lamb published "Tales from Shakespeare" for children, with synopses of 20 of the plays that seldom quoted the original text. Though "The Family Shakespeare" was regarded as a negative example of censorship by a literary establishment committed to the "authentic" Shakespeare, the Bowdlers' expurgated editions made it more acceptable to teach Shakespeare to wider and younger audiences. In the words of the poet Algernon Charles Swinburne, "More nauseous and more foolish cant was never chattered than that which would deride the memory or depreciate the merits of Bowdler. No man ever did better service to Shakespeare than the man who made it possible to put him into the hands of intelligent and imaginative children". Some examples of alterations made in Bowdler's edition: Prominent modern literary figures such as Michiko Kakutani (in the New York Times) and William Safire (in his book "How Not to Write") have accused Bowdler of changing Lady Macbeth's famous "Out, damned spot!" line in "Macbeth" to "Out, crimson spot!" But Bowdler did not do that; Thomas Bulfinch and Stephen Bulfinch did, in their 1865 edition of Shakespeare's works. 
Treason In law, treason is criminal disloyalty, typically to the state. It is a crime that covers some of the more extreme acts against one's nation or sovereign. This usually includes acts such as participating in a war against one's native country, attempting to overthrow its government, spying on its military, its diplomats, or its secret services for a hostile foreign power, or attempting to kill its head of state. A person who commits treason is known in law as a traitor. Historically, in common law countries, treason also covered the murder of specific social superiors, such as the murder of a husband by his wife or of a master by his servant. Treason against the king was known as "high treason" and treason against a lesser superior was "petty treason". As jurisdictions around the world abolished petty treason, "treason" came to refer to what was historically known as high treason. At times, the term "traitor" has been used as a political epithet, regardless of any verifiable treasonable action. In a civil war or insurrection, the winners may deem the losers to be traitors. Likewise, the term "traitor" is used in heated political discussion, typically as a slur against political dissidents or against officials in power who are perceived as failing to act in the best interest of their constituents. In certain cases, as with the "Dolchstoßlegende" (Stab-in-the-back myth), the accusation of treason towards a large group of people can be a unifying political message. In English law, high treason was punishable by being hanged, drawn and quartered (men) or burnt at the stake (women), although beheading could be substituted by royal command (usually for royalty and nobility). Those penalties were abolished in 1814, 1790 and 1973 respectively. The penalty was used by later monarchs against people who could reasonably be called traitors; many of them would now be considered simply dissidents. 
Christian theology and political thinking until after the Enlightenment considered treason and blasphemy synonymous, as treason challenged both the state and the will of God. Kings were considered chosen by God, and to betray one's country was to do the work of Satan. The words "treason" and "traitor" are derived from the Latin "tradere", "to deliver or hand over". Specifically, the usage derives from the term "traditors", the bishops and other Christians who turned over sacred scriptures or betrayed their fellow Christians to the Roman authorities under threat of persecution during the Diocletianic Persecution between AD 303 and 305. Originally, the crime of treason was conceived of as being committed against the monarch; a subject failing in his duty of loyalty to the sovereign and acting against the sovereign was deemed to be a traitor. As asserted in the 18th-century trial of Johann Friedrich Struensee in Denmark, a man having sexual relations with a queen could be considered guilty not only of ordinary adultery but also of treason against her husband, the king. The English Revolution in the 17th century and the French Revolution in the 18th introduced a radically different concept of loyalty and treason, under which sovereignty resides with "the nation" or "the people", to whom the monarch also owes a duty of loyalty, and for failing in which the monarch, too, could be accused of treason. Charles I in England and Louis XVI in France were found guilty of such treason and duly executed. However, when Charles II was restored to his throne, he considered the revolutionaries who had sentenced his father to death to be traitors in the more traditional sense. In modern times, "traitor" and "treason" are mainly used with reference to a person helping an enemy in time of war or conflict. Many nations' laws mention various types of treason. "Crimes Related to Insurrection" is internal treason, and may include a coup d'état. 
"Crimes Related to Foreign Aggression" is the treason of actively cooperating with foreign aggression, whether from inside or outside the nation. "Crimes Related to Inducement of Foreign Aggression" is the crime of secretly communicating with foreigners to bring about foreign aggression or menace. Depending on the country, conspiracy is added to these. In Australia, there are federal and state laws against treason, specifically in the states of New South Wales, South Australia and Victoria. Similarly to treason laws in the United States, citizens of Australia owe allegiance to their sovereign at both the federal and state levels. The federal law defining treason in Australia is provided under section 80.1 of the Criminal Code, contained in the schedule of the Commonwealth Criminal Code Act 1995. It defines treason as follows: A person is not guilty of treason under paragraphs (e), (f) or (h) if their assistance or intended assistance is purely humanitarian in nature. The maximum penalty for treason is life imprisonment. Section 80.1AC of the Act creates the related offence of treachery. The Treason Act 1351, the Treason Act 1795 and the Treason Act 1817 form part of the law of New South Wales. The Treason Act 1795 and the Treason Act 1817 have been repealed by section 11 of the Crimes Act 1900, except in so far as they relate to the compassing, imagining, inventing, devising, or intending death or destruction, or any bodily harm tending to death or destruction, maim, or wounding, imprisonment, or restraint of the person of the heirs and successors of King George III of the United Kingdom, and the expressing, uttering, or declaring of such compassings, imaginations, inventions, devices, or intentions, or any of them. Section 12 of the Crimes Act 1900 (NSW) creates an offence which is derived from section 3 of the Treason Felony Act 1848: Section 16 provides that nothing in Part 2 repeals or affects anything enacted by the Treason Act 1351 (25 Edw.3 c. 2). 
This section reproduces section 6 of the Treason Felony Act 1848. The offence of treason was created by section 9A(1) of the Crimes Act 1958. It is punishable by a maximum penalty of life imprisonment. In South Australia, treason is defined under section 7 of the South Australia Criminal Law Consolidation Act 1935 and punished under section 10A. Any person convicted of treason against South Australia will receive a mandatory sentence of life imprisonment. According to Brazilian law, treason is the crime of disloyalty by a citizen to the Federal Republic of Brazil, applying to combatants of the Brazilian military forces. Treason during wartime is the only crime for which a person can be sentenced to death "(see capital punishment in Brazil)". The only military person in the history of Brazil to be convicted of treason was Carlos Lamarca, an army captain who deserted to become the leader of a communist-terrorist guerrilla group against the military government. Section 46 of the Criminal Code has two degrees of treason, called "high treason" and "treason". However, both of these belong to the historical category of high treason, as opposed to petty treason, which does not exist in Canadian law. Section 46 begins: "High treason (1) Every one commits high treason who, in Canada, [...]" and goes on to define treason in similar terms. It is also illegal for a Canadian citizen or a person who owes allegiance to Her Majesty in right of Canada to do any of the above outside Canada. The penalty for high treason is life imprisonment. The penalty for treason is imprisonment up to a maximum of life, or up to 14 years for conduct under subsection (2)(b) or (e) in peacetime. Finnish law distinguishes between two types of treasonable offences: "maanpetos", treachery in war, and "valtiopetos", an attack against the constitutional order. The terms "maanpetos" and "valtiopetos" are unofficially translated as treason and high treason, respectively. Both are punishable by imprisonment, and if aggravated, by life imprisonment. 
"Maanpetos" (literally, "betrayal of the land") consists in joining enemy armed forces, making war against Finland, or serving or collaborating with the enemy. "Maanpetos" proper can only be committed under conditions of war or the threat of war. Espionage, disclosure of a national secret, and certain other related offences are separately defined under the same rubric in the Finnish criminal code. "Valtiopetos" (literally, "betrayal of the state") consists in using violence or the threat of violence, or unconstitutional means, to bring about the overthrow of the Finnish constitution, or to overthrow the president, cabinet or parliament or to prevent them from performing their functions. Article 411-1 of the French Penal Code defines treason as follows: the acts defined by articles 411-2 to 411-11 constitute treason where they are committed by a French national or a soldier in the service of France, and constitute espionage where they are committed by any other person. Article 411-2 prohibits "handing over troops belonging to the French armed forces, or all or part of the national territory, to a foreign power, to a foreign organisation or to an organisation under foreign control, or to their agents". It is punishable by life imprisonment and a fine of €750,000. Generally, parole is not available until 18 years of a life sentence have elapsed. Articles 411-3 to 411-10 define various other crimes of collaboration with the enemy, sabotage, and the like. These are punishable with imprisonment for between seven and 30 years. Article 411-11 makes it a crime to incite any of the above crimes. Besides treason and espionage, there are many other crimes dealing with national security, insurrection, terrorism and so on. These are all to be found in Book IV of the code. German law differentiates between two types of treason: "high treason" ("Hochverrat") and "treason" ("Landesverrat"). 
High treason, as defined in Section 81 of the German criminal code, is a violent attempt against the existence or the constitutional order of the Federal Republic of Germany, carrying a penalty of life imprisonment or a fixed term of at least ten years. In less serious cases, the penalty is 1–10 years in prison. German criminal law also criminalises high treason against a German state. Preparation of either type of crime is criminal and carries a penalty of up to five years. The other type of treason, "Landesverrat", is defined in Section 94. It is roughly equivalent to espionage; more precisely, it consists of betraying a secret either directly to a foreign power or to anyone not allowed to know of it; in the latter case, treason is only committed if the aim of the crime was explicitly to damage the Federal Republic or to favor a foreign power. The crime carries a penalty of one to fifteen years in prison. However, in especially severe cases, life imprisonment or any term of at least five years may be imposed. As with many crimes carrying substantial threats of punishment, active repentance is to be considered in mitigation under §83a StGB (Section 83a, Criminal Code). Notable cases involving "Landesverrat" are the Weltbühne trial during the Weimar Republic and the Spiegel scandal of 1962. On 30 July 2015, Germany's Public Prosecutor General Harald Range initiated criminal investigation proceedings against the German blog netzpolitik.org. Section 2 of the Crimes Ordinance provides that levying war against the HKSAR Government of the People's Republic of China, conspiring to do so, instigating a foreigner to invade Hong Kong, or assisting any public enemy at war with the HKSAR Government, is treason, punishable with life imprisonment. 
Article 39 of the Constitution of Ireland (adopted in 1937) states: treason shall consist only in levying war against the State, or assisting any State or person or inciting or conspiring with any person to levy war against the State, or attempting by force of arms or other violent means to overthrow the organs of government established by the Constitution, or taking part or being concerned in or inciting or conspiring with any person to make or to take part or be concerned in any such attempt. Following the enactment of the 1937 constitution, the Treason Act 1939 provided for the imposition of the death penalty for treason. The Criminal Justice Act 1990 abolished the death penalty, setting the punishment for treason at life imprisonment, with parole in not less than forty years. No person has been charged under the Treason Act. Irish republican legitimatists who refuse to recognise the legitimacy of the Republic of Ireland have been charged with lesser crimes under the Offences against the State Acts 1939–1998. Italian law defines various types of crimes that could be generally described as treason ("tradimento"), although they are so many and so precisely defined that none of them is simply called "tradimento" in the text of the "Codice Penale" (Italian Criminal Code). The treason-type crimes are grouped as "crimes against the personhood of the State" ("Crimini contro la personalità dello Stato") in the Second Book, First Title, of the Criminal Code. Articles 241 to 274 detail crimes against the "international personhood of the State", such as "attempt against the wholeness, independence and unity of the State" (art. 241), "hostilities against a foreign State bringing the Italian State into danger of war" (art. 244), "bribery of a citizen by a foreigner against the national interests" (art. 246), and "political or military espionage" (art. 257). 
Articles 276 to 292 detail crimes against the "domestic personhood of the State", ranging from "attempt on the President of the Republic" (art.276), through "attempt with purposes of terrorism or of subversion" (art.280), "attempt against the Constitution" (art.283), and "armed insurrection against the power of the State" (art.284), to "civil war" (art.286). Further articles detail other crimes, especially those of conspiracy, such as "political conspiracy through association" (art.305) or "armed association: creating and participating" (art.306). Before 1948, the penalties for treason-type crimes included death as the maximum penalty and, for some crimes, as the only possible penalty. Nowadays the maximum penalty is life imprisonment ("ergastolo"). Japan does not technically have a law of treason. Instead, it has an offence of taking part in foreign aggression against the Japanese state ("gaikan zai"; literally "crime of foreign mischief"). The law applies equally to Japanese and non-Japanese people, while treason in other countries usually applies only to their own citizens. Technically there are two laws, one for the crime of inviting foreign mischief (Japan Criminal Code section 2 clause 81) and the other for supporting foreign mischief once a foreign force has invaded Japan. "Mischief" can be anything from invasion to espionage. Before World War II, Japan had a crime similar to the English crime of high treason ("Taigyaku zai"), which applied to anyone who harmed the Japanese emperor or imperial family. This law was abolished by the American occupation force after World War II. The application of "Crimes Related to Insurrection" to the Aum Shinrikyo cult of religious terrorists was considered. New Zealand has treason laws stipulated in the Crimes Act 1961. 
Section 73 of the Crimes Act reads as follows: Every one owing allegiance to Her Majesty the Queen in right of New Zealand commits treason who, within or outside New Zealand,— The penalty is life imprisonment, except for conspiracy, for which the maximum sentence is 14 years' imprisonment. Treason was the last capital crime in New Zealand law, with the death penalty for treason not abolished until 1989, years after it was abolished for murder. Very few people have been prosecuted for treason in New Zealand, and none in recent years. Article 85 of the Constitution of Norway states that "[a]ny person who obeys an order the purpose of which is to disturb the liberty and security of the Storting [Parliament] is thereby guilty of treason against the country." Article 275 of the Criminal Code of Russia defines treason as "espionage, disclosure of state secrets, or any other assistance rendered to a foreign State, a foreign organization, or their representatives in hostile activities to the detriment of the external security of the Russian Federation, committed by a citizen of the Russian Federation." The sentence is imprisonment for 12 to 20 years. It is not a capital offence, even though murder and some aggravated forms of attempted murder are (although Russia currently has a moratorium on the death penalty). Subsequent sections provide for further offences against state security, such as armed rebellion and forcible seizure of power. Sweden's treason laws have seen little application in modern times. The most recent case was in 2001, when four teenagers (whose names were not reported) were convicted of treason after they assaulted King Carl XVI Gustaf with a strawberry cream cake on 6 September that year. They were fined between 80 and 100 days' income. There is no single crime of treason in Swiss law; instead, multiple criminal prohibitions apply. 
Article 265 of the Swiss Criminal Code prohibits "high treason" ("Hochverrat/haute trahison") as follows: Whoever commits an act with the objective of violently – changing the constitution of the Confederation or of a canton, – removing the constitutional authorities of the state from office or making them unable to exercise their authority, – separating Swiss territory from the Confederation or territory from a canton, shall be punished with imprisonment of no less than a year. A separate crime is defined in article 267 as "diplomatic treason" ("Diplomatischer Landesverrat/Trahison diplomatique"): 1. Whoever makes known or accessible a secret, the preservation of which is required in the interest of the Confederation, to a foreign state or its agents, (...) shall be punished with imprisonment of no less than a year. 2. Whoever makes known or accessible a secret, the preservation of which is required in the interest of the Confederation, to the public, shall be punished with imprisonment of up to five years or a monetary penalty. In 1950, in the context of the Cold War, the following prohibition of "foreign enterprises against the security of Switzerland" was introduced as article 266bis: 1 Whoever, with the purpose of inciting or supporting foreign enterprises aimed against the security of Switzerland, enters into contact with a foreign state or with foreign parties or other foreign organizations or their agents, or makes or disseminates untrue or tendentious claims ("unwahre oder entstellende Behauptungen / informations inexactes ou tendancieuses"), shall be punished with imprisonment of up to five years or a monetary penalty. 2 In grave cases the judge may pronounce a sentence of imprisonment of no less than a year. The criminal code also prohibits, among other acts, the suppression or falsification of legal documents or evidence relevant to the international relations of Switzerland (art. 
267, imprisonment of no less than a year) and attacks against the independence of Switzerland and incitement of a war against Switzerland (art. 266, up to life imprisonment). The Swiss military criminal code contains additional prohibitions under the general title of "treason", some of which also apply to civilians, and to which civilians are also subject in times of war (or may, by executive decision, be made subject). These include espionage or transmission of secrets to a foreign power (art. 86); sabotage (art. 86a); "military treason", i.e., the disruption of activities of military significance (art. 87); acting as a franc-tireur (art. 88); disruption of military action by disseminating untrue information (art. 89); military service against Switzerland by Swiss nationals (art. 90); or giving aid to the enemy (art. 91). The penalties for these crimes vary, but include life imprisonment in some cases. Treason "per se" is not defined in the Turkish Penal Code. However, the law defines crimes which are traditionally included in the scope of treason, such as cooperating with the enemy during wartime. Treason is punishable by imprisonment of up to life. The British law of treason is entirely statutory and has been so since the Treason Act 1351 (25 Edw. 3 St. 5 c. 2). The Act is written in Norman French, but is more commonly cited in its English translation. The Treason Act 1351 has since been amended several times, and currently provides for four categories of treasonable offences, namely: Another Act, the Treason Act 1702 (1 Anne stat. 2 c. 21), provides for a fifth category of treason, namely: By virtue of the Treason Act 1708, the law of treason in Scotland is the same as the law in England, save that in Scotland the slaying of the Lords of Session and Lords of Justiciary and counterfeiting the Great Seal of Scotland remain treason under sections 11 and 12 of the Treason Act 1708 respectively. 
Treason is a reserved matter about which the Scottish Parliament is prohibited from legislating. Two acts of the former Parliament of Ireland passed in 1537 and 1542 create further treasons which apply in Northern Ireland. The penalty for treason was changed from death to a maximum of imprisonment for life under the Crime and Disorder Act 1998. Before 1998, the death penalty was mandatory, subject to the royal prerogative of mercy. Since the abolition of the death penalty for murder in 1965, an execution for treason was unlikely to have been carried out. Treason laws were used against Irish insurgents before Irish independence. However, members of the Provisional IRA and other militant republican groups were not prosecuted or executed for treason for levying war against the British government during the Troubles. They, along with members of loyalist paramilitary groups, were jailed for murder, violent crimes or terrorist offences. William Joyce ("Lord Haw-Haw") was the last person to be put to death for treason, in 1946. (On the following day Theodore Schurch was executed for treachery, a similar crime, and was the last man to be executed for a crime other than murder in the UK.) As to who can commit treason, it depends on the ancient notion of allegiance. As such, all British nationals (but not other Commonwealth citizens) owe allegiance to the Queen in right of the United Kingdom wherever they may be, as do Commonwealth citizens and aliens present in the United Kingdom at the time of the treasonable act (except diplomats and foreign invading forces), those who hold a British passport however obtained, and aliens who – having lived in Britain and gone abroad again – have left behind family and belongings. The Treason Act 1695 enacted, among other things, a rule that treason could be proved only in a trial by the evidence of two witnesses to the same offence. Nearly one hundred years later this rule was incorporated into the U.S. 
Constitution, which requires two witnesses to the same overt act. It also provided for a three-year time limit on bringing prosecutions for treason (except for assassinating the king), another rule which has been imitated in some common law countries. The Sedition Act 1661 made it treason to imprison, restrain or wound the king. Although this law was abolished in the United Kingdom in 1998, it still continues to apply in some Commonwealth countries. In the 1790s, opposition political parties were new and not fully accepted. Government leaders often considered their opponents to be traitors. Historian Ron Chernow reports that Secretary of the Treasury Alexander Hamilton and President George Washington "regarded much of the criticism fired at their administration as disloyal, even treasonous, in nature." When an undeclared Quasi-War broke out with France in 1797–98, "Hamilton increasingly mistook dissent for treason and engaged in hyperbole." Furthermore, the Jeffersonian opposition party behaved the same way. After 1801, with a peaceful transition in the political party in power, the rhetoric of "treason" against political opponents diminished. To avoid the abuses of the English law, the scope of treason was specifically restricted in the United States Constitution. Article III, section 3 reads as follows: The Constitution does not itself create the offense; it only restricts the definition (the first paragraph), permits the United States Congress to create the offense, and restricts any punishment for treason to the convicted alone (the second paragraph). The crime is prohibited by legislation passed by Congress. Therefore, the United States Code states: The requirement of testimony of two witnesses was inherited from the British Treason Act 1695. 
However, Congress has passed laws creating related offenses that punish conduct undermining the government or national security, such as sedition in the 1798 Alien and Sedition Acts, or espionage and sedition in the Espionage Act of 1917, which do not require the testimony of two witnesses and have a much broader definition than Article Three treason. Some of these laws are still in effect. The well-known spies Julius and Ethel Rosenberg were charged with conspiracy to commit espionage, rather than treason. In the United States, Benedict Arnold's name is considered synonymous with treason due to his collaboration with the British during the American Revolutionary War. This, however, occurred before the Constitution was written. Arnold became a general in the British Army, which protected him. Since the Constitution came into effect, there have been fewer than 40 federal prosecutions for treason and even fewer convictions. Several men were convicted of treason in connection with the 1794 Whiskey Rebellion but were pardoned by President George Washington. The most famous treason trial, that of Aaron Burr in 1807, resulted in acquittal. In 1807, on a charge of treason, Burr was brought to trial before the United States Circuit Court at Richmond, Virginia. The only physical evidence presented to the grand jury was General James Wilkinson's so-called letter from Burr, which proposed the idea of stealing land in the Louisiana Purchase. The trial was presided over by Chief Justice of the United States John Marshall, acting as a circuit judge. Since no two witnesses testified to the same overt act, Burr was acquitted in spite of the full force of Jefferson's political influence being thrown against him. Immediately afterward, Burr was tried on a misdemeanor charge and was again acquitted. During the American Civil War, treason trials were held in Indianapolis against Copperheads for conspiring with the Confederacy against the United States. 
In addition to treason trials, the federal government passed new laws that allowed prosecutors to try people for the charge of disloyalty. Various legislation was passed, including the Conspiracies Act of July 31, 1861. Because the constitutional definition of treason was so strict, new legislation was necessary to prosecute defiance of the government. Many of the people indicted on charges of conspiracy were not taken to trial, but instead were arrested and detained. In addition to the Conspiracies Act of July 31, 1861, in 1862 the federal government went further to redefine treason in the context of the Civil War. The act that was passed is entitled "An Act to Suppress Insurrection; to punish Treason and Rebellion, to seize and confiscate the Property of Rebels, and for other purposes". It is colloquially referred to as the "second Confiscation Act". The act essentially lessened the punishment for treason: rather than have death as the only possible punishment, it made it possible to give individuals lesser sentences. After the war the question was whether the United States government would make indictments for treason against leaders of the Confederate States of America, as many people demanded. Jefferson Davis, the Confederate president, was indicted and held in prison for two years. The indictment was dropped in 1869 when the political scene had changed and it was possible he would be acquitted by a jury in Virginia. When accepting Lee's surrender of the Army of Northern Virginia at Appomattox in April 1865, Gen. Ulysses S. Grant assured all Confederate soldiers and officers a blanket amnesty, provided they returned to their homes and refrained from any further acts of hostility, and subsequently other Union generals issued similar terms of amnesty when accepting Confederate surrenders. All Confederate officials received a blanket amnesty issued by President Andrew Johnson as he left office in 1869. 
In 1949 Iva Toguri D'Aquino was convicted of treason for wartime radio broadcasts (under the name of "Tokyo Rose") and sentenced to ten years, of which she served six. As a result of prosecution witnesses having lied under oath, she was pardoned in 1977. In 1952 Tomoya Kawakita, a Japanese-American dual citizen, was convicted of treason and sentenced to death for having worked as an interpreter at a Japanese POW camp and having mistreated American prisoners. He was recognized by a former prisoner at a department store in 1946 after having returned to the United States. The sentence was later commuted to life imprisonment and a $10,000 fine. He was released and deported in 1963. The Cold War saw frequent talk linking treason with support for Communist-led causes. The most memorable of these came from Senator Joseph McCarthy, who accused the Democrats of "twenty years of treason". As chair of the Senate Permanent Investigations Subcommittee, McCarthy also investigated various government agencies for Soviet spy rings (see the Venona project); however, he acted as a political fact-finder rather than a criminal prosecutor. The Cold War period saw no prosecutions for explicit treason, but there were convictions and even executions for conspiracy to commit espionage on behalf of the Soviet Union, such as in the Julius and Ethel Rosenberg case. On October 11, 2006, the United States government charged Adam Yahiye Gadahn with treason for videos in which he appeared as a spokesman for al-Qaeda and threatened attacks on American soil. He was killed on January 19, 2015, in an unmanned aircraft (drone) strike in Waziristan, Pakistan. Most states have treason provisions in their constitutions or statutes similar to those in the U.S. Constitution. The Extradition Clause specifically defines treason as an extraditable offense. 
Thomas Jefferson in 1791 said that any Virginia official who cooperated with the federal Bank of the United States proposed by Alexander Hamilton was guilty of "treason" against the state of Virginia and should be executed. The Bank opened and no one was prosecuted. Several persons have been prosecuted for treason on the state level. Thomas Dorr was convicted of treason against the state of Rhode Island for his part in the Dorr Rebellion, but was eventually granted amnesty. John Brown was convicted of treason against the Commonwealth of Virginia for his part in the raid on Harpers Ferry, and was hanged. The Mormon prophet Joseph Smith was charged with treason against Missouri along with five others, at first in front of a state military court, but Smith was allowed to escape to Illinois after his case was transferred to a civilian court for trial on charges of treason and other crimes. Smith was later imprisoned for trial on charges of treason against Illinois, but was murdered by a lynch mob while in jail awaiting trial. The Constitution of Vietnam proclaims that treason is the most serious crime. It is further regulated in Article 78 of the Criminal Code: Also, according to the Law on Amnesty as amended in November 2018, those convicted of treason cannot be granted amnesty. Early in Islamic history, the only form of treason was seen as the attempt to overthrow a just government or waging war against the State. According to Islamic tradition, the prescribed punishment ranged from imprisonment to the severing of limbs and the death penalty, depending on the severity of the crime. However, even in cases of treason the repentance of a person would have to be taken into account. Currently, the consensus among major Islamic schools is that apostasy (leaving Islam) is considered treason and that the penalty is death; this is supported not in the Quran but in hadith. 
This confusion between apostasy and treason almost certainly had its roots in the Ridda Wars, in which an army of rebel traitors led by the self-proclaimed prophet Musaylima attempted to destroy the caliphate of Abu Bakr. In the early 20th century, the Iranian cleric Sheikh Fazlollah Noori opposed the Iranian Constitutional Revolution by inciting insurrection against it, issuing fatwas and publishing pamphlets arguing that democracy would bring vice to the country. The new government executed him for treason in 1909. In Malaysia, it is treason to commit offences against the Yang di-Pertuan Agong's person, or to wage or attempt to wage war or abet the waging of war against the Yang di-Pertuan Agong, a Ruler or Yang di-Pertua Negeri. All these offences are punishable by hanging, a punishment derived from the English treason acts (as a former British colony, Malaysia's legal system is based on English common law). In Algeria, treason is defined as the following: In Bahrain, plotting to topple the regime, collaborating with a foreign hostile country and threatening the life of the Emir are defined as treason and punishable by death. The State Security Law of 1974 was used to crush dissent that could be seen as treasonous, and was criticised for permitting severe human rights violations in accordance with Article One: In the areas controlled by the Palestinian National Authority, it is treason to give assistance to Israeli troops without the authorization of the Palestinian Authority or to sell land to Jews (irrespective of nationality) or non-Jewish Israeli citizens under the Palestinian Land Laws, as part of the PA's general policy of discouraging the expansion of Israeli settlements. 
Both crimes are capital offences subject to the death penalty, although the former provision has not often been enforced since the beginning of effective security cooperation between the Israel Defense Forces, Israel Police, and Palestinian National Security Forces in the mid-2000s under the leadership of Prime Minister Salam Fayyad. Likewise, in the Gaza Strip under the Hamas-led government, any sort of cooperation or assistance to Israeli forces during military actions is also punishable by death. There are a number of other crimes against the state short of treason: Different cultures have evolved a variety of terms for "traitor" or collaborator, often based on historical incidences of treason to that culture or on people whose names have become bywords for treason. Type VII submarine Type VII U-boats were the most common type of German World War II U-boat. 703 boats were built by the end of the war. The lone surviving example is on display at the Laboe Naval Memorial located in Laboe, Schleswig-Holstein, Germany. The Type VII was based on earlier German submarine designs going back to the World War I Type UB III and especially the cancelled Type UG. The Type UG was designed through the Dutch dummy company "NV Ingenieurskantoor voor Scheepsbouw Den Haag" (I.v.S) to circumvent the limitations of the Treaty of Versailles, and was built by foreign shipyards. The Finnish "Vetehinen" class and Spanish Type E-1 also provided some of the basis for the Type VII design. These designs led to the Type VII along with the Type I, the latter being built at the AG Weser shipyard in Bremen, Germany. The production of the Type I was stopped after only two boats; the reasons for this are not certain. The design of the Type I was further used in the development of the Type VII and Type IX. Type VII submarines were the most widely used U-boats of the war and were the most produced submarine class in history, with 703 built. The type had several modifications. 
The Type VII was the most numerous U-boat type to be involved in the Battle of the Atlantic. Type VIIA U-boats were designed in 1933–34 as the first series of a new generation of attack U-boats. Most Type VIIA U-boats were constructed at Deschimag AG Weser in Bremen, with the exception of U-33 through U-36, which were built at Friedrich Krupp Germaniawerft, Kiel. Despite the highly cramped living quarters, Type VIIA U-boats were generally popular with their crews because of their fast crash dive speed, which was thought to give them more protection from enemy attacks than bigger, more sluggish types. Also, the smaller boat's lower endurance meant patrols were shorter. They were much more powerful than the smaller Type II U-boats they replaced, with four bow and one external stern torpedo tubes. Usually carrying 11 torpedoes on board, they were very agile on the surface and mounted the quick-firing deck gun with about 220 rounds. Ten Type VIIA boats were built between 1935 and 1937. All but two Type VIIA U-boats were sunk during World War II; the two survivors (the boat of the famous Otto Schuhart, and the first submarine to sink a ship in World War II) were both scuttled in Kupfermühlen Bay on 4 May 1945. The boat was powered on the surface by two MAN AG, 6-cylinder, 4-stroke M6V 40/46 diesel engines, giving a total of at 470 to 485 rpm. When submerged it was propelled by two Brown, Boveri & Cie (BBC) GG UB 720/8 double-acting electric motors, giving a total of at 322 rpm. The VIIA had limited fuel capacity, so 24 Type VIIB boats were built between 1936 and 1940 with an additional 33 tonnes of fuel in external saddle tanks, which added another of range when surfaced. More powerful engines made them slightly faster than the VIIA. They had two rudders for greater agility. The torpedo armament was improved by moving the aft tube to the inside of the boat. 
Now an additional aft torpedo could be carried below the deck plating of the aft torpedo room (which also served as the electric motor room), and two watertight compartments under the upper deck could hold two additional torpedoes, giving it a total of 14 torpedoes. The only exception was , which lacked a stern tube and carried only 12 torpedoes. Type VIIBs included many of the most famous U-boats of World War II, including (the most successful), Prien's , Kretschmer's , and Schepke's . On the surface the boat was powered by two supercharged MAN, 6-cylinder, 4-stroke M6V 40/46 diesels (except for "U-45" to "U-50", "U-83", "U-85", "U-87", "U-99", "U-100", and "U-102", which were powered by two supercharged Germaniawerft 6-cylinder, 4-stroke F46 diesels) giving a total of at 470 to 490 rpm. When submerged, the boat was powered by two AEG GU 460/8-276 electric motors (except in "U-45", "U-46", "U-49", "U-51", "U-52", "U-54", "U-73" to "U-76", "U-99" and "U-100", which retained the BBC motor of the VIIA), giving a total of at 295 rpm. The Type VIIC was the workhorse of the German U-boat force, with 568 commissioned from 1940 to 1945. The first VIIC boat was commissioned in 1940. The Type VIIC was an effective fighting machine and was seen almost everywhere U-boats operated, although its range of only 8,500 nautical miles was not as great as that of the larger Type IX (11,000 nautical miles), severely limiting the time it could spend in the far reaches of the western and southern Atlantic without refueling from a tender or U-boat tanker. The VIIC came into service toward the end of the "First Happy Time" near the beginning of the war and was still the most numerous type in service when Allied anti-submarine efforts finally defeated the U-boat campaign in late 1943 and 1944. The Type VIIC differed from the VIIB only in the addition of an active sonar and a few minor mechanical improvements, making it 2 feet longer and 8 tons heavier. 
Speed and range were essentially the same. Many of these boats were fitted with snorkels in 1944 and 1945. They had the same torpedo tube arrangement as their predecessors, except for , , , , and , which had only two bow tubes, and for , , , , , and , which had no stern tube. On the surface the boats (except for , and to , which used MAN M6V40/46s) were propelled by two supercharged Germaniawerft, 6-cylinder, 4-stroke M6V 40/46 diesels totaling at 470 to 490 rpm. For submerged propulsion, several different electric motors were used. Early models used the VIIB configuration of two AEG GU 460/8-276 electric motors, totaling with a max rpm of 296, while newer boats used two BBC GG UB 720/8, Garbe, Lahmeyer & Co. RP 137/c or Siemens-Schuckert-Werke (SSW) GU 343/38-8 electric motors with the same power output as the AEG motors. Perhaps the most famous VIIC boat was , featured in the movie "Das Boot". The concept of the "U-flak" or "Flak Trap" originated on 31 August 1942, when a VIIC boat was seriously damaged by aircraft. Rather than scrap the boat, it was decided to refit her as a heavily armed anti-aircraft boat intended to combat the losses being inflicted by Allied aircraft in the Bay of Biscay. Two 20 mm quadruple "Flakvierling" mounts and an experimental 37 mm automatic gun were installed on the U-flaks' decks. A battery of 86 mm line-carrying anti-aircraft rockets was tested (similar to a device used by the British in the defense of airfields), but this idea proved unworkable. At times, two additional single 20 mm guns were also mounted. The submarines' limited fuel capacities restricted them to operations only within the Bay of Biscay. Only five torpedoes were carried, preloaded in the tubes, to free up space needed for the additional gun crews. Four VIIC boats were modified for use as surface escorts for U-boats departing and returning to French Atlantic bases. These "U-flak" boats were , , , and . 
Conversion began on three others (, , and ) but none was completed and they were eventually returned to duty as standard VIIC attack boats. The modified boats became operational in June 1943 and at first appeared to be successful against a surprised Royal Air Force. Hoping that the extra firepower might allow the boats to survive relentless British air attacks in the Bay of Biscay and reach their operational areas, Dönitz ordered the boats to cross the bay in groups at maximum speed. The effort earned the Germans about two more months of relative freedom, until the RAF modified their tactics. When a pilot saw that a U-boat was going to fight on the surface, he held off attacking and called in reinforcements. When several aircraft had arrived, they all attacked at once. If the U-boat dived, surface vessels were called to the scene to scour the area with sonar and drop depth charges. The British also began equipping some aircraft with RP-3 rockets that could sink a U-boat with a single hit, finally making it too dangerous for a U-boat to attempt to fight it out on the surface regardless of its armament. In November 1943, less than six months after the experiment began, it was discontinued. All U-flaks were converted back to standard attack boats and fitted with "Turm 4", the standard anti-aircraft armament for U-boats at the time. (According to German sources, only six aircraft had been shot down by the U-flaks in six missions, three by "U-441", and one each by "U-256", "U-621", and .) Type VIIC/41 was a slightly modified version of the VIIC and had the same armament and engines. The difference was a stronger pressure hull giving them a deeper crush depth and lighter machinery to compensate for the added steel in the hull, making them slightly lighter than the VIIC. A total of 91 were built. All of them from onwards lacked the fittings to handle mines. 
Today one Type VIIC/41 still exists; it is on display at Laboe (north of Kiel) and is the only surviving Type VII in the world. The Type VIIC/42 was designed in 1942 and 1943 to replace the aging Type VIIC. It would have had a much stronger pressure hull, with skin thickness up to 28 mm, and would have dived twice as deep as the previous VIICs. These boats would have been very similar in external appearance to the VIIC/41 but with two periscopes in the tower, and would have carried two more torpedoes. Contracts were signed for 164 boats and a few were laid down, but all were cancelled on 30 September 1943 in favor of the new Type XXI, and none was advanced enough in construction to be launched. The type was powered by the same engines as the VIIC. The Type VIID boats, designed in 1939 and 1940, were a lengthened version of the VIIC for use as a minelayer. The mines were carried in, and released from, three banks of five vertical tubes just aft of the conning tower. The extended hull also improved fuel and food storage. On the surface the boat used two supercharged Germaniawerft, 6-cylinder, 4-stroke F46 diesels delivering 3,200 bhp (2,400 kW) at between 470 and 490 rpm. When submerged the boat used two AEG GU 460/8-276 electric motors giving a total of 750 shp (560 kW) at 285 rpm. Only one boat managed to survive the war; the other five were sunk, killing all crew members. The Type VIIF boats were designed in 1941 as supply boats to rearm U-boats at sea once they had used up their torpedoes. This required a lengthened hull, and they were the largest and heaviest Type VII boats built. They were armed identically to the other Type VIIs except that they could carry up to 39 torpedoes on board and had no deck guns. Only four Type VIIFs were built. Two of them were sent to support the Monsun Gruppe in the Far East; the other two remained in the Atlantic. Type VIIF U-boats used the same engines as the Type VIID class. 
Three were sunk during the war; the surviving boat was surrendered to the Allies following Germany's capitulation. Like most surrendered U-boats, it was subsequently scuttled by the Royal Navy. Three-age system The three-age system is the periodization of history into three time periods (for example, the Stone Age, the Bronze Age, and the Iron Age), although it also refers to other tripartite divisions of historic time periods. In history, archaeology and physical anthropology, the three-age system is a methodological concept adopted during the 19th century by which artifacts and events of late prehistory and early history could be ordered into a recognizable chronology. It was initially developed by C. J. Thomsen, director of the Royal Museum of Nordic Antiquities, Copenhagen, as a means to classify the museum's collections according to whether the artifacts were made of stone, bronze, or iron. The system first appealed to British researchers working in the science of ethnology, who adopted it to establish race sequences for Britain's past based on cranial types. Although the craniological ethnology that formed its first scholarly context holds no scientific value, the "relative chronology" of the Stone Age, the Bronze Age and the Iron Age is still in use in a general public context, and the three ages remain the underpinning of prehistoric chronology for Europe, the Mediterranean world and the Near East. The structure reflects the cultural and historical background of Mediterranean Europe and the Middle East and soon underwent further subdivisions, including the 1865 partitioning of the Stone Age into Paleolithic and Neolithic periods by John Lubbock. It is, however, of little or no use for the establishment of chronological frameworks in sub-Saharan Africa, much of Asia, the Americas and some other areas and has little importance in contemporary archaeological or anthropological discussion for these regions.
The concept of dividing pre-historical ages into systems based on metals extends far back in European history, probably originated by Lucretius in the first century BC. But the present archaeological system of the three main ages—stone, bronze and iron—originates with the Danish archaeologist Christian Jürgensen Thomsen (1788–1865), who placed the system on a more scientific basis by typological and chronological studies, at first, of tools and other artifacts present in the Museum of Northern Antiquities in Copenhagen (later the National Museum of Denmark). He later used artifacts and the excavation reports published or sent to him by Danish archaeologists who were doing controlled excavations. His position as curator of the museum gave him enough visibility to become highly influential on Danish archaeology. A well-known and well-liked figure, he explained his system in person to visitors at the museum, many of them professional archaeologists. In his poem "Works and Days", composed possibly between 750 and 650 BC, the ancient Greek poet Hesiod defined five successive Ages of Man: 1. Golden, 2. Silver, 3. Bronze, 4. Heroic and 5. Iron. Only the Bronze Age and the Iron Age are based on the use of metal: ... then Zeus the father created the third generation of mortals, the age of bronze ... They were terrible and strong, and the ghastly action of Ares was theirs, and violence. ... The weapons of these men were bronze, of bronze their houses, and they worked as bronzesmiths. There was not yet any black iron. Hesiod knew from the traditional poetry, such as the "Iliad", and the heirloom bronze artifacts that abounded in Greek society, that before the use of iron to make tools and weapons, bronze had been the preferred material and iron was not smelted at all. He did not continue the manufacturing metaphor, but mixed his metaphors, switching over to the market value of each metal. Iron was cheaper than bronze, so there must have been a golden and a silver age.
He portrays a sequence of metallic ages, but it is a degradation rather than a progression. Each age has less of a moral value than the preceding. Of his own age he says: "And I wish that I were not any part of the fifth generation of men, but had died before it came, or had been born afterward." The moral metaphor of the ages of metals continued. Lucretius, however, replaced moral degradation with the concept of progress, which he conceived to be like the growth of an individual human being. The concept is evolutionary: For the nature of the world as a whole is altered by age. Everything must pass through successive phases. Nothing remains forever what it was. Everything is on the move. Everything is transformed by nature and forced into new paths ... The Earth passes through successive phases, so that it can no longer bear what it could, and it can now what it could not before. The Romans believed that the species of animals, including humans, were spontaneously generated from the materials of the Earth, because of which the Latin word "mater", "mother", descends to English-speakers as matter and material. In Lucretius the Earth is a mother, Venus, to whom the poem is dedicated in the first few lines. She brought forth humankind by spontaneous generation. Having been given birth as a species, humans must grow to maturity by analogy with the individual. The different phases of their collective life are marked by the accumulation of customs to form material civilization: The earliest weapons were hands, nails and teeth. Next came stones and branches wrenched from trees, and fire and flame as soon as these were discovered. Then men learnt to use tough iron and copper. With copper they tilled the soil. With copper they whipped up the clashing waves of war, ... Then by slow degrees the iron sword came to the fore; the bronze sickle fell into disrepute; the ploughman began to cleave the earth with iron, ... 
Lucretius envisioned a pre-technological human that was "far tougher than the men of today ... They lived out their lives in the fashion of wild beasts roaming at large." The next stage was the use of huts, fire, clothing, language and the family. City-states, kings and citadels followed them. Lucretius supposes that the initial smelting of metal occurred accidentally in forest fires. The use of copper followed the use of stones and branches and preceded the use of iron. By the 16th century, a tradition had developed based on observational incidents, true or false, that the black objects found widely scattered in large quantities over Europe had fallen from the sky during thunderstorms and were therefore to be considered generated by lightning. They were published as such by Konrad Gessner in "De rerum fossilium, lapidum et gemmarum maxime figuris & similitudinibus" at Zurich in 1565 and by many others less famous. The name ceraunia, "thunderstones," had been assigned. Ceraunia were collected by many persons over the centuries including Michele Mercati, Superintendent of the Vatican Botanical Garden in the late 16th century. He brought his collection of fossils and stones to the Vatican, where he studied them at leisure, compiling the results in a manuscript, which was published posthumously by the Vatican at Rome in 1717 as "Metallotheca". Mercati was interested in Ceraunia cuneata, "wedge-shaped thunderstones," which seemed to him to be most like axes and arrowheads, which he now called ceraunia vulgaris, "folk thunderstones," distinguishing his view from the popular one. His view was based on what may be the first in-depth lithic analysis of the objects in his collection, which led him to believe that they were artifacts and to suggest that the historical evolution of these artifacts followed a scheme.
Mercati, examining the surfaces of the ceraunia, noted that the stones were of flint and that they had been chipped all over by another stone to achieve by percussion their current forms. The protrusion at the bottom he identified as the attachment point of a haft. Concluding that these objects were not ceraunia, he compared collections to determine exactly what they were. The Vatican collections included artifacts from the New World of exactly the shapes of the supposed ceraunia. The reports of the explorers had identified them as implements and weapons or parts of them. Mercati posed the question to himself: why would anyone prefer to manufacture artifacts of stone rather than of metal, a superior material? His answer was that metallurgy was unknown at that time. He cited Biblical passages to prove that in Biblical times stone was the first material used. He also revived the three-age system of Lucretius, which described a succession of periods based on the use of stone (and wood), bronze and iron respectively. Due to the lateness of publication, Mercati's ideas were already being developed independently; however, his writing served as a further stimulus. On 12 November 1734, Nicholas Mahudel, physician, antiquarian and numismatist, read a paper at a public sitting of the Académie Royale des Inscriptions et Belles-Lettres in which he defined three "usages" of stone, bronze and iron in a chronological sequence. He had presented the paper several times that year but it was rejected until the November revision was finally accepted and published by the Academy in 1740. It was entitled "Les Monumens les plus anciens de l'industrie des hommes, et des Arts reconnus dans les Pierres de Foudres." It expanded the concepts of Antoine de Jussieu, who had gotten a paper accepted in 1723 entitled "De l'Origine et des usages de la Pierre de Foudre". In Mahudel, there is not just one usage for stone, but two more, one each for bronze and iron.
He begins his treatise with descriptions and classifications of the "Pierres de Tonnerre et de Foudre", the ceraunia of contemporaneous European interest. After cautioning the audience that natural and man-made objects are often easily confused, he asserts that the specific "figures", or forms that distinguish them ("formes qui les font distinguer"), of the stones were man-made, not natural: It was Man's hand that made them serve as instruments ("C'est la main des hommes qui les leur a données pour servir d'instrumens" ...) Their cause, he asserts, is "the industry of our forefathers" ("l'industrie de nos premiers pères"). He adds later that bronze and iron implements imitate the uses of the stone ones, suggesting a replacement of stone with metals. Mahudel is careful not to take credit for the idea of a succession of usages in time but states: "it is Michel Mercatus, physician of Clement VIII, who first had this idea". He does not coin a term for ages, but speaks only of the times of usages. His use of "l'industrie" foreshadows the 20th-century "industries," but where the moderns mean specific tool traditions, Mahudel meant only the art of working stone and metal in general.
Initially, the three-age system as it was developed by Thomsen and his contemporaries in Scandinavia, such as Sven Nilsson and J.J.A. Worsaae, was grafted onto the traditional biblical chronology. But during the 1830s they achieved independence from textual chronologies and relied mainly on typology and stratigraphy. In 1816 Thomsen at age 27 was appointed to succeed the retiring Rasmus Nyerup as Secretary of the "Kongelige Commission for Oldsagers Opbevaring" ("Royal Commission for the Preservation of Antiquities"), which had been founded in 1807. The post was unsalaried; Thomsen had independent means. At his appointment Bishop Münter said that he was an "amateur with a great range of accomplishments." Between 1816 and 1819 he reorganized the commission's collection of antiquities. In 1819 he opened the first Museum of Northern Antiquities, in Copenhagen, in a former monastery, to house the collections. It later became the National Museum. Like the other antiquarians Thomsen undoubtedly knew of the three-age model of prehistory through the works of Lucretius, the Dane Vedel Simonsen, Montfaucon and Mahudel. Sorting the material in the collection chronologically, he mapped out which kinds of artifacts co-occurred in deposits and which did not, as this arrangement would allow him to discern any trends that were exclusive to certain periods. In this way he discovered that stone tools did not co-occur with bronze or iron in the earliest deposits while subsequently bronze did not co-occur with iron, so that three periods could be defined by their available materials, stone, bronze and iron. To Thomsen the find circumstances were the key to dating. In 1821 he wrote in a letter to fellow prehistorian Schröder: nothing is more important than to point out that hitherto we have not paid enough attention to what was found together. and in 1822: we still do not know enough about most of the antiquities either; ...
only future archaeologists may be able to decide, but they will never be able to do so if they do not observe what things are found together and our collections are not brought to a greater degree of perfection. This analysis emphasizing co-occurrence and systematic attention to archaeological context allowed Thomsen to build a chronological framework of the materials in the collection and to classify new finds in relation to the established chronology, even without much knowledge of their provenience. In this way, Thomsen's system was a true chronological system rather than an evolutionary or technological system. Exactly when his chronology was reasonably well established is not clear, but by 1825 visitors to the museum were being instructed in his methods. In that year also he wrote to J.G.G. Büsching: To put artifacts in their proper context I consider it most important to pay attention to the chronological sequence, and I believe that the old idea of first stone, then copper, and finally iron, appears to be ever more firmly established as far as Scandinavia is concerned. By 1831 Thomsen was so certain of the utility of his methods that he circulated a pamphlet, "Scandinavian Artifacts and Their Preservation", advising archaeologists to "observe the greatest care" to note the context of each artifact. The pamphlet had an immediate effect. Results reported to him confirmed the universality of the Three-age System. Thomsen also published articles in 1832 and 1833 in the "Nordisk Tidsskrift for Oldkyndighed" ("Scandinavian Journal of Archaeology"). He already had an international reputation when in 1836 the Royal Society of Northern Antiquaries published his illustrated contribution to "Guide to Scandinavian Archaeology" in which he put forth his chronology together with comments about typology and stratigraphy.
Thomsen was the first to perceive typologies of grave goods, grave types, methods of burial, pottery and decorative motifs, and to assign these types to layers found in excavation. His published and personal advice to Danish archaeologists concerning the best methods of excavation produced immediate results that not only verified his system empirically but placed Denmark in the forefront of European archaeology for at least a generation. He became a national authority when C. C. Rafn, secretary of the "Kongelige Nordiske Oldskriftselskab" ("Royal Society of Northern Antiquaries"), published his principal manuscript in "Ledetraad til Nordisk Oldkyndighed" ("Guide to Scandinavian Archaeology") in 1836. The system has since been expanded by further subdivision of each era, and refined through further archaeological and anthropological finds. It was to be a full generation before British archaeology caught up with the Danish. When it did, the leading figure was another multi-talented man of independent means: John Lubbock, 1st Baron Avebury. After reviewing the Three-age System from Lucretius to Thomsen, Lubbock improved it and took it to another level, that of cultural anthropology. Thomsen had been concerned with techniques of archaeological classification. Lubbock found correlations with the customs of savages and civilization. In his 1865 book, "Prehistoric Times", Lubbock divided the Stone Age in Europe, and possibly nearer Asia and Africa, into the Palaeolithic and the Neolithic. By "drift" Lubbock meant river-drift, the alluvium deposited by a river. For the interpretation of Palaeolithic artifacts, Lubbock, pointing out that the times are beyond the reach of history and tradition, suggests an analogy, which was adopted by the anthropologists.
Just as the paleontologist uses modern elephants to help reconstruct fossil pachyderms, so the archaeologist is justified in using the customs of the "non-metallic savages" of today to understand "the early races which inhabited our continent." He devotes three chapters to this approach, covering the "modern savages" of the Indian and Pacific Oceans and the Western Hemisphere, but something of a deficit in what would today be called his professionalism reveals a field yet in its infancy: Perhaps it will be thought ... I have selected ... the passages most unfavorable to savages. ... In reality the very reverse is the case. ... Their real condition is even worse and more abject than that which I have endeavoured to depict. Sir John Lubbock's use of the terms Palaeolithic ("Old Stone Age") and Neolithic ("New Stone Age") was immediately popular. They were applied, however, in two different senses: geologic and anthropologic. In 1867–68, in 20 public lectures in Jena entitled "General Morphology" (to be published in 1870), Ernst Haeckel referred to the Archaeolithic, the Palaeolithic, the Mesolithic and the Caenolithic as periods in geologic history. He could only have got these terms from Hodder Westropp, who took Palaeolithic from Lubbock, invented Mesolithic ("Middle Stone Age") and Caenolithic instead of Lubbock's Neolithic. None of these terms appear anywhere, including the writings of Haeckel, before 1865. Haeckel's use was innovative. Westropp first used Mesolithic and Caenolithic in 1865, almost immediately after the publication of Lubbock's first edition. He read a paper on the topic before the Anthropological Society of London in 1865, published in 1866 in the "Memoirs". After asserting that "Man, in all ages and in all stages of his development, is a tool-making animal," Westropp goes on to define "different epochs of flint, stone, bronze or iron; ..."
He never did distinguish the flint from the Stone Age (having realized they were one and the same), but he divided the Stone Age as follows: These three ages were named respectively the Palaeolithic, the Mesolithic and the Kainolithic. He was careful to qualify these by stating: Their presence is thus not always an evidence of a high antiquity, but of an early and barbarous state; ... Lubbock's savagery was now Westropp's barbarism. A fuller exposition of the Mesolithic waited for his book, "Pre-Historic Phases", dedicated to Sir John Lubbock, published in 1872. At that time he restored Lubbock's Neolithic and defined a Stone Age divided into three phases and five stages. The First Stage, "Implements of the Gravel Drift," contains implements that were "roughly knocked into shape." His illustrations show Mode 1 and Mode 2 stone tools, basically Acheulean handaxes. Today they are in the Lower Palaeolithic. The Second Stage, "Flint Flakes" are of the "simplest form" and were struck off cores. Westropp differs in this definition from the modern, as Mode 2 contains flakes for scrapers and similar tools. His illustrations, however, show Modes 3 and 4, of the Middle and Upper Palaeolithic. His extensive lithic analysis leaves no doubt. They are, however, part of Westropp's Mesolithic. The Third Stage, "a more advanced stage" in which "flint flakes were carefully chipped into shape," produced small arrowheads from shattering a piece of flint into "a hundred pieces", selecting the most suitable and working it with a punch. The illustrations show that he had microliths, or Mode 5 tools in mind. His Mesolithic is therefore partly the same as the modern. The Fourth Stage is a part of the Neolithic that is transitional to the Fifth Stage: axes with ground edges leading to implements totally ground and polished. Westropp's agriculture is removed to the Bronze Age, while his Neolithic is pastoral. The Mesolithic is reserved to hunters. 
In that same year, 1872, Sir John Evans produced a massive work, "The Ancient Stone Implements", in which he in effect repudiated the Mesolithic, making a point to ignore it, denying it by name in later editions. He wrote: Sir John Lubbock has proposed to call them the Archaeolithic, or Palaeolithic, and the Neolithic Periods respectively, terms which have met with almost general acceptance, and of which I shall avail myself in the course of this work. Evans did not, however, follow Lubbock's general trend, which was typological classification. He chose instead to use type of find site as the main criterion, following Lubbock's descriptive terms, such as tools of the drift. Lubbock had identified drift sites as containing Palaeolithic material. Evans added to them the cave sites. Opposed to drift and cave were the surface sites, where chipped and ground tools often occurred in unlayered contexts. Evans decided he had no choice but to assign them all to the most recent. He therefore consigned them to the Neolithic and used the term "Surface Period" for it. Having read Westropp, Sir John knew perfectly well that all the former's Mesolithic implements were surface finds. He used his prestige to quell the concept of Mesolithic as best he could, but the public could see that his methods were not typological. The less prestigious scientists publishing in the smaller journals continued to look for a Mesolithic. For example, Isaac Taylor in "The Origin of the Aryans", 1889, mentions the Mesolithic but briefly, asserting, however, that it formed "a transition between the Palaeolithic and Neolithic Periods." Nevertheless, Sir John fought on, opposing the Mesolithic by name as late as the 1897 edition of his work. Meanwhile, Haeckel had totally abandoned the geologic uses of the -lithic terms. The concepts of Palaeozoic, Mesozoic and Cenozoic had originated in the early 19th century and were gradually becoming coin of the geologic realm. 
Realizing he was out of step, Haeckel started to transition to the -zoic system as early as 1876 in "The History of Creation", placing the -zoic form in parentheses next to the -lithic form. The gauntlet was officially thrown down before Sir John by J. Allen Brown, speaking for the opposition before the Anthropological Institute on 8 March 1892. In the journal he opens the attack by striking at a "hiatus" in the record: It has been generally assumed that a break occurred between the period during which ... the continent of Europe was inhabited by Palaeolithic Man and his Neolithic successor ... No physical cause, no adequate reasons have ever been assigned for such a hiatus in human existence ... The main hiatus at that time was between British and French archaeology, as the latter had already discovered the gap 20 years earlier, had considered three answers and had arrived at one solution, the modern one. Whether Brown did not know or was pretending not to know is unclear. In 1872, the very year of Evans' publication, Mortillet had presented the gap to the Congrès international d'Anthropologie at Brussels: Between the Palaeolithic and Neolithic, there is a wide and deep gap, a large hiatus. Apparently prehistoric man was hunting big game with stone tools one year and farming with domestic animals and ground stone tools the next. Mortillet postulated a "time then unknown" ("époque alors inconnue") to fill the gap. The hunt for the "unknown" was on. On 16 April 1874, Mortillet retracted. "That hiatus is not real" ("Cet hiatus n'est pas réel"), he said before the "Société d'Anthropologie", asserting that it was an informational gap only. The other theory had been a gap in nature: that, because of the ice age, man had retreated from Europe. The information must now be found.
In 1895 Édouard Piette stated that he had heard Édouard Lartet speak of "the remains from the intermediate period" ("les vestiges de l'époque intermédiaire"), which were yet to be discovered, but Lartet had not published this view. The gap had become a transition. However, asserted Piette: I was fortunate to discover the remains of that unknown time which separated the Magdalenian age from that of polished stone axes ... it was, at Mas-d'Azil in 1887 and 1888 when I made this discovery. He had excavated the type site of the Azilian Culture, the basis of today's Mesolithic. He found it sandwiched between the Magdalenian and the Neolithic. The tools were like those of the Danish kitchen-middens, termed the Surface Period by Evans, which were the basis of Westropp's Mesolithic. They were Mode 5 stone tools, or microliths. He mentions neither Westropp nor the Mesolithic, however. For him this was a "solution of continuity" ("solution de continuité"). To it he assigns the semi-domestication of dog, horse, cow, etc., which "greatly facilitated the work of Neolithic man" ("a beaucoup facilité la tâche de l'homme néolithique").
His example was the Gallery grave Period of Scandinavia. It was not uniformly Neolithic, but contained some objects of bronze and, more importantly to him, three different subcultures. One of these "civilisations" (sub-cultures), located in the north and east of Scandinavia, was rather different, featuring but few gallery graves, using instead stone-lined pit graves containing implements of bone, such as harpoon and javelin heads. He observed that they "persisted during the recent Paleolithic period and also during the Protoneolithic." Here he had used a new term, "Protoneolithic", which according to him was to be applied to the Danish kitchen-middens. Stjerna also said that the eastern culture "is attached to the Paleolithic civilization" ("se trouve rattachée à la civilisation paléolithique"). However, it was not intermediary, and of its intermediates he said "we cannot discuss them here" ("nous ne pouvons pas examiner ici"). This "attached" and non-transitional culture he chose to call the Epipaleolithic, defining it as follows: With Epipaleolithic I mean the period during the early days that followed the age of the reindeer, the one that retained Paleolithic customs. This period has two stages in Scandinavia, that of Maglemose and that of Kunda. ("Par époque épipaléolithique j'entends la période qui, pendant les premiers temps qui ont suivi l'âge du Renne, conserve les coutumes paléolithiques. Cette période présente deux étapes en Scandinavie, celle de Maglemose et de Kunda.") There is no mention of any Mesolithic, but the material he described had been previously connected with the Mesolithic. Whether or not Stjerna intended his Protoneolithic and Epipaleolithic as a replacement for the Mesolithic is not clear, but Hugo Obermaier, a German archaeologist who taught and worked for many years in Spain, to whom the concepts are often erroneously attributed, used them to mount an attack on the entire concept of Mesolithic.
He presented his views in "El Hombre fósil", 1916, which was translated into English in 1924. Viewing the Epipaleolithic and the Protoneolithic as a "transition" and an "interim", he affirmed that they were not any sort of "transformation": But in my opinion this term is not justified, as it would be if these phases presented a natural evolutionary development – a progressive transformation from Paleolithic to Neolithic. In reality, the final phase of the Capsian, the Tardenoisian, the Azilian and the northern Maglemose industries are the posthumous descendants of the Palaeolithic ... The ideas of Stjerna and Obermaier introduced a certain ambiguity into the terminology, which subsequent archaeologists found and find confusing. Epipaleolithic and Protoneolithic cover the same cultures, more or less, as does the Mesolithic. Publications on the Stone Age after 1916 include some sort of explanation of this ambiguity, leaving room for different views. Strictly speaking, the Epipaleolithic is the earlier part of the Mesolithic. Some identify it with the Mesolithic. To others it is an Upper Paleolithic transition to the Mesolithic. The exact use in any context depends on the archaeological tradition or the judgement of individual archaeologists. The issue continues. The post-Darwinian approach to the naming of periods in earth history focused at first on the lapse of time: early (Palaeo-), middle (Meso-) and late (Ceno-). This conceptualization automatically imposes a three-age subdivision on any period, which is predominant in modern archaeology: Early, Middle and Late Bronze Age; Early, Middle and Late Minoan, etc. The criterion is whether the objects in question look simple or elaborate. If a horizon contains objects that are post-late and simpler than late, they are sub-, as in Submycenaean. Haeckel's presentations are from a different point of view.
His "History of Creation" of 1870 presents the ages as "Strata of the Earth's Crust," in which he prefers "upper", "mid-" and "lower" based on the order in which one encounters the layers. His analysis features an Upper and Lower Pliocene as well as an Upper and Lower Diluvial (his term for the Pleistocene). Haeckel, however, was relying heavily on Lyell. In the 1833 edition of "Principles of Geology" (the first) Lyell devised the terms Eocene, Miocene and Pliocene to mean periods of which the "strata" contained some (Eo-, "early"), lesser (Mio-) and greater (Plio-) numbers of "living Mollusca represented among fossil assemblages of western Europe." The Eocene was given Lower, Middle, Upper; the Miocene a Lower and Upper; and the Pliocene an Older and Newer, which scheme would indicate an equivalence between Lower and Older, and Upper and Newer. In a French version, "Nouveaux Éléments de Géologie", in 1839 Lyell called the Older Pliocene the Pliocene and the Newer Pliocene the Pleistocene (Pleist-, "most"). Then in "Antiquity of Man" in 1863 he reverted to his previous scheme, adding "Post-Tertiary" and "Post-Pliocene." In 1873 the Fourth Edition of "Antiquity of Man" restores Pleistocene and identifies it with Post-Pliocene. As this work was posthumous, no more was heard from Lyell. Living or deceased, his work was immensely popular among scientists and laymen alike. "Pleistocene" caught on immediately; it is entirely possible that he restored it by popular demand. In 1880 Dawkins published "The Three Pleistocene Strata" containing a new manifesto for British archaeology: The continuity between geology, prehistoric archaeology and history is so direct that it is impossible to picture early man in this country without using the results of all these three sciences. 
He intends to use archaeology and geology to "draw aside the veil" covering the situations of the peoples mentioned in proto-historic documents, such as Caesar's "Commentaries" and the "Agricola" of Tacitus. Adopting Lyell's scheme of the Tertiary, he divides the Pleistocene into Early, Mid- and Late. Only the Palaeolithic falls into the Pleistocene; the Neolithic falls in the subsequent "Prehistoric Period". Dawkins defines what was to become the Upper, Middle and Lower Paleolithic, except that he calls them the "Upper Cave-Earth and Breccia," the "Middle Cave-Earth," and the "Lower Red Sand," with reference to the names of the layers. The next year, 1881, Geikie solidified the terminology into Upper and Lower Palaeolithic: "In Kent's Cave the implements obtained from the lower stages were of a much ruder description than the various objects detected in the upper cave-earth ... And a very long time must have elapsed between the formation of the lower and upper Palaeolithic beds in that cave." The Middle Paleolithic in the modern sense made its appearance in 1911 in the 1st edition of William Johnson Sollas' "Ancient Hunters"; it had been used in varying senses before then. Sollas associates the period with the Mousterian technology and the relevant modern people with the Tasmanians. In the 2nd edition of 1915 he has changed his mind for reasons that are not clear: the Mousterian has been moved to the Lower Paleolithic and the people changed to the Australian aborigines; furthermore, the association has been made with Neanderthals and the Levalloisian added. Sollas says wistfully that they are in "the very middle of the Palaeolithic epoch." Whatever his reasons, the public would have none of it. From 1911 on, Mousterian was Middle Paleolithic, except for holdouts. Alfred L. Kroeber in 1920, in "Three essays on the antiquity and races of man," reverting to Lower Paleolithic, explains that he is following Louis Laurent Gabriel de Mortillet. 
The English-speaking public remained with Middle Paleolithic. Thomsen had formalized the Three-age System by the time of its publication in 1836. The next step forward was the formalization of the Palaeolithic and Neolithic by Sir John Lubbock in 1865. Between these two times Denmark held the lead in archaeology, especially because of the work of Thomsen's junior associate and eventual successor, Jens Jacob Asmussen Worsaae, who rose in the last year of his life to become Kultus Minister of Denmark. Lubbock offers full tribute and credit to him in "Prehistoric Times". In 1862, in "Om Tvedelingen af Steenalderen" (previewed in English by "The Gentleman's Magazine" even before its publication), Worsaae, concerned about changes in typology within each period, proposed a bipartite division of each age: "Both for Bronze and Stone it was now evident that a few hundred years would not suffice. In fact, good grounds existed for dividing each of these periods into two, if not more." He called them earlier or later. The three ages became six periods. The British seized on the concept immediately. Worsaae's earlier and later became Lubbock's palaeo- and neo- in 1865, but alternatively English speakers used Earlier and Later Stone Age, as did Lyell's 1883 edition of "Principles of Geology", with older and younger as synonyms. As there is no room for a middle between the comparative adjectives, they were later modified to early and late. The scheme created a problem for further bipartite subdivisions, which would have resulted in such terms as early early Stone Age, but that terminology was avoided by adoption of Geikie's upper and lower Paleolithic. Amongst African archaeologists, the terms Early Stone Age, Middle Stone Age and Later Stone Age are preferred. 
When Sir John Lubbock was doing the preliminary work for his 1865 "magnum opus", Charles Darwin and Alfred Russel Wallace were jointly publishing their first papers, On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection. Darwin's On the Origin of Species came out in 1859, but he did not elucidate the theory of evolution as it applies to man until the Descent of Man in 1871. Meanwhile, Wallace read a paper in 1864 to the Anthropological Society of London that was a major influence on Sir John, who published in the very next year. He quoted Wallace: "From the moment when the first skin was used as a covering, when the first rude spear was formed to assist in the chase, the first seed sown or shoot planted, a grand revolution was effected in nature, a revolution which in all the previous ages of the world's history had had no parallel, for a being had arisen who was no longer necessarily subject to change with the changing universe,—a being who was in some degree superior to nature, inasmuch as he knew how to control and regulate her action, and could keep himself in harmony with her, not by a change in body, but by an advance in mind." Wallace, in distinguishing between mind and body, was asserting that natural selection shaped the form of man only until the appearance of mind; thereafter it played no part. Mind, and the result of mind, culture, formed modern man. Its appearance overthrew the laws of nature. Wallace used the term "grand revolution." Although Lubbock believed that Wallace had gone too far in that direction, he did adopt a theory of evolution combined with the revolution of culture. Neither Wallace nor Lubbock offered any explanation of how the revolution came about, or felt that they had to offer one. Revolution is an acceptance that in the continuous evolution of objects and events sharp and inexplicable disconformities do occur, as in geology. 
And so it is not surprising that in the 1874 Stockholm meeting of the International Congress of Anthropology and Prehistoric Archaeology, in response to Ernst Hamy's denial of any "break" between Paleolithic and Neolithic, based on material from dolmens near Paris "showing a continuity between the paleolithic and neolithic folks," Edouard Desor, geologist and archaeologist, replied "that the introduction of domesticated animals was a complete revolution and enables us to separate the two epochs completely." A revolution as defined by Wallace and adopted by Lubbock is a change of regime, or rules. If man was the new rule-setter through culture, then the initiation of each of Lubbock's four periods might be regarded as a change of rules and therefore as a distinct revolution, and so "Chambers's Journal", a reference work, in 1879 portrayed each of them as "...an advance in knowledge and civilization which amounted to a revolution in the then existing manners and customs of the world." Because of the controversy over Westropp's Mesolithic and Mortillet's Gap beginning in 1872, archaeological attention focused mainly on the revolution at the Palaeolithic–Neolithic boundary as an explanation of the gap. For a few decades the Neolithic Period, as it was called, was described as a kind of revolution. In the 1890s a standard term, the Neolithic Revolution, began to appear in encyclopedias such as Pears'. In 1925 the Cambridge Ancient History reported: "There are quite a large number of archaeologists who justifiably consider the period of the Late Stone Age to be a Neolithic revolution and an economic revolution at the same time. For that is the period when primitive agriculture developed and cattle breeding began." In 1936 a champion came forward who would advance the Neolithic Revolution into the mainstream view: Vere Gordon Childe. 
After giving the Neolithic Revolution scant mention in his first notable work, the 1928 edition of "New Light on the Most Ancient East", Childe made a major presentation in the first edition of "Man Makes Himself" in 1936, developing Wallace's and Lubbock's theme of the human revolution against the supremacy of nature and supplying detail on two revolutions: the Paleolithic–Neolithic, and the Neolithic–Bronze Age, which he called the Second or Urban Revolution. Lubbock had been as much of an ethnologist as an archaeologist. The founders of cultural anthropology, such as Tylor and Morgan, were to follow his lead on that. Lubbock created such concepts as savages and barbarians based on the customs of then modern tribesmen and made the presumption that the terms can be applied without serious inaccuracy to the men of the Paleolithic and the Neolithic. Childe broke with this view: "The assumption that any savage tribe today is primitive, in the sense that its culture faithfully reflects that of much more ancient men is gratuitous." Childe concentrated on the inferences to be made from the artifacts: "But when the tools ... are considered ... in their totality, they may reveal much more. They disclose not only the level of technical skill ... but also their economy ... The archaeologist's ages correspond roughly to economic stages. Each new 'age' is ushered in by an economic revolution ..." The archaeological periods were indications of economic ones: "Archaeologists can define a period when it was apparently the sole economy, the sole organization of production ruling anywhere on the earth's surface." These periods could be used to supplement historical ones where history was not available. He reaffirmed Lubbock's view that the Paleolithic was an age of food gathering and the Neolithic an age of food production. He took a stand on the question of the Mesolithic, identifying it with the Epipaleolithic. 
The Mesolithic was to him "a mere continuance of the Old Stone Age mode of life" between the end of the Pleistocene and the start of the Neolithic. Lubbock's terms "savagery" and "barbarism" do not much appear in "Man Makes Himself", but the sequel, "What Happened in History" (1942), reuses them (attributing them to Morgan, who got them from Lubbock) with an economic significance: savagery for food-gathering and barbarism for Neolithic food production. Civilization begins with the urban revolution of the Bronze Age. Even as Childe was developing this revolution theme the ground was sinking under him. Lubbock did not find any pottery associated with the Paleolithic, asserting of what was for him its last period, the Reindeer Period, that "no fragments of metal or pottery have yet been found." He did not generalize, but others did not hesitate to do so. The next year, 1866, Dawkins proclaimed of Neolithic people that "these invented the use of pottery..." From then until the 1930s pottery was considered a sine qua non of the Neolithic. The term Pre-Pottery Age came into use in the late 19th century, but it meant Paleolithic. Meanwhile, the Palestine Exploration Fund, founded in 1865, completed its survey of excavatable sites in Palestine in 1880 and in 1890 began excavating at the site of ancient Lachish near Jerusalem, the first of a series planned under the licensing system of the Ottoman Empire. Under their auspices in 1908 Ernst Sellin and Carl Watzinger began excavation at Jericho (Tell es-Sultan), first excavated by Sir Charles Warren in 1868. They discovered a Neolithic and Bronze Age city there. Subsequent excavations in the region by them and others turned up other walled cities that appear to have preceded the Bronze Age urbanization. All excavation ceased for World War I. When it was over, the Ottoman Empire was no longer a factor there. In 1919 the new British School of Archaeology in Jerusalem assumed archaeological operations in Palestine. 
John Garstang finally resumed excavation at Jericho in 1930–1936. The renewed dig uncovered another 3000 years of prehistory that was in the Neolithic but did not make use of pottery. He called it the Pre-pottery Neolithic, as opposed to the Pottery Neolithic; the two are subsequently often called the Aceramic or Pre-ceramic and the Ceramic Neolithic. Kathleen Kenyon was then a young photographer with a natural talent for archaeology. Solving a number of dating problems, she soon advanced to the forefront of British archaeology through skill and judgement. In World War II she served as a commander in the Red Cross. In 1952–58 she took over operations at Jericho as the Director of the British School, verifying and expanding Garstang's work and conclusions. There were two Pre-pottery Neolithic periods, she concluded, A and B. Moreover, the PPN had been discovered at most of the major Neolithic sites in the Near East and Greece. By this time her personal stature in archaeology was at least equal to that of V. Gordon Childe. While the three-age system was being attributed to Childe in popular fame, Kenyon was, just as gratuitously, credited as the discoverer of the PPN. More significantly, the question of revolution or evolution of the Neolithic was increasingly being brought before the professional archaeologists. Danish archaeology took the lead in defining the Bronze Age, with little of the controversy surrounding the Stone Age. British archaeologists patterned their own excavations after those of the Danish, which they followed avidly in the media. References to the Bronze Age in British excavation reports began in the 1820s, contemporaneously with the new system being promulgated by C. J. Thomsen. Mention of the Early and Late Bronze Age began in the 1860s, following the bipartite definitions of Worsaae. In 1874 at the Stockholm meeting of the International Congress of Anthropology and Prehistoric Archaeology, a suggestion was made by A. 
Bertrand that no distinct age of bronze had existed, that the bronze artifacts discovered were really part of the Iron Age. Hans Hildebrand in refutation pointed to two Bronze Ages and a transitional period in Scandinavia. John Evans denied any defect of continuity between the two and asserted there were three Bronze Ages, "the early, middle and late Bronze Age." His view for the Stone Age, following Lubbock, was quite different, denying, in "The Ancient Stone Implements", any concept of a Middle Stone Age. In his 1881 parallel work, "The Ancient Bronze Implements", he affirmed and further defined the three periods, strangely enough stepping back from his previous terminology, Early, Middle and Late Bronze Age (the current forms), in favor of "an earlier and later stage" and "middle". He uses Bronze Age, Bronze Period, Bronze-using Period and Bronze Civilization interchangeably. Apparently Evans was sensitive to what had gone before, retaining the terminology of the bipartite system while proposing a tripartite one. After stating a catalogue of types of bronze implements he defines his system: "The Bronze Age of Britain may, therefore, be regarded as an aggregate of three stages: the first, that characterized by the flat or slightly flanged celts, and the knife-daggers ... the second, that characterized by the more heavy dagger-blades and the flanged celts and tanged spear-heads or daggers, ... and the third, by palstaves and socketed celts and the many forms of tools and weapons, ... It is in this third stage that the bronze sword and the true socketed spear-head first make their advent." In chapter 1 of his work, Evans proposes for the first time a transitional Copper Age between the Neolithic and the Bronze Age. He adduces evidence from far-flung places such as China and the Americas to show that the smelting of copper universally preceded alloying with tin to make bronze. He does not know how to classify this fourth age. 
On the one hand he distinguishes it from the Bronze Age. On the other hand, he includes it: "In thus speaking of a bronze-using period I by no means wish to exclude the possible use of copper unalloyed with tin." Evans goes into considerable detail tracing references to the metals in classical literature: Latin "aes, aeris" and Greek "chalkos", first for "copper" and then for "bronze." He does not mention the adjective of "aes", which is "aēneus", nor is he interested in formulating New Latin words for the Copper Age; the English term is good enough for him and for many English authors from then on. He offers literary proof that bronze had been in use before iron and copper before bronze. In 1884 the center of archaeological interest shifted to Italy with the excavation of Remedello and the discovery of the Remedello culture by Gaetano Chierici. According to his 1886 biographers, Luigi Pigorini and Pellegrino Strobel, Chierici devised the term Età Eneo-litica to describe the archaeological context of his findings, which he believed were the remains of Pelasgians, or people that preceded Greek and Latin speakers in the Mediterranean. The age (Età) was "a period of transition from the age of stone to that of bronze" (periodo di transizione dall'età della pietra a quella del bronzo). Whether intentional or not, the definition was the same as Evans', except that Chierici was adding a term to New Latin. He describes the transition by stating the beginning (litica, or Stone Age) and the ending (eneo-, or Bronze Age); in English, "the stone-to-bronze period." Shortly after, "Eneolithic" or "Aeneolithic" began turning up in scholarly English as a synonym for "Copper Age." Sir John's own son, Arthur Evans, beginning to come into his own as an archaeologist and already studying Cretan civilization, refers in 1895 to some clay figures of "aeneolithic date" (the quotation marks are his). 
The three-age system is a way of dividing prehistory, and the Iron Age is therefore considered to end in a particular culture with either the start of its protohistory, when it begins to be written about by outsiders, or when its own historiography begins. Although iron is still the major hard material in use in modern civilization, and steel is a vital and indispensable modern industry, as far as archaeologists are concerned the Iron Age has therefore now ended for all cultures in the world. The date when it is taken to end varies greatly between cultures, and in many parts of the world there was no Iron Age at all, for example in Pre-Columbian America and the prehistory of Australia. For these and other regions the three-age system is little used. By a convention among archaeologists, in the Ancient Near East the Iron Age is taken to end with the start of the Achaemenid Empire in the 6th century BC, as the history of that is told by the Greek historian Herodotus. This remains the case despite a good deal of earlier local written material having become known since the convention was established. In Western Europe the Iron Age is ended by Roman conquest. In South Asia the start of the Maurya Empire about 320 BC is usually taken as the end point; although we have a considerable quantity of earlier written texts from India, they give us relatively little in the way of a conventional record of political history. For Egypt, China and Greece "Iron Age" is not a very useful concept, and relatively little used as a period term. In the first two prehistory has ended, and periodization by historical ruling dynasties has already begun, in the Bronze Age, which these cultures do have. In Greece the Iron Age begins during the Greek Dark Ages, and coincides with the cessation of a historical record for some centuries. For Scandinavia and other parts of northern Europe that the Romans did not reach, the Iron Age continues until the start of the Viking Age in about 800 AD. 
The question of the dates of the objects and events discovered through archaeology is the prime concern of any system of thought that seeks to summarize history through the formulation of ages or epochs. An age is defined through comparison of contemporaneous events. Increasingly, the terminology of archaeology is parallel to that of historical method. An event is "undocumented" until it turns up in the archaeological record. Fossils and artifacts are "documents" of the epochs hypothesized. The correction of dating errors is therefore a major concern. In the case where parallel epochs defined in history were available, elaborate efforts were made to align European and Near Eastern sequences with the datable chronology of Ancient Egypt and other known civilizations. The resulting grand sequence was also spot-checked by evidence of calculable solar or other astronomical events. These methods are only available for the relatively short term of recorded history; most prehistory does not fall into that category. Physical science provides at least two general groups of dating methods, described below. Data collected by these methods are intended to provide an absolute chronology for the framework of periods defined by relative chronology. The initial comparisons of artifacts defined periods that were local to a site, group of sites or region. Advances made in the fields of seriation, typology, stratification and the associative dating of artifacts and features permitted even greater refinement of the system. The ultimate development is the reconstruction of a global catalogue of layers (or as close to it as possible), with different sections attested in different regions. Ideally, once the layer of the artifact or event is known, a quick lookup of the layer in the grand system will provide a ready date. This is considered the most reliable method. It is used for calibration of the less reliable chemical methods. 
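The "chemical" methods calibrated in this way rest on exponential radioactive decay. As a minimal illustrative sketch (not any particular laboratory's procedure), the age of an organic sample can be estimated from the fraction of carbon-14 it retains, assuming the conventional half-life of 5,730 years:

```python
import math

C14_HALF_LIFE_YEARS = 5730.0  # conventional value, assumed here for illustration

def radiocarbon_age(surviving_fraction: float) -> float:
    """Estimate sample age in years from the fraction of C-14 remaining.

    Solves N/N0 = (1/2) ** (t / half_life) for t.
    """
    if not 0.0 < surviving_fraction <= 1.0:
        raise ValueError("surviving fraction must be in (0, 1]")
    return -C14_HALF_LIFE_YEARS * math.log2(surviving_fraction)

# A sample retaining half of its C-14 is one half-life old:
print(round(radiocarbon_age(0.5)))   # 5730
print(round(radiocarbon_age(0.25)))  # 11460
```

As the text notes, such raw figures are trusted only after calibration against layered sequences and only in groups of mutually consistent data points, never in isolation.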
Any material sample contains elements and compounds that are subject to decay into other elements and compounds. In cases where the rate of decay is predictable and the proportions of initial and end products can be known exactly, consistent dates for the artifact can be calculated. Due to the problem of sample contamination and variability of the natural proportions of the materials in the media, sample analysis, where it could be verified against grand layering systems, has often been found to be widely inaccurate. Chemical dates are therefore considered reliable only when used in conjunction with other methods. They are collected in groups of data points that form a pattern when graphed; isolated dates are not considered reliable. The term Megalithic does not refer to a period of time, but merely describes the use of large stones by ancient peoples from any period. An eolith is a stone that might have been formed by natural processes but occurs in contexts that suggest modification by early humans or other primates for percussion. Formation of states began during the Early Bronze Age in Egypt and Mesopotamia, and the first empires were founded during the Late Bronze Age. The Three-age System has been criticized since at least the 19th century; every phase of its development has been contested. Some of the arguments that have been presented against it follow. In some cases criticism resulted in other, parallel three-age systems, such as the concepts expressed by Lewis Henry Morgan in "Ancient Society", based on ethnology. These disagreed with the metallic basis of epochization; the critic generally substituted his own definitions of epochs. Vere Gordon Childe said of the early cultural anthropologists: "Last century Herbert Spencer, Lewis H. Morgan and Tylor propounded divergent schemes ... they arranged these in a logical order ... They assumed that the logical order was a temporal one..." 
The competing systems of Morgan and Tylor remained equally unverified—and incompatible—theories. More recently, many archaeologists have questioned the validity of dividing time into epochs at all. For example, one recent critic, Graham Connah, describes the three-age system as "epochalism" and asserts: "So many archaeological writers have used this model for so long that for many readers it has taken on a reality of its own. In spite of the theoretical agonizing of the last half-century, epochalism is still alive and well ... Even in parts of the world where the model is still in common use, it needs to be accepted that, for example, there never was actually such a thing as 'the Bronze Age.'" Some view the three-age system as over-simple; that is, it neglects vital detail and forces complex circumstances into a mold they do not fit. Rowlands argues that the division of human societies into epochs based on the presumption of a single set of related changes is not realistic: "But as a more rigorous sociological approach has begun to show that changes at the economic, political and ideological levels are not 'all of a piece' we have come to realise that time may be segmented in as many ways as convenient to the researcher concerned." The three-age system is a relative chronology. The explosion of archaeological data acquired in the 20th century was intended to elucidate the relative chronology in detail. One consequence was the collection of absolute dates. Connah argues: "As radiocarbon and other forms of absolute dating contributed more detailed and more reliable chronologies, the epochal model ceased to be necessary." 
Peter Bogucki of Princeton University summarizes the perspective taken by many modern archaeologists: Although modern archaeologists realize that this tripartite division of prehistoric society is far too simple to reflect the complexity of change and continuity, terms like 'Bronze Age' are still used as a very general way of focusing attention on particular times and places and thus facilitating archaeological discussion. Another common criticism attacks the broader application of the three-age system as a cross-cultural model for social change. The model was originally designed to explain data from Europe and West Asia, but archaeologists have also attempted to use it to explain social and technological developments in other parts of the world such as the Americas, Australasia, and Africa. Many archaeologists working in these regions have criticized this application as eurocentric. Graham Connah writes that: ... attempts by Eurocentric archaeologists to apply the model to African archaeology have produced little more than confusion, whereas in the Americas or Australasia it has been irrelevant, ... Alice B. Kehoe further explains this position as it relates to American archaeology: ... Professor Wilson's presentation of prehistoric archaeology was a European product carried across the Atlantic to promote an American science compatible with its European model. Kehoe goes on to complain of Wilson that "he accepted and reprised the idea that the European course of development was paradigmatic for humankind." This criticism argues that the different societies of the world underwent social and technological developments in different ways. A sequence of events that describes the developments of one civilization may not necessarily apply to another, in this view. Instead social and technological developments must be described within the context of the society being studied. 
Tachyon 
A tachyon or tachyonic particle is a hypothetical particle that always travels faster than light. Most physicists believe that faster-than-light particles cannot exist because they are not consistent with the known laws of physics. If such particles did exist, they could be used to build a tachyonic antitelephone and send signals faster than light, which (according to special relativity) would lead to violations of causality. No experimental evidence for the existence of such particles has been found. E. C. G. Sudarshan, V. K. Deshpande and Baidyanath Misra were the first to propose the existence of particles faster than light, naming them "meta-particles". After that, the possibility of particles moving faster than light was also proposed by Robert Ehrlich and Arnold Sommerfeld, independently of each other. In the 1967 paper that coined the term, Gerald Feinberg proposed that tachyonic particles could be quanta of a quantum field with imaginary mass. However, it was soon realized that excitations of such imaginary mass fields do "not" under any circumstances propagate faster than light; instead, the imaginary mass gives rise to an instability known as tachyon condensation. Nevertheless, in modern physics the term often refers to imaginary mass fields rather than to faster-than-light particles. Such fields have come to play a significant role in modern physics. The term comes from the Greek "tachy", meaning "swift" or "rapid". The complementary particle types are called luxons (which always move at the speed of light) and bradyons (which always move slower than light); both of these particle types are known to exist. In special relativity, a faster-than-light particle would have space-like four-momentum, in contrast to ordinary particles that have time-like four-momentum. Although in some theories the mass of tachyons is regarded as imaginary, in some modern formulations the mass is considered real, the formulas for the momentum and energy being redefined to this end. 
Moreover, since tachyons are constrained to the spacelike portion of the energy–momentum graph, they could not slow down to subluminal speeds. In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called "bradyons" in discussions of tachyons) must also apply to tachyons. In particular the energy–momentum relation E² = p²c² + m²c⁴ (where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle: E = mc² / √(1 − v²/c²). This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy. When "v" is larger than "c", the denominator in the equation for the energy is imaginary, as the value under the radical is negative. Because the total energy must be real, the numerator must "also" be imaginary: i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number. In some modern formulations of the theory, the mass of tachyons is regarded as real. One curious effect is that, unlike ordinary particles, the speed of a tachyon "increases" as its energy decreases. In particular, E approaches zero as "v" approaches infinity. (For ordinary bradyonic matter, "E" increases with increasing speed, becoming arbitrarily large as "v" approaches "c", the speed of light). Therefore, just as bradyons are forbidden to break the light-speed barrier, so too are tachyons forbidden from slowing down to below "c", because infinite energy is required to reach the barrier from either above or below. As noted by Albert Einstein, Tolman, and others, special relativity implies that faster-than-light particles, if they existed, could be used to communicate backwards in time. In 1985, Chodos proposed that neutrinos can have a tachyonic nature. 
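The sign bookkeeping behind these statements can be made explicit. Writing the imaginary rest mass as m = iμ with μ real (one common convention, not the only one), the energy of a tachyon with v > c works out to be real:

```latex
E \;=\; \frac{mc^2}{\sqrt{1 - v^2/c^2}}
  \;=\; \frac{i\mu c^2}{\,i\sqrt{v^2/c^2 - 1}\,}
  \;=\; \frac{\mu c^2}{\sqrt{v^2/c^2 - 1}} \qquad (v > c).
```

From this form, E → 0 as v → ∞ and E → ∞ as v → c from above, which is exactly the statement that a tachyon speeds up as it loses energy and would need infinite energy to slow down to c.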
The possibility of standard model particles moving at superluminal speeds can be modeled using Lorentz invariance violating terms, for example in the Standard-Model Extension. In this framework, neutrinos experience Lorentz-violating oscillations and can travel faster than light at high energies. This proposal was strongly criticized. A tachyon with an electric charge would lose energy as Cherenkov radiation—just as ordinary charged particles do when they exceed the local speed of light in a medium (other than a hard vacuum). A charged tachyon traveling in a vacuum therefore undergoes a constant proper-time acceleration and, by necessity, its world line forms a hyperbola in space-time. However, reducing a tachyon's energy "increases" its speed, so that the single hyperbola formed is of "two" oppositely charged tachyons with opposite momenta (same magnitude, opposite sign) which annihilate each other when they simultaneously reach infinite speed at the same place in space. (At infinite speed, each of the two tachyons has zero energy and finite momentum of opposite direction, so no conservation laws are violated in their mutual annihilation. The time of annihilation is frame dependent.) Even an electrically neutral tachyon would be expected to lose energy via gravitational Cherenkov radiation (unless gravitons are themselves tachyons), because it has a gravitational mass, and therefore increases in speed as it travels, as described above. If the tachyon interacts with any other particles, it can also radiate Cherenkov energy into those particles. Neutrinos interact with the other particles of the Standard Model, and Andrew Cohen and Sheldon Glashow used this to argue that the faster-than-light neutrino anomaly cannot be explained by making neutrinos propagate faster than light, and must instead be due to an error in the experiment. Further investigation of the experiment showed that the results were indeed erroneous. Causality is a fundamental principle of physics. 
If tachyons can transmit information faster than light, then according to relativity they violate causality, leading to logical paradoxes of the "kill your own grandfather" type. This is often illustrated with thought experiments such as the "tachyon telephone paradox" or "logically pernicious self-inhibitor." The problem can be understood in terms of the relativity of simultaneity in special relativity, which says that different inertial reference frames will disagree on whether two events at different locations happened "at the same time" or not, and they can also disagree on the order of the two events (technically, these disagreements occur when the spacetime interval between the events is 'space-like', meaning that neither event lies in the future light cone of the other). If one of the two events represents the sending of a signal from one location and the second event represents the reception of the same signal at another location, then as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity ensures that all reference frames agree that the transmission-event happened before the reception-event. However, in the case of a hypothetical signal moving faster than light, there would always be some frames in which the signal was received before it was sent so that the signal could be said to have moved backward in time. Because one of the two fundamental postulates of special relativity says that the laws of physics should work the same way in every inertial frame, if it is possible for signals to move backward in time in any one frame, it must be possible in all frames. 
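The frame-dependence of event ordering described above can be made concrete with the one-dimensional Lorentz transformation, t′ = γ(t − vx), x′ = γ(x − vt) in units where c = 1. A minimal sketch (the particular speeds are illustrative): a signal covering two light-seconds in one second moves at 2c, and some boosted frames see it arrive before it leaves, whereas no boost can reverse the ordering for a subluminal signal.

```python
import math

def lorentz(t: float, x: float, v: float) -> tuple[float, float]:
    """Transform event coordinates (t, x) into a frame moving at speed v,
    in units where c = 1 (one spatial dimension)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

# A signal emitted at (t=0, x=0) and received at (t=1, x=2) moves at 2c.
# In a frame boosted to v = 0.8, the reception time is negative: the signal
# is received "before" it is sent in that frame.
t_recv, _ = lorentz(1.0, 2.0, 0.8)
assert t_recv < 0.0

# A subluminal signal (received at x=0.5 at t=1) keeps its ordering.
t_sub, _ = lorentz(1.0, 0.5, 0.8)
assert t_sub > 0.0
```

The sign flip occurs precisely because the emission and reception are spacelike-separated; for timelike or lightlike separations the factor (t − vx) stays positive for every |v| < 1.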
This means that if observer A sends a signal to observer B which moves faster than light in A's frame but backwards in time in B's frame, and then B sends a reply which moves faster than light in B's frame but backwards in time in A's frame, it could work out that A receives the reply before sending the original signal, challenging causality in "every" frame and opening the door to severe logical paradoxes. Mathematical details can be found in the tachyonic antitelephone article, and an illustration of such a scenario using spacetime diagrams can be found in "Baker, R. (2003)" The reinterpretation principle asserts that a tachyon sent "back" in time can always be "reinterpreted" as a tachyon traveling "forward" in time, because observers cannot distinguish between the emission and absorption of tachyons. The attempt to "detect" a tachyon "from" the future (and violate causality) would actually "create" the same tachyon and send it "forward" in time (which is causal). However, this principle is not widely accepted as resolving the paradoxes. Instead, what would be required to avoid paradoxes is that unlike any known particle, tachyons do not interact in any way and can never be detected or observed, because otherwise a tachyon beam could be modulated and used to create an anti-telephone or a "logically pernicious self-inhibitor". All forms of energy are believed to interact at least gravitationally, and many authors state that superluminal propagation in Lorentz invariant theories always leads to causal paradoxes. In modern physics, all fundamental particles are regarded as excitations of quantum fields. There are several distinct ways in which tachyonic particles could be embedded into a field theory. In the paper that coined the term "tachyon", Gerald Feinberg studied Lorentz invariant quantum fields with imaginary mass. Because the group velocity for such a field is superluminal, naively it appears that its excitations propagate faster than light. 
However, it was quickly understood that the superluminal group velocity does not correspond to the speed of propagation of any localized excitation (like a particle). Instead, the negative mass squared represents an instability to tachyon condensation, and all excitations of the field propagate subluminally and are consistent with causality. Despite having no faster-than-light propagation, such fields are referred to simply as "tachyons" in many sources. Tachyonic fields play an important role in modern physics. Perhaps the most famous is the Higgs boson of the Standard Model of particle physics, which has an imaginary mass in its uncondensed phase. In general, the phenomenon of spontaneous symmetry breaking, which is closely related to tachyon condensation, plays an important role in many aspects of theoretical physics, including the Ginzburg–Landau and BCS theories of superconductivity. Another example of a tachyonic field is the tachyon of bosonic string theory. Tachyons are predicted by bosonic string theory, and also by the Neveu-Schwarz (NS) and NS-NS sectors, which are respectively the open bosonic sector and closed bosonic sector of RNS superstring theory prior to the GSO projection. However, such tachyons are removed by tachyon condensation (the Sen conjecture), which is what makes the GSO projection necessary. In theories that do not respect Lorentz invariance, the speed of light is not (necessarily) a barrier, and particles can travel faster than the speed of light without infinite energy or causal paradoxes. A class of field theories of that type is the so-called Standard-Model extensions. However, the experimental evidence for Lorentz invariance is extremely good, so such theories are very tightly constrained. By modifying the kinetic energy of the field, it is possible to produce Lorentz invariant field theories with excitations that propagate superluminally.
However, such theories, in general, do not have a well-defined Cauchy problem (for reasons related to the issues of causality discussed above), and are probably inconsistent quantum mechanically. The term was coined by Gerald Feinberg in a 1967 paper titled "Possibility of Faster-Than-Light Particles". He had been inspired by the science-fiction story "Beep" by James Blish. Feinberg studied the kinematics of such particles according to special relativity. In his paper he also introduced fields with imaginary mass (now also referred to as tachyons) in an attempt to understand the microphysical origin such particles might have. The first hypothesis regarding faster-than-light particles is sometimes attributed to German physicist Arnold Sommerfeld in 1904, and more recent discussions happened in 1962 and 1969. In September 2011, it was reported that a tau neutrino had traveled faster than the speed of light in a major release by CERN; however, later updates from CERN on the OPERA project indicate that the faster-than-light readings were due to a faulty element of the experiment's fibre optic timing system. Tachyons have appeared in many works of fiction. They have been used as a standby mechanism upon which many science fiction authors rely to establish faster-than-light communication, with or without reference to causality issues. The word "tachyon" has become widely recognized to such an extent that it can impart a science-fictional connotation even if the subject in question has no particular relation to superluminal travel (a form of technobabble, akin to "positronic brain"). The Starlost The Starlost is a Canadian-produced science fiction television series created by writer Harlan Ellison and broadcast in 1973 on CTV in Canada and syndicated to local stations in the United States. The show's setting is a huge generational colony spacecraft called "Earthship Ark", which has gone off course. 
Many of the descendants of the original crew and colonists are unaware, however, that they are aboard a ship. The series experienced a number of production difficulties, and Ellison broke with the project before the airing of its first episode. Foreseeing the destruction of Earth, humanity builds a multi-generational starship called "Earthship Ark", wide and long. The ship contains dozens of biospheres, each kilometres across and housing people of different cultures; their goal is to find and seed a new world around a distant star. In 2385, more than 100 years into the voyage, an unexplained accident occurs, and the ship goes into emergency mode, whereby each biosphere is sealed off from the others. In 2790, 405 years after the accident, Devon (Keir Dullea), a resident of Cypress Corners, an agrarian community with a culture resembling that of the Amish, discovers that his world is far larger and more mysterious than he had realized. Considered an outcast because of his questioning of the way things are, especially his refusal to accept the arranged marriage of his love Rachel (Gay Rowan) to his friend Garth (Robin Ward), Devon finds out that the Cypress Corners elders have been deliberately manipulating the local computer terminal, which they call "The Voice of The Creator". The congregation pursues Devon for attacking the elders and stealing a computer cassette on which they have recorded their orders, and its leaders plot to execute him, but the elderly Abraham, who also questions the elders, gives Devon a key to a dark, mysterious doorway, which Abraham himself is afraid to enter. The frightened Devon escapes into the service areas of the ship and accesses a computer data station that explains the nature and purpose of the Ark and hints at its problems. When Devon returns to Cypress Corners to tell his community what he has learned, he is put on trial for heresy and condemned to death by stoning.
Escaping on the night before his execution with the aid of Garth, Devon convinces Rachel to come with him, and Garth pursues them. When Rachel refuses to return with Garth, he joins her and Devon. Eventually they make their way to the ship's bridge, containing the skeletal remains of its crew. It is badly damaged and its control systems are inoperative. The three discover that the Ark is on a collision course with a Class G star similar to the Sun, and realize that the only way to save the Ark and its passengers is to find the backup bridge, at the other end of the Ark, and reactivate the navigation and propulsion systems. Occasionally, they are aided by the ship's partially functioning computer system. 20th Century Fox was involved in the project with Douglas Trumbull as executive producer. Science fiction writer and editor Ben Bova was brought in as science advisor. Harlan Ellison was approached by Robert Kline, a 20th Century Fox television producer, to come up with an idea for a science fiction TV series consisting of eight episodes, to pitch to the BBC as a co-production in February 1973. The BBC rejected the idea. Unable to sell "The Starlost" for prime time, Kline decided to pursue a low-budget approach and produce it for syndication. By May, Kline had sold the idea to 48 NBC stations and the Canadian CTV network. Ellison claimed that, to qualify for Canadian government subsidies, the production was shot in Canada and Canadian writers produced the scripts from his story outlines. However, several produced episodes were written entirely by American writers. Before Ellison could begin work on the show's production bible, a writers' strike began, running from March 6 to June 24. Kline negotiated an exception with the Writers Guild, on the grounds that the production was wholly Canadian, and Ellison went to work on a bible for the series. Originally, the show was to be filmed with a special effects camera system developed by Doug Trumbull called Magicam.
The system comprised two cameras whose motion was servo controlled. One camera would film actors against a blue screen, while the other would shoot a model background. The motion of both cameras was synchronized and scaled appropriately, allowing both the camera and the actors to move through model sets. The technology did not work reliably. In the end, simple blue screen effects were used, forcing static camera shots. The failure of the Magicam system was a major blow — as the Canadian studio space that had been rented was too small to build the required sets. In the end, partial sets were built, but the lack of space hampered production. As the filming went on, Ellison grew disenchanted with the budget cuts, details that were changed, and what he characterized as a progressive dumbing down of the story. Ellison's dissatisfaction extended to the new title of the pilot episode; he had titled it "Phoenix Without Ashes" but it was changed to "Voyage of Discovery". Before the production of the pilot episode was completed, Ellison invoked a clause in his contract to force the producers to use his alternative registered writer's name of "Cordwainer Bird" on the end credits. Sixteen episodes were made. Fox decided not to pick up the options for the remainder of the series. On March 31, 1974, Ellison received a Writers Guild of America Award for Best Original Screenplay for the original script (the pilot script as originally written, not the version that was filmed). A novelization of this script by Edward Bryant, "Phoenix Without Ashes", was published in 1975; this contained a lengthy foreword by Ellison describing what had gone on in production. In 2010, the novel was adapted into comic book form by IDW Publishing. Ben Bova, in an editorial in "Analog Science Fiction" (June 1974) and in interviews in fanzines, made it clear how disgruntled he had been as science adviser. 
In 1975, he published a novel entitled "The Starcrossed", depicting a scientist taken on as a science adviser for a terrible science fiction series. "The Starlost" has generally received a negative reception from historians of science fiction television: "The Encyclopedia of Science Fiction" described "The Starlost" as "dire", while "The Best of Science Fiction TV" included "The Starlost" in its list of the "Worst Science Fiction Shows of All Time". The "Starlog Photo Guidebook TV Episode Guides Volume 1" (1981) lists two unfilmed episodes, "God That Died" and "People in the Dark". Episodes of the original series were rebroadcast in 1978 and again in 1982. A number of episodes were also edited together to create movie-length installments that were sold to cable television broadcasters in the late 1980s. All 16 episodes were at one time available in a VHS boxed set. The first DVD release was limited to the five feature-length edited versions. In September/October 2008, the full series was released on DVD by VCI Entertainment. Aside from the digitally remastered episodes, a "presentation reel" created for potential broadcasters is also included. Hosted by Dullea and Trumbull, and predating Ellison's departure as he is credited under his own name with creating the series, the short feature includes sample footage using the later-abandoned Magicam technology, some filmed special effects footage taken from other productions along with model footage from the film "Silent Running" to represent the "Earthship Ark" concept, and a different series logo. In early 2019, a Roku channel was launched, airing "The Starlost" as its only program. Tora Bora Tora Bora ("Black Cave") is a cave complex, part of the Safed Koh mountain range of eastern Afghanistan. It is situated in the Pachir Aw Agam District of Nangarhar, approximately west of the Khyber Pass and north of the border of the Federally Administered Tribal Areas in Pakistan.
Tora Bora was known as a stronghold of the Taliban, having been used by mujahideen forces against the Soviet Union during the 1980s. Tora Bora and the surrounding Safed Koh range had natural caverns formed by streams eating into the limestone, which were later expanded into a CIA-financed complex built for the Mujahideen. The lithological nature of Tora Bora is predominantly metamorphic gneiss and schist. The base at Tora Bora was developed as a CIA-financed complex built for the Mujahideen following the 1979 Soviet invasion of Afghanistan, and has been described by the western media as an "impregnable cave fortress" housing 2,000 men complete with a hospital, a hydroelectric power plant, offices, a hotel, arms and ammunition stores, roads large enough to drive a tank into, and sophisticated tunnel and ventilation systems. During the U.S. invasion of Afghanistan, the cave complex was one of the strongholds of the Taliban and Al-Qaeda, according to United States Secretary of Defense Donald Rumsfeld. It was the location of the December 2001 Battle of Tora Bora, and the suspected hideout of Al-Qaeda leader Osama bin Laden. It was reported that in 2007, U.S. intelligence suspected bin Laden planned to meet with top Al-Qaeda and Taliban commanders at Tora Bora prior to the launch of a possible attack on Europe or the United States. Both the British and American press published detailed plans of the base. When shown a plan during an NBC interview, Rumsfeld said, "This is serious business; there's not one of those, there are many of those". An elaborate military operation was planned which included deployment of the CIA-US Special Operations Forces team with laser markers to guide non-stop heavy air strikes over 72 hours. When Tora Bora was eventually captured by U.S. and Afghan troops, no traces of the supposed "fortress" were found despite painstaking searches in the surrounding areas.
Tora Bora turned out to be a system of small natural caves housing, at most, 200 fighters. While arms and ammunition stores were found, there were no traces of the advanced facilities claimed to exist. In a 2002 interview with PBS's "Frontline", a Staff Sergeant from the U.S. Special Forces Operational Detachment Alpha (ODA) 572 described the caves. The complex was later retaken by the Taliban, and served as an important base for the Taliban insurgency. In 2017, Tora Bora was attacked and captured by the Islamic State of Iraq and the Levant – Khorasan Province (ISIL-K), though the Afghan National Army soon recaptured it. Taiga Taiga (a word related to Mongolic and Turkic languages), generally referred to in North America as boreal forest or snow forest, is a biome characterized by coniferous forests consisting mostly of pines, spruces, and larches. The taiga or boreal forest is the world's largest land biome. In North America, it covers most of inland Canada, Alaska, and parts of the northern contiguous United States. In Eurasia, it covers most of Sweden, Finland, much of Russia from Karelia in the west to the Pacific Ocean (including much of Siberia), much of Norway and Estonia, some of the Scottish Highlands, some lowland/coastal areas of Iceland, and areas of northern Kazakhstan, northern Mongolia, and northern Japan (on the island of Hokkaidō). The main tree species, the length of the growing season and summer temperatures vary. For example, the taiga of North America mostly consists of spruces; Scandinavian and Finnish taiga consists of a mix of spruce, pines and birch; Russian taiga has spruces, pines and larches depending on the region, while the Eastern Siberian taiga is a vast larch forest.
The taiga in its current form is a relatively recent phenomenon, having only existed for the last 12,000 years since the beginning of the Holocene epoch, covering land that had been mammoth steppe or under the Scandinavian Ice Sheet in Eurasia and under the Laurentide Ice Sheet in North America during the Late Pleistocene. A different use of the term taiga is often encountered in the English language, with "boreal forest" used in the United States and Canada to refer to only the more southerly part of the biome, while "taiga" is used to describe the more barren areas of the northernmost part of the biome approaching the tree line and the tundra biome. Hoffman (1958) discusses the origin of this differential use in North America and why it is an inappropriate differentiation of the Russian term. Although at high elevations taiga grades into alpine tundra through Krummholz, it is not exclusively an alpine biome; and unlike subalpine forest, much of the taiga is lowland. Taiga is the world's largest land biome (depending on how one defines a biome, it could also be considered the second-largest, after deserts and xeric shrublands), covering 11.5% of the Earth's land area. The largest areas are located in Russia and Canada. The taiga is the terrestrial biome with the lowest annual average temperatures after the tundra and permanent ice caps. Extreme winter minimums in the northern taiga are typically lower than those of the tundra. The lowest reliably recorded temperatures in the Northern Hemisphere were recorded in the taiga of northeastern Russia. The taiga or boreal forest has a subarctic climate with very large temperature range between seasons, but the long and cold winter is the dominant feature. This climate is classified as "Dfc", "Dwc", "Dsc", "Dfd" and "Dwd" in the Köppen climate classification scheme, meaning that the short summer (24 h average or more) lasts 1–3 months and always less than 4 months.
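The Köppen labels cited above can be derived from monthly mean temperatures. A minimal sketch of the temperature part of the scheme, using the commonly cited thresholds as assumptions (coldest month at or below −3 °C for group D, 10 °C to count as a "summer" month, 1–3 such months for the "c" suffix, a coldest month below −38 °C for "d"); the precipitation letter (f/w/s) is not computed, and the sample profile is only an approximate Yakutsk-like illustration:

```python
def subarctic_suffix(monthly_means_c):
    """Return the third Köppen letter for a continental (group D) profile
    given 12 monthly mean temperatures in Celsius, or None if the profile
    is not continental. Thresholds follow the commonly cited convention;
    the precipitation letter (f/w/s) is intentionally ignored here."""
    coldest = min(monthly_means_c)
    warm_months = sum(1 for t in monthly_means_c if t >= 10.0)
    if coldest > -3.0 or warm_months == 0:
        return None          # not group D, or no month reaches a real summer
    if coldest <= -38.0 and warm_months <= 3:
        return "d"           # severely cold subarctic winters (Dfd/Dwd)
    if warm_months <= 3:
        return "c"           # subarctic (Dfc/Dwc/Dsc): short summer, 1-3 months
    return "a_or_b"          # warmer continental climates, not subarctic

# Approximate Yakutsk-like profile: extreme winters, three months above 10 °C.
yakutsk_like = [-38.6, -33.8, -20.1, -4.8, 7.5, 16.4,
                19.5, 15.2, 6.1, -7.8, -27.0, -37.6]
assert subarctic_suffix(yakutsk_like) == "d"
```

The "1–3 months and always less than 4 months" clause in the text corresponds directly to the `warm_months <= 3` test.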
In Siberian taiga the average temperature of the coldest month is between and . There are also some much smaller areas grading towards the oceanic "Cfc" climate with milder winters, whilst the extreme south and (in Eurasia) west of the taiga reaches into humid continental climates ("Dfb", "Dwb") with longer summers. The mean annual temperature generally varies from , but there are taiga areas in eastern Siberia and interior Alaska-Yukon where the mean annual reaches down to . According to some sources, the boreal forest grades into a temperate mixed forest when mean annual temperature reaches about . Discontinuous permafrost is found in areas with mean annual temperature below freezing (), whilst in the "Dfd" and "Dwd" climate zones continuous permafrost occurs and restricts growth to very shallow-rooted trees like Siberian larch. The winters, with average temperatures below freezing, last five to seven months. Temperatures vary from throughout the whole year. The summers, while short, are generally warm and humid. In much of the taiga, would be a typical winter day temperature and an average summer day. The growing season, when the vegetation in the taiga comes alive, is usually slightly longer than the climatic definition of summer as the plants of the boreal biome have a lower threshold to trigger growth. In Canada, Scandinavia and Finland, the growing season is often estimated by using the period of the year when the 24-hour average temperature is or more. For the Taiga Plains in Canada, growing season varies from 80 to 150 days, and in the Taiga Shield from 100 to 140 days. Some sources claim 130 days growing season as typical for the taiga. Other sources mention that 50–100 frost-free days are characteristic. Data for locations in southwest Yukon gives 80–120 frost-free days. The closed canopy boreal forest in Kenozersky National Park near Plesetsk, Arkhangelsk Province, Russia, on average has 108 frost-free days. 
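The growing-season estimate used in Canada, Scandinavia and Finland (the period of the year when the 24-hour mean temperature stays at or above a threshold) can be sketched as follows. The 5 °C default is an assumption, since the exact threshold is elided in the text above and varies by source:

```python
def growing_season_days(daily_means_c, threshold_c=5.0):
    """Length, in days, of the longest unbroken run of days whose 24-hour
    mean temperature is at or above threshold_c. The 5 deg C default is an
    assumed value; definitions of the growing season differ between sources."""
    best = run = 0
    for t in daily_means_c:
        run = run + 1 if t >= threshold_c else 0
        best = max(best, run)
    return best

# Toy year: 100 consecutive warm days bracketed by cold ones.
year = [-10.0] * 130 + [12.0] * 100 + [-5.0] * 135
assert growing_season_days(year) == 100
```

Applied to station data, this reproduces figures of the order quoted above (80–150 days for the Taiga Plains, 100–140 for the Taiga Shield), though real estimates typically smooth the daily series first rather than counting a single unbroken run.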
The longest growing season is found in the smaller areas with oceanic influences; in coastal areas of Scandinavia and Finland, the growing season of the closed boreal forest can be 145–180 days. The shortest growing season is found at the northern taiga–tundra ecotone, where the northern taiga forest no longer can grow and the tundra dominates the landscape when the growing season is down to 50–70 days, and the 24-hr average of the warmest month of the year usually is or less. High latitudes mean that the sun does not rise far above the horizon, and less solar energy is received than further south. But the high latitude also ensures very long summer days, as the sun stays above the horizon nearly 20 hours each day, or up to 24 hours, with only around 6 hours of daylight, or none, occurring in the dark winters, depending on latitude. The areas of the taiga inside the Arctic Circle have midnight sun in mid-summer and polar night in mid-winter. The taiga experiences relatively low precipitation throughout the year (generally annually, in some areas), primarily as rain during the summer months, but also as fog and snow. This fog, especially predominant in low-lying areas during and after the thawing of frozen Arctic seas, means that sunshine is not abundant in the affected taiga areas even during the long summer days. As evaporation is consequently low for most of the year, precipitation exceeds evaporation, and is sufficient to sustain the dense vegetation growth including large trees. (In the steppe biome, often found south of taiga in the northern hemisphere, evapotranspiration exceeds precipitation, restricting vegetation to mostly grasses.) Snow may remain on the ground for as long as nine months in the northernmost extensions of the taiga ecozone. In general, taiga grows to the south of the July isotherm, but occasionally as far north as the July isotherm. 
Rich in spruces and Scots pines in the West Siberian Plain, the taiga is dominated by larch in eastern Siberia, before returning to its original floristic richness on the Pacific shores. Two deciduous trees mingle throughout southern Siberia: birch and Populus tremula. The southern limit is more variable, depending on rainfall; taiga may be replaced by forest steppe south of the July isotherm where rainfall is very low, but more typically extends south to the July isotherm, and locally where rainfall is higher (notably in eastern Siberia and adjacent Outer Manchuria) south to the July isotherm. In these warmer areas the taiga has higher species diversity, with more warmth-loving species such as Korean pine, Jezo spruce, and Manchurian fir, and merges gradually into mixed temperate forest or, more locally (on the Pacific Ocean coasts of North America and Asia), into coniferous temperate rainforests where oak and hornbeam appear and join the conifers, birch and Populus tremula. The area currently classified as taiga in Europe and North America (except Alaska) was recently glaciated. As the glaciers receded they left depressions in the topography that have since filled with water, creating lakes and bogs (especially muskeg soil) found throughout the taiga. In Sweden the taiga is associated with the Norrland terrain. Taiga soil tends to be young and poor in nutrients. It lacks the deep, organically enriched profile present in temperate deciduous forests. The thinness of the soil is due largely to the cold, which hinders the development of soil and the ease with which plants can use its nutrients. Fallen leaves and moss can remain on the forest floor for a long time in the cool, moist climate, which limits their organic contribution to the soil; acids from evergreen needles further leach the soil, creating spodosol, also known as podzol. Since the soil is acidic due to the falling pine needles, the forest floor has only lichens and some mosses growing on it.
In clearings in the forest and in areas with more boreal deciduous trees, there are more herbs and berries growing. Diversity of soil organisms in the boreal forest is high, comparable to the tropical rainforest. Since North America and Asia used to be connected by the Bering land bridge, a number of animal and plant species (more animals than plants) were able to colonize both continents and are distributed throughout the taiga biome (see Circumboreal Region). Others differ regionally, typically with each genus having several distinct species, each occupying different regions of the taiga. Taigas also have some small-leaved deciduous trees like birch, alder, willow, and poplar; mostly in areas escaping the most extreme winter cold. However, the Dahurian larch tolerates the coldest winters in the Northern Hemisphere in eastern Siberia. The very southernmost parts of the taiga may have trees such as oak, maple, elm and lime scattered among the conifers, and there is usually a gradual transition into a temperate mixed forest, such as the eastern forest-boreal transition of eastern Canada. In the interior of the continents with the driest climate, the boreal forests might grade into temperate grassland. There are two major types of taiga. The southern part is the closed canopy forest, consisting of many closely spaced trees with mossy ground cover. In clearings in the forest, shrubs and wildflowers are common, such as the fireweed. The other type is the lichen woodland or sparse taiga, with trees that are farther-spaced and lichen ground cover; the latter is common in the northernmost taiga. In the northernmost taiga the forest cover is not only more sparse, but often stunted in growth form; moreover, ice pruned asymmetric black spruce (in North America) are often seen, with diminished foliage on the windward side. 
In Canada, Scandinavia and Finland, the boreal forest is usually divided into three subzones: The high boreal (north boreal) or taiga zone; the middle boreal (closed forest); and the southern boreal, a closed canopy boreal forest with some scattered temperate deciduous trees among the conifers, such as maple, elm and oak. This southern boreal forest experiences the longest and warmest growing season of the biome, and in some regions (including Scandinavia, Finland and western Russia) this subzone is commonly used for agricultural purposes. The boreal forest is home to many types of berries; some are confined to the southern and middle closed boreal forest (such as wild strawberry and partridgeberry); others grow in most areas of the taiga (such as cranberry and cloudberry), and some can grow in both the taiga and the low arctic (southern part of) tundra (such as bilberry, bunchberry and lingonberry). The forests of the taiga are largely coniferous, dominated by larch, spruce, fir and pine. The woodland mix varies according to geography and climate so for example the Eastern Canadian forests ecoregion of the higher elevations of the Laurentian Mountains and the northern Appalachian Mountains in Canada is dominated by balsam fir "Abies balsamea", while further north the Eastern Canadian Shield taiga of northern Quebec and Labrador is notably black spruce "Picea mariana" and tamarack larch "Larix laricina". Evergreen species in the taiga (spruce, fir, and pine) have a number of adaptations specifically for survival in harsh taiga winters, although larch, which is extremely cold-tolerant, is deciduous. Taiga trees tend to have shallow roots to take advantage of the thin soils, while many of them seasonally alter their biochemistry to make them more resistant to freezing, called "hardening". The narrow conical shape of northern conifers, and their downward-drooping limbs, also help them shed snow. 
Because the sun is low on the horizon for most of the year, it is difficult for plants to generate energy from photosynthesis. Pine, spruce and fir do not lose their leaves seasonally and are able to photosynthesize with their older leaves in late winter and spring when light is good but temperatures are still too low for new growth to commence. The adaptation of evergreen needles limits the water lost due to transpiration and their dark green color increases their absorption of sunlight. Although precipitation is not a limiting factor, the ground freezes during the winter months and plant roots are unable to absorb water, so desiccation can be a severe problem in late winter for evergreens. Although the taiga is dominated by coniferous forests, some broadleaf trees also occur, notably birch, aspen, willow, and rowan. Many smaller herbaceous plants, such as ferns and occasionally ramps, grow closer to the ground. Periodic stand-replacing wildfires (with return times of 20–200 years) clear out the tree canopies, allowing sunlight to invigorate new growth on the forest floor. For some species, wildfires are a necessary part of the life cycle in the taiga; some, e.g. jack pine, have cones which only open to release their seed after a fire, dispersing their seeds onto the newly cleared ground; certain species of fungi (such as morels) are also known to do this. Grasses grow wherever they can find a patch of sun, and mosses and lichens thrive on the damp ground and on the sides of tree trunks. In comparison with other biomes, however, the taiga has low biological diversity. Coniferous trees are the dominant plants of the taiga biome. A very few species in four main genera are found: the evergreen spruce, fir and pine, and the deciduous larch. In North America, one or two species of fir and one or two species of spruce are dominant.
Across Scandinavia and western Russia, the Scots pine is a common component of the taiga, while the taiga of the Russian Far East and Mongolia is dominated by larch. The boreal forest, or taiga, supports a relatively small range of animals due to the harshness of the climate. Canada's boreal forest includes 85 species of mammals, 130 species of fish, and an estimated 32,000 species of insects. Insects play a critical role as pollinators, decomposers, and as a part of the food web. Many nesting birds rely on them for food in the summer months. The cold winters and short summers make the taiga a challenging biome for reptiles and amphibians, which depend on environmental conditions to regulate their body temperatures, and there are only a few species in the boreal forest, including red-sided garter snake, common European adder, blue-spotted salamander, northern two-lined salamander, Siberian salamander, wood frog, northern leopard frog, boreal chorus frog, American toad, and Canadian toad. Most hibernate underground in winter. Fish of the taiga must be able to withstand cold water conditions and be able to adapt to life under ice-covered water. Species in the taiga include Alaska blackfish, northern pike, walleye, longnose sucker, white sucker, various species of cisco, lake whitefish, round whitefish, pygmy whitefish, Arctic lamprey, various grayling species, brook trout (including sea-run brook trout in the Hudson Bay area), chum salmon, Siberian taimen, lenok and lake chub. The taiga is home to a number of large herbivorous mammals, such as moose and reindeer/caribou. Some areas of the more southern closed boreal forest also have populations of other deer species, such as the elk (wapiti) and roe deer. The largest animal in the taiga is the wood bison, found in northern Canada and Alaska, and newly introduced into the Russian Far East. 
Small mammals of the taiga biome include rodents such as beaver, squirrel, North American porcupine and vole, as well as a small number of lagomorph species such as snowshoe hare and mountain hare. These species have adapted to survive the harsh winters in their native ranges. Some larger mammals, such as bears, eat heartily during the summer in order to gain weight, and then go into hibernation during the winter. Other animals have adapted layers of fur or feathers to insulate them from the cold. Predatory mammals of the taiga must be adapted to travel long distances in search of scattered prey or be able to supplement their diet with vegetation or other forms of food (as raccoons do). Mammalian predators of the taiga include Canada lynx, Eurasian lynx, stoat, Siberian weasel, least weasel, sable, American marten, North American river otter, European otter, American mink, wolverine, Asian badger, fisher, gray wolf, coyote, red fox, brown bear, American black bear, Asiatic black bear, polar bear (only small areas at the taiga–tundra ecotone) and Siberian tiger. More than 300 species of birds have their nesting grounds in the taiga. Siberian thrush, white-throated sparrow, and black-throated green warbler migrate to this habitat to take advantage of the long summer days and abundance of insects found around the numerous bogs and lakes. Of the 300 species of birds that summer in the taiga, only 30 stay for the winter. These are either carrion-feeders or large raptors that can take live mammal prey, including golden eagle, rough-legged buzzard (also known as the rough-legged hawk), and raven, or else seed-eating birds, including several species of grouse and crossbills. Fire has been one of the most important factors shaping the composition and development of boreal forest stands (Rowe 1955); it is the dominant stand-renewing disturbance through much of the Canadian boreal forest (Amiro et al. 2001). 
The fire history that characterizes an ecosystem is its "fire regime", which has three elements: (1) fire type and intensity (e.g., crown fires, severe surface fires, and light surface fires), (2) the size of typical fires of significance, and (3) the frequency or return intervals for specific land units. The average time within a fire regime to burn an area equivalent to the total area of an ecosystem is its "fire rotation" (Heinselman 1973) or "fire cycle" (Van Wagner 1978). However, as Heinselman (1981) noted, each physiographic site tends to have its own return interval, so that some areas are skipped for long periods, while others might burn twice or more often during a nominal fire rotation. The dominant fire regime in the boreal forest is high-intensity crown fires or severe surface fires of very large size, often more than 10,000 ha (100 km²), and sometimes more than 400,000 ha (4000 km²). Such fires kill entire stands. Fire rotations in the drier regions of western Canada and Alaska average 50–100 years, shorter than in the moister climates of eastern Canada, where they may average 200 years or more. Fire cycles also tend to be long near the tree line in the subarctic spruce-lichen woodlands. The longest cycles, possibly 300 years, probably occur in the western boreal in floodplain white spruce. Amiro et al. (2001) calculated the mean fire cycle for the period 1980 to 1999 in the Canadian boreal forest (including taiga) at 126 years. Increased fire activity has been predicted for western Canada, but parts of eastern Canada may experience less fire in the future because of greater precipitation in a warmer climate. The mature boreal forest pattern in the south shows balsam fir dominant on well-drained sites in eastern Canada, changing centrally and westward to a prominence of white spruce, with black spruce and tamarack forming the forests on peats, and with jack pine usually present on dry sites except in the extreme east, where it is absent. 
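The fire-cycle arithmetic is simple: the cycle is the total area of the ecosystem divided by the average area burned per year. A minimal sketch (the area and burn-rate figures below are hypothetical, chosen only to illustrate how a 126-year mean cycle like that of Amiro et al. arises):

```python
def fire_cycle_years(total_area_km2: float, mean_annual_burn_km2: float) -> float:
    """Fire cycle: years needed to burn an area equal to the whole ecosystem."""
    return total_area_km2 / mean_annual_burn_km2

# Hypothetical figures: a 3,000,000 km^2 forest losing ~23,810 km^2 per year to fire
cycle = fire_cycle_years(3_000_000, 23_810)
print(round(cycle))  # ~126-year fire cycle
```

The same ratio read the other way gives the annual burn fraction: a 126-year cycle means roughly 0.8% of the landscape burns in an average year.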
The effects of fires are inextricably woven into the patterns of vegetation on the landscape, which in the east favour black spruce, paper birch, and jack pine over balsam fir, and in the west give the advantage to aspen, jack pine, black spruce, and birch over white spruce. Many investigators have reported the ubiquity of charcoal under the forest floor and in the upper soil profile. Charcoal in soils provided Bryson et al. (1965) with clues about the forest history of an area 280 km north of the then-current tree line at Ennadai Lake, District of Keewatin, Northwest Territories. Two lines of evidence support the thesis that fire has always been an integral factor in the boreal forest: (1) direct, eye-witness accounts and forest-fire statistics, and (2) indirect, circumstantial evidence based on the effects of fire, as well as on persisting indicators. The patchwork mosaic of forest stands in the boreal forest, typically with abrupt, irregular boundaries circumscribing homogeneous stands, is indirect but compelling testimony to the role of fire in shaping the forest. The fact is that most boreal forest stands are less than 100 years old, and only in the rather few areas that have escaped burning are there stands of white spruce older than 250 years. The prevalence of fire-adaptive morphologic and reproductive characteristics of many boreal plant species is further evidence pointing to a long and intimate association with fire. Seven of the ten most common trees in the boreal forest—jack pine, lodgepole pine, aspen, balsam poplar ("Populus balsamifera"), paper birch, tamarack, and black spruce—can be classed as pioneers in their adaptations for rapid invasion of open areas. White spruce shows some pioneering abilities, too, but is less able than black spruce and the pines to disperse seed at all seasons. Only balsam fir and alpine fir seem to be poorly adapted to reproduce after fire, as their cones disintegrate at maturity, leaving no seed in the crowns. 
The oldest forests in the northwest boreal region, some older than 300 years, are of white spruce occurring as pure stands on moist floodplains. Here, the frequency of fire is much less than on adjacent uplands dominated by pine, black spruce and aspen. In contrast, in the Cordilleran region, fire is most frequent in the valley bottoms, decreasing upward, as shown by a mosaic of young pioneer pine and broadleaf stands below, and older spruce–fir on the slopes above. Without fire, the boreal forest would become more and more homogeneous, with the long-lived white spruce gradually replacing pine, aspen, balsam poplar, and birch, and perhaps even black spruce, except on the peatlands. Large areas of Siberia's taiga have been harvested for lumber since the collapse of the Soviet Union. Previously, the forest was protected by the restrictions of the Soviet Forest Ministry, but with the collapse of the Union, the restrictions on trade with Western nations vanished. Trees are easy to harvest and sell well, so loggers have begun harvesting Russian taiga evergreen trees for sale to nations previously forbidden by Soviet law. In Canada, eight percent of the taiga is protected from development, and provincial governments allow forest management to occur on Crown land under rigorous constraints. The main forestry practice in the boreal forest of Canada is clearcutting, which involves cutting down most of the trees in a given area, then replanting the forest as a monocrop (one species of tree) the following season. Some of the products from logged boreal forests include toilet paper, copy paper, newsprint, and lumber. More than 90% of boreal forest products from Canada are exported for consumption and processing in the United States. Some of the larger cities situated in this biome are Murmansk, Arkhangelsk, Yakutsk, Anchorage, Yellowknife, Tromsø, Luleå, and Oulu. 
Most companies that harvest in Canadian forests are certified by an independent third-party agency such as the Forest Stewardship Council (FSC), Sustainable Forests Initiative (SFI), or the Canadian Standards Association (CSA). While the certification process differs between these groups, all of them include forest stewardship, respect for aboriginal peoples, compliance with local, provincial or national environmental laws, forest worker safety, education and training, and other environmental, business, and social requirements. The prompt renewal of all harvest sites by planting or natural renewal is also required. During the last quarter of the twentieth century, the zone of latitude occupied by the boreal forest experienced some of the greatest temperature increases on Earth. Winter temperatures have increased more than summer temperatures. The number of days with extremely cold temperatures (e.g., −20 to −40 °C (−4 to −40 °F)) has decreased irregularly but systematically in nearly all the boreal region, allowing better survival for tree-damaging insects. In summer, the daily low temperature has increased more than the daily high temperature. In Fairbanks, Alaska, the length of the frost-free season has increased from 60–90 days in the early twentieth century to about 120 days a century later. Summer warming has been shown to increase water stress and reduce tree growth in dry areas of the southern boreal forest in central Alaska, western Canada and portions of far eastern Russia. Precipitation is relatively abundant in Scandinavia, Finland, northwest Russia and eastern Canada, where a longer growth season (i.e. the period when sap flow is not impeded by frozen water) accelerates tree growth. As a consequence of this warming trend, the warmer parts of the boreal forests are susceptible to replacement by grassland, parkland or temperate forest. 
In Siberia, the taiga is converting from predominantly needle-shedding larch trees to evergreen conifers in response to a warming climate. This is likely to further accelerate warming, as the evergreen trees will absorb more of the sun's rays. Given the vast size of the area, such a change has the potential to affect areas well outside of the region. In much of the boreal forest in Alaska, the growth of white spruce trees is stunted by unusually warm summers, while trees on some of the coldest fringes of the forest are experiencing faster growth than previously. Lack of moisture in the warmer summers is also stressing the birch trees of central Alaska. Recent years have seen outbreaks of insect pests in forest-destroying plagues: the spruce-bark beetle ("Dendroctonus rufipennis") in Yukon and Alaska; the mountain pine beetle in British Columbia; the aspen-leaf miner; the larch sawfly; the spruce budworm ("Choristoneura fumiferana"); the spruce coneworm. The effect of sulphur dioxide on woody boreal forest species was investigated by Addison et al. (1984), who exposed plants growing on native soils and tailings to 15.2 μmol/m3 (0.34 ppm) of SO2 and measured its effect on CO2 assimilation rate (net assimilation rate, NAR). The Canadian maximum acceptable limit for atmospheric SO2 is 0.34 ppm. Fumigation with SO2 significantly reduced NAR in all species and produced visible symptoms of injury in 2–20 days. The decrease in NAR of deciduous species (trembling aspen ["Populus tremuloides"], willow ["Salix"], green alder ["Alnus viridis"], and white birch ["Betula papyrifera"]) was significantly more rapid than that of conifers (white spruce, black spruce ["Picea mariana"], and jack pine ["Pinus banksiana"]) or an evergreen angiosperm (Labrador tea) growing on a fertilized Brunisol. These metabolic and visible injury responses seemed to be related to the differences in S uptake, owing in part to higher gas exchange rates for deciduous species than for conifers. 
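The SO2 exposure is quoted in both µmol/m³ and ppm; the two are related through the molar density of air. A quick check, assuming the conversion was done at 0 °C and 1 atm (ideal-gas molar volume 22.414 L/mol; this assumption is mine, not stated in the study), reproduces the paired figures:

```python
MOLAR_VOLUME_STP_L = 22.414                   # L/mol for an ideal gas at 0 degC, 1 atm
AIR_MOL_PER_M3 = 1000 / MOLAR_VOLUME_STP_L    # ~44.6 mol of air per cubic metre

def ppm_to_umol_per_m3(ppm: float) -> float:
    """Convert a mixing ratio in ppm (µmol pollutant per mol of air) to µmol/m^3."""
    return ppm * AIR_MOL_PER_M3

print(round(ppm_to_umol_per_m3(0.34), 1))  # ~15.2 umol/m^3, matching the text
```

At warmer temperatures the molar density of air is lower, so the same 0.34 ppm corresponds to a somewhat smaller value in µmol/m³.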
Conifers growing in oil sands tailings responded to SO2 with a significantly more rapid decrease in NAR compared with those growing in the Brunisol, perhaps because of predisposing toxic material in the tailings. However, sulphur uptake and visible symptom development did not differ between conifers growing on the two substrates. Acidification of precipitation by anthropogenic, acid-forming emissions has been associated with damage to vegetation and reduced forest productivity, but 2-year-old white spruce that were subjected to simulated acid rain (at pH 4.6, 3.6, and 2.6) applied weekly for 7 weeks incurred no statistically significant (P > 0.05) reduction in growth during the experiment compared with the background control (pH 5.6) (Abouguendia and Baschak 1987). However, symptoms of injury were observed in all treatments, and the number of plants and the number of needles affected increased with increasing rain acidity and with time. Scherbatskoy and Klein (1983) found no significant effect on chlorophyll concentration in white spruce at pH 4.3 and 2.8, but Abouguendia and Baschak (1987) found a significant reduction in white spruce at pH 2.6, while the foliar sulphur content was significantly greater at pH 2.6 than in any of the other treatments. Many nations are taking direct steps to protect the ecology of the taiga by prohibiting logging, mining, oil and gas production, and other forms of development. In February 2010 the Canadian government established protection for 13,000 square kilometres of boreal forest by creating a new 10,700-square-kilometre park reserve in the Mealy Mountains area of eastern Canada and a 3,000-square-kilometre waterway provincial park that follows alongside the Eagle River from headwaters to sea. Two Canadian provincial governments, Ontario and Quebec, introduced measures in 2008 that would protect at least half of their northern boreal forest. 
Although both provinces admitted it will take years to plan, work with Aboriginal and local communities and ultimately map out precise boundaries of the areas off-limits to development, the measures are expected to create some of the largest protected-area networks in the world once completed. Both announcements came the year after a letter signed by 1,500 scientists called on political leaders to protect at least half of the boreal forest. The taiga stores enormous quantities of carbon, more than the world's temperate and tropical forests combined, much of it in wetlands and peatland. In fact, current estimates place boreal forests as storing twice as much carbon per unit area as tropical forests. One of the biggest areas of research, and a topic still full of unsolved questions, is the recurring disturbance of fire and the role it plays in propagating the lichen woodland. The phenomenon of wildfire by lightning strike is the primary determinant of understory vegetation, and because of this it is considered to be the predominant force behind community and ecosystem properties in the lichen woodland. The significance of fire is clearly evident when one considers that understory vegetation influences tree seedling germination in the short term and decomposition of biomass and nutrient availability in the long term. The recurrent cycle of large, damaging fires occurs approximately every 70 to 100 years. Understanding the dynamics of this ecosystem is entangled with discovering the successional paths that the vegetation exhibits after a fire. Trees, shrubs, and lichens all recover from fire-induced damage through vegetative reproduction as well as invasion by propagules. Seeds that have fallen and become buried provide little help in the re-establishment of a species. The reappearance of lichens is reasoned to occur because of varying conditions and light/nutrient availability in each different microstate. 
Several different studies have been done that have led to the formation of the theory that post-fire development can be propagated by any of four pathways: self replacement, species-dominance relay, species replacement, or gap-phase self replacement. Self replacement is simply the re-establishment of the pre-fire dominant species. Species-dominance relay is a sequential attempt of tree species to establish dominance in the canopy. Species replacement occurs when fires happen frequently enough to interrupt species-dominance relay. Gap-phase self replacement is the least common and so far has only been documented in western Canada; it is a self replacement of the surviving species into the canopy gaps after a fire kills another species. The particular pathway taken after a fire disturbance depends on how the landscape is able to support trees, as well as on fire frequency. Fire frequency has a large role in shaping the original inception of the lower forest line of the lichen woodland taiga. It has been hypothesized by Serge Payette that the spruce-moss forest ecosystem was changed into the lichen woodland biome by two compounded strong disturbances: large fire and the appearance and attack of the spruce budworm. The spruce budworm is a deadly insect to the spruce populations in the southern regions of the taiga. J. P. Jasinski confirmed this theory five years later, stating "Their [lichen woodlands] persistence, along with their previous moss forest histories and current occurrence adjacent to closed moss forests, indicate that they are an alternative stable state to the spruce–moss forests". 
Type II submarine The Type II U-boat was designed by Nazi Germany as a coastal U-boat, modeled after the CV-707 submarine, which was designed by the Dutch dummy company NV Ingenieurskantoor voor Scheepsbouw Den Haag (I.v.S) (set up by Germany after World War I in order to maintain and develop German submarine technology and to circumvent the limitations set by the Treaty of Versailles) and built in 1933 by the Finnish Crichton-Vulcan shipyard in Turku, Finland. It was too small to undertake sustained operations far away from the home support facilities. Its primary role was found to be in the training schools, preparing new German naval officers for command. It appeared in four sub-types. Germany was stripped of its U-boats by the Treaty of Versailles at the end of World War I, but in the late 1920s and early 1930s began to rebuild its armed forces. The pace of rearmament accelerated under Adolf Hitler, and the first Type II U-boat was laid down on 11 February 1935. Knowing that the world would see this step towards rearmament, Hitler reached an agreement with Britain to build a navy up to 35% of the size of the Royal Navy in surface vessels, but equal to the British in number of submarines. This agreement was signed on 18 June 1935, and the first Type II U-boat was commissioned 11 days later. The defining characteristic of the Type II was its small size. Known as the "Einbaum" ("dugout canoe"), it had some advantages over larger boats, chiefly its ability to work in shallow water, dive quickly, and remain stealthy thanks to its low conning tower. However, it had a shallower maximum depth, short range, and cramped living conditions, and carried fewer torpedoes. The boat had a single hull, with no watertight compartments. There were three torpedo tubes forward (none aft), with space for another two torpedoes inside the pressure hull for reloads. A single 20 mm anti-aircraft gun was provided, but no deck gun was mounted. Space inside was limited. 
The two spare torpedoes extended from just behind the torpedo tubes to just in front of the control room, and most of the 24-man crew lived in this forward area around the torpedoes, sharing 12 bunks. Four bunks were also provided aft of the engines for the engine room crew. Cooking and sanitary facilities were basic, and in this environment long patrols were very arduous. Most Type IIs only saw operational service during the early years of the war, thereafter remaining in training bases. Six were stripped down to their hulls, transported by river and truck to Linz (on the Danube), and reassembled for use in the Black Sea against the Soviet Union. In contrast to other German submarine types, few Type IIs were lost. This reflects their use as training boats, although accidents accounted for several vessels. These boats were a first step towards re-armament, intended to provide Germany with experience in submarine construction and operation and to lay the foundation for larger boats to build upon. Only one of these submarines survives: the prototype CV-707, renamed "Vesikko" by the Finnish Navy, which later bought it. On 3 February 2008, "The Telegraph" reported that U-20 had been discovered by Selçuk Kolay (a Turkish marine engineer) off the coast of the Turkish city of Zonguldak. According to the report, Kolay also knows where the submarines U-23 and U-19 lie, scuttled in deeper water near U-20. The Type IIA was a single-hull, all-welded boat with internal ballast tanks. Compared to the other variants, it had a smaller bridge and could carry the German G7a and G7e torpedoes as well as TM-type torpedo mines. There were two periscopes in the conning tower: an aerial (navigation) periscope at the front of the tower, and an attack periscope in the middle of the tower. There were serrated net cutters in the bow. The net cutters were adopted from World War I boats but were quickly discontinued during World War II. 
Deutsche Werke AG of Kiel built six Type IIAs in 1934 and 1935. The prototype, built in Finland, became the Finnish submarine Vesikko. The Type IIB was a lengthened version of the Type IIA. Three additional compartments were inserted amidships, which were fitted with additional diesel tanks beneath the control room. The range was increased to 1,800 nautical miles at 12 knots. Diving time was also improved to 30 seconds. Deutsche Werke AG of Kiel built four Type IIBs in 1935 and 1936; Germaniawerft of Kiel built fourteen in 1935 and 1936; and Flender Werke AG of Lübeck built two between 1938 and 1940. In total, twenty Type IIB submarines were built and commissioned. The Type IIC was a further lengthened version of the Type IIB, with an additional two compartments inserted amidships to accommodate improved radio room facilities. The additional diesel tanks beneath the control room were further enlarged, extending the range to 1,900 nautical miles at 12 knots. Deutsche Werke AG of Kiel built eight Type IICs between 1937 and 1940, all of which were commissioned. The Type IID had additional saddle tanks fitted to the sides of the external hull, used to accommodate additional diesel storage. The diesel oil would float atop the water in the saddle tanks; as oil was consumed, water would gradually fill the tanks to compensate for the change in buoyancy. The range was nearly doubled, enabling the Type II to conduct longer operations around the British Isles. In a further development, the propellers were fitted with Kort nozzles, intended to improve propulsion efficiency. Deutsche Werke AG of Kiel built sixteen Type IIDs in 1939 and 1940, all of which were commissioned. See list of German Type II submarines for individual ship details. Tritium Tritium, or hydrogen-3 (symbol T or 3H), is a rare and radioactive isotope of hydrogen. 
The nucleus of tritium (sometimes called a triton) contains one proton and two neutrons, whereas the nucleus of the common isotope hydrogen-1 (protium) contains just one proton, and that of hydrogen-2 (deuterium) contains one proton and one neutron. Naturally occurring tritium is extremely rare on Earth. The atmosphere has only trace amounts, formed by the interaction of its gases with cosmic rays. It can be produced by irradiating lithium metal or lithium-bearing ceramic pebbles in a nuclear reactor. Tritium is used as a radioactive tracer, in radioluminescent light sources for watches and instruments, and, along with deuterium, as a fuel for nuclear fusion reactions with applications in energy generation and weapons. The name of this isotope is derived from Greek "τρίτος" ("trítos"), meaning "third". Tritium was first detected in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck after bombarding deuterium with deuterons. Deuterium is another isotope of hydrogen. However, their experiment could not isolate tritium, which was later accomplished by Luis Alvarez and Robert Cornog, who also realized tritium's radioactivity. Willard F. Libby recognized that tritium could be used for radiometric dating of water and wine. While tritium has several different experimentally determined values of its half-life, the National Institute of Standards and Technology lists 4,500 ± 8 days (about 12.32 years). It decays into helium-3 by beta decay, as in this nuclear equation: 3H → 3He + e− + ν̄e, and it releases 18.6 keV of energy in the process. The electron's kinetic energy varies, with an average of 5.7 keV, while the remaining energy is carried off by the nearly undetectable electron antineutrino. Beta particles from tritium can penetrate only about 6.0 mm of air, and they are incapable of passing through the dead outermost layer of human skin. 
The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187) appropriate for absolute neutrino mass measurements in the laboratory (the most recent experiment being KATRIN). The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting. Tritium is most often produced in nuclear reactors by neutron activation of lithium-6 (6Li + n → 4He + T). The release and diffusion of tritium and helium produced by the fission of lithium can take place within ceramics referred to as breeder ceramics. The production of tritium from lithium-6 in such breeder ceramics is possible with neutrons of any energy, and is an exothermic reaction yielding 4.8 MeV. In comparison, the fusion of deuterium with tritium releases about 17.6 MeV of energy. For applications in proposed fusion energy reactors, such as ITER, pebbles consisting of lithium-bearing ceramics, including Li2TiO3 and Li4SiO4, are being developed for tritium breeding within a helium cooled pebble bed (HCPB), also known as a breeder blanket. High-energy neutrons can also produce tritium from lithium-7 in an endothermic (net heat-consuming) reaction, consuming 2.466 MeV. This was discovered when the 1954 Castle Bravo nuclear test produced an unexpectedly high yield. High-energy neutrons irradiating boron-10 will also occasionally produce tritium; a more common result of boron-10 neutron capture is lithium-7 and a single alpha particle. Tritium is also produced in heavy water-moderated reactors whenever a deuterium nucleus captures a neutron. This reaction has a quite small absorption cross section, making heavy water a good neutron moderator, and relatively little tritium is produced. Even so, cleaning tritium from the moderator may be desirable after several years to reduce the risk of its escaping to the environment. 
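The energy figures quoted above (about 4.8 MeV released by the lithium-6 reaction, 2.466 MeV consumed by the lithium-7 reaction, and 17.6 MeV released by D–T fusion) all follow from the mass defect of each reaction. A sketch using standard mass-table values (the atomic masses below come from published tables, not from this article):

```python
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, MeV

# Atomic masses in u, from standard mass tables
M = {"n": 1.008665, "D": 2.014102, "T": 3.016049,
     "He4": 4.002602, "Li6": 6.015123, "Li7": 7.016003}

def q_value(reactants, products):
    """Q = (total mass of reactants - total mass of products) * 931.494 MeV/u."""
    return (sum(M[x] for x in reactants) - sum(M[x] for x in products)) * U_TO_MEV

print(round(q_value(["Li6", "n"], ["He4", "T"]), 2))       # ~4.79 MeV, exothermic
print(round(q_value(["Li7", "n"], ["He4", "T", "n"]), 2))  # ~-2.47 MeV, endothermic
print(round(q_value(["D", "T"], ["He4", "n"]), 2))         # ~17.59 MeV, D-T fusion
```

A positive Q means mass is converted to kinetic energy (exothermic); the negative Q for lithium-7 is why that route needs high-energy neutrons.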
Ontario Power Generation's "Tritium Removal Facility" processes up to of heavy water a year, and it separates out about of tritium, making it available for other uses. Deuterium's absorption cross section for thermal neutrons is about 0.52 millibarns, whereas that of oxygen-16 (16O) is about 0.19 millibarns and that of oxygen-17 (17O) is about 240 millibarns. Tritium is an uncommon product of the nuclear fission of uranium-235, plutonium-239, and uranium-233, with a production of about one atom per 10,000 fissions. The release or recovery of tritium needs to be considered in the operation of nuclear reactors, especially in the reprocessing of nuclear fuels and in the storage of spent nuclear fuel. The production of tritium is not a goal, but rather a side effect. It is discharged to the atmosphere in small quantities by some nuclear power plants. In June 2016 the Tritiated Water Task Force released a report on the status of tritium in tritiated water at the Fukushima Daiichi nuclear plant, as part of considering options for final disposal of the stored contaminated cooling water. This identified that the March 2016 holding of tritium on-site was 760 TBq (equivalent to 2.1 g of tritium or 14 mL of tritiated water) in a total of 860,000 m3 of stored water. The report also identified the declining concentration of tritium in the water extracted from the buildings etc. for storage, seeing a factor-of-ten decrease over the five years considered (2011–2016), from 3.3 MBq/L to 0.3 MBq/L (after correction for the 5% annual decay of tritium). According to a report by an expert panel considering the best approach to dealing with this issue, ""Tritium could be separated theoretically, but there is no practical separation technology on an industrial scale. 
Accordingly, a controlled environmental release is said to be the best way to treat low-tritium-concentration water."" Tritium's decay product helium-3 has a very large cross section (5330 barns) for reacting with thermal neutrons, expelling a proton; hence it is rapidly converted back to tritium in nuclear reactors. Tritium occurs naturally due to cosmic rays interacting with atmospheric gases. In the most important reaction for natural production, a fast neutron (which must have energy greater than 4.0 MeV) interacts with atmospheric nitrogen (14N + n → 12C + T). Worldwide, the production of tritium from natural sources is 148 petabecquerels per year. The global equilibrium inventory of tritium created by natural sources remains approximately constant at 2,590 petabecquerels. This is due to a fixed production rate and losses proportional to the inventory. According to a 1996 report from the Institute for Energy and Environmental Research on the US Department of Energy, only of tritium had been produced in the United States from 1955 to 1996. Since it continually decays into helium-3, the total amount remaining was about at the time of the report. Tritium for American nuclear weapons was produced in special heavy water reactors at the Savannah River Site until their closures in 1988. With the Strategic Arms Reduction Treaty (START) after the end of the Cold War, the existing supplies were sufficient for the new, smaller number of nuclear weapons for some time. The production of tritium was resumed with irradiation of rods containing lithium (replacing the usual control rods containing boron, cadmium, or hafnium) at the reactors of the commercial Watts Bar Nuclear Generating Station from 2003 to 2005, followed by extraction of tritium from the rods at the new Tritium Extraction Facility at the Savannah River Site beginning in November 2006. 
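Two of the inventory figures above can be cross-checked with short calculations: the Fukushima equivalence (760 TBq ≈ 2.1 g of tritium ≈ 14 mL of tritiated water) via tritium's specific activity, and the natural global inventory via the steady-state balance inventory = production rate / decay constant. A sketch assuming the commonly cited 12.32-year half-life and treating the tritiated water as HTO at roughly 1 g/mL (both assumptions are mine):

```python
import math

HALF_LIFE_YR = 12.32                          # commonly cited tritium half-life
HALF_LIFE_S = HALF_LIFE_YR * 365.25 * 86400
AVOGADRO = 6.02214e23
M_T, M_HTO = 3.0160492, 20.023                # molar masses of T and HTO, g/mol

# Specific activity: decay constant times atoms per gram (~3.56e14 Bq/g)
specific_activity = math.log(2) / HALF_LIFE_S * AVOGADRO / M_T

# Fukushima holding: 760 TBq expressed as grams of T and millilitres of HTO
grams_T = 760e12 / specific_activity
ml_HTO = grams_T / M_T * M_HTO                # one T atom per water molecule
print(round(grams_T, 1), round(ml_HTO))       # ~2.1 g and ~14 mL, as quoted

# Natural steady state: production / decay constant, in per-year units
inventory_PBq = 148 / (math.log(2) / HALF_LIFE_YR)
print(round(inventory_PBq))                   # ~2630 PBq, near the quoted 2,590
```

The steady-state result is within a few percent of the article's 2,590 PBq; the residual difference reflects rounding in the quoted production rate and half-life.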
Tritium leakage from the rods during reactor operations limits the number that can be used in any reactor without exceeding the maximum allowed tritium levels in the coolant. Tritium has an atomic mass of 3.0160492 u. Diatomic tritium (T2 or 3H2) is a gas at standard temperature and pressure. Combined with oxygen, it forms a liquid called tritiated water (T2O). Tritium's specific activity is about 9,650 curies per gram (roughly 3.6 × 10^14 Bq/g). Tritium figures prominently in studies of nuclear fusion because of its favorable reaction cross section and the large amount of energy (17.6 MeV) produced through its reaction with deuterium: D + T → 4He (3.5 MeV) + n (14.1 MeV). All atomic nuclei contain protons as their only electrically charged particles. They therefore repel one another because like charges repel. However, if the atoms have a high enough temperature and pressure (for example, in the core of the Sun), then their random motions can overcome such electrical repulsion (called the Coulomb force), and they can come close enough for the strong nuclear force to take effect, fusing them into heavier atoms. The tritium nucleus, containing one proton and two neutrons, has the same charge as the nucleus of ordinary hydrogen, and it experiences the same electrostatic repulsive force when brought close to another atomic nucleus. However, the neutrons in the tritium nucleus increase the attractive strong nuclear force when brought close enough to another atomic nucleus. As a result, tritium can more easily fuse with other light atoms, compared with the ability of ordinary hydrogen to do so. The same is true, albeit to a lesser extent, of deuterium. This is why brown dwarfs (so-called failed stars) cannot utilize ordinary hydrogen, but they do fuse the small minority of deuterium nuclei. Like the other isotopes of hydrogen, tritium is difficult to confine. Rubber, plastic, and some kinds of steel are all somewhat permeable.
This has raised concerns that if tritium were used in large quantities, in particular for fusion reactors, it may contribute to radioactive contamination, although its short half-life should prevent significant long-term accumulation in the atmosphere. The high levels of atmospheric nuclear weapons testing that took place prior to the enactment of the Partial Test Ban Treaty proved to be unexpectedly useful to oceanographers. The high levels of tritium oxide introduced into upper layers of the oceans have been used in the years since then to measure the rate of mixing of the upper layers of the oceans with their lower levels. Tritium is an isotope of hydrogen, which allows it to readily bind to hydroxyl radicals, forming tritiated water (HTO), and to carbon atoms. Since tritium is a low-energy beta emitter, it is not dangerous externally (its beta particles are unable to penetrate the skin), but it can be a radiation hazard when inhaled, ingested via food or water, or absorbed through the skin. HTO has a short biological half-life in the human body of 7 to 14 days, which both reduces the total effects of single-incident ingestion and precludes long-term bioaccumulation of HTO from the environment. The biological half-life of tritiated water in the human body, which is a measure of body water turnover, varies with the season. Studies of the biological half-life of free water tritium in occupational radiation workers in the coastal region of Karnataka, India, show that the biological half-life in the winter season is twice that of the summer season. Tritium has leaked from 48 of 65 nuclear sites in the US. In one case, leaking water contained of tritium per litre, which is 375 times the EPA limit for drinking water.
The US Nuclear Regulatory Commission states that in normal operation in 2003, 56 pressurized water reactors released of tritium (maximum: 2,080 Ci; minimum: 0.1 Ci; average: 725 Ci) and 24 boiling water reactors released (maximum: 174 Ci; minimum: 0 Ci; average: 27.7 Ci), in liquid effluents. According to the U.S. Environmental Protection Agency, self-illuminating exit signs improperly disposed of in municipal landfills have recently been found to contaminate waterways. The legal limits for tritium in drinking water vary from country to country. For example, the American limit is calculated to yield a dose of 4.0 millirems (or 40 microsieverts in SI units) per year. This is about 1.3% of the natural background radiation (roughly 3,000 μSv). The beta particles emitted by the radioactive decay of small amounts of tritium cause chemicals called phosphors to glow. This radioluminescence is used in self-powered lighting devices called betalights, which are used for night illumination of firearm sights, watches, exit signs, map lights, navigational compasses (such as current-use M-1950 U.S. military compasses), knives and a variety of other devices. Tritium has replaced radioluminescent paint containing radium in this application. The latter can cause bone cancer and has been banned in most countries for decades. Commercial demand for tritium is 400 grams per year and the cost is approximately US$30,000 per gram. Tritium is an important component in nuclear weapons. It is used to enhance the efficiency and yield of fission bombs and the fission stages of hydrogen bombs in a process known as "boosting" as well as in external neutron initiators for such weapons. These are devices incorporated in nuclear weapons which produce a pulse of neutrons when the bomb is detonated to initiate the fission reaction in the fissionable core (pit) of the bomb, after it is compressed to a critical mass by explosives.
Actuated by an ultrafast switch like a krytron, a small particle accelerator drives ions of tritium and deuterium to energies above the 15 keV or so needed for deuterium-tritium fusion and directs them into a metal target where the tritium and deuterium are adsorbed as hydrides. High-energy neutrons from the resulting fusion radiate in all directions. Some of these strike plutonium or uranium nuclei in the primary's pit, initiating a nuclear chain reaction. The quantity of neutrons produced is large in absolute numbers, allowing the pit to quickly achieve neutron levels that would otherwise need many more generations of chain reaction, though still small compared to the total number of nuclei in the pit. Before detonation, a few grams of tritium-deuterium gas are injected into the hollow "pit" of fissile plutonium or uranium. The early stages of the fission chain reaction supply enough heat and compression to start deuterium-tritium fusion, then both fission and fusion proceed in parallel, the fission assisting the fusion by continuing heating and compression, and the fusion assisting the fission with highly energetic (14.1 MeV) neutrons. As the fission fuel depletes and also explodes outward, it falls below the density needed to stay critical by itself, but the fusion neutrons make the fission process progress faster and continue longer than it would without boosting. Increased yield comes overwhelmingly from the increase in fission. The energy released by the fusion itself is much smaller because the amount of fusion fuel is so much smaller. The effects of boosting include: The tritium in a warhead is continually undergoing radioactive decay, hence becoming unavailable for fusion. Furthermore, its decay product, helium-3, absorbs neutrons if exposed to the ones emitted by nuclear fission.
This potentially offsets or reverses the intended effect of the tritium, which was to generate many free neutrons, if too much helium-3 has accumulated from the decay of tritium. Therefore, it is necessary to replenish tritium in boosted bombs periodically. The estimated quantity needed is 4 grams per warhead. To maintain constant levels of tritium, about 0.20 grams per warhead per year must be supplied to the bomb. One mole of deuterium-tritium gas would contain about 3.0 grams of tritium and 2.0 grams of deuterium. In comparison, the 20 moles of plutonium in a nuclear bomb consist of about 4.5 kilograms of plutonium-239. Since tritium undergoes radioactive decay, and is also difficult to confine physically, the much larger secondary charge of heavy hydrogen isotopes needed in a true hydrogen bomb uses solid lithium deuteride as its source of deuterium and tritium, producing the tritium "in situ" during secondary ignition. During the detonation of the primary fission bomb stage in a thermonuclear weapon (Teller-Ulam staging), the sparkplug, a cylinder of 235U/239Pu at the center of the fusion stage(s), begins to fission in a chain reaction, from excess neutrons channeled from the primary. The neutrons released from the fission of the sparkplug split lithium-6 into tritium and helium-4, while lithium-7 is split into helium-4, tritium, and one neutron. As these reactions occur, the fusion stage is compressed by photons from the primary and fission of the 238U or 238U/235U jacket surrounding the fusion stage. Therefore, the fusion stage breeds its own tritium as the device detonates. In the extreme heat and pressure of the explosion, some of the tritium is then forced into fusion with deuterium, and that reaction releases even more neutrons.
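The quoted replenishment rate is consistent with simple decay arithmetic: the fraction of a tritium stock lost per year follows from the 12.32-year half-life.

```python
# Fraction of a tritium stock lost to decay in one year: 1 - 2**(-1/half_life).
half_life_years = 12.32
annual_loss_fraction = 1 - 2 ** (-1 / half_life_years)   # about 5.5% per year

grams_per_warhead = 4.0            # estimated inventory, from the text
top_up = grams_per_warhead * annual_loss_fraction
print(round(top_up, 2))  # about 0.22 g/year, consistent with the ~0.20 g figure
```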
Since this fusion process requires an extremely high temperature for ignition, and it produces fewer and less energetic neutrons (only fission, deuterium-tritium fusion, and lithium-7 splitting are net neutron producers), lithium deuteride is not used in boosted bombs, but rather for multi-stage hydrogen bombs. Tritium is an important fuel for controlled nuclear fusion in both magnetic confinement and inertial confinement fusion reactor designs. The experimental fusion reactor ITER and the National Ignition Facility (NIF) will use deuterium-tritium fuel. The deuterium-tritium reaction is favorable since it has the largest fusion cross section (about 5.0 barns) and it reaches this maximum cross section at the lowest energy (about 65 keV center-of-mass) of any potential fusion fuel. The Tritium Systems Test Assembly (TSTA) was a facility at the Los Alamos National Laboratory dedicated to the development and demonstration of technologies required for fusion-relevant deuterium-tritium processing. Tritium is sometimes used as a radiolabel. It has the advantage that almost all organic chemicals contain hydrogen, making it easy to find a place to put tritium on the molecule under investigation. It has the disadvantage of producing a comparatively weak signal. Tritium can be used in a betavoltaic device to create an atomic battery to generate electricity. Aside from chlorofluorocarbons, tritium can act as a transient tracer and has the ability to "outline" the biological, chemical, and physical paths throughout the world oceans because of its evolving distribution. Tritium has thus been used as a tool to examine ocean circulation and ventilation and, for such purposes, is usually measured in Tritium Units where 1 TU is defined as the ratio of 1 tritium atom to 10^18 hydrogen atoms, approximately equal to 0.118 Bq/liter.
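The stated equivalence of 1 TU to about 0.118 Bq/liter can be verified by counting hydrogen atoms in a liter of water and applying the decay constant:

```python
import math

# 1 TU = one tritium atom per 1e18 hydrogen atoms. In a litre of water
# (~1000 g, molar mass ~18 g/mol, 2 H atoms per molecule):
AVOGADRO = 6.022e23
h_atoms_per_litre = 2 * (1000.0 / 18.015) * AVOGADRO
t_atoms = h_atoms_per_litre / 1e18          # tritium atoms at 1 TU

half_life_s = 12.32 * 365.25 * 24 * 3600
lam = math.log(2) / half_life_s             # decay constant, per second
activity = lam * t_atoms                    # activity in Bq per litre
print(round(activity, 3))  # about 0.12 Bq/L, close to the quoted 0.118
```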
As noted earlier, nuclear weapons testing, primarily in the high-latitude regions of the Northern Hemisphere, throughout the late 1950s and early 1960s introduced large amounts of tritium into the atmosphere, especially the stratosphere. Before these nuclear tests, there were only about 3 to 4 kilograms of tritium on the Earth's surface; but these amounts rose by 2 or 3 orders of magnitude during the post-test period. Some sources reported natural background levels were exceeded by approximately 1,000 TU in 1963 and 1964, and the isotope is used in the northern hemisphere to estimate the age of groundwater and construct hydrogeologic simulation models. Recent scientific sources have estimated atmospheric levels at the height of weapons testing to approach 1,000 TU and pre-fallout levels of rainwater to be between 5 and 10 TU. In 1963 Valentia Island, Ireland, recorded 2,000 TU in precipitation. While in the stratosphere (post-test period), the tritium interacted with and oxidized to water molecules and was present in much of the rapidly produced rainfall, making tritium a prognostic tool for studying the evolution and structure of the hydrologic cycle as well as the ventilation and formation of water masses in the North Atlantic Ocean. Bomb-tritium data were used from the Transient Tracers in the Ocean (TTO) program in order to quantify the replenishment and overturning rates for deep water located in the North Atlantic. Bomb-tritium also enters the deep ocean around the Antarctic. Most of the bomb tritiated water (HTO) throughout the atmosphere can enter the ocean through the following processes: a) precipitation, b) vapor exchange, and c) river runoff – these processes make HTO a great tracer for time-scales up to a few decades.
Using the data from these processes for 1981, the 1 TU isosurface lies between 500 and 1,000 meters deep in the subtropical regions and then extends to 1,500–2,000 meters south of the Gulf Stream due to recirculation and ventilation in the upper portion of the Atlantic Ocean. To the north, the isosurface deepens and reaches the floor of the abyssal plain which is directly related to the ventilation of the ocean floor over 10 to 20-year time-scales. Also evident in the Atlantic Ocean is the tritium profile near Bermuda between the late 1960s and late 1980s. There is a downward propagation of the tritium maximum from the surface (1960s) to 400 meters (1980s), which corresponds to a deepening rate of approximately 18 meters per year. There are also tritium increases at 1,500 meters depth in the late 1970s and 2,500 meters in the middle of the 1980s, both of which correspond to cooling events in the deep water and associated deep water ventilation. From a study in 1991, the tritium profile was used as a tool for studying the mixing and spreading of newly formed North Atlantic Deep Water (NADW), corresponding to tritium increases to 4 TU. This NADW tends to spill over sills that divide the Norwegian Sea from the North Atlantic Ocean and then flows to the west and equatorward in deep boundary currents. This process was explained via the large-scale tritium distribution in the deep North Atlantic between 1981 and 1983. The sub-polar gyre tends to be freshened (ventilated) by the NADW and is directly related to the high tritium values (> 1.5 TU). Also evident was the decrease in tritium in the deep western boundary current by a factor of 10 from the Labrador Sea to the Tropics, which is indicative of loss to ocean interior due to turbulent mixing and recirculation. 
In a 1998 study, tritium concentrations in surface seawater and atmospheric water vapor (10 meters above the surface) were sampled at the following locations: the Sulu Sea, the Fremantle Bay, the Bay of Bengal, the Penang Bay, and the Strait of Malacca. Results indicated that the tritium concentration in surface seawater was highest at the Fremantle Bay (approximately 0.40 Bq/liter), which could be attributed to the mixing of runoff of freshwater from nearby lands due to large amounts found in coastal waters. Typically, lower concentrations were found between 35 and 45 degrees south latitude and near the equator. Results also indicated that (in general) tritium has decreased over the years (up to 1997) due to the physical decay of bomb tritium in the Indian Ocean. As for water vapor, the tritium concentration was approximately one order of magnitude greater than surface seawater concentrations (ranging from 0.46 to 1.15 Bq/liter). Therefore, the water vapor tritium is not affected by the surface seawater concentration; thus, the high tritium concentrations in the vapor were concluded to be a direct consequence of the downward movement of natural tritium from the stratosphere to the troposphere (therefore, the ocean air showed a dependence on latitudinal change). In the North Pacific Ocean, the tritium (introduced as bomb tritium in the Northern Hemisphere) spread in three dimensions. There were subsurface maxima in the middle and low latitude regions, which is indicative of lateral mixing (advection) and diffusion processes along lines of constant potential density (isopycnals) in the upper ocean. Some of these maxima even correlate well with salinity extrema. In order to obtain the structure for ocean circulation, the tritium concentrations were mapped on 3 surfaces of constant potential density (23.90, 26.02, and 26.81).
Results indicated that the tritium was well-mixed (at 6 to 7 TU) on the 26.81 isopycnal in the subarctic cyclonic gyre and there appeared to be a slow exchange of tritium (relative to shallower isopycnals) between this gyre and the anticyclonic gyre to the south; also, the tritium on the 23.90 and 26.02 surfaces appeared to be exchanged at a slower rate between the central gyre of the North Pacific and the equatorial regions. The depth penetration of bomb tritium can be separated into 3 distinct layers. Layer 1 is the shallowest layer and includes the deepest, ventilated layer in winter; it has received tritium via radioactive fallout and lost some due to advection and/or vertical diffusion and contains approximately 28% of the total amount of tritium. Layer 2 is below the first layer but above the 26.81 isopycnal and is no longer part of the mixed layer. Its 2 sources are diffusion downward from the mixed layer and lateral expansion from outcropping strata (poleward); it contains about 58% of the total tritium. Layer 3 is representative of waters that are deeper than the outcrop isopycnal and can only receive tritium via vertical diffusion; it contains the remaining 14% of the total tritium. The impacts of the nuclear fallout were felt in the United States throughout the Mississippi River System. Tritium concentrations can be used to understand the residence times of continental hydrologic systems (as opposed to the usual oceanic hydrologic systems) which include surface waters such as lakes, streams, and rivers. Studying these systems can also provide societies and municipalities with information for agricultural purposes and overall river water quality. In a 2004 study, several rivers were taken into account during the examination of tritium concentrations (starting in the 1960s) throughout the Mississippi River Basin: Ohio River (largest input to the Mississippi River flow), Missouri River, and Arkansas River.
The largest tritium concentrations were found in 1963 at all the sampled locations throughout these rivers and correlate well with the peak concentrations in precipitation due to the nuclear bomb tests in 1962. The overall highest concentrations occurred in the Missouri River (1963) and were greater than 1,200 TU while the lowest concentrations were found in the Arkansas River (never greater than 850 TU and less than 10 TU in the mid-1980s). Several processes can be identified using the tritium data from the rivers: direct runoff and outflow of water from groundwater reservoirs. Using these processes, it becomes possible to model the response of the river basins to the transient tritium tracer. Two of the most common models are a piston-flow model, in which precipitation moves through the basin with a simple delay, and a well-mixed reservoir model, in which incoming precipitation mixes instantly throughout a single reservoir. Unfortunately, both models fail to reproduce the tritium in river waters; thus, a two-member mixing model was developed that consists of 2 components: a prompt-flow component (recent precipitation – "piston") and a component where waters reside in the basin for longer than 1 year ("well-mixed reservoir"). Therefore, the basin tritium concentration becomes a function of the residence times within the basin, sinks (radioactive decay) or sources of tritium, and the input function. For the Ohio River, the tritium data indicated that about 40% of the flow was composed of precipitation with residence times of less than 1 year (in the Ohio basin) and older waters consisted of residence times of about 10 years. Thus, the short residence times (less than 1 year) corresponded to the "prompt-flow" component of the two-member mixing model. As for the Missouri River, results indicated that residence times were approximately 4 years with the prompt-flow component being around 10% (these results are due to the series of dams in the area of the Missouri River).
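The two-member structure described above can be sketched numerically. The following is a minimal toy model; the parameter values and the input series are invented for illustration and are not the study's calibrated values:

```python
import math

# Minimal sketch of a two-member mixing model:
# river concentration = f * current precipitation ("prompt flow")
#                     + (1 - f) * a well-mixed reservoir.
LAMBDA = math.log(2) / 12.32           # tritium decay constant, per year

def simulate(precip_tu, f=0.4, tau=10.0):
    """Yearly river TU for a yearly precipitation input series.

    f:   prompt-flow fraction (e.g. ~40% for the Ohio River, per the text)
    tau: mean residence time of the well-mixed reservoir, in years
    """
    reservoir = precip_tu[0]
    river = []
    for c_in in precip_tu:
        # Reservoir relaxes toward the input, then loses tritium to decay.
        reservoir += (c_in - reservoir) / tau
        reservoir *= math.exp(-LAMBDA)
        river.append(f * c_in + (1 - f) * reservoir)
    return river

# Toy input: a bomb-test-like spike over a low background.
spike = [10, 10, 1200, 400, 100, 50, 20, 10, 10, 10]
out = simulate(spike)
# The river peak is damped, and the tail stays above the input for years
# because the reservoir releases old tritium slowly.
print([round(x) for x in out])
```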
As for the mass flux of tritium through the main stem of the Mississippi River into the Gulf of Mexico, data indicated that approximately 780 grams of tritium flowed out of the River and into the Gulf between 1961 and 1997, an average of 7.7 PBq/yr. Current fluxes through the Mississippi River are about 1 to 2 grams per year, as opposed to the pre-bomb period fluxes of roughly 0.4 grams per year.
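The quoted average flux follows from the total mass, the time span, and tritium's specific activity (about 3.57 × 10^14 Bq per gram):

```python
# Cross-check the quoted average flux of bomb tritium into the Gulf of Mexico.
total_grams = 780.0
years = 1997 - 1961                      # 36 years
SPECIFIC_ACTIVITY = 3.57e14              # Bq per gram of tritium (approx.)

grams_per_year = total_grams / years     # ~21.7 g/yr
pbq_per_year = grams_per_year * SPECIFIC_ACTIVITY / 1e15
print(round(pbq_per_year, 1))  # about 7.7 PBq/yr, as quoted above
```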
https://en.wikipedia.org/wiki?curid=31278
Typee Typee: A Peep at Polynesian Life is the first book by American writer Herman Melville, published first in London, then New York, in 1846. Considered a classic in travel and adventure literature, the narrative is partly based on the author's actual experiences on the island Nuku Hiva in the South Pacific Marquesas Islands in 1842, liberally supplemented with imaginative reconstruction and adaptation of material from other books. The title comes from the valley of Taipivai, once known as Taipi. "Typee" was Melville's most popular work during his lifetime; it made him notorious as the "man who lived among the cannibals". The book presents itself as a piece of travel adventure, but from the beginning there were questions whether the story was true. The London edition of the book appeared in the publisher John Murray's "Colonial and Home Library" series, accounts of foreigners in exotic places, and the slightly suspicious Murray required reassurance that Melville's experiences were first-hand, not the work of a professional travel writer, and that the author had himself experienced the adventures he described. American readers, however, accepted the story at face value. "Typee" is, "in fact, neither literal autobiography nor pure fiction," says scholar Leon Howard. Melville "drew his material from his experiences, from his imagination, and from a variety of travel books when the memory of his experiences were inadequate." He departed from what actually happened in several ways, sometimes by extending factual incidents, sometimes by fabricating them, and sometimes by what one scholar calls "outright lies". The actual one-month stay on which "Typee" is based is presented as four months in the narrative; there is no lake on the actual island on which Melville might have canoed with the lovely Fayaway, and the ridge which Melville describes climbing after escaping the ship he may actually have seen in an engraving.
He drew extensively on contemporary accounts by Pacific explorers to add to what might otherwise have been a straightforward story of escape, capture, and re-escape. Most American reviewers accepted the story as authentic, though it provoked disbelief among some British readers. Two years after the novel's publication, many of the events described therein were corroborated by Melville's fellow castaway, Richard Tobias "Toby" Greene. "Typee"s narrative expresses sympathy for the so-called savage natives, while criticizing the missionaries' attempts to civilize them: It may be asserted without fear of contradictions that in all the cases of outrages committed by Polynesians, Europeans have at some time or other been the aggressors, and that the cruel and bloodthirsty disposition of some of the islanders is mainly to be ascribed to the influence of such examples. [The] voluptuous Indian, with every desire supplied, whom Providence has bountifully provided with all the sources of pure and natural enjoyment, and from whom are removed so many of the ills and pains of life—what has he to desire at the hands of Civilization? Will he be the happier? Let the once smiling and populous Hawaiian islands, with their now diseased, starving, and dying natives, answer the question. The missionaries may seek to disguise the matter as they will, but the facts are incontrovertible. The narrator states that Typee natives ate an inhabitant of one of the neighboring valleys, but the natives who captured him reassured him that he would not be eaten. "The Knickerbocker" called "Typee" "a piece of Münchhausenism". New York publisher Evert Augustus Duyckinck wrote to Nathaniel Hawthorne that "it is a lively and pleasant book, not over philosophical perhaps." 
In 1939 Charles Robert Anderson published "Melville in the South Seas" in which he documented that Melville had spent only one month on the island (rather than the four months he claimed) and that Melville lifted extensive material from travel narratives. "Typee" was published first in London by John Murray on February 26, 1846, and then in New York by Wiley and Putnam on March 17, 1846. It was Melville's first book, and made him one of the best-known American authors overnight. The same version was published in London and New York in the first edition; however, Melville removed critical references to missionaries and Christianity from the second U.S. edition at the request of his American publisher. Later editions included a "Sequel: The Story of Toby" written by Melville, explaining what happened to Toby. Before "Typee"s publication in New York, Wiley and Putnam asked Melville to remove one sentence. In a scene where the "Dolly" is boarded by young women from Nukuheva, Melville originally wrote: Our ship was now given up to every species of riot and debauchery. Not the feeblest barrier was interposed between the unholy passions of the crew and their unlimited gratification. The second sentence was removed from the final version. The inaugural book of the Library of America series, titled "Typee, Omoo, Mardi" (May 6, 1982), was a volume containing "Typee: A Peep at Polynesian Life", its sequel "Omoo: A Narrative of Adventures in the South Seas" (1847), and "Mardi, and a Voyage Thither" (1849).
https://en.wikipedia.org/wiki?curid=31279
Truncated icosahedron In geometry, the truncated icosahedron is an Archimedean solid, one of 13 convex isogonal nonprismatic solids whose 32 faces are two or more types of regular polygons. It has 12 regular pentagonal faces, 20 regular hexagonal faces, 60 vertices and 90 edges. It is the Goldberg polyhedron GPV(1,1) or {5+,3}1,1, containing pentagonal and hexagonal faces. This geometry is associated with footballs (soccer balls) typically patterned with white hexagons and black pentagons. Geodesic domes such as those whose architecture Buckminster Fuller pioneered are often based on this structure. It also corresponds to the geometry of the fullerene C60 ("buckyball") molecule. It is used in the cell-transitive hyperbolic space-filling tessellation, the bitruncated order-5 dodecahedral honeycomb. This polyhedron can be constructed from an icosahedron with the 12 vertices truncated (cut off) such that one third of each edge is cut off at each of both ends. This creates 12 new pentagon faces, and leaves the original 20 triangle faces as regular hexagons. Thus the length of the edges is one third of that of the original edges. In geometry and graph theory, there are some standard characteristics used to describe such polyhedra. Cartesian coordinates for the vertices of a "truncated icosahedron" centered at the origin are all even permutations of: (0, ±1, ±3φ), (±1, ±(2 + φ), ±2φ), and (±φ, ±2, ±(2φ + 1)), where "φ" = (1 + √5)/2 is the golden mean. The circumradius is ≈ 4.956 and the edges have length 2. The "truncated icosahedron" has five special orthogonal projections, centered on a vertex, on two types of edges, and on two types of faces: hexagonal and pentagonal. The last two correspond to the A2 and H2 Coxeter planes. The truncated icosahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
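The vertex description can be checked directly. The Cartesian coordinates are all even (cyclic) permutations of (0, ±1, ±3φ), (±1, ±(2 + φ), ±2φ), and (±φ, ±2, ±(2φ + 1)); a short sketch generates them and verifies the vertex count, circumradius, and edge length:

```python
from itertools import product
from math import dist, sqrt

# Generate the vertices as all even (cyclic) permutations, with sign changes,
# of the three base triples; phi is the golden ratio.
phi = (1 + sqrt(5)) / 2
bases = [(0, 1, 3 * phi), (1, 2 + phi, 2 * phi), (phi, 2, 2 * phi + 1)]

verts = set()
for base in bases:
    for signs in product((1, -1), repeat=3):
        x, y, z = (s * c for s, c in zip(signs, base))
        for v in ((x, y, z), (y, z, x), (z, x, y)):  # cyclic = even permutations
            verts.add(tuple(round(c, 9) for c in v))

print(len(verts))                       # 60 vertices
r = sqrt(sum(c * c for c in next(iter(verts))))
print(round(r, 3))                      # circumradius ~4.956 for edge length 2
edge = min(dist(u, v) for u in verts for v in verts if u != v)
print(round(edge, 6))                   # nearest-neighbor distance = edge length 2
```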
If the edge length of a truncated icosahedron is "a", the radius of a circumscribed sphere (one that touches the truncated icosahedron at all vertices) is: r = (a/2)√(9φ + 10) = (a/4)√(58 + 18√5) ≈ 2.478a, where "φ" is the golden ratio. This result is easy to get by using one of the three orthogonal golden rectangles drawn into the original icosahedron (before cut off) as the starting point for our considerations. The angle between the segments joining the center and the vertices connected by shared edge (calculated on the basis of this construction) is approximately 23.281446°. The area "A" and the volume "V" of the truncated icosahedron of edge length "a" are: A = 3(10√3 + √(25 + 10√5))a² ≈ 72.607a² and V = ((125 + 43√5)/4)a³ ≈ 55.288a³. With unit edges, the surface area is (rounded) 21 for the pentagons and 52 for the hexagons, together 73 (see areas of regular polygons). The truncated icosahedron easily demonstrates the Euler characteristic: 32 (faces) − 90 (edges) + 60 (vertices) = 2. The balls used in association football and team handball are perhaps the best-known example of a spherical polyhedron analog to the truncated icosahedron, found in everyday life. The ball comprises the same pattern of regular pentagons and regular hexagons, but it is more spherical due to the pressure of the air inside and the elasticity of the ball. This ball type was introduced to the World Cup in 1970 (starting in 2006, this iconic design has been superseded by alternative patterns). Geodesic domes are typically based on triangular facetings of this geometry with example structures found across the world, popularized by Buckminster Fuller. A variation of the icosahedron was used as the basis of the honeycomb wheels (made from a polycast material) used by the Pontiac Motor Division between 1971 and 1976 on its Trans Am and Grand Prix. This shape was also the configuration of the lenses used for focusing the explosive shock waves of the detonators in both the gadget and Fat Man atomic bombs.
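The rounded per-face areas quoted for unit edges follow from the standard closed forms for regular polygon areas, and the Euler characteristic check is immediate; a quick verification (the unit-edge volume (125 + 43√5)/4 is the known closed form for this solid):

```python
from math import sqrt

# Surface area with unit edge: 12 regular pentagons + 20 regular hexagons.
pentagon = sqrt(25 + 10 * sqrt(5)) / 4        # area of a unit-edge pentagon
hexagon = 3 * sqrt(3) / 2                     # area of a unit-edge hexagon

pent_total = 12 * pentagon                    # ~20.65 -> "21" rounded
hex_total = 20 * hexagon                      # ~51.96 -> "52" rounded
print(round(pent_total + hex_total, 3))       # ~72.607 -> "73" rounded

# Volume with unit edge, and the Euler characteristic F - E + V = 2.
volume = (125 + 43 * sqrt(5)) / 4
print(round(volume, 3))                       # ~55.288
print(32 - 90 + 60)                           # faces - edges + vertices = 2
```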
The truncated icosahedron can also be described as a model of the Buckminsterfullerene (fullerene) (C60), or "buckyball", molecule – an allotrope of elemental carbon, discovered in 1985. The diameters of the football and the fullerene molecule are 22 cm and about 0.71 nm, respectively; hence the size ratio is ≈310,000,000:1. In popular craft culture, large sparkleballs can be made using an icosahedron pattern and plastic, styrofoam or paper cups. A truncated icosahedron with "solid edges" by Leonardo da Vinci appears as an illustration in Luca Pacioli's book De divina proportione. These uniform star-polyhedra, and one icosahedral stellation, have nonuniform truncated icosahedra as convex hulls: In the mathematical field of graph theory, a truncated icosahedral graph is the graph of vertices and edges of the "truncated icosahedron", one of the Archimedean solids. It has 60 vertices and 90 edges, and is a cubic Archimedean graph. The truncated icosahedron was known to Archimedes, who studied vertex-transitive polyhedra. However, that work was lost. Later, Johannes Kepler rediscovered and wrote about these solids, including the truncated icosahedron. The associated structure was described by Leonardo da Vinci. Albrecht Dürer also reproduced a similar icosahedron containing 12 pentagonal and 20 hexagonal faces, but there is no clear documentation of this.
https://en.wikipedia.org/wiki?curid=31282
The Mismeasure of Man The Mismeasure of Man is a 1981 book by paleontologist Stephen Jay Gould. The book is both a history and critique of the statistical methods and cultural motivations underlying biological determinism, the belief that “the social and economic differences between human groups—primarily races, classes, and sexes—arise from inherited, inborn distinctions and that society, in this sense, is an accurate reflection of biology”. Gould argues that the primary assumption underlying biological determinism is that, “worth can be assigned to individuals and groups by "measuring intelligence as a single quantity"”. Biological determinism is analyzed in discussions of craniometry and psychological testing, the two principal methods used to measure intelligence as a single quantity. According to Gould, these methods possess two deep fallacies. The first fallacy is reification, which is “our tendency to convert abstract concepts into entities”. Examples of reification include the intelligence quotient (IQ) and the general intelligence factor ("g" factor), which have been the cornerstones of much research into human intelligence. The second fallacy is that of “ranking”, which is the “propensity for ordering complex variation as a gradual ascending scale”. The book received many positive reviews in the literary and popular press, but the reviews in scientific journals were, for the most part, highly critical. Literary reviews praised the book for opposing racism, the concept of general intelligence, and biological determinism. Reviews in scientific journals accused Gould of historical inaccuracy, unclear reasoning, and political bias. "The Mismeasure of Man" won the National Book Critics Circle award. Gould’s findings about how 19th-century researcher Samuel George Morton measured skull volumes came under criticism, and even Gould’s defenders found reasons to criticize his work on this topic. In 1996, a second edition was released. 
It included two additional chapters critiquing Richard Herrnstein and Charles Murray's book "The Bell Curve" (1994). Stephen Jay Gould (1941–2002) was one of the most influential and widely read authors of popular science of his generation. He was known by the general public mainly for his 300 popular essays in "Natural History" magazine. As in "The Mismeasure of Man", Gould criticized biological theories of human behavior in “Against "Sociobiology"” (1975) and “The Spandrels of San Marco and the Panglossian Paradigm” (1979). "The Mismeasure of Man" is a critical analysis of the early works of scientific racism which promoted "the theory of unitary, innate, linearly rankable intelligence"—such as craniometry, the measurement of skull volume and its relation to intellectual faculties. Gould alleged that much of the research was based largely on the racial and social prejudices of the researchers rather than on scientific objectivity; that on occasion, researchers such as Samuel George Morton (1799–1851), Louis Agassiz (1807–1873), and Paul Broca (1824–1880) committed the methodological fallacy of allowing their personal "a priori" expectations to influence their conclusions and analytical reasoning. Gould noted that when Morton switched from using bird seed, which was less reliable, to lead shot to obtain endocranial-volume data, the average skull volumes changed; however, these changes were not uniform across Morton's "racial" groupings. To Gould, it appeared that unconscious bias influenced Morton's initial results. Gould speculated: "Plausible scenarios are easy to construct. Morton, measuring by seed, picks up a threateningly large black skull, fills it lightly and gives it a few desultory shakes. Next, he takes a distressingly small Caucasian skull, shakes hard, and pushes mightily at the foramen magnum with his thumb. It is easily done, without conscious motivation; expectation is a powerful guide to action."
In 1977 Gould conducted his own analysis of some of Morton's endocranial-volume data, and alleged that the original results were based on "a priori" convictions and a selective use of data. He argued that when biases are accounted for, the original hypothesis—an ascending order of skull volume ranging from Blacks to Mongols to Whites—is unsupported by the data. "The Mismeasure of Man" presents a historical evaluation of the concepts of the "intelligence quotient" (IQ) and of the "general intelligence factor" ("g" factor), which were and are the measures of intelligence used by psychologists. Gould proposed that most psychological studies have been heavily biased by the belief that the human behavior of a race of people is best explained by genetic heredity. He cites the Burt Affair, concerning the oft-cited twin studies of Cyril Burt (1883–1971), wherein Burt claimed that human intelligence is highly heritable. As an evolutionary biologist and historian of science, Gould accepted "biological variability" (the premise of the transmission of intelligence via genetic heredity), but opposed "biological determinism", which posits that genes determine a definitive, unalterable social destiny for each man and each woman in life and society. "The Mismeasure of Man" is an analysis of statistical correlation, the mathematics applied by psychologists to establish the validity of IQ tests and the heritability of intelligence. For example, to establish the validity of the proposition that IQ is supported by a general intelligence factor ("g" factor), the answers to several tests of cognitive ability must positively correlate; likewise, for the "g" factor to be a heritable trait, the IQ-test scores of closely related respondents must correlate more strongly than the IQ-test scores of distantly related respondents.
However, correlation does not imply causation; for example, Gould said that the measures of the changes, over time, in "my age, the population of México, the price of Swiss cheese, my pet turtle’s weight, and the average distance between galaxies" have a high, positive correlation—yet that correlation does not indicate that Gould’s age increased because the Mexican population increased. More specifically, a high, positive correlation between the intelligence quotients of a parent and a child can be taken either as evidence that IQ is genetically inherited or as evidence that IQ is transmitted through social and environmental factors. Moreover, because the data from IQ tests can be applied to arguing the logical validity of either proposition—genetic inheritance or environmental inheritance—the psychometric data have no inherent value. Gould pointed out that even if the genetic heritability of IQ were demonstrable within a given racial or ethnic group, it would not explain the causes of IQ differences between groups, or whether said differences can be attributed to the environment. For example, the height of a person is largely genetically determined, but there exist height differences within a given social group that can be attributed both to environmental factors (e.g. the quality of nutrition) and to genetic inheritance. The evolutionary biologist Richard Lewontin, a colleague of Gould’s, is a proponent of this argument in relation to IQ tests. An example of the intellectual confusion about what heritability is and is not, is the statement: "If all environments were to become equal for everyone, heritability would rise to 100 percent because all remaining differences in IQ would necessarily be genetic in origin", which Gould said is misleading, at best, and false, at worst. First, it is very difficult to conceive of a world wherein every man, woman, and child grew up in the same environment, because their spatial and temporal dispersion upon the planet Earth makes it impossible.
Second, were people to grow up in the same environment, not every difference would be genetic in origin, because of the randomness of molecular and genetic development. Therefore, heritability is not a measure of phenotypic (physiognomy and physique) differences among racial and ethnic groups, but a measure of the proportion of phenotypic variation within a given population that is attributable to genetic variation. Furthermore, he dismissed the proposition that an IQ score measures the general intelligence ("g" factor) of a person, because cognitive ability tests (IQ tests) present different types of questions, and the responses tend to form clusters of intellectual acumen. That is, different questions, and the answers to them, yield different scores—which indicate that an IQ test is a composite of different examinations of different things. As such, Gould proposed that IQ-test proponents assume the existence of "general intelligence" as a discrete quality within the human mind, and thus analyze the IQ-test data to produce an IQ number that establishes the definitive general intelligence of each man and each woman. Hence, Gould dismissed the IQ number as an erroneous artifact of the statistical mathematics applied to the raw IQ-test data, especially because psychometric data can be variously analyzed to produce multiple IQ scores. The revised and expanded second edition (1996) includes two additional chapters, which critique Richard Herrnstein and Charles Murray’s book "The Bell Curve" (1994).
Gould maintains that their book contains no new arguments and presents no compelling data; it merely refashions earlier arguments for biological determinism, which Gould defines as “the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status”. The majority of reviews of "The Mismeasure of Man" were positive, as Gould notes. Richard Lewontin, a celebrated evolutionary biologist who held positions at both the University of Chicago and Harvard, wrote a glowing review of Gould's book in "The New York Review of Books", endorsing most aspects of its account, and suggesting that it might have been even more critical of the racist intentions of the scientists he discusses, because scientists "sometimes tell deliberate lies because they believe that small lies can serve big truths." Gould said that the most positive review of the first edition to be written by a psychologist was in the "British Journal of Mathematical & Statistical Psychology", which reported that "Gould has performed a valuable service in exposing the logical basis of one of the most important debates in the social sciences, and this book should be required reading for students and practitioners alike." In "The New York Times", journalist Christopher Lehmann-Haupt wrote that the critique of factor analysis "demonstrates persuasively how factor analysis led to the cardinal error in reasoning, of confusing correlation with cause, or, to put it another way, of attributing false concreteness to the abstract". 
The British journal "Saturday Review" praised the book as a "fascinating historical study of scientific racism", saying that its arguments "illustrate both the logical inconsistencies of the theories and the prejudicially motivated, albeit unintentional, misuse of data in each case". In the American "Monthly Review" magazine, Richard York and the sociologist Brett Clark praised the book's thematic concentration, saying that "rather than attempt a grand critique of all 'scientific' efforts aimed at justifying social inequalities, Gould performs a well-reasoned assessment of the errors underlying a specific set of theories and empirical claims". "Newsweek" gave it a positive review for revealing biased science and its abuse. "The Atlantic Monthly" and Phi Beta Kappa’s "The Key Reporter" also reviewed the book favorably. The first edition of "The Mismeasure of Man" won the non-fiction award from the National Book Critics Circle and the Outstanding Book Award for 1983 from the American Educational Research Association; the Italian translation was awarded the "Iglesias" prize in 1991; and in 1998, the Modern Library ranked it as the 24th-best English-language non-fiction book of the 20th century. In December 2006, "Discover" magazine ranked "The Mismeasure of Man" as the 17th-greatest science book of all time. In a paper published in 1988, John S. Michael reported that Samuel G. Morton's original 19th-century study was conducted with less bias than Gould had described; that "contrary to Gould's interpretation ... Morton's research was conducted with integrity". Nonetheless, Michael's analysis suggested that there were discrepancies in Morton's craniometric calculations, that his data tables were scientifically unsound, and that he "cannot be excused for his errors, or his unfair comparisons of means". Michael later complained that some authors, including J. Philippe Rushton, selectively "cherry-picked facts" from his research to support their own claims.
He lamented, "Some people have turned the Morton-Gould affair into an all or nothing debate in which either one side is right or the other side is right, and I think that is a mistake. Both men made mistakes and proving one wrong does not prove the other one right." In another study, published in 2011, Jason E. Lewis and colleagues re-measured the cranial volumes of the skulls in Morton's collection, and re-examined the respective statistical analyses by Morton and by Gould, concluding that, contrary to Gould's analysis, Morton did not falsify craniometric research results to support his racial and social prejudices, and that the "Caucasians" possessed the greatest average cranial volume in the sample. To the extent that Morton's craniometric measurements were erroneous, the error was away from his personal biases. Ultimately, Lewis and colleagues disagreed with most of Gould's criticisms of Morton, finding that Gould's work was "poorly supported", and that, in their opinion, the confirmation of the results of Morton's original work "weakens the argument of Gould, and others, that biased results are endemic in science". Despite this criticism, the authors acknowledged that they admired Gould's staunch opposition to racism. Lewis' study examined 46% of Morton's samples, whereas Gould's earlier study was based solely on a reexamination of Morton's raw data tables. However, Lewis' study was subsequently criticized by a number of scholars for misrepresenting Gould's claims, for its own bias, for examining fewer than half of the skulls in Morton's collection, for failing to correct measurements for age, gender or stature, and for claiming that any meaningful conclusions could be drawn from Morton's data. In 2015 this paper was reviewed by Michael Weisberg, who reported that "most of Gould's arguments against Morton are sound.
Although Gould made some errors and overstated his case in a number of places, he provided "prima facie" evidence, as yet unrefuted, that Morton did indeed mismeasure his skulls in ways that conformed to 19th century racial biases". Biologists and philosophers Jonathan Kaplan, Massimo Pigliucci, and Joshua Alexander Banta also published a critique of the group's paper, arguing that many of its claims were misleading and that the re-measurements were "completely irrelevant to an evaluation of Gould's published analysis". They also maintained that the "methods deployed by Morton and Gould were both inappropriate" and that "Gould's statistical analysis of Morton's data is in many ways no better than Morton's own". A 2018 paper argued that Morton's data were unbiased but his interpretation of the results was not; the paper argued that he had findings similar to those of a contemporary craniologist, Friedrich Tiedemann, who had interpreted the data differently to argue strongly against any conception of a racial hierarchy. In a review of "The Mismeasure of Man", Bernard Davis, professor of microbiology at Harvard Medical School, said that Gould erected a straw man argument based upon incorrectly defined key terms—specifically "reification"—which Gould furthered with a "highly selective" presentation of statistical data, all motivated more by politics than by science. Davis further claimed that Philip Morrison’s laudatory review of "The Mismeasure of Man" in "Scientific American" was written and published because the editors of the journal had "long seen the study of the genetics of intelligence as a threat to social justice". Davis also observed that the popular-press and literary-journal reviews of "The Mismeasure of Man" were generally approbatory, whereas most scientific-journal reviews were critical.
Nonetheless, in 1994, Gould countered Davis by noting that, of twenty-four academic book reviews written by experts in psychology, fourteen approved of the book, three were mixed, and seven disapproved. Furthermore, Davis accused Gould of having misrepresented a study by Henry H. Goddard (1866–1957) about the intelligence of Jewish, Hungarian, Italian, and Russian immigrants to the U.S.: Gould reported that Goddard qualified those people as "feeble-minded", whereas, in the initial sentence of the study, Goddard said the study subjects were atypical members of their ethnic groups who had been selected because of their suspected sub-normal intelligence. Countering Gould, Davis further explained that Goddard proposed that the low IQs of the sub-normally intelligent men and women who took the cognitive-ability test likely derived from their social environments rather than from their respective genetic inheritances, and concluded that "we may be confident that their children will be of average intelligence, and, if rightly brought up, will be good citizens". In his review, psychologist John B. Carroll said that Gould did not understand "the nature and purpose" of factor analysis. Statistician David J. Bartholomew, of the London School of Economics, said that Gould erred in his use of factor analysis, irrelevantly concentrated upon the fallacy of reification (treating the abstract as concrete), and ignored the contemporary scientific consensus about the existence of the psychometric "g". Reviewing the book, Stephen F. Blinkhorn, a senior lecturer in psychology at the University of Hertfordshire, wrote that "The Mismeasure of Man" was "a masterpiece of propaganda" that selectively juxtaposed data to further a political agenda.
Psychologist Lloyd Humphreys, then editor-in-chief of "The American Journal of Psychology" and "Psychological Bulletin", wrote that "The Mismeasure of Man" was "science fiction" and "political propaganda", and that Gould had misrepresented the views of Alfred Binet, Godfrey Thomson, and Lewis Terman. In his review, psychologist Franz Samelson wrote that Gould was wrong in asserting that the psychometric results of the intelligence tests administered to soldier-recruits by the U.S. Army contributed to the legislation of the Immigration Restriction Act of 1924. In their study of the Congressional Record and committee hearings related to the Immigration Act, Mark Snyderman and Richard J. Herrnstein reported that "the [intelligence] testing community did not generally view its findings as favoring restrictive immigration policies like those in the 1924 Act, and Congress took virtually no notice of intelligence testing". Psychologist David P. Barash wrote that Gould unfairly groups sociobiology with "racist eugenics and misguided Social Darwinism". A 2018 paper argued that Gould was incorrect in his assessment of the Army Beta test and that, by the knowledge, technology, and test-development standards of the time, it was adequate and could measure intelligence, possibly even in the modern day. In his review of "The Mismeasure of Man", Arthur Jensen, an educational psychologist at the University of California, Berkeley, whom Gould much criticized in the book, wrote that Gould used straw man arguments to advance his opinions, misrepresented other scientists, and propounded a political agenda. According to Jensen, the book was "a patent example" of the bias that political ideology imposes upon science—the very thing that Gould sought to portray in the book.
Jensen also criticized Gould for concentrating on long-disproven arguments (noting that 71% of the book's references preceded 1950), rather than addressing "anything currently regarded as important by scientists in the relevant fields", suggesting that drawing conclusions from early human intelligence research is like condemning the contemporary automobile industry based upon the mechanical performance of the Ford Model T. Charles Murray, co-author of "The Bell Curve" (1994), said that his views about the distribution of human intelligence, among the races and the ethnic groups who compose the U.S. population, were misrepresented in "The Mismeasure of Man". Psychologist Hans Eysenck wrote that "The Mismeasure of Man" is a book that presents "a paleontologist's distorted view of what psychologists think, untutored in even the most elementary facts of the science". Arthur Jensen and Bernard Davis argued that if the "g" factor (general intelligence factor) were replaced with a model that tested several types of intelligence, it would change results less than one might expect. Therefore, according to Jensen and Davis, the results of standardized tests of cognitive ability would continue to correlate with the results of other such standardized tests, and that the intellectual achievement gap between black and white people would remain. Psychologist J. Philippe Rushton accused Gould of "scholarly malfeasance" for misrepresenting and for ignoring contemporary scientific research pertinent to the subject of his book, and for attacking dead hypotheses and methods of research. He faulted "The Mismeasure of Man" because it did not mention the magnetic resonance imaging (MRI) studies that showed the existence of statistical correlations among brain-size, IQ, and the "g" factor, despite Rushton having sent copies of the MRI studies to Gould. 
Rushton further criticized the book for the absence of the results of five studies of twins reared apart corroborating the findings of Cyril Burt—the contemporary average was 0.75 compared to the average of 0.77 reported by Burt. James R. Flynn, a researcher critical of racial theories of intelligence, repeated the arguments of Arthur Jensen about the second edition of "The Mismeasure of Man". Flynn wrote that "Gould's book evades all of Jensen's best arguments for a genetic component in the black–white IQ gap, by positing that they are dependent on the concept of "g" as a general intelligence factor. Therefore, Gould believes that if he can discredit "g" no more need be said. This is manifestly false. Jensen’s arguments would bite no matter whether blacks suffered from a score deficit on one or ten or one hundred factors." Rather than defending Jensen and Rushton, however, Flynn concluded that the Flynn Effect, a nongenetic rise in IQ throughout the 20th century, invalidated their core argument because their methods falsely identified even this change as genetic. According to psychologist Ian Deary, Gould's claim that there is no relation between brain size and IQ is outdated. Furthermore, he reported that Gould refused to correct this in new editions of the book, even though newly available data were brought to his attention by several researchers.
https://en.wikipedia.org/wiki?curid=31283
Taliban treatment of women While in power in Afghanistan, the Taliban became notorious internationally for their sexism and violence against women. Their stated motive was to create a "secure environment where the chastity and dignity of women may once again be sacrosanct", reportedly based on Pashtunwali beliefs about living in purdah. Afghan women were forced to wear the burqa at all times in public because, according to one Taliban spokesman, "the face of a woman is a source of corruption" for men not related to them. In a systematic segregation sometimes referred to as gender apartheid, women were not allowed to work, they were not allowed to be educated after the age of eight, and until then were permitted only to study the Qur'an. Women seeking an education were forced to attend underground schools, where they and their teachers risked execution if caught. They were not allowed to be treated by male doctors unless accompanied by a male chaperone, which led to illnesses remaining untreated. They faced public flogging and execution for violations of the Taliban's laws. The Taliban allowed and in some cases encouraged marriage for girls under the age of 16. Amnesty International reported that 80% of Afghan marriages were forced. From the age of eight onward, girls were not allowed to be in direct contact with males other than a close "blood relative", husband, or in-law (see mahram). Women faced numerous other restrictions as well. The Taliban rulings regarding public conduct placed severe restrictions on a woman's freedom of movement and created difficulties for those who could not afford a burqa or did not have a "mahram"; these women faced virtual house arrest. A woman who was badly beaten by the Taliban for walking the streets alone stated: "My father was killed in battle...I have no husband, no brother, no son. How am I to live if I can't go out alone?"
A field worker for the NGO Terre des hommes witnessed the impact on female mobility at Kabul's largest state-run orphanage, Taskia Maskan. After the female staff was relieved of their duties, the approximately 400 girls living at the institution were locked inside for a year without being allowed outside for recreation. Several decrees restricted women's mobility. The lives of rural women were less dramatically affected, as they generally lived and worked within secure kin environments; a relative level of freedom was necessary for them to continue with their chores or labour. If these women travelled to a nearby town, however, the same urban restrictions would have applied to them. The Taliban disagreed with past Afghan statutes that allowed the employment of women in a mixed-sex workplace, claiming that this was a breach of purdah and sharia law. On September 30, 1996, the Taliban decreed that all women should be banned from employment. It is estimated that 25 percent of government employees were female, and, when compounded by losses in other sectors, many thousands of women were affected. This had a devastating impact on household incomes, especially on vulnerable or widow-headed households, which were common in Afghanistan. Another loss was for those whom the employed women had served. Elementary education of children, not just girls, was shut down in Kabul, where virtually all of the elementary school teachers were women. Thousands of educated families fled Kabul for Pakistan after the Taliban took the city in 1996. Among those who remained in Afghanistan, there was an increase in mother and child destitution as the loss of vital income reduced many families to the margin of survival. Taliban Supreme Leader Mohammed Omar assured female civil servants and teachers that they would still receive wages of around US$5 per month, although this was a short-term offering.
A Taliban representative stated: "The Taliban’s act of giving monthly salaries to 30,000 job-free women, now sitting comfortably at home, is a whiplash in the face of those who are defaming Taliban with reference to the rights of women. These people through baseless propaganda are trying to incite the women of Kabul against the Taliban". The Taliban promoted the use of the extended family, or the zakat system of charity, to ensure that women should not need to work. However, years of conflict meant that nuclear families often struggled to support themselves, let alone aid additional relatives. Qualification for aid often rested on men: food aid, for example, had to be collected by a male relative. The possibility that a woman might not possess any living male relatives was dismissed by Mullah Ghaus, the acting foreign minister, who said he was surprised at the degree of international attention and concern for such a small percentage of the Afghan population. For rural women there was generally little change in their circumstances, as their lives were dominated by the unpaid domestic, agricultural and reproductive labour necessary for subsistence. Female health professionals were exempted from the employment ban, yet they operated in much-reduced circumstances. The ordeal of physically getting to work, due to the segregated bus system and widespread harassment, meant some women left their jobs by choice. Of those who remained, many lived in fear of the regime and chose to reside at the hospital during the working week to minimize exposure to Taliban forces. These women were vital to ensuring the continuance of gynecological, ante-natal and midwifery services, albeit at a much-compromised level. Under the Rabbani regime, there had been around 200 female staff working in Kabul's Mullalai Hospital, yet barely 50 remained under the Taliban.
NGOs operating in Afghanistan after the fall of the Taliban in 2001 found the shortage of female health professionals to be a significant obstacle to their work. The other exception to the employment ban allowed a reduced number of humanitarian workers to remain in service. The Taliban segregation codes meant women were invaluable for gaining access to vulnerable women or conducting outreach research. This exception was not sanctioned by the entire Taliban movement, so instances of female participation, or lack thereof, varied with each circumstance. The city of Herat was particularly affected by Taliban adjustments to the treatment of women, as it had been one of the more cosmopolitan and outward-looking areas of Afghanistan prior to 1995. Women had previously been allowed to work in a limited range of jobs, but this was stopped by Taliban authorities. The new governor of Herat, Mullah Razzaq, issued orders for women to be forbidden to pass his office for fear of their distracting nature. The Taliban claimed to recognize their Islamic duty to offer education to both boys and girls, yet a decree was passed that banned girls above the age of 8 from receiving education. Maulvi Kalamadin insisted it was only a temporary suspension and that females would return to school and work once facilities and street security were adapted to prevent cross-gender contact. The Taliban wished to have total control of Afghanistan before calling upon an Ulema body to determine the content of a new curriculum to replace the Islamic yet unacceptable Mujahadin version. The female employment ban was felt greatly in the education system. Within Kabul alone, the ruling affected 106,256 girls, 148,223 male students, and 8,000 female university undergraduates. 7,793 female teachers were dismissed, a move that crippled the provision of education and caused 63 schools to close due to a sudden lack of educators. 
Some women ran clandestine schools within their homes for local children, or for other women under the guise of sewing classes, such as the Golden Needle Sewing School. The learners, parents and educators were aware of the consequences should the Taliban discover their activities, but for those who felt trapped under the strict Taliban rule, such actions allowed them a sense of self-determination and hope. Prior to the Taliban taking power in Afghanistan, male doctors had been allowed to treat women in hospitals, but a decree that no male doctor should be allowed to touch the body of a woman under the pretext of consultation was soon introduced. With fewer female health professionals in employment, the distances many women had to travel for attention increased while the provision of ante-natal clinics declined. In Kabul, some women established informal clinics in their homes to serve family and neighbours, yet as medical supplies were hard to obtain, their effectiveness was limited. Many women endured prolonged suffering or a premature death due to the lack of treatment. For those families that had the means, inclination, and mahram support, medical attention could be sought in Pakistan. In October 1996, women were barred from accessing the traditional hammam, or public baths, as the opportunities for socializing were ruled un-Islamic. These baths were an important facility in a nation where few possessed running water, and the ban gave cause for the UN to predict a rise in scabies and vaginal infections among women denied methods of hygiene as well as access to health care. Nasrine Gross, an Afghan-American author, stated in 2001 that it had been four years since many Afghan women had been able to pray to their God, as "Islam prohibits women from praying without a bath after their periods". In June 1998, the Taliban banned women from attending general hospitals in the capital, whereas before they had been able to attend a women-only ward of general hospitals.
This left only one hospital in Kabul at which they could seek treatment. Family harmony was badly affected by the mental stress, isolation and depression that often accompanied the forced confinement of women. A survey of 160 women concluded that 97 percent showed signs of serious depression and 71 percent reported a decline in their physical well-being. Latifa, a Kabul resident and author, wrote: "The apartment resembles a prison or a hospital. Silence weighs heavily on all of us. As none of us do much, we haven’t got much to tell each other. Incapable of sharing our emotions, we each enclose ourselves in our own fear and distress. Since everyone is in the same black pit, there isn’t much point in repeating time and again that we can’t see clearly." The Taliban closed the country's beauty salons. Cosmetics such as nail varnish and make-up were prohibited. Taliban restrictions on the cultural presence of women covered several areas. Place names including the word "women" were modified so that the word was not used. Women were forbidden to laugh loudly, as it was considered improper for a stranger to hear a woman's voice. Women were prohibited from participating in sports or entering a sports club. The Revolutionary Association of the Women of Afghanistan (RAWA) dealt specifically with these issues. It was founded by Meena Keshwar Kamal, a woman who, amongst other things, established a bilingual magazine called "Women's Message" in 1981. She was assassinated in 1987 at the age of 30, but is revered as a heroine among Afghan women. Punishments were often carried out publicly, either as formal spectacles held in sports stadiums or town squares or as spontaneous street beatings. Civilians lived in fear of harsh penalties, as there was little mercy; women caught breaking decrees were often treated with extreme violence.
Examples: Many punishments were carried out by individual militias without the sanction of Taliban authorities, as it was against official Taliban policy to punish women in the street. A more official line was the punishment of men for instances of female misconduct: a reflection of a patriarchal society and the belief that men are duty-bound to control women. Maulvi Kalamadin stated in 1997, "Since we cannot directly punish women, we try to use taxi drivers and shopkeepers as a means to pressure them" to conform. Here are examples of the punishment of men: The protests of international agencies carried little weight with Taliban authorities, who gave precedence to their interpretation of Islamic law and did not feel bound by UN codes or human rights laws, which they viewed as instruments of Western imperialism. After the Taliban takeover of Herat in 1995, the UN had hoped the gender policies would become more 'moderate' "as it matured from a popular uprising into a responsible government with linkages to the donor community". The Taliban refused to bow to international pressure and reacted calmly to aid suspensions. In January 2006, a London conference on Afghanistan led to the creation of an International Compact, which included benchmarks for the treatment of women. The Compact includes the following point: "Gender: By end-1389 (20 March 2011): the National Action Plan for Women in Afghanistan will be fully implemented; and, in line with Afghanistan’s MDGs, female participation in all Afghan governance institutions, including elected and appointed bodies and the civil service, will be strengthened." However, an Amnesty International report of June 11, 2008 declared that there needed to be "no more empty promises" with regard to Afghanistan, citing the treatment of women as one such unfulfilled goal. Various Taliban groups have been in existence in Pakistan since around 2002. 
Most of these Taliban factions have joined an umbrella organization called Tehrik-i-Taliban Pakistan (TTP). Although the Pakistani Taliban is distinct from the Afghan Taliban, it has a similar outlook towards women. The Pakistani Taliban has likewise killed women it accused of un-Islamic behavior and has forcibly married girls after publicly flogging them for illicit relations.
https://en.wikipedia.org/wiki?curid=31285
Theft Theft is the taking of another person's property or services without that person's permission or consent, with the intent to deprive the rightful owner of it. The word "theft" is also used as an informal shorthand term for some crimes against property, such as burglary, embezzlement, larceny, looting, robbery, shoplifting, library theft or fraud. In some jurisdictions, "theft" is considered to be synonymous with "larceny"; in others, "theft" has replaced "larceny". Someone who carries out an act of theft, or makes a career of it, is known as a thief. "Theft" is the name of a statutory offence in California, Canada, England and Wales, Hong Kong, Northern Ireland, the Republic of Ireland, and the Australian states of South Australia and Victoria. The "actus reus" of theft is usually defined as an unauthorized taking, keeping, or using of another's property, which must be accompanied by a "mens rea" of dishonesty and the intent permanently to deprive the owner or rightful possessor of that property or its use. For example, if X goes to a restaurant and, by mistake, takes Y's scarf instead of her own, she has physically deprived Y of the use of the property (which is the "actus reus"), but the mistake prevents X from forming the "mens rea" (i.e., because she believes that she is the owner, she is not dishonest and does not intend to deprive the "owner" of it), so no crime has been committed at this point. But if she realizes the mistake when she gets home and could return the scarf to Y, she will steal the scarf if she dishonestly keeps it (see theft by finding). Note that there may be civil liability for the torts of trespass to chattels or conversion in either eventuality. 
Section 322(1) of the Criminal Code provides the general definition for theft in Canada: Sections 323 to 333 provide for more specific instances and exclusions: In the general definition above, the Supreme Court of Canada has construed "anything" very broadly, stating that it is not restricted to tangibles but includes intangibles. To be the subject of theft it must, however: Because of this, confidential information cannot be the subject of theft, as it is not capable of being taken, since only tangibles can be taken. It cannot be converted, not because it is an intangible, but because, save in very exceptional far‑fetched circumstances, the owner would never be deprived of it. However, the theft of trade secrets in certain circumstances does constitute part of the offence of economic espionage, which can be prosecuted under s. 19 of the "Security of Information Act". For the purposes of punishment, Section 334 divides theft into two separate offences, according to the value and nature of the goods stolen: Where a motor vehicle is stolen, Section 333.1 provides for a maximum punishment of 10 years for an indictable offence (and a minimum sentence of six months for a third or subsequent conviction), and a maximum sentence of 18 months on summary conviction. Article 2 of the Theft Ordinance provides the general definition of theft in Hong Kong: Theft is a criminal activity in India, with punishments that may include a jail term. Below are excerpts from the Indian Penal Code which state the definitions of and punishments for theft. Whoever, intending to take dishonestly any movable property out of the possession of any person without that person’s consent, moves that property in order to such taking, is said to commit theft. Explanation 1.—A thing so long as it is attached to the earth, not being movable property, is not the subject of theft; but it becomes capable of being the subject of theft as soon as it is severed from the earth. 
Explanation 2.—A moving effected by the same act which effects the severance may be a theft. Explanation 3.—A person is said to cause a thing to move by removing an obstacle which prevented it from moving or by separating it from any other thing, as well as by actually moving it. Explanation 4.—A person, who by any means causes an animal to move, is said to move that animal, and to move everything which, in consequence of the motion so caused, is moved by that animal. Explanation 5.—The consent mentioned in the definition may be express or implied, and may be given either by the person in possession, or by any person having for that purpose authority either express or implied. Whoever commits theft shall be punished with imprisonment of either description for a term which may extend to three years, or with fine, or with both. Whoever commits theft in any building, tent or vessel, which building, tent or vessel is used as a human dwelling, or used for the custody of property, shall be punished with imprisonment of either description for a term which may extend to seven years, and shall also be liable to fine. Whoever, being a clerk or servant, or being employed in the capacity of a clerk or servant, commits theft in respect of any property in the possession of his master or employer, shall be punished with imprisonment of either description for a term which may extend to seven years, and shall also be liable to fine. Whoever commits theft, having made preparation for causing death, or hurt, or restraint, or fear of death, or of hurt, or of restraint, to any person, in order to the committing of such theft, or in order to the effecting of his escape after the committing of such theft, or in order to the retaining of property taken by such theft, shall be punished with rigorous imprisonment for a term which may extend to ten years, and shall also be liable to fine. Theft is a crime with related articles in the Wetboek van Strafrecht. 
Theft is a statutory offence, created by section 4(1) of the Criminal Justice (Theft and Fraud Offences) Act, 2001. According to the Romanian Penal Code a person committing theft ("furt") can face a penalty ranging from 1 to 20 years. Degrees of theft: In England and Wales, theft is a statutory offence, created by section 1(1) of the Theft Act 1968. This offence replaces the former offences of larceny, embezzlement and fraudulent conversion. The marginal note to section 1 of the Theft Act 1968 describes it as a "basic definition" of theft. Sections 1(1) and (2) provide: Sections 2 to 6 of the Theft Act 1968 have effect as regards the interpretation and operation of section 1 of that Act. Except as otherwise provided by that Act, sections 2 to 6 of that Act apply only for the purposes of section 1 of that Act. Section 3 provides: See R v Hinks and Lawrence v Metropolitan Police Commissioner. Section 4(1) provides that: Edward Griew said that section 4(1) could, without changing its meaning, be reduced, by omitting words, to: Sections 4(2) to (4) provide that the following can only be stolen under certain circumstances: Intangible property Confidential information and trade secrets are not property within the meaning of section 4. The words "other intangible property" include export quotas that are transferable for value on a temporary or permanent basis. Electricity Electricity cannot be stolen. It is not property within the meaning of section 4 and is not appropriated by switching on a current. "Cf." the offence of abstracting electricity under section 13. Section 5 "belonging to another" requires a distinction to be made between ownership, possession and control: So if A buys a car for cash, A will be the owner. If A then lends the car to B Ltd (a company), B Ltd will have possession. C, an employee of B Ltd then uses the car and has control. If C uses the car in an unauthorized way, C will steal the car from A and B Ltd. 
This means that it is possible to steal one's own property. In R v Turner, the owner removed his car from the forecourt of a garage where it had been left for collection after repair. He intended to avoid paying the bill. There was an appropriation of the car because it had been physically removed, but there were two issues to be decided: Section 6 "with the intent to permanently deprive the other of it" is sufficiently flexible to include situations where the property is later returned. Alternative verdict The offence created by section 12(1) of the Theft Act 1968 (TWOC) is available as an alternative verdict on an indictment for theft. Visiting forces Theft is an offence against property for the purposes of section 3 of the Visiting Forces Act 1952. Mode of trial and sentence Theft is triable either way. A person guilty of theft is liable, on conviction on indictment, to imprisonment for a term not exceeding seven years, or, on summary conviction, to imprisonment for a term not exceeding six months, or to a fine not exceeding the prescribed sum, or to both. Aggravated theft The only offence of aggravated theft is robbery, contrary to section 8 of the Theft Act 1968. Stolen goods For the purposes of the provisions of the Theft Act 1968 which relate to stolen goods, goods obtained in England or Wales or elsewhere by blackmail or fraud are regarded as stolen, and the words "steal", "theft" and "thief" are construed accordingly. Sections 22 to 24 and 26 to 28 of the Theft Act 1968 contain references to stolen goods. Handling stolen goods The offence of handling stolen goods, contrary to section 22(1) of the Theft Act 1968, can only be committed "otherwise than in the course of stealing". Similar or associated offences According to its title, the Theft Act 1968 revises the law as to theft and similar or associated offences. See also the Theft Act 1978. In Northern Ireland, theft is a statutory offence, created by section 1 of the Theft Act (Northern Ireland) 1969. 
In the United States, crimes must be prosecuted in the jurisdiction in which they occurred. Although federal and state jurisdiction may overlap, even when a criminal act violates both state and federal law, in most cases only the most serious offenses are prosecuted at the federal level. The federal government has criminalized certain narrow categories of theft that directly affect federal agencies or interstate commerce. The Model Penal Code, promulgated by the American Law Institute to help state legislatures update and standardize their laws, includes categories of theft by unlawful taking or by unlawfully disposing of property, theft by deception (fraud), theft by extortion, theft by failure to take measures to return lost or mislaid or mistakenly delivered property, theft by receipt of stolen property, theft by failing to make agreed disposition of received funds, and theft of services. Although many U.S. states have retained larceny as the primary offense, some have now adopted theft provisions. Grand theft, also called "grand larceny", is a term used throughout the United States designating theft that is large in magnitude or serious in potential penological consequences. Grand theft is contrasted with petty theft, also called petit theft, that is of smaller magnitude or lesser seriousness. Theft laws, including the distinction between grand theft and petty theft for cases falling within its jurisdiction, vary by state. This distinction is established by statute, as are the penological consequences. Most commonly, statutes establishing the distinction between grand theft and petty theft do so on the basis of the value of the money or property taken by the thief or lost by the victim, with the dollar threshold for grand theft varying from state to state. Most commonly, the penological consequences of the distinction include the significant one that grand theft can be treated as a felony, while petty theft is generally treated as a misdemeanor. 
In some states, grand theft of a vehicle may be charged as "grand theft auto" (see motor vehicle theft for more information). Repeat offenders who continue to steal may become subject to life imprisonment in certain states. Sometimes the federal anti-theft-of-government-property law is used to prosecute cases where the Espionage Act would otherwise be involved; the theory being that by retaining sensitive information, the defendant has taken a 'thing of value' from the government. For examples, see the Amerasia case and "United States v. Manning". When stolen property exceeds the amount of $500 it is a felony offense. If property is less than $500, then it is a Class A misdemeanor. Unlike some other states, shoplifting is not defined by a separate statute but falls under the state's general theft statute. The Alaska State Code does not use the terms "grand theft" or "grand larceny". However, it specifies that theft of property valued at more than $1,000 is a felony whereas thefts of lesser amounts are misdemeanors. The felony categories (class 1 and class 2 theft) also include theft of firearms; property taken from the person of another; vessel or aircraft safety or survival equipment; and of access devices. Felony theft is committed when the value of the stolen property exceeds $1000. Regardless of the value of the item, if it is a firearm or an animal taken for the purpose of animal fighting, then the theft is a Class 6 Felony. The Theft Act of 1927 consolidated a variety of common law crimes into theft. The state now distinguishes between two types of theft, grand theft and petty theft. The older crimes of embezzlement, larceny, and stealing, and any preexisting references to them now fall under the theft statute. There are a number of criminal statutes in the California Penal Code defining grand theft in different amounts. 
Grand theft generally consists of the theft of something valued over $950 (including money, labor or property, though the threshold is lower for various specified kinds of property). Theft is also considered grand theft when more than $250 in crops or marine life forms is stolen, “when the property is taken from the person of another,” or when the property stolen is an automobile, farm animal, or firearm. Petty theft is the default category for all other thefts. Grand theft is punishable by up to a year in jail or prison, and may be charged (depending upon the circumstances) as a misdemeanor or felony, while petty theft is a misdemeanor punishable by a fine, by imprisonment not exceeding six months in jail, or by both. In general, any property taken that carries a value of more than $300 can be considered grand theft in certain circumstances. In Georgia, when a theft offense involves property valued at $500 or less, the crime is punishable as a misdemeanor. Any theft of property determined to exceed $500 may be treated as grand theft and charged as a felony. Theft in the first or second degree is a felony. Theft in the first degree means theft above $20,000 or of a firearm or explosive, or theft over $300 during a declared emergency. Theft in the second degree means theft above $750, theft from the person of another, or theft of agricultural products over $100 or of aquacultural products from an enclosed property. Theft is a felony if the value of the property exceeds $300 or the property is stolen from the person of another. Thresholds at $10,000, $100,000, and $500,000 determine how severe the punishment can be. The location from which property was stolen is also a factor in sentencing. 
KRS 514.030 states that theft by unlawful taking or disposition is generally a Class A misdemeanor unless the items stolen are a firearm, anhydrous ammonia, a controlled substance valued at less than $10,000, or any other item or combination of items valued at $500 or more but less than $10,000, in which case the theft is a Class D felony. Theft of items valued at $10,000 or more but less than $1,000,000 is a Class C felony. Theft of items valued at $1,000,000 or more is a Class B felony, as is a first offense of theft of anhydrous ammonia for the express purpose of manufacturing methamphetamines in violation of KRS 218A.1432. In the latter case, subsequent offenses are a Class A felony. In Massachusetts, theft may generally be charged as a felony if the value of stolen property is greater than $250. Stealing is a felony if the value of stolen property exceeds $500. It is also a felony if "The actor physically takes the property appropriated from the person of the victim" or the stolen property is a vehicle, legal document, credit card, firearm, explosive, U.S. flag on display, livestock animal, fish with value exceeding $75, captive wildlife, controlled substance, or ammonia. Stealing in excess of $25,000 is usually a class B felony (sentence: 5–15 years), while any other felony stealing (not including the felonies of burglary or robbery) that does not involve chemicals is a class C felony (sentence: up to 7 years). Non-felony stealing is a class A misdemeanor (sentence: up to 1 year). Grand larceny consists of stealing property with a value exceeding $1000; or stealing a public record, secret scientific material, firearm, credit or debit card, ammonia, telephone with service, or motor vehicle or religious item with value exceeding $100; or stealing from the person of another, by extortion, or from an ATM. 
The degree of grand larceny is increased if the theft was from an ATM, through extortion involving fear, or involved a value exceeding the thresholds of $3,000, $50,000, or $1,000,000. Grand Larceny: Value of goods exceeds $900 (13 V.S.A. § 2501) Grand Larceny: Value of goods exceeds $200 (Virginia Code § 18.2-95) Theft of goods valued between $750 and $5000 is second-degree theft, a Class C felony. Theft of goods valued above $5000, of a search-and-rescue dog on duty, of public records from a public office or official, of metal wire from a utility, or of an access device, is a Class B felony, as is theft of a motor vehicle or a firearm. Victoria Theft is defined in the "Crimes Act" 1958 (Vic) as when a person "dishonestly appropriates property belonging to another with the intention of permanently depriving the other of it". The actus reus and mens rea are defined as follows: Appropriation is defined in section 73(4) of the "Crimes Act" 1958 (Vic) as the assumption of any of the owner's rights. It does not have to be all the owner's rights, as long as at least one right has been assumed. If the owner gave their consent to the appropriation, there cannot be an appropriation. However, if this consent is obtained by deception, the consent is vitiated. Property – defined in section 71(1) of the "Crimes Act" 1958 (Vic) as being both tangible property, including money, and intangible property. Information has been held not to be property. Belonging to another – section 73(5) of the "Crimes Act" 1958 (Vic) provides that property belongs to another if that person has ownership, possession, or a proprietary interest in the property. Property can belong to more than one person. Sections 73(9) and 73(10) deal with situations where the accused receives property under an obligation or by mistake. 
South Australia Theft is defined in section 134 of the "Criminal Law Consolidation Act" 1935 (SA) as being where a person deals with property dishonestly, without the owner's consent, and intending to deprive the owner of the property or to make a serious encroachment on the proprietary rights of the owner. Under this law, encroachment on proprietary rights means that the property is dealt with in a way that creates a substantial risk that the property will not be returned to the owner, or that the value of the property will be greatly diminished when the owner does get it back. It also covers cases where property is treated as the defendant's own property to dispose of, disregarding the actual owner's rights. For a basic offence, a person found guilty is liable to imprisonment for up to 10 years. For an aggravated offence, a person found guilty is liable to imprisonment for up to 15 years. Victoria Intention to permanently deprive – defined in s.73(12) as treating the property as if it belonged to the accused rather than to the owner. Dishonestly – section 73(2) of the "Crimes Act" 1958 (Vic) creates a negative definition of the term 'dishonestly'. The section identifies only three circumstances in which the accused is deemed to have been acting honestly: a belief in a legal claim of right, a belief that the owner would have consented, or a belief that the owner could not be found. South Australia Whether a person's conduct is dishonest is a question of fact to be determined by the jury, based on their own knowledge and experience. As with the definition in Victoria, it contains definitions of what is not dishonesty, including a belief in a legal claim of right or a belief that the owner could not be found. In the British West Indies, especially Grenada, there has been a spate of large-scale thefts of tons of sand from beaches. Both Grenada and Jamaica are considering increasing fines and jail time for the thefts. 
In parts of the world which govern with sharia law, the punishment for theft is amputation of the right hand if the thief does not repent. This ruling is derived from sura 5 verse 38 of the Quran which states "As to the thief, Male or female, cut off his or her hands: a punishment by way of example, from Allah, for their crime: and Allah is Exalted in power." This is viewed as being a deterrent. In Buddhism, one of the five precepts prohibits theft, and involves the intention to steal what one perceives as not belonging to oneself ("what is not given") and acting successfully upon that intention. The severity of the act of theft is judged by the worth of the owner and the worth of that which is stolen. Underhand dealings, fraud, cheating and forgery are also included in this precept. Professions that are seen to violate the precept against theft are working in the gambling industry or marketing products that are not actually required for the customer. Possible causes for acts of theft include both economic and non-economic motivations. For example, an act of theft may be a response to the offender's feelings of anger, grief, depression, anxiety and compulsion, boredom, power and control issues, low self-esteem, a sense of entitlement, an effort to conform or fit in with a peer group, or rebellion. Theft from work may be attributed to factors that include greed, perceptions of economic need, support of a drug addiction, a response to or revenge for work-related issues, rationalization that the act is not actually one of stealing, response to opportunistic temptation, or the same emotional issues that may be involved in any other act of theft. The most common reasons for shoplifting include participation in an organized shoplifting ring, opportunistic theft, compulsive acts of theft, thrill-seeking, and theft due to need. 
Studies focusing on shoplifting by teenagers suggest that minors shoplift for reasons including the novelty of the experience, peer pressure, the desire to obtain goods that a minor cannot legally purchase, and for economic reasons, as well as self-indulgence and rebellion against parents. Specific forms of theft and other related offences
https://en.wikipedia.org/wiki?curid=31287
Treason In law, treason is criminal disloyalty, typically to the state. It is a crime that covers some of the more extreme acts against one's nation or sovereign. This usually includes participating in a war against one's native country, attempting to overthrow its government, spying on its military, its diplomats, or its secret services for a hostile and foreign power, or attempting to kill its head of state. A person who commits treason is known in law as a traitor. Historically, in common law countries, treason also covered the murder of specific social superiors, such as the murder of a husband by his wife or of a master by his servant. Treason against the king was known as "high treason" and treason against a lesser superior was "petty treason". As jurisdictions around the world abolished petty treason, "treason" came to refer to what was historically known as high treason. At times, the term "traitor" has been used as a political epithet, regardless of any verifiable treasonable action. In a civil war or insurrection, the winners may deem the losers to be traitors. Likewise, the term "traitor" is used in heated political discussion, typically as a slur against political dissidents, or against officials in power who are perceived as failing to act in the best interest of their constituents. In certain cases, as with the "Dolchstoßlegende" (stab-in-the-back myth), the accusation of treason towards a large group of people can be a unifying political message. In English law, high treason was punishable by being hanged, drawn and quartered (men) or burnt at the stake (women), although beheading could be substituted by royal command (usually for royalty and nobility). Those penalties were abolished in 1814, 1790 and 1973 respectively. The penalty was used by later monarchs against people who could reasonably be called traitors. Many of them would now just be considered dissidents. 
Christian theology and political thinking until after the Enlightenment considered treason and blasphemy synonymous, as treason challenged both the state and the will of God. Kings were considered chosen by God, and to betray one's country was to do the work of Satan. The words "treason" and "traitor" are derived from the Latin "tradere", "to deliver or hand over". Specifically, the term derives from "Traditors", which refers to bishops and other Christians who turned over sacred scriptures or betrayed their fellow Christians to the Roman authorities under threat of persecution during the Diocletianic Persecution between AD 303 and 305. Originally, the crime of treason was conceived of as being committed against the Monarch; a subject failing in his duty of loyalty to the Sovereign and acting against the Sovereign was deemed to be a traitor. As asserted in the 18th-century trial of Johann Friedrich Struensee in Denmark, a man having sexual relations with a Queen could be considered guilty not only of ordinary adultery but also of treason against her husband, the King. The English Revolution in the 17th century and the French Revolution in the 18th introduced a radically different concept of loyalty and treason, under which sovereignty resides with "The Nation" or "The People", to whom the Monarch also owes a duty of loyalty, and for failing in which the Monarch, too, could be accused of treason. Charles I in England and Louis XVI in France were found guilty of such treason and duly executed. However, when Charles II was restored to his throne, he considered the revolutionaries who sentenced his father to death as having been traitors in the more traditional sense. In modern times, "traitor" and "treason" are mainly used with reference to a person helping an enemy in time of war or conflict. Many nations' laws mention various types of treason. "Crimes Related to Insurrection" is internal treason, and may include a coup d'état. 
"Crimes Related to Foreign Aggression" is the treason of actively cooperating with foreign aggression, whether from inside or outside the nation. "Crimes Related to Inducement of Foreign Aggression" is the crime of communicating with aliens secretly to cause foreign aggression or menace. Depending on the country, conspiracy is added to these. In Australia, there are federal and state laws against treason, specifically in the states of New South Wales, South Australia and Victoria. Similarly to treason laws in the United States, citizens of Australia owe allegiance to their sovereign at both the federal and state levels. The federal law defining treason in Australia is provided under section 80.1 of the Criminal Code, contained in the schedule of the Commonwealth Criminal Code Act 1995. It defines treason as follows: A person is not guilty of treason under paragraphs (e), (f) or (h) if their assistance or intended assistance is purely humanitarian in nature. The maximum penalty for treason is life imprisonment. Section 80.1AC of the Act creates the related offence of treachery. The Treason Act 1351, the Treason Act 1795 and the Treason Act 1817 form part of the law of New South Wales. The Treason Act 1795 and the Treason Act 1817 have been repealed by Section 11 of the Crimes Act 1900, except in so far as they relate to the compassing, imagining, inventing, devising, or intending death or destruction, or any bodily harm tending to death or destruction, maim, or wounding, imprisonment, or restraint, of the person of the heirs and successors of King George III of the United Kingdom, and the expressing, uttering, or declaring of such compassings, imaginations, inventions, devices, or intentions, or any of them. Section 12 of the Crimes Act 1900 (NSW) creates an offence which is derived from section 3 of the Treason Felony Act 1848: Section 16 provides that nothing in Part 2 repeals or affects anything enacted by the Treason Act 1351 (25 Edw.3 c. 2). 
This section reproduces section 6 of the Treason Felony Act 1848. The offence of treason was created by section 9A(1) of the Crimes Act 1958. It is punishable by a maximum penalty of life imprisonment. In South Australia, treason is defined under Section 7 of the South Australia Criminal Law Consolidation Act 1935 and punished under Section 10A. Any person convicted of treason against South Australia will receive a mandatory sentence of life imprisonment. According to Brazilian law, treason is the crime of disloyalty by a citizen to the Federal Republic of Brazil, applying to combatants of the Brazilian military forces. Treason during wartime is the only crime for which a person can be sentenced to death "(see capital punishment in Brazil)". The only military person in the history of Brazil to be convicted of treason was Carlos Lamarca, an army captain who deserted to become the leader of a communist-terrorist guerrilla against the military government. Section 46 of the Criminal Code has two degrees of treason, called "high treason" and "treason." However, both of these belong to the historical category of high treason, as opposed to petty treason which does not exist in Canadian law. Section 46 reads as follows: High treason (1) Every one commits high treason who, in Canada, Treason It is also illegal for a Canadian citizen or a person who owes allegiance to Her Majesty in right of Canada to do any of the above outside Canada. The penalty for high treason is life imprisonment. The penalty for treason is imprisonment up to a maximum of life, or up to 14 years for conduct under subsection (2)(b) or (e) in peacetime. Finnish law distinguishes between two types of treasonable offences: "maanpetos", treachery in war, and "valtiopetos", an attack against the constitutional order. The terms "maanpetos" and "valtiopetos" are unofficially translated as treason and high treason, respectively. Both are punishable by imprisonment, and if aggravated, by life imprisonment. 
"Maanpetos" (literally "betrayal of the land") consists in joining enemy armed forces, making war against Finland, or serving or collaborating with the enemy. "Maanpetos" proper can only be committed under conditions of war or the threat of war. Espionage, disclosure of a national secret, and certain other related offences are separately defined under the same rubric in the Finnish criminal code. "Valtiopetos" (literally "betrayal of the state") consists in using violence or the threat of violence, or unconstitutional means, to bring about the overthrow of the Finnish constitution or to overthrow the president, cabinet or parliament, or to prevent them from performing their functions. Article 411-1 of the French Penal Code defines treason as follows: The acts defined by articles 411-2 to 411-11 constitute treason where they are committed by a French national or a soldier in the service of France, and constitute espionage where they are committed by any other person. Article 411-2 prohibits "handing over troops belonging to the French armed forces, or all or part of the national territory, to a foreign power, to a foreign organisation or to an organisation under foreign control, or to their agents". It is punishable by life imprisonment and a fine of €750,000. Generally, parole is not available until 18 years of a life sentence have elapsed. Articles 411-3 to 411-10 define various other crimes of collaboration with the enemy, sabotage, and the like. These are punishable with imprisonment for between seven and 30 years. Article 411-11 makes it a crime to incite any of the above crimes. Besides treason and espionage, there are many other crimes dealing with national security, insurrection, terrorism and so on. These are all to be found in Book IV of the code. German law differentiates between two types of treason: "high treason" ("Hochverrat") and "treason" ("Landesverrat").
High treason, as defined in Section 81 of the German criminal code, is a violent attempt against the existence or the constitutional order of the Federal Republic of Germany, carrying a penalty of life imprisonment or a fixed term of at least ten years. In less serious cases, the penalty is 1–10 years in prison. German criminal law also criminalises high treason against a German state. Preparation of either type of crime is criminal and carries a penalty of up to five years. The other type of treason, "Landesverrat", is defined in Section 94. It is roughly equivalent to espionage; more precisely, it consists of betraying a secret either directly to a foreign power, or to anyone not allowed to know of it; in the latter case, treason is only committed if the aim of the crime was explicitly to damage the Federal Republic or to favor a foreign power. The crime carries a penalty of one to fifteen years in prison. However, in especially severe cases, life imprisonment or a term of at least five years may be imposed. As with many crimes carrying substantial punishments, active repentance is to be considered in mitigation under §83a StGB (Section 83a, Criminal Code). Notable cases involving "Landesverrat" are the Weltbühne trial during the Weimar Republic and the Spiegel scandal of 1962. On 30 July 2015, Germany's Public Prosecutor General Harald Range initiated criminal investigation proceedings against the German blog netzpolitik.org. In Hong Kong, Section 2 of the Crimes Ordinance provides that levying war against the HKSAR Government of the People's Republic of China, conspiring to do so, instigating a foreigner to invade Hong Kong, or assisting any public enemy at war with the HKSAR Government, is treason, punishable with life imprisonment.
Article 39 of the Constitution of Ireland (adopted in 1937) states: treason shall consist only in levying war against the State, or assisting any State or person or inciting or conspiring with any person to levy war against the State, or attempting by force of arms or other violent means to overthrow the organs of government established by the Constitution, or taking part or being concerned in or inciting or conspiring with any person to make or to take part or be concerned in any such attempt. Following the enactment of the 1937 constitution, the Treason Act 1939 provided for imposition of the death penalty for treason. The Criminal Justice Act 1990 abolished the death penalty, setting the punishment for treason at life imprisonment, with parole in not less than forty years. No person has been charged under the Treason Act. Irish republican legitimatists who refuse to recognise the legitimacy of the Republic of Ireland have been charged with lesser crimes under the Offences against the State Acts 1939–1998. Italian law defines various types of crimes that could be generally described as treason ("tradimento"), although they are so many and so precisely defined that none of them is simply called "tradimento" in the text of the "Codice Penale" (Italian Criminal Code). The treason-type crimes are grouped as "crimes against the personhood of the State" ("Crimini contro la personalità dello Stato") in the Second Book, First Title, of the Criminal Code. Articles 241 to 274 detail crimes against the "international personhood of the State", such as "attempt against the wholeness, independence and unity of the State" (art. 241), "hostilities against a foreign State bringing the Italian State into danger of war" (art. 244), "bribery of a citizen by a foreigner against the national interests" (art. 246), and "political or military espionage" (art. 257).
Articles 276 to 292 detail crimes against the "domestic personhood of the State", including "attempt on the President of the Republic" (art. 271), "attempt with purposes of terrorism or of subversion" (art. 280), "attempt against the Constitution" (art. 283), "armed insurrection against the power of the State" (art. 284), and "civil war" (art. 286). Further articles detail other crimes, especially those of conspiracy, such as "political conspiracy through association" (art. 305), or "armed association: creating and participating" (art. 306). The penalties for treason-type crimes, before 1948, included death as the maximum penalty and, for some crimes, as the only penalty possible. Nowadays the maximum penalty is life imprisonment ("ergastolo"). Japan does not technically have a law of treason. Instead, it has an offence of taking part in foreign aggression against the Japanese state ("gaikan zai"; literally "crime of foreign mischief"). The law applies equally to Japanese and non-Japanese people, whereas treason laws in other countries usually apply only to their own citizens. Technically there are two laws, one for the crime of inviting foreign mischief (Japan Criminal Code section 2 clause 81) and the other for supporting foreign mischief once a foreign force has invaded Japan. "Mischief" can be anything from invasion to espionage. Before World War II, Japan had a crime similar to the English crime of high treason ("Taigyaku zai"), which applied to anyone who harmed the Japanese emperor or imperial family. This law was abolished by the American occupation force after World War II. The application of "Crimes Related to Insurrection" to the Aum Shinrikyo cult of religious terrorists was considered. New Zealand has treason laws that are stipulated under the Crimes Act 1961.
Section 73 of the Crimes Act reads as follows: Every one owing allegiance to Her Majesty the Queen in right of New Zealand commits treason who, within or outside New Zealand,— The penalty is life imprisonment, except for conspiracy, for which the maximum sentence is 14 years' imprisonment. Treason was the last capital crime in New Zealand law, with the death penalty for it not abolished until 1989, years after it was abolished for murder. Very few people have been prosecuted for treason in New Zealand, and none have been prosecuted in recent years. Article 85 of the Constitution of Norway states that "[a]ny person who obeys an order the purpose of which is to disturb the liberty and security of the Storting [Parliament] is thereby guilty of treason against the country." Article 275 of the Criminal Code of Russia defines treason as "espionage, disclosure of state secrets, or any other assistance rendered to a foreign State, a foreign organization, or their representatives in hostile activities to the detriment of the external security of the Russian Federation, committed by a citizen of the Russian Federation." The sentence is imprisonment for 12 to 20 years. It is not a capital offence, even though murder and some aggravated forms of attempted murder are (although Russia currently has a moratorium on the death penalty). Subsequent sections provide for further offences against state security, such as armed rebellion and forcible seizure of power. Sweden's treason laws have seen little application in modern times. The most recent case was in 2001. Four teenagers (their names were not reported) were convicted of treason after they assaulted King Carl XVI Gustaf with a strawberry cream cake on 6 September that year. They were fined between 80 and 100 days' income. There is no single crime of treason in Swiss law; instead, multiple criminal prohibitions apply.
Article 265 of the Swiss Criminal Code prohibits "high treason" ("Hochverrat/haute trahison") as follows: Whoever commits an act with the objective of violently – changing the constitution of the Confederation or of a canton, – removing the constitutional authorities of the state from office or making them unable to exercise their authority, – separating Swiss territory from the Confederation or territory from a canton, shall be punished with imprisonment of no less than a year. A separate crime is defined in article 267 as "diplomatic treason" ("Diplomatischer Landesverrat/Trahison diplomatique"): 1. Whoever makes known or accessible a secret, the preservation of which is required in the interest of the Confederation, to a foreign state or its agents, (...) shall be punished with imprisonment of no less than a year. 2. Whoever makes known or accessible a secret, the preservation of which is required in the interest of the Confederation, to the public, shall be punished with imprisonment of up to five years or a monetary penalty. In 1950, in the context of the Cold War, the following prohibition of "foreign enterprises against the security of Switzerland" was introduced as article 266bis: 1 Whoever, with the purpose of inciting or supporting foreign enterprises aimed against the security of Switzerland, enters into contact with a foreign state or with foreign parties or other foreign organizations or their agents, or makes or disseminates untrue or tendentious claims ("unwahre oder entstellende Behauptungen / informations inexactes ou tendancieuses"), shall be punished with imprisonment of up to five years or a monetary penalty. 2 In grave cases the judge may pronounce a sentence of imprisonment of no less than a year. The criminal code also prohibits, among other acts, the suppression or falsification of legal documents or evidence relevant to the international relations of Switzerland (art. 
267, imprisonment of no less than a year) and attacks against the independence of Switzerland and incitement of a war against Switzerland (art. 266, up to life imprisonment). The Swiss military criminal code contains additional prohibitions under the general title of "treason", which also apply to civilians, or to which civilians are (or may by executive decision be made) subject in times of war. These include espionage or transmission of secrets to a foreign power (art. 86); sabotage (art. 86a); "military treason", i.e., the disruption of activities of military significance (art. 87); acting as a franc-tireur (art. 88); disruption of military action by disseminating untrue information (art. 89); military service against Switzerland by Swiss nationals (art. 90); and giving aid to the enemy (art. 91). The penalties for these crimes vary, but include life imprisonment in some cases. Treason "per se" is not defined in the Turkish Penal Code. However, the law defines crimes which are traditionally included in the scope of treason, such as cooperating with the enemy during wartime. Treason is punishable by up to life imprisonment. The British law of treason is entirely statutory and has been so since the Treason Act 1351 (25 Edw. 3 St. 5 c. 2). The Act is written in Norman French, but is more commonly cited in its English translation. The Treason Act 1351 has since been amended several times, and currently provides for four categories of treasonable offences, namely: Another Act, the Treason Act 1702 (1 Anne stat. 2 c. 21), provides for a fifth category of treason, namely: By virtue of the Treason Act 1708, the law of treason in Scotland is the same as the law in England, save that in Scotland the slaying of the Lords of Session and Lords of Justiciary and counterfeiting the Great Seal of Scotland remain treason under sections 11 and 12 of the Treason Act 1708 respectively.
Treason is a reserved matter about which the Scottish Parliament is prohibited from legislating. Two acts of the former Parliament of Ireland passed in 1537 and 1542 create further treasons which apply in Northern Ireland. The penalty for treason was changed from death to a maximum of imprisonment for life in 1998 under the Crime and Disorder Act. Before 1998, the death penalty was mandatory, subject to the royal prerogative of mercy. Since the abolition of the death penalty for murder in 1965, an execution for treason was unlikely to have been carried out. Treason laws were used against Irish insurgents before Irish independence. However, members of the Provisional IRA and other militant republican groups were not prosecuted or executed for treason for levying war against the British government during the Troubles. They, along with members of loyalist paramilitary groups, were jailed for murder, violent crimes or terrorist offences. William Joyce ("Lord Haw-Haw") was the last person to be put to death for treason, in 1946. (On the following day Theodore Schurch was executed for treachery, a similar crime, and was the last man to be executed for a crime other than murder in the UK.) As to who can commit treason, it depends on the ancient notion of allegiance. As such, all British nationals (but not other Commonwealth citizens) owe allegiance to the Queen in right of the United Kingdom wherever they may be, as do Commonwealth citizens and aliens present in the United Kingdom at the time of the treasonable act (except diplomats and foreign invading forces), those who hold a British passport however obtained, and aliens who – having lived in Britain and gone abroad again – have left behind family and belongings. The Treason Act 1695 enacted, among other things, a rule that treason could be proved only in a trial by the evidence of two witnesses to the same offence. Nearly one hundred years later this rule was incorporated into the U.S.
Constitution, which requires two witnesses to the same overt act. It also provided for a three-year time limit on bringing prosecutions for treason (except for assassinating the king), another rule which has been imitated in some common law countries. The Sedition Act 1661 made it treason to imprison, restrain or wound the king. Although this law was abolished in the United Kingdom in 1998, it still continues to apply in some Commonwealth countries. In the 1790s, opposition political parties were new and not fully accepted. Government leaders often considered their opponents to be traitors. Historian Ron Chernow reports that Secretary of the Treasury Alexander Hamilton and President George Washington "regarded much of the criticism fired at their administration as disloyal, even treasonous, in nature." When an undeclared Quasi-War broke out with France in 1797–98, "Hamilton increasingly mistook dissent for treason and engaged in hyperbole." The Jeffersonian opposition party behaved the same way. After 1801, with a peaceful transition in the political party in power, the rhetoric of "treason" against political opponents diminished. To avoid the abuses of the English law, the scope of treason was specifically restricted in the United States Constitution. Article III, section 3 reads as follows: The Constitution does not itself create the offense; it only restricts the definition (the first paragraph), permits the United States Congress to create the offense, and restricts any punishment for treason to only the convicted (the second paragraph). The crime is prohibited by legislation passed by Congress. Therefore, the United States Code states: The requirement of testimony of two witnesses was inherited from the British Treason Act 1695.
However, Congress has passed laws creating related offenses that punish conduct that undermines the government or the national security, such as sedition in the 1798 Alien and Sedition Acts, or espionage and sedition in the Espionage Act of 1917, which do not require the testimony of two witnesses and have a much broader definition than Article Three treason. Some of these laws are still in effect. The well-known spies Julius and Ethel Rosenberg were charged with conspiracy to commit espionage, rather than treason. In the United States, Benedict Arnold's name is considered synonymous with treason due to his collaboration with the British during the American Revolutionary War. This, however, occurred before the Constitution was written. Arnold became a general in the British Army, which protected him. Since the Constitution came into effect, there have been fewer than 40 federal prosecutions for treason and even fewer convictions. Several men were convicted of treason in connection with the 1794 Whiskey Rebellion but were pardoned by President George Washington. The most famous treason trial, that of Aaron Burr in 1807, resulted in acquittal. In 1807, on a charge of treason, Burr was brought to trial before the United States Circuit Court at Richmond, Virginia. The only physical evidence presented to the grand jury was General James Wilkinson's so-called letter from Burr, which proposed the idea of stealing land in the Louisiana Purchase. The trial was presided over by Chief Justice of the United States John Marshall, acting as a circuit judge. Since no witnesses testified, Burr was acquitted in spite of the full force of Jefferson's political influence thrown against him. Immediately afterward, Burr was tried on a misdemeanor charge and was again acquitted. During the American Civil War, treason trials were held in Indianapolis against Copperheads for conspiring with the Confederacy against the United States. 
In addition to treason trials, the federal government passed new laws that allowed prosecutors to try people for the charge of disloyalty. Various legislation was passed, including the Conspiracies Act of July 31, 1861. Because the law defining treason in the Constitution was so strict, new legislation was necessary to prosecute defiance of the government. Many of the people indicted on charges of conspiracy were not taken to trial, but instead were arrested and detained. In addition to the Conspiracies Act of July 31, 1861, in 1862 the federal government went further to redefine treason in the context of the Civil War. The act that was passed is entitled "An Act to Suppress Insurrection; to punish Treason and Rebellion, to seize and confiscate the Property of Rebels, and for other purposes". It is colloquially referred to as the "second Confiscation Act". The act essentially lessened the punishment for treason. Rather than have death as the only possible punishment for treason, the act made it possible to give individuals lesser sentences. After the war the question was whether the United States government would make indictments for treason against leaders of the Confederate States of America, as many people demanded. Jefferson Davis, the Confederate president, was indicted and held in prison for two years. The indictment was dropped in 1869, when the political scene had changed and it was possible he would be acquitted by a jury in Virginia. When accepting Lee's surrender of the Army of Northern Virginia at Appomattox in April 1865, Gen. Ulysses S. Grant assured all Confederate soldiers and officers a blanket amnesty, provided they returned to their homes and refrained from any further acts of hostility, and other Union generals subsequently issued similar terms of amnesty when accepting Confederate surrenders. All Confederate officials received a blanket amnesty issued by President Andrew Johnson as he left office in 1869.
In 1949 Iva Toguri D'Aquino was convicted of treason for wartime radio broadcasts (under the name of "Tokyo Rose") and sentenced to ten years, of which she served six. As a result of prosecution witnesses having lied under oath, she was pardoned in 1977. In 1952 Tomoya Kawakita, a Japanese-American dual citizen, was convicted of treason and sentenced to death for having worked as an interpreter at a Japanese POW camp and having mistreated American prisoners. He was recognized by a former prisoner at a department store in 1946 after having returned to the United States. The sentence was later commuted to life imprisonment and a $10,000 fine. He was released and deported in 1963. The Cold War saw frequent talk linking treason with support for Communist-led causes. The most memorable of these came from Senator Joseph McCarthy, who accused the Democrats of "twenty years of treason". As chair of the Senate Permanent Investigations Subcommittee, McCarthy also investigated various government agencies for Soviet spy rings (see the Venona project); however, he acted as a political fact-finder rather than a criminal prosecutor. The Cold War period saw no prosecutions for explicit treason, but there were convictions and even executions for conspiracy to commit espionage on behalf of the Soviet Union, such as in the Julius and Ethel Rosenberg case. On October 11, 2006, the United States government charged Adam Yahiye Gadahn with treason for videos in which he appeared as a spokesman for al-Qaeda and threatened attacks on American soil. He was killed on January 19, 2015 in an unmanned aircraft (drone) strike in Waziristan, Pakistan. Most states have treason provisions in their constitutions or statutes similar to those in the U.S. Constitution. The Extradition Clause specifically defines treason as an extraditable offense.
Thomas Jefferson in 1791 said that any Virginia official who cooperated with the federal Bank of the United States proposed by Alexander Hamilton was guilty of "treason" against the state of Virginia and should be executed. The Bank opened and no one was prosecuted. Several persons have been prosecuted for treason on the state level. Thomas Dorr was convicted of treason against the state of Rhode Island for his part in the Dorr Rebellion, but was eventually granted amnesty. John Brown was convicted of treason against the Commonwealth of Virginia for his part in the raid on Harpers Ferry, and was hanged. The Mormon prophet Joseph Smith was charged, along with five others, with treason against Missouri, at first before a state military court, but Smith was allowed to escape to Illinois after his case was transferred to a civilian court for trial on charges of treason and other crimes. Smith was later imprisoned for trial on charges of treason against Illinois, but was murdered by a lynch mob while in jail awaiting trial. The Constitution of Vietnam proclaims that treason is the most serious crime. It is further regulated in Article 78 of the Criminal Code: Also, according to the Law on Amnesty as amended in November 2018, those convicted of treason cannot be granted amnesty. Early in Islamic history, the only form of treason was seen as the attempt to overthrow a just government or waging war against the State. According to Islamic tradition, the prescribed punishment ranged from imprisonment to the severing of limbs and the death penalty, depending on the severity of the crime. However, even in cases of treason the repentance of a person would have to be taken into account. Currently, the consensus among major Islamic schools is that apostasy (leaving Islam) is considered treason and that the penalty is death; this is supported not in the Quran but in the hadith.
This conflation of apostasy and treason almost certainly had its roots in the Ridda Wars, in which an army of rebel traitors led by the self-proclaimed prophet Musaylima attempted to destroy the caliphate of Abu Bakr. In the 19th and early 20th century, the Iranian cleric Sheikh Fazlollah Noori opposed the Iranian Constitutional Revolution by inciting insurrection against it through issuing fatwas and publishing pamphlets arguing that democracy would bring vice to the country. The new government executed him for treason in 1909. In Malaysia, it is treason to commit offences against the Yang di-Pertuan Agong's person, or to wage or attempt to wage war or abet the waging of war against the Yang di-Pertuan Agong, a Ruler or Yang di-Pertua Negeri. All these offences are punishable by hanging, which derives from the English treason acts (as a former British colony, Malaysia's legal system is based on English common law). In Algeria, treason is defined as follows: In Bahrain, plotting to topple the regime, collaborating with a foreign hostile country and threatening the life of the Emir are defined as treason and punishable by death. The State Security Law of 1974, which was criticised for permitting severe human rights violations under its Article One, was used to crush dissent that could be seen as treasonous: In the areas controlled by the Palestinian National Authority, it is treason to give assistance to Israeli troops without the authorization of the Palestinian Authority or to sell land to Jews (irrespective of nationality) or non-Jewish Israeli citizens under the Palestinian Land Laws, as part of the PA's general policy of discouraging the expansion of Israeli settlements.
Both crimes are capital offences, although the former provision has not often been enforced since the beginning of effective security cooperation between the Israel Defense Forces, Israel Police, and Palestinian National Security Forces in the mid-2000s under the leadership of Prime Minister Salam Fayyad. Likewise, in the Gaza Strip under the Hamas-led government, any sort of cooperation with or assistance to Israeli forces during military actions is also punishable by death. There are a number of other crimes against the state short of treason. Different cultures have evolved a variety of terms for "traitor" or collaborator, often based on historical incidents of treason in that culture or on people whose names have become bywords for treason.
https://en.wikipedia.org/wiki?curid=31292
Type VII submarine Type VII U-boats were the most common type of German World War II U-boat. A total of 703 boats were built by the end of the war. The lone surviving example is on display at the Laboe Naval Memorial located in Laboe, Schleswig-Holstein, Germany. The Type VII was based on earlier German submarine designs going back to the World War I Type UB III and especially the cancelled Type UG. The Type UG was designed through the Dutch dummy company "NV Ingenieurskantoor voor Scheepsbouw Den Haag" (I.v.S) to circumvent the limitations of the Treaty of Versailles, and was built by foreign shipyards. The Finnish "Vetehinen" class and Spanish Type E-1 also provided some of the basis for the Type VII design. These designs led to the Type VII along with the Type I, the latter being built at the AG Weser shipyard in Bremen, Germany. The production of the Type I was stopped after only two boats; the reasons for this are not certain. The design of the Type I was further used in the development of the Type VII and Type IX. Type VII submarines were the most widely used U-boats of the war and were the most produced submarine class in history, with 703 built. The type had several modifications. The Type VII was the most numerous U-boat type to be involved in the Battle of the Atlantic. Type VIIA U-boats were designed in 1933–34 as the first series of a new generation of attack U-boats. Most Type VIIA U-boats were constructed at Deschimag AG Weser in Bremen, with the exception of U-33 through U-36, which were built at Friedrich Krupp Germaniawerft, Kiel. Despite the highly cramped living quarters, Type VIIA U-boats were generally popular with their crews because of their fast crash dive speed, which was thought to give them more protection from enemy attacks than bigger, more sluggish types. Also, the smaller boat's lower endurance meant patrols were shorter. They were much more powerful than the smaller Type II U-boats they replaced, with four bow and one external stern torpedo tube.
Usually carrying 11 torpedoes on board, they were very agile on the surface and mounted the quick-firing deck gun with about 220 rounds. Ten Type VIIA boats were built between 1935 and 1937. All but two Type VIIA U-boats were sunk during World War II; the two survivors (the boat made famous by Otto Schuhart, and the first submarine to sink a ship in World War II) were both scuttled in Kupfermühlen Bay on 4 May 1945. The boat was powered on the surface by two MAN AG, 6-cylinder, 4-stroke M6V 40/46 diesel engines, giving a total of at 470 to 485 rpm. When submerged it was propelled by two Brown, Boveri & Cie (BBC) GG UB 720/8 double-acting electric motors, giving a total of at 322 rpm. The VIIA had limited fuel capacity, so 24 Type VIIB boats were built between 1936 and 1940 with an additional 33 tonnes of fuel in external saddle tanks, which added another of range when surfaced. More powerful engines made them slightly faster than the VIIA. They had two rudders for greater agility. The torpedo armament was improved by moving the aft tube to the inside of the boat. An additional aft torpedo could now be carried below the deck plating of the aft torpedo room (which also served as the electric motor room), and two watertight compartments under the upper deck could hold two additional torpedoes, giving a total of 14 torpedoes. The one exception lacked a stern tube and carried only 12 torpedoes. Type VIIBs included many of the most famous U-boats of World War II, including the most successful boat of the war and those commanded by Prien, Kretschmer, and Schepke. On the surface the boat was powered by two supercharged MAN, 6-cylinder 4-stroke M6V 40/46 diesels (except for "U-45" to "U-50", "U-83", "U-85", "U-87", "U-99", "U-100", and "U-102", which were powered by two supercharged Germaniawerft 6-cylinder 4-stroke F46 diesels) giving a total of at 470 to 490 rpm.
When submerged, the boat was powered by two AEG GU 460/8-276 electric motors (except in "U-45", "U-46", "U-49", "U-51", "U-52", "U-54", "U-73" to "U-76", "U-99" and "U-100", which retained the BBC motor of the VIIA), giving a total of at 295 rpm. The Type VIIC was the workhorse of the German U-boat force, with 568 commissioned from 1940 to 1945. The first VIIC boat was commissioned in 1940. The Type VIIC was an effective fighting machine and was seen almost everywhere U-boats operated, although its range of only 8,500 nautical miles was not as great as that of the larger Type IX (11,000 nautical miles), severely limiting the time it could spend in the far reaches of the western and southern Atlantic without refueling from a tender or U-boat tanker. The VIIC came into service toward the end of the "First Happy Time" near the beginning of the war and was still the most numerous type in service when Allied anti-submarine efforts finally defeated the U-boat campaign in late 1943 and 1944. The Type VIIC differed from the VIIB only in the addition of an active sonar and a few minor mechanical improvements, making it 2 feet longer and 8 tons heavier. Speed and range were essentially the same. Many of these boats were fitted with snorkels in 1944 and 1945. They had the same torpedo tube arrangement as their predecessors, except for five boats, which had only two bow tubes, and six boats, which had no stern tube. On the surface the boats were propelled by two supercharged Germaniawerft, 6-cylinder, 4-stroke M6V 40/46 diesels totaling at 470 to 490 rpm (a few boats instead used MAN M6V 40/46s). For submerged propulsion, several different electric motors were used. Early models used the VIIB configuration of two AEG GU 460/8-276 electric motors, totaling with a max rpm of 296, while newer boats used two BBC GG UB 720/8, Garbe, Lahmeyer & Co. RP 137/c or Siemens-Schuckert-Werke (SSW) GU 343/38-8 electric motors with the same power output as the AEG motors.
Perhaps the most famous VIIC boat was , featured in the movie "Das Boot." The concept of the "U-flak" or "Flak Trap" originated the previous year, on 31 August 1942, when was seriously damaged by aircraft. Rather than scrap the boat, it was decided to refit her as a heavily armed anti-aircraft boat intended to combat the losses being inflicted by Allied aircraft in the Bay of Biscay. Two 20 mm quadruple "Flakvierling" mounts and an experimental 37 mm automatic gun were installed on the U-flaks' decks. A battery of 86 mm line-carrying anti-aircraft rockets was tested (similar to a device used by the British in the defense of airfields), but this idea proved unworkable. At times, two additional single 20 mm guns were also mounted. The submarines' limited fuel capacities restricted them to operations only within the Bay of Biscay. Only five torpedoes were carried, preloaded in the tubes, to free up space needed for additional gun crew. Four VIIC boats were modified for use as surface escorts for U-boats departing and returning to French Atlantic bases. These "U-flak" boats were , , , and . Conversion began on three others (, , and ) but none was completed and they were eventually returned to duty as standard VIIC attack boats. The modified boats became operational in June 1943 and at first appeared to be successful against a surprised Royal Air Force. Hoping that the extra firepower might allow the boats to survive relentless British air attacks in the Bay of Biscay and reach their operational areas, Dönitz ordered the boats to cross the bay in groups at maximum speed. The effort earned the Germans about two more months of relative freedom, until the RAF modified their tactics. When a pilot saw that a U-boat was going to fight on the surface, he held off attacking and called in reinforcements. When several aircraft had arrived, they all attacked at once. If the U-boat dived, surface vessels were called to the scene to scour the area with sonar and drop depth charges. 
The British also began equipping some aircraft with RP-3 rockets that could sink a U-boat with a single hit, finally making it too dangerous for a U-boat to attempt to fight it out on the surface regardless of its armament. In November 1943, less than six months after the experiment began, it was discontinued. All U-flaks were converted back to standard attack boats and fitted with "Turm 4", the standard anti-aircraft armament for U-boats at the time. (According to German sources, only six aircraft had been shot down by the U-flaks in six missions, three by "U-441", and one each by "U-256", "U-621", and .) Type VIIC/41 was a slightly modified version of the VIIC and had the same armament and engines. The difference was a stronger pressure hull giving them a deeper crush depth and lighter machinery to compensate for the added steel in the hull, making them slightly lighter than the VIIC. A total of 91 were built. All of them from onwards lacked the fittings to handle mines. Today one Type VIIC/41 still exists, on display at Laboe (north of Kiel); it is the only surviving Type VII in the world. The Type VIIC/42 was designed in 1942 and 1943 to replace the aging Type VIIC. It would have had a much stronger pressure hull, with skin thickness up to 28 mm, and would have dived twice as deep as the previous VIICs. These boats would have been very similar in external appearance to the VIIC/41 but with two periscopes in the tower and would have carried two more torpedoes. Contracts were signed for 164 boats and a few boats were laid down, but all were cancelled on 30 September 1943 in favor of the new Type XXI, and none was advanced enough in construction to be launched. The type was to be powered by the same engines as the VIIC. The Type VIID boats, designed in 1939 and 1940, were a lengthened – by – version of the VIIC for use as a minelayer. The mines were carried in, and released from, three banks of five vertical tubes just aft of the conning tower. 
The extended hull also improved fuel and food storage. On the surface the boat used two supercharged Germaniawerft, 6 cylinder, 4-stroke F46 diesels delivering 3,200 bhp (2,400 kW) at between 470 and 490 rpm. When submerged the boat used two AEG GU 460/8-276 electric motors giving a total of 750 shp (560 kW) at 285 rpm. Only one () managed to survive the war; the other five were sunk, killing all crew members. The Type VIIF boats were designed in 1941 as supply boats to rearm U-boats at sea once they had used up their torpedoes. This required a lengthened hull, and they were the largest and heaviest Type VII boats built. They were armed identically to the other Type VIIs except that they could carry up to 39 torpedoes on board and had no deck guns. Only four Type VIIFs were built. Two of them, and , were sent to support the Monsun Gruppe in the Far East; and remained in the Atlantic. Type VIIF U-boats used the same engines as the Type VIID class. Three were sunk during the war; the surviving boat was surrendered to the Allies following Germany's capitulation. Like most surrendered U-boats, it was subsequently scuttled by the Royal Navy.
https://en.wikipedia.org/wiki?curid=31294
Three-age system The three-age system is the periodization of history into three time periods (for example, the Stone Age, the Bronze Age, and the Iron Age), although it also refers to other tripartite divisions of historic time periods. In history, archaeology and physical anthropology, the three-age system is a methodological concept adopted during the 19th century by which artifacts and events of late prehistory and early history could be ordered into a recognizable chronology. It was initially developed by C. J. Thomsen, director of the Royal Museum of Nordic Antiquities, Copenhagen, as a means to classify the museum's collections according to whether the artifacts were made of stone, bronze, or iron. The system first appealed to British researchers working in the science of ethnology, who adopted it to establish race sequences for Britain's past based on cranial types. Although the craniological ethnology that formed its first scholarly context holds no scientific value, the "relative chronology" of the Stone Age, the Bronze Age and the Iron Age is still in use in a general public context, and the three ages remain the underpinning of prehistoric chronology for Europe, the Mediterranean world and the Near East. The structure reflects the cultural and historical background of Mediterranean Europe and the Middle East and soon underwent further subdivisions, including the 1865 partitioning of the Stone Age into Paleolithic and Neolithic periods by John Lubbock. It is, however, of little or no use for the establishment of chronological frameworks in sub-Saharan Africa, much of Asia, the Americas and some other areas, and has little importance in contemporary archaeological or anthropological discussion for these regions. The concept of dividing pre-historical ages into systems based on metals extends far back in European history, probably originated by Lucretius in the first century BC. 
But the present archaeological system of the three main ages—stone, bronze and iron—originates with the Danish archaeologist Christian Jürgensen Thomsen (1788–1865), who placed the system on a more scientific basis by typological and chronological studies, at first, of tools and other artifacts present in the Museum of Northern Antiquities in Copenhagen (later the National Museum of Denmark). He later used artifacts and the excavation reports published or sent to him by Danish archaeologists who were doing controlled excavations. His position as curator of the museum gave him enough visibility to become highly influential on Danish archaeology. A well-known and well-liked figure, he explained his system in person to visitors at the museum, many of them professional archaeologists. In his poem "Works and Days", the ancient Greek poet Hesiod, writing possibly between 750 and 650 BC, defined five successive Ages of Man: 1. Golden, 2. Silver, 3. Bronze, 4. Heroic and 5. Iron. Only the Bronze Age and the Iron Age are based on the use of metal: ... then Zeus the father created the third generation of mortals, the age of bronze ... They were terrible and strong, and the ghastly action of Ares was theirs, and violence. ... The weapons of these men were bronze, of bronze their houses, and they worked as bronzesmiths. There was not yet any black iron. Hesiod knew from the traditional poetry, such as the "Iliad", and the heirloom bronze artifacts that abounded in Greek society, that before the use of iron to make tools and weapons, bronze had been the preferred material and iron was not smelted at all. He did not continue the manufacturing metaphor, but mixed his metaphors, switching over to the market value of each metal. Iron was cheaper than bronze; by that logic the still more precious metals must have come earlier, so there must have been a golden and a silver age. He portrays a sequence of metallic ages, but it is a degradation rather than a progression. Each age has less moral value than the preceding one. 
Of his own age he says: "And I wish that I were not any part of the fifth generation of men, but had died before it came, or had been born afterward." The moral metaphor of the ages of metals continued. Lucretius, however, replaced moral degradation with the concept of progress, which he conceived to be like the growth of an individual human being. The concept is evolutionary: For the nature of the world as a whole is altered by age. Everything must pass through successive phases. Nothing remains forever what it was. Everything is on the move. Everything is transformed by nature and forced into new paths ... The Earth passes through successive phases, so that it can no longer bear what it could, and it can now what it could not before. The Romans believed that the species of animals, including humans, were spontaneously generated from the materials of the Earth, because of which the Latin word "mater", "mother", descends to English-speakers as matter and material. In Lucretius the Earth is a mother, Venus, to whom the poem is dedicated in the first few lines. She brought forth humankind by spontaneous generation. Having been given birth as a species, humans must grow to maturity by analogy with the individual. The different phases of their collective life are marked by the accumulation of customs to form material civilization: The earliest weapons were hands, nails and teeth. Next came stones and branches wrenched from trees, and fire and flame as soon as these were discovered. Then men learnt to use tough iron and copper. With copper they tilled the soil. With copper they whipped up the clashing waves of war, ... Then by slow degrees the iron sword came to the fore; the bronze sickle fell into disrepute; the ploughman began to cleave the earth with iron, ... Lucretius envisioned a pre-technological human that was "far tougher than the men of today ... They lived out their lives in the fashion of wild beasts roaming at large." 
The next stage was the use of huts, fire, clothing, language and the family. City-states, kings and citadels followed them. Lucretius supposes that the initial smelting of metal occurred accidentally in forest fires. The use of copper followed the use of stones and branches and preceded the use of iron. By the 16th century, a tradition had developed, based on observational incidents, true or false, that the black objects found widely scattered in large quantities over Europe had fallen from the sky during thunderstorms and were therefore to be considered generated by lightning. They were published as such by Konrad Gessner in "De rerum fossilium, lapidum et gemmarum maxime figuris & similitudinibus" at Zurich in 1565 and by many others less famous. The name ceraunia, "thunderstones," had been assigned to them. Ceraunia were collected by many persons over the centuries, including Michele Mercati, Superintendent of the Vatican Botanical Garden in the late 16th century. He brought his collection of fossils and stones to the Vatican, where he studied them at leisure, compiling the results in a manuscript, which was published posthumously by the Vatican at Rome in 1717 as "Metallotheca". Mercati was interested in Ceraunia cuneata, "wedge-shaped thunderstones," which seemed to him to be most like axes and arrowheads, and which he now called ceraunia vulgaris, "folk thunderstones," distinguishing his view from the popular one. His view was based on what may be the first in-depth lithic analysis of the objects in his collection, which led him to believe that they were artifacts and to suggest that the historical evolution of these artifacts followed a scheme. Mercati, examining the surfaces of the ceraunia, noted that the stones were of flint and that they had been chipped all over by another stone to achieve by percussion their current forms. The protrusion at the bottom he identified as the attachment point of a haft. 
Concluding that these objects were not ceraunia, he compared collections to determine exactly what they were. The Vatican collections included artifacts from the New World of exactly the shapes of the supposed ceraunia. The reports of the explorers had identified them as implements and weapons or parts of them. Mercati posed the question to himself: why would anyone prefer to manufacture artifacts of stone rather than of metal, a superior material? His answer was that metallurgy was unknown at that time. He cited Biblical passages to prove that in Biblical times stone was the first material used. He also revived the three-age system of Lucretius, which described a succession of periods based on the use of stone (and wood), bronze and iron respectively. Due to the lateness of publication, Mercati's ideas were already being developed independently; however, his writing served as a further stimulus. On 12 November 1734, Nicholas Mahudel, physician, antiquarian and numismatist, read a paper at a public sitting of the Académie Royale des Inscriptions et Belles-Lettres in which he defined three "usages" of stone, bronze and iron in a chronological sequence. He had presented the paper several times that year, but it was rejected until the November revision was finally accepted and published by the Academy in 1740. It was entitled "Les Monumens les plus anciens de l'industrie des hommes, et des Arts reconnus dans les Pierres de Foudres." It expanded the concepts of Antoine de Jussieu, who had gotten a paper accepted in 1723 entitled "De l'Origine et des usages de la Pierre de Foudre". In Mahudel, there is not just one usage for stone, but two more, one each for bronze and iron. He begins his treatise with descriptions and classifications of the "Pierres de Tonnerre et de Foudre", the ceraunia of contemporaneous European interest. 
After cautioning the audience that natural and man-made objects are often easily confused, he asserts that the specific "figures" or distinguishable forms ("formes qui les font distingues") of the stones were man-made, not natural: "It was Man's hand that made them serve as instruments" ("C'est la main des hommes qui les leur a données pour servir d'instrumens" ...). Their cause, he asserts, is "the industry of our forefathers ("l'industrie de nos premiers pères")." He adds later that bronze and iron implements imitate the uses of the stone ones, suggesting a replacement of stone with metals. Mahudel is careful not to take credit for the idea of a succession of usages in time but states: "it is Michel Mercatus, physician of Clement VIII, who first had this idea". He does not coin a term for ages, but speaks only of the times of usages. His use of "l'industrie" foreshadows the 20th-century "industries," but where the moderns mean specific tool traditions, Mahudel meant only the art of working stone and metal in general. An important step in the development of the Three-age System came when the Danish antiquarian Christian Jürgensen Thomsen was able to use the Danish national collection of antiquities and the records of their finds as well as reports from contemporaneous excavations to provide a solid empirical basis for the system. He showed that artifacts could be classified into types and that these types varied over time in ways that correlated with the predominance of stone, bronze or iron implements and weapons. In this way he turned the Three-age System from being an evolutionary scheme based on intuition and general knowledge into a system of relative chronology supported by archaeological evidence. Initially, the three-age system as it was developed by Thomsen and his contemporaries in Scandinavia, such as Sven Nilsson and J.J.A. Worsaae, was grafted onto the traditional biblical chronology. 
But during the 1830s they achieved independence from textual chronologies and relied mainly on typology and stratigraphy. In 1816 Thomsen, at age 27, was appointed to succeed the retiring Rasmus Nyerup as Secretary of the "Kongelige Commission for Oldsagers Opbevaring" ("Royal Commission for the Preservation of Antiquities"), which had been founded in 1807. The post was unsalaried; Thomsen had independent means. At his appointment Bishop Münter said that he was an "amateur with a great range of accomplishments." Between 1816 and 1819 he reorganized the commission's collection of antiquities. In 1819 he opened the first Museum of Northern Antiquities, in Copenhagen, in a former monastery, to house the collections. It later became the National Museum. Like the other antiquarians Thomsen undoubtedly knew of the three-age model of prehistory through the works of Lucretius, the Dane Vedel Simonsen, Montfaucon and Mahudel. Sorting the material in the collection chronologically, he mapped out which kinds of artifacts co-occurred in deposits and which did not, as this arrangement would allow him to discern any trends that were exclusive to certain periods. In this way he discovered that stone tools did not co-occur with bronze or iron in the earliest deposits, while subsequently bronze did not co-occur with iron, so that three periods could be defined by their available materials: stone, bronze and iron. To Thomsen the find circumstances were the key to dating. In 1821 he wrote in a letter to fellow prehistorian Schröder: "nothing is more important than to point out that hitherto we have not paid enough attention to what was found together"; and in 1822: "we still do not know enough about most of the antiquities either; ... only future archaeologists may be able to decide, but they will never be able to do so if they do not observe what things are found together and our collections are not brought to a greater degree of perfection." 
This analysis, emphasizing co-occurrence and systematic attention to archaeological context, allowed Thomsen to build a chronological framework of the materials in the collection and to classify new finds in relation to the established chronology, even without much knowledge of their provenience. In this way, Thomsen's system was a true chronological system rather than an evolutionary or technological system. Exactly when his chronology was reasonably well established is not clear, but by 1825 visitors to the museum were being instructed in his methods. In that year he also wrote to J.G.G. Büsching: "To put artifacts in their proper context I consider it most important to pay attention to the chronological sequence, and I believe that the old idea of first stone, then copper, and finally iron, appears to be ever more firmly established as far as Scandinavia is concerned." By 1831 Thomsen was so certain of the utility of his methods that he circulated a pamphlet, "Scandinavian Artifacts and Their Preservation", advising archaeologists to "observe the greatest care" to note the context of each artifact. The pamphlet had an immediate effect. Results reported to him confirmed the universality of the Three-age System. Thomsen also published articles in 1832 and 1833 in the "Nordisk Tidsskrift for Oldkyndighed" ("Scandinavian Journal of Archaeology"). He already had an international reputation when in 1836 the Royal Society of Northern Antiquaries published his illustrated contribution to the "Guide to Scandinavian Archaeology", in which he put forth his chronology together with comments about typology and stratigraphy. Thomsen was the first to perceive typologies of grave goods, grave types, methods of burial, pottery and decorative motifs, and to assign these types to layers found in excavation. 
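Thomsen's co-occurrence reasoning can be caricatured as a small computation. The sketch below is illustrative only: the deposits and materials are invented, not actual find data, and the code merely shows how pairwise co-occurrence in closed finds can imply a relative ordering of materials.

```python
# Toy sketch (illustrative, not from the source) of co-occurrence dating.
from collections import Counter
from itertools import combinations

# Each deposit is modelled as the set of materials found together in it.
deposits = [
    {"stone"},
    {"stone", "bronze"},
    {"bronze"},
    {"bronze", "iron"},
    {"iron"},
]

# Count how often each pair of materials occurs in the same deposit.
cooccur = Counter()
for deposit in deposits:
    for pair in combinations(sorted(deposit), 2):
        cooccur[pair] += 1

# Stone overlaps with bronze, and bronze with iron, but stone never
# co-occurs with iron: the overlaps chain into the relative order
# stone -> bronze -> iron, with no direct stone/iron contact.
assert ("bronze", "stone") in cooccur
assert ("bronze", "iron") in cooccur
assert ("iron", "stone") not in cooccur and ("stone", "iron") not in cooccur
```

In modern archaeology this kind of ordering is formalized as seriation; the sketch only illustrates the co-occurrence idea Thomsen describes in his letters.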
His published and personal advice to Danish archaeologists concerning the best methods of excavation produced immediate results that not only verified his system empirically but placed Denmark in the forefront of European archaeology for at least a generation. He became a national authority when C.C. Rafn, secretary of the "Kongelige Nordiske Oldskriftselskab" ("Royal Society of Northern Antiquaries"), published his principal manuscript in "Ledetraad til Nordisk Oldkyndighed" ("Guide to Scandinavian Archaeology") in 1836. The system has since been expanded by further subdivision of each era, and refined through further archaeological and anthropological finds. It was to be a full generation before British archaeology caught up with the Danish. When it did, the leading figure was another multi-talented man of independent means: John Lubbock, 1st Baron Avebury. After reviewing the Three-age System from Lucretius to Thomsen, Lubbock improved it and took it to another level, that of cultural anthropology. Thomsen had been concerned with techniques of archaeological classification. Lubbock found correlations with the customs of savages and civilization. In his 1865 book, "Prehistoric Times", Lubbock divided the Stone Age in Europe, and possibly nearer Asia and Africa, into the Palaeolithic and the Neolithic. By "drift" Lubbock meant river-drift, the alluvium deposited by a river. For the interpretation of Palaeolithic artifacts, Lubbock, pointing out that the times are beyond the reach of history and tradition, suggests an analogy, which was adopted by the anthropologists. Just as the paleontologist uses modern elephants to help reconstruct fossil pachyderms, so the archaeologist is justified in using the customs of the "non-metallic savages" of today to understand "the early races which inhabited our continent." 
He devotes three chapters to this approach, covering the "modern savages" of the Indian and Pacific Oceans and the Western Hemisphere, but something of a deficit in what would today be called professionalism reveals a field yet in its infancy: Perhaps it will be thought ... I have selected ... the passages most unfavorable to savages. ... In reality the very reverse is the case. ... Their real condition is even worse and more abject than that which I have endeavoured to depict. Sir John Lubbock's use of the terms Palaeolithic ("Old Stone Age") and Neolithic ("New Stone Age") was immediately popular. They were applied, however, in two different senses: geologic and anthropologic. In 1867–68 Ernst Haeckel, in 20 public lectures in Jena entitled "General Morphology", to be published in 1870, referred to the Archaeolithic, the Palaeolithic, the Mesolithic and the Caenolithic as periods in geologic history. He could only have got these terms from Hodder Westropp, who took Palaeolithic from Lubbock and invented Mesolithic ("Middle Stone Age") and Caenolithic instead of Lubbock's Neolithic. None of these terms appears anywhere, including the writings of Haeckel, before 1865. Haeckel's use was innovative. Westropp first used Mesolithic and Caenolithic in 1865, almost immediately after the publication of Lubbock's first edition. He read a paper on the topic before the Anthropological Society of London in 1865, published in 1866 in the "Memoirs". After asserting that "Man, in all ages and in all stages of his development, is a tool-making animal," Westropp goes on to define "different epochs of flint, stone, bronze or iron; ..." He never did distinguish the flint from the Stone Age (having realized they were one and the same), but he divided the Stone Age into three ages, named respectively the Palaeolithic, the Mesolithic and the Kainolithic. 
He was careful to qualify these by stating: Their presence is thus not always an evidence of a high antiquity, but of an early and barbarous state; ... Lubbock's savagery was now Westropp's barbarism. A fuller exposition of the Mesolithic waited for his book, "Pre-Historic Phases", dedicated to Sir John Lubbock, published in 1872. At that time he restored Lubbock's Neolithic and defined a Stone Age divided into three phases and five stages. The First Stage, "Implements of the Gravel Drift," contains implements that were "roughly knocked into shape." His illustrations show Mode 1 and Mode 2 stone tools, basically Acheulean handaxes. Today they are in the Lower Palaeolithic. The Second Stage, "Flint Flakes" are of the "simplest form" and were struck off cores. Westropp differs in this definition from the modern, as Mode 2 contains flakes for scrapers and similar tools. His illustrations, however, show Modes 3 and 4, of the Middle and Upper Palaeolithic. His extensive lithic analysis leaves no doubt. They are, however, part of Westropp's Mesolithic. The Third Stage, "a more advanced stage" in which "flint flakes were carefully chipped into shape," produced small arrowheads from shattering a piece of flint into "a hundred pieces", selecting the most suitable and working it with a punch. The illustrations show that he had microliths, or Mode 5 tools in mind. His Mesolithic is therefore partly the same as the modern. The Fourth Stage is a part of the Neolithic that is transitional to the Fifth Stage: axes with ground edges leading to implements totally ground and polished. Westropp's agriculture is removed to the Bronze Age, while his Neolithic is pastoral. The Mesolithic is reserved to hunters. In that same year, 1872, Sir John Evans produced a massive work, "The Ancient Stone Implements", in which he in effect repudiated the Mesolithic, making a point to ignore it, denying it by name in later editions. 
He wrote: Sir John Lubbock has proposed to call them the Archaeolithic, or Palaeolithic, and the Neolithic Periods respectively, terms which have met with almost general acceptance, and of which I shall avail myself in the course of this work. Evans did not, however, follow Lubbock's general trend, which was typological classification. He chose instead to use type of find site as the main criterion, following Lubbock's descriptive terms, such as tools of the drift. Lubbock had identified drift sites as containing Palaeolithic material. Evans added to them the cave sites. Opposed to drift and cave were the surface sites, where chipped and ground tools often occurred in unlayered contexts. Evans decided he had no choice but to assign them all to the most recent. He therefore consigned them to the Neolithic and used the term "Surface Period" for it. Having read Westropp, Sir John knew perfectly well that all the former's Mesolithic implements were surface finds. He used his prestige to quell the concept of Mesolithic as best he could, but the public could see that his methods were not typological. The less prestigious scientists publishing in the smaller journals continued to look for a Mesolithic. For example, Isaac Taylor in "The Origin of the Aryans", 1889, mentions the Mesolithic but briefly, asserting, however, that it formed "a transition between the Palaeolithic and Neolithic Periods." Nevertheless, Sir John fought on, opposing the Mesolithic by name as late as the 1897 edition of his work. Meanwhile, Haeckel had totally abandoned the geologic uses of the -lithic terms. The concepts of Palaeozoic, Mesozoic and Cenozoic had originated in the early 19th century and were gradually becoming coin of the geologic realm. Realizing he was out of step, Haeckel started to transition to the -zoic system as early as 1876 in "The History of Creation", placing the -zoic form in parentheses next to the -lithic form. 
The gauntlet was officially thrown down before Sir John by J. Allen Brown, speaking for the opposition before the Anthropological Institute on 8 March 1892. In the journal he opens the attack by striking at a "hiatus" in the record: It has been generally assumed that a break occurred between the period during which ... the continent of Europe was inhabited by Palaeolithic Man and his Neolithic successor ... No physical cause, no adequate reasons have ever been assigned for such a hiatus in human existence ... The main hiatus at that time was between British and French archaeology, as the latter had already discovered the gap 20 years earlier and had already considered three answers and arrived at one solution, the modern. Whether Brown did not know or was pretending not to know is unclear. In 1872, the very year of Evans' publication, Mortillet had presented the gap to the Congrès international d'Anthropologie at Brussels: Between the Palaeolithic and Neolithic, there is a wide and deep gap, a large hiatus. Apparently prehistoric man was hunting big game with stone tools one year and farming with domestic animals and ground stone tools the next. Mortillet postulated a "time then unknown ("époque alors inconnue")" to fill the gap. The hunt for the "unknown" was on. On 16 April 1874, Mortillet retracted. "That hiatus is not real ("Cet hiatus n'est pas réel")," he said before the "Société d'Anthropologie", asserting that it was an informational gap only. The other theory had been a gap in nature, that, because of the ice age, man had retreated from Europe. The information must now be found. In 1895 Édouard Piette stated that he had heard Édouard Lartet speak of "the remains from the intermediate period ("les vestiges de l'époque intermédiaire")", which were yet to be discovered, but Lartet had not published this view. The gap had become a transition. 
However, asserted Piette: "I was fortunate to discover the remains of that unknown time which separated the Magdalenian age from that of polished stone axes ... it was at Mas-d'Azil in 1887 and 1888 when I made this discovery." He had excavated the type site of the Azilian Culture, the basis of today's Mesolithic. He found it sandwiched between the Magdalenian and the Neolithic. The tools were like those of the Danish kitchen-middens, termed the Surface Period by Evans, which were the basis of Westropp's Mesolithic. They were Mode 5 stone tools, or microliths. He mentions neither Westropp nor the Mesolithic, however. For him this was a "solution of continuity" ("solution de continuité"). To it he assigns the semi-domestication of dog, horse, cow, etc., which "greatly facilitated the work of Neolithic man ("a beaucoup facilité la tâche de l'homme néolithique")." Brown in 1892 does not mention Mas-d'Azil. He refers to the "transition or 'Mesolithic' forms" but to him these are "rough hewn axes chipped over the entire surface" mentioned by Evans as the earliest of the Neolithic. Where Piette believed he had discovered something new, Brown wanted to break out known tools considered Neolithic. Sir John Evans never changed his mind, giving rise to a dichotomous view of the Mesolithic and a multiplication of confusing terms. On the continent, all seemed settled: there was a distinct Mesolithic with its own tools, and both tools and customs were transitional to the Neolithic. Then in 1910, the Swedish archaeologist Knut Stjerna addressed another problem of the Three-Age System: although a culture was predominantly classified as one period, it might contain material that was the same as or like that of another. His example was the Gallery grave Period of Scandinavia. It was not uniformly Neolithic, but contained some objects of bronze and, more importantly to him, three different subcultures. 
One of these "civilisations" (sub-cultures), located in the north and east of Scandinavia, was rather different, featuring but few gallery graves, using instead stone-lined pit graves containing implements of bone, such as harpoon and javelin heads. He observed that they "persisted during the recent Paleolithic period and also during the Protoneolithic." Here he had used a new term, "Protoneolithic", which, according to him, was to be applied to the Danish kitchen-middens. Stjerna also said that the eastern culture "is attached to the Paleolithic civilization ("se trouve rattachée à la civilisation paléolithique")." However, it was not intermediary, and of its intermediates he said "we cannot discuss them here ("nous ne pouvons pas examiner ici")." This "attached" and non-transitional culture he chose to call the Epipaleolithic, defining it as follows: With Epipaleolithic I mean the period during the early days that followed the age of the reindeer, the one that retained Paleolithic customs. This period has two stages in Scandinavia, that of Maglemose and that of Kunda. ("Par époque épipaléolithique j'entends la période qui, pendant les premiers temps qui ont suivi l'âge du Renne, conserve les coutumes paléolithiques. Cette période présente deux étapes en Scandinavie, celle de Maglemose et de Kunda.") There is no mention of any Mesolithic, but the material he described had been previously connected with the Mesolithic. Whether or not Stjerna intended his Protoneolithic and Epipaleolithic as a replacement for the Mesolithic is not clear, but Hugo Obermaier, a German archaeologist who taught and worked for many years in Spain, to whom the concepts are often erroneously attributed, used them to mount an attack on the entire concept of the Mesolithic. He presented his views in "El Hombre fósil", 1916, which was translated into English in 1924. 
Viewing the Epipaleolithic and the Protoneolithic as a "transition" and an "interim" he affirmed that they were not any sort of "transformation:" But in my opinion this term is not justified, as it would be if these phases presented a natural evolutionary development – a progressive transformation from Paleolithic to Neolithic. In reality, the final phase of the Capsian, the Tardenoisian, the Azilian and the northern Maglemose industries are the posthumous descendants of the Palaeolithic ... The ideas of Stjerna and Obermaier introduced a certain ambiguity into the terminology, which subsequent archaeologists found and find confusing. Epipaleolithic and Protoneolithic cover the same cultures, more or less, as does the Mesolithic. Publications on the Stone Age after 1916 include some sort of explanation of this ambiguity, leaving room for different views. Strictly speaking the Epipaleolithic is the earlier part of the Mesolithic. Some identify it with the Mesolithic. To others it is an Upper Paleolithic transition to the Mesolithic. The exact use in any context depends on the archaeological tradition or the judgement of individual archaeologists. The issue continues. The post-Darwinian approach to the naming of periods in earth history focused at first on the lapse of time: early (Palaeo-), middle (Meso-) and late (Ceno-). This conceptualization automatically imposes a three-age subdivision to any period, which is predominant in modern archaeology: Early, Middle and Late Bronze Age; Early, Middle and Late Minoan, etc. The criterion is whether the objects in question look simple or elaborate. If a horizon contains objects that are post-late and simpler-than-late they are sub-, as in Submycenaean. Haeckel's presentations are from a different point of view. His "History of Creation" of 1870 presents the ages as "Strata of the Earth's Crust," in which he prefers "upper", "mid-" and "lower" based on the order in which one encounters the layers. 
His analysis features an Upper and Lower Pliocene as well as an Upper and Lower Diluvial (his term for the Pleistocene). Haeckel, however, was relying heavily on Lyell. In the 1833 edition of "Principles of Geology" (the first) Lyell devised the terms Eocene, Miocene and Pliocene to mean periods of which the "strata" contained some (Eo-, "early"), lesser (Mio-) and greater (Plio-) numbers of "living Mollusca represented among fossil assemblages of western Europe." The Eocene was given Lower, Middle, Upper; the Miocene a Lower and Upper; and the Pliocene an Older and Newer, which scheme would indicate an equivalence between Lower and Older, and Upper and Newer. In a French version, "Nouveaux Éléments de Géologie", in 1839 Lyell called the Older Pliocene the Pliocene and the Newer Pliocene the Pleistocene (Pleist-, "most"). Then in "Antiquity of Man" in 1863 he reverted to his previous scheme, adding "Post-Tertiary" and "Post-Pliocene." In 1873 the Fourth Edition of "Antiquity of Man" restores Pleistocene and identifies it with Post-Pliocene. As this work was posthumous, no more was heard from Lyell. Living or deceased, his work was immensely popular among scientists and laymen alike. "Pleistocene" caught on immediately; it is entirely possible that he restored it by popular demand. In 1880 Dawkins published "The Three Pleistocene Strata" containing a new manifesto for British archaeology: The continuity between geology, prehistoric archaeology and history is so direct that it is impossible to picture early man in this country without using the results of all these three sciences. He intends to use archaeology and geology to "draw aside the veil" covering the situations of the peoples mentioned in proto-historic documents, such as Caesar's "Commentaries" and the "Agricola" of Tacitus. Adopting Lyell's scheme of the Tertiary, he divides Pleistocene into Early, Mid- and Late. 
Only the Palaeolithic falls into the Pleistocene; the Neolithic is in the "Prehistoric Period" subsequent. Dawkins defines what was to become the Upper, Middle and Lower Paleolithic, except that he calls them the "Upper Cave-Earth and Breccia," the "Middle Cave-Earth," and the "Lower Red Sand," with reference to the names of the layers. The next year, 1881, Geikie solidified the terminology into Upper and Lower Palaeolithic: In Kent's Cave the implements obtained from the lower stages were of a much ruder description than the various objects detected in the upper cave-earth ... And a very long time must have elapsed between the formation of the lower and upper Palaeolithic beds in that cave. The Middle Paleolithic in the modern sense made its appearance in 1911 in the 1st edition of William Johnson Sollas' "Ancient Hunters". It had been used in varying senses before then. Sollas associates the period with the Mousterian technology and the relevant modern people with the Tasmanians. In the 2nd edition of 1915 he has changed his mind for reasons that are not clear. The Mousterian has been moved to the Lower Paleolithic and the people changed to the Australian aborigines; furthermore, the association has been made with Neanderthals and the Levalloisian added. Sollas says wistfully that they are in "the very middle of the Palaeolithic epoch." Whatever his reasons, the public would have none of it. From 1911 on, Mousterian was Middle Paleolithic, except for holdouts. Alfred L. Kroeber in 1920, "Three essays on the antiquity and races of man," reverting to Lower Paleolithic, explains that he is following Louis Laurent Gabriel de Mortillet. The English-speaking public remained with Middle Paleolithic. Thomsen had formalized the Three-age System by the time of its publication in 1836. The next step forward was the formalization of the Palaeolithic and Neolithic by Sir John Lubbock in 1865. 
Between these two times Denmark held the lead in archaeology, especially because of the work of Thomsen's at first junior associate and then successor, Jens Jacob Asmussen Worsaae, rising in the last year of his life to Kultus Minister of Denmark. Lubbock offers full tribute and credit to him in "Prehistoric Times". Worsaae in 1862 in "Om Tvedelingen af Steenalderen", previewed in English even before its publication by "The Gentleman's Magazine", concerned about changes in typology during each period, proposed a bipartite division of each age:Both for Bronze and Stone it was now evident that a few hundred years would not suffice. In fact, good grounds existed for dividing each of these periods into two, if not more. He called them earlier or later. The three ages became six periods. The British seized on the concept immediately. Worsaae's earlier and later became Lubbock's palaeo- and neo- in 1865, but alternatively English speakers used Earlier and Later Stone Age, as did Lyell's 1883 edition of "Principles of Geology", with older and younger as synonyms. As there is no room for a middle between the comparative adjectives, they were later modified to early and late. The scheme created a problem for further bipartite subdivisions, which would have resulted in such terms as early early Stone Age, but that terminology was avoided by adoption of Geikie's upper and lower Paleolithic. Amongst African archaeologists, the terms Old Stone Age, Middle Stone Age and Late Stone Age are preferred. When Sir John Lubbock was doing the preliminary work for his 1865 "magnum opus", Charles Darwin and Alfred Russel Wallace were jointly publishing their first papers On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection. Darwin's On the Origin of Species came out in 1859, but he did not elucidate the theory of evolution as it applies to man until the Descent of Man in 1871. 
Meanwhile, Wallace read a paper in 1864 to the Anthropological Society of London that was a major influence on Sir John, publishing in the very next year. He quoted Wallace:From the moment when the first skin was used as a covering, when the first rude spear was formed to assist in the chase, the first seed sown or shoot planted, a grand revolution was effected in nature, a revolution which in all the previous ages of the world's history had had no parallel, for a being had arisen who was no longer necessarily subject to change with the changing universe,—a being who was in some degree superior to nature, inasmuch as he knew how to control and regulate her action, and could keep himself in harmony with her, not by a change in body, but by an advance in mind. Wallace, distinguishing between mind and body, was asserting that natural selection shaped the form of man only until the appearance of mind; after that, it played no part. Mind, or rather its product, culture, formed modern man. Its appearance overthrew the laws of nature. Wallace used the term "grand revolution." Although Lubbock believed that Wallace had gone too far in that direction he did adopt a theory of evolution combined with the revolution of culture. Neither Wallace nor Lubbock offered any explanation of how the revolution came about, or felt that they had to offer one. Revolution is an acceptance that in the continuous evolution of objects and events sharp and inexplicable disconformities do occur, as in geology. 
And so it is not surprising that in the 1874 Stockholm meeting of the International Congress of Anthropology and Prehistoric Archaeology, in response to Ernst Hamy's denial of any "break" between Paleolithic and Neolithic based on material from dolmens near Paris "showing a continuity between the paleolithic and neolithic folks," Edouard Desor, geologist and archaeologist, replied: "that the introduction of domesticated animals was a complete revolution and enables us to separate the two epochs completely." A revolution as defined by Wallace and adopted by Lubbock is a change of regime, or rules. If man was the new rule-setter through culture then the initiation of each of Lubbock's four periods might be regarded as a change of rules and therefore as a distinct revolution, and so "Chambers's Journal", a reference work, in 1879 portrayed each of them as:...an advance in knowledge and civilization which amounted to a revolution in the then existing manners and customs of the world. Because of the controversy over Westropp's Mesolithic and Mortillet's Gap beginning in 1872 archaeological attention focused mainly on the revolution at the Palaeolithic—Neolithic boundary as an explanation of the gap. For a few decades the Neolithic Period, as it was called, was described as a kind of revolution. In the 1890s, a standard term, the Neolithic Revolution, began to appear in encyclopedias such as Pears. In 1925 the Cambridge Ancient History reported:There are quite a large number of archaeologists who justifiably consider the period of the Late Stone Age to be a Neolithic revolution and an economic revolution at the same time. For that is the period when primitive agriculture developed and cattle breeding began. In 1936 a champion came forward who would advance the Neolithic Revolution into the mainstream view: Vere Gordon Childe. 
After giving the Neolithic Revolution scant mention in his first notable work, the 1928 edition of "New Light on the Most Ancient East", Childe made a major presentation in the first edition of "Man Makes Himself" in 1936 developing Wallace's and Lubbock's theme of the human revolution against the supremacy of nature and supplying detail on two revolutions, the Paleolithic—Neolithic and the Neolithic-Bronze Age, which he called the Second or Urban revolution. Lubbock had been as much of an ethnologist as an archaeologist. The founders of cultural anthropology, such as Tylor and Morgan, were to follow his lead on that. Lubbock created such concepts as savages and barbarians based on the customs of then modern tribesmen and made the presumption that the terms can be applied without serious inaccuracy to the men of the Paleolithic and the Neolithic. Childe broke with this view:The assumption that any savage tribe today is primitive, in the sense that its culture faithfully reflects that of much more ancient men is gratuitous. Childe concentrated on the inferences to be made from the artifacts:But when the tools ... are considered ... in their totality, they may reveal much more. They disclose not only the level of technical skill ... but also their economy ... The archaeologist's ages correspond roughly to economic stages. Each new "age" is ushered in by an economic revolution ... The archaeological periods were indications of economic ones:Archaeologists can define a period when it was apparently the sole economy, the sole organization of production ruling anywhere on the earth's surface. These periods could be used to supplement historical ones where history was not available. He reaffirmed Lubbock's view that the Paleolithic was an age of food gathering and the Neolithic an age of food production. He took a stand on the question of the Mesolithic, identifying it with the Epipaleolithic. 
The Mesolithic was to him "a mere continuance of the Old Stone Age mode of life" between the end of the Pleistocene and the start of the Neolithic. Lubbock's terms "savagery" and "barbarism" do not much appear in "Man Makes Himself" but the sequel, "What Happened in History" (1942), reuses them (attributing them to Morgan, who got them from Lubbock) with an economic significance: savagery for food-gathering and barbarism for Neolithic food production. Civilization begins with the urban revolution of the Bronze Age. Even as Childe was developing this revolution theme the ground was sinking under him. Lubbock did not find any pottery associated with the Paleolithic, asserting of what was to him its last period, the Reindeer, that "no fragments of metal or pottery have yet been found." He did not generalize but others did not hesitate to do so. The next year, 1866, Dawkins proclaimed of Neolithic people that "these invented the use of pottery..." From then until the 1930s pottery was considered a sine qua non of the Neolithic. The term Pre-Pottery Age came into use in the late 19th century but it meant Paleolithic. Meanwhile, the Palestine Exploration Fund, founded in 1865, completed its survey of excavatable sites in Palestine in 1880 and began excavating in 1890 at the site of ancient Lachish near Jerusalem, the first of a series planned under the licensing system of the Ottoman Empire. Under their auspices in 1908 Ernst Sellin and Carl Watzinger began excavation at Jericho (Tell es-Sultan), previously excavated for the first time by Sir Charles Warren in 1868. They discovered a Neolithic and Bronze Age city there. Subsequent excavations in the region by them and others turned up other walled cities that appear to have preceded the Bronze Age urbanization. All excavation ceased for World War I. When it was over the Ottoman Empire was no longer a factor there. In 1919 the new British School of Archaeology in Jerusalem assumed archaeological operations in Palestine. 
John Garstang finally resumed excavation at Jericho 1930-1936. The renewed dig uncovered another 3000 years of prehistory that was in the Neolithic but did not make use of pottery. He called it the Pre-pottery Neolithic, as opposed to the Pottery Neolithic, subsequently often called the Aceramic or Pre-ceramic and Ceramic Neolithic. Kathleen Kenyon was a young photographer then with a natural talent for archaeology. Solving a number of dating problems she soon advanced to the forefront of British archaeology through skill and judgement. In World War II she served as a commander in the Red Cross. In 1952–58 she took over operations at Jericho as the Director of the British School, verifying and expanding Garstang's work and conclusions. There were two Pre-pottery Neolithic periods, she concluded, A and B. Moreover, the PPN had been discovered at most of the major Neolithic sites in the near East and Greece. By this time her personal stature in archaeology was at least equal to that of V. Gordon Childe. While the three-age system was being attributed to Childe in popular fame, Kenyon became gratuitously the discoverer of the PPN. More significantly the question of revolution or evolution of the Neolithic was increasingly being brought before the professional archaeologists. Danish archaeology took the lead in defining the Bronze Age, with little of the controversy surrounding the Stone Age. British archaeologists patterned their own excavations after those of the Danish, which they followed avidly in the media. References to the Bronze Age in British excavation reports began in the 1820s contemporaneously with the new system being promulgated by C.J. Thomsen. Mention of the Early and Late Bronze Age began in the 1860s following the bipartite definitions of Worsaae. In 1874 at the Stockholm meeting of the International Congress of Anthropology and Prehistoric Archaeology, a suggestion was made by A. 
Bertrand that no distinct age of bronze had existed, that the bronze artifacts discovered were really part of the Iron Age. Hans Hildebrand in refutation pointed to two Bronze Ages and a transitional period in Scandinavia. John Evans denied any defect of continuity between the two and asserted there were three Bronze Ages, "the early, middle and late Bronze Age." His view for the Stone Age, following Lubbock, was quite different, denying, in "The Ancient Stone Implements", any concept of a Middle Stone Age. In his 1881 parallel work, "The Ancient Bronze Implements", he affirmed and further defined the three periods, strangely enough recusing himself from his previous terminology, Early, Middle and Late Bronze Age (the current forms) in favor of "an earlier and later stage" and "middle". He uses Bronze Age, Bronze Period, Bronze-using Period and Bronze Civilization interchangeably. Apparently Evans was sensitive to what had gone before, retaining the terminology of the bipartite system while proposing a tripartite one. After stating a catalogue of types of bronze implements he defines his system:The Bronze Age of Britain may, therefore, be regarded as an aggregate of three stages: the first, that characterized by the flat or slightly flanged celts, and the knife-daggers ... the second, that characterized by the more heavy dagger-blades and the flanged celts and tanged spear-heads or daggers, ... and the third, by palstaves and socketed celts and the many forms of tools and weapons, ... It is in this third stage that the bronze sword and the true socketed spear-head first make their advent. In chapter 1 of his work, Evans proposes for the first time a transitional Copper Age between the Neolithic and the Bronze Age. He adduces evidence from far-flung places such as China and the Americas to show that the smelting of copper universally preceded alloying with tin to make bronze. He does not know how to classify this fourth age. 
On the one hand he distinguishes it from the Bronze Age. On the other hand, he includes it:In thus speaking of a bronze-using period I by no means wish to exclude the possible use of copper unalloyed with tin. Evans goes into considerable detail tracing references to the metals in classical literature: Latin "aes, aeris" and its Greek equivalent, first for "copper" and then for "bronze." He does not mention the adjective of "aes", which is "aēneus", nor is he interested in formulating New Latin words for the Copper Age; the plain English term is good enough for him and many English authors from then on. He offers literary proof that bronze had been in use before iron and copper before bronze. In 1884 the center of archaeological interest shifted to Italy with the excavation of Remedello and the discovery of the Remedello culture by Gaetano Chierici. According to his 1886 biographers, Luigi Pigorini and Pellegrino Strobel, Chierici devised the term Età Eneo-litica to describe the archaeological context of his findings, which he believed were the remains of Pelasgians, or people that preceded Greek and Latin speakers in the Mediterranean. The age (Età) was:A period of transition from the age of stone to that of bronze (periodo di transizione dall'età della pietra a quella del bronzo) Whether intentional or not, the definition was the same as Evans', except that Chierici was adding a term to New Latin. He describes the transition by stating the beginning (litica, or Stone Age) and the ending (eneo-, or Bronze Age); in English, "the stone-to-bronze period." Shortly after, "Eneolithic" or "Aeneolithic" began turning up in scholarly English as a synonym for "Copper Age." Sir John's own son, Arthur Evans, beginning to come into his own as an archaeologist and already studying Cretan civilization, refers in 1895 to some clay figures of "aeneolithic date" (quotes his). 
The three-age system is a way of dividing prehistory, and the Iron Age is therefore considered to end in a particular culture with either the start of its protohistory, when it begins to be written about by outsiders, or when its own historiography begins. Although iron is still the major hard material in use in modern civilization, and steel is a vital and indispensable modern industry, as far as archaeologists are concerned the Iron Age has therefore now ended for all cultures in the world. The date when it is taken to end varies greatly between cultures, and in many parts of the world there was no Iron Age at all, for example in Pre-Columbian America and the prehistory of Australia. For these and other regions the three-age system is little used. By a convention among archaeologists, in the Ancient Near East the Iron Age is taken to end with the start of the Achaemenid Empire in the 6th century BC, as the history of that is told by the Greek historian Herodotus. This remains the case despite a good deal of earlier local written material having become known since the convention was established. In Western Europe the Iron Age is ended by Roman conquest. In South Asia the start of the Maurya Empire about 320 BC is usually taken as the end point; although we have a considerable quantity of earlier written texts from India, they give us relatively little in the way of a conventional record of political history. For Egypt, China and Greece "Iron Age" is not a very useful concept, and relatively little used as a period term. In the first two prehistory has ended, and periodization by historical ruling dynasties has already begun, in the Bronze Age, which these cultures do have. In Greece the Iron Age begins during the Greek Dark Ages, and coincides with the cessation of a historical record for some centuries. For Scandinavia and other parts of northern Europe that the Romans did not reach, the Iron Age continues until the start of the Viking Age in about 800 AD. 
The question of the dates of the objects and events discovered through archaeology is the prime concern of any system of thought that seeks to summarize history through the formulation of ages or epochs. An age is defined through comparison of contemporaneous events. Increasingly, the terminology of archaeology is parallel to that of historical method. An event is "undocumented" until it turns up in the archaeological record. Fossils and artifacts are "documents" of the epochs hypothesized. The correction of dating errors is therefore a major concern. In the case where parallel epochs defined in history were available, elaborate efforts were made to align European and Near Eastern sequences with the datable chronology of Ancient Egypt and other known civilizations. The resulting grand sequence was also spot checked by evidence of calculable solar or other astronomical events. These methods are only available for the relatively short term of recorded history. Most prehistory does not fall into that category. Physical science provides at least two general groups of dating methods, stated below. Data collected by these methods is intended to provide an absolute chronology for the framework of periods defined by relative chronology. The initial comparisons of artifacts defined periods that were local to a site, group of sites or region. Advances made in the fields of seriation, typology, stratification and the associative dating of artifacts and features permitted even greater refinement of the system. The ultimate development is the reconstruction of a global catalogue of layers (or as close to it as possible) with different sections attested in different regions. Ideally once the layer of the artifact or event is known a quick lookup of the layer in the grand system will provide a ready date. This is considered the most reliable method. It is used for calibration of the less reliable chemical methods. 
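The chemical (radiometric) methods mentioned here rest on exponential decay. As an illustrative sketch only, not a description of any particular laboratory procedure, a radiocarbon age can be computed from the fraction of carbon-14 surviving in a sample, using the standard 5,730-year half-life; the sample fractions below are hypothetical:

```python
import math

C14_HALF_LIFE = 5730.0  # years, standard figure for carbon-14

def radiocarbon_age(surviving_fraction):
    """Age in years from the fraction of original C-14 remaining.

    Decay follows N/N0 = exp(-lambda * t) with lambda = ln(2) / half-life,
    so the age is t = -ln(N/N0) / lambda.
    """
    decay_constant = math.log(2) / C14_HALF_LIFE
    return -math.log(surviving_fraction) / decay_constant

# A sample retaining half its C-14 is exactly one half-life old:
assert abs(radiocarbon_age(0.5) - 5730.0) < 1e-6
# A hypothetical sample retaining 25% is two half-lives old:
print(round(radiocarbon_age(0.25)))  # 11460
```

Such raw figures are exactly what the text says must be checked: in practice radiocarbon dates are calibrated against independent layer-based chronologies (such as tree rings) before being accepted.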
Any material sample contains elements and compounds that are subject to decay into other elements and compounds. In cases where the rate of decay is predictable and the proportions of initial and end products can be known exactly, consistent dates of the artifact can be calculated. Due to the problem of sample contamination and variability of the natural proportions of the materials in the media, sample analysis, in cases where it can be checked against grand layering systems, has often been found to be widely inaccurate. Chemical dates therefore are only considered reliable when used in conjunction with other methods. They are collected in groups of data points that form a pattern when graphed. Isolated dates are not considered reliable. The term Megalithic does not refer to a period of time, but merely describes the use of large stones by ancient peoples from any period. An eolith is a stone that might have been formed by natural process but occurs in contexts that suggest modification by early humans or other primates for percussion. Formation of states starts during the Early Bronze Age in Egypt and Mesopotamia, and the first empires are founded during the Late Bronze Age. The Three-age System has been criticized since at least the 19th century. Every phase of its development has been contested. Some of the arguments that have been presented against it follow. In some cases criticism resulted in other, parallel three-age systems, such as the concepts expressed by Lewis Henry Morgan in "Ancient Society", based on ethnology. These disagreed with the metallic basis of epochization. The critic generally substituted his own definitions of epochs. Vere Gordon Childe said of the early cultural anthropologists:Last century Herbert Spencer, Lewis H. Morgan and Tylor propounded divergent schemes ... they arranged these in a logical order ... They assumed that the logical order was a temporal one... 
The competing systems of Morgan and Tylor remained equally unverified—and incompatible—theories. More recently, many archaeologists have questioned the validity of dividing time into epochs at all. For example, one recent critic, Graham Connah, describes the three-age system as "epochalism" and asserts:So many archaeological writers have used this model for so long that for many readers it has taken on a reality of its own. In spite of the theoretical agonizing of the last half-century, epochalism is still alive and well ... Even in parts of the world where the model is still in common use, it needs to be accepted that, for example, there never was actually such a thing as 'the Bronze Age.' Some view the three-age system as over-simple; that is, it neglects vital detail and forces complex circumstances into a mold they do not fit. Rowlands argues that the division of human societies into epochs based on the presumption of a single set of related changes is not realistic:But as a more rigorous sociological approach has begun to show that changes at the economic, political and ideological levels are not 'all of a piece' we have come to realise that time may be segmented in as many ways as convenient to the researcher concerned. The three-age system is a relative chronology. The explosion of archaeological data acquired in the 20th century was intended to elucidate the relative chronology in detail. One consequence was the collection of absolute dates. Connah argues:As radiocarbon and other forms of absolute dating contributed more detailed and more reliable chronologies, the epochal model ceased to be necessary. 
Peter Bogucki of Princeton University summarizes the perspective taken by many modern archaeologists: Although modern archaeologists realize that this tripartite division of prehistoric society is far too simple to reflect the complexity of change and continuity, terms like 'Bronze Age' are still used as a very general way of focusing attention on particular times and places and thus facilitating archaeological discussion. Another common criticism attacks the broader application of the three-age system as a cross-cultural model for social change. The model was originally designed to explain data from Europe and West Asia, but archaeologists have also attempted to use it to explain social and technological developments in other parts of the world such as the Americas, Australasia, and Africa. Many archaeologists working in these regions have criticized this application as eurocentric. Graham Connah writes that: ... attempts by Eurocentric archaeologists to apply the model to African archaeology have produced little more than confusion, whereas in the Americas or Australasia it has been irrelevant, ... Alice B. Kehoe further explains this position as it relates to American archaeology: ... Professor Wilson's presentation of prehistoric archaeology was a European product carried across the Atlantic to promote an American science compatible with its European model. Kehoe goes on to complain of Wilson that "he accepted and reprised the idea that the European course of development was paradigmatic for humankind." This criticism argues that the different societies of the world underwent social and technological developments in different ways. A sequence of events that describes the developments of one civilization may not necessarily apply to another, in this view. Instead social and technological developments must be described within the context of the society being studied.
Tachyon A tachyon or tachyonic particle is a hypothetical particle that always travels faster than light. Most physicists believe that faster-than-light particles cannot exist because they are not consistent with the known laws of physics. If such particles did exist, they could be used to build a tachyonic antitelephone and send signals faster than light, which (according to special relativity) would lead to violations of causality. No experimental evidence for the existence of such particles has been found. E. C. G. Sudarshan, V. K. Deshpande and Baidyanath Misra were the first to propose the existence of particles faster than light and named them "meta-particles". After that the possibility of particles moving faster than light was also proposed by Robert Ehrlich and Arnold Sommerfeld, independently of each other. In the 1967 paper that coined the term, Gerald Feinberg proposed that tachyonic particles could be quanta of a quantum field with imaginary mass. However, it was soon realized that excitations of such imaginary mass fields do "not" under any circumstances propagate faster than light, and instead the imaginary mass gives rise to an instability known as tachyon condensation. Nevertheless, in modern physics the term often refers to imaginary mass fields rather than to faster-than-light particles. Such fields have come to play a significant role in modern physics. The term comes from the Greek "tachy", meaning "swift". The complementary particle types are called luxons (which always move at the speed of light) and bradyons (which always move slower than light); both of these particle types are known to exist. In special relativity, a faster-than-light particle would have space-like four-momentum, in contrast to ordinary particles that have time-like four-momentum. Although in some theories the mass of tachyons is regarded as imaginary, in some modern formulations the mass is considered real, the formulas for the momentum and energy being redefined to this end. 
Moreover, since tachyons are constrained to the spacelike portion of the energy–momentum graph, they could not slow down to subluminal speeds. In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called "bradyons" in discussions of tachyons) must also apply to tachyons. In particular the energy–momentum relation E² = p²c² + m²c⁴ (where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle, E = mc²/√(1 − v²/c²). This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy. When "v" is larger than "c", the denominator in the equation for the energy is imaginary, as the value under the radical is negative. Because the total energy must be real, the numerator must "also" be imaginary: i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number. In some modern formulations of the theory, the mass of tachyons is regarded as real. One curious effect is that, unlike ordinary particles, the speed of a tachyon "increases" as its energy decreases. In particular, E approaches zero when v approaches infinity. (For ordinary bradyonic matter, "E" increases with increasing speed, becoming arbitrarily large as "v" approaches "c", the speed of light). Therefore, just as bradyons are forbidden to break the light-speed barrier, so too are tachyons forbidden from slowing down to below "c", because infinite energy is required to reach the barrier from either above or below. As noted by Albert Einstein, Tolman, and others, special relativity implies that faster-than-light particles, if they existed, could be used to communicate backwards in time. In 1985, Chodos proposed that neutrinos can have a tachyonic nature. 
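The arithmetic above — an imaginary rest mass yielding a real total energy that falls as the speed rises past c — can be checked numerically. The following sketch uses Python's complex numbers with c = 1; the function name and the sample speeds are illustrative, not from the source:

```python
# Sketch of the tachyon energy formula E = m*c^2 / sqrt(1 - v^2/c^2),
# evaluated with complex arithmetic; natural units with c = 1.
import cmath

C = 1.0  # speed of light (natural units)

def total_energy(m, v):
    """Total energy of a particle with (possibly imaginary) mass m at speed v."""
    return m * C**2 / cmath.sqrt(1 - (v / C)**2)

# An ordinary bradyon: real mass, v < c -> real energy (gamma = 1.25 here).
e_bradyon = total_energy(1.0, 0.6)
print(e_bradyon.real)  # 1.25

# A tachyon: imaginary mass m = i*mu with v > c -> the energy comes out real.
mu = 1.0
e_slow_tachyon = total_energy(1j * mu, 1.25)
e_fast_tachyon = total_energy(1j * mu, 5.0)
print(e_slow_tachyon.real, e_fast_tachyon.real)

# As the text says, the energy *decreases* as the tachyon speeds up:
assert e_fast_tachyon.real < e_slow_tachyon.real
```

Pushing v toward infinity drives the energy toward zero, mirroring the statement that infinite energy would be needed for a tachyon to slow down to c.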
The possibility of standard model particles moving at superluminal speeds can be modeled using Lorentz invariance violating terms, for example in the Standard-Model Extension. In this framework, neutrinos experience Lorentz-violating oscillations and can travel faster than light at high energies. This proposal was strongly criticized. A tachyon with an electric charge would lose energy as Cherenkov radiation—just as ordinary charged particles do when they exceed the local speed of light in a medium (other than a hard vacuum). A charged tachyon traveling in a vacuum, therefore, undergoes a constant proper time acceleration and, by necessity, its world line forms a hyperbola in space-time. However reducing a tachyon's energy "increases" its speed, so that the single hyperbola formed is of "two" oppositely charged tachyons with opposite momenta (same magnitude, opposite sign) which annihilate each other when they simultaneously reach infinite speed at the same place in space. (At infinite speed, the two tachyons have no energy each and finite momentum of opposite direction, so no conservation laws are violated in their mutual annihilation. The time of annihilation is frame dependent.) Even an electrically neutral tachyon would be expected to lose energy via gravitational Cherenkov radiation (unless gravitons are themselves tachyons), because it has a gravitational mass, and therefore increases in speed as it travels, as described above. If the tachyon interacts with any other particles, it can also radiate Cherenkov energy into those particles. Neutrinos interact with the other particles of the Standard Model, and Andrew Cohen and Sheldon Glashow used this to argue that the faster-than-light neutrino anomaly cannot be explained by making neutrinos propagate faster than light, and must instead be due to an error in the experiment. Further investigation of the experiment showed that the results were indeed erroneous. Causality is a fundamental principle of physics. 
If tachyons can transmit information faster than light, then according to relativity they violate causality, leading to logical paradoxes of the "kill your own grandfather" type. This is often illustrated with thought experiments such as the "tachyon telephone paradox" or "logically pernicious self-inhibitor." The problem can be understood in terms of the relativity of simultaneity in special relativity, which says that different inertial reference frames will disagree on whether two events at different locations happened "at the same time" or not, and they can also disagree on the order of the two events (technically, these disagreements occur when the spacetime interval between the events is 'space-like', meaning that neither event lies in the future light cone of the other). If one of the two events represents the sending of a signal from one location and the second event represents the reception of the same signal at another location, then as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity ensures that all reference frames agree that the transmission-event happened before the reception-event. However, in the case of a hypothetical signal moving faster than light, there would always be some frames in which the signal was received before it was sent so that the signal could be said to have moved backward in time. Because one of the two fundamental postulates of special relativity says that the laws of physics should work the same way in every inertial frame, if it is possible for signals to move backward in time in any one frame, it must be possible in all frames. 
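The frame-dependence described above can be made concrete with a one-line Lorentz boost. In this sketch (c = 1; the event coordinates are invented for illustration), a signal covering three units of distance in one unit of time — three times light speed — is received "before" it is sent in a frame moving at half the speed of light, while a light-speed signal keeps its ordering:

```python
# Numerical illustration of the relativity of simultaneity (c = 1):
# a boost can reverse the time order of spacelike-separated events,
# but never of lightlike- or timelike-separated ones.
import math

def boost_t(t, x, v):
    """Time coordinate of event (t, x) seen from a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Signal emitted at (t, x) = (0, 0), received at (1, 3): speed 3c.
print(boost_t(1.0, 3.0, 0.5))  # negative: "received before sent" in this frame

# A light-speed signal received at (3, 3) stays after its emission:
print(boost_t(3.0, 3.0, 0.5))  # still positive
```

The sign flip occurs exactly when the interval between emission and reception is spacelike, which is why only faster-than-light signals open the door to the paradoxes discussed here.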
This means that if observer A sends a signal to observer B which moves faster than light in A's frame but backwards in time in B's frame, and then B sends a reply which moves faster than light in B's frame but backwards in time in A's frame, it could work out that A receives the reply before sending the original signal, challenging causality in "every" frame and opening the door to severe logical paradoxes. Mathematical details can be found in the tachyonic antitelephone article, and an illustration of such a scenario using spacetime diagrams can be found in "Baker, R. (2003)" The reinterpretation principle asserts that a tachyon sent "back" in time can always be "reinterpreted" as a tachyon traveling "forward" in time, because observers cannot distinguish between the emission and absorption of tachyons. The attempt to "detect" a tachyon "from" the future (and violate causality) would actually "create" the same tachyon and send it "forward" in time (which is causal). However, this principle is not widely accepted as resolving the paradoxes. Instead, what would be required to avoid paradoxes is that unlike any known particle, tachyons do not interact in any way and can never be detected or observed, because otherwise a tachyon beam could be modulated and used to create an anti-telephone or a "logically pernicious self-inhibitor". All forms of energy are believed to interact at least gravitationally, and many authors state that superluminal propagation in Lorentz invariant theories always leads to causal paradoxes. In modern physics, all fundamental particles are regarded as excitations of quantum fields. There are several distinct ways in which tachyonic particles could be embedded into a field theory. In the paper that coined the term "tachyon", Gerald Feinberg studied Lorentz invariant quantum fields with imaginary mass. Because the group velocity for such a field is superluminal, naively it appears that its excitations propagate faster than light. 
However, it was quickly understood that the superluminal group velocity does not correspond to the speed of propagation of any localized excitation (like a particle). Instead, the negative mass represents an instability to tachyon condensation, and all excitations of the field propagate subluminally and are consistent with causality. Despite having no faster-than-light propagation, such fields are referred to simply as "tachyons" in many sources. Tachyonic fields play an important role in modern physics. Perhaps the most famous is the Higgs boson of the Standard Model of particle physics, which has an imaginary mass in its uncondensed phase. In general, the phenomenon of spontaneous symmetry breaking, which is closely related to tachyon condensation, plays an important role in many aspects of theoretical physics, including the Ginzburg–Landau and BCS theories of superconductivity. Another example of a tachyonic field is the tachyon of bosonic string theory. Tachyons are predicted by bosonic string theory and also the Neveu-Schwarz (NS) and NS-NS sectors, which are respectively the open bosonic sector and closed bosonic sector, of RNS Superstring theory prior to the GSO projection. However such tachyons are not possible due to the Sen conjecture, also known as tachyon condensation. This resulted in the necessity for the GSO projection. In theories that do not respect Lorentz invariance, the speed of light is not (necessarily) a barrier, and particles can travel faster than the speed of light without infinite energy or causal paradoxes. A class of field theories of that type is the so-called Standard Model extensions. However, the experimental evidence for Lorentz invariance is extremely good, so such theories are very tightly constrained. By modifying the kinetic energy of the field, it is possible to produce Lorentz invariant field theories with excitations that propagate superluminally. 
However, such theories, in general, do not have a well-defined Cauchy problem (for reasons related to the issues of causality discussed above), and are probably inconsistent quantum mechanically. The term was coined by Gerald Feinberg in a 1967 paper titled "Possibility of Faster-Than-Light Particles". He had been inspired by the science-fiction story "Beep" by James Blish. Feinberg studied the kinematics of such particles according to special relativity. In his paper he also introduced fields with imaginary mass (now also referred to as tachyons) in an attempt to understand the microphysical origin such particles might have. The first hypothesis regarding faster-than-light particles is sometimes attributed to German physicist Arnold Sommerfeld in 1904, and more recent discussions happened in 1962 and 1969. In September 2011, it was reported in a major release by CERN that a tau neutrino had traveled faster than the speed of light; however, later updates from CERN on the OPERA project indicated that the faster-than-light readings were due to a faulty element of the experiment's fibre optic timing system. Tachyons have appeared in many works of fiction. They have been used as a standby mechanism upon which many science fiction authors rely to establish faster-than-light communication, with or without reference to causality issues. The word "tachyon" has become widely recognized to such an extent that it can impart a science-fictional connotation even if the subject in question has no particular relation to superluminal travel (a form of technobabble, akin to "positronic brain").
https://en.wikipedia.org/wiki?curid=31296
The Starlost The Starlost is a Canadian-produced science fiction television series created by writer Harlan Ellison and broadcast in 1973 on CTV in Canada and syndicated to local stations in the United States. The show's setting is a huge generational colony spacecraft called "Earthship Ark", which has gone off course. Many of the descendants of the original crew and colonists are unaware, however, that they are aboard a ship. The series experienced a number of production difficulties, and Ellison broke with the project before the airing of its first episode. Foreseeing the destruction of Earth, humanity builds a vast multi-generational starship called "Earthship Ark". The ship contains dozens of biospheres, each housing people of different cultures; their goal is to find and seed a new world orbiting a distant star. In 2385, more than 100 years into the voyage, an unexplained accident occurs, and the ship goes into emergency mode, whereby each biosphere is sealed off from the others. In 2790, 405 years after the accident, Devon (Keir Dullea), a resident of Cypress Corners, an agrarian community with a culture resembling that of the Amish, discovers that his world is far larger and more mysterious than he had realized. Considered an outcast because of his questioning of the way things are, especially his refusal to accept the arranged marriage of his love Rachel (Gay Rowan) to his friend Garth (Robin Ward), Devon finds out that the Cypress Corners elders have been deliberately manipulating the local computer terminal, which they call "The Voice of The Creator". The congregation pursues Devon for attacking the elders and stealing a computer cassette on which they have recorded their orders, and its leaders plot to execute him, but the elderly Abraham, who also questions the elders, gives Devon a key to a dark, mysterious doorway, which Abraham himself is afraid to enter. 
The frightened Devon escapes into the service areas of the ship and accesses a computer data station that explains the nature and purpose of the Ark and hints at its problems. When Devon returns to Cypress Corners to tell his community what he has learned, he is put on trial for heresy and condemned to death by stoning. Escaping on the night before his execution with the aid of Garth, Devon convinces Rachel to come with him, and Garth pursues them. When Rachel refuses to return with Garth, he joins her and Devon. Eventually they make their way to the ship's bridge, containing the skeletal remains of its crew. It is badly damaged and its control systems are inoperative. The three discover that the Ark is on a collision course with a Class G star similar to the Sun, and realize that the only way to save the Ark and its passengers is to find the backup bridge, at the other end of the Ark, and reactivate the navigation and propulsion systems. Occasionally, they are aided by the ship's partially functioning computer system. 20th Century Fox was involved in the project with Douglas Trumbull as executive producer. Science fiction writer and editor Ben Bova was brought in as science advisor. Harlan Ellison was approached by Robert Kline, a 20th Century Fox television producer, to come up with an idea for a science fiction TV series consisting of eight episodes, to pitch to the BBC as a co-production in February 1973. The BBC rejected the idea. Unable to sell "The Starlost" for prime time, Kline decided to pursue a low budget approach and produce it for syndication. By May, Kline had sold the idea to 48 NBC stations and the Canadian CTV network. Ellison claimed that to get Canadian government subsidies, the production was shot in Canada and Canadian writers produced the scripts from story outlines by Ellison. However, several produced episodes were written entirely by American writers. 
Before Ellison could begin work on the show's production bible, a writers' strike began, running from March 6 to June 24. Kline negotiated an exception with the Writer's Guild, on the grounds that the production was wholly Canadian — and Ellison went to work on a bible for the series. Originally, the show was to be filmed with a special effects camera system developed by Doug Trumbull called Magicam. The system comprised two cameras whose motion was servo controlled. One camera would film actors against a blue screen, while the other would shoot a model background. The motion of both cameras was synchronized and scaled appropriately, allowing both the camera and the actors to move through model sets. The technology did not work reliably. In the end, simple blue screen effects were used, forcing static camera shots. The failure of the Magicam system was a major blow — as the Canadian studio space that had been rented was too small to build the required sets. In the end, partial sets were built, but the lack of space hampered production. As the filming went on, Ellison grew disenchanted with the budget cuts, details that were changed, and what he characterized as a progressive dumbing down of the story. Ellison's dissatisfaction extended to the new title of the pilot episode; he had titled it "Phoenix Without Ashes" but it was changed to "Voyage of Discovery". Before the production of the pilot episode was completed, Ellison invoked a clause in his contract to force the producers to use his alternative registered writer's name of "Cordwainer Bird" on the end credits. Sixteen episodes were made. Fox decided not to pick up the options for the remainder of the series. On March 31, 1974, Ellison received a Writers Guild of America Award for Best Original Screenplay for the original script (the pilot script as originally written, not the version that was filmed). 
A novelization of this script by Edward Bryant, "Phoenix Without Ashes", was published in 1975; this contained a lengthy foreword by Ellison describing what had gone on in production. In 2010, the novel was adapted into comic book form by IDW Publishing. Ben Bova, in an editorial in "Analog Science Fiction" (June 1974) and in interviews in fanzines, made it clear how disgruntled he had been as science adviser. In 1975, he published a novel entitled "The Starcrossed", depicting a scientist taken on as a science adviser for a terrible science fiction series. "The Starlost" has generally received a negative reception from historians of science fiction television: "The Encyclopedia of Science Fiction" described "The Starlost" as "dire", while "The Best of Science Fiction TV" included "The Starlost" in its list of the "Worst Science Fiction Shows of All Time". The "Starlog Photo Guidebook TV Episode Guides Volume 1" (1981) lists two unfilmed episodes, "God That Died" and "People in the Dark". Episodes of the original series were rebroadcast in 1978 and further in 1982. A number of episodes were also edited together to create movie-length installments that were sold to cable television broadcasters in the late 1980s. All 16 episodes were at one time available in a VHS boxed set. The first DVD release was limited to the five feature-length edited versions. In September/October 2008, the full series was released on DVD by VCI Entertainment. Aside from the digitally remastered episodes, a "presentation reel" created for potential broadcasters is also included. Hosted by Dullea and Trumbull, and predating Ellison's departure as he is credited under his own name with creating the series, the short feature includes sample footage using the later-abandoned Magicam technology, some filmed special effects footage taken from other productions along with model footage from the film "Silent Running" to represent the "Earthship Ark" concept, and a different series logo. 
In early 2019, a Roku channel began, airing "The Starlost" as its only program.
https://en.wikipedia.org/wiki?curid=31298
Taiga Taiga (Russian: тайга́, a word related to Mongolic and Turkic languages), generally referred to in North America as boreal forest or snow forest, is a biome characterized by coniferous forests consisting mostly of pines, spruces, and larches. The taiga or boreal forest is the world's largest land biome. In North America, it covers most of inland Canada, Alaska, and parts of the northern contiguous United States. In Eurasia, it covers most of Sweden, Finland, much of Russia from Karelia in the west to the Pacific Ocean (including much of Siberia), much of Norway and Estonia, some of the Scottish Highlands, some lowland/coastal areas of Iceland, and areas of northern Kazakhstan, northern Mongolia, and northern Japan (on the island of Hokkaidō). The main tree species, the length of the growing season and summer temperatures vary. For example, the taiga of North America mostly consists of spruces; Scandinavian and Finnish taiga consists of a mix of spruce, pines and birch; Russian taiga has spruces, pines and larches depending on the region, while the Eastern Siberian taiga is a vast larch forest. The taiga in its current form is a relatively recent phenomenon, having existed only for the last 12,000 years, since the beginning of the Holocene epoch; it covers land that had been mammoth steppe or lay under the Scandinavian Ice Sheet in Eurasia and the Laurentide Ice Sheet in North America during the Late Pleistocene. A different use of the term taiga is often encountered in the English language, with "boreal forest" used in the United States and Canada to refer to only the more southerly part of the biome, while "taiga" is used to describe the more barren areas of the northernmost part of the biome approaching the tree line and the tundra biome. Hoffman (1958) discusses the origin of this differential use in North America and why it is an inappropriate differentiation of the Russian term. 
Although at high elevations taiga grades into alpine tundra through Krummholz, it is not exclusively an alpine biome; and unlike subalpine forest, much of taiga is lowlands. Taiga is the world's largest land biome (depending on how one defines a biome, it could also be considered the second-largest, after deserts and xeric shrublands), covering about 17 million km², or 11.5% of the Earth's land area. The largest areas are located in Russia and Canada. The taiga is the terrestrial biome with the lowest annual average temperatures after the tundra and permanent ice caps. Extreme winter minimums in the northern taiga are typically lower than those of the tundra. The lowest reliably recorded temperatures in the Northern Hemisphere were recorded in the taiga of northeastern Russia. The taiga or boreal forest has a subarctic climate with a very large temperature range between seasons, but the long and cold winter is the dominant feature. This climate is classified as "Dfc", "Dwc", "Dsc", "Dfd" and "Dwd" in the Köppen climate classification scheme, meaning that the short summer (24-h average of 10 °C or more) lasts 1–3 months and always less than 4 months. In the Siberian taiga the average temperature of the coldest month is far below freezing. There are also some much smaller areas grading towards the oceanic "Cfc" climate with milder winters, whilst the extreme south and (in Eurasia) west of the taiga reaches into humid continental climates ("Dfb", "Dwb") with longer summers. The mean annual temperature generally varies from about −5 °C to 5 °C, but there are taiga areas in eastern Siberia and interior Alaska-Yukon where the mean annual temperature reaches down to about −10 °C. According to some sources, the boreal forest grades into a temperate mixed forest when the mean annual temperature reaches about 3 °C. Discontinuous permafrost is found in areas with a mean annual temperature below freezing, whilst in the "Dfd" and "Dwd" climate zones continuous permafrost occurs and restricts growth to very shallow-rooted trees like Siberian larch. 
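The Köppen rule described above — a short summer of one to three months reaching a warm-month threshold, with a cold continental winter — can be expressed as a small check. This is a simplified sketch: the 10 °C summer-month and −3 °C coldest-month thresholds follow the usual Köppen convention, and the sample monthly means are invented for illustration:

```python
# Simplified test for the subarctic (Dfc/Dwc/Dfd-style) pattern:
# 1-3 months with a 24-h mean of 10 °C or more, never 4 or more,
# plus a cold continental winter.
def is_subarctic(monthly_means_c):
    """True if the 12 monthly mean temperatures fit the subarctic pattern."""
    warm_months = sum(1 for t in monthly_means_c if t >= 10.0)
    cold_winter = min(monthly_means_c) <= -3.0  # continental "D" winter
    return cold_winter and 1 <= warm_months <= 3

# Invented monthly means loosely resembling a Siberian taiga station:
siberia = [-25, -22, -12, -2, 6, 13, 16, 12, 6, -4, -16, -23]
print(is_subarctic(siberia))  # True: exactly three months at 10 °C or more
```

Warming the same profile by 10 °C yields five warm months, which the function correctly rejects as too long a summer for this climate class.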
The winters, with average temperatures below freezing, last five to seven months. Temperatures span a very wide range over the whole year. The summers, while short, are generally warm and humid. In much of the taiga, about −20 °C would be a typical winter-day temperature and about 18 °C an average summer day. The growing season, when the vegetation in the taiga comes alive, is usually slightly longer than the climatic definition of summer, as the plants of the boreal biome have a lower temperature threshold to trigger growth. In Canada, Scandinavia and Finland, the growing season is often estimated by using the period of the year when the 24-hour average temperature is 5 °C or more. For the Taiga Plains in Canada, the growing season varies from 80 to 150 days, and in the Taiga Shield from 100 to 140 days. Some sources claim a 130-day growing season as typical for the taiga. Other sources mention that 50–100 frost-free days are characteristic. Data for locations in southwest Yukon give 80–120 frost-free days. The closed-canopy boreal forest in Kenozersky National Park near Plesetsk, Arkhangelsk Province, Russia, on average has 108 frost-free days. The longest growing season is found in the smaller areas with oceanic influences; in coastal areas of Scandinavia and Finland, the growing season of the closed boreal forest can be 145–180 days. The shortest growing season is found at the northern taiga–tundra ecotone, where the northern taiga forest can no longer grow and the tundra dominates the landscape when the growing season is down to 50–70 days, and the 24-hr average of the warmest month of the year is usually 10 °C or less. High latitudes mean that the sun does not rise far above the horizon, and less solar energy is received than further south. But the high latitude also ensures very long summer days, as the sun stays above the horizon nearly 20 hours each day, or up to 24 hours, with only around 6 hours of daylight, or none, occurring in the dark winters, depending on latitude. 
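The growing-season estimate described above — counting the days whose 24-hour mean temperature clears a threshold — can be sketched in a few lines. The 5 °C threshold and the sinusoidal temperature curve below are assumptions for illustration, not measurements:

```python
# Toy growing-season estimate: count days whose 24-h mean temperature
# reaches a threshold (5 °C is assumed here, as in the Canada/Scandinavia
# convention described in the text).
import math

def daily_means(annual_mean, amplitude, days=365):
    """Synthetic sinusoidal daily mean temperatures over one year."""
    return [annual_mean + amplitude * math.sin(2 * math.pi * (d - 80) / days)
            for d in range(days)]

def growing_season_days(temps, threshold=5.0):
    """Number of days whose mean temperature is at or above the threshold."""
    return sum(1 for t in temps if t >= threshold)

# An invented taiga-like climate: annual mean -3 °C with a large seasonal swing.
temps = daily_means(annual_mean=-3.0, amplitude=18.0)
print(growing_season_days(temps))  # on the order of 130 days
```

With these invented parameters the count lands in the 80–150 day range the text quotes for the Taiga Plains; a warmer annual mean or smaller seasonal swing shifts it accordingly.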
The areas of the taiga inside the Arctic Circle have midnight sun in mid-summer and polar night in mid-winter. The taiga experiences relatively low precipitation throughout the year (generally 200–750 mm annually, up to 1,000 mm in some areas), primarily as rain during the summer months, but also as fog and snow. This fog, especially predominant in low-lying areas during and after the thawing of frozen Arctic seas, means that sunshine is not abundant in the affected taiga areas even during the long summer days. As evaporation is consequently low for most of the year, precipitation exceeds evaporation and is sufficient to sustain dense vegetation growth, including large trees. (In the steppe biome, often found south of the taiga in the northern hemisphere, evapotranspiration exceeds precipitation, restricting vegetation to mostly grasses.) Snow may remain on the ground for as long as nine months in the northernmost extensions of the taiga ecozone. In general, taiga grows to the south of the 10 °C July isotherm, but occasionally as far north as the 9 °C July isotherm. Rich in spruce, with Scots pine in the West Siberian Plain, the taiga is dominated by larch in eastern Siberia, before returning to its original floristic richness on the Pacific shores. Two deciduous trees mingle throughout southern Siberia: birch and Populus tremula. The southern limit is more variable, depending on rainfall; taiga may be replaced by forest steppe south of the 15 °C July isotherm where rainfall is very low, but more typically extends south to the 18 °C July isotherm, and locally, where rainfall is higher (notably in eastern Siberia and adjacent Outer Manchuria), south to the 20 °C July isotherm. 
In these warmer areas the taiga has higher species diversity, with more warmth-loving species such as Korean pine, Jezo spruce, and Manchurian fir, and merges gradually into mixed temperate forest or, more locally (on the Pacific Ocean coasts of North America and Asia), into coniferous temperate rainforests where oak and hornbeam appear and join the conifers, birch and Populus tremula. The area currently classified as taiga in Europe and North America (except Alaska) was recently glaciated. As the glaciers receded they left depressions in the topography that have since filled with water, creating lakes and bogs (especially muskeg soil) found throughout the taiga. In Sweden the taiga is associated with the Norrland terrain. Taiga soil tends to be young and poor in nutrients. It lacks the deep, organically enriched profile present in temperate deciduous forests. The thinness of the soil is due largely to the cold, which hinders the development of soil and the ease with which plants can use its nutrients. Fallen leaves and moss can remain on the forest floor for a long time in the cool, moist climate, which limits their organic contribution to the soil; acids from evergreen needles further leach the soil, creating spodosol, also known as podzol. Since the soil is acidic due to the falling pine needles, the forest floor has only lichens and some mosses growing on it. In clearings in the forest and in areas with more boreal deciduous trees, there are more herbs and berries growing. Diversity of soil organisms in the boreal forest is high, comparable to the tropical rainforest. Since North America and Asia used to be connected by the Bering land bridge, a number of animal and plant species (more animals than plants) were able to colonize both continents and are distributed throughout the taiga biome (see Circumboreal Region). Others differ regionally, typically with each genus having several distinct species, each occupying different regions of the taiga. 
Taigas also have some small-leaved deciduous trees like birch, alder, willow, and poplar, mostly in areas escaping the most extreme winter cold. However, the Dahurian larch tolerates the coldest winters in the Northern Hemisphere, in eastern Siberia. The very southernmost parts of the taiga may have trees such as oak, maple, elm and lime scattered among the conifers, and there is usually a gradual transition into a temperate mixed forest, such as the eastern forest-boreal transition of eastern Canada. In the interior of the continents with the driest climate, the boreal forests might grade into temperate grassland. There are two major types of taiga. The southern part is the closed canopy forest, consisting of many closely spaced trees with mossy ground cover. In clearings in the forest, shrubs and wildflowers such as the fireweed are common. The other type is the lichen woodland or sparse taiga, with farther-spaced trees and lichen ground cover; the latter is common in the northernmost taiga. In the northernmost taiga the forest cover is not only sparser but often stunted in growth form; moreover, ice-pruned, asymmetric black spruces (in North America) are often seen, with diminished foliage on the windward side. In Canada, Scandinavia and Finland, the boreal forest is usually divided into three subzones: the high boreal (north boreal) or taiga zone; the middle boreal (closed forest); and the southern boreal, a closed canopy boreal forest with some scattered temperate deciduous trees among the conifers, such as maple, elm and oak. This southern boreal forest experiences the longest and warmest growing season of the biome, and in some regions (including Scandinavia, Finland and western Russia) this subzone is commonly used for agricultural purposes. 
The boreal forest is home to many types of berries; some are confined to the southern and middle closed boreal forest (such as wild strawberry and partridgeberry); others grow in most areas of the taiga (such as cranberry and cloudberry), and some can grow in both the taiga and the low arctic (southern part of) tundra (such as bilberry, bunchberry and lingonberry). The forests of the taiga are largely coniferous, dominated by larch, spruce, fir and pine. The woodland mix varies according to geography and climate so for example the Eastern Canadian forests ecoregion of the higher elevations of the Laurentian Mountains and the northern Appalachian Mountains in Canada is dominated by balsam fir "Abies balsamea", while further north the Eastern Canadian Shield taiga of northern Quebec and Labrador is notably black spruce "Picea mariana" and tamarack larch "Larix laricina". Evergreen species in the taiga (spruce, fir, and pine) have a number of adaptations specifically for survival in harsh taiga winters, although larch, which is extremely cold-tolerant, is deciduous. Taiga trees tend to have shallow roots to take advantage of the thin soils, while many of them seasonally alter their biochemistry to make them more resistant to freezing, called "hardening". The narrow conical shape of northern conifers, and their downward-drooping limbs, also help them shed snow. Because the sun is low in the horizon for most of the year, it is difficult for plants to generate energy from photosynthesis. Pine, spruce and fir do not lose their leaves seasonally and are able to photosynthesize with their older leaves in late winter and spring when light is good but temperatures are still too low for new growth to commence. The adaptation of evergreen needles limits the water lost due to transpiration and their dark green color increases their absorption of sunlight. 
Although precipitation is not a limiting factor, the ground freezes during the winter months and plant roots are unable to absorb water, so desiccation can be a severe problem in late winter for evergreens. Although the taiga is dominated by coniferous forests, some broadleaf trees also occur, notably birch, aspen, willow, and rowan. Many smaller herbaceous plants, such as ferns and occasionally ramps, grow closer to the ground. Periodic stand-replacing wildfires (with return times of between 20 and 200 years) clear out the tree canopies, allowing sunlight to invigorate new growth on the forest floor. For some species, wildfires are a necessary part of the life cycle in the taiga; some, e.g. jack pine, have cones which only open to release their seed after a fire, dispersing their seeds onto the newly cleared ground; certain species of fungi (such as morels) are also known to respond in this way. Grasses grow wherever they can find a patch of sun, and mosses and lichens thrive on the damp ground and on the sides of tree trunks. In comparison with other biomes, however, the taiga has low biological diversity. Coniferous trees are the dominant plants of the taiga biome. Only a few species in four main genera are found: the evergreen spruce, fir and pine, and the deciduous larch. In North America, one or two species of fir and one or two species of spruce are dominant. Across Scandinavia and western Russia, the Scots pine is a common component of the taiga, while taiga of the Russian Far East and Mongolia is dominated by larch. The boreal forest, or taiga, supports a relatively small range of animals due to the harshness of the climate. Canada's boreal forest includes 85 species of mammals, 130 species of fish, and an estimated 32,000 species of insects. Insects play a critical role as pollinators, decomposers, and as a part of the food web. Many nesting birds rely on them for food in the summer months.
The cold winters and short summers make the taiga a challenging biome for reptiles and amphibians, which depend on environmental conditions to regulate their body temperatures, and there are only a few species in the boreal forest, including the red-sided garter snake, common European adder, blue-spotted salamander, northern two-lined salamander, Siberian salamander, wood frog, northern leopard frog, boreal chorus frog, American toad, and Canadian toad. Most hibernate underground in winter. Fish of the taiga must be able to withstand cold water conditions and be able to adapt to life under ice-covered water. Species in the taiga include Alaska blackfish, northern pike, walleye, longnose sucker, white sucker, various species of cisco, lake whitefish, round whitefish, pygmy whitefish, Arctic lamprey, various grayling species, brook trout (including sea-run brook trout in the Hudson Bay area), chum salmon, Siberian taimen, lenok and lake chub. The taiga is home to a number of large herbivorous mammals, such as moose and reindeer/caribou. Some areas of the more southern closed boreal forest also have populations of other deer species, such as the elk (wapiti) and roe deer. The largest animal in the taiga is the wood bison, found in northern Canada and Alaska and newly introduced into the Russian Far East. Small mammals of the taiga biome include rodents such as the beaver, squirrel, North American porcupine and vole, as well as a small number of lagomorph species such as the snowshoe hare and mountain hare. These species have adapted to survive the harsh winters in their native ranges. Some larger mammals, such as bears, eat heartily during the summer in order to gain weight, and then go into hibernation during the winter. Other animals have adapted layers of fur or feathers to insulate them from the cold.
Predatory mammals of the taiga must be adapted to travel long distances in search of scattered prey or be able to supplement their diet with vegetation or other forms of food (such as raccoons). Mammalian predators of the taiga include Canada lynx, Eurasian lynx, stoat, Siberian weasel, least weasel, sable, American marten, North American river otter, European otter, American mink, wolverine, Asian badger, fisher, gray wolf, coyote, red fox, brown bear, American black bear, Asiatic black bear, polar bear (only small areas at the taiga – tundra ecotone) and Siberian tiger. More than 300 species of birds have their nesting grounds in the taiga. Siberian thrush, white-throated sparrow, and black-throated green warbler migrate to this habitat to take advantage of the long summer days and abundance of insects found around the numerous bogs and lakes. Of the 300 species of birds that summer in the taiga only 30 stay for the winter. These are either carrion-feeding or large raptors that can take live mammal prey, including golden eagle, rough-legged buzzard (also known as the rough-legged hawk), and raven, or else seed-eating birds, including several species of grouse and crossbills. Fire has been one of the most important factors shaping the composition and development of boreal forest stands (Rowe 1955); it is the dominant stand-renewing disturbance through much of the Canadian boreal forest (Amiro et al. 2001). The fire history that characterizes an ecosystem is its "fire regime", which has 3 elements: (1) fire type and intensity (e.g., crown fires, severe surface fires, and light surface fires), (2) size of typical fires of significance, and (3) frequency or return intervals for specific land units. The average time within a fire regime to burn an area equivalent to the total area of an ecosystem is its "fire rotation" (Heinselman 1973) or "fire cycle" (Van Wagner 1978). 
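The fire-cycle arithmetic implied by these definitions is straightforward; a minimal sketch follows, with illustrative figures chosen only to show how an annual burned area translates into a cycle length of the order reported by Amiro et al. (the inputs are hypothetical, not the study's actual data):

```python
def fire_cycle_years(ecosystem_area_km2, mean_annual_burn_km2):
    """Fire rotation / fire cycle: years for the cumulative burned area
    to equal the total area of the ecosystem (Heinselman 1973;
    Van Wagner 1978)."""
    return ecosystem_area_km2 / mean_annual_burn_km2

# Hypothetical illustration: a 3,000,000 km^2 boreal region burning
# about 23,800 km^2 per year has a fire cycle of roughly 126 years.
cycle = fire_cycle_years(3_000_000, 23_810)
print(round(cycle))
```

The same figure can be read in reverse: a 126-year cycle means that, on average, just under one percent of the forest area burns each year, concentrated in a few very large fires rather than spread evenly.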
However, as Heinselman (1981) noted, each physiographic site tends to have its own return interval, so that some areas are skipped for long periods, while others might burn twice or more often during a nominal fire rotation. The dominant fire regime in the boreal forest is high-intensity crown fires or severe surface fires of very large size, often more than 10,000 ha (100 km²), and sometimes more than 400,000 ha (4000 km²). Such fires kill entire stands. Fire rotations in the drier regions of western Canada and Alaska average 50–100 years, shorter than in the moister climates of eastern Canada, where they may average 200 years or more. Fire cycles also tend to be long near the tree line in the subarctic spruce-lichen woodlands. The longest cycles, possibly 300 years, probably occur in the western boreal in floodplain white spruce. Amiro et al. (2001) calculated the mean fire cycle for the period 1980 to 1999 in the Canadian boreal forest (including taiga) at 126 years. Increased fire activity has been predicted for western Canada, but parts of eastern Canada may experience less fire in the future because of greater precipitation in a warmer climate. The mature boreal forest pattern in the south shows balsam fir dominant on well-drained sites in eastern Canada, changing centrally and westward to a prominence of white spruce, with black spruce and tamarack forming the forests on peats, and with jack pine usually present on dry sites except in the extreme east, where it is absent. The effects of fires are inextricably woven into the patterns of vegetation on the landscape, which in the east favour black spruce, paper birch, and jack pine over balsam fir, and in the west give the advantage to aspen, jack pine, black spruce, and birch over white spruce. Many investigators have reported the ubiquity of charcoal under the forest floor and in the upper soil profile. Charcoal in soils provided Bryson et al.
(1965) with clues about the forest history of an area 280 km north of the then-current tree line at Ennadai Lake, District of Keewatin, Northwest Territories. Two lines of evidence support the thesis that fire has always been an integral factor in the boreal forest: (1) direct, eye-witness accounts and forest-fire statistics, and (2) indirect, circumstantial evidence based on the effects of fire, as well as on persisting indicators. The patchwork mosaic of forest stands in the boreal forest, typically with abrupt, irregular boundaries circumscribing homogeneous stands, is indirect but compelling testimony to the role of fire in shaping the forest. The fact is that most boreal forest stands are less than 100 years old, and only in the rather few areas that have escaped burning are there stands of white spruce older than 250 years. The prevalence of fire-adaptive morphologic and reproductive characteristics of many boreal plant species is further evidence pointing to a long and intimate association with fire. Seven of the ten most common trees in the boreal forest—jack pine, lodgepole pine, aspen, balsam poplar ("Populus balsamifera"), paper birch, tamarack, and black spruce—can be classed as pioneers in their adaptations for rapid invasion of open areas. White spruce shows some pioneering abilities, too, but is less able than black spruce and the pines to disperse seed at all seasons. Only balsam fir and alpine fir seem to be poorly adapted to reproduce after fire, as their cones disintegrate at maturity, leaving no seed in the crowns. The oldest forests in the northwest boreal region, some older than 300 years, are of white spruce occurring as pure stands on moist floodplains. Here, the frequency of fire is much less than on adjacent uplands dominated by pine, black spruce and aspen.
In contrast, in the Cordilleran region, fire is most frequent in the valley bottoms, decreasing upward, as shown by a mosaic of young pioneer pine and broadleaf stands below, and older spruce–fir on the slopes above. Without fire, the boreal forest would become more and more homogeneous, with the long-lived white spruce gradually replacing pine, aspen, balsam poplar, and birch, and perhaps even black spruce, except on the peatlands. Large areas of Siberia's taiga have been harvested for lumber since the collapse of the Soviet Union. Previously, the forest was protected by the restrictions of the Soviet Forest Ministry, but with the collapse of the Union, the restrictions regarding trade with Western nations have vanished. Trees are easy to harvest and sell well, so loggers have begun harvesting Russian taiga evergreen trees for sale to nations previously forbidden by Soviet law. In Canada, eight percent of the taiga is protected from development, and provincial governments allow forest management to occur on Crown land under rigorous constraints. The main forestry practice in the boreal forest of Canada is clearcutting, which involves cutting down most of the trees in a given area, then replanting the forest as a monocrop (one species of tree) the following season. Some of the products from logged boreal forests include toilet paper, copy paper, newsprint, and lumber. More than 90% of boreal forest products from Canada are exported for consumption and processing in the United States. Some of the larger cities situated in this biome are Murmansk, Arkhangelsk, Yakutsk, Anchorage, Yellowknife, Tromsø, Luleå, and Oulu. Most companies that harvest in Canadian forests are certified by an independent third party agency such as the Forest Stewardship Council (FSC), Sustainable Forests Initiative (SFI), or the Canadian Standards Association (CSA).
While the certification process differs between these groups, all of them include forest stewardship, respect for aboriginal peoples, compliance with local, provincial or national environmental laws, forest worker safety, education and training, and other environmental, business, and social requirements. The prompt renewal of all harvest sites by planting or natural renewal is also required. During the last quarter of the twentieth century, the zone of latitude occupied by the boreal forest experienced some of the greatest temperature increases on Earth. Winter temperatures have increased more than summer temperatures. The number of days with extremely cold temperatures (e.g., −20 to −40 °C (−4 to −40 °F)) has decreased irregularly but systematically in nearly all the boreal region, allowing better survival for tree-damaging insects. In summer, the daily low temperature has increased more than the daily high temperature. In Fairbanks, Alaska, the length of the frost-free season has increased from 60–90 days in the early twentieth century to about 120 days a century later. Summer warming has been shown to increase water stress and reduce tree growth in dry areas of the southern boreal forest in central Alaska, western Canada and portions of far eastern Russia. Precipitation is relatively abundant in Scandinavia, Finland, northwest Russia and eastern Canada, where a longer growth season (i.e. the period when sap flow is not impeded by frozen water) accelerates tree growth. As a consequence of this warming trend, the warmer parts of the boreal forests are susceptible to replacement by grassland, parkland or temperate forest. In Siberia, the taiga is converting from predominantly needle-shedding larch trees to evergreen conifers in response to a warming climate. This is likely to further accelerate warming, as the evergreen trees will absorb more of the sun's rays.
Given the vast size of the area, such a change has the potential to affect areas well outside of the region. In much of the boreal forest in Alaska, the growth of white spruce trees is stunted by unusually warm summers, while trees on some of the coldest fringes of the forest are experiencing faster growth than previously. Lack of moisture in the warmer summers is also stressing the birch trees of central Alaska. Recent years have seen outbreaks of insect pests in forest-destroying plagues: the spruce-bark beetle ("Dendroctonus rufipennis") in Yukon and Alaska; the mountain pine beetle in British Columbia; the aspen-leaf miner; the larch sawfly; the spruce budworm ("Choristoneura fumiferana"); the spruce coneworm. The effect of sulphur dioxide on woody boreal forest species was investigated by Addison et al. (1984), who exposed plants growing on native soils and on oil sands tailings to 15.2 μmol/m3 (0.34 ppm) of SO2 and measured the effect on net CO2 assimilation rate (NAR). The Canadian maximum acceptable limit for atmospheric SO2 is 0.34 ppm. Fumigation with SO2 significantly reduced NAR in all species and produced visible symptoms of injury in 2–20 days. The decrease in NAR of deciduous species (trembling aspen ["Populus tremuloides"], willow ["Salix"], green alder ["Alnus viridis"], and white birch ["Betula papyrifera"]) was significantly more rapid than that of conifers (white spruce, black spruce ["Picea mariana"], and jack pine ["Pinus banksiana"]) or an evergreen angiosperm (Labrador tea) growing on a fertilized Brunisol. These metabolic and visible injury responses seemed to be related to differences in S uptake, owing in part to higher gas exchange rates for deciduous species than for conifers. Conifers growing in oil sands tailings responded to SO2 with a significantly more rapid decrease in NAR compared with those growing in the Brunisol, perhaps because of predisposing toxic material in the tailings.
However, sulphur uptake and visible symptom development did not differ between conifers growing on the 2 substrates. Acidification of precipitation by anthropogenic, acid-forming emissions has been associated with damage to vegetation and reduced forest productivity, but 2-year-old white spruce that were subjected to simulated acid rain (at pH 4.6, 3.6, and 2.6) applied weekly for 7 weeks incurred no statistically significant (P > 0.05) reduction in growth during the experiment compared with the background control (pH 5.6) (Abouguendia and Baschak 1987). However, symptoms of injury were observed in all treatments; the number of plants and the number of needles affected increased with increasing rain acidity and with time. Scherbatskoy and Klein (1983) found no significant effect on chlorophyll concentration in white spruce at pH 4.3 and 2.8, but Abouguendia and Baschak (1987) found a significant reduction in white spruce at pH 2.6, and the foliar sulphur content was significantly greater at pH 2.6 than in any of the other treatments. Many nations are taking direct steps to protect the ecology of the taiga by prohibiting logging, mining, oil and gas production, and other forms of development. In February 2010 the Canadian government established protection for 13,000 square kilometres of boreal forest by creating a new 10,700-square-kilometre park reserve in the Mealy Mountains area of eastern Canada and a 3,000-square-kilometre waterway provincial park that follows alongside the Eagle River from headwaters to sea. Two Canadian provincial governments, Ontario and Quebec, introduced measures in 2008 that would protect at least half of their northern boreal forest. Although both provinces admitted it will take years to plan, work with Aboriginal and local communities and ultimately map out precise boundaries of the areas off-limits to development, the measures are expected to create some of the largest protected areas networks in the world once completed.
Both announcements came a year after a letter signed by 1,500 scientists called on political leaders to protect at least half of the boreal forest. The taiga stores enormous quantities of carbon, more than the world's temperate and tropical forests combined, much of it in wetlands and peatland. In fact, current estimates place boreal forests as storing twice as much carbon per unit area as tropical forests. One of the biggest areas of research, and a topic still full of unsolved questions, is the recurring disturbance of fire and the role it plays in propagating the lichen woodland. The phenomenon of wildfire by lightning strike is the primary determinant of understory vegetation, and because of this it is considered to be the predominant force behind community and ecosystem properties in the lichen woodland. The significance of fire is clearly evident when one considers that understory vegetation influences tree seedling germination in the short term and decomposition of biomass and nutrient availability in the long term. The recurrent cycle of large, damaging fires occurs approximately every 70 to 100 years. Understanding the dynamics of this ecosystem is entangled with discovering the successional paths that the vegetation exhibits after a fire. Trees, shrubs, and lichens all recover from fire-induced damage through vegetative reproduction as well as invasion by propagules. Seeds that have fallen and become buried provide little help in re-establishment of a species. The reappearance of lichens is reasoned to occur because of varying conditions and light/nutrient availability in each different microsite. Several different studies have led to the formation of the theory that post-fire development can be propagated by any of four pathways: self replacement, species-dominance relay, species replacement, or gap-phase self replacement. Self replacement is simply the re-establishment of the pre-fire dominant species.
Species-dominance relay is a sequential attempt of tree species to establish dominance in the canopy. Species replacement occurs when fires happen frequently enough to interrupt species-dominance relay. Gap-phase self replacement is the least common and so far has only been documented in western Canada. It is a self replacement of the surviving species into the canopy gaps after a fire kills another species. The particular pathway taken after a fire disturbance depends on how the landscape is able to support trees, as well as on fire frequency. Fire frequency has a large role in shaping the original inception of the lower forest line of the lichen woodland taiga. It has been hypothesized by Serge Payette that the spruce-moss forest ecosystem was changed into the lichen woodland biome by two compounded strong disturbances: large fire and the appearance and attack of the spruce budworm. The spruce budworm is a deadly insect to the spruce populations in the southern regions of the taiga. J.P. Jasinski confirmed this theory five years later, stating “Their [lichen woodlands] persistence, along with their previous moss forest histories and current occurrence adjacent to closed moss forests, indicate that they are an alternative stable state to the spruce–moss forests”.
https://en.wikipedia.org/wiki?curid=31302
Type II submarine The Type II U-boat was designed by Nazi Germany as a coastal U-boat, modeled after the CV-707 submarine, which was designed by the Dutch dummy company NV Ingenieurskantoor voor Scheepsbouw Den Haag (I.v.S) (set up by Germany after World War I in order to maintain and develop German submarine technology and to circumvent the limitations set by the Treaty of Versailles) and built in 1933 by the Finnish Crichton-Vulcan shipyard in Turku, Finland. It was too small to undertake sustained operations far away from the home support facilities. Its primary role was found to be in the training schools, preparing new German naval officers for command. It appeared in four sub-types. Germany was stripped of its U-boats by the Treaty of Versailles at the end of World War I, but in the late 1920s and early 1930s began to rebuild its armed forces. The pace of rearmament accelerated under Adolf Hitler, and the first Type II U-boat was laid down on 11 February 1935. Knowing that the world would see this step towards rearmament, Hitler reached an agreement with Britain to build a navy up to 35% of the size of the Royal Navy in surface vessels, but equal to the British in number of submarines. This agreement was signed on 18 June 1935, and the first Type II boat was commissioned 11 days later. The defining characteristic of the Type II was its small size. Known as the "Einbaum" ("dugout canoe"), it had some advantages over larger boats, chiefly its ability to work in shallow water and dive quickly, and its increased stealth due to its low conning tower. However, it had a shallower maximum depth, short range, cramped living conditions, and carried fewer torpedoes. The boat had a single hull, with no watertight compartments. There were three torpedo tubes forward (none aft), with space for another two torpedoes inside the pressure hull for reloads. A single 20 mm anti-aircraft gun was provided, but no deck gun was mounted. Space inside was limited.
The two spare torpedoes extended from just behind the torpedo tubes to just in front of the control room, and most of the 24-man crew lived in this forward area around the torpedoes, sharing 12 bunks. Four bunks were also provided aft of the engines for the engine room crew. Cooking and sanitary facilities were basic, and in this environment long patrols were very arduous. Most Type IIs only saw operational service during the early years of the war, thereafter remaining in training bases. Six were stripped down to their hulls, transported by river and truck to Linz (on the Danube), and reassembled for use in the Black Sea against the Soviet Union. In contrast to other German submarine types, few Type IIs were lost. This reflects their use as training boats, although accidents accounted for several vessels. These boats were a first step towards re-armament, intended to provide Germany with experience in submarine construction and operation and lay the foundation for larger boats to build upon. Only one of these submarines survives: the prototype CV-707, renamed "Vesikko" by the Finnish Navy, which later bought it. On 3 February 2008, "The Telegraph" reported that U-20 had been discovered by Selçuk Kolay (a Turkish marine engineer) in water off the coast of the Turkish city of Zonguldak. According to the report, Kolay also knows the locations of U-19 and U-23, scuttled in deeper water near U-20. The Type IIA was a single-hull, all-welded boat with internal ballast tanks. Compared to the other variants, it had a smaller bridge and could carry the German G7a and G7e torpedoes as well as TM-type torpedo mines. There were two periscopes in the conning tower: an aerial (navigation) periscope at the front of the tower, and an attack periscope in the middle of the tower. There were serrated net cutters in the bow. The net cutters were adopted from World War I boats but were quickly discontinued during World War II.
Deutsche Werke AG of Kiel built six Type IIAs in 1934 and 1935. The prototype, built in Finland, became the Finnish submarine Vesikko. The Type IIB was a lengthened version of the Type IIA. Three additional compartments were inserted amidships, and additional diesel tanks were fitted beneath the control room. The range was increased to 1,800 nautical miles at 12 knots. Diving time was also improved to 30 seconds. Deutsche Werke AG of Kiel built four Type IIBs in 1935 and 1936; Germaniawerft of Kiel built fourteen in 1935 and 1936; and Flender Werke AG of Lübeck built two between 1938 and 1940. In total, 20 Type IIB submarines were commissioned. The Type IIC was a further lengthened version of the Type IIB, with an additional two compartments inserted amidships to accommodate improved radio room facilities. The additional diesel tanks beneath the control room were further enlarged, extending the range to 1,900 nautical miles at 12 knots. Deutsche Werke AG of Kiel built eight Type IICs between 1937 and 1940. There were eight Type IIC submarines commissioned. The Type IID had additional saddle tanks fitted to the sides of the external hull. These saddle tanks were used to accommodate additional diesel storage tanks. The diesel oil would float atop the water in the saddle tanks. As oil was consumed, water would gradually fill the tanks to compensate for the positive buoyancy. The range was nearly doubled, enabling the Type II to conduct longer operations around the British Isles. A further development was that the propellers were fitted with Kort nozzles, intended to improve propulsion efficiency. Deutsche Werke AG of Kiel built sixteen Type IIDs in 1939 and 1940. There were 16 Type IID submarines commissioned. See list of German Type II submarines for individual ship details.
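The saddle-tank compensation described for the Type IID can be sketched numerically. This is only an illustration of the physical principle; the tank volume and fluid densities below are typical textbook values, not figures from the source:

```python
# Diesel is less dense than seawater, so in a free-flooding saddle tank the
# oil floats on top while water sits below. As fuel is drawn off, seawater
# entering from below keeps the tank full of liquid, so the boat never
# develops a large buoyant air space in the external tanks.
RHO_SEAWATER = 1025.0  # kg/m^3, typical value (assumption)
RHO_DIESEL = 850.0     # kg/m^3, typical value (assumption)

def tank_mass_kg(volume_m3, oil_fraction):
    """Mass of liquid in a full tank: `oil_fraction` diesel on top,
    the remainder flooded with seawater."""
    return volume_m3 * (oil_fraction * RHO_DIESEL
                        + (1.0 - oil_fraction) * RHO_SEAWATER)

full = tank_mass_kg(10.0, 1.0)   # hypothetical 10 m^3 tank full of fuel
spent = tank_mass_kg(10.0, 0.0)  # same tank after the fuel is consumed
print(full, spent)
```

Under these assumed figures the tank grows only about 1.75 tonnes heavier as the fuel is burned, a small and predictable trim change; an air-filled 10 m³ tank would instead have shed roughly ten tonnes of liquid mass, a far larger buoyancy change to manage.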
https://en.wikipedia.org/wiki?curid=31304