Relay league A relay league is a chain of message-forwarding stations in a system of optical telegraphs, radio telegraph stations, or riding couriers. Early 19th-century methods of this type evolved into the electrical telegraph networks of the mid-to-late 19th century. Radio amateurs were early organizers of relay leagues, as reflected in the name of the American Radio Relay League (ARRL). Radio amateur message relay operations were originally conducted in the first two decades of the 20th century using Morse code via spark-gap transmitters. As vacuum tubes became affordable, operations shifted to more efficient manual telegraphy transmitters, referred to as "CW" (continuous wave). Messages were relayed station-to-station, typically involving four or more re-transmission cycles to cover the continental United States, in an organized system of amateur radio networks. After World War II, voice and radioteletype implementations of the message relay system were employed.
https://en.wikipedia.org/wiki?curid=25521
History of radio The early history of radio is the history of the technology that produces and uses radio instruments, which rely on radio waves. Within the timeline of radio, many people contributed theory and inventions to what became radio. Radio development began as "wireless telegraphy". Later radio history increasingly involves matters of broadcasting. The idea of wireless communication predates the discovery of "radio", with experiments in "wireless telegraphy" via inductive and capacitive coupling and transmission through the ground, water, and even train tracks from the 1830s on. James Clerk Maxwell showed in theoretical and mathematical form in 1864 that electromagnetic waves could propagate through free space. It is likely that the first intentional transmission of a signal by means of electromagnetic waves was performed in an experiment by David Edward Hughes around 1880, although this was considered to be induction at the time. In 1888 Heinrich Rudolf Hertz was able to conclusively prove the existence of transmitted airborne electromagnetic waves in an experiment confirming Maxwell's theory of electromagnetism. After the discovery of these "Hertzian waves" (it would take almost 20 years for the term "radio" to be universally adopted for this type of electromagnetic radiation), many scientists and inventors experimented with transmitting and detecting Hertzian waves. Maxwell's theory showing that light and Hertzian electromagnetic waves were the same phenomenon at different wavelengths led "Maxwellian" scientists such as John Perry, Frederick Thomas Trouton and Alexander Trotter to assume they would be analogous to optical light. The Serbian American engineer Nikola Tesla (who proposed a wireless power/communication earth-conduction system similar to radio in 1893) considered Hertzian waves relatively useless for his system, since such "light" could not transmit further than line of sight. In 1892 the physicist William Crookes wrote on the possibilities of wireless telegraphy based on Hertzian waves. Others, such as Sir Oliver Lodge, Jagadish Chandra Bose, and Alexander Popov, were involved in the development of components and theory for the transmission and reception of airborne electromagnetic waves in the course of their own theoretical work. Over several years starting in 1894, the Italian inventor Guglielmo Marconi built the first engineering-complete, commercially successful wireless telegraphy system based on airborne Hertzian waves (radio transmission). Marconi demonstrated the application of radio in military and marine communications and started a company for the development and propagation of radio communication services and equipment. The meaning and usage of the word "radio" has developed in parallel with developments within the field of communications and can be seen to have three distinct phases: electromagnetic waves and experimentation; wireless communication and technical development; and radio broadcasting and commercialization. In an 1864 presentation, published in 1865, James Clerk Maxwell proposed theories of electromagnetism, with mathematical proofs, which showed that light was an electromagnetic wave and predicted that radio waves and X-rays were also electromagnetic waves, all propagating through free space. In 1886–88 Heinrich Rudolf Hertz conducted a series of experiments that proved the existence of Maxwell's electromagnetic waves, using a frequency in what would later be called the "radio" spectrum.
Many individuals—inventors, engineers, developers and businessmen—constructed systems based on their own understanding of these and other phenomena, some predating Maxwell and Hertz's discoveries. Thus "wireless telegraphy" and radio wave-based systems can be attributed to multiple "inventors". Development from a laboratory demonstration to a commercial entity spanned several decades and required the efforts of many practitioners. In 1878, David E. Hughes noticed that sparks could be heard in a telephone receiver when experimenting with his carbon microphone. He developed this carbon-based detector further and eventually could detect signals over a few hundred yards. He demonstrated his discovery to the Royal Society in 1880, but was told it was merely induction, and therefore abandoned further research. Thomas Edison came across the phenomenon while experimenting with a telegraph at Menlo Park, noting an unexplained transmission effect, which he called etheric force in an announcement on November 28, 1875. Elihu Thomson published his findings on Edison's new "force", again attributing it to induction, an explanation that Edison accepted. Edison went on the next year to take out a patent on a system of electrical wireless communication between ships based on electrostatic coupling using the water and elevated terminals. Although this was not a radio system, Edison would sell his patent rights to his friend Guglielmo Marconi at the Marconi Company in 1903, rather than to another interested party who might end up working against Marconi's interests. Between 1886 and 1888 Heinrich Rudolf Hertz published the results of his experiments wherein he was able to transmit electromagnetic waves (radio waves) through the air, proving Maxwell's electromagnetic theory. Thus, given Hertz's comprehensive discoveries, radio waves came to be referred to as "Hertzian waves". Between 1890 and 1892 physicists such as John Perry, Frederick Thomas Trouton and William Crookes proposed electromagnetic or Hertzian waves as a navigation aid or means of communication, with Crookes writing on the possibilities of wireless telegraphy based on Hertzian waves in 1892. In a lecture on the work of Hertz, shortly after Hertz's death, Professors Oliver Lodge and Alexander Muirhead demonstrated wireless signaling using Hertzian (radio) waves in the lecture theater of the Oxford University Museum of Natural History on August 14, 1894. During the demonstration radio waves were sent from the neighboring Clarendon Laboratory building and received by apparatus in the lecture theater. Building on the work of Lodge, the Bengali Indian physicist Jagadish Chandra Bose ignited gunpowder and rang a bell at a distance, using millimeter-wavelength microwaves, in a November 1894 public demonstration at the Town Hall of Kolkata, India. Bose wrote in a Bengali essay, "Adrisya Alok" ("Invisible Light"), "The invisible light can easily pass through brick walls, buildings etc. Therefore, messages can be transmitted by means of it without the mediation of wires." Bose's first scientific paper, "On polarisation of electric rays by double-refracting crystals", was communicated to the Asiatic Society of Bengal in May 1895. Following that, Bose produced a series of articles in English, one after another. His second paper was communicated to the Royal Society of London by Lord Rayleigh in October 1895. In December 1895, the London journal "The Electrician" (Vol.
36) published Bose's paper, "On a new electro-polariscope". At that time, the word 'coherer', coined by Lodge, was used in the English-speaking world to mean Hertzian wave receivers or detectors. "The Electrician" (December 1895) readily commented on Bose's coherer. "The Englishman" (18 January 1896) quoted from "The Electrician" and commented as follows: "Should Professor Bose succeed in perfecting and patenting his ‘Coherer’, we may in time see the whole system of coast lighting throughout the navigable world revolutionised by an Indian Bengali scientist working single handed[ly] in our Presidency College Laboratory." Bose planned to "perfect his coherer", but never thought of patenting it. In 1895, conducting experiments along the lines of Hertz's research, Alexander Stepanovich Popov built his first radio receiver, which contained a coherer. Popov further refined his invention as a lightning detector and presented it to the Russian Physical and Chemical Society on May 7, 1895. A depiction of the lightning detector was printed in the "Journal of the Russian Physical and Chemical Society" the same year (publication of the minutes 15/201 of this session – December issue of the journal RPCS). An earlier description of the device was given by Dmitry Aleksandrovich Lachinov in July 1895 in the second edition of his course "Fundamentals of Meteorology and Climatology", the first such course in Russia. Popov's receiver was built on the improved basis of Lodge's receiver and was originally intended for reproducing Lodge's experiments. In 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building long-distance wireless transmission systems based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing. Marconi read through the literature and used the ideas of others who were experimenting with radio waves, but did a great deal to develop devices such as portable transmitters and receiver systems that could work over long distances, turning what was essentially a laboratory experiment into a useful communication system. By August 1895, Marconi was field-testing his system, but even with improvements he was only able to transmit signals up to one-half mile, a distance Oliver Lodge had predicted in 1894 as the maximum transmission distance for radio waves. Marconi raised the height of his antenna and hit upon the idea of grounding his transmitter and receiver. With these improvements the system was capable of transmitting signals over much greater distances, including over hills. Marconi's experimental apparatus proved to be the first engineering-complete, commercially successful radio transmission system. Marconi's apparatus is also credited with saving the 700 people who survived the tragic "Titanic" disaster. In 1896, Marconi was awarded British patent 12039, "Improvements in transmitting electrical impulses and signals and in apparatus therefor", the first patent ever issued for a Hertzian wave (radio wave) based wireless telegraphic system. In 1897, he established a radio station on the Isle of Wight, England. Marconi opened his "wireless" factory in the former silk-works at Hall Street, Chelmsford, England in 1898, employing around 60 people. By the early 1900s, Marconi held the patent rights for radio. Marconi would go on to win the Nobel Prize in Physics in 1909 and be more successful than any other inventor in his ability to "commercialize" radio and its associated equipment into a global business.
In the US some of his subsequent patented refinements (but not his original radio patent) would be overturned in a 1935 court case (upheld by the US Supreme Court in 1943). In 1900, Brazilian priest Roberto Landell de Moura transmitted the human voice wirelessly. According to the newspaper "Jornal do Comercio" (June 10, 1900), he conducted his first public experiment on June 3, 1900, in front of journalists and the General Consul of Great Britain, C.P. Lupton, in São Paulo, Brazil. The points of transmission and reception were Alto de Santana and Paulista Avenue. One year after that experiment, de Moura received his first patent from the Brazilian government. It was described as "equipment for the purpose of phonetic transmissions through space, land and water elements at a distance with or without the use of wires." Four months later, knowing that his invention had real value, he left Brazil for the United States with the intent of patenting the machine at the U.S. Patent Office in Washington, D.C. Having few resources, he had to rely on friends to push his project. Despite great difficulty, three patents were awarded: "The Wave Transmitter" (October 11, 1904), which is the precursor of today's radio transceiver; "The Wireless Telephone" and the "Wireless Telegraph", both dated November 22, 1904. The next advancement was the vacuum tube detector, invented by Westinghouse engineers. On Christmas Eve 1906, Reginald Fessenden used a synchronous rotary-spark transmitter for the first radio program broadcast, from Ocean Bluff-Brant Rock, Massachusetts. Ships at sea heard a broadcast that included Fessenden playing "O Holy Night" on the violin and reading a passage from the Bible. This was, for all intents and purposes, the first transmission of what is now known as amplitude modulation or AM radio. In June 1912 Marconi opened the world's first purpose-built radio factory at New Street Works in Chelmsford, England. The first radio news program was broadcast August 31, 1920 by station 8MK in Detroit, Michigan, which survives today as all-news format station WWJ under ownership of the CBS network. The first college radio station began broadcasting on October 14, 1920 from Union College, Schenectady, New York, under the personal call letters of Wendell King, an African-American student at the school. That month 2ADD (renamed WRUC in 1947) aired what is believed to be the first public entertainment broadcast in the United States, a series of Thursday night concerts initially heard within a limited radius and later over a much wider one. In November 1920, it aired the first broadcast of a sporting event. At 9 pm on August 27, 1920, Sociedad Radio Argentina aired a live performance of Richard Wagner's opera "Parsifal" from the Coliseo Theater in downtown Buenos Aires. Only about twenty homes in the city had receivers to tune in to this radio program. Meanwhile, regular entertainment broadcasts commenced in 1922 from the Marconi Research Centre at Writtle, England. Sports broadcasting began at this time as well, including a radio broadcast of the 1921 West Virginia vs. Pittsburgh college football game. One of the first developments in the early 20th century was that aircraft used commercial AM radio stations for navigation. This continued until the early 1960s, when VOR systems became widespread. In the early 1930s, single sideband and frequency modulation were invented by amateur radio operators. By the end of the decade, they were established commercial modes.
Radio was used to transmit pictures visible as television as early as the 1920s. Commercial television transmissions started in North America and Europe in the 1940s. In 1947 AT&T commercialized the Mobile Telephone Service. From its start in St. Louis in 1946, AT&T introduced Mobile Telephone Service to one hundred towns and highway corridors by 1948. Mobile Telephone Service was a rarity, with only 5,000 customers placing about 30,000 calls each week. Because only three radio channels were available, only three customers in any given city could make mobile telephone calls at one time. Mobile Telephone Service was expensive, costing US$15 per month, plus $0.30–0.40 per local call, equivalent (in 2012 US dollars) to about $176 per month and $3.50–4.75 per call. The Advanced Mobile Phone System (AMPS), an analog mobile cell phone system developed by Bell Labs and introduced in the Americas in 1978, gave much more capacity. It was the primary analog mobile phone system in North America (and other locales) through the 1980s and into the 2000s. Following the development of transistor technology, bipolar junction transistors led to the development of the transistor radio. In 1954, the Regency company introduced a pocket transistor radio, the TR-1, powered by a "standard 22.5 V battery". In 1955, the newly formed Sony company introduced its first transistorized radio, the TR-55. It was small enough to fit in a vest pocket, powered by a small battery, and durable, because it had no vacuum tubes to burn out. In 1957, Sony introduced the TR-63, the first mass-produced transistor radio, leading to the mass-market penetration of transistor radios. Over the next 20 years, transistors replaced tubes almost completely except for high-power transmitters. By the mid-1960s, the Radio Corporation of America (RCA) was using metal–oxide–semiconductor field-effect transistors (MOSFETs) in its consumer products, including FM radios, televisions and amplifiers. Metal–oxide–semiconductor (MOS) large-scale integration (LSI) provided a practical and economical solution for radio technology, and was used in mobile radio systems by the early 1970s. By 1963, color television was being broadcast commercially (though not all broadcasts or programs were in color), and the first (radio) communication satellite, "Telstar", was launched. In the 1970s, LORAN became the premier radio navigation system. Soon, the U.S. Navy experimented with satellite navigation, culminating in the launch of the Global Positioning System (GPS) constellation in 1987. In early radio, and to a limited extent much later, the transmission signal of a radio station was specified in meters, referring to the wavelength, the length of the radio wave. This is the origin of the terms long wave, medium wave, and short wave radio. Portions of the radio spectrum reserved for specific purposes were often referred to by wavelength: the 40-meter band, used for amateur radio, for example. The relation between wavelength and frequency is reciprocal: the higher the frequency, the shorter the wave, and vice versa. As equipment progressed, precise frequency control became possible; early stations often did not have a precise frequency, as it was affected by the temperature of the equipment, among other factors. Identifying a radio signal by its frequency rather than its wavelength proved much more practical and useful, and starting in the 1920s this became the usual method of identifying a signal, especially in the United States.
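The reciprocal relation described above is simply wavelength = c / frequency, where c is the speed of light. A minimal sketch in Python (the example frequencies and band placements are illustrative assumptions, not figures taken from this article):

# Convert between the wavelength (metres) used to identify early stations
# and the frequency (hertz) that later became the standard designation.
C = 299_792_458  # speed of light, metres per second

def to_wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres for a given frequency in hertz."""
    return C / freq_hz

def to_frequency_hz(wavelength_m: float) -> float:
    """Frequency in hertz for a given wavelength in metres."""
    return C / wavelength_m

print(to_wavelength_m(7_100_000))  # ~42.2 m: the amateur "40-meter band" sits near 7 MHz
print(to_frequency_hz(300))        # ~1.0 MHz: a 300 m signal lies in the medium-wave band

This also shows why frequency won out as the identifier: an assigned frequency is a single stable number, while a quoted wavelength such as "40 meters" actually covers a range of frequencies.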
Frequencies specified in numbers of cycles per second (kilocycles, megacycles) were replaced by the more specific designation of hertz (cycles per second) around 1965. In the 1970s, the U.S. long-distance telephone network began to transition to a digital telephone network, employing digital radios for many of its links. The transition towards digital telecommunication networks was enabled by mixed-signal MOS integrated circuit chips using switched-capacitor (SC) and pulse-code modulation (PCM) technologies. In the late 1980s, Asad Ali Abidi at UCLA developed RF CMOS (radio-frequency CMOS), a radio transceiver system on a mixed-signal MOS IC chip, which enabled the introduction of digital signal processing in wireless communications. In 1990, discrete cosine transform (DCT) video coding standards enabled digital television (DTV) transmission in both standard-definition TV (SDTV) and high-definition TV (HDTV) formats. In the early 1990s, amateur radio experimenters began to use personal computers with audio cards to process radio signals. In the 1990s, the wireless revolution began with the advent of digital wireless networks. It began with the introduction of digital cellular mobile networks, enabled by LDMOS (power MOSFET) RF power amplifiers and CMOS RF circuits. In 1994, the U.S. Army and DARPA launched an aggressive, successful project to construct a software-defined radio that could be programmed to act as virtually any radio by changing its software. Digital transmissions began to be applied to commercial broadcasting in the late 1990s. In 1995, Digital Audio Broadcasting (DAB), a digital radio standard, launched in Europe. ISDB-S, a Japanese digital television standard, was launched in 1996, and was later followed by the ISDB-T digital radio standard. Around the start of the 20th century, the Slaby-Arco wireless system was developed by Adolf Slaby and Georg von Arco. In 1900, Reginald Fessenden made a weak transmission of voice over the airwaves. In 1901, Marconi conducted the first successful transatlantic experimental radio communications. In 1907, Marconi established the first commercial transatlantic radio communications service, between Clifden, Ireland and Glace Bay, Newfoundland. Julio Cervera Baviera developed radio in Spain around 1902. Cervera Baviera obtained patents in England, Germany, Belgium, and Spain. In May–June 1899, Cervera had, with the blessing of the Spanish Army, visited Marconi's radiotelegraphic installations on the English Channel and worked to develop his own system. He began collaborating with Marconi and Marconi's assistant George Kemp on resolving the problems of a wireless communication system, overcoming the difficulties of wireless telegraphy and obtaining his first patents before the end of 1899. On March 22, 1902, Cervera founded the Spanish Wireless Telegraph and Telephone Corporation and brought to his corporation the patents he had obtained in Spain, Belgium, Germany and England. He established the second and third regular radiotelegraph services in the history of the world in 1901 and 1902, by maintaining regular transmissions between Tarifa and Ceuta (across the Straits of Gibraltar) for three consecutive months, and between Javea (Cabo de la Nao) and Ibiza (Cabo Pelado). This was after Marconi established the radiotelegraphic service between the Isle of Wight and Bournemouth in 1898.
In 1906, Domenico Mazzotto wrote: "In Spain the Minister of War has applied the system perfected by the commander of military engineering, Julio Cervera Baviera (English patent No. 20084 (1899))." Cervera thus achieved some success in this field, but his radiotelegraphic activities ceased suddenly, for reasons that remain unclear to this day. Using various patents, the British Marconi company was established in 1897 by Guglielmo Marconi and began communication between coast radio stations and ships at sea. A year later, in 1898, it successfully introduced its first radio station in Chelmsford. This company, along with its subsidiaries Canadian Marconi and American Marconi, had a stranglehold on ship-to-shore communication. It operated much the way American Telephone and Telegraph operated until 1983, owning all of its equipment and refusing to communicate with ships not equipped with Marconi apparatus. Many inventions improved the quality of radio, and amateurs experimented with uses of radio, thus planting the first seeds of broadcasting. The company Telefunken was founded on May 27, 1903, as the "Telefunken Society for Wireless Telegraphy", a joint undertaking of Siemens & Halske (S & H) and the Allgemeine Elektrizitäts-Gesellschaft ("General Electricity Company") for radio engineering in Berlin. It continued as a joint venture of AEG and Siemens AG until Siemens left in 1941. In 1911, Kaiser Wilhelm II sent Telefunken engineers to West Sayville, New York to erect three 600-foot (180 m) radio towers there. Nikola Tesla assisted in the construction. A similar station was erected in Nauen, creating the only wireless communication between North America and Europe. In 1947, the company released the U47, a microphone that became popular and widely used around the world. The invention of amplitude-modulated (AM) radio, which allowed more than one station to send signals (as opposed to spark-gap radio, where one transmitter covers the entire bandwidth of the spectrum), is attributed to Reginald Fessenden and Lee de Forest. According to some sources, notably the biography written by Fessenden's wife Helen, on Christmas Eve 1906, Reginald Fessenden used an Alexanderson alternator and rotary spark-gap transmitter to make the first radio audio broadcast, from Brant Rock, Massachusetts. Ships at sea heard a broadcast that included Fessenden playing "O Holy Night" on the violin and reading a passage from the Bible. However, Fessenden himself never mentioned that date; rather, he wrote of experiments with voice as early as 1902, and some of his experiments with voice and music, which occurred in mid-to-late December 1906, were reported in the "American Telephone Journal". Following the development of transistor technology, bipolar junction transistors led to the development of the transistor radio. In 1954, Regency introduced a pocket transistor radio, the TR-1, powered by a "standard 22.5V Battery". In 1955, the newly formed Sony company introduced its first transistorized radio, the TR-55. In 1957, Sony introduced the TR-63, the first mass-produced transistor radio, leading to the mass-market penetration of transistor radios. It was small enough to fit in a vest pocket, and able to be powered by a small battery. It was durable, because there were no tubes to burn out. Over the next twenty years, transistors displaced tubes almost completely except for picture tubes and very high power or very high frequency uses.
In the early 1960s, VOR systems finally became widespread for aircraft navigation; before that, aircraft used commercial AM radio stations for navigation. (AM stations are still marked on U.S. aviation charts.) By the mid-1960s, the Radio Corporation of America (RCA) was using metal–oxide–semiconductor field-effect transistors (MOSFETs) in its consumer products, including FM radios, televisions and amplifiers. Metal–oxide–semiconductor (MOS) large-scale integration (LSI) provided a practical and economical solution for radio technology, and was used in mobile radio systems by the early 1970s. In the 1970s, LORAN became the premier radio navigation system. Soon, the US Navy experimented with satellite navigation. In 1987, the Global Positioning System (GPS) constellation of satellites was launched. Telegraphy did not go away on radio. Instead, the degree of automation increased. On land-lines in the 1930s, teletypewriters automated encoding, and were adapted to pulse-code dialing to automate routing, a service called telex. For thirty years, telex was the cheapest form of long-distance communication, because up to 25 telex channels could occupy the same bandwidth as one voice channel. For business and government, it was an advantage that telex directly produced written documents. Telex systems were adapted to short-wave radio by sending tones over single sideband. CCITT R.44 (the most advanced pure-telex standard) incorporated character-level error detection and retransmission as well as automated encoding and routing. For many years, telex-on-radio (TOR) was the only reliable way to reach some third-world countries. TOR remains reliable, though less expensive forms of e-mail are displacing it. Many national telecom companies historically ran nearly pure telex networks for their governments, and they ran many of these links over short wave radio. Documents including maps and photographs went by radiofax, or wireless photoradiogram, invented in 1924 by Richard H. Ranger of the Radio Corporation of America (RCA). This method prospered in the mid-20th century and faded late in the century. Radio navigation played an important role in wartime, especially during World War II. Before the development of the crystal oscillator, radio navigation had many limitations, but as radio technology advanced, navigation systems became easier to use and provided more accurate position fixes. Despite these advantages, radio navigation systems often required complex equipment, such as the radio compass receiver, compass indicator, or radar plan position indicator, all of which demanded trained operators. In 1947, AT&T commercialized the Mobile Telephone Service. From its start in St. Louis in 1946, AT&T introduced Mobile Telephone Service to one hundred towns and highway corridors by 1948. Mobile Telephone Service was a rarity, with only 5,000 customers placing about 30,000 calls each week. Because only three radio channels were available, only three customers in any given city could make mobile telephone calls at one time. Mobile Telephone Service was expensive, costing US$15 per month, plus $0.30–0.40 per local call, equivalent (in 2012 US dollars) to about $176 per month and $3.50–4.75 per call. The development of metal–oxide–semiconductor (MOS) large-scale integration (LSI) technology, information theory and cellular networking led to the development of affordable mobile communications.
The Advanced Mobile Phone System (AMPS), an analog mobile cell phone system developed by Bell Labs and introduced in the Americas in 1978, gave much more capacity. It was the primary analog mobile phone system in North America (and other locales) through the 1980s and into the 2000s. Radio broadcasting began with the development of radio receivers and transmitters, including crystal sets and the first vacuum tubes, which made long-distance broadcasting possible. The most common type of receiver before vacuum tubes was the crystal set, although some early radios used some type of amplification through electric current or battery. Inventions of the triode amplifier, motor-generator, and detector enabled audio radio. The use of amplitude modulation (AM), by which sound waves can be transmitted over a continuous-wave radio signal of narrow bandwidth (as opposed to spark-gap radio, which sent rapid strings of damped-wave pulses that consumed lots of bandwidth and were only suitable for Morse-code telegraphy), was pioneered by Fessenden and Lee de Forest. The art and science of crystal sets is still pursued as a hobby in the form of simple un-amplified radios that 'run on nothing, forever'. They are used as a teaching tool by groups such as the Boy Scouts of America to introduce youngsters to electronics and radio. As the only energy available is that gathered by the antenna system, loudness is necessarily limited. During the mid-1920s, amplifying vacuum tubes (or "thermionic valves" in the UK) revolutionized radio receivers and transmitters. John Ambrose Fleming developed a vacuum tube diode, and Lee de Forest added a "grid" electrode, creating the triode. The Dutch company "Nederlandsche Radio-Industrie" and its owner-engineer, Hanso Idzerda, made the first regular wireless broadcast for entertainment from its workshop in The Hague on 6 November 1919. The company manufactured both transmitters and receivers. Its popular program was broadcast four nights per week on 670 metres AM, until 1924, when the company ran into financial troubles. On 27 August 1920, regular wireless broadcasts for entertainment began in Argentina, pioneered by Enrique Telémaco Susini and his associates, and spark gap telegraphy stopped. On 31 August 1920 the first known radio news program was broadcast by station 8MK, the unlicensed predecessor of WWJ (AM) in Detroit, Michigan. In 1922 regular wireless broadcasts for entertainment began in the UK from the Marconi Research Centre station 2MT at Writtle near Chelmsford, England. Early radios ran the entire power of the transmitter through a carbon microphone. In the 1920s, the Westinghouse company bought Lee de Forest's and Edwin Armstrong's patents, and Westinghouse engineers developed a more modern vacuum tube. In 1933, FM radio was patented by inventor Edwin H. Armstrong. FM uses frequency modulation of the radio wave to reduce static and interference from electrical equipment and the atmosphere. In 1937, W1XOJ, the first experimental FM radio station, was granted a construction permit by the US Federal Communications Commission (FCC). In the 1930s, regular analog television broadcasting began in some parts of Europe and North America. By the end of the decade there were roughly 25,000 all-electronic television receivers in existence worldwide, the majority of them in the UK.
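The practical difference between the AM scheme pioneered by Fessenden and de Forest and Armstrong's FM can be made concrete: AM varies the carrier's amplitude in step with the audio, while FM varies the carrier's instantaneous frequency, which is why FM is far less affected by amplitude-type static. A minimal NumPy sketch (sample rate, carrier, tone, and modulation figures are all illustrative assumptions):

import numpy as np

fs = 200_000                             # sample rate in Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)           # 10 ms of signal
carrier_hz = 20_000                      # carrier, scaled down for readability
audio = np.sin(2 * np.pi * 1_000 * t)    # a 1 kHz tone standing in for program audio

# AM: the audio scales the carrier's envelope; an index below 1 avoids overmodulation.
am = (1 + 0.5 * audio) * np.cos(2 * np.pi * carrier_hz * t)

# FM: the audio steers the carrier's instantaneous frequency; integrating the
# frequency deviation over time gives the phase.
deviation_hz = 5_000
phase = 2 * np.pi * np.cumsum(deviation_hz * audio) / fs
fm = np.cos(2 * np.pi * carrier_hz * t + phase)

Static and electrical interference mostly perturb amplitude, which an FM receiver can discard by amplitude limiting but an AM receiver cannot.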
In the US, Armstrong's FM system was designated by the FCC to transmit and receive television sound. After World War II, FM radio broadcasting was introduced in Germany. At a meeting in Copenhagen in 1948, a new wavelength plan was drawn up for Europe. Because of the recent war, Germany (which did not exist as a state and so was not invited) was allotted only a small number of medium-wave frequencies, which were not very good for broadcasting. For this reason Germany began broadcasting on UKW ("Ultrakurzwelle", i.e. ultra short wave, nowadays called VHF), which was not covered by the Copenhagen plan. After some experience with amplitude modulation on VHF, it was realized that FM was a much better alternative for VHF radio than AM. Because of this history, FM radio is still referred to as "UKW radio" in Germany. Other European nations followed a bit later, once the superior sound quality of FM and the ability to run many more local stations (because of the more limited range of VHF broadcasts) were recognized. The British government and the state-owned postal services found themselves under massive pressure from the wireless industry (including telegraphy) and early radio adopters to open up to the new medium; an internal confidential report of the "Imperial Wireless Telegraphy Committee" from February 25, 1924, set out the committee's position. When radio was introduced in the early 1920s, many predicted it would kill the phonograph record industry. Radio was a free medium for the public to hear music for which they would normally pay. While some companies saw radio as a new avenue for promotion, others feared it would cut into profits from record sales and live performances. Many record companies would not license their records to be played over the radio, and had their major stars sign agreements that they would not perform on radio broadcasts. Indeed, the music recording industry suffered a severe drop in profits after the introduction of radio. For a while, it appeared as though radio was a definite threat to the record industry. Radio ownership grew from two out of five homes in 1931 to four out of five homes in 1938. Meanwhile, record sales fell from $75 million in 1929 to $26 million in 1938 (with a low point of $5 million in 1933), though the economics of the situation were also affected by the Great Depression. The copyright owners were concerned that they would see no gain from the popularity of radio and the 'free' music it provided. What they needed to make this new medium work for them already existed in previous copyright law: the copyright holder for a song had control over all public performances 'for profit'. The problem now was proving that the radio industry, which was still figuring out for itself how to make money from advertising and offered free music to anyone with a receiver, was making a profit from the songs. The test case was against Bamberger's Department Store in Newark, New Jersey in 1922. The store was broadcasting music from its premises on the radio station WOR. No advertisements were heard, except at the beginning of the broadcast, which announced "L. Bamberger and Co., One of America's Great Stores, Newark, New Jersey." It was determined through this and previous cases (such as the lawsuit against Shanley's Restaurant) that Bamberger was using the songs for commercial gain, thus making it a public performance for profit, which meant the copyright owners were due payment.
With this ruling the American Society of Composers, Authors and Publishers (ASCAP) began collecting licensing fees from radio stations in 1923. The beginning sum was $250 for all music protected under ASCAP, but for larger stations the price soon ballooned to $5,000. Edward Samuels reports in his book "The Illustrated Story of Copyright" that "radio and TV licensing represents the single greatest source of revenue for ASCAP and its composers […] and [a]n average member of ASCAP gets about $150–$200 per work per year, or about $5,000–$6,000 for all of a member's compositions." Not long after the Bamberger ruling, ASCAP had to defend its right to charge fees once again, in 1924. The Dill Radio Bill would have allowed radio stations to play music without paying licensing fees to ASCAP or any other music-licensing corporation. The bill did not pass. Radio technology was first used for ships to communicate at sea. The Wireless Ship Act of 1910 marked the first time the U.S. government imposed regulations on the radio systems of ships. The act required ships to carry a radio system with a professional operator if they traveled more than 200 miles offshore or had more than 50 people on board. However, the act had many flaws, including unregulated competition between radio operators, notably the two major companies (British and American Marconi), which tended to delay communications for ships that used a competitor's system. These problems contributed to the sinking of the Titanic in 1912, in which emergency signals were delayed and interfered with by uncontrolled transmissions from other radio stations. After this tragedy, the government passed the Radio Act of 1912 to prevent such a failure from recurring. Under this act, the state took control of the radio spectrum, distinguishing ordinary signals from ships' emergency signals. The Radio Act of 1927 gave the Federal Radio Commission the power to grant and deny licenses, and to assign frequencies and power levels for each licensee. In 1928 it began requiring licenses of existing stations and setting controls on who could broadcast from where, on what frequency, and at what power. Some stations could not obtain a license and ceased operations. In section 29, the Radio Act of 1927 provided that broadcast content should be free of government interference. The Communications Act of 1934 led to the establishment of the Federal Communications Commission (FCC), whose responsibility is to regulate the industry, including "telephone, telegraph, and radio communications." Under this act, all carriers had to keep records of authorized and unauthorized interference, and the act allowed the government to use communication facilities in time of war. The question of the 'first' publicly targeted licensed radio station in the U.S. has more than one answer and depends on semantics; settlement of this 'first' question may hang largely upon what constitutes 'regular' programming. Many individuals helped to further the science of wireless.
https://en.wikipedia.org/wiki?curid=25522
Richard Feynman Richard Phillips Feynman ForMemRS (May 11, 1918 – February 15, 1988) was an American theoretical physicist, known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, the physics of the superfluidity of supercooled liquid helium, as well as his work in particle physics for which he proposed the parton model. For contributions to the development of quantum electrodynamics, Feynman received the Nobel Prize in Physics in 1965 jointly with Julian Schwinger and Shin'ichirō Tomonaga. Feynman developed a widely used pictorial representation scheme for the mathematical expressions describing the behavior of subatomic particles, which later became known as Feynman diagrams. During his lifetime, Feynman became one of the best-known scientists in the world. In a 1999 poll of 130 leading physicists worldwide by the British journal "Physics World", he was ranked as one of the ten greatest physicists of all time. He assisted in the development of the atomic bomb during World War II and became known to a wide public in the 1980s as a member of the Rogers Commission, the panel that investigated the Space Shuttle "Challenger" disaster. Along with his work in theoretical physics, Feynman has been credited with pioneering the field of quantum computing and introducing the concept of nanotechnology. He held the Richard C. Tolman professorship in theoretical physics at the California Institute of Technology. Feynman was a keen popularizer of physics through both books and lectures, including a 1959 talk on top-down nanotechnology called "There's Plenty of Room at the Bottom" and the three-volume publication of his undergraduate lectures, "The Feynman Lectures on Physics". Feynman also became known through his semi-autobiographical books "Surely You're Joking, Mr. Feynman!" and "What Do You Care What Other People Think?", and books written about him such as "Tuva or Bust!" by Ralph Leighton and the biography "Genius: The Life and Science of Richard Feynman" by James Gleick. Feynman was born on May 11, 1918, in Queens, New York City, to Lucille, a homemaker, and Melville Arthur Feynman, a sales manager originally from Minsk in Belarus (then part of the Russian Empire). Feynman was a late talker, and did not speak until after his third birthday. As an adult he spoke with a New York accent strong enough to be perceived as an affectation or exaggeration—so much so that his friends Wolfgang Pauli and Hans Bethe once commented that Feynman spoke like a "bum". The young Feynman was heavily influenced by his father, who encouraged him to ask questions to challenge orthodox thinking, and who was always ready to teach Feynman something new. From his mother, he gained the sense of humor that he had throughout his life. As a child, he had a talent for engineering, maintained an experimental laboratory in his home, and delighted in repairing radios. When he was in grade school, he created a home burglar alarm system while his parents were out for the day running errands. When Richard was five his mother gave birth to a younger brother, Henry Phillips, who died at age four weeks. Four years later, Richard's sister Joan was born and the family moved to Far Rockaway, Queens. Though separated by nine years, Joan and Richard were close, and they both shared a curiosity about the world.
Though their mother thought women lacked the capacity to understand such things, Richard encouraged Joan's interest in astronomy, and Joan eventually became an astrophysicist. Feynman's parents were both from Jewish families but not religious, and by his youth, Feynman described himself as an "avowed atheist". Many years later, in a letter to Tina Levitan, declining a request for information for her book on Jewish Nobel Prize winners, he stated, "To select, for approbation the peculiar elements that come from some supposedly Jewish heredity is to open the door to all kinds of nonsense on racial theory", adding, "at thirteen I was not only converted to other religious views, but I also stopped believing that the Jewish people are in any way 'the chosen people'". Later in his life, during a visit to the Jewish Theological Seminary, he encountered the Talmud for the first time. He saw that it contained the original text in a little square on the page, surrounded by commentaries written over time by different people. In this way the Talmud had evolved, and everything that was discussed was carefully recorded. Despite being impressed, Feynman was disappointed by the lack of interest in nature and the outside world expressed by the rabbis, who cared only about questions arising from the Talmud. Feynman attended Far Rockaway High School, a school in Far Rockaway, Queens, which was also attended by fellow Nobel laureates Burton Richter and Baruch Samuel Blumberg. Upon starting high school, Feynman was quickly promoted into a higher math class. An IQ test administered in high school estimated his IQ at 125—high but "merely respectable", according to biographer James Gleick. His sister Joan did better, allowing her to claim that she was smarter. Years later he declined to join Mensa International, saying that his IQ was too low. Physicist Steve Hsu later commented on the test. When Feynman was 15, he taught himself trigonometry, advanced algebra, infinite series, analytic geometry, and both differential and integral calculus. Before entering college, he was experimenting with and deriving mathematical topics such as the half-derivative using his own notation. He created special symbols for the logarithm, sine, cosine and tangent functions so they did not look like three variables multiplied together, and for the derivative, to remove the temptation of canceling out the d's. A member of the Arista Honor Society, in his last year in high school he won the New York University Math Championship. His habit of direct characterization sometimes rattled more conventional thinkers; for example, one of his questions, when learning feline anatomy, was "Do you have a map of the cat?" (referring to an anatomical chart). Feynman applied to Columbia University but was not accepted because of its quota for the number of Jews admitted. Instead, he attended the Massachusetts Institute of Technology, where he joined the Pi Lambda Phi fraternity. Although he originally majored in mathematics, he later switched to electrical engineering, as he considered mathematics to be too abstract. Noticing that he "had gone too far", he then switched to physics, which he claimed was "somewhere in between". As an undergraduate, he published two papers in the "Physical Review". One of these, which was co-written with Manuel Vallarta, was titled "The Scattering of Cosmic Rays by the Stars of a Galaxy". The other was his senior thesis, on "Forces in Molecules", based on an idea by John C.
Slater, who was sufficiently impressed by the paper to have it published. Today, it is known as the Hellmann–Feynman theorem. In 1939, Feynman received a bachelor's degree, and was named a Putnam Fellow. He attained a perfect score on the graduate school entrance exams to Princeton University in physics—an unprecedented feat—and an outstanding score in mathematics, but did poorly on the history and English portions. The head of the physics department there, Henry D. Smyth, had another concern, writing to Philip M. Morse to ask: "Is Feynman Jewish? We have no definite rule against Jews but have to keep their proportion in our department reasonably small because of the difficulty of placing them." Morse conceded that Feynman was indeed Jewish, but reassured Smyth that Feynman's "physiognomy and manner, however, show no trace of this characteristic". Attendees at Feynman's first seminar, which was on the classical version of the Wheeler–Feynman absorber theory, included Albert Einstein, Wolfgang Pauli, and John von Neumann. Pauli made the prescient comment that the theory would be extremely difficult to quantize, and Einstein said that one might try to apply this method to gravity in general relativity, which Sir Fred Hoyle and Jayant Narlikar did much later as the Hoyle–Narlikar theory of gravity. Feynman received a Ph.D. from Princeton in 1942; his thesis advisor was John Archibald Wheeler. In his doctoral thesis, titled "The Principle of Least Action in Quantum Mechanics", Feynman applied the principle of stationary action to problems of quantum mechanics, inspired by a desire to quantize the Wheeler–Feynman absorber theory of electrodynamics, and laid the groundwork for the path integral formulation and Feynman diagrams. A key insight was that positrons behaved like electrons moving backwards in time. One of the conditions of Feynman's scholarship to Princeton was that he could not be married; nevertheless, he continued to see his high school sweetheart, Arline Greenbaum, and was determined to marry her once he had been awarded his Ph.D., despite the knowledge that she was seriously ill with tuberculosis. This was an incurable disease at the time, and she was not expected to live more than two years. On June 29, 1942, they took the ferry to Staten Island, where they were married in the city office. The ceremony was attended by neither family nor friends and was witnessed by a pair of strangers. Feynman could only kiss Arline on the cheek. After the ceremony he took her to Deborah Hospital, where he visited her on weekends. In 1941, with World War II raging in Europe but the United States not yet at war, Feynman spent the summer working on ballistics problems at the Frankford Arsenal in Pennsylvania. After the attack on Pearl Harbor had brought the United States into the war, Feynman was recruited by Robert R. Wilson, who was working on means to produce enriched uranium for use in an atomic bomb, as part of what would become the Manhattan Project. At the time, Feynman had not earned a graduate degree. Wilson's team at Princeton was working on a device called an isotron, intended to electromagnetically separate uranium-235 from uranium-238. This was done in a quite different manner from that used by the calutron that was under development by a team under Wilson's former mentor, Ernest O. Lawrence, at the Radiation Laboratory of the University of California.
On paper, the isotron was many times more efficient than the calutron, but Feynman and Paul Olum struggled to determine whether or not it was practical. Ultimately, on Lawrence's recommendation, the isotron project was abandoned. At this juncture, in early 1943, Robert Oppenheimer was establishing the Los Alamos Laboratory, a secret laboratory on a mesa in New Mexico where atomic bombs would be designed and built. An offer was made to the Princeton team to be redeployed there. "Like a bunch of professional soldiers," Wilson later recalled, "we signed up, en masse, to go to Los Alamos." Like many other young physicists, Feynman soon fell under the spell of the charismatic Oppenheimer, who telephoned Feynman long distance from Chicago to inform him that he had found a sanatorium in Albuquerque, New Mexico, for Arline. They were among the first to depart for New Mexico, leaving on a train on March 28, 1943. The railroad supplied Arline with a wheelchair, and Feynman paid extra for a private room for her. At Los Alamos, Feynman was assigned to Hans Bethe's Theoretical (T) Division, and impressed Bethe enough to be made a group leader. He and Bethe developed the Bethe–Feynman formula for calculating the yield of a fission bomb, which built upon previous work by Robert Serber. As a junior physicist, he was not central to the project. He administered the computation group of human computers in the theoretical division. With Stanley Frankel and Nicholas Metropolis, he assisted in establishing a system for using IBM punched cards for computation. He invented a new method of computing logarithms that he later used on the Connection Machine. Other work at Los Alamos included calculating neutron equations for the Los Alamos "Water Boiler", a small nuclear reactor, to measure how close an assembly of fissile material was to criticality. On completing this work, Feynman was sent to the Clinton Engineer Works in Oak Ridge, Tennessee, where the Manhattan Project had its uranium enrichment facilities. He aided the engineers there in devising safety procedures for material storage so that criticality accidents could be avoided, especially when enriched uranium came into contact with water, which acted as a neutron moderator. He insisted on giving the rank and file a lecture on nuclear physics so that they would realize the dangers. He explained that while any amount of unenriched uranium could be safely stored, the enriched uranium had to be carefully handled. He developed a series of safety recommendations for the various grades of enrichments. He was told that if the people at Oak Ridge gave him any difficulty with his proposals, he was to inform them that Los Alamos "could not be responsible for their safety otherwise". Returning to Los Alamos, Feynman was put in charge of the group responsible for the theoretical work and calculations on the proposed uranium hydride bomb, which ultimately proved to be infeasible. He was sought out by physicist Niels Bohr for one-on-one discussions. He later discovered the reason: most of the other physicists were too much in awe of Bohr to argue with him. Feynman had no such inhibitions, vigorously pointing out anything he considered to be flawed in Bohr's thinking. He said he felt as much respect for Bohr as anyone else, but once anyone got him talking about physics, he would become so focused he forgot about social niceties. Perhaps because of this, Bohr never warmed to Feynman. 
At Los Alamos, which was isolated for security, Feynman amused himself by investigating the combination locks on the cabinets and desks of physicists. He often found that they left the lock combinations on the factory settings, wrote the combinations down, or used easily guessable combinations like dates. He found one cabinet's combination by trying numbers he thought a physicist might use (it proved to be 27–18–28 after the base of natural logarithms, "e" = 2.71828 ...), and found that the three filing cabinets where a colleague kept research notes all had the same combination. He left notes in the cabinets as a prank, spooking his colleague, Frederic de Hoffmann, into thinking a spy had gained access to them. Feynman's $380 monthly salary was about half the amount needed for his modest living expenses and Arline's medical bills, and they were forced to dip into her $3,300 in savings. On weekends he drove to Albuquerque to see Arline in a car borrowed from his friend Klaus Fuchs. Asked who at Los Alamos was most likely to be a spy, Fuchs mentioned Feynman's safe cracking and frequent trips to Albuquerque; Fuchs himself later confessed to spying for the Soviet Union. The FBI would compile a bulky file on Feynman, particularly in view of Feynman's Q clearance. Informed that Arline was dying, Feynman drove to Albuquerque and sat with her for hours until she died on June 16, 1945. He then immersed himself in work on the project and was present at the Trinity nuclear test. Feynman claimed to be the only person to see the explosion without the very dark glasses or welder's lenses provided, reasoning that it was safe to look through a truck windshield, as it would screen out the harmful ultraviolet radiation. The immense brightness of the explosion made him duck to the truck's floor, where he saw a temporary "purple splotch" afterimage. Feynman nominally held an appointment at the University of Wisconsin–Madison as an assistant professor of physics, but was on unpaid leave during his involvement in the Manhattan Project. In 1945, he received a letter from Dean Mark Ingraham of the College of Letters and Science requesting his return to the university to teach in the coming academic year. His appointment was not extended when he did not commit to returning. In a talk given there several years later, Feynman quipped, "It's great to be back at the only university that ever had the good sense to fire me." As early as October 30, 1943, Bethe had written to the chairman of the physics department of his university, Cornell, to recommend that Feynman be hired. On February 28, 1944, this was endorsed by Robert Bacher, also from Cornell, and one of the most senior scientists at Los Alamos. This led to an offer being made in August 1944, which Feynman accepted. Oppenheimer had also hoped to recruit Feynman to the University of California, but the head of the physics department, Raymond T. Birge, was reluctant. He made Feynman an offer in May 1945, but Feynman turned it down. Cornell matched its salary offer of $3,900 per annum. Feynman became one of the first of the Los Alamos Laboratory's group leaders to depart, leaving for Ithaca, New York, in October 1945. Because Feynman was no longer working at the Los Alamos Laboratory, he was no longer exempt from the draft. At his induction physical, Army psychiatrists diagnosed Feynman as suffering from a mental illness and the Army gave him a 4-F exemption on mental grounds. His father died suddenly on October 8, 1946, and Feynman suffered from depression.
On October 17, 1946, he wrote a letter to Arline, expressing his deep love and heartbreak. The letter was sealed and only opened after his death. "Please excuse my not mailing this," the letter concluded, "but I don't know your new address." Unable to focus on research problems, Feynman began tackling physics problems, not for utility, but for self-satisfaction. One of these involved analyzing the physics of a twirling, nutating disk as it moved through the air, inspired by an incident in the cafeteria at Cornell when someone tossed a dinner plate in the air. He read the work of Sir William Rowan Hamilton on quaternions, and attempted unsuccessfully to use them to formulate a relativistic theory of electrons. His work during this period, which used equations of rotation to express various spinning speeds, ultimately proved important to his Nobel Prize–winning work, yet because he felt burned out and had turned his attention to less immediately practical problems, he was surprised by the offers of professorships from other renowned universities, including the Institute for Advanced Study, the University of California, Los Angeles, and the University of California, Berkeley. Feynman was not the only frustrated theoretical physicist in the early post-war years. Quantum electrodynamics suffered from infinite integrals in perturbation theory. These were clear mathematical flaws in the theory, which Feynman and Wheeler had unsuccessfully attempted to work around. "Theoreticians", noted Murray Gell-Mann, "were in disgrace." In June 1947, leading American physicists met at the Shelter Island Conference. For Feynman, it was his "first big conference with big men ... I had never gone to one like this one in peacetime." The problems plaguing quantum electrodynamics were discussed, but the theoreticians were completely overshadowed by the achievements of the experimentalists, who reported the discovery of the Lamb shift, the measurement of the magnetic moment of the electron, and Robert Marshak's two-meson hypothesis. Bethe took the lead from the work of Hans Kramers, and derived a renormalized non-relativistic quantum equation for the Lamb shift. The next step was to create a relativistic version. Feynman thought that he could do this, but when he went back to Bethe with his solution, it did not converge. Feynman carefully worked through the problem again, applying the path integral formulation that he had used in his thesis. Like Bethe, he made the integral finite by applying a cut-off term. The result corresponded to Bethe's version. Feynman presented his work to his peers at the Pocono Conference in 1948. It did not go well. Julian Schwinger gave a long presentation of his work in quantum electrodynamics, and Feynman then offered his version, titled "Alternative Formulation of Quantum Electrodynamics". The unfamiliar Feynman diagrams, used for the first time, puzzled the audience. Feynman failed to get his point across, and Paul Dirac, Edward Teller and Niels Bohr all raised objections. To Freeman Dyson, one thing at least was clear: Shin'ichirō Tomonaga, Schwinger and Feynman understood what they were talking about even if no one else did, but had not published anything. He was convinced that Feynman's formulation was easier to understand, and ultimately managed to convince Oppenheimer that this was the case. Dyson published a paper in 1949, which added new rules to Feynman's that told how to implement renormalization.
Feynman was prompted to publish his ideas in the "Physical Review" in a series of papers over three years. His 1948 papers on "A Relativistic Cut-Off for Classical Electrodynamics" attempted to explain what he had been unable to get across at Pocono. His 1949 paper on "The Theory of Positrons" addressed the Schrödinger equation and Dirac equation, and introduced what is now called the Feynman propagator. Finally, in papers on the "Mathematical Formulation of the Quantum Theory of Electromagnetic Interaction" in 1950 and "An Operator Calculus Having Applications in Quantum Electrodynamics" in 1951, he developed the mathematical basis of his ideas, derived familiar formulae and advanced new ones. While papers by others initially cited Schwinger, papers citing Feynman and employing Feynman diagrams appeared in 1950, and soon became prevalent. Students learned and used the powerful new tool that Feynman had created. Computer programs were later written to compute Feynman diagrams, providing a tool of unprecedented power. It is possible to write such programs because the Feynman diagrams constitute a formal language with a formal grammar. Marc Kac provided the formal proofs of the summation over histories, showing that the parabolic partial differential equation can be re-expressed as a sum over different histories (that is, an expectation operator), in what is now known as the Feynman–Kac formula (stated below); its use extends beyond physics to many applications of stochastic processes. To Schwinger, however, the Feynman diagram was "pedagogy, not physics". By 1949, Feynman was becoming restless at Cornell. He never settled into a particular house or apartment, living in guest houses or student residences, or with married friends "until these arrangements became sexually volatile". He liked to date undergraduates, hire prostitutes, and sleep with the wives of friends. He was not fond of Ithaca's cold winter weather, and pined for a warmer climate. Above all, at Cornell, he was always in the shadow of Hans Bethe. Despite all of this, Feynman looked back favorably on the Telluride House, where he resided for a large part of his Cornell career. In an interview, he described the House as "a group of boys that have been specially selected because of their scholarship, because of their cleverness or whatever it is, to be given free board and lodging and so on, because of their brains". He enjoyed the house's convenience and said that "it's there that I did the fundamental work" for which he won the Nobel Prize. Feynman spent several weeks in Rio de Janeiro in July 1949. That year, the Soviet Union detonated its first atomic bomb, generating anti-communist hysteria. Fuchs was arrested as a Soviet spy in 1950 and the FBI questioned Bethe about Feynman's loyalty. Physicist David Bohm was arrested on December 4, 1950, and emigrated to Brazil in October 1951. Because of fears of a nuclear war, a girlfriend told Feynman that he should also consider moving to South America. He had a sabbatical coming for 1951–52, and elected to spend it in Brazil, where he gave courses at the Centro Brasileiro de Pesquisas Físicas. In Brazil, Feynman was impressed with "samba" music, and learned to play a metal percussion instrument, the "frigideira". He was an enthusiastic amateur player of bongo and conga drums and often played them in the pit orchestra in musicals. He spent time in Rio with his friend Bohm but Bohm could not convince Feynman to investigate Bohm's ideas on physics. Feynman did not return to Cornell.
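The Feynman–Kac formula mentioned above can be stated compactly in modern notation (the notation here is the generic textbook form, not Kac's original). If $u(x,t)$ solves the parabolic equation

$$\frac{\partial u}{\partial t} + \mu(x,t)\,\frac{\partial u}{\partial x} + \frac{1}{2}\,\sigma^2(x,t)\,\frac{\partial^2 u}{\partial x^2} - V(x)\,u = 0, \qquad u(x,T) = \psi(x),$$

then $u$ can be written as an expectation over the histories of a diffusion process $X_s$ satisfying $dX_s = \mu(X_s,s)\,ds + \sigma(X_s,s)\,dW_s$:

$$u(x,t) = \mathbb{E}\!\left[\, e^{-\int_t^T V(X_s)\,ds}\,\psi(X_T) \;\middle|\; X_t = x \right],$$

which is precisely the "sum over different histories" described above, with each history weighted by the exponential factor.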
Bacher, who had been instrumental in bringing Feynman to Cornell, had lured him to the California Institute of Technology (Caltech). Part of the deal was that he could spend his first year on sabbatical in Brazil. He had become smitten by Mary Louise Bell from Neodesha, Kansas. They had met in a cafeteria at Cornell, where she had studied the history of Mexican art and textiles. She later followed him to Caltech, where he gave a lecture. While he was in Brazil, she taught classes on the history of furniture and interiors at Michigan State University. He proposed to her by mail from Rio de Janeiro, and they married in Boise, Idaho, on June 28, 1952, shortly after he returned. They frequently quarreled and she was frightened by his violent temper. Their politics were different; although he registered and voted as a Republican, she was more conservative, and her opinion on the 1954 Oppenheimer security hearing ("Where there's smoke there's fire") offended him. They separated on May 20, 1956. An interlocutory decree of divorce was entered on June 19, 1956, on the grounds of "extreme cruelty". The divorce became final on May 5, 1958. In the wake of the 1957 Sputnik crisis, the U.S. government's interest in science rose for a time. Feynman was considered for a seat on the President's Science Advisory Committee, but was not appointed. At this time, the FBI interviewed a woman close to Feynman, possibly Mary Lou, who sent a written statement to J. Edgar Hoover on August 8, 1958. The U.S. government nevertheless sent Feynman to Geneva for the September 1958 Atoms for Peace Conference. On the beach at Lake Geneva, he met Gweneth Howarth, who was from Ripponden, Yorkshire, and working in Switzerland as an "au pair". Feynman's love life had been turbulent since his divorce; his previous girlfriend had walked off with his Albert Einstein Award medal and, on the advice of an earlier girlfriend, had feigned pregnancy and blackmailed him into paying for an abortion, then used the money to buy furniture. When Feynman found that Howarth was being paid only $25 a month, he offered her $20 a week to be his live-in maid. Feynman knew that this sort of behavior was illegal under the Mann Act, so he had a friend, Matthew Sands, act as her sponsor. Howarth pointed out that she already had two boyfriends, but decided to take Feynman up on his offer, and arrived in Altadena, California, in June 1959. She made a point of dating other men, but Feynman proposed in early 1960. They were married on September 24, 1960, at the Huntington Hotel in Pasadena. They had a son, Carl, in 1962, and adopted a daughter, Michelle, in 1968. Besides their home in Altadena, they had a beach house in Baja California, purchased with the money from Feynman's Nobel Prize. Feynman tried marijuana and ketamine at John Lilly's sensory deprivation tanks, as a way of studying consciousness. He gave up alcohol when he began to show vague, early signs of alcoholism, as he did not want to do anything that could damage his brain. Despite his curiosity about hallucinations, he was reluctant to experiment with LSD. There had been protests over his alleged sexism in 1968, and again in 1972, but there is no evidence he discriminated against women. Feynman recalled protesters entering a hall and picketing a lecture he was about to give in San Francisco, calling him a "sexist pig".
As Feynman later recalled the incident, on seeing the protesters he addressed institutional sexism directly, saying that "women do indeed suffer prejudice and discrimination in physics". At Caltech, Feynman investigated the physics of the superfluidity of supercooled liquid helium, where helium seems to display a complete lack of viscosity when flowing. Feynman provided a quantum-mechanical explanation for the Soviet physicist Lev Landau's theory of superfluidity. Applying the Schrödinger equation to the question showed that the superfluid was displaying quantum mechanical behavior observable on a macroscopic scale. This helped with the problem of superconductivity, but the solution eluded Feynman. It was solved with the BCS theory of superconductivity, proposed by John Bardeen, Leon Neil Cooper, and John Robert Schrieffer in 1957. Feynman, inspired by a desire to quantize the Wheeler–Feynman absorber theory of electrodynamics, laid the groundwork for the path integral formulation and Feynman diagrams. With Murray Gell-Mann, Feynman developed a model of weak decay, which showed that the current coupling in the process is a combination of vector and axial currents (an example of weak decay is the decay of a neutron into an electron, a proton, and an antineutrino). Although E. C. George Sudarshan and Robert Marshak developed the theory nearly simultaneously, Feynman's collaboration with Gell-Mann was seen as seminal because the weak interaction was neatly described by the vector and axial currents. It thus combined the 1933 beta decay theory of Enrico Fermi with an explanation of parity violation. Feynman attempted an explanation, called the parton model, of the strong interactions governing nucleon scattering. The parton model emerged as a complement to the quark model developed by Gell-Mann. The relationship between the two models was murky; Gell-Mann referred to Feynman's partons derisively as "put-ons". In the mid-1960s, physicists believed that quarks were just a bookkeeping device for symmetry numbers, not real particles; the statistics of the omega-minus particle, if it were interpreted as three identical strange quarks bound together, seemed impossible if quarks were real. The SLAC National Accelerator Laboratory deep inelastic scattering experiments of the late 1960s showed that nucleons (protons and neutrons) contained point-like particles that scattered electrons. It was natural to identify these with quarks, but Feynman's parton model attempted to interpret the experimental data in a way that did not introduce additional hypotheses. For example, the data showed that some 45% of the energy momentum was carried by electrically neutral particles in the nucleon. These electrically neutral particles are now seen to be the gluons that carry the forces between the quarks, and their three-valued color quantum number solves the omega-minus problem. Feynman did not dispute the quark model; for example, when the fifth quark (the bottom quark) was discovered in 1977, Feynman immediately pointed out to his students that the discovery implied the existence of a sixth quark (the top quark), which was discovered in the decade after his death. After the success of quantum electrodynamics, Feynman turned to quantum gravity. By analogy with the photon, which has spin 1, he investigated the consequences of a free massless spin-2 field and derived the Einstein field equation of general relativity, but little more.
The computational device that Feynman discovered then for gravity, "ghosts", which are "particles" in the interior of his diagrams that have the "wrong" connection between spin and statistics, has proved invaluable in explaining the quantum particle behavior of the Yang–Mills theories, for example, quantum chromodynamics and the electroweak theory. He did work on all four of the forces of nature: electromagnetic, the weak force, the strong force and gravity. John and Mary Gribbin state in their book on Feynman that "Nobody else has made such influential contributions to the investigation of all four of the interactions". Partly as a way to bring publicity to progress in physics, Feynman offered $1,000 prizes for two of his challenges in nanotechnology; one was claimed by William McLellan and the other by Tom Newman. Feynman was also interested in the relationship between physics and computation, and was one of the first scientists to conceive the possibility of quantum computers. In the 1980s he began to spend his summers working at Thinking Machines Corporation, helping to build some of the first parallel supercomputers and considering the construction of quantum computers. In 1984–1986, he developed a variational method for the approximate calculation of path integrals, which has led to a powerful method of converting divergent perturbation expansions into convergent strong-coupling expansions (variational perturbation theory) and, as a consequence, to the most accurate determination of critical exponents measured in satellite experiments. In the early 1960s, Feynman acceded to a request to "spruce up" the teaching of undergraduates at Caltech. After three years devoted to the task, he produced a series of lectures that later became "The Feynman Lectures on Physics". He wanted a picture of a drumhead sprinkled with powder to show the modes of vibration at the beginning of the book. Concerned over the connections to drugs and rock and roll that could be made from the image, the publishers changed the cover to plain red, though they included a picture of him playing drums in the foreword. "The Feynman Lectures on Physics" occupied two physicists, Robert B. Leighton and Matthew Sands, as part-time co-authors for several years. Even though the books were not adopted by universities as textbooks, they continue to sell well because they provide a deep understanding of physics. Many of his lectures and miscellaneous talks were turned into other books, including "The Character of Physical Law", "QED: The Strange Theory of Light and Matter", "Statistical Mechanics", "Lectures on Gravitation", and the "Feynman Lectures on Computation". Feynman wrote about his experiences teaching physics undergraduates in Brazil. The students' study habits and the Portuguese language textbooks were so devoid of any context or applications for their information that, in Feynman's opinion, the students were not learning physics at all. At the end of the year, Feynman was invited to give a lecture on his teaching experiences, and he agreed to do so, provided he could speak frankly, which he did. Feynman opposed rote learning or unthinking memorization and other teaching methods that emphasized form over function. "Clear thinking" and "clear presentation" were fundamental prerequisites for his attention. It could be perilous even to approach him unprepared, and he did not forget fools and pretenders. In 1964, he served on the California State Curriculum Commission, which was responsible for approving textbooks to be used by schools in California.
He was not impressed with what he found. Many of the mathematics texts covered subjects of use only to pure mathematicians as part of the "New Math". Elementary students were taught about sets, but, in Feynman's view, the new terminology was never put to any practical use. In April 1966, Feynman delivered an address to the National Science Teachers Association, in which he suggested how students could be made to think like scientists, be open-minded, curious, and especially, to doubt. In the course of the lecture, he gave a definition of science, which he said came about by several stages: the evolution of intelligent life on planet Earth, creatures such as cats that play and learn from experience; the evolution of humans, who came to use language to pass knowledge from one individual to the next, so that the knowledge was not lost when an individual died; and, because incorrect knowledge could be passed down as well as correct knowledge, a further step, in which Galileo and others started doubting the truth of what was passed down and investigating "ab initio", from experience, what the true situation was—this was science. In 1974, Feynman delivered the Caltech commencement address on the topic of "cargo cult science", which has the semblance of science, but is only pseudoscience due to a lack of "a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty" on the part of the scientist. He instructed the graduating class that "The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. You just have to be honest in a conventional way after that." Feynman served as doctoral advisor to 31 students. In 1977, Feynman supported his colleague Jenijoy La Belle, who had been hired as Caltech's first female professor in 1969 and who filed suit with the Equal Employment Opportunity Commission after she was refused tenure in 1974. The EEOC ruled against Caltech in 1977, adding that La Belle had been paid less than male colleagues. La Belle finally received tenure in 1979. Many of Feynman's colleagues were surprised that he took her side, but he had gotten to know La Belle and both liked and admired her. In the 1960s, Feynman began thinking of writing an autobiography, and he began granting interviews to historians. In the 1980s, working with Ralph Leighton (Robert Leighton's son), he recorded chapters on audio tape that Ralph transcribed. The book was published in 1985 as "Surely You're Joking, Mr. Feynman!" and became a best-seller. Gell-Mann was upset by Feynman's account in the book of the weak interaction work, and threatened to sue, resulting in a correction being inserted in later editions. This incident was just the latest provocation in decades of bad feeling between the two scientists. Gell-Mann often expressed frustration at the attention Feynman received; he remarked: "[Feynman] was a great scientist, but he spent a great deal of his effort generating anecdotes about himself." Feynman has been criticized for a chapter in the book entitled "You Just "Ask" Them", where he describes how he learned to seduce women at a bar he went to in the summer of 1946. A mentor taught him to ask a woman if she would sleep with him before buying her anything.
He describes seeing women at the bar as "bitches" in his thoughts, and tells a story of how he told a woman named Ann that "You are worse than a whore" after Ann convinced him to buy her sandwiches by telling him he would eat them at her place, but then, after he had bought them, told him they could not eat together after all because another man was coming over; later that same evening Ann returned to the bar to take Feynman to her place. Feynman states at the end of the chapter that this behavior was not typical of him: "So it worked even with an ordinary girl! But no matter how effective the lesson was, I never really used it after that. I didn't enjoy doing it that way. But it was interesting to know that things worked much differently from how I was brought up." When invited to join the Rogers Commission, which investigated the "Challenger" disaster, Feynman was hesitant. The nation's capital, he told his wife, was "a great big world of mystery to me, with tremendous forces". But she convinced him to go, saying he might discover something others overlooked. Because Feynman did not balk at blaming NASA for the disaster, he clashed with the politically savvy commission chairman, William Rogers, a former Secretary of State. During a break in one hearing, Rogers told commission member Neil Armstrong, "Feynman is becoming a pain in the ass." During a televised hearing, Feynman demonstrated that the material used in the shuttle's O-rings became less resilient in cold weather by compressing a sample of the material in a clamp and immersing it in ice-cold water. The commission ultimately determined that the disaster was caused by the primary O-ring not properly sealing in unusually cold weather at Cape Canaveral. Feynman devoted the latter half of his book "What Do You Care What Other People Think?" to his experience on the Rogers Commission, straying from his usual convention of brief, light-hearted anecdotes to deliver an extended and sober narrative. Feynman's account reveals a disconnect between NASA's engineers and executives that was far more striking than he expected. His interviews of NASA's high-ranking managers revealed startling misunderstandings of elementary concepts. For instance, NASA managers claimed that there was a 1 in 100,000 chance of a catastrophic failure aboard the Shuttle, but Feynman discovered that NASA's own engineers estimated the chance of a catastrophe at closer to 1 in 200. He concluded that NASA management's estimate of the reliability of the Space Shuttle was unrealistic, and he was particularly angered that NASA used it to recruit Christa McAuliffe into the Teacher-in-Space program. He warned in his appendix to the commission's report (which was included only after he threatened not to sign the report), "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled." The first public recognition of Feynman's work came in 1954, when Lewis Strauss, the chairman of the Atomic Energy Commission (AEC), notified him that he had won the Albert Einstein Award, which was worth $15,000 and came with a gold medal. Because of Strauss's actions in stripping Oppenheimer of his security clearance, Feynman was reluctant to accept the award, but Isidor Isaac Rabi cautioned him: "You should never turn a man's generosity as a sword against him. Any virtue that a man has, even if he has many vices, should not be used as a tool against him." It was followed by the AEC's Ernest Orlando Lawrence Award in 1962.
Schwinger, Tomonaga and Feynman shared the 1965 Nobel Prize in Physics "for their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles". He was elected a Foreign Member of the Royal Society in 1965, and received the Oersted Medal in 1972 and the National Medal of Science in 1979. He was elected a Member of the National Academy of Sciences, but ultimately resigned and is no longer listed by the Academy. In 1978, Feynman sought medical treatment for abdominal pains and was diagnosed with liposarcoma, a rare form of cancer. Surgeons removed a tumor the size of a football that had crushed one kidney and his spleen. Further operations were performed in October 1986 and October 1987. He was again hospitalized at the UCLA Medical Center on February 3, 1988. A ruptured duodenal ulcer caused kidney failure, and he declined to undergo the dialysis that might have prolonged his life for a few months. Watched over by his wife Gweneth, sister Joan, and cousin Frances Lewine, he died on February 15, 1988, at age 69. When Feynman was nearing death, he asked his friend and colleague Danny Hillis why Hillis appeared so sad. Hillis replied that he thought Feynman was going to die soon. Feynman said that this sometimes bothered him too, adding that when you have lived as long as he had and told so many stories to so many people, even in death you would not be completely gone. Near the end of his life, Feynman attempted to visit the Tuvan Autonomous Soviet Socialist Republic (ASSR) in Russia, a dream thwarted by Cold War bureaucratic issues. The letter from the Soviet government authorizing the trip was not received until the day after he died. His daughter Michelle later made the journey. His burial was at Mountain View Cemetery and Mausoleum in Altadena, California. His last words were: "I'd hate to die twice. It's so boring." Aspects of Feynman's life have been portrayed in various media. Feynman was portrayed by Matthew Broderick in the 1996 biopic "Infinity". Actor Alan Alda commissioned playwright Peter Parnell to write a two-character play about a fictional day in the life of Feynman set two years before Feynman's death. The play, "QED", premiered at the Mark Taper Forum in Los Angeles in 2001 and was later presented at the Vivian Beaumont Theater on Broadway, with both presentations starring Alda as Richard Feynman. Real Time Opera premiered its opera "Feynman" at the Norfolk (CT) Chamber Music Festival in June 2005. In 2011, Feynman was the subject of a biographical graphic novel entitled simply "Feynman", written by Jim Ottaviani and illustrated by Leland Myrick. In 2013, Feynman's role on the Rogers Commission was dramatized by the BBC in "The Challenger" (US title: "The Challenger Disaster"), with William Hurt playing Feynman. The 2016 book "Idea Makers: Personal Perspectives on the Lives & Ideas of Some Notable People" states that one of the things Feynman often said was that "peace of mind is the most important prerequisite for creative work." Feynman felt one should do everything possible to achieve that peace of mind. Feynman is commemorated in various ways. On May 4, 2005, the United States Postal Service issued the "American Scientists" commemorative set of four 37-cent self-adhesive stamps in several configurations. The scientists depicted were Richard Feynman, John von Neumann, Barbara McClintock, and Josiah Willard Gibbs.
Feynman's stamp, sepia-toned, features a photograph of a 30-something Feynman and eight small Feynman diagrams. The stamps were designed by Victor Stabin under the artistic direction of Carl T. Herrman. The main building for the Computing Division at Fermilab is named the "Feynman Computing Center" in his honor. A photograph of Richard Feynman giving a lecture was part of the 1997 poster series commissioned by Apple Inc. for their "Think Different" advertising campaign. The Sheldon Cooper character in "The Big Bang Theory" is a Feynman fan who emulates him by playing the bongo drums. On January 27, 2016, Bill Gates wrote an article, "The Best Teacher I Never Had", describing Feynman's talents as a teacher, which had inspired Gates to create Project Tuva to place the videos of Feynman's Messenger Lectures, "The Character of Physical Law", on a website for public viewing. In 2015, Gates made a video on why he thought Feynman was special. The video was made for the 50th anniversary of Feynman's 1965 Nobel Prize, in response to Caltech's request for thoughts on Feynman. "The Feynman Lectures on Physics" is perhaps his most accessible work for anyone with an interest in physics, compiled from lectures to Caltech undergraduates in 1961–1964. As news of the lectures' lucidity grew, professional physicists and graduate students began to drop in to listen. Co-authors Robert B. Leighton and Matthew Sands, colleagues of Feynman, edited and illustrated them into book form. The work has endured and is useful to this day. They were edited and supplemented in 2005 with "Feynman's Tips on Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics" by Michael Gottlieb and Ralph Leighton (Robert Leighton's son), with support from Kip Thorne and other physicists.
https://en.wikipedia.org/wiki?curid=25523
Research Research is "creative and systematic work undertaken to increase the stock of knowledge, including knowledge of humans, culture and society, and the use of this stock of knowledge to devise new applications." It involves the collection, organization, and analysis of information to increase our understanding of a topic or issue. At a general level, research has three steps: 1. Pose a question. 2. Collect data to answer the question. 3. Present an answer to the question. This should be a familiar process. You engage in solving problems every day: you start with a question, collect some information, and then form an answer. Research is important for three reasons. First, research adds to our knowledge: educators undertake research to contribute to existing information about issues. Second, research improves practice: armed with research results, teachers and other educators become more effective professionals. Third, research informs policy debates: it provides information to policy makers when they research and debate educational topics. A research project may also be an expansion on past work in the field. Research projects can be used to develop further knowledge on a topic, or, in the case of a school research project, they can be used to further a student's research prowess to prepare them for future jobs or reports. To test the validity of instruments, procedures, or experiments, research may replicate elements of prior projects or the project as a whole. The primary purposes of basic research (as opposed to applied research) are documentation, discovery, interpretation, or the research and development (R&D) of methods and systems for the advancement of human knowledge. Approaches to research depend on epistemologies, which vary considerably both within and between humanities and sciences. There are several forms of research: scientific, humanities, artistic, economic, social, business, marketing, practitioner research, life, technological, etc. The scientific study of research practices is known as meta-research. The word "research" is derived from the Middle French "recherche", which means "to go about seeking", the term itself being derived from the Old French term "recerchier", a compound word from "re-" + "cerchier", or "sercher", meaning 'search'. The earliest recorded use of the term was in 1577. Research has been defined in a number of different ways, and while there are similarities, there does not appear to be a single, all-encompassing definition that is embraced by all who engage in it. One definition of research is used by the OECD: "Any creative systematic activity undertaken in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this knowledge to devise new applications." Another definition of research is given by John W. Creswell, who states that "research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue". It consists of three steps: pose a question, collect data to answer the question, and present an answer to the question.
The Merriam-Webster Online Dictionary defines research in more detail as "studious inquiry or examination; especially: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws". Original research, also called primary research, is research that is not exclusively based on a summary, review, or synthesis of earlier publications on the subject of research. This material is of a primary-source character. The purpose of original research is to produce new knowledge, rather than to present existing knowledge in a new form (e.g., summarized or classified). Original research can take a number of forms, depending on the discipline it pertains to. In experimental work, it typically involves direct or indirect observation of the researched subject(s), e.g., in the laboratory or in the field; it documents the methodology, results, and conclusions of an experiment or set of experiments, or offers a novel interpretation of previous results. In analytical work, there are typically some new (for example) mathematical results produced, or a new way of approaching an existing problem. In some subjects which do not typically carry out experimentation or analysis of this kind, the originality is in the particular way existing understanding is changed or re-interpreted based on the outcome of the work of the researcher. The degree of originality of the research is among the major criteria for articles to be published in academic journals, and is usually established by means of peer review. Graduate students are commonly required to perform original research as part of a dissertation. Scientific research is a systematic way of gathering data and harnessing curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world. It makes practical applications possible. Scientific research is funded by public authorities, by charitable organizations and by private groups, including many companies. Scientific research can be subdivided into different classifications according to their academic and application disciplines. Scientific research is a widely used criterion for judging the standing of an academic institution, but some argue that this is an inaccurate assessment of the institution, because the quality of research does not tell about the quality of teaching (these do not necessarily correlate). Research in the humanities involves different methods, such as hermeneutics and semiotics. Humanities scholars usually do not search for the ultimate correct answer to a question, but instead explore the issues and details that surround it. Context is always important, and context can be social, historical, political, cultural, or ethnic. An example of research in the humanities is historical research, which is embodied in historical method. Historians use primary sources and other evidence to systematically investigate a topic, and then to write histories in the form of accounts of the past. Other studies aim merely to examine the occurrence of behaviors in societies and communities, without particularly looking for reasons or motivations to explain these. These studies may be qualitative or quantitative, and can use a variety of approaches, such as queer theory or feminist theory.
Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. It is a debated body of thought which offers an alternative to purely scientific methods in the search for knowledge and truth. Generally, research is understood to follow a certain structural process. Though step order may vary depending on the subject matter and researcher, the following steps are usually part of most formal research, both basic and applied. A common misconception is that a hypothesis will be proven (see, rather, null hypothesis). Generally, a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. If the outcome is inconsistent with the hypothesis, then the hypothesis is rejected (see falsifiability). However, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. This careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. In this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true. A useful hypothesis allows prediction and, within the accuracy of observation of the time, the prediction will be verified. As the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction. In this case, a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it. Researchers can also use a null hypothesis, which states that there is no relationship or difference between the independent and dependent variables. The historical method comprises the techniques and guidelines by which historians use historical sources and other evidence to research and then to write history. Various guidelines are commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis; these include lower criticism and sensual criticism. Though items may vary depending on the subject matter and researcher, the following concepts are part of most formal historical research. The controversial trend of artistic teaching becoming more academics-oriented is leading to artistic research being accepted as the primary mode of enquiry in art, as in other disciplines. One of the characteristics of artistic research is that it must accept subjectivity, as opposed to the classical scientific methods. As such, it is similar to the social sciences in using qualitative research and intersubjectivity as tools to apply measurement and critical analysis. Artistic research has been defined by the University of Dance and Circus (Dans och Cirkushögskolan, DOCH), Stockholm in the following manner – "Artistic research is to investigate and test with the purpose of gaining knowledge within and for our artistic disciplines. It is based on artistic practices, methods, and criticality. Through presented documentation, the insights gained shall be placed in a context." Artistic research aims to enhance knowledge and understanding with presentation of the arts. A simpler understanding by Julian Klein defines artistic research as any kind of research employing the artistic mode of perception. For a survey of the central problematics of today's artistic research, see Giaco Schiesser.
According to artist Hakan Topal, in artistic research, "perhaps more so than other disciplines, intuition is utilized as a method to identify a wide range of new and unexpected productive modalities". Most writers, whether of fiction or non-fiction books, also have to do research to support their creative work. This may be factual, historical, or background research. Background research could include, for example, geographical or procedural research. The Society for Artistic Research (SAR) publishes the triannual "Journal for Artistic Research" ("JAR"), an international, online, open access, and peer-reviewed journal for the identification, publication, and dissemination of artistic research and its methodologies, from all arts disciplines, and it runs the "Research Catalogue" (RC), a searchable, documentary database of artistic research, to which anyone can contribute. Patricia Leavy addresses eight arts-based research (ABR) genres: narrative inquiry, fiction-based research, poetry, music, dance, theatre, film, and visual art. In 2016, ELIA (European League of the Institutes of the Arts) launched "The Florence Principles" on the Doctorate in the Arts. The Florence Principles, relating to the Salzburg Principles and the Salzburg Recommendations of the EUA (European University Association), name seven points of attention to specify the Doctorate/PhD in the Arts compared to a scientific doctorate/PhD. The Florence Principles have been endorsed and are supported by AEC, CILECT, CUMULUS and SAR. Research is often conducted using the hourglass model structure. The hourglass model starts with a broad spectrum for research, focusing in on the required information through the method of the project (like the neck of the hourglass), then expands the research in the form of discussion and results. The major steps in conducting research generally represent the overall process; however, they should be viewed as an ever-changing iterative process rather than a fixed set of steps. Most research begins with a general statement of the problem, or rather, the purpose for engaging in the study. The literature review identifies flaws or holes in previous research, which provides justification for the study. Often, a literature review is conducted in a given subject area before a research question is identified. A gap in the current literature, as identified by a researcher, then engenders a research question. The research question may be parallel to the hypothesis. The hypothesis is the supposition to be tested. The researcher(s) collects data to test the hypothesis. The researcher(s) then analyzes and interprets the data via a variety of statistical methods, engaging in what is known as empirical research. The results of the data analysis in rejecting or failing to reject the null hypothesis are then reported and evaluated. At the end, the researcher may discuss avenues for further research. However, some researchers advocate for the reverse approach: starting with articulating findings and discussion of them, moving "up" to identification of a research problem that emerges in the findings and literature review. The reverse approach is justified by the transactional nature of the research endeavor, where research inquiry, research questions, research method, relevant research literature, and so on are not fully known until the findings have fully emerged and been interpreted. Rudolph Rummel says, "... no researcher should accept any one or two tests as definitive.
It is only when a range of tests are consistent over many kinds of data, researchers, and methods can one have confidence in the results." Plato in "Meno" talks about an inherent difficulty, if not a paradox, of doing research, which can be paraphrased in the following way: "If you know what you're searching for, why do you search for it?! [i.e., you have already found it] If you don't know what you're searching for, what are you searching for?!" The goal of the research process is to produce new knowledge or deepen understanding of a topic or issue. This process takes three main forms (although, as previously discussed, the boundaries between them may be obscure): exploratory research, which helps to identify and define a problem or question; constructive research, which tests theories and proposes solutions to a problem or question; and empirical research, which tests the feasibility of a solution using empirical evidence. There are two major types of empirical research design: qualitative research and quantitative research. Researchers choose qualitative or quantitative methods according to the nature of the research topic they want to investigate and the research questions they aim to answer. Social media posts, for example, can be used for qualitative research. The quantitative data collection methods rely on random sampling and structured data collection instruments that fit diverse experiences into predetermined response categories. These methods produce results that are easy to summarize, compare, and generalize. Quantitative research is concerned with testing hypotheses derived from theory or being able to estimate the size of a phenomenon of interest. If the research question is about people, participants may be randomly assigned to different treatments (this is the only way that a quantitative study can be considered a true experiment). If this is not feasible, the researcher may collect data on participant and situational characteristics to statistically control for their influence on the dependent, or outcome, variable. If the intent is to generalize from the research participants to a larger population, the researcher will employ probability sampling to select participants. In either qualitative or quantitative research, the researcher(s) may collect primary or secondary data. Primary data is data collected specifically for the research, such as through interviews or questionnaires. Secondary data is data that already exists, such as census data, which can be re-used for the research. It is good ethical research practice to use secondary data wherever possible. Mixed-method research, i.e. research that includes qualitative and quantitative elements, using both primary and secondary data, is becoming more common. This method has benefits that using one method alone cannot offer. For example, a researcher may choose to conduct a qualitative study and follow it up with a quantitative study to gain additional insights. Big data has had a major impact on research methods; many researchers now put less effort into data collection, and methods to analyze the huge amounts of easily available data have also been developed. Non-empirical (theoretical) research is an approach that involves the development of theory as opposed to using observation and experimentation. As such, non-empirical research seeks solutions to problems using existing knowledge as its source. This, however, does not mean that new ideas and innovations cannot be found within the pool of existing and established knowledge. Non-empirical research is not an absolute alternative to empirical research, because they may be used together to strengthen a research approach. Neither one is less effective than the other, since each has its particular purpose in science.
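The hypothesis-testing logic described above (a null hypothesis can be rejected by the data, or can fail to be rejected, but can never be proven) can be made concrete with a small sketch. The following Python example is purely illustrative; the group names, sample sizes, and effect size are invented for the sketch, and a two-sample t-test is only one of many possible statistical tests:

import numpy as np
from scipy import stats

# Null hypothesis H0: the control and treatment groups have the same mean.
rng = np.random.default_rng(seed=42)
control = rng.normal(loc=50.0, scale=5.0, size=100)    # hypothetical measurements
treatment = rng.normal(loc=52.0, scale=5.0, size=100)  # hypothetical measurements

# Two-sample t-test: how surprising would this difference in means be if H0 were true?
t_stat, p_value = stats.ttest_ind(control, treatment)

alpha = 0.05  # conventional significance level
if p_value < alpha:
    print(f"p = {p_value:.4f}: the data are inconsistent with H0; reject it")
else:
    # Failing to reject H0 is not the same as proving H0 true.
    print(f"p = {p_value:.4f}: fail to reject H0")

As the text notes, a large p-value here would mean only that the data are consistent with the null hypothesis, not that the null hypothesis has been proven.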
Typically, empirical research produces observations that need to be explained; then theoretical research tries to explain them, and in so doing generates empirically testable hypotheses; these hypotheses are then tested empirically, giving more observations that may need further explanation; and so on. See scientific method. A simple example of a non-empirical task is the prototyping of a new drug using a differentiated application of existing knowledge; another is the development of a business process in the form of a flow chart and texts where all the ingredients are from established knowledge. Much of cosmological research is theoretical in nature. Mathematics research does not rely on externally available data; rather, it seeks to prove theorems about mathematical objects. Research ethics is concerned with the moral issues that arise during or as a result of research activities, as well as the ethical conduct of researchers. Historically, the revelation of scandals such as Nazi human experimentation and the Tuskegee syphilis experiment led to the realization that clear measures are needed for the ethical governance of research to ensure that people, animals and environments are not unduly harmed in research. When making ethical decisions, we may be guided by different things, and philosophers commonly distinguish between approaches like deontology, consequentialism, virtue ethics and value ethics. Regardless of approach, the application of ethical theory to specific controversial topics is known as applied ethics, and research ethics can be viewed as a form of applied ethics because ethical theory is applied in real-world research scenarios. Ethical issues may arise in the design and implementation of research involving human experimentation or animal experimentation. There may also be consequences for the environment, for society or for future generations that need to be considered. Research ethics is most developed as a concept in medical research, the most notable code being the 1964 Declaration of Helsinki. Research in other fields such as social sciences, information technology, biotechnology, or engineering may generate different types of ethical concerns to those in medical research. Nowadays, research ethics is commonly distinguished from matters of research integrity, which include issues such as scientific misconduct (e.g. fraud, fabrication of data or plagiarism). Meta-research is the study of research through the use of research methods. Also known as "research on research", it aims to reduce waste and increase the quality of research in all fields. Meta-research concerns itself with the detection of bias, methodological flaws, and other errors and inefficiencies. Among the findings of meta-research is a low rate of reproducibility across a large number of fields. This widespread difficulty in reproducing research has been termed the "replication crisis". In many disciplines, Western methods of conducting research are predominant. Researchers are overwhelmingly taught Western methods of data collection and study. The increasing participation of indigenous peoples as researchers has brought increased attention to the lacuna in culturally sensitive methods of data collection. Western methods of data collection may not be the most accurate or relevant for research on non-Western societies.
For example, "Hua Oranga" was created as a criterion for psychological evaluation in Māori populations, and is based on dimensions of mental health important to the Māori people – "taha wairua (the spiritual dimension), taha hinengaro (the mental dimension), taha tinana (the physical dimension), and taha whanau (the family dimension)". Periphery scholars face the challenges of exclusion and linguicism in research and academic publication. As the great majority of mainstream academic journals are written in English, multilingual periphery scholars often must translate their work to be accepted to elite Western-dominated journals. Multilingual scholars' influences from their native communicative styles can be assumed to be incompetence instead of difference. Peer review is a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Usually, the peer review process involves experts in the same field who are consulted by editors to give a review of the scholarly works produced by a colleague of theirs from an unbiased and impartial point of view, and this is usually done free of charge. The tradition of peer reviews being done for free has however brought many pitfalls which are also indicative of why most peer reviewers decline many invitations to review. It was observed that publications from periphery countries rarely rise to the same elite status as those of North America and Europe, because limitations on the availability of resources including high-quality paper and sophisticated image-rendering software and printing tools render these publications less able to satisfy standards currently carrying formal or informal authority in the publishing industry. These limitations in turn result in the under-representation of scholars from periphery nations among the set of publications holding prestige status relative to the quantity and quality of those scholars' research efforts, and this under-representation in turn results in disproportionately reduced acceptance of the results of their efforts as contributions to the body of knowledge available worldwide. The open access movement assumes that all information generally deemed useful should be free and belongs to a "public domain", that of "humanity". This idea gained prevalence as a result of Western colonial history and ignores alternative conceptions of knowledge circulation. For instance, most indigenous communities consider that access to certain information proper to the group should be determined by relationships. There is alleged to be a double standard in the Western knowledge system. On the one hand, "digital right management" used to restrict access to personal information on social networking platforms is celebrated as a protection of privacy, while simultaneously when similar functions are used by cultural groups (i.e. indigenous communities) this is denounced as "access control" and reprehended as censorship. Even though Western dominance seems to be prominent in research, some scholars, such as Simon Marginson, argue for "the need [for] a plural university world". Marginson argues that the East Asian Confucian model could take over the Western model. This could be due to changes in funding for research both in the East and the West. 
Focused on educational achievement, East Asian cultures, mainly in China and South Korea, have encouraged increased funding for research expansion. In contrast, in the Western academic world, notably in the United Kingdom as well as in some state governments in the United States, funding cuts for university research have occurred, which some say may lead to the future decline of Western dominance in research. In several national and private academic systems, the professionalization of research has resulted in formal job titles. In present-day Russia, the former Soviet Union and some post-Soviet states, the term "researcher" ("научный сотрудник", "nauchny sotrudnik") is both a generic term for a person who carries out scientific research and a job position within the frameworks of the USSR Academy of Sciences, Soviet universities, and other research-oriented establishments, with a series of formal ranks. Academic publishing is a system that is necessary for academic scholars to peer review work and make it available to a wider audience. The system varies widely by field and is also always changing, if often slowly. Most academic work is published in journal article or book form. There is also a large body of research that exists in either a thesis or dissertation form. These forms of research can be found in databases explicitly for theses and dissertations. In publishing, STM publishing is an abbreviation for academic publications in science, technology, and medicine. Most established academic fields have their own scientific journals and other outlets for publication, though many academic journals are somewhat interdisciplinary, and publish work from several distinct fields or subfields. The kinds of publications that are accepted as contributions of knowledge or research vary greatly between fields, from the print to the electronic format. A study suggests that researchers should not give great consideration to findings that are not replicated frequently. It has also been suggested that all published studies should be subjected to some measure for assessing the validity or reliability of their procedures, to prevent the publication of unproven findings. Business models are different in the electronic environment. Since about the early 1990s, licensing of electronic resources, particularly journals, has been very common. Presently, a major trend, particularly with respect to scholarly journals, is open access. There are two main forms of open access: open access publishing, in which the articles or the whole journal is freely available from the time of publication, and self-archiving, where the author makes a copy of their own work freely available on the web. Most funding for scientific research comes from three major sources: corporate research and development departments; private foundations, for example, the Bill and Melinda Gates Foundation; and government research councils such as the National Institutes of Health in the USA and the Medical Research Council in the UK. These are managed primarily through universities and in some cases through military contractors. Many senior researchers (such as group leaders) spend a significant amount of their time applying for grants for research funds. These grants are necessary not only for researchers to carry out their research but also as a source of merit. The Social Psychology Network provides a comprehensive list of U.S. Government and private foundation funding sources.
https://en.wikipedia.org/wiki?curid=25524
René Descartes René Descartes (Latinized: Renatus Cartesius; adjectival form: "Cartesian"; 31 March 1596 – 11 February 1650) was a French philosopher, mathematician, and scientist. A native of the Kingdom of France, he spent about 20 years (1629–1649) of his life in the Dutch Republic after serving for a while in the Dutch States Army of Maurice of Nassau, Prince of Orange and the Stadtholder of the United Provinces. One of the most notable intellectual figures of the Dutch Golden Age, Descartes is also widely regarded as one of the founders of modern philosophy. Many elements of Descartes's philosophy have precedents in late Aristotelianism, the revived Stoicism of the 16th century, or in earlier philosophers like Augustine. In his natural philosophy, he differed from the schools on two major points: first, he rejected the splitting of corporeal substance into matter and form; second, he rejected any appeal to final ends, divine or natural, in explaining natural phenomena. In his theology, he insists on the absolute freedom of God's act of creation. Refusing to accept the authority of previous philosophers, Descartes frequently set his views apart from the philosophers who preceded him. In the opening section of the "Passions of the Soul", an early modern treatise on emotions, Descartes goes so far as to assert that he will write on this topic "as if no one had written on these matters before." His best known philosophical statement is "Cogito, ergo sum" ("I think, therefore I am"), found in "Discourse on the Method" (1637; in French and Latin) and "Principles of Philosophy" (1644, in Latin). Descartes has often been called the father of modern philosophy, and is largely seen as responsible for the increased attention given to epistemology in the 17th century. He laid the foundation for 17th-century continental rationalism, later advocated by Spinoza and Leibniz, and opposed by the empiricist school of thought consisting of Hobbes, Locke, Berkeley, and Hume. Leibniz, Spinoza, and Descartes were all well-versed in mathematics as well as philosophy, and Descartes and Leibniz contributed greatly to science as well. Descartes's "Meditations on First Philosophy" (1641) continues to be a standard text at most university philosophy departments. Descartes's influence in mathematics is equally apparent; the Cartesian coordinate system was named after him. He is credited as the father of analytical geometry, the bridge between algebra and geometry—used in the discovery of infinitesimal calculus and analysis. Descartes was also one of the key figures in the Scientific Revolution. René Descartes was born in La Haye en Touraine (now Descartes, Indre-et-Loire), France, on 31 March 1596. His mother, Jeanne Brochard, died soon after giving birth to him, and so he was not expected to survive. Descartes's father, Joachim, was a member of the Parlement of Brittany at Rennes. René lived with his grandmother and with his great-uncle. Although the Descartes family was Roman Catholic, the Poitou region was controlled by the Protestant Huguenots. In 1607, late because of his fragile health, he entered the Jesuit Collège Royal Henry-Le-Grand at La Flèche, where he was introduced to mathematics and physics, including Galileo's work. After graduation in 1614, he studied for two years (1615–16) at the University of Poitiers, earning a "Baccalauréat" and "Licence" in canon and civil law in 1616, in accordance with his father's wishes that he should become a lawyer. From there he moved to Paris.
In "Discourse on the Method", Descartes recalls: I entirely abandoned the study of letters. Resolving to seek no knowledge other than that of which could be found in myself or else in the great book of the world, I spent the rest of my youth traveling, visiting courts and armies, mixing with people of diverse temperaments and ranks, gathering various experiences, testing myself in the situations which fortune offered me, and at all times reflecting upon whatever came my way to derive some profit from it. In accordance with his ambition to become a professional military officer, in 1618 Descartes joined, as a mercenary, the Protestant Dutch States Army in Breda under the command of Maurice of Nassau, and undertook a formal study of military engineering, as established by Simon Stevin. Descartes, therefore, received much encouragement in Breda to advance his knowledge of mathematics. In this way, he became acquainted with Isaac Beeckman, the principal of a Dordrecht school, for whom he wrote the "Compendium of Music" (written 1618, published 1650). Together they worked on free fall, catenary, conic section, and fluid statics. Both believed that it was necessary to create a method that thoroughly linked mathematics and physics. While in the service of the Catholic Duke Maximilian of Bavaria since 1619, Descartes was present at the Battle of the White Mountain near Prague, in November 1620. According to Adrien Baillet, on the night of 10–11 November 1619 (St. Martin's Day), while stationed in Neuburg an der Donau, Descartes shut himself in a room with an "oven" (probably a cocklestove) to escape the cold. While within, he had three dreams and believed that a divine spirit revealed to him a new philosophy. However, it is likely that what Descartes considered to be his second dream was actually an episode of exploding head syndrome. Upon exiting, he had formulated analytical geometry and the idea of applying the mathematical method to philosophy. He concluded from these visions that the pursuit of science would prove to be, for him, the pursuit of true wisdom and a central part of his life's work. Descartes also saw very clearly that all truths were linked with one another, so that finding a fundamental truth and proceeding with logic would open the way to all science. Descartes discovered this basic truth quite soon: his famous "I think, therefore I am." In 1620 Descartes left the army. He visited Basilica della Santa Casa in Loreto, then visited various countries before returning to France, and during the next few years spent time in Paris. It was there that he composed his first essay on method: "Regulae ad Directionem Ingenii" (Rules for the Direction of the Mind). He arrived in La Haye in 1623, selling all of his property to invest in bonds, which provided a comfortable income for the rest of his life. Descartes was present at the siege of La Rochelle by Cardinal Richelieu in 1627. In the fall of the same year, in the residence of the papal nuncio Guidi di Bagno, where he came with Mersenne and many other scholars to listen to a lecture given by the alchemist Nicolas de Villiers, Sieur de Chandoux on the principles of a supposed new philosophy, Cardinal Bérulle urged him to write an exposition of his new philosophy in some location beyond the reach of the Inquisition. Descartes returned to the Dutch Republic in 1628. In April 1629 he joined the University of Franeker, studying under Adriaan Metius, either living with a Catholic family or renting the Sjaerdemaslot. 
The next year, under the name "Poitevin", he enrolled at the Leiden University to study mathematics with Jacobus Golius, who confronted him with Pappus's hexagon theorem, and astronomy with Martin Hortensius. In October 1630 he had a falling-out with Beeckman, whom he accused of plagiarizing some of his ideas. In Amsterdam, he had a relationship with a servant girl, Helena Jans van der Strom, with whom he had a daughter, Francine, who was born in 1635 in Deventer. She died of scarlet fever at the age of 5. Unlike many moralists of the time, Descartes did not deprecate the passions but rather defended them; he wept upon Francine's death in 1640. According to a recent biography by Jason Porterfield, "Descartes said that he did not believe that one must refrain from tears to prove oneself a man." Russell Shorto speculates that the experience of fatherhood and losing a child formed a turning point in Descartes's work, changing its focus from medicine to a quest for universal answers. Despite frequent moves, he wrote all his major work during his 20-plus years in the Netherlands, initiating a revolution in mathematics and philosophy. In 1633, Galileo was condemned by the Italian Inquisition, and Descartes abandoned plans to publish "Treatise on the World", his work of the previous four years. Nevertheless, in 1637 he published parts of this work in three essays: "Les Météores" (The Meteors), "La Dioptrique" (Dioptrics) and "La Géométrie" (Geometry), preceded by an introduction, his famous "Discours de la méthode" ("'Discourse on the Method"'). In it, Descartes lays out four rules of thought, meant to ensure that our knowledge rests upon a firm foundation: In "La Géométrie", Descartes exploited the discoveries he made with Pierre de Fermat, having been able to do so because his paper, Introduction to Loci, was published posthumously in 1679. This later became known as Cartesian Geometry. Descartes continued to publish works concerning both mathematics and philosophy for the rest of his life. In 1641 he published a metaphysics treatise, "Meditationes de Prima Philosophia" (Meditations on First Philosophy), written in Latin and thus addressed to the learned. It was followed in 1644 by "Principia Philosophiæ" (Principles of Philosophy), a kind of synthesis of the "Discourse on the Method" and "Meditations on First Philosophy". In 1643, Cartesian philosophy was condemned at the University of Utrecht, and Descartes was obliged to flee to the Hague, settling in Egmond-Binnen. Christia Mercer posits that the most influential ideas in "Meditations on First Philosophy" were lifted from Spanish author and Roman Catholic nun Teresa of Ávila, who, fifty years earlier, published "The Interior Castle", concerning the role of philosophical reflection in intellectual growth. Descartes began (through Alfonso Polloti, an Italian general in Dutch service) a six-year correspondence with Princess Elisabeth of Bohemia, devoted mainly to moral and psychological subjects. Connected with this correspondence, in 1649 he published "Les Passions de l'âme" (Passions of the Soul), which he dedicated to the Princess. In 1647, he was awarded a pension by King Louis XIV of France, though it was never paid. A French translation of "Principia Philosophiæ", prepared by Abbot Claude Picot, was published in 1647. This edition Descartes also dedicated to Princess Elisabeth. In the preface to the French edition, Descartes praised true philosophy as a means to attain wisdom. 
He identifies four ordinary sources to reach wisdom and finally says that there is a fifth, better and more secure, consisting in the search for first causes.
By 1649, Descartes had become one of Europe's most famous philosophers and scientists. That year, Queen Christina of Sweden invited Descartes to her court to organize a new scientific academy and tutor her in his ideas about love. Her interest stimulated Descartes to publish the "Passions of the Soul", a work based on his correspondence with Princess Elisabeth. Descartes accepted, and moved to Sweden in the middle of winter. He was a guest at the house of Pierre Chanut, living on Västerlånggatan, less than 500 meters from Tre Kronor in Stockholm. There, Chanut and Descartes made observations with a Torricellian mercury barometer. Challenging Blaise Pascal, Descartes took the first set of barometric readings in Stockholm to see if atmospheric pressure could be used in forecasting the weather.
Descartes arranged to give lessons to Queen Christina after her birthday, three times a week at 5 am, in her cold and draughty castle. It soon became clear they did not like each other; she did not care for his mechanical philosophy, nor did he share her interest in Ancient Greek. By 15 January 1650, Descartes had seen Christina only four or five times. On 1 February he contracted pneumonia and died on 11 February. The cause of death was pneumonia according to Chanut, but peripneumonia according to Christina's physician Johann van Wullen, who was not allowed to bleed him. (The winter seems to have been mild, except for the second half of January, which was harsh, as described by Descartes himself; however, "this remark was probably intended to be as much Descartes' take on the intellectual climate as it was about the weather.") As a Catholic in a Protestant nation, he was interred in a graveyard used mainly for orphans in Adolf Fredriks kyrka in Stockholm. His manuscripts came into the possession of Claude Clerselier, Chanut's brother-in-law, and "a devout Catholic who has begun the process of turning Descartes into a saint by cutting, adding and publishing his letters selectively." In 1663, the Pope placed his works on the Index of Prohibited Books. In 1666 his remains were taken to France and buried in the church of Saint-Étienne-du-Mont. In 1671 Louis XIV prohibited all lectures on Cartesianism. Although the National Convention in 1792 had planned to transfer his remains to the Panthéon, he was reburied in the Abbey of Saint-Germain-des-Prés in 1819, missing a finger and the skull. His skull is on display in the Musée de l'Homme in Paris.
Initially, Descartes arrives at only a single first principle: I think. Thought cannot be separated from me, therefore, I exist ("Discourse on the Method" and "Principles of Philosophy"). Most notably, this is known as "cogito ergo sum" (English: "I think, therefore I am"). Descartes concluded that if he doubted, then something or someone must be doing the doubting; the very fact that he doubted proved his existence. "The simple meaning of the phrase is that if one is skeptical of existence, that is in and of itself proof that he does exist." These two first principles—I think and I exist—were later confirmed by Descartes's clear and distinct perception (delineated in his "Meditations"): that I clearly and distinctly perceive these two principles, Descartes reasoned, ensures their indubitability. Descartes concludes that he can be certain that he exists because he thinks. But in what form?
He perceives his body through the use of the senses; however, these have previously proved unreliable. So Descartes determines that the only indubitable knowledge is that he is a "thinking thing". Thinking is what he does, and his power must come from his essence. Descartes defines "thought" ("cogitatio") as "what happens in me such that I am immediately conscious of it, insofar as I am conscious of it". Thinking is thus every activity of a person of which the person is immediately conscious. He gave reasons for thinking that waking thoughts are distinguishable from dreams, and that one's mind cannot have been "hijacked" by an evil demon placing an illusory external world before one's senses. In this manner, Descartes proceeds to construct a system of knowledge, discarding perception as unreliable and, instead, admitting only deduction as a method.
Descartes, influenced by the automatons on display throughout the city of Paris, began to investigate the connection between the mind and body, and how the two interact. His main influences for dualism were theology and physics. The theory on the dualism of mind and body is Descartes's signature doctrine and permeates other theories he advanced. Known as Cartesian dualism (or mind–body dualism), his theory on the separation between the mind and the body went on to influence subsequent Western philosophies. In "Meditations on First Philosophy", Descartes attempted to demonstrate the existence of God and the distinction between the human soul and the body. Humans are a union of mind and body; thus Descartes's dualism embraced the idea that mind and body are distinct but closely joined. While many contemporary readers of Descartes found the distinction between mind and body difficult to grasp, he thought it was entirely straightforward. Descartes employed the concept of "modes", which are the ways in which substances exist. In "Principles of Philosophy", Descartes explained, "we can clearly perceive a substance apart from the mode which we say differs from it, whereas we cannot, conversely, understand the mode apart from the substance". To perceive a mode apart from its substance requires an intellectual abstraction, which Descartes explained as follows: The intellectual abstraction consists in my turning my thought away from one part of the contents of this richer idea the better to apply it to the other part with greater attention. Thus, when I consider a shape without thinking of the substance or the extension whose shape it is, I make a mental abstraction.
According to Descartes, two substances are really distinct when each of them can exist apart from the other. Thus Descartes reasoned that God is distinct from humans, and the body and mind of a human are also distinct from one another. He argued that the great differences between body (an extended thing) and mind (an un-extended, immaterial thing) make the two ontologically distinct. But he held that the mind was utterly indivisible: because "when I consider the mind, or myself in so far as I am merely a thinking thing, I am unable to distinguish any part within myself; I understand myself to be something quite single and complete." In "Meditations" Descartes invokes his causal adequacy principle to support his trademark argument for the existence of God, quoting Lucretius in defence: "Ex nihilo nihil fit" ("Nothing comes from nothing").
Granted, neither Descartes nor Lucretius originated the philosophical claim, appearing as it does in the classical metaphysics of Plato and Aristotle. Moreover, in "Meditations" Descartes discusses a piece of wax and exposes the single most characteristic doctrine of Cartesian dualism: that the universe contained two radically different kinds of substances—the mind or soul defined as thinking, and the body defined as matter and unthinking. The Aristotelian philosophy of Descartes's day held that the universe was inherently purposeful or teleological. Everything that happened, be it the motion of the stars or the growth of a tree, was supposedly explainable by a certain purpose, goal or end that worked its way out within nature. Aristotle called this the "final cause," and these final causes were indispensable for explaining the ways nature operated. Descartes's theory of dualism supports the distinction between traditional Aristotelian science and the new science of Kepler and Galileo, which denied the role of a divine power and "final causes" in its attempts to explain nature. Descartes's dualism provided the philosophical rationale for the latter by expelling the final cause from the physical universe (or "res extensa") in favor of the mind (or "res cogitans"). Therefore, while Cartesian dualism paved the way for modern physics, it also held the door open for religious beliefs about the immortality of the soul.
Descartes's dualism of mind and matter implied a new concept of the human being: a human was, according to Descartes, a composite entity of mind and body. Descartes gave priority to the mind and argued that the mind could exist without the body, but the body could not exist without the mind. In "Meditations" Descartes even argues that while the mind is a substance, the body is composed only of "accidents". But he did argue that mind and body are closely joined: Nature also teaches me, by the sensations of pain, hunger, thirst and so on, that I am not merely present in my body as a pilot in his ship, but that I am very closely joined and, as it were, intermingled with it, so that I and the body form a unit. If this were not so, I, who am nothing but a thinking thing, would not feel pain when the body was hurt, but would perceive the damage purely by the intellect, just as a sailor perceives by sight if anything in his ship is broken.
Descartes's discussion on embodiment raised one of the most perplexing problems of his dualism philosophy: What exactly is the relationship of union between the mind and the body of a person? Cartesian dualism therefore set the agenda for philosophical discussion of the mind–body problem for many years after Descartes's death. Descartes was also a rationalist and believed in the power of innate ideas. He defended the theory of innate knowledge, holding that all humans were born with knowledge through the higher power of God. It was this theory of innate knowledge that the philosopher John Locke (1632–1704) later combated from the standpoint of empiricism, which holds that all knowledge is acquired through experience.
In "The Passions of the Soul", written between 1645 and 1646, Descartes discussed the common contemporary belief that the human body contained animal spirits. These animal spirits were believed to be light and roaming fluids circulating rapidly around the nervous system between the brain and the muscles, and served as a metaphor for feelings, like being in high or low spirits.
These animal spirits were believed to affect the human soul, or passions of the soul. Descartes distinguished six basic passions: wonder, love, hatred, desire, joy and sadness. All of these passions, he argued, represented different combinations of the original spirit, and influenced the soul to will or want certain actions. He argued, for example, that fear is a passion that moves the soul to generate a response in the body. In line with his dualist teachings on the separation between the soul and the body, he hypothesized that some part of the brain served as a connector between the soul and the body and singled out the pineal gland as connector. Descartes argued that signals passed from the ear and the eye to the pineal gland, through animal spirits. Different motions in the gland thus set various animal spirits in motion. He argued that these motions in the pineal gland are based on God's will and that humans are supposed to want and like things that are useful to them. But he also argued that the animal spirits that moved around the body could distort the commands from the pineal gland, thus humans had to learn how to control their passions.
Descartes advanced a theory on automatic bodily reactions to external events which influenced 19th-century reflex theory. He argued that external motions such as touch and sound reach the endings of the nerves and affect the animal spirits. Heat from fire affects a spot on the skin and sets in motion a chain of reactions, with the animal spirits reaching the brain through the central nervous system, and in turn animal spirits are sent back to the muscles to move the hand away from the fire. Through this chain of reactions the automatic reactions of the body do not require a thought process. Above all, he was among the first scientists who believed that the soul should be subject to scientific investigation. He challenged the views of his contemporaries that the soul was divine, and religious authorities therefore regarded his books as dangerous. Descartes's writings went on to form the basis for theories on emotions and how cognitive evaluations were translated into affective processes. Descartes believed that the brain resembled a working machine and, unlike many of his contemporaries, believed that mathematics and mechanics could explain the most complicated processes of the mind. In the 20th century, Alan Turing advanced computer science through mathematical biology inspired by Descartes. His theories on reflexes also served as the foundation for advanced physiological theories more than 200 years after his death. The physiologist Ivan Pavlov was a great admirer of Descartes.
For Descartes, ethics was a science, the highest and most perfect of them. Like the rest of the sciences, ethics had its roots in metaphysics. In this way, he argues for the existence of God, investigates the place of man in nature, formulates the theory of mind–body dualism, and defends free will. However, as he was a convinced rationalist, Descartes clearly states that reason is sufficient in the search for the goods that we should seek, and virtue consists in the correct reasoning that should guide our actions. Nevertheless, the quality of this reasoning depends on knowledge, because a well-informed mind will be more capable of making good choices, and it also depends on mental condition. For this reason, he said that a complete moral philosophy should include the study of the body.
He discussed this subject in his correspondence with Princess Elisabeth of Bohemia, and as a result wrote his work "The Passions of the Soul", which contains a study of the psychosomatic processes and reactions in man, with an emphasis on emotions or passions. His works about human passion and emotion would be the basis for the philosophy of his followers (see Cartesianism), and would have a lasting impact on ideas concerning what literature and art should be, specifically how they should evoke emotion. Humans should seek the sovereign good that Descartes, following Zeno, identifies with virtue, as this produces a solid blessedness or pleasure. For Epicurus the sovereign good was pleasure, and Descartes says that, in fact, this is not in contradiction with Zeno's teaching, because virtue produces a spiritual pleasure that is better than bodily pleasure. Regarding Aristotle's opinion that happiness depends on the goods of fortune, Descartes does not deny that these goods contribute to happiness but remarks that they are in great proportion outside one's own control, whereas one's mind is under one's complete control. The moral writings of Descartes came in the last part of his life, but earlier, in his "Discourse on the Method", he adopted three maxims to be able to act while he put all his ideas into doubt. This is known as his provisional morality.
In the third and fifth "Meditation", Descartes offers an ontological proof of a benevolent God (through both the ontological argument and trademark argument). Because God is benevolent, Descartes can have some faith in the account of reality his senses provide him, for God has provided him with a working mind and sensory system and does not desire to deceive him. From this supposition, however, Descartes finally establishes the possibility of acquiring knowledge about the world based on deduction "and" perception. Regarding epistemology, therefore, Descartes can be said to have contributed such ideas as a rigorous conception of foundationalism and the possibility that reason is the only reliable method of attaining knowledge. Descartes, nevertheless, was very much aware that experimentation was necessary to verify and validate theories.
In his "Meditations on First Philosophy" Descartes sets forth two proofs for God's existence. One of these is founded upon the possibility of thinking the "idea of a being that is supremely perfect and infinite," and suggests that "of all the ideas that are in me, the idea that I have of God is the most true, the most clear and distinct." Descartes considered himself to be a devout Catholic, and one of the purposes of the "Meditations" was to defend the Catholic faith. His attempt to ground theological beliefs on reason encountered intense opposition in his time. Pascal regarded Descartes as a rationalist and mechanist, and accused him of deism: "I cannot forgive Descartes; in all his philosophy, Descartes did his best to dispense with God. But Descartes could not avoid prodding God to set the world in motion with a snap of his lordly fingers; after that, he had no more use for God," while a powerful contemporary, Martin Schoock, accused him of atheist beliefs, though Descartes had provided an explicit critique of atheism in his "Meditations". The Catholic Church prohibited his books in 1663.
Descartes also wrote a response to external world skepticism. Through this method of skepticism, he does not doubt for the sake of doubting but in order to achieve concrete and reliable information: in other words, certainty.
He argues that sensory perceptions come to him involuntarily, and are not willed by him. They are external to his senses, and according to Descartes, this is evidence of the existence of something outside of his mind, and thus, an external world. Descartes goes on to show that the things in the external world are material by arguing that God would not deceive him as to the ideas that are being transmitted, and that God has given him the "propensity" to believe that such ideas are caused by material things. Descartes also believes a substance is something that does not need any assistance to function or exist. Descartes further explains how only God can be a true "substance"; minds are substances only in the sense that they need nothing but God in order to function. The mind is a thinking substance, and the workings of a thinking substance stem from its ideas.
Descartes steered clear of theological questions, restricting his attention to showing that there is no incompatibility between his metaphysics and theological orthodoxy. He avoided trying to demonstrate theological dogmas metaphysically. When challenged that he had not established the immortality of the soul merely in showing that the soul and the body are distinct substances, for example, he replied that he 'does not take it upon himself to use the power of human reason to settle any of those matters which depend on the free will of God'.
Descartes is often regarded as the first thinker to emphasize the use of reason to develop the natural sciences. For him, philosophy was a thinking system that embodied all knowledge, as he related in a letter to a French translator. In his "Discourse on the Method", he attempts to arrive at a fundamental set of principles that one can know as true without any doubt. To achieve this, he employs a method called hyperbolical/metaphysical doubt, also sometimes referred to as methodological skepticism: he rejects any ideas that can be doubted and then re-establishes them in order to acquire a firm foundation for genuine knowledge. Descartes built his ideas from scratch. He relates this to architecture: the topsoil is taken away to create a new building or structure. Descartes calls his doubt the soil and new knowledge the buildings. To Descartes, Aristotle's foundationalism is incomplete and his method of doubt enhances foundationalism.
Descartes denied that animals had reason or intelligence. He argued that animals did not lack sensations or perceptions, but these could be explained mechanistically. Whereas humans had a soul, or mind, and were able to feel pain and anxiety, animals, by virtue of not having a soul, could not feel pain or anxiety. If animals showed signs of distress, then this was to protect the body from damage, but the innate state needed for them to suffer was absent. Although Descartes's views were not universally accepted, they became prominent in Europe and North America, allowing humans to treat animals with impunity. The view that animals were quite separate from humanity and merely machines allowed for the maltreatment of animals, and was sanctioned in law and societal norms until the middle of the 19th century. The publications of Charles Darwin would eventually erode the Cartesian view of animals. Darwin argued that the continuity between humans and other species made it likely that animals had similar capacities to suffer.
Descartes has often been dubbed the father of modern Western philosophy, the thinker whose approach has profoundly changed the course of Western philosophy and set the basis for modernity. The first two of his "Meditations on First Philosophy", those that formulate the famous methodic doubt, represent the portion of Descartes's writings that most influenced modern thinking. It has been argued that Descartes himself did not realize the extent of this revolutionary move. In shifting the debate from "what is true" to "of what can I be certain?", Descartes arguably shifted the authoritative guarantor of truth from God to humanity (even though Descartes himself claimed he received his visions from God)—while the traditional concept of "truth" implies an external authority, "certainty" instead relies on the judgment of the individual. In an anthropocentric revolution, the human being is now raised to the level of a subject, an agent, an emancipated being equipped with autonomous reason. This was a revolutionary step that established the basis of modernity, the repercussions of which are still being felt: the emancipation of humanity from Christian revelational truth and Church doctrine; humanity making its own law and taking its own stand. In modernity, the guarantor of truth is not God anymore but human beings, each of whom is a "self-conscious shaper and guarantor" of their own reality. In that way, each person is turned into a reasoning adult, a subject and agent, as opposed to a child obedient to God. This change in perspective was characteristic of the shift from the Christian medieval period to the modern period, a shift that had been anticipated in other fields, and which was now being formulated in the field of philosophy by Descartes. This anthropocentric perspective of Descartes's work, establishing human reason as autonomous, provided the basis for the Enlightenment's emancipation from God and the Church. According to Martin Heidegger, the perspective of Descartes's work also provided the basis for all subsequent anthropology. Descartes's philosophical revolution is sometimes said to have sparked modern anthropocentrism and subjectivism.
One of Descartes's most enduring legacies was his development of Cartesian or analytic geometry, which uses algebra to describe geometry. Descartes "invented the convention of representing unknowns in equations by "x", "y", and "z", and knowns by "a", "b", and "c"". He also "pioneered the standard notation" that uses superscripts to show the powers or exponents; for example, the 2 used in x² to indicate x squared. He was the first to assign a fundamental place for algebra in our system of knowledge, using it as a method to automate or mechanize reasoning, particularly about abstract, unknown quantities. European mathematicians had previously viewed geometry as a more fundamental form of mathematics, serving as the foundation of algebra. Algebraic rules were given geometric proofs by mathematicians such as Pacioli, Cardan, Tartaglia and Ferrari. Equations of degree higher than the third were regarded as unreal, because a three-dimensional form, such as a cube, occupied the largest dimension of reality. Descartes professed that the abstract quantity "a²" could represent length as well as an area. This was in opposition to the teachings of mathematicians such as Vieta, who argued that it could represent only area.
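To see concretely how analytic geometry turns a geometric question into algebra, consider the following minimal sketch in Python (the function name and the circle-and-line example are illustrative choices, not anything drawn from Descartes's own text). A circle of radius r centered at the origin is the equation x² + y² = r², a line is y = mx + c, and their intersection reduces to solving a quadratic:

import math

def circle_line_intersections(r, m, c):
    # Substituting y = m*x + c into x^2 + y^2 = r^2 yields the quadratic
    # (1 + m^2)*x^2 + 2*m*c*x + (c^2 - r^2) = 0.
    a = 1 + m * m
    b = 2 * m * c
    d = c * c - r * r
    disc = b * b - 4 * a * d  # discriminant: misses (<0), touches (=0), crosses (>0)
    if disc < 0:
        return []
    xs = {(-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a)}
    return sorted((x, m * x + c) for x in xs)

# The unit circle and the line y = x meet at (±1/√2, ±1/√2):
print(circle_line_intersections(1.0, 1.0, 0.0))

The sign of the discriminant decides whether the line misses, touches, or crosses the circle: precisely the kind of translation between figure and formula that the Cartesian program made routine.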
Although Descartes did not pursue the subject, he preceded Gottfried Wilhelm Leibniz in envisioning a more general science of algebra, or "universal mathematics", a precursor to symbolic logic that could encompass logical principles and methods symbolically and mechanize general reasoning. Descartes's work provided the basis for the calculus developed by Newton and Leibniz, who applied infinitesimal calculus to the tangent line problem, thus permitting the evolution of that branch of modern mathematics. His rule of signs is also a commonly used method to determine the number of positive and negative roots of a polynomial (a short computational sketch is given at the end of this section).
Descartes's interest in physics is credited to the amateur scientist and mathematician Isaac Beeckman, who was at the forefront of a new school of thought known as mechanical philosophy. With this foundation of reasoning, Descartes formulated many of his theories on mechanical and geometrical physics. Descartes discovered an early form of the law of conservation of mechanical momentum (a measure of the motion of an object), and envisioned it as pertaining to motion in a straight line, as opposed to perfect circular motion, as Galileo had envisioned it. He outlined his views on the universe in his "Principles of Philosophy". Descartes also made contributions to the field of optics. He showed by using geometric construction and the law of refraction (known in France as Descartes's law and elsewhere as Snell's law) that the angular radius of a rainbow is 42 degrees (i.e., the angle subtended at the eye by the edge of the rainbow and the ray passing from the sun through the rainbow's centre is 42°). He also independently discovered the law of refraction, and his essay on optics was the first published mention of this law.
Current popular opinion holds that Descartes had the most influence of anyone on the young Newton, and this is arguably one of his most important contributions. Descartes's influence came not directly from his original French edition of "La Géométrie", however, but rather from Frans van Schooten's expanded second Latin edition of the work. Newton continued Descartes's work on cubic equations, which freed the subject from the fetters of the Greek perspectives. The most important concept was his very modern treatment of single variables.
In commercial terms, "Discourse" appeared during Descartes's lifetime in a single edition of 500 copies, 200 of which were set aside for the author. Sharing a similar fate was the only French edition of "Meditations", which had not managed to sell out by the time of Descartes's death. A concomitant Latin edition of the latter was, however, eagerly sought out by Europe's scholarly community and proved a commercial success for Descartes. Although Descartes was well known in academic circles towards the end of his life, the teaching of his works in schools was controversial. Henri de Roy (Henricus Regius, 1598–1679), Professor of Medicine at the University of Utrecht, was condemned by the Rector of the University, Gijsbert Voet (Voetius), for teaching Descartes's physics.
In January 2010, a previously unknown letter from Descartes, dated 27 May 1641, was found by the Dutch philosopher Erik-Jan Bos when browsing through Google. Bos found the letter mentioned in a summary of autographs kept by Haverford College in Haverford, Pennsylvania. The college was unaware that the letter had never been published. This was the third letter by Descartes found in the last 25 years.
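The rule of signs mentioned above fits in a few lines of code. Here is a minimal sketch in Python (the function names and the example polynomial are my own, for illustration only): count the sign changes in the coefficient sequence to bound the number of positive real roots, and do the same for p(−x) to bound the negative ones; in each case the true count equals the bound or differs from it by an even number.

def sign_changes(coeffs):
    # Count sign changes in the coefficient sequence, ignoring zeros.
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

def descartes_rule_of_signs(coeffs):
    # coeffs lists coefficients from the highest power down,
    # e.g. x^3 - 3x - 2 is [1, 0, -3, -2].
    pos = sign_changes(coeffs)
    n = len(coeffs) - 1
    # p(-x): flip the sign of every odd-power coefficient.
    neg = sign_changes([c * (-1) ** (n - i) for i, c in enumerate(coeffs)])
    return pos, neg

# x^3 - 3x - 2 = (x - 2)(x + 1)^2: at most 1 positive and 2 negative roots.
print(descartes_rule_of_signs([1, 0, -3, -2]))  # -> (1, 2)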
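The 42° rainbow figure can likewise be checked numerically. The sketch below assumes a refractive index of about 4/3 for water (the variable names are illustrative): a ray entering a spherical drop at incidence angle i leaves, after two refractions and one internal reflection, with total deviation D = 180° + 2i − 4r, where sin i = n·sin r by the law of refraction; the bow appears where D is minimal, at an angular radius of 180° minus that minimum.

import math

N_WATER = 4.0 / 3.0  # assumed refractive index of water for visible light

def deviation(i_deg, n=N_WATER):
    # Refraction angle r from Snell's law, then D = 180 + 2i - 4r (in degrees).
    r_deg = math.degrees(math.asin(math.sin(math.radians(i_deg)) / n))
    return 180.0 + 2.0 * i_deg - 4.0 * r_deg

# Rays pile up where the deviation is minimal; that caustic is the rainbow.
min_dev = min(deviation(i / 100.0) for i in range(1, 9000))
print(round(180.0 - min_dev, 1))  # angular radius of the primary bow: ~42.0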
https://en.wikipedia.org/wiki?curid=25525
Romansh language Romansh (sometimes also spelled Romansch, Rumantsch, or Romanche; Romansh: "rumantsch", "rumàntsch", "romauntsch" or "romontsch") is a Romance language spoken predominantly in the southeastern Swiss canton of Grisons (Graubünden). Romansh has been recognized as a national language of Switzerland since 1938, and as an official language in correspondence with Romansh-speaking citizens since 1996, along with German, French and Italian. It also has official status in the canton of Grisons alongside German and Italian and is used as the medium of instruction in schools in Romansh-speaking areas. It is sometimes grouped by linguists with Ladin and Friulian as a Rhaeto-Romance language ("retorumantsch"), though this is disputed.
Romansh is one of the descendant languages of the spoken Latin language of the Roman Empire, which by the 5th century AD replaced the Celtic and Raetic languages previously spoken in the area. Romansh retains a small number of words from these languages. Romansh has also been strongly influenced by German in vocabulary and morphosyntax. The language gradually retreated to its current area over the centuries, being replaced in other areas by Alemannic and Bavarian dialects. The earliest writing identified as Romansh dates from the 10th or 11th century, although major works did not appear until the 16th century, when several regional written varieties began to develop. During the 19th century the area where the language was spoken declined, but the Romansh speakers had a literary revival and started a language movement dedicated to halting the decline of the language.
In the 2000 Swiss census, 35,095 people (of whom 27,038 live in the canton of Grisons) indicated Romansh as the language of "best command", and 61,815 as a "regularly spoken" language. In 2010, Switzerland switched to a yearly system of assessment that uses a combination of municipal citizen records and a limited number of surveys. Under this system, Romansh speakers number 44,354 inhabitants of Switzerland, or 0.85% of its population, and 28,698 inhabitants of the canton of Grisons, or 14.7% of the canton's population. About 28% of the Romansh-speaking people in the Romansh-speaking areas also speak one other language fluently, e.g. German or Italian, which are the other official languages of Grisons. Romansh is divided into five different regional dialects (Sursilvan, Sutsilvan, Surmiran, Putèr, and Vallader), each with its own standardized written language. In addition, a pan-regional variety called Rumantsch Grischun was introduced in 1982, which is controversial among Romansh speakers.
Romansh is a Romance language descending from Vulgar Latin, the spoken language of the Roman Empire. Within the Romance languages, Romansh stands out because of its peripheral location, which has resulted in several archaic features. Another distinguishing feature is the centuries-long language contact with German, which is most noticeable in the vocabulary and to a lesser extent the syntax of Romansh. Romansh belongs to the Gallo-Romance branch of the Romance languages, which includes languages such as French, Occitan, and Lombard. The main feature placing Romansh within the Gallo-Romance languages is the fronting of Latin /u/ to /y/ or /i/, as seen in Latin "muru(m)" ("wall"), which is "mür" or "mir" in Romansh. Several further features distinguish Romansh from the Gallo-Italic languages to the south and place it closer to French. Another defining feature of the Romansh language is its treatment of unstressed vowels.
All unstressed vowels except /a/ disappeared. Whether or not Romansh, Friulian and Ladin should compose a separate "Rhaeto-Romance" subgroup within Gallo-Romance is an unresolved issue, known as the "Questione ladina". Some linguists posit that these languages are descended from a common language, which was fractured geographically through the spread of German and Italian. The Italian linguist Graziadio Ascoli first made the claim in 1873. The other position holds that any similarities between these three languages can be explained through their relative geographic isolation, which shielded them from certain linguistic changes. By contrast, the Gallo-Italic varieties of Northern Italy were more open to linguistic influences from the South. Linguists who take this position often point out that the similarities between the languages are comparatively few. This position was first introduced by the Italian dialectologist Carlo Battisti. This linguistic dispute became politically relevant for the Italian irredentist movement. Italian nationalists interpreted Battisti's hypothesis as implying that Romansh, Friulian and Ladin were not separate languages but rather Italian dialects. They used this as an argument to claim the territories for Italy where these languages were spoken. From a sociolinguistic perspective, however, this question is largely irrelevant. The speakers of Romansh have always identified as speaking a language distinct from both Italian and other Romance varieties.
Romansh comprises a group of closely related dialects, which are most commonly divided into five different varieties, each of which has developed a standardized form. These regional standards are referred to as "idioms" in Romansh to distinguish them from the local vernaculars, which are referred to as "dialects". These dialects form a dialect continuum without clear-cut divisions. Historically a continuous speech area, this continuum has now been ruptured by the spread of German, so that Romansh is now geographically divided into at least two non-adjacent parts. Aside from these five major dialects, two additional varieties are often distinguished. One is the dialect of the Val Müstair, which is closely related to Vallader but often separately referred to as "Jauer" (derived from the personal pronoun "jau" 'I', i.e. 'the "jau"-sayers'). Less commonly distinguished is the dialect of Tujetsch and the Val Medel, which is markedly different from Sursilvan and is referred to as "Tuatschin". Additionally, the standardized variety Rumantsch Grischun, intended for pan-regional use, was introduced in 1982. The dialect of the Val Bregaglia is usually considered a variety of Lombard, and speakers use Italian as their written language, even though the dialect shares many features with the neighboring Putèr dialect of Romansh. As these varieties form a continuum with small transitions from each village to the next, there is no straightforward internal grouping of the Romansh dialects. The Romansh language area can be described best as consisting of two widely divergent varieties, Sursilvan in the west and the dialects of the Engadine in the east, with Sutsilvan and Surmiran forming a transition zone between them. The Engadinese varieties "Putèr" and "Vallader" are often referred to as one specific variety known as "Ladin" (rm. "ladin"), which is not to be confused with the closely related language in Italy's Dolomite mountains also known as Ladin.
Sutsilvan and Surmiran are sometimes grouped together as Central Romansh (rm. "Grischun central"), and then grouped together with Sursilvan as "Rhenish Romansh" (in German, "Rheinischromanisch"). One feature that separates the Rhenish varieties from Ladin is the retention in Ladin of the rounded front vowels /y/ and /ø/ (written "ü" and "ö"), which have been unrounded in the other dialects, as in Ladin "mür", Sursilvan "mir", Surmiran "meir" 'wall', or Ladin "chaschöl" as against Rhenish "caschiel" 'cheese'. Another is the development of Latin -CT-, which became /tɕ/ in the Rhenish varieties, as in "détg" 'said' or "fatg" 'did', while developing into /t/ in Ladin ("dit" and "fat"). A feature separating Sursilvan from Central Romansh, however, involves the extent of palatalization of Latin /k/ in front of /a/, which is rare in Sursilvan but common in the other varieties: Sursilvan "casa", Sutsilvan "tgea", Surmiran "tgesa", Putèr "chesa", and Vallader "chasa" 'house'. Overall, however, the Central Romansh varieties do not share many unique features, but rather connect Sursilvan and Ladin through a succession of numerous small differences from one village to the next.
The dialects of Romansh are not always mutually comprehensible. Speakers of Sursilvan and Ladin, in particular, are usually unable to understand each other initially. Because speakers usually identify themselves primarily with their regional dialect, many do not make the effort to understand unfamiliar dialects, and prefer to speak Swiss German with speakers of other varieties. A common Romansh identity is not widespread outside of intellectual circles, even though this has been changing among the younger generation.
Romansh originates from the spoken Latin brought to the region by Roman soldiers, merchants, and officials following the conquest of the modern-day Grisons area by the Romans in 15 BC. Before that, the inhabitants spoke Celtic and Raetic languages, with Raetic apparently being spoken mainly in the Lower Engadine valley. Traces of these languages survive mainly in toponyms, including village names such as Tschlin, Scuol, Savognin, Glion, Breil/Brigels, Brienz/Brinzauls, Purtenza, and Trun. Additionally, a small number of pre-Latin words have survived in Romansh, mainly concerning animals, plants, and geological features unique to the Alps, such as "camutsch" 'chamois' and "grava" 'scree'. It is unknown how rapidly the Celtic and Raetic inhabitants were Romanized following the conquest of Raetia. Some linguists assume that the area was rapidly Romanized following the Roman conquest, whereas others think that this process did not end until the 4th or 5th century, when more thoroughly Romanized Celts from farther north fled south to avoid invasions by Germanic tribes. The process was certainly complete and the pre-Roman languages extinct by the 5th–6th century, when Raetia became part of the Ostrogothic Kingdom. Around 537 AD, the Ostrogoths handed over the province of Raetia Prima to the Frankish Empire, which continued to have local rulers administering the so-called Duchy of Chur. However, after the death of the last Victorid ruler, Bishop Tello, around 765, Charlemagne assigned a Germanic duke to administer the region. Additionally, the Diocese of Chur was transferred by the (pre-Schism) Roman Catholic Church from the Archdiocese of Milan to the Archdiocese of Mainz in 843. The combined effect was a cultural reorientation towards the German-speaking north, especially as the ruling élite now comprised almost entirely speakers of German.
At the time, Romansh was spoken over a much wider area, stretching north into the present-day cantons of Glarus and St. Gallen, to the Walensee in the northwest, and Rüthi and the Alpine Rhine Valley in the northeast. In the east, parts of modern-day Vorarlberg were Romansh-speaking, as were parts of Tyrol. The northern areas, called Lower Raetia, became German-speaking by the 12th century; and by the 15th century, the Rhine Valley of St. Gallen and the areas around the Walensee were entirely German-speaking. This language shift was a long, drawn-out process, with larger, central towns adopting German first, while the more peripheral areas around them remained Romansh-speaking longer. The shift to German was caused in particular by the influence of the local German-speaking élites and by German-speaking immigrants from the north, with the lower and rural classes retaining Romansh longer. In addition, beginning around 1270, the German-speaking Walser began settling in sparsely populated or uninhabited areas within the Romansh-speaking heartland. The Walser sometimes expanded into Romansh-speaking areas from their original settlements, which then often became German-speaking, such as Davos, Schanfigg, the Prättigau, Schams, and Valendas, which became German-speaking by the 14th century. In rare cases, these Walser settlements were eventually assimilated by their Romansh-speaking neighbors, for instance in Oberhalbstein and in Medel and Tujetsch in the Surselva region.
The Germanization of Chur had particular long-term consequences. Even though the city had long before ceased to be a cultural center of Romansh, the spoken language of the capital of the Diocese of Chur continued to be Romansh until the 15th century. After a fire in 1465 which virtually destroyed the city, many German-speaking artisans who had been called in to help repair the damage settled there, causing German to become the majority language. In a chronicle written in 1571–72, Durich Chiampell mentions that Romansh was still spoken in Chur roughly a hundred years before, but had since then rapidly given way to German and was now not much appreciated by the inhabitants of the city. Many linguists regard the loss of Chur to German as a crucial event. According to Sylvia Osswald, for example, it occurred precisely at a time when the introduction of the printing press could have led to the adoption of the Romansh dialect of the capital as a common written language for all Romansh speakers. Other linguists such as Jachen Curdin Arquint remain skeptical of this view, however, and assume that the various Romansh-speaking regions would still have developed their own separate written standards. Instead, several regional written varieties of Romansh began appearing during the 16th century.
Gian Travers wrote the first surviving work in Romansh, the "Chianzun dalla guerra dagl Chiaste da Müs", in the Putèr dialect. This epic poem, written in 1527, describes the first Musso war, in which Travers himself had taken part. Travers also translated numerous biblical plays into Romansh, though only the titles survive for many of them. Another early writer, Giachem Bifrun, who also wrote in Putèr, penned the first printed book in Romansh, a catechism published in 1552. In 1560 he published a translation of the New Testament: "L'g Nuof Sainc Testamaint da nos Signer Jesu Christ". Two years later, in 1562, another writer from the Engadine, Durich Chiampell, published the "Cudesch da Psalms", a collection of church songs in the Vallader dialect.
These early works are generally well written and show that the authors had a large amount of Romansh vocabulary at their disposal, contrary to what one might expect of the first pieces of writing in a language. Because of this, the linguist Ricarda Liver assumes that these written works built on an earlier, pre-literary tradition of using Romansh in administrative and legal situations, of which no evidence survives. In their prefaces, the authors themselves often mention the novelty of writing Romansh, and discuss an apparently common prejudice that Romansh was a language that could not be written.
The first writing in the Sursilvan and Sutsilvan dialects appears in the 17th century. As in the Engadine, these early works usually focused on religious themes, in particular the struggles between Protestants and Counter-Reformers. Daniel Bonifaci produced the first surviving work in this category, the catechism "Curt mussameint dels principals punctgs della Christianevla Religiun", published in 1601 in the Sutsilvan dialect. A second edition, published in 1615, is closer to Sursilvan, however, and writings in Sutsilvan do not appear again until the 20th century. In 1611, "Igl Vêr Sulaz da pievel giuvan" ("The true joys of young people"), a series of religious instructions for Protestant youths, was published by Steffan Gabriel. Four years later, in 1615, a Catholic catechism, "Curt Mussament", was published in response, written by Gion Antoni Calvenzano. The first translation of the New Testament into Sursilvan was published in 1648 by the son of Steffan Gabriel, Luci Gabriel. The first complete translation of the Bible, the "Bibla da Cuera", was published between 1717 and 1719. The Sursilvan dialect thus had two separate written varieties, one used by the Protestants with its cultural center around Ilanz, and a Catholic variety with the Disentis Abbey as its center. The Engadine dialect was also written in two varieties: Putèr in the Upper Valley and Vallader in the Lower Valley. The Sutsilvan areas either used the Protestant variety of Sursilvan, or simply used German as their main written language. The Surmiran region began developing its own variety in the early 18th century, with a catechism being published in 1703, though either the Catholic variety of Sursilvan or Putèr was more commonly used there until the 20th century.
In the 16th century, the language border between Romansh and German largely stabilized, and it remained almost unchanged until the late 19th century. During this period, only isolated areas became German-speaking, mainly a few villages around Thusis and the village of Samnaun. In the case of Samnaun, the inhabitants adopted the Bavarian dialect of neighboring Tyrol, making Samnaun the only municipality of Switzerland where a Bavarian dialect is spoken. The Vinschgau in South Tyrol was still Romansh-speaking in the 17th century, after which it became entirely German-speaking because of the Counter-Reformation's denunciation of Romansh as a "Protestant language".
When Grisons became part of Switzerland in 1803, it had a population of roughly 73,000, of whom around 36,600 were Romansh speakers—many of them monolingual—living mostly within the Romansh-speaking valleys. The language border with German, which had mostly been stable since the 16th century, now began moving again as more and more villages shifted to German. One cause was the admission of Grisons as a Swiss canton, which brought Romansh-speakers into more frequent contact with German-speakers.
Another factor was the increased power of the central government of Grisons, which had always used German as its administrative language. In addition, many Romansh-speakers migrated to the larger cities, which were German-speaking, while speakers of German settled in Romansh villages. Moreover, economic changes meant that the Romansh-speaking villages, which had mostly been self-sufficient, engaged in more frequent commerce with German-speaking regions. Also, improvements in infrastructure made travel and contact with other regions much easier than it had been. Finally, the rise of tourism made knowledge of German an economic necessity in many areas, while the agricultural sector, which had been a traditional domain of Romansh, became less important. All this meant that knowledge of German became more and more of a necessity for Romansh speakers and that German became more and more a part of daily life. For the most part, German was seen not as a threat but rather as an important asset for communicating outside one's home region. The common people frequently demanded better access to learning German. When public schools began to appear, many municipalities decided to adopt German as the medium of instruction, as in the case of Ilanz, where German became the language of schooling in 1833, when the town was still largely Romansh-speaking.
Some people, in particular progressives, even welcomed the disappearance of Romansh. In their eyes, Romansh was an obstacle to the economic and intellectual development of the Romansh people. For instance, the priest Heinrich Bansi from Ardez wrote in 1797: "The biggest obstacle to the moral and economical improvement of these regions is the language of the people, Ladin [...] The German language could certainly be introduced with ease into the Engadine, as soon as one could convince the people of the immense advantages of it". Others, however, saw Romansh as an economic asset, since it gave the Romansh an advantage when learning other Romance languages. In 1807, for example, the priest Mattli Conrad wrote an article listing the advantages and disadvantages of Romansh. In response, however, the editor of the newspaper added: According to the testimony of experienced and vigilant language teachers, while the one who is born Romansh can easily learn to understand these languages and make himself understood in them, he has great difficulties in learning them properly, since precisely because of the similarity, he mixes them so easily with his own bastardized language. [...] in any case, the conveniences named should hold no weight against all the disadvantages that come from such an isolated and uneducated language.
According to Mathias Kundert, this quote is a good example of the attitude of many German-speakers towards Romansh at the time. While there was never a plan to Germanize the Romansh areas of Grisons, Kundert notes, many German-speaking groups wished that the entire canton would become German-speaking. They were careful, however, to avoid any drastic measures to that end, in order not to antagonize the influential Romansh minority.
The decline of Romansh over the 20th century can be seen through the results of the Swiss censuses. The decline in percentages is only partially due to the Germanization of Romansh areas, since the Romansh-speaking valleys always had a lower overall population growth than other parts of the canton.
Starting in the mid-19th century, however, a revival movement began, often called the "Rhaeto-Romansh renaissance". This movement involved increased cultural activity, as well as the foundation of several organizations dedicated to protecting the Romansh language. In 1863, the first of several attempts was made to found an association for all Romansh regions, which eventually led to the foundation of the "Società Retorumantscha" in 1885. In 1919, the Lia Rumantscha was founded to serve as an umbrella organization for the various regional language societies. Additionally, the role of Romansh in schooling was strengthened, with the first Romansh school books being published in the 1830s and 1840s. Initially, these were merely translations of the German editions, but by the end of the 19th century teaching materials were introduced which took the local Romansh culture into consideration. Additionally, Romansh was introduced as a subject in teachers' college in 1860 and was recognized as an official language by the canton in 1880.
Around the same time, grammar and spelling guidelines began to be developed for the regional written dialects. One of the earliest was the "Ortografia et ortoëpia del idiom romauntsch d'Engiadin'ota" by Zaccaria Pallioppi, published in 1857. For Sursilvan, a first attempt to standardize the written language was the "Ortografia gienerala, speculativa ramontscha" by Baseli Carigiet, published in 1858, followed by a Sursilvan-German dictionary in 1882, and the "Normas ortografias" by Giachen Caspar Muoth in 1888. Neither of these guidelines managed to gather much support, however. At the same time, the Canton published school books in its own variety. Sursilvan was then definitively standardized through the works of Gion Cahannes, who published "Grammatica Romontscha per Surselva e Sutselva" in 1924, followed by "Entruidament devart nossa ortografia" in 1927. The Surmiran dialect had its own norms established in 1903, when the Canton agreed to finance the school book "Codesch da lectura per las scolas primaras de Surmeir", though a definitive guideline, the "Normas ortograficas per igl rumantsch da Surmeir", was not published until 1939. In the meantime, the norms of Pallioppi had come under criticism in the Engadine due to the strong influence of Italian in them. This led to an orthographic reform which was concluded by 1928, when the "Pitschna introducziun a la nouva ortografia ladina ufficiala" by Cristoffel Bardola was published. A separate written variety for Sutsilvan was developed in 1944 by Giuseppe Gangale.
Around 1880, the entire Romansh-speaking area still formed a continuous geographical unit. But by the end of the century, the so-called "Central-Grisons language bridge" began to disappear. From Thusis, which had become German-speaking in the 16th/17th century, the Heinzenberg and Domleschg valleys were gradually Germanized over the following decades. Around the turn of the century, the inner Heinzenberg and Cazis became German-speaking, followed by Rothenbrunnen, Rodels, Almens, and Pratval, splitting the Romansh area into two geographically non-connected parts. In the 1920s and 1930s the rest of the villages in the valley became mainly German-speaking, sealing the split. In order to halt the decline of Romansh, the Lia Rumantscha established Romansh day care schools, called "Scoletas", beginning in the 1940s, with the aim of reintroducing Romansh to children.
Although the "Scoletas" had some success – of the ten villages where Scoletas were established, the children began speaking Romansh amongst themselves in four, with the children in four others acquiring at least some knowledge of Romansh – the program ultimately failed to preserve the language in the valley. A key factor was the disinterest of the parents, whose main motivation for sending their children to the Scoletas appears to have been that they were looked after for a few hours and given a meal every day, rather than an interest in preserving Romansh. The other factor was that after entering primary school, the children received a few hours a week of Romansh instruction at best. As a result, the last Scoletas were closed in the 1960s with the exception of Präz, where the Scoleta remained open until 1979. In other areas, such as the Engadine and the Surselva, where the pressure of German was equally strong, Romansh was maintained much better and remained a commonly spoken language. According to the linguist Mathias Kundert, one important factor was the different social prestige of Romansh. In the Heinzenberg and Domleschg valleys, the elite had been German-speaking for centuries, so that German was associated with power and education, even though most people did not speak it, whereas Romansh was associated with peasant life. In the Engadine and the Surselva by contrast, the elite was itself Romansh-speaking, so that Romansh there was "not only the language spoken to children and cows, but also that of the village notable, the priest, and the teacher." Additionally, Romansh schools had been common for several years before German had become a necessity, so that Romansh was firmly established as a medium of education. Likewise, in the Upper Engadine, where factors such as increased mobility and immigration by German speakers were even stronger, Romansh was more firmly established as a language of education and administration, so that the language was maintained to a much greater extent. In Central Grisons, by contrast, German had been a central part of schooling since the beginning, and virtually all schools switched entirely to German as the language of instruction by 1900, with children in many schools being punished for speaking Romansh well into the 1930s. Early attempts to create a unified written language for Romansh include the "Romonsch fusionau" of Gion Antoni Bühler in 1867 and the "Interrumantsch" by Leza Uffer in 1958. Neither was able to gain much support, and their creators were largely the only ones actively using them. In the meantime, the Romansh movement sought to promote the different regional varieties while promoting a gradual convergence of the five varieties, called the ""avischinaziun"". In 1982, however, the then secretary of the Lia Rumantscha, a sociolinguist named Bernard Cathomas, launched a project for designing a pan-regional variety. The linguist Heinrich Schmid presented to the Lia Rumantscha the same year the rules and directives for this standard language under the name Rumantsch Grischun. Schmid's approach consisted of creating a language as equally acceptable as possible to speakers of the different dialects, by choosing those forms which were found in a majority of the three strongest varieties: Sursilvan, Surmiran, and Vallader. The elaboration of the new standard was endorsed by the Swiss National Fund and carried out by a team of young Romansh linguists under the guidance of Georges Darms and Anna-Alice Dazzi-Gross. 
The Lia Rumantscha then began introducing Rumantsch Grischun to the public, announcing that it would chiefly be introduced into domains where only German was being used, such as official forms and documents, billboards, and commercials. In 1984, the assembly of delegates of the umbrella organization Lia Rumantscha decided to use the new standard language when addressing all Romansh-speaking areas of Grisons. From the very start, Rumantsch Grischun was implemented only on the basis of decisions by the particular institutions concerned. In 1986, the federal administration began to use Rumantsch Grischun for individual texts.

The same year, however, several influential figures began to criticize the introduction of Rumantsch Grischun. Donat Cadruvi, at the time president of the cantonal government, claimed that the Lia Rumantscha was trying to force the issue. The Romansh writer Theo Candinas also called for a public debate on the issue, calling Rumantsch Grischun a "plague" and a "death blow" to Romansh and its introduction a "Romansh Kristallnacht", thus launching a highly emotional and bitter debate which would continue for several years. The following year, Theo Candinas published another article, titled "Rubadurs Garmadis", in which he compared the proponents of Rumantsch Grischun to Nazi thugs raiding a Romansh village and desecrating, destroying, and burning the Romansh cultural heritage. The proponents responded by labeling the opponents as, among other things, a small group of archconservative and narrow-minded Sursilvans and CVP politicians. The debate was characterized by a heavy use of metaphors, with opponents describing Rumantsch Grischun as a "test-tube baby" or a "castrated language". They argued that it was an artificial and infertile creation which lacked a heart and soul, in contrast to the traditional dialects. On the other side, proponents called on the Romansh people to nurture the "new-born" to allow it to grow, with the Romansh writer Ursicin Derungs calling Rumantsch Grischun a "lungatg virginal" 'virgin language' that now had to be seduced and turned into a blossoming woman. The opposition to Rumantsch Grischun also became clear in the Swiss census of 1990, in which certain municipalities refused to distribute questionnaires in Rumantsch Grischun, requesting the German version instead.

Following a survey on the opinion of the Romansh population on the issue, the government of Grisons decided in 1996 that Rumantsch Grischun would be used when addressing all Romansh speakers, but that the regional varieties could continue to be used when addressing a single region or municipality. In schools, Rumantsch Grischun was not to replace the regional dialects but only to be taught passively. The compromise was largely accepted by both sides. A further recommendation in 1999, known as the "Haltinger concept", also proposed that the regional varieties should remain the basis of the Romansh schools, with Rumantsch Grischun being introduced in middle school and secondary school. The government of Grisons then took steps to strengthen the role of Rumantsch Grischun as an official language. Since the cantonal constitution explicitly named Sursilvan and Engadinese as the languages of ballots, a referendum was launched to amend the relevant article. In the referendum, which took place on June 10, 2001, 65% voted in favor of naming Rumantsch Grischun the only official Romansh variety of the canton.
Opponents of Rumantsch Grischun such as Renata Coray and Matthias Grünert argue, however, that if only those municipalities with at least 30% Romansh speakers were considered, the referendum would have been rejected by 51%, with an even larger margin if only those with at least 50% Romansh speakers were considered. They thus interpret the results as the Romansh minority having been overruled by the German-speaking majority of the canton.

A major change in policy came in 2003, when the cantonal government proposed a number of spending cuts, including a proposal according to which new Romansh teaching materials would not be published except in Rumantsch Grischun from 2006 onwards, the logical result of which would have been to abolish the regional varieties as languages of instruction. The cantonal parliament passed the measure in August 2003, even advancing the deadline to 2005. The decision was met with strong opposition, in particular in the Engadine, where teachers collected over 4,300 signatures opposing the measure, followed by a second petition signed by around 180 Romansh writers and cultural figures, including many who were supportive of Rumantsch Grischun but opposed its introduction as a language of instruction. Opponents argued that Romansh culture and identity were transmitted through the regional varieties and not through Rumantsch Grischun, and that Rumantsch Grischun would serve to weaken rather than strengthen Romansh, possibly leading to a switch to German-language schools and a swift Germanization of Romansh areas. The cantonal government refused to debate the issue again, however, instead deciding in December 2004 on a three-step plan to introduce Rumantsch Grischun as the language of schooling, allowing the municipalities to choose when they would make the switch. The decision not to publish any new teaching materials in the regional varieties was not overturned, however, raising the question of what would happen in those municipalities that refused to introduce Rumantsch Grischun at all, since the language of schooling is decided by the municipalities themselves in Grisons. The teachers of the Engadine in particular were outraged over the decision, but those in the Surmeir were mostly satisfied. Few opinions were heard from the Surselva, which was interpreted either as support or as resignation, depending on the viewpoint of the observer.

In 2007–2008, 23 so-called "pioneer municipalities" (Lantsch/Lenz, Brienz/Brinzauls, Tiefencastel, Alvaschein, Mon, Stierva, Salouf, Cunter, Riom-Parsonz, Savognin, Tinizong-Rona, Mulegns, Sur, Marmorera, Falera, Laax, Trin, Müstair, Santa Maria Val Müstair, Valchava, Fuldera, Tschierv, and Lü) introduced Rumantsch Grischun as the language of instruction in 1st grade, followed by an additional 11 (Ilanz, Schnaus, Flond, Schluein, Pitasch, Riein, Sevgein, Castrisch, Surcuolm, Luven, and Duvin) the following year and another 6 (Sagogn, Rueun, Siat, Pigniu, Waltensburg/Vuorz, and Andiast) in 2009–2010. However, other municipalities, including the entire Engadine valley and most of the Surselva, continued to use their regional variety. The cantonal government aimed to introduce Rumantsch Grischun as the sole language of instruction in Romansh schools by 2020. In early 2011, however, a group of opponents in the Surselva and the Engadine founded the association Pro Idioms, demanding the overturning of the government decision of 2003 and launching numerous local initiatives to return to the regional varieties as the language of instruction.
In April 2011, Riein became the first municipality to vote to return to teaching in Sursilvan, followed by an additional 4 in December and a further 10 in early 2012, including Val Müstair (returning to Vallader), which had been the first to introduce Rumantsch Grischun. As of September 2013, all municipalities in the Surselva, with the exception of Pitasch, had decided to return to teaching in Sursilvan. Supporters of Rumantsch Grischun then announced that they would take the issue to the Federal Supreme Court of Switzerland and declared their intention to launch a cantonal referendum to enshrine Rumantsch Grischun as the language of instruction. The Lia Rumantscha opposes these moves and now supports a model of coexistence in which Rumantsch Grischun supplements but does not replace the regional varieties in school. It cites the need to keep linguistic peace among Romansh speakers, noting that the decades-long debate over the issue has torn friends and even families apart. In December 2011, the 2003 decision was overturned, so that the canton again finances school books in the regional varieties.

Rumantsch Grischun is still a project in progress. At the start of 2014, it was in use as a school language in the central part of Grisons and in the bilingual classes in the region of Chur. It is taught in upper-secondary schools, at the university of teacher education in Chur, and at the universities of Zürich and Fribourg, along with the Romansh idioms. It remains an official and administrative language in the Swiss Confederation and the Canton of Grisons, as well as in public and private institutions, for all kinds of texts intended for the whole Romansh-speaking territory. Rumantsch Grischun is read in the news of Radiotelevisiun Svizra Rumantscha and written in the daily newspaper "La Quotidiana", along with the Romansh idioms. Thanks to many new texts in a wide variety of political and social functions, the Romansh vocabulary has been decisively broadened. The "Pledari Grond" German–Rumantsch Grischun dictionary, with more than 215,000 entries, is the most comprehensive collection of Romansh words, which can also be used in the idioms with the necessary phonetic shifts. The signatories of "Pro Rumantsch" stress that Romansh needs both the idioms and Rumantsch Grischun if it is to improve its chances in today's communication society.

In Switzerland, official language use is governed by the "territorial principle": cantonal law determines which of the four national languages enjoys official status in which part of the territory. Only the federal administration is officially quadrilingual. Romansh is an official language at the federal level, one of the three official languages of the Canton of Grisons, and a working language in various districts and numerous municipalities within the canton. The first Swiss constitution of 1848, as well as the subsequent revision of 1872, made no mention of Romansh, which at the time was not a working language of the Canton of Grisons either. The federal government did finance a translation of the constitution into the two Romansh varieties Sursilvan and Vallader in 1872, noting, however, that these did not carry the force of law. Romansh became a national language of Switzerland in 1938, following a referendum. However, a distinction was introduced between "national languages" and "official languages".
The status of a national language was largely symbolic, whereas only official languages were to be used in official documents, a status reserved for German, French, and Italian. The recognition of Romansh as the fourth national language is best seen within the context of the "Spiritual defence" preceding World War II, which aimed to underline the special status of Switzerland as a multinational country. Additionally, it was supposed to discredit the efforts of Italian nationalists to claim Romansh as a dialect of Italian and thereby establish a claim to parts of Grisons. The Romansh language movement led by the Lia Rumantscha was mostly satisfied with the status of a national but not official language. Its aims at the time were to secure a symbolic "right of residence" for Romansh, not actual use in official documents. This status did have disadvantages, however. For instance, official name registers and property titles had to be in German, French, or Italian, which meant that Romansh-speaking parents were often forced to register their children under German or Italian versions of their Romansh names. As late as 1984, the Canton of Grisons was ordered not to make entries into its corporate registry in Romansh.

The Swiss National Bank first planned to include Romansh on its bills in 1956, when a new series was introduced. Due to disputes within the Lia Rumantscha over whether the bills were to feature the Sursilvan version "Banca nazionala svizra" or the Vallader version "Banca naziunala svizzra", the bills eventually featured the Italian version twice, alongside French and German. When new bills were again introduced in 1976/77, a Romansh version was added by finding a compromise between the two largest varieties, Sursilvan and Vallader, which read "Banca naziunala svizra", while the numbers on the bills were printed in Surmiran, a minor intermediate dialect.

Following a referendum on March 10, 1996, Romansh was recognized as a partial official language of Switzerland alongside German, French, and Italian in article 70 of the federal constitution. According to the article, German, French, Italian, and Rhaeto-Romansh are national languages of Switzerland; the official languages are declared to be German, French, and Italian, with Rhaeto-Romansh an official language for correspondence with Romansh-speaking people. This means that in principle, it is possible to address the federal administration in Romansh and receive an answer in the same language. In what the Federal Culture Office itself admits is "more a placatory and symbolic use" of Romansh, the federal authorities occasionally translate some official texts into Romansh. In general, though, demand for Romansh-language services is low because, according to the Federal Culture Office, Romansh speakers may either dislike the official Rumantsch Grischun idiom or prefer to use German in the first place, as most are perfectly bilingual. Without a unified standard language, the status of an official language of the Swiss Confederation would not have been conferred on Romansh; establishing it in this new function takes time and promotion.

The Swiss Armed Forces attempted to introduce Romansh as an official language of command between 1988 and 1992. Attempts were made to form four entirely Romansh-speaking companies, but these efforts were abandoned in 1992 due to a lack of sufficient Romansh-speaking non-commissioned officers.
Official use of Romansh as a language of command was discontinued in 1995 as part of a reform of the Swiss military.

Grisons is the only canton of Switzerland where Romansh is recognized as an official language. The only working language of the Three Leagues was German until 1794, when the assembly of the leagues declared German, Italian, Sursilvan, and Ladin (Putèr and Vallader) to have equal official standing. No explicit mention of any official language was made in the cantonal constitutions of 1803, 1814, and 1854. The constitution of 1880 declared that "The three languages of the Canton are guaranteed as national languages", without specifying which three languages were meant. The new cantonal constitution of 2004 recognizes German, Italian, and Romansh as equal national and official languages of the canton. The canton used the Romansh varieties Sursilvan and Vallader up until 1997, when Rumantsch Grischun was added; use of Sursilvan and Vallader was discontinued in 2001. This means that any citizen of the canton may request service and official documents, such as ballots, in their language of choice, that all three languages may be used in court, and that a member of the cantonal parliament is free to use any of the three languages. Since 1991, all official texts of the cantonal parliament must be translated into Romansh, and offices of the cantonal government must include signage in all three languages. In practice, the role of Romansh within the cantonal administration is limited and often symbolic, and the working language is mainly German. This is usually justified by cantonal officials on the grounds that all Romansh speakers are perfectly bilingual and able to understand and speak German. Up until the 1980s, it was usually seen as a provocation when a deputy in the cantonal parliament used Romansh during a speech.

Cantonal law leaves it to the districts and municipalities to specify their own language of administration and schooling. According to Article 3 of the cantonal constitution, however, the municipalities are to "take into consideration the traditional linguistic composition and respect the autochthonous linguistic minorities". This means that the language area of Romansh has never been officially defined, and that any municipality is free to change its official language. In 2003, Romansh was the sole official language in 56 municipalities of Grisons, and 19 were bilingual in their administrative business. In practice, even those municipalities which recognize only Romansh as an official working language readily offer services in German as well. Additionally, since the working language of the canton is mainly German and many official publications of the canton are available only in German, it is virtually impossible for a municipal administration to operate only in Romansh.

Within the Romansh-speaking areas, three different types of educational model can be found: Romansh schools, bilingual schools, and German schools with Romansh as a subject. In the Romansh schools, Romansh is the primary language of instruction during the first 3–6 years of the nine years of compulsory schooling, and German during the last 3–9 years. For this reason, this school type is often referred to as the "so-called Romansh school". In practice, the amount of Romansh schooling varies between one half and four-fifths of the compulsory school term, often depending on how many Romansh-speaking teachers are available. As of 2001, this "so-called Romansh school" was found in 82 municipalities of Grisons.
The bilingual school is found only in Samedan, Pontresina, and Ilanz/Schnaus. In 15 municipalities, German was the sole medium of instruction as of 2001, with Romansh taught as a subject. Outside of areas where Romansh is traditionally spoken, Romansh is not offered as a subject, and as of 2001, 17 municipalities within the historical language area of Romansh did not teach it as a subject. On the secondary level, the language of instruction is mainly German, with Romansh as a subject in the Romansh-speaking regions.

Outside of the traditional Romansh-speaking areas, the capital of Grisons, Chur, runs a bilingual Romansh-German elementary school. On the tertiary level, the University of Fribourg offers Bachelor and Master programs in Romansh language and literature; its Romansh department has been in existence since 1991. Since 1985, the University of Zürich, together with ETH Zürich, has also maintained a partial chair for Romansh language and literature.

Whereas Romansh was spoken as far north as Lake Constance in the early Middle Ages, the language area of Romansh is today limited to parts of the Swiss canton of Grisons; the last areas outside the canton to speak Romansh, in the Vinschgau in South Tyrol, became German-speaking in the 17th century. Inside Grisons, the language borders largely stabilized in the 16th century and remained almost unchanged until the 19th century. This language area is often called the "Traditional Romansh-speaking territory", a term introduced by the statistician Jean-Jacques Furer based on the results of the Swiss censuses. Furer defines this language area as those municipalities in which a majority declared Romansh as their mother tongue in any of the first four Swiss censuses between 1860 and 1888. In addition, he includes Fürstenau. This represented 121 municipalities at the time, corresponding to 116 present-day municipalities. The villages of Samnaun, Sils im Domleschg, Masein, and Urmein, which were still Romansh-speaking in the 17th century, had lost their Romansh majority by 1860 and are not included in this definition. This historical definition of the language area has been taken up in many subsequent publications, but the Swiss Federal Statistical Office, for instance, defines the language area of Romansh as those municipalities where a majority declared in the census of 2000 that they habitually use Romansh.

The presence of Romansh within its traditional language area varies from region to region. In 2000, 66 municipalities still had a Romansh majority, and an additional 32 had at least 20% who declared Romansh as their language of best command or as a habitually spoken language, while Romansh was either extinct or spoken only by a small minority in the remaining 18 municipalities within the traditional language area. In the Surselva region, it is the habitually spoken language of 78.5% and the language of best command of 66%. In the Sutselva region, by contrast, Romansh is extinct or spoken only by a small number of older people, with the exception of Schams, where it is still transmitted to children and where some villages still have a Romansh majority, notably in the vicinity of the Schamserberg. In the Surmiran region, it is the main language in the Surses region, but is no longer widely spoken in the Albula Valley. In the Upper Engadine valley, it is a habitually spoken language for 30.8% and the language of best command for 13%.
However, most children there still acquire Romansh through the school system, which has retained Romansh as the primary language of instruction, even though Swiss German is more widely spoken inside the home. In the Lower Engadine, Romansh speakers form the majority in virtually all municipalities, with 60.4% declaring Romansh as their language of best command in 2000 and 77.4% declaring it as a habitually spoken language. Outside of the traditional Romansh language area, Romansh is spoken by the so-called "Romansh diaspora", meaning people who have moved out of the Romansh-speaking valleys. A significant number are found in the capital of Grisons, Chur, as well as in Swiss cities outside of Grisons.

The current situation of Romansh is quite well researched. The number of speakers is known through the Swiss censuses, the most recent having taken place in 2000, in addition to surveys by Radio e Televisiun Rumantscha. The quantitative data from these surveys was summarized by the statistician Jean-Jacques Furer in 2005. In addition, the linguist Regula Cathomas performed a detailed survey of everyday language use, published in 2008.

Virtually all Romansh speakers today are bilingual in Romansh and German. Whereas monolingual Romansh speakers were still common at the beginning of the twentieth century, they are now found only among pre-school children. As the Romansh linguist Ricarda Liver writes, the language situation today consists of a complex relationship between several forms of diglossia, since there is a functional distribution within Romansh itself between the local dialect, the regional standard variety, and nowadays the pan-regional variety Rumantsch Grischun as well; and German is also acquired in two varieties: Swiss German and Standard German. Additionally, in the Val Müstair many people also speak Bavarian German as a second language. Aside from German, many Romansh speakers also speak additional languages such as French, Italian, or English, learned at school or acquired through direct contact.

The Swiss censuses of 1990 and 2000 asked for the "language of best command" as well as for the languages habitually used in the family, at work, and in school; previous censuses had asked only for the "mother tongue". In 1990, Romansh was named as the "language of best command" by 39,632 people, decreasing to 35,095 in 2000. As a family language, Romansh is more widespread, named by 55,707 in 1990 and 49,134 in 2000. As a language used at work, Romansh was more widely used in 2000 (20,327 responses) than in 1990 (17,753), as it was as a language used at school, named by 6,411 in 2000 as compared to 5,331 in 1990. Overall, a total of 60,561 people reported using Romansh of some sort on a habitual basis, representing 0.83% of the Swiss population. As the language of best command, Romansh comes in 11th in Switzerland, with 0.74%, the non-national languages Serbian, Croatian, Albanian, Portuguese, Spanish, English, and Turkish all having more speakers than Romansh. In the entire Canton of Grisons, where about two-thirds of all speakers live, roughly a sixth reported it as the language of best command (29,679 in 1990 and 27,038 in 2000). As a family language it was used by 19.5% in 2000 (33,707), as a language used on the job by 17.3% (15,715), and as a school language by 23.3% (5,940). Overall, 21.5% (40,168) of the population of Grisons reported speaking Romansh habitually in 2000.
Within the traditional Romansh-speaking areas, where 56.1% (33,991) of all speakers lived in 2000, Romansh is the majority language in 66 municipalities. The status of Romansh differs widely within this traditional area, however. Whereas in some areas Romansh is used by virtually the entire population, in others the only speakers are people who have moved there from elsewhere. Overall, Romansh dominates in most of the Surselva and the Lower Engadine as well as in parts of the Surses, whereas German is the dominant daily language in most other areas, though Romansh is often still used and transmitted in a limited manner regardless.

In general, Romansh is the dominant language in most of the Surselva. In the western areas, the Cadi and the Lumnezia, it is the language of a vast majority, with around 80% naming it as their language of best command, and it is often a daily language for virtually the entire population. In the eastern areas of the Gruob around Ilanz, German is significantly more dominant in daily life, though most people still use Romansh regularly. Romansh is still acquired by most children in the Cadi and the Gruob, even in villages where Romansh speakers are in the minority, since it is usually the language of instruction in primary education there. Even in villages where Romansh dominates, however, newcomers rarely learn Romansh, as Sursilvan speakers quickly accommodate them by switching to German, so that there is often little opportunity to practice Romansh even when people are willing to learn it. Some counter-pressure comes from children, who will sometimes speak Romansh even with their non-Romansh-speaking parents.

In the Imboden District, by contrast, Romansh is used habitually by only 22%, and it is the language of best command for only 9.9%. Even within this district, however, the presence of Romansh varies, with 41.3% in Trin reporting that they speak it habitually. In the Sutselva, the local Romansh dialects are extinct in most villages, with a few elderly speakers remaining in places such as Präz, Scharans, Feldis/Veulden, and Scheid, though passive knowledge is slightly more common. Some municipalities still offer Romansh as a foreign-language subject in school, though it is often under pressure of being replaced by Italian. The notable exception is Schams, where it is still regularly transmitted to children and where the language of instruction is Romansh. In the Surmeir region, it is still the dominant everyday language in the Surses, but has mostly disappeared from the Albula Valley. The highest proportion of habitual speakers is found in Salouf with 86.3%, the lowest in Obervaz with 18.9%. In these areas, many Romansh speakers speak only German with their spouses, out of accommodation or habit, though they sometimes speak Romansh to their children. In most cases, this is not out of a will to preserve the language, but for other reasons, such as Romansh having been their own childhood language or a belief that their children will later find it easier to learn additional languages.

In the Upper Engadine, Romansh is used habitually by 30.8% and is the language of best command for 13%, with only S-chanf having a Romansh majority. Even though the main everyday and family language is German, Romansh is not in imminent danger of disappearing in the Upper Engadine, due to the strong emotional attachment to the language and in particular to the Romansh-language school, which means that a Romansh-speaking core always exists in some form.
Romansh is often a sign of being one of the locals and is used to distinguish oneself from tourists or temporary residents, so that outsiders will sometimes acquire Romansh in order to fit in. In the Lower Engadine, by contrast, Romansh is the majority language virtually everywhere, with over 80% reporting it as a habitually spoken language in most villages. The status of Romansh is even stronger in the Val Müstair, where 86.4% report speaking it habitually, and 74.1% report it as their language of best command. In the Lower Engadine, outsiders are generally expected to learn Romansh if they wish to be integrated into the local community and take part in social life. In addition, there is often pressure from inside the family to learn Romansh.

Overall, Jean-Jacques Furer concludes that the shrinkage of the Romansh-speaking areas is continuing, though at different rates depending on the region. At the same time, he notes that Romansh is still very much alive, a fact that is obvious in those areas where it retains a strong presence, such as most parts of the Surselva and the Lower Engadine. It is also assured that Romansh will continue to be transmitted for several more generations, even though each succeeding generation will be more and more rooted in German as well as Romansh. As a result, if the overall linguistic situation does not change, speakers will slowly become fewer and fewer with each generation. He concludes, however, that there are still enough speakers to ensure that Romansh will survive in the long term, at least in certain regions. He considers the Romansh-language school system to be the single most crucial factor in this.

Romansh has up to 26 consonant phonemes. Two are found only in some varieties, and one is found only in loanwords borrowed from German. The voiced obstruents are fully voiced in Romansh, in contrast to Swiss German, with which Romansh is in extensive contact, and the voiceless obstruents are unaspirated. Voiced obstruents are devoiced word-finally, however, as in "buob" 'boy', "chöd" 'warm', "saung" 'blood', or "clav" 'key'.

The vowel inventory varies somewhat between dialects; the front rounded vowels are found only in Putèr and Vallader. They have historically been unrounded in the other varieties, where they occur only in recent loans from German, and they are not found in the pan-regional variety Rumantsch Grischun either. The now nearly extinct Sutsilvan dialects of the Heinzenberg have a front rounded vowel in words such as "plànta" 'plant, tree', but it is etymologically unrelated to the front rounded vowels found in Putèr and Vallader. The exact realization of one further vowel phoneme varies from dialect to dialect, and some linguists regard it either as a marginal phoneme or as not distinct from a neighboring vowel. Word stress generally falls either on the last or on the penultimate syllable of a word. Unstressed vowels are generally reduced to a schwa, whose exact pronunciation varies between dialects. Vowel length is predictable. The number of diphthongs varies significantly between dialects: Sursilvan dialects contain eleven diphthongs and four triphthongs, while other dialects have different inventories; Putèr, for instance, lacks several of these as well as the triphthongs, but has a diphthong that is missing in Sursilvan. A phenomenon known as "hardened diphthongs", in which the second element of a falling diphthong is pronounced as a consonant, was once common in Putèr as well, but is nowadays limited to Surmiran, as in "strousch" 'barely'.
Romansh is written in the Latin alphabet and mostly follows a phonemic orthography, with a high correspondence between letters and sounds. The orthography varies slightly depending on the variety. The vowel inventories of the five regional written varieties differ widely (in particular with regard to diphthongs), and the pronunciation often differs depending on the dialect even within them. The orthography of Sutsilvan is particularly complex, allowing for different pronunciations of the vowels depending on the regional dialect, and is not treated here. The following description deals mainly with the Sursilvan dialect, which is the best-studied so far. The dialects Putèr and Vallader of the Engadine valley in particular diverge considerably from Sursilvan on many points; where possible, such differences are described.

Nouns are not inflected for case in Romansh; the grammatical category is expressed through word order instead. As in most other Romance languages, Romansh nouns belong to two grammatical genders: masculine and feminine. A definite article (masc. "il" or "igl" before a vowel; fem. "la") is distinguished from an indefinite article (masc. "in", "egn", "en", or "ün", depending on the dialect; fem. "ina", "egna", "ena", or "üna"). The plural is usually formed by adding the suffix -s. In Sursilvan, masculine nouns are sometimes irregular, with the stem vowel alternating. A particularity of Romansh is the so-called "collective plural", used to refer to a mass of things as a whole. Adjectives are declined according to gender and number. Feminine forms are always regular, but the stem vowel sometimes alternates in the masculine forms. Sursilvan also distinguishes an attributive and a predicative form of adjectives in the singular, a distinction not found in some of the other dialects.

There are three singular and three plural pronouns in Romansh, with a T–V distinction between familiar "ti" and polite "vus". Putèr and Vallader distinguish between familiar "tü" and "vus" and polite "El/Ella" and "Els/Ellas". Pronouns for the polite forms in Putèr and Vallader are always capitalized to distinguish them from third-person pronouns: "Eau cugnuosch a Sia sour" 'I know your sister' and "Eau cugnuosch a sia sour" 'I know his/her sister'. The 1st and 2nd person pronouns for a direct object have two distinct forms, one of them occurring following the preposition "a": "dai a mi tiu codisch" 'give me your book'. A particularity of Sursilvan is that reflexive verbs are all formed with the reflexive pronoun "se-", which was originally only the third-person pronoun; the other Romansh dialects distinguish different reflexive pronouns. Possessive pronouns occur in a pronominal and a predicative form, which differ only in the masculine; the feminine remains the same: "sia casa" 'her/his house' – "quella casa ei sia" 'this house is hers/his'. Three different demonstrative pronouns, "quel", "tschel", and "lez", are distinguished: "A quel fidel jeu, a tschel buc" 'I trust that one, but not that other one' or "Ed il bab, tgei vegn lez a dir?" 'and the father, what is he going to say?'.

Verb tenses are divided into synthetic forms (present, imperfect) and analytic forms (perfect, pluperfect, future, passive), distinguished by the grammatical moods indicative, subjunctive, conditional, and imperative. The syntax of Romansh has not been thoroughly investigated so far.
Regular word order is subject–verb–object, but subject-auxiliary inversion occurs in several cases, placing the verb at the beginning of a sentence. This feature might be a result of contact with German, or it might be an archaic feature no longer found in other Romance languages. A sentence is negated by adding a negative particle: in Sursilvan, this is "buc", placed after the verb, while in other dialects, such as Putèr and Vallader, it is "nu", placed before the verb. A feature found only in Putèr and Vallader (as in Castilian Spanish) is the marking of a direct object with the preposition "a" when that direct object is a person or an animal, as in "hest vis a Peider?" 'did you see Peter?' and "eau d'he mno a spass al chaun" 'I took the dog out for a walk', but "hest vis la baselgia?" 'did you see the church?'.

No systematic synchronic description of Romansh vocabulary has been carried out so far. Existing studies usually approach the subject from a historical perspective, taking particular interest in the pre-Roman substratum, in archaic words preserved only in Romansh, or in loanwords from German. A project to compile all known historic and modern Romansh vocabulary is the Dicziunari Rumantsch Grischun, first published in 1904, with the 13th edition currently in preparation.

The influence of the languages (Raetic and Celtic) spoken in Grisons before the arrival of the Romans is most obvious in place names, which are often pre-Roman. Since very little is known about the Celtic language once spoken in Grisons, and almost nothing about Raetic, words or place names thought to come from them are usually simply referred to as "pre-Roman". Apart from place names, such words are found in landscape features, in plant and animal names unique to the Alps, and in tools and methods related to alpine transhumance.

Like all languages, Romansh has its own archaisms, that is, words derived from Latin that have fallen out of use in most other Romance languages. Examples include "baselgia" 'church' (Vegliote "bašalka", Romanian "biserică"), "nuidis" 'grudgingly, reluctantly' from Latin "invitus", "urar" 'to pray' (Portuguese "orar", Romanian "a ura" 'to wish'), "aura" 'weather' (Old French "ore", Aromanian "avrî"), "scheiver" 'carnival', and "cudesch" 'book', the last two of which are found only in Romansh. The non-Engadinese dialects retain "anceiver" ~ "entschaiver" 'to begin', from Latin "incipere", otherwise found only in Romanian "începe", whereas Surmiran and Engadinese (Putèr, Vallader) and all other Romance languages retain a reflex of Latin *"cuminitiāre", e.g. Engadinese "(s)cumanzar", Italian "cominciare", French "commencer". Other examples are "memia" (adv.) 'too much' from Latin "nimia" (adj., fem.), found only in Old Occitan; "vess" 'difficult' from Latin "vix" 'seldom' (cf. Old Spanish "abés", Romanian "abia" < "ad vix"); and Engadinese "encleger" 'to understand' (vs. non-Engadinese "capir"), also found in Romanian "înțelege" and Albanian "(n)dëgjoj", from Latin "intellegere". Some unique innovations include "tedlar" 'to listen' from Latin "titulare" and "patertgar" 'to think' from "pertractare".

Another distinguishing characteristic of Romansh vocabulary is its numerous Germanic loanwords. Some Germanic loanwords entered the language as early as Late Antiquity or the Early Middle Ages, and they are often found in other Romance languages as well. Words more particular to Romansh include Surs./Suts. "tschadun", Surm. "sdom"/"sdong", Engad.
"sdun" 'spoon', which is also found in Ladin as "sciadon" and Friaulian as "sedòn" and is thought to go back to Ostrogothic *skeitho, and it was once probably common throughout Northern Italy. Another such early loan is "bletsch" 'wet', which probably goes back to Old Frankish "blettjan" 'to squeeze', from where French "blesser" 'to wound' is also derived. The change in meaning probably occurred by the way of 'bruised fruit', as is still found in French "blet". Early Germanic loans found more commonly in the other Romance languages includes Surs./Vall. "blau", Suts. "blo"/"blova", Surm. "blo"/"blava", Put. "blov" 'blue', which is derived from Germanic "blao" and also found for instance in French as "bleu" and Italian as "blu". Others were borrowed into Romansh during the Old High German period, such as "glieud" 'people' from OHG "liut" or Surs. "uaul", Suts. "gòld", Surm. "gôt", eng. "god" 'forest' from OHG "wald". Surs. "baul", Suts. "bòld", Engad. "bod" 'soon, early, nearly' is likely derived from Middle High German "bald, balde" 'keen, fast' as are Surs. "nez", Engad. "nüz" 'use' from Middle High German "nu(t)z", or "losch" 'proud' likely from Middle High German "lôs". Other examples include Surs. "schuber" 'clean' from Swiss German "suuber", Surs. "schumber" 'drum' from Swiss German or Middle High German "sumber", and Surs. "schufar" 'to drink greedily' from Swiss German "suufe". Some words were adapted into Romansh through different dialects of German, such as the word for 'farmer', borrowed as "paur" from Bavarian in Vallader and Putèr, but from Alemannic as "pur" in the other dialects. In addition, many German words entered Romansh beginning in the 19th century, when numerous new objects and ideas were introduced. Romansh speakers often simply adopted the German words, such as "il zug" 'the train' or "il banhof" 'the train station'. Language purists attempted to coin new Romansh words instead, which were occasionally successful in entering popular usage. Whereas "il tren" and "la staziun" managed to replace "il zug" and "il banhof", other German words have become established in Romansh usage, such as "il schalter" 'the switch', "il hebel" 'the lever', "la schlagbohrmaschina" 'the hammer drill', or "in schluc" 'a sip'. Especially noticeable are interjections such as "schon", "aber" or "halt", which have become established in everyday language. Romansh speakers have been in close contact with speakers of German dialects such as Alemannic and Bavarian for centuries, as well as speakers of various Italian dialects and Standard German more recently. These languages have influenced Romansh, most strongly the vocabulary, whereas the German and Italian influences on morphology and syntax are much more limited. This means that despite German influence, Romansh has remained a Romance language in its core structure. Romansh linguist Ricarda Liver also notes that an influence of Swiss German on intonation is obvious, in particular in the Sursilvan dialect, even though this has so far not been linguistically studied. The influence of German is generally strongest in the Rhenish varieties Sursilvan, Sutsilvan, and Sursilvan, where French loanwords (frequently not borrowed directly but transmitted through German) are also more numerous. In the dialects of the Engadine, by contrast, the influence of Italian is stronger. 
In the Engadinese written languages, Putèr and Vallader, Italian-influenced spellings, learned words, and derivations were previously abundant, for instance in Zaccaria Pallioppi's 1895 dictionary, but they came under scrutiny at the start of the 20th century and were gradually eliminated from the written language. Following reforms of the written languages of the Engadine, many of these Italian words fell out of usage (such as "contadin" 'farmer' instead of "paur", "nepotin" 'nephew' rather than "abiadi", "ogni" 'everyone' instead of "inmincha", "saimper" 'always' instead of "adüna", and "abbastanza" 'enough' instead of "avuonda"), while others persisted as synonyms of more traditional Ladin words (such as "tribunal" 'court' alongside "drettüra", "chapir" alongside "incleger", and "testimoni" 'witness' alongside "perdütta"). Aside from the written language, everyday Romansh was also influenced by Italian through the large number of emigrants, especially from the Engadine, to Italy, the so-called Randulin. These emigrants often returned with their Romansh speech influenced by Italian.

German loanwords entered Romansh as early as the Old High German period in the Early Middle Ages, and German has remained an important source of vocabulary since. Many of these words have been in use in Romansh for long enough that German speakers no longer recognize them as German, and morphological derivations of them have appeared, in particular through the suffix "-egiar ~ -iar", as in Surs. "baghegiar", Suts. "biagear", Surm. "biagier", Put. "biager", Vall. "bear" 'to build', derived from Middle High German "bûwen". Other examples include "malegiar" 'to paint' (← "malen"), "schenghegiar" 'to give (a present)' (← "schenken"), "schazegiar" 'to estimate' (← "schätzen"), and Surs. "betlegiar" (Suts. "batlagear", Surm./Put. "batlager", Vall. "supetliar") 'to beg', derived from Swiss German "bettle" with the same meaning. Nouns derived from these verbs include "maletg" 'painting', "schenghetg" 'gift', "schazetg" 'estimation', and "bagetg" 'building'. The adjective "flissi" 'hard-working' has given rise to the noun "flissiadad" 'industriousness'. The word "pur" has given rise to derived words such as "pura" 'farmwife, female farmer' or "puranchel" 'small-time farmer', as has "buob" 'boy' from Swiss German "bueb" 'boy', with the derivations "buoba" 'girl' and "buobanaglia" 'crowd of children'.

Common nouns of Italian origin include "resposta/risposta" 'answer', "vista/vesta" 'view', "proposta" 'proposal', "surpresa/surpraisa" 'surprise', and "offaisa/offesa" 'insult'. In Ladin, many such nouns are borrowed or derived from Italian and end in -a, whereas the same group of nouns in Sursilvan frequently ends in -iun and was borrowed either from French or formed through analogy with Latin. Examples include "pretensiun" 'opinion, claim' vs. "pretaisa", "defensiun" 'defense' vs. "defaisa", and "confirmaziun" 'confirmation' vs. "conferma". Other Italian words used throughout Romansh include the words for 'nail', derived from Italian "acuto" 'sharp', which has yielded Surs. "guota", Suts. "guta", Surm. "gotta", and Ladin "guotta/aguotta", whereas the Romansh word for 'sharp' itself (Rhenish "git", Ladin "agüz") is derived from the same Latin source ACUTUM. Words from various Italian dialects related to crafts include Ladin "marangun" 'carpenter' (← Venetian "marangon"), as opposed to "lennari" in other Romansh dialects, "chazzoula" 'trowel' (← Lombard "cazzola"), and "filadè" 'spinning wheel' (← Lombard "filadel").
Other words include culinary items such as "macaruns" 'macaroni' (← "maccheroni"); "tschiculatta/tschugalata" 'chocolate' (← "cioccolata" or Lombard "ciculata/cicolata"); Ladin and Surmiran "limun/limung" 'lemon' (← "limone"), as opposed to Sursilvan "citrona"; "giabus/baguos" 'cabbage' (← Lombard "gabüs"); and "chanella/canella" 'cinnamon' (← "cannella"). In Sursilvan, the word "ogna" 'flat cake' is found, derived from Italian "lasagna", the initial "las-" having been mistaken for the plural article and the vowel adapted to Sursilvan sound patterns through analogy with words such as "muntogna" 'mountain'. Others are words for animals, such as "lodola" 'lark' (← "lodola") or "randulina" 'swallow' (← Lombard "randulina"), as well as Ladin "scarafagi/scarvatg" 'beetle' (← "scarafaggio"). Other Italian words include "impostas" 'taxes' (← "imposte"; as opposed to Rhenish "taglia"), "radunanza/radunonza" 'assembly' (← "radunanza"), Ladin "ravarenda" '(Protestant) priest' (← "reverendo"), "bambin" 'Christmas child (gift-bringer)' (← "Gesù Bambino"), "marchadant/marcadont" 'merchant' (← "mercatante"), and "butia/buteia" 'shop' (← "bottega"). In Ladin, Italian borrowings also include word groups not usually borrowed readily. Examples include pronouns such as "qualchosa" 'something' (← "qualcosa") and "listess" 'the same one' (← Lombard or Venetian "l'istess"); adverbs such as "apunta" 'exactly' (← "appunto") and "magara/magari" 'fairly/quite' (← "magari"); prepositions like "dürant/duront" 'during' (← "durante") and "malgrà/malgrad" 'despite' (← "malgrado"); and conjunctions such as "però" 'but' (← "però") and "fin cha" 'until' (← "finché"). Most of these are confined to Ladin, with some exceptions such as Sursilvan "magari", "duront", and "malgrad".

Aside from outright loanwords, the German influence on Romansh often takes the form of calques, in which Romance vocabulary has taken on the meaning of German words, summed up by the Italian dialectologist Graziadio Isaia Ascoli in 1880 as "materia romana e spirito tedesco" ("Roman body and German soul"). The earliest examples go back to Carolingian times and show the influence of Germanic law. Such words include "tschentament" 'statute', a derivation of the verb "tschentar" (from Latin *"sedentare" 'to sit') on the analogy of Middle High German "satzunge", or Surs./Suts./Surm. "lètg", Put. "alach", Vall. "lai" 'marriage', derived from Latin "legem" (accusative singular of "lēx" 'law') with the meaning of Middle High German "ê, ewe". A more recent example of a loan translation is the verb "tradir" 'to betray', which has taken on the additional meaning of German "verraten" 'to give away', as in "tradir in secret" 'to give away a secret', a sense originally covered by the verb "revelar".

Particularly common are combinations of verbs with locative adverbs, such as "vegnir cun" 'to accompany' (literally 'to come with'), "vegnir anavos" 'to come back', "far cun" 'to participate' (literally 'to do with'), "far giu" 'to agree on' (literally 'to do down'), and "grodar tras" 'to fail' (literally 'to fall through'). Whereas such verbs also occur sporadically in other Romance languages, as in French "prendre avec" 'to take along' or Italian "andare via" 'to go away', their large number in Romansh suggests an influence of German, where this pattern is common. However, prepositional verbs are also common in the (Romance) Lombard language spoken in the bordering Swiss and Italian regions.
The verbs "far cun" 'to participate' or "grodar tras" 'to fail' for example, are direct equivalents of German "mitmachen" (from "mit" 'with' and "machen" 'to do) and "durchfallen" (from "durch" 'through' and "fallen" 'to fall'). Less integrated into the Romansh verbal system are constructions following the pattern of "far il" ('doing the') + a German infinitive. Examples include "far il löten" 'to solder', "far il würzen" 'to season', or "far il vermissen" 'to miss, to feel the absence of'. German also often serves as a model for the creation of new words. An example is Surs. "tschetapuorla" 'vacuum cleaner', a compound of "tschitschar" 'to suck' and "puorla" 'dust', following the model of German "Staubsauger" – the Italian word, "aspirapolvere" possibly being itself a calque on the German word. The Engadinese dialects on the other hand have adopted "aspiradur" from Italian "aspiratore", which, however, does not mean "vacuum cleaner". The Engadinese dialects on the other hand have adopted "aspiradur" from Italian "aspiratore". A skyscraper, which is a direct loan translation from English in many Romance languages (as in French "gratte-en-ciel", Italian "grattacielo"), is a loan translation of German "Wolkenkratzer" (literally 'cloud-scraper') in Sursilvan: "il sgrattaneblas" (from "sgrattar" 'to scratch' and "neblas" 'clouds'). The Engadinese varieties again follow the Italian pattern of "sgrattatschêl" (from "tschêl" 'sky'). A more recent word is "la natelnumra" 'the cell phone number', which follows the word order of Swiss German "Natelnummer", and is found alongside "la numra da natel". Examples of idiomatic expressions include Surs. "dar in canaster", Engad. "dar ün dschierl", a direct translation of German 'einen Korb geben', literally meaning 'to hand a basket', but used in the sense of 'turning down a marriage proposal' or "esser ligiongia ad enzatgi", a loan translation of the German expression "jemandem Wurst sein", literally meaning 'to be sausage to someone' but meaning 'not cared about, to be unimportant'. Apart from vocabulary, the influence of German is noticeable in grammatical constructions, which are sometimes closer to German than to other Romance languages. For instance, Romansh is the only Romance language in which indirect speech is formed using the subjunctive mood, as in Sursilvan "El di ch'el seigi malsauns", Putèr "El disch ch'el saja amalo", 'He says that he is sick', as compared to Italian "Dice che è malato" or French "Il dit qu'il est malade". Ricarda Liver attributes this to the influence of German. Limited to Sursilvan is the insertion of entire phrases between auxiliary verbs and participles as in "Cun Mariano Tschuor ha Augustin Beeli discurriu" 'Mariano Tschuor has spoken with Augustin Beeli' as compared to Engadinese "Cun Rudolf Gasser ha discurrü Gion Peider Mischol" 'Rudolf Gasser has spoken with Gion Peider Mischol'. In contemporary spoken language, adjective forms are often not distinguished from adverbs, as in Sursilvan "Jeu mon direct" 'I am going directly', rather than "Jeu mon directamein". This usage is rare in most other Romance languages with a few sporadic exceptions as in French "parler haut" or Italian "vosà fort" 'speak aloud', and the common usage in colloquial Romansh is likely an influence from German. Especially noticeable and often criticized by language purists are particles such as "aber", "schon", "halt", "grad", "eba", or "zuar", which have become an integral part of everyday Romansh speech, especially in Sursilvan. 
Negation was originally formed by a double negative in all Romansh dialects. Today, this usage is limited to Surmiran, as in "ia na sa betg" 'I do not know' (it has also been adopted in the pan-regional Rumantsch Grischun). While the first particle was lost in Sursilvan, where negation is now formed only with "buc", as in "jeu sai buc", the Ladin varieties lost the second particle "brich(a)", apparently under the influence of Italian, as in Putèr "eau nu se".

The influence of Romansh on the local vernacular German has not been studied as thoroughly as the reverse. Apart from place names throughout the former speech area of Romansh, only a handful of Romansh words have become part of wider German usage. Such words include "Gletscher" 'glacier' and "Murmeltier" 'marmot' (derived from Romansh "murmunt"), as well as culinary items such as Maluns or Capuns. The Romansh influence is much stronger in the German dialects of Grisons. It is sometimes, controversially, suspected that the pronunciation /k/ or /h/ in words such as "Khind" and "bahe", as opposed to /x/ in other Swiss German dialects ("Chind" and "bache"), is an influence of Romansh. In morphosyntax, the use of the auxiliary verb "kho" 'to come', as opposed to "wird" 'will', in phrases such as "leg di warm a, sunscht khunscht krank" ('put on warm clothes, otherwise you will get sick') in Grisons-German is sometimes attributed to Romansh, as are the lack of a distinction between the accusative and dative case in some Grisons-German dialects and the word order in phrases such as "i tet froge jemand wu waiss" ('I would ask someone who knows'). In addition, some words that are neuter in most dialects of German are masculine in Grisons-German; examples include "der Brot" 'the bread' and "der Gäld" 'the money'. Common words of Romansh origin in Grisons-German include "Schaffa" (derived from Romansh "scaffa" 'cupboard'), "Spus/Spüslig" 'bridegroom' and "Spusa" 'bride', "Banitsch" 'cart used for moving dung', and "Pon" 'container made of wood'. In areas where Romansh either is still spoken or has disappeared only recently, Romansh words are even more common in the local dialects of German.

The influence of German has been seen in different ways by linguists and language activists. The Italian dialectologist Ascoli, for instance, described Romansh in the 1880s as "a body that has lost its soul and taken on an entirely foreign one in its place". This opinion was shared by many who saw the influence of German as a threat to and a corruption of Romansh, often referring to it as a disease infecting the language. This view was prevalent until after World War II; by contrast, many contemporary linguists and activists see these loan elements as completely natural and as an integral part of Romansh, to be regarded as an enrichment of the language. This position is currently held by the language activists Bernard Cathomas, Iso Camartin, and Alexi Decurtins, among others, who argue for a relaxed attitude towards loan elements, pointing out that they are often among the most down-to-earth elements of the language and that the dual nature of Romansh can also be seen as an advantage, opening it to cultural elements from both sides. This position is also shared by several contemporary authors, in particular from the Surselva, such as Arno Camenisch, who makes heavy use of Germanisms in his works.

Romansh had a rich oral tradition before the appearance of Romansh writing, but apart from songs such as the "Canzun da Sontga Margriata", virtually none of it survives.
Prior to the 16th century, Romansh writing is known from only a few fragments. The first substantial surviving work in Romansh is the "Chianzun dalla guerra dagl Chiaste da Müs", written in the Putèr dialect in 1527 by Gian Travers. It is an epic poem describing the First Musso war, in which Travers himself had taken part. Subsequent works usually have religious themes, including Bible translations, manuals for religious instruction, and biblical plays. In 1560, the first Romansh translation of the New Testament, "L'g Nuof Sainc Testamaint da nos Signer Jesu Christ" by Giachem Bifrun, was published. Two years later, in 1562, another writer from the Engadine, Durich Chiampel, published the "Cudesch da Psalms", a collection of Romansh church songs in the Vallader dialect. In the Sursilvan dialect, the first surviving works are also religious, such as the catechism by Daniel Bonifaci; in 1611, "Ilg Vêr Sulaz da pievel giuvan" ("The true joys of young people"), a series of religious instructions for Protestant youths, was published by Steffan Gabriel. Four years later, in 1615, a Catholic catechism, "Curt Mussament", written by Gion Antoni Calvenzano, was published in response. The first translation of the New Testament into Sursilvan was published in 1648 by Luci Gabriel, the son of Steffan Gabriel. The first complete translation of the Bible, the "Bibla da Cuera", was published between 1717 and 1719.

In music, choirs have a long tradition in the Romansh-speaking areas. Apart from traditional music and song, Romansh is also used in contemporary pop or hip-hop music, some of which has become known outside the Romansh-speaking regions: in the Eurovision Song Contest 1989, for instance, Switzerland was represented by a Romansh song, "Viver senza tei". Since 2004, the hip-hop group Liricas Analas has become known even outside of Grisons through their Romansh songs. Other contemporary groups include the rock band Passiunai, with its lead singer Pascal Gamboni, and the rock/pop band The Capoonz. Composer Gion Antoni Derungs has written three operas with Romansh librettos: "Il cerchel magic" (1986), "Il semiader" (1998) and "Tredeschin" (2000).

Romansh is used to varying extents in newspapers, radio, and television. Radio and television broadcasts in Romansh are produced by Radiotelevisiun Svizra Rumantscha, which is part of the Swiss public broadcasting company SRG SSR. Radio Rumantsch broadcasts a 24-hour program including informational and music broadcasts. The broadcasters generally speak their own regional dialect on the air, which is considered a key factor in familiarizing Romansh speakers with the dialects outside their home region. News broadcasts are generally in the pan-regional variety Rumantsch Grischun. The two local radio stations Radio Grischa and Radio Engiadina occasionally broadcast in Romansh but primarily use German. Televisiun Rumantscha airs regular broadcasts on SF 1, which are subtitled in German. Programs include the informational broadcast "Telesguard", which is broadcast daily from Monday to Friday. The children's show "Minisguard" and the informational broadcast "Cuntrasts" are aired on weekends. Additionally, the shows "Controvers", "Pled sin via", and others are broadcast at irregular intervals.

The Romansh newspapers used to be heavily fragmented by region and dialect.
The longer-lived newspapers included the "Gasetta Romontscha" in the Surselva, the "Fögl Ladin" in the Engadine, "Casa Paterna/La Punt" in the Sutselva, and "La Pagina da Surmeir" in the Surmeir. Due to financial difficulties, most of these merged into the pan-regional daily newspaper "La Quotidiana" in 1997. This newspaper includes articles in all five dialects and in Rumantsch Grischun. Apart from "La Quotidiana", "La Pagina da Surmeir" continues to be published for a regional audience, and the "Engadiner Post" includes two pages in Romansh. A Romansh news agency, the Agentura da Novitads Rumantscha, has existed since 1997. Several Romansh-language magazines are also published regularly, including the youth magazine "Punts" and the yearly publications "Calender Romontsch" and "Chalender Ladin". In September 2018, "Amur senza fin", the first-ever Romansh-language television film, debuted on Swiss national television.

The fable "The Fox and the Crow" by Aesop, in a French version by Jean de La Fontaine, has been translated into the Dachsprache Rumantsch Grischun and all six dialects of Romansh – Sursilvan, Sutsilvan, Surmiran, Putèr, and the similar-looking but noticeably different-sounding Vallader and Jauer – as well as into English.
https://en.wikipedia.org/wiki?curid=25529
Robert Rodriguez Robert Anthony Rodriguez (born June 20, 1968) is an American filmmaker and visual effects supervisor. He shoots, edits, produces, and scores many of his films in Mexico and in his home state of Texas. Rodriguez directed the 1992 action film "El Mariachi", which was a commercial success, grossing $2 million against a budget of $7,000. The film and its two sequels, "Desperado" and "Once Upon a Time in Mexico", are known collectively as the "Mexico Trilogy". He directed "From Dusk Till Dawn" in 1996 and developed its television series adaptation (2014–2016). Rodriguez co-directed the 2005 neo-noir crime thriller anthology "Sin City" (adapted from the graphic novel of the same name) and the 2014 sequel, "Sin City: A Dame to Kill For". Rodriguez also directed the "Spy Kids" films, "The Faculty", "The Adventures of Sharkboy and Lavagirl", "Planet Terror", and "Machete". He is the best friend and frequent collaborator of filmmaker Quentin Tarantino, who founded the production company A Band Apart, of which Rodriguez was a member. In December 2013, Rodriguez launched his own cable television channel, El Rey.

Rodríguez was born in San Antonio, Texas, the son of Mexican parents Rebecca (née Villegas), a nurse, and Cecilio G. Rodríguez, a salesman. He became interested in film at age eleven, when his father bought one of the first VCRs, which came with a camera. While attending St. Anthony High School Seminary in San Antonio, Rodríguez was commissioned to videotape the school's football games. According to his sister, he was fired soon afterward because he had shot the footage in a cinematic style, capturing parents' reactions and the ball traveling through the air instead of the whole play. In high school, he met Carlos Gallardo; the two shot films on video throughout high school and college.

Rodriguez went to the College of Communication at the University of Texas at Austin, where he also developed a love of cartooning. Not having grades high enough to be accepted into the school's film program, he created a daily comic strip entitled "Los Hooligans". Many of the characters were based on his siblings – in particular, one of his sisters, Maricarmen. The comic ran for three years in the student newspaper "The Daily Texan", while Rodríguez continued to make short films, shooting action and horror shorts on video and editing them on two VCRs. In late 1990, his entry in a local film contest earned him a spot in the university's film program, where he made the award-winning 16 mm short "Bedhead" (1991). The film chronicles the amusing misadventures of a young girl whose older brother sports an incredibly tangled mess of hair, which she detests. Even at this early stage, Rodríguez's trademark style began to emerge: quick cuts, intense zooms, and fast camera movements deployed with a sense of humor. "Bedhead" was recognized for excellence at the Black Maria Film Festival and was selected by Film/Video Curator Sally Berger for the Black Maria 20th-anniversary retrospective at MoMA in 2006. The short attracted enough attention to encourage him to seriously attempt a career as a filmmaker. He went on to shoot the Spanish-language action film "El Mariachi" (1992) for around $7,000, with money raised by his friend Adrian Kano and from payments for his own participation in medical testing studies. Rodriguez won the Audience Award for the film at the Sundance Film Festival in 1993.
Intended for the Spanish-language low-budget home-video market, the film was "cleaned up" by Columbia Pictures with post-production work costing several hundred thousand dollars before it was distributed in the United States. Its promotion still advertised it as "the movie made for $7,000". Rodríguez described his experiences making the film in his book "Rebel Without a Crew" (1995). "Desperado" was a sequel to "El Mariachi" that starred Antonio Banderas and introduced Salma Hayek to American audiences. Rodríguez went on to collaborate with Quentin Tarantino on the vampire thriller "From Dusk till Dawn" (both also co-produced its two sequels), and he later wrote, directed, and produced its television series adaptation for his own cable network, El Rey. Rodriguez has also worked with Kevin Williamson on the sci-fi thriller film "The Faculty". In 2001, Rodríguez enjoyed his first Hollywood hit with "Spy Kids", which went on to become a movie franchise. A third "mariachi" film appeared in late 2003, "Once Upon a Time in Mexico", which completed the Mexico Trilogy (also called the Mariachi Trilogy). He operates a production company called Troublemaker Studios, formerly Los Hooligans Productions.

Rodríguez co-directed "Sin City" (2005), an adaptation of the Frank Miller "Sin City" comic books; Quentin Tarantino guest-directed a scene. During production in 2004, Rodríguez insisted Miller be credited as co-director, because he considered the visual style of Miller's comic art to be just as important as his own in the film. However, the Directors Guild of America would not allow it, citing that only "legitimate teams", e.g. the Wachowskis, could share the director's credit. Rodríguez chose to resign from the DGA, stating, "It was easier for me to quietly resign before shooting because otherwise I'd be forced to make compromises I was unwilling to make or set a precedent that might hurt the guild later on." By resigning from the DGA, Rodríguez was forced to relinquish his director's seat on the film "John Carter of Mars" for Paramount Pictures; he had already signed on and had been announced as director of that film, planning to begin filming soon after completing "Sin City". "Sin City" was a critical hit in 2005 as well as a box office success, particularly for a hyperviolent comic book adaptation without name recognition comparable to the "X-Men" or "Spider-Man". He has expressed an interest in adapting all of Miller's "Sin City" comic books.

Rodríguez released "The Adventures of Sharkboy and Lavagirl" in 2005, a superhero film intended for the same younger audiences as his "Spy Kids" series. "Sharkboy and Lavagirl" was based on a story conceived by Rodríguez's seven-year-old son, Racer, who was given credit for the screenplay. The film grossed $39 million at the box office. Rodríguez wrote and directed the film "Planet Terror" as half of the double-bill release "Grindhouse" (2007); Quentin Tarantino directed its other film, "Death Proof".

He has a series of "Ten Minute Film School" segments on several of his DVD releases, showing aspiring filmmakers how to make good, profitable movies using inexpensive tactics. Starting with the "Once Upon a Time in Mexico" DVD, Rodríguez began creating a series called "Ten Minute Cooking School", in which he revealed his recipe for "Puerco Pibil" (based on cochinita pibil, an old dish from Yucatán), the same food Johnny Depp's character, Agent Sands, ate in the film.
The popularity of this series led to the inclusion of another "Cooking School" on the two-disc version of the "Sin City" DVD, where Rodríguez teaches the viewer how to make "Sin City Breakfast Tacos", a dish (made for his cast and crew during late-night shoots and editing sessions) utilizing his grandmother's tortilla recipe and different egg mixes for the filling. He had initially planned to release a third "Cooking School" with the DVD release of "Planet Terror" but then announced on the "Film School" segment of the DVD that he would put it on the "Grindhouse" DVD set instead. That segment, titled "Texas Barbecue...from the GRAVE!", features a dish based on the "secret barbecue recipe" of JT Hague, Jeff Fahey's character in the film.

Rodríguez is a strong supporter of digital filmmaking, having been introduced to the practice by director George Lucas, who personally invited Rodríguez to use the digital cameras at Lucas's headquarters. He was presented with the Extraordinary Contribution to Filmmaking Award at the 2010 Austin Film Festival. On February 7, 2010, it was announced that Rodríguez would produce a new Predator sequel, entitled "Predators". The film's script was based on early drafts he had written after seeing the original; Rodriguez's ideas included a planet-sized game preserve and various creatures used by the Predators to hunt a group of abducted yet skilled humans. Opening to mostly positive reviews, the film fared reasonably well at the box office.

"Machete" is a feature film directed by Rodríguez and released in September 2010. It is an expansion of a fake trailer Rodriguez directed for the 2007 film "Grindhouse". It starred Danny Trejo as the title character. Trejo, Rodriguez's second cousin, has worked with him on several of his other movies, such as "Desperado", "From Dusk Till Dawn", "Once Upon a Time in Mexico" and "Spy Kids", where Trejo first appeared as Machete. Although originally announced as a direct-to-DVD extra on the "Planet Terror" DVD, the film was produced as a theatrical release. According to Rodríguez, the origins of the film go back to "Desperado". He says, "When I met Danny, I said, 'This guy should be like the Mexican Jean-Claude Van Damme or Charles Bronson, putting out a movie every year and his name should be Machete.' So I decided to do that way back when, never got around to it until finally now. So now, of course, I want to keep going and do a feature." In an interview with "Rolling Stone" magazine, Rodriguez said that he wrote the screenplay back in 1993 when he cast Trejo in "Desperado": "So I wrote him this idea of a federale from Mexico who gets hired to do hatchet jobs in the U.S. I had heard sometimes FBI or DEA have a really tough job that they don't want to get their own agents killed on, they'll hire an agent from Mexico to come do the job for $25,000. I thought, 'That's Machete. He would come and do a really dangerous job for a lot of money to him but for everyone else over here it's peanuts.' But I never got around to making it." Rodríguez had hoped to film "Machete" back to back with another project. Additionally, during Comic-Con International 2008, he took the time to speak about "Machete", covering such topics as its status, possible sequels after its release, and production priorities. It was also revealed that he had regularly pulled sequences from it for his other productions, including "Once Upon a Time in Mexico". "Machete" was released in U.S. theaters on September 3, 2010.
On May 5, 2010, Robert Rodríguez responded to Arizona's controversial immigration law by releasing an "illegal" trailer on Ain't It Cool News. The fake trailer combined elements of the "Machete" trailer that appeared in "Grindhouse" with footage from the actual film, and implied that the film would be about Machete leading a revolt against anti-immigration politicians and border vigilantes. Several movie websites, including the Internet Movie Database, reported that it was the official teaser for the film. However, Rodriguez later revealed the trailer to be a joke, explaining, "it was Cinco de Mayo and I had too much tequila."

Since 1998, he has owned the film rights to Mike Allred's off-beat comic "Madman". The two have hinted on several occasions that the project was close to beginning, without anything coming of it; other projects have been completed first (Allred was instrumental in connecting Rodríguez with Frank Miller, leading to the production of "Sin City"). In 2004, Allred, while promoting his comic book "The Golden Plates", announced that a screenplay by George Huang was near completion. In March 2006, it was announced that production on "Sin City: A Dame to Kill For" would be postponed. Allred announced at the 2006 WonderCon that production would likely commence on "Madman the Movie" in 2006. Huang is a friend of Rodriguez, who advised him to pursue filmmaking as a career when Rodriguez landed a deal with Columbia Pictures, where Huang was an employee.

In May 2007, it was announced that Rodríguez had signed on to direct a remake of "Barbarella" for a 2008 release. At the 2007 Comic-Con convention, actress Rosario Dawson announced that because of "Barbarella", production of "Sin City: A Dame to Kill For" would be put on hold. She also announced that she would be playing an amazon in the "Barbarella" film. As of June 2008, plans to remake "Barbarella" with Rose McGowan as the lead had been delayed; the actress and director were instead remaking the film "Red Sonja". In May 2008, Rodríguez was said to be shopping around a prison drama television series called "Woman in Chains!", with Rose McGowan as a possibility for a lead role. As of May 2009, Rodríguez planned to produce a live-action remake of "Fire and Ice", a 1983 film collaboration between painter Frank Frazetta and animator Ralph Bakshi; the deal was closed shortly after Frazetta's death. In 2011, Rodríguez announced at Comic-Con that he had purchased the film rights to "Heavy Metal" and planned to develop a new animated film at the new Quick Draw Studios. In November 2015, it was announced that Rodriguez had directed the film "100 Years", which will not be released until 2115. In March 2017, it was announced that Rodriguez would direct "Escape from New York", a remake of the dystopian sci-fi action film, with original director John Carpenter producing. In May 2020, Rodriguez confirmed he would direct an episode of the second season of the Disney+ series "The Mandalorian", part of the Star Wars franchise. Rodriguez made the announcement in an Instagram post in which he posed with a puppet of The Child.

Rodríguez announced in April 2006 that he and his wife Elizabeth Avellán, with whom he has five children (Rocket, Racer, Rebel, Rogue, and Rhiannon), had separated after 16 years of marriage. Avellán has continued to produce most of his films since the separation, so their professional relationship endures. He reportedly had a "dalliance" with actress Rose McGowan during the shooting of "Grindhouse".
In October 2007, "Elle" magazine revealed that Rodríguez had cast McGowan in the title role of his remake of "Barbarella". After several reports of breakups and reconciliations, the two split up in October 2009. In October 2010, he walked Alexa Vega (Carmen Cortez in the "Spy Kids" series) down the aisle at her wedding to producer Sean Covel. In March 2014, Rodriguez showed his collection of Frank Frazetta original paintings in Austin, Texas, during the SXSW festival.

In addition to producing, directing, and writing his films, Rodríguez frequently serves as editor, director of photography, camera operator, steadicam operator, composer, production designer, visual effects supervisor, and sound editor. This has earned him the nickname of "the one-man film crew". He abbreviates his numerous roles in his film credits: "Once Upon a Time in Mexico", for instance, is "shot, chopped, and scored by Robert Rodriguez", and "Sin City" is "shot and cut by Robert Rodriguez". He calls his style of making movies "Mariachi-style" (in reference to his first feature film, "El Mariachi"), in which, according to the back cover of his book "Rebel Without a Crew", "Creativity, not money, is used to solve problems." He prefers to work at night, spending his daytime hours with his kids when they are home, and says that he believes many creative people are "night people". In his book "The DV Rebel's Guide", Stu Maschwitz coined the term "Robert Rodriguez list": the filmmaker compiles a list of things they already have access to – cool cars, apartments, horses, samurai swords, and so on – and then writes the screenplay based on that list. Rodriguez wrote a blurb for the book that stated: "I'd been wanting to write a book for the new breed of digital filmmakers, but now I don't have to. My pal and fellow movie maker Stu Maschwitz has compressed years of experience into this thorough guide. Don't make a movie without reading this book!" Robert Rodriguez has brought a number of his favorite and most influential directors onto his television show, "The Director's Chair". These directors have included John Carpenter, Quentin Tarantino, and George Miller.
https://en.wikipedia.org/wiki?curid=25530
Romantic comedy Romantic comedy (also known as romcom or rom-com) is a subgenre of comedy and slice-of-life fiction, focusing on lighthearted, humorous plot lines centered on romantic ideas, such as how true love is able to surmount most obstacles. One dictionary definition is "a funny movie, play, or television program about a love story that ends happily". Another definition suggests that its "primary distinguishing feature is a love plot in which two sympathetic and well-matched lovers are united or reconciled". Romantic comedy films are a hybrid of comedy films and romance films and may also have elements of screwball comedies; a romantic comedy is thus classified as a film combining two genres, not as a single new genre. Some television series can also be classified as romantic comedies.

In a typical romantic comedy, the two lovers tend to be young, likeable, and seemingly meant for each other, yet they are kept apart by some complicating circumstance (e.g., class differences, parental interference, a previous girlfriend or boyfriend) until, surmounting all obstacles, they are finally reunited. A fairy-tale-style happy ending is a typical feature. The basic plot of a romantic comedy is that two characters meet, part ways due to an argument or other obstacle, then ultimately realize their love for one another and reunite. Sometimes the two leads meet and become involved initially, then must confront challenges to their union. Sometimes they are hesitant to become romantically involved because they believe they do not like each other, because one of them already has a partner, or because of social pressures. However, the screenwriters leave clues suggesting that the characters are, in fact, attracted to each other and would be a good love match. The protagonists often separate or seek time apart to sort out their feelings or deal with the external obstacles to their being together, only to come back together later. While the two protagonists are separated, one or both of them usually realizes that they love the other person. Then, one party makes some extravagant effort (sometimes called a "grand gesture") to find the other person and declare their love; alternatively, there is an astonishing coincidental encounter in which the two meet again, or one plans a sweet romantic gesture to show that they still care. Then, perhaps with some comic friction or awkwardness, they declare their love for each other and the film ends on a happy note. Even though it is implied that they live happily ever after, the film does not always show what that happy ending will be. The couple does not necessarily need to get married, or even live together, for a "happily ever after". The ending of a romantic comedy is meant to affirm the primary importance of the love relationship in its protagonists' lives, even if they physically separate in the end (e.g. "Shakespeare in Love", "Roman Holiday"). Most of the time, the ending gives the audience a sense that if it is true love, it will always prevail no matter what is thrown in the way.

There are many variations on this basic plot line. Sometimes, instead of the two lead characters ending up in each other's arms, another love match will be made between one of the principal characters and a secondary character (e.g., "My Best Friend's Wedding" and "My Super Ex-Girlfriend"). Alternatively, the film may be a rumination on the impossibility of love, as in Woody Allen's film "Annie Hall."
The basic format of a romantic comedy film can be found in much earlier sources, such as Shakespeare plays like "Much Ado About Nothing" and "A Midsummer Night's Dream". The convention underlying a romance book or film is that there are two people, normally a male and a female, who fall in love with each other. Things go well for a while, but then the couple encounters a major obstacle, which usually starts to pull them apart or makes one of them leave. Before they can overcome this obstacle, one (or both) realizes that they are perfect for each other and proclaims their love for the other. The films usually end with the couple either getting married, getting engaged, or giving some indication that they live "happily ever after".

Over the years, romantic comedies have slowly become more popular with both male and female audiences, and they have begun to spread beyond their conventional and traditional structure into other territory, exploring more subgenres and more complex topics. These films still follow the typical plot of "a light and humorous movie, play, etc., whose central plot is a happy love story" but with more complexity, added notably through the obstacles that come between the couple and through the moral questions the characters confront over the course of the film.

Some romantic comedies have adopted extreme or strange circumstances for the main characters, as in "Warm Bodies", where the protagonist is a zombie who falls in love with a human girl after eating her boyfriend. Their love starts spreading to the other zombies and even begins to cure them; with the zombies cured, the two main characters no longer have that barrier between them and can be together. Another strange set of circumstances appears in "Zack and Miri Make a Porno", where the two protagonists build a relationship while trying to make a porno together. Both films take the typical story arc and add strange circumstances for originality. Other romantic comedies flip the standard conventions of the genre. In films like "500 Days of Summer", the two main interests do not end up together, leaving the protagonist somewhat distraught. Other films, like "Adam", have the two main interests end up separated but still content and pursuing other goals and love interests. Some romantic comedies use reversal of gender roles to add comedic effect. These films contain characters who possess qualities that diverge from the gender role that society has imposed upon them, as seen in "Forgetting Sarah Marshall", in which the male protagonist is especially in touch with his emotions, and "Made of Honor", in which the female bridesmaids are shown in a negative and somewhat masculine light in order to advance the likability of the male lead. Other romantic comedies involve similar elements but explore more adult themes such as marriage, responsibility, or even disability. Two films by Judd Apatow, "This Is 40" and "Knocked Up", deal with these issues: "This Is 40" chronicles the mid-life crisis of a couple entering their 40s, and "Knocked Up" addresses unintended pregnancy and the ensuing assumption of responsibility. "Silver Linings Playbook" deals with mental illness and the courage to start a new relationship.
All of these go against the stereotype of what romantic comedy has become as a genre. Yet the genre of romantic comedy is simply a structure, and none of these elements negates the fact that these films are still romantic comedies.

One of the conventions of romantic comedy films is the entertainment value of a contrived encounter between two potential romantic partners in unusual or comic circumstances, which film critics such as Roger Ebert or the Associated Press' Christy Lemire have called a "meet-cute" situation. During a "meet-cute", scriptwriters often create a humorous sense of awkwardness between the two potential partners by depicting an initial clash of personalities or beliefs, an embarrassing situation, or by introducing a comical misunderstanding or mistaken-identity situation. Sometimes the term is used without a hyphen (a "meet cute") or as a verb ("to meet cute"). Roger Ebert describes the "concept of a Meet Cute" as "when boy meets girl in a cute way". As an example, he cites "The Meet Cute in "Lost and Found" [which] has Jackson and Segal running their cars into each other in Switzerland. Once recovered, they Meet Cute again when they run into each other while on skis. Eventually... they fall in love." In many romantic comedies, the potential couple comprises polar opposites, two people of different temperaments, situations, social statuses, or all three ("It Happened One Night"), who would not meet or talk under normal circumstances, and the meet cute's contrived situation provides the opportunity for these two people to meet. Certain movies are entirely driven by the meet-cute situation, with contrived circumstances throwing the couple together for much of the screenplay. However, movies in which the contrived situation, rather than the romance, is the main feature, such as "Some Like It Hot", are not considered "meet-cutes". The use of the meet-cute is less marked in television series and novels, because these formats have more time to establish and develop romantic relationships. In situation comedies, relationships are static and a meet-cute is not necessary, though flashbacks may recall one ("The Dick Van Dyke Show", "Mad About You"), and lighter fare may require contrived romantic meetings. The heyday of the "meet cute" in films was during the Great Depression in the 1930s; screwball comedy films made heavy use of contrived romantic "meet cutes", perhaps because the more rigid class consciousness and class divisions of this period made cross-social-class romances into tantalizing fantasies.

The "Oxford Dictionary of Literary Terms" defines romantic comedy as "a general term for comedies that deal mainly with the follies and misunderstandings of young lovers, in a light‐hearted and happily concluded manner which usually avoids serious satire". This reference states that the "best‐known examples are Shakespeare's comedies of the late 1590s, "A Midsummer Night's Dream", "Twelfth Night", and "As You Like It" being the most purely romantic, while "Much Ado About Nothing" approaches the comedy of manners and "The Merchant of Venice" is closer to tragicomedy." Comedies since ancient Greece have often incorporated sexual or social elements. It was not until the emergence of the ideal of romantic love in the western European medieval period, though, that "romance" came to refer to "romantic love" situations, rather than the heroic adventures of medieval Romance.
These adventures, however, often revolved around a knight's feats on behalf of a lady, and so the modern themes of love were quickly woven into them, as in Chrétien de Troyes's "Lancelot, the Knight of the Cart". Shakespearean comedy and Restoration comedy remain influential. The creation of huge economic social strata in the Gilded Age, combined with the heightened openness about sex after the Victorian era, the celebration of Sigmund Freud's theories, and the birth of the film industry in the early twentieth century, gave birth to the screwball comedy. As class consciousness declined and World War II unified various social orders, the savage screwball comedies of the twenties and thirties, proceeding through Rock Hudson–Doris Day-style comedies, gave way to more innocuous films. In the 1970s, "What's Up, Doc?" was a success, although it follows the conventions of the screwball comedy, as its tagline confirms: "A Screwball Comedy. Remember them?". The more sexually charged "When Harry Met Sally" had a successful box office run in 1989, paving the way for a rebirth of the Hollywood romantic comedy in the mid-1990s. The French film industry went in a completely different direction, with fewer inhibitions about sex. Virginia Woolf, tired of stories that ended in 'happily ever after' at the beginning of a serious relationship, called "Middlemarch" by George Eliot, with its portrayal of a difficult marriage, "one of the few English novels written for grown-up people."

With the rise of romantic comedy movies, there has been an apparent change in the way society views romance. Researchers are asking whether the romances portrayed in romantic comedies are preventing true love in real life. The increased use of technology has also led society to spend a great amount of time engaging in mediated reality and less time with each other. Even though researchers have only started to explore the impact of romantic comedy films on human romance, the few studies conducted have already shown a correlation between romantic comedies and distorted beliefs about love. Romantic comedies are very popular, and they depict relationships that some scholars think affect how people view relationships outside of this virtual world. In the past, love was not always the real reason for people coming together; in some cultures, arranged marriages were common, serving to maintain caste systems or to join kingdoms. Today, love is treated as the root of all romance, and it is over-emphasized in these films, which tell viewers that love conquers all and will ultimately bring a never-ending happiness rarely affected by any conflict. When people do not experience the romance portrayed in these movies, they often wonder what they are doing wrong. Although people should be able to distinguish between an overly romanticized love and realistic love, they are often caught up in constantly trying to echo the stories they see on screen. While most know that the idea of a perfect relationship is unrealistic, some perceptions of love are heavily influenced by media portrayals. A study conducted at Heriot-Watt University in Edinburgh sought to understand this phenomenon: researchers studied 40 top box-office films released between 1995 and 2005 to establish common themes, then asked hundreds of people to complete a questionnaire describing their beliefs and expectations in romantic relationships.
The researchers found that people who enjoyed movies such as "You've Got Mail", "The Wedding Planner", and "While You Were Sleeping" often failed to communicate with their partners effectively, and often believed that a partner who is meant to be with them should know their needs without being told. Although this study is just one of a handful, it suggests a correlation between watching romantic comedies and distorted expectations of relationships.
https://en.wikipedia.org/wiki?curid=25531
Renaissance The Renaissance was a period in European history marking the transition from the Middle Ages to modernity and covering the 15th and 16th centuries. It occurred after the Crisis of the Late Middle Ages and was associated with great social change. In addition to the standard periodization, proponents of a "long Renaissance" put its beginning in the 14th century and its end in the 17th century. The traditional view focuses more on the early modern aspects of the Renaissance and argues that it was a break from the past, but many historians today focus more on its medieval aspects and argue that it was an extension of the Middle Ages.

The intellectual basis of the Renaissance was its version of humanism, derived from the concept of Roman "Humanitas" and the rediscovery of classical Greek philosophy, such as that of Protagoras, who said that "Man is the measure of all things." This new thinking became manifest in art, architecture, politics, science and literature. Early examples were the development of perspective in oil painting and the recycled knowledge of how to make concrete. Although the invention of metal movable type sped the dissemination of ideas from the later 15th century, the changes of the Renaissance were not uniformly experienced across Europe: the first traces appear in Italy as early as the late 13th century, in particular with the writings of Dante and the paintings of Giotto.

As a cultural movement, the Renaissance encompassed an innovative flowering of Latin and vernacular literatures, beginning with the 14th-century resurgence of learning based on classical sources, which contemporaries credited to Petrarch; the development of linear perspective and other techniques of rendering a more natural reality in painting; and gradual but widespread educational reform. In politics, the Renaissance contributed to the development of the customs and conventions of diplomacy, and in science to an increased reliance on observation and inductive reasoning. Although the Renaissance saw revolutions in many intellectual pursuits, as well as social and political upheaval, it is perhaps best known for its artistic developments and the contributions of such polymaths as Leonardo da Vinci and Michelangelo, who inspired the term "Renaissance man".

The Renaissance began in the 14th century in Florence, Italy. Various theories have been proposed to account for its origins and characteristics, focusing on a variety of factors including the social and civic peculiarities of Florence at the time: its political structure, the patronage of its dominant family, the Medici, and the migration of Greek scholars and their texts to Italy following the Fall of Constantinople to the Ottoman Turks, whose empire some argue inherited from the Timurid Renaissance. Other major centres were northern Italian city-states such as Venice, Genoa, Milan, Bologna, and finally Rome during the Renaissance Papacy.

The Renaissance has a long and complex historiography, and, in line with general scepticism of discrete periodizations, there has been much debate among historians reacting to the 19th-century glorification of the "Renaissance" and individual culture heroes as "Renaissance men", questioning the usefulness of "Renaissance" as a term and as a historical delineation.
The art historian Erwin Panofsky observed of this resistance to the concept of "Renaissance": It is perhaps no accident that the factuality of the Italian Renaissance has been most vigorously questioned by those who are not obliged to take a professional interest in the aesthetic aspects of civilization – historians of economic and social developments, political and religious situations, and, most particularly, natural science – but only exceptionally by students of literature and hardly ever by historians of Art.

Some observers have called into question whether the Renaissance was a cultural "advance" from the Middle Ages, instead seeing it as a period of pessimism and nostalgia for classical antiquity, while social and economic historians, especially of the "longue durée", have instead focused on the continuity between the two eras, which are linked, as Panofsky observed, "by a thousand ties". The term "rinascita" ('rebirth') first appeared in Giorgio Vasari's "Lives of the Artists" (c. 1550), anglicized as "Renaissance" in the 1830s. The word has also been extended to other historical and cultural movements, such as the Carolingian Renaissance (8th and 9th centuries), the Ottonian Renaissance (10th and 11th centuries), and the Renaissance of the 12th century.

The Renaissance was a cultural movement that profoundly affected European intellectual life in the early modern period. Beginning in Italy, and spreading to the rest of Europe by the 16th century, its influence was felt in art, architecture, philosophy, literature, music, science and technology, politics, religion, and other aspects of intellectual inquiry. Renaissance scholars employed the humanist method in study and searched for realism and human emotion in art. Renaissance humanists such as Poggio Bracciolini sought out in Europe's monastic libraries the Latin literary, historical, and oratorical texts of Antiquity, while the Fall of Constantinople (1453) generated a wave of émigré Greek scholars bringing precious manuscripts in ancient Greek, many of which had fallen into obscurity in the West. It is in their new focus on literary and historical texts that Renaissance scholars differed so markedly from the medieval scholars of the Renaissance of the 12th century, who had focused on studying Greek and Arabic works of natural sciences, philosophy and mathematics, rather than on such cultural texts. In the revival of neo-Platonism, Renaissance humanists did not reject Christianity; quite the contrary, many of the greatest works of the Renaissance were devoted to it, and the Church patronized many works of Renaissance art. However, a subtle shift took place in the way that intellectuals approached religion, which was reflected in many other areas of cultural life. In addition, many Greek Christian works, including the Greek New Testament, were brought back from Byzantium to Western Europe and engaged Western scholars for the first time since late antiquity. This new engagement with Greek Christian works, and particularly the return to the original Greek of the New Testament promoted by the humanists Lorenzo Valla and Erasmus, would help pave the way for the Protestant Reformation.

Well after the first artistic return to classicism had been exemplified in the sculpture of Nicola Pisano, Florentine painters led by Masaccio strove to portray the human form realistically, developing techniques to render perspective and light more naturally.
Political philosophers, most famously Niccolò Machiavelli, sought to describe political life as it really was, that is, to understand it rationally. In a critical contribution to Italian Renaissance humanism, Giovanni Pico della Mirandola wrote the famous text "De hominis dignitate" ("Oration on the Dignity of Man", 1486), which consists of a series of theses on philosophy, natural thought, faith and magic defended against any opponent on the grounds of reason. In addition to studying classical Latin and Greek, Renaissance authors also began increasingly to use vernacular languages; combined with the introduction of the printing press, this would allow many more people access to books, especially the Bible. In all, the Renaissance could be viewed as an attempt by intellectuals to study and improve the secular and worldly, both through the revival of ideas from antiquity and through novel approaches to thought.

Some scholars, such as Rodney Stark, play down the Renaissance in favour of the earlier innovations of the Italian city-states in the High Middle Ages, which married responsive government, Christianity, and the birth of capitalism. This analysis argues that, whereas the great European states (France and Spain) were absolutist monarchies, and others were under direct Church control, the independent city republics of Italy took over the principles of capitalism invented on monastic estates and set off a vast, unprecedented commercial revolution that preceded and financed the Renaissance.

Many argue that the ideas characterizing the Renaissance had their origin in late 13th-century Florence, in particular with the writings of Dante Alighieri (1265–1321) and Petrarch (1304–1374), as well as the paintings of Giotto di Bondone (1267–1337). Some writers date the Renaissance quite precisely; one proposed starting point is 1401, when the rival geniuses Lorenzo Ghiberti and Filippo Brunelleschi competed for the contract to build the bronze doors for the Baptistery of the Florence Cathedral (Ghiberti won). Others see more general competition between artists and polymaths such as Brunelleschi, Ghiberti, Donatello, and Masaccio for artistic commissions as sparking the creativity of the Renaissance. Yet it remains much debated why the Renaissance began in Italy, and why it began when it did. Accordingly, several theories have been put forward to explain its origins.

During the Renaissance, money and art went hand in hand. Artists depended entirely on patrons, while the patrons needed money to foster artistic talent. Wealth was brought to Italy in the 14th, 15th, and 16th centuries by expanding trade into Asia and Europe. Silver mining in Tyrol increased the flow of money, and luxuries from the Muslim world, brought home during the Crusades, increased the prosperity of Genoa and Venice.

Jules Michelet defined the 16th-century Renaissance in France as a period in Europe's cultural history that represented a break from the Middle Ages, creating a modern understanding of humanity and its place in the world. In stark contrast to the High Middle Ages, when Latin scholars focused almost entirely on studying Greek and Arabic works of natural science, philosophy and mathematics, Renaissance scholars were most interested in recovering and studying Latin and Greek literary, historical, and oratorical texts.
Broadly speaking, this began in the 14th century with a Latin phase, when Renaissance scholars such as Petrarch, Coluccio Salutati (1331–1406), Niccolò de' Niccoli (1364–1437) and Poggio Bracciolini (1380–1459) scoured the libraries of Europe in search of works by such Latin authors as Cicero, Lucretius, Livy and Seneca. By the early 15th century, the bulk of such surviving Latin literature had been recovered; the Greek phase of Renaissance humanism was under way, as Western European scholars turned to recovering ancient Greek literary, historical, oratorical and theological texts. Unlike Latin texts, which had been preserved and studied in Western Europe since late antiquity, ancient Greek texts were very little studied in medieval Western Europe. Ancient Greek works on science, maths and philosophy had been studied since the High Middle Ages in Western Europe and in the Islamic Golden Age (normally in translation), but Greek literary, oratorical and historical works (such as Homer, the Greek dramatists, Demosthenes and Thucydides) were not studied in either the Latin or medieval Islamic worlds; in the Middle Ages these sorts of texts were only studied by Byzantine scholars. Some argue that the Timurid Renaissance in Samarkand was linked with the Ottoman Empire, whose conquests led to the migration of Greek scholars to Italian cities. One of the greatest achievements of Renaissance scholars was to bring this entire class of Greek cultural works back into Western Europe for the first time since late antiquity.

Muslim logicians had inherited Greek ideas after they had invaded and conquered Egypt and the Levant. Their translations and commentaries on these ideas worked their way through the Arab West into Iberia and Sicily, which became important centers for this transmission of ideas. From the 11th to the 13th century, many schools dedicated to the translation of philosophical and scientific works from Classical Arabic to Medieval Latin were established in Iberia, most notably the Toledo School of Translators. This work of translation from Islamic culture, though largely unplanned and disorganized, constituted one of the greatest transmissions of ideas in history. The movement to reintegrate the regular study of Greek literary, historical, oratorical and theological texts into the Western European curriculum is usually dated to the 1396 invitation from Coluccio Salutati to the Byzantine diplomat and scholar Manuel Chrysoloras (c. 1355–1415) to teach Greek in Florence. This legacy was continued by a number of expatriate Greek scholars, from Basilios Bessarion to Leo Allatius.

The unique political structures of late medieval Italy have led some to theorize that its unusual social climate allowed the emergence of a rare cultural efflorescence. Italy did not exist as a political entity in the early modern period. Instead, it was divided into smaller city-states and territories: the Kingdom of Naples controlled the south, the Republic of Florence and the Papal States the center, the Milanese and the Genoese the north and west respectively, and the Venetians the east. Fifteenth-century Italy was one of the most urbanised areas in Europe. Many of its cities stood among the ruins of ancient Roman buildings; it seems likely that the classical nature of the Renaissance was linked to its origin in the Roman Empire's heartland. Historian and political philosopher Quentin Skinner points out that Otto of Freising (c.
1114–1158), a German bishop visiting north Italy during the 12th century, noticed a widespread new form of political and social organization, observing that Italy appeared to have exited from feudalism, so that its society was based on merchants and commerce. Linked to this was anti-monarchical thinking, represented in the famous early Renaissance fresco cycle "The Allegory of Good and Bad Government" by Ambrogio Lorenzetti (painted 1338–1340), whose strong message is about the virtues of fairness, justice, republicanism and good administration. Holding both Church and Empire at bay, these city republics were devoted to notions of liberty. Skinner reports that there were many defences of liberty, such as Matteo Palmieri's (1406–1475) celebration of Florentine genius not only in art, sculpture and architecture, but in "the remarkable efflorescence of moral, social and political philosophy that occurred in Florence at the same time". Cities and states beyond central Italy were also notable for their merchant republics, especially the Republic of Venice. Although in practice these were oligarchical, and bore little resemblance to a modern democracy, they did have democratic features and were responsive states, with forms of participation in governance and belief in liberty. The relative political freedom they afforded was conducive to academic and artistic advancement. Likewise, the position of Italian cities such as Venice as great trading centres made them intellectual crossroads. Merchants brought with them ideas from far corners of the globe, particularly the Levant. Venice was Europe's gateway to trade with the East and a producer of fine glass, while Florence was a capital of textiles. The wealth such business brought to Italy meant large public and private artistic projects could be commissioned and individuals had more leisure time for study.

One theory that has been advanced is that the devastation in Florence caused by the Black Death, which hit Europe between 1348 and 1350, resulted in a shift in the world view of people in 14th-century Italy. Italy was particularly badly hit by the plague, and it has been speculated that the resulting familiarity with death caused thinkers to dwell more on their lives on Earth, rather than on spirituality and the afterlife. It has also been argued that the Black Death prompted a new wave of piety, manifested in the sponsorship of religious works of art. However, this does not fully explain why the Renaissance occurred specifically in Italy in the 14th century; the Black Death was a pandemic that affected all of Europe in the ways described, not only Italy. The Renaissance's emergence in Italy was most likely the result of the complex interaction of the above factors. The plague was carried by fleas on sailing vessels returning from the ports of Asia, spreading quickly due to lack of proper sanitation: the population of England, then about 4.2 million, lost 1.4 million people to the bubonic plague. Florence's population was nearly halved in the year 1347. As a result of the decimation of the populace, the value of the working class increased, and commoners came to enjoy more freedom. To answer the increased need for labor, workers traveled in search of the most economically favorable positions. The demographic decline due to the plague had economic consequences: the prices of food dropped and land values declined by 30–40% in most parts of Europe between 1350 and 1400.
Landholders faced a great loss, but for ordinary men and women it was a windfall. The survivors of the plague found not only that the prices of food were cheaper but also that lands were more abundant, and many of them inherited property from their dead relatives. The spread of disease was significantly more rampant in areas of poverty. Epidemics ravaged cities, and children were hit the hardest because many diseases, such as typhus and syphilis, target the immune system, leaving young children without a fighting chance. Plagues were easily spread by lice, unsanitary drinking water, armies, or poor sanitation. Children in poorer city dwellings were more affected by the spread of disease than the children of the wealthy. The Black Death caused greater upheaval to Florence's social and political structure than later epidemics. Despite a significant number of deaths among members of the ruling classes, the government of Florence continued to function during this period. Formal meetings of elected representatives were suspended during the height of the epidemic due to the chaotic conditions in the city, but a small group of officials was appointed to conduct the affairs of the city, which ensured continuity of government.

It has long been a matter of debate why the Renaissance began in Florence and not elsewhere in Italy. Scholars have noted several features unique to Florentine cultural life that may have caused such a cultural movement. Many have emphasized the role played by the Medici, a banking family and later ducal ruling house, in patronizing and stimulating the arts. Lorenzo de' Medici (1449–1492) was the catalyst for an enormous amount of arts patronage, encouraging his countrymen to commission works from the leading artists of Florence, including Leonardo da Vinci, Sandro Botticelli, and Michelangelo Buonarroti. Works by Neri di Bicci, Botticelli, da Vinci, and Filippino Lippi were additionally commissioned by the Convent of San Donato in Scopeto in Florence. The Renaissance was certainly underway before Lorenzo de' Medici came to power – indeed, before the Medici family itself achieved hegemony in Florentine society. Some historians have postulated that Florence was the birthplace of the Renaissance as a result of luck, i.e., because "Great Men" were born there by chance: Leonardo da Vinci, Botticelli and Michelangelo were all born in Tuscany. Arguing that such chance seems improbable, other historians have contended that these "Great Men" were only able to rise to prominence because of the prevailing cultural conditions at the time.

In some ways, Renaissance humanism was not a philosophy but a method of learning. In contrast to the medieval scholastic mode, which focused on resolving contradictions between authors, Renaissance humanists would study ancient texts in the original and appraise them through a combination of reasoning and empirical evidence. Humanist education was based on the programme of "Studia Humanitatis", the study of five humanities: poetry, grammar, history, moral philosophy, and rhetoric. Although historians have sometimes struggled to define humanism precisely, most have settled on "a middle of the road definition... the movement to recover, interpret, and assimilate the language, literature, learning and values of ancient Greece and Rome". Above all, humanists asserted "the genius of man ... the unique and extraordinary ability of the human mind". Humanist scholars shaped the intellectual landscape throughout the early modern period.
Political philosophers such as Niccolò Machiavelli and Thomas More revived the ideas of Greek and Roman thinkers and applied them in critiques of contemporary government. Pico della Mirandola wrote the "manifesto" of the Renaissance, the "Oration on the Dignity of Man", a vibrant defence of thinking. Matteo Palmieri (1406–1475), another humanist, is best known for his work "Della vita civile" ("On Civic Life"; printed 1528), which advocated civic humanism, and for his influence in refining the Tuscan vernacular to the same level as Latin. Palmieri drew on Roman philosophers and theorists, especially Cicero – who, like Palmieri, lived an active public life as a citizen and official, as well as a theorist and philosopher – and also on Quintilian. Perhaps the most succinct expression of his perspective on humanism is in the 1465 poetic work "La città di vita", but an earlier work, "Della vita civile", is more wide-ranging. Composed as a series of dialogues set in a country house in the Mugello countryside outside Florence during the plague of 1430, the work expounds on the qualities of the ideal citizen. The dialogues include ideas about how children develop mentally and physically, how citizens can conduct themselves morally, how citizens and states can ensure probity in public life, and an important debate on the difference between that which is pragmatically useful and that which is honest. The humanists believed that it is important to transcend to the afterlife with a perfect mind and body, which could be attained through education. The purpose of humanism was to create a universal man whose person combined intellectual and physical excellence and who was capable of functioning honorably in virtually any situation. This ideal was referred to as the "uomo universale", an ancient Greco-Roman ideal. Education during the Renaissance was mainly composed of ancient literature and history, as it was thought that the classics provided moral instruction and an intensive understanding of human behavior.

A unique characteristic of some Renaissance libraries is that they were open to the public. These libraries were places where ideas were exchanged and where scholarship and reading were considered both pleasurable and beneficial to the mind and soul. As freethinking was a hallmark of the age, many libraries contained a wide range of writers; classical texts could be found alongside humanist writings. These informal associations of intellectuals profoundly influenced Renaissance culture. Some of the richest "bibliophiles" built libraries as temples to books and knowledge. A number of libraries appeared as manifestations of immense wealth joined with a love of books, and in some cases cultivated library builders were also committed to offering others the opportunity to use their collections. Prominent aristocrats and princes of the Church created great libraries for the use of their courts, called "court libraries", which were housed in lavishly designed monumental buildings decorated with ornate woodwork, their walls adorned with frescoes (Murray, Stuart A.P.).

Renaissance art marks a cultural rebirth at the close of the Middle Ages and the rise of the modern world. One of the distinguishing features of Renaissance art was its development of highly realistic linear perspective.
Giotto di Bondone (1267–1337) is credited with first treating a painting as a window into space, but it was not until the demonstrations of architect Filippo Brunelleschi (1377–1446) and the subsequent writings of Leon Battista Alberti (1404–1472) that perspective was formalized as an artistic technique. The development of perspective was part of a wider trend towards realism in the arts. Painters developed other techniques, studying light, shadow, and, famously in the case of Leonardo da Vinci, human anatomy. Underlying these changes in artistic method was a renewed desire to depict the beauty of nature and to unravel the axioms of aesthetics, with the works of Leonardo, Michelangelo and Raphael representing artistic pinnacles that were much imitated by other artists. Other notable artists include Sandro Botticelli, working for the Medici in Florence, Donatello, another Florentine, and Titian in Venice, among others.

In the Netherlands, a particularly vibrant artistic culture developed. The work of Hugo van der Goes and Jan van Eyck was particularly influential on the development of painting in Italy, both technically with the introduction of oil paint and canvas, and stylistically in terms of naturalism in representation. Later, the work of Pieter Brueghel the Elder would inspire artists to depict themes of everyday life.

In architecture, Filippo Brunelleschi was foremost in studying the remains of ancient classical buildings. With rediscovered knowledge from the 1st-century writer Vitruvius and the flourishing discipline of mathematics, Brunelleschi formulated the Renaissance style that emulated and improved on classical forms. His major feat of engineering was building the dome of the Florence Cathedral. Another building demonstrating this style is the church of St. Andrew in Mantua, built by Alberti. The outstanding architectural work of the High Renaissance was the rebuilding of St. Peter's Basilica, combining the skills of Bramante, Michelangelo, Raphael, Sangallo and Maderno. During the Renaissance, architects aimed to use columns, pilasters, and entablatures as an integrated system. Columns followed the Roman orders, such as the Tuscan and Composite. These can either be structural, supporting an arcade or architrave, or purely decorative, set against a wall in the form of pilasters. One of the first buildings to use pilasters as an integrated system was the Old Sacristy (1421–1440) by Brunelleschi. Arches, semi-circular or (in the Mannerist style) segmental, are often used in arcades, supported on piers or columns with capitals. There may be a section of entablature between the capital and the springing of the arch. Alberti was one of the first to use the arch on a monumental scale. Renaissance vaults do not have ribs; they are semi-circular or segmental and on a square plan, unlike the Gothic vault, which is frequently rectangular.

Renaissance artists were not pagans, although they admired antiquity and kept some ideas and symbols of the medieval past. Nicola Pisano (c. 1220–c. 1278) imitated classical forms by portraying scenes from the Bible. His "Annunciation", from the Baptistry at Pisa, demonstrates that classical models influenced Italian art before the Renaissance took root as a literary movement.

Applied innovation extended to commerce. At the end of the 15th century, Luca Pacioli published the first work on bookkeeping, making him the founder of accounting.
The rediscovery of ancient texts and the invention of the printing press democratized learning and allowed ideas to propagate faster and more widely. In the first period of the Italian Renaissance, humanists favoured the study of humanities over natural philosophy or applied mathematics, and their reverence for classical sources further enshrined the Aristotelian and Ptolemaic views of the universe. Writing around 1450, Nicholas Cusanus anticipated the heliocentric worldview of Copernicus, but in a philosophical fashion.

Science and art were intermingled in the early Renaissance, with polymath artists such as Leonardo da Vinci making observational drawings of anatomy and nature. Da Vinci set up controlled experiments in water flow, medical dissection, and systematic study of movement and aerodynamics, and he devised principles of research method that led Fritjof Capra to classify him as the "father of modern science". Other examples of Da Vinci's contribution during this period include machines designed to saw marble and lift monoliths, and new discoveries in acoustics, botany, geology, anatomy, and mechanics.

A suitable environment had developed in which to question scientific doctrine. The discovery in 1492 of the New World by Christopher Columbus challenged the classical worldview. The works of Ptolemy (in geography) and Galen (in medicine) were found to not always match everyday observations. As the Protestant Reformation and Counter-Reformation clashed, the Northern Renaissance showed a decisive shift in focus from Aristotelean natural philosophy to chemistry and the biological sciences (botany, anatomy, and medicine). The willingness to question previously held truths and search for new answers resulted in a period of major scientific advancements. Some view this as a "scientific revolution", heralding the beginning of the modern age; others see it as an acceleration of a continuous process stretching from the ancient world to the present day. Significant scientific advances were made during this time by Galileo Galilei, Tycho Brahe and Johannes Kepler. Copernicus, in "De revolutionibus orbium coelestium" ("On the Revolutions of the Heavenly Spheres"), posited that the Earth moved around the Sun. "De humani corporis fabrica" ("On the Workings of the Human Body") by Andreas Vesalius gave a new confidence to the role of dissection, observation, and the mechanistic view of anatomy. Another important development was in the "process" for discovery, the scientific method, focusing on empirical evidence and the importance of mathematics, while discarding Aristotelian science. Early and influential proponents of these ideas included Copernicus, Galileo, and Francis Bacon. The new scientific method led to great contributions in the fields of astronomy, physics, biology, and anatomy.

During the Renaissance, extending from 1450 to 1650, every continent was visited and mostly mapped by Europeans, except the south polar continent now known as Antarctica. This development is depicted in the large world map "Nova Totius Terrarum Orbis Tabula" made by the Dutch cartographer Joan Blaeu in 1648 to commemorate the Peace of Westphalia. In 1492, Christopher Columbus sailed across the Atlantic Ocean from Spain seeking a direct route to India of the Delhi Sultanate. He accidentally stumbled upon the Americas, but believed he had reached the East Indies. In 1606, the Dutch navigator Willem Janszoon sailed from the East Indies in the VOC ship Duyfken and landed in Australia.
He charted about 300 km of the west coast of Cape York Peninsula in Queensland. More than thirty Dutch expeditions followed, mapping sections of the north, west and south coasts. In 1642–1643, Abel Tasman circumnavigated the continent, proving that it was not joined to the imagined south polar continent. By 1650, Dutch cartographers had mapped most of the coastline of the continent, which they named New Holland, except the east coast, which was charted in 1770 by Captain Cook. The long-imagined south polar continent was eventually sighted in 1820. Throughout the Renaissance it had been known as Terra Australis, or 'Australia' for short. However, after that name was transferred to New Holland in the nineteenth century, the new name of 'Antarctica' was bestowed on the south polar continent.

From this changing society emerged a common, unifying musical language, in particular the polyphonic style of the Franco-Flemish school. The development of printing made distribution of music possible on a wide scale. Demand for music as entertainment and as an activity for educated amateurs increased with the emergence of a bourgeois class. Dissemination of chansons, motets, and masses throughout Europe coincided with the unification of polyphonic practice into the fluid style that culminated in the second half of the sixteenth century in the work of composers such as Palestrina, Lassus, Victoria and William Byrd.

The new ideals of humanism, although more secular in some aspects, developed against a Christian backdrop, especially in the Northern Renaissance. Much, if not most, of the new art was commissioned by or in dedication to the Church. However, the Renaissance had a profound effect on contemporary theology, particularly in the way people perceived the relationship between man and God. Many of the period's foremost theologians were followers of the humanist method, including Erasmus, Zwingli, Thomas More, Martin Luther, and John Calvin.

The Renaissance began in times of religious turmoil. The late Middle Ages was a period of political intrigue surrounding the Papacy, culminating in the Western Schism, in which three men simultaneously claimed to be the true Bishop of Rome. While the schism was resolved by the Council of Constance (1414), a resulting reform movement known as Conciliarism sought to limit the power of the pope. Although the papacy eventually emerged supreme in ecclesiastical matters by the Fifth Council of the Lateran (1511), it was dogged by continued accusations of corruption, most famously in the person of Pope Alexander VI, who was accused variously of simony, nepotism and fathering four children (most of whom were married off, presumably for the consolidation of power) while a cardinal. Churchmen such as Erasmus and Luther proposed reform to the Church, often based on humanist textual criticism of the New Testament. In October 1517 Luther published the 95 Theses, challenging papal authority and criticizing its perceived corruption, particularly with regard to the sale of indulgences. The 95 Theses led to the Reformation, a break with the Roman Catholic Church that had previously claimed hegemony in Western Europe. Humanism and the Renaissance therefore played a direct role in sparking the Reformation, as well as in many other contemporaneous religious debates and conflicts.

Pope Paul III came to the papal throne (1534–1549) after the sack of Rome in 1527, with uncertainties prevalent in the Catholic Church following the Protestant Reformation.
Nicolaus Copernicus dedicated "De revolutionibus orbium coelestium" (On the Revolutions of the Celestial Spheres) to Paul III, who was the grandfather of Cardinal Alessandro Farnese, who had paintings by Titian, Michelangelo, and Raphael, as well as an important collection of drawings, and who commissioned the masterpiece of Giulio Clovio, arguably the last major illuminated manuscript, the "Farnese Hours".

By the 15th century, writers, artists, and architects in Italy were well aware of the transformations that were taking place and were using phrases such as "modi antichi" (in the antique manner) or "alle romana et alla antica" (in the manner of the Romans and the ancients) to describe their work. In the 1330s Petrarch referred to pre-Christian times as "antiqua" (ancient) and to the Christian period as "nova" (new). From Petrarch's Italian perspective, this new period (which included his own time) was an age of national eclipse. Leonardo Bruni was the first to use tripartite periodization in his "History of the Florentine People" (1442). Bruni's first two periods were based on those of Petrarch, but he added a third period because he believed that Italy was no longer in a state of decline. Flavio Biondo used a similar framework in "Decades of History from the Deterioration of the Roman Empire" (1439–1453). Humanist historians argued that contemporary scholarship restored direct links to the classical period, thus bypassing the Medieval period, which they then named for the first time the "Middle Ages". The term first appears in Latin in 1469 as "media tempestas" (middle times). The term "rinascita" (rebirth) first appeared, however, in its broad sense in Giorgio Vasari's "Lives of the Artists", 1550, revised 1568. Vasari divides the age into three phases: the first phase contains Cimabue, Giotto, and Arnolfo di Cambio; the second phase contains Masaccio, Brunelleschi, and Donatello; the third centers on Leonardo da Vinci and culminates with Michelangelo. It was not just the growing awareness of classical antiquity that drove this development, according to Vasari, but also the growing desire to study and imitate nature.

In the 15th century, the Renaissance spread rapidly from its birthplace in Florence to the rest of Italy and soon to the rest of Europe. The invention of the printing press by German printer Johannes Gutenberg allowed the rapid transmission of these new ideas. As it spread, its ideas diversified and changed, being adapted to local culture. In the 20th century, scholars began to break the Renaissance into regional and national movements.

In England, the sixteenth century marked the beginning of the English Renaissance with the work of writers William Shakespeare, Christopher Marlowe, Edmund Spenser, Sir Thomas More, Francis Bacon, Sir Philip Sidney, as well as great artists, architects (such as Inigo Jones, who introduced Italianate architecture to England), and composers such as Thomas Tallis, John Taverner, and William Byrd.

The word "Renaissance" is borrowed from the French language, where it means "re-birth". It was first used in the eighteenth century and was later popularized by the French historian Jules Michelet (1798–1874) in his 1855 work, "Histoire de France" (History of France). In 1495 the Italian Renaissance arrived in France, imported by King Charles VIII after his invasion of Italy.
Francis I imported Italian art and artists, including Leonardo da Vinci, and built ornate palaces at great expense. Writers such as François Rabelais, Pierre de Ronsard, Joachim du Bellay and Michel de Montaigne, painters such as Jean Clouet, and musicians such as Jean Mouton also borrowed from the spirit of the Renaissance. In 1533, a fourteen-year-old Caterina de' Medici (1519–1589), born in Florence to Lorenzo de' Medici, Duke of Urbino, and Madeleine de la Tour d'Auvergne, married Henry II of France, second son of King Francis I and Queen Claude. Though she became famous and infamous for her role in France's religious wars, she made a direct contribution in bringing arts, sciences and music (including the origins of ballet) to the French court from her native Florence.

In the second half of the 15th century, the Renaissance spirit spread to Germany and the Low Countries, where the development of the printing press (ca. 1450) and Renaissance artists such as Albrecht Dürer (1471–1528) predated the influence from Italy. In the early Protestant areas of the country, humanism became closely linked to the turmoil of the Protestant Reformation, and the art and writing of the German Renaissance frequently reflected this dispute. However, the Gothic style and medieval scholastic philosophy remained dominant until the turn of the 16th century. Emperor Maximilian I of Habsburg (ruling 1493–1519) was the first truly Renaissance monarch of the Holy Roman Empire.

After Italy, Hungary was the first European country where the Renaissance appeared. The Renaissance style came directly from Italy during the Quattrocento to Hungary first in the Central European region, thanks to the development of early Hungarian-Italian relationships, not only in dynastic connections but also in cultural, humanistic and commercial relations, growing in strength from the 14th century. The affinity between Hungarian and Italian Gothic styles was a second reason: both avoided the excessive piercing of walls, preferring clean and light structures. Large-scale building schemes provided ample and long-term work for the artists, for example, the building of the Friss (New) Castle in Buda and the castles of Visegrád, Tata and Várpalota. In Sigismund's court there were patrons such as Pipo Spano, a descendant of the Scolari family of Florence, who invited Manetto Ammanatini and Masolino da Pannicale to Hungary.

The new Italian trend combined with existing national traditions to create a particular local Renaissance art. Acceptance of Renaissance art was furthered by the continuous arrival of humanist thought in the country. Many young Hungarians studying at Italian universities came closer to the Florentine humanist center, so a direct connection with Florence evolved. The growing number of Italian traders moving to Hungary, especially to Buda, helped this process. New thoughts were carried by the humanist prelates, among them Vitéz János, archbishop of Esztergom, one of the founders of Hungarian humanism. During the long reign of Emperor Sigismund of Luxemburg, the Royal Castle of Buda became probably the largest Gothic palace of the late Middle Ages. King Matthias Corvinus (r. 1458–1490) rebuilt the palace in early Renaissance style and further expanded it. After the marriage in 1476 of King Matthias to Beatrice of Naples, Buda became one of the most important artistic centres of the Renaissance north of the Alps.
The most important humanists living in Matthias' court were Antonio Bonfini and the famous Hungarian poet Janus Pannonius. András Hess set up a printing press in Buda in 1472. Matthias Corvinus's library, the Bibliotheca Corviniana, was Europe's greatest collection of secular books (historical chronicles and philosophic and scientific works) in the 15th century. His library was second only in size to the Vatican Library, which, however, mainly contained Bibles and religious materials. In 1489, Bartolomeo della Fonte of Florence wrote that Lorenzo de' Medici founded his own Greek-Latin library encouraged by the example of the Hungarian king. Corvinus's library is inscribed in UNESCO's Memory of the World Register.

Matthias started at least two major building projects. The works in Buda and Visegrád began in about 1479. Two new wings and a hanging garden were built at the royal castle of Buda, and the palace at Visegrád was rebuilt in Renaissance style. Matthias appointed the Italian Chimenti Camicia and the Dalmatian Giovanni Dalmata to direct these projects. Matthias commissioned the leading Italian artists of his age to embellish his palaces: for instance, the sculptor Benedetto da Majano and the painters Filippino Lippi and Andrea Mantegna worked for him. A copy of Mantegna's portrait of Matthias survives. Matthias also hired the Italian military engineer Aristotele Fioravanti to direct the rebuilding of the forts along the southern frontier. He had new monasteries built in Late Gothic style for the Franciscans in Kolozsvár, Szeged and Hunyad, and for the Paulines in Fejéregyháza. In the spring of 1485, Leonardo da Vinci travelled to Hungary on behalf of Sforza to meet King Matthias Corvinus, and was commissioned by him to paint a Madonna.

Matthias enjoyed the company of humanists and had lively discussions on various topics with them. The fame of his magnanimity encouraged many scholars, mostly Italian, to settle in Buda. Antonio Bonfini, Pietro Ranzano, Bartolomeo Fonzio, and Francesco Bandini spent many years in Matthias's court. This circle of educated men introduced the ideas of Neoplatonism to Hungary. Like all intellectuals of his age, Matthias was convinced that the movements and combinations of the stars and planets exercised influence on individuals' lives and on the history of nations. Galeotto Marzio described him as "king and astrologer", and Antonio Bonfini said Matthias "never did anything without consulting the stars". Upon his request, the famous astronomers of the age, Johannes Regiomontanus and Marcin Bylica, set up an observatory in Buda and equipped it with astrolabes and celestial globes. Regiomontanus dedicated to Matthias his book on navigation, which was later used by Christopher Columbus. Other important figures of the Hungarian Renaissance include Bálint Balassi (poet), Sebestyén Tinódi Lantos (poet), Bálint Bakfark (composer and lutenist), and Master MS (fresco painter).

Culture in the Netherlands at the end of the 15th century was influenced by the Italian Renaissance through trade via Bruges, which made Flanders wealthy. Its nobles commissioned artists who became known across Europe. In science, the anatomist Andreas Vesalius led the way; in cartography, Gerardus Mercator's map assisted explorers and navigators. In art, Dutch and Flemish Renaissance painting ranged from the strange work of Hieronymus Bosch to the everyday life depictions of Pieter Brueghel the Elder. The Renaissance in Northern Europe has been termed the "Northern Renaissance".
While Renaissance ideas were moving north from Italy, there was a simultaneous southward spread of some areas of innovation, particularly in music. The music of the 15th-century Burgundian School defined the beginning of the Renaissance in music, and the polyphony of the Netherlanders, as it moved with the musicians themselves into Italy, formed the core of the first true international style in music since the standardization of Gregorian Chant in the 9th century. The culmination of the Netherlandish school was in the music of the Italian composer Palestrina. At the end of the 16th century Italy again became a center of musical innovation, with the development of the polychoral style of the Venetian School, which spread northward into Germany around 1600.

The paintings of the Italian Renaissance differed from those of the Northern Renaissance. Italian Renaissance artists were among the first to paint secular scenes, breaking away from the purely religious art of medieval painters. Northern Renaissance artists initially remained focused on religious subjects, such as the contemporary religious upheaval portrayed by Albrecht Dürer. Later, the works of Pieter Bruegel influenced artists to paint scenes of daily life rather than religious or classical themes. It was also during the Northern Renaissance that the Flemish brothers Hubert and Jan van Eyck perfected the oil painting technique, which enabled artists to produce strong colors on a hard surface that could survive for centuries. A feature of the Northern Renaissance was its use of the vernacular in place of Latin or Greek, which allowed greater freedom of expression. This movement had started in Italy with the decisive influence of Dante Alighieri on the development of vernacular languages; in fact, the scholarly focus on writing in Italian has neglected a major source of Florentine ideas expressed in Latin. The spread of printing technology boosted the Renaissance in Northern Europe as elsewhere, with Venice becoming a world center of printing.

An early Italian humanist who came to Poland in the mid-15th century was Filippo Buonaccorsi. Many Italian artists came to Poland with Bona Sforza of Milan when she married King Sigismund I the Old in 1518. This was supported by temporarily strengthened monarchies in the region, as well as by newly established universities. The Polish Renaissance lasted from the late 15th to the late 16th century and was the Golden Age of Polish culture. Ruled by the Jagiellon dynasty, the Kingdom of Poland (from 1569 known as the Polish–Lithuanian Commonwealth) actively participated in the broad European Renaissance. The multi-national Polish state experienced a substantial period of cultural growth thanks in part to a century without major wars – aside from conflicts in the sparsely populated eastern and southern borderlands. The Reformation spread peacefully throughout the country (giving rise to the Polish Brethren), while living conditions improved, cities grew, and exports of agricultural products enriched the population, especially the nobility ("szlachta"), who gained dominance in the new political system of Golden Liberty. Polish Renaissance architecture has three periods of development. The greatest monument of this style in the territory of the former Duchy of Pomerania is the Ducal Castle in Szczecin.

Although the Italian Renaissance had a modest impact on Portuguese art, Portugal was influential in broadening the European worldview, stimulating humanist inquiry.
The Renaissance arrived through the influence of wealthy Italian and Flemish merchants who invested in profitable overseas commerce. As the pioneering headquarters of European exploration, Lisbon flourished in the late 15th century, attracting experts who made several breakthroughs in mathematics, astronomy and naval technology, including Pedro Nunes, João de Castro, Abraham Zacuto and Martin Behaim. Cartographers Pedro Reinel, Lopo Homem, Estêvão Gomes and Diogo Ribeiro made crucial advances in mapping the world. The apothecary Tomé Pires and the physicians Garcia de Orta and Cristóvão da Costa collected and published works on plants and medicines, soon translated by the Flemish pioneer botanist Carolus Clusius.

In architecture, the huge profits of the spice trade financed a sumptuous composite style in the first decades of the 16th century, the Manueline, incorporating maritime elements. The primary painters were Nuno Gonçalves, Gregório Lopes and Vasco Fernandes. In music, Pedro de Escobar and Duarte Lobo produced four songbooks, including the Cancioneiro de Elvas. In literature, Sá de Miranda introduced Italian forms of verse. Bernardim Ribeiro developed pastoral romance, plays by Gil Vicente fused it with popular culture, reporting the changing times, and Luís de Camões inscribed the Portuguese feats overseas in the epic poem "Os Lusíadas". Travel literature especially flourished: João de Barros, Castanheda, António Galvão, Gaspar Correia, Duarte Barbosa, and Fernão Mendes Pinto, among others, described new lands and were translated and spread with the new printing press. After joining the Portuguese exploration of Brazil in 1500, Amerigo Vespucci coined the term New World in his letters to Lorenzo di Pierfrancesco de' Medici.

The intense international exchange produced several cosmopolitan humanist scholars, including Francisco de Holanda, André de Resende and Damião de Góis, a friend of Erasmus who wrote with rare independence on the reign of King Manuel I. Diogo and André de Gouveia made relevant teaching reforms via France. Foreign news and products in the Portuguese factory in Antwerp attracted the interest of Thomas More and Albrecht Dürer to the wider world. There, profits and know-how helped nurture the Dutch Renaissance and Golden Age, especially after the arrival of the wealthy, cultured Jewish community expelled from Portugal.

Renaissance trends from Italy and Central Europe influenced Russia in many ways. Their influence was rather limited, however, due to the large distances between Russia and the main European cultural centers and the strong adherence of Russians to their Orthodox traditions and Byzantine legacy. Prince Ivan III introduced Renaissance architecture to Russia by inviting a number of architects from Italy, who brought new construction techniques and some Renaissance style elements with them, while in general following the traditional designs of Russian architecture. In 1475 the Bolognese architect Aristotele Fioravanti came to rebuild the Cathedral of the Dormition in the Moscow Kremlin, which had been damaged in an earthquake. Fioravanti was given the 12th-century Vladimir Cathedral as a model, and he produced a design combining traditional Russian style with a Renaissance sense of spaciousness, proportion and symmetry. In 1485 Ivan III commissioned the building of the royal residence, Terem Palace, within the Kremlin, with Aloisio da Milano as the architect of the first three floors.
He and other Italian architects also contributed to the construction of the Kremlin walls and towers. The small banquet hall of the Russian Tsars, called the Palace of Facets because of its faceted upper story, is the work of two Italians, Marco Ruffo and Pietro Solario, and shows a more Italian style. In 1505, an Italian known in Russia as Aleviz Novyi or Aleviz Fryazin arrived in Moscow. He may have been the Venetian sculptor Alevisio Lamberti da Montagne. He built twelve churches for Ivan III, including the Cathedral of the Archangel, a building remarkable for the successful blending of Russian tradition, Orthodox requirements and Renaissance style. It is believed that the Cathedral of the Metropolitan Peter in Vysokopetrovsky Monastery, another work of Aleviz Novyi, later served as an inspiration for the so-called "octagon-on-tetragon" architectural form in the Moscow Baroque of the late 17th century.

Between the early 16th and the late 17th centuries, an original tradition of stone tented roof architecture developed in Russia. It was quite distinct from contemporary Renaissance architecture elsewhere in Europe, though some research terms the style 'Russian Gothic' and compares it with the European Gothic architecture of the earlier period. The Italians, with their advanced technology, may have influenced the invention of the stone tented roof (wooden tented roofs were known in Russia and Europe long before). According to one hypothesis, an Italian architect called Petrok Maly may have been the author of the Ascension Church in Kolomenskoye, one of the earliest and most prominent tented roof churches.

By the 17th century the influence of Renaissance painting resulted in Russian icons becoming slightly more realistic, while still following most of the old icon painting canons, as seen in the works of Bogdan Saltanov, Simon Ushakov, Gury Nikitin, Karp Zolotaryov and other Russian artists of the era. Gradually a new type of secular portrait painting appeared, called "parsúna" (from "persona", meaning "person"), which was a transitional style between abstract iconography and realistic painting. In the mid-16th century, Russians adopted printing from Central Europe, with Ivan Fyodorov being the first known Russian printer. In the 17th century printing became widespread, and woodcuts became especially popular. That led to the development of a special form of folk art known as lubok printing, which persisted in Russia well into the 19th century.

A number of technologies from the European Renaissance period were adopted by Russia rather early and subsequently perfected to become a part of a strong domestic tradition. Mostly these were military technologies, such as cannon casting, adopted by the 15th century at the latest. The Tsar Cannon, which is the world's largest bombard by caliber, is a masterpiece of Russian cannon making. It was cast in 1586 by Andrey Chokhov and is notable for its rich, decorative relief. Another technology, which according to one hypothesis was originally brought from Europe by the Italians, resulted in the development of vodka, the national beverage of Russia. As early as 1386 Genoese ambassadors brought the first aqua vitae ("water of life") to Moscow and presented it to Grand Duke Dmitry Donskoy. The Genoese likely developed this beverage with the help of the alchemists of Provence, who used an Arab-invented distillation apparatus to convert grape must into alcohol. A Muscovite monk called Isidore used this technology to produce the first original Russian vodka c. 1430.
The Renaissance arrived in the Iberian peninsula through the Mediterranean possessions of the Aragonese Crown and the city of Valencia. Many early Spanish Renaissance writers came from the Kingdom of Aragon, including Ausiàs March and Joanot Martorell. In the Kingdom of Castile, the early Renaissance was heavily influenced by Italian humanism, starting with writers and poets such as the Marquis of Santillana, who introduced the new Italian poetry to Spain in the early 15th century. Other writers, such as Jorge Manrique, Fernando de Rojas, Juan del Encina, Juan Boscán Almogáver and Garcilaso de la Vega, kept a close resemblance to the Italian canon. Miguel de Cervantes's masterpiece "Don Quixote" is credited as the first Western novel. Renaissance humanism flourished in the early 16th century, with influential writers such as the philosopher Juan Luis Vives, the grammarian Antonio de Nebrija and the natural historian Pedro de Mexía. The later Spanish Renaissance tended towards religious themes and mysticism, with poets such as fray Luis de León, Teresa of Ávila and John of the Cross, and treated issues related to the exploration of the New World, with chroniclers and writers such as Inca Garcilaso de la Vega and Bartolomé de las Casas, giving rise to a body of work now known as Spanish Renaissance literature. The late Renaissance in Spain produced artists such as El Greco and composers such as Tomás Luis de Victoria and Antonio de Cabezón.

The Italian artist and critic Giorgio Vasari (1511–1574) first used the term "rinascita" in his book "The Lives of the Artists" (published 1550). In the book Vasari attempted to define what he described as a break with the barbarities of Gothic art: the arts (he held) had fallen into decay with the collapse of the Roman Empire, and only the Tuscan artists, beginning with Cimabue (1240–1301) and Giotto (1267–1337), began to reverse this decline in the arts. Vasari saw ancient art as central to the rebirth of Italian art. However, only in the 19th century did the French word "renaissance" achieve popularity in describing the self-conscious cultural movement based on revival of Roman models that began in the late 13th century. The French historian Jules Michelet (1798–1874) defined "The Renaissance" in his 1855 work "Histoire de France" as an entire historical period, whereas previously it had been used in a more limited sense. For Michelet, the Renaissance was more a development in science than in art and culture. He asserted that it spanned the period from Columbus to Copernicus to Galileo; that is, from the end of the 15th century to the middle of the 17th century. Moreover, Michelet distinguished between what he called "the bizarre and monstrous" quality of the Middle Ages and the democratic values that he, as a vocal Republican, chose to see in the character of the Renaissance. A French nationalist, Michelet also sought to claim the Renaissance as a French movement.

The Swiss historian Jacob Burckhardt (1818–1897), in his "The Civilization of the Renaissance in Italy" (1860), by contrast, defined the Renaissance as the period between Giotto and Michelangelo in Italy, that is, the 14th to mid-16th centuries. He saw in the Renaissance the emergence of the modern spirit of individuality, which the Middle Ages had stifled. His book was widely read and became influential in the development of the modern interpretation of the Italian Renaissance.
However, Burckhardt has been accused of setting forth a linear Whiggish view of history in seeing the Renaissance as the origin of the modern world. More recently, some historians have been much less keen to define the Renaissance as a historical age, or even as a coherent cultural movement. The historian Randolph Starn, of the University of California, Berkeley, expressed this skepticism in 1998.

There is debate about the extent to which the Renaissance improved on the culture of the Middle Ages. Both Michelet and Burckhardt were keen to describe the progress made in the Renaissance towards the modern age. Burckhardt likened the change to a veil being removed from man's eyes, allowing him to see clearly. On the other hand, many historians now point out that most of the negative social factors popularly associated with the medieval period (poverty, warfare, religious and political persecution, for example) seem to have worsened in this era, which saw the rise of Machiavellian politics, the Wars of Religion, the corrupt Borgia Popes, and the intensified witch hunts of the 16th century. Many people who lived during the Renaissance did not view it as the "golden age" imagined by certain 19th-century authors, but were concerned by these social maladies. Significantly, though, the artists, writers, and patrons involved in the cultural movements in question believed they were living in a new era that was a clean break from the Middle Ages. Some Marxist historians prefer to describe the Renaissance in material terms, holding the view that the changes in art, literature, and philosophy were part of a general economic trend from feudalism towards capitalism, resulting in a bourgeois class with leisure time to devote to the arts.

Johan Huizinga (1872–1945) acknowledged the existence of the Renaissance but questioned whether it was a positive change. In his book "The Autumn of the Middle Ages", he argued that the Renaissance was a period of decline from the High Middle Ages, destroying much that was important. The Latin language, for instance, had evolved greatly from the classical period and was still a living language used in the church and elsewhere. The Renaissance obsession with classical purity halted its further evolution and saw Latin revert to its classical form. Robert S. Lopez has contended that it was a period of deep economic recession. Meanwhile, George Sarton and Lynn Thorndike have both argued that scientific progress was perhaps less original than has traditionally been supposed. Finally, Joan Kelly argued that the Renaissance led to greater gender dichotomy, lessening the agency women had had during the Middle Ages.

Some historians have begun to consider the word "Renaissance" to be unnecessarily loaded, implying an unambiguously positive rebirth from the supposedly more primitive "Dark Ages", the Middle Ages. Most historians now prefer to use the term "early modern" for this period, a more neutral designation that highlights the period as a transitional one between the Middle Ages and the modern era. Others, such as Roger Osborne, have come to consider the Italian Renaissance as a repository of the myths and ideals of Western history in general, and as a period of great innovation rather than a rebirth of ancient ideas.

The term "Renaissance" has also been used to define periods outside of the 15th and 16th centuries. Charles H. Haskins (1870–1937), for example, made a case for a Renaissance of the 12th century.
Other historians have argued for a Carolingian Renaissance in the 8th and 9th centuries, an Ottonian Renaissance in the 10th century, and for the Timurid Renaissance of the 14th century. The Islamic Golden Age has also sometimes been termed the Islamic Renaissance. Other periods of cultural rebirth have also been termed "renaissances", such as the Bengal Renaissance, Tamil Renaissance, Nepal Bhasa renaissance, al-Nahda or the Harlem Renaissance. The term can also be used in cinema. In animation, the Disney Renaissance is a period spanning the years 1989 to 1999, which saw the studio return to a level of quality not witnessed since its Golden Age of Animation. The San Francisco Renaissance was a vibrant period of exploratory poetry and fiction writing in that city in the mid-20th century.
https://en.wikipedia.org/wiki?curid=25532
Rheged
Rheged () was one of the kingdoms of the "Hen Ogledd" ("Old North"), the Brittonic-speaking region of what is now Northern England and southern Scotland, during the post-Roman era and Early Middle Ages. It is recorded in several poetic and bardic sources, although its borders are not described in any of them. A recent archaeological discovery suggests that its stronghold was located in what is now Galloway in Scotland rather than, as was previously speculated, in Cumbria. Rheged possibly extended into Lancashire and other parts of northern England. In some sources, Rheged is intimately associated with the king Urien Rheged and his family. Its inhabitants spoke Cumbric, a Brittonic dialect closely related to Old Welsh.

The name Rheged appears regularly as an epithet of a certain Urien in a number of early Welsh poems and royal genealogies. His victories over the Anglian chieftains of Bernicia in the second half of the 6th century are recorded by Nennius and celebrated by the bard Taliesin, who calls him "Ruler of Rheged". He is thus placed squarely in the North of Britain and perhaps specifically in Westmorland when referred to as "Ruler of Llwyfenydd" (identified with the Lyvennet Valley). Later legend associates Urien with the city of Carlisle (the Roman Luguvalium), only twenty-five miles away; Higham suggests that Rheged was "broadly conterminous with the earlier "Civitas Carvetiorum", the Roman administrative unit based on Carlisle". Although it is possible that Rheged was merely a stronghold, it was not uncommon for sub-Roman monarchs to use their kingdom's name as an epithet.

It is generally accepted, therefore, that Rheged was a kingdom covering a large part of modern Cumbria. Place-name evidence, e.g., Dunragit (possibly "Fort of Rheged"), suggests that, at least in one period of its history, Rheged included Dumfries and Galloway. Recent archaeological excavations at Trusty's Hill, a vitrified fort near Gatehouse of Fleet, and the analysis of its artefacts in the context of other sites and their artefacts have led to claims that the kingdom was centred on Galloway early in the 7th century. More problematic interpretations suggest that it could also have reached as far south as Rochdale in Greater Manchester, recorded in the Domesday Book as "Recedham". The River Roch on which Rochdale stands was recorded in the 13th century as "Rached" or "Rachet". Such names may derive from Old English "reced", "hall or house". However, no other place names originating from this Old English element exist, which makes this derivation unlikely. If they are not of English origin, these place-names may incorporate the element 'Rheged' precisely because they lay on or near its borders. Certainly Urien's kingdom stretched eastward at one time, as he was also "Ruler of Catraeth" (Catterick in North Yorkshire).

The traditional royal genealogy of Urien and his successors traces their ancestry back to Coel Hen (considered by some to be the origin of the Old King Cole of folk tradition). A second royal genealogy exists for a line, perhaps of kings, descended from Cynfarch Oer's brother, Elidir Lydanwyn. According to "Bonedd Gwŷr y Gogledd", Elidir's son, Llywarch Hen, was a ruler in North Britain in the 6th century. He was driven from his territory by princely in-fighting after Urien's death and was perhaps in old age associated with Powys.
However, it is possible, because of internal inconsistencies, that the poetry connected to Powys became associated with Llywarch's name at a later date, probably in the 9th century. Llywarch is referred to in some poems as king of South Rheged, and in others as king of Argoed, suggesting that the two regions were the same. Searching for Llywarch's kingdom has led some historians to propose that Rheged may have been divided between sons, resulting in northern and southern successor states. The connections of the family of Llywarch and Urien with Powys have suggested to some, on grounds of proximity, that the area of modern Lancashire may have been their original home.

After Bernicia united with Deira to become the kingdom of Northumbria, Rheged was annexed by Northumbria, some time before AD 730. There was a royal marriage between Prince (later King) Oswiu of Northumbria and the Rhegedian princess Rieinmelth, granddaughter of Rum (Rhun), probably in 638, so it is possible that it was a peaceful takeover, both kingdoms being inherited by the same man. After Rheged was incorporated into Northumbria, the old Cumbric language was gradually replaced by Old English, Cumbric surviving only in remote upland communities. Around the year 900, after the power of Northumbria was destroyed by Viking incursions and settlement, large areas west of the Pennines fell without apparent warfare under the control of the British Kingdom of Strathclyde, with Leeds recorded as being on the border between the Britons and the Norse Kingdom of York. This may have represented the political assertion of lingering British culture in the region. The area of Cumbria remained under the control of Strathclyde until the early 11th century, when Strathclyde itself was absorbed into the Scottish kingdom. The name of the people, whose modern Welsh form is "Cymry", has, however, survived in the name of Cumberland and now Cumbria; it probably derives from an old Celtic word *"Kombroges", meaning "fellow countrymen".

Previously, Rheged was assumed to have been centred somewhere in Cumbria, northwest England. However, in 2012 archaeologists found evidence at Trusty's Hill near the town of Gatehouse of Fleet, in Galloway, southwest Scotland, suggesting that the site may in fact have been Rheged's capital. The discovery was announced to the public in January 2017 and the site is still under excavation. One of the lead researchers, Ronan Toolis, stated that their findings revealed structural ruins atop the hill. These originally belonged to a fortification system with a timber-reinforced stone rampart, where the main fortification was supplemented by smaller defensive works along the low-lying slopes. According to Toolis, this suggests the presence of a royal stronghold of the period: "This is a type of fort that has been recognized in Scotland as a form of high status secular settlement of the early medieval period. The evidence makes a compelling case for Galloway being the core of the kingdom of Rheged."

According to the University of Oxford's "People of the British Isles" project, the original population of Rheged has left a distinct genetic heritage amongst the people of Cumbria. The research compared the DNA of over 2,000 people across the British Isles whose grandparents were all born within a short distance of each other, and found a number of cases, including Rheged, where genetic clusters of people matched the location of historical kingdoms (other examples included Bernicia and Elmet).
https://en.wikipedia.org/wiki?curid=25533
Romanian language
Romanian (dated spellings: Rumanian or Roumanian; autonym: "limba română", "the Romanian language", or "românește", lit. "in Romanian") is a Balkan Romance language spoken by approximately 24–26 million people as a native language, primarily in Romania and Moldova, and by another 4 million people as a second language. According to another estimate, there are about 34 million people worldwide who can speak Romanian, of whom 30 million speak it as a native language. It is an official and national language of both Romania and Moldova and is one of the official languages of the European Union.

Romanian is a part of the Eastern Romance sub-branch of Romance languages, a linguistic group that evolved from several dialects of Vulgar Latin which separated from the Western Romance languages in the course of the period from the 5th to the 8th centuries. To distinguish it within the Eastern Romance languages, in comparative linguistics it is called "Daco-Romanian", as opposed to its closest relatives, Aromanian, Megleno-Romanian and Istro-Romanian. Romanian is also known as "Moldovan" in Moldova, although the Constitutional Court of Moldova ruled in 2013 that "the official language of the republic is Romanian". Numerous immigrant Romanian speakers live scattered across many other regions and countries worldwide, with large populations in Italy, Spain, Germany, Russia, Canada, and the United States of America.

Romanian descended from the Vulgar Latin spoken in the Roman provinces of Southeastern Europe. Roman inscriptions show that Latin was primarily used to the north of the so-called Jireček Line (a hypothetical boundary between the predominantly Latin- and Greek-speaking territories of the Balkan Peninsula in the Roman Empire), but the exact territory where Proto-Romanian (or Common Romanian) developed cannot be determined with certainty. Most regions where Romanian is now widely spoken (Bessarabia, Bukovina, Crișana, Maramureș, Moldova, and significant parts of Muntenia) were not incorporated in the Roman Empire. Other regions (Banat, western Muntenia, Oltenia and Transylvania) formed the Roman province of Dacia Traiana for about 170 years. According to the "continuity" theory, modern Romanian is the direct descendant of the Latin dialect of Dacia Traiana and developed primarily in the lands now forming Romania; the rival "immigrationist" theory maintains that Proto-Romanian was spoken in the lands to the south of the Danube and that Romanian-speakers settled in most parts of modern Romania only centuries after the fall of the Roman Empire.

Most scholars agree that two major dialects developed from Common Romanian by the 10th century. Daco-Romanian (the official language of Romania and Moldova) and Istro-Romanian (a language spoken by no more than 2,000 people in Istria) descended from the northern dialect. Two other languages, Aromanian and Megleno-Romanian, developed from the southern version of Common Romanian. These two languages are now spoken in lands to the south of the Jireček Line.

The use of the denomination "Romanian" ("română") for the language and the use of the demonym "Romanians" ("Români") for speakers of this language predate the foundation of the modern Romanian state.
Although the inhabitants of the former Romanian voivodeships used to designate themselves as "Ardeleni" (or "Ungureni"), "Moldoveni" or "Munteni", the name of "rumână" or "rumâniască" for the Romanian language itself is attested earlier, during the 16th century, by various foreign travelers into the Carpathian Romance-speaking space, as well as in other historical documents written in Romanian at that time, such as "Letopisețul Țării Moldovei" ("The Chronicles of the land of Moldova") by Grigore Ureche.

An attested reference to Romanian comes from a Latin title of an oath made in 1485 by the Moldavian Prince Stephen the Great to the Polish King Casimir, in which it is reported that "Haec Inscriptio ex Valachico in Latinam versa est sed Rex Ruthenica Lingua scriptam accepta" ("This Inscription was translated from Valachian (Romanian) into Latin, but the King has received it written in the Ruthenian language (Slavic)"). In 1534, Tranquillo Andronico notes: "Valachi nunc se Romanos vocant" ("The Wallachians are now calling themselves Romans"). Francesco della Valle writes in 1532 that Romanians "are calling themselves Romans in their own language", and he subsequently quotes the expression: "Știi Românește?" ("Do you know Romanian?"). After travelling through Wallachia, Moldavia and Transylvania, Ferrante Capecci recounts in 1575 that the indigenous population of these regions call themselves "românești" ("romanesci"). Pierre Lescalopier writes in 1574 that those who live in Moldavia, Wallachia and the vast part of Transylvania "se consideră adevărați urmași ai romanilor și-și numesc limba "românește", adică romana" ("they consider themselves as the descendants of the Romans and they name their language Romanian"). The Transylvanian Saxon Johann Lebel writes in 1542 that ""Vlachi" se numeau între ei "Romuini"" ("the Vlachs called themselves Romuini"), and the Polish chronicler Stanislaw Orzechowski (Orichovius) notes in 1554 that "în limba lor "walachii" se numesc "romini"" ("In their language the Wallachians call themselves Romini"). The Croatian prelate and diplomat Antun Vrančić recorded in 1570 that "Vlachs in Transylvania, Moldavia and Wallachia designate themselves as "Romans"", and the Transylvanian Hungarian Martin Szentiványi in 1699 quotes the following: "«Si noi sentem Rumeni»" ("Și noi suntem români" – "We are Romans as well") and "«Noi sentem di sange Rumena»" ("Noi suntem de sânge român" – "We are of Roman blood"). Notably, Szentiványi used Italian-based spellings to try to write the Romanian words. In the "Palia de la Orăștie" (1582) stands written: "[...] că văzum cum toate limbile au și înfluresc întru cuvintele slăvite a lui Dumnezeu numai noi românii pre limbă nu avem. Pentru aceia cu mare muncă scoasem de limba jidovească si grecească si srâbească pre limba românească 5 cărți ale lui Moisi prorocul si patru cărți și le dăruim voo frați rumâni și le-au scris în cheltuială multă... și le-au dăruit voo fraților români" ("... for we saw how all nations have and flourish in the glorified words of God, and only we Romanians lack them in our language. For that reason, with great labour, we translated from the Hebrew, Greek and Serbian tongues into the Romanian language five books of Moses the prophet and four other books, and we give them to you, Romanian brothers, and they were written at great expense... and they were given to you, Romanian brothers"), and in the "Letopisețul Țării Moldovei", written by the Moldavian chronicler Grigore Ureche, we can read: "«În Țara Ardialului nu lăcuiesc numai unguri, ce și sași peste seamă de mulți și români peste tot locul...»" ("In Transylvania there live not solely Hungarians or Saxons, but overwhelmingly many Romanians everywhere around.").

Nevertheless, the oldest extant document written in Romanian remains Neacșu's letter (1521), which was written using Cyrillic letters (these remained in use until the late 19th century). There are no records of any other documents written in Romanian from before 1521.
Miron Costin, in his "De neamul moldovenilor" (1687), while noting that Moldavians, Wallachians, and the Romanians living in the Kingdom of Hungary have the same origin, says that although people of Moldavia call themselves "Moldavians", they name their language "Romanian" ("românește") instead of "Moldavian" ("moldovenește"). Dimitrie Cantemir, in his "Descriptio Moldaviae" (Berlin, 1714), points out that the inhabitants of Moldavia, Wallachia and Transylvania spoke the same language. He notes, however, some differences in accent and vocabulary. Cantemir's work provides one of the earliest histories of the language, in which he notes, like Ureche before him, the evolution from Latin and notices the Greek and Polish borrowings. Additionally, he introduces the idea that some words must have had Dacian roots. Cantemir also notes that while the idea of a Latin origin of the language was prevalent in his time, other scholars considered it to have derived from Italian.

The slow process of Romanian establishing itself as an official language, used in the public sphere, in literature and ecclesiastically, began in the late 15th century and ended in the early decades of the 18th century, by which time Romanian had begun to be regularly used by the Church. The oldest Romanian texts of a literary nature are religious manuscripts ("Codicele Voronețean", "Psaltirea Scheiană"), translations of essential Christian texts. These are considered either as propagandistic results of confessional rivalries, for instance between Lutheranism and Calvinism, or as initiatives by Romanian monks stationed at Peri Monastery in Maramureș to distance themselves from the influence of the Mukacheve eparchy in Ukraine. The language remains poorly attested during the Early Modern period. The first Romanian grammar was published in Vienna in 1780.

Following the annexation of Bessarabia by Russia (after 1812), Moldavian was established as an official language in the governmental institutions of Bessarabia, used along with Russian. The publishing house established by Archbishop Gavril Bănulescu-Bodoni was able to produce books and liturgical works in Moldavian between 1815 and 1820. The linguistic situation in Bessarabia from 1812 to 1918 was the gradual development of bilingualism. Russian continued to develop as the official language of privilege, whereas Romanian remained the principal vernacular. The period from 1905 to 1917 was one of increasing linguistic conflict, with the re-awakening of Romanian national consciousness. In 1905 and 1906, the Bessarabian "zemstva" asked for the re-introduction of Romanian in schools as a "compulsory language", and the "liberty to teach in the mother language (Romanian language)". At the same time, Romanian-language newspapers and journals began to appear, such as "Basarabia" (1906), "Viața Basarabiei" (1907), "Moldovanul" (1907), "Luminătorul" (1908), "Cuvînt moldovenesc" (1913), and "Glasul Basarabiei" (1913). From 1913, the synod permitted that "the churches in Bessarabia use the Romanian language". Romanian finally became the official language with the Constitution of 1923.

Romanian has preserved a part of the Latin declension, but whereas Latin had six cases, from a morphological viewpoint Romanian has only five: the nominative, accusative, genitive, dative, and marginally the vocative. Romanian nouns also preserve the neuter gender, although instead of functioning as a separate gender with its own forms in adjectives, the Romanian neuter became a mixture of masculine and feminine.
The verb morphology of Romanian has shown the same move towards a compound perfect and future tense as the other Romance languages. Compared with the other Romance languages, during its evolution Romanian simplified the original Latin tense system in extreme ways, in particular by lacking a sequence of tenses.

Romanian is spoken mostly in Central Europe and the Balkan region of Southern Europe, although speakers of the language can be found all over the world, mostly due to the emigration of Romanian nationals and the return of former immigrants to Romania to their countries of origin. Romanian speakers account for 0.5% of the world's population, and 4% of the Romance-speaking population of the world. Romanian is the single official and national language in Romania and Moldova, although it shares official status at the regional level with other languages in the Moldovan autonomies of Gagauzia and Transnistria. Romanian is also an official language of the Autonomous Province of Vojvodina in Serbia, along with five other languages. Romanian minorities are encountered in Serbia (Timok Valley), Ukraine (Chernivtsi and Odessa oblasts), and Hungary (Gyula). Large immigrant communities are found in Italy, Spain, France, and Portugal. In 1995, the largest Romanian-speaking community in the Middle East was found in Israel, where Romanian was spoken by 5% of the population. Romanian is also spoken as a second language by people from Arabic-speaking countries who have studied in Romania. It is estimated that almost half a million Middle Eastern Arabs studied in Romania during the 1980s. Small Romanian-speaking communities are to be found in Kazakhstan and Russia. Romanian is also spoken within communities of Romanian and Moldovan immigrants in the United States, Canada and Australia, although they do not make up a large homogeneous community nationwide.

According to the Constitution of Romania of 1991, as revised in 2003, Romanian is the official language of the Republic. Romania mandates the use of Romanian in official government publications, public education and legal contracts. Advertisements as well as other public messages must bear a translation of foreign words, while trade signs and logos shall be written predominantly in Romanian. The Romanian Language Institute (Institutul Limbii Române), established by the Ministry of Education of Romania, promotes Romanian and supports people willing to study the language, working together with the Ministry of Foreign Affairs' Department for Romanians Abroad.

Romanian is the official language of the Republic of Moldova. The 1991 Declaration of Independence names the official language Romanian, while the Constitution of Moldova names the state language of the country Moldovan. In December 2013, the Constitutional Court of Moldova ruled that the Declaration of Independence takes precedence over the Constitution and that the state language should be called Romanian. Scholars agree that Moldovan and Romanian are the same language, with the glottonym "Moldovan" used in certain political contexts. It has been the sole official language since the adoption of the Law on State Language of the Moldavian SSR in 1989. This law mandates the use of Moldovan in all the political, economic, cultural and social spheres, as well as asserting the existence of a "linguistic Moldo-Romanian identity". It is also used in education, the mass media, and in colloquial speech and writing. Outside the political arena the language is most often called "Romanian".
In the breakaway territory of Transnistria, it is co-official with Ukrainian and Russian. In the 2014 census, out of the 2,804,801 people living in Moldova, 24% (652,394) stated Romanian as their most common language, whereas 56% stated Moldovan. While in the urban centers speakers are split evenly between the two names (with the capital Chișinău showing a strong preference for the name "Romanian", i.e. 3:2), in the countryside hardly a quarter of Romanian/Moldovan speakers indicated Romanian as their native language. Unofficial results of this census first showed a stronger preference for the name Romanian; however, the initial reports were later dismissed by the Institute for Statistics, which led to speculation in the media regarding the forgery of the census results.

The Constitution of the Republic of Serbia determines that in the regions of the Republic of Serbia inhabited by national minorities, their own languages and scripts shall be officially used as well, in the manner established by law. The Statute of the Autonomous Province of Vojvodina determines that, together with the Serbian language and the Cyrillic script, and the Latin script as stipulated by the law, the Croat, Hungarian, Slovak, Romanian and Rusyn languages and their scripts, as well as the languages and scripts of other nationalities, shall simultaneously be officially used in the work of the bodies of the Autonomous Province of Vojvodina, in the manner established by the law. The bodies of the Autonomous Province of Vojvodina are: the Assembly, the Executive Council and the provincial administrative bodies. The Romanian language and script are officially used in eight municipalities: Alibunar, Bela Crkva (Biserica Albă), Žitište (Zitiște), Zrenjanin (Zrenianin), Kovačica (Kovăcița), Kovin (Cuvin), Plandište (Plandiște) and Sečanj. In the municipality of Vršac (Vârșeț), Romanian is official only in the villages of Vojvodinci (Voivodinț), Markovac (Marcovăț), Straža (Straja), Mali Žam (Jamu Mic), Malo Središte (Srediștea Mică), Mesić (Mesici), Jablanka, Sočica (Sălcița), Ritiševo (Râtișor), Orešac (Oreșaț) and Kuštilj (Coștei). In the 2002 census, the last carried out in Serbia, 1.5% of Vojvodinians stated Romanian as their native language.

In parts of Ukraine where Romanians constitute a significant share of the local population (districts in the Chernivtsi, Odessa and Zakarpattia oblasts), Romanian is taught in schools as a primary language, and there are Romanian-language newspapers, TV, and radio broadcasting. The University of Chernivtsi in western Ukraine trains teachers for Romanian schools in the fields of Romanian philology, mathematics and physics. In Hertsa Raion of Ukraine, as well as in other villages of Chernivtsi Oblast and Zakarpattia Oblast, Romanian has been declared a "regional language" alongside Ukrainian, as per the 2012 legislation on languages in Ukraine.

Romanian is an official or administrative language in various communities and organisations, such as the Latin Union and the European Union. Romanian is also one of the five languages in which religious services are performed in the autonomous monastic state of Mount Athos, spoken in the monk communities of Prodromos and Lacu. In the unrecognised state of Transnistria, Moldovan is one of the official languages; however, unlike all other varieties of Romanian, this variety of Moldovan is written in the Cyrillic script. Romanian is taught in some areas that have Romanian minority communities, such as Vojvodina in Serbia, Bulgaria, Ukraine and Hungary.
The Romanian Cultural Institute (ICR) has since 1992 organised summer courses in Romanian for language teachers. There are also non-Romanians who study Romanian as a foreign language, for example at the Nicolae Bălcescu High School in Gyula, Hungary. Romanian is taught as a foreign language in tertiary institutions, mostly in European countries such as Germany, France, Italy, and the Netherlands, as well as in the United States. Overall, it is taught as a foreign language in 43 countries around the world.

Romanian has become popular in other countries through movies and songs performed in the Romanian language. Examples of Romanian acts that had great success in non-Romanophone countries are the bands O-Zone (with their worldwide No. 1 single "Dragostea Din Tei/Numa Numa" in 2003–2004), Akcent (popular in the Netherlands, Poland and other European countries), Activ (successful in some Eastern European countries), DJ Project (popular as clubbing music), SunStroke Project (known for the viral video "Epic sax guy") and Alexandra Stan (worldwide No. 1 hit with "Mr. Saxobeat") and Inna, as well as highly rated movies like "4 Months, 3 Weeks and 2 Days", "The Death of Mr. Lazarescu" and "California Dreamin'" (all of them with awards at the Cannes Film Festival). Also, some artists have written songs dedicated to the Romanian language. The multi-platinum pop trio O-Zone (originally from Moldova) released a song called "Nu mă las de limba noastră" ("I won't forsake our language"). The final verse of this song, "Eu nu mă las de limba noastră, de limba noastră cea română", translates into English as "I won't forsake our language, our Romanian language". The Moldovan musicians Doina and Ion Aldea Teodorovici also performed a song called "The Romanian language".

Romanian encompasses four varieties: (Daco-)Romanian, Aromanian, Megleno-Romanian, and Istro-Romanian, with Daco-Romanian being the standard variety. The origin of the term "Daco-Romanian" can be traced back to the first printed book of Romanian grammar in 1780, by Samuil Micu and Gheorghe Șincai. There, the Romanian dialect spoken north of the Danube is called "lingua Daco-Romana" to emphasize its origin and its area of use, which includes the former Roman province of Dacia, although it is also spoken south of the Danube, in Dobrudja, Central Serbia and northern Bulgaria. This article deals with the Romanian (i.e. Daco-Romanian) language, and thus only its dialectal variations are discussed here. The differences between the regional varieties are small, limited to regular phonetic changes, a few grammatical aspects, and lexical particularities. There is a single written standard (literary) Romanian language used by all speakers, regardless of region. Like those of most natural languages, the dialects of Romanian are part of a dialect continuum; they are also referred to as "sub-dialects" and are distinguished primarily by phonetic differences. Romanians themselves speak of the differences as "accents" or "speeches" (in Romanian: "accent" or "grai"). Depending on the criteria used for classifying these dialects, fewer or more are found, ranging from 2 to 20, although the most widespread approaches give a number of five dialects, grouped into two main types, southern and northern. Over the last century, however, regional accents have weakened due to mass communication and greater mobility.
Romanian is a Romance language, belonging to the Italic branch of the Indo-European language family, and has much in common with languages such as French, Italian, Spanish and Portuguese. However, the languages closest to Romanian are the other Balkan Romance languages, spoken south of the Danube: Aromanian, Megleno-Romanian and Istro-Romanian. An alternative name for Romanian used by linguists to disambiguate it from the other Balkan Romance languages is "Daco-Romanian", referring to the area where it is spoken (which corresponds roughly to the onetime Roman province of Dacia).

Compared with the other Romance languages, the closest relative of Romanian is Italian; the two languages show a limited degree of asymmetrical mutual intelligibility, especially in their cultivated forms: speakers of Romanian seem to understand Italian more easily than the other way around. Romanian has obvious grammatical and lexical similarities with French, Catalan, Spanish and Portuguese, with a particularly high phonological similarity to Portuguese; however, it is not mutually intelligible with them to any practical extent. Romanian speakers will usually need some formal study of basic grammar and vocabulary before being able to understand more than individual words and simple sentences in other Romance languages, and the same is true for speakers of those languages trying to understand Romanian. Because of its separation from the other Romance languages, Romanian has diverged from them and is an outlier in various ways, somewhat as English and Icelandic are with regard to the other Germanic languages. In vocabulary and other respects, Romanian has also absorbed a greater share of foreign influence than some other Romance languages, such as Italian. A study conducted by Mario Pei in 1949, which analyzed the degree of differentiation of languages from their parental language (in the case of the Romance languages, from Latin, comparing phonology, inflection, discourse, syntax, vocabulary, and intonation), measured for each language a percentage of differentiation from Latin (the higher the percentage, the greater the distance from Latin). The lexical similarity of Romanian with Italian has been estimated at 77%, followed by French at 75%, Sardinian 74%, Catalan 73%, Portuguese and Rhaeto-Romance 72%, and Spanish 71%. In the nineteenth and early twentieth centuries, the Romanian vocabulary came to be influenced predominantly by French and, to a lesser extent, Italian.

The Dacian language was an Indo-European language spoken by the ancient Dacians, mostly north of the Danube river but also in Moesia and other regions south of the Danube. It may have been the first language to influence the Latin spoken in Dacia, but little is known about it. Dacian is usually considered to have been a northern branch of the Thracian language, and, like Thracian, Dacian was a satem language. About 300 words found only in Romanian or with a cognate in the Albanian language may be inherited from Dacian (for example: "barză" "stork", "balaur" "dragon", "mal" "shore", "brânză" "cheese"). Some of these possibly Dacian words are related to pastoral life (for example, "brânză" "cheese"). Some linguists and historians have asserted that Albanians are Dacians who were not Romanized and migrated southward. A different view is that these non-Latin words with Albanian cognates are not necessarily Dacian, but rather were brought into the territory that is modern Romania by Romance-speaking Aromanian shepherds migrating north from Albania, Serbia, and northern Greece, who became the Romanian people.
While most of Romanian grammar and morphology are based on Latin, there are some features that are shared only with other languages of the Balkans and not found in other Romance languages. The shared features of Romanian and the other languages of the Balkan language area (Bulgarian, Macedonian, Albanian, Greek, and Serbo-Croatian) include a suffixed definite article, the syncretism of the genitive and dative cases, the formation of the future tense, and the alternation of the infinitive with subjunctive constructions. According to a well-established scholarly theory, most Balkanisms can be traced back to the development of the Balkan Romance languages; these features were adopted by other languages due to language shift.

Slavic influence on Romanian is especially noticeable in its vocabulary, with Slavic-derived words making up about 10–15% of modern Romanian words, and there are further influences in its phonetics, morphology and syntax. The greater part of its Slavic vocabulary comes from Old Church Slavonic, which was the official written language of Wallachia and Moldavia from the 14th to the 18th century (although not understood by most people), as well as the liturgical language of the Romanian Orthodox Church. As a result, much Romanian vocabulary dealing with religion, ritual, and hierarchy is Slavic. The number of high-frequency Slavic-derived words is also believed to indicate contact or cohabitation with South Slavic tribes from around the 6th century, though it is disputed where this took place (see Origin of the Romanians). Words borrowed in this way tend to be more vernacular (compare "sfârși", "to end", with "săvârși", "to commit"). The extent of this borrowing is such that some scholars once mistakenly viewed Romanian as a Slavic language. It has also been argued that Slavic borrowing was a key factor in the development of /ɨ/ (written "î" and "â") as a separate phoneme.

Even before the 19th century, Romanian had come into contact with several other languages. During the Habsburg and, later on, Austrian rule of Banat, Transylvania, and Bukovina, for instance, a large number of words were borrowed from Austrian High German, in particular in fields such as the military, administration, social welfare and the economy. German terms were subsequently also taken over from science and technology, such as: "șină" < "Schiene" "rail", "știft" < "Stift" "peg", "liță" < "Litze" "braid", "șindrilă" < "Schindel" "shingle", "ștanță" < "Stanze" "punch", "șaibă" < "Scheibe" "washer", "ștangă" < "Stange" "crossbar", "țiglă" < "Ziegel" "tile", and "șmirghel" < "Schmirgelpapier" "emery paper".

Since the 19th century, many literary or learned words have been borrowed from the other Romance languages, especially from French and Italian (for example: "birou" "desk, office", "avion" "airplane", "exploata" "exploit"). It has been estimated that about 38% of words in Romanian are of French and/or Italian origin (in many cases both languages), and adding this to Romanian's native stock, about 75%–85% of Romanian words can be traced to Latin. The use of these Romanianized French and Italian learned loans has tended to increase at the expense of Slavic loanwords, many of which have become rare or fallen out of use. As second or third languages, French and Italian themselves are better known in Romania than in Romania's neighbors. Along with the switch to the Latin alphabet in Moldova, the re-latinization of the vocabulary has tended to reinforce the Latin character of the language.
In the process of lexical modernization, much of the native Latin stock has acquired doublets from other Romance languages, forming a further, more modern and literary lexical layer. Typically, the native word is a noun and the learned loan is an adjective; one such doublet pairs the inherited noun "apă" ("water", from Latin "aqua") with the learned adjective "acvatic" ("aquatic"). In the 20th century, an increasing number of English words were borrowed (such as: "gem" < jam; "interviu" < interview; "meci" < match; "manager" < manager; "fotbal" < football; "sandviș" < sandwich; "bișniță" < business; "chec" < cake; "veceu" < WC; "tramvai" < tramway). These words are assigned grammatical gender in Romanian and handled according to Romanian rules; thus "the manager" is "managerul". Some borrowings, for example in the computer field, appear to have awkward (perhaps contrived and ludicrous) Romanisations, such as "cookie-uri", the plural of the Internet term "cookie". A statistical analysis sorting Romanian words by etymological source, carried out by Macrea (1961) on the basis of the DLRM (49,649 words), quantified the makeup of the lexicon. If the analysis is restricted to a core vocabulary of 2,500 frequent, semantically rich and productive words, then the Latin inheritance comes first, followed by Romance and classical Latin neologisms, whereas the Slavic borrowings come third. Romanian has a lexical similarity of 77% with Italian, 75% with French, 74% with Sardinian, 73% with Catalan, 72% with Portuguese and Rhaeto-Romance, and 71% with Spanish.

Romanian nouns are characterized by gender (feminine, masculine, and neuter), and are declined by number (singular and plural) and case (nominative/accusative, dative/genitive and vocative). The articles, as well as most adjectives and pronouns, agree in gender, number and case with the noun they modify. Romanian is the only Romance language where definite articles are enclitic: that is, attached to the end of the noun (as in Scandinavian, Bulgarian and Albanian), instead of in front (proclitic). They were formed, as in other Romance languages, from the Latin demonstrative pronouns. As in all Romance languages, Romanian verbs are highly inflected for person, number, tense, mood, and voice. The usual word order in sentences is subject–verb–object (SVO). Romanian has four verbal conjugations, which further split into ten conjugation patterns. Verbs have five personal moods, inflected for person (indicative, conditional/optative, imperative, subjunctive, and presumptive), and four impersonal moods (infinitive, gerund, supine, and participle).

Romanian has seven vowels: /i/, /ɨ/, /u/, /e/, /ə/, /o/, and /a/. Additionally, /ø/ and /y/ may appear in some borrowed words. Arguably, the diphthongs /e̯a/ and /o̯a/ are also part of the phoneme set. There are twenty-two consonants. The two approximants /j/ and /w/ can appear before or after any vowel, creating a large number of glide-vowel sequences which are, strictly speaking, not diphthongs. In final positions after consonants, a short /i/ can be deleted, surfacing only as the palatalization of the preceding consonant (e.g., "lupi" [lupʲ] "wolves"). Similarly, a deleted /u/ may prompt the labialization of a preceding consonant, though this has ceased to carry any morphological meaning.

Owing to its isolation from the other Romance languages, the phonetic evolution of Romanian was quite different, but the language does share a few changes with Italian, such as [kl] → [kj] (Lat. "cl"arus → Rom. "chi"ar, Ital. "chi"aro, Lat. "cl"amare → Rom. "che"mare, Ital. "chi"amare) and [ɡl] → [ɡj] (Lat. *"gl"acia ("gl"acies) → Rom. "ghe"ață, Ital. "ghi"accia, "ghi"accio, Lat. *un"gl"a (ungula) → Rom. un"ghi"e, Ital. un"ghi"a), although this did not go as far as it did in Italian with other similar clusters (Rom. "pl"ace, Ital. "pi"ace); another similarity with Italian is the change from [ke] or [ki] to [tʃe] or [tʃi] (Lat. pax, pa"ce"m → Rom. and Ital. pa"ce", Lat. dul"ce"m → Rom. dul"ce", Ital. dol"ce", Lat. "ci"rcus → Rom. "ce"rc, Ital. "ci"rco) and from [ɡe] or [ɡi] to [dʒe] or [dʒi] (Lat. "ge"lu → Rom. "ge"r, Ital. "ge"lo, Lat. mar"gi"nem → Rom. and Ital. mar"gi"ne, Lat. "ge"mere → Rom. "ge"me ("ge"mere), Ital. "ge"mere). There are also a few changes shared with Dalmatian, such as /ɡn/ (probably phonetically [ŋn]) → [mn] (Lat. cognatus → Rom. cumnat, Dalm. comnut) and /ks/ → [ps] in some situations (Lat. coxa → Rom. coa"ps"ă, Dalm. co"ps"a). Among the notable phonetic changes, Romanian has entirely lost Latin [kw] (qu), turning it either into [p] (Lat. quattuor → Rom. "patru", "four"; cf. It. "quattro") or [k] (Lat. quando → Rom. "când", "when"; Lat. quale → Rom. "care", "which"). In fact, in modern re-borrowings, it sometimes takes the German-like form /kv/, as in "acvatic", "aquatic". Notably, Romanian also failed to develop the palatalised sounds /ɲ/ and /ʎ/, which exist at least historically in all other major Romance languages, and even in neighbouring non-Romance languages such as Serbian and Hungarian.

The first written record of a Romance language spoken in the Middle Ages in the Balkans is from 587. A Vlach muleteer accompanying the Byzantine army noticed that the load was falling from one of the animals and shouted to a companion "Torna, torna frate" ("Return, return, brother!") and "sculca" ("out of bed"). Theophanes Confessor recorded it as part of a 6th-century military expedition by Commentiolus and Priscus against the Avars and Slovenes. "Libri III de moribus et actis primorum Normanniae ducum" by Dudo of Saint-Quentin states that Richard I of Normandy was sent by his father William I Longsword to learn the Dacian language with Bothon, because the inhabitants of Bayeux spoke more Dacian than Roman. The oldest surviving written text in Romanian is a letter from late June 1521, in which Neacșu of Câmpulung wrote to the mayor of Brașov about an imminent attack by the Turks. It was written using the Cyrillic alphabet, like most early Romanian writings. The earliest surviving writing in Latin script is a late 16th-century Transylvanian text written with Hungarian alphabet conventions.

In the 18th century, Transylvanian scholars noted the Latin origin of Romanian and adapted the Latin alphabet to the Romanian language, using some orthographic rules from Italian, recognized as Romanian's closest relative. The Cyrillic alphabet remained in (gradually decreasing) use until 1860, when Romanian writing was first officially regulated. In Soviet Moldova, a special version of the Cyrillic alphabet derived from the Russian one was used until 1989, when the Romanian language spoken there officially returned to the Latin alphabet, although in the breakaway territory of Transnistria the Cyrillic alphabet is used to this day. The Romanian alphabet is based on the Latin script with five additional letters: "ă", "â", "î", "ș" and "ț". K, Q, W and Y, which are not part of the native alphabet, were officially introduced into the Romanian alphabet in 1982 and are mostly used to write loanwords like "kilogram", "quasar", "watt", and "yoga". Formerly, there were as many as 12 additional letters, but some of them were abolished in subsequent reforms. Also, until the early 20th century, a breve marking a short vowel was used, which survives only in "ă".
Today the Romanian alphabet is largely phonemic. However, the letters "â" and "î" both represent the same close central unrounded vowel /ɨ/: "â" is used only inside words, while "î" is used at the beginning or the end of non-compound words and in the middle of compound words. Another exception from a completely phonemic writing system is the fact that vowels and their respective semivowels are not distinguished in writing. In dictionaries the distinction is marked by separating the entry word into syllables for words containing a hiatus. Stressed vowels are likewise not marked in writing, except very rarely, in cases where misplacing the stress might change the meaning of a word and the meaning is not obvious from the context. For example, "trei copíi" means "three children" while "trei cópii" means "three copies".

In 1993, new spelling rules were proposed by the Romanian Academy. In 2000, the Moldovan Academy recommended adopting the same spelling rules, and in 2010 it launched a schedule for the transition, intended to be completed by publications in 2011. On 17 October 2016, Minister of Education Corina Fusu signed Order No. 872, adopting the revised spelling rules as recommended by the Moldovan Academy of Sciences, coming into force on the day of signing (and due to be fully implemented within two school years). From this day, the spelling used by institutions subordinated to the Ministry of Education has been in line with the Romanian Academy's 1993 recommendation. This order, however, has no application to other government institutions, nor has Law 3462 of 1989 (which provided for the means of transliterating Cyrillic into Latin) been amended to reflect these changes; thus, these institutions, along with most Moldovans, prefer to use the spelling adopted in 1989 (when the language with Latin script became official).

(Sample texts in the source presented one sentence in four versions: in contemporary Romanian with words inherited directly from Latin highlighted; with French and Italian loanwords highlighted instead; rewritten to exclude French and Italian loanwords, with Slavic loanwords highlighted; and rewritten to exclude all loanwords, in which case the meaning is somewhat compromised due to the paucity of native vocabulary.)
https://en.wikipedia.org/wiki?curid=25534
Republic A republic (from Latin "res publica", meaning "public affair") is a form of government in which the country is considered a "public matter", not the private concern or property of the rulers. The primary positions of power within a republic are attained through democracy, oligarchy, or a mix thereof, rather than being unalterably occupied. It has become the opposing form of government to a monarchy and therefore has no monarch as head of state. In the context of American constitutional law, the definition of "republic" refers specifically to a form of government in which elected individuals represent the citizen body and exercise power according to the rule of law under a constitution, including separation of powers with an elected head of state; such a government is referred to as a constitutional republic or representative democracy. Some 159 of the world's 206 sovereign states use the word "republic" as part of their official names; not all of these are republics in the sense of having elected governments, nor is the word "republic" used in the names of all nations with elected governments.

The word "republic" comes from the Latin term "res publica", which literally means "public thing", "public matter", or "public affair" and was used to refer to the state as a whole. The term developed its modern meaning in reference to the constitution of the ancient Roman Republic, lasting from the overthrow of the kings in 509 BC to the establishment of the Empire in 27 BC. This constitution was characterized by a Senate composed of wealthy aristocrats wielding significant influence; several popular assemblies of all free citizens, possessing the power to elect magistrates and pass laws; and a series of magistracies with varying types of civil and political authority. Most often a republic is a single sovereign state, but there are also sub-sovereign state entities that are referred to as republics, or that have governments that are described as "republican" in nature. For instance, Article IV of the United States Constitution "guarantee[s] to every State in this Union a Republican form of Government". In contrast, the Soviet Union described itself as a group of "Soviet Socialist Republics", in reference to its 15 federal, multinational, top-level subdivisions, or republics.

The term originates from the Latin translation of the Greek word "politeia". Cicero, among other Latin writers, translated "politeia" as "res publica", and it was in turn translated by Renaissance scholars as "republic" (or similar terms in various western European languages). The term "politeia" can be translated as form of government, polity, or regime, and is therefore not always a word for a specific type of regime as the modern word "republic" is. One of Plato's major works on political science was titled "Politeia" and in English it is thus known as "The Republic". However, apart from the title, in modern translations of "The Republic", alternative translations of "politeia" are also used. However, in Book III of his "Politics", Aristotle was apparently the first classical writer to state that the term "politeia" can be used to refer more specifically to one type of "politeia": "When the citizens at large govern for the public good, it is called by the name common to all governments ("to koinon onoma pasōn tōn politeiōn"), government ("politeia")". In classical Latin, too, the term "republic" could be used in a general way to refer to any regime, or in a specific way to refer to governments which work for the public good.
In medieval Northern Italy, a number of city-states had commune- or signoria-based governments. In the late Middle Ages, writers such as Giovanni Villani began writing about the nature of these states and their differences from other types of regime. They used terms such as "libertas populi", a free people, to describe the states. The terminology changed in the 15th century as renewed interest in the writings of Ancient Rome caused writers to prefer classical terminology. To describe non-monarchical states, writers, most importantly Leonardo Bruni, adopted the Latin phrase "res publica". While Bruni and Machiavelli used the term to describe the states of Northern Italy, which were not monarchies, the term "res publica" has a set of interrelated meanings in the original Latin. The term can quite literally be translated as "public matter". It was most often used by Roman writers to refer to the state and government, even during the period of the Roman Empire. In subsequent centuries, the English word "commonwealth" came to be used as a translation of "res publica", and its use in English was comparable to how the Romans used the term "res publica". Notably, during the Protectorate of Oliver Cromwell, the word "commonwealth" was the most common term for the new monarchless state, but the word "republic" was also in common use. Likewise, in Polish the term was translated as "rzeczpospolita", although the translation is now only used with respect to Poland. Presently, the term "republic" commonly means a system of government which derives its power from the people rather than from another basis, such as heredity or divine right.

While the philosophical terminology developed in classical Greece and Rome, as Aristotle had already noted, there was a long history of city-states with a wide variety of constitutions, not only in Greece but also in the Middle East. After the classical period, during the Middle Ages, many free cities developed again, such as Venice. The modern type of "republic" itself is different from any type of state found in the classical world. Nevertheless, there are a number of states of the classical era that are today still called republics. This includes ancient Athens and the Roman Republic. While the structure and governance of these states was very different from that of any modern republic, there is debate about the extent to which classical, medieval, and modern republics form a historical continuum. J. G. A. Pocock has argued that a distinct republican tradition stretches from the classical world to the present. Other scholars disagree. Paul Rahe, for instance, argues that the classical republics had a form of government with few links to those in any modern country. The political philosophy of the classical republics has in any case had an influence on republican thought throughout the subsequent centuries. Philosophers and politicians advocating republics, such as Machiavelli, Montesquieu, Adams, and Madison, relied heavily on classical Greek and Roman sources which described various types of regimes. Aristotle's "Politics" discusses various forms of government. One form Aristotle named "politeia", which consisted of a mixture of the other forms. He argued that this was one of the ideal forms of government. Polybius expanded on many of these ideas, again focusing on the idea of mixed government. The most important Roman work in this tradition is Cicero's "De re publica".
Over time, the classical republics were either conquered by empires or became empires themselves. Most of the Greek republics were annexed to the Macedonian Empire of Alexander. The Roman Republic expanded dramatically, conquering the other states of the Mediterranean that could be considered republics, such as Carthage. The Roman Republic itself then became the Roman Empire.

The term "republic" is not commonly used to refer to pre-classical city-states, especially if outside Europe and the area which was under Graeco-Roman influence. However, some early states outside Europe had governments that are sometimes today considered similar to republics. In the ancient Near East, a number of cities of the Eastern Mediterranean achieved collective rule. Arwad has been cited as one of the earliest known examples of a republic, in which the people, rather than a monarch, are described as sovereign. The Israelite confederation of the era of the Judges before the United Monarchy has also been considered a type of republic. In Africa the Axum Empire was organized as a confederation ruled similarly to a royal republic, as was the Igbo nation in what is now Nigeria.

Early republican institutions come from the independent "gana sanghas", which may have existed as early as the 6th century BC and persisted in some areas of India until the 4th century. The evidence for this is scattered, however, and no pure historical source exists for that period. Diodorus, a Greek historian writing two centuries after Alexander the Great's invasion of India (now Pakistan and northwest India), mentions, without offering any detail, that independent and democratic states existed in India. Modern scholars note that by the 3rd century BC and later the word "democracy" had suffered degradation and could mean any autonomous state, no matter how oligarchic in nature. Key characteristics of the "gana" seem to include a monarch, usually known by the name raja, and a deliberative assembly. The assembly met regularly. It discussed all major state decisions. At least in some states, attendance was open to all free men. This body also had full financial, administrative, and judicial authority. Other officers, who rarely receive any mention, obeyed the decisions of the assembly. Elected by the "gana", the monarch apparently always belonged to a family of the noble class of "Kshatriya Varna". The monarch coordinated his activities with the assembly; in some states, he did so with a council of other nobles. The Licchavis had a primary governing body of 7,077 rajas, the heads of the most important families. On the other hand, the Shakyas, Koliyas, Mallas, and Licchavis, during the period around Gautama Buddha, had assemblies open to all men, rich and poor. Early "republics", or "gana sanghas", such as the Mallas, centered in the city of Kusinagara, and the Vajji (or Vriji) confederation, centered in the city of Vaishali, existed as early as the 6th century BC and persisted in some areas until the 4th century AD. The most famous clan amongst the ruling confederate clans of the Vajji Mahajanapada was the Licchavis. The Magadha kingdom included republican communities such as the community of Rajakumara. Villages had their own assemblies under their local chiefs, called Gramakas. Their administrations were divided into executive, judicial, and military functions. Scholars differ over how best to describe these governments, and the vague, sporadic quality of the evidence allows for wide disagreements.
Some emphasize the central role of the assemblies and thus tout them as democracies; other scholars focus on the upper-class domination of the leadership and possible control of the assembly and see an oligarchy or an aristocracy. Despite the assembly's obvious power, it has not yet been established whether the composition and participation were truly popular. The first main obstacle is the lack of evidence describing the popular power of the assembly. This is reflected in the "Arthashastra", an ancient handbook for monarchs on how to rule efficiently. It contains a chapter on how to deal with the "sanghas", which includes injunctions on manipulating the noble leaders, yet it does not mention how to influence the mass of the citizens, a surprising omission if democratic bodies, not the aristocratic families, actively controlled the republican governments. Another issue is the persistence of the four-tiered Varna class system. The duties and privileges of the members of each particular caste (rigid enough to prohibit someone sharing a meal with those of another order) might have affected the roles members were expected to play in the state, regardless of the formality of the institutions. A central tenet of democracy is the notion of shared decision-making power. The absence of any concrete notion of citizen equality across these caste-system boundaries leads many scholars to claim that the true nature of the "ganas" and "sanghas" is not comparable to truly democratic institutions.

The Icelandic Commonwealth was established in AD 930 by refugees from Norway who had fled the unification of that country under King Harald Fairhair. The Commonwealth consisted of a number of clans run by chieftains, and the Althing was a combination of parliament and supreme court where disputes appealed from lower courts were settled, laws were decided, and decisions of national importance were taken. One such example was the Christianisation of Iceland in 1000, where the Althing decreed, in order to prevent an invasion, that all Icelanders must be baptized, and forbade the celebration of pagan rituals. Unlike most states, the Icelandic Commonwealth had no official leader. In the early 13th century, the Age of the Sturlungs, the Commonwealth began to suffer from long conflicts between warring clans. This, combined with pressure from the Norwegian king Haakon IV for the Icelanders to rejoin the Norwegian "family", led the Icelandic chieftains to accept Haakon IV as king by the signing of the "Gamli sáttmáli" ("Old Covenant") in 1262. This effectively brought the Commonwealth to an end. The Althing, however, is still Iceland's parliament, almost 800 years later.

In Europe new republics appeared in the late Middle Ages when a number of small states embraced republican systems of government. These were generally small but wealthy trading states, like the Italian city-states and the Hanseatic League, in which the merchant class had risen to prominence. Knud Haakonssen has noted that, by the Renaissance, Europe was divided between monarchies, controlled by a landed elite, and republics, controlled by a commercial elite. Across Europe a wealthy merchant class developed in the important trading cities. Despite their wealth, they had little power in a feudal system dominated by the rural landowners, and across Europe they began to advocate for their own privileges and powers. The more centralized states, such as France and England, granted limited city charters.
In the more loosely governed Holy Roman Empire, 51 of the largest towns became free imperial cities. While these remained nominally under the dominion of the Holy Roman Emperor, most power was held locally, and many adopted republican forms of government. The same rights to imperial immediacy were secured by the major trading cities of Switzerland. The towns and villages of alpine Switzerland had, courtesy of geography, also been largely excluded from central control. Unlike in Italy and Germany, much of the rural area was thus not controlled by feudal barons but by independent farmers who also used communal forms of government. When the Habsburgs tried to reassert control over the region, both rural farmers and town merchants joined the rebellion. The Swiss were victorious, the Swiss Confederacy was proclaimed, and Switzerland has retained a republican form of government to the present.

Italy was the most densely populated area of Europe, and also one with the weakest central government. Many of the towns thus gained considerable independence and adopted commune forms of government. Completely free of feudal control, the Italian city-states expanded, gaining control of the rural hinterland. The two most powerful were the Republic of Venice and its rival the Republic of Genoa. Each was a large trading port, and further expanded by using naval power to control large parts of the Mediterranean. It was in Italy that an ideology advocating for republics first developed. Writers such as Bartholomew of Lucca, Brunetto Latini, Marsilius of Padua, and Leonardo Bruni saw the medieval city-states as heirs to the legacy of Greece and Rome. Two Russian cities with a powerful merchant class, Novgorod and Pskov, also adopted republican forms of government in the 12th and 13th centuries, respectively; these ended when the republics were conquered by Muscovy/Russia at the end of the 15th and the beginning of the 16th century.

The dominant form of government for these early republics was control by a limited council of elite patricians. In those areas that held elections, property qualifications or guild membership limited both who could vote and who could run. In many states no direct elections were held, and council members were hereditary or appointed by the existing council. This left the great majority of the population without political power, and riots and revolts by the lower classes were common. The late Middle Ages saw more than 200 such risings in the towns of the Holy Roman Empire. Similar revolts occurred in Italy, notably the Ciompi Revolt in Florence. Following the collapse of the Seljuk Sultanate of Rum and the establishment of the Turkish Anatolian Beyliks, the Ahiler merchant fraternities established a state centered on Ankara that is sometimes compared to the Italian mercantile republics.

While the classical writers had been the primary ideological source for the republics of Italy, in Northern Europe the Protestant Reformation would be used as justification for establishing new republics. Most important was Calvinist theology, which developed in the Swiss Confederacy, one of the largest and most powerful of the medieval republics. John Calvin did not call for the abolition of monarchy, but he advanced the doctrine that the faithful had the duty to overthrow irreligious monarchs. Advocacy for republics appeared in the writings of the Huguenots during the French Wars of Religion. Calvinism played an important role in the republican revolts in England and the Netherlands.
Like the city-states of Italy and the Hanseatic League, both were important trading centres, with a large merchant class prospering from the trade with the New World. Large parts of the population of both areas also embraced Calvinism. During the Dutch Revolt (beginning in 1566), the Dutch Republic emerged from rejection of Spanish Habsburg rule. However, the country did not adopt the republican form of government immediately: in the formal declaration of independence (Act of Abjuration, 1581), the throne of King Philip was only declared vacant, and the Dutch magistrates asked the Duke of Anjou, Queen Elizabeth of England and Prince William of Orange, one after another, to replace Philip. It took until 1588 before the Estates (the "Staten", the representative assembly at the time) decided to vest the sovereignty of the country in themselves. In 1642 the English Civil War began. Spearheaded by the Puritans and funded by the merchants of London, the revolt was a success, and King Charles I was executed. In England James Harrington, Algernon Sidney, and John Milton became some of the first writers to argue for rejecting monarchy and embracing a republican form of government. The English Commonwealth was short-lived, and the monarchy was soon restored. The Dutch Republic continued in name until 1795, but by the mid-18th century the stadtholder had become a "de facto" monarch. Calvinists were also some of the earliest settlers of the British and Dutch colonies of North America.

Along with these initial republican revolts, early modern Europe also saw a great increase in monarchical power. The era of absolute monarchy replaced the limited and decentralized monarchies that had existed in most of the Middle Ages. It also saw a reaction against the total control of the monarch, as a series of writers created the ideology known as liberalism. Most of these Enlightenment thinkers were far more interested in ideas of constitutional monarchy than in republics. The Cromwell regime had discredited republicanism, and most thinkers felt that republics ended in either anarchy or tyranny. Thus philosophers like Voltaire opposed absolutism while at the same time being strongly pro-monarchy. Jean-Jacques Rousseau and Montesquieu praised republics, and looked on the city-states of Greece as a model. However, both also felt that a nation-state like France, with 20 million people, would be impossible to govern as a republic. Rousseau admired the republican experiment in Corsica (1755–1769) and described his ideal political structure of small, self-governing communes. Montesquieu felt that a city-state should ideally be a republic, but maintained that a limited monarchy was better suited to a large nation.

The American Revolution began as a rejection only of the authority of the British Parliament over the colonies, not of the monarchy. The failure of the British monarch to protect the colonies from what they considered the infringement of their rights to representative government, the monarch's branding of those requesting redress as traitors, and his support for sending combat troops to demonstrate authority resulted in a widespread perception of the British monarchy as tyrannical. With the United States Declaration of Independence the leaders of the revolt firmly rejected the monarchy and embraced republicanism. The leaders of the revolution were well versed in the writings of the French liberal thinkers, and also in the history of the classical republics.
John Adams had notably written a book on republics throughout history. In addition, the widely distributed and popularly read-aloud tract "Common Sense", by Thomas Paine, succinctly and eloquently laid out the case for republican ideals and independence to the larger public. The Constitution of the United States, ratified in 1789, created a relatively strong federal republic to replace the relatively weak confederation that had operated under the first attempt at a national government, the Articles of Confederation and Perpetual Union, ratified in 1781. The first ten amendments to the Constitution, called the United States Bill of Rights, guaranteed certain natural rights fundamental to republican ideals that justified the Revolution.

The French Revolution was also not republican at its outset. Only after the Flight to Varennes removed most of the remaining sympathy for the king was a republic declared and Louis XVI sent to the guillotine. The stunning success of France in the French Revolutionary Wars saw republics spread by force of arms across much of Europe, as a series of client republics were set up across the continent. The rise of Napoleon saw the end of the French First Republic and its Sister Republics, each replaced by "popular monarchies". Throughout the Napoleonic period, the victors extinguished many of the oldest republics on the continent, including the Republic of Venice, the Republic of Genoa, and the Dutch Republic. They were eventually transformed into monarchies or absorbed into neighbouring monarchies.

Outside Europe, another group of republics was created as the Napoleonic Wars allowed the states of Latin America to gain their independence. Liberal ideology had only a limited impact on these new republics. The main impetus was the local European-descended Creole population in conflict with the Peninsulares, the governors sent from overseas. The majority of the population in most of Latin America was of either African or Amerindian descent, and the Creole elite had little interest in giving these groups power and broad-based popular sovereignty. Simón Bolívar, both the main instigator of the revolts and one of their most important theorists, was sympathetic to liberal ideals but felt that Latin America lacked the social cohesion for such a system to function and advocated autocracy as necessary. In Mexico this autocracy briefly took the form of a monarchy in the First Mexican Empire. Due to the Peninsular War, the Portuguese court was relocated to Brazil in 1808. Brazil gained independence as a monarchy on September 7, 1822, and the Empire of Brazil lasted until 1889. In the other states various forms of autocratic republic existed until most were liberalized at the end of the 20th century.

The French Second Republic was created in 1848, but abolished by Napoleon III, who proclaimed himself Emperor in 1852. The French Third Republic was established in 1870, when a civil revolutionary committee refused to accept Napoleon III's surrender during the Franco-Prussian War. Spain briefly became the First Spanish Republic in 1873–74, but the monarchy was soon restored. By the start of the 20th century France, Switzerland and San Marino remained the only republics in Europe. This changed when, after the 1908 Lisbon Regicide, the 5 October 1910 revolution established the Portuguese Republic. In East Asia, China had seen considerable anti-Qing sentiment during the 19th century, and a number of protest movements developed calling for constitutional monarchy.
The most important leader of these efforts was Sun Yat-sen, whose Three Principles of the People combined American, European, and Chinese ideas. Under his leadership the Republic of China was proclaimed on January 1, 1912. Republicanism expanded significantly in the aftermath of World War I, when several of the largest European empires collapsed: the Russian Empire (1917), the German Empire (1918), the Austro-Hungarian Empire (1918), and the Ottoman Empire (1922) were all replaced by republics. New states gained independence during this turmoil, and many of these, such as Ireland, Poland, Finland and Czechoslovakia, chose republican forms of government. Following Greece's defeat in the Greco-Turkish War (1919–22), the monarchy was briefly replaced by the Second Hellenic Republic (1924–35). In 1931, the proclamation of the Second Spanish Republic (1931–39) resulted in the Spanish Civil War, which would be the prelude to World War II.

Republican ideas were spreading, especially in Asia. The United States began to have considerable influence in East Asia in the later part of the 19th century, with Protestant missionaries playing a central role. The liberal and republican writers of the West also exerted influence. These combined with native Confucian-inspired political philosophy that had long argued that the populace had the right to reject unjust governments that had lost the Mandate of Heaven. Two short-lived republics were proclaimed in East Asia, the Republic of Formosa and the First Philippine Republic.

In the years following World War II, most of the remaining European colonies gained their independence, and most became republics. The two largest colonial powers were France and the United Kingdom. Republican France encouraged the establishment of republics in its former colonies. The United Kingdom attempted to follow the model it had used for its earlier settler colonies of creating independent Commonwealth realms still linked under the same monarchy. While most of the settler colonies and the smaller states of the Caribbean retained this system, it was rejected by the newly independent countries in Africa and Asia, which revised their constitutions and became republics. Britain followed a different model in the Middle East; it installed local monarchies in several colonies and mandates, including Iraq, Jordan, Kuwait, Bahrain, Oman, Yemen and Libya. In subsequent decades, revolutions and coups overthrew a number of these monarchs and installed republics. Several monarchies remain, and the Middle East is the only part of the world where several large states are ruled by monarchs with almost complete political control.

In the wake of the First World War, the Russian monarchy fell during the Russian Revolution. The Russian Provisional Government was established in its place on the lines of a liberal republic, but this was overthrown by the Bolsheviks, who went on to establish the Union of Soviet Socialist Republics. This was the first republic established under Marxist-Leninist ideology. Communism was wholly opposed to monarchy, and became an important element of many republican movements during the 20th century. The Russian Revolution spread into Mongolia, and overthrew its theocratic monarchy in 1924. In the aftermath of the Second World War the communists gradually gained control of Romania, Bulgaria, Yugoslavia, Hungary and Albania, ensuring that these states were re-established as socialist republics rather than monarchies. Communism also intermingled with other ideologies.
It was embraced by many national liberation movements during decolonization. In Vietnam, communist republicans pushed aside the Nguyễn Dynasty, and monarchies in neighbouring Laos and Cambodia were overthrown by communist movements in the 1970s. Arab socialism contributed to a series of revolts and coups that saw the monarchies of Egypt, Iraq, Libya, and Yemen ousted. In Africa, Marxism-Leninism and African socialism led to the end of monarchy and the proclamation of republics in states such as Burundi and Ethiopia.

Islamic political philosophy has a long history of opposition to absolute monarchy, notably in the work of Al-Farabi. Sharia law took precedence over the will of the ruler, and electing rulers by means of the Shura was an important doctrine. While the early caliphate maintained the principles of an elected ruler, later states became hereditary or military dictatorships, though many maintained some pretense of a consultative shura. None of these states are typically referred to as republics. The current usage of "republic" in Muslim countries is borrowed from the Western meaning, adopted into the language in the late 19th century. The 20th century saw republicanism become an important idea in much of the Middle East, as monarchies were removed in many states of the region. Iraq became a secular state. Some nations, such as Indonesia and Azerbaijan, began as secular. In Iran, the 1979 revolution overthrew the monarchy and created an Islamic republic based on the ideas of Islamic democracy.

With no monarch, most modern republics use the title president for the head of state. Originally used to refer to the presiding officer of a committee or governing body in Great Britain, the usage was also applied to political leaders, including the leaders of some of the Thirteen Colonies (originally Virginia in 1608); in full, the "President of the Council". The first republic to adopt the title was the United States of America. Keeping its usage as the head of a committee, the President of the Continental Congress was the leader of the original congress. When the new constitution was written, the title of President of the United States was conferred on the head of the new executive branch.

If the head of state of a republic is also the head of government, this is called a presidential system. There are a number of forms of presidential government. A full-presidential system has a president with substantial authority and a central political role. In other states, such as Germany, Trinidad and Tobago and India, the legislature is dominant and the presidential role is almost purely ceremonial and apolitical. These states are parliamentary republics and operate similarly to constitutional monarchies with parliamentary systems, where the power of the monarch is also greatly circumscribed. In parliamentary systems the head of government, most often titled prime minister, exercises the most real political power. Semi-presidential systems have a president as an active head of state, but also have a head of government with important powers. The rules for appointing the president and the leader of the government in some republics permit the appointment of a president and a prime minister who have opposing political convictions: in France, when the members of the ruling cabinet and the president come from opposing political factions, this situation is called cohabitation.
In some countries, like Switzerland, Bosnia and Herzegovina and San Marino, the head of state is not a single person but a committee (council) of several persons holding that office. The Roman Republic had two consuls, elected for a one-year term by the "comitia centuriata", consisting of all adult, freeborn males who could prove citizenship.

In liberal democracies presidents are elected, either directly by the people or indirectly by a parliament or council. Typically in presidential and semi-presidential systems the president is directly elected by the people, or is indirectly elected, as is done in the United States. In that country the president is officially elected by an electoral college whose members are chosen by the states, all of which do so by direct election of the electors. The indirect election of the president through the electoral college conforms to the concept of a republic as one with a system of indirect election. In the opinion of some, direct election confers legitimacy upon the president and gives the office much of its political power. However, this concept of legitimacy differs from that expressed in the United States Constitution, which established the legitimacy of the United States president as resulting from the ratification of the Constitution by nine states. The idea that direct election is required for legitimacy also contradicts the spirit of the Great Compromise, whose actual result was manifest in the clause that provides voters in smaller states with more representation in presidential selection than those in large states; for example, citizens of Wyoming in 2016 had 3.6 times as much electoral-vote representation as citizens of California. In states with a parliamentary system the president is usually elected by the parliament. This indirect election subordinates the president to the parliament, and also gives the president limited legitimacy, turning most presidential powers into reserve powers that can only be exercised under rare circumstances. There are exceptions where elected presidents have only ceremonial powers, such as in Ireland.

The distinction between a republic and a monarchy is not always clear. The constitutional monarchies of the former British Empire and Western Europe today have almost all real political power vested in the elected representatives, with the monarchs holding only theoretical powers, no powers, or rarely used reserve powers. Real legitimacy for political decisions comes from the elected representatives and is derived from the will of the people. While hereditary monarchies remain in place, political power is derived from the people as in a republic. These states are thus sometimes referred to as crowned republics. Terms such as "liberal republic" are also used to describe all of the modern liberal democracies.

There are also self-proclaimed republics that act similarly to monarchies, with absolute power vested in the leader and passed down from father to son. North Korea and Syria are two notable examples where a son has inherited political control. Neither of these states is officially a monarchy. There is no constitutional requirement that power be passed down within one family, but it has occurred in practice. There are also elective monarchies where ultimate power is vested in a monarch, but the monarch is chosen by some manner of election.
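The 3.6 figure above can be checked with a back-of-the-envelope calculation. Assuming it is based on the 2010 census populations (which governed the 2016 allocation of electors; the text above does not say which figures were used), Wyoming had 3 electors for 563,626 residents and California 55 electors for 37,253,956 residents, so the ratio of electoral votes per resident is

\[ \frac{3/563{,}626}{55/37{,}253{,}956} \approx \frac{5.32\times10^{-6}}{1.48\times10^{-6}} \approx 3.6 \]

that is, roughly 3.6 electoral votes per resident in Wyoming for every one in California.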
Current examples of such states include Malaysia, where the Yang di-Pertuan Agong is elected every five years by the Conference of Rulers, composed of the nine hereditary rulers of the Malay states, and the Vatican City State, where the pope is selected by cardinal-electors, currently all cardinals under a specific age. While rare today, elective monarchs were common in the past. The Holy Roman Empire is an important example, where each new emperor was chosen by a group of electors. Islamic states also rarely employed primogeniture, instead relying on various forms of election to choose a monarch's successor. The Polish–Lithuanian Commonwealth had an elective monarchy, with a wide suffrage of some 500,000 nobles. The system, known as the Golden Liberty, had developed as a method for powerful landowners to control the crown. The proponents of this system looked to classical examples, and the writings of the Italian Renaissance, and called their elective monarchy a "rzeczpospolita", based on "res publica." In general, being a republic also implies sovereignty, since for the state to be ruled by the people it cannot be controlled by a foreign power. There are important exceptions to this: for example, republics in the Soviet Union were member states which had to meet three criteria to be named republics. It is sometimes argued that the former Soviet Union was also a supra-national republic, based on the claim that the member states were different nations. The Socialist Federal Republic of Yugoslavia was a federal entity composed of six republics (the Socialist Republics of Bosnia and Herzegovina, Croatia, Macedonia, Montenegro, Serbia, and Slovenia). Each republic had its own parliament, government, institution of citizenship, constitution, etc., but certain functions were delegated to the federation (army, monetary matters). Each republic also had a right of self-determination according to the conclusions of the second session of the AVNOJ and according to the federal constitution. States of the United States are required, like the federal government, to be republican in form, with final authority resting with the people. This was required because the states were intended to create and enforce most domestic laws, with the exception of areas delegated to the federal government and prohibited to the states; the founding fathers of the country intended most domestic laws to be handled by the states. Requiring the states to be republics in form was seen as protecting citizens' rights and preventing a state from becoming a dictatorship or monarchy, and reflected unwillingness on the part of the original 13 states (all independent republics) to unite with other states that were not republics. Additionally, this requirement ensured that only other republics could join the union. In the example of the United States, the original 13 British colonies became independent states after the American Revolution, each having a republican form of government. These independent states initially formed a loose confederation called the United States and then later formed the current United States by ratifying the current U.S. Constitution, creating a union of sovereign states with the union or federal government also being a republic. Any state joining the union later was also required to be a republic. The term "republic" was used by the writers of the Renaissance as a descriptive term for states that were not monarchies. 
These writers, such as Machiavelli, also wrote important prescriptive works describing how such governments should function. These ideas of how a government and society should be structured are the basis for an ideology known as classical republicanism or civic humanism. This ideology is based on the Roman Republic and the city states of Ancient Greece and focuses on ideals such as civic virtue, rule of law, and mixed government. This understanding of a republic as a form of government distinct from a liberal democracy is one of the main theses of the Cambridge School of historical analysis. This grew out of the work of J. G. A. Pocock, who in 1975 argued that a series of scholars had expressed a consistent set of republican ideals. These writers included Machiavelli, Milton, Montesquieu and the founders of the United States of America. Pocock argued that this was an ideology with a history and principles distinct from liberalism. These ideas were embraced by a number of different writers, including Quentin Skinner, Philip Pettit and Cass Sunstein. These subsequent writers have further explored the history of the idea, and also outlined how a modern republic should function. A distinct set of definitions for the word republic evolved in the United States. In common parlance, a republic is a state that does not practice direct democracy but rather has a government indirectly controlled by the people. This understanding of the term was originally developed by James Madison, and notably employed in Federalist Paper No. 10. This meaning was widely adopted early in the history of the United States, including in Noah Webster's dictionary of 1828. It was a novel meaning of the term; representative democracy was not an idea mentioned by Machiavelli and did not exist in the classical republics. There is also evidence that contemporaries of Madison considered the meaning of the word to reflect the definition found elsewhere, as is the case with a quotation of Benjamin Franklin taken from the notes of James McHenry, where the question is put forth, "a Republic or a Monarchy?". The term republic does not appear in the Declaration of Independence, but does appear in Article IV of the Constitution, which "guarantee[s] to every State in this Union a Republican form of Government." What exactly the writers of the constitution felt this should mean is uncertain. The Supreme Court, in "Luther v. Borden" (1849), declared that the definition of "republic" was a "political question" in which it would not intervene. In two later cases, it did establish a basic definition. In "United States v. Cruikshank" (1875), the court ruled that the "equal rights of citizens" were inherent to the idea of a republic. However, the term republic is not synonymous with the republican form. The republican form is defined as one in which the powers of sovereignty are vested in the people and are exercised by the people, either directly, or through representatives chosen by the people, to whom those powers are specially delegated. Beyond these basic definitions the word republic has a number of other connotations. W. Paul Adams observes that republic is most often used in the United States as a synonym for state or government, but with more positive connotations than either of those terms. Republicanism is often referred to as the founding ideology of the United States. Traditionally, scholars believed this American republicanism was a derivation of the classical liberal ideologies of John Locke and others developed in Europe. 
A political philosophy of republicanism, formed during the Renaissance and initiated by Machiavelli, was thought to have had little impact on the founders of the United States. In the 1960s and 1970s, a revisionist school led by the likes of Bernard Bailyn began to argue that republicanism was just as important as liberalism, or even more so, in the creation of the United States. This issue is still much disputed, and scholars like Isaac Kramnick completely reject this view.
https://en.wikipedia.org/wiki?curid=25536
Robyn Robin Miriam Carlsson (born 12 June 1979), known as Robyn, is a Swedish singer, songwriter, record producer and DJ. She arrived on the music scene with her 1995 debut album, "Robyn Is Here", which produced two US "Billboard" Hot 100 top-10 singles: "Do You Know (What It Takes)" and "Show Me Love". Her second and third albums, "My Truth" (1999) and "Don't Stop the Music" (2002), were released in Sweden. Robyn returned to international success with her fourth album, "Robyn" (2005), which brought a Grammy Award nomination. The album spawned the singles "Be Mine!" and the UK number one "With Every Heartbeat". Robyn released a trilogy of mini-albums in 2010, known as the "Body Talk" series. They received broad critical praise, three Grammy Award nominations, and produced three top-10 singles: "Dancing On My Own", "Hang with Me" and "Indestructible". Robyn followed this with two collaborative EPs: "Do It Again" (2014) with Röyksopp, and "Love Is Free" (2015) with La Bagatelle Magique. She released her eighth solo album, "Honey", in 2018. Robyn voiced the character of Miranda in the 1989 Swedish-Norwegian animated film "The Journey to Melonia". Directed by Per Åhlin, the film is loosely based on William Shakespeare's "The Tempest". She recorded "Du kan alltid bli nummer ett" ("You Can Always Be Number One"), the theme song for the Swedish television show "Lilla Sportspegeln", in 1991 at age 12. Robyn performed her first original song at that age on another television show, "Söndagsöppet" ("Sundays"). She was discovered by Swedish pop singer Meja in the early 1990s when Meja and her band, Legacy of Sound, visited Robyn's school as part of a musical workshop. Impressed by Robyn's performance, Meja contacted her management and a meeting was arranged with Robyn and her parents. At age 14, after completing middle school in 1993, Robyn signed with Ricochet Records Sweden (which was acquired by BMG in 1994). Robyn collaborated with producers Max Martin and Denniz Pop, who gave the singer a gritty (but popular) sound. She began her pop music career at age 15, signing with RCA Records in 1994 and releasing her debut single ("You've Got That Somethin'") in Sweden. Later that year, Robyn's Swedish breakthrough came with the single "Do You Really Want Me (Show Respect)". The singles became part of the album "Robyn Is Here", which was released in October 1995. Robyn also contributed vocals to Blacknuss' 1996 single, "Roll with Me." She entered Sweden's pre-selection for the Eurovision Song Contest 1997 as co-writer and producer of "Du gör mig hel igen" ("You Make Me Whole Again"), which was performed by Cajsalisa Ejemyr. In Melodifestivalen 1997, the song finished fourth. Robyn's US breakthrough came in late 1997, when the dance-pop singles "Show Me Love" and "Do You Know (What It Takes)" reached the top 10 of the "Billboard" Hot 100. She performed "Show Me Love" on the American children's show "All That" that year, and the songs also performed well in the UK. Robyn re-released "Do You Really Want Me (Show Respect)" internationally, but it was less successful than the other releases. It was ineligible for the US charts because there was no retail single available, but it reached number 32 on the Hot 100 Airplay chart. "Show Me Love" was featured in the 1998 Lukas Moodysson film "Fucking Åmål", and the song's title was used as the title of the film in English-speaking countries. As Robyn's popularity grew internationally, she was diagnosed with exhaustion and returned to Sweden to recover. 
Robyn's second album, "My Truth", was released in Sweden in May 1999 and subsequently in Europe. The single, "Electric", was a commercial success and propelled "My Truth" to the number-two position in Sweden. The autobiographical album included the tracks, "Universal Woman" and "Giving You Back". Despite her US success with "Robyn Is Here", "My Truth" was not released in that country, partly because it included two songs which referenced an abortion she had in her teens. Robyn contributed to Christian Falk's 1999 debut solo album, "Quel Bordel" ("What a Mess"), appearing on "Remember" and "Celebration". The following year, she appeared on "Intro/Fristil" on Petter's self-titled album. In 2001, Robyn performed "Say You'll Walk the Distance" for the soundtrack of "On the Line". She signed a worldwide deal with Jive Records in July 2001, moving from BMG after the singer was "disillusioned with the lack of artistic control [she] had there"; a year later, Jive was acquired by BMG when it bought Zomba Records. Robyn later said, "I was back where I started!" In October 2002, she released the album "Don't Stop the Music" in Sweden. The album's singles, "Keep This Fire Burning" and "Don't Stop the Music", received airplay in Scandinavia and elsewhere in Europe. The title track was later covered by the Swedish girl group Play, and the lead single ("Keep This Fire Burning") was covered by the British soul singer Beverley Knight. In May 2004, "Robyn's Best" was released in the US. It was a condensed version of her debut album, with no material from her later releases. In 2006, after her departure from BMG, "Det Bästa Med Robyn" ("The Best of Robyn") was released in Sweden with material from her first three albums; notable omissions, however, were the singles "Don't Stop the Music" and "Keep This Fire Burning". The decade-long relationship between Robyn and her label ended in 2004. When Jive Records reacted negatively to "Who's That Girl?s new electropop sound, the singer decided to release music on her own. In early 2005, she announced that she was leaving Jive to start her own label. Konichiwa Records was created to liberate Robyn artistically. She said on her website that her new album would be released earlier than anticipated, with notable collaborators including Klas Åhlund from Teddybears STHLM, Swedish duo The Knife and former Cheiron Studios producer Alexander Kronlund. Robyn released the single "Be Mine!" in March 2005. Her fourth album, "Robyn", was her first number-one album in Sweden when it was released a month later. Influenced by electronica, rap, R&B and new-age music, "Robyn" was critically praised and earned the singer three 2006 Swedish Grammy Awards: "Årets Album" (Best Album), "Årets Kompositör" (Best Writer, with Klas Åhlund) and "Årets Pop Kvinnlig" (Best Pop Female). The album evoked global interest in Robyn, who was recognized for co-writing the song "Money for Nothing" for Darin Zanyar (his debut single). She released three more singles—"Who's That Girl?", "Handle Me" and "Crash and Burn Girl"—from the eponymous LP, which was popular in Sweden. Robyn appeared on the Basement Jaxx track "Hey U" from their 2006 album, "Crazy Itch Radio", and contributed "Dream On" and "C.C.C" to Christian Falk's "People Say" (his second album) that year. In December 2006, Robyn released "The Rakamonie EP" in the UK as a preview of her more-recent material; this was followed by the March 2007 release of "Konichiwa Bitches". 
A revised edition of "Robyn" was released in the UK the following month, with two new tracks—"With Every Heartbeat" (a collaboration with Kleerup) and "Cobrastyle" (a cover of a 2006 single by Swedish rockers Teddybears)—with slightly-altered versions of the original music. The second single from the UK release was "With Every Heartbeat", which was released in late July 2007 and reached number one on the UK singles chart. Robyn appeared on Jo Whiley's BBC Radio 1 showcase show, "Live Lounge". In Australia, where "Robyn" reached the top ten of the iTunes Store's album chart, "With Every Heartbeat" received attention on radio and video networks. Robyn contributed vocals to Fleshquartet's single, "This One's for You", from their "Voices of Eden" album that year. Konichiwa Records signed an international licensing deal with Universal Music Group to distribute Robyn's music globally, and her UK recordings are released by Island Records. "The Rakamonie EP" was released in January 2008 by Cherrytree Records (a subsidiary of Interscope Records), and the US version of "Robyn" was released in April of that year. "With Every Heartbeat", "Handle Me" and "Cobrastyle" were top-10 club singles, and "With Every Heartbeat" received airplay on US pop and dance radio stations. Robyn provided backing vocals on Britney Spears' 2007 single, "Piece of Me", and appeared on the Fyre Department remix of "Sexual Eruption" by rapper Snoop Dogg. She made a brief US tour to promote "Robyn", and was the supporting act for Madonna's Sticky & Sweet Tour on European dates in 2008. In January 2009, Robyn received a 2008 Swedish Grammis Award for Best Live Act. She released the first album of the "Body Talk" trilogy, "Body Talk Pt. 1", on 14 June 2010 in the Nordic countries on EMI and on 15 June in the US on Interscope Records. It was preceded by the single "Dancing on My Own" on 1 June 2010. The song was Robyn's first number-one single in Sweden and her fourth top-10 single in the UK and the US, peaking at number eight on the UK Singles Chart and number three on "Billboard"'s Hot Dance Club Songs chart. In July 2010, she sang a minimalist, electro cover version of Alicia Keys' "Try Sleeping with a Broken Heart" live on IHeartRadio. Robyn made the All Hearts Tour in July and August 2010 with American singer Kelis to promote the "Body Talk" albums, and a four-date UK tour at the end of October. On 6 September 2010, "Body Talk Pt. 2" was released in the UK. It was preceded by the lead single, a dance version of "Hang with Me" from "Body Talk Pt. 1", the day before. The album includes a duet with American rapper Snoop Dogg, "U Should Know Better". Robyn performed "Dancing on My Own" with deadmau5 at the 2010 MTV Video Music Awards on 12 September. In a BBC "Newsbeat" interview, she explained her decision to release three albums in one year: "It was just something I felt like I needed to do. I just never thought about selling records or not, making this decision. I just did it for myself. It's a way of, for me, to stay inspired and to be able to do the things I like to do". However, Robyn said that she would not do it again: "When you do 16 or 13 songs in one go, you kind of empty yourself, and it takes a while to fill back up and have new things to talk about, so I think it's good for everyone". Robyn announced the release of the single, "Indestructible", on 13 October 2010; an acoustic version appeared on "Body Talk Pt. 2". The song was released on 17 November in Scandinavia and 22 November in the UK. 
Co-written by Klas Åhlund, it was described as a "pulsating full power version [that] takes every ounce of that emotion and wraps it up in another exceptional disco-pop record worthy of any dance-floor or passion-laden sing-a-long." Robyn planned to collaborate with Swedish producer Max Martin on the song "Time Machine"; Martin had produced Robyn's US singles "Do You Know (What It Takes)" and "Show Me Love", both of which peaked in the top 10 of the "Billboard" Hot 100 in 1997. The "Body Talk" albums have sold 91,000 copies in the US. Robyn guest-starred on "War at the Roses", a 2010 episode of "Gossip Girl", where she performed an acoustic version of "Hang with Me"; "Dancing on My Own" was featured at the end of the episode. In November, she said she would return to the studio in January 2011 with enough material to release a new album later that year. Robyn opened for Coldplay on their 2012 tour in Dallas, Houston, Tampa, Miami, Atlanta, Charlotte, Philadelphia and Washington, D.C. In mid-2013, she appeared with Paul Rudd and Sean Combs on "Go Kindergarten" from the Lonely Island's "The Wack Album". Robyn posted two videos of the Snoop Dogg collaboration ("U Should Know Better" and "Behind The Scenes") and a game, Mixory, on 21 and 22 June 2013. That year she received the Stockholm KTH Royal Institute of Technology Great Prize for "artistic contributions and embrace of technology", worth 1.2 million Swedish kronor (around £117,000 at the time), which she planned to donate to a cause of her choice. Robyn sang on Neneh Cherry's "Out of the Black", from Cherry's album "Blank Project", in 2014. She also announced the Do It Again Tour with Röyksopp and a collaborative mini-album, "Do It Again", that year. The tour ended prematurely after the death of Robyn's longtime friend and collaborator, Christian Falk. An EP of their final collaboration, "Love Is Free", was released soon afterwards. Robyn appeared at the Popaganda Festival in Sweden the following year and performed songs written with Falk before she postponed subsequent performances because she was still grieving. She premiered a dance set of remixed versions of her songs at the May 2016 Boston Calling Music Festival, with plans for more dates during the year. Robyn released "Trust Me", a collaboration EP with Mr. Tophat, in November 2016. She appeared on "That Could Have Been Me", a track from Todd Rundgren's album "White Knight", the following year. In March 2017, a new Robyn song called "Honey" was used in the soundtrack of the final season of the HBO TV series "Girls". The show's creator, Lena Dunham, selected it from a collection of Robyn's in-progress tracks, and Robyn finalized it specially for the series. In February 2018, Robyn told a fan on Twitter that she would release her new album "some time this year". During an interview with Kindness, she revealed she was almost done with her new album. Afterwards, at a party, she debuted the full version of her new song "Honey". On 23 July, a new song entitled "Missing U" was listed as a single, then taken down. Fans quickly began noticing the hints she was dropping, including a post on Twitter with the hashtag #MissingU. The song was released on 1 August 2018, when Robyn presented it on Annie Mac's BBC Radio 1 show. There she talked about the long silence and the process of making the upcoming album, which was to be released before the end of 2018. Robyn also released a mini-documentary featuring the song and a tribute to the fans who had been missing her and her new music for years. 
On 19 September 2018, Robyn announced that her upcoming album, titled "Honey", would be released on 26 October 2018. In November 2018, Robyn announced she would be touring across North America and Europe in 2019. The trek kicked off on 5 February 2019 and ended in April. On 27 September 2019, she performed in Kungsträdgården in Stockholm during the international climate strikes. Before singing "Ever Again", she also told the audience she had met climate researcher Johan Rockström. In February 2020, she accepted the award for Songwriter of the Decade at the 2020 NME Awards. In March 2020, the global critic aggregator Acclaimed Music ranked "Dancing on My Own" as the greatest song of the 2010s. Robyn's parents led an independent theatre group, and growing up in that environment influenced her sense of style: "I was around people who dressed up for work every day, and so the concept of how you can use clothes to change your personality or communicate who you are is very interesting to me." Robyn has two younger siblings. Robyn began dating Olof Inger in 2002, and they were engaged until 2011. She later became engaged to videographer Max Vitali, referring to him in a 2013 interview with "Collection of Style" magazine as her fiancé: "We became friends when we made the video for 'Be Mine', and now we work together a lot. He made all the videos for the last album." She and Vitali separated for a period of time following the release of "Body Talk", but had reconciled by 2018.
https://en.wikipedia.org/wiki?curid=25538
Request for Comments Request for Comments (RFC), in information and communications technology, is a type of text document from the technology community. An RFC document may come from many bodies, including the Internet Engineering Task Force (IETF), the Internet Research Task Force (IRTF), the Internet Architecture Board (IAB), or independent authors. The RFC system is supported by the Internet Society (ISOC). An RFC is authored by engineers and computer scientists in the form of a memorandum describing methods, behaviors, research, or innovations applicable to the working of the Internet and Internet-connected systems. It is submitted either for peer review or to convey new concepts, information, or occasionally engineering humor. The IETF adopts some of the proposals published as RFCs as Internet Standards. However, many RFCs are informational or experimental in nature and are not standards. The RFC system was invented by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications, communications protocols, procedures, and events. According to Crocker, the documents "shape the Internet's inner workings and have played a significant role in its success," but are not well known outside the community. Requests for Comments are produced in a non-reflowable document format, but work has begun to change the format to a reflowable one, so that documents can be viewed on devices with restricted screen sizes. Outside of the Internet community, Requests for Comments have often been published in U.S. federal government work, such as by the National Highway Traffic Safety Administration. The inception of the RFC format occurred in 1969 as part of the seminal ARPANET project. Today, it is the official publication channel for the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and to some extent the global community of computer network researchers in general. The authors of the first RFCs typewrote their work and circulated hard copies among the ARPA researchers. Unlike the modern RFCs, many of the early RFCs were actual requests for comments and were titled as such to avoid sounding too declarative and to encourage discussion. Such an RFC would leave questions open and be written in a less formal style. This less formal style is now typical of Internet Draft documents, the precursor step before being approved as an RFC. In December 1969, researchers began distributing new RFCs via the newly operational ARPANET. RFC 1, titled "Host Software", was written by Steve Crocker of the University of California, Los Angeles (UCLA), and published on April 7, 1969. Although written by Steve Crocker, the RFC had emerged from an early working group discussion between Steve Crocker, Steve Carr, and Jeff Rulifson. In RFC 3, which first defined the RFC series, Crocker started attributing the RFC series to the Network Working Group. Rather than being a formal committee, it was a loose association of researchers interested in the ARPANET project. In effect, it included anyone who wanted to join the meetings and discussions about the project. Many of the subsequent RFCs of the 1970s also came from UCLA, because UCLA hosted one of the first Interface Message Processors (IMPs) on ARPANET. The Augmentation Research Center (ARC) at Stanford Research Institute, directed by Douglas Engelbart, was another of the first four ARPANET nodes and the source of early RFCs. 
The ARC became the first network information center (NIC), which was managed by Elizabeth J. Feinler to distribute the RFCs along with other network information. From 1969 until 1998, Jon Postel served as the RFC editor. On his death in 1998, his obituary was published as RFC 2468. Following the expiration of the original ARPANET contract with the U.S. federal government, the Internet Society, acting on behalf of the IETF, contracted with the Networking Division of the University of Southern California (USC) Information Sciences Institute (ISI) to assume the editorship and publishing responsibilities under the direction of the IAB. Sandy Ginoza joined USC/ISI in 1999 to work on RFC editing, and Alice Hagens in 2005. Bob Braden took over the role of RFC project lead, while Joyce K. Reynolds continued to be part of the team until October 13, 2006. In July 2007, "streams" of RFCs were defined, so that the editing duties could be divided. IETF documents came from IETF working groups or submissions sponsored by an IETF area director from the Internet Engineering Steering Group. The IAB can publish its own documents. A research stream of documents comes from the Internet Research Task Force (IRTF), and an independent stream from other outside sources. A new model was proposed in 2008, refined, and published in August 2009, splitting the task into several roles, including the RFC Series Advisory Group (RSAG). The model was updated in 2012. The streams were also refined in December 2009, with standards defined for their style. In January 2010 the RFC editor function was moved to a contractor, Association Management Solutions, with Glenn Kowack serving as interim series editor. In late 2011, Heather Flanagan was hired as the permanent RFC Series Editor. Also at that time, an RFC Series Oversight Committee (RSOC) was created. The RFC Editor assigns each RFC a serial number. Once assigned a number and published, an RFC is never rescinded or modified; if the document requires amendments, the authors publish a revised document. Therefore, some RFCs supersede others; the superseded RFCs are said to be "deprecated", "obsolete", or "obsoleted by" the superseding RFC. Together, the serialized RFCs compose a continuous historical record of the evolution of Internet standards and practices. The RFC process is documented in RFC 2026 ("The Internet Standards Process, Revision 3"). The RFC production process differs from the standardization process of formal standards organizations such as the International Organization for Standardization (ISO). Internet technology experts may submit an Internet Draft without support from an external institution. Standards-track RFCs are published with approval from the IETF, and are usually produced by experts participating in IETF Working Groups, which first publish an Internet Draft. This approach facilitates initial rounds of peer review before documents mature into RFCs. The RFC tradition of pragmatic, experience-driven, after-the-fact standards authorship accomplished by individuals or small working groups can have important advantages over the more formal, committee-driven process typical of ISO and national standards bodies. Most RFCs use a common set of terms such as "MUST" and "NOT RECOMMENDED" (as defined by RFC 2119 and RFC 8174), augmented Backus–Naur form (ABNF) (RFC 5234) as a meta-language, and simple text-based formatting, in order to keep the RFCs consistent and easy to understand. The RFC series contains three sub-series for IETF RFCs: BCP, FYI, and STD. 
Best Current Practice (BCP) is a sub-series of mandatory IETF RFCs not on the standards track. For Your Information (FYI) is a sub-series of informational RFCs promoted by the IETF as specified in RFC 1150 (FYI 1). In 2011, RFC 6360 obsoleted FYI 1 and concluded this sub-series. Standard (STD) used to be the third and highest maturity level of the IETF standards track specified in RFC 2026 (BCP 9). In 2011, RFC 6410 (a new part of BCP 9) reduced the standards track to two maturity levels. There are four streams of RFCs: IETF, IRTF, IAB, and "independent submission". Only the IETF creates BCPs and RFCs on the standards track. An "independent submission" is checked by the IESG for conflicts with IETF work; the quality is assessed by an independent submission editorial board. In other words, IRTF and independent RFCs are supposed to contain relevant information or experiments for the Internet at large that do not conflict with IETF work; compare RFC 4846, RFC 5742, and RFC 5744. The official source for RFCs on the World Wide Web is the RFC Editor. Almost any published RFC can be retrieved via a URL of the form http://www.rfc-editor.org/rfc/rfc5000.txt, shown for RFC 5000. Every RFC is submitted as plain ASCII text and is published in that form, but may also be available in other formats. For easy access to the metadata of an RFC, including abstract, keywords, author(s), publication date, errata, status, and especially later updates, the RFC Editor site offers a search form with many features. The official International Standard Serial Number (ISSN) of the RFC series is 2070-1721. Not all RFCs are standards. Each RFC is assigned a designation with regard to status within the Internet standardization process. This status is one of the following: "Informational", "Experimental", "Best Current Practice", "Standards Track", or "Historic". Each RFC is static; if the document is changed, it is submitted again and assigned a new RFC number. Standards-track documents are further divided into "Proposed Standard" and "Internet Standard" documents. Only the IETF, represented by the Internet Engineering Steering Group (IESG), can approve standards-track RFCs. If an RFC becomes an Internet Standard (STD), it is assigned an STD number but retains its RFC number. The definitive list of Internet Standards is the Official Internet Protocol Standards; previously, STD 1 maintained a snapshot of the list. When an Internet Standard is updated, its STD number stays the same, now referring to a new RFC or set of RFCs. A given Internet Standard, STD "n", may be RFCs "x" and "y" at a given time, but later the same standard may be updated to be RFC "z" instead. For example, in 2007 RFC 3700 was an Internet Standard—STD 1—and in May 2008 it was replaced with RFC 5000, so RFC 3700 changed to "Historic", RFC 5000 became an Internet Standard, and STD 1 became RFC 5000. RFC 5000 was in turn replaced by RFC 7100, which updated RFC 2026 to no longer use STD 1. (Best Current Practices work in a similar fashion; BCP "n" refers to a certain RFC or set of RFCs, but which RFC or RFCs it refers to may change over time.) An "informational" RFC can be nearly anything from April 1 jokes to widely recognized essential RFCs like Domain Name System Structure and Delegation (RFC 1591). Some informational RFCs formed the FYI sub-series. An "experimental" RFC can be an IETF document or an individual submission to the RFC Editor. 
A draft is designated experimental if it is unclear whether the proposal will work as intended or whether it will be widely adopted. An experimental RFC may be promoted to the standards track if it becomes popular and works well. The Best Current Practice sub-series collects administrative documents and other texts which are considered official rules and not merely "informational", but which do not affect "over the wire" data. The border between the standards track and BCP is often unclear. If a document only affects the Internet Standards Process, like BCP 9, or IETF administration, it is clearly a BCP. If it only defines rules and regulations for Internet Assigned Numbers Authority (IANA) registries, it is less clear; most of these documents are BCPs, but some are on the standards track. The BCP series also covers technical recommendations for how to practice Internet standards; for instance, the recommendation to use source filtering to make DoS attacks more difficult (RFC 2827: "Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing") is BCP 38. A "historic" RFC is one whose technology is no longer recommended for use, a status distinct from being listed in the "Obsoletes" header of a replacement RFC. For example, RFC 821 (SMTP) is itself obsoleted by various newer RFCs, but SMTP itself is still "current technology," so it is not in "Historic" status. However, since BGP version 4 has entirely superseded earlier BGP versions, the RFCs describing those earlier versions, such as RFC 1267, have been designated historic. Status "unknown" is used for some very old RFCs, where it is unclear which status the document would get if it were published today. Some of these RFCs would not be published at all today; an early RFC was often just that: a simple request for comments, not intended to specify a protocol, administrative procedure, or anything else for which the RFC series is used today.
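Because every RFC is published as plain ASCII text at a predictable rfc-editor.org URL (the form shown above for RFC 5000), retrieving and inspecting one programmatically is straightforward. The following Python sketch is illustrative only, not official IETF tooling: it fetches an RFC by number using that URL form and tallies the RFC 2119 requirement keywords mentioned earlier; the function names and the longest-match-first counting approach are this example's own.

import re
import urllib.request

# RFC 2119/8174 requirement keywords, ordered so that multi-word forms
# ("MUST NOT") are counted before their shorter substrings ("MUST").
KEYWORDS = ["MUST NOT", "MUST", "SHALL NOT", "SHALL", "SHOULD NOT",
            "SHOULD", "NOT RECOMMENDED", "RECOMMENDED", "REQUIRED",
            "MAY", "OPTIONAL"]

def fetch_rfc(number):
    """Retrieve the plain-text form of an RFC from the RFC Editor."""
    url = "https://www.rfc-editor.org/rfc/rfc%d.txt" % number
    with urllib.request.urlopen(url) as response:
        return response.read().decode("ascii", errors="replace")

def count_keywords(text):
    """Count each RFC 2119 keyword, removing matches as they are counted
    so that 'MUST NOT' is not also tallied as a 'MUST'."""
    counts = {}
    for keyword in KEYWORDS:
        pattern = r"\b" + keyword + r"\b"
        counts[keyword] = len(re.findall(pattern, text))
        text = re.sub(pattern, "", text)
    return counts

if __name__ == "__main__":
    text = fetch_rfc(2119)   # RFC 2119 defines the keywords themselves
    for keyword, n in count_keywords(text).items():
        print("%-15s %d" % (keyword, n))

Since RFCs are guaranteed to be available as plain ASCII, no parsing of markup is needed; the same fetch works for any RFC number that has been published.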
https://en.wikipedia.org/wiki?curid=25540
Ragga Raggamuffin music, usually abbreviated as ragga, is a subgenre of dancehall and reggae music. The instrumentals are primarily electronic. As in hip hop, sampling often serves a prominent role in raggamuffin music. Wayne Smith's "Under Mi Sleng Teng", produced by King Jammy in 1985 on a Casio MT-40 synthesizer, is generally recognized as the seminal ragga song. "Sleng Teng" boosted Jammy's popularity immensely, and other producers quickly released their own versions of the riddim, accompanied by dozens of different vocalists. Ragga is now mainly used as a synonym for dancehall reggae or for describing dancehall with a deejay chatting rather than singjaying or singing on top of the riddim. Ragga originated in Jamaica during the 1980s, at the same time that electronic dance music's popularity was increasing globally. One of the reasons for ragga's swift propagation is that it is generally easier and less expensive to produce than reggae performed on traditional musical instruments. Ragga evolved first in Jamaica, and later in Europe, North America, and Africa, eventually spreading to Japan, India, and the rest of the world. Ragga heavily influenced early jungle music, and also spawned the syncretistic bhangragga style when fused with bhangra. In the 1990s, ragga and breakcore music fused, creating a style known as raggacore. The term "raggamuffin" is an intentional misspelling of "ragamuffin", a word that entered the Jamaican Patois lexicon after the British Empire colonized Jamaica in the 17th century. Despite the British colonialists' pejorative application of the term, Jamaican youth appropriated it as an in-group designation. The term "raggamuffin music" describes the music of Jamaica's "ghetto dwellers". In the late 1980s, influential Jamaican rapper Daddy Freddy's pioneering efforts in fusing ragga with hip hop music earned him international acclaim while helping to publicize and popularize ragga. In 1987, Daddy Freddy and Asher D's "Ragamuffin Hip-Hop" became the first multinational single to feature the word "ragga" in its title. In 1992, Canadian hip hop group Rascalz released their debut album under the name Ragga Muffin Rascals. As ragga matured, an increasing number of dancehall artists began to appropriate stylistic elements of hip hop music, while ragga music, in turn, influenced more and more hip hop artists, most notably KRS-One, Poor Righteous Teachers, the Boot Camp Clik, Das EFX, Busta Rhymes, as well as some artists with ragga-influenced styles, like early Common, Main Source, Ill Al Scratch, Fu-Schnickens, and Redman. Artists like Mad Lion grew in popularity during this early 1990s trend, exemplified by his crossing from reggae to hip-hop culture. Some ragga artists believe that the assimilation of hip hop sensibilities is crucial to the international marketability of dancehall music. Indeed, the appeal to contemporary rhythm and blues and hip hop audiences in the English-speaking world contributed substantially to ragga's multinational commercial success.
https://en.wikipedia.org/wiki?curid=25596
Religious conversion Religious conversion is the adoption of a set of beliefs identified with one particular religious denomination to the exclusion of others. Thus "religious conversion" would describe the abandoning of adherence to one denomination and affiliating with another. This might be from one denomination to another within the same religion, for example, from Baptist to Catholic Christianity or from Sunni Islam to Shi’a Islam. In some cases, religious conversion "marks a transformation of religious identity and is symbolized by special rituals". People convert to a different religion for various reasons, including active conversion by free choice due to a change in beliefs, secondary conversion, deathbed conversion, conversion for convenience, marital conversion, and forced conversion. Proselytism is the act of attempting to convert another individual from a different religion or belief system by persuasion. Apostate is a term used by members of a religion or denomination to refer to someone who has left that religion or denomination. In sharing their faith with others, Bahá'ís are cautioned to "obtain a hearing" – meaning to make sure the person they are proposing to teach is open to hearing what they have to say. "Bahá'í pioneers", rather than attempting to supplant the cultural underpinnings of the people in their adopted communities, are encouraged to integrate into the society and apply Bahá'í principles in living and working with their neighbors. Bahá'ís recognize the divine origins of all revealed religion, and believe that these religions occurred sequentially as part of a divine plan (see Progressive revelation), with each new revelation superseding and fulfilling that of its predecessors. Bahá'ís regard their own faith as the most recent (but not the last), and believe its teachings – which are centered around the principle of the oneness of humanity – are most suited to meeting the needs of a global community. In most countries, conversion is a simple matter of filling out a card stating a declaration of belief. This includes acknowledgement of Bahá'u'llah – the Founder of the Faith – as the Messenger of God for this age, awareness and acceptance of his teachings, and intention to be obedient to the institutions and laws he established. Conversion to the Bahá'í Faith carries with it an explicit belief in the common foundation of all revealed religion, a commitment to the unity of mankind, and active service to the community at large, especially in areas that will foster unity and concord. Since the Bahá'í Faith has no clergy, converts are encouraged to be active in all aspects of community life. Even a recent convert may be elected to serve on a local Spiritual Assembly – the guiding Bahá'í institution at the community level. Within Christianity, conversion refers variously to three different phenomena: a person becoming Christian who was previously not Christian; a Christian moving from one Christian denomination to another; and a particular spiritual development, sometimes called the "second conversion" or "the conversion of the baptised". Conversion to Christianity is the religious conversion of a previously non-Christian person to some form of Christianity. Some Christian sects require full conversion for new members regardless of any history in other Christian sects, or from certain other sects. The exact requirements vary between different churches and denominations. Baptism is traditionally seen as a sacrament of admission to Christianity. 
Christian baptism has some parallels with Jewish immersion in a "mikvah". In the New Testament, Jesus commanded his disciples in the Great Commission to "go and make disciples of all nations". Evangelization, sharing the Gospel message or "Good News" in deed and word, is an expectation of Christians. Much of the theology of Latter Day Saint baptism was established during the early Latter Day Saint movement founded by Joseph Smith. According to this theology, baptism must be by immersion, for the remission of sins (meaning that through baptism, past sins are forgiven), and occurs after one has shown faith and repentance. Mormon baptism does not purport to remit any sins other than personal ones, as adherents do not believe in original sin. Latter Day Saint baptisms also occur only after an "age of accountability", which is defined as the age of eight years. The theology thus rejects infant baptism. In addition, Latter Day Saint theology requires that baptism may only be performed by one who has been called and ordained by God with priesthood authority. Because the churches of the Latter Day Saint movement operate under a lay priesthood, children raised in a Mormon family are usually baptized by a father or close male friend or family member who has achieved the office of priest, which is conferred upon worthy male members at least 16 years old in the LDS Church. Baptism is seen as symbolic both of Jesus' death, burial and resurrection, and of the baptized individual putting off the natural or sinful man and becoming spiritually reborn as a disciple of Jesus. Membership in a Latter Day Saint church is granted only by baptism, whether or not a person has been raised in the church. Latter Day Saint churches do not recognize the baptisms of other faiths as valid because they believe baptisms must be performed under the church's unique authority. Thus, all who come into one of the Latter Day Saint faiths as converts are baptized, even if they have previously received baptism in another faith. When performing a baptism, Latter Day Saints recite a prescribed baptismal prayer before performing the ordinance. Baptisms inside and outside the temples are usually done in a baptistry, although they can be performed in any body of water in which the person may be completely immersed. The person administering the baptism must recite the prayer exactly, and immerse every part, limb, hair and clothing of the person being baptized. If there are any mistakes, or if any part of the person being baptized is not fully immersed, the baptism must be redone. In addition to the baptizer, two members of the church witness the baptism to ensure that it is performed properly. Following baptism, Latter Day Saints receive the Gift of the Holy Ghost by the laying on of hands of a Melchizedek Priesthood holder. Latter Day Saints hold that one may be baptized after death through the vicarious act of a living individual, and holders of the Melchizedek Priesthood practice baptism for the dead as a missionary ritual. This doctrine answers the question of the righteous non-believer and the unevangelized by providing a post-mortem means of repentance and salvation. Converting to Islam requires the "shahada", the Muslim profession of faith ("there is no god but Allah, and Muhammad is the messenger of Allah"). Islam teaches that everyone is Muslim at birth, but the parents or society can cause them to deviate from the straight path. 
When someone accepts Islam, they are considered to revert to the original condition. In Islam, circumcision is a "Sunnah" custom not mentioned in the Quran. The majority clerical opinion holds that circumcision is not required upon entering Islam. The Shafi`i and Hanbali schools regard it as obligatory, while the Maliki and Hanafi schools regard it as only recommended. However, it is not a precondition for the acceptance of a person's Islamic practices, nor is choosing to forgo circumcision considered a sin. It is not one of the Five Pillars of Islam. Conversion to Judaism is the religious conversion of non-Jews to become members of the Jewish religion and Jewish ethnoreligious community. The procedure and requirements for conversion depend on the sponsoring denomination. A conversion in accordance with the process of one denomination is not a guarantee of recognition by another denomination. A formal conversion is also sometimes undertaken by individuals whose Jewish ancestry is questioned, even if they were raised Jewish, but may not actually be considered Jews according to traditional Jewish law. As late as the 6th and 7th centuries, the Eastern Roman Empire and Caliph Umar ibn Khattab were issuing decrees against conversion to Judaism, implying that it was still occurring. In some cases, a person may forgo a formal conversion to Judaism and adopt some or all beliefs and practices of Judaism. However, without a formal conversion, many highly observant Jews will reject a convert's Jewish status. There are no rituals, dogmas, or formal procedures of conversion to Spiritism. The doctrine is considered first as science, then as philosophy, and lastly as religion. Allan Kardec's codification of Spiritism occurred between the years 1857 and 1868. Currently there are 25 to 60 million people studying Spiritism in various countries, mainly in Brazil, through its essential books, which include "The Spirits Book", "The Book on Mediums", "The Gospel According to Spiritism", "Heaven and Hell" and "The Genesis According to Spiritism". Chico Xavier wrote over 490 additional books, which expand on the spiritualist doctrine. As explained in the first of the 1,019 questions and answers in "The Spirits Book": 1. What is God? Answer: "God is the Supreme Intelligence-First Cause of all things." The consensus in Spiritism is that God, the Great Creator, is above everything, including all human things such as rituals, dogmas, denominations or any other thing. Persons newly adhering to Buddhism traditionally "take Refuge" (express faith in the Three Jewels—Buddha, Dharma, and Sangha) before a monk, nun, or similar representative, often with the sangha, the community of practitioners, also in ritual attendance. Throughout the timeline of Buddhism, conversions of entire countries and regions to Buddhism were frequent, as Buddhism spread throughout Asia. For example, in the 11th century in Burma, King Anoratha converted his entire country to Theravada Buddhism. At the end of the 12th century, Jayavarman VII set the stage for conversion of the Khmer people to Theravada Buddhism. Mass conversions of areas and communities to Buddhism continue up to the present day; for example, the Dalit Buddhist movement in India has organized mass conversions. Exceptions to encouraging conversion may occur in some Buddhist movements. In Tibetan Buddhism, for example, the current Dalai Lama discourages active attempts to win converts. 
Hinduism is a diverse system of thought with beliefs spanning monotheism, polytheism, panentheism, pantheism, pandeism, monism, and atheism, among others. Hinduism has no traditional ecclesiastical order, no centralized religious authorities, no universally accepted governing body, no binding holy book, nor any mandatory prayer attendance requirements. Hinduism has been described as a way of life. Within its diffuse and open structure, numerous schools and sects of Hinduism have developed and spun off in India since the Vedic age, with help from its ascetic scholars. The six Astika and two Nastika schools of Hindu philosophy did not, in their history, develop a missionary or proselytization methodology, and they co-existed with one another. Most Hindu sub-schools and sects do not actively seek converts. Individuals have had the choice to enter, leave or change their god(s) and spiritual convictions, to accept or discard rituals and practices, and to pursue spiritual knowledge and liberation (moksha) in different ways. However, various schools of Hinduism do have some core common beliefs, such as the belief that all living beings have Atman (soul), a belief in karma theory, spirituality, ahimsa (non-violence) as the greatest dharma or virtue, and others. Religious conversion to Hinduism has a long history outside India. Merchants and traders of India, particularly from the Indian peninsula, carried their religious ideas, which led to religious conversions to Hinduism in Indonesia, Vietnam, Cambodia and Burma. Some sects of Hindus, particularly of the Bhakti schools, began seeking or accepting converts in the early to mid-20th century. For example, groups like the International Society for Krishna Consciousness accept those who have a desire to follow their sects of Hinduism and have their own religious conversion procedure. Since 1800 CE, religious conversion from and to Hinduism has been a controversial subject within Hinduism. Some have suggested that the concept of missionary conversion, either way, is contrary to the precepts of Hinduism. Religious leaders of some Hindu sects such as the Brahmo Samaj have seen Hinduism as a non-missionary religion yet welcomed new members, while other leaders of Hinduism's diverse schools have stated that with the arrival of missionary Islam and Christianity in India, this "there is no such thing as proselytism in Hinduism" view must be re-examined. In recent decades, mainstream Hinduism schools have attempted to systematize ways to accept religious converts, with an increase in inter-religious mixed marriages. The steps involved in becoming a Hindu have variously included a period during which the interested person gets an informal "ardha-Hindu" name and studies ancient literature on the spiritual path and practices (English translations of the Upanishads, Agamas, Itihasas, ethics in the Sutras, Hindu festivals, and yoga). If, after a period of study, the individual still wants to convert, a "Namakarana Samskara" ceremony is held, where the individual adopts a traditional Hindu name. The initiation ceremony may also include a "Yajna" (i.e., a fire ritual with Sanskrit hymns) under the guidance of a local Hindu priest. Conversions are often conducted at "mathas" and "asramas" (hermitages, monasteries), where one or more "gurus" (spiritual guides) conduct the conversion and offer spiritual discussions. 
Some schools encourage the new convert to learn and participate in community activities such as festivals (Diwali etc.), read and discuss ancient literature, and learn and engage in rites of passage (ceremonies of birth, first feeding, first day of learning, age of majority, wedding, cremation and others). Jainism accepts anyone who wants to embrace the religion. There is no specific ritual for becoming a Jain. One does not need to ask any authorities for admission. One becomes a Jain on one's own by observing the five vows ("vratas") mentioned in ancient Jain texts such as the Tattvarthasutra: ahimsa (non-violence), satya (truthfulness), asteya (non-stealing), brahmacharya (chastity), and aparigraha (non-possessiveness). Following the five vows is the main requirement in Jainism. All other aspects, such as visiting temples, are secondary. Jain monks and nuns are required to observe these five vows strictly. Sikhism is not known to openly proselytize, but accepts converts. In the second half of the 20th century, the rapid growth of new religious movements (NRMs) led some psychologists and other scholars to propose that these groups were using "brainwashing" or "mind control" techniques to gain converts. This theory was publicized by the popular news media but disputed by other scholars, including some sociologists of religion. In the 1960s sociologist John Lofland lived with Unification Church missionary Young Oon Kim and a small group of American church members in California and studied their activities in trying to promote their beliefs and win converts to their church. Lofland noted that most of their efforts were ineffective and that most of the people who joined did so because of personal relationships with other members, often family relationships. Lofland published his findings in 1964 as a doctoral thesis entitled "The World Savers: A Field Study of Cult Processes", and in 1966 in book form with Prentice-Hall. It is considered to be one of the most important and widely cited studies of the process of religious conversion, and one of the first modern sociological studies of a new religious movement. The Church of Scientology attempts to gain converts by offering "free stress tests". It has also used the celebrity status of some of its members (most famously the American actor Tom Cruise) to attract converts. The Church of Scientology requires that all converts sign a legal waiver which covers their relationship with the Church of Scientology before engaging in Scientology services. Research in the United States and the Netherlands has shown a positive correlation between areas lacking mainstream churches and the percentage of people who are members of a new religious movement. The same applies to the presence of New Age centres. On the other end of the scale are religions that do not accept any converts, or do so very rarely. Often these are relatively small, close-knit minority religions that are ethnically based, such as the Yazidis, Druze, and Mandaeans. Zoroastrianism classically does not accept converts, but this issue became controversial in the 20th century due to the rapid decline in membership. Chinese traditional religion lacks clear criteria for membership, and hence for conversion. The Shakers and some Indian eunuch brotherhoods do not allow procreation, so that every member is a convert. The United Nations Universal Declaration of Human Rights defines religious conversion as a human right: "Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief" (Article 18). 
Despite this UN-declared human right, some groups forbid or restrict religious conversion (see below). Based on the declaration, the United Nations Commission on Human Rights (UNCHR) drafted the International Covenant on Civil and Political Rights, a legally binding treaty. It states that "Everyone shall have the right to freedom of thought, conscience and religion. This right shall include freedom to have or to adopt a religion or belief of his choice" (Article 18.1). "No one shall be subject to coercion which would impair his freedom to have or to adopt a religion or belief of his choice" (Article 18.2). The UNCHR issued a General Comment on this Article in 1993: "The Committee observes that the freedom to 'have or to adopt' a religion or belief necessarily entails the freedom to choose a religion or belief, "including the right to replace one's current religion or belief with another" or to adopt atheistic views [...] Article 18.2 bars coercion that would impair the right to have or adopt a religion or belief, including the use of threat of physical force or penal sanctions to compel believers or non-believers to adhere to their religious beliefs and congregations, to recant their religion or belief "or to convert"." (CCPR/C/21/Rev.1/Add.4, General Comment No. 22.; emphasis added) Some countries distinguish voluntary, motivated conversion from organized proselytism, attempting to restrict the latter. The boundary between them is not easily defined: what one person considers legitimate evangelizing, or witness-bearing, another may consider intrusive and improper. Illustrating the problems that can arise from such subjective viewpoints is this extract from an article by Dr. C. Davis, published in Cleveland State University's "Journal of Law and Health": "According to the Union of American Hebrew Congregations, Jews for Jesus and Hebrew Christians constitute two of the most dangerous cults, and its members are appropriate candidates for deprogramming. Anti-cult evangelicals ... protest that 'aggressiveness and proselytizing ... are basic to authentic Christianity,' and that Jews for Jesus and Campus Crusade for Christ are not to be labeled as cults. Furthermore, certain Hassidic groups who physically attacked a meeting of the Hebrew Christian 'cult' have themselves been labeled a 'cult' and equated with the followers of Reverend Moon, by none other than the President of the Central Conference of American Rabbis." Since the collapse of the Soviet Union, the Russian Orthodox Church has enjoyed a revival. However, it takes exception to what it considers illegitimate proselytizing by the Roman Catholic Church, the Salvation Army, Jehovah's Witnesses, and other religious movements in what it refers to as its "canonical territory". Greece has a long history of conflict, mostly with Jehovah's Witnesses, but also with some Pentecostals, over its laws on proselytism. This situation stems from a law passed in the 1930s by the dictator Ioannis Metaxas. A Jehovah's Witness, Minos Kokkinakis, won the equivalent of $14,400 in damages from the Greek state after being arrested for trying to preach his faith from door to door. In another case, "Larissis v. Greece", a member of the Pentecostal church also won a case in the European Court of Human Rights.
https://en.wikipedia.org/wiki?curid=25597
Rubidium "Rubidium" is a chemical element with the symbol Rb and atomic number 37. Rubidium is a very soft, silvery-white metal in the alkali metal group. Rubidium metal shares similarities to potassium metal and caesium metal in physical appearance, softness and conductivity. Rubidium cannot be stored under atmospheric oxygen, as a highly exothermic reaction will ensue, sometimes even resulting in the metal catching fire. Rubidium is the first alkali metal in the group to have a density higher than water, so it sinks, unlike the metals above it in the group. Rubidium has a standard atomic weight of 85.4678. On Earth, natural rubidium comprises two isotopes: 72% is a stable isotope 85Rb, and 28% is slightly radioactive 87Rb, with a half-life of 49 billion years—more than three times as long as the estimated age of the universe. German chemists Robert Bunsen and Gustav Kirchhoff discovered rubidium in 1861 by the newly developed technique, flame spectroscopy. The name comes from the Latin word , meaning deep red, the color of its emission spectrum. Rubidium's compounds have various chemical and electronic applications. Rubidium metal is easily vaporized and has a convenient spectral absorption range, making it a frequent target for laser manipulation of atoms. Rubidium is not a known nutrient for any living organisms. However, rubidium ions have the same charge as potassium ions and are actively taken up and treated by animal cells in similar ways. Rubidium is a very soft, ductile, silvery-white metal. It is the second most electropositive of the stable alkali metals and melts at a temperature of . Like other alkali metals, rubidium metal reacts violently with water. As with potassium (which is slightly less reactive) and caesium (which is slightly more reactive), this reaction is usually vigorous enough to ignite the hydrogen gas it produces. Rubidium has also been reported to ignite spontaneously in air. It forms amalgams with mercury and alloys with gold, iron, caesium, sodium, and potassium, but not lithium (even though rubidium and lithium are in the same group). Rubidium has a very low ionization energy of only 406 kJ/mol. Rubidium and potassium show a very similar purple color in the flame test, and distinguishing the two elements requires more sophisticated analysis, such as spectroscopy. Rubidium chloride (RbCl) is probably the most used rubidium compound: among several other chlorides, it is used to induce living cells to take up DNA; it is also used as a biomarker, because in nature, it is found only in small quantities in living organisms and when present, replaces potassium. Other common rubidium compounds are the corrosive rubidium hydroxide (RbOH), the starting material for most rubidium-based chemical processes; rubidium carbonate (Rb2CO3), used in some optical glasses, and rubidium copper sulfate, Rb2SO4·CuSO4·6H2O. Rubidium silver iodide (RbAg4I5) has the highest room temperature conductivity of any known ionic crystal, a property exploited in thin film batteries and other applications. Rubidium forms a number of oxides when exposed to air, including rubidium monoxide (Rb2O), Rb6O, and Rb9O2; rubidium in excess oxygen gives the superoxide RbO2. Rubidium forms salts with halides, producing rubidium fluoride, rubidium chloride, rubidium bromide, and rubidium iodide. Although rubidium is monoisotopic, rubidium in the Earth's crust is composed of two isotopes: the stable 85Rb (72.2%) and the radioactive 87Rb (27.8%). 
Natural rubidium is radioactive, with a specific activity of about 670 Bq/g, enough to significantly expose a photographic film in 110 days. Twenty-four additional rubidium isotopes have been synthesized with half-lives of less than 3 months; most are highly radioactive and have few uses. Rubidium-87 has a half-life of 4.92×10^10 years, which is more than three times the age of the universe of 1.38×10^10 years, making it a primordial nuclide. It readily substitutes for potassium in minerals, and is therefore fairly widespread. 87Rb has been used extensively in dating rocks: it beta decays to stable 87Sr. During fractional crystallization, Sr tends to concentrate in plagioclase, leaving Rb in the liquid phase. Hence, the Rb/Sr ratio in residual magma may increase over time, and the progressing differentiation results in rocks with elevated Rb/Sr ratios. The highest ratios (10 or more) occur in pegmatites. If the initial amount of Sr is known or can be extrapolated, then the age can be determined by measurement of the Rb and Sr concentrations and of the 87Sr/86Sr ratio (a short calculation sketch appears below). The dates indicate the true age of the minerals only if the rocks have not been subsequently altered (see rubidium–strontium dating). Rubidium-82, one of the element's non-natural isotopes, is produced by electron-capture decay of strontium-82, which has a half-life of 25.36 days. With a half-life of 76 seconds, rubidium-82 decays by positron emission to stable krypton-82. Rubidium is the twenty-third most abundant element in the Earth's crust, roughly as abundant as zinc and rather more common than copper. It occurs naturally in the minerals leucite, pollucite, carnallite, and zinnwaldite, which contain as much as 1% rubidium oxide. Lepidolite contains between 0.3% and 3.5% rubidium, and is the commercial source of the element. Some potassium minerals and potassium chlorides also contain the element in commercially significant quantities. Seawater contains an average of 125 µg/L of rubidium, compared to the much higher value for potassium of 408 mg/L and the much lower value of 0.3 µg/L for caesium. Because of its large ionic radius, rubidium is one of the "incompatible elements." During magma crystallization, rubidium is concentrated together with its heavier analogue caesium in the liquid phase and crystallizes last. Therefore, the largest deposits of rubidium and caesium are zone pegmatite ore bodies formed by this enrichment process. Because rubidium substitutes for potassium in the crystallization of magma, the enrichment is far less effective than that of caesium. Zone pegmatite ore bodies containing mineable quantities of caesium as pollucite or of the lithium mineral lepidolite are also a source for rubidium as a by-product. Two notable sources of rubidium are the rich deposits of pollucite at Bernic Lake, Manitoba, Canada, and the rubicline ((Rb,K)AlSi3O8) found as impurities in pollucite on the Italian island of Elba, with a rubidium content of 17.5%. Both of those deposits are also sources of caesium. Although rubidium is more abundant in Earth's crust than caesium, the limited applications and the lack of a mineral rich in rubidium limit the production of rubidium compounds to 2 to 4 tonnes per year. Several methods are available for separating potassium, rubidium, and caesium. The fractional crystallization of a rubidium and caesium alum, (Cs,Rb)Al(SO4)2·12H2O, yields pure rubidium alum after about 30 successive steps. Two other methods have been reported: the chlorostannate process and the ferrocyanide process.
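The dating relation described above can be made concrete with a short worked example. The following is a minimal sketch in Python rather than any standard geochronology tool; the decay constant comes from the 4.92×10^10-year half-life quoted above, and the isotope ratios are illustrative values, not measurements from this article.

import math

# Rubidium-strontium dating rests on the isochron relation
#   (87Sr/86Sr)_now = (87Sr/86Sr)_initial + (87Rb/86Sr) * (exp(lambda * t) - 1),
# which can be solved for the apparent age t of a mineral.

HALF_LIFE_RB87_YEARS = 4.92e10
DECAY_CONSTANT = math.log(2) / HALF_LIFE_RB87_YEARS  # per year

def rb_sr_age(sr87_sr86_now, sr87_sr86_initial, rb87_sr86):
    """Apparent age in years from the measured ratios and the assumed initial 87Sr/86Sr."""
    growth = (sr87_sr86_now - sr87_sr86_initial) / rb87_sr86
    return math.log(1 + growth) / DECAY_CONSTANT

# Hypothetical high-Rb/Sr pegmatite sample (illustrative numbers only):
age = rb_sr_age(sr87_sr86_now=0.780, sr87_sr86_initial=0.705, rb87_sr86=5.0)
print(f"apparent age: {age / 1e9:.2f} billion years")  # about 1.06 billion years

Because the half-life is so long, exp(lambda * t) stays very close to 1 even on geological timescales, which is why precise 87Sr/86Sr measurements are needed to obtain meaningful dates.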
For several years in the 1950s and 1960s, a by-product of potassium production called Alkarb was a main source for rubidium. Alkarb contained 21% rubidium, with the rest being potassium and a small amount of caesium. Today, the largest producers of caesium, such as the Tanco Mine in Manitoba, Canada, produce rubidium as a by-product from pollucite. Rubidium was discovered in 1861 by Robert Bunsen and Gustav Kirchhoff, in Heidelberg, Germany, in the mineral lepidolite through flame spectroscopy. Because of the bright red lines in its emission spectrum, they chose a name derived from the Latin word "rubidus", meaning "deep red". Rubidium is a minor component in lepidolite. Kirchhoff and Bunsen processed 150 kg of a lepidolite containing only 0.24% rubidium oxide (Rb2O). Both potassium and rubidium form insoluble salts with chloroplatinic acid, but those salts show a slight difference in solubility in hot water. Therefore, the less soluble rubidium hexachloroplatinate (Rb2PtCl6) could be obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, the process yielded 0.51 grams of rubidium chloride for further studies. Bunsen and Kirchhoff began their first large-scale isolation of caesium and rubidium compounds with 44,000 litres of mineral water, which yielded 7.3 grams of caesium chloride and 9.2 grams of rubidium chloride. Rubidium was the second element, shortly after caesium, to be discovered by spectroscopy, just one year after the invention of the spectroscope by Bunsen and Kirchhoff. The two scientists used the rubidium chloride to estimate that the atomic weight of the new element was 85.36 (the currently accepted value is 85.47). They tried to generate elemental rubidium by electrolysis of molten rubidium chloride, but instead of a metal, they obtained a blue homogeneous substance, which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance". They presumed that it was a subchloride (Rb2Cl); however, the product was probably a colloidal mixture of the metal and rubidium chloride. In a second attempt to produce metallic rubidium, Bunsen was able to reduce rubidium by heating charred rubidium tartrate. Although the distilled rubidium was pyrophoric, they were able to determine the density and the melting point. The quality of this research in the 1860s can be appraised by the fact that their determined density differs by less than 0.1 g/cm3 and their melting point by less than 1 °C from the presently accepted values. The slight radioactivity of rubidium was discovered in 1908, but that was before the theory of isotopes was established in 1910, and the low level of activity (half-life greater than 10^10 years) made interpretation complicated. The now proven decay of 87Rb to stable 87Sr through beta decay was still under discussion in the late 1940s. Rubidium had minimal industrial value before the 1920s. Since then, the most important use of rubidium has been research and development, primarily in chemical and electronic applications. In 1995, rubidium-87 was used to produce a Bose–Einstein condensate, for which the discoverers, Eric Allin Cornell, Carl Edwin Wieman and Wolfgang Ketterle, won the 2001 Nobel Prize in Physics. Rubidium compounds are sometimes used in fireworks to give them a purple color. Rubidium has also been considered for use in a thermoelectric generator based on the magnetohydrodynamic principle, whereby hot rubidium ions are passed through a magnetic field.
These conduct electricity and act like an armature of a generator, thereby generating an electric current. Rubidium, particularly vaporized 87Rb, is one of the atomic species most commonly employed for laser cooling and Bose–Einstein condensation. Its desirable features for this application include the ready availability of inexpensive diode laser light at the relevant wavelength and the moderate temperatures required to obtain substantial vapor pressures. For cold-atom applications requiring tunable interactions, 85Rb is preferable due to its rich Feshbach spectrum. Rubidium has been used for polarizing 3He, producing volumes of magnetized 3He gas with the nuclear spins aligned rather than random. Rubidium vapor is optically pumped by a laser, and the polarized Rb polarizes 3He through the hyperfine interaction. Such spin-polarized 3He cells are useful for neutron polarization measurements and for producing polarized neutron beams for other purposes. The resonant element in atomic clocks utilizes the hyperfine structure of rubidium's energy levels, making rubidium useful for high-precision timing. It is used as the main component of secondary frequency references (rubidium oscillators) in cell site transmitters and other electronic transmitting, networking, and test equipment. These rubidium standards are often used with GPS to produce a "primary frequency standard" that has greater accuracy and is less expensive than caesium standards. Such rubidium standards are often mass-produced for the telecommunication industry. Other potential or current uses of rubidium include use as a working fluid in vapor turbines, as a getter in vacuum tubes, and as a photocell component. Rubidium is also used as an ingredient in special types of glass, in the production of superoxide by burning in oxygen, in the study of potassium ion channels in biology, and as the vapor in atomic magnetometers. In particular, 87Rb is used with other alkali metals in the development of spin-exchange relaxation-free (SERF) magnetometers. Rubidium-82 is used for positron emission tomography. Rubidium is very similar to potassium, and tissue with high potassium content will also accumulate the radioactive rubidium. One of the main uses is myocardial perfusion imaging. As a result of changes in the blood–brain barrier in brain tumors, rubidium collects more in brain tumors than in normal brain tissue, allowing the use of the radioisotope rubidium-82 in nuclear medicine to locate and image brain tumors. Rubidium-82 has a very short half-life of 76 seconds, so the production from decay of strontium-82 must be done close to the patient (a short generator calculation appears at the end of this article). Rubidium has been tested for its influence on manic depression and depression. Dialysis patients suffering from depression show a depletion in rubidium, so supplementation may help during depression. In some tests, rubidium was administered as rubidium chloride at up to 720 mg per day for 60 days. Rubidium reacts violently with water and can cause fires. To ensure safety and purity, the metal is usually kept under dry mineral oil or sealed in glass ampoules in an inert atmosphere. Rubidium forms peroxides on exposure even to a small amount of air diffused into the oil, and storage is subject to similar precautions as the storage of metallic potassium. Rubidium, like sodium and potassium, almost always has a +1 oxidation state when dissolved in water, even in biological contexts.
The human body tends to treat Rb+ ions as if they were potassium ions, and therefore concentrates rubidium in the body's intracellular fluid (i.e., inside cells). The ions are not particularly toxic; a 70 kg person contains on average 0.36 g of rubidium, and an increase in this value by 50 to 100 times did not show negative effects in test subjects. The biological half-life of rubidium in humans is 31–46 days. Although a partial substitution of potassium by rubidium is possible, when more than 50% of the potassium in the muscle tissue of rats was replaced with rubidium, the rats died.
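As a numerical footnote to the rubidium-82 imaging discussion above: the strontium-82 parent (half-life 25.36 days) decays far more slowly than its rubidium-82 daughter (half-life 76 seconds), so a generator kept near the patient sits in secular equilibrium, and the daughter activity regrows within a few of its own half-lives after each withdrawal. A minimal sketch of that regrowth, assuming an idealized generator rather than any particular clinical device:

import math

# After the daughter is eluted, its activity regrows toward the parent's.
# With lambda_daughter >> lambda_parent, the parent is effectively constant and
#   A_daughter(t) ~= A_parent * (1 - exp(-lambda_daughter * t)).

DAUGHTER_HALF_LIFE_S = 76.0  # rubidium-82
lam = math.log(2) / DAUGHTER_HALF_LIFE_S  # per second

for minutes in (1, 2, 5, 10):
    fraction = 1 - math.exp(-lam * minutes * 60)
    print(f"{minutes:>2} min after elution: daughter at {fraction:.0%} of parent activity")

The printed fractions (roughly 42%, 67%, 94%, and 100%) show why such a generator can deliver a fresh dose every few minutes even though the isotope itself is far too short-lived to transport.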
https://en.wikipedia.org/wiki?curid=25599
Ruthenium Ruthenium is a chemical element with the symbol Ru and atomic number 44. It is a rare transition metal belonging to the platinum group of the periodic table. Like the other metals of the platinum group, ruthenium is inert to most other chemicals. Russian-born scientist of Baltic-German ancestry Karl Ernst Claus discovered the element in 1844 at Kazan State University and named it ruthenium in honor of Russia ("Ruthenia" is the Latin name of Rus'). Ruthenium is usually found as a minor component of platinum ores; the annual production has risen from about 19 tonnes in 2009 to some 35.5 tonnes in 2017. Most ruthenium produced is used in wear-resistant electrical contacts and thick-film resistors. Minor applications of ruthenium include platinum alloys and use as a catalyst in chemistry. A new application of ruthenium is as the capping layer for extreme ultraviolet photomasks. Ruthenium is generally found in ores with the other platinum group metals in the Ural Mountains and in North and South America. Small but commercially important quantities are also found in pentlandite extracted from Sudbury, Ontario and in pyroxenite deposits in South Africa. Ruthenium, a polyvalent hard white metal, is a member of the platinum group and is in group 8 of the periodic table. Whereas all other group 8 elements have two electrons in the outermost shell, ruthenium has only one electron in its outermost shell (the final electron is in a lower shell). A similar anomaly is observed in the neighboring metals niobium (41), molybdenum (42), and rhodium (45). Ruthenium has four crystal modifications and does not tarnish at ambient conditions; it oxidizes upon heating to about 800 °C. Ruthenium dissolves in fused alkalis to give ruthenates (RuO42−), is not attacked by acids (even aqua regia), but is attacked by halogens at high temperatures. Indeed, ruthenium is most readily attacked by oxidizing agents. Small amounts of ruthenium can increase the hardness of platinum and palladium. The corrosion resistance of titanium is increased markedly by the addition of a small amount of ruthenium. The metal can be plated by electroplating and by thermal decomposition. A ruthenium-molybdenum alloy is known to be superconductive at temperatures below 10.6 K. Ruthenium is the last of the 4d transition metals that can assume the group oxidation state +8, and even then it is less stable there than the heavier congener osmium: this is the first group from the left of the table where the second- and third-row transition metals display notable differences in chemical behavior. Like iron but unlike osmium, ruthenium can form aqueous cations in its lower oxidation states of +2 and +3. Ruthenium is the first in a downward trend in the melting and boiling points and atomization enthalpy in the 4d transition metals after the maximum seen at molybdenum, because the 4d subshell is more than half full and the electrons are contributing less to metallic bonding. (Technetium, the previous element, has an exceptionally low value that is off the trend due to its half-filled [Kr]4d5 5s2 configuration, though it is not as far off the trend in the 4d series as manganese in the 3d transition series.) Unlike its lighter congener iron, ruthenium is paramagnetic at room temperature, as iron also is above its Curie point. Reduction potentials in acidic aqueous solution have been determined for the common ruthenium ions. Naturally occurring ruthenium is composed of seven stable isotopes. Additionally, 34 radioactive isotopes have been discovered.
Of these radioisotopes, the most stable are 106Ru with a half-life of 373.59 days, 103Ru with a half-life of 39.26 days, and 97Ru with a half-life of 2.9 days. Fifteen other radioisotopes have been characterized with atomic weights ranging from 89.93 u (90Ru) to 114.928 u (115Ru). Most of these have half-lives that are less than five minutes, except 95Ru (half-life: 1.643 hours) and 105Ru (half-life: 4.44 hours). The primary decay mode before the most abundant isotope, 102Ru, is electron capture and the primary mode after is beta emission. The primary decay product before 102Ru is technetium and the primary decay product after is rhodium. 106Ru is a product of the fission of a nucleus of uranium or plutonium. Elevated concentrations of atmospheric 106Ru detected in 2017 were associated with an alleged undeclared nuclear accident in Russia. As the 74th most abundant element in Earth's crust, ruthenium is relatively rare, found in about 100 parts per trillion. This element is generally found in ores with the other platinum group metals in the Ural Mountains and in North and South America. Small but commercially important quantities are also found in pentlandite extracted from Sudbury, Ontario, Canada, and in pyroxenite deposits in South Africa. The native form of ruthenium is a very rare mineral (Ir replaces part of Ru in its structure). Roughly 30 tonnes of ruthenium are mined each year, with world reserves estimated at 5,000 tonnes. The composition of the mined platinum group metal (PGM) mixtures varies widely, depending on the geochemical formation. For example, the PGMs mined in South Africa contain on average 11% ruthenium while the PGMs mined in the former USSR contain only 2% (1992). Ruthenium, osmium, and iridium are considered the minor platinum group metals. Ruthenium, like the other platinum group metals, is obtained commercially as a by-product of the processing of nickel, copper, and platinum metal ores. During electrorefining of copper and nickel, noble metals such as silver, gold, and the platinum group metals precipitate as "anode mud", the feedstock for the extraction. The metals are converted to ionized solutes by any of several methods, depending on the composition of the feedstock. One representative method is fusion with sodium peroxide followed by dissolution in aqua regia, and solution in a mixture of chlorine with hydrochloric acid. Osmium, ruthenium, rhodium, and iridium are insoluble in aqua regia and readily precipitate, leaving the other metals in solution. Rhodium is separated from the residue by treatment with molten sodium bisulfate. The insoluble residue, containing Ru, Os, and Ir, is treated with sodium oxide, in which Ir is insoluble, producing dissolved Ru and Os salts. After oxidation to the volatile oxides, ruthenium is separated from osmium by precipitation of (NH4)3RuCl6 with ammonium chloride, or by distillation or extraction of the volatile osmium tetroxide with organic solvents. The ammonium ruthenium chloride is then reduced with hydrogen, yielding the metal as a powder or sponge metal that can be treated with powder metallurgy techniques or argon-arc welding. The oxidation states of ruthenium range from 0 to +8, and −2. The properties of ruthenium and osmium compounds are often similar. The +2, +3, and +4 states are the most common. The most prevalent precursor is ruthenium trichloride, a red solid that is poorly defined chemically but versatile synthetically.
Ruthenium can be oxidized to ruthenium(IV) oxide (RuO2, oxidation state +4), which can in turn be oxidized by sodium metaperiodate to the volatile yellow tetrahedral ruthenium tetroxide, RuO4, an aggressive, strong oxidizing agent with structure and properties analogous to osmium tetroxide. RuO4 is mostly used as an intermediate in the purification of ruthenium from ores and radioactive wastes. Dipotassium ruthenate (K2RuO4, +6) and potassium perruthenate (KRuO4, +7) are also known. Ruthenium tetroxide is less stable than osmium tetroxide and is strong enough as an oxidising agent to oxidise dilute hydrochloric acid and organic solvents like ethanol at room temperature; it is easily reduced to ruthenate (RuO42−) in aqueous alkaline solutions, and it decomposes to form the dioxide above 100 °C. Unlike iron but like osmium, ruthenium does not form oxides in its lower +2 and +3 oxidation states. Ruthenium forms dichalcogenides, which are diamagnetic semiconductors crystallizing in the pyrite structure. Ruthenium sulfide (RuS2) occurs naturally as the mineral laurite. Like iron, ruthenium does not readily form oxoanions, and prefers to achieve high coordination numbers with hydroxide ions instead. Ruthenium tetroxide is reduced by cold dilute potassium hydroxide to form black potassium perruthenate, KRuO4, with ruthenium in the +7 oxidation state. Potassium perruthenate can also be produced by oxidising potassium ruthenate, K2RuO4, with chlorine gas. The perruthenate ion is unstable and is reduced by water to form the orange ruthenate. Potassium ruthenate may be synthesized by reacting ruthenium metal with molten potassium hydroxide and potassium nitrate. Some mixed oxides are also known, such as M(II)Ru(IV)O3 and Na3Ru(V)O4, along with other mixed sodium and lanthanide ruthenates. The highest known ruthenium halide is the hexafluoride, a dark brown solid that melts at 54 °C. It hydrolyzes violently upon contact with water and easily disproportionates to form a mixture of lower ruthenium fluorides, releasing fluorine gas. Ruthenium pentafluoride is a tetrameric dark green solid that is also readily hydrolyzed, melting at 86.5 °C. The yellow ruthenium tetrafluoride is probably also polymeric and can be formed by reducing the pentafluoride with iodine. Among the binary compounds of ruthenium, these high oxidation states are known only in the oxides and fluorides. Ruthenium trichloride is a well-known compound, existing in a black α-form and a dark brown β-form; the trihydrate is red. Of the known trihalides, the trifluoride is dark brown and decomposes above 650 °C, the tribromide is dark brown and decomposes above 400 °C, and the triiodide is black. Of the dihalides, the difluoride is not known, the dichloride is brown, the dibromide is black, and the diiodide is blue. The only known oxyhalide is the pale green ruthenium(VI) oxyfluoride, RuOF4. Ruthenium forms a variety of coordination complexes. Examples are the many pentaammine derivatives [Ru(NH3)5L]n+ that often exist for both Ru(II) and Ru(III). Derivatives of bipyridine and terpyridine are numerous, the best known being the luminescent tris(bipyridine)ruthenium(II) chloride. Ruthenium forms a wide range of compounds with carbon-ruthenium bonds. Grubbs' catalyst is used for alkene metathesis. Ruthenocene is analogous to ferrocene structurally, but exhibits distinctive redox properties. The colorless liquid ruthenium pentacarbonyl converts in the absence of CO pressure to the dark red solid triruthenium dodecacarbonyl.
Ruthenium trichloride reacts with carbon monoxide to give many derivatives, including RuHCl(CO)(PPh3)3 and Ru(CO)2(PPh3)3 (Roper's complex). Heating solutions of ruthenium trichloride in alcohols with triphenylphosphine gives tris(triphenylphosphine)ruthenium dichloride (RuCl2(PPh3)3), which converts to the hydride complex chlorohydridotris(triphenylphosphine)ruthenium(II) (RuHCl(PPh3)3). Though naturally occurring platinum alloys containing all six platinum-group metals were used for a long time by pre-Columbian Americans and known as a material to European chemists from the mid-16th century, not until the mid-18th century was platinum identified as a pure element. That natural platinum contained palladium, rhodium, osmium and iridium was discovered in the first decade of the 19th century. Platinum in the alluvial sands of Russian rivers gave access to raw material for use in plates and medals and for the minting of ruble coins, starting in 1828. Residues from platinum production for coinage were available in the Russian Empire, and therefore most of the research on them was done in Eastern Europe. It is possible that the Polish chemist Jędrzej Śniadecki isolated element 44 (which he called "vestium" after the asteroid Vesta, discovered shortly before) from South American platinum ores in 1807. He published an announcement of his discovery in 1808. His work was never confirmed, however, and he later withdrew his claim of discovery. Jöns Berzelius and Gottfried Osann nearly discovered ruthenium in 1827. They examined residues that were left after dissolving crude platinum from the Ural Mountains in aqua regia. Berzelius did not find any unusual metals, but Osann thought he had found three new metals, which he called pluranium, ruthenium, and polinium. This discrepancy led to a long-standing controversy between Berzelius and Osann about the composition of the residues. As Osann was not able to repeat his isolation of ruthenium, he eventually relinquished his claims. The name "ruthenium" was chosen by Osann because the analysed samples stemmed from the Ural Mountains in Russia. The name itself derives from Ruthenia, the Latin word for Rus', a historical area that included present-day Ukraine, Belarus, western Russia, and parts of Slovakia and Poland. In 1844, Karl Ernst Claus, a Russian scientist of Baltic German descent, showed that the compounds prepared by Gottfried Osann contained small amounts of ruthenium, which Claus had discovered the same year. Claus isolated ruthenium from the platinum residues of ruble production while he was working at Kazan University in Kazan, the same way its heavier congener osmium had been discovered four decades earlier. Claus showed that ruthenium oxide contained a new metal and obtained 6 grams of ruthenium from the part of crude platinum that is insoluble in aqua regia. Choosing the name for the new element, Claus stated: "I named the new body, in honour of my Motherland, ruthenium. I had every right to call it by this name because Mr. Osann relinquished his ruthenium and the word does not yet exist in chemistry." Approximately 30.9 tonnes of ruthenium were consumed in 2016: 13.8 of them in electrical applications, 7.7 in catalysis, and 4.6 in electrochemistry. Because it hardens platinum and palladium alloys, ruthenium is used in electrical contacts, where a thin film is sufficient to achieve the desired durability. With properties similar to rhodium's at lower cost, ruthenium finds a major use in electrical contacts.
The ruthenium plate is applied to the electrical contact and electrode base metal by electroplating or sputtering. Ruthenium dioxide, together with lead and bismuth ruthenates, is used in thick-film chip resistors. These two electronic applications account for 50% of the ruthenium consumption. Ruthenium is seldom alloyed with metals outside the platinum group, where small quantities improve some properties. The added corrosion resistance in titanium alloys led to the development of a special alloy with 0.1% ruthenium. Ruthenium is also used in some advanced high-temperature single-crystal superalloys, with applications that include the turbines in jet engines. Several nickel-based superalloy compositions have been described, such as EPM-102 (with 3% Ru), TMS-162 (with 6% Ru), TMS-138, and TMS-174, the latter two containing 6% rhenium. Fountain pen nibs are frequently tipped with ruthenium alloy. From 1944 onward, the Parker 51 fountain pen was fitted with the "RU" nib, a 14K gold nib tipped with 96.2% ruthenium and 3.8% iridium. Ruthenium is a component of mixed-metal oxide (MMO) anodes used for cathodic protection of underground and submerged structures, and for electrolytic cells for such processes as generating chlorine from salt water. The fluorescence of some ruthenium complexes is quenched by oxygen, which finds use in optode sensors for oxygen. Ruthenium red, [(NH3)5Ru-O-Ru(NH3)4-O-Ru(NH3)5]6+, is a biological stain used to stain polyanionic molecules such as pectin and nucleic acids for light microscopy and electron microscopy. The beta-decaying isotope ruthenium-106 is used in radiotherapy of eye tumors, mainly malignant melanomas of the uvea. Ruthenium-centered complexes are being researched for possible anticancer properties. Compared with platinum complexes, those of ruthenium show greater resistance to hydrolysis and more selective action on tumors. Ruthenium tetroxide exposes latent fingerprints by reacting on contact with the fatty oils and fats of sebaceous contaminants, producing a brown/black ruthenium dioxide pigment. Many ruthenium-containing compounds exhibit useful catalytic properties. The catalysts are conveniently divided into those that are soluble in the reaction medium, homogeneous catalysts, and those that are not, which are called heterogeneous catalysts. Ruthenium nanoparticles can be formed inside halloysite. This abundant mineral naturally has a structure of rolled nanosheets (nanotubes), which can support both the Ru nanocluster synthesis and its products for subsequent use in industrial catalysis. Solutions containing ruthenium trichloride are highly active for olefin metathesis. Such catalysts are used commercially for the production of polynorbornene, for example. Well-defined ruthenium carbene and alkylidene complexes show comparable reactivity and provide mechanistic insights into the industrial processes. The Grubbs' catalysts, for example, have been employed in the preparation of drugs and advanced materials. Ruthenium complexes are highly active catalysts for transfer hydrogenations (sometimes referred to as "borrowing hydrogen" reactions). This process is employed for the enantioselective hydrogenation of ketones, aldehydes, and imines, and exploits chiral ruthenium catalysts introduced by Ryoji Noyori. For example, (cymene)Ru(S,S-TsDPEN) catalyzes the hydrogenation of benzil into (R,R)-hydrobenzoin.
In this reaction, formate and water/alcohol serve as the source of H2. A Nobel Prize in Chemistry was awarded in 2001 to Ryōji Noyori for contributions to the field of asymmetric hydrogenation. In 2012, Masaaki Kitano and associates, working with an organic ruthenium catalyst, demonstrated ammonia synthesis using a stable electride as an electron donor and reversible hydrogen store. Small-scale, intermittent production of ammonia for local agricultural use may be a viable substitute for electrical-grid attachment as a sink for power generated by wind turbines in isolated rural installations. Ruthenium-promoted cobalt catalysts are used in Fischer-Tropsch synthesis. Some ruthenium complexes absorb light throughout the visible spectrum and are being actively researched for solar energy technologies. For example, ruthenium-based compounds have been used for light absorption in dye-sensitized solar cells, a promising new low-cost solar cell system. Many ruthenium-based oxides show very unusual properties, such as quantum critical point behavior, exotic superconductivity (in the strontium ruthenate form), and high-temperature ferromagnetism. Relatively recently, ruthenium has been suggested as a material that could beneficially replace other metals and silicides in microelectronics components. Ruthenium tetroxide (RuO4) is highly volatile, as is ruthenium trioxide (RuO3). By oxidizing ruthenium (for example with an oxygen plasma) into the volatile oxides, ruthenium can be easily patterned. The properties of the common ruthenium oxides make ruthenium a metal compatible with the semiconductor processing techniques needed to manufacture microelectronics. To continue the miniaturization of microelectronics, new materials are needed as dimensions change. There are three main applications for thin ruthenium films in microelectronics. The first is using thin films of ruthenium as electrodes on both sides of tantalum pentoxide (Ta2O5) or barium strontium titanate ((Ba, Sr)TiO3, also known as BST) in the next generation of three-dimensional dynamic random access memories (DRAMs). Ruthenium thin-film electrodes could also be deposited on top of lead zirconate titanate (Pb(ZrxTi1−x)O3, also known as PZT) in another kind of RAM, ferroelectric random access memory (FRAM). Platinum has been used as the electrode material in RAMs in laboratory settings, but it is difficult to pattern. Ruthenium is chemically similar to platinum, preserving the function of the RAMs, but in contrast to Pt it patterns easily. The second is using thin ruthenium films as metal gates in p-doped metal-oxide-semiconductor field effect transistors (p-MOSFETs). When replacing silicide gates with metal gates in MOSFETs, a key property of the metal is its work function, which needs to match that of the surrounding materials. For p-MOSFETs, the ruthenium work function is the best materials-property match with surrounding materials such as HfO2, HfSiOx, HfNOx, and HfSiNOx to achieve the desired electrical properties. The third large-scale application for ruthenium films is as a combination adhesion promoter and electroplating seed layer between TaN and Cu in the copper dual damascene process. Copper can be directly electroplated onto ruthenium, in contrast to tantalum nitride. Copper also adheres poorly to TaN but well to Ru. By depositing a layer of ruthenium on the TaN barrier layer, copper adhesion is improved and deposition of a separate copper seed layer becomes unnecessary. There are also other suggested uses.
In 1990, IBM scientists discovered that a thin layer of ruthenium atoms created a strong anti-parallel coupling between adjacent ferromagnetic layers, stronger than any other nonmagnetic spacer-layer element. Such a ruthenium layer was used in the first giant magnetoresistive read element for hard disk drives. In 2001, IBM announced a three-atom-thick layer of ruthenium, informally referred to as "pixie dust", which would allow a quadrupling of the data density of contemporary hard disk drive media. Ruthenium and its compounds are very poisonous and are considered carcinogens; they tend to be absorbed in the bones, and can stain the skin. Protective clothing must be worn when working with this element.
https://en.wikipedia.org/wiki?curid=25600
Rhodium Rhodium is a chemical element with the symbol Rh and atomic number 45. It is a rare, silvery-white, hard, corrosion-resistant, and chemically inert transition metal. It is a noble metal and a member of the platinum group. It has only one naturally occurring isotope, 103Rh. Naturally occurring rhodium is usually found as the free metal, as an alloy with similar metals, and rarely as a chemical compound in minerals such as bowieite and rhodplumsite. It is one of the rarest and most valuable precious metals. Rhodium is found in platinum or nickel ores together with the other members of the platinum group metals. It was discovered in 1803 by William Hyde Wollaston in one such ore, and named for the rose color of one of its chlorine compounds. The element's major use (approximately 80% of world rhodium production) is as one of the catalysts in the three-way catalytic converters of automobiles. Because rhodium metal is inert against corrosion and most aggressive chemicals, and because of its rarity, rhodium is usually alloyed with platinum or palladium and applied in high-temperature and corrosion-resistant coatings. White gold is often plated with a thin rhodium layer to improve its appearance, while sterling silver is often rhodium-plated for tarnish resistance. Rhodium is sometimes used to cure silicones: a two-part silicone, in which one part contains a silicon hydride and the other a vinyl-terminated silicone, is mixed, with one of the two liquids containing a rhodium complex. Rhodium detectors are used in nuclear reactors to measure the neutron flux level. Other uses of rhodium include asymmetric hydrogenation, used to form drug precursors, and the processes for the production of acetic acid. Rhodium (from Greek "rhodon" (ῥόδον), meaning "rose") was discovered in 1803 by William Hyde Wollaston, soon after his discovery of palladium. He used crude platinum ore presumably obtained from South America. His procedure involved dissolving the ore in aqua regia and neutralizing the acid with sodium hydroxide (NaOH). He then precipitated the platinum as ammonium chloroplatinate by adding ammonium chloride (NH4Cl). Most other metals, like copper, lead, palladium and rhodium, were precipitated with zinc. Diluted nitric acid dissolved all but palladium and rhodium. Of these, palladium dissolved in aqua regia but rhodium did not, and the rhodium was precipitated by the addition of sodium chloride as Na3[RhCl6]·nH2O. After being washed with ethanol, the rose-red precipitate was reacted with zinc, which displaced the rhodium in the ionic compound and thereby released the rhodium as free metal. After the discovery, the rare element had only minor applications; for example, by the turn of the century, rhodium-containing thermocouples were used to measure temperatures up to 1800 °C. They have exceptionally good stability in the temperature range of 1300 to 1800 °C. The first major application was electroplating for decorative uses and as a corrosion-resistant coating. The introduction of the three-way catalytic converter by Volvo in 1976 increased the demand for rhodium. The previous catalytic converters had used platinum or palladium, while the three-way catalytic converter used rhodium to reduce the amount of NOx in the exhaust. Rhodium is a hard, silvery, durable metal with a high reflectance. Rhodium metal does not normally form an oxide, even when heated. Oxygen is absorbed from the atmosphere only at the melting point of rhodium, but is released on solidification.
Rhodium has both a higher melting point and a lower density than platinum. It is not attacked by most acids: it is completely insoluble in nitric acid and dissolves slightly in aqua regia. Rhodium belongs to group 9 of the periodic table, but the configuration of electrons in its outermost shells is atypical for the group. This anomaly is also observed in the neighboring elements niobium (41), ruthenium (44), and palladium (46). The common oxidation state of rhodium is +3, but oxidation states from 0 to +6 are also observed. Unlike ruthenium and osmium, rhodium forms no volatile oxygen compounds. The known stable oxides include Rh2O3 and RhO2, among others. Halogen compounds are known in nearly the full range of possible oxidation states; rhodium(III) chloride, rhodium(IV) fluoride, rhodium(V) fluoride and rhodium(VI) fluoride are examples. The lower oxidation states are stable only in the presence of ligands. The best-known rhodium-halogen compound is Wilkinson's catalyst, chlorotris(triphenylphosphine)rhodium(I). This catalyst is used in the hydroformylation or hydrogenation of alkenes. Naturally occurring rhodium is composed of only one isotope, 103Rh. The most stable radioisotopes are 101Rh with a half-life of 3.3 years, 102Rh with a half-life of 207 days, 102mRh with a half-life of 2.9 years, and 99Rh with a half-life of 16.1 days. Twenty other radioisotopes have been characterized with atomic weights ranging from 92.926 u (93Rh) to 116.925 u (117Rh). Most of these have half-lives shorter than an hour, except 100Rh (20.8 hours) and 105Rh (35.36 hours). Rhodium has numerous meta states, the most stable being 102mRh (0.141 MeV) with a half-life of about 2.9 years and 101mRh (0.157 MeV) with a half-life of 4.34 days (see isotopes of rhodium). In isotopes with mass numbers below 103 (the stable isotope), the primary decay mode is electron capture and the primary decay product is ruthenium. In isotopes with mass numbers above 103, the primary decay mode is beta emission and the primary product is palladium. Rhodium is one of the rarest elements in the Earth's crust, comprising an estimated 0.0002 parts per million (2 × 10^-10). Its rarity affects its price and its use in commercial applications. The concentration of rhodium in nickel meteorites is typically 1 part per billion. Rhodium has been measured in some potatoes in concentrations between 0.8 and 30 ppt. The industrial extraction of rhodium is complex because the ores are mixed with other metals such as palladium, silver, platinum, and gold, and there are very few rhodium-bearing minerals. It is found in platinum ores and extracted as a white inert metal that is difficult to fuse. Principal sources are located in South Africa; in the river sands of the Ural Mountains; and in North America, including the copper-nickel sulfide mining area of the Sudbury, Ontario, region. Although the rhodium abundance at Sudbury is very small, the large amount of processed nickel ore makes rhodium recovery cost-effective. The main exporter of rhodium is South Africa (approximately 80% in 2010), followed by Russia. The annual world production is 30 tonnes. The price of rhodium is highly variable. In 2007, rhodium cost approximately eight times more than gold, 450 times more than silver, and 27,250 times more than copper by weight. In 2008, the price briefly rose above $10,000 per ounce ($350,000 per kilogram).
The economic slowdown of the third quarter of 2008 pushed rhodium prices sharply back below $1,000 per ounce ($35,000 per kilogram); the price rebounded to $2,750 per ounce by early 2010 ($97,000 per kilogram; more than twice the gold price), but in late 2013 the price fell below $1,000 per ounce. Political and financial problems led to very low oil prices and oversupply, causing most metals to drop in price. The economies of China, India and other emerging countries slowed in 2014 and 2015. In 2014 alone, 23,722,890 motor vehicles were produced in China, excluding motorbikes. By late November 2015 the rhodium price had fallen to about US$740 per troy ounce (31.1 grams). Rhodium is a fission product of uranium-235: each kilogram of fission product contains a significant amount of the lighter platinum group metals. Used nuclear fuel is therefore a potential source of rhodium, but the extraction is complex and expensive, and the presence of rhodium radioisotopes requires a period of cooling storage for multiple half-lives of the longest-lived isotopes (101Rh with a half-life of 3.3 years, and 102mRh with a half-life of 2.9 years), or about 10 years (a short decay estimate appears at the end of this article). These factors make the source unattractive and no large-scale extraction has been attempted. The primary use of this element is in automobiles as a catalytic converter, changing harmful unburned hydrocarbons, carbon monoxide, and nitrogen oxide exhaust emissions into less noxious gases. Of the 30,000 kg of rhodium consumed worldwide in 2012, 81% (24,300 kg) went into this application, and 8,060 kg was recovered from old converters. About 964 kg of rhodium was used in the glass industry, mostly for the production of fiberglass and flat-panel glass, and 2,520 kg was used in the chemical industry. Rhodium is preferable to the other platinum metals for the reduction of nitrogen oxides to nitrogen and oxygen (2NOx → N2 + xO2). Rhodium catalysts are used in a number of industrial processes, notably in the catalytic carbonylation of methanol to produce acetic acid by the Monsanto process. It is also used to catalyze the addition of hydrosilanes to molecular double bonds, a process important in the manufacture of certain silicone rubbers. Rhodium catalysts are also used to reduce benzene to cyclohexane. The complex of a rhodium ion with BINAP is a widely used chiral catalyst for chiral synthesis, as in the synthesis of menthol. Rhodium finds use in jewelry and for decorations. It is electroplated on white gold and platinum to give it a reflective white surface at the time of sale, after which the thin layer wears away with use. This is known as rhodium flashing in the jewelry business. It may also be used in coating sterling silver to protect against tarnish (silver sulfide, Ag2S, produced from atmospheric hydrogen sulfide, H2S). Solid (pure) rhodium jewelry is very rare, more because of the difficulty of fabrication (high melting point and poor malleability) than because of the high price. The high cost ensures that rhodium is applied only as an electroplate. Rhodium has also been used for honors or to signify elite status, when more commonly used metals such as silver, gold or platinum were deemed insufficient. In 1979 the "Guinness Book of World Records" gave Paul McCartney a rhodium-plated disc for being history's all-time best-selling songwriter and recording artist. Rhodium is used as an alloying agent for hardening and improving the corrosion resistance of platinum and palladium.
These alloys are used in furnace windings, bushings for glass fiber production, thermocouple elements, electrodes for aircraft spark plugs, and laboratory crucibles. Other uses include the construction of headlight reflectors in automobile manufacturing. Being a noble metal, pure rhodium is inert. However, chemical complexes of rhodium can be reactive. The median lethal dose (LD50) for rats is 198 mg of rhodium chloride (RhCl3) per kilogram of body weight. Like the other noble metals, all of which are too inert to occur as chemical compounds in nature, rhodium has not been found to serve any biological function. In elemental form, the metal is harmless. People can be exposed to rhodium in the workplace by inhalation. The Occupational Safety and Health Administration (OSHA) has specified the legal limit (permissible exposure limit) for rhodium exposure in the workplace at 0.1 mg/m3 over an 8-hour workday, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) at the same level. At levels of 100 mg/m3, rhodium is immediately dangerous to life or health. For soluble compounds, the PEL and REL are both 0.001 mg/m3.
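To put a number on the roughly ten-year cooling period mentioned in the fission-product discussion above, here is a minimal decay sketch using the half-lives quoted there; the activities are normalized to their starting values rather than taken from any real fuel inventory. Ten years is about three half-lives for both isotopes, leaving roughly a tenth of the initial activity.

# Fraction of a radionuclide's activity remaining after a cooling period.
def remaining_fraction(years, half_life_years):
    return 0.5 ** (years / half_life_years)

for isotope, half_life in (("Rh-101", 3.3), ("Rh-102m", 2.9)):
    left = remaining_fraction(10.0, half_life)
    print(f"{isotope}: {left:.1%} of initial activity after 10 years")  # 12.2% and 9.2%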
https://en.wikipedia.org/wiki?curid=25601
Radium Radium is a chemical element with the symbol Ra and atomic number 88. It is the sixth element in group 2 of the periodic table, also known as the alkaline earth metals. Pure radium is silvery-white, but it readily reacts with nitrogen (rather than oxygen) on exposure to air, forming a black surface layer of radium nitride (Ra3N2). All isotopes of radium are highly radioactive, with the most stable isotope being radium-226, which has a half-life of 1600 years and decays into radon gas (specifically the isotope radon-222). When radium decays, ionizing radiation is a product, which can excite fluorescent chemicals and cause radioluminescence. Radium, in the form of radium chloride, was discovered by Marie and Pierre Curie in 1898. They extracted the radium compound from uraninite and announced the discovery at the French Academy of Sciences five days later. Radium was isolated in its metallic state by Marie Curie and André-Louis Debierne through the electrolysis of radium chloride in 1911. In nature, radium is found in uranium and (to a lesser extent) thorium ores in trace amounts as small as a seventh of a gram per ton of uraninite. Radium is not necessary for living organisms, and adverse health effects are likely when it is incorporated into biochemical processes because of its radioactivity and chemical reactivity. Currently, other than its use in nuclear medicine, radium has no commercial applications; formerly, it was used as a radioactive source for radioluminescent devices and also in radioactive quackery for its supposed curative powers. Today, these former applications are no longer in vogue because radium's toxicity has become known, and less dangerous isotopes are used instead in radioluminescent devices. Radium is the heaviest known alkaline earth metal and is the only radioactive member of its group. Its physical and chemical properties most closely resemble those of its lighter congener barium. Pure radium is a volatile silvery-white metal, although its lighter congeners calcium, strontium, and barium have a slight yellow tint. This tint rapidly vanishes on exposure to air, yielding a black layer of radium nitride (Ra3N2). Its melting point is variously reported as 700 °C or 960 °C, and its boiling point is about 1737 °C. Both of these values are slightly lower than those of barium, confirming periodic trends down the group 2 elements. Like barium and the alkali metals, radium crystallizes in the body-centered cubic structure at standard temperature and pressure: the radium–radium bond distance is 514.8 picometers. Radium has a density of 5.5 g/cm3, higher than that of barium, again confirming periodic trends; the radium–barium density ratio is comparable to the radium–barium atomic mass ratio, due to the two elements' similar crystal structures. Radium has 33 known isotopes, with mass numbers from 202 to 234, all of them radioactive. Four of these – 223Ra (half-life 11.4 days), 224Ra (3.64 days), 226Ra (1600 years), and 228Ra (5.75 years) – occur naturally in the decay chains of primordial thorium-232, uranium-235, and uranium-238 (223Ra from uranium-235, 226Ra from uranium-238, and the other two from thorium-232). These isotopes nevertheless still have half-lives too short to be primordial radionuclides and only exist in nature from these decay chains. Together with the mostly artificial 225Ra (half-life 15 days), which occurs in nature only as a decay product of minute traces of 237Np, these are the five most stable isotopes of radium.
All other known radium isotopes have half-lives under two hours, and the majority have half-lives under a minute. At least 12 nuclear isomers have been reported; the most stable of them is radium-205m, with a half-life between 130 and 230 milliseconds, which is still shorter than the half-lives of twenty-four ground-state radium isotopes. In the early history of the study of radioactivity, the different natural isotopes of radium were given different names. In this scheme, 223Ra was named actinium X (AcX), 224Ra thorium X (ThX), 226Ra radium (Ra), and 228Ra mesothorium 1 (MsTh1). When it was realized that all of these are isotopes of the same element, many of these names fell out of use, and "radium" came to refer to all isotopes, not just 226Ra. Some of radium-226's decay products received historical names including "radium", ranging from radium A to radium G, with the letter indicating approximately how far they were down the chain from their parent 226Ra. 226Ra is the most stable isotope of radium and is the last isotope in the (4n + 2) decay chain of uranium-238 with a half-life of over a millennium: it makes up almost all of natural radium. Its immediate decay product is the dense radioactive noble gas radon (specifically the isotope 222Rn), which is responsible for much of the danger of environmental radium. It is 2.7 million times more radioactive than the same molar amount of natural uranium (mostly uranium-238), due to its proportionally shorter half-life (a short arithmetic check appears below). A sample of radium metal maintains itself at a higher temperature than its surroundings because of the radiation it emits: alpha particles, beta particles, and gamma rays. More specifically, natural radium (which is mostly 226Ra) emits mostly alpha particles, but other steps in its decay chain (the uranium or radium series) emit alpha or beta particles, and almost all particle emissions are accompanied by gamma rays. In 2013, it was discovered that the nucleus of radium-224 is pear-shaped. This was the first discovery of an asymmetric nucleus. Radium, like barium, is a highly reactive metal and always exhibits its group oxidation state of +2. It forms the colorless Ra2+ cation in aqueous solution, which is highly basic and does not form complexes readily. Most radium compounds are therefore simple ionic compounds, though participation from the 6s and 6p electrons (in addition to the valence 7s electrons) is expected due to relativistic effects and would enhance the covalent character of radium compounds such as RaF2 and RaAt2. For this reason, the standard electrode potential for the half-reaction Ra2+ (aq) + 2e− → Ra (s) is −2.916 V, almost the same as barium's −2.92 V rather than continuing the smooth trend toward more negative values down the group (Ca: −2.84 V; Sr: −2.89 V; Ba: −2.92 V). The values for barium and radium are almost exactly the same as those of the heavier alkali metals potassium, rubidium, and caesium. Solid radium compounds are white, as radium ions provide no specific coloring, but they gradually turn yellow and then dark over time due to self-radiolysis from radium's alpha decay. Insoluble radium compounds coprecipitate with all barium, most strontium, and most lead compounds. Radium oxide (RaO) has not been characterized beyond confirmation of its existence, despite oxides being common compounds for the other alkaline earth metals. Radium hydroxide (Ra(OH)2) is the most readily soluble among the alkaline earth hydroxides and is a stronger base than its barium congener, barium hydroxide.
It is also more soluble than actinium hydroxide and thorium hydroxide: these three adjacent hydroxides may be separated by precipitating them with ammonia. Radium chloride (RaCl2) is a colorless, luminous compound. It becomes yellow after some time due to self-damage by the alpha radiation given off by radium when it decays. Small amounts of barium impurities give the compound a rose color. It is soluble in water, though less so than barium chloride, and its solubility decreases with increasing concentration of hydrochloric acid. Crystallization from aqueous solution gives the dihydrate RaCl2·2H2O, isomorphous with its barium analog. Radium bromide (RaBr2) is also a colorless, luminous compound. In water, it is more soluble than radium chloride. Like radium chloride, crystallization from aqueous solution gives the dihydrate RaBr2·2H2O, isomorphous with its barium analog. The ionizing radiation emitted by radium bromide excites nitrogen molecules in the air, making it glow. The alpha particles emitted by radium quickly gain two electrons to become neutral helium, which builds up inside and weakens radium bromide crystals. This effect sometimes causes the crystals to break or even explode. Radium nitrate (Ra(NO3)2) is a white compound that can be made by dissolving radium carbonate in nitric acid. As the concentration of nitric acid increases, the solubility of radium nitrate decreases, an important property for the chemical purification of radium. Radium forms much the same insoluble salts as its lighter congener barium: it forms the insoluble sulfate (RaSO4, the most insoluble known sulfate), chromate (RaCrO4), carbonate (RaCO3), iodate (Ra(IO3)2), tetrafluoroberyllate (RaBeF4), and nitrate (Ra(NO3)2). With the exception of the carbonate, all of these are less soluble in water than the corresponding barium salts, but they are all isostructural to their barium counterparts. Additionally, radium phosphate, oxalate, and sulfite are probably also insoluble, as they coprecipitate with the corresponding insoluble barium salts. The great insolubility of radium sulfate (at 20 °C, only 2.1 mg will dissolve in 1 kg of water) means that it is one of the less biologically dangerous radium compounds. The large ionic radius of Ra2+ (148 pm) results in weak complexation and poor extraction of radium from aqueous solutions when not at high pH. All isotopes of radium have half-lives much shorter than the age of the Earth, so that any primordial radium would have decayed long ago. Radium nevertheless still occurs in the environment, as the isotopes 223Ra, 224Ra, 226Ra, and 228Ra are part of the decay chains of natural thorium and uranium isotopes; since thorium and uranium have very long half-lives, these daughters are continually being regenerated by their decay. Of these four isotopes, the longest-lived is 226Ra (half-life 1600 years), a decay product of natural uranium. Because of its relative longevity, 226Ra is the most common isotope of the element, making up about one part per trillion of the Earth's crust; essentially all natural radium is 226Ra. Thus, radium is found in tiny quantities in the uranium ore uraninite and various other uranium minerals, and in even tinier quantities in thorium minerals. One ton of pitchblende typically yields about one seventh of a gram of radium. One kilogram of the Earth's crust contains about 900 picograms of radium, and one liter of sea water contains about 89 femtograms of radium. 
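Two quantitative claims in this article can be checked with back-of-envelope arithmetic: that 226Ra is about 2.7 million times more radioactive than the same molar amount of natural uranium, and that uranium ore carries only a fraction of a gram of radium per ton. A minimal sketch follows; the 1600-year half-life is quoted above, while the 4.468×10^9-year half-life of uranium-238 is a standard value not stated in this article.

HALF_LIFE_U238_YEARS = 4.468e9
HALF_LIFE_RA226_YEARS = 1600.0

# Per mole, activity is proportional to the decay constant ln(2)/half-life,
# so the molar activity ratio is simply the inverse ratio of the half-lives.
ratio = HALF_LIFE_U238_YEARS / HALF_LIFE_RA226_YEARS
print(f"Ra-226 to U-238 molar activity ratio: {ratio:.2e}")  # ~2.8e6

# In secular equilibrium the parent and daughter activities are equal, so the
# atom ratio is N_Ra / N_U = half_life(Ra) / half_life(U); convert to mass.
mass_ratio = (226.0 / 238.0) * HALF_LIFE_RA226_YEARS / HALF_LIFE_U238_YEARS
print(f"radium per tonne of uranium: {mass_ratio * 1e6:.2f} g")  # ~0.34 g

The first figure is close to the quoted 2.7 million. Since uraninite is only partly uranium by mass and some chain members (notably radon) escape, the roughly 0.34 g of radium per tonne of contained uranium is consistent in order of magnitude with the one seventh of a gram per ton of ore mentioned earlier.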
Radium was discovered by Marie Skłodowska-Curie and her husband Pierre Curie on 21 December 1898, in a uraninite (pitchblende) sample. While studying the mineral earlier, the Curies had removed uranium from it and found that the remaining material was still radioactive. In July 1898, while studying pitchblende, they isolated an element similar to bismuth which turned out to be polonium. They then isolated a radioactive mixture consisting mostly of two components: compounds of barium, which gave a brilliant green flame color, and unknown radioactive compounds which gave carmine spectral lines that had never been documented before. The Curies found the radioactive compounds to be very similar to the barium compounds, except that they were less soluble. This made it possible for the Curies to isolate the radioactive compounds and discover a new element in them. The Curies announced their discovery to the French Academy of Sciences on 26 December 1898. The naming of radium dates to about 1899, from the French word "radium", formed in Modern Latin from "radius" ("ray"), in recognition of radium's power of emitting energy in the form of rays.

In September 1910, Marie Curie and André-Louis Debierne announced that they had isolated radium as a pure metal through the electrolysis of a pure radium chloride (RaCl2) solution using a mercury cathode, producing a radium–mercury amalgam. This amalgam was then heated in an atmosphere of hydrogen gas to remove the mercury, leaving pure radium metal. Later that same year, E. Ebler isolated radium by thermal decomposition of its azide, Ra(N3)2. Radium metal was first industrially produced at the beginning of the 20th century by Biraco, a subsidiary company of Union Minière du Haut Katanga (UMHK), at its Olen plant in Belgium. The common historical unit for radioactivity, the curie, is based on the radioactivity of 226Ra (a back-of-the-envelope check of its value is sketched below).

Radium was formerly used in self-luminous paints for watches, nuclear panels, aircraft switches, clocks, and instrument dials. A typical self-luminous watch that uses radium paint contains around 1 microgram of radium. In the mid-1920s, a lawsuit was filed against the United States Radium Corporation by five dying "Radium Girls" – dial painters who had painted radium-based luminous paint on the dials of watches and clocks. The dial painters had been instructed to lick their brushes to give them a fine point, thereby ingesting radium. Their exposure to radium caused serious health effects, including sores, anemia, and bone cancer. This is because the body treats radium as calcium and deposits it in the bones, where the radioactivity degrades marrow and can mutate bone cells. During the litigation, it was determined that the company's scientists and management had taken considerable precautions to protect themselves from the effects of radiation, yet had not seen fit to protect their employees. Additionally, for several years the companies had attempted to cover up the effects and avoid liability by insisting that the Radium Girls were instead suffering from syphilis. This complete disregard for employee welfare had a significant impact on the formulation of occupational disease labor law. As a result of the lawsuit, the adverse effects of radioactivity became widely known, and radium-dial painters were instructed in proper safety precautions and provided with protective gear. In particular, dial painters no longer licked paint brushes to shape them (which had caused some ingestion of radium salts).
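As a sketch of the curie mentioned above: the unit was historically defined as the activity of one gram of 226Ra, and its modern value of 3.7×10¹⁰ decays per second can be recovered from the half-life alone (rounded constants; A = λN, with N the number of atoms per gram):

```latex
% Specific activity of 226Ra: decay constant times atoms per gram.
A = \frac{\ln 2}{t_{1/2}} \cdot \frac{N_A}{M}
  = \frac{0.693}{1600\ \mathrm{yr} \times 3.156 \times 10^{7}\ \mathrm{s/yr}}
    \cdot \frac{6.022 \times 10^{23}\ \mathrm{mol}^{-1}}{226\ \mathrm{g/mol}}
  \approx 3.7 \times 10^{10}\ \mathrm{Bq/g} \quad (= 1\ \mathrm{Ci/g})
```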
Radium was still used in dials as late as the 1960s, but there were no further injuries to dial painters. This highlighted that the harm to the Radium Girls could easily have been avoided. From the 1960s the use of radium paint was discontinued. In many cases luminous dials were implemented with non-radioactive fluorescent materials excited by light; such devices glow in the dark after exposure to light, but the glow fades. Where long-lasting self-luminosity in darkness was required, safer radioactive promethium-147 (half-life 2.6 years) or tritium (half-life 12 years) paint was used; both continue to be used today. These had the added advantage of not degrading the phosphor over time, unlike radium. Tritium emits very low-energy beta radiation (of even lower energy than the beta radiation emitted by promethium) that cannot penetrate the skin, in contrast to the penetrating gamma radiation of radium, and is regarded as safer.

Clocks, watches, and instruments dating from the first half of the 20th century, often in military applications, may have been painted with radioactive luminous paint. They are usually no longer luminous; however, this is not due to radioactive decay of the radium (which has a half-life of 1600 years) but to the fluorescence of the zinc sulfide fluorescent medium being worn out by the radiation from the radium. The appearance of an often thick layer of green or yellowish brown paint in devices from this period suggests a radioactive hazard. The radiation dose from an intact device is relatively low and usually not an acute risk, but the paint is dangerous if released and inhaled or ingested.

Radium was once an additive in products such as toothpaste, hair creams, and even food items due to its supposed curative powers. Such products soon fell out of vogue and were prohibited by authorities in many countries after it was discovered they could have serious adverse health effects. (See, for instance, "Radithor" or "Revigator" types of "radium water" or "Standard Radium Solution for Drinking".) Spas featuring radium-rich water are still occasionally touted as beneficial, such as those in Misasa, Tottori, Japan. In the U.S., nasal radium irradiation was administered to children from the late 1940s through the early 1970s to prevent middle-ear problems or enlarged tonsils.

Radium (usually in the form of radium chloride or radium bromide) was used in medicine to produce radon gas, which in turn was used as a cancer treatment; for example, several of these radon sources were used in Canada in the 1920s and 1930s. However, many treatments that were used in the early 1900s are no longer used, because of the harmful effects of radium bromide exposure, which include anemia, cancer, and genetic mutations. Safer gamma emitters such as 60Co, which is less costly and available in larger quantities, have generally replaced radium in this application.

Early in the 1900s, biologists used radium to induce mutations and study genetics. As early as 1904, Daniel MacDougal used radium in an attempt to determine whether it could provoke sudden large mutations and cause major evolutionary shifts. Thomas Hunt Morgan used radium to induce changes resulting in white-eyed fruit flies. Nobel-winning biologist Hermann Muller briefly studied the effects of radium on fruit fly mutations before turning to more affordable x-ray experiments.
Howard Atwood Kelly, one of the founding physicians of Johns Hopkins Hospital, was a major pioneer in the medical use of radium to treat cancer. His first patient was his own aunt, in 1904; she died shortly after surgery. Kelly was known to use excessive amounts of radium to treat various cancers and tumors, and as a result some of his patients died from radium exposure. His method of radium application was to insert a radium capsule near the affected area, then sew the radium "points" directly to the tumor. This was the same method used to treat Henrietta Lacks, the host of the original HeLa cells, for cervical cancer. Currently, safer and more available radioisotopes are used instead.

Uranium had no large-scale application in the late 19th century, and therefore no large uranium mines existed. At first, the only large source of uranium ore was the silver mines at Joachimsthal, Austria-Hungary (now Jáchymov, Czech Republic), where uranium ore was only a byproduct of the mining activities. In the first extraction of radium, Curie used the residues left after extraction of uranium from pitchblende. The uranium had been extracted by dissolution in sulfuric acid, leaving radium sulfate, which is similar to barium sulfate but even less soluble, in the residues. The residues also contained rather substantial amounts of barium sulfate, which thus acted as a carrier for the radium sulfate. The first steps of the radium extraction process involved boiling with sodium hydroxide, followed by hydrochloric acid treatment to minimize impurities of other compounds. The remaining residue was then treated with sodium carbonate to convert the barium sulfate into barium carbonate (carrying the radium), thus making it soluble in hydrochloric acid. After dissolution, the barium and radium were reprecipitated as sulfates; this was then repeated to further purify the mixed sulfate. Some impurities that form insoluble sulfides were removed by treating the chloride solution with hydrogen sulfide, followed by filtering. When the mixed sulfates were pure enough, they were once more converted to mixed chlorides, and barium and radium were thereafter separated by fractional crystallisation while monitoring the progress with a spectroscope (radium gives characteristic red lines, in contrast to the green barium lines) and an electroscope.

After the isolation of radium by Marie and Pierre Curie from uranium ore from Joachimsthal, several scientists started to isolate radium in small quantities. Later, small companies purchased mine tailings from the Joachimsthal mines and started isolating radium. In 1904, the Austrian government nationalised the mines and stopped exporting raw ore. For some time radium availability was low. The formation of an Austrian monopoly, and the strong urge of other countries to have access to radium, led to a worldwide search for uranium ores. The United States took over as leading producer in the early 1910s. The carnotite sands of Colorado provide some of the element, but richer ores are found in the Congo and the area of the Great Bear Lake and the Great Slave Lake of northwestern Canada; neither of the deposits is mined for radium, but the uranium content makes mining profitable. The Curies' process was still used for industrial radium extraction in 1940, but mixed bromides were then used for the fractionation. If the barium content of the uranium ore is not high enough, it is easy to add some to carry the radium.
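The pivotal step of the process just described, freeing the radium from the intractable sulfate, can be summarized by two classical reactions. Radium follows its barium carrier throughout, so they are written here for barium; this is an illustrative sketch of standard sulfate/carbonate chemistry, not a transcription of the Curies' procedure:

```latex
% Carbonate conversion frees the (Ba,Ra) from the insoluble sulfate ...
\mathrm{BaSO_4 + Na_2CO_3 \longrightarrow BaCO_3 + Na_2SO_4}
% ... and the carbonate then dissolves in hydrochloric acid:
\mathrm{BaCO_3 + 2\,HCl \longrightarrow BaCl_2 + H_2O + CO_2\uparrow}
```

The resulting mixed barium–radium chloride was then fractionally crystallised, exploiting the slightly lower solubility of the radium salt.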
These processes were applied to high-grade uranium ores, but may not work well with low-grade ores. Small amounts of radium were still extracted from uranium ore by this method of mixed precipitation and ion exchange as late as the 1990s, but today radium is extracted only from spent nuclear fuel. In 1954, the total worldwide supply of purified radium amounted to about 2.3 kg (5 pounds), and it is still in this range today, while the annual production of pure radium compounds totals only about 100 g. The chief radium-producing countries are Belgium, Canada, the Czech Republic, Slovakia, the United Kingdom, and Russia. The amounts of radium produced have always been relatively small; for example, in 1918, 13.6 g of radium were produced in the United States. The metal is isolated by reducing radium oxide with aluminium metal in a vacuum at 1200 °C (the reaction is sketched below).

Some of the few practical uses of radium are derived from its radioactive properties. More recently discovered radioisotopes, such as cobalt-60 and caesium-137, are replacing radium even in these limited uses, because several of these isotopes are more powerful emitters, safer to handle, and available in more concentrated form. The isotope 223Ra (under the trade name Xofigo) was approved by the United States Food and Drug Administration in 2013 for use in medicine as a treatment for bone metastasis. The main indication for Xofigo is the therapy of bony metastases from castration-resistant prostate cancer, owing to the favourable characteristics of this alpha-emitting radiopharmaceutical. 225Ra has also been used in experiments concerning therapeutic irradiation, as it is the only reasonably long-lived radium isotope which does not have radon as one of its daughters. Radium is still used today as a radiation source in some industrial radiography devices to check for flawed metallic parts, similarly to X-ray imaging. When mixed with beryllium, radium acts as a neutron source (this reaction is also sketched below). Radium–beryllium neutron sources are still sometimes used even today, but other materials such as polonium are now more common: about 1500 polonium–beryllium neutron sources have been used annually in Russia. These RaBeF4-based (α, n) neutron sources have been deprecated, despite the high number of neutrons they emit (1.84×10⁶ neutrons per second), in favour of 241Am–Be sources. Today, the isotope 226Ra is mainly used to form 227Ac by neutron irradiation in a nuclear reactor.

Radium is highly radioactive, and its immediate daughter, radon gas, is also radioactive. When radium is ingested, 80% of it leaves the body through the feces, while the other 20% enters the bloodstream, mostly accumulating in the bones. Exposure to radium, internal or external, can cause cancer and other disorders, because radium and radon emit alpha and gamma rays upon their decay, which kill and mutate cells. At the time of the Manhattan Project in 1944, the "tolerance dose" for workers was set at 0.1 micrograms of ingested radium. Some of the biological effects of radium include the first case of "radium dermatitis", reported in 1900, two years after the element's discovery. The French physicist Henri Becquerel carried a small ampoule of radium in his waistcoat pocket for six hours and reported that his skin became ulcerated. Pierre and Marie Curie were so intrigued by radiation that they sacrificed their own health to learn more about it.
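The two reactions pointed to above, written out explicitly. The aluminothermic stoichiometry is shown as conventionally balanced for alkaline earth oxides (an illustrative sketch, since the source gives no equation); the beryllium reaction is the classic (α, n) process underlying Ra–Be, Po–Be, and Am–Be neutron sources alike:

```latex
% Aluminothermic isolation of the metal (illustrative stoichiometry):
\mathrm{3\,RaO + 2\,Al \longrightarrow Al_2O_3 + 3\,Ra}
% The (alpha, n) reaction on beryllium that yields the neutrons:
\mathrm{^{9}Be} + \alpha \longrightarrow \mathrm{^{12}C} + n
```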
Pierre Curie attached a tube filled with radium to his arm for ten hours, which resulted in the appearance of a skin lesion, suggesting the use of radium to attack cancerous tissue, as it had attacked healthy tissue. Handling of radium has been blamed for Marie Curie's death from aplastic anemia. A significant amount of radium's danger comes from its daughter radon: being a gas, it can enter the body far more readily than can its parent radium. Today, 226Ra is considered to be the most toxic of the radioelements available in quantity, and it must be handled in tight glove boxes with significant airstream circulation, the exhaust of which is then treated to avoid escape of its daughter 222Rn to the environment. Old ampoules containing radium solutions must be opened with care, because radiolytic decomposition of water can produce an overpressure of hydrogen and oxygen gas (sketched below). The world's largest concentration of 226Ra is stored within the Interim Waste Containment Structure north of Niagara Falls, New York.
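The ampoule overpressure mentioned above is the net result of radiolysis driven by radium's alpha particles; over time, the deposited energy splits the water into its constituent gases:

```latex
% Net radiolysis of water in a sealed ampoule (alpha particles supply the energy):
\mathrm{2\,H_2O \longrightarrow 2\,H_2\uparrow + O_2\uparrow}
```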
https://en.wikipedia.org/wiki?curid=25602
Simple DirectMedia Layer Simple DirectMedia Layer (SDL) is a cross-platform software development library designed to provide a hardware abstraction layer for computer multimedia hardware components. Software developers can use it to write high-performance computer games and other multimedia applications that can run on many operating systems such as Android, iOS, Linux, macOS, and Windows. SDL manages video, audio, input devices, CD-ROM, threads, shared object loading, networking and timers. For 3D graphics, it can handle an OpenGL, Vulkan or Direct3D context. SDL is often mistaken for a game engine; it is not one, although the library is well suited to building games directly, and it is also used "indirectly" by engines built on top of it.

The library is internally written in C and possibly, depending on the target platform, C++ or Objective-C, and provides the application programming interface in C, with bindings to other languages available. It is free and open-source software, subject to the zlib License since version 2.0 and to the GNU Lesser General Public License for prior versions. Under the zlib License, SDL 2.0 is freely available for static linking in closed-source projects, unlike SDL 1.2. SDL 2.0, released in 2013, was a major departure from previous versions, offering more opportunity for 3D hardware acceleration but breaking backwards compatibility. SDL is extensively used in the industry in both large and small projects: over 700 games, 180 applications, and 120 demos have been posted on the library website.

Sam Lantinga created the library, first releasing it in early 1998 while working for Loki Software. He got the idea while porting a Windows application to Macintosh. He then used SDL to port "Doom" to BeOS (see Doom source ports). Several other free libraries were developed to work alongside SDL, such as SMPEG and OpenAL. He also founded Galaxy Gameworks in 2008 to help commercially support SDL, although the company's plans are currently on hold due to time constraints. Soon after putting Galaxy Gameworks on hold, Lantinga announced that SDL 1.3 (which would later become SDL 2.0) would be licensed under the zlib License. On 14 July 2012, Lantinga announced SDL 2.0 and, on the same day, announced that he was joining Valve. He announced the stable release of SDL 2.0.0 on 13 August 2013.

SDL 2.0 is a major update to the SDL 1.2 codebase with a different, not backwards-compatible API. It replaces several parts of the 1.2 API with more general support for multiple input and output options. Some feature additions include multiple-window support, hardware-accelerated 2D graphics, and better Unicode support. Support for Mir and Wayland was added in SDL 2.0.2 and enabled by default in SDL 2.0.4. Version 2.0.4 also provided better support for Android.

SDL is a wrapper around the operating-system-specific functions that a game needs to access. Its sole purpose is to provide a common framework for accessing these functions across multiple operating systems (cross-platform). SDL provides support for 2D pixel operations, sound, file access, event handling, timing and threading. It is often used to complement OpenGL by setting up the graphical output and providing mouse and keyboard input, since OpenGL handles only rendering. A game using the Simple DirectMedia Layer will "not" automatically run on every operating system; further adaptations must be applied.
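As a concrete illustration of this wrapper role, here is a minimal sketch in C: initialize a subsystem, create a window, and shut down. It assumes SDL 2.x and the usual sdl2-config build helper; error handling is deliberately brief.

```c
/* Minimal SDL2 program: open a window for two seconds, then exit.
 * Build (on a typical Unix-like system):
 *   cc hello_sdl.c -o hello_sdl `sdl2-config --cflags --libs` */
#include <SDL.h>

int main(int argc, char *argv[]) {
    /* Start only the video subsystem; other subsystems stay uninitialized. */
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        SDL_Log("SDL_Init failed: %s", SDL_GetError());
        return 1;
    }

    /* SDL talks to the native windowing system (Win32, Cocoa, Wayland, ...)
     * behind this one portable call. */
    SDL_Window *win = SDL_CreateWindow("Hello, SDL",
                                       SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED,
                                       640, 480, SDL_WINDOW_SHOWN);
    if (win == NULL) {
        SDL_Log("SDL_CreateWindow failed: %s", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    SDL_Delay(2000);          /* keep the window visible for 2000 ms */

    SDL_DestroyWindow(win);
    SDL_Quit();               /* shut down all initialized subsystems */
    return 0;
}
```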
The adaptations needed per platform are kept to a minimum, since SDL also contains a few abstraction APIs for frequently used functions offered by an operating system. The syntax of SDL is function-based: all operations done in SDL are done by passing parameters to subroutines (functions). Special structures are also used to store the specific information SDL needs to handle; a small event-loop sketch illustrating this style appears at the end of this section. SDL functions are categorized under several different subsystems, such as video, audio, and event handling.

Besides this basic, low-level support, there are also a few separate official libraries that provide some more functions. These comprise the "standard library", and are provided on the official website and included in the official documentation. Other, non-standard libraries also exist; one example is SDL_Collide on SourceForge, created by Amir Taaki. The SDL 2.0 library also has bindings for many other programming languages.

Because of the way SDL is designed, much of its source code is split into separate modules for each operating system, to make calls to the underlying system. When SDL is compiled, the appropriate modules are selected for the target system, so a back-end exists for each supported platform. SDL 1.2 has support for RISC OS (dropped in 2.0). An unofficial Sixel back-end is available for SDL 1.2. The Rockbox MP3 player firmware also distributes a version of SDL 1.2, which is used to run games such as Quake.

Over the years SDL has been used for many commercial and non-commercial video game projects. For instance, MobyGames listed 120 games using SDL in 2013, and the SDL website itself listed around 700 games in 2012. Important commercial examples are "Angry Birds" and "Unreal Tournament"; ones from the open-source domain are "OpenTTD", "The Battle for Wesnoth" and "Freeciv". The cross-platform game releases of the popular Humble Indie Bundles for Linux, Mac and Android are often SDL-based. SDL is also often used for later ports of legacy code to new platforms: for instance, the PC game Homeworld was ported to the Pandora handheld, and Jagged Alliance 2 to Android, via SDL. Software other than video games also uses SDL; examples are the emulators DOSBox and VisualBoyAdvance. Several books have been written about development with SDL (see further reading). SDL is used in university courses teaching multimedia and computer science, for instance in a workshop about game programming using libSDL at the University of Cadiz in 2010, and in a Game Design discipline at UTFPR (Ponta Grossa campus) in 2015.
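Here is the promised event-loop sketch, again in C and assuming SDL 2.x with the video subsystem already initialized (see the previous example): SDL fills in an SDL_Event union, and the program branches on its type field.

```c
/* Sketch of SDL's function-plus-struct style: drain the event queue each
 * frame and react to the data SDL stores in the SDL_Event union.
 * Assumes SDL_Init(SDL_INIT_VIDEO) and a window already exist. */
#include <SDL.h>
#include <stdbool.h>

void run_event_loop(void) {
    bool running = true;
    while (running) {
        SDL_Event event;                    /* filled in by SDL_PollEvent */
        while (SDL_PollEvent(&event)) {     /* returns 0 when queue is empty */
            switch (event.type) {
            case SDL_QUIT:                  /* window close requested */
                running = false;
                break;
            case SDL_KEYDOWN:               /* key press; details in event.key */
                if (event.key.keysym.sym == SDLK_ESCAPE)
                    running = false;
                break;
            default:
                break;
            }
        }
        /* ... update game state and render here ... */
        SDL_Delay(16);                      /* crude ~60 Hz pacing */
    }
}
```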
https://en.wikipedia.org/wiki?curid=29199
Seattle University Seattle University (SU) is a private Jesuit university in Seattle, Washington. SU is the largest independent university in the Northwest US, with over 7,500 students enrolled in undergraduate and graduate programs within eight schools. In 1891, Adrian Sweere, S.J., took over a small parish near downtown Seattle at Broadway and Madison. At first, the school was named after the surrounding Immaculate Conception parish and did not offer higher education. In 1898, the school was named Seattle College after both the city and Chief Seattle, and it granted its first bachelor's degrees 11 years later. Initially, the school served as both a high school and a college. From 1919 to 1931, the college was located on Interlaken Blvd, but in 1931 it returned to First Hill permanently. That same year, Seattle College created a "night school" for women, though admitting women was highly controversial at the time. In 1948, Seattle College changed its name to Seattle University, under Father Albert A. Lemieux, S.J. In 1993, the Seattle University School of Law was established through purchase of the Law School from the University of Puget Sound in Tacoma, and the School of Law moved to the Seattle campus in 1999. In 2009, SU completed its largest capital campaign, raising almost $169 million. This led to investment in the scholarship fund, academic programs and professorships, a fitness complex, an arts center, and the $56 million Lemieux Library and McGoldrick Learning Commons, completed in fall 2010.

Seattle University has a campus in the city's First Hill neighborhood, east of downtown Seattle. The SU campus has been recognized by the city of Seattle and the EPA for its commitment to sustainability through pesticide-free grounds, a food-waste compost facility, recycling, and an energy conservation program. The Chapel of St. Ignatius on campus, designed by New York architect Steven Holl, won a national Honor Award from the American Institute of Architects in 1998. At night the chapel sends beacons of multi-colored light out onto the campus. The campus includes numerous works by well-known artists: the Centennial Fountain by Seattle artist George Tsutakawa; a large glass sculpture in the PACCAR Atrium of Piggot Hall by Tacoma artist Dale Chihuly; and works by Chuck Close, Jacob Lawrence, Gwendolyn Knight, William Morris, and David Mach.

Undergraduate enrollment in 2014 showed some ethnic diversity: 55.7% White, 23.4% Asian, 11.0% Hispanic, 10.7% Other (International), 4.5% Black, 3.3% Pacific Islander, and 1.6% Native American (the figures total more than 100% because some students reported more than one ethnicity). The Lemieux Library was founded in 1991; it contained 216,677 books and subscribed to 1,604 periodicals, and it is a member of the American Theological Library Association.

Seattle University offers 61 bachelor's degree programs, 31 graduate degree programs, and 27 certificate programs, plus a law school and a doctoral program in education. The university consists of nine colleges: the College of Arts and Sciences, the Albers School of Business and Economics, the College of Education, the School of Law, Matteo Ricci College, the College of Nursing, the College of Science and Engineering, the School of New and Continuing Studies, and the School of Theology and Ministry. A Seattle University education is estimated to cost $150,000, although much of this is covered by financial aid. Seattle University's Albers School of Business and Economics, started in 1945, was named after the Albers family.
George and Eva Albers were frequent donors, including Eva's bequest of $3 million to the school in 1971. Their daughter, alumna Genevieve Albers, also made several bequests, including a sponsored professorship. In 1967, the business school added an MBA program. "BusinessWeek" ranked Albers's part-time MBA program #25 in the nation and the undergraduate program in the top 50 in 2010. Both the Leadership Executive MBA program and the part-time MBA program were recognized among the top 25 in their categories by "U.S. News & World Report's" 2010 "America's Best Graduate Schools". "U.S. News" also ranks the Albers School among the top 10% of undergraduate business schools nationwide. The Albers School is accredited by the Association to Advance Collegiate Schools of Business (AACSB).

The Seattle University College of Arts and Sciences is the oldest and largest undergraduate and graduate college affiliated with Seattle University. The college offers 41 undergraduate majors, 36 undergraduate minors, six graduate degrees, and one post-graduate certificate. Its graduate program in psychology is one of the few in the country to focus on existential phenomenology as a therapeutic method. The Seattle University Communications Department offers Strategic Communications, Journalism, and Communication Studies majors, as well as internship opportunities.

Matteo Ricci College was founded in 1973 and named after the Italian Jesuit missionary Matteo Ricci. The program allows high school students from the affiliated Seattle Preparatory School and other area high schools to graduate with a bachelor's degree in humanities or teaching after as little as three years in high school and three years in college. It also provides students the opportunity to obtain a second bachelor's degree in any other discipline with one additional year of study.

The Seattle University School of Law is the largest and most diverse in the Pacific Northwest. It was founded in 1972 as part of the University of Puget Sound (UPS) in Tacoma, WA. In 1993 the University of Puget Sound and Seattle University agreed on a transfer of the law school to Seattle University; in August 1994 the transfer was completed, and the school physically moved to the Seattle University campus in 1999. The 2019 "U.S. News & World Report" law school rankings list the school at number 122 in the nation overall, adding that the school has the number one legal writing program in the nation as well as top-20 rankings for its part-time program and its clinical programs.

Seattle University's College of Nursing celebrated its 75th anniversary in 2010. It is housed in the renovated Garrand building, the site of the original Seattle College and the oldest building on campus. The "state of the art" Clinical Performance Lab is located in the James Tower of Swedish Medical Center's Cherry Hill campus, a few blocks away from the main campus. Undergraduate and graduate students use this lab to practice skills necessary for clinical nursing. The BSN program accepts transfer students from community colleges and other universities. The MSN program welcomes registered nurses with bachelor's degrees. The Advanced Practice Nursing Immersion program (MSN) offers an accelerated program for those with a bachelor's degree in another field. Specialties available in the MSN program are Family Nurse Practitioner, Adult/Gerontological Nurse Practitioner, Psych-Mental Health Nurse Practitioner, Nurse-Midwifery, and Advanced Community/Public Health Nursing.
The College of Education was founded in 1935 and offers programs that include a Doctorate in Educational Leadership and master's degrees in Adult Education and Training, Counseling, Curriculum and Instruction, Educational Administration, Literacy for Special Needs, Teaching, Teaching with Special Education Endorsement, Special Education, Student Development Administration, and Teaching English as a Second or Foreign Language (ESL). Educational specialist degree programs include Educational Administration, School Psychology, and Special Education; certificate programs offered include Superintendent, Principal, and Professional Development. The College of Education is accredited by the National Council for Accreditation of Teacher Education and approved by the National Association of School Psychologists.

The College of Science and Engineering focuses on basic sciences, mathematics, and their applications. Students can major in basic science disciplines, computer science, or one of the engineering departments – civil and environmental engineering, mechanical engineering, or computer and electrical engineering. Students may also obtain an interdisciplinary general science degree, or prepare for graduate work in the health professions.

The School of Theology and Ministry is an ecumenical program with relationships with 10 Protestant denominations and the Catholic Archdiocese of Seattle. The school offers a number of master's degrees and certificates, including a Master of Divinity. The number of service-learning courses at SU has nearly doubled since 2004. The economic impact of SU in the Seattle area in 2008 was $580.4 million; this figure is drawn from total spending by the university, its students, and visitors.

Among Seattle University's many environmental undertakings are projects ranging from composting initiatives to water conservation. There are also solar panels on buildings, and a central recycling yard with an extensive recycling program. The university has been composting since 1995, and in 2003 it built the first composting facility in the state on an urban campus. SU received the Sustainability Innovator Award in 2007 from the Sustainable Endowments Institute for its pre-consumer food-waste composting program, and the Green Washington Award in 2008 from "Washington CEO Magazine" for its sustainable landscape practices and pre-consumer food-waste composting program. The "Princeton Review"'s 2018 Green Rating rated the school as the #12 green college in the country.

SU's move to a pesticide-free campus began in the early 1980s when Ciscoe Morris, now a local gardening personality, was head of the grounds department. He put a halt to chemical spraying and in its place released more than 20,000 beneficial insects called lacewings to eat the aphids that had infested trees on campus. The success of this led to other pesticide-free gardening practices.

Between 1950 and 1971, Seattle University competed as a Division I independent school. In the 1950s, the basketball team was a powerhouse with brothers Johnny and Eddie O'Brien, who led the team to a rare victory over the Harlem Globetrotters. In 1958, future NBA Hall of Famer Elgin Baylor paced a men's basketball team that advanced to the Final Four and defeated top-ranked Kansas State University before losing to the University of Kentucky.
Seattle University was also a leader in the area of racial diversity, with an integrated squad known as "the United Nations team." The success of men's basketball, in addition to men's golf and baseball, continued into the 1960s with players Eddie Miles, Clint Richardson, and Tom Workman, who went on to successful careers in the NBA. The 1966 basketball squad gave Texas Western University its only defeat in a championship season celebrated in the film "Glory Road". In the course of the 1960s, Seattle University produced more NBA players than any other school. During that time women's tennis star Janet Hopps Adkisson was the first woman to be the nationally top-ranked player among both men and women. In women's golf, Pat Lesser was twice named to the Curtis Cup in the mid-1950s and was later inducted into the State of Washington Sports Hall of Fame. Before 1980, more than 25 SU baseball players went on to play professionally in both the major and minor leagues. Men's golf and a Tom Gorman-led tennis team were also rated nationally. Gorman went on to lead the US Davis Cup team, for which he compiled a record 18 match wins and one Davis Cup title (1972) as a player, and captained two more Davis Cup championships as a coach (1990 and 1992).

SU joined the West Coast Conference in 1971. In 1980, it left the West Coast Conference and Division I membership and entered the NAIA, where it remained for nearly 20 years. In the late 1990s, President Fr. Sundborg started restoring the university's NCAA membership. The athletic program moved into Division II in the fall of 2002, and from Division II to Division I in 2009. Also in that year, the university hired men's basketball coach Cameron Dollar, a former assistant at the University of Washington, and women's coach Joan Bonvicini, a former University of Arizona coach and one of the winningest women's college basketball coaches. In 2013, Coach Bonvicini led the Redhawks to the regular-season Western Athletic Conference championship. In 2016, Suzy Barcomb was hired as the new coach for women's basketball after Coach Bonvicini resigned in March 2016. In her first season with Seattle U, Coach Barcomb led the Redhawks to a WAC tournament title and a 15th seed in the NCAA Tournament, where Seattle U faced the second-seeded Oregon Ducks.

In 1938, the mascot switched from the Maroons to the Chieftains; the name was selected to honor the college's namesake, Chief Seattle. In 2000, the university changed its mascot to the Redhawks. On June 14, 2011, Seattle U accepted an invitation to join the Western Athletic Conference, becoming a full member for the 2012–2013 season.
https://en.wikipedia.org/wiki?curid=29200
Seattle Colleges District The Seattle Colleges District (previously Seattle Community Colleges District), also known simply as Seattle Colleges, is a group of colleges located in Seattle, Washington. It consists of three colleges—North Seattle College, Seattle Central College (including the Health Education Center in Pacific Tower, the Wood Technology Center and the Seattle Maritime Academy), and South Seattle College (including the Georgetown Campus and the NewHolly Learning Center)—and the Seattle Vocational Institute. Together the colleges form the second-largest institution of higher education in the state, behind the University of Washington, to which many of their graduates transfer.

The district's origins can be traced to 1902, with the opening of Broadway High School on Capitol Hill. It operated as a traditional high school until the end of World War II, when it was converted to a vocational and adult education institution for the benefit of veterans who wanted to finish high school but no longer fit in at regular schools. As a result, in 1946, Broadway High School was renamed Edison Technical School. Edison started offering college-level courses 21 years later, and it was reconstituted as Seattle Community College in September 1966. North Seattle Community College and South Seattle Community College opened their doors in 1970, whereupon Seattle Community College was renamed Seattle Central Community College. Seattle Central Community College was named Time magazine's Community College of the Year in 2001. In March 2014, the Board of Trustees voted unanimously to change the name from "Seattle Community Colleges District" to "Seattle Colleges District" and to change the names of the colleges to "Seattle Central College", "North Seattle College" and "South Seattle College".

In 2018, Seattle Colleges partnered with the city of Seattle and Seattle Public Schools to launch Seattle Promise, a tuition-covering program that aims to expand college access, success, and completion. Seattle Promise was part of the Families and Education Levy passed by the citizens of Seattle during the local election of November 2018. Seattle Promise offers graduating seniors of Seattle public schools paid tuition for up to two years or 90 credits, as well as academic support and advising, when they attend Seattle Colleges.

The chief executive officer of the Seattle Colleges District is the chancellor. The presidents of each of the three colleges comprising Seattle Colleges—North Seattle College, Seattle Central College, and South Seattle College—report to the chancellor. Seattle Colleges is governed by a board of trustees appointed by the governor and approved by the state Senate. Seattle Colleges offers more than 130 career and technical education programs of study that result in certificates and associate degrees, as well as a number of Bachelor of Applied Science degrees and college transfer options. The colleges also offer continuing education programs; concurrent high school enrollment and high school completion programs; and Adult Basic Education/English as a Second Language programs, as well as corporate and customized training.
https://en.wikipedia.org/wiki?curid=29201
Summer of Love The Summer of Love was a social phenomenon that occurred during mid-1967, when as many as 100,000 people, mostly young people sporting hippie fashions of dress and behavior, converged in San Francisco's Haight-Ashbury neighborhood. More broadly, the Summer of Love encompassed the hippie music, drug, anti-war, and free-love scene throughout the American west coast, and as far away as New York City.

Hippies, sometimes called flower children, were an eclectic group. Many were suspicious of the government, rejected consumerist values, and generally opposed the Vietnam War. A few were interested in politics; others were concerned more with art (music, painting, and poetry in particular) or spiritual and meditative practices. Inspired by the Beat Generation of authors of the 1950s, who had flourished in the North Beach area of San Francisco, those who gathered in Haight-Ashbury during 1967 allegedly rejected the conformist and materialist values of modern life; there was an emphasis on sharing and community. The Diggers established a Free Store, and a Free Clinic where medical treatment was provided.

The prelude to the Summer of Love was a celebration known as the Human Be-In at Golden Gate Park on January 14, 1967, which was produced and organized by artist Michael Bowen. It was at this event that Timothy Leary voiced his phrase, "turn on, tune in, drop out". This phrase helped shape the entire hippie counterculture, as it voiced the key ideas of 1960s rebellion. These ideas included communal living, political decentralization, and dropping out. The term "dropping out" became popular among many high school and college students, many of whom would abandon their conventional education for a summer of hippie culture. The event was announced by the Haight-Ashbury's hippie newspaper, the "San Francisco Oracle": "A new concept of celebrations beneath the human underground must emerge, become conscious, and be shared, so a revolution can be formed with a renaissance of compassion, awareness, and love, and the revelation of unity for all mankind." The gathering of approximately 30,000 at the Human Be-In helped publicize hippie fashions.

The term "Summer of Love" originated with the formation of the Council for the Summer of Love during the spring of 1967 as a response to the convergence of young people on the Haight-Ashbury district. The Council was composed of The Family Dog, The Straight Theatre, The Diggers, "The San Francisco Oracle", and approximately twenty-five other people, who sought to alleviate some of the problems anticipated from the influx of people expected during the summer. The Council also assisted the Free Clinic and organized housing, food, sanitation, music and arts, along with maintaining coordination with local churches and other social groups.

The increasing numbers of youth traveling to the Haight-Ashbury district alarmed the San Francisco authorities, whose public warning was that they would keep hippies away. Adam Kneeman, a long-time resident of the Haight-Ashbury, recalls that the police did little to help the hordes of newcomers; much of the help came from residents of the area. College and high-school students began streaming into the Haight during the spring break of 1967, and the local government officials, determined to stop the influx of young people once schools ended for the summer, unwittingly brought additional attention to the scene; a series of articles in local papers alerted the national media to the hippies' growing numbers.
By spring, some Haight-Ashbury residents responded by forming the Council of the Summer of Love, giving the event a name. The media's coverage of hippie life in the Haight-Ashbury drew the attention of youth from all over America. Hunter S. Thompson termed the district "Hashbury" in "The New York Times Magazine", and the activities in the area were reported almost daily. The event was also reported by the counterculture's own media, particularly the "San Francisco Oracle", the pass-around readership of which is thought to have exceeded a half-million people that summer, and the "Berkeley Barb".

The media's reportage of the "counterculture" included other events in California, such as the Fantasy Fair and Magic Mountain Music Festival in Marin County and the Monterey Pop Festival, both during June 1967. At Monterey, approximately 30,000 people gathered for the first day of the music festival, with the number increasing to 60,000 on the final day. Additionally, media coverage of the Monterey Pop Festival facilitated the Summer of Love, as large numbers of hippies traveled to California to hear favorite bands such as The Who, the Grateful Dead, the Animals, Jefferson Airplane, Quicksilver Messenger Service, The Jimi Hendrix Experience, Otis Redding, The Byrds, and Big Brother and the Holding Company featuring Janis Joplin. Musician John Phillips of the band The Mamas & the Papas wrote the song "San Francisco (Be Sure to Wear Flowers in Your Hair)" for his friend Scott McKenzie. It served both to promote the Monterey Pop Festival that Phillips was helping to organize and to popularize the flower children of San Francisco. Released on May 13, 1967, the song was an instant success. By the week ending July 1, 1967, it reached number four on the "Billboard" Hot 100 in the United States, where it remained for four consecutive weeks. Meanwhile, the song charted at number one in the United Kingdom and much of Europe. The single is purported to have sold more than 7 million copies worldwide.

In Manhattan, near the Greenwich Village neighborhood, during a concert in Tompkins Square Park on Memorial Day of 1967, some police officers asked for the music's volume to be reduced. In response, some people in the crowd threw various objects, and 38 arrests ensued. A debate about the "threat of the hippie" followed between Mayor John Lindsay and Police Commissioner Howard Leary. After this event, Allan Katzman, the editor of the "East Village Other", predicted that 50,000 hippies would enter the area for the summer. Double that number, as many as 100,000 young people from around the world, flocked to San Francisco's Haight-Ashbury district, as well as to nearby Berkeley and to other San Francisco Bay Area cities, to join in a popularized version of hippieism. A Free Clinic was established for free medical treatment, and a Free Store gave away basic necessities without charge to anyone who needed them.

The Summer of Love attracted a wide range of people of various ages: teenagers and college students drawn by their peers and the allure of joining an alleged cultural utopia; middle-class vacationers; and even partying military personnel from bases within driving distance. The Haight-Ashbury could not accommodate this influx of people, and the neighborhood scene quickly deteriorated, with overcrowding, homelessness, hunger, drug problems, and crime afflicting the neighborhood. Psychedelic drug use became common.
Grateful Dead guitarist Bob Weir commented: "Haight Ashbury was a ghetto of bohemians who wanted to do anything—and we did, but I don't think it has happened since. Yes, there was LSD. But Haight Ashbury was not about drugs. It was about exploration, finding new ways of expression, being aware of one's existence."

After losing his untenured position as an instructor on the psychology faculty at Harvard University, Timothy Leary became a major advocate for the recreational use of psychedelic drugs. After taking psilocybin, a drug extracted from certain mushrooms that causes effects similar to those of LSD, Leary endorsed the use of all psychedelics for personal development. He often invited friends, as well as an occasional graduate student, to consume such drugs along with him and his colleague Richard Alpert. On the West Coast, author Ken Kesey, a prior volunteer for a CIA-sponsored LSD experiment, also advocated the use of the drug; soon after participating in the experiment, he was inspired to write the bestselling novel "One Flew Over the Cuckoo's Nest". Subsequently, after buying an old school bus, painting it with psychedelic graffiti, and attracting a group of similarly minded individuals he dubbed the Merry Pranksters, Kesey and his group traveled across the country, often hosting "acid tests", where they would fill a large container with a diluted, low-dose form of the drug and give out diplomas to those who passed their test. Along with LSD, cannabis was also much used during this period. As new laws were subsequently enacted to control the use of both drugs, however, crime increased among users. Users often held gatherings to oppose the laws, including the Human Be-In referenced above as well as various "smoke-ins" during July and August; however, their efforts at repeal were unsuccessful.

By the end of summer, many participants had left the scene to join the back-to-the-land movement of the late '60s, to resume school studies, or simply to "get a job". Those remaining in the Haight wanted to commemorate the conclusion of the event: a mock funeral entitled "The Death of the Hippie" was staged on October 6, 1967, and organizer Mary Kasper explained the intended message. In New York, the rock musical drama "Hair", which told the story of the hippie counterculture and sexual revolution of the 1960s, began Off-Broadway on October 17, 1967.

The "Second Summer of Love" (a term which generally refers to the summers of both 1988 and 1989) was a renaissance of acid house music and rave parties in Britain. The culture supported MDMA use and some LSD use. The art had a generally psychedelic aesthetic reminiscent of the 1960s.

During the summer of 2007, San Francisco celebrated the 40th anniversary of the Summer of Love by holding numerous events around the region, culminating on September 2, 2007, when over 150,000 people attended the 40th-anniversary Summer of Love concert, held in Golden Gate Park in Speedway Meadows. It was produced by 2b1 Multimedia and the Council of Light. In 2016, 2b1 Multimedia and the Council of Light once again began planning for the 50th anniversary of the Summer of Love in Golden Gate Park in San Francisco. By the beginning of 2017, the council had gathered about 25 poster artists, about 10 of whom submitted their finished art, but it was never printed. The council was also contacted by many bands and musicians who wanted to be part of this historic event, all of whom were waiting for the date to be determined before a final commitment.
New rules enforced by the San Francisco Parks and Recreation Department (PRD) prohibited the council from holding a free event of the proposed size. There were many events planned for San Francisco in 2017, many of which were 50th-anniversary-themed; however, there was no free concert. The PRD later hosted an event originally called "Summer Solstice Party," which was renamed "50th Anniversary of the Summer of Love" two weeks before it commenced. The event drew fewer than 20,000 attendees from the local Bay Area. In frustration, producer Boots Hughston put the proposal for what was by then to be a 52nd-anniversary free concert into the form of an initiative intended for the November 6, 2018, ballot. The issue did not make the ballot; however, a more generic Proposition E provides for directing hotel tax fees to a $32 million budget for "arts and cultural organizations and projects in the city."

During the summer of 2017, San Francisco celebrated the 50th anniversary of the Summer of Love by holding numerous events and art exhibitions. In Liverpool, the city staged a 50 Summers of Love festival based on the 50th anniversary of the June 1, 1967, release of The Beatles' album "Sgt Pepper's Lonely Hearts Club Band".
https://en.wikipedia.org/wiki?curid=29204
Skyhooks (band) Skyhooks were an Australian rock band formed in Melbourne in March 1973 by mainstays Greg Macainsh on bass guitar and backing vocals, and Imants "Freddie" Strauks on drums. They were soon joined by Bob "Bongo" Starkie on guitar and backing vocals, and Red Symons on guitar, vocals and keyboards; Graeme "Shirley" Strachan became lead vocalist in March 1974. Described as a glam rock band because of their flamboyant costumes and make-up, Skyhooks addressed teenage issues including buying drugs in "Carlton (Lygon Street Limbo)", suburban sex in "Balwyn Calling", the gay scene in "Toorak Cowboy" and loss of girlfriends in "Somewhere in Sydney", namechecking Australian locales along the way. According to music historian Ian McFarlane, "[Skyhooks] made an enormous impact on Australian social life".

Skyhooks had #1 albums on the Australian Kent Music Report with their 1974 debut, "Living in the 70's" (for 16 weeks), and its 1975 follow-up, "Ego Is Not a Dirty Word" (11 weeks). Their #1 singles were "Horror Movie" (January 1975) and "Jukebox in Siberia" (November 1990). Symons left Skyhooks in 1977 and became a radio and television personality. Strachan, who had released solo recordings since 1976, finally left the band in 1978 and also became a radio and television presenter. With altered line-ups, Skyhooks continued until they disbanded on 8 June 1980; they briefly re-formed in 1983, 1984, 1990 and 1994. In 1992, Skyhooks were inducted into the Australian Recording Industry Association (ARIA) Hall of Fame. Lead singer Strachan died on 29 August 2001, aged 49, in a helicopter crash while piloting solo. Their original lead singer, Steve Hill, died in October 2005, aged 52, of liver cancer. In 2011, the Skyhooks album "Living in the 70's" was added to the National Film and Sound Archive of Australia's Sounds of Australia registry.

Greg Macainsh and Imants "Freddie" Strauks both attended Norwood High School in the Melbourne suburb of Ringwood and formed Spare Parts in 1966, with Macainsh on bass guitar and Strauks on lead vocals. Spare Parts was followed by Sound Pump in 1968; Macainsh then formed Reuben Tice in Eltham, with Tony Williams on vocals. By 1970 Macainsh was back with Strauks, now on drums, first in Claptrap and by 1971 in Frame, which had Graeme "Shirley" Strachan as lead vocalist. Frame also included Pat O'Brien and Cynthio Ooms on guitars. Strachan had befriended Strauks earlier—he sang with Strauks on the way to parties—and was asked to join Claptrap, which was renamed Frame. Strachan stayed in Frame for about 18 months but left for a career in carpentry and a hobby of surfing at Phillip Island.

Skyhooks formed in March 1973 in Melbourne with Steve Hill on vocals (ex-Lillee), Peter Ingliss on guitar (The Captain Matchbox Whoopee Band), Macainsh on bass guitar and backing vocals, Peter Starkie on guitar and backing vocals (Lipp & the Double Dekker Brothers) and Strauks on drums and backing vocals. The name Skyhooks came from a fictional organisation in the 1956 film "Earth vs. the Flying Saucers". Their first gig was on 16 April 1973 at St Jude's Church hall in Carlton. At a later gig, former Daddy Cool frontman Ross Wilson was playing in his group Mighty Kong with Skyhooks as a support act. Wilson was impressed with the fledgling band and signed Macainsh to a publishing deal. In August, Bob "Bongo" Starkie (Mary Jane Union) replaced his older brother Peter (later in Jo Jo Zep & The Falcons) on guitar, and Ingliss was replaced by Red Symons (Scumbag) on guitar, vocals and keyboards.
The two new members added a touch of theatre and humour to the band's visual presence. By late 1973, Wilson had convinced Michael Gudinski to sign the band to his booking agency, Australian Entertainment Exchange, and eventually to Gudinski's label, Mushroom Records. Skyhooks gained a cult following around Melbourne, including university intelligentsia and pub rockers, but a poorly received show at the January 1974 Sunbury Pop Festival saw the group booed off stage. Two tracks from their live set, "Hey What's the Matter?" and "Love on the Radio", appeared on Mushroom's "Highlights of Sunbury '74". After seeing his own performance on TV, Hill phoned Macainsh and resigned. To replace Hill, in March, Macainsh recruited occasional singer, surfer and carpenter Strachan from his Frame era. Strachan had been dubbed "Shirley" by fellow surfers due to his curly blond hair "a la" Shirley Temple. For Skyhooks, the replacement of Hill by Strachan was a pivotal moment, as Strachan had remarkable vocal skills, and a magnetic stage and screen presence. Alongside Macainsh's lyrics, another facet of the group was the twin-guitar sound of Starkie and Symons.

Adopting elements of glam rock in their presentation, and lyrics that presented frank depictions of the social life of young Australia in the 1970s, the band shocked conservative middle Australia with their outrageous (for the time) costumes, make-up, lyrics, and on-stage activities. A 1.2-metre (4 ft) high mushroom-shaped phallus was confiscated by Adelaide police after a performance. Six of the ten tracks on their debut album, "Living in the 70's", were banned by the Federation of Australian Commercial Broadcasters for their sex and drug references: "Toorak Cowboy", "Whatever Happened to the Revolution?", "You Just Like Me Cos I'm Good in Bed", "Hey What's the Matter", "Motorcycle Bitch" and "Smut". Much of the group's success derived from its distinctive repertoire, mostly penned by bass guitarist Macainsh, with an occasional additional song from Symons—who wrote "Smut" and performed its lead vocals. Although Skyhooks were not the first Australian rock band to write songs in a local setting—rather than ditties about love or songs about New York or other foreign lands—they were the first to become commercially successful doing so. Skyhooks songs addressed teenage issues including buying drugs ("Carlton (Lygon Street Limbo)"), suburban sex ("Balwyn Calling"), the gay scene ("Toorak Cowboy") and loss of girlfriends ("Somewhere in Sydney") by namechecking Australian locales. Radio personality Billy Pinnell described their lyrics as important in tackling Australia's cultural cringe.

The first Skyhooks single, "Living in the 70's", was released in August, ahead of the album, and peaked at #7 on the Australian Kent Music Report Singles Chart. "Living in the 70's" initially charted only in Melbourne upon its release on 28 October 1974. It went on to spend 16 weeks at the top of the Australian Kent Music Report Albums Chart from February to June 1975. The album was produced by Wilson and became the best-selling Australian album to that time, with 226,000 copies sold in Australia. Skyhooks returned to the Sunbury Pop Festival in January 1975. They were declared the best performers by "Rolling Stone Australia" and "The Age" reviewers, and Gudinski now took over their management. The second single, "Horror Movie", reached #1 for two weeks in March.
The band's success was credited by Gudinski with saving his struggling Mushroom Records, and it enabled the label to develop into the most successful Australian label of its time. The success of the album was also due to support by a new pop music television show, "Countdown", on national public broadcaster ABC Television, rather than promotion by commercial radio. "Horror Movie" was the first song played on the first colour transmission of "Countdown" in early 1975. Despite the radio ban, the ABC's newly established 24-hour rock music station Double Jay chose the album's fifth track, the provocatively titled "You Just Like Me Cos I'm Good in Bed", as its first ever broadcast on 19 January.

Skyhooks' 1975 national tour promoting "Living in the 70's" finished at Melbourne's Festival Hall with their ANZAC Day (25 April) performance. They were supported by comedy singer Bob Hudson, heavy rockers AC/DC and New Zealand band Split Enz. Strachan then took two weeks off and considered leaving the band; however, he returned—newly married—and they continued recording the follow-up album, "Ego Is Not a Dirty Word". Initially, they were locked out of the recording studio until their manager, Gudinski, sent down the money still owed for recording the first album. "Ego Is Not a Dirty Word" spent 11 weeks at the top of the Australian album chart from 21 July 1975, and sold 210,000 copies. It was again produced by Wilson, with the single "Ego Is Not a Dirty Word" issued in March ahead of the album, peaking at #1. The next single, "All My Friends Are Getting Married", reached #5 in July, and was followed by "Million Dollar Riff" at #2 in October. Macainsh's then girlfriend, Jenny Brown, described the band in her 1975 book, "Skyhooks: Million Dollar Riff". A live version of Chuck Berry's "Let It Rock", from a December performance, was released as a single in March 1976 and reached #13.

With Australian commercial success achieved, Skyhooks turned to the US market. Gudinski announced a $1.5 million deal with Mercury Records/Phonogram Records, which released a modified international version of "Ego Is Not a Dirty Word" in which "Horror Movie" and "You Just Like Me Cos I'm Good in Bed", from their first Australian album, replaced two tracks. A US tour followed in March–April 1976, but critics described them as imitators of Kiss due to the similarity of Symons' make-up and stage act to that of Gene Simmons, and despite limited success in Boston, Massachusetts, and Jacksonville, Florida, they failed to make inroads into the general US market. After completing their 1976 US tour, the band remained in San Francisco and recorded their third album, "Straight in a Gay Gay World", with Wilson producing; it was retitled "Living in the 70's" for US release, with the song "Living in the 70's" replacing "The Girl Says She's Bored". The album appeared in August and peaked at #3 on the Australian album charts. In July, upon their return to Australia, they launched The Brats Are Back Tour with a single, "This Is My City", which reached the Top 20. "Blue Jeans" followed in August and peaked at #13 on the singles chart. By October, Strachan provided his debut solo single, "Every Little Bit Hurts" (a cover of Brenda Holloway's 1964 hit), which reached #3. In February 1977, Symons left the band and was replaced on guitar by Bob Spencer from the band Finch. With Symons' departure the band dropped the glam rock look and adopted a more straightforward hard rock approach.
During 1977 Skyhooks toured nationally three times, while their first single with Spencer, "Party to End All Parties", entered the top 30 in May. Strachan released his second solo single, a cover of Smokey Robinson's "Tracks of My Tears", which reached the top 20 in July. Meanwhile, Mushroom released a singles anthology, "The Skyhooks Tapes", which entered the top 50 in September. The band's mass popularity had declined, although they still kept their live performances exciting and irreverent. In January 1978 they toured New Zealand and performed at the Nambassa festival. In February their next single, "Women in Uniform", was issued and peaked at #8, while its album, "Guilty Until Proven Insane", followed in March and reached #6. The album was produced by Americans Eddie Leonetti and Jack Douglas. The second single from the album, "Megalomania", issued in May, did not enter the top 40. Strachan told band members he intended to leave—although it was not officially announced for six months—and he continued regular shows until his final gig with Skyhooks on 29 July. Strachan released further solo singles, "Mr Summer" in October and "Nothing but the Best" in January 1979, but neither charted in the top 50. Strachan's replacement in Skyhooks, on lead vocals, was Tony Williams (ex-Reuben Tice with Macainsh). Williams' first single for Skyhooks, "Over the Border", a political song about the state of the Queensland Police Force at the time, reached the top 40 in April, and their fifth studio album, "Hot for the Orient", appeared in May 1980 but failed to enter the top 50. From 1975 to 1977, Skyhooks had been—alongside Sherbet—the most commercially successful group in Australia, but over the following years the band rapidly faded from the public eye with the departure of key members, and in 1980 it announced its break-up in controversial circumstances. Ian "Molly" Meldrum, usually a supporter of Skyhooks, savaged "Hot for the Orient" on his "Humdrum" segment of "Countdown", and viewers demanded that the band appear on a following show to defend it. Poor reception of the album by both the public and reviewers led the band to take out a page-sized ad in the local music press declaring "Why Don't You All Get Fu**ed" (the title of one of their songs), and they played their last performance on 8 June, not in their hometown of Melbourne, but in the mining town of Kalgoorlie in Western Australia. In December 1982, Mushroom released a medley of Skyhooks songs as "Hooked on Hooks", which peaked at #21. Demands for the "classic" line-up of the band—Macainsh, Bob Starkie, Strachan, Strauks and Symons—to reform were successful, and on 23 April 1983 they started the Living in the 80's Tour. Support acts for the first concert included The Church, Mental as Anything, The Party Boys, The Sunnyboys, and Midnight Oil—who acknowledged that the "Hooks were the only Australian band they would let top the bill above them". This tour was released on LP as "Live in the 80's". A one-off reunion concert took place in October 1984, and in 1990 the band finally recorded new material, including "Jukebox in Siberia", released in September, which peaked at the top of the ARIA Singles Charts for two weeks. In November, "The Latest and Greatest", a compilation album, was released, which peaked at #4 on the ARIA Albums Charts. The tracks were taken from Skyhooks' first four studio albums along with two recent singles, "Jukebox in Siberia" and the uncharted "Tall Timber".
In 1992, Skyhooks were inducted into the Australian Recording Industry Association (ARIA) Hall of Fame, while their manager, Gudinski, and record label, Mushroom Records, received a 'Special Achievement Award'. Wilson, producer of their first three albums, had been inducted into the Hall of Fame in 1989 as an individual and again as a member of Daddy Cool in 2006. The final release of new Skyhooks material came in June 1999, when a twin-CD, "Skyhooks: The Collection", was issued. Disc one contained a greatest hits package very similar to "The Latest and Greatest", with additional tracks. Disc two is referred to by fans as "The Lost Album", with previously unreleased songs from their 1990 and 1994 recording sessions. Strachan and Symons each went on to successful careers in Australian media, including radio and television. Symons works on ABC radio and writes humorous newspaper columns. Starkie played locally with different bands including Ol' Skydaddys and Ram Band. Strauks was drummer for Melbourne rock band The Sports, Jo Jo Zep & The Falcons, folk band The Bushwackers and the Ol' Skydaddys. Macainsh played with John Farnham on his Whispering Jack Tour and with Dave Warner's from the Suburbs. In 1988 he put together and managed a very successful AC/DC tribute band called Back in Black, who went on to support Skyhooks on their comeback tour. He was a board member of the Australasian Performing Right Association (APRA) (1997–2000) and the Phonographic Performance Company of Australia (PPCA) (2001–2006), and is an intellectual property lawyer. Strachan was killed in an air crash on 29 August 2001, when the helicopter he was learning to fly solo crashed into Mount Archer near Kilcoy, northwest of Brisbane. A memorial concert was held on 11 September 2001 at the Palais Theatre, where tributes were paid and some remaining members—Strauks, Macainsh, Starkie, Symons and Spencer—performed with guest vocalists Daryl Braithwaite and Wilson. It is the only time Symons and his replacement, Spencer, have performed together on stage. Braithwaite performed "All My Friends Are Getting Married" with the band, whilst Wilson sang the rare Skyhooks track "Warm Wind in the City". The 30th anniversary of the release of the "Living in the 70's" album was commemorated in 2004, with different incarnations of the band performing. Absent were Strachan, Hill and Ingliss. Vocals were by Wilson, Williams and Bob Starkie. The original line-up of Skyhooks including Hill reformed in 2005 at the Annandale Hotel in Sydney for a one-off gig, a benefit for Hill, who had been diagnosed with liver cancer. The line-up of Ingliss, Peter Starkie, Strauks and Macainsh joined him onstage; Hill died six weeks later. In November 2009, the "Skyhooks Tour Archive", displayed on the band's website, listed 925 live shows. Macainsh, Starkie and Strauks appeared as Skyhooks at the 2009 Helpmann Awards in Sydney, performing "Women in Uniform" with Australian rock icon Jimmy Barnes providing vocals. Red Symons was also slated to perform with the band, but withdrew a few days before the show and was replaced by Diesel. On 7 April 2010, 3AW reported that Skyhooks were to appear on the first episode of the new series of "Hey Hey It's Saturday" with Leo Sayer on vocals. Sayer later appeared on air and denied the claims.
https://en.wikipedia.org/wiki?curid=29207
Square root In mathematics, a square root of a number x is a number y such that y² = x; in other words, a number y whose "square" (the result of multiplying the number by itself, or y ⋅ y) is x. For example, 4 and −4 are square roots of 16 because 4² = (−4)² = 16. Every nonnegative real number x has a unique nonnegative square root, called the "principal square root", which is denoted by √x, where the symbol √ is called the "radical sign" or "radix". For example, the principal square root of 9 is 3, which is denoted by √9 = 3, because 3² = 3 ⋅ 3 = 9 and 3 is nonnegative. The term (or number) whose square root is being considered is known as the "radicand". The radicand is the number or expression underneath the radical sign, in this example 9. Every positive number x has two square roots: √x, which is positive, and −√x, which is negative. Together, these two roots are denoted as ±√x (see ± shorthand). Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root. For positive x, the principal square root can also be written in exponent notation, as x^(1/2). Square roots of negative numbers can be discussed within the framework of complex numbers. More generally, square roots can be considered in any context in which a notion of "squaring" of some mathematical objects is defined (including algebras of matrices, endomorphism rings, etc.) The Yale Babylonian Collection YBC 7289 clay tablet was created between 1800 BC and 1600 BC, showing √2 and √2/2 = 1/√2 as the base-60 numbers 1;24,51,10 and 0;42,25,35 on a square crossed by two diagonals. (1;24,51,10) in base 60 corresponds to 1.41421296, a value correct to five decimal places (1.41421356...). The Rhind Mathematical Papyrus is a copy from 1650 BC of an earlier Berlin Papyrus and other texts (possibly the Kahun Papyrus) that shows how the Egyptians extracted square roots by an inverse proportion method. In Ancient India, the knowledge of theoretical and applied aspects of square and square root was at least as old as the "Sulba Sutras", dated around 800–500 BC (possibly much earlier). A method for finding very good approximations to the square roots of 2 and 3 is given in the "Baudhayana Sulba Sutra". Aryabhata, in the "Aryabhatiya" (section 2.4), has given a method for finding the square root of numbers having many digits. It was known to the ancient Greeks that square roots of positive whole numbers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio of two integers (that is, they cannot be written exactly as "m/n", where "m" and "n" are integers). This is the theorem "Euclid X, 9", almost certainly due to Theaetetus, dating back to circa 380 BC. The particular case of √2 is assumed to date back earlier to the Pythagoreans and is traditionally attributed to Hippasus; √2 is exactly the length of the diagonal of a square with side length 1. In the Chinese mathematical work "Writings on Reckoning", written between 202 BC and 186 BC during the early Han Dynasty, the square root is approximated by using an "excess and deficiency" method, which says to "...combine the excess and deficiency as the divisor; (taking) the deficiency numerator multiplied by the excess denominator and the excess numerator times the deficiency denominator, combine them as the dividend." A symbol for square roots, written as an elaborate R, was invented by Regiomontanus (1436–1476). An R was also used for Radix to indicate square roots in Gerolamo Cardano's "Ars Magna".
According to historian of mathematics D.E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo in 1546. According to Jeffrey A. Oaks, Arabs used the letter "jīm/ĝīm", the first letter of the Arabic word for "root" (variously transliterated as "jaḏr", "jiḏr", "ǧaḏr" or "ǧiḏr"), placed in its initial form over a number to indicate its square root. The letter "jīm" resembles the present square root shape. Its usage goes as far as the end of the twelfth century in the works of the Moroccan mathematician Ibn al-Yasamin. The symbol '√' for the square root was first used in print in 1525 in Christoph Rudolff's "Coss". The principal square root function "f"("x") = √x (usually just referred to as the "square root function") is a function that maps the set of nonnegative real numbers onto itself. In geometrical terms, the square root function maps the area of a square to its side length. The square root of "x" is rational if and only if "x" is a rational number that can be represented as a ratio of two perfect squares. (See square root of 2 for proofs that this is an irrational number, and quadratic irrational for a proof for all non-square natural numbers.) The square root function maps rational numbers into algebraic numbers (a superset of the rational numbers). For all real numbers "x", √(x²) = |x|. For all nonnegative real numbers "x" and "y", √(xy) = √x ⋅ √y and √x = x^(1/2). The square root function is continuous for all nonnegative "x" and differentiable for all positive "x". If "f" denotes the square root function, its derivative is given by f′(x) = 1/(2√x). The Taylor series of √(1 + x) about "x" = 0 converges for |x| ≤ 1 and is given by √(1 + x) = 1 + x/2 − x²/8 + x³/16 − 5x⁴/128 + ⋯. The square root of a nonnegative number is used in the definition of the Euclidean norm (and distance), as well as in generalizations such as Hilbert spaces. It defines an important concept of standard deviation used in probability theory and statistics. It has a major use in the formula for roots of a quadratic equation; quadratic fields and rings of quadratic integers, which are based on square roots, are important in algebra and have uses in geometry. Square roots frequently appear in mathematical formulas elsewhere, as well as in many physical laws. A positive number has two square roots, one positive, and one negative, which are opposite to each other. So, when talking of "the" square root of a positive integer, it is the positive square root that is meant. The square roots of an integer are algebraic integers and, more specifically, quadratic integers. The square root of a positive integer is the product of the roots of its prime factors, because the square root of a product is the product of the square roots of the factors. Since √(p²) = p, only roots of those primes having an odd power in the factorization are necessary; the square root of any even-power factor p²ᵏ is simply the integer pᵏ. The square roots of the perfect squares (0, 1, 4, 9, 16, etc.) are integers. In all other cases, the square roots of positive integers are irrational numbers, and therefore their representations in any standard positional notation system, including decimal, are non-repeating. Decimal approximations of the square roots of the first few natural numbers are: √2 ≈ 1.414, √3 ≈ 1.732, √5 ≈ 2.236, √6 ≈ 2.449, √7 ≈ 2.646, √8 ≈ 2.828 and √10 ≈ 3.162.
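To make the factor-extraction rule above concrete, here is a minimal Python sketch (our illustration, not part of the original article; the helper name simplify_sqrt is hypothetical) that pulls every even prime power out of the radicand, returning √n in the simplified form a√b with b squarefree:

```python
def simplify_sqrt(n: int) -> tuple[int, int]:
    """Return (a, b) such that sqrt(n) == a * sqrt(b) with b squarefree."""
    if n < 0:
        raise ValueError("radicand must be nonnegative")
    a, b, p = 1, n, 2
    while p * p <= b:
        # Each factor p**(2k) of the radicand contributes p**k outside the radical.
        while b % (p * p) == 0:
            b //= p * p
            a *= p
        p += 1
    return a, b

# Example: sqrt(48) = 4 * sqrt(3), since 48 = 2**4 * 3.
print(simplify_sqrt(48))  # (4, 3)
print(simplify_sqrt(16))  # (4, 1): perfect squares leave b == 1
```

Trial division over all integers p suffices here: once the squares of the smaller primes have been removed, the square of a composite p can no longer divide the remaining radicand.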
The square roots of small integers are used in both the SHA-1 and SHA-2 hash function designs to provide nothing up my sleeve numbers. One of the most intriguing results from the study of irrational numbers as continued fractions was obtained by Joseph Louis Lagrange around 1780. Lagrange found that the representation of the square root of any non-square positive integer as a continued fraction is periodic. That is, a certain pattern of partial denominators repeats indefinitely in the continued fraction. In a sense these square roots are the very simplest irrational numbers, because they can be represented with a simple repeating pattern of integers. The square bracket notation [3; 3, 6, 3, 6, ...] is a short form for a continued fraction. Written in the more suggestive algebraic form, the simple continued fraction for the square root of 11 looks like this: √11 = 3 + 1/(3 + 1/(6 + 1/(3 + 1/(6 + ⋯)))), where the two-digit pattern {3, 6} repeats over and over again in the partial denominators. Since 11 = 3² + 2, the above is also identical to the following generalized continued fraction: √11 = 3 + 2/(6 + 2/(6 + 2/(6 + ⋯))). Square roots of positive numbers are not in general rational numbers, and so cannot be written as a terminating or recurring decimal expression. Therefore in general any attempt to compute a square root expressed in decimal form can only yield an approximation, though a sequence of increasingly accurate approximations can be obtained. Most pocket calculators have a square root key. Computer spreadsheets and other software are also frequently used to calculate square roots. Pocket calculators typically implement efficient routines, such as Newton's method (frequently with an initial guess of 1), to compute the square root of a positive real number. When computing square roots with logarithm tables or slide rules, one can exploit the identities √a = e^((ln a)/2) = 10^((log₁₀ a)/2), where ln and log₁₀ are the natural and base-10 logarithms. By trial-and-error, one can square an estimate for √a and raise or lower the estimate until it agrees to sufficient accuracy. For this technique it is prudent to use the identity (x + c)² = x² + 2xc + c², as it allows one to adjust the estimate "x" by some amount "c" and measure the square of the adjustment in terms of the original estimate and its square. Furthermore, (x + c)² ≈ x² + 2xc when "c" is close to 0, because the tangent line to the graph of x² + 2xc + c² at c = 0, as a function of "c" alone, is y = 2xc + x². Thus, small adjustments to "x" can be planned out by setting 2xc to a − x², or c = (a − x²)/(2x). The most common iterative method of square root calculation by hand is known as the "Babylonian method" or "Heron's method" after the first-century Greek philosopher Heron of Alexandria, who first described it. The method uses the same iterative scheme as the Newton–Raphson method yields when applied to the function y = f(x) = x² − a, using the fact that its slope at any point is dy/dx = f′(x) = 2x, but predates it by many centuries. The algorithm is to repeat a simple calculation that results in a number closer to the actual square root each time it is repeated with its result as the new input. The motivation is that if "x" is an overestimate to the square root of a nonnegative real number "a" then "a"/"x" will be an underestimate and so the average of these two numbers is a better approximation than either of them.
However, the inequality of arithmetic and geometric means shows this average is always an overestimate of the square root (as noted below), and so it can serve as a new overestimate with which to repeat the process, which converges as a consequence of the successive overestimates and underestimates being closer to each other after each iteration. To find √a: start with an arbitrary positive start value "x", replace "x" by the average ("x" + "a"/"x")/2 of "x" and "a"/"x", and repeat this averaging step until the desired accuracy is reached. That is, if an arbitrary guess for √a is x₀, and x_(n+1) = (x_n + a/x_n)/2, then each x_n is an approximation of √a which is better for large "n" than for small "n". If "a" is positive, the convergence is quadratic, which means that in approaching the limit, the number of correct digits roughly doubles in each next iteration. If a = 0, the convergence is only linear. Using the identity √a = 2⁻ⁿ √(4ⁿ a), the computation of the square root of a positive number can be reduced to that of a number in the range [1, 4). This simplifies finding a start value for the iterative method that is close to the square root, for which a polynomial or piecewise-linear approximation can be used. The time complexity for computing a square root with "n" digits of precision is equivalent to that of multiplying two "n"-digit numbers. Another useful method for calculating the square root is the shifting nth root algorithm, applied for n = 2. The name of the square root function varies from programming language to programming language, with sqrt (often pronounced "squirt") being common, used in C, C++, and derived languages like JavaScript, PHP, and Python. The square of any positive or negative number is positive, and the square of 0 is 0. Therefore, no negative number can have a real square root. However, it is possible to work with a more inclusive set of numbers, called the complex numbers, that does contain solutions to the square root of a negative number. This is done by introducing a new number, denoted by "i" (sometimes "j", especially in the context of electricity, where "i" traditionally represents electric current) and called the imaginary unit, which is "defined" such that i² = −1. Using this notation, we can think of "i" as the square root of −1, but we also have (−i)² = i² = −1 and so −"i" is also a square root of −1. By convention, the principal square root of −1 is "i", or more generally, if "x" is any nonnegative number, then the principal square root of −"x" is √(−x) = i√x. The right side (as well as its negative) is indeed a square root of −"x", since (i√x)² = i² (√x)² = (−1) ⋅ x = −x. For every non-zero complex number "z" there exist precisely two numbers "w" such that w² = z: the principal square root of "z" (defined below), and its negative. To find a definition for the square root that allows us to consistently choose a single value, called the principal value, we start by observing that any complex number "x" + "iy" can be viewed as a point in the plane, ("x", "y"), expressed using Cartesian coordinates. The same point may be reinterpreted using polar coordinates as the pair ("r", "φ"), where "r" ≥ 0 is the distance of the point from the origin, and "φ" is the angle that the line from the origin to the point makes with the positive real ("x") axis. In complex analysis, the location of this point is conventionally written re^(iφ). If z = re^(iφ) with −π < φ ≤ π, then we define the principal square root of "z" as follows: √z = √r e^(iφ/2). The principal square root function is thus defined using the nonpositive real axis as a branch cut. The principal square root function is holomorphic everywhere except on the set of non-positive real numbers (on strictly negative reals it isn't even continuous). The above Taylor series for √(1 + x) remains valid for complex numbers "x" with |x| < 1.
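The Babylonian (Heron's) iteration described earlier is short enough to state directly in code. The following is a minimal Python sketch under the assumptions above (the function name heron_sqrt is ours, not from the article):

```python
def heron_sqrt(a: float, tolerance: float = 1e-12) -> float:
    """Approximate sqrt(a) for a >= 0 by the Babylonian (Heron's) method."""
    if a < 0:
        raise ValueError("a must be nonnegative")
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0  # any positive start value works
    while abs(x * x - a) > tolerance * a:
        # If x overestimates sqrt(a), then a/x underestimates it,
        # and their average is a strictly better overestimate.
        x = (x + a / x) / 2
    return x

print(heron_sqrt(2))  # 1.4142135623730951
```

Because the convergence is quadratic for positive a, a handful of iterations already gives full double precision.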
The above can also be expressed in terms of trigonometric functions: √z = √r (cos(φ/2) + i sin(φ/2)). When the number is expressed using Cartesian coordinates, the following formula can be used for the principal square root: √(x + iy) = √((|z| + x)/2) ± i √((|z| − x)/2), where |z| = √(x² + y²) and the sign of the imaginary part of the root is taken to be the same as the sign of the imaginary part of the original number, or positive when zero. The real part of the principal value is always nonnegative. For example, the principal square roots of ±i are given by: √i = (1 + i)/√2 and √(−i) = (1 − i)/√2. In the following, the complex "z" and "w" may be expressed as z = |z| e^(iθ_z) and w = |w| e^(iθ_w), where −π < θ_z ≤ π and −π < θ_w ≤ π. Because of the discontinuous nature of the square root function in the complex plane, laws such as √(zw) = √z ⋅ √w and √(1/z) = 1/√z are not true in general. A similar problem appears with other complex functions with branch cuts, e.g., the complex logarithm and the relations log z + log w = log(zw) or (z^w)^x = z^(wx), which are not true in general. Wrongly assuming one of these laws underlies several faulty "proofs", for instance the following one showing that −1 = 1: −1 = i ⋅ i = √(−1) ⋅ √(−1) = √((−1) ⋅ (−1)) = √1 = 1. The third equality cannot be justified (see invalid proof). It can be made to hold by changing the meaning of √ so that this no longer represents the principal square root (see above) but selects a branch for the square root that contains (√(−1)) ⋅ (√(−1)). The left-hand side becomes either √(−1) ⋅ √(−1) = i ⋅ i = −1 if the branch includes +"i" or √(−1) ⋅ √(−1) = (−i) ⋅ (−i) = −1 if the branch includes −"i", while the right-hand side becomes √((−1) ⋅ (−1)) = √1 = −1, where the last equality, √1 = −1, is a consequence of the choice of branch in the redefinition of √. If "A" is a positive-definite matrix or operator, then there exists precisely one positive definite matrix or operator "B" with B² = A; we then define A^(1/2) = B. In general matrices may have multiple square roots or even an infinitude of them. For example, the 2 × 2 identity matrix has an infinity of square roots, though only one of them is positive definite. Each element of an integral domain has no more than 2 square roots. The difference of two squares identity u² − v² = (u − v)(u + v) is proved using the commutativity of multiplication. If "u" and "v" are square roots of the same element, then u² − v² = 0. Because there are no zero divisors this implies u = v or u + v = 0, where the latter means that the two roots are additive inverses of each other. In other words, if a square root "u" of an element "a" exists, then the only square roots of "a" are "u" and −"u". The only square root of 0 in an integral domain is 0 itself. In a field of characteristic 2, an element either has one square root or does not have any at all, because each element is its own additive inverse, so that −u = u. If the field is finite of characteristic 2 then every element has a unique square root. In a field of any other characteristic, any non-zero element either has two square roots, as explained above, or does not have any. Given an odd prime number "p", let q = pᵉ for some positive integer "e". A non-zero element of the field F_q with "q" elements is a quadratic residue if it has a square root in F_q. Otherwise, it is a quadratic non-residue. There are (q − 1)/2 quadratic residues and (q − 1)/2 quadratic non-residues; zero is not counted in either class. The quadratic residues form a group under multiplication. The properties of quadratic residues are widely used in number theory. Unlike in an integral domain, a square root in an arbitrary (unital) ring need not be unique up to sign. For example, in the ring Z/8Z of integers modulo 8 (which is commutative, but has zero divisors), the element 1 has four distinct square roots: ±1 and ±3. Another example is provided by the ring of quaternions H, which has no zero divisors, but is not commutative. Here, the element −1 has infinitely many square roots, including ±i, ±j, and ±k.
In fact, the set of square roots of −1 in the quaternions is exactly {ai + bj + ck : a² + b² + c² = 1}, a sphere of purely imaginary unit quaternions. A square root of 0 is either 0 or a zero divisor. Thus in rings where zero divisors do not exist, it is uniquely 0. However, rings with zero divisors may have multiple square roots of 0. For example, in Z/n²Z any multiple of "n" is a square root of 0. The square root of a positive number is usually defined as the side length of a square with the area equal to the given number. But the square shape is not necessary for it: if one of two similar planar Euclidean objects has the area "a" times greater than another, then the ratio of their linear sizes is √a. A square root can be constructed with a compass and straightedge. In his Elements, Euclid (fl. 300 BC) gave the construction of the geometric mean of two quantities in two different places: Proposition II.14 and Proposition VI.13. Since the geometric mean of "a" and "b" is √(ab), one can construct √a simply by taking b = 1. The construction is also given by Descartes in his "La Géométrie", see figure 2 on page 2. However, Descartes made no claim to originality and his audience would have been quite familiar with Euclid. Euclid's second proof in Book VI depends on the theory of similar triangles. Let AHB be a line segment of length a + b with AH = a and HB = b. Construct the circle with AB as diameter and let C be one of the two intersections of the perpendicular chord at H with the circle, and denote the length CH as "h". Then, using Thales' theorem and, as in the proof of Pythagoras' theorem by similar triangles, triangle AHC is similar to triangle CHB (as indeed both are to triangle ACB, though we don't need that, but it is the essence of the proof of Pythagoras' theorem), so that AH:CH is as HC:HB, i.e. a/h = h/b, from which we conclude by cross-multiplication that h² = ab, and finally that h = √(ab). When marking the midpoint O of the line segment AB and drawing the radius OC of length (a + b)/2, then clearly OC > CH, i.e. (a + b)/2 ≥ √(ab) (with equality if and only if a = b), which is the arithmetic–geometric mean inequality for two variables and, as noted above, is the basis of the Ancient Greek understanding of "Heron's method". Another method of geometric construction uses right triangles and induction: √1 can, of course, be constructed, and once √x has been constructed, the right triangle with legs 1 and √x has a hypotenuse of √(x + 1). Constructing successive square roots in this manner yields the Spiral of Theodorus depicted above.
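The behaviour of square roots in rings with zero divisors, discussed above for Z/8Z and Z/n²Z, is easy to verify by brute force. A small Python sketch (ours, purely illustrative):

```python
def square_roots_mod(a: int, n: int) -> list[int]:
    """All x in Z/nZ with x*x congruent to a (mod n), by exhaustive search."""
    return [x for x in range(n) if (x * x - a) % n == 0]

print(square_roots_mod(1, 8))  # [1, 3, 5, 7]: four square roots of 1 in Z/8Z
print(square_roots_mod(0, 9))  # [0, 3, 6]: every multiple of 3 squares to 0 mod 3**2
print(square_roots_mod(2, 7))  # [3, 4]: at most two roots in the field Z/7Z
```

The first two calls exhibit the extra roots that zero divisors make possible, while the third shows the familiar at-most-two square roots in a field.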
https://en.wikipedia.org/wiki?curid=29208
SS Kaiser Wilhelm der Grosse Kaiser Wilhelm der Grosse (Ger. orth. "Kaiser Wilhelm der Große") was a German transatlantic ocean liner named after Wilhelm I, German Emperor, the first monarch of the (second) German Empire. The liner was constructed in Stettin (now Szczecin, Poland) for the North German Lloyd (NDL), and entered service in 1897. It was the first liner to have four funnels and is considered to be the first "superliner." The first of four sister ships built by NDL between 1897 and 1907 (the others being SS "Kronprinz Wilhelm", SS "Kaiser Wilhelm II" and SS "Kronprinzessin Cecilie"), she marked the beginning of a change in the way maritime supremacy was demonstrated in Europe at the beginning of the 20th century. The ship began a new era in ocean travel, and the novelty of having four funnels was quickly associated with size, strength, speed and above all luxury. Quickly established on the Atlantic, she gained the Blue Riband for Germany, a notable prize for the fastest trip from Europe to America which had previously been dominated by the British. In 1900, she was damaged in a massive and lethal multi-ship fire in the port of New York. She was also in a collision in the French port of Cherbourg in 1906. With the advent of her sister ships, she was modified to an all-third-class ship to take advantage of the lucrative immigrant market travelling to the United States. Converted into an auxiliary cruiser at the outbreak of World War I, she was given orders to capture and destroy enemy ships. She destroyed several before being defeated in the Battle of Río de Oro by the British cruiser HMS "Highflyer" and scuttled by her crew, just three weeks after the outbreak of war. Her wreck was discovered in 1952 and dismantled. At the end of the 19th century, the United Kingdom dominated maritime trade with the ocean liners of the principal maritime companies such as the Cunard Line and the White Star Line. Having gained more influence in Europe after William I, German Emperor, his grandfather, had created the German Empire in 1871, Emperor Wilhelm II wished to consolidate German influence on the sea and thus decrease that of the British. In 1889, the Emperor himself had attended a naval review in honour of the jubilee of his grandmother Queen Victoria. There he saw the strength and size of these British ships, notably the latest and then-largest liner owned by White Star, RMS "Teutonic". He particularly admired the fact that these ships could easily be converted to auxiliary cruisers in time of conflict. Leaving a lasting impression, the emperor was heard to say that "We must have some of these..." The "Norddeutscher Lloyd", commonly known as NDL or North German Lloyd, was one of only two German maritime companies which had any influence in the hugely profitable transatlantic shipping market. Neither of these lines had shown any interest in operating large liners. NDL, however, was the first company to name any of their liners in honour of members of the Imperial family, purely to flatter the emperor. The company also had important links with the naval architects AG Vulkan of Stettin. NDL then approached Vulkan and commissioned them to construct a new "superliner", which was to be named "Kaiser Wilhelm der Grosse". The new ship would set a new style for ocean liners. She was the largest and longest liner afloat, and would have been the largest ever had it not been for the "Great Eastern" of 1860. She was the first liner to have suites with sleeping quarters including a private parlor and bath.
She was built with decks strengthened to mount eight guns, four smaller guns, and fourteen machine guns, although fewer and smaller guns were actually mounted in her ultimate wartime conversion. The launching of the ship took place on 4 May 1897 in the presence of the Imperial family; it was the emperor who baptised the ship, whose name honoured his grandfather Emperor William I, "the Great". Construction and the internal decoration of the liner were completed in Bremerhaven, and before long she was ready to begin her regular crossings, her maiden voyage being scheduled for September the same year. The most striking feature of "Kaiser Wilhelm der Grosse" was her four funnels, the first ship ever to sport such a quartet, which for the next two decades would be a symbol of size and safety. "Kaiser Wilhelm der Grosse" set out on her maiden voyage on 19 September 1897, travelling from Bremerhaven to Southampton and thence to New York. With a capacity of 800 third-class passengers, the NDL had ensured that they would profit greatly from the immigrants wishing to leave the continent for a better standard of living in the United States. From her maiden voyage, she was the only superliner to cross the Atlantic with such speed and such media attention. In March 1898, she gained the Blue Riband, thus establishing the new German competitiveness. The Blue Riband, an award given for the fastest crossing of the North Atlantic, east- and westbound, had previously been held by the Cunard liner "Lucania". This turn of events was closely watched by the maritime world of the era, which was eager to see how the British would retaliate. However, the NDL soon lost the riband in 1900 to the new German superliner "Deutschland" of the Hamburg America Line. This change in events was acceptable to Germans, who were able to relax in the knowledge that they were still the owners of the fastest liner; however, NDL promptly ordered that "Kaiser Wilhelm der Grosse" undergo a refit to ensure that they were the dominant German company. This refit included the installation of wireless communication, then a new technology, which allowed "Kaiser Wilhelm der Grosse" to transmit telegraphic messages to a port, emphasising her image of security. The NDL took the battle even further. 1901 saw the addition to their fleet of another four-funnel liner, "Kronprinz Wilhelm", named in honour of Crown Prince William, heir to the German throne, and they subsequently commissioned another two superliners, "Kaiser Wilhelm II" and "Kronprinzessin Cecilie", of 1903 and 1907 respectively. From 1903 to 1907 the Blue Riband was held by SS "Kaiser Wilhelm II". The company stated that the four liners were of the renowned Kaiser class and decided to market them as the "Four Flyers", a reference to their speed and associations with the Blue Riband. The career of "Kaiser Wilhelm der Grosse", despite its prestige, was not without incident. In June 1900, at her quay in Hoboken, New Jersey, she was the victim of a fire which killed one hundred staff who were trying to remove the threat as the ship was towed to safety in the Hudson River. Six years later, on 21 November 1906, she was involved in a collision with "Orinoco", a British ship of the Royal Mail, in Cherbourg. Five passengers aboard "Kaiser Wilhelm der Grosse" and three crewmen aboard "Orinoco" lost their lives in the incident, and "Kaiser Wilhelm der Grosse" was found to have a tear in her hull. New York City mayor William Jay Gaynor was embarking on a European vacation when he was shot aboard the ship on 9 August 1910.
To make matters worse, the ever-growing technological evolution of steamships soon made NDL's express steamers outdated. Cunard's "Lusitania" and "Mauretania" outmatched their German rivals in all fields, and when White Star's "Olympic" entered service in 1911, luxury on the high seas was taken one step further. As a result, "Kaiser Wilhelm der Grosse" was rebuilt in 1913 to carry third-class passengers only. It seemed that her glory was fading regardless of her career as the first "four stacker". From 26 January 1907, she had been charged with carrying passengers between the Mediterranean Sea and New York, effectively ending the public career of the first of the "four flyers". From 1908, German naval captains had been receiving orders to make preparations in the event of a sudden war. In fact, "Kaiser Wilhelm der Grosse" was soon fitted with cannons and thus transformed into an auxiliary cruiser. Across the world, supply ships carrying weapons and provisions were ready to convert merchant vessels into armed auxiliary cruisers. On 4 August 1914, Great Britain declared war on Germany after the Germans invaded Belgium and Luxembourg. "Kaiser Wilhelm der Grosse" was requisitioned and turned into an armed cruiser, painted in grey and black. Her commander at the time, Captain Reymann, operated not only under the rules of war, but also the rules of mercy. Reymann soon sank three ships, "Tubal Cain", "Kaipara", and "Nyanza", but only after taking their occupants on board. Further south in the Atlantic, "Kaiser Wilhelm der Grosse" encountered two passenger liners: "Galician" and "Arlanza". Reymann's first intention was to sink both vessels, but, discovering that they had many women and children on board, he let them go. In this early stage of the war, it was thought that it could be fought in a chivalrous fashion. However, it was soon to become a total war, and ships would no longer be warned before being fired upon. As "Kaiser Wilhelm der Grosse" approached the west coast of Africa, her coal bunkers were almost empty and needed refilling. She stopped at Río de Oro (Villa Cisneros, former Spanish Sahara), where German and Austrian colliers started the task of refuelling her. The task of coaling was still going on on 26 August, when the British cruiser HMS "Highflyer" appeared. Reymann quickly prepared his ship and crew for battle and steamed out to engage the enemy after disembarking his prisoners of war. A fierce battle took place, but came to a dramatic end when "Kaiser Wilhelm der Grosse" ran out of ammunition. According to the Germans, rather than let the enemy capture the onetime pride of Germany, Reymann ordered the ship to be scuttled using dynamite, which was already in position should this situation ever arise. On detonation, the explosives tore a massive hole in the ship, causing her to capsize. This version of events was disputed by the British, who stated that "Kaiser Wilhelm der Grosse" had been badly damaged and sinking when Reymann ordered her to be abandoned. The British firmly believed that it was gunfire from HMS "Highflyer" which sank the German ship. Reymann managed to swim to shore, and he made his way back to Germany by working as a stoker on a neutral vessel. (Most of the crew were taken prisoner and held in the Amherst Internment Camp in Nova Scotia for the remainder of the war.) The downfall of such great liners in the event of war was their huge fuel consumption. Most liners were subsequently converted from cruisers to hospital ships or troopships.
The liner measured 14,349 gross register tons. Her dimensions were, in fact, similar to those of the 1860 "Great Eastern", which had been the largest ship of its time. As already noted, her four funnels were her most unusual feature. People associated the safety of an ocean liner with the number of "stacks" or funnels it had. Some passengers would in fact refuse to board ships if they did not have four funnels. In an age when ocean travel was not as safe as today, it was important to ensure that passengers felt at ease. The special improvement in the arrangement of this steamer, as compared with other express steamers previously built by the NDL or other companies, consisted in the entire upper deck. Like many four-funnelled liners, "Kaiser Wilhelm der Grosse" did not actually require that many. She had only two uptake shafts from the boiler rooms, each of which branched into two to connect to the four funnels—this design is the reason for the funnels being unequally spaced. "Kaiser Wilhelm der Grosse" became the first liner to have a commercial wireless telegraphy system when the Marconi Company installed one in February 1900. Communications were demonstrated with systems installed at the Borkum Island lighthouse and the Borkum Riff lightship northwest of the island, as well as with British stations, and the first ship-to-shore message was sent on 7 March. The ship was powered by two triple-expansion reciprocating engines driving two propellers, allowing her to reach the record speeds with which she took the Blue Riband. The engines were noted for their stability. They were balanced on the Schlick system, which prevented movement being transferred to the body of the ship, thus reducing unpleasant vibration. As a large passenger ship, "Kaiser Wilhelm der Grosse" was built to carry a maximum of 1,506 passengers: 206 first class, 226 second class and 1,074 third class. At the time of her construction, she had a crew numbering a mere 488. However, following her refit of 1913, her crew space was increased to 800. The décor of the ship was in the style of Baroque revival, overseen by Johann Poppe, who carried out all of the interior decoration. This was unusual, as a ship would normally have several interior designers. The interiors were graced with statues, mirrors, tapestries, gilding, and various portraits of the Imperial family. The interiors of her sister ships were also placed in the hands of Poppe. The first-class salon was noted for its tapestries and its blue seating. The smoking room, a traditionally male preserve, was made to look like a typical German inn. The dining room, capable of holding all passengers in one sitting, rose several decks and was crowned with a dome. The room also had columns and had its chairs fixed to the deck, a typical feature of ocean liners of the era. On 6 September 2013 the Salam Association for the Protection of the Environment and Sustainable Development in Morocco filmed underwater footage of the wreck with the ship's name on the hull visible. This was confirmed by the Moroccan Ministry of Culture on 8 October 2013.
https://en.wikipedia.org/wiki?curid=29209
Sydney Swans The Sydney Swans are a professional Australian rules football club which plays in the Australian Football League (AFL). Established in Melbourne as the South Melbourne Football Club in 1874, the Swans relocated to Sydney in 1982, becoming the first club in the competition to be based outside Victoria. Initially playing in the Victorian Football Association (VFA), the Swans joined seven other clubs in founding the breakaway Victorian Football League (now known as the AFL) in 1896. The club won premierships in 1909, 1918 and 1933 before experiencing a 72-year premiership drought—the longest in the competition's history. The club broke the drought in 2005 and won another premiership in 2012. The club has proven to be one of the most consistent teams in the AFL era, failing to make the finals in only four seasons since 1995, playing the most finals matches and winning the second-most matches overall since 2000, and boasting a finals winning record of over 50% in the same period. The Swans' headquarters and training facilities are located at the Sydney Cricket Ground, the club's playing home ground since 1982. The inauguration date of the club is officially 19 June 1874, and it adopted the name "South Melbourne Football Club" four weeks later, on 15 July. In 1880, South Melbourne amalgamated with the nearby Albert-park Football Club, which had a senior football history dating back to May 1867 (Albert-park had, in fact, been known as South Melbourne during its first year of existence). Following the amalgamation, the club retained the name South Melbourne, and adopted the now familiar red and white colours from Albert-park. Nicknamed the "Southerners", the team was more colourfully known as the "Bloods", in reference to the bright red sash on their white jumpers (the sash was replaced with a red "V" in 1932). The colourful epithet the "Bloodstained Angels" was also in use. The club was based at Lake Oval, also home of the South Melbourne Cricket Club. South Melbourne was a junior foundation club of the Victorian Football Association in 1877, and attained senior status in 1879; the amalgamation with Albert-park in 1880 formed a club that became the strongest in metropolitan Melbourne. Over its first decade as an amalgamated club, South Melbourne won five VFA premierships – in 1881, 1885 (undefeated), and three-in-a-row in 1888, 1889 and 1890 – and was runner-up to the provincial Geelong Football Club in 1880, 1883 and 1886. At the end of the 1896 season, Collingwood and South Melbourne finished equal at the top of the VFA's premiership ladder with records of 14–3–1, requiring a playoff match to determine the season's premiership; this was the first time this had occurred in VFA history. The match took place on 3 October 1896 at the East Melbourne Cricket Ground. Collingwood won the match, six goals to five, in front of an estimated crowd of 12,000. This grand final would be the last match South Melbourne would play in the VFA, as the following season the club became one of the eight founding members of the breakaway Victorian Football League, which began play in 1897; the other founding clubs were St Kilda, Essendon, Fitzroy, Melbourne, Geelong, Carlton and Collingwood.
The club had early success, winning three VFL premierships in 1909, 1918 and 1933. The club was at its most successful in the 1930s, when key recruits from both Victoria and interstate led to a string of appearances in the finals, including four successive grand final appearances from 1933 to 1936, albeit with only one premiership, in 1933. The collection of players recruited from interstate in 1932–33 became known as South Melbourne's "Foreign Legion". On grand final eve, 1935, as the Swans prepared to take on Collingwood, star full-forward Bob Pratt was clipped by a truck moments after stepping off a tram and subsequently missed the match for South. Ironically, the truck driver was a South Melbourne supporter. It was during this period that the team became known as the Swans. The nickname, which was suggested by a Herald and Weekly Times artist in 1933, was inspired by the number of Western Australians in the team (the black swan being the state emblem of Western Australia), and was formally adopted by the club before the following season, 1934. The name stuck, in part due to the club's association with nearby Albert Park and its lake, also known for its swans (the lake is now home only to native black swans, the non-native white swans having gone). After several years with only limited success, South Melbourne next reached the grand final in 1945. The match, played against Carlton, was to become known as "the Bloodbath", courtesy of the brawl that overshadowed the match, with a total of nine players reported by the umpires. Carlton won the match by 28 points, and from then on South Melbourne consistently struggled, as their traditional inner-city recruiting district largely emptied as a result of demographic shifts. The club missed the finals in 1946 and continued to fall, such that by 1950 they were second-last on the ladder. They nearly made the finals in 1952, but from 1953 to 1969 they never finished higher than eighth on the ladder. By the 1960s it was clear that South Melbourne's financial resources would not be capable of allowing them to compete in the growing market for country and interstate players, and their own local zone was never strong enough to compensate for this. The introduction of country zoning failed to help, as the Riverina Football League proved to be one of the least profitable zones. Between 1945 and 1981, South Melbourne made the finals only twice: under legendary coach Norm Smith, South Melbourne finished fourth in 1970 but lost the first semi-final; and, in 1977, the club finished fifth under coach Ian Stewart but lost the elimination final. In that time, they "won" three wooden spoons. Between Round 7, 1972 and Round 13, 1973, the team lost 29 consecutive games. By the end of the 1970s, South Melbourne were saddled with massive debts after struggling for such a long period of time. In the late 1970s and early 1980s, the VFL was strategically interested in seeing a club based in Sydney, as part of a long-term plan to broaden the appeal of the game in Queensland and New South Wales. The league had been moving a few premiership matches to the Sydney Cricket Ground annually since 1979, and in 1981 was preparing to establish an entirely new, 13th VFL club in Sydney after the Fitzroy Lions staved off a proposed relocation to become the Sydney Lions in late 1980.
These plans were halted when the South Melbourne board, recognising the difficulties it faced with viability and financial stability in Melbourne, made the decision to play all 1982 home games in Sydney. The club had been operating at a loss of at least $150,000 for the previous five years. News of the proposal broke on 2 July 1981, after which a letter was sent to members outlining the board's reasons for making the proposal and noting that the coach and current players were in favour of the move. On 29 July 1981, the VFL formally accepted the proposal, paving the way for the Swans to shift to Sydney in 1982. The move caused great internal difficulties, as a group of supporters known as Keep South at South campaigned throughout the rest of 1981 to stop the move; and, at an extraordinary general meeting on 22 September, the group democratically took control of the club's board. However, the new board did not have the power to unilaterally stop the move to Sydney: under the VFL constitution, rescinding the decision that had been made on 29 July required a three-quarters majority in a vote of all twelve clubs, and at a meeting on 14 October it failed to obtain this majority. The new board also lacked the support of the players, the vast majority of whom were in favour of a long-term move to Sydney; in early November, after the board promised that it would try to bring the club back to Melbourne in 1983, the players went on strike, seeking to force the new board to commit to Sydney in the long term as well as seeking payments that the cash-strapped club owed them from the previous season. The board ended up undermining its own position when it accepted a $400,000 loan from the VFL in late November to stay solvent, under the condition that it commit to Sydney for two years. Finally, in early December, the Keep South at South board resigned and a board in favour of the move to Sydney was installed. Upon moving, the club played at the Sydney Cricket Ground. In 1982, the club was technically a Melbourne-based club which played all of its home games in Sydney; it dropped the name "South Melbourne" in June 1982, becoming known simply as "the Swans" for the rest of that season. It was not until 1983 that the club formally moved its operations to Sydney and became the Sydney Swans. Its physical "home club" was the "Southern Cross Social Club" at 120a Clovelly Road, Randwick, New South Wales, which became bankrupt in 1987; new Sydney Swans offices were then set up in the Sydney Football Stadium. On 31 July 1985, for what was thought to be $6.3 million, Geoffrey Edelsten "bought" the Swans; in reality the price was $2.9 million in cash, with funding and other payments spread over five years. Edelsten resigned as chairman within twelve months, but had already made his mark. He immediately recruited former Geelong coach Tom Hafey. Hafey, in turn, used his knowledge of Geelong's contracts to recruit David Bolton, Bernard Toohey and Greg Williams, who would all form a key part of the Sydney side, at a league-determined total fee of $240,000 (less than the $500,000 Geelong demanded and even the $300,000 Sydney had offered). The likes of Gerard Healy, Merv Neagle and Paul Morwood were also poached from other clubs, and failed approaches were made to Simon Madden, Terry Daniher, Andrew Bews and Maurice Rioli.
During the Edelsten years, the Swans were seen by the Sydney public as a flamboyant club, typified by the style of its spearhead, Warwick Capper: his long bright blond mullet and bright pink boots made him unmissable on the field, while his pink Lamborghini, penchant for fashion models and eccentricity made him notorious off it – all somewhat fashionable in the 1980s. During Capper's peak years, the Swans made successive finals appearances for the first time since relocating. His consistently spectacular aerial exploits earned him the Mark of the Year award in 1987, while his goalkicking efforts (amassing 103 goals in 1987) made him runner-up in the Coleman Medal two years running. The Swans' successive finals appearances saw crowds during this time peak at an average of around 25,000 per game. Edelsten also introduced the "Swanettes", who became the sole cheerleading group among VFL teams following the disbandment of Carlton's Blue Birds in 1986. The Swanettes did not get much performance time, owing to the short intervals between quarters of play in the VFL and the lack of space in which they might perform while other activities took place on the field. The Swanettes were soon discontinued, and no club has had cheerleaders since then. In 1987, the Swans scored 201 points against the West Coast Eagles and the following week scored 236 points against the Essendon Football Club. Both games were at the SCG. The Swans remain one of only two clubs to have scored consecutive team tallies above 200 points, the other being Geelong in 1992. However, this was followed by several heavy losses, including defeat by Hawthorn by 99 points in the qualifying final and by 76 points against Melbourne in the first semi-final. The club's form slumped in the following year, and financial losses ran into the millions. A group of financial backers including Mike Willesee, Basil Sellers, Peter Weinert and Craig Kimberley purchased the licence and bankrolled the club until 1993, when the AFL stepped in. Morale at the side plummeted as players were asked to take pay cuts. Legendary coach Tom Hafey was sacked by the club in 1988 after a player-led rebellion against his tough training methods (unusual in the semi-professional days of that era). Capper was sold to the Brisbane Bears for $400,000 in a desperate attempt to improve the club's finances. Instead, it only led to disastrous on-field performances. Instead of a 100-goal-a-season forward, Sydney's goalkicking was led by Bernard Toohey (usually a defender) with 29 goals in 1989, then Jim West with 34 in 1990. Players left the club in droves, including Brownlow Medallist Greg Williams, Bernard Toohey and Barry Mitchell. The careers of stars such as Dennis Carroll, David Bolton, Ian Roberts, Tony Morwood and David Murphy wound down, while promising young players like Jamie Lawson, Robert Teal and Paul Bryce had their careers cut short by injury. Attendances consistently dropped below 10,000 as the team performed poorly between 1990 and 1994. The nadir came with three consecutive wooden spoons in 1992, 1993 and 1994. The AFL stepped in to save the Swans, offering substantial monetary and management support. The club survived, despite strong rumours in 1992 that it would merge with the Brisbane Bears to form a combined New South Wales/Queensland team, fold altogether, or even move back to South Melbourne. With draft and salary cap concessions in the early 1990s and a series of notable recruits, the team became competitive again after the early part of the decade.
During this time, the side was largely held together by two inspirational skippers, both from the Wagga Wagga region of country New South Wales: Dennis Carroll and, later, the courageous Paul Kelly. Desperate to hang on, the club was keen to enlist the biggest names and identities in the AFL, and recruited legendary coach Ron Barassi, who helped save the club from extinction while serving as coach from Round 7, 1993 to 1995. At roughly the same time, Hawthorn legend Dermott Brereton was also recruited, albeit with little on-field impact. A big coup for the club was the recruitment of St Kilda Football Club champion Tony "Plugger" Lockett in 1995. Lockett became a cult figure in Sydney with an instant impact, and—along with the Super League war in the dominant rival rugby league code in Australia—helped the Swans become a powerhouse Sydney icon. 1995 would be Barassi's last year in charge. The Swans won eight games – as many as they had in the previous three seasons combined – and finished with a percentage of over 100 (something they have managed consistently ever since). They were also one of only two teams to defeat the all-conquering Carlton side of that year, and captain Paul Kelly won the League's highest individual honour, the Brownlow Medal. Barassi left an improving team, a club in a much better state than he had found it. Former Hawthorn player Rodney Eade took over the reins in 1996 and, after a slow start (they lost their first two games of the season), turned the club into a powerful force. The Swans ended the minor round on top of the premiership table with 16 wins, 5 losses and 1 draw. In the finals, the Swans won one of the most thrilling AFL preliminary finals in history after Plugger Lockett kicked a behind after the siren to win the game. The Swans then lost the grand final—their first grand final appearance since 1945—in front of 93,102 at the MCG. The Swans made the finals in four of the next five full years that Rodney Eade was in charge. In 1998 they finished third on the AFL ladder; despite winning their first final, the Swans were then beaten by the eventual premiers in the semi-final at the SCG. The 1999 season was a largely uneventful year for the club, the only real highlight being Tony Lockett kicking his record-breaking 1,300th goal against Collingwood in Round 10. The 1999 season ended with a 69-point mauling at the hands of the minor premiers. After missing the finals in 2000, the Swans rebounded to finish seventh in 2001, but were beaten by 55 points in their elimination final at Colonial Stadium. Former Swans favourite son Paul Roos was appointed caretaker coach midway through the 2002 season, replacing Rodney Eade, who was removed after Round 12. Roos won six of the remaining 10 games that year (including the last four of the season) and was installed as the permanent coach from the 2003 season onwards, despite rumours that Sydney had nearly concluded a deal with Terry Wallace. Roos continued a record as a successful coach with the Swans for the eight full seasons that followed. A new home ground in ANZ Stadium (then known as Telstra Stadium) provided increased capacity over the SCG. The Swans' first game at the stadium, in Round 9, 2002, attracted 54,169 spectators.
The Sydney Swans v Collingwood match on 23 August 2003 set a record for the largest crowd to watch an AFL game outside Victoria, with an official attendance of 72,393, and was the largest home and away AFL crowd at any stadium in 2003. A preliminary final against the Brisbane Lions in 2003 attracted 71,019 people. The Swans lost all three of those significant matches. 2004 was an average year for Sydney; one highlight came in Round 11, when they ended St Kilda's undefeated start to the season. The match was notable for Leo Barry's effort in nullifying the impact of St Kilda full-forward and eventual Coleman Medallist Fraser Gehrig, whom Barry restricted to only two possessions for the entire match. Sydney was able to recruit another St Kilda export in the Lockett mould, Barry Hall. There were obvious parallels to the signing of Lockett (a powerful, tough forward from St Kilda with questions over his discipline and attitude), which left Hall with much to live up to. He flourished in his new surroundings and eventually became a cult figure and club leader in his own right. As the new century dawned, Sydney implemented a policy of giving up high-order draft picks in exchange for players who had struggled at other clubs. It was during this era that the Swans picked up the likes of Paul Williams, Barry Hall, Craig Bolton, Darren Jolly, Ted Richards, Peter Everitt, Martin Mattner, Rhyce Shaw, Shane Mumford, Ben McGlynn and Mitch Morton, amongst others; giving up higher-order draft picks, however, meant the Swans missed out on the likes of Daniel Motlop, Nick Dal Santo, James Kelly, Courtenay Dempsey and Sam Lonergan, who were drafted by other clubs. This policy is said to have paid off in the Roos era, as the club implemented a strict culture of discipline. In 2005, the Swans came under enormous public scrutiny, even from AFL CEO Andrew Demetriou, for their unorthodox, "boring" defence-oriented tactics, which included tightly controlling the tempo of the game and starving the opposition of possession (in fact, seven teams that season had their lowest possession total whilst playing against the Swans). Swans coach Paul Roos maintained that playing contested football was the style used by all recent premiership-winning teams, and felt it was ironic that the much-criticised strategy proved ultimately successful. After finishing third during the regular season, the Swans lost the second qualifying final against the West Coast Eagles at Subiaco Oval on 2 September, 10.5 (65) to 10.9 (69). This dropped them into a semi-final against the Geelong Cats at the SCG on 9 September; the Swans trailed the Cats 31–53 before Nick Davis kicked four consecutive goals, the last a matter of seconds before the siren, to win the game for Sydney, 7.14 (56) to 7.11 (53). In the first preliminary final at the MCG on 16 September against St Kilda, the Swans used a seven-goal blitz in 11 minutes of the fourth quarter to overturn an 8-point deficit and overrun the Saints, 15.6 (96) to 9.11 (65). The Swans faced the Eagles in a rematch in the AFL Grand Final on 24 September 2005, and this time they prevailed by four points, with a final score of 8.10 (58) to West Coast's 7.12 (54). In the last few minutes, the Sydney defence held strong, with Leo Barry marking the ball just before the siren to stop the Eagles' final desperate shot at goal. The premiership was the Swans' first in 72 years and their first since being based in Sydney. 
It was also the fifth premiership in succession to be won by a team from outside Victoria. On Friday, 30 September 2005, a ticker tape parade down Sydney's George Street was held in honour of the Swans' achievements, ending with a rally at Town Hall, where Sydney Lord Mayor Clover Moore presented the team with the key to the city. The flag of the Swans also flew on top of the Sydney Harbour Bridge during the week; the same flag was later given to WA premier Geoff Gallop to fly on top of the state legislature in Perth as part of a friendly wager between Gallop and NSW premier Morris Iemma. As reigning premiers, the Sydney Swans started the 2006 season slowly, losing three of their first four games, including a round-one loss to a side that would finish near the bottom of the ladder with only three wins and a draw, and with the worst defensive record of any side for the season (Sydney, conversely, had the best defensive record of any side). The 2006 AFL Grand Final was contested between the Sydney Swans and West Coast Eagles at the Melbourne Cricket Ground on 30 September 2006. The West Coast Eagles avenged their 2005 grand final defeat by beating the Sydney Swans by one point, only the fourth one-point grand final margin in the competition's history. The rivalry between the Sydney Swans and West Coast Eagles has become one of the great modern rivalries. The six games between the two sides (from the start of the 2005 finals to the first round of 2007 inclusive) were decided by a combined margin of 13 points. Four of those six games were finals, two of them grand finals. Sydney finished the 2007 home and away season in seventh place and advanced to the finals, where they were defeated by 38 points in the elimination final. It was their earliest exit from the finals since 2001 and the culmination of a mostly disappointing season, as only victories against lesser teams saw them through to a fifth consecutive finals campaign. The conclusion of the 2007 trade period saw the loss of Adam Schneider and Sean Dempster to St Kilda, the delisting of Simon Phillips, Jonathan Simpkin and Luke Vogels, and the gain of Henry Playfair from Geelong and Martin Mattner from Adelaide. The Swans spent the middle part of the 2008 season inside the top four; however, a late form slump which yielded only three wins in the last nine rounds saw the Swans drop to sixth at the conclusion of the 2008 regular season. Having qualified for the finals for a sixth consecutive season, the Swans won the elimination final before losing to the Western Bulldogs the following week. 2009 saw the club register only eight victories as they failed to reach the finals for the first time since 2002, finishing 12th with a percentage below 100 for the first time since 1994. Barry Hall, Leo Barry, Jared Crouch, Michael O'Loughlin, Amon Buchanan and Darren Jolly all departed at the conclusion of the season, with Mark Seaby, Daniel Bradshaw and Shane Mumford, among others, joining the club during the trade period. The 2010 season saw Sydney return to the finals by virtue of a fifth-place finish at the end of the regular season. The club won its elimination final by five points before losing to the Western Bulldogs in the semi-finals for the second time in three seasons. The loss signalled the end of Paul Roos's coaching career with the Swans, as well as the playing career of Brett Kirk. 
Former premiership-winning forward John Longmire took over as coach of the Swans ahead of the 2011 season, as part of a succession plan initiated by Paul Roos in 2009. He led the club to a seventh-place finish at the end of the regular season, thereby qualifying for the finals for the 13th time in the past 16 seasons. The Swans won an elimination final at Docklands Stadium before losing in the semi-finals the following week. During the regular season the Swans had caused the upset of the year in Round 23, defeating the star-studded Geelong Cats at their home ground, Skilled Stadium, where the home side had won its past 29 games in succession, including its past two matches at the ground by a combined margin of 336 points. It was the Swans' first win over the Cats since 2006 and their first win at the ground since Round 8, 1999. The Swans were also the only team to defeat the West Coast Eagles at Patersons Stadium during the regular season. The Swans' victory over Geelong was overshadowed by the news that co-captain Jarrad McVeigh's baby daughter had died in the week leading up to the match, which he consequently missed. The 2012 season began for the Swans with the inaugural Sydney Derby against AFL newcomers Greater Western Sydney. After an even and physical first half, Sydney went on to win by 63 points. Subsequent wins saw the Swans sit second on percentage after Round 5, but they would proceed to lose three of their next four matches before embarking on a nine-match winning streak between Rounds 10 and 19 inclusive. The Swans eventually finished the regular season in third place after losing three of their final four matches, all against their fellow top-four rivals (Collingwood, Hawthorn and Geelong in Rounds 20, 22 and 23 respectively). The Swans won their qualifying final at AAMI Stadium by 29 points, thus earning a week off and a home preliminary final, where they then defeated Collingwood by 26 points to qualify for their first grand final since 2006, ending an eleven-match losing streak against the Magpies in the process. In the grand final, the Swans defeated Hawthorn by ten points in front of 99,683 people at the MCG, with Nick Malceski kicking a snap goal with 34 seconds left to seal the Swans' fifth premiership and first since 2005. Ryan O'Keefe was named the Norm Smith Medallist as the Swans' best player in the grand final. The Swans' 2013 season was marred by long-term injuries to many of its key players, namely Adam Goodes, Sam Reid, Lewis Jetta, Rhyce Shaw and Lewis Roberts-Thomson, among others; despite this setback, the team was still able to reach the finals for the fifteenth time in 18 seasons, reaching the preliminary finals, where they were defeated at Patersons Stadium, their first loss at the venue since 2009. The 2014 AFL season began with some difficulties for the Swans. Sydney lost their first game, and then lost to Collingwood, before becoming the first non-South Australian team to win at Adelaide Oval, defeating Adelaide by 63 points with Lance Franklin and Luke Parker kicking 4 goals each. After a loss to North Melbourne in Round 4, the Swans won twelve games in a row, including victories against 2013 grand finalists Fremantle and Hawthorn, a 110-point win over Geelong at the SCG, and a win over then ladder leaders Port Adelaide. 
In Round 17, the Swans defeated Carlton to equal a winning streak set three times previously in club history, most recently in 1935, and eventually closed out the season with their first minor premiership in 18 years and a club-record 17 wins, eclipsing the previous best of 16, which had been achieved on six occasions (2012, 1996, 1986, 1945, 1936 and 1935). As minor premiers, the Swans also qualified for the 2014 AFL Grand Final. They defeated Fremantle at home in the first qualifying final in week one of the finals series and so earned a one-week break. In the first preliminary final the Swans had a convincing win against North Melbourne, which took them to their fourth grand final in 10 years. The 2014 AFL Grand Final was played on Saturday 27 September 2014 in near-perfect weather conditions, with Sydney seen as favourites leading up to the match. It was the first time in a finals series that former Hawthorn player Lance Franklin would play against his former team, and he became one of very few players to have played in back-to-back grand finals for two different clubs. The Hawks dominated the game from early on and eventually defeated the Swans 21.11 (137) to 11.8 (74). The 63-point loss was Sydney's biggest ever loss in a grand final and their biggest defeat all season, and Hawthorn became back-to-back premiers for the second time in their history. The Swans started the 2015 AFL season well, winning their first three games before losing their next two, to Fremantle, against whom they trailed by as many as eight goals before half-time, and the Western Bulldogs. They won their next six leading into the bye, including home wins against Geelong and North Melbourne, and an upset away win against Hawthorn in the grand final replay. The Swans lost their first game after the bye, their third of the season, to Richmond at the SCG, 11.11 (77) to 14.11 (95). The Swans rebounded with unconvincing wins against Port Adelaide and the Brisbane Lions, before suffering their heaviest defeat in 17 seasons, an 89-point loss to the Hawks. The following week was no better, with a road trip to Perth and another loss, this time to the Eagles by 52 points, the scoreline ultimately flattering the Swans. The Swans bounced back against Adelaide with a convincing 52-point win, but lost their next game to Geelong at Simonds Stadium, a close affair that Geelong blew apart in the third quarter. The Swans won their final four games, including victories over Collingwood and St Kilda, to secure a top-four finish. The Swans faced minor premiers Fremantle in the first qualifying final, their first finals match without Franklin, who had withdrawn from the finals due to illness. Ultimately the Swans went down in a low-scoring affair, effectively kicking themselves out of the game after losing Sam Reid to a hamstring injury midway through the second quarter. The following week the Swans were knocked out of the finals in a one-sided contest against North Melbourne, struggling to score throughout the first half, with the game effectively over by half-time. For the first time since 2011, the Swans failed to make a preliminary final. The Swans' continued period of success, in which they have missed the finals only three times since 1995, has led to some criticism of a salary cap concession which the club receives; the concession is in the form of an additional Cost of Living Allowance (COLA), granted because of the higher cost of living in Sydney compared with other Australian cities. 
It was, however, announced in March 2014 that this allowance would be scrapped, and the AFL imposed restrictions on the club's trading as part of the phase-out. The trade ban was fought by the club before the 2015 season and a reprieve was won, with the AFL allowing the club to participate in the 2015 AFL draft. There was a catch, however, with the league imposing an edict that the club could only recruit players at or below the current average wage of $340,000 (the adjusted figure for 2015 was $349,000). During the 2015 season, with the Swans' list stretched by ageing players and injuries, it became apparent that the trade restrictions that had prevented the Swans from participating in the 2014 draft had impacted the list. With the trade period looming, chairman Andrew Pridham lobbied the AFL to lift the trade restrictions, labelling the ban a restraint of trade. In response to continued discussions between the club and the league, as well as lobbying by the AFLPA, the league further relaxed the trade restrictions for the Swans during the 2015 AFL finals. The AFL changed the sanctions so that the Swans could replace a player who left the club, either as a free agent or through trade, with another player on a contract of up to $450,000 per year. This allowed the Swans to trade for Callum Sinclair in a swap deal, as well as trade a late pick for out-of-contract defender Michael Talia from the Western Bulldogs. The Swans started the 2016 season with a convincing 80-point Round 1 win against Collingwood, although new recruit Michael Talia suffered a long-term foot injury. They followed up the next round with a 60-point win against the Blues, with new recruit Callum Sinclair kicking 3 goals. The following week they defeated GWS by 25 points, with Lance Franklin kicking 4 goals. In the following match, against the Crows, Isaac Heeney starred with 18 touches and 4 goals in a losing side. Three more wins followed, against West Coast, Brisbane and Essendon, before a shock one-point loss to Richmond in Round 8, decided by a kick after the siren. They bounced back to win against top-placed North Melbourne, and then against the Hawks at the MCG, with Lance Franklin booting 3 goals, including a bomb from 80 metres. After a tight slog against the Suns, the Swans played the Giants once more and were defeated in that club's 100th game. They then won their next game by 55 points against the Demons, pulling away in the fourth quarter. After a bye in Round 14, the Swans lost their next game by 4 points, again with the last kick of the game. The week after was soured by a family feud involving co-captain Kieren Jack and his parents, who were reportedly told by him not to come to his 200-game milestone. After the spat, Jack led the Swans to an emphatic upset victory against Geelong, booting 3 goals and gathering 24 possessions in the one-sided 38-point win at Simonds Stadium. They then travelled back home, where they faced Hawthorn and lost their third match of the season by under a goal, with Franklin held goalless for the first time that season. After an unconvincing win the following week against Carlton, the Swans went on to win their last five home and away games by a combined total of 349 points, giving them top spot and a home qualifying final. Ahead of their first final, against cross-town rivals the Giants, the Swans confirmed that they would play all home finals at the SCG except for Sydney Derbies, which would be played at ANZ Stadium. The final would create history as the first Sydney Derby to be played in a finals match. 
It was also the first time the Giants had made the finals, in just their fifth season. In a low-scoring first half, the Swans were very competitive, trailing by only 2 points. However, a mark not paid to Isaac Heeney midway through the third quarter turned all the momentum the Giants' way, as they kicked away to win by 36 points. The Swans kicked only 2 goals after half-time, with Giant Jeremy Cameron outscoring them in the third quarter alone with 3 goals. They were quick to bounce back the following week, thumping the Adelaide Crows by six goals, with Franklin and Tom Papley kicking 4 goals apiece after a blistering seven-goals-to-one quarter. The story was much the same in the preliminary final against the Geelong Cats at the MCG: the Swans kept the Cats goalless in the first quarter and were never really challenged in their 37-point triumph. It took them to their third grand final in five years, against the Western Bulldogs at the MCG. After holding a scant two-point lead at half-time, the Bulldogs pulled away towards the end of the fourth quarter to hand Sydney their second grand final loss in three years. The Swans began the 2017 season with six straight losses: upset at home by Port Adelaide in the opening round, they were then beaten by Collingwood and Carlton, and suffered defeats to the Western Bulldogs, Greater Western Sydney (who won their first game at the SCG) and the West Coast Eagles (in Perth). However, they managed to win 13 of their last 15, losing both of their games to Hawthorn by a goal. Their best wins included victories over the reigning premiers the Western Bulldogs and GWS, and comeback wins against Richmond and Essendon. After becoming the first reigning grand finalist to lose their first six games, they became the first team to reach the finals after starting the season 0–6. They comprehensively defeated Essendon in their first final before slumping to an ugly defeat against Geelong, which ended their season. The Swans had an indifferent 2018, compounded by their struggles at home, losing five out of 11 games at the SCG. A lean patch of form, which included upset losses to Gold Coast (for the first time ever) and Essendon (for the first time since 2011), had them looking likely to miss the finals altogether. However, three wins from the last four rounds were enough to see them into their ninth consecutive finals series, where they were comprehensively beaten by GWS in the elimination final. The Swans' golden era of finals appearances came to an end in 2019. They missed the finals for the first time in a decade, finishing 15th on the ladder with eight wins and 14 losses. They started the season poorly with just one win in their first seven matches, although they briefly recovered, winning five of the next seven games. Six losses on the trot ended any chance of a tenth consecutive finals appearance, but strong wins over also-rans Melbourne and St Kilda in the final two rounds ensured the season ended on a positive note. They won their first match of the interrupted 2020 season against Adelaide at the Adelaide Oval by three points.

The Swans' jumper is white with a red back and a red yoke, with a silhouette of the Sydney Opera House at the point of the yoke. The Opera House design was first used at the start of the 1987 season, replacing the traditional red "V" on white design. Until 1991, the back of the jumper was white, with the yoke only extending to the back of the shoulders, and each side of the jumper had a red vertical stripe. 
The current predominantly red design appeared at the start of the 1992 season. The club's major sponsor is QBE Insurance. In 2004 the club added the initials 'SMFC' in white lettering at the back of the collar to honour the club's past as the South Melbourne Football Club. The move was welcomed by Melbourne-based fans. The clash guernsey is a predominantly white version of the home guernsey, similar to the original Opera House guernsey design and including a white back, but it is rarely used, since the two Queensland clubs (the Brisbane Lions and Gold Coast Suns) and cross-town rivals the GWS Giants are the only clubs with which there is a clash. ISC have manufactured the Swans' apparel since 2010, replacing long-time sponsor Puma. Since its days as "South Melbourne", the team has worn a variety of designs in the traditional red and white colours. The club song is "Cheer, Cheer the Red and the White", sung to the tune of the "Victory March", the fight song of the Notre Dame Fighting Irish in South Bend, Indiana, USA, which was written by University of Notre Dame graduates and brothers Rev. Michael J. Shea and John F. Shea. In 1961, the school and other musical houses granted South Melbourne the rights to adapt the "Victory March" into the new club song, which replaced an adaptation of "Springtime in the Rockies" by Gene Autry. Port Adelaide has also used the "Victory March" as the basis for its club song since 1971, though its senior team changed to its current original song, "Power To Win", after its entry into the AFL. George Gershwin's "Swanee" (1919) was used by the club in marketing promotions during the late 1990s. The Sydney Swans' mascot for the AFL's Mascot Manor is Syd 'Swannie' Skilton, named after Swans legend Bob Skilton. The actual mascot at Sydney's home games is, however, still known as "Cyggy" (as in cygnet). Since the 2016 AFL season, the Swans have played all their home games at the Sydney Cricket Ground, a 48,000-capacity venue located in inner-east Sydney. The venue has hosted Swans home games since the club's relocation to Sydney in 1982. In the years 2002–2015, the Swans played three to four home matches per season, and most home finals matches, at Stadium Australia (commercially known as ANZ Stadium), an 80,000-capacity stadium located in the west of the city. During the first five years at the ground, average crowds were high, but issues with the surface, as well as fan and player disengagement, resulted in the club ending its association with the venue. The club also trains on the SCG during the season and has its indoor training facilities and offices located within the stadium. During the off-season, when the ground is configured for cricket, the Swans train on the nearby Tramway Oval (previously known as Lakeside Oval) at Moore Park. In October 2018 the club announced it would shift all offices and indoor training facilities to Moore Park's Royal Hall of Industries sometime in the early-to-mid 2020s, after announcing a $55 million deal with the New South Wales Government to redevelop the Hall. However, the club pulled out of the agreement in April 2020 due to the COVID-19 pandemic. The Sydney Swans have built a strong following in the city they have called home since moving from South Melbourne. Attendances and memberships grew dramatically during the Lockett era, helped by the Super League war then plaguing rugby league. 
The Swans continue to have a strong supporter base in Victoria, with attendances for Swans games in Melbourne being much higher than those for other non-Victorian teams. The introduction of the GWS Giants to the AFL in 2012 resulted in the formation of the Sydney Derby. The Swans compete against their cross-city rivals twice every season, and the best-performed player in each derby match is awarded the Brett Kirk Medal. The Swans have also played the Giants twice in finals matches, losing each time. The Swans developed a famous modern rivalry with the Perth-based West Coast Eagles between 2005 and 2007, when six consecutive games between the two teams, including two qualifying finals and two grand finals, were decided by less than a goal. The rivalry was highlighted by Sydney's four-point win against West Coast in the 2005 Grand Final and West Coast's one-point win against Sydney in the 2006 Grand Final. The rivalry with Hawthorn is more recent, mostly defined by two grand finals (2012 and 2014). The Swans beat Hawthorn in 2012 by 10 points to claim their fifth premiership. The rivalry grew in 2013, when Hawthorn forward Lance Franklin transferred to the Swans as a free agent on a nine-year, $10 million deal. In 2014, the Swans finished minor premiers and were favourites to win the grand final; however, Hawthorn beat Sydney by 63 points. The two teams have had close games since their grand final encounters, with their matches often finishing within single-digit margins. The Swans currently field a reserves team in the North East Australian Football League. A reserves team was first created for South Melbourne in 1919 and continued to compete in the Victorian reserves competition until 1999, despite the senior team's relocation to Sydney in 1982. The team enjoyed little success in the Victorian competition; it was the only reserves team never to win a premiership, and its best performances were losing grand finals in 1927, 1956 and 1980. In 2000 the Swans entered a reserves team in the Sydney AFL competition but withdrew prior to the finals series, because the club felt the difference in standard was too heavily in favour of the Swans. Between 2001 and 2002 the Swans affiliated themselves with the Port Melbourne Football Club in the VFL while also starting a new stand-alone team, named the Redbacks, in the Sydney AFL competition. Little success resulted, and in 2003 the Swans entered a stand-alone reserves team in the AFL Canberra competition, where it won four consecutive premierships between 2005 and 2008. In 2011 the Swans reserves team joined the North East Australian Football League along with the rest of the AFL Canberra competition, and now has regular matches against the AFL reserve teams of the Brisbane Lions, Gold Coast Suns and GWS Giants. The team plays home games at the Sydney Cricket Ground and often plays as a curtain-raiser to senior AFL games. In 2011 the Swans reserves finished the home and away season with the Eastern Conference minor premiership, but in the Eastern Conference grand final Ainslie caused a major upset, defeating the Swans by 52 points. The team suffered the same fate in 2012, when Queanbeyan defeated them by 30 points in the Eastern Conference grand final. The Swans reserves then went on to play in an astonishing five losing NEAFL grand finals: 2013, 2014, 2016, 2017 and 2018. Six rounds into the 2005 season, Stuart Maxfield ended his playing career due to chronic injury. 
Six players rotated as captain throughout the rest of the season: Brett Kirk (Rounds 7, 8, 19 and 20), Leo Barry (Rounds 9, 10, 21 and 22), Barry Hall (Rounds 11, 12 and the entire finals series), Ben Mathews (Rounds 13 and 14), Adam Goodes (Rounds 15 and 16) and Jude Bolton (Rounds 17 and 18). As of 2019, the Sydney Swans have not lost a premiership match by more than 100 points since Round 10, 1998. Despite its historical lack of success, South Melbourne/Sydney has provided more Brownlow Medal winners (14) than any other club. The Norm Smith Medal is awarded to the player judged best on ground in the AFL Grand Final. Sydney announced its team of the century on 8 August 2003. The Sydney Swans receive regular exposure from Sydney's two major daily newspapers, The Daily Telegraph and The Sydney Morning Herald, and their respective counterpart publications, The Sunday Telegraph and The Sun-Herald. Articles about the Swans can occasionally be found in local community newspapers, free magazines and Sydney street press publications. The Sydney Swans are sponsored by radio station Triple M, which broadcasts all of the club's games, including finals, live. 702 ABC Sydney may cover Swans matches played on a Saturday afternoon, regardless of venue, and will usually do so when the club plays in Sydney in that timeslot. Matches played at other times and on other days are broadcast on the ABC NewsRadio station's analogue AM/FM frequencies for listeners in Sydney, Newcastle, the NSW Central Coast and Canberra. Most Swans matches can be heard by listeners in the Riverina region of NSW via the ABC Riverina – Wagga Wagga (2RVR) service on the 675 AM frequency. Match coverage can be heard anywhere in the world via live streaming at the official AFL website or through the AFL app for smartphones such as the iPhone and Samsung Galaxy. From 2002 to 2011, Network Ten televised all Swans games played in Melbourne and outside New South Wales live, and games played in Sydney on a half-hour delay for Sydney viewers, with coverage carried via affiliated stations in NSW and Canberra. In other years the Seven Network has broadcast Swans games to viewers in Sydney and most of NSW and Canberra via the Prime TV network (now branded as Prime7), with matches telecast live, on a 30–90-minute delay, or as a late-night replay. From 2002, all of the club's games were broadcast live or on same-day delay across Australia by subscription television provider Foxtel, on either the Fox Footy Channel or the Fox Sports channels. Under the new broadcast deal covering 2012 to 2016, the Seven Network and its affiliate Prime7 were required to broadcast all Sydney Swans (and Greater Western Sydney Giants) games live to viewers in Sydney and most of regional New South Wales and Canberra, screened on the 7mate channel in those regions. Foxtel also signed a new broadcast deal for the 2012–2016 seasons, which included screening all AFL matches (including all Swans games) live across Australia on its Fox Sports and Fox Footy channels.
https://en.wikipedia.org/wiki?curid=29210
Supersessionism Supersessionism, also called replacement theology, is a Christian doctrine which asserts that the New Covenant through Jesus Christ supersedes the Old Covenant, which was made exclusively with the Jewish people. In Christianity, supersessionism is a theological view on the current status of the church in relation to the Jewish people and Judaism. It holds that the Christian Church has succeeded the Israelites as the definitive people of God or that the New Covenant has replaced or superseded the Mosaic covenant. From a supersessionist point of view, "just by continuing to exist [outside the Church], the Jews dissent". This view directly contrasts with dual-covenant theology, which holds that the Mosaic covenant remains valid for Jews. Supersessionism has formed a core tenet of the Christian Churches for the majority of their existence. Christian traditions that have championed covenant theology (including the Roman Catholic, Reformed and Methodist teachings of this doctrine) have taught that the moral law continues to stand. Subsequent to and because of the Holocaust, some mainstream Christian theologians and denominations have rejected supersessionism. The Islamic tradition views Islam as the final and most authentic expression of Abrahamic prophetic monotheism, superseding both Jewish and Christian teachings. The doctrine of "tahrif" teaches that earlier monotheistic scriptures or their interpretations have been corrupted, while the Quran presents a pure version of the divine message that they originally contained. The word "supersessionism" comes from the English verb "to supersede", from the Latin verb "sedeo, sedere, sedi, sessum", "to sit", plus "super", "upon". It thus signifies one thing being replaced or supplanted by another. The word "supersession" is used by Sydney Thelwall in the title of chapter three of his 1870 translation of Tertullian's "Adversus Iudaeos" (written between 198 and 208 AD). The title is provided by Thelwall; it is not in the original Latin, which simply means "Against the Jews". Many Christian theologians saw the New Covenant in Christ as a replacement for the Mosaic Covenant. Historically, statements on behalf of the Roman Catholic Church have claimed its ecclesiastical structures to be a fulfillment and replacement of Jewish ecclesiastical structures (see also Jerusalem as an allegory for the Church). As recently as 1965, the Second Vatican Council affirmed that "the Church is the new people of God", without intending to make "Israel according to the flesh", the Jewish people, irrelevant in terms of eschatology (see "Roman Catholicism", below). Modern Protestants hold to a range of positions on the relationship between the Church and the Jewish people, with the primary Protestant alternative to supersessionism being dispensationalism. In the wake of the Holocaust, mainstream Christian communities began to re-examine supersessionism. In the New Testament, Jesus and others repeatedly give Jews priority in their mission, as in Jesus' statement that he had come to the Jews rather than to Gentiles, and in Paul's formula "first for the Jew, then for the Gentile". Yet after the death of Jesus, the inclusion of the Gentiles as equals in this burgeoning sect of Judaism also caused problems, particularly when it came to Gentiles keeping the Mosaic Law, which was both a major issue at the Council of Jerusalem and a theme of Paul's Epistle to the Galatians, though the relationship of Paul of Tarsus and Judaism is still disputed today. 
Paul's views on "the Jews" are complex, but he is generally regarded as the first person to make the claim that by not accepting claims of Jesus' divinity, known as high Christology, Jews disqualified themselves from salvation. Paul himself was born a Jew, but after a conversion experience he came to accept Jesus' divinity later in his life. In the opinion of Roman Catholic ex-priest James Carroll, accepting Jesus' divinity, for Paul, was dichotomous with being a Jew. His personal conversion and his understanding of the dichotomy between being Jewish and accepting Jesus' divinity, was the religious philosophy he wanted to see adopted among other Jews of his time. However, New Testament scholar N.T. Wright argues that Paul saw his faith in Jesus as precisely the fulfillment of his Judaism, not that there was any tension between being Jewish and Christian. Christians quickly adopted Paul's views. For most of Christian history, supersessionism has been the mainstream interpretation of the New Testament of all three major historical traditions within Christianity – Orthodox, Roman Catholic and Protestant. The text most often quoted in favor of the supersessionist view is Hebrews 8:13: "In speaking of 'a new covenant' [Jer. 31.31-32] he has made the first one obsolete." Many Early Christian commentators taught that the Old Covenant was fulfilled and replaced (superseded) by the New Covenant in Christ, for instance: Augustine (354–430) follows these views of the earlier Church Fathers, but he emphasizes the importance to Christianity of the continued existence of the Jewish people: "The Jews ... are thus by their own Scriptures a testimony to us that we have not forged the prophecies about Christ." The Catholic church built its system of eschatology on his theology, where Christ rules the earth spiritually through his triumphant church. Like his anti-Jewish teacher, St. Ambrose of Milan, he defined Jews as a special subset of those damned to hell, calling them "Witness People": "Not by bodily death, shall the ungodly race of carnal Jews perish. ...Scatter them abroad, take away their strength. And bring them down O Lord." Augustine mentioned to "love" the Jews but as a means to convert them to Christianity. Jeremy Cohen, followed by John Y. B. Hood and James Carroll, sees this as having had decisive social consequences, with Carroll saying, "It is not too much to say that, at this juncture, Christianity 'permitted' Judaism to endure because of Augustine." Supersessionism is not the name of any official Roman Catholic doctrine and the word appears in no Church documents, but official Catholic teaching has reflected varying levels of supersessionist thought throughout its history, especially prior to the mid-twentieth century. Supersessionist theology is extensive in Catholic liturgy and literature. The Second Vatican Council (1962–65) marked a shift in emphasis of official Catholic teaching about Judaism, a shift which may be described as a move from "hard" to "soft" supersessionism, to use the terminology of David Novak (below). Prior to Vatican II, Catholic doctrine on the matter was characterized by "displacement" or "substitution" theologies, according to which the Church and its New Covenant took the place of Judaism and its "Old Covenant", the latter being rendered void by the coming of Jesus. The nullification of the Old Covenant was often explained in terms of the "deicide charge" that Jews forfeited their covenantal relationship with God by executing the divine Christ. 
As recently as 1943, Pope Pius XII gave expression to this teaching in his encyclical "Mystici corporis Christi". At the Second Vatican Council, convened within two decades of the Holocaust, there emerged a different framework for thinking about the status of the Jewish covenant. The declaration "Nostra aetate", promulgated in 1965, made several statements which signaled a shift away from "hard" supersessionist replacement thinking, which had posited that the Jews' covenant was no longer acknowledged by God. Retrieving Paul's language in chapter 11 of his Epistle to the Romans, the declaration states, "God holds the Jews most dear for the sake of their Fathers; He does not repent of the gifts He makes or of the calls He issues. ...Although the Church is the new people of God, the Jews should not be presented as rejected or accursed by God, as if this followed from the Holy Scriptures." Notably, a draft of the declaration contained a passage which originally called for "the entry of that [Jewish] people into the fullness of the people of God established by Christ"; however, at the suggestion of Catholic priest (and convert from Judaism) John M. Oesterreicher, it was replaced in the final promulgated version with the following language: "the Church awaits that day, known to God alone, on which all peoples will address the Lord in a single voice and 'serve him shoulder to shoulder' (Zeph 3:9)." Further developments in Catholic thinking on the covenantal status of Jews were led by Pope John Paul II. Among his most noteworthy statements on the matter is one made during his historic visit to the synagogue in Mainz (1980), where he called Jews the "people of God of the Old Covenant, which has never been abrogated by God (cf. Romans 11:29, 'for the gifts and the calling of God are irrevocable' [NRSV])". In 1997, John Paul II again affirmed the Jews' covenantal status: "This people continues in spite of everything to be the people of the covenant and, despite human infidelity, the Lord is faithful to his covenant." The post-Vatican II shift toward acknowledging the Jews as a covenanted people has led to heated discussions in the Catholic Church over the issue of missionary activity directed toward Jews, with some Catholic theologians reasoning that "if Christ is the redeemer of the world, every tongue should confess him", while others vehemently oppose "targeting Jews for conversion". Weighing in on this matter, Cardinal Walter Kasper, then President of the Pontifical Commission for Religious Relations with the Jews, reaffirmed the validity of the Jews' covenant. More recently, in his apostolic exhortation "Evangelii gaudium" (2013), Pope Francis emphasized the two communities' shared heritage and mutual respect for one another. Similarly, the words of Cardinal Kasper – "God's grace, which is the grace of Jesus Christ according to our faith, is available to all. Therefore, the Church believes that Judaism, [as] the faithful response of the Jewish people to God's irrevocable covenant, is salvific for them, because God is faithful to his promises" – highlight the covenantal relationship of God with the Jewish people, but differ from Pope Francis in calling the Jewish faith salvific. In 2011, Kasper specifically repudiated the notion of "displacement" theology, clarifying that the "New Covenant for Christians is not the replacement (substitution), but the fulfillment of the Old Covenant." 
These statements from Catholic officials signal a remaining point of debate, wherein some adhere to a movement away from supersessionism and others retain a "soft" notion of supersessionism. Fringe Catholic groups, such as the Society of St. Pius X, strongly oppose the theological developments concerning Judaism made at Vatican II and retain "hard" supersessionist views. Even among mainstream Catholic groups and official Catholic teaching, elements of "soft" supersessionism remain. Protestant opinions on supersessionism vary. These differences arise from dissimilar literal versus figurative approaches to understanding the relationships between the covenants of the Bible, particularly the relationship between the covenants of the Old Testament and the New Covenant. In consequence, there is a range of viewpoints. Three prominent Protestant views on this relationship are covenant theology, New Covenant theology, and dispensationalism. Extensive discussion is found in Christian views on the Old Covenant and in the respective articles for each of these viewpoints: for example, there is a section within Dispensationalism detailing that perspective's concept of Israel. Differing approaches influence how the land promise in Genesis 12, 15 and 17 is understood – whether it is interpreted literally or figuratively – both with regard to the land and the identity of the people who inherit it. Adherents of these various views are not restricted to a single denomination, though some traditions teach a particular view. Classical covenant theology is taught within the Presbyterian and Continental Reformed traditions. Methodist hermeneutics traditionally use a variation of this, known as Wesleyan covenant theology, which is consistent with Arminian soteriology. In the United States, a difference of approach has been perceived between the Presbyterian Church and the Episcopal Church, the Evangelical Lutheran Church in America, and the United Methodist Church, which have worked to develop a non-supersessionist theology. Paul van Buren developed a thoroughly non-supersessionist position, in contrast to Karl Barth, his mentor. He wrote, "The reality of the Jewish people, fixed in history by the reality of their election, in their faithfulness in spite of their unfaithfulness, is as solid and sure as that of the gentile church." Mormonism rejects supersessionism. Judaism rejects supersessionism, discussing the topic only as an idea upheld by Christian and Muslim theologians. While some modern Jews are offended by the traditional Christian belief in supersessionism, a different viewpoint has been offered by Rabbi and Jewish theologian David Novak, who has stated that "Christian supersessionism need not denigrate Judaism" and that some subsets of Christian supersessionism "can affirm that God has not annulled his everlasting covenant with the Jewish people, neither past nor present nor future." In its canonical form, the Islamic doctrine of tahrif teaches that Jewish and Christian scriptures or their interpretations have been corrupted, which has obscured the divine message that they originally contained. According to this doctrine, the Quran both points out and corrects these supposed errors introduced by previous corruption of monotheistic scriptures, which makes it the final and most pure divine revelation. 
Sandra Toenies Keating argues that Islam was supersessionist from its inception, advocating the view that the Quranic revelations would "replace the corrupted scriptures possessed by other communities", and that early Islamic scriptures display a "clear theology of revelation that is concerned with establishing the credibility of the nascent community" vis-à-vis other religions. In contrast, Abdulaziz Sachedina has argued that Islamic supersessionism stems not from the Quran or hadith, but rather from the work of Muslim jurists who reinterpreted the Quranic message about "islam" (in its literal meaning of "submission") being "the only true religion with God" into an argument about the religion of Islam being superior to other faiths, thereby providing theoretical justification for Muslim political dominance and a wider interpretation of the notion of jihad. Both Christian and Jewish theologians have identified different types of supersessionism in the Christian reading of the Bible. R. Kendall Soulen notes three categories of supersessionism identified by Christian theologians: punitive, economic, and structural. These three views are neither mutually exclusive nor logically dependent, and it is possible to hold all of them or any one with or without the others. The work of Matthew Tapie attempts a further clarification of the language of supersessionism in modern theology, which Peter Ochs has called "the clearest teaching on supersessionism in modern scholarship." Tapie argued that Soulen's view of economic supersessionism shares important similarities with the thought of Jules Isaac (the French-Jewish historian well known for his identification of "the teaching of contempt" in the Christian tradition) and can ultimately be traced to the medieval concept of the "cessation of the law" – the idea that Jewish observance of the ceremonial law (Sabbath, circumcision, and dietary laws) ceases to have a positive significance for Jews after the passion of Christ. According to Soulen, Christians today often repudiate supersessionism, but they do not always carefully examine just what that is supposed to mean. Soulen thinks Tapie's work is a remedy to this situation.
https://en.wikipedia.org/wiki?curid=29212
Software cracking Software cracking (known as "breaking" in the 1980s) is the modification of software to remove or disable features which are considered undesirable by the person cracking the software, especially copy protection features (including protection against the manipulation of software, serial numbers, hardware keys, date checks and disc checks) or software annoyances like nag screens and adware. A crack refers to the means of achieving software cracking, for example a stolen serial number or a tool that performs the act of cracking. Some of these tools are called keygens, patches, or loaders. A keygen is a handmade serial-number generator that often offers the ability to generate working serial numbers in the user's own name. A patch is a small computer program that modifies the machine code of another program; this has the advantage for a cracker of not having to include a large executable in a release when only a few bytes are changed (a minimal sketch of this approach appears below). A loader modifies the startup flow of a program; it does not remove the protection but circumvents it. A well-known example of a loader is a trainer used to cheat in games. Fairlight pointed out in one of their .nfo files that these types of cracks are not allowed for warez scene game releases. A nukewar (a dispute within the scene over a release's validity) has established that the protection must not activate at any point for a release to count as a valid crack.
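As an illustration of the patch approach just described, the following Python sketch applies a byte-level patch to a program file. Every concrete value in it is a hypothetical assumption rather than anything from a real release: the file name, the offset and the opcodes stand in for the conditional jump guarding a protection check, which a cracker would locate by disassembly.

# Minimal byte-patcher sketch (all values illustrative).
# A typical patch replaces a conditional jump that guards a protection
# check (here x86 JE, opcode 0x74) with an unconditional jump (JMP,
# opcode 0xEB), so the "check passed" branch is always taken.

ORIGINAL = bytes([0x74, 0x1A])   # JE  +0x1A -- hypothetical original bytes
PATCHED = bytes([0xEB, 0x1A])    # JMP +0x1A -- always take the jump
OFFSET = 0x4F2                   # hypothetical file offset of the check

def apply_patch(path: str) -> None:
    with open(path, "rb") as f:
        data = bytearray(f.read())
    # Check the expected bytes are present, so the patch is not applied
    # to a different version of the program.
    if data[OFFSET:OFFSET + len(ORIGINAL)] != ORIGINAL:
        raise ValueError("unexpected bytes at offset; wrong program version?")
    data[OFFSET:OFFSET + len(PATCHED)] = PATCHED
    with open(path + ".patched", "wb") as f:
        f.write(data)

apply_patch("target.exe")  # hypothetical target file

The sketch also shows why patch releases stay small: only the changed bytes and their location need to be distributed, not the whole executable.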
https://en.wikipedia.org/wiki?curid=29213
SOAP SOAP (abbreviation for Simple Object Access Protocol) is a messaging protocol specification for exchanging structured information in the implementation of web services in computer networks. Its purpose is to provide extensibility, neutrality, verbosity and independence. It uses XML Information Set for its message format, and relies on application layer protocols, most often Hypertext Transfer Protocol (HTTP), although some legacy systems communicate over Simple Mail Transfer Protocol (SMTP), for message negotiation and transmission. SOAP allows developers to invoke processes running on disparate operating systems (such as Windows, macOS, and Linux) to authenticate, authorize, and communicate using Extensible Markup Language (XML). Since Web protocols like HTTP are installed and running on all operating systems, SOAP allows clients to invoke web services and receive responses independent of language and platform. SOAP provides the Messaging Protocol layer of a web services protocol stack. It is an XML-based protocol consisting of three parts: an envelope, which defines the message structure and how to process it; a set of encoding rules for expressing instances of application-defined datatypes; and a convention for representing procedure calls and responses. SOAP has three major characteristics: extensibility (security and WS-Addressing are among the extensions under development), neutrality (SOAP can operate over any protocol such as HTTP, SMTP, TCP or UDP), and independence (SOAP allows for any programming model). As an example of what SOAP procedures can do, an application can send a SOAP request to a server that has web services enabled – such as a real-estate price database – with the parameters for a search. The server then returns a SOAP response (an XML-formatted document with the resulting data), e.g., prices, location, features. Since the generated data comes in a standardized machine-parsable format, the requesting application can then integrate it directly. The SOAP architecture consists of several layers of specifications for the message format, message exchange patterns (MEP), underlying transport protocol bindings, message processing models, and protocol extensibility. SOAP evolved as a successor of XML-RPC, though it borrows its transport and interaction neutrality from Web Service Addressing and the envelope/header/body from elsewhere (probably from WDDX). SOAP was designed as an object-access protocol in 1998 by Dave Winer, Don Box, Bob Atkinson, and Mohsen Al-Ghosein for Microsoft, where Atkinson and Al-Ghosein were working. The specification was not made available until it was submitted to the IETF on 13 September 1999. According to Don Box, this was due to politics within Microsoft. Because of Microsoft's hesitation, Dave Winer shipped XML-RPC in 1998. The submitted Internet Draft did not reach RFC status and is therefore not considered a "standard" as such. Version 1.1 of the specification was published as a W3C Note on 8 May 2000. Since version 1.1 did not reach W3C Recommendation status, it cannot be considered a "standard" either. Version 1.2 of the specification, however, became a W3C Recommendation on June 24, 2003. The SOAP specification was maintained by the XML Protocol Working Group of the World Wide Web Consortium until the group was closed on 10 July 2009. "SOAP" originally stood for "Simple Object Access Protocol", but version 1.2 of the standard dropped this acronym. After SOAP was first introduced, it became the underlying layer of a more complex set of web services, based on Web Services Description Language (WSDL), XML Schema and Universal Description Discovery and Integration (UDDI). These different services, especially UDDI, have proved to be of far less interest, but an appreciation of them gives a complete understanding of the expected role of SOAP compared to how web services have actually evolved. The SOAP specification can be broadly defined as consisting of three conceptual components: protocol concepts, encapsulation concepts and network concepts. 
The SOAP specification defines the messaging framework, which consists of the SOAP processing model (rules for processing a SOAP message), the SOAP extensibility model (the concepts of SOAP features and modules), the SOAP underlying protocol binding framework (rules for exchanging messages over an underlying protocol) and the SOAP message construct (the structure of a SOAP message). A SOAP message is an ordinary XML document containing a mandatory Envelope root element, which identifies the document as a SOAP message; an optional Header element, which carries application-specific information such as authentication data; a mandatory Body element, which contains the payload of the message; and an optional Fault element, nested within the Body, which carries error and status information. Both SMTP and HTTP are valid application layer protocols used as transport for SOAP, but HTTP has gained wider acceptance as it works well with today's internet infrastructure; specifically, HTTP works well with network firewalls. SOAP may also be used over HTTPS (which is the same protocol as HTTP at the application level, but uses an encrypted transport protocol underneath) with either simple or mutual authentication; this is the advocated WS-I method to provide web service security as stated in the WS-I Basic Profile 1.1. This is a major advantage over other distributed protocols like GIOP/IIOP or DCOM, which are normally filtered by firewalls. SOAP over AMQP is yet another possibility that some implementations support. SOAP also has an advantage over DCOM in that it is unaffected by security rights configured on the machines, which would otherwise require knowledge of both transmitting and receiving nodes. This lets SOAP be loosely coupled in a way that is not possible with DCOM. There is also the SOAP-over-UDP OASIS standard. XML Information Set was chosen as the standard message format because of its widespread use by major corporations and open source development efforts. Typically, XML Information Set is serialized as XML. A wide variety of freely available tools significantly eases the transition to a SOAP-based implementation. The somewhat lengthy syntax of XML can be both a benefit and a drawback. While it promotes readability for humans, facilitates error detection, and avoids interoperability problems such as byte-order (endianness), it can slow processing speed and can be cumbersome. For example, CORBA, GIOP, ICE, and DCOM use much shorter, binary message formats. On the other hand, hardware appliances are available to accelerate processing of XML messages. Binary XML is also being explored as a means of streamlining the throughput requirements of XML. XML messages, by their self-documenting nature, usually have more 'overhead' (e.g., headers, nested tags, delimiters) than actual data, in contrast to earlier protocols where the overhead was usually a relatively small percentage of the overall message. In financial messaging, SOAP was found to result in messages 2–4 times larger than those of the previous protocols FIX (Financial Information Exchange) and CDR (Common Data Representation). XML Information Set does not have to be serialized in XML; for instance, CSV and JSON XML-infoset representations exist. There is also no need to specify a generic transformation framework. The concept of SOAP bindings allows for specific bindings for a specific application; the drawback is that both the senders and receivers have to support this newly defined binding. The message below requests a stock price for AT&T (stock ticker symbol "T"):

POST /InStock HTTP/1.1
Host: www.example.org
Content-Type: application/soap+xml; charset=utf-8
Content-Length: 299
SOAPAction: "http://www.w3.org/2003/05/soap-envelope"
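The body of such a request is a SOAP envelope. A minimal SOAP 1.2 envelope consistent with this example might look as follows; the GetStockPrice operation and the example.org stock namespace are illustrative placeholders rather than a real service definition:

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
  </soap:Header>
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock/">
      <m:StockName>T</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>

A client needs no SOAP-specific machinery to send such a message; any HTTP library suffices. The following Python sketch posts the envelope above to the hypothetical endpoint from the example and prints the XML response, which is itself a SOAP envelope whose Body carries the result:

import urllib.request

# The SOAP 1.2 envelope shown above, as a string.
ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock/">
      <m:StockName>T</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "http://www.example.org/InStock",  # hypothetical endpoint from the example
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))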
https://en.wikipedia.org/wiki?curid=29215
Sodium thiopental Sodium thiopental, also known as Sodium Pentothal (a trademark of Abbott Laboratories), thiopental, thiopentone or Trapanal (also a trademark), or as Fatal-Plus in veterinary euthanasia contexts, is a rapid-onset, short-acting barbiturate general anesthetic. It is the thiobarbiturate analog of pentobarbital, and an analog of thiobarbital. Sodium thiopental was a core medicine in the World Health Organization's List of Essential Medicines, the safest and most effective medicines needed in a health system, but was supplanted by propofol. Despite this, thiopental is still listed as an acceptable alternative to propofol, depending on local availability and the cost of these agents. It was previously the first of three drugs administered during most lethal injections in the United States, but the US manufacturer Hospira stopped manufacturing the drug and the EU banned the export of the drug for this purpose. Although thiopental abuse carries a dependency risk, its recreational use is rare. Sodium thiopental is an ultra-short-acting barbiturate and has been used commonly in the induction phase of general anesthesia. Its use has been largely replaced with that of propofol, but it retains popularity as an induction agent for rapid-sequence intubation and in obstetrics. Following intravenous injection, the drug rapidly reaches the brain and causes unconsciousness within 30–45 seconds. At one minute, the drug attains a peak concentration of about 60% of the total dose in the brain. Thereafter, the drug distributes to the rest of the body, and in about 5–10 minutes the concentration is low enough in the brain that consciousness returns. A normal dose of sodium thiopental (usually 4–6 mg/kg) given to a pregnant woman for operative delivery (caesarean section) rapidly makes her unconscious, but the baby in her uterus remains conscious. However, larger or repeated doses can depress the baby. Sodium thiopental is not used to maintain anesthesia in surgical procedures because, when given by infusion, it displays zero-order elimination pharmacokinetics, leading to a long period before consciousness is regained. Instead, anesthesia is usually maintained with an inhaled anesthetic (gas) agent. Inhaled anesthetics are eliminated relatively quickly, so stopping the inhaled anesthetic allows rapid return of consciousness. Sodium thiopental would have to be given in large amounts to maintain an anesthetic plane, and because of its 11.5- to 26-hour half-life, consciousness would take a long time to return. In veterinary medicine, sodium thiopental is used to induce anesthesia in animals. Since it is redistributed to fat, certain lean breeds of dogs, such as sighthounds, will have prolonged recoveries from sodium thiopental due to their lack of body fat and their lean body mass. Conversely, obese animals will have rapid recoveries, but it will be some time before the drug is entirely removed (metabolized) from their bodies. Sodium thiopental is always administered intravenously, as it can be fairly irritating; severe tissue necrosis and sloughing can occur if it is injected incorrectly into the tissue around a vein. Sodium thiopental generally produces less hypotension than an equivalent dose of propofol when used for induction of anaesthesia. This is partly because, although both drugs decrease systemic vascular resistance, thiopentone (as opposed to propofol) tends to preserve the reflex tachycardia seen in states of acute hypotension, which can restore cardiac output. 
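The zero-order elimination noted above can be made explicit with a generic sketch; the symbols below are illustrative, not measured thiopental parameters. In first-order kinetics the elimination rate is proportional to the remaining plasma concentration C, whereas in zero-order kinetics a saturated elimination pathway removes drug at a constant rate:

\frac{dC}{dt} = -k_1 C \quad\Longrightarrow\quad C(t) = C_0 e^{-k_1 t} \qquad \text{(first-order)}

\frac{dC}{dt} = -k_0 \quad\Longrightarrow\quad C(t) = C_0 - k_0 t \qquad \text{(zero-order)}

Under the zero-order curve the concentration falls only linearly with time, so the time needed to clear the drug grows in direct proportion to the amount accumulated during an infusion, which is why consciousness returns so slowly after prolonged dosing.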
In addition to anesthesia induction, sodium thiopental was historically used to induce medical comas. It has now been superseded by drugs such as propofol because their effects wear off more quickly than those of thiopental. Patients with brain swelling causing elevation of intracranial pressure, whether secondary to trauma or following surgery, may benefit from this drug. Sodium thiopental, like other drugs of the barbiturate class, decreases neuronal activity, thereby decreasing the cerebral metabolic rate of oxygen consumption (CMRO2) and the intracranial vascular response to carbon dioxide (CO2), which in turn decreases intracranial pressure. Patients with refractory intracranial hypertension (RICH) due to traumatic brain injury (TBI) may have improved long-term outcome when barbiturate coma is added to their neurointensive care treatment. Reportedly, thiopental has been shown to be superior to pentobarbital in reducing intracranial pressure; this phenomenon, in which vasoconstriction in healthy tissue redirects blood flow toward compromised regions, is also called a reverse steal effect. In refractory status epilepticus, thiopental may be used to terminate a seizure. Sodium thiopental is used intravenously for the purposes of euthanasia. In both Belgium and the Netherlands, where active euthanasia is allowed by law, the standard protocol recommends sodium thiopental as the ideal agent to induce coma, followed by pancuronium bromide to paralyze the muscles and stop breathing. Intravenous administration is the most reliable and rapid way to accomplish euthanasia, and death is quick. A coma is first induced by intravenous administration of 20 mg/kg thiopental sodium (Nesdonal) in a small volume (10 ml physiological saline). Then a triple dose of a non-depolarizing neuromuscular blocking drug is given, such as 20 mg pancuronium bromide (Pavulon) or 20 mg vecuronium bromide (Norcuron). The muscle relaxant should be given intravenously to ensure optimal availability, but pancuronium bromide may be administered intramuscularly at an increased dosage level of 40 mg. Along with pancuronium bromide and potassium chloride, thiopental is used in 34 states of the United States to execute prisoners by lethal injection. A very large dose is given to ensure rapid loss of consciousness. Although death usually occurs within ten minutes of the beginning of the injection process, some executions have been known to take longer. The use of sodium thiopental in execution protocols was challenged in court after a study in the medical journal "The Lancet" reported that autopsies of executed inmates showed the level of thiopental in their bloodstream was insufficient to cause unconsciousness. On December 8, 2009, Ohio became the first state to use a single dose of sodium thiopental for a capital execution, following the failed use of the standard three-drug cocktail during a recent execution due to an inability to locate suitable veins. Kenneth Biros was executed using the single-drug method. Washington became the second state in the US to use single-dose sodium thiopental injections for executions. On September 10, 2010, the execution of Cal Coburn Brown was the first in that state to use a single-dose, single-drug injection. His death was pronounced approximately one and a half minutes after the intravenous administration of five grams of the drug. After its use for the execution of Jeffrey Landrigan in the US, the UK introduced a ban on the export of sodium thiopental in December 2010, after it was established that European supplies to the US were not being used for any other purpose.
The restrictions were based on "the European Union Torture Regulation (including licensing of drugs used in execution by lethal injection)". From 21 December 2011 the European Union extended trade restrictions to prevent the export of certain medicinal products for capital punishment, stating that "the Union disapproves of capital punishment in all circumstances and works towards its universal abolition". Thiopental (Pentothal) is still used in some places as a truth serum to weaken the resolve of a subject and make them more compliant to pressure. Barbiturates as a class decrease higher cortical brain functioning and produce a loss of inhibition. Some psychiatrists hypothesize that because lying is more complex than telling the truth, suppression of the higher cortical functions may lead to the uncovering of the truth. The drug tends to make subjects loquacious and cooperative with interrogators; however, the reliability of confessions made under thiopental is questionable. Psychiatrists have used thiopental to desensitize patients with phobias and to "facilitate the recall of painful repressed memories." One Dutch professor of psychiatry used this procedure to help relieve trauma in surviving victims of the Holocaust. Sodium thiopental is a member of the barbiturate class of drugs, which are relatively non-selective compounds that bind to an entire superfamily of ligand-gated ion channels, of which the GABAA receptor channel is one of several representatives. This superfamily of ion channels includes the neuronal nAChR channel, the 5HT3R channel, the GlyR channel and others. Surprisingly, while GABAA receptor currents are increased by barbiturates (and other general anesthetics), ligand-gated ion channels that are predominantly permeable to cations are blocked by these compounds. For example, neuronal nAChR channels are blocked by clinically relevant anesthetic concentrations of both sodium thiopental and pentobarbital. Such findings implicate (non-GABA-ergic) ligand-gated ion channels, e.g. the neuronal nAChR channel, in mediating some of the (side) effects of barbiturates. The GABAA receptor is an inhibitory channel that decreases neuronal activity, and barbiturates enhance the inhibitory action of the GABAA receptor. Following a shortage that led a court to delay an execution in California, a company spokesman for Hospira, the sole American manufacturer of the drug, objected to the use of thiopental in lethal injection: "Hospira manufactures this product because it improves or saves lives, and the company markets it solely for use as indicated on the product labeling. The drug is not indicated for capital punishment and Hospira does not support its use in this procedure." On January 21, 2011, the company announced that it would stop production of sodium thiopental at its plant in Italy because Italian authorities could not guarantee that exported quantities of the drug would not be used in executions. Italy was the only viable place where the company could produce sodium thiopental, leaving the United States without a supplier. Thiopental rapidly and easily crosses the blood–brain barrier, as it is a lipophilic molecule.
As with all lipid-soluble anaesthetic drugs, the short duration of action of sodium thiopental is due almost entirely to its redistribution away from the central circulation towards muscle and fat tissue, owing to its very high fat:water partition coefficient (approximately 10), which leads to sequestration in fat tissue. Once redistributed, the free fraction in the blood is metabolized in the liver. Sodium thiopental is mainly metabolized to pentobarbital, 5-ethyl-5-(1'-methyl-3'-hydroxybutyl)-2-thiobarbituric acid, and 5-ethyl-5-(1'-methyl-3'-carboxypropyl)-2-thiobarbituric acid. The usual dose range for induction of anesthesia using thiopental is from 3 to 6 mg/kg (roughly 210–420 mg for a 70 kg adult, for example); however, many factors can alter this. Premedication with sedatives such as benzodiazepines or clonidine will reduce requirements, as do specific disease states and other patient factors. Among the patient factors are age, sex, and lean body mass. Specific disease conditions that can alter the dose requirements of thiopentone, and for that matter of any other intravenous anaesthetic, include hypovolemia, burns, azotemia, liver failure, and hypoproteinemia. As with nearly all anesthetic drugs, thiopental causes cardiovascular and respiratory depression, resulting in hypotension, apnea, and airway obstruction. For these reasons, only suitably trained medical personnel should give thiopental, and only in an environment suitably equipped to deal with these effects. Side effects include headache, agitated emergence, prolonged somnolence, and nausea. Intravenous administration of sodium thiopental is followed instantly by an odor and/or taste sensation, sometimes described as being similar to rotting onions or to garlic. The hangover from the side effects may last up to 36 hours. Although each molecule of thiopental contains one sulfur atom, it is not a sulfonamide and does not show the allergic reactions associated with sulfa/sulpha drugs. Thiopental should be used with caution in cases of liver disease, Addison's disease, myxedema, severe heart disease, severe hypotension, a severe breathing disorder, or a family history of porphyria. Co-administration of pentoxifylline and thiopental causes death by acute pulmonary edema in rats. This pulmonary edema was not mediated by cardiac failure or by pulmonary hypertension but was due to increased pulmonary vascular permeability. Sodium thiopental was discovered in the early 1930s by Ernest H. Volwiler and Donalee L. Tabern, working for Abbott Laboratories. It was first used in human beings on March 8, 1934, by Dr. Ralph M. Waters in an investigation of its properties, which proved to be short-term anesthesia and surprisingly little analgesia. Three months later, Dr. John S. Lundy started a clinical trial of thiopental at the Mayo Clinic at the request of Abbott. Abbott continued to make the drug until 2004, when it spun off its hospital-products division as Hospira. Thiopental is famously associated with a number of anesthetic deaths in victims of the attack on Pearl Harbor. These deaths, relatively soon after the drug's introduction, were said to be due to excessive doses given to shocked trauma patients. However, recent evidence made available through freedom of information legislation was reviewed in the "British Journal of Anaesthesia", and suggests that this story was grossly exaggerated. Of the 344 wounded who were admitted to the Tripler Army Hospital, only 13 did not survive, and it is unlikely that thiopentone overdose was responsible for more than a few of these.
https://en.wikipedia.org/wiki?curid=29218
Stone Age The Stone Age was a broad prehistoric period during which stone was widely used to make tools with an edge, a point, or a percussion surface. The period lasted for roughly 3.4 million years and ended between 8700 BCE and 2000 BCE, with the advent of metalworking. Though some simple metalworking of malleable metals, particularly the use of gold and copper for purposes of ornamentation, was known in the Stone Age, it is the melting and smelting of copper that marks the end of the Stone Age. In western Asia this occurred by about 3000 BCE, when bronze became widespread. The term Bronze Age is used to describe the period that followed the Stone Age, as well as to describe cultures that had developed techniques and technologies for working copper into tools, supplanting stone in many uses. Stone Age artifacts that have been discovered include tools used by modern humans, by their predecessor species in the genus "Homo", and possibly by the earlier partly contemporaneous genera "Australopithecus" and "Paranthropus". Bone tools were used during this period as well, but these are rarely preserved in the archaeological record. The Stone Age is further subdivided by the types of stone tools in use. The Stone Age is the first period in the three-age system frequently used in archaeology to divide the timeline of human technological prehistory into functional periods: the Stone Age, the Bronze Age, and the Iron Age. The Stone Age is contemporaneous with the evolution of the genus "Homo", with the possible exception of the early Stone Age, when species prior to "Homo" may have manufactured tools. According to the age and location of the current evidence, the cradle of the genus is the East African Rift System, especially toward the north in Ethiopia, where it is bordered by grasslands. The closest relative among the other living primates, the genus "Pan", represents a branch that continued on in the deep forest, where the primates evolved. The rift served as a conduit for movement into southern Africa and also north down the Nile into North Africa and through the continuation of the rift in the Levant to the vast grasslands of Asia. Starting from about 4 million years ago (mya), a single biome established itself from South Africa through the rift, North Africa, and across Asia to modern China. This has recently been called the "transcontinental 'savannahstan'". Starting in the grasslands of the rift, "Homo erectus", the predecessor of modern humans, found an ecological niche as a tool-maker and developed a dependence on it, becoming a "tool equipped savanna dweller". The oldest indirect evidence of stone tool use that has been found consists of fossilised animal bones with tool marks; these are 3.4 million years old and were found in the Lower Awash Valley in Ethiopia. Archaeological discoveries in Kenya in 2015, identifying what may be the oldest evidence of hominin use of tools known to date, have indicated that "Kenyanthropus platyops" (a 3.2 to 3.5-million-year-old Pliocene hominin fossil discovered in Lake Turkana, Kenya in 1999) may have been among the earliest tool-users known. The oldest stone tools were excavated from the site of Lomekwi 3 in West Turkana, northwestern Kenya, and date to 3.3 million years ago. Prior to the discovery of these "Lomekwian" tools, the oldest known stone tools had been found at several sites at Gona, Ethiopia, on sediments of the paleo-Awash River, which serve to date them.
All the tools come from the Busidama Formation, which lies above a disconformity, or missing layer, which would have been from 2.9 to 2.7 mya. The oldest sites discovered to contain tools are dated to 2.6–2.55 mya. One of the most striking circumstances about these sites is that they are from the Late Pliocene, whereas prior to their discovery tools were thought to have evolved only in the Pleistocene. The species that made the Pliocene tools remains unknown. Fragments of "Australopithecus garhi", "Australopithecus aethiopicus", and "Homo", possibly "Homo habilis", have been found in sites near the age of the Gona tools. In July 2018, scientists reported the discovery in China of the oldest known stone tools outside Africa, estimated at 2.12 million years old. Innovation of the technique of smelting ore is regarded as ending the Stone Age and beginning the Bronze Age. The first highly significant metal manufactured was bronze, an alloy of copper and tin or arsenic, each of which was smelted separately. The transition from the Stone Age to the Bronze Age was a period during which modern people could smelt copper, but did not yet manufacture bronze, a time known as the Copper Age (or more technically the Chalcolithic or Eneolithic, both meaning 'copper–stone'). The Chalcolithic by convention is the initial period of the Bronze Age. The Bronze Age was followed by the Iron Age. The transition out of the Stone Age occurred between 6000 and 2500 BCE for much of humanity living in North Africa and Eurasia. The first evidence of human metallurgy dates to between the 6th and 5th millennia BCE in the archaeological sites of Majdanpek, Yarmovac, and Pločnik in modern-day Serbia (including a copper axe from 5500 BCE belonging to the Vinča culture); though not conventionally considered part of the Chalcolithic, this provides the earliest known example of copper metallurgy. The Rudna Glava mine in Serbia is another notable early site. Ötzi the Iceman, a mummy from about 3300 BCE, carried with him a copper axe and a flint knife. In some regions, such as Sub-Saharan Africa, the Stone Age was followed directly by the Iron Age. The Middle East and Southeast Asian regions progressed past Stone Age technology around 6000 BCE. Europe and the rest of Asia became post-Stone Age societies by about 4000 BCE. The proto-Inca cultures of South America continued at a Stone Age level until around 2000 BCE, when gold, copper, and silver made their entrance. The peoples of the Americas notably did not develop a widespread behavior of smelting bronze or iron after the Stone Age period, although the technology existed. Stone-tool manufacture continued even after the Stone Age ended in a given area. In Europe and North America, millstones were in use until well into the 20th century, and still are in many parts of the world. The terms "Stone Age", "Bronze Age", and "Iron Age" are not intended to suggest that advancements and time periods in prehistory are measured only by the type of tool material, rather than, for example, social organization, food sources exploited, adaptation to climate, adoption of agriculture, cooking, settlement, and religion. Like pottery, the typology of the stone tools, combined with the relative sequence of the types in various regions, provides a chronological framework for the evolution of humanity and society. They serve as diagnostics of date, rather than characterizing the people or the society. Lithic analysis is a major and specialised form of archaeological investigation.
It involves the measurement of stone tools to determine their typology and function and the technologies involved. It includes the scientific study of the lithic reduction of the raw materials and of the methods used to make the prehistoric artifacts that are discovered. Much of this study takes place in the laboratory in the presence of various specialists. In experimental archaeology, researchers attempt to create replica tools in order to understand how they were made. Flintknappers are craftsmen who use sharp tools to reduce flint stone to flint tools. In addition to lithic analysis, field prehistorians utilize a wide range of techniques derived from multiple fields. The work of archaeologists in determining the paleocontext and relative sequence of the layers is supplemented by the efforts of geologic specialists in identifying layers of rock developed or deposited over geologic time; of paleontological specialists in identifying bones and animals; of palynologists in discovering and identifying pollen, spores and plant species; and of physicists and chemists in laboratories determining the ages of materials by carbon-14, potassium-argon and other methods (see the brief note following this paragraph). Study of the Stone Age has never been limited to stone tools and archaeology, even though they are important forms of evidence. The chief focus of study has always been on the society and the living people who belonged to it. Useful as it has been, the concept of the Stone Age has its limitations. The date range of this period is ambiguous, disputed, and variable, depending upon the region in question. While it is possible to speak of a general 'stone age' period for the whole of humanity, some groups never developed metal-smelting technology, and so remained in the so-called 'stone age' until they encountered technologically developed cultures. The term was innovated to describe the archaeological cultures of Europe. It may not always be the best in relation to regions such as some parts of the Indies and Oceania, where farmers or hunter-gatherers used stone for tools until European colonisation began. Archaeologists of the late 19th and early 20th centuries CE, who adapted the three-age system to their ideas, hoped to combine cultural anthropology and archaeology in such a way that a specific contemporaneous tribe could be used to illustrate the way of life and beliefs of the people exercising a particular Stone-Age technology. As a description of people living today, the term "stone age" is controversial. The Association of Social Anthropologists discourages this use, asserting: "To describe any living group as 'primitive' or 'Stone Age' inevitably implies that they are living representatives of some earlier stage of human development that the majority of humankind has left behind." In the 1920s, South African archaeologists organizing the stone tool collections of that country observed that they did not fit the newly detailed Three-Age System. In the words of J. Desmond Clark, "It was early realized that the threefold division of culture into Stone, Bronze and Iron Ages adopted in the nineteenth century for Europe had no validity in Africa outside the Nile valley." Consequently, they proposed a new system for Africa, the Three-stage System. Clark regarded the Three-age System as valid for North Africa; in sub-Saharan Africa, the Three-stage System was best. In practice, the failure of African archaeologists either to keep this distinction in mind, or to explain which one they mean, contributes to the considerable equivocation already present in the literature.
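As background on the radiometric methods just mentioned (standard physics rather than anything specific to this text), the age of a sample follows from the exponential decay of a radioactive isotope:

$$N(t) = N_0 e^{-\lambda t}, \qquad t = \frac{1}{\lambda}\ln\frac{N_0}{N(t)}, \qquad \lambda = \frac{\ln 2}{t_{1/2}},$$

where $N_0$ is the initial quantity of the isotope, $N(t)$ the quantity remaining, and $t_{1/2}$ the half-life: about 5,730 years for carbon-14, which limits that method to roughly the last 50,000 years, while potassium-argon dating, with a half-life of about 1.25 billion years, covers the far older Stone Age sites.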
There are in effect two Stone Ages, one part of the Three-age System and the other constituting the Three-stage System. They refer to one and the same artifacts and the same technologies, but vary by locality and time. The three-stage system was proposed in 1929 by Astley John Hilary Goodwin, a professional archaeologist, and Clarence van Riet Lowe, a civil engineer and amateur archaeologist, in an article titled "Stone Age Cultures of South Africa" in the journal "Annals of the South African Museum". By then, the dates of the Early Stone Age, or Paleolithic, and the Late Stone Age, or Neolithic ("neo" = new), were fairly solid and were regarded by Goodwin as absolute. He therefore proposed a relative chronology of periods with floating dates, to be called the Earlier and Later Stone Age. The Middle Stone Age would not change its name, but it would not mean Mesolithic. The duo thus reinvented the Stone Age. In Sub-Saharan Africa, however, iron-working technologies were either invented independently or came across the Sahara from the north (see "iron metallurgy in Africa"). The Neolithic there was characterized primarily by herding societies rather than large agricultural societies, and although there was copper metallurgy in Africa as well as bronze smelting, archaeologists do not currently recognize a separate Copper Age or Bronze Age. Moreover, the technologies included in those 'stages', as Goodwin called them, were not exactly the same. Since then, the original relative terms have become identified with the technologies of the Paleolithic and Mesolithic, so that they are no longer relative. Moreover, there has been a tendency to drop the comparative degree in favor of the positive, resulting in two sets of Early, Middle and Late Stone Ages of quite different content and chronologies. By voluntary agreement, archaeologists respect the decisions of the Pan-African Congress on Prehistory, which meets every four years to resolve archaeological business brought before it. Delegates are international; the organization takes its name from the topic. Louis Leakey hosted the first congress in Nairobi in 1947. It adopted Goodwin and Lowe's three-stage system at that time, the stages to be called Early, Middle and Later. The problem of transitions in archaeology is a branch of the general philosophic continuity problem, which examines how discrete objects of any sort that are contiguous in any way can be presumed to have a relationship of any sort. In archaeology, the relationship is one of causality. If Period B can be presumed to descend from Period A, there must be a boundary between A and B, the A–B boundary. The problem is in the nature of this boundary. If there is no distinct boundary, then the population of A suddenly stopped using the customs characteristic of A and suddenly started using those of B, an unlikely scenario in the process of evolution. More realistically, a distinct border period, the A/B transition, existed, in which the customs of A were gradually dropped and those of B acquired. If transitions do not exist, then there is no proof of any continuity between A and B. The Stone Age of Europe is characteristically lacking in known transitions. The 19th and early 20th-century innovators of the modern three-age system recognized the problem of the initial transition, the "gap" between the Paleolithic and the Neolithic. Louis Leakey provided something of an answer by proving that man evolved in Africa. The Stone Age must have begun there and been carried repeatedly to Europe by migrant populations.
The different phases of the Stone Age thus could appear there without transitions. The burden on African archaeologists became all the greater, because now they must find the missing transitions in Africa. The problem is difficult and ongoing. After its adoption by the First Pan African Congress in 1947, the Three-Stage Chronology was amended by the Third Congress in 1955 to include a First Intermediate Period between Early and Middle, to encompass the Fauresmith and Sangoan technologies, and a Second Intermediate Period between Middle and Later, to encompass the Magosian technology and others. The chronologic basis for definition was entirely relative. With the arrival of scientific means of finding an absolute chronology, the two intermediates turned out to be will-o'-the-wisps. They were in fact Middle and Lower Paleolithic. Fauresmith is now considered to be a facies of Acheulean, while Sangoan is a facies of Lupemban. Magosian is "an artificial mix of two different periods". Once seriously questioned, the intermediates did not wait for the next Pan African Congress two years hence, but were officially rejected in 1965 (again on an advisory basis) by Burg Wartenstein Conference #29, "Systematic Investigation of the African Later Tertiary and Quaternary", a conference in anthropology held by the Wenner-Gren Foundation at Burg Wartenstein Castle, which it then owned in Austria, attended by the same scholars that attended the Pan African Congress, including Louis Leakey and Mary Leakey, who was delivering a pilot presentation of her typological analysis of Early Stone Age tools, to be included in her 1971 contribution to "Olduvai Gorge", "Excavations in Beds I and II, 1960–1963". However, although the intermediate periods were gone, the search for the transitions continued. In 1859 Jens Jacob Worsaae first proposed a division of the Stone Age into older and younger parts based on his work with Danish kitchen middens that began in 1851. In the subsequent decades this simple distinction developed into the archaeological periods of today. The major subdivisions of the Three-age Stone Age cross two epoch boundaries on the geologic time scale: the Pliocene–Pleistocene boundary and the Pleistocene–Holocene boundary. The succession of these phases varies enormously from one region (and culture) to another. The Paleolithic or Palaeolithic (from Greek: παλαιός, "palaios", "old"; and λίθος, "lithos", "stone", lit. "old stone"; coined by archaeologist John Lubbock and published in 1865) is the earliest division of the Stone Age. It covers the greatest portion of humanity's time (roughly 99% of "human technological history", where "human" and "humanity" are interpreted to mean the genus "Homo"), extending from 2.5 or 2.6 million years ago, with the first documented use of stone tools by hominans such as "Homo habilis", to the end of the Pleistocene around 10,000 BCE. The Paleolithic era ended with the Mesolithic, or, in areas with an early neolithisation, the Epipaleolithic. At sites dating from the Lower Paleolithic Period (about 2,500,000 to 200,000 years ago), simple pebble tools have been found in association with the remains of what may have been the earliest human ancestors. A somewhat more sophisticated Lower Paleolithic tradition, known as the Chopper chopping-tool industry, is widely distributed in the Eastern Hemisphere. This tradition is thought to have been the work of the hominin species "Homo erectus". Although no such fossil tools have yet been found, it is believed that "H. erectus" probably made tools of wood and bone as well as stone.
About 700,000 years ago, a new Lower Paleolithic tool, the hand ax, appeared. The earliest European hand axes are assigned to the Abbevillian industry, which developed in northern France in the valley of the Somme River; a later, more refined hand-axe tradition is seen in the Acheulean industry, evidence of which has been found in Europe, Africa, the Middle East, and Asia. Some of the earliest known hand axes were found at Olduvai Gorge (Tanzania) in association with remains of "H. erectus". Alongside the hand-axe tradition there developed a distinct and very different stone-tool industry, based on flakes of stone: special tools were made from worked (carefully shaped) flakes of flint. In Europe, the Clactonian industry is one example of a flake tradition. The early flake industries probably contributed to the development of the Middle Paleolithic flake tools of the Mousterian industry, which is associated with the remains of Neanderthal man. The earliest documented stone tools have been found in eastern Africa, manufacturers unknown, at the 3.3-million-year-old site of Lomekwi 3 in Kenya. Better known are the later tools belonging to an industry known as Oldowan, after the type site of Olduvai Gorge in Tanzania. The tools were formed by knocking pieces off a river pebble, or stones like it, with a hammerstone to obtain large and small pieces with one or more sharp edges. The original stone is called a core; the resultant pieces, flakes. Typically, but not necessarily, small pieces are detached from a larger piece, in which case the larger piece may be called the core and the smaller pieces the flakes. The prevalent usage, however, is to call all the results flakes, which can be confusing. A split in half is called bipolar flaking. Consequently, the method is often called "core-and-flake". More recently, the tradition has been called "small flake", since the flakes were small compared to subsequent Acheulean tools. The essence of the Oldowan is the making and often immediate use of small flakes. Another naming scheme is "Pebble Core Technology (PBC)": "Pebble cores are ... artifacts that have been shaped by varying amounts of hard-hammer percussion." Various refinements in the shape have been called choppers, discoids, polyhedrons, subspheroid, etc. To date no reasons for the variants have been ascertained: "From a functional standpoint, pebble cores seem designed for no specific purpose." However, they would not have been manufactured for no purpose: "Pebble cores can be useful in many cutting, scraping or chopping tasks, but ... they are not particularly more efficient in such tasks than a sharp-edged rock." The whole point of their utility is that each is a "sharp-edged rock" in locations where nature has not provided any. There is additional evidence that Oldowan, or Mode 1, tools were utilized in "percussion technology"; that is, they were designed to be gripped at the blunt end and strike something with the edge, from which use they were given the name of choppers. Modern science has been able to detect mammalian blood cells on Mode 1 tools at Sterkfontein, Member 5 East, in South Africa. As the blood must have come from a fresh kill, the tool users are likely to have done the killing and used the tools for butchering. Plant residues bonded to the silicon of some tools confirm their use to chop plants. Although the exact species authoring the tools remains unknown, Mode 1 tools in Africa were manufactured and used predominantly by "Homo habilis".
They cannot be said to have developed these tools or to have contributed the tradition to technology. They continued a tradition of yet unknown origin. As chimpanzees sometimes naturally use percussion to extract or prepare food in the wild, and may use either unmodified stones or stones that they have split, creating an Oldowan tool, the tradition may well be far older than its current record. Towards the end of the Oldowan in Africa a new species appeared over the range of "Homo habilis": "Homo erectus". The earliest "unambiguous" evidence is a whole cranium, KNM-ER 3733 (a find identifier), from Koobi Fora in Kenya, dated to 1.78 mya. An early skull fragment, KNM-ER 2598, dated to 1.9 mya, is considered a good candidate also. Transitions in paleoanthropology are always hard to find, if not impossible, but based on the "long-legged" limb morphology shared by "H. habilis" and "H. rudolfensis" in East Africa, an evolution from one of those two has been suggested. The most immediate cause of the new adjustments appears to have been an increasing aridity in the region and a consequent contraction of parkland savanna, interspersed with trees and groves, in favor of open grassland, dated 1.8–1.7 mya. During that transitional period the percentage of grazers among the fossil species increased from 15–25% to 45%, dispersing the food supply and requiring a facility among the hunters to travel longer distances comfortably, which "H. erectus" obviously had. The ultimate proof is the "dispersal" of "H. erectus" "across much of Africa and Asia, substantially before the development of the Mode 2 technology and use of fire ...". "H. erectus" carried Mode 1 tools over Eurasia. According to the current evidence (which may change at any time), Mode 1 tools are documented from about 2.6 mya to about 1.5 mya in Africa, and to 0.5 mya outside of it. The genus "Homo" is known from "H. habilis" and "H. rudolfensis" from 2.3 to 2.0 mya, with the latest habilis being an upper jaw from Koobi Fora, Kenya, from 1.4 mya. "H. erectus" is dated 1.8–0.6 mya. According to this chronology Mode 1 was inherited by "Homo" from unknown Hominans, probably "Australopithecus" and "Paranthropus", who must have continued on with Mode 1 and then with Mode 2 until their extinction no later than 1.1 mya. Meanwhile, living contemporaneously in the same regions, "H. habilis" inherited the tools around 2.3 mya. At about 1.9 mya "H. erectus" came on stage and lived contemporaneously with the others. Mode 1 was now being shared by a number of Hominans over the same ranges, presumably subsisting in different niches, but the archaeology is not precise enough to say which. Tools of the Oldowan tradition first came to archaeological attention in Europe, where, being intrusive and not well defined compared to the Acheulean, they were puzzling to archaeologists. The mystery would be elucidated by African archaeology at Olduvai, but meanwhile, in the early 20th century, the term "Pre-Acheulean" came into use in climatology. C.E.P. Brooks, a British climatologist working in the United States, used the term to describe a "chalky boulder clay" underlying a layer of gravel at Hoxne, central England, where Acheulean tools had been found. Whether any tools would be found in it, and of what type, was not known. Hugo Obermaier, a contemporary German archaeologist working in Spain, quipped: "Unfortunately, the stage of human industry which corresponds to these deposits cannot be positively identified. All we can say is that it is pre-Acheulean."
This uncertainty was clarified by the subsequent excavations at Olduvai; nevertheless, the term is still in use for pre-Acheulean contexts, mainly across Eurasia, that are yet unspecified or uncertain but with the understanding that they are or will turn out to be pebble-tool. There are ample associations of Mode 2 with "H. erectus" in Eurasia. "H. erectus" – Mode 1 associations are scantier, but they do exist, especially in the Far East. One strong piece of evidence prevents the conclusion that only "H. erectus" reached Eurasia: at Yiron, Israel, Mode 1 tools have been found dating to 2.4 mya, about 0.5 my earlier than the known "H. erectus" finds. If the date is correct, either another Hominan preceded "H. erectus" out of Africa or the earliest "H. erectus" has yet to be found. After the initial appearance at Gona in Ethiopia at 2.7 mya, pebble tools date from 2.0 mya at Sterkfontein, Member 5, South Africa, and from 1.8 mya at El Kherba, Algeria, North Africa. The manufacturers had already left pebble tools at Yiron, Israel, at 2.4 mya, Riwat, Pakistan, at 2.0 mya, and Renzidong, South China, at over 2 mya. The identification of a fossil skull at Mojokerta, Pernung Peninsula on Java, dated to 1.8 mya, as "H. erectus" suggests that the African finds are not the earliest, or that, in fact, erectus did not originate in Africa after all but on the plains of Asia. The outcome of the issue awaits more substantial evidence. Erectus was found also at Dmanisi, Georgia, from 1.75 mya, in association with pebble tools. Pebble tools are found latest of all in Europe, first in the south and then in the north. They begin in the open areas of Italy and Spain, the earliest dated to 1.6 mya at Pirro Nord, Italy. The mountains of Italy are rising at a rapid rate in the framework of geologic time; at 1.6 mya they were lower and covered with grassland (as much of the highlands still are). The rest of Europe was mountainous and covered over with dense forest, a formidable terrain for warm-weather savanna dwellers. Similarly there is no evidence that the Mediterranean was passable at Gibraltar or anywhere else to "H. erectus" or earlier hominans. They might have reached Italy and Spain along the coasts. In northern Europe pebble tools are found earliest at Happisburgh, United Kingdom, from 0.8 mya. The last traces are from Kent's Cavern, dated 0.5 mya. By that time "H. erectus" is regarded as having been extinct; however, a more modern version apparently had evolved, "Homo heidelbergensis", who must have inherited the tools. This species also accounts for the last of the Acheulean in Germany at 0.4 mya. In the late 19th and early 20th centuries archaeologists worked on the assumption that a succession of Hominans and cultures prevailed, that one replaced another. Today the presence of multiple hominans living contemporaneously near each other for long periods is accepted as proved true; moreover, by the time the previously assumed "earliest" culture arrived in northern Europe, the rest of Africa and Eurasia had progressed to the Middle and Upper Palaeolithic, so that across the earth all three were for a time contemporaneous. In any given region there was, no doubt, a progression from Oldowan to Acheulean, Lower to Upper. The end of the Oldowan in Africa was brought on by the appearance of Acheulean, or Mode 2, stone tools. The earliest known instances are in the 1.7–1.6 mya layer at Kokiselei, West Turkana, Kenya. At Sterkfontein, South Africa, they are in Member 5 West, 1.7–1.4 mya.
The date of 1.7 mya is fairly certain and fairly standard. Mode 2 is often found in association with "H. erectus". It makes sense that the most advanced tools should have been innovated by the most advanced Hominan; consequently, "H. erectus" is typically given credit for the innovation. A Mode 2 tool is a biface consisting of two concave surfaces intersecting to form a cutting edge all the way around, except in the case of tools intended to feature a point. More work and planning go into the manufacture of a Mode 2 tool. The manufacturer hits a slab off a larger rock to use as a blank. Then large flakes are struck off the blank and worked into bifaces by hard-hammer percussion on an anvil stone. Finally the edge is retouched: small flakes are hit off with a bone or wood soft hammer to sharpen or resharpen it. The core can be either the blank or another flake. Blanks are ported for manufacturing supply in places where nature has provided no suitable stone. Although most Mode 2 tools are easily distinguished from Mode 1, there is a close similarity between some Oldowan and some Acheulean tools, which can lead to confusion. Some Oldowan tools are more carefully prepared to form a more regular edge. One distinguishing criterion is the size of the flakes. In contrast to the Oldowan "small flake" tradition, Acheulean is "large flake": "The primary technological distinction remaining between Oldowan and the Acheulean is the preference for large flakes (>10 cm) as blanks for making large cutting tools (handaxes and cleavers) in the Acheulean." "Large Cutting Tool (LCT)" has become part of the standard terminology as well. In North Africa, the presence of Mode 2 remains a mystery, as the oldest finds are from Thomas Quarry in Morocco at 0.9 mya. Archaeological attention, however, shifts to the Jordan Rift Valley, an extension of the East African Rift Valley (the east bank of the Jordan is slowly sliding northward as East Africa is thrust away from Africa). Evidence of use of the Nile Valley is in deficit, but Hominans could easily have reached the palaeo-Jordan river from Ethiopia along the shores of the Red Sea, one side or the other. A crossing would not have been necessary, but it is more likely there than over a theoretical but unproven land bridge through either Gibraltar or Sicily. Meanwhile, the Acheulean went on in Africa past the 1.0 mya mark and also past the extinction of "H. erectus" there. The last Acheulean in East Africa is at Olorgesailie, Kenya, dated to about 0.9 mya. Its owner was still "H. erectus", but in South Africa, Acheulean at Elandsfontein, 1.0–0.6 mya, is associated with Saldanha man, classified as "H. heidelbergensis", a more advanced, but not yet modern, descendant most likely of "H. erectus". The Thomas Quarry Hominans in Morocco similarly are most likely "Homo rhodesiensis", in the same evolutionary status as "H. heidelbergensis". Mode 2 is first known out of Africa at 'Ubeidiya, Israel, a site now on the Jordan River, then frequented over the long term (hundreds of thousands of years) by Homo on the shore of a variable-level palaeo-lake, long since vanished. The geology was created by successive "transgression and regression" of the lake, resulting in four cycles of layers. The tools are located in the first two, Cycles Li (Limnic Inferior) and Fi (Fluviatile Inferior), but mostly in Fi. The cycles represent different ecologies and therefore different cross-sections of fauna, which makes it possible to date them.
They appear to be the same faunal assemblages as the Ferenta Faunal Unit in Italy, known from excavations at Selvella and Pieterfitta, dated to 1.6–1.2 mya. At 'Ubeidiya the marks on the bones of the animal species found there indicate that the manufacturers of the tools butchered the kills of large predators, an activity that has been termed "scavenging". There are no living floors, nor did they process bones to obtain the marrow. These activities therefore cannot be understood as the only, or even the typical, economic activity of Hominans. Their interests were selective: they were primarily harvesting the meat of Cervids, which is estimated to have been available without spoiling for up to four days after the kill. The majority of the animals at the site were of "Palaearctic biogeographic origin", but these overlapped in range with animals of "African biogeographic origin", which made up 30–60%. The biome was Mediterranean, not savanna. The animals were not passing through; there was simply an overlap of normal ranges. Of the Hominans, "H. erectus" left several cranial fragments. Teeth of undetermined species may have been "H. ergaster". The tools are classified as "Lower Acheulean" and "Developed Oldowan". The latter is a disputed classification created by Mary Leakey to describe an Acheulean-like tradition in Bed II at Olduvai. It is dated 1.53–1.27 mya. The date of the tools therefore probably does not exceed 1.5 mya; 1.4 is often given as a date. This chronology, which is definitely later than that in Kenya, supports the "out of Africa" hypothesis for the Acheulean, if not for the Hominans. From Southwest Asia, as the Levant is now called, the Acheulean extended itself more slowly eastward, arriving at Isampur, India, about 1.2 mya. It does not appear in China and Korea until after 1 mya, and not at all in Indonesia. There is a discernible boundary marking the furthest extent of the Acheulean eastward before 1 mya, called the Movius Line after its proposer, Hallam L. Movius. On the east side of the line the small flake tradition continues, but the tools are additionally worked Mode 1, with flaking down the sides. At Athirampakkam near Chennai in Tamil Nadu, however, the Acheulean began at 1.51 mya, earlier than in North India and Europe. The cause of the Movius Line remains speculative, whether it represents a real change in technology or a limitation of archeology, but after 1 mya evidence not available to Movius indicates the prevalence of the Acheulean. For example, the Acheulean site at Bose, China, is dated to 0.803 ± 0.003 mya. The authors of this chronologically later East Asian Acheulean remain unknown, as does whether it evolved in the region or was brought in. There is no named boundary line between Mode 1 and Mode 2 on the west; nevertheless, Mode 2 is equally late in Europe as it is in the Far East. The earliest comes from a rock shelter at Estrecho de Quípar in Spain, dated to greater than 0.9 mya. Teeth from an undetermined Hominan were found there also. The last Mode 2 in Southern Europe is from a deposit at Fontana Ranuccio near Anagni in Italy dated to 0.45 mya, which is generally linked to "Homo cepranensis", a "late variant of "H. erectus"", a fragment of whose skull was found at Ceprano nearby, dated 0.46 mya. This period is best known as the era during which the Neanderthals lived in Europe and the Near East (c. 300,000–28,000 years ago).
Their technology is mainly the Mousterian, but Neanderthal physical characteristics have also been found in ambiguous association with the more recent Châtelperronian archeological culture in Western Europe and with several local industries like the Szeletian in Eastern Europe/Eurasia. There is no evidence for Neanderthals in Africa, Australia or the Americas. Neanderthals nursed their elderly and practised ritual burial, indicating an organised society. The earliest evidence (Mungo Man) of settlement in Australia dates to around 40,000 years ago, when modern humans likely crossed from Asia by island-hopping. Evidence for symbolic behavior such as body ornamentation and burial is ambiguous for the Middle Paleolithic and still subject to debate. The Bhimbetka rock shelters exhibit the earliest traces of human life in India, some of which are approximately 30,000 years old. Lasting from 50,000 to 10,000 years ago in Europe, the Upper Paleolithic ends with the end of the Pleistocene and the onset of the Holocene era (the end of the last ice age). Modern humans spread out further across the Earth during the period known as the Upper Paleolithic. The Upper Paleolithic is marked by a relatively rapid succession of often complex stone artifact technologies and a large increase in the creation of art and personal ornaments. In the period between 35 and 10 kya a succession of industries evolved: the Châtelperronian (c. 38–30 kya), the Aurignacian (40–28 kya), the Gravettian (28–22 kya), the Solutrean (22–17 kya), and the Magdalenian (18–10 kya). All of these industries except the Châtelperronian are associated with anatomically modern humans. Authorship of the Châtelperronian is still the subject of much debate. Most scholars date the arrival of humans in Australia at 40,000 to 50,000 years ago, with a possible range of up to 125,000 years ago. The earliest anatomically modern human remains found in Australia (and outside of Africa) are those of Mungo Man; they have been dated at 42,000 years old. The Americas were colonised via the Bering land bridge, which was exposed during this period by lower sea levels. These people are called the Paleo-Indians, and the earliest accepted dates are those of the Clovis culture sites, some 13,500 years ago. Globally, societies were hunter-gatherers, but evidence of regional identities begins to appear in the wide variety of stone tool types being developed to suit very different environments. The period from the end of the last ice age, 10,000 years ago, to around 6,000 years ago was characterized by rising sea levels and a need to adapt to a changing environment and find new food sources. The development of Mode 5 (microlith) tools began in response to these changes. They were derived from the previous Paleolithic tools, hence the term Epipaleolithic, or were intermediate between the Paleolithic and the Neolithic, hence the term Mesolithic (Middle Stone Age), used for parts of Eurasia, but not outside it. The choice of word depends on exact circumstances and the inclination of the archaeologists excavating the site. Microliths were used in the manufacture of more efficient composite tools, resulting in an intensification of hunting and fishing, and, with increasing social activity, the development of more complex settlements, such as Lepenski Vir. Domestication of the dog as a hunting companion probably dates to this period. The earliest known battle occurred during the Mesolithic period at a site in Egypt known as Cemetery 117. The Neolithic, or New Stone Age, was approximately characterized by the adoption of agriculture.
The shift from food gathering to food producing, in itself one of the most revolutionary changes in human history, was accompanied by the so-called Neolithic Revolution: the development of pottery, polished stone tools, and the construction of more complex, larger settlements such as Göbekli Tepe and Çatal Hüyük. Some of these features began in certain localities even earlier, in the transitional Mesolithic. The first Neolithic cultures started around 7000 BCE in the Fertile Crescent and spread concentrically to other areas of the world; however, the Near East was probably not the only nucleus of agriculture, the cultivation of maize in Meso-America and of rice in the Far East being others. Due to the increased need to harvest and process plants, ground stone and polished stone artifacts became much more widespread, including tools for grinding, cutting, and chopping. Skara Brae, located in Orkney off the coast of Scotland, is one of Europe's best examples of a Neolithic village. The community contains stone beds, shelves and even an indoor toilet linked to a stream. The first large-scale constructions were built, including settlement towers and walls, e.g. Jericho (Tell es-Sultan), and ceremonial sites, e.g. Stonehenge. The Ġgantija temples of Gozo in the Maltese archipelago are the oldest surviving free-standing structures in the world, erected c. 3600–2500 BCE. The earliest evidence of established trade exists in the Neolithic, with newly settled people importing exotic goods over distances of many hundreds of miles. These facts show that there were sufficient resources and co-operation to enable large groups to work on these projects. To what extent this was a basis for the development of elites and social hierarchies is a matter of ongoing debate. Although some late Neolithic societies formed complex stratified chiefdoms similar to Polynesian societies such as the Ancient Hawaiians, most Neolithic societies, judging by the societies of modern tribesmen at an equivalent technological level, were relatively simple and egalitarian. A comparison of art in the two ages leads some theorists to conclude that Neolithic cultures were noticeably more hierarchical than the Paleolithic cultures that preceded them. The Early Stone Age in Africa is not to be identified with "Old Stone Age", a translation of Paleolithic, or with Paleolithic, or with the "Earlier Stone Age" that originally meant what became the Paleolithic and Mesolithic. In the initial decades of its definition by the Pan-African Congress of Prehistory, it was parallel in Africa to the Upper and Middle Paleolithic. However, radiocarbon dating has since shown that the Middle Stone Age is in fact contemporaneous with the Middle Paleolithic. The Early Stone Age therefore is contemporaneous with the Lower Paleolithic and happens to include the same main technologies, Oldowan and Acheulean, which produced Mode 1 and Mode 2 stone tools respectively. A distinct regional term is warranted, however, by the location and chronology of the sites and the exact typology. The Middle Stone Age was a period of African prehistory between the Early Stone Age and the Late Stone Age. It began around 300,000 years ago and ended around 50,000 years ago. It is considered an equivalent of the European Middle Paleolithic. It is associated with anatomically modern or almost modern "Homo sapiens". Early physical evidence comes from Omo and Herto, both in Ethiopia, dated respectively at c. 195 ka and c. 160 ka.
The Later Stone Age (LSA, sometimes also called the Late Stone Age) refers to a period in African prehistory. Its beginnings are roughly contemporaneous with the European Upper Paleolithic. It lasts until historical times and includes cultures corresponding to the Mesolithic and Neolithic in other regions. Stone tools were made from a variety of stones. For example, flint and chert were shaped (or "chipped") for use as cutting tools and weapons, while basalt and sandstone were used for ground stone tools, such as quern-stones. Wood, bone, shell, antler (deer) and other materials were widely used as well. During the most recent part of the period, sediments (such as clay) were used to make pottery. Agriculture was developed and certain animals were domesticated as well. Some species of non-primates are able to use stone tools, such as the sea otter, which breaks abalone shells with them. Primates can both use and manufacture stone tools. This combination of abilities is more marked in apes and men, but only men, or more generally Hominans, depend on tool use for survival. The key anatomical and behavioral features required for tool manufacture, which are possessed only by Hominans, are the larger thumb and the ability to hold by means of an assortment of grips. Food sources of the Palaeolithic hunter-gatherers were wild plants and animals harvested from the environment. They favored animal organ meats, including the livers, kidneys and brains. Large-seeded legumes were part of the human diet long before the agricultural revolution, as is evident from archaeobotanical finds from the Mousterian layers of Kebara Cave, in Israel. Moreover, recent evidence indicates that humans processed and consumed wild cereal grains as far back as 23,000 years ago, in the Upper Paleolithic. Near the end of the Wisconsin glaciation, 15,000 to 9,000 years ago, a mass extinction of megafauna such as the woolly mammoth occurred in Asia, Europe, North America and Australia. This was the first Holocene extinction event. It possibly forced modification in the dietary habits of the humans of that age, and with the emergence of agricultural practices, plant-based foods also became a regular part of the diet. A number of factors have been suggested for the extinction: certainly over-hunting, but also deforestation and climate change. The net effect was to fragment the vast ranges required by the large animals and extinguish them piecemeal in each fragment. Around 2 million years ago, "Homo habilis" is believed to have constructed the first man-made structure in East Africa, consisting of simple arrangements of stones to hold branches of trees in position. A similar circular stone arrangement believed to be around 380,000 years old was discovered at Terra Amata, near Nice, France (concerns about the dating have been raised; see Terra Amata). Several human habitats dating back to the Stone Age have been discovered around the globe. Prehistoric art is visible in the artifacts. Prehistoric music is inferred from found instruments, while parietal art can be found on rocks of any kind. The latter are petroglyphs and rock paintings. The art may or may not have had a religious function. Petroglyphs appeared in the Neolithic. A petroglyph is an intaglio abstract or symbolic image engraved on natural stone by various methods, usually by prehistoric peoples. They were a dominant form of pre-writing symbols.
Petroglyphs have been discovered in different parts of the world, including Australia (Sydney rock engravings), Asia (Bhimbetka, India), North America (Death Valley National Park), South America (Cumbe Mayo, Peru), and Europe (Finnmark, Norway). In Paleolithic times, mostly animals were painted, in theory ones that were used as food or represented strength, such as the rhinoceros or large cats (as in the Chauvet Cave). Signs such as dots were sometimes drawn. Rare human representations include handprints and half-human/half-animal figures. The Cave of Chauvet in the Ardèche "département", France, contains the most important cave paintings of the Paleolithic era, dating from about 36,000 BCE. The Altamira cave paintings in Spain were done 14,000 to 12,000 BCE and show, among other animals, bison. The Hall of Bulls in Lascaux, Dordogne, France, dates from about 15,000 to 10,000 BCE. The meaning of many of these paintings remains unknown. They may have been used for seasonal rituals. The animals are accompanied by signs that suggest a possible magic use. Arrow-like symbols in Lascaux are sometimes interpreted as evidence of calendar or almanac use, but the interpretation remains speculative. Some scenes of the Mesolithic, however, can be typed and therefore, judging from their various modifications, are fairly clear. One of these is the battle scene between organized bands of archers. For example, "the marching Warriors", a rock painting at Cingle de la Mola, Castellón in Spain, dated to about 7,000–4,000 BCE, depicts about 50 bowmen in two groups marching or running in step toward each other, each man carrying a bow in one hand and a fistful of arrows in the other. A file of five men leads one band, one of whom is a figure with a "high crowned hat". In other scenes elsewhere, the men wear head-dresses and knee ornaments but otherwise fight nude. Some scenes depict the dead and wounded, bristling with arrows. One is reminded of Ötzi the Iceman, a Copper Age mummy revealed by an Alpine melting glacier, who collapsed from loss of blood due to an arrow wound in the back. Modern studies and the in-depth analysis of finds dating from the Stone Age indicate certain rituals and beliefs of the people in those prehistoric times. It is now believed that the activities of Stone Age humans went beyond the immediate requirements of procuring food, body coverings, and shelter. Specific rites relating to death and burial were practiced, though they certainly differed in style and execution between cultures. The image of the caveman is commonly associated with the Stone Age. For example, a 2003 documentary series showing the evolution of humans through the Stone Age was called "Walking with Cavemen", but only the last programme showed humans living in caves. While the idea that human beings and dinosaurs coexisted is sometimes portrayed in popular culture in cartoons, films and computer games, such as "The Flintstones", "One Million Years B.C." and "Chuck Rock", the notion of hominids and non-avian dinosaurs co-existing is not supported by any scientific evidence. Other depictions of the Stone Age include the best-selling "Earth's Children" series of books by Jean M. Auel, which are set in the Paleolithic and are loosely based on archaeological and anthropological findings. The 1981 film "Quest for Fire" by Jean-Jacques Annaud tells the story of a group of early "Homo sapiens" searching for their lost fire.
A 21st-century series, "Chronicles of Ancient Darkness" by Michelle Paver, tells of two New Stone Age children fighting to fulfil a prophecy and save their clan.
https://en.wikipedia.org/wiki?curid=29219
Sam Loyd Samuel Loyd (January 30, 1841 – April 10, 1911), born in Philadelphia and raised in New York City, was an American chess player, chess composer, puzzle author, and recreational mathematician. As a chess composer, he authored a number of chess problems, often with interesting themes. At his peak, Loyd was one of the best chess players in the US, and was ranked 15th in the world, according to chessmetrics.com. He played in the strong Paris 1867 chess tournament (won by Ignatz von Kolisch) with little success, placing near the bottom of the field. Following his death, his book "Cyclopedia of 5000 Puzzles" was published in 1914 by his son. His son, named after his father, dropped the "Jr" from his name and started publishing reprints of his father's puzzles. Loyd (senior) was inducted into the US Chess Hall of Fame in 1987. Loyd is widely acknowledged as one of America's great puzzle-writers and popularizers, often mentioned as "the" greatest. Martin Gardner featured Loyd in his August 1957 Mathematical Games column in Scientific American and called him "America's greatest puzzler". In 1898 "The Strand" dubbed him "the prince of puzzlers". As a chess problemist, his composing style is distinguished by wit and humour. However, he is also known for lies and self-promotion, and has been criticized on these grounds—Martin Gardner's assessment continues "but also obviously a hustler". Canadian puzzler Mel Stover called Loyd "an old reprobate", and Matthew Costello called him "puzzledom's greatest celebrity ... popularizer, genius", but also a "huckster" and "fast-talking snake oil salesman". He collaborated with puzzler Henry Dudeney for a while, but Dudeney broke off the correspondence and accused Loyd of stealing his puzzles and publishing them under his own name. Dudeney despised Loyd so intensely he equated him with the Devil. Loyd claimed from 1891 until his death in 1911 that he invented the 15 puzzle, for example writing in the "Cyclopedia of Puzzles" (published 1914), p. 235: "The older inhabitants of Puzzleland will remember how in the early seventies I drove the entire world crazy over a little box of movable pieces which became known as the '14–15 Puzzle'." This is false, as Loyd had nothing to do with the invention or popularity of the puzzle, and the craze was in the early 1880s, not the early 1870s. The craze had ended by July 1880 and Loyd's first article on the subject was not published until 1896. Loyd first claimed in 1891 that he had invented the puzzle, and continued to do so until his death. The actual inventor was Noyes Chapman, who applied for a patent in March 1880. An enthusiast of Tangram puzzles, Loyd popularized them with "The Eighth Book Of Tan", a book of seven hundred unique Tangram designs and a fanciful history of the origin of the Tangram, claiming that the puzzle was invented 4,000 years ago by a god named Tan. This was presented as true and has been described as "Sam Loyd's Most Successful Hoax". One of his best known chess problems is the following, called "Excelsior" by Loyd after the poem by Henry Wadsworth Longfellow. White is to move and checkmate Black in five moves against any defense. Loyd bet a friend that he could not pick a piece that "didn't" give mate in the main line, and when it was published in 1861 it was with the stipulation that White mates with "the least likely piece or pawn". The following is one of Loyd's most famous chess problems.
He wrote of this problem: "The originality of the problem is due to the White King being placed in absolute safety, and yet coming out on a reckless career, with no immediate threat and in the face of innumerable checks". This problem was originally published in 1859. The story involves an incident during the siege of Charles XII of Sweden by the Turks at Bender in 1713. "Charles beguiled this period by means of drills and chess, and used frequently to play with his minister, Christian Albert Grothusen, some of the contests being mentioned by Voltaire. One day while so engaged, the game had advanced to this stage, and Charles (White) had just announced mate in three." Scarcely had he uttered the words, when a Turkish bullet, shattering the window, dashed the White knight off the board in fragments. Grothusen started violently, but Charles, with utmost coolness, begged him to put back the other knight and work out the mate, observing that it was pretty enough. But another glance at the board made Charles smile: "We do not need the knight. I can give it to you and still mate in four!" Who would believe it, he had scarcely spoken when another bullet flew across the room, and the pawn at h2 shared the fate of the knight. Grothusen turned pale. "You have our good friends the Turks with you," said the king, unconcerned; "it can scarcely be expected that I should contend against such odds; but let me see if I can dispense with that unlucky pawn. I have it!" he shouted with a tremendous laugh, "I have great pleasure in informing you that there is undoubtedly a mate in five." In 1900, Friedrich Amelung pointed out that in the original position, if the first bullet had struck the rook instead of the knight, Charles would still have had a mate in six. In 2003, ChessBase posted a fifth variation, attributed to Brian Stewart: after the first bullet took out the knight, if the second had removed the g-pawn rather than the h-pawn, Charles would have been able to mate in ten. One of Loyd's notable puzzles was the "Trick Donkeys". It was based on a similar puzzle involving dogs published in 1857. In the problem, the solver must cut the drawing along the dotted lines and rearrange the three pieces so that the riders appear to be riding the donkeys. Another of Sam Loyd's most famous puzzles, the maze-like "Back from the Klondike", was first printed in the "New York Journal and Advertiser" on April 24, 1898 (as far as available evidence indicates). Loyd's original instructions were to: Start from that heart in the center and go three steps in a straight line in any one of the eight directions, north, south, east or west, or on the bias, as the ladies say, northeast, northwest, southeast or southwest. When you have gone three steps in a straight line, you will reach a square with a number on it, which indicates the second day's journey, as many steps as it tells, in a straight line in any of the eight directions. From this new point when reached, march on again according to the number indicated, and continue on, following the requirements of the numbers reached, until you come upon a square with a number which will carry you just one step beyond the border, when you are supposed to be out of the woods and can holler all you want, as you will have solved the puzzle.
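The mechanics Loyd describes amount to a small graph-search problem, so puzzles of this kind can be solved mechanically. The following is a minimal breadth-first-search sketch in Python; the grid and its numbers are invented purely for illustration (the actual 1898 layout is not reproduced here), and the win test assumes a move counts as an escape when it lands exactly one square beyond the border, per Loyd's instructions.

from collections import deque

# Breadth-first search over a Loyd-style "step maze" (hypothetical grid).
# grid[r][c] is the number printed on a square: it fixes the length of
# the next move, in any of the eight compass directions.
DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]

def solve(grid, start):
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        k = grid[r][c]                        # steps for this move
        for dr, dc in DIRS:
            nr, nc = r + dr * k, c + dc * k
            pr, pc = r + dr * (k - 1), c + dc * (k - 1)
            inside = 0 <= nr < rows and 0 <= nc < cols
            prev_inside = 0 <= pr < rows and 0 <= pc < cols
            if prev_inside and not inside:
                return path                   # escaped one step past the border
            if inside and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None                               # no escape exists

# A made-up 5x5 grid; the centre square plays the role of Loyd's heart.
demo = [[2, 2, 1, 2, 2],
        [2, 1, 2, 1, 2],
        [1, 2, 2, 2, 1],
        [2, 1, 2, 1, 2],
        [2, 2, 1, 2, 2]]
print(solve(demo, (2, 2)))                    # prints [(2, 2), (0, 2)]

The returned path lists the squares stood on; the escape itself is the final move off the board.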
https://en.wikipedia.org/wiki?curid=29222
Shiba Inu The Shiba Inu is a Japanese breed of hunting dog. A small-to-medium breed, it is the smallest of the six original and distinct spitz breeds of dog native to Japan. A small, alert and agile dog that copes very well with mountainous terrain and hiking trails, the Shiba Inu was originally bred for hunting. It looks similar to and is often mistaken for other Japanese dog breeds like the Akita Inu or Hokkaido, but the Shiba Inu is a different breed with a distinct blood line, temperament, and smaller size than other Japanese dog breeds. The Shiba's frame is compact with well-developed muscles. Males stand taller at the withers than females; the preferred size is the middle of the range for each sex, and the average weight at the preferred size is correspondingly greater for males than for females. Bones are moderate. The Shiba is double coated, with the outer coat being stiff and straight and the undercoat soft and thick. Fur is short and even on the fox-like face, ears, and legs. Guard hairs stand off the body and are at their longest at the withers. The purpose of the guard hairs is to protect the underlying skin and to repel rain or snow. Tail hair is slightly longer and stands open in a brush. Their tails are a defining characteristic and make them stand apart from other dog breeds. Their tails help to protect them from the harsh winter weather: when they sleep, Shiba Inus curl up and use their tails to shield the face and nose in order to protect these sensitive areas from the cold. Shibas may be red, orange, yellow, black and tan, or sesame (red with black-tipped hairs), with a cream, buff, or grey undercoat. They may also be white (cream), though this color is considered a "major fault" by the American Kennel Club and should never be intentionally bred in a show dog, as the required markings known as "urajiro" are not visible; "urajiro" literally translates to "underside white". Conversely, a white (cream) coat is perfectly acceptable according to the British Kennel Club breed standard. The "urajiro" (cream to white ventral color) is required in the following areas on all coat colors: on the sides of the muzzle, on the cheeks, inside the ears, on the underjaw and upper throat, on the inside of the legs, on the abdomen, around the vent and on the ventral side of the tail. On reds: commonly on the throat, forechest, and chest. On blacks and sesames: commonly as a triangular mark on both sides of the forechest. Shibas tend to exhibit an independent nature. From the Japanese breed standard: The dog has a spirited boldness and is fiercely proud with a good nature and a feeling of artlessness. The Shiba is able to move quickly with nimble, elastic steps. The Japanese terms for these three traits have subtle interpretations that have been the subject of much commentary. The Shiba is a relatively fastidious breed and feels the need to maintain itself in a clean state. They can often be seen licking their paws and legs, much as cats do. They generally go out of their way to keep their coats clean. Because of their fastidious and proud nature, Shiba puppies are easy to housebreak and in many cases will housebreak themselves. Having their owner simply place them outside after meal times and naps is generally enough to teach the Shiba the appropriate method of toileting. A distinguishing characteristic of the breed is the so-called "shiba scream". When sufficiently provoked or unhappy, the dog will produce a loud, high-pitched scream. This can occur when attempting to handle the dog in a way that it deems unacceptable.
The animal may also emit a very similar sound during periods of great joy, such as the return of the owner after an extended absence, or the arrival of a favored human guest. The Shiba Inu has been identified as a basal breed that predates the emergence of the modern breeds in the 19th century. Originally, the Shiba Inu was bred to hunt and flush small game, such as birds and rabbits. Shibas lived in the mountainous areas of the Chūbu region. During the Meiji Restoration, western dog breeds were imported and crosses between these and native Japanese breeds became popular. Between 1912 and 1926, almost no pure Shibas remained. From around 1928, hunters and intellectuals began to show interest in the protection of the remaining pure Shibas; however, despite efforts to preserve the breed, the Shiba nearly became extinct during World War II due to a combination of food shortages and a post-war distemper epidemic. All subsequent dogs were bred from the only three surviving bloodlines: the Shinshu Shiba from Nagano Prefecture, the Mino Shiba from the former Mino Province in the south of present-day Gifu Prefecture, and the San'in Shiba from Tottori and Shimane Prefectures. The Shinshu Shibas possessed a solid undercoat, with a dense layer of guard hairs, and were small and red in color. The Mino Shibas tended to have thick, prick ears, and possessed a sickle tail, rather than the common curled tail found on most modern Shibas. The San'in Shibas were larger than most modern Shibas, and tended to be black, without the common tan and white accents found on modern black-and-tan Shibas. When the study of Japanese dogs was formalized in the early and mid-20th century, these three strains were combined into one overall breed, the Shiba Inu. The first Japanese breed standard for the Shiba, the Nippo Standard, was published in 1934. In December 1936, the Shiba Inu was recognized as a Natural Monument of Japan through the Cultural Properties Act, largely due to the efforts of Nippo (Nihon Ken Hozonkai), the Association for the Preservation of the Japanese Dog. In 1954, an armed service family brought the first Shiba Inu to the United States. In 1979, the first recorded litter was born in the United States. The Shiba was recognized by the American Kennel Club in 1992 and added to the AKC Non-Sporting Group in 1993. It is now primarily kept as a pet both in Japan and abroad. According to the American Kennel Club, the Shiba Inu is the number one companion dog in Japan. In the United States, the growing popularity of the Shiba Inu is evident in the American Kennel Club registration statistics, which ranked the breed in 44th place in 2016, a rise from 50th place in 2012. Overall, the Shiba Inu is a healthy dog breed. Health conditions known to affect this breed are allergies, glaucoma, cataracts, hip dysplasia, entropion, and luxating patella. Periodic joint examinations are recommended throughout the dog's life. Eye tests should be performed yearly, as eye problems can develop over time. By two years of age, Shiba Inus may be considered fully free from joint problems if none have been discovered by then, since at this age the skeleton is fully developed. As with most dog breeds, Shibas should be walked or otherwise exercised daily; regular exercise, especially daily walks, helps this breed live a long and healthy life. Their average life expectancy is from 12 to 15 years. The oldest known Shiba, Pusuke, died at age 26 in early December 2011.
Pusuke was the oldest dog alive at the time and lived three years less than the world record for the longest-living dog. These dogs are very clean, so grooming needs will likely be minimal. They naturally tend to hate being wet or bathed, so it is very important to accustom them to bathing when they are young. A Shiba Inu's coat is coarse and short to medium in length, with the outer coat longer than the undercoat, and is naturally waterproof, so there is little need for regular bathing. They also have a thick undercoat that can protect them from temperatures well below freezing. However, shedding, also known as blowing coat, can be a nuisance. Shedding is heaviest during the seasonal change and particularly during the summer season, but daily brushing can temper this problem. It is recommended that owners never shave or cut the coat of a Shiba Inu, as the coat is needed to protect them from both cold and hot temperatures.
https://en.wikipedia.org/wiki?curid=29228
Slot machine A slot machine (American English), known variously as a fruit machine (British English, except Scotland), puggy (Scottish English), the slots (Canadian and American English), poker machine/pokies (Australian English and New Zealand English), or simply slot (British English and American English), is a casino gambling machine that creates a game of chance for its customers. Slot machines are also known pejoratively as one-armed bandits due to the large mechanical levers affixed to the sides of early mechanical machines and their ability to empty players' pockets and wallets as thieves would. Its standard layout features a screen displaying three or more reels that "spin" when the game is activated. Some modern slot machines still include a lever as a skeuomorphic design trait to trigger play. However, the mechanics of early machines have since been superseded by random number generators—most are now operated using push-buttons and touchscreens. Slot machines include one or more currency detectors that validate the form of payment, whether coin, cash, voucher, or token. The machine pays off according to the pattern of symbols displayed when the reels stop "spinning". Slot machines are the most popular gambling method in casinos and constitute about 70 percent of the average U.S. casino's income. Digital technology has resulted in variations on the original slot machine concept. Since the player is essentially playing a video game, manufacturers are able to offer more interactive elements such as advanced bonus rounds and more varied video graphics. The "slot machine" term derives from the slots on the machine for inserting and retrieving coins. "Fruit machine" comes from the traditional fruit images on the spinning reels such as lemons and cherries. Sittman and Pitt of Brooklyn, New York, developed a gambling machine in 1891, which was a precursor to the modern slot machine. It contained five drums holding a total of 50 card faces based on poker. This machine proved extremely popular and soon many bars in the city had one or more of them. Players would insert a nickel and pull a lever, which would spin the drums and the cards they held, the player hoping for a good poker hand. There was no direct payout mechanism, so a pair of kings might get the player a free beer, whereas a royal flush could pay out cigars or drinks, the prizes wholly dependent on what was on offer at the local establishment. To make the odds better for the house, two cards were typically removed from the deck: the ten of spades and the jack of hearts, which doubles the odds against winning a royal flush. The drums could also be rearranged to further reduce a player's chance of winning. Due to the vast number of possible wins with the original poker card game, it proved practically impossible to come up with a way to make a machine capable of making an automatic payout for all possible winning combinations. Somewhere between 1887 and 1895, Charles Fey of San Francisco, California, devised a much simpler automatic mechanism with three spinning reels containing a total of five symbols: horseshoes, diamonds, spades, hearts, and a Liberty Bell. The bell gave the machine its name. By replacing ten cards with five symbols and using three reels instead of five drums, the complexity of reading a win was considerably reduced, allowing Fey to devise an effective automatic payout mechanism. Three bells in a row produced the biggest payoff, ten nickels (50¢). 
"Liberty Bell" was a huge success and spawned a thriving mechanical gaming device industry. Even when, after a few years, the use of these gambling devices was banned in his home state, Fey still could not keep up with demand for them elsewhere. The Liberty Bell machine was so popular that it was copied by many slot machine manufacturers. The first of these was a machine, also called the "Liberty Bell", produced by the manufacturer Herbert Mills in 1907. By 1908 lots of "bell" machines were installed in most cigar stores, saloons, bowling alleys, brothels and barber shops. Early machines, including an 1899 "Liberty Bell", are now part of the Nevada State Museum's Fey Collection. The first Liberty Bell machines produced by Mills used the same symbols on the reels as Charles Fey's original. Soon afterwards, another version was produced with patriotic symbols such as a flag and a wreath on the wheels. Later, a similar machine, rechristened the Operator's Bell, was designed, for which an optional gum vending attachment was available. As the gum offered was fruit-flavored, fruit symbols were placed on the reels: lemons, cherries, oranges, and plums. A bell was retained, and a picture of a stick of Bell-Fruit Gum, the origin of the bar symbol, was also present. This set of symbols proved highly popular, so was used by the other companies that began to make their own slot machines: Caille, Watling, Jennings and Pace. The payment of food prizes was a commonly used technique to avoid laws against gambling in a number of states. For this reason, a number of gumball and other vending machines were regarded with mistrust by the courts. The two Iowa cases of "State v. Ellis" and "State v. Striggles" are both used in classes on criminal law to illustrate the concept of reliance upon authority as it relates to the axiomatic "ignorantia juris non excusat" ("ignorance of the law is no excuse"). In these cases, a mint vending machine was declared to be a gambling device because the machine would, by internally manufactured chance, occasionally give the next user a number of tokens exchangeable for more candy. Despite the display of the result of the next use on the machine, the courts ruled that "[t]he machine appealed to the player's propensity to gamble, and that is [a] vice." In 1963, Bally developed the first fully electromechanical slot machine, called "Money Honey" (although earlier machines such as the "High Hand" draw poker machine by Bally had exhibited the basics of electromechanical construction as early as 1940). The electromechanical approach of the 1960s allowed "Money Honey" to be the first slot machine with a bottomless hopper and automatic payout of up to 500 coins without the help of an attendant. The popularity of this machine led to the increasing predominance of electronic games, with the side lever soon becoming vestigial. The first video slot machine was developed in 1976 in Kearny Mesa, California, by the Las Vegas–based Fortune Coin Co. This slot machine used a modified Sony Trinitron color receiver for the display and logic boards for all slot machine functions. The prototype was mounted in a full-size show-ready slot machine cabinet. The first production units went on trial in the Las Vegas Hilton Hotel. After some "cheat-proofing" modifications, the video slot machine was approved by the Nevada State Gaming Commission and eventually found popularity in the Las Vegas Strip and downtown casinos. Fortune Coin Co. 
and their video slot machine technology were purchased by IGT (International Game Technology) in 1978. The first American video slot machine to offer a "second screen" bonus round was "Reel 'Em In", developed by WMS Industries in 1996. This type of machine had appeared in Australia from at least 1994 with the "Three Bags Full" game. In this type of machine, the display changes to provide a different game in which an additional payout may be won or accumulated. A person playing a slot machine can insert cash or, in "ticket-in, ticket-out" machines, a paper ticket with a barcode, into a designated slot on the machine. The machine is then activated by means of a lever or button (either physical or on a touchscreen), which activates reels that spin and stop to reveal one or several symbols. Most games have a variety of winning combinations of symbols. If a player matches a combination according to the rules of the game, the slot machine credits the player. Symbols can vary depending on the machine, but often include objects such as fruits, bells, and stylized lucky sevens. Most slot games have a theme, such as a specific aesthetic, location, or character. Symbols and other bonus features of the game are typically aligned with the theme. Some themes are licensed from popular media franchises, including films, television series (including game shows such as "Wheel of Fortune"), entertainers, and musicians. Multi-line slot machines have become more popular since the 1990s. These machines have more than one payline, meaning that visible symbols that are not aligned on the main horizontal may be considered for winning combinations. Traditional 3-reel slot machines commonly have three or five paylines, while video slot machines may have 9, 15, 25, or as many as 1024 different paylines. Most accept variable numbers of credits to play, with 1 to 15 credits per line being typical. The higher the amount bet, the higher the payout will be if the player wins. One of the main differences between video slot machines and reel machines is in the way payouts are calculated. With reel machines, the only way to win the maximum jackpot is to play the maximum number of coins (usually 3, sometimes 4 or even 5 coins per spin). With video machines, the fixed payout values are multiplied by the number of coins per line that is being bet. In other words: on a reel machine, the odds are more favorable if the gambler plays with the maximum number of coins available. However, depending on the structure of the game and its bonus features, some video slots may still include features that improve chances at payouts by making increased wagers. "Multi-way" games eschew fixed paylines in favor of allowing symbols to pay anywhere, as long as there is at least one in at least 3 consecutive reels from left to right. Multi-way games may be configured to allow players to bet by-reel: for example, on a game with a 3x5 pattern (often referred to as a 243-way game, since 3 to the power of 5 is 243), playing one reel allows all three symbols in the first reel to potentially pay, but only the center row pays on the remaining reels (often designated by darkening the unused portions of the reels). Other multi-way games use a 4x5 or 5x5 pattern, where there are up to 5 symbols in each reel, allowing for up to 1,024 and 3,125 ways to win respectively (4 and 5 to the power of 5). The Australian manufacturer Aristocrat Leisure brands games featuring this system as "Reel Power", "Xtra Reel Power" and "Super Reel Power" respectively. A variation involves patterns where symbols pay adjacent to one another.
Most of these games have a hexagonal reel formation, and much like multi-way games, any patterns not played are darkened out of use. Denominations can range from 1 cent ("penny slots") all the way up to $100.00 or more per credit. The latter are typically known as "high limit" machines, and machines configured to allow for such wagers are often located in dedicated areas (which may have a separate team of attendants to cater to the needs of those who play there). The machine automatically calculates the number of credits the player receives in exchange for the cash inserted. Newer machines often allow players to choose from a selection of denominations on a splash screen or menu. A bonus is a special feature of the particular game theme, which is activated when certain symbols appear in a winning combination. Bonuses and the number of bonus features vary depending upon the game. Some bonus rounds are a special session of free spins (the number of which is often based on the winning combination that triggers the bonus), often with a different or modified set of winning combinations from the main game and/or other multipliers or increased frequencies of symbols, or a "hold and re-spin" mechanic in which specific symbols (usually marked with values of credits or other prizes) are collected and locked in place over a finite number of spins. In other bonus rounds, the player is presented with several items on a screen from which to choose. As the player chooses items, a number of credits is revealed and awarded. Some bonuses use a mechanical device, such as a spinning wheel, that works in conjunction with the bonus to display the amount won. A candle is a light on top of the slot machine. It flashes to alert the operator that change is needed, that a hand pay is requested, or that there is a potential problem with the machine. It can be lit by the player by pressing the "service" or "help" button. Carousel refers to a grouping of slot machines, usually in a circle or oval formation. A coin hopper is a container where the coins that are immediately available for payouts are held. The hopper is a mechanical device that rotates coins into the coin tray when a player collects credits/coins (by pressing a "Cash Out" button). When a certain preset coin capacity is reached, a coin diverter automatically redirects, or "drops", excess coins into a "drop bucket" or "drop box". (Unused coin hoppers can still be found even on games that exclusively employ Ticket-In, Ticket-Out technology, as a vestige.) The credit meter is a display of the amount of money or number of credits on the machine. On mechanical slot machines, this is usually a seven-segment display, but video slot machines typically use stylized text that suits the game's theme and user interface. The drop bucket or drop box is a container located in a slot machine's base where excess coins are diverted from the hopper. Typically, a drop bucket is used for low-denomination slot machines and a drop box is used for high-denomination slot machines. A drop box contains a hinged lid with one or more locks whereas a drop bucket does not contain a lid. The contents of drop buckets and drop boxes are collected and counted by the casino on a scheduled basis. EGM is short for "Electronic Gaming Machine". Free spins are a common form of bonus, where a series of spins are automatically played at no charge at the player's current wager.
Free spins are usually triggered via a scatter of at least three designated symbols (with the number of spins dependent on the number of symbols that land). Some games allow the free spins bonus to "retrigger", which adds additional spins on top of those already awarded. There is no theoretical limit to the number of free spins obtainable. Some games may have other features that can also trigger over the course of free spins. A hand pay refers to a payout made by an attendant or at an exchange point ("cage"), rather than by the slot machine itself. A hand pay occurs when the amount of the payout exceeds the maximum amount that was preset by the slot machine's operator. Usually, the maximum amount is set at the level where the operator must begin to deduct taxes. A hand pay could also be necessary as a result of a short pay. A hopper fill slip is a document used to record the replenishment of the coin in the coin hopper after it becomes depleted as a result of making payouts to players. The slip indicates the amount of coin placed into the hoppers, as well as the signatures of the employees involved in the transaction, the slot machine number, the location and the date. A MEAL book (machine entry authorization log) is a log of the employees' entries into the machine. Low-level or slant-top slot machines include a stool so the player may sit down. Stand-up or upright slot machines are played while standing. Optimal play is a payback percentage based on a gambler using the optimal strategy in a skill-based slot machine game. A payline is a line that crosses through one symbol on each reel, along which a winning combination is evaluated. Classic spinning reel machines usually have up to nine paylines, while video slot machines may have as many as one hundred. Paylines can be of various shapes (horizontal, vertical, oblique, triangular, zigzag, etc.). Persistent state refers to passive features on some slot machines, some of which are able to trigger bonus payouts or other special features if certain conditions are met over time by players on that machine. Roll-up is the process of dramatizing a win by playing sounds while the meters count up to the amount that has been won. Short pay refers to a partial payout made by a slot machine, which is less than the amount due to the player. This occurs if the coin hopper has been depleted as a result of making earlier payouts to players. The remaining amount due to the player is either paid as a hand pay or an attendant will come and refill the machine. A scatter is a pay combination based on occurrences of a designated symbol landing anywhere on the reels, rather than falling in sequence on the same payline. A scatter pay usually requires a minimum of three symbols to land, and the machine may offer increased prizes or jackpots depending on the number that land. Scatters are frequently used to trigger bonus games, such as free spins (with the number of spins multiplying based on the number of scatter symbols that land). The scatter symbol usually cannot be matched using wilds, and some games may require the scatter symbols to appear on consecutive reels in order to pay. On some multiway games, scatter symbols still pay in unused areas. Taste is a reference to the small amount often paid out to keep a player seated and continuously betting. Only rarely will machines fail to pay even the minimum out over the course of several pulls.
Tilt is a term derived from electromechanical slot machines' "tilt switches", which would make or break a circuit when the machine was tilted or otherwise tampered with, triggering an alarm. While modern machines no longer have tilt switches, any kind of technical fault (door switch in the wrong state, reel motor failure, out of paper, etc.) is still called a "tilt". A theoretical hold worksheet is a document provided by the manufacturer for every slot machine that indicates the theoretical percentage the machine should hold based on the amount paid in. The worksheet also indicates the reel strip settings, the number of coins that may be played, the payout schedule, the number of reels and other information descriptive of the particular type of slot machine. Volatility or variance refers to the measure of risk associated with playing a slot machine. A low-volatility slot machine has regular but smaller wins, while a high-variance slot machine has fewer but bigger wins. Weight count is an American term referring to the total value of coins or tokens removed from a slot machine's drop bucket or drop box for counting by the casino's hard count team through the use of a weigh scale. Wild symbols substitute for most other symbols in the game (similarly to a joker card), usually excluding scatter and jackpot symbols (or offering a lower prize on non-natural combinations that include wilds). How wilds behave is dependent on the specific game and on whether the player is in a bonus or free games mode. Sometimes wild symbols may only appear on certain reels, or have a chance to "stack" across the entire reel. Each machine has a table that lists the number of credits the player will receive if the symbols listed on the pay table line up on the pay line of the machine. Some symbols are wild and can represent many, or all, of the other symbols to complete a winning line. Especially on older machines, the pay table is listed on the face of the machine, usually above and below the area containing the wheels. On video slot machines, it is usually contained within a help menu, along with information on other features. Historically, all slot machines used revolving mechanical reels to display and determine results. Although the original slot machine used five reels, simpler, and therefore more reliable, three-reel machines quickly became the standard. A problem with three-reel machines is that the number of combinations is only cubic – the original slot machine with three physical reels and 10 symbols on each reel had only 10³ = 1,000 possible combinations. This limited the manufacturer's ability to offer large jackpots, since even the rarest event had a likelihood of 0.1%. The maximum theoretical payout, assuming a 100% return to player, would be 1,000 times the bet, but that would leave no room for other pays, making the machine very high risk, and also quite boring. Although the number of symbols eventually increased to about 22, allowing 10,648 combinations, this still limited jackpot sizes as well as the number of possible outcomes. In the 1980s, however, slot machine manufacturers incorporated electronics into their products and programmed them to weight particular symbols. Thus the odds of losing symbols appearing on the payline became disproportionate to their actual frequency on the physical reel. A symbol would only appear once on the reel displayed to the player, but could, in fact, occupy several stops on the machine's internal virtual reel.
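As an illustration of this weighting idea, the short Python sketch below shows how a virtual reel makes the displayed symbols uninformative about the true odds. The stop counts are invented for illustration and do not correspond to any real machine's reel strip.

import random

# Hypothetical virtual reel: each symbol appears once on the physical
# reel the player sees, but occupies a different number of the 128
# virtual stops the RNG actually selects from.
VIRTUAL_STOPS = {"JACKPOT": 2, "BAR": 14, "CHERRY": 24, "BLANK": 88}

STRIP = [sym for sym, n in VIRTUAL_STOPS.items() for _ in range(n)]

def spin(reels=3):
    return [random.choice(STRIP) for _ in range(reels)]

# With 2 jackpot stops out of 128 per reel, three jackpots line up once
# in (128 / 2) ** 3 = 262,144 spins on average, even though the jackpot
# symbol is one of only four faces the player ever sees.
print(spin())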
In 1984 Inge Telnaes received a patent for a device titled "Electronic Gaming Device Utilizing a Random Number Generator for Selecting the Reel Stop Positions" (US Patent 4448419), which states: "It is important to make a machine that is perceived to present greater chances of payoff than it actually has within the legal limitations that games of chance must operate." The patent was later bought by International Game Technology and has since expired. A virtual reel that has 256 virtual stops per reel would allow up to 256³ = 16,777,216 final positions. The manufacturer could choose to offer a $1 million jackpot on a $1 bet, confident that it will only happen, over the long term, once every 16.8 million plays. With microprocessors now ubiquitous, the computers inside modern slot machines allow manufacturers to assign a different probability to every symbol on every reel. To the player it might appear that a winning symbol was "so close", whereas in fact the probability is much lower. In the 1980s in the U.K., machines embodying microprocessors became common. These used a number of features to ensure the payout was controlled within the limits of the gambling legislation. As a coin was inserted into the machine, it could go either directly into the cashbox for the benefit of the owner or into a channel that formed the payout reservoir, with the microprocessor monitoring the number of coins in this channel. The drums themselves were driven by stepper motors, controlled by the processor and with proximity sensors monitoring the position of the drums. A "look-up table" within the software allowed the processor to know what symbols were being displayed on the drums to the gambler. This allowed the system to control the level of payout by stopping the drums at positions it had determined. If the payout channel had filled up, the payout became more generous; if nearly empty, the payout became less so (thus giving good control of the odds). Video slot machines do not use mechanical reels, but instead use graphical reels on a computerized display. As there are no mechanical constraints on the design of video slot machines, games often use at least five reels, and may also use non-standard layouts. This greatly expands the number of possibilities: a machine can have 50 or more symbols on a reel, giving odds as high as 300 million to 1 against – enough for even the largest jackpot. As there are so many combinations possible with five reels, manufacturers do not need to weight the payout symbols (although some may still do so). Instead, higher paying symbols will typically appear only once or twice on each reel, while more common symbols earning a more frequent payout will appear many times. Video slot machines usually make more extensive use of multimedia, and can feature more elaborate minigames as bonuses. Modern cabinets typically use flat-panel displays, but cabinets using larger curved screens (which can provide a more immersive experience for the player) are not uncommon. Video slot machines typically encourage the player to play multiple "lines": rather than simply taking the middle of the three symbols displayed on each reel, a line could go from top left to bottom right or follow any other pattern specified by the manufacturer. As each symbol is equally likely, there is no difficulty for the manufacturer in allowing the player to take as many of the possible lines on offer as desired – the long-term return to the player will be the same.
The difference for the player is that the more lines they play, the more likely they are to get paid on a given spin (because they are betting more). To avoid seeming as if the player's money is simply ebbing away (whereas a payout of 100 credits on a single-line machine would be 100 bets and the player would feel they had made a substantial win, on a 20-line machine it would only be 5 bets and not seem as significant), manufacturers commonly offer bonus games, which can return many times the bet. The player is encouraged to keep playing to reach the bonus: even when losing, the bonus game could allow them to win back their losses. All modern machines are designed using pseudorandom number generators ("PRNGs"), which are constantly generating a sequence of simulated random numbers, at a rate of hundreds or perhaps thousands per second. As soon as the "Play" button is pressed, the most recent random number is used to determine the result. This means that the result varies depending on exactly when the game is played. A fraction of a second earlier or later and the result would be different. It is important that the machine contains a high-quality RNG implementation, because all PRNGs must eventually repeat their number sequence, and if the period is short or the PRNG is otherwise flawed, an advanced player may be able to "predict" the next result. Having access to the PRNG code and seed values, Ronald Dale Harris, a former slot machine programmer, discovered equations for specific gambling games like Keno that allowed him to predict what the next set of selected numbers would be based on the previous games played. Most machines are designed to defeat this by generating numbers even when the machine is not being played, so the player cannot tell where in the sequence they are, even if they know how the machine was programmed. Slot machines are typically programmed to pay out as winnings 0% to 99% of the money that is wagered by players. This is known as the "theoretical payout percentage" or RTP, "return to player". The minimum theoretical payout percentage varies among jurisdictions and is typically established by law or regulation. For example, the minimum payout in Nevada is 75%, in New Jersey 83%, and in Mississippi 80%. The winning patterns on slot machines – the amounts they pay and the frequencies of those payouts – are carefully selected to yield a certain fraction of the money paid in to the "house" (the operator of the slot machine) while returning the rest to the players during play. Suppose that a certain slot machine costs $1 per spin and has a return to player (RTP) of 95%. It can be calculated that, over a sufficiently long period such as 1,000,000 spins, the machine will return an average of $950,000 to its players, who have inserted $1,000,000 during that time. In this (simplified) example, the slot machine is said to pay out 95%. The operator keeps the remaining $50,000. Within some EGM development organizations this concept is referred to simply as "par". "Par" also manifests itself to gamblers in promotional techniques: "Our 'Loose Slots' have a 93% payback! Play now!" A slot machine's theoretical payout percentage is set at the factory when the software is written.
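The arithmetic of the 95% example can be checked with a small simulation. The hit rate and flat prize below are invented for illustration, since a real machine realizes its RTP through a full pay table rather than a single averaged prize.

import random

# Sketch: a $1-per-spin machine tuned to a 95% theoretical return.
# One spin in four pays a flat prize sized so the expected value per
# spin equals bet * RTP (illustrative numbers only).
def simulate(spins=1_000_000, bet=1.0, rtp=0.95, hit_rate=0.25):
    prize = bet * rtp / hit_rate          # $3.80 per winning spin here
    paid_out = sum(prize for _ in range(spins) if random.random() < hit_rate)
    return spins * bet, paid_out

wagered, returned = simulate()
print(f"wagered ${wagered:,.0f}, returned ${returned:,.0f}, "
      f"house keeps ${wagered - returned:,.0f}")
# Over 1,000,000 spins this prints roughly $950,000 returned and
# $50,000 kept, matching the worked example above.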
Changing the payout percentage after a slot machine has been placed on the gaming floor requires a physical swap of the software or "firmware", which is usually stored on an EPROM but may be loaded onto non-volatile random access memory (NVRAM) or even stored on CD-ROM or DVD, depending on the capabilities of the machine and the applicable regulations. In certain jurisdictions, such as New Jersey, the EPROM has a tamper-evident seal and can only be changed in the presence of Gaming Control Board officials. Other jurisdictions, including Nevada, randomly audit slot machines to ensure that they contain only approved software. Historically, many casinos, both online and offline, have been unwilling to publish individual game RTP figures, making it impossible for the player to know whether they are playing a "loose" or a "tight" game. Since the turn of the century, some information regarding these figures has started to come into the public domain, either through various casinos releasing them—primarily this applies to online casinos—or through studies by independent gambling authorities. The "return to player" is not the only statistic that is of interest: the probabilities of every payout on the pay table are also critical. For example, consider a hypothetical slot machine with a dozen different values on the pay table, where the probabilities of getting each payout are zero except for the largest one. If that payout is 4,000 times the input amount, and it happens on average once every 4,000 plays, the "return to player" is exactly 100%, but the game would be dull to play. Also, most people would not win anything, and having entries on the paytable that have a return of zero would be deceptive. As these individual probabilities are closely guarded secrets, it is possible that the advertised machines with high return to player simply increase the probabilities of these jackpots. The casino could legally place machines with a similar style of payout and advertise that some machines have a 100% return to player. The added advantage is that these large jackpots increase the excitement of the other players. The table of probabilities for a specific machine is called the Probability and Accounting Report or PAR sheet (also PARS, commonly understood as Paytable and Reel Strips). Mathematician Michael Shackleford revealed the PARS for one commercial slot machine, an original International Game Technology "Red White and Blue" machine. This game, in its original form, is obsolete, so these specific probabilities do not apply. He only published the odds after a fan of his sent him some information provided on a slot machine that was posted on a machine in the Netherlands. The psychology of the machine design is quickly revealed. There are 13 possible payouts ranging from 1:1 to 2,400:1. The 1:1 payout comes every 8 plays. The 5:1 payout comes every 33 plays, whereas the 2:1 payout comes every 600 plays. Most players assume the likelihood of a payout increases in proportion to its size. The one mid-size payout that is designed to give the player a thrill is the 80:1 payout. It is programmed to occur an average of once every 219 plays. The 80:1 payout is high enough to create excitement, but not high enough that it makes it likely that the player will take his winnings and abandon the game. More than likely the player began the game with at least 80 times his bet (for instance, there are 80 quarters in $20). In contrast, the 150:1 payout occurs only on average once every 6,241 plays.
The highest payout of 2,400:1 occurs only on average once every 64³ = 262,144 plays, since the machine has 64 virtual stops. The player who continues to feed the machine is likely to have several mid-size payouts, but unlikely to have a large payout. He quits after he is bored or has exhausted his bankroll. (The contribution of these published frequencies to the overall return is totalled in the sketch at the end of this passage.) Despite their confidentiality, occasionally a PAR sheet is posted on a website. They have limited value to the player, because usually a machine will have 8 to 12 different possible programs with varying payouts. In addition, slight variations of each machine (e.g., with "double jackpots" or "five times play") are always being developed. The casino operator can choose which EPROM chip to install in any particular machine to select the payout desired. The result is that there is not really such a thing as a high payback type of machine, since every machine potentially has multiple settings. From October 2001 to February 2002, columnist Michael Shackleford obtained PAR sheets for five different nickel machines: four IGT games, "Austin Powers", "Fortune Cookie", "Leopard Spots" and "Wheel of Fortune", and one game manufactured by WMS, "Reel 'em In". Without revealing the proprietary information, he developed a program that would allow him to determine, with usually fewer than a dozen plays on each machine, which EPROM chip was installed. Then he did a survey of over 400 machines in 70 different casinos in Las Vegas. He averaged the data and assigned an average payback percentage to the machines in each casino. The resultant list was widely publicized for marketing purposes (especially by the Palms casino, which had the top ranking). One reason that the slot machine is so profitable to a casino is that the player must play the "high house edge and high payout" wagers along with the "low house edge and low payout" wagers. In a more traditional wagering game like craps, the player knows that certain wagers have almost a 50/50 chance of winning or losing, but they only pay a limited multiple of the original bet (usually no higher than three times). Other bets have a higher house edge, but the player is rewarded with a bigger win (up to thirty times in craps). The player can choose what kind of wager he wants to make. A slot machine does not afford such an opportunity. Theoretically, the operator could make these probabilities available, or allow the player to choose among several, so that the player is free to make an informed choice. However, no operator has ever enacted this strategy. Different machines have different maximum payouts, but without knowing the odds of getting the jackpot, there is no rational way to differentiate. In many markets where central monitoring and control systems are used to link machines for auditing and security purposes, usually in wide area networks of multiple venues and thousands of machines, player return must usually be changed from a central computer rather than at each machine. A range of percentages is set in the game software and selected remotely. In 2006, the Nevada Gaming Commission began working with Las Vegas casinos on technology that would allow the casino's management to change the game, the odds, and the payouts remotely. The change cannot be done instantaneously, but only after the selected machine has been idle for at least four minutes. After the change is made, the machine must be locked to new players for four minutes and display an on-screen message informing potential players that a change is being made.
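For the "Red White and Blue" figures quoted above, the published frequencies can be combined directly: each pay contributes its multiple of the bet divided by its average plays per hit. Only six of the machine's thirteen pays are given in the text, so the Python sketch below yields a partial return, not the full payback percentage.

# Partial return-to-player from the six published frequencies
# (payout multiple -> average plays per hit, as quoted above).
pays = {1: 8, 2: 600, 5: 33, 80: 219, 150: 6_241, 2_400: 262_144}

partial_rtp = sum(mult / freq for mult, freq in pays.items())
print(f"the six listed pays return {partial_rtp:.1%} of each bet")
# About 67.8%; the seven unlisted pay table entries account for the
# rest of the machine's overall payback percentage.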
Some varieties of slot machines can be linked together in a setup sometimes known as a "community" game. The most basic form of this setup involves progressive jackpots that are shared between the bank of machines, but it may include multiplayer bonuses and other features. In some cases multiple machines are linked across multiple casinos. In these cases, the machines may be owned by the manufacturer, who is responsible for paying the jackpot. The casinos lease the machines rather than owning them outright. Casinos in New Jersey, Nevada, and South Dakota now offer multi-state progressive jackpots, which offer bigger jackpot pools. Mechanical slot machines and their coin acceptors were sometimes susceptible to cheating devices and other scams. One historical example involved spinning a coin with a short length of plastic wire. The weight and size of the coin would be accepted by the machine and credits would be granted. However, the spin created by the plastic wire would cause the coin to exit through the reject chute into the payout tray. This particular scam has become obsolete due to improvements in newer slot machines. Another obsolete method of defeating slot machines was to use a light source to confuse the optical sensor used to count coins during payout. Modern slot machines are controlled by EPROM computer chips and, in large casinos, coin acceptors have become obsolete in favor of bill acceptors. These machines and their bill acceptors are designed with advanced anti-cheating and anti-counterfeiting measures and are difficult to defraud. Early computerized slot machines were sometimes defrauded through the use of cheating devices, such as the "slider" or "monkey paw". Computerized slot machines are fully deterministic, and thus outcomes can sometimes be successfully predicted. Malfunctioning electronic slot machines are capable of indicating jackpot winnings far in excess of those advertised. Two such cases occurred in casinos in Colorado in 2010, where software errors led to indicated jackpots of $11 million and $42 million. Analysis of machine records by the state Gaming Commission revealed faults, with the true jackpots being substantially smaller; state gaming laws do not require a casino to honor such payouts. In the United States, the public and private availability of slot machines is highly regulated by state governments. Many states have established gaming control boards to regulate the possession and use of slot machines and other forms of gaming. Nevada is the only state that has no significant restrictions against slot machines both for public and private use. In New Jersey, slot machines are only allowed in hotel casinos operated in Atlantic City. Several states (Indiana, Louisiana and Missouri) allow slot machines (as well as any casino-style gambling) only on licensed riverboats or permanently anchored barges. Since Hurricane Katrina, Mississippi has removed the requirement that casinos on the Gulf Coast operate on barges and now allows them on land along the shoreline. Delaware allows slot machines at three horse tracks; they are regulated by the state lottery commission. In Wisconsin, bars and taverns are allowed to have up to five machines. These machines usually allow a player to either take a payout or gamble it on a double-or-nothing "side game". The territory of Puerto Rico places significant restrictions on slot machine ownership, but the law is widely flouted and slot machines are common in bars and coffeeshops.
With regard to tribal casinos located on Native American reservations, slot machines played against the house and operating independently from a centralized computer system are classified as "Class III" gaming by the Indian Gaming Regulatory Act (IGRA), and sometimes promoted as "Vegas-style" slot machines. In order to offer Class III gaming, tribes must enter into a compact (agreement) with the state that is approved by the Department of the Interior, which may contain restrictions on the types and quantity of such games. As a workaround, some casinos may operate slot machines as "Class II" games—a category that includes games where players play exclusively against at least one other opponent and not the house, such as bingo or any related games (such as pull-tabs). In these cases, the reels are an entertainment display with a pre-determined outcome based on a centralized game played against other players. Under the IGRA, Class II games are regulated by individual tribes and the National Indian Gaming Commission, and do not require any additional approval if the state already permits tribal gaming. Some historical race wagering terminals operate in a similar manner, with the machines using slots as an entertainment display for outcomes paid using the parimutuel betting system, based on results of randomly selected, previously held horse races (with the player able to view selected details about the race and adjust their picks before playing the credit). Alaska, Arizona, Arkansas, Kentucky, Maine, Minnesota, Nevada, Ohio, Rhode Island, Texas, Utah, Virginia, and West Virginia place no restrictions on private ownership of slot machines. Conversely, in Connecticut, Hawaii, Nebraska, South Carolina, and Tennessee, private ownership of any slot machine is completely prohibited. The remaining states allow slot machines of a certain age (typically 25–30 years) or slot machines manufactured before a specific date. For a detailed list of state-by-state regulations on private slot machine ownership, see U.S. state slot machine ownership regulations. The Government of Canada has minimal involvement in gambling beyond the Canadian Criminal Code. In essence, the term "lottery scheme" used in the code means slot machines, bingo and table games normally associated with a casino. These fall under the jurisdiction of the province or territory without reference to the federal government; in practice, all Canadian provinces operate gaming boards that oversee lotteries, casinos and video lottery terminals under their jurisdiction. The Ontario Lottery and Gaming Corporation (OLG) piloted a classification system for slot machines at the Grand River Raceway, developed by University of Waterloo professor Kevin Harrigan as part of its PlaySmart initiative for responsible gambling. Inspired by nutrition labels on foods, the labels displayed metrics such as volatility and frequency of payouts. In Australia, "poker machines" or "pokies" are officially termed "gaming machines". In Australia, gaming machines are a matter for state governments, so laws vary between states. Gaming machines are found in casinos (approximately one in each major city), and in pubs and clubs in some states (usually sports, social, or RSL clubs). The first Australian state to legalize this style of gambling was New South Wales, which in 1956 made them legal in all registered clubs in the state. There are suggestions that the proliferation of poker machines has led to increased levels of problem gambling; however, the precise nature of this link is still open to research.
In 1999 the Australian Productivity Commission reported that nearly half of Australia's gaming machines were in New South Wales. At the time, 21% of all the gambling machines in the world were operating in Australia and, on a per capita basis, Australia had roughly five times as many gaming machines as the United States. Australia ranks 8th in total number of gaming machines after Japan, U.S.A., Italy, U.K., Spain and Germany. This is primarily because gaming machines have been legal in the state of New South Wales since 1956; over time, the number of machines has grown to 97,103 (at December 2010, including the Australian Capital Territory). By way of comparison, the U.S. state of Nevada, which legalized gaming including slots several decades before N.S.W., had 190,135 slots operating. Revenue from gaming machines in pubs and clubs accounts for more than half of the $4 billion in gambling revenue collected by state governments in fiscal year 2002–03. In Queensland, gaming machines in pubs and clubs must provide a return rate of 85%, while machines located in casinos must provide a return rate of 90%. Most other states have similar provisions. In Victoria, gaming machines must provide a minimum return rate of 87% (including jackpot contribution), including machines in Crown Casino. As of December 1, 2007, Victoria banned gaming machines that accepted $100 notes; all gaming machines made since 2003 comply with this rule. This new law also banned machines with an automatic play option. One exception exists in Crown Casino for any player with a VIP loyalty card: they can still insert $100 notes and use an autoplay feature (whereby the machine will automatically play until credit is exhausted or the player intervenes). All gaming machines in Victoria have an information screen accessible to the user by pressing the "i key" button, showing the game rules, paytable, return to player percentage, and the top and bottom five combinations with their odds. These combinations are stated to be played on a minimum bet (usually 1 credit per line, with 1 line or reel played, although some newer machines do not have an option to play 1 line; some machines may only allow maximum lines to be played), excluding feature wins. Western Australia has the most restrictive regulations on electronic gaming machines in general, with the Crown Perth casino resort being the only venue allowed to operate them, and bans slot machines with spinning reels entirely. This policy has an extensive political history, reaffirmed by the 1974 Royal Commission into Gambling. While Western Australian gaming machines are similar to the other states', they do not have spinning reels; therefore, different animations are used in place of the spinning reels in order to display each game result. Nick Xenophon was elected on an independent No Pokies ticket in the South Australian Legislative Council at the 1997 South Australian state election on 2.9 percent, re-elected at the 2006 election on 20.5 percent, and elected to the Australian Senate at the 2007 federal election on 14.8 percent. Independent candidate Andrew Wilkie, an anti-pokies campaigner, was elected to the Australian House of Representatives seat of Denison at the 2010 federal election. Wilkie was one of four crossbenchers who supported the Gillard Labor government following the hung parliament result. Wilkie immediately began forging ties with Xenophon as soon as it was apparent that he was elected.
In exchange for Wilkie's support, the Labor government attempted to implement precommitment technology for high-bet/high-intensity poker machines, against opposition from the Tony Abbott Coalition and Clubs Australia. During the COVID-19 pandemic in 2020, every establishment in the country that operated poker machines was shut down in an attempt to curb the spread of the virus, bringing Australia's usage of poker machines effectively to zero. In Russia, "slot clubs" appeared quite late, in 1992. Before 1992, slot machines were found only in casinos and small shops, but slot clubs later began appearing all over the country. The most popular and numerous were "Vulcan 777" and "Taj Mahal". Since gambling establishments were banned in 2009, almost all slot clubs have disappeared; they are now found only in specially authorized gambling zones. In the United Kingdom, slot machines are covered by the Gambling Act 2005, which superseded the Gaming Act 1968, and are categorised by definitions produced by the Gambling Commission under the 2005 Act. Casinos built under the provisions of the 1968 Act are allowed to house either up to twenty machines of categories B–D or any number of C–D machines. As defined by the 2005 Act, large casinos can have a maximum of one hundred and fifty machines in any combination of categories B–D (subject to a machine-to-table ratio of 5:1), while small casinos can have a maximum of eighty machines in any combination of categories B–D (subject to a machine-to-table ratio of 2:1). Category A games were defined in preparation for the planned "Super Casinos". Despite a lengthy bidding process, with Manchester chosen as the single planned location, the development was cancelled soon after Gordon Brown became Prime Minister of the United Kingdom. As a result, there are no lawful Category A games in the U.K. Category B games are divided into subcategories; the differences between B1, B3 and B4 games are mainly the permitted stakes and prizes. Category B2 games – fixed odds betting terminals (FOBTs) – have quite different stake and prize rules: FOBTs are mainly found in licensed betting shops, or bookmakers, usually in the form of electronic roulette. The games are based on a random number generator; thus each game's probability of hitting the jackpot is independent of every other game, and these probabilities are all equal. If a pseudorandom number generator is used instead of a truly random one, however, the outputs are not genuinely independent, since each number is determined at least in part by the one generated before it.
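That dependence can be made concrete with a minimal sketch (Python; the constants below are textbook linear congruential generator values used purely for illustration, not the generator of any actual gaming terminal):

    # A linear congruential generator (LCG): each output is a deterministic
    # function of the previous internal state, so the whole sequence is
    # reproducible from the seed alone.
    class LCG:
        def __init__(self, seed: int):
            self.state = seed

        def next_value(self) -> int:
            # Numerical Recipes constants, chosen here only for illustration.
            self.state = (1664525 * self.state + 1013904223) % 2**32
            return self.state

    rng_a = LCG(seed=42)
    rng_b = LCG(seed=42)
    # Two generators with identical seeds emit identical "random" sequences:
    assert [rng_a.next_value() for _ in range(5)] == [rng_b.next_value() for _ in range(5)]

Because the sequence unfolds deterministically from the seed, the unpredictability of such a machine rests entirely on keeping its internal state secret.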
Category C games are often referred to as fruit machines, one-armed bandits and AWPs (amusement with prizes). Fruit machines are commonly found in pubs, clubs, and arcades. Machines commonly have three reels, but can be found with four or five, each with 16–24 symbols printed around them. The reels are spun each play, and the appearance of particular combinations of symbols results in payment of the associated winnings by the machine (or alternatively the initiation of a subgame). These games often have many extra features, trails and subgames with opportunities to win money, usually more than can be won from the payouts on the reel combinations alone. Fruit machines in the U.K. almost universally have a set of such features, generally selected at random using a pseudorandom number generator. It is known for machines to pay out multiple jackpots, one after the other (this is known as a streak or rave), but each jackpot requires a new game to be played so as not to violate the law about the maximum payout on a single play. Typically this involves the player only pressing the Start button, for which a single credit is taken, regardless of whether this causes the reels to spin or not. The minimum payout percentage is 70%, with pubs often setting the payout at around 78%. Japanese slot machines, known as pachisuro or pachislot (portmanteaus of the words "pachinko" and "slot machine"), are a descendant of the traditional Japanese pachinko game. Slot machines are a fairly new phenomenon in Japan and can be found mostly in pachinko parlors and the adult sections of amusement arcades, known as game centers. The machines are regulated with integrated circuits, and have six different levels changing the odds of hitting a 777. The levels provide a rough outcome of between 90% and 160% (200% for skilled players). Japanese slot machines are "beatable": parlor operators naturally set most machines to simply collect money, but intentionally place a few paying machines on the floor so that there will be at least someone winning, encouraging players on the losing machines to keep gambling, exploiting the psychology of the gambler's fallacy. Despite the many varieties of pachislot machines, there are certain rules and regulations put forward by an affiliate of the National Police Agency. For example, there must be three reels; all reels must be accompanied by buttons which allow players to manually stop them; reels may not spin faster than 80 RPM; and reels must stop within 0.19 seconds of a button press. In practice, this means that machines cannot let reels slip more than 4 symbols. Other rules include a 15-coin payout cap, a 50-credit cap on machines, a 3-coin maximum bet, and other such regulations. Although a 15-coin payout may seem quite low, regulations allow "Big Bonus" (c. 400–711 coins) and "Regular Bonus" modes (c. 110 coins) where these 15-coin payouts occur nearly continuously until the bonus mode is finished. While the machine is in bonus mode, the player is entertained with special winning scenes on the LCD display, and energizing music is heard, payout after payout. Three other unique features of pachisuro machines are "stock", "renchan", and "tenjō". On many machines, when enough money to afford a bonus is taken in, the bonus is not immediately awarded. Typically the game merely stops making the reels slip off the bonus symbols for a few games. If the player fails to hit the bonus during these "standby games", it is added to the "stock" for later collection. Many current games, after finishing a bonus round, set the probability of releasing additional stock (accumulated from earlier players failing to hit a bonus) very high for the first few games. As a result, a lucky player may get to play several bonus rounds in a row (a "renchan"), making payouts of 5,000 or even 10,000 coins possible. The lure of "stock" waiting in the machine, and the possibility of "renchan", tease the gambler to keep feeding the machine. To tease them further, there is a "tenjō" (ceiling), a maximum limit on the number of games between "stock" releases.
For example, if the "tenjō" is 1,500 and the number of games played since the last bonus is 1,490, the player is guaranteed a bonus release within just 10 games. Because of the "stock", "renchan", and "tenjō" systems, it is possible to make money by simply playing machines on which someone has just lost a large amount of money. This is called being a "hyena". Hyenas are easy to recognize, roaming the aisles looking for a "kamo" ("sucker" in English) about to leave his machine. In short, the regulations allowing "stock", "renchan", and "tenjō" transformed the pachisuro from a low-stakes form of entertainment only a few years earlier into hardcore gambling. Many people may be gambling more than they can afford, and the big payouts also lure unsavory "hyena" types into the gambling halls. To address these social issues, a new regulation (Version 5.0) was adopted in 2006 which caps the maximum amount of "stock" a machine can hold to around 2,000–3,000 coins' worth of bonus games. Moreover, all pachisuro machines must be re-evaluated for regulation compliance every three years. Version 4.0 came out in 2004, meaning that all machines offering payouts of up to 10,000 coins were to be removed from service by 2007. Natasha Dow Schüll, associate professor in New York University's Department of Media, Culture, and Communication, uses the term "machine zone" to describe the state of immersion that users of slot machines experience during gambling, in which they lose a sense of time, space, bodily awareness, and monetary value. Mike Dixon, professor of psychology at the University of Waterloo, studies the relationship between slot players and slot machines. Slot players were observed experiencing heightened arousal from the sensory stimulus coming from the machines. Dixon's team "sought to show that these 'losses disguised as wins' (LDWs) would be as arousing as wins, and more arousing than regular losses." Psychologists Robert Breen and Marc Zimmerman found that players of video slot machines reach a debilitating level of involvement with gambling three times as rapidly as those who play traditional casino games, even if they have gambled regularly on other forms of gambling in the past without a problem. The 2011 "60 Minutes" report "Slot Machines: The Big Gamble" focused on the link between slot machines and gambling addiction. Skill stop buttons predated the Bally electromechanical slot machines of the 1960s and 70s; they appeared on mechanical slot machines manufactured by Mills Novelty Co. as early as the mid-1920s. These machines had modified reel-stop arms which allowed the reels to be released from the timing bar earlier than in normal play, simply by pressing buttons on the front of the machine, located between each reel. "Skill stop" buttons were added to some slot machines by Zacharias Anthony in the early 1970s. These enabled the player to stop each reel, allowing a degree of "skill" so as to satisfy the New Jersey gaming laws of the day, which required that players be able to control the game in some way. The original conversion was applied to approximately 50 late-model Bally slot machines. Because the typical machine stopped the reels automatically in less than 10 seconds, weights were added to the mechanical timers to prolong the automatic stopping of the reels. By the time the New Jersey Alcoholic Beverages Commission (ABC) had approved the conversion for use in New Jersey arcades, the word was out, and every other distributor began adding skill stops.
The machines were a huge hit on the Jersey Shore and the remaining unconverted Bally machines were destroyed as they had become instantly obsolete.
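Several jurisdictions above mandate minimum return-to-player (RTP) percentages: 85% for Queensland pubs and clubs, 87% in Victoria, and 70% for U.K. fruit machines. A machine's theoretical RTP is simply the probability-weighted sum of its payouts divided by the amount wagered. A minimal sketch in Python, using an invented paytable purely for illustration:

    # Theoretical return-to-player (RTP) for a hypothetical one-credit game:
    # RTP = expected payout per game / bet per game.
    # This paytable is invented for illustration and matches no real machine.
    paytable = [
        # (probability of the combination, payout in credits)
        (1 / 10000, 1000),  # jackpot
        (1 / 500, 100),
        (1 / 50, 10),
        (1 / 10, 2),
    ]

    bet = 1  # credits wagered per game
    rtp = sum(probability * payout for probability, payout in paytable) / bet
    print(f"Theoretical RTP: {rtp:.1%}")  # prints "Theoretical RTP: 70.0%"

This invented paytable happens to return exactly 70%, the U.K. minimum noted above; Victorian machines expose comparable figures to players via the information screen described earlier.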
https://en.wikipedia.org/wiki?curid=29229
Spear A spear is a pole weapon consisting of a shaft, usually of wood, with a pointed head. The head may be simply the sharpened end of the shaft itself, as is the case with fire-hardened spears, or it may be made of a more durable material fastened to the shaft, such as bone, flint, obsidian, iron, steel or bronze. The most common design for hunting or combat spears since ancient times has incorporated a metal spearhead shaped like a triangle, lozenge, or leaf. The heads of fishing spears usually feature barbs or serrated edges. The word "spear" comes from the Old English "spere", from the Proto-Germanic "speri", from a Proto-Indo-European root "*sper-" "spear, pole". Spears can be divided into two broad categories: those designed for thrusting in melee combat and those designed for throwing (usually referred to as javelins). The spear has been used throughout human history both as a hunting and fishing tool and as a weapon. Along with the axe, knife, and club, it is one of the earliest and most important tools developed by early humans. As a weapon, it may be wielded with either one or two hands. It was used in virtually every conflict up until the modern era, and even then it has continued in the form of the fixed bayonet; it is probably the most commonly used weapon in history. Spear manufacture and use is not confined to humans; it is also practiced by the western chimpanzee. Chimpanzees near Kédougou, Senegal, have been observed creating spears by breaking straight limbs off trees, stripping them of their bark and side branches, and sharpening one end with their teeth, then using the weapons to hunt galagos sleeping in hollows. Archaeological evidence found in present-day Germany documents that wooden spears have been used for hunting since at least 400,000 years ago, and a 2012 study from the site of Kathu Pan in South Africa suggests that hominids, possibly "Homo heidelbergensis", may have developed the technology of hafted stone-tipped spears in Africa about 500,000 years ago. Wood does not preserve well, however, and Craig Stanford, a primatologist and professor of anthropology at the University of Southern California, has suggested that the discovery of spear use by chimpanzees probably means that early humans used wooden spears as well, perhaps as early as five million years ago. Neanderthals were constructing stone spear heads from as early as 300,000 BP, and by 250,000 years ago wooden spears were being made with fire-hardened points. From circa 200,000 BCE onwards, Middle Paleolithic humans began to make complex stone blades with flaked edges, which were used as spear heads. These stone heads could be fixed to the spear shaft by gum or resin, or by bindings made of animal sinew, leather strips or vegetable matter. During this period, a clear difference remained between spears designed to be thrown and those designed to be used in hand-to-hand combat. By the Magdalenian period (c. 15,000–9500 BCE), spear-throwers similar to the later atlatl were in use. The spear is the main weapon of the warriors of Homer's "Iliad". The use of both a single thrusting spear and two throwing spears is mentioned. It has been suggested that two styles of combat are being described: an early style, with thrusting spears, dating to the Mycenaean period in which the Iliad is set, and, anachronistically, a later style, with throwing spears, from Homer's own Archaic period. In the 7th century BCE, the Greeks evolved a new close-order infantry formation, the phalanx.
The key to this formation was the hoplite, who was equipped with a large, circular, bronze-faced shield (aspis) and a spear with an iron head and bronze butt-spike (doru). The hoplite phalanx dominated warfare among the Greek city-states from the 7th into the 4th century BCE. The 4th century saw major changes. One was the greater use of peltasts, light infantry armed with spear and javelins. The other was the development of the sarissa, a long two-handed pike, by the Macedonians under Philip of Macedon and Alexander the Great. The pike phalanx, supported by peltasts and cavalry, became the dominant mode of warfare among the Greeks from the late 4th century onward until Greek military systems were supplanted by the Roman legions. In the pre-Marian Roman armies, the first two lines of battle, the "hastati" and "principes", often fought with a sword called the "gladius" and with "pila", heavy javelins specifically designed to be thrown at an enemy to pierce and foul a target's shield. Originally the "principes" were armed with a short spear called a "hasta", but these gradually fell out of use, eventually being replaced by the gladius. The third line, the "triarii", continued to use the "hasta". From the late 2nd century BCE, all legionaries were equipped with the "pilum", which continued to be the standard legionary spear until the end of the 2nd century CE. "Auxilia", however, were equipped with a simple hasta and, perhaps, throwing spears. During the 3rd century CE, although the "pilum" continued to be used, legionaries usually were equipped with other forms of throwing and thrusting spear, similar to those of the "auxilia" of the previous century. By the 4th century, the "pilum" had effectively disappeared from common use. In the late period of the Roman Empire, the spear saw more frequent use because of its anti-cavalry capabilities, as the barbarian invasions were often conducted by peoples with a well-developed culture of cavalry warfare. Muslim warriors used a spear that was called an "az-zaġāyah". Berbers pronounced it "zaġāya", but the English term, derived from Berber via Old French, is "assegai". It is a pole weapon used for throwing or hurling, usually a light spear or javelin made of hard wood and pointed with a forged iron tip. The "az-zaġāyah" played an important role during the Islamic conquest as well as during later periods, well into the 20th century. A longer-poled "az-zaġāyah" was used as a hunting weapon from horseback. The "az-zaġāyah" was widely used: it existed in various forms in areas stretching from Southern Africa to the Indian subcontinent, although these places already had their own variants of the spear. This javelin was the weapon of choice during the "Fulani jihad" as well as during the Mahdist War in Sudan. It is still used by Sikh "Nihang" in the Punjab as well as by certain wandering Sufi ascetics (dervishes). After the fall of the Western Roman Empire, the spear and shield continued to be used by nearly all Western European cultures. Since a medieval spear required only a small amount of steel along the sharpened edges (most of the spear-tip was wrought iron), it was an economical weapon. Quick to manufacture and needing less smithing skill than a sword, it remained the main weapon of the common soldier. The Vikings, for instance, although often portrayed with axe or sword in hand, were armed mostly with spears, as were their Anglo-Saxon, Irish, or continental contemporaries.
Broadly speaking, spears were either designed to be used in melee or to be thrown. Within this simple classification, there was a remarkable range of types. For example, M. J. Swanton identified thirty different spearhead categories and sub-categories in early Saxon England. Medieval spearheads were generally leaf-shaped. Notable types of early medieval spears include the "angon", a throwing spear with a long head similar to the Roman "pilum", used by the Franks and Anglo-Saxons, and the winged (or lugged) spear, which had two prominent wings at the base of the spearhead, either to prevent the spear penetrating too far into an enemy or to aid in spear fencing. Originally a Frankish weapon, the winged spear also was popular with the Vikings. It would become the ancestor of later medieval polearms, such as the partisan and spetum. The thrusting spear also has the advantage of reach, being considerably longer than other weapon types. Exact spear lengths are hard to deduce, as few spear shafts survive archaeologically. Some nations were noted for their long spears, including the Scots and the Flemish. Spears usually were used in tightly ordered formations, such as the shield wall or the schiltron. To resist cavalry, spear shafts could be planted against the ground. William Wallace drew up his schiltrons in a circle at the Battle of Falkirk in 1298 to deter charging cavalry; this was a widespread tactic sometimes known as the "crown" formation. Throwing spears became rarer as the Middle Ages drew on, but survived in the hands of specialists such as the Catalan Almogavars. They were commonly used in Ireland until the end of the 16th century. Spears began to fall out of fashion among the infantry during the 14th century, being replaced by pole weapons that combined the thrusting properties of the spear with the cutting properties of the axe, such as the halberd. Where spears were retained they grew in length, eventually evolving into pikes, which would be a dominant infantry weapon in the 16th and 17th centuries. Cavalry spears were originally the same as infantry spears and were often used with two hands or held with one hand overhead. In the 12th century, after the adoption of stirrups and a high-cantled saddle, the spear became a decidedly more powerful weapon. A mounted knight would secure the lance by holding it with one hand and tucking it under the armpit (the "couched lance" technique). This allowed all the momentum of the horse and knight to be focused on the weapon's tip, whilst still retaining accuracy and control. This use of the spear spurred the development of the lance as a distinct weapon that was perfected in the medieval sport of jousting. In the 14th century, tactical developments meant that knights and men-at-arms often fought on foot. This led to the practice of shortening the lance to make it more manageable. As dismounting became commonplace, specialist pole weapons such as the pollaxe were adopted by knights and this practice ceased. Spears were used first as hunting weapons amongst the ancient Chinese. They became popular as infantry weapons during the Warring States and Qin era, when spearmen were used as especially highly disciplined soldiers in organized group attacks. When used in formation fighting, spearmen would line up their large rectangular or circular shields in a shieldwall manner. The Qin also employed long spears (more akin to a pike) in formations similar to Swiss pikemen in order to ward off cavalry.
The Han Empire used tactics similar to those of its Qin predecessors. Halberds, polearms, and dagger-axes were also common weapons during this time. Spears were also common weaponry for Warring States, Qin, and Han era cavalry units. During these eras, the spear developed into a longer, lance-like weapon used for cavalry charges. There are many words in Chinese that would be classified as a spear in English. The "Mao" is the predecessor of the "Qiang". The first bronze "Mao" appeared in the Shang dynasty. This weapon was less prominent on the battlefield than the "ge" (dagger-axe). In some archaeological examples, two tiny holes or ears can be found in the blade of the spearhead near the socket; these holes were presumably used to attach tassels, much like modern-day wushu spears. In the early Shang, the "Mao" appears to have had a relatively short and narrow shaft, as opposed to "Mao" of the later Shang and Western Zhou periods. Some "Mao" from this era are heavily decorated, as evidenced by a Warring States period "Mao" from the Ba Shu area. In the Han dynasty the "Mao" and the "Ji" (戟, which can be loosely defined as a halberd) rose to prominence in the military. It is notable that the number of iron "Mao" heads found exceeds the number of bronze heads. By the end of the Han dynasty (Eastern Han), the replacement of bronze by iron "Mao" was complete, and the bronze "Mao" had been rendered entirely obsolete. After the Han dynasty, toward the Sui and Tang dynasties, the "Mao" used by cavalry were fitted with much longer shafts, as mentioned above. During this era, the use of the "Shuo" (矟) was widespread among the footmen. The "Shuo" can be likened to a pike or simply a long spear. After the Tang dynasty, the "Mao" declined in popularity and was replaced by the "Qiang" (枪). The Tang dynasty divided the "Qiang" into four categories: "一曰漆枪, 二曰木枪, 三曰白杆枪, 四曰扑头枪。" Roughly translated, the four categories are: Qi (a kind of wood) Spears, Wooden Spears, Bai Gan (a kind of wood) Spears and Pu Tou Qiang. The "Qiang" produced in the Song and Ming dynasties consisted of four major parts: spearhead, shaft, end spike and tassel. Many types of "Qiang" exist. Among them are cavalry "Qiang" the length of one "zhang" (eleven feet nine inches, or 3.58 m), Little-Flower Spears ("Xiao Hua Qiang" 小花枪) the length of a person with an arm extended above the head, double-hooked spears, single-hooked spears, ringed spears and many more. There is some confusion as to how to distinguish the "Qiang" from the "Mao", as they are obviously very similar. Some say that a "Mao" is longer than a "Qiang"; others say that the main difference lies in the stiffness of the shaft, the "Qiang" being flexible and the "Mao" stiff. Scholars seem to lean toward the latter explanation more than the former. Because of the difference in the construction of the "Mao" and the "Qiang", the usage is also different, though there is no definitive answer as to exactly what the differences are between the two. Spears in Indian society were used in both missile and non-missile form, by both cavalry and foot-soldiers. Mounted spear-fighting was practiced with a ten-foot, ball-tipped wooden lance called a "bothati", the end of which was covered in dye so that hits might be confirmed.
Spears were constructed from a variety of materials, such as the "sang", made completely of steel, and the "ballam", which had a bamboo shaft. The Arab presence in Sindh and the Mameluks of Delhi introduced the Middle Eastern javelin into India. The Rajputs wielded a type of infantry spear which had a club integrated into the spearhead and a pointed butt end. Other spears had forked blades, several spear-points, and numerous other innovations. One particular spear unique to India was the "vita" or corded lance. Used by the Maratha army, it had a rope connecting the spear with the user's wrist, allowing the weapon to be thrown and pulled back. The "Vel" is a type of spear or lance that originated in Southern India and is primarily used by Tamils. Sikh Nihangs sometimes carry a spear even today. Spears were used in conflicts and training by armed paramilitary units such as the razakars of the Nizam of Hyderabad State as late as the second half of the 20th century. Tribally made spears are used in conflicts and rioting in the Northeastern states of India, such as Assam, Arunachal Pradesh, Nagaland, Mizoram and Tripura. The hoko spear was used in ancient Japan sometime between the Yayoi period and the Heian period, but it became unpopular as early samurai often acted as horseback archers. Medieval Japan employed spears again for infantrymen, but it was not until the 11th century that samurai began to prefer spears over bows. Several polearms were used in the Japanese theatres of war: the naginata was a glaive-like weapon with a long, curved blade, popular among the samurai and the Buddhist warrior-monks and often used against cavalry; the yari was a longer polearm with a straight-bladed spearhead, which became the weapon of choice of both the samurai and the ashigaru (footmen) during the Warring States era. Mounted samurai used shorter yari for single-handed combat, while ashigaru infantry used long yari (similar to the European pike) in massed combat formations. Filipino spears (sibat) were used as both a weapon and a tool throughout the Philippines. The sibat is also called a "bangkaw" (after the Bankaw Revolt), "sumbling" or "palupad" in the islands of Visayas and Mindanao. Sibat are typically made from rattan, either with a sharpened tip or a head made from metal. These heads may be single-edged, double-edged or barbed. Styles vary according to function and origin; for example, a sibat designed for fishing may not be the same as one used for hunting. The spear was used as the primary weapon in expeditions and battles against neighbouring island kingdoms, and it became famous during the 1521 Battle of Mactan, where the chieftain Lapu-Lapu of Cebu fought against Spanish forces led by Ferdinand Magellan, who was killed in the battle. As advanced metallurgy was largely unknown in pre-Columbian America outside of Western Mexico and South America, most weapons in Meso-America were made of wood or obsidian. This did not make them less lethal, as obsidian may be sharpened to become many times sharper than steel. Meso-American spears varied greatly in shape and size. While the Aztecs preferred the sword-like macuahuitl for fighting, the advantage of a far-reaching thrusting weapon was recognised, and a large portion of the army would carry the tepoztopilli into battle.
The tepoztopilli was a pole-arm, and to judge from depictions in various Aztec codices, it was roughly the height of a man, with a broad wooden head about twice the length of the user's palm or shorter, edged with razor-sharp obsidian blades which were deeply set in grooves carved into the head and cemented in place with bitumen or plant resin as an adhesive. The tepoztopilli could thrust and slash effectively. Throwing spears also were used extensively in Meso-American warfare, usually with the help of an atlatl. Throwing spears were typically shorter and more streamlined than the tepoztopilli, and some had obsidian edges for greater penetration. Typically, spears made by Native Americans were created with materials available around their communities. Usually, the shaft of the spear was made from a wooden stick, while the head was fashioned from arrowheads, pieces of metal such as copper, or sharpened bone. The spear was preferred by many since it was inexpensive to create, its use could be taught to others relatively easily, and it could be made quickly and in large quantities. Native Americans used the Buffalo Pound method to kill buffalo, which required a hunter to dress as a buffalo and lure one into a ravine where other hunters were hiding. Once the buffalo appeared, the other hunters would kill it with spears. In a variation of this technique, called the Buffalo Jump, a runner would lead the animals towards a cliff. As the buffalo got close to the cliff, other members of the tribe would jump out from behind rocks or trees and scare the buffalo over the cliff, and hunters waiting at the bottom of the cliff would spear the animals to death. The development of both the long, two-handed pike and gunpowder in Renaissance Europe saw an ever-increasing focus on integrated infantry tactics. Those infantry not armed with these weapons carried variations on the pole-arm, including the halberd and the bill. Ultimately, the spear proper was rendered obsolete on the battlefield. Its last flowering was the half-pike or spontoon, a shortened version of the pike carried by officers and NCOs. While originally a weapon, this came to be seen more as a badge of office, or "leading staff", by which troops were directed. The half-pike, sometimes known as a boarding pike, was also used as a weapon on board ships until the late 19th century. At the start of the Renaissance, cavalry remained predominantly lance-armed: gendarmes with the heavy knightly lance and lighter cavalry with a variety of lighter lances. By the 1540s, however, pistol-armed cavalry called reiters were beginning to make their mark. Cavalry armed with pistols and other lighter firearms, along with a sword, had virtually replaced lance-armed cavalry in Western Europe by the beginning of the 17th century. One of the earliest means by which humans killed prey, hunting game with a spear, along with spear fishing, continues to this day both as a way of catching food and as a cultural activity. Some of the most common prey for early humans were megafauna such as mammoths, which were hunted with various kinds of spear. One theory for the Quaternary extinction event is that most of these animals were hunted to extinction by humans with spears. Even after the invention of other hunting weapons such as the bow, the spear continued to be used, either as a projectile weapon or in the hand, as was common in boar hunting.
Spear hunting fell out of favour in most of Europe in the 18th century, but continued in Germany, enjoying a revival in the 1930s. Spear hunting is still practiced in the United States. Animals taken are primarily wild boar and deer, although trophy animals such as cats and big game as large as Cape buffalo are hunted with spears. Alligators are hunted in Florida with a type of harpoon. Like many weapons, a spear may also be a symbol of power. In the Chinese martial arts community, the Chinese spear ("Qiang" 槍) is popularly known as the "king of weapons". The Celts would symbolically destroy a dead warrior's spear, either to prevent its use by another or as a sacrificial offering. In classical Greek mythology, Zeus' bolts of lightning may be interpreted as a symbolic spear. Some would carry that interpretation over to the spear frequently associated with Athena, interpreting her spear as a symbolic connection to some of Zeus' power beyond the Aegis once he rose to replace other deities in the pantheon; Athena was, however, depicted with a spear prior to that change in the myths. Chiron's wedding gift to Peleus when he married the nymph Thetis in classical Greek mythology was an ashen spear, as the straight grain of ashwood made it an ideal choice for a spear shaft. The Romans and their early enemies would force prisoners to walk underneath a "yoke of spears", which humiliated them. The yoke would consist of three spears, two upright with a third tied between them at a height which made the prisoners stoop. It has been suggested that the arrangement has a magical origin, a way to trap evil spirits. The word "subjugate" has its origins in this practice (from Latin "sub" = under, "jugum" = yoke). In Norse mythology, the god Odin's spear (named Gungnir) was made by the sons of Ivaldi. It had the special property that it never missed its mark. During the War with the Vanir, Odin symbolically threw Gungnir into the Vanir host. This practice of symbolically casting a spear into the enemy ranks at the start of a fight was sometimes used in historic clashes, to seek Odin's support in the coming battle. In Wagner's opera "Siegfried", the haft of Gungnir is said to be from the "World-Tree" Yggdrasil. Other spears of religious significance are the Holy Lance and the Lúin of Celtchar, believed by some to have vast mystical powers. Sir James George Frazer in "The Golden Bough" noted the phallic nature of the spear and suggested that in the Arthurian legends the spear or lance functioned as a symbol of male fertility, paired with the Grail (as a symbol of female fertility). The Hindu god of war Murugan is worshipped by Tamils in the form of the spear called "Vel", which is his primary weapon. The term "spear" is also used (in a somewhat archaic manner) to describe the male line of a family, as opposed to the distaff or female line.
https://en.wikipedia.org/wiki?curid=29234
Sigrid Undset Sigrid Undset (20 May 1882 – 10 June 1949) was a Norwegian novelist who was awarded the Nobel Prize for Literature in 1928. Undset was born in Kalundborg, Denmark, but her family moved to Norway when she was two years old. In 1924, she converted to Catholicism. She fled Norway for the United States in 1940 because of her opposition to Nazi Germany and the German invasion and occupation of Norway, but returned after World War II ended in 1945. Her best-known work is "Kristin Lavransdatter", a trilogy about life in Norway in the Middle Ages, portrayed through the experiences of a woman from birth until death. Its three volumes were published between 1920 and 1922. Sigrid Undset was born on 20 May 1882 in the small town of Kalundborg, Denmark, at the childhood home of her mother, Charlotte Undset (1855–1939, née Anna Maria Charlotte Gyth). Undset was the eldest of three daughters. She and her family moved to Norway when she was two. She grew up in the Norwegian capital, Oslo (or Kristiania, as it was known until 1925). When she was only 11 years old, her father, the Norwegian archaeologist Ingvald Martin Undset (1853–1893), died at the age of 40 after a long illness. The family's economic situation meant that Undset had to give up hope of a university education, and after a one-year secretarial course she obtained work at the age of 16 as a secretary with an engineering company in Kristiania, a post she was to hold for 10 years. She joined the Norwegian Authors' Union in 1907 and from 1933 through 1935 headed its Literary Council, eventually serving as the union's chairman from 1936 until 1940. While employed at office work, Undset wrote and studied. She was 16 years old when she made her first attempt at writing a novel set in the Nordic Middle Ages. The manuscript, a historical novel set in medieval Denmark, was ready by the time she was 22. It was turned down by the publisher. Nonetheless, two years later, she completed another manuscript, much less voluminous than the first at only 80 pages. She had put aside the Middle Ages and had instead produced a realistic description of a woman with a middle-class background in contemporary Kristiania. This book, too, was at first refused by the publishers, but it was subsequently accepted. The title was "Fru Marta Oulie", and the opening sentence (the words of the book's main character) scandalised readers: "I have been unfaithful to my husband". Thus, at the age of 25, Undset made her literary debut with a short realistic novel on adultery, set against a contemporary background. It created a stir, and she found herself ranked as a promising young author in Norway. During the years up to 1919, Undset published a number of novels set in contemporary Kristiania. Her contemporary novels of the period 1907–1918 are about the city and its inhabitants. They are stories of working people, of trivial family destinies, of the relationship between parents and children. Her main subjects are women and their love. Or, as she herself put it—in her typically curt and ironic manner—"the immoral kind" (of love). This realistic period culminated in the novels "Jenny" (1911) and "Vaaren" ("Spring") (1914). The first is about a woman painter who, as a result of romantic crises, believes that she is wasting her life and, in the end, commits suicide. The other tells of a woman who succeeds in saving both herself and her love from a serious matrimonial crisis, finally creating a secure family.
These books placed Undset apart from the incipient women's emancipation movement in Europe. Undset's books sold well from the start, and, after the publication of her third book, she left her office job and prepared to live on her income as a writer. Having been granted a writer's scholarship, she set out on a lengthy journey in Europe. After short stops in Denmark and Germany, she continued to Italy, arriving in Rome in December 1909, where she remained for nine months. Undset's parents had had a close relationship with Rome, and, during her stay there, she followed in their footsteps. The encounter with Southern Europe meant a great deal to her; she made friends within the circle of Scandinavian artists and writers in Rome. In Rome, Undset met Anders Castus Svarstad, a Norwegian painter, whom she married almost three years later. She was 30; Svarstad was nine years older and had a wife and three children in Norway. It was nearly three years before Svarstad got his divorce from his first wife. Undset and Svarstad were married in 1912 and went to stay in London for six months. From London, they returned to Rome, where their first child was born in January 1913. A boy, he was named after his father. In the years up to 1919, she had another child, and the household also took in Svarstad's three children from his first marriage. These were difficult years: her second child, a girl, was mentally handicapped, as was one of Svarstad's sons by his first wife. She continued writing, finishing her last realistic novels and collections of short stories. She also entered the public debate on topical themes: women's emancipation and other ethical and moral issues. She had considerable polemical gifts and was critical of emancipation as it was developing, and of the moral and ethical decline she felt was threatening in the wake of the First World War. In 1919, she moved to Lillehammer, a small town in the Gudbrand Valley in southeast Norway, taking her two children with her. She was then expecting her third child. The intention was that she should take a rest at Lillehammer and move back to Kristiania as soon as Svarstad had their new house in order. However, the marriage broke down and a divorce followed. In August 1919, she gave birth to her third child at Lillehammer. She decided to make Lillehammer her home, and within two years Bjerkebæk, a large house of traditional Norwegian timber architecture, was completed, along with a large fenced garden with views of the town and the villages around. Here she was able to retreat and concentrate on her writing. After the birth of her third child, and with a secure roof over her head, Undset started a major project: "Kristin Lavransdatter". She was at home in the subject matter, having at an earlier stage written a short novel about a period of Norwegian history closer to pre-Christian times. She had also published a Norwegian retelling of the Arthurian legends. She had studied Old Norse manuscripts and medieval chronicles and visited and examined medieval churches and monasteries, both at home and abroad. She was now an authority on the period she was portraying and a very different person from the 22-year-old who had written her first novel about the Middle Ages. It was only after the end of her marriage that Undset grew mature enough to write her masterpiece.
In the years between 1920 and 1927, she first published the three-volume "Kristin", and then the four-volume "Olav" (Audunssøn), swiftly translated into English as "The Master of Hestviken". Simultaneously with this creative process, she was engaged in trying to find meaning in her own life, finding the answer in God. Undset experimented with modernist tropes such as stream of consciousness in the novel, although the original English translation by Charles Archer excised many of these passages. In 1997, the first volume of Tiina Nunnally's new translation of the work won the PEN/Faulkner Award for Fiction in the category of translation. The names of each volume were translated by Archer as "The Bridal Wreath", "The Mistress of Husaby", and "The Cross", and by Nunnally as "The Wreath", "The Wife", and "The Cross". Both Undset's parents were atheists and, although, in accord with the norm of the day, she and her two younger sisters were baptised and with their mother regularly attended the local Lutheran church, the milieu in which they were raised was a thoroughly secular one. Undset spent much of her life as an agnostic, but marriage and the outbreak of the First World War were to change her attitudes. During those difficult years she experienced a crisis of faith, almost imperceptible at first, then increasingly strong. The crisis led her from clear agnostic skepticism, by way of painful uneasiness about the ethical decline of the age, towards Christianity. In all her writing, one senses an observant eye for the mystery of life and for that which cannot be explained by reason or the human intellect. At the back of her sober, almost brutal realism, there is always an inkling of something unanswerable. At any rate, this crisis radically changed her views and ideology. Whereas she had once believed that man created God, she eventually came to believe that God created man. However, she did not turn to the established Lutheran Church of Norway, where she had been nominally reared. She was received into the Catholic Church in November 1924, after thorough instruction from the Catholic priest in her local parish. She was 42 years old. She subsequently became a lay Dominican. It is noteworthy that "The Master of Hestviken", written immediately after Undset's conversion, takes place in a historical period when Norway was Catholic, that it has very religious themes of the main character's relations with God and his deep feeling of sin, and that the Medieval Catholic Church is presented in a favorable light, with virtually all clergy and monks in the series being positive characters. In Norway, Undset's conversion to Catholicism was not only considered sensational; it was scandalous. It was also noted abroad, where her name was becoming known through the international success of "Kristin Lavransdatter". At the time, there were very few practicing Catholics in Norway, which was an almost exclusively Lutheran country. Anti-Catholicism was widespread not only among the Lutheran clergy, but through large sections of the population. Likewise, there was just as much anti-Catholic scorn among the Norwegian intelligentsia, many of whom were adherents of socialism and communism. The attacks against her faith and character were quite vicious at times, with the result that Undset's literary gifts were aroused in response. For many years, she participated in the public debate, going out of her way to defend the Catholic Church. In return, she was swiftly dubbed "The Mistress of Bjerkebæk" and "The Catholic Lady".
At the end of this creative eruption, Undset entered calmer waters. After 1929, she completed a series of novels set in contemporary Oslo, with a strong Catholic element. She selected her themes from the small Catholic community in Norway. But here also, the main theme is love. She also published a number of weighty historical works which put the history of Norway into a sober perspective. In addition, she translated several Icelandic sagas into modern Norwegian and published a number of literary essays, mainly on English literature, of which a long essay on the Brontë sisters and one on D. H. Lawrence are especially worth mentioning. In 1934, she published "Eleven Years Old", an autobiographical work. With a minimum of camouflage, it tells the story of her own childhood in Kristiania, of her home, rich in intellectual values and love, and of her sick father. At the end of the 1930s, she commenced work on a new historical novel set in 18th-century Scandinavia. Only the first volume, "Madame Dorthea", was published, in 1939. The Second World War broke out that same year and proceeded to break her, both as a writer and as a woman. She never completed her new novel. When Joseph Stalin's invasion of Finland touched off the Winter War, Undset supported the Finnish war effort by donating her Nobel Prize on 25 January 1940. When Germany invaded Norway in April 1940, Undset was forced to flee. She had strongly criticised Hitler since the early 1930s, and, from an early date, her books were banned in Nazi Germany. She had no wish to become a target of the Gestapo and fled to neutral Sweden. Her eldest son, Second Lieutenant Anders Svarstad of the Norwegian Army, was killed in action at the age of 27 on 27 April 1940, in an engagement with German troops at Segalstad Bridge in Gausdal. Undset's sick daughter had died shortly before the outbreak of the war. Bjerkebæk was requisitioned by the Wehrmacht and used as officers' quarters throughout the occupation of Norway. In 1940, Undset and her younger son left neutral Sweden for the United States. There, she untiringly pleaded her occupied country's cause, and that of Europe's Jews, in writings, speeches and interviews. She lived in Brooklyn Heights, New York. She was active in St. Ansgar's Scandinavian Catholic League and wrote several articles for its bulletin. She also traveled to Florida, where she became close friends with novelist Marjorie Kinnan Rawlings. Following the German execution of the Danish Lutheran pastor Kaj Munk on 4 January 1944, the Danish resistance newspaper "De frie Danske" printed condemnatory articles by influential Scandinavians, including Undset. Undset returned to Norway after the liberation in 1945. She lived another four years but never published another word. Undset died at 67 in Lillehammer, Norway, where she had lived from 1919 through 1940. She was buried in the village of Mesnali, 15 kilometers east of Lillehammer, where her daughter and the son who died in battle are also remembered. The grave is recognizable by three black crosses.
https://en.wikipedia.org/wiki?curid=29236
Systems theory Systems theory is the interdisciplinary study of systems. A system is a cohesive conglomeration of interrelated and interdependent parts which can be natural or human-made. Every system is bounded by space and time, influenced by its environment, defined by its structure and purpose, and expressed through its functioning. A system may be more than the sum of its parts if it expresses synergy or emergent behavior. Changing one part of a system may affect other parts or the whole system. It may be possible to predict these changes in patterns of behavior. For systems that learn and adapt, the growth and the degree of adaptation depend upon how well the system is engaged with its environment. Some systems support other systems, maintaining the other system to prevent failure. The goals of systems theory are to model a system's dynamics, constraints, and conditions, and to elucidate principles (such as purpose, measure, methods, tools) that can be discerned and applied to other systems at every level of nesting, and in a wide range of fields, for achieving optimized equifinality. General systems theory is about developing broadly applicable concepts and principles, as opposed to concepts and principles specific to one domain of knowledge. It distinguishes dynamic or active systems from static or passive systems. Active systems are activity structures or components that interact in behaviours and processes. Passive systems are structures and components that are being processed. For example, a program is passive when it is a disc file and active when it runs in memory. The field is related to systems thinking, machine logic, and systems engineering. The term "general systems theory" originates from Bertalanffy's general systems theory (GST). His ideas were adopted by others, including Kenneth E. Boulding, William Ross Ashby and Anatol Rapoport, working in mathematics, psychology, biology, game theory, and social network analysis. In sociology, systems thinking started earlier, in the 20th century. Stichweh states: "... Since its beginnings the social sciences were an important part of the establishment of systems theory... the two most influential suggestions were the comprehensive sociological versions of systems theory which were proposed by Talcott Parsons since the 1950s and by Niklas Luhmann since the 1970s." References include Parsons' action theory and Luhmann's social systems theory. Elements of systems thinking can also be seen in the work of James Clerk Maxwell, in particular control theory. Systems theory is manifest in the work of practitioners in many disciplines: for example, in the works of biologist Ludwig von Bertalanffy, linguist Béla H. Bánáthy and sociologist Talcott Parsons; in the study of ecological systems by Howard T. Odum and Eugene Odum; in Fritjof Capra's study of organizational theory; in the study of management by Peter Senge; in interdisciplinary areas such as human resource development in the works of Richard A. Swanson; and in the works of educators Debora Hammond and Alfonso Montuori. As a transdisciplinary, interdisciplinary, and multiperspectival endeavor, systems theory brings together principles and concepts from ontology, the philosophy of science, physics, computer science, biology and engineering, as well as geography, sociology, political science, psychotherapy (especially family systems therapy), and economics. Systems theory promotes dialogue between autonomous areas of study as well as within systems science itself.
In this respect, with the possibility of misinterpretations, von Bertalanffy believed a general theory of systems "should be an important regulative device in science", to guard against superficial analogies that "are useless in science and harmful in their practical consequences". Others remain closer to the direct systems concepts developed by the original theorists. For example, Ilya Prigogine, of the Center for Complex Quantum Systems at the University of Texas, Austin, has studied emergent properties, suggesting that they offer analogues for living systems. The theories of autopoiesis of Francisco Varela and Humberto Maturana represent further developments in this field. Important names in contemporary systems science include Russell Ackoff, Ruzena Bajcsy, Béla H. Bánáthy, Gregory Bateson, Anthony Stafford Beer, Peter Checkland, Barbara Grosz, Brian Wilson, Robert L. Flood, Allenna Leonard, Radhika Nagpal, Fritjof Capra, Warren McCulloch, Kathleen Carley, Michael C. Jackson, Katia Sycara, and Edgar Morin, among others. With the modern foundations for a general theory of systems following World War I, Ervin Laszlo, in the preface to Bertalanffy's book "Perspectives on General System Theory", points out that the translation of "general system theory" from German into English has "wrought a certain amount of havoc". A system in this frame of reference can contain regularly interacting or interrelating groups of activities. For example, in noting the influence in organizational psychology as the field evolved from "an individually oriented industrial psychology to a systems and developmentally oriented organizational psychology", some theorists recognize that organizations have complex social systems; separating the parts from the whole reduces the overall effectiveness of organizations. Conventional models center on individuals, structures, departments and units, separating the part from the whole, instead of recognizing the interdependence between groups of individuals, structures and processes that enable an organization to function. Laszlo explains that the new systems view of organized complexity went "one step beyond the Newtonian view of organized simplicity", which reduced the parts from the whole, or understood the whole without relation to the parts. The relationship between organisations and their environments can be seen as the foremost source of complexity and interdependence. In most cases, the whole has properties that cannot be known from analysis of the constituent elements in isolation. Béla H. Bánáthy, who argued—along with the founders of the systems society—that "the benefit of humankind" is the purpose of science, has made significant and far-reaching contributions to the area of systems theory. For the Primer Group at ISSS, Bánáthy defines a perspective that iterates this view. Similar ideas are found in learning theories that developed from the same fundamental concepts, emphasising how understanding results from knowing concepts both in part and as a whole. In fact, Bertalanffy's organismic psychology paralleled the learning theory of Jean Piaget. Some consider interdisciplinary perspectives critical in breaking away from industrial-age models and thinking, wherein history represents history and math represents math, while the arts and sciences specialization remains separate and many treat teaching as behaviorist conditioning.
The contemporary work of Peter Senge provides detailed discussion of the commonplace critique of educational systems grounded in conventional assumptions about learning, including the problems with fragmented knowledge and lack of holistic learning from the "machine-age thinking" that became a "model of school separated from daily life". In this way, some systems theorists attempt to provide alternatives to, and evolved ideation beyond, orthodox theories grounded in classical assumptions, including those of individuals such as Max Weber and Émile Durkheim in sociology and Frederick Winslow Taylor in scientific management. The theorists sought holistic methods by developing systems concepts that could integrate with different areas. Some may view the contradiction of reductionism in conventional theory (which has as its subject a single part) as simply an example of changing assumptions. The emphasis with systems theory shifts from parts to the organization of parts, recognizing interactions of the parts as not static and constant but dynamic processes. Some questioned the conventional closed systems with the development of open systems perspectives. The shift originated from absolute and universal authoritative principles and knowledge to relative and general conceptual and perceptual knowledge, and still remains in the tradition of theorists that sought to provide means to organize human life. In other words, theorists rethought the preceding history of ideas; they did not lose them. Mechanistic thinking was particularly critiqued, especially the industrial-age mechanistic metaphor for the mind from interpretations of Newtonian mechanics by Enlightenment philosophers and later psychologists that laid the foundations of modern organizational theory and management by the late 19th century. System dynamics is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, and time delays.
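As an illustration of those building blocks, the following minimal sketch (Python; all numbers are invented for illustration, not taken from any published model) simulates a single stock with a constant inflow and an outflow that acts as a negative feedback on the stock level:

    # A minimal stock-and-flow model: one stock, a constant inflow, and an
    # outflow proportional to the stock (a balancing feedback loop).
    stock = 100.0          # initial stock level (e.g., inventory)
    inflow = 10.0          # units added per time step
    drain_fraction = 0.2   # fraction of the stock that leaves per time step

    for step in range(25):
        outflow = drain_fraction * stock   # feedback: outflow rises with the stock
        stock += inflow - outflow          # Euler integration with time step 1

    # The stock settles toward inflow / drain_fraction = 50 units,
    # the goal-seeking behaviour characteristic of a balancing loop.
    print(round(stock, 2))

Adding a delay between a change in the stock and the response of a flow is what typically produces the oscillation and other nonlinear behaviour studied in the field.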
Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems. Systems engineering is an interdisciplinary approach and means for enabling the realisation and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts. Systems engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user's needs. Systems psychology is a branch of psychology that studies human behaviour and experience in complex systems. It received inspiration from systems theory and systems thinking, as well as the basics of theoretical work from Roger Barker, Gregory Bateson, Humberto Maturana and others. It takes an approach in which groups and individuals are considered as systems in homeostasis. Systems psychology "includes the domain of engineering psychology, but in addition seems more concerned with societal systems and with the study of motivational, affective, cognitive and group behavior that holds the name engineering psychology." In systems psychology, "characteristics of organizational behaviour, for example individual needs, rewards, expectations, and attributes of the people interacting with the systems, considers this process in order to create an effective system". Whether considering the first systems of written communication, from Sumerian cuneiform to Mayan numerals, or feats of engineering such as the Egyptian pyramids, systems thinking dates back to antiquity. Differentiated from Western rationalist traditions of philosophy, C. West Churchman often identified the I Ching as a systems approach sharing a frame of reference similar to pre-Socratic philosophy and Heraclitus. Von Bertalanffy traced systems concepts to the philosophy of G.W. Leibniz and Nicholas of Cusa's "coincidentia oppositorum". While modern systems can seem considerably more complicated, today's systems may embed themselves in history. Figures like James Joule and Sadi Carnot represent an important step in introducing the "systems approach" into the (rationalist) hard sciences of the 19th century through the study of energy transformation. The thermodynamics of that century, developed by Rudolf Clausius, Josiah Gibbs and others, then established the "system" reference model as a formal scientific object. The Society for General Systems Research specifically catalyzed systems theory as an area of study, which developed following the World Wars from the work of Ludwig von Bertalanffy, Anatol Rapoport, Kenneth E. Boulding, William Ross Ashby, Margaret Mead, Gregory Bateson, C. West Churchman and others in the 1950s. Cognizant of advances in science that questioned classical assumptions in the organizational sciences, Bertalanffy's idea to develop a theory of systems began as early as the interwar period; he published "An Outline for General Systems Theory" in the "British Journal for the Philosophy of Science" (Vol. 1, No. 2) in 1950. 
Where assumptions in Western science from Greek thought with Plato and Aristotle to Newton's "Principia" have historically influenced all areas from the hard to the social sciences (see David Easton's seminal development of the "political system" as an analytical construct), the original theorists explored the implications of twentieth-century advances in terms of systems. Subjects like complexity, self-organization, connectionism and adaptive systems were already being studied in the 1940s and 1950s. In fields like cybernetics, researchers such as Norbert Wiener, William Ross Ashby, John von Neumann and Heinz von Foerster examined complex systems mathematically. John von Neumann discovered cellular automata and self-reproducing systems with only pencil and paper. Aleksandr Lyapunov and Jules Henri Poincaré worked on the foundations of chaos theory without any computer at all. At the same time Howard T. Odum, known as a radiation ecologist, recognized that the study of general systems required a language that could depict energetics, thermodynamics and kinetics at any system scale. Odum developed a general system, or universal language, based on the circuit language of electronics, to fulfill this role, known as the Energy Systems Language. Between 1929 and 1951, Robert Maynard Hutchins at the University of Chicago undertook efforts to encourage innovation and interdisciplinary research in the social sciences, aided by the Ford Foundation with the interdisciplinary Division of the Social Sciences established in 1931. Numerous scholars had actively engaged with these ideas before ("Tectology" by Alexander Bogdanov, published in 1912–1917, is a remarkable example), but in 1937 von Bertalanffy presented the general theory of systems at a conference at the University of Chicago. The systems view was based on several fundamental ideas. First, all phenomena can be viewed as a web of relationships among elements, or a system. Second, all systems, whether electrical, biological, or social, have common patterns, behaviors, and properties that the observer can analyze and use to develop greater insight into the behavior of complex phenomena and to move closer toward a unity of the sciences. System philosophy, methodology and application are complementary to this science. By 1956, theorists had established the Society for General Systems Research, which was renamed the International Society for the Systems Sciences in 1988. The Cold War affected the research project for systems theory in ways that sorely disappointed many of the seminal theorists. Some began to recognize that theories defined in association with systems theory had deviated from the initial General Systems Theory (GST) view. The economist Kenneth Boulding, an early researcher in systems theory, had concerns over the manipulation of systems concepts. Boulding concluded from the effects of the Cold War that abuses of power always prove consequential and that systems theory might address such issues. Since the end of the Cold War, a renewed interest in systems theory has emerged, combined with efforts to strengthen an ethical view on the subject. Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. The term goes back to Bertalanffy's book titled "General System Theory: Foundations, Development, Applications" from 1968. He developed the "allgemeine Systemlehre" (general systems theory) first via lectures beginning in 1937 and then via publications beginning in 1946. 
Von Bertalanffy's objective was to bring together under one heading the organismic science he had observed in his work as a biologist. His desire, as he explained in GST, was to use the word "system" for those principles that are common to systems in general. Ervin Laszlo addressed this point in the preface to von Bertalanffy's book "Perspectives on General System Theory". Ludwig von Bertalanffy outlines systems inquiry into three major domains: Philosophy, Science, and Technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry (philosophy, theory, method, and application), which, he explained, operate in a recursive relationship. Integrating Philosophy and Theory as Knowledge, and Method and Application as action, Systems Inquiry then is knowledgeable action. Cybernetics is the study of the communication and control of regulatory feedback both in living and lifeless systems (organisms, organizations, machines), and in combinations of those. Its focus is how anything (digital, mechanical or biological) controls its behavior, processes information, reacts to information, and changes or can be changed to better accomplish those three primary tasks. The terms "systems theory" and "cybernetics" have been widely used as synonyms. Some authors use the term "cybernetic" systems to denote a proper subset of the class of general systems, namely those systems that include feedback loops. However, Gordon Pask's account of eternally interacting actor loops (that produce finite products) makes general systems a proper subset of cybernetics. According to Jackson (2000), von Bertalanffy promoted an embryonic form of general system theory (GST) as early as the 1920s and 1930s, but it was not until the early 1950s that it became more widely known in scientific circles. Threads of cybernetics began in the late 1800s and led toward the publication of seminal works (e.g., Wiener's "Cybernetics" in 1948 and von Bertalanffy's "General Systems Theory" in 1968). Cybernetics arose more from engineering fields and GST from biology. If anything, it appears that although the two probably mutually influenced each other, cybernetics had the greater influence. Von Bertalanffy (1969) specifically makes the point of distinguishing between the areas in noting the influence of cybernetics: "Systems theory is frequently identified with cybernetics and control theory. This again is incorrect. Cybernetics as the theory of control mechanisms in technology and nature is founded on the concepts of information and feedback, but as part of a general theory of systems;" then reiterates: "the model is of wide application but should not be identified with 'systems theory' in general", and that "warning is necessary against its incautious expansion to fields for which its concepts are not made." (17-23). Jackson (2000) also claims von Bertalanffy was informed by Alexander Bogdanov's three-volume "Tectology", which was published in Russia between 1912 and 1917 and translated into German in 1928. He also states that it is clear to Gorelik (1975) that the "conceptual part" of general system theory (GST) had first been put in place by Bogdanov. A similar position is held by Mattessich (1978) and Capra (1996). Ludwig von Bertalanffy never even mentioned Bogdanov in his works, which Capra (1996) finds "surprising". Cybernetics, catastrophe theory, chaos theory and complexity theory have the common goal of explaining complex systems that consist of a large number of mutually interacting and interrelated parts in terms of those interactions. 
Cellular automata (CA), neural networks (NN), artificial intelligence (AI), and artificial life (ALife) are related fields, but they do not try to describe general (universal) complex (singular) systems. The best context in which to compare the different "C"-theories about complex systems is historical, one that emphasizes different tools and methodologies, from pure mathematics in the beginning to pure computer science now. Since the beginning of chaos theory, when Edward Lorenz accidentally discovered a strange attractor with his computer, computers have become an indispensable source of information. One could not imagine the study of complex systems without the use of computers today. Complex adaptive systems (CAS) are special cases of complex systems. They are "complex" in that they are diverse and composed of multiple, interconnected elements; they are "adaptive" in that they have the capacity to change and learn from experience. In contrast to control systems, in which negative feedback dampens and reverses disequilibria, CAS are often subject to positive feedback, which magnifies and perpetuates changes, converting local irregularities into global features. Another mechanism, dual-phase evolution, arises when connections between elements repeatedly change, shifting the system between phases of variation and selection that reshape the system. Differently from Stafford Beer's Management Cybernetics, Cultural Agency Theory (CAT) provides a modelling approach to explore predefined contexts and can be adapted to reflect those contexts. The term "complex adaptive system" was coined at the interdisciplinary Santa Fe Institute (SFI) by John H. Holland, Murray Gell-Mann and others. An alternative conception of complex adaptive (and learning) systems, methodologically at the interface between natural and social science, has been presented by Kristo Ivanov in terms of hypersystems. This concept intends to offer a theoretical basis for understanding and implementing participation of "users", decision makers, designers and affected actors, in the development or maintenance of self-learning systems. 
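To make the reference to Edward Lorenz's computational discovery concrete, here is a minimal sketch (not part of the original article) that integrates the Lorenz system with the classic parameter values sigma = 10, rho = 28, beta = 8/3; the step size, step count, and initial conditions are arbitrary illustrative choices.

```python
# Minimal sketch of the Lorenz system, the origin of the "strange attractor"
# mentioned above. Classic parameters sigma=10, rho=28, beta=8/3; simple
# Euler integration is used, adequate only for illustration.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two nearby starting points diverge: sensitive dependence on initial conditions.
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.000001)
for step in range(3000):
    a, b = lorenz_step(*a), lorenz_step(*b)
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"separation after 3000 steps: {separation:.3f}")
```

The rapid growth of the separation between the two trajectories is the hallmark of chaotic dynamics that Lorenz stumbled upon.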
https://en.wikipedia.org/wiki?curid=29238
Sulfuric acid Sulfuric acid (American English) or sulphuric acid (historical spelling), also known as oil of vitriol, is a mineral acid composed of the elements sulfur, oxygen and hydrogen, with molecular formula H2SO4. It is a colorless, odorless, and viscous liquid that is soluble in water and is synthesized in reactions that are highly exothermic. Its corrosiveness can be mainly ascribed to its strong acidic nature and, at high concentration, its dehydrating properties. It is also hygroscopic, readily absorbing water vapor from the air. Upon contact, sulfuric acid can cause severe chemical burns and even secondary thermal burns; it is very dangerous even at lower concentrations. Sulfuric acid is a very important commodity chemical, and a nation's sulfuric acid production is a good indicator of its industrial strength. It is widely produced by different methods, such as the contact process, the wet sulfuric acid process, the lead chamber process and some other methods. Sulfuric acid is also a key substance in the chemical industry. It is most commonly used in fertilizer manufacture, but is also important in mineral processing, oil refining, wastewater processing, and chemical synthesis. It has a wide range of end applications, including in domestic acidic drain cleaners, as an electrolyte in lead-acid batteries, in dehydrating compounds, and in various cleaning agents. Although nearly 100% sulfuric acid solutions can be made, the subsequent loss of SO3 at the boiling point brings the concentration to 98.3% acid. The 98.3% grade is more stable in storage, and is the usual form of what is described as "concentrated sulfuric acid". Other concentrations are used for different purposes. "Chamber acid" and "tower acid" were the two concentrations of sulfuric acid produced by the lead chamber process, chamber acid being the acid produced in the lead chamber itself (<70% to avoid contamination with nitrosylsulfuric acid) and tower acid being the acid recovered from the bottom of the Glover tower. They are now obsolete as commercial concentrations of sulfuric acid, although they may be prepared in the laboratory from concentrated sulfuric acid if needed. In particular, "10 M" sulfuric acid (the modern equivalent of chamber acid, used in many titrations) is prepared by slowly adding 98% sulfuric acid to an equal volume of water, with good stirring: the temperature of the mixture can rise to 80 °C (176 °F) or higher. Sulfuric acid reacts with its anhydride, SO3, to form H2S2O7, called "pyrosulfuric acid", "fuming sulfuric acid", "disulfuric acid" or "oleum" or, less commonly, "Nordhausen acid". Concentrations of oleum are either expressed in terms of % SO3 (called % oleum) or as % H2SO4 (the amount made if H2O were added); common concentrations are 40% oleum (109% H2SO4) and 65% oleum (114.6% H2SO4). Pure H2S2O7 is a solid with a melting point of 36 °C. Pure sulfuric acid has a vapor pressure of <0.001 mmHg at 25 °C and 1 mmHg at 145.8 °C, and 98% sulfuric acid has a <1 mmHg vapor pressure at 40 °C. Pure sulfuric acid is a viscous clear liquid, like oil, and this explains the old name of the acid ('oil of vitriol'). Commercial sulfuric acid is sold in several different purity grades. Technical grade is impure and often colored, but is suitable for making fertilizer. Pure grades, such as United States Pharmacopeia (USP) grade, are used for making pharmaceuticals and dyestuffs. Analytical grades are also available. 
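As a worked example of how these concentration figures relate (an illustrative calculation, not from the article; the density and molar mass are standard handbook values, roughly 1.84 g/mL for 98% acid and 98.08 g/mol for H2SO4):

```python
# Illustrative conversion from mass-percent concentration to molarity.
# Assumed handbook values: density of ~98% sulfuric acid ~1.84 g/mL,
# molar mass of H2SO4 ~98.08 g/mol.

def molarity(mass_fraction: float, density_g_per_ml: float,
             molar_mass: float = 98.08) -> float:
    grams_acid_per_litre = density_g_per_ml * 1000.0 * mass_fraction
    return grams_acid_per_litre / molar_mass

conc = molarity(0.983, 1.84)
print(f"98.3% w/w sulfuric acid is roughly {conc:.1f} mol/L")  # ~18.4 M
```

Halving roughly 18.4 M by mixing with an equal volume of water is consistent with the approximately 10 M "chamber acid equivalent" described above (mixing is not perfectly volume-additive, which pushes the result slightly above half).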
Nine hydrates are known; among those confirmed are the tetrahydrate (H2SO4·4H2O), the hemihexahydrate (H2SO4·6½H2O) and the octahydrate (H2SO4·8H2O). Anhydrous H2SO4 is a very polar liquid, having a dielectric constant of around 100. It has a high electrical conductivity, caused by dissociation through protonating itself, a process known as autoprotolysis: 2 H2SO4 ⇌ H3SO4+ + HSO4−. The equilibrium constant for the autoprotolysis is Kap (25 °C) = [H3SO4+][HSO4−] ≈ 2.7 × 10^−4. The comparable equilibrium constant for water, Kw, is 10^−14, a factor of about 10^10 (10 billion) smaller. In spite of the viscosity of the acid, the effective conductivities of the H3SO4+ and HSO4− ions are high due to an intramolecular proton-switch mechanism (analogous to the Grotthuss mechanism in water), making sulfuric acid a good conductor of electricity. It is also an excellent solvent for many reactions. Because the hydration reaction of sulfuric acid is highly exothermic, dilution should always be performed by adding the acid to the water rather than the water to the acid. Because the reaction is in an equilibrium that favors the rapid protonation of water, addition of acid to the water ensures that the acid is the limiting reagent. This reaction is best thought of as the formation of hydronium ions: H2SO4 + H2O → H3O+ + HSO4−. Because the hydration of sulfuric acid is thermodynamically favorable and its affinity for water is sufficiently strong, sulfuric acid is an excellent dehydrating agent. Concentrated sulfuric acid has a very powerful dehydrating property, removing water (H2O) from other chemical compounds, including sugar and other carbohydrates, and producing carbon, heat, and steam. In the laboratory, this is often demonstrated by mixing table sugar (sucrose) into sulfuric acid; the net dehydration can be written C12H22O11 → 12 C + 11 H2O, with the water taken up by the acid. The sugar changes from white to dark brown and then to black as carbon is formed. A rigid column of black, porous carbon will emerge as well. The carbon will smell strongly of caramel due to the heat generated. Similarly, mixing starch into concentrated sulfuric acid will give elemental carbon and water, the latter absorbed by the sulfuric acid (which becomes slightly diluted). The effect of this can be seen when concentrated sulfuric acid is spilled on paper, which is composed of cellulose; the cellulose reacts to give a burnt appearance, and the carbon appears much as soot would in a fire. Although less dramatic, the action of the acid on cotton, even in diluted form, will destroy the fabric. The reaction with copper(II) sulfate can also demonstrate the dehydration property of sulfuric acid: the blue crystals of the pentahydrate change into white anhydrous powder as water is removed (CuSO4·5H2O → CuSO4 + 5 H2O). As an acid, sulfuric acid reacts with most bases to give the corresponding sulfate. For example, the blue copper salt copper(II) sulfate, commonly used for electroplating and as a fungicide, is prepared by the reaction of copper(II) oxide with sulfuric acid: CuO + H2SO4 → CuSO4 + H2O. Sulfuric acid can also be used to displace weaker acids from their salts. Reaction with sodium acetate, for example, displaces acetic acid, CH3COOH, and forms sodium bisulfate: H2SO4 + CH3COONa → NaHSO4 + CH3COOH. Similarly, reacting sulfuric acid with potassium nitrate can be used to produce nitric acid and a precipitate of potassium bisulfate. When combined with nitric acid, sulfuric acid acts both as an acid and a dehydrating agent, forming the nitronium ion NO2+, which is important in nitration reactions involving electrophilic aromatic substitution. This type of reaction, where protonation occurs on an oxygen atom, is important in many organic chemistry reactions, such as Fischer esterification and dehydration of alcohols. 
When allowed to react with superacids, sulfuric acid can act as a base and be protonated, forming the [H3SO4]+ ion. Salts of [H3SO4]+ have been prepared in liquid HF; the reaction is thermodynamically favored by the high bond enthalpy of the Si–F bond formed in the side product. Protonation using simply HF/SbF5, however, has met with failure, as pure sulfuric acid undergoes self-ionization to give [H3O]+ ions (2 H2SO4 ⇌ [H3O]+ + [HS2O7]−), which prevents the conversion of H2SO4 to [H3SO4]+ by the HF/SbF5 system. Even dilute sulfuric acid reacts with many metals via a single displacement reaction, as with other typical acids, producing hydrogen gas and salts (the metal sulfate). It attacks reactive metals (metals at positions above copper in the reactivity series) such as iron, aluminium, zinc, manganese, magnesium, and nickel, for example: Fe + H2SO4 → FeSO4 + H2. Concentrated sulfuric acid can serve as an oxidizing agent, releasing sulfur dioxide, as in its reaction with copper: Cu + 2 H2SO4 → CuSO4 + SO2 + 2 H2O. Lead and tungsten, however, are resistant to sulfuric acid. Hot concentrated sulfuric acid oxidizes carbon (as bituminous coal) and sulfur: C + 2 H2SO4 → CO2 + 2 SO2 + 2 H2O and S + 2 H2SO4 → 3 SO2 + 2 H2O. It reacts with sodium chloride and gives hydrogen chloride gas and sodium bisulfate: NaCl + H2SO4 → NaHSO4 + HCl. Benzene undergoes electrophilic aromatic substitution with sulfuric acid to give the corresponding sulfonic acid: C6H6 + H2SO4 → C6H5SO3H + H2O. Pure sulfuric acid is not encountered naturally on Earth in anhydrous form, due to its great affinity for water. Dilute sulfuric acid is a constituent of acid rain, which is formed by atmospheric oxidation of sulfur dioxide in the presence of water, i.e., oxidation of sulfurous acid. When sulfur-containing fuels such as coal or oil are burned, sulfur dioxide is the main byproduct (besides the chief products carbon oxides and water). Sulfuric acid is formed naturally by the oxidation of sulfide minerals, such as iron sulfide. The resulting water can be highly acidic and is called acid mine drainage (AMD) or acid rock drainage (ARD). This acidic water is capable of dissolving metals present in sulfide ores, which results in brightly colored, toxic solutions. The oxidation of pyrite (iron sulfide) by molecular oxygen produces iron(II), or Fe2+: 2 FeS2 + 7 O2 + 2 H2O → 2 Fe2+ + 4 SO4^2− + 4 H+. The Fe2+ can be further oxidized to Fe3+: 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O. The Fe3+ produced can be precipitated as the hydroxide or hydrous iron oxide: Fe3+ + 3 H2O → Fe(OH)3 + 3 H+. The iron(III) ion ("ferric iron") can also oxidize pyrite: FeS2 + 14 Fe3+ + 8 H2O → 15 Fe2+ + 2 SO4^2− + 16 H+. When iron(III) oxidation of pyrite occurs, the process can become rapid. pH values below zero have been measured in ARD produced by this process. ARD can also produce sulfuric acid at a slower rate, so that the acid neutralizing capacity (ANC) of the aquifer can neutralize the produced acid. In such cases, the total dissolved solids (TDS) concentration of the water can be increased by the dissolution of minerals from the acid-neutralization reaction with the minerals. Sulfuric acid is used as a defense by certain marine species; for example, the phaeophyte alga "Desmarestia munda" (order Desmarestiales) concentrates sulfuric acid in cell vacuoles. In the stratosphere, the atmosphere's second layer, which is generally between 10 and 50 km above Earth's surface, sulfuric acid is formed by the oxidation of volcanic sulfur dioxide by the hydroxyl radical. Because sulfuric acid reaches supersaturation in the stratosphere, it can nucleate aerosol particles and provide a surface for aerosol growth via condensation and coagulation with other water-sulfuric acid aerosols. This results in the stratospheric aerosol layer. The permanent Venusian clouds produce a concentrated acid rain, as the clouds in the atmosphere of Earth produce water rain. 
Jupiter's moon Europa is also thought to have an atmosphere containing sulfuric acid hydrates. Sulfuric acid is produced from sulfur, oxygen and water via the conventional contact process (DCDA) or the wet sulfuric acid process (WSA). In the contact process, sulfur is first burned to produce sulfur dioxide. The sulfur dioxide is oxidized to sulfur trioxide by oxygen in the presence of a vanadium(V) oxide catalyst. This reaction is reversible and the formation of the sulfur trioxide is exothermic. The sulfur trioxide is absorbed into 97–98% H2SO4 to form oleum (H2S2O7), also known as fuming sulfuric acid. The oleum is then diluted with water to form concentrated sulfuric acid. Directly dissolving SO3 in water is not practiced. In the wet sulfuric acid process, the first step is likewise the burning of sulfur to produce sulfur dioxide: S + O2 → SO2; or, alternatively, hydrogen sulfide (H2S) gas is incinerated to SO2 gas: 2 H2S + 3 O2 → 2 SO2 + 2 H2O. The sulfur dioxide is then oxidized to sulfur trioxide using oxygen with vanadium(V) oxide as catalyst: 2 SO2 + O2 ⇌ 2 SO3. The sulfur trioxide is hydrated into sulfuric acid: SO3 + H2O → H2SO4. The last step is the condensation of the sulfuric acid to liquid 97–98% H2SO4. Another method is the less well-known metabisulfite method, in which metabisulfite is placed at the bottom of a beaker and 12.6 molar hydrochloric acid is added. The resulting gas is bubbled through nitric acid, which will release brown/red vapors of nitrogen dioxide. The completion of the reaction is indicated by the ceasing of the fumes. This method does not produce an inseparable mist, which is quite convenient. In principle, sulfuric acid can be produced in the laboratory by burning sulfur in air followed by dissolving the resulting sulfur dioxide in a hydrogen peroxide solution: SO2 + H2O2 → H2SO4. Alternatively, sulfur dioxide can be dissolved in an aqueous solution of an oxidizing metal salt such as copper(II) or iron(III) chloride: 2 FeCl3 + SO2 + 2 H2O → 2 FeCl2 + H2SO4 + 2 HCl. Two other, less well-known laboratory methods produce sulfuric acid, albeit in dilute form, and require some extra effort in purification. A solution of copper(II) sulfate can be electrolyzed with a copper cathode and a platinum/graphite anode to give spongy copper at the cathode and evolution of oxygen gas at the anode, leaving dilute sulfuric acid in solution: 2 CuSO4 + 2 H2O → 2 Cu + O2 + 2 H2SO4. The reaction is complete when the solution turns from blue to clear (the production of hydrogen at the cathode is another sign). More costly, dangerous, and troublesome yet novel is the electrobromine method, which employs a mixture of sulfur, water, and hydrobromic acid as the electrolytic solution. The sulfur is pushed to the bottom of the container under the acid solution; then the copper cathode and platinum/graphite anode are used, with the cathode near the surface and the anode at the bottom of the electrolyte, to apply the current. This may take longer and emits toxic bromine/sulfur bromide vapors, but the reactant acid is recyclable; overall, only sulfur and water are converted to sulfuric acid (omitting losses of acid as vapors). Prior to 1900, most sulfuric acid was manufactured by the lead chamber process. As late as 1940, up to 50% of sulfuric acid manufactured in the United States was produced by chamber process plants. In the early to mid nineteenth century, "vitriol" plants existed, among other places, in Prestonpans in Scotland, Shropshire, and the Lagan Valley in County Antrim, Ireland, where it was used as a bleach for linen. Early bleaching of linen was done using lactic acid from sour milk, but this was a slow process and the use of vitriol sped up the bleaching process. Sulfuric acid is a very important commodity chemical, and indeed, a nation's sulfuric acid production is a good indicator of its industrial strength. 
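A small stoichiometric sketch of the contact process chain described above (S → SO2 → SO3 → H2SO4); the molar masses are standard values and the input tonnage and conversion fraction are arbitrary example assumptions:

```python
# Stoichiometry of the contact process: each mole of sulfur ends up as one
# mole of H2SO4 (S -> SO2 -> SO3 -> H2SO4), assuming complete conversion.

M_S, M_H2SO4 = 32.06, 98.08  # g/mol, standard molar masses

def acid_from_sulfur(tonnes_sulfur: float, conversion: float = 1.0) -> float:
    moles = tonnes_sulfur / M_S          # tonne-moles of S burned
    return moles * conversion * M_H2SO4  # tonnes of H2SO4 produced

print(f"100 t of sulfur -> about {acid_from_sulfur(100):.0f} t of sulfuric acid")
```

The roughly threefold mass gain (about 306 t of acid from 100 t of sulfur) simply reflects the oxygen and water incorporated along the chain.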
World production in the year 2004 was about 180 million tonnes, with the following geographic distribution: Asia 35%, North America (including Mexico) 24%, Africa 11%, Western Europe 10%, Eastern Europe and Russia 10%, Australia and Oceania 7%, South America 7%. Most of this amount (≈60%) is consumed for fertilizers, particularly superphosphates, ammonium phosphate and ammonium sulfates. About 20% is used in the chemical industry for production of detergents, synthetic resins, dyestuffs, pharmaceuticals, petroleum catalysts, insecticides and antifreeze, as well as in various processes such as oil well acidizing, aluminium reduction, paper sizing, and water treatment. About 6% of uses are related to pigments and include paints, enamels, printing inks, coated fabrics and paper; the rest is dispersed into a multitude of applications such as production of explosives, cellophane, acetate and viscose textiles, lubricants, non-ferrous metals, and batteries. The major use for sulfuric acid is in the "wet method" for the production of phosphoric acid, used for the manufacture of phosphate fertilizers. In this method, phosphate rock is used, and more than 100 million tonnes are processed annually. This raw material is shown below as fluorapatite, though the exact composition may vary. It is treated with 93% sulfuric acid to produce calcium sulfate, hydrogen fluoride (HF) and phosphoric acid. The HF is removed as hydrofluoric acid. The overall process can be represented as: Ca5F(PO4)3 + 5 H2SO4 + 10 H2O → 5 CaSO4·2H2O + HF + 3 H3PO4. Ammonium sulfate, an important nitrogen fertilizer, is most commonly produced as a byproduct from coking plants supplying the iron- and steel-making plants. Reacting the ammonia produced in the thermal decomposition of coal with waste sulfuric acid allows the ammonia to be crystallized out as a salt (often brown because of iron contamination) and sold into the agro-chemicals industry. Another important use for sulfuric acid is the manufacture of aluminium sulfate, also known as paper maker's alum. This can react with small amounts of soap on paper pulp fibers to give gelatinous aluminium carboxylates, which help to coagulate the pulp fibers into a hard paper surface. It is also used for making aluminium hydroxide, which is used at water treatment plants to filter out impurities, as well as to improve the taste of the water. Aluminium sulfate is made by reacting bauxite with sulfuric acid: Al2O3 + 3 H2SO4 → Al2(SO4)3 + 3 H2O. Sulfuric acid is also important in the manufacture of dyestuff solutions. The sulfur–iodine cycle is a series of thermochemical processes possibly usable to produce hydrogen from water. It consists of three chemical reactions whose net reactant is water and whose net products are hydrogen and oxygen. The compounds of sulfur and iodine are recovered and reused, hence the consideration of the process as a cycle. This process is endothermic and must occur at high temperatures, so energy in the form of heat has to be supplied. The sulfur–iodine cycle has been proposed as a way to supply hydrogen for a hydrogen-based economy. It is an alternative to electrolysis, and does not require hydrocarbons like current methods of steam reforming. But note that all of the available energy in the hydrogen so produced is supplied by the heat used to make it. The sulfur–iodine cycle is currently being researched as a feasible method of obtaining hydrogen, but the concentrated, corrosive acid at high temperatures poses currently insurmountable safety hazards if the process were built on a large scale. 
Sulfuric acid is used in large quantities by the iron and steelmaking industry to remove oxidation, rust, and scale from rolled sheet and billets prior to sale to the automobile and major appliance industries. Used acid is often recycled using a spent acid regeneration (SAR) plant. These plants combust spent acid with natural gas, refinery gas, fuel oil or other fuel sources. This combustion process produces gaseous sulfur dioxide (SO2) and sulfur trioxide (SO3), which are then used to manufacture "new" sulfuric acid. SAR plants are common additions to metal smelting plants, oil refineries, and other industries where sulfuric acid is consumed in bulk, as operating a SAR plant is much cheaper than the recurring costs of spent acid disposal and new acid purchases. Hydrogen peroxide (H2O2) can be added to sulfuric acid to produce piranha solution, a powerful but very toxic cleaning solution with which substrate surfaces can be cleaned. Piranha solution is typically used in the microelectronics industry, and also in laboratory settings to clean glassware. Sulfuric acid is used for a variety of other purposes in the chemical industry. For example, it is the usual acid catalyst for the conversion of cyclohexanone oxime to caprolactam, used for making nylon. It is used for making hydrochloric acid from salt via the Mannheim process. Much is used in petroleum refining, for example as a catalyst for the reaction of isobutane with isobutylene to give isooctane, a compound that raises the octane rating of gasoline (petrol). Sulfuric acid is also often used as a dehydrating or oxidising agent in industrial reactions, such as the dehydration of various sugars to form solid carbon. Sulfuric acid acts as the electrolyte in lead–acid batteries (lead–acid accumulators). During discharge, the reaction at the anode is Pb + SO4^2− → PbSO4 + 2 e−; at the cathode it is PbO2 + 4 H+ + SO4^2− + 2 e− → PbSO4 + 2 H2O; and the overall reaction is Pb + PbO2 + 2 H2SO4 → 2 PbSO4 + 2 H2O. Sulfuric acid at high concentrations is frequently the major ingredient in acidic drain cleaners, which are used to remove grease, hair, tissue paper, etc. Similar to their alkaline versions, such drain openers can dissolve fats and proteins via hydrolysis. Moreover, as concentrated sulfuric acid has a strong dehydrating property, it can remove tissue paper via dehydration as well. Since the acid may react with water vigorously, such acidic drain openers should be added slowly into the pipe to be cleaned. The study of vitriol, a category of glassy minerals from which the acid can be derived, began in ancient times. Sumerians had a list of types of vitriol that they classified according to the substances' color. Some of the earliest discussions on the origin and properties of vitriol are in the works of the Greek physician Dioscorides (first century AD) and the Roman naturalist Pliny the Elder (23–79 AD). Galen also discussed its medical use. Metallurgical uses for vitriolic substances were recorded in the Hellenistic alchemical works of Zosimos of Panopolis, in the treatise "Phisica et Mystica", and the Leyden papyrus X. Medieval Islamic-era alchemists Jābir ibn Hayyān (c. 721 – c. 815 AD, also known as Geber), Muhammad ibn Zakariya al-Razi (865–925 AD), and Jamal Din al-Watwat (d. 1318, who wrote the book "Mabāhij al-fikar wa-manāhij al-'ibar") included vitriol in their mineral classification lists. Ibn Sina focused on its medical uses and different varieties of vitriol. Razi is credited with being the first to produce sulfuric acid. Sulfuric acid was called "oil of vitriol" by medieval European alchemists because it was prepared by roasting "green vitriol" (iron(II) sulfate) in an iron retort. 
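Because the overall cell reaction given above consumes sulfuric acid on discharge, the electrolyte's density tracks the battery's state of charge. The following sketch (not from the article) linearly interpolates between typical specific-gravity endpoints; both the endpoint values (roughly 1.28 when full, 1.12 when discharged) and the linearity are simplifying assumptions for illustration only.

```python
# Rough state-of-charge estimate for a lead-acid cell from electrolyte
# specific gravity. Endpoints (~1.28 full, ~1.12 discharged) and the linear
# interpolation are illustrative assumptions, not calibrated values.

def state_of_charge(specific_gravity: float,
                    sg_full: float = 1.28, sg_empty: float = 1.12) -> float:
    frac = (specific_gravity - sg_empty) / (sg_full - sg_empty)
    return max(0.0, min(1.0, frac))  # clamp to the physical range [0, 1]

print(f"SG 1.20 -> about {state_of_charge(1.20):.0%} charged")
```

This is why a hydrometer reading of the electrolyte has traditionally served as a quick charge check for flooded lead-acid batteries.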
There are references to it in the works of Vincent of Beauvais and in the "Compositum de Compositis" ascribed to Saint Albertus Magnus. A passage from Pseudo-Geber's "Summa Perfectionis" was long considered to be a recipe for sulfuric acid, but this was a misinterpretation. In the seventeenth century, the German-Dutch chemist Johann Glauber prepared sulfuric acid by burning sulfur together with saltpeter (potassium nitrate, KNO3) in the presence of steam. As saltpeter decomposes, it oxidizes the sulfur to SO3, which combines with water to produce sulfuric acid. In 1736, Joshua Ward, a London pharmacist, used this method to begin the first large-scale production of sulfuric acid. In 1746 in Birmingham, John Roebuck adapted this method to produce sulfuric acid in lead-lined chambers, which were stronger, less expensive, and could be made larger than the previously used glass containers. This process allowed the effective industrialization of sulfuric acid production. After several refinements, this method, called the lead chamber process or "chamber process", remained the standard for sulfuric acid production for almost two centuries. Sulfuric acid created by John Roebuck's process approached a 65% concentration. Later refinements to the lead chamber process by the French chemist Joseph Louis Gay-Lussac and the British chemist John Glover improved the concentration to 78%. However, the manufacture of some dyes and other chemical processes require a more concentrated product. Throughout the 18th century, this could only be made by dry distilling minerals in a technique similar to the original alchemical processes. Pyrite (iron disulfide, FeS2) was heated in air to yield iron(II) sulfate, FeSO4, which was oxidized by further heating in air to form iron(III) sulfate, Fe2(SO4)3, which, when heated to 480 °C, decomposed to iron(III) oxide and sulfur trioxide, which could be passed through water to yield sulfuric acid in any concentration. However, the expense of this process prevented the large-scale use of concentrated sulfuric acid. In 1831, the British vinegar merchant Peregrine Phillips patented the contact process, which was a far more economical process for producing sulfur trioxide and concentrated sulfuric acid. Today, nearly all of the world's sulfuric acid is produced using this method. Sulfuric acid is capable of causing very severe burns, especially when it is at high concentrations. In common with other corrosive acids and alkalis, it readily decomposes proteins and lipids through amide and ester hydrolysis upon contact with living tissues, such as skin and flesh. In addition, it exhibits a strong dehydrating property on carbohydrates, liberating extra heat and causing secondary thermal burns. Accordingly, it rapidly attacks the cornea and can induce permanent blindness if splashed onto the eyes. If ingested, it damages internal organs irreversibly and may even be fatal. Protective equipment should hence always be used when handling it. Moreover, its strong oxidizing property makes it highly corrosive to many metals and extends its destructive action to other materials. For these reasons, damage posed by sulfuric acid is potentially more severe than that caused by other comparable strong acids, such as hydrochloric acid and nitric acid. Sulfuric acid must be stored carefully in containers made of nonreactive material (such as glass). Solutions equal to or stronger than 1.5 M are labeled "CORROSIVE", while solutions greater than 0.5 M but less than 1.5 M are labeled "IRRITANT". 
However, even the normal laboratory "dilute" grade (approximately 1 M, 10%) will char paper if left in contact for a sufficient time. The standard first aid treatment for acid spills on the skin is, as for other corrosive agents, irrigation with large quantities of water. Washing is continued for at least ten to fifteen minutes to cool the tissue surrounding the acid burn and to prevent secondary damage. Contaminated clothing is removed immediately and the underlying skin washed thoroughly. Preparation of the diluted acid can be dangerous due to the heat released in the dilution process. To avoid splattering, the concentrated acid is usually added to water and not the other way around. Water has a higher heat capacity than the acid, and so a vessel of cold water will absorb heat as acid is added. Also, because the acid is denser than water, it sinks to the bottom. Heat is generated at the interface between acid and water, which is at the bottom of the vessel. The acid will not boil, because of its higher boiling point. Warm water near the interface rises due to convection, which cools the interface and prevents boiling of either acid or water. In contrast, addition of water to concentrated sulfuric acid results in a thin layer of water on top of the acid. The heat generated in this thin layer of water can make it boil, leading to the dispersal of a sulfuric acid aerosol or, worse, an explosion. Preparation of solutions greater than 6 M (35%) in concentration is most dangerous, because the heat produced may be sufficient to boil the diluted acid: efficient mechanical stirring and external cooling (such as an ice bath) are essential. Reaction rates double for about every 10-degree-Celsius increase in temperature. Therefore, the reaction will become more violent as dilution proceeds, unless the mixture is given time to cool. Adding acid to warm water will cause a violent reaction. On a laboratory scale, sulfuric acid can be diluted by pouring concentrated acid onto crushed ice made from de-ionized water. The ice melts in an endothermic process while dissolving the acid. The amount of heat needed to melt the ice in this process is greater than the amount of heat evolved by dissolving the acid, so the solution remains cold. After all the ice has melted, further dilution can take place using water. Sulfuric acid is non-flammable. The main occupational risks posed by this acid are skin contact leading to burns (see above) and the inhalation of aerosols. Exposure to aerosols at high concentrations leads to immediate and severe irritation of the eyes, respiratory tract and mucous membranes; this ceases rapidly after exposure, although there is a risk of subsequent pulmonary edema if tissue damage has been more severe. At lower concentrations, the most commonly reported symptom of chronic exposure to sulfuric acid aerosols is erosion of the teeth, found in virtually all studies; indications of possible chronic damage to the respiratory tract were inconclusive as of 1997. Repeated occupational exposure to sulfuric acid mists may increase the chance of lung cancer by up to 64 percent. In the United States, the permissible exposure limit (PEL) for sulfuric acid is fixed at 1 mg/m3; limits in other countries are similar. There have been reports of sulfuric acid ingestion leading to vitamin B12 deficiency with subacute combined degeneration. The spinal cord is most often affected in such cases, but the optic nerves may show demyelination, loss of axons and gliosis. 
International commerce of sulfuric acid is controlled under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances, 1988, which lists sulfuric acid under Table II of the convention as a chemical frequently used in the illicit manufacture of narcotic drugs or psychotropic substances.
https://en.wikipedia.org/wiki?curid=29247
Space colonization Space colonization (also called space settlement, or extraterrestrial colonization) is permanent human habitation and exploitation of natural resources off the planet Earth. Many arguments have been made for and against space colonization. The two most common arguments in favor of colonization are survival of human civilization and the biosphere in the event of a planetary-scale disaster (natural or man-made), and the availability of additional resources in space that could enable expansion of human society. The most common objections to colonization include concerns that the commodification of the cosmos may be likely to enhance the interests of the already powerful, including major economic and military institutions, and to exacerbate pre-existing detrimental processes such as wars, economic inequality, and environmental degradation. No space colonies have been built so far. Currently, the building of a space colony would present a set of huge technological and economic challenges. Space settlements would have to provide for nearly all (or all) the material needs of hundreds or thousands of humans, in an environment out in space that is very hostile to human life. They would involve technologies, such as controlled ecological life support systems, that have yet to be developed in any meaningful way. They would also have to deal with the as-yet unknown issue of how humans would behave and thrive in such places long-term. Because of the present cost of sending anything from the surface of the Earth into orbit (around $1,400 per kilogram, or about $640 per pound, to low Earth orbit on the Falcon Heavy vehicle, a cost expected to decrease further), a space colony would currently be a massively expensive project. There are as yet no plans for building space colonies by any large-scale organization, either government or private. However, many proposals, speculations, and designs for space settlements have been made through the years, and a considerable number of space colonization advocates and groups are active. Several famous scientists, such as Freeman Dyson, have come out in favor of space settlement. On the technological front, there is ongoing progress in making access to space cheaper (reusable launch systems could reach $20 per kg to orbit), and in creating automated manufacturing and construction techniques. The primary argument calling for space colonization is the long-term survival of human civilization. By developing alternative locations off Earth, the planet's species, including humans, could live on in the event of natural or man-made disasters on our own planet. On two occasions, the theoretical physicist and cosmologist Stephen Hawking argued for space colonization as a means of saving humanity. In 2001, Hawking predicted that the human race would become extinct within the next thousand years unless colonies could be established in space. In 2010, he stated that humanity faces two options: either we colonize space within the next two hundred years, or we will face the prospect of long-term extinction. In 2005, then-NASA Administrator Michael Griffin identified space colonization as the ultimate goal of current spaceflight programs. Louis J. Halle, formerly of the United States Department of State, wrote in "Foreign Affairs" (Summer 1980) that the colonization of space will protect humanity in the event of global nuclear warfare. 
The physicist Paul Davies also supports the view that if a planetary catastrophe threatens the survival of the human species on Earth, a self-sufficient colony could "reverse-colonize" Earth and restore human civilization. The author and journalist William E. Burrows and the biochemist Robert Shapiro proposed a private project, the Alliance to Rescue Civilization, with the goal of establishing an off-Earth "backup" of human civilization. Based on his Copernican principle, J. Richard Gott has estimated that the human race could survive for another 7.8 million years, but it is not likely to ever colonize other planets. However, he expressed a hope to be proven wrong, because "colonizing other worlds is our best chance to hedge our bets and improve the survival prospects of our species". In a theoretical study from 2019, a group of researchers pondered the long-term trajectory of human civilization. It is argued that, due to Earth's finitude as well as the limited duration of our solar system, mankind's survival into the far future will very likely require extensive space colonization. This 'astronomical trajectory' of mankind, as it is termed, could come about in four steps. First, plenty of space colonies could be established at various habitable locations, be it in outer space or on celestial bodies away from planet Earth, and allowed to remain dependent on support from Earth for a start. Second, these colonies could gradually become self-sufficient, enabling them to survive if or when the mother civilization on Earth fails or dies. Third, the colonies could develop and expand their habitation by themselves on their space stations or celestial bodies, e.g. via terraforming. Fourth, the colonies could self-replicate and establish new colonies further into space, a process that could then repeat itself and continue at an exponential rate throughout the cosmos. However, this astronomical trajectory may not be a lasting one, as it will most likely be interrupted and eventually decline due to resource depletion or straining competition between various human factions, bringing about some 'star wars' scenario. In the very far future, mankind is expected to become extinct in any case, as no civilization, whether human or alien, will ever outlive the limited duration of the cosmos itself. Resources in space, both in materials and energy, are enormous. The Solar System alone has, according to different estimates, enough material and energy to support anywhere from several thousand to over a billion times the current Earth-based human population. Outside the Solar System, several hundred billion other planets in the Milky Way alone provide opportunities for both colonization and resource collection, though travel to any of them is impossible on any practical time-scale without interstellar travel by use of generation ships or revolutionary new methods of travel, such as faster-than-light (FTL) travel. Asteroid mining will also be a key player in space colonization. Water and materials to make structures and shielding can be easily found in asteroids. Instead of resupplying on Earth, mining and fuel stations would need to be established on asteroids to facilitate better space travel. Optical mining is the term NASA uses to describe extracting materials from asteroids. NASA believes that using propellant derived from asteroids for exploration to the Moon, Mars, and beyond could save $100 billion. 
If funding and technology come sooner than estimated, asteroid mining might be possible within a decade. All these planets and other bodies offer a virtually endless supply of resources, providing limitless growth potential. Harnessing these resources can lead to much economic development. Expansion of humans and technological progress has usually resulted in some form of environmental devastation, and destruction of ecosystems and their accompanying wildlife. In the past, expansion has often come at the expense of displacing many indigenous peoples, the resulting treatment of these peoples ranging anywhere from encroachment to genocide. Because space has no known life, this need not be a consequence, as some space settlement advocates have pointed out. Another argument for space colonization is to mitigate the negative effects of overpopulation. If the resources of space were opened to use and viable life-supporting habitats were built, Earth would no longer define the limitations of growth. Although many of Earth's resources are non-renewable, off-planet colonies could satisfy the majority of the planet's resource requirements. With the availability of extraterrestrial resources, demand on terrestrial ones would decline. Additional goals cite the innate human drive to explore and discover, a quality recognized at the core of progress and thriving civilizations. Nick Bostrom has argued that from a utilitarian perspective, space colonization should be a chief goal as it would enable a very large population to live for a very long period of time (possibly billions of years), which would produce an enormous amount of utility (or happiness). He claims that it is more important to reduce existential risks to increase the probability of eventual colonization than to accelerate technological development so that space colonization could happen sooner. In his paper, he assumes that the created lives will have positive ethical value despite the problem of suffering. In a 2001 interview, Freeman Dyson, J. Richard Gott and Sid Goldstein were asked for reasons why some humans should live in space. Although some items of the infrastructure requirements above can already be easily produced on Earth and would therefore not be very valuable as trade items (oxygen, water, base metal ores, silicates, etc.), other high-value items are more abundant, more easily produced, of higher quality, or can only be produced in space. These would provide (over the long term) a very high return on the initial investment in space infrastructure. Some of these high-value trade goods include precious metals, gemstones, power, solar cells, ball bearings, semiconductors, and pharmaceuticals. The mining and extraction of metals from a small asteroid the size of 3554 Amun or (6178) 1986 DA, both small near-Earth asteroids, would yield 30 times as much metal as humans have mined throughout history. A metal asteroid this size would be worth approximately US$20 trillion at 2001 market prices. Space colonization is seen as a long-term goal of some national space programs. Since the advent of the 21st-century commercialization of space, which saw greater cooperation between NASA and the private sector, several private companies have announced plans toward the colonization of Mars. Among entrepreneurs leading the call for space colonization are Elon Musk, Dennis Tito and Bas Lansdorp. 
The main impediments to commercial exploitation of these resources are the very high cost of initial investment, the very long period required for the expected return on those investments ("The Eros Project" plans a 50-year development), and the fact that the venture has never been carried out before, giving the investment a high-risk nature. Major governments and well-funded corporations have announced plans for new categories of activities (space tourism and hotels, prototype space-based solar-power satellites, heavy-lift boosters and asteroid mining) that create needs and capabilities for humans to be present in space. Building colonies in space would require access to water, food, space, people, construction materials, energy, transportation, communications, life support, simulated gravity, radiation protection and capital investment. It is likely the colonies would be located near the necessary physical resources. The practice of space architecture seeks to transform spaceflight from a heroic test of human endurance to a normality within the bounds of comfortable experience. As is true of other frontier-opening endeavors, the capital investment necessary for space colonization would probably come from governments, an argument made by John Hickman and Neil deGrasse Tyson. Colonies on the Moon, Mars, or asteroids could extract local materials. The Moon is deficient in volatiles such as argon, helium and compounds of carbon, hydrogen and nitrogen. The LCROSS impactor was targeted at the Cabeus crater, which was chosen as having a high concentration of water for the Moon. A plume of material erupted in which some water was detected. Mission chief scientist Anthony Colaprete estimated that the Cabeus crater contains material with 1% water or possibly more. Water ice should also be in other permanently shadowed craters near the lunar poles. Although helium is present only in low concentrations on the Moon, where it is deposited into the regolith by the solar wind, an estimated million tons of He-3 exists overall. The Moon also has industrially significant oxygen, silicon, and metals such as iron, aluminum, and titanium. Launching materials from Earth is expensive, so bulk materials for colonies could come from the Moon, a near-Earth object (NEO), Phobos, or Deimos. The benefits of using such sources include: a lower gravitational force, no atmospheric drag on cargo vessels, and no biosphere to damage. Many NEOs contain substantial amounts of metals. Underneath a drier outer crust (much like oil shale), some other NEOs are inactive comets, which include billions of tons of water ice and kerogen hydrocarbons, as well as some nitrogen compounds. Farther out, Jupiter's Trojan asteroids are thought to be rich in water ice and other volatiles. Recycling of some raw materials would almost certainly be necessary. Solar energy in orbit is abundant, reliable, and is commonly used to power satellites today. There is no night in free space, and no clouds or atmosphere to block sunlight. Light intensity obeys an inverse-square law, so the solar energy available at distance d from the Sun is E = 1367/d^2 W/m^2, where d is measured in astronomical units (AU) and 1367 W/m^2 is the energy available at the distance of Earth's orbit from the Sun, 1 AU. In the weightlessness and vacuum of space, high temperatures for industrial processes can easily be achieved in solar ovens with huge parabolic reflectors made of metallic foil with very lightweight support structures. 
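The inverse-square relation just stated can be evaluated directly; this small sketch (illustrative only, using the 1367 W/m^2 solar constant from the text) computes the available flux at a few example distances:

```python
# Solar flux from the inverse-square law stated above: E = 1367 / d^2 W/m^2,
# with d in astronomical units (AU).

SOLAR_CONSTANT = 1367.0  # W/m^2 at 1 AU

def solar_flux(d_au: float) -> float:
    return SOLAR_CONSTANT / d_au ** 2

for body, d in [("Earth orbit", 1.0), ("Mars orbit", 1.5), ("asteroid belt", 2.7)]:
    print(f"{body:13s} (d = {d} AU): {solar_flux(d):7.1f} W/m^2")
```

The steep falloff (about 608 W/m^2 at Mars and 188 W/m^2 in the main belt) is why solar power is most attractive for settlements in the inner Solar System.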
Flat mirrors to reflect sunlight around radiation shields into living areas (to avoid line-of-sight access for cosmic rays, or to make the Sun's image appear to move across their "sky") or onto crops are even lighter and easier to build. Large solar power photovoltaic cell arrays or thermal power plants would be needed to meet the electrical power needs of the settlers. In developed parts of Earth, electrical consumption can average 1 kilowatt per person (or roughly 10 megawatt-hours per person per year). These power plants could be at a short distance from the main structures if wires are used to transmit the power, or much farther away with wireless power transmission. A major export of the initial space settlement designs was anticipated to be large solar power satellites (SPS) that would use wireless power transmission (phase-locked microwave beams or lasers emitting wavelengths that special solar cells convert with high efficiency) to send power to locations on Earth, or to colonies on the Moon or other locations in space. For locations on Earth, this method of getting power is extremely benign, with zero emissions and far less ground area required per watt than for conventional solar panels. Once these satellites are primarily built from lunar or asteroid-derived materials, the price of SPS electricity could be lower than energy from fossil fuel or nuclear energy; replacing these would have significant benefits such as the elimination of greenhouse gases and nuclear waste from electricity generation. Transmitting solar energy wirelessly from the Earth to the Moon and back is also an idea proposed for the benefit of space colonization and energy resources. The physicist David Criswell, who worked for NASA during the Apollo missions, came up with the idea of using power beams to transfer energy from space. These beams, microwaves with a wavelength of about 12 cm, will be almost untouched as they travel through the atmosphere. They can also be aimed at more industrial areas to keep away from human or animal activities. This would allow for safer and more reliable methods of transferring solar energy. In 2008, scientists were able to send a 20-watt microwave signal from a mountain in Maui to the island of Hawaii. Since then, JAXA and Mitsubishi have teamed up on a $21 billion project to place satellites in orbit that could generate up to 1 gigawatt of power. These are among the advances being pursued today to transmit energy wirelessly for space-based solar power. However, the value of SPS power delivered wirelessly to other locations in space will typically be far higher than to Earth. Otherwise, the means of generating the power would need to be included with these projects and pay the heavy penalty of Earth launch costs. Therefore, other than proposed demonstration projects for power delivered to Earth, the first priority for SPS electricity is likely to be locations in space, such as communications satellites, fuel depots or "orbital tugboat" boosters transferring cargo and passengers between low Earth orbit (LEO) and other orbits such as geosynchronous orbit (GEO), lunar orbit or highly eccentric Earth orbit (HEEO). The system would also rely on satellites and receiving stations on Earth to convert the energy into electricity. Because of this, energy can be transmitted easily from dayside to nightside, meaning power is reliably available around the clock. 
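Combining the consumption figure above (about 1 kW per person) with the orbital flux from the earlier inverse-square formula gives a rough per-settler array size; the 20% photovoltaic efficiency is an assumed illustrative value, not a figure from the article:

```python
# Rough solar array area per settler at 1 AU. The 1 kW/person demand comes
# from the text; the 20% photovoltaic efficiency is an assumed value.

FLUX_1AU = 1367.0   # W/m^2, solar constant at Earth's orbital distance
DEMAND_W = 1000.0   # W per person, from the text
EFFICIENCY = 0.20   # assumed cell efficiency

area_m2 = DEMAND_W / (FLUX_1AU * EFFICIENCY)
print(f"~{area_m2:.1f} m^2 of array per settler in full sunlight")  # ~3.7 m^2
```

Only a few square meters per person are needed in continuous free-space sunlight, which underlies the appeal of orbital solar power described in this section.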
Nuclear power is sometimes proposed for colonies located on the Moon or on Mars, as the supply of solar energy is too discontinuous in these locations; the Moon has nights of two Earth weeks in duration, and Mars has nights, relatively high gravity, and an atmosphere featuring large dust storms that cover and degrade solar panels. Also, Mars' greater distance from the Sun (1.5 astronomical units, AU) translates into "E"/(1.5² = 2.25), only about 44% of the solar energy of Earth orbit. Another method would be transmitting energy wirelessly to the lunar or Martian colonies from solar power satellites (SPSs) as described above; the difficulties of generating power in these locations make the relative advantages of SPSs much greater there than for power beamed to locations on Earth. To fulfill the requirements of a Moon base, supplying energy for life support, maintenance, communications, and research, a combination of nuclear and solar energy would likely be used in the first colonies. For both solar thermal and nuclear power generation in airless environments, such as the Moon and space, and to a lesser extent the very thin Martian atmosphere, one of the main difficulties is dispersing the inevitable heat generated; this requires fairly large radiator areas. In space settlements, a life support system must recycle or import all the nutrients without "crashing." The closest terrestrial analogue to space life support is possibly that of a nuclear submarine. Nuclear submarines use mechanical life support systems to support humans for months without surfacing, and this same basic technology could presumably be employed for space use. However, nuclear submarines run "open loop"—extracting oxygen from seawater, and typically dumping carbon dioxide overboard, although they recycle existing oxygen. Recycling of the carbon dioxide has been approached in the literature using the Sabatier process or the Bosch reaction.
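The Sabatier process just mentioned follows the reaction CO2 + 4 H2 → CH4 + 2 H2O, which fixes the mass budget of carbon dioxide recycling. The short sketch below works out those mass ratios from standard molar masses; the numbers are basic stoichiometry rather than figures from the article:

```python
# Mass balance for the Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O.
# Standard molar masses in g/mol.
M_CO2, M_H2, M_CH4, M_H2O = 44.01, 2.016, 16.04, 18.02

def sabatier_masses(kg_co2: float) -> dict:
    """Hydrogen consumed and products formed when `kg_co2` kg of CO2 is recycled."""
    mol_co2 = kg_co2 * 1000.0 / M_CO2
    return {
        "h2_in_kg":   mol_co2 * 4 * M_H2 / 1000.0,
        "ch4_out_kg": mol_co2 * M_CH4 / 1000.0,
        "h2o_out_kg": mol_co2 * 2 * M_H2O / 1000.0,
    }

# Recycling 1 kg of exhaled CO2 consumes ~0.18 kg of hydrogen and yields
# ~0.36 kg of methane plus ~0.82 kg of water, which can be electrolyzed
# to recover oxygen (and hydrogen for reuse in the loop).
print(sabatier_masses(1.0))
```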
Although a fully mechanistic life support system is conceivable, a closed ecological system is generally proposed for life support. The Biosphere 2 project in Arizona has shown that a complex, small, enclosed, man-made biosphere can support eight people for at least a year, although there were many problems. A year or so into the two-year mission, oxygen had to be replenished, which strongly suggests that the mission failed. The relationship between organisms, their habitat and the non-Earth environment can take several forms, and a combination of such technologies is also possible. Cosmic rays and solar flares create a lethal radiation environment in space. In Earth orbit, the Van Allen belts make living above the Earth's atmosphere difficult. To protect life, settlements must be surrounded by sufficient mass to absorb most incoming radiation, unless magnetic or plasma radiation shields were developed. Passive mass shielding of four metric tons per square meter of surface area will reduce radiation dosage to several mSv or less annually, well below the rate of some populated high natural background areas on Earth. This can be leftover material (slag) from processing lunar soil and asteroids into oxygen, metals, and other useful materials. However, such massive bulk represents a significant obstacle to maneuvering vessels (mobile spacecraft being particularly likely to use less massive active shielding), and its inertia would necessitate powerful thrusters to start or stop rotation, or electric motors to spin two massive portions of a vessel in opposite senses. Shielding material can be stationary around a rotating interior. Wearing the thickest clothing possible has also been suggested as a partial measure, so that the cloth absorbs some radiation before it reaches the body. Space manufacturing could enable self-replication, which some consider the ultimate goal because it would allow an exponential increase in colonies while eliminating costs to, and dependence on, Earth. It could be argued that the establishment of such a colony would be Earth's first act of self-replication. Intermediate goals include colonies that expect only information from Earth (science, engineering, entertainment) and colonies that just require periodic supply of lightweight objects, such as integrated circuits, medicines, genetic material and tools. The monotony and loneliness that come from a prolonged space mission can leave astronauts susceptible to cabin fever or psychotic breaks. Moreover, lack of sleep, fatigue, and work overload can affect an astronaut's ability to perform well in an environment such as space where every action is critical. In 2002, the anthropologist John H. Moore estimated that a population of 150–180 would permit a stable society to exist for 60 to 80 generations—equivalent to 2000 years. A much smaller initial population of as little as two women should be viable as long as human embryos are available from Earth. Use of a sperm bank from Earth also allows a smaller starting base with negligible inbreeding. Researchers in conservation biology have tended to adopt the "50/500" rule of thumb initially advanced by Franklin and Soulé. This rule says a short-term effective population size ("N"e) of 50 is needed to prevent an unacceptable rate of inbreeding, whereas a long‐term "N"e of 500 is required to maintain overall genetic variability. The "N"e = 50 prescription corresponds to an inbreeding rate of 1% per generation, approximately half the maximum rate tolerated by domestic animal breeders. The "N"e = 500 value attempts to balance the rate of gain in genetic variation due to mutation with the rate of loss due to genetic drift. Assuming a journey of 6,300 years, the astrophysicist Frédéric Marin and the particle physicist Camille Beluffi calculated that the minimum viable population for a generation ship to reach Proxima Centauri would be 98 settlers at the beginning of the mission (the crew would then breed until reaching a stable population of several hundred settlers within the ship). In 2020, Jean-Marc Salotti proposed a method to determine the minimum number of settlers required to survive on an extraterrestrial world, based on a comparison between the time required to perform all activities and the working time of all human resources; for Mars, 110 individuals would be required.
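The 1%-per-generation figure in the "50/500" discussion above follows from the standard population-genetics relation between effective population size and the per-generation increase in inbreeding, ΔF = 1/(2"N"e). A minimal sketch:

```python
# Per-generation inbreeding rate as a function of effective population size,
# using the standard relation delta_F = 1 / (2 * Ne).

def inbreeding_rate(ne: float) -> float:
    """Fractional increase in inbreeding per generation for effective size Ne."""
    return 1.0 / (2.0 * ne)

# Ne = 50 gives 1% per generation, the short-term threshold cited above;
# Ne = 500 gives 0.1%, consistent with the long-term goal of retaining
# overall genetic variability.
for ne in (50, 500):
    print(f"Ne = {ne}: {inbreeding_rate(ne):.3%} per generation")
```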
Experts have debated the possible use of money and currencies in societies that will be established in space. The Quasi Universal Intergalactic Denomination, or QUID, is a physical currency made from a space-qualified polymer, PTFE, for inter-planetary travelers. QUID was designed for the foreign exchange company Travelex by scientists from Britain's National Space Centre and the University of Leicester. Location is a frequent point of contention among space colonization advocates. The location of colonization can be on a physical body (planet, dwarf planet, natural satellite, or asteroid) or orbiting one; for colonies not on a body, see also space habitat. Due to its proximity and familiarity, Earth's Moon is discussed as a target for colonization. It has the benefits of proximity to Earth and a lower escape velocity, allowing for easier exchange of goods and services. A drawback of the Moon is its low abundance of volatiles necessary for life, such as hydrogen, nitrogen, and carbon. Water-ice deposits that exist in some polar craters could serve as a source for these elements. An alternative solution is to bring hydrogen from near-Earth asteroids and combine it with oxygen extracted from lunar rock. The Moon's low surface gravity is also a concern, as it is unknown whether 1/6 g is enough to maintain human health for long periods. The Moon's lack of atmosphere provides no protection from space radiation or meteoroids. Early Moon colonies may shelter in ancient lunar lava tubes to gain protection. The two-week day/night cycle makes use of solar power more difficult. Another near-Earth possibility is the five Earth–Moon Lagrange points. Although they would generally also take a few days to reach with current technology, many of these points would have near-continuous solar power because their distance from Earth would result in only brief and infrequent eclipses of light from the Sun. However, the fact that the Earth–Moon L4 and L5 points tend to collect dust and debris, whereas L1–L3 require active station-keeping measures to maintain a stable position, makes them somewhat less suitable places for habitation than was originally believed. Additionally, the orbits of some of these points take them out of the protection of the Earth's magnetosphere for approximately two-thirds of the time, exposing them to the health threat from cosmic rays. The five Earth–Sun Lagrange points would totally eliminate eclipses, but only L1 and L2 would be reachable in a few days' time; the other three Earth–Sun points would require months to reach. Colonizing Mercury would involve similar challenges to colonizing the Moon, as there are few volatile elements, no atmosphere, and a surface gravity lower than Earth's. However, the planet also receives almost seven times the solar flux of the Earth/Moon system. Geologist Stephen Gillett suggested in 1996 that this could make Mercury an ideal place to build and launch solar sail spacecraft, which could be launched as folded-up "chunks" by mass driver from Mercury's surface. Once in space, the solar sails would deploy. Since Mercury's solar constant is 6.5 times higher than Earth's, energy for the mass driver should be easy to come by, and solar sails near Mercury would have 6.5 times the thrust they do near Earth. This could make Mercury an ideal place to acquire materials useful in building hardware to send to (and terraform) Venus. Vast solar collectors could also be built on or near Mercury to produce power for large-scale engineering activities such as laser-pushed lightsails to nearby star systems. Colonization of asteroids would require space habitats. The asteroid belt has significant overall material available, the largest object being Ceres, although the material is thinly distributed over a vast region of space. Unmanned supply craft should be practical with little technological advance, even crossing 500 million kilometers of space. The colonists would have a strong interest in assuring their asteroid did not hit Earth or any other body of significant mass, but would have extreme difficulty in moving an asteroid of any size. The orbits of the Earth and most asteroids are very distant from each other in terms of delta-v, and the asteroidal bodies have enormous momentum.
Rockets or mass drivers could perhaps be installed on asteroids to direct their path onto a safe course. The Artemis Project designed a plan to colonize Europa, one of Jupiter's moons. Scientists were to inhabit igloos and drill down into the Europan ice crust, exploring any sub-surface ocean. This plan also discusses the possible use of "air pockets" for human habitation. Europa is considered one of the more habitable bodies in the Solar System and so merits investigation as a possible abode for life. NASA performed a study called "HOPE" (Revolutionary Concepts for Human Outer Planet Exploration) regarding the future exploration of the Solar System. The target chosen was Callisto, due to its distance from Jupiter and thus from the planet's harmful radiation. It could be possible to build a surface base there that would produce fuel for further exploration of the Solar System. Three of the Galilean moons (Europa, Ganymede, Callisto) have an abundance of volatiles that may support colonization efforts. Titan is suggested as a target for colonization because it is the only moon in the Solar System to have a dense atmosphere and is rich in carbon-bearing compounds. Titan has water ice and large methane oceans. Robert Zubrin identified Titan as possessing an abundance of all the elements necessary to support life, making Titan perhaps the most advantageous locale in the outer Solar System for colonization, and saying "In certain ways, Titan is the most hospitable extraterrestrial world within our solar system for human colonization". Enceladus is a small, icy moon orbiting close to Saturn, notable for its extremely bright surface and the geyser-like plumes of ice and water vapor that erupt from its southern polar region. If Enceladus has liquid water, it joins Mars and Jupiter's moon Europa as one of the prime places in the Solar System to look for extraterrestrial life and possible future settlements. Other large satellites, including Rhea, Iapetus, Dione, Tethys, and Mimas, all have large quantities of volatiles which can be used to support settlements. The Kuiper belt is estimated to have 70,000 bodies of 100 km or larger. Freeman Dyson has suggested that within a few centuries human civilization will have relocated to the Kuiper belt. The Oort cloud is estimated to have up to a trillion comets. Looking beyond the Solar System, there are up to several hundred billion stars offering potential colonization targets. The main difficulty is the vast distances to other stars: roughly a hundred thousand times farther away than the planets in the Solar System. This means that some combination of very high speed (a significant fraction of the speed of light) or travel times lasting centuries or millennia would be required. These speeds are far beyond what current spacecraft propulsion systems can provide. Space colonization technology could in principle allow human expansion at high but sub-relativistic speeds, substantially less than the speed of light, "c". An interstellar colony ship would be similar to a space habitat, with the addition of major propulsion capabilities and independent energy generation. Hypothetical starship concepts have been proposed both by scientists and in hard science fiction. These concepts appear limited to high but still sub-relativistic speeds, due to fundamental energy and reaction-mass considerations, and all would entail trip times long enough that they might only be enabled by space colonization technology, permitting self-contained habitats with lifetimes of decades to centuries.
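The scale of those trip times is easy to make concrete. The sketch below computes one-way travel time for a given distance in light-years at a constant cruise speed expressed as a fraction of "c" (pure kinematics, ignoring acceleration and deceleration phases); at 0.1% of "c" it also reproduces the galaxy-scale timescale discussed next:

```python
# One-way travel time at a constant fraction of the speed of light,
# ignoring acceleration and deceleration phases.

def travel_time_years(distance_ly: float, fraction_of_c: float) -> float:
    """Years to cover `distance_ly` light-years at `fraction_of_c` of lightspeed."""
    return distance_ly / fraction_of_c

# Proxima Centauri (~4.25 ly) at 0.1% of c takes ~4,250 years -- the same
# order as the 6,300-year generation-ship journey cited earlier.
print(f"{travel_time_years(4.25, 0.001):,.0f} years to Proxima Centauri")

# Crossing a ~100,000-light-year galactic disk at 0.1% of c takes ~100
# million years, under half the Sun's ~240-million-year galactic orbit.
print(f"{travel_time_years(100_000, 0.001):,.0f} years across the Galaxy")
```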
Yet human interstellar expansion at average speeds of even 0.1% of "c" would permit settlement of the entire Galaxy in less than one half of the Sun's galactic orbital period of ~240,000,000 years, which is comparable to the timescale of other galactic processes. Thus, even if interstellar travel at near-relativistic speeds is never feasible (which cannot be clearly determined at this time), the development of space colonization could allow human expansion beyond the Solar System without requiring technological advances that cannot yet be reasonably foreseen. This could greatly improve the chances for the survival of intelligent life over cosmic timescales, given the many natural and human-related hazards that have been widely noted. If humanity does gain access to a large amount of energy, on the order of the mass-energy of entire planets, it may eventually become feasible to construct Alcubierre drives. These are one of the few methods of superluminal travel which may be possible under current physics. However, it is probable that such a device could never exist, due to the fundamental challenges posed. For more on this see Difficulties of making and using an Alcubierre Drive. Looking beyond the Milky Way, there are at least 2 trillion other galaxies in the observable universe. The distances between galaxies are on the order of a million times farther than those between the stars. Because of the speed-of-light limit on how fast any material objects can travel in space, intergalactic travel would either have to involve voyages lasting millions of years, or a possible faster-than-light propulsion method based on speculative physics, such as the Alcubierre drive. There are, however, no scientific reasons for stating that intergalactic travel is impossible in principle. Uploaded human minds or AI may be transmitted to other galaxies in the hope some intelligence there would receive and activate them. Space colonization can roughly be said to be possible when the necessary methods of space colonization become cheap enough (such as space access by cheaper launch systems) to meet the cumulative funds that have been gathered for the purpose, in addition to estimated profits from commercial use of space. Although there are no immediate prospects for the large amounts of money required for space colonization to be available given traditional launch costs, there is some prospect of a radical reduction in launch costs in the 2010s, which would consequently lessen the cost of any efforts in that direction. With its published per-launch price for payload delivered to low Earth orbit, SpaceX Falcon 9 rockets are already the "cheapest in the industry". Advancements currently being developed as part of the SpaceX reusable launch system development program to enable reusable Falcon 9s "could drop the price by an order of magnitude, sparking more space-based enterprise, which in turn would drop the cost of access to space still further through economies of scale." If SpaceX is successful in developing the reusable technology, it would be expected to "have a major impact on the cost of access to space", and change the increasingly competitive market in space launch services.
The President's Commission on Implementation of United States Space Exploration Policy suggested that an inducement prize should be established, perhaps by government, for the achievement of space colonization, for example by offering the prize to the first organization to place humans on the Moon and sustain them for a fixed period before they return to Earth. The most famous attempt to build an analogue to a self-sufficient colony is Biosphere 2, which attempted to duplicate Earth's biosphere. BIOS-3 is another closed ecosystem, completed in 1972 in Krasnoyarsk, Siberia. Many space agencies build testbeds for advanced life support systems, but these are designed for long-duration human spaceflight, not permanent colonization. Remote research stations in inhospitable climates, such as the Amundsen–Scott South Pole Station or Devon Island Mars Arctic Research Station, can also provide some practice for off-world outpost construction and operation. The Mars Desert Research Station has a habitat for similar reasons, but the surrounding climate is not strictly inhospitable. The first known work on space colonization was "The Brick Moon", a work of fiction published in 1869 by Edward Everett Hale, about an inhabited artificial satellite. The Russian schoolmaster and physicist Konstantin Tsiolkovsky foresaw elements of the space community in his book "Beyond Planet Earth", written about 1900. Tsiolkovsky had his space travelers building greenhouses and raising crops in space, and believed that going into space would help perfect human beings, leading to immortality and peace. Others have also written about space colonies, such as Lasswitz in 1897, and Bernal, Oberth, Von Pirquet and Noordung in the 1920s. Wernher von Braun contributed his ideas in a 1952 "Colliers" article. In the 1950s and 1960s, Dandridge M. Cole published his ideas. Another seminal book on the subject was "The High Frontier: Human Colonies in Space" by Gerard K. O'Neill in 1977, which was followed the same year by "Colonies in Space" by T. A. Heppenheimer. M. Dyson wrote "Home on the Moon: Living on a Space Frontier" in 2003; Peter Eckart wrote "Lunar Base Handbook" in 2006; and Harrison Schmitt wrote "Return to the Moon" in 2007. Bigelow Aerospace is the only private commercial spaceflight company to have launched two experimental space station modules, Genesis I (2006) and Genesis II (2007), into Earth orbit, and it has indicated that its first production model of the space habitat, the BA 330, could be launched by 2017. Robotic spacecraft to Mars are required to be sterilized, to have at most 300,000 spores on the exterior of the craft—and more thoroughly sterilized if they contact "special regions" containing water—otherwise there is a risk of contaminating not only the life-detection experiments but possibly the planet itself. It is impossible to sterilize human missions to this level, as humans are host to typically a hundred trillion microorganisms of thousands of species of the human microbiome, and these cannot be removed while preserving the life of the human. Containment seems the only option, but it is a major challenge in the event of a hard landing (i.e. crash). There have been several planetary workshops on this issue, but with no final guidelines for a way forward yet. Human explorers could also be a source of back contamination to Earth if they become carriers of microorganisms.
A corollary to the Fermi paradox—"nobody else is doing it"—is the argument that, because no evidence of alien colonization technology exists, it is statistically unlikely to even be possible to use that same level of technology ourselves. Colonizing space would require massive amounts of financial, physical, and human capital devoted to research, development, production, and deployment. Earth's natural resources do not increase to a noteworthy extent (which is in keeping with the "only one Earth" position of environmentalists). Thus, considerable efforts in colonizing places outside Earth would appear as a hazardous waste of the Earth's limited resources for an aim without a clear end. The fundamental problem with public goods needed for survival, such as space programs, is the free-rider problem. Convincing the public to fund such programs would require additional self-interest arguments: if the objective of space colonization is to provide a "backup" in case everyone on Earth is killed, then why should someone on Earth pay for something that is only useful after they are dead? This assumes that space colonization is not widely acknowledged as a sufficiently valuable social goal. Although space colonization was seen as a relief to the problem of overpopulation as early as 1758, and was listed as one of Stephen Hawking's reasons for pursuing space exploration, it has become apparent that space colonization in response to overpopulation is unwarranted. Indeed, the birth rates of many developed countries, specifically spacefaring ones, are at or below replacement rates, thus negating the need to use colonization as a means of population control. Other objections include concerns that the forthcoming colonization and commodification of the cosmos may be likely to enhance the interests of the already powerful, including major economic and military institutions (e.g. the large financial institutions, the major aerospace companies and the military–industrial complex), to lead to new wars, and to exacerbate pre-existing exploitation of workers and resources, economic inequality, poverty, social division and marginalization, environmental degradation, and other detrimental processes or institutions. Additional concerns include creating a culture in which humans are no longer seen as human, but rather as material assets. The issues of human dignity, morality, philosophy, culture, bioethics, and the threat of megalomaniac leaders in these new "societies" would all have to be addressed in order for space colonization to meet the psychological and social needs of people living in isolated colonies. As an alternative or addendum for the future of the human race, many science fiction writers have focused on the realm of the 'inner-space', that is, the computer-aided exploration of the human mind and human consciousness—possibly en route developmentally to a Matrioshka Brain. Robotic exploration is proposed as an alternative to gain many of the same scientific advantages without the limited mission duration and high cost of life support and return transportation involved in manned missions. However, there are vast scientific domains that cannot be addressed with robots, especially biology in specific atmospheric and gravitational environments and human sciences in space. Another concern is the potential to cause interplanetary contamination on planets that may harbor hypothetical extraterrestrial life. Space colonization has been discussed as a continuation of imperialism and colonialism.
Postcolonial critique questions colonial decision-making and the reasons for colonial labour and land exploitation, and sees a need for inclusive and democratic participation in, and implementation of, any space exploration, infrastructure or habitation. The narrative of space exploration as a "New Frontier" has been criticized as an unreflective continuation of settler colonialism and manifest destiny, continuing the narrative of colonial exploration as fundamental to an assumed human nature. Narratives of survival, and arguments for space as a solution to global problems like pollution, have also been identified as imperialist. The predominant perspective of territorial colonization in space has been called "surfacism", especially when comparing advocacy for colonization of Mars with that of Venus. It has been argued that the present politico-legal regimes and their philosophic grounding advantage imperialist development of space. The health of the humans who may participate in a colonization venture would be subject to increased physical, mental and emotional risks. NASA learned that, without gravity, bones lose minerals, causing osteoporosis. Bone density may decrease by 1% per month, which may lead to a greater risk of osteoporosis-related fractures later in life. Fluid shifts towards the head may cause vision problems. NASA found that isolation in closed environments aboard the International Space Station led to depression, sleep disorders, and diminished personal interactions, likely due to confined spaces and the monotony and boredom of long space flight. Circadian rhythms may also be disrupted by space life, owing to the effects on sleep of the altered timing of sunset and sunrise. This can lead to exhaustion, as well as other sleep problems such as insomnia, which can reduce astronauts' productivity and lead to mental health disorders. High-energy radiation is a health risk that colonizers would face, as radiation in deep space is deadlier than what astronauts face now in low Earth orbit. Metal shielding on space vehicles protects against only 25–30% of space radiation, possibly leaving colonizers exposed to the remaining 70–75% of radiation and its short- and long-term health complications. Although there are many physical, mental, and emotional health risks for future colonizers and pioneers, solutions have been proposed to correct these problems. Mars500, HI-SEAS, and SMART-OP represent efforts to help reduce the effects of loneliness and confinement for long periods of time. Keeping contact with family members, celebrating holidays, and maintaining cultural identities all had an impact on minimizing the deterioration of mental health. There are also health tools in development to help astronauts reduce anxiety, as well as helpful tips to reduce the spread of germs and bacteria in a closed environment. Radiation risk may be reduced for astronauts by frequent monitoring and by focusing work away from weakly shielded areas of the spacecraft. Future space agencies could also ensure that every colonizer has a mandatory amount of daily exercise to prevent degradation of muscle. Various organizations advocate for or contribute to space colonization. Although established space colonies are a stock element in science fiction stories, fictional works that explore the themes, social or practical, of the settlement and occupation of a habitable world are much rarer.
https://en.wikipedia.org/wiki?curid=29248
Sexual orientation Sexual orientation is an enduring pattern of romantic or sexual attraction (or a combination of these) to persons of the opposite sex or gender, the same sex or gender, or to both sexes or more than one gender. These attractions are generally subsumed under heterosexuality, homosexuality, and bisexuality, while asexuality (the lack of sexual attraction to others) is sometimes identified as the fourth category. These categories are aspects of the more nuanced nature of sexual identity and terminology. For example, people may use other labels, such as "pansexual" or "polysexual", or none at all. According to the American Psychological Association, sexual orientation "also refers to a person's sense of identity based on those attractions, related behaviors, and membership in a community of others who share those attractions". "Androphilia" and "gynephilia" are terms used in behavioral science to describe sexual orientation as an alternative to a gender binary conceptualization. "Androphilia" describes sexual attraction to masculinity; "gynephilia" describes the sexual attraction to femininity. The term "sexual preference" largely overlaps with sexual orientation, but is generally distinguished in psychological research. A person who identifies as bisexual, for example, may sexually prefer one sex over the other. "Sexual preference" may also suggest a degree of voluntary choice, whereas the scientific consensus is that sexual orientation is not a choice. Scientists do not know the exact cause of sexual orientation, but they theorize that it is caused by a complex interplay of genetic, hormonal, and environmental influences. Although no single theory on the cause of sexual orientation has yet gained widespread support, scientists favor biologically-based theories. There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males. There is no substantive evidence which suggests parenting or early childhood experiences play a role with regard to sexual orientation. Research over several decades has demonstrated that sexual orientation ranges along a continuum, from exclusive attraction to the opposite sex to exclusive attraction to the same sex. Sexual orientation is reported primarily within biology and psychology (including sexology), but it is also a subject area in anthropology, history (including social constructionism), and law, and there are other explanations that relate to sexual orientation and culture. Sexual orientation is traditionally defined as including heterosexuality, bisexuality, and homosexuality, while asexuality is considered the fourth category of sexual orientation by some researchers and has been defined as the absence of a traditional sexual orientation. An asexual has little to no sexual attraction to people. It may be considered a lack of a sexual orientation, and there is significant debate over whether or not it is a sexual orientation. Most definitions of sexual orientation include a psychological component, such as the direction of an individual's erotic desires, or a behavioral component, which focuses on the sex of the individual's sexual partner/s. Some people prefer simply to follow an individual's self-definition or identity. Scientific and professional understanding is that "the core attractions that form the basis for adult sexual orientation typically emerge between middle childhood and early adolescence". 
Sexual orientation differs from sexual identity in that it encompasses relationships with others, while sexual identity is a concept of self. The American Psychological Association states that "[s]exual orientation refers to an enduring pattern of emotional, romantic, and/or sexual attractions to men, women, or both sexes" and that "[t]his range of behaviors and attractions has been described in various cultures and nations throughout the world. Many cultures use identity labels to describe people who express these attractions. In the United States, the most frequent labels are lesbians (women attracted to women), gay men (men attracted to men), and bisexual people (men or women attracted to both sexes). However, some people may use different labels or none at all". They additionally state that sexual orientation "is distinct from other components of sex and gender, including biological sex (the anatomical, physiological, and genetic characteristics associated with being male or female), gender identity (the psychological sense of being male or female), and social gender role (the cultural norms that define feminine and masculine behavior)". Sexual identity and sexual behavior are closely related to sexual orientation, but they are distinguished, with sexual identity referring to an individual's conception of themselves, behavior referring to actual sexual acts performed by the individual, and orientation referring to "fantasies, attachments and longings." Individuals may or may not express their sexual orientation in their behaviors. People who have a non-heterosexual sexual orientation that does not align with their sexual identity are sometimes referred to as 'closeted'. The term may, however, reflect a certain cultural context and particular stage of transition in societies which are gradually dealing with integrating sexual minorities. In studies related to sexual orientation, when dealing with the degree to which a person's sexual attractions, behaviors and identity match, scientists usually use the terms "concordance" or "discordance." Thus, a woman who is attracted to other women, but calls herself heterosexual and only has sexual relations with men, can be said to experience discordance between her sexual orientation (homosexual or lesbian) and her sexual identity and behaviors (heterosexual). "Sexual identity" may also be used to describe a person's perception of his or her own "sex", rather than sexual orientation. The term "sexual preference" has a similar meaning to "sexual orientation", and the two terms are often used interchangeably, but "sexual preference" suggests a degree of voluntary choice. The term has been listed by the American Psychological Association's Committee on Gay and Lesbian Concerns as a wording that advances a "heterosexual bias". "Androphilia" and "gynephilia" (or "gynecophilia") are terms used in behavioral science to describe sexual attraction, as an alternative to a homosexual and heterosexual conceptualization. They are used for identifying a subject's object of attraction without attributing a sex assignment or gender identity to the subject. Related terms such as "pansexual" and "polysexual" do not make any such assignations to the subject. People may also use terms such as "queer", "pansensual," "polyfidelitous," "ambisexual," or personalized identities such as "byke" or "biphilic". 
Using "androphilia" and "gynephilia" can avoid confusion and offense when describing people in non-western cultures, as well as when describing intersex and transgender people. Psychiatrist Anil Aggrawal explains that androphilia, along with gynephilia, "is needed to overcome immense difficulties in characterizing the sexual orientation of trans men and trans women. For instance, it is difficult to decide whether a trans man erotically attracted to males is a heterosexual female or a homosexual male; or a trans woman erotically attracted to females is a heterosexual male or a lesbian female. Any attempt to classify them may not only cause confusion but arouse offense among the affected subjects. In such cases, while defining sexual attraction, it is best to focus on the object of their attraction rather than on the sex or gender of the subject." Sexologist Milton Diamond writes, "The terms heterosexual, homosexual, and bisexual are better used as adjectives, not nouns, and are better applied to behaviors, not people. This usage is particularly advantageous when discussing the partners of transsexual or intersexed individuals. These newer terms also do not carry the social weight of the former ones." Some researchers advocate use of the terminology to avoid bias inherent in Western conceptualizations of human sexuality. Writing about the Samoan fa'afafine demographic, sociologist Johanna Schmidt writes that in cultures where a third gender is recognized, a term like "homosexual transsexual" does not align with cultural categories. "Same gender loving", or "SGL", is a term adopted by some African-Americans, meant as a culturally affirming homosexual identity. Some researchers, such as Bruce Bagemihl, have criticized the labels "heterosexual" and "homosexual" as confusing and degrading. Bagemihl writes, "...the point of reference for 'heterosexual' or 'homosexual' orientation in this nomenclature is solely the individual's genetic sex prior to reassignment (see for example, Blanchard et al. 1987, Coleman and Bockting, 1988, Blanchard, 1989). These labels thereby ignore the individual's personal sense of gender identity taking precedence over biological sex, rather than the other way around." Bagemihl goes on to take issue with the way this terminology makes it easy to claim transsexuals are really homosexual males seeking to escape from stigma. The earliest writers on sexual orientation usually understood it to be intrinsically linked to the subject's own sex. For example, it was thought that a typical female-bodied person who is attracted to female-bodied persons would have masculine attributes, and vice versa. This understanding was shared by most of the significant theorists of sexual orientation from the mid nineteenth to early twentieth century, such as Karl Heinrich Ulrichs, Richard von Krafft-Ebing, Magnus Hirschfeld, Havelock Ellis, Carl Jung, and Sigmund Freud, as well as many gender-variant homosexual people themselves. However, this understanding of homosexuality as sexual inversion was disputed at the time, and, through the second half of the twentieth century, gender identity came to be increasingly seen as a phenomenon distinct from sexual orientation. Transgender and cisgender people may be attracted to men, women, or both, although the prevalence of different sexual orientations is quite different in these two populations. 
An individual homosexual, heterosexual or bisexual person may be masculine, feminine, or androgynous, and in addition, many members and supporters of lesbian and gay communities now see the "gender-conforming heterosexual" and the "gender-nonconforming homosexual" as negative stereotypes. Nevertheless, studies by J. Michael Bailey and Kenneth Zucker found a majority of the gay men and lesbians sampled reporting various degrees of gender-nonconformity during their childhood years. Transgender people today identify with the sexual orientation that corresponds with their gender; meaning that a trans woman who is solely attracted to women would often identify as a lesbian, and a trans man solely attracted to women would be a straight man. Sexual orientation takes on greater intricacy when non-binary understandings of both sex (male, female, or intersex) and gender (man, woman, transgender, third gender, etc.) are considered. Sociologist Paula Rodriguez Rust (2000) argues for a more multifaceted definition of sexual orientation. Gay and lesbian people can have sexual relationships with someone of the opposite sex for a variety of reasons, including the desire for a perceived traditional family and concerns of discrimination and religious ostracism. While some LGBT people hide their respective orientations from their spouses, others develop positive gay and lesbian identities while maintaining successful heterosexual marriages. Coming out of the closet to oneself, a spouse of the opposite sex, and children can present challenges that are not faced by gay and lesbian people who are not married to people of the opposite sex or do not have children. Often, sexual orientation and sexual orientation identity are not distinguished, which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation. Sexual orientation is stable and unlikely to change for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is more likely for women than for men.
https://en.wikipedia.org/wiki?curid=29252
Spandrel A spandrel is a triangular space, usually found in pairs, between the top of an arch and a rectangular frame; between the tops of two adjacent arches; or one of the four spaces between a circle within a square. They are frequently filled with decorative elements. There are four or five accepted and cognate meanings of the term "spandrel" in architectural and art history, mostly relating to the space between a curved figure and a rectangular boundary – such as the space between the curve of an arch and a rectilinear bounding moulding, or the wallspace bounded by adjacent arches in an arcade and the stringcourse or moulding above them, or the space between the central medallion of a carpet and its rectangular corners, or the space between the circular face of a clock and the corners of the square revealed by its hood. Also included is the space under a flight of stairs, if it is not occupied by another flight of stairs. In a building with more than one floor, the term spandrel is also used to indicate the space between the top of the window in one story and the sill of the window in the story above. The term is typically employed when there is a sculpted panel or other decorative element in this space, or when the space between the windows is filled with opaque or translucent glass, in this case called "spandrel glass". In concrete or steel construction, an exterior beam extending from column to column, usually carrying an exterior wall load, is known as a "spandrel beam". The spandrels over doorways in Perpendicular work are generally richly decorated. At Magdalen College, Oxford, there is one which is perforated. The spandrel of doors is sometimes ornamented in the Decorated Period, but seldom forms part of the composition of the doorway itself, being generally over the label.
https://en.wikipedia.org/wiki?curid=29253
SimpleText SimpleText is the native text editor for the Apple classic Mac OS. SimpleText allows text editing and text formatting (underline, italic, bold, etc.), fonts, and sizes. It was developed to integrate the features included in the different versions of TeachText that were created by various software development groups within Apple. It can be considered similar to Windows' WordPad application. In later versions it also gained additional read-only display capabilities for PICT files, as well as other Mac OS built-in formats like QuickDraw GX and QTIF, 3DMF and even QuickTime movies. SimpleText can even record short sound samples and, using Apple's PlainTalk speech system, read out text in English. Users who wanted to add sounds longer than 24 seconds, however, needed to use a separate program to create the sound and then paste the desired sound into the document using ResEdit. SimpleText superseded TeachText, which was included in System Software up until it was replaced in 1994 (shipped with System Update 3.0 and System 7.1.2). The need for SimpleText arose after Apple stopped bundling MacWrite, to ensure that every user could open and read Readme documents. The key improvement of SimpleText over TeachText was the addition of text styling. The underlying OS required by SimpleText implemented a standard styled-text format, which meant that SimpleText could support multiple fonts and font sizes; prior Macintosh OS versions lacked this feature, so TeachText supported only a single font per document. Adding text styling features made SimpleText WorldScript-savvy, meaning that it can use Simplified and Traditional Chinese characters. Like TeachText, SimpleText was also limited to only 32 kB of text in a document, although images could increase the total file size beyond this limit. SimpleText style information was stored in the file's resource fork in such a way that if the resource fork was stripped (such as by uploading to a non-Macintosh server), the text information would be retained. In Mac OS X, SimpleText is replaced by the more powerful TextEdit application, which reads and writes more document formats as well as including word processor-like features such as a ruler and spell checking. TextEdit's styled text format is RTF, which is able to survive a single-forked file system intact. Apple has released the source code for a Carbon version of SimpleText in the Mac OS X Developer Tools. If the Developer Tools are installed, it can be found at /Developer/Examples/Carbon/SimpleText.
https://en.wikipedia.org/wiki?curid=29257
Statute of Westminster 1931 The Statute of Westminster 1931 is an Act of the Parliament of the United Kingdom whose modified versions are now domestic law within Australia and Canada; it has been repealed in New Zealand and implicitly in former Dominions that are no longer Commonwealth realms. Passed on 11 December 1931, the act, either immediately or upon ratification, effectively both established the legislative independence of the self-governing Dominions of the British Empire from the United Kingdom and bound them all to seek each other's approval for changes to monarchical titles and the common line of succession. It thus became a statutory embodiment of the principles of equality and common allegiance to the Crown set out in the Balfour Declaration of 1926. As the statute removed nearly all of the British parliament's authority to legislate for the Dominions, it had the effect of making the Dominions largely sovereign nations in their own right. It was a crucial step in the development of the Dominions as separate states. The Statute of Westminster's relevance today is that it sets the basis for the relationship between the Commonwealth realms and the Crown. The Statute of Westminster gave effect to certain political resolutions passed by the Imperial Conferences of 1926 and 1930; in particular, the Balfour Declaration of 1926. The main effect was the removal of the ability of the British parliament to legislate for the Dominions, part of which also required the repeal of the Colonial Laws Validity Act 1865 in its application to the Dominions. King George V expressed his desire that the laws of royal succession be exempt from the Statute's provisions, but it was determined that this would be contrary to the principles of equality set out in the Balfour Declaration. Both Canada and the Irish Free State pushed for the ability to amend the succession laws themselves, and section 2(2) (allowing a Dominion to amend or repeal laws of paramount force, such as the succession laws, insofar as they are part of the law of that Dominion) was included in the Statute of Westminster at Canada's insistence. After the Statute was passed, the British parliament could no longer make laws for the Dominions, other than with the request and consent of the government of that Dominion. Before then, the Dominions had legally been self-governing colonies of the United Kingdom. However, the statute had the effect of making them sovereign nations once they adopted it. Section 4 provides that no act of the British parliament passed after the Statute is to extend to a Dominion as part of its law unless the act expressly declares that the Dominion has requested, and consented to, its enactment, while section 2(1) provides that the Colonial Laws Validity Act 1865 no longer applies to laws made thereafter by the parliament of a Dominion. The whole Statute applied to the Dominion of Canada, the Irish Free State, and the Union of South Africa without the need for any acts of ratification; the governments of those countries gave their consent to the application of the law to their respective jurisdictions. Section 10 of the Statute provided that sections 2 to 6 would apply in the other three Dominions—Australia, New Zealand, and Newfoundland—only after the Parliament of that Dominion had legislated to adopt them. Since 1931, over a dozen new Commonwealth realms have been created, all of which now hold the same powers as the United Kingdom, Canada, Australia, and New Zealand over matters of change to the monarchy, though the Statute of Westminster is not part of their laws. Ireland and South Africa are now republics and Newfoundland is now part of Canada as a province.
Australia adopted sections 2 to 6 of the Statute of Westminster with the Statute of Westminster Adoption Act 1942, in order to clarify the validity of certain Australian legislation relating to the Second World War; the adoption was backdated to 3 September 1939, the date that Britain and Australia joined the war. Adopting section 2 of the Statute clarified that the Commonwealth Parliament was able to legislate inconsistently with British legislation, while adopting section 3 clarified that it could legislate with extraterritorial effect. Adopting section 4 clarified that Britain could legislate with effect on Australia as a whole only with Australia's request and consent. Nonetheless, under section 9 of the Statute, on matters not within Commonwealth power Britain could still legislate with effect in all or any of the Australian states without the agreement of the Commonwealth, although only to the extent of "the constitutional practice existing before the commencement" of the statute. However, this capacity was never used. In particular, it was not used to implement the result of the 1933 Western Australian secession referendum, as it did not have the support of the Australian government. All British power to legislate with effect in Australia ended with the Australia Act 1986, the British version of which says that it was passed with the request and consent of the Australian Parliament, which had obtained the concurrence of the Parliaments of the Australian states. The Statute limited the legislative authority of the British parliament over Canada, effectively giving the country legal autonomy as a self-governing Dominion, though the British Parliament retained the power to amend Canada's constitution at the request of the Parliament of Canada. That authority remained in effect until the Constitution Act, 1982, which transferred it to Canada, the final step to achieving full sovereignty. The British North America Acts—the written elements (in 1931) of the Canadian constitution—were excluded from the application of the statute because of disagreements between the Canadian provinces and the federal government over how the British North America Acts could be otherwise amended. These disagreements were resolved only in time for the passage of the Canada Act 1982, thus completing the patriation of the Canadian constitution to Canada. At that time, the Canadian parliament also repealed sections 4 and 7(1) of the Statute of Westminster. The Statute of Westminster remains a part of the constitution of Canada by virtue of section 52(2)(b) of the Constitution Act, 1982. As a consequence of the Statute's adoption, the Parliament of Canada gained the ability to abolish appeals to the Judicial Committee of the Privy Council. Criminal appeals were abolished in 1933, while civil appeals continued until 1949. The passage of the Statute of Westminster meant that changes in British legislation governing the succession to the throne no longer automatically applied to Canada. The Irish Free State never formally adopted the Statute of Westminster, its Executive Council (cabinet) taking the view that the Anglo-Irish Treaty of 1921 had already ended Westminster's right to legislate for the Irish Free State. The Free State's constitution gave the Oireachtas "sole and exclusive power of making laws".
Hence, even before 1931, the Irish Free State did not arrest British Army and Royal Air Force deserters on its territory, even though the UK believed post-1922 British laws gave the Free State's Garda Síochána the power to do so. The UK's Irish Free State Constitution Act 1922 said, however, that "nothing in the [Free State] Constitution shall be construed as prejudicing the power of [the British] Parliament to make laws affecting the Irish Free State in any case where, in accordance with constitutional practice, Parliament would make laws affecting other self-governing Dominions". Motions of approval of the Report of the Commonwealth Conference had been passed by the Dáil and Seanad in May 1931, and the final form of the Statute of Westminster included the Irish Free State among the Dominions the British Parliament could not legislate for without the Dominion's request and consent. Originally, the UK government had wanted to exclude from the Statute of Westminster the legislation underpinning the 1921 treaty, from which the Free State's constitution had emerged. Executive Council President (Prime Minister) W. T. Cosgrave objected, although he promised that the Executive Council would not amend the legislation unilaterally. The other Dominions backed Cosgrave and, when an amendment to similar effect was proposed at Westminster by John Gretton, parliament duly voted it down. When the Statute became law in the UK, Patrick McGilligan, the Free State Minister for External Affairs, stated: "It is a solemn declaration by the British people through their representatives in Parliament that the powers inherent in the Treaty position are what we have proclaimed them to be for the last ten years." He went on to present the Statute as largely the fruit of the Free State's efforts to secure for the other Dominions the same benefits it already enjoyed under the treaty. The Statute of Westminster had the effect of making the Irish Free State the first internationally recognised independent Irish state. After Éamon de Valera led Fianna Fáil to victory in the Free State election of 1932, he began removing the monarchical elements of the Constitution, beginning with the Oath of Allegiance. De Valera initially considered invoking the Statute of Westminster in making these changes, but John J. Hearne advised him not to. Abolishing the Oath of Allegiance in effect abrogated the 1921 treaty. Generally, the British thought that this was morally objectionable but legally permitted by the Statute of Westminster. Robert Lyon Moore, a Southern Unionist from County Donegal, challenged the legality of the abolition in the Irish Free State's courts and then appealed to the Judicial Committee of the Privy Council (JCPC) in London. However, the Free State had also abolished the right of appeal to the JCPC. In 1935, the JCPC ruled that both abolitions were valid under the Statute of Westminster. The Free State, which in 1937 was renamed "Ireland", left the Commonwealth in 1949 upon the coming into force of its Republic of Ireland Act. The Parliament of New Zealand adopted the Statute of Westminster by passing its Statute of Westminster Adoption Act 1947 in November 1947. The New Zealand Constitution Amendment Act, passed the same year, empowered the New Zealand Parliament to change the constitution, but did not remove the ability of the British Parliament to legislate regarding the New Zealand constitution.
The remaining role of the British Parliament was removed by the New Zealand Constitution Act 1986, and the Statute of Westminster was repealed in its entirety. The Dominion of Newfoundland never adopted the Statute of Westminster, especially because of financial troubles and corruption there. By request of the Dominion's government, the United Kingdom established the Commission of Government in 1934, resuming direct rule of Newfoundland. That arrangement remained until Newfoundland became a province of Canada in 1949, following referendums on the issue in 1948. Although the Union of South Africa was not among the Dominions that needed to adopt the Statute of Westminster for it to take effect, two laws—the Status of the Union Act, 1934, and the Royal Executive Functions and Seals Act of 1934—were passed to confirm South Africa's status as a sovereign state. The preamble to the Statute of Westminster sets out conventions which affect attempts to change the rules of succession to the Crown. The second paragraph of the preamble provides that any alteration in the law touching the succession to the throne or the royal style and titles would thereafter require the assent of the parliaments of all the Dominions as well as of the parliament of the United Kingdom. This means, for example, that any change in any realm to the Act of Settlement's provisions barring Roman Catholics from the throne would require the unanimous assent of the Parliaments of all the other Commonwealth realms if the shared aspect of the Crown is to be retained. The preamble does not itself contain enforceable provisions; it merely expresses a constitutional convention, albeit one fundamental to the basis of the relationship between the Commonwealth realms. (As sovereign nations, each is free to withdraw from the arrangement, using their respective process for constitutional amendment.) Additionally, per section 4, if a realm wished for a British act amending the Act of Settlement in the UK to become part of that realm's laws, thereby amending the Act of Settlement in that realm, it would have to request and consent to the British act, and the British act would have to state that such request and consent had been given. Section 4 of the Statute of Westminster has been repealed in a number of realms, however, and replaced by other constitutional clauses absolutely disallowing the British parliament from legislating for those realms. This has raised some logistical concerns, as it would mean multiple Parliaments would all have to assent to any future changes in any realm to its line of succession, as with the Perth Agreement's proposals to abolish male-preference primogeniture. During the abdication crisis in 1936, British Prime Minister Stanley Baldwin consulted the Commonwealth prime ministers at the request of King Edward VIII. The King wanted to marry Wallis Simpson, whom Baldwin and other British politicians considered unacceptable as Queen, as she was an American divorcée. Baldwin was able to get the then five Dominion prime ministers to agree with this and thus register their official disapproval of the King's planned marriage. The King later requested that the Commonwealth prime ministers be consulted on a compromise plan, in which he would wed Simpson under a morganatic marriage pursuant to which she would not become queen. Under Baldwin's pressure, this plan was also rejected by the Dominions. All of these negotiations occurred at a diplomatic level and never went to the Commonwealth parliaments.
However, the enabling legislation that allowed for the actual abdication (His Majesty's Declaration of Abdication Act 1936) did require the assent of each Dominion Parliament to be passed, and the request and consent of the Dominion governments, so as to allow it to be part of the law of each Dominion. For expediency and to avoid embarrassment, the British government had suggested that the Dominion governments regard whoever was monarch of the UK as automatically being their monarch. However, the Dominions rejected this; Prime Minister of Canada William Lyon Mackenzie King pointed out that the Statute of Westminster required Canada's request and consent to any legislation passed by the British Parliament before it could become part of Canada's laws and affect the line of succession in Canada. The text of the British act states that Canada requested and consented (the only Dominion to formally do both) to the act applying in Canada under the Statute of Westminster, while Australia, New Zealand, and the Union of South Africa simply assented. In February 1937, the South African Parliament formally gave its assent by passing His Majesty King Edward the Eighth's Abdication Act, 1937, which declared that Edward VIII had abdicated on 10 December 1936; that he and his descendants, if any, would have no right of succession to the throne; and that the Royal Marriages Act 1772 would not apply to him or his descendants, if any. The move was largely done for symbolic purposes, in an attempt by Prime Minister J. B. M. Hertzog to assert South Africa's independence from Britain. In Canada, the federal parliament passed the Succession to the Throne Act 1937 to assent to His Majesty's Declaration of Abdication Act and ratify the government's request and consent to it. In the Irish Free State, Prime Minister Éamon de Valera used the departure of Edward VIII as an opportunity to remove all explicit mention of the monarch from the Constitution of the Irish Free State, through the Constitution (Amendment No. 27) Act 1936, passed on 11 December 1936. The following day, the External Relations Act provided for the king to carry out certain diplomatic functions, if authorised by law; the same Act also brought Edward VIII's Instrument of Abdication into effect for the purposes of Irish law (s. 3(2)). A new Constitution of Ireland, with a president, was approved by Irish voters in 1937, with the Irish Free State becoming simply "Ireland", or, in the Irish language, "Éire". However, the identity of the Irish head of state remained unclear until 1949, when Ireland unambiguously became a republic outside the Commonwealth of Nations through the Republic of Ireland Act 1948. In some countries where the Statute of Westminster forms a part of the constitution, the anniversary of the date of the passage of the original British statute is commemorated as Statute of Westminster Day. In Canada, it is mandated that, on 11 December, the Royal Union Flag (as the Union Jack is officially known in Canada) be flown at properties owned by the federal Crown, where the requisite second flagpole is available.
https://en.wikipedia.org/wiki?curid=29263
Serbia Serbia, officially the Republic of Serbia, is a landlocked country situated at the crossroads of Central and Southeast Europe, in the southern Pannonian Plain and the central Balkans. It borders Hungary to the north, Romania to the northeast, Bulgaria to the southeast, North Macedonia to the south, Croatia and Bosnia and Herzegovina to the west, and Montenegro to the southwest. The country claims a border with Albania through the disputed territory of Kosovo. Serbia's population numbers approximately seven million without Kosovo, or 8.8 million if the territory is included. Its capital, Belgrade, ranks among the largest and oldest cities in southeastern Europe. Inhabited since the Paleolithic Age, the territory of modern-day Serbia saw Slavic migrations in the 6th century, which established several regional states in the early Middle Ages, at times recognised as tributaries to the Byzantine, Frankish and Hungarian kingdoms. The Serbian Kingdom obtained recognition by the Holy See and Constantinople in 1217, reaching its territorial apex in 1346 as the relatively short-lived Serbian Empire. By the mid-16th century, the Ottomans had annexed the entirety of modern-day Serbia; their rule was at times interrupted by the Habsburg Empire, which began expanding towards Central Serbia from the end of the 17th century while maintaining a foothold in Vojvodina. In the early 19th century, the Serbian Revolution established the nation-state as the region's first constitutional monarchy, which subsequently expanded its territory. Following disastrous casualties in World War I, and the subsequent unification of the former Habsburg crownland of Vojvodina (and other lands) with Serbia, the country co-founded Yugoslavia with other South Slavic nations, which would exist in various political formations until the Yugoslav Wars of the 1990s. During the breakup of Yugoslavia, Serbia formed a union with Montenegro, which was peacefully dissolved in 2006, restoring Serbia's independence as a sovereign state for the first time since 1918. In 2008, the parliament of the province of Kosovo unilaterally declared independence, with mixed responses from the international community. Serbia is one of the European countries with high numbers of registered national minorities, while the Autonomous Province of Vojvodina is recognisable for its multi-ethnic and multi-cultural identity. A unitary parliamentary constitutional republic, Serbia is a member of the UN, CoE, OSCE, PfP, BSEC and CEFTA, and is acceding to the WTO. Since 2014, the country has been negotiating its EU accession, with the prospect of joining the European Union by 2025. Like some other European countries, Serbia has suffered from democratic backsliding in recent years, having dropped in ranking from "Free" to "Partly Free" in the 2019 Freedom House report, and to a "hybrid regime" in its 2020 report, mainly "due to the cumulative increase of high-level corruption coupled with the absence, and in some cases actual dismantlement, of policies and institutions that would successfully fight or prevent corruption". Since 2007, Serbia has formally adhered to a policy of military neutrality. The country provides its citizens with social security, a universal health care system, and free primary and secondary education. An upper-middle-income economy with a dominant service sector, the country ranks relatively high on the Human Development Index (63rd) and Social Progress Index (45th), as well as the Global Peace Index (50th).
The origin of the name "Serbia" is unclear. Historically, authors have mentioned the Serbs (Срби) and the Sorbs of eastern Germany (Upper Sorbian: "Serbja"; Lower Sorbian: "Serby") in a variety of ways: Surbii, Suurbi, Serbloi, Zeriuani, Sorabi, Surben, Sarbi, Serbii, Serboi, Zirbi, Surbi, Sorben, etc. These authors used these names to refer to Serbs and Sorbs in areas where their historical (or current) presence was or is not disputed (notably in the Balkans and Lusatia). However, there are also sources that mention the same or similar names in other parts of the world (most notably in Asiatic Sarmatia in the Caucasus). The Proto-Slavic root word *sъrbъ has been variously connected with Russian "paserb" (пасерб, "stepson"), Ukrainian "pryserbytysia" (присербитися, "join in"), Old Indic "sarbh-" ("fight, cut, kill"), Latin "sero" ("make up, constitute"), and Greek "siro" (ειρω, "repeat"). The Polish linguist Stanisław Rospond (1906–1982) derived the ethnonym "Srb" from "srbati" (cf. "sorbo", "absorbo"). The Sorbian scholar H. Schuster-Šewc suggested a connection with the Proto-Slavic verb for "to slurp", *sьrb-, with cognates such as "сёрбать" (Russian), "сьорбати" (Ukrainian), "сёрбаць" (Belarusian), "srbati" (Slovak), "сърбам" (Bulgarian) and "серебати" (Old Russian). In his book De Administrando Imperio, Constantine VII Porphyrogenitus suggests that the Serbs originated from White Serbia, on the far side of "Turkey" (the name Byzantine authors of the period used for Hungary). He believed that the people split in two, with the half that became known as the Serbs coming down to settle Byzantine land. In this line of thinking, "Serb" is derived from the word Surbi, which was used to describe the people of the proto-country. From 1945 to 1963, the official name for Serbia was the People's Republic of Serbia; from 1963 to 1990, it was the Socialist Republic of Serbia. Since 1990, the official name of the country has been the Republic of Serbia. From 1992 to 2006, however, the official names of the states of which Serbia formed a part were the Federal Republic of Yugoslavia and then the State Union of Serbia and Montenegro. Archaeological evidence of Paleolithic settlements on the territory of present-day Serbia is scarce. A fragment of a human jaw was found in Sićevo (Mala Balanica) and is believed to be between 397,000 and 525,000 years old. Around 6,500 BC, during the Neolithic, the Starčevo and Vinča cultures existed in the region of modern-day Belgrade. They dominated much of Southeastern Europe (as well as parts of Central Europe and Asia Minor). Several important archaeological sites from this era, including Lepenski Vir and Vinča-Belo Brdo, still exist near the banks of the Danube. During the Iron Age, the local tribes of the Triballi, Dardani, and Autariatae were encountered by the Ancient Greeks during their cultural and political expansion into the region, from the 5th up to the 2nd century BC. The Celtic tribe of the Scordisci settled throughout the area in the 3rd century BC. They formed a tribal state and built several fortifications, including their capital at Singidunum (present-day Belgrade) and Naissos (present-day Niš). The Romans conquered much of the territory in the 2nd century BC. In 167 BC the Roman province of Illyricum was established; the remainder was conquered around 75 BC, forming the Roman province of Moesia Superior; the modern-day Srem region was conquered in 9 BC; and Bačka and Banat in 106 AD, after the Dacian Wars.
As a result, contemporary Serbia extends fully or partially over several former Roman provinces, including Moesia, Pannonia, Praevalitana, Dalmatia, Dacia and Macedonia. The chief towns of Upper Moesia (and beyond) were Singidunum (Belgrade), Viminacium (now Old Kostolac), Remesiana (now Bela Palanka), Naissos (Niš), and Sirmium (now Sremska Mitrovica), the last of which served as a Roman capital during the Tetrarchy. Seventeen Roman emperors were born in the area of modern-day Serbia, second only to contemporary Italy. The most famous of these was Constantine the Great, the first Christian emperor, who issued an edict ordering religious tolerance throughout the Empire. When the Roman Empire was divided in 395, most of Serbia remained under the Eastern Roman Empire, while its northwestern parts were included in the Western Roman Empire. By the 6th century, South Slavs had migrated into the European provinces of the Byzantine Empire in large numbers, merging with the local Romanised population, which was gradually assimilated. The White Serbs, an early Slavic tribe from White Serbia, first settled in an area near Thessaloniki in the 6th and early 7th centuries, and established the Serbian Principality by the 8th century. It was said in 822 that the Serbs inhabited the greater part of Roman Dalmatia, their territory spanning what is today southwestern Serbia and parts of neighbouring countries, while the Byzantine Empire and the Bulgarian Empire held other parts of the territory. The Serbian rulers adopted Christianity around 870, and by the mid-10th century the Serbian state stretched to the Adriatic Sea, bounded by the Neretva, the Sava, the Morava, and Skadar. Between 1166 and 1371 Serbia was ruled by the Nemanjić dynasty (whose legacy is especially cherished), under whom the state was elevated to a kingdom (and briefly an empire) and the Serbian bishopric to an autocephalous archbishopric (through the efforts of Sava, the country's patron saint). Monuments of the Nemanjić period survive in many monasteries (several being World Heritage sites) and fortifications. During these centuries the Serbian state (and its influence) expanded significantly. The northern part, Vojvodina, was ruled by the Kingdom of Hungary. The period known as the Fall of the Serbian Empire saw the once-powerful state fragmented into duchies, culminating in the Battle of Kosovo (1389) against the rising Ottoman Empire. The Ottomans finally conquered the Serbian Despotate in 1459. The Ottoman threat and eventual conquest saw massive migrations of Serbs to the west and north. In all Serbian lands conquered by the Ottomans, the native nobility was eliminated and the peasantry was enserfed to Ottoman rulers, while much of the clergy fled or were confined to isolated monasteries. Under the Ottoman system, Serbs, as Christians, were considered an inferior class of people and subjected to heavy taxes, and a portion of the Serbian population experienced Islamisation. Many Serbs were recruited through the devshirme system, a form of slavery in the Ottoman Empire in which boys from Balkan Christian families were forcibly converted to Islam and trained for infantry units of the Ottoman army known as the Janissaries. The Serbian Patriarchate of Peć was extinguished in 1463 but reestablished in 1557, providing for a limited continuation of Serbian cultural traditions within the Ottoman Empire under the millet system.
After the loss of statehood to the Ottoman Empire, Serbian resistance continued in the northern regions (modern Vojvodina), under titular despots (until 1537) and popular leaders like Jovan Nenad (1526–1527). From 1521 to 1552, the Ottomans conquered Belgrade and the regions of Syrmia, Bačka, and Banat. Continuing wars and various rebellions constantly challenged Ottoman rule. One of the most significant was the Banat Uprising in 1594 and 1595, which was part of the Long War (1593–1606) between the Habsburgs and the Ottomans. The area of modern Vojvodina endured a century-long Ottoman occupation before being ceded to the Habsburg Empire, partially by the Treaty of Karlovci (1699) and fully by the Treaty of Požarevac (1718). As the Great Serb Migrations depopulated most of southern Serbia, the Serbs sought refuge across the Danube River in Vojvodina to the north and in the Military Frontier in the west, where they were granted rights by the Austrian crown under measures such as the "Statuta Wallachorum" of 1630. Much of central Serbia switched from Ottoman rule to Habsburg control (1686–91) during the Habsburg-Ottoman war (1683–1699). Following several petitions, Emperor Leopold I formally granted Serbs who wished to settle in the northern regions the right to their autonomous crown land. The ecclesiastical centre of the Serbs also moved northwards, to the Metropolitanate of Karlovci, and the Serbian Patriarchate of Peć was once again abolished by the Ottomans in 1766. Between 1718 and 1739, the Habsburg Monarchy occupied much of central Serbia and established the "Kingdom of Serbia". Those gains were lost by the Treaty of Belgrade in 1739, when the Ottomans retook the region. Apart from the territory of modern Vojvodina, which remained under the Habsburg Empire, the central regions of Serbia were occupied once more by the Habsburgs in 1788–1792. The Serbian Revolution for independence from the Ottoman Empire lasted eleven years, from 1804 until 1815. It comprised two separate uprisings; Serbia gained autonomy from the Ottoman Empire in 1830, which eventually evolved into full independence in 1878. During the First Serbian Uprising (1804–1813), led by vožd Karađorđe Petrović, Serbia was independent for almost a decade before the Ottoman army was able to reoccupy the country. Shortly after this, the Second Serbian Uprising began in 1815. Led by Miloš Obrenović, it ended with a compromise between the Serbian revolutionaries and the Ottoman authorities. Serbia was also one of the first nations in the Balkans to abolish feudalism. The Akkerman Convention in 1826, the Treaty of Adrianople in 1829 and, finally, the Hatt-i Sharif recognised the autonomy of Serbia. The First Serbian Constitution was adopted on 15 February 1835 (the anniversary of the outbreak of the First Serbian Uprising), making the country one of the first in Europe to adopt a democratic constitution. 15 February is now commemorated as Statehood Day, a public holiday. Following the clashes between the Ottoman army and Serbs in Belgrade in 1862, and under pressure from the Great Powers, the last Turkish soldiers had left the Principality by 1867, making the country "de facto" independent. By enacting a new constitution in 1869 without consulting the Porte, Serbia confirmed its "de facto" independence. In 1876, Serbia declared war on the Ottoman Empire in support of the ongoing Christian uprisings in Bosnia-Herzegovina and Bulgaria.
The formal independence of the country was internationally recognised at the Congress of Berlin in 1878, which ended the Russo-Turkish War; the treaty, however, prohibited Serbia from uniting with other Serbian regions by placing Bosnia and Herzegovina under Austro-Hungarian occupation, alongside the occupation of the region of Raška. From 1815 to 1903, the Principality of Serbia was ruled by the House of Obrenović, save for the rule of Prince Aleksandar Karađorđević between 1842 and 1858. In 1882, the Principality of Serbia became the Kingdom of Serbia, ruled by King Milan I. The House of Karađorđević, descendants of the revolutionary leader Karađorđe Petrović, assumed power in 1903 following the May Overthrow. In the north, the 1848 revolution in Austria led to the establishment of the autonomous territory of Serbian Vojvodina; by 1849, the region was transformed into the Voivodeship of Serbia and Banat of Temeschwar. In the course of the First Balkan War in 1912, the Balkan League defeated the Ottoman Empire and captured its European territories, enabling the territorial expansion of the Kingdom of Serbia into the regions of Raška, Kosovo, Metohija, and Vardar Macedonia. The Second Balkan War soon ensued when Bulgaria turned on its former allies, but it was defeated, resulting in the Treaty of Bucharest. In two years, Serbia enlarged its territory and its population by 50%; it also suffered high casualties on the eve of World War I, with more than 36,000 dead. Austria-Hungary became wary of the rising regional power on its borders and its potential to become an anchor for the unification of Serbs and other South Slavs, and the relationship between the two countries became tense. The assassination of Archduke Franz Ferdinand of Austria on 28 June 1914 in Sarajevo by Gavrilo Princip, a member of the Young Bosnia organisation, led to Austria-Hungary declaring war on Serbia on 28 July. The local war escalated when Germany declared war on Russia and invaded France and Belgium, drawing Great Britain into the conflict that became the First World War. Serbia won the first major battles of the war, including the Battle of Cer and the Battle of Kolubara, which marked the first Allied victories against the Central Powers. Despite initial success, it was eventually overpowered by the Central Powers in 1915. Most of its army, along with some civilians, retreated through Albania to Greece and Corfu, suffering immense losses on the way. Serbia was occupied by the Central Powers. After the Central Powers' military situation on other fronts worsened, the remnants of the Serbian army returned to the front and led a final breakthrough of the enemy lines on 15 September 1918, liberating Serbia and defeating Bulgaria and Austria-Hungary. Through its campaign, Serbia was a major Balkan Entente power which contributed significantly to the Allied victory in the Balkans in November 1918, especially by helping France force Bulgaria's capitulation. Serbia's casualties accounted for 8% of the total Entente military deaths; 58% (243,600) of the soldiers of the Serbian army perished in the war. The total number of casualties is placed at around 700,000, more than 16% of Serbia's prewar population and a majority (57%) of its overall male population. Serbia suffered the highest casualty rate of any country in World War I. As the Austro-Hungarian Empire collapsed, the territory of Syrmia united with Serbia on 24 November 1918.
A day later, on 25 November 1918, the Grand National Assembly of Serbs, Bunjevci and other Slavs in Banat, Bačka and Baranja declared the unification of Banat, Bačka and Baranja with the Kingdom of Serbia. On 26 November 1918, the Podgorica Assembly deposed the House of Petrović-Njegoš and united Montenegro with Serbia. On 1 December 1918, in Belgrade, Serbian Prince Regent Alexander Karađorđević proclaimed the Kingdom of the Serbs, Croats, and Slovenes, under King Peter I of Serbia. King Peter was succeeded by his son, Alexander, in August 1921. Serb centralists and Croat autonomists clashed in the parliament, and most governments were fragile and short-lived. Nikola Pašić, a conservative prime minister, headed or dominated most governments until his death. King Alexander established a dictatorship in 1929 with the aim of establishing the Yugoslav ideology and a single Yugoslav nation, changed the name of the country to Yugoslavia, and changed the internal divisions from the 33 oblasts to nine new banovinas. The effect of Alexander's dictatorship was to further alienate the non-Serbs living in Yugoslavia from the idea of unity. Alexander was assassinated in Marseille during an official visit in 1934 by Vlado Chernozemski, a member of the IMRO. He was succeeded by his eleven-year-old son Peter II, and a regency council headed by his cousin, Prince Paul, was established. In August 1939 the Cvetković–Maček Agreement established an autonomous Banate of Croatia as a solution to Croatian concerns. In 1941, in spite of Yugoslav attempts to remain neutral in the war, the Axis powers invaded Yugoslavia. The territory of modern Serbia was divided between Hungary, Bulgaria, the Independent State of Croatia and Italy (Greater Albania and Montenegro), while the remaining part of occupied Serbia was placed under the military administration of Nazi Germany, with Serbian puppet governments led by Milan Aćimović and Milan Nedić, assisted by Dimitrije Ljotić's fascist organisation, the Yugoslav National Movement (Zbor). Yugoslav territory was the scene of a civil war between royalist Chetniks commanded by Draža Mihailović and communist Partisans commanded by Josip Broz Tito. Axis auxiliary units of the Serbian Volunteer Corps and the Serbian State Guard fought against both of these forces. The Siege of Kraljevo, led by Chetnik forces against the Nazis, was a major battle of the uprising in Serbia. Several days after the battle began, German forces committed a massacre of approximately 2,000 civilians, an event known as the Kraljevo massacre, in reprisal for the attack. The Draginac and Loznica massacre of 2,950 villagers in western Serbia in 1941 was the first large execution of civilians in occupied Serbia by the Germans; the Kragujevac massacre and the Novi Sad raid against Jews and Serbs, carried out by Hungarian fascists, were the most notorious, with over 3,000 victims in each case. After one year of occupation, around 16,000 Serbian Jews, or around 90% of the area's pre-war Jewish population, had been murdered during the Holocaust in Serbia. Many concentration camps were established across the area. Banjica concentration camp was the largest, jointly run by the German army and Nedić's regime; its primary victims were Serbian Jews, Roma, and Serb political prisoners.
During this period, hundreds of thousands of ethnic Serbs fled the Axis puppet state known as the Independent State of Croatia and sought refuge in German-occupied Serbia, seeking to escape the large-scale persecution and genocide of Serbs, Jews, and Roma being committed by the Ustaše regime. According to Josip Broz Tito himself, Serbs made up the vast majority of anti-fascist fighters and Yugoslav Partisans for the whole course of World War II. The Republic of Užice, a short-lived liberated territory established by the Partisans and the first liberated territory in World War II Europe, was organised as a military mini-state that existed in the autumn of 1941 in the west of occupied Serbia. By late 1944, the Belgrade Offensive had swung the civil war in favour of the Partisans, who subsequently gained control of Yugoslavia. Following the Belgrade Offensive, the Syrmian Front was the last major military action of World War II in Serbia. A study by Vladimir Žerjavić estimates total war-related deaths in Yugoslavia at 1,027,000, including 273,000 in Serbia. The Ustaše regime committed the Genocide of Serbs and systematically murdered approximately 300,000 to 500,000 Serbs. The victory of the communist Partisans resulted in the abolition of the monarchy and a subsequent constitutional referendum. A one-party state was soon established in Yugoslavia by the Communist Party of Yugoslavia. It is claimed that between 60,000 and 70,000 people died in Serbia during the 1944–45 communist takeover and purge. All opposition was suppressed, and people deemed to be promoting opposition to socialism or separatism were imprisoned or executed for sedition. Serbia became a constituent republic within the SFRY, known as the Socialist Republic of Serbia, and had a republic branch of the federal communist party, the League of Communists of Serbia. Serbia's most powerful and influential politician in Tito-era Yugoslavia was Aleksandar Ranković, one of the "big four" Yugoslav leaders, alongside Tito, Edvard Kardelj, and Milovan Đilas. Ranković was later removed from office because of disagreements regarding Kosovo's nomenklatura and the unity of Serbia; his dismissal was highly unpopular among Serbs. Pro-decentralisation reformers in Yugoslavia succeeded in the late 1960s in attaining substantial decentralisation of powers, creating substantial autonomy in Kosovo and Vojvodina and recognising a distinctive "Muslim" nationality. As a result of these reforms, there was a massive overhaul of Kosovo's nomenklatura and police, which shifted from being Serb-dominated to ethnic-Albanian-dominated through the large-scale dismissal of Serbs. Further concessions were made to the ethnic Albanians of Kosovo in response to unrest, including the creation of the University of Pristina as an Albanian-language institution. These changes created widespread fear among Serbs of being treated as second-class citizens. Belgrade, the capital of SFR Yugoslavia and SR Serbia, hosted the first Non-Aligned Movement Summit in September 1961, as well as the first major gathering of the Organization for Security and Co-operation in Europe (OSCE), held with the aim of implementing the Helsinki Accords, from October 1977 to March 1978. The 1972 smallpox outbreak in SAP Kosovo and other parts of SR Serbia was the last major outbreak of smallpox in Europe since World War II. In 1989, Slobodan Milošević rose to power in Serbia.
Milošević promised a reduction of powers for the autonomous provinces of Kosovo and Vojvodina, where his allies subsequently took over power during the Anti-bureaucratic revolution. This ignited tensions with the communist leaderships of the other republics of Yugoslavia and awakened ethnic nationalism across the country, eventually resulting in the breakup of Yugoslavia, with Slovenia, Croatia, Bosnia and Herzegovina, and Macedonia declaring independence during 1991 and 1992. Serbia and Montenegro remained together as the Federal Republic of Yugoslavia (FRY). However, according to the Badinter Commission, the country was not legally considered a continuation of the former SFRY but a new state. Fueled by ethnic tensions, the Yugoslav Wars (1991–2001) erupted, with the most severe conflicts taking place in Croatia and Bosnia, where large ethnic Serb communities opposed independence from Yugoslavia. The FRY remained outside the conflicts, but provided logistic, military and financial support to Serb forces in the wars. In response, the UN imposed sanctions against Serbia, which led to political isolation and the collapse of the economy (GDP decreased from $24 billion in 1990 to under $10 billion in 1993). Following the rise of nationalism and political tensions after Slobodan Milošević came to power, numerous anti-war movements developed in Serbia and many anti-war protests were held in Belgrade. Multi-party democracy was introduced in Serbia in 1990, officially dismantling the one-party system. Critics of Milošević stated that the government continued to be authoritarian despite constitutional changes, as Milošević maintained strong political influence over the state media and security apparatus. When the ruling Socialist Party of Serbia refused to accept its defeat in the municipal elections of 1996, Serbians engaged in large protests against the government. In 1998, continued clashes between the Albanian guerrilla Kosovo Liberation Army and Yugoslav security forces led to the short Kosovo War (1998–99), in which NATO intervened, leading to the withdrawal of Serbian forces and the establishment of UN administration in the province. After the Yugoslav Wars, Serbia became home to the highest number of refugees and internally displaced persons in Europe. After the presidential elections of September 2000, opposition parties accused Milošević of electoral fraud. A campaign of civil resistance followed, led by the Democratic Opposition of Serbia (DOS), a broad coalition of anti-Milošević parties. This culminated on 5 October, when half a million people from all over the country congregated in Belgrade, compelling Milošević to concede defeat. The fall of Milošević ended Yugoslavia's international isolation. Milošević was sent to the International Criminal Tribunal for the former Yugoslavia. The DOS announced that FR Yugoslavia would seek to join the European Union. In 2003, the Federal Republic of Yugoslavia was renamed Serbia and Montenegro, and the EU opened negotiations with the country for the Stabilisation and Association Agreement. Serbia's political climate remained tense, and in 2003 the Prime Minister, Zoran Đinđić, was assassinated as a result of a plot originating from circles of organised crime and former security officials. On 21 May 2006, Montenegro held a referendum to determine whether to end its union with Serbia. The results showed 55.4% of voters in favour of independence, just above the 55% required by the referendum rules.
On 5 June 2006, the National Assembly of Serbia declared Serbia to be the legal successor to the former state union. The Assembly of Kosovo unilaterally declared independence from Serbia on 17 February 2008. Serbia immediately condemned the declaration and continues to deny any statehood to Kosovo. The declaration sparked varied responses from the international community, with some states welcoming it and others condemning the unilateral move. Status-neutral talks between Serbia and the Kosovo-Albanian authorities are held in Brussels, mediated by the EU. In April 2008 Serbia was invited to join the Intensified Dialogue programme with NATO, despite the diplomatic rift with the alliance over Kosovo. Serbia officially applied for membership in the European Union on 22 December 2009 and received candidate status on 1 March 2012, following a delay in December 2011. Following a positive recommendation of the European Commission and European Council in June 2013, negotiations to join the EU commenced in January 2014. Since Aleksandar Vučić came to power, Serbia has suffered from democratic backsliding into authoritarianism, accompanied by a decline in media freedom and civil liberties. Massive anti-government protests began in 2018 and continued into 2020, making them among the longest-running protests in Europe. After the COVID-19 pandemic spread to Serbia in March 2020, a state of emergency was declared and a curfew was introduced for the first time in Serbia since World War II. Situated at the crossroads between Central and Southern Europe, Serbia spans the Balkan peninsula and the Pannonian Plain. It lies between latitudes 41° and 47° N, and longitudes 18° and 23° E. The country covers a total of 88,361 km2 (including Kosovo), ranking it 113th in the world; with Kosovo excluded, the total area is 77,474 km2, which would make it 117th. Its total border length amounts to 2,027 km (Albania 115 km, Bosnia and Herzegovina 302 km, Bulgaria 318 km, Croatia 241 km, Hungary 151 km, North Macedonia 221 km, Montenegro 203 km and Romania 476 km). All of Kosovo's borders with Albania (115 km), North Macedonia (159 km) and Montenegro (79 km) are under the control of the Kosovo border police. Serbia treats the 352 km border between Kosovo and the rest of Serbia as an "administrative line"; it is under the shared control of the Kosovo border police and Serbian police forces, and there are 11 crossing points. The Pannonian Plain covers the northern third of the country (Vojvodina and Mačva), while the easternmost tip of Serbia extends into the Wallachian Plain. The terrain of the central part of the country, with the region of Šumadija at its heart, consists chiefly of hills traversed by rivers. Mountains dominate the southern third of Serbia. The Dinaric Alps stretch across the west and southwest, following the flow of the rivers Drina and Ibar. The Carpathian Mountains and Balkan Mountains stretch in a north–south direction through eastern Serbia. The ancient mountains in the southeast corner of the country belong to the Rilo-Rhodope Mountain system. Elevation ranges from the Midžor peak of the Balkan Mountains, the highest peak in Serbia excluding Kosovo, down to the lowest point near the Danube river at Prahovo. The largest lake is Đerdap Lake (163 square kilometres) and the longest river passing through Serbia is the Danube (587.35 kilometres). The climate of Serbia is shaped by the Eurasian landmass and by the Atlantic Ocean and the Mediterranean Sea.
Given its mean January and July temperatures, the climate of Serbia can be classified as warm-humid continental or humid subtropical. In the north, the climate is more continental, with cold winters and hot, humid summers, along with well-distributed rainfall patterns. In the south, summers and autumns are drier, and winters are relatively cold, with heavy inland snowfall in the mountains. Differences in elevation, proximity to the Adriatic Sea and large river basins, as well as exposure to the winds, account for climate variations. Southern Serbia is subject to Mediterranean influences. The Dinaric Alps and other mountain ranges contribute to the cooling of most of the warm air masses. Winters are quite harsh on the Pešter plateau because of the mountains which encircle it. One of the climatic features of Serbia is the Košava, a cold and very squally southeastern wind which starts in the Carpathian Mountains and follows the Danube northwest through the Iron Gate, where it gains a jet effect, and continues to Belgrade; it can spread as far south as Niš. Average annual air temperatures for the period 1961–1990 decline with altitude, from the low-lying areas to the high mountains. The lowest temperature recorded in Serbia was measured on 13 January 1985 at Karajukića Bunari in Pešter, and the highest on 24 July 2007 in Smederevska Palanka. Serbia is one of the few European countries with "very high risk" exposure to natural hazards (earthquakes, storms, floods, droughts). It is estimated that potential floods, particularly in areas of Central Serbia, threaten over 500 larger settlements and an area of 16,000 square kilometres. The most disastrous were the floods of May 2014, when 57 people died and damage of over 1.5 billion euros was inflicted. Almost all of Serbia's rivers drain to the Black Sea by way of the Danube. The Danube, the second-largest European river, flows through Serbia for 588 kilometres (21% of its overall length) and represents the country's major source of fresh water. It is joined by its biggest tributaries: the Great Morava (the longest river entirely within Serbia, at 493 km), the Sava, and the Tisza. One notable exception is the Pčinja, which flows into the Aegean. The Drina river forms the natural border between Bosnia and Herzegovina and Serbia, and represents the main kayaking and rafting attraction in both countries. Due to the configuration of the terrain, natural lakes are sparse and small; most of them are located in the lowlands of Vojvodina, like the aeolian lake Palić or the numerous oxbow lakes along river courses (like Zasavica and Carska Bara). However, there are numerous artificial lakes, mostly created by hydroelectric dams, the biggest being Đerdap (Iron Gates) on the Danube, with 163 km2 on the Serbian side (a total area of 253 km2 is shared with Romania), Perućac on the Drina, and Vlasina. The largest waterfall, Jelovarnik, located in Kopaonik, is 71 m high. The abundance of relatively unpolluted surface water and numerous underground sources of high-quality natural and mineral water presents an opportunity for export and economic improvement; however, more extensive exploitation and production of bottled water began only recently. With 29.1% of its territory covered by forest, Serbia is considered a middle-forested country, compared with global forest coverage of 30% and the European average of 35%.
The total forest area in Serbia is 2,252,000 ha (1,194,000 ha or 53% state-owned, and 1,058,387 ha or 47% privately owned), or 0.3 ha per inhabitant. The most common trees are oak, beech, pines and firs. Serbia is a country of rich ecosystem and species diversity: covering only 1.9% of European territory, it is home to 39% of Europe's vascular flora, 51% of its fish fauna, 40% of its reptile and amphibian fauna, 74% of its bird fauna, and 67% of its mammal fauna. Its abundance of mountains and rivers makes it an ideal environment for a variety of animals, many of which are protected, including wolves, lynx, bears, foxes and stags. There are 17 snake species living across the country, 8 of which are venomous. The Tara mountain in western Serbia is one of the last regions in Europe where bears can still live in complete freedom. Serbia is home to about 380 species of bird. In Carska Bara, there are over 300 bird species on just a few square kilometres. Uvac Gorge is considered one of the last habitats of the griffon vulture in Europe. In the area around the city of Kikinda, in the northernmost part of the country, some 145 endangered long-eared owls have been recorded, making it the world's biggest settlement of the species. The country is also considerably rich in threatened species of bats and butterflies. There are 380 protected areas in Serbia, encompassing 4,947 square kilometres or 6.4% of the country. The "Spatial Plan of the Republic of Serbia" states that the total protected area should be increased to 12% by 2021. Those protected areas include 5 national parks (Đerdap, Tara, Kopaonik, Fruška Gora and Šar Mountain), 15 nature parks, 15 "landscapes of outstanding features", 61 nature reserves, and 281 natural monuments. Air pollution is a significant problem in the Bor area, due to the operation of the large copper mining and smelting complex, and in Pančevo, where the oil and petrochemical industry is based. Some cities suffer from water supply problems, due to mismanagement and low investment in the past, as well as from water pollution (like the pollution of the Ibar River by the Trepča zinc-lead combine, affecting the city of Kraljevo, or the presence of natural arsenic in underground waters in Zrenjanin). Poor waste management has been identified as one of the most important environmental problems in Serbia, and recycling is a fledgling activity, with only 15% of waste being turned back for reuse. The 1999 NATO bombing caused serious damage to the environment, with several thousand tonnes of toxic chemicals stored in targeted factories and refineries released into the soil and water basins. Serbia is a parliamentary republic, with the government divided into legislative, executive and judiciary branches. Serbia had one of the first modern constitutions in Europe, the 1835 Constitution (known as the Sretenje Constitution), which was at the time considered among the most progressive and liberal constitutions on the continent. Since then it has adopted 10 different constitutions. The current constitution was adopted in 2006, in the aftermath of the Montenegrin independence referendum which, by consequence, renewed the independence of Serbia itself. The Constitutional Court rules on matters regarding the Constitution. The President of the Republic ("Predsednik Republike") is the head of state; the president is elected by popular vote to a five-year term and is limited by the Constitution to a maximum of two terms.
In addition to being the commander-in-chief of the armed forces, the president has the procedural duty of appointing the prime minister with the consent of the parliament, and has some influence on foreign policy. Aleksandar Vučić of the Serbian Progressive Party is the current president, following the 2017 presidential election. The seat of the presidency is Novi Dvor. The Government ("Vlada") is composed of the prime minister and cabinet ministers. The Government is responsible for proposing legislation and a budget, executing the laws, and guiding the country's foreign and internal policies. The current prime minister is Ana Brnabić, nominated by the Serbian Progressive Party. The National Assembly ("Narodna skupština") is a unicameral legislative body. The National Assembly has the power to enact laws, approve the budget, schedule presidential elections, select and dismiss the Prime Minister and other ministers, declare war, and ratify international treaties and agreements. It is composed of 250 proportionally elected members who serve four-year terms. The largest political parties in Serbia are the centre-right Serbian Progressive Party, the leftist Socialist Party of Serbia and the far-right Serbian Radical Party. Serbia is the fourth modern-day European country, after France, Austria and the Netherlands, to have a codified legal system. The country has a three-tiered judicial system, made up of the Supreme Court of Cassation as the court of last resort, Courts of Appeal as the appellate instance, and Basic and High courts as the general jurisdictions at first instance. Courts of special jurisdiction are the Administrative Court, commercial courts (including the Commercial Court of Appeal at second instance) and misdemeanour courts (including the High Misdemeanour Court at second instance). The judiciary is overseen by the Ministry of Justice. Serbia has a typical civil law legal system. Law enforcement is the responsibility of the Serbian Police, which is subordinate to the Ministry of the Interior and fields 27,363 uniformed officers. National security and counterintelligence are the responsibility of the Security Intelligence Agency (BIA). Serbia has established diplomatic relations with 188 UN member states, the Holy See, the Sovereign Military Order of Malta, and the European Union. Foreign relations are conducted through the Ministry of Foreign Affairs. Serbia has a network of 65 embassies and 23 consulates internationally, while there are 69 foreign embassies, 5 consulates and 4 liaison offices in Serbia. Serbian foreign policy is focused on achieving the strategic goal of becoming a member state of the European Union (EU). Serbia started the process of joining the EU by signing the Stabilisation and Association Agreement on 29 April 2008 and officially applied for membership in the European Union on 22 December 2009. It received full candidate status on 1 March 2012 and started accession talks on 21 January 2014. The European Commission considers accession possible by 2025. The province of Kosovo declared independence from Serbia on 17 February 2008, which sparked varied responses from the international community, with some states welcoming it and others condemning the unilateral move. In protest, Serbia initially recalled its ambassadors from countries that recognised Kosovo's independence. The resolution of 26 December 2007 by the National Assembly stated that both the Kosovo declaration of independence and recognition thereof by any state would be a gross violation of international law.
Serbia began cooperation and dialogue with NATO in 2006, when the country joined the Partnership for Peace programme and the Euro-Atlantic Partnership Council. The country's military neutrality was formally proclaimed by a resolution adopted by Serbia's parliament in December 2007, which makes joining any military alliance contingent on a popular referendum, a stance acknowledged by NATO. On the other hand, Serbia's relations with Russia are habitually described by the mass media as a "centuries-old religious, ethnic and political alliance", and Russia is said to have sought to solidify its relationship with Serbia since the imposition of sanctions against Russia in 2014. The Serbian Armed Forces are subordinate to the Ministry of Defence and are composed of the Army and the Air Force. Although a landlocked country, Serbia operates a River Flotilla which patrols the Danube, Sava, and Tisza rivers. The Serbian Chief of the General Staff reports to the Defence Minister. The Chief of Staff is appointed by the President, who is the Commander-in-chief. The Serbian defence budget amounts to $804 million. Having traditionally relied on a large number of conscripts, the Serbian Armed Forces went through a period of downsizing, restructuring and professionalisation, and conscription was abolished in 2011. The Serbian Armed Forces have 28,000 active troops, supplemented by the "active reserve", which numbers 20,000 members, and the "passive reserve", with about 170,000. Serbia participates in the NATO Individual Partnership Action Plan programme, but has no intention of joining NATO, due to significant popular rejection, largely a legacy of the NATO bombing of Yugoslavia in 1999. It is an observer member of the Collective Security Treaty Organisation (CSTO). The country has also signed the Stability Pact for South Eastern Europe. The Serbian Armed Forces take part in several multinational peacekeeping missions, including deployments in Lebanon, Cyprus, Ivory Coast, and Liberia. Serbia is a major producer and exporter of military equipment in the region. Defence exports totalled around $600 million in 2018, and the defence industry has seen significant growth over the years, continuing to grow on a yearly basis. Serbia is a unitary state composed of municipalities/cities, districts, and two autonomous provinces. In Serbia, excluding Kosovo, there are 145 municipalities ("opštine") and 29 cities ("gradovi"), which form the basic units of local self-government. Apart from municipalities/cities, there are 24 districts ("okruzi"), with the City of Belgrade constituting an additional district. Except for Belgrade, which has an elected local government, districts are regional centres of state authority, but have no powers of their own; they are purely administrative divisions. The Constitution of Serbia recognises two autonomous provinces, Vojvodina in the north and the disputed territory of Kosovo and Metohija in the south, while the remaining area of Central Serbia never had its own regional authority. Following the Kosovo War, UN peacekeepers entered Kosovo and Metohija, as per UNSC Resolution 1244. In 2008, Kosovo declared independence; the government of Serbia did not recognise the declaration, considering it illegal and illegitimate. According to the 2011 census, Serbia (excluding Kosovo) has a total population of 7,186,862, and the overall population density is medium, standing at 92.8 inhabitants per square kilometre.
The census was not conducted in Kosovo, which held its own census that put its total population at 1,739,825; this figure excludes Serb-inhabited North Kosovo, as Serbs from that area (about 50,000) boycotted the census. Serbia has been enduring a demographic crisis since the beginning of the 1990s, with a death rate that has continuously exceeded its birth rate. It is estimated that 300,000 people left Serbia during the 1990s, 20% of whom had a higher education. Serbia consequently has one of the oldest populations in the world, with an average age of 42.9 years, and its population is shrinking at one of the fastest rates in the world. A fifth of all households consist of only one person, and just a quarter consist of four or more persons. Average life expectancy in Serbia at birth is 76.1 years. During the 1990s, Serbia had the largest refugee population in Europe. Refugees and internally displaced persons (IDPs) in Serbia formed between 7% and 7.5% of its population at the time: about half a million refugees sought refuge in the country following the series of Yugoslav wars, mainly from Croatia (and to a lesser extent from Bosnia and Herzegovina), along with IDPs from Kosovo. Serbs, numbering 5,988,150, are the largest ethnic group in Serbia, representing 83% of the total population (excluding Kosovo). With a population of 253,899, Hungarians are the largest ethnic minority in Serbia, concentrated predominantly in northern Vojvodina and representing 3.5% of the country's population (13% in Vojvodina). The Romani population stands at 147,604 according to the 2011 census, but unofficial estimates place their actual number between 400,000 and 500,000. Bosniaks, numbering 145,278, are concentrated in Raška (Sandžak), in the southwest. Other minority groups include Croats, Slovaks, Albanians, Montenegrins, Vlachs, Romanians, Macedonians and Bulgarians. Chinese, estimated at about 15,000, are the only significant non-European immigrant minority. The majority of the population, or 59.4%, reside in urban areas, and some 16.1% in Belgrade alone. Belgrade is the only city with more than a million inhabitants, and there are four more with over 100,000 inhabitants. The Constitution of Serbia defines the country as a secular state with guaranteed religious freedom. Orthodox Christians, numbering 6,079,396, comprise 84.5% of the country's population. The Serbian Orthodox Church is the largest and traditional church of the country, adherents of which are overwhelmingly Serbs. Other Orthodox Christian communities in Serbia include Montenegrins, Romanians, Vlachs, Macedonians and Bulgarians. Roman Catholics number 356,957 in Serbia, or roughly 5% of the population, mostly in Vojvodina (especially its northern part), which is home to minority ethnic groups such as Hungarians, Croats and Bunjevci, as well as to some Slovaks and Czechs. Protestantism accounts for about 1% of the country's population, chiefly Lutheranism among Slovaks in Vojvodina as well as Calvinism among Reformed Hungarians. The Greek Catholic Church has around 25,000 adherents (0.37% of the population), mostly Rusyns in Vojvodina. Muslims, with 222,282 or 3% of the population, form the third-largest religious group. Islam has a strong historic following in the southern regions of Serbia, primarily in southern Raška.
Bosniaks are the largest Islamic community in Serbia; it is estimated that around a third of the country's Roma people are Muslim. There are only 578 Jews in Serbia. Atheists numbered 80,053, or 1.1% of the population, and an additional 4,070 declared themselves to be agnostics. The official language is Serbian, native to 88% of the population. Serbian is the only European language with active digraphia, using both Cyrillic and Latin alphabets. Serbian Cyrillic is designated in the Constitution as the "official script"; it was devised in 1814 by the Serbian philologist Vuk Karadžić, who based it on phonemic principles. A survey from 2014 showed that 47% of Serbians favour the Latin alphabet, 36% favour the Cyrillic one, and 17% have no preference. Standard Serbian is based on the most widespread Shtokavian dialect (more specifically on the dialects of Šumadija-Vojvodina and Eastern Herzegovina). The recognised minority languages are Hungarian, Bosnian, Slovak, Croatian, Albanian, Romanian, Bulgarian, Rusyn, and Macedonian. All these languages are in official use in municipalities or cities where the ethnic minority exceeds 15% of the total population. In Vojvodina, the provincial administration uses, besides Serbian, five other languages (Hungarian, Slovak, Croatian, Romanian and Rusyn). Serbia has an emerging market economy in the upper-middle-income range. According to the International Monetary Fund, Serbian nominal GDP in 2018 was officially estimated at $50.651 billion, or $7,243 per capita, while purchasing power parity GDP stood at $122.759 billion, or $17,555 per capita. The economy is dominated by services, which account for 67.9% of GDP, followed by industry with 26.1% of GDP and agriculture at 6% of GDP. The official currency of Serbia is the Serbian dinar (ISO code: RSD), and the central bank is the National Bank of Serbia. The Belgrade Stock Exchange is the only stock exchange in the country, with a market capitalisation of $8.65 billion and BELEX15 as the main index, representing the 15 most liquid stocks. The economy has been affected by the global economic crisis. After almost a decade of strong economic growth (an average of 4.45% per year), Serbia entered recession in 2009, with growth of −3%, and again in 2012 and 2014, with −1% and −1.8% respectively. As the government fought the effects of the crisis, public debt more than doubled, from a pre-crisis level of just under 30% of GDP to about 70%, though it has recently trended downwards to around 50%. The labour force stands at 3.2 million, with 56% employed in the services sector, 28.1% in industry and 15.9% in agriculture. The average monthly net salary in May 2019 stood at 47,575 dinars, or $525. Unemployment remains an acute problem, with a rate of 12.7%. Since 2000, Serbia has attracted over $40 billion in foreign direct investment (FDI). Blue-chip corporations making investments include Fiat Chrysler Automobiles, Siemens, Bosch, Philip Morris, Michelin, Coca-Cola, Carlsberg and others. In the energy sector, the Russian energy giants Gazprom and Lukoil have made large investments, and in the metallurgy sector, the Chinese steel and copper giants Hesteel and Zijin Mining have acquired key complexes. Serbia has an unfavourable trade balance, with imports exceeding exports by 25%. Serbia's exports, however, have recorded steady growth in the last couple of years, reaching $19.2 billion in 2018.
The country has free trade agreements with EFTA and CEFTA, a preferential trade regime with the European Union, a Generalised System of Preferences with the United States, and individual free trade agreements with Russia, Belarus, Kazakhstan, and Turkey. Serbia has very favourable natural conditions (land and climate) for varied agricultural production. It has 5,056,000 ha of agricultural land (0.7 ha per capita), of which 3,294,000 ha is arable land (0.45 ha per capita). In 2016, Serbia exported agricultural and food products worth $3.2 billion, with an export-import ratio of 178%. Agricultural exports constitute more than one-fifth of all Serbia's sales on the world market. Serbia is one of the largest providers of frozen fruit to the EU (the largest to the French market, and the second-largest to the German market). Agricultural production is most prominent in Vojvodina, on the fertile Pannonian Plain. Other agricultural regions include Mačva, Pomoravlje, Tamnava, Rasina, and Jablanica. Crop production accounts for 70% of agricultural output and livestock production for 30%. Serbia is the world's second-largest producer of plums (582,485 tonnes; second to China) and the second-largest of raspberries (89,602 tonnes; second to Poland); it is also a significant producer of maize (6.48 million tonnes, ranked 32nd in the world) and wheat (2.07 million tonnes, ranked 35th in the world). Other important agricultural products are sunflower, sugar beet, soybean, potato, apple, pork, beef, poultry and dairy products. There are 56,000 ha of vineyards in Serbia, producing about 230 million litres of wine annually. The most famous viticultural regions are located in Vojvodina and Šumadija. Industry was the economic sector hardest hit by the UN sanctions, trade embargo and NATO bombing during the 1990s, and by the transition to a market economy during the 2000s. Industrial output saw dramatic downsizing: in 2013 it was expected to be only half of that of 1989. The main industrial sectors include automotive, mining, non-ferrous metals, food processing, electronics, pharmaceuticals and clothing. Serbia has 14 free economic zones as of September 2017, in which many foreign direct investments are realised. The automotive industry (led by Fiat Chrysler Automobiles) is dominated by a cluster located in Kragujevac and its vicinity, and contributes about $2 billion to exports. The country is a leading steel producer in the wider region of Southeast Europe and produced nearly 2 million tonnes of raw steel in 2018, coming entirely from the Smederevo steel mill, owned by the Chinese company Hesteel. Serbia's mining industry is comparatively strong: Serbia is the 18th-largest producer of coal (7th in Europe), extracted from large deposits in the Kolubara and Kostolac basins; it is also the world's 23rd-largest (3rd in Europe) producer of copper, which is extracted by Zijin Bor Copper, a large copper mining company acquired by the Chinese Zijin Mining in 2018; significant gold extraction has developed around Majdanpek. Serbia also notably manufactures smartphones under the Tesla brand. The food industry is well known both regionally and internationally and is one of the strong points of the economy. Several international brand names have established production in Serbia: PepsiCo and Nestlé in the food-processing sector; Coca-Cola (Belgrade), Heineken (Novi Sad) and Carlsberg (Bačka Palanka) in the beverage industry; and Nordzucker in the sugar industry.
Serbia's electronics industry had its peak in the 1980s; today the industry is only a third of what it was back then, but it has witnessed something of a revival in the last decade, with investments by companies such as Siemens (wind turbines) in Subotica, Panasonic (lighting devices) in Svilajnac, and Gorenje (electrical home appliances) in Valjevo. The pharmaceutical industry in Serbia comprises a dozen manufacturers of generic drugs, of which Hemofarm in Vršac and Galenika in Belgrade account for 80% of production volume. Domestic production meets over 60% of local demand. The energy sector is one of the largest and most important sectors of the country's economy. Serbia is a net exporter of electricity and an importer of key fuels (such as oil and gas). Serbia has an abundance of coal and significant reserves of oil and gas. Serbia's proven reserves of 5.5 billion tonnes of lignite coal are the 5th largest in the world (second in Europe, after Germany). Coal is found in two large deposits: Kolubara (4 billion tonnes of reserves) and Kostolac (1.5 billion tonnes). Despite being small on a world scale, Serbia's oil and gas resources (77.4 million tonnes of oil equivalent and 48.1 billion cubic metres, respectively) have a certain regional importance, since they are the largest in the region of the former Yugoslavia as well as the Balkans (excluding Romania). Almost 90% of the discovered oil and gas is found in Banat; those oil and gas fields are by size among the largest in the Pannonian basin but are average on a European scale. The production of electricity in Serbia in 2015 was 36.5 billion kilowatt-hours (kWh), while final electricity consumption amounted to 35.5 billion kWh. Most of the electricity produced comes from thermal power plants (72.7% of all electricity) and to a lesser degree from hydroelectric power plants (27.3%). There are six lignite-fired thermal power plants with an installed capacity of 3,936 MW, the largest of which are the 1,502 MW Nikola Tesla 1 and the 1,160 MW Nikola Tesla 2, both in Obrenovac. The total installed capacity of the nine hydroelectric power plants is 2,831 MW, the largest of which is Đerdap 1 with a capacity of 1,026 MW. In addition, there are mazut- and gas-fired thermal power plants with an installed capacity of 353 MW. The entire production of electricity is concentrated in Elektroprivreda Srbije (EPS), the public electric utility company. Current oil production in Serbia amounts to over 1.1 million tonnes of oil equivalent and satisfies some 43% of the country's needs, while the rest is imported. The national petroleum company, Naftna Industrija Srbije (NIS), was acquired in 2008 by Gazprom Neft. The company's refinery in Pančevo (with a capacity of 4.8 million tonnes) is one of the most modern oil refineries in Europe; the company also operates a network of 334 filling stations in Serbia (74% of the domestic market) and an additional 36 stations in Bosnia and Herzegovina, 31 in Bulgaria, and 28 in Romania. There are 155 kilometres of crude oil pipelines connecting the Pančevo and Novi Sad refineries as part of the trans-national Adria oil pipeline. Serbia is heavily dependent on foreign sources of natural gas, with only 17% coming from domestic production (totalling 491 million cubic metres in 2012); the rest is imported, mainly from Russia (via gas pipelines that run through Ukraine and Hungary).
The public company Srbijagas operates the natural gas transport system, which comprises 3,177 kilometres of trunk and regional natural gas pipelines and a 450 million cubic metre underground gas storage facility at Banatski Dvor. Serbia has a strategic transport location, since the country's backbone, the Morava Valley, represents by far the easiest route of land travel from continental Europe to Asia Minor and the Near East. The Serbian road network carries the bulk of traffic in the country. The total length of roads is 45,419 km, of which 962 km are "class-IA state roads" (i.e. motorways); 4,517 km are "class-IB state roads" (national roads); 10,941 km are "class-II state roads" (regional roads) and 23,780 km are "municipal roads". The road network, except for most class-IA roads, is of comparatively lower quality than Western European standards because of a lack of financial resources for maintenance over the last 20 years. Over 300 kilometres of new motorways have been constructed in the last decade, and an additional 142 kilometres are currently under construction: the A5 motorway (from south of Pojate, north of Kruševac, to Čačak) and a 30 km-long segment of the A2 (between Čačak and Požega). Coach transport is very extensive: almost every place in the country is connected by bus, from the largest cities to the villages; in addition, there are international routes (mainly to countries of Western Europe with a large Serb diaspora). Routes, both domestic and international, are served by more than a hundred intercity coach operators, the biggest of which are Lasta and Niš-Ekspres. There were 1,999,771 registered passenger cars, or one passenger car per 3.5 inhabitants. Serbia has 3,819 kilometres of rail tracks, of which 1,279 km are electrified and 283 km are double-track. The major rail hub is Belgrade (and to a lesser degree Niš), while the most important lines include: Belgrade–Bar (Montenegro), Belgrade–Šid–Zagreb (Croatia)/Belgrade–Niš–Sofia (Bulgaria) (part of Pan-European Corridor X), Belgrade–Subotica–Budapest (Hungary) and Niš–Thessaloniki (Greece). Although still a major mode of freight transport, the railways face increasing problems with the maintenance of the infrastructure and decreasing speeds. Rail services are operated by Srbija Voz (passenger transport) and Srbija Kargo (freight transport). There are only two airports with regular passenger traffic. Belgrade Nikola Tesla Airport served 5.6 million passengers in 2018 and is the hub of the flag carrier Air Serbia, which flies to 59 destinations in 32 countries and carried some 2.5 million passengers in 2018. Niš Constantine the Great Airport mainly caters to low-cost airlines. Serbia has a developed inland water transport system, with 1,716 kilometres of navigable inland waterways (1,043 km of navigable rivers and 673 km of navigable canals), almost all located in the northern third of the country. The most important inland waterway is the Danube (part of Pan-European Corridor VII). Other navigable rivers include the Sava, Tisza, Begej and Timiş, all of which connect Serbia with Northern and Western Europe through the Rhine–Main–Danube Canal and the North Sea route, with Eastern Europe via the Tisza, Begej and Danube–Black Sea routes, and with Southern Europe via the Sava. More than 2 million tonnes of cargo were transported on Serbian rivers and canals in 2016, while the largest river ports are Novi Sad, Belgrade, Pančevo, Smederevo, Prahovo and Šabac.
Fixed telephone lines connect 81% of households in Serbia, and with about 9.1 million users the number of cellphones surpasses the total population by 28%. The largest mobile operator is Telekom Srbija with 4.2 million subscribers, followed by Telenor with 2.8 million users and Vip mobile with about 2 million. Some 58% of households have a fixed-line (non-mobile) broadband Internet connection, while 67% are provided with pay television services (38% cable television, 17% IPTV, and 10% satellite). The digital television transition was completed in 2015, with DVB-T2 as the standard for signal transmission. Serbia is not a mass-tourism destination but nevertheless has a diverse range of tourist products. In 2019, a total of over 3.6 million tourists were recorded in accommodation, of which half were foreign. Foreign exchange earnings from tourism were estimated at $1.5 billion. Tourism is mainly focused on the mountains and spas of the country, which are mostly visited by domestic tourists, as well as Belgrade and, to a lesser degree, Novi Sad, which are the preferred choices of foreign tourists (almost two-thirds of all foreign visits are made to these two cities). The most famous mountain resorts are Kopaonik, Stara Planina and Zlatibor. There are also many spas in Serbia, the biggest of which are Vrnjačka Banja, Soko Banja, and Banja Koviljača. City-break and conference tourism is developed in Belgrade and Novi Sad. Other tourist products that Serbia offers are natural wonders like Đavolja Varoš, Christian pilgrimage to the many Orthodox monasteries across the country, and river cruising along the Danube. There are several internationally popular music festivals held in Serbia, such as EXIT (with 25–30,000 foreign visitors coming from 60 different countries) and the Guča trumpet festival. According to the 2011 census, literacy in Serbia stands at 98% of the population, while computer literacy is at 49% (complete computer literacy is at 34.2%). The same census showed the following levels of education: 16.2% of inhabitants have higher education (10.6% have bachelor's or master's degrees, 5.6% have an associate degree), 49% have a secondary education, 20.7% have an elementary education, and 13.7% have not completed elementary education. Education in Serbia is regulated by the Ministry of Education and Science. Education starts in either preschools or elementary schools. Children enroll in elementary schools at the age of seven. Compulsory education consists of eight grades of elementary school. Students then have the opportunity to attend gymnasiums and vocational schools for another four years, or to enroll in vocational training for two to three years. Following the completion of gymnasiums or vocational schools, students have the opportunity to attend university. Elementary and secondary education are also available in the languages of recognised minorities in Serbia, with classes held in Hungarian, Slovak, Albanian, Romanian, Rusyn and Bulgarian, as well as in Bosnian and Croatian. Petnica Science Center is a notable institution for extracurricular science education, focusing on gifted students. There are 19 universities in Serbia (nine public universities with a total of 86 faculties and ten private universities with 51 faculties). In the 2018/2019 academic year, 210,480 students attended the 19 universities (181,310 at public universities and some 29,170 at private universities), while 47,169 attended 81 "higher schools".
Public universities in Serbia are: the University of Belgrade (the oldest, founded in 1808, and the largest, with 97,696 undergraduate and graduate students), the University of Novi Sad (founded in 1960, with a student body of 42,489), the University of Niš (founded in 1965; 20,559 students), the University of Kragujevac (founded in 1976; 14,053 students), the University of Priština (located in North Mitrovica), the Public University of Novi Pazar, as well as three specialist universities – the University of Arts, the University of Defence and the University of Criminal Investigation and Police Studies. The largest private universities include Megatrend University and Singidunum University, both in Belgrade, and Educons University in Novi Sad. The University of Belgrade (placed in the 301–400 bracket of the 2013 Shanghai Ranking of World Universities, making it the best-placed university in Southeast Europe after those in Athens and Thessaloniki) and the University of Novi Sad are generally considered the best institutions of higher learning in the country. Serbia spent 0.9% of GDP on scientific research in 2017, which is slightly below the European average. Serbia has been a full member of CERN since 2018. Serbia has a long history of excellence in mathematics and computer science, which has created a strong pool of engineering talent, although economic sanctions during the 1990s and chronic underinvestment in research forced many scientific professionals to leave the country. Nevertheless, there are several areas in which Serbia still excels, such as its growing information technology sector, which includes software development as well as outsourcing; it generated over $1.2 billion in exports in 2018, both from international investors and from a significant number of dynamic homegrown enterprises. Serbia is one of the countries with the highest proportion of women in science. Among the scientific institutes operating in Serbia, the largest are the Mihajlo Pupin Institute and the Vinča Nuclear Institute, both in Belgrade. The Serbian Academy of Sciences and Arts is a learned society that has promoted science and the arts since its inception in 1841. With a strong science and technology ecosystem, Serbia has produced a number of renowned scientists who have greatly contributed to the field of science and technology. For centuries straddling the boundaries between East and West, the territory of Serbia had been divided between the Eastern and Western halves of the Roman Empire; then between Byzantium and the Kingdom of Hungary; and in the early modern period between the Ottoman Empire and the Habsburg Empire. These overlapping influences have resulted in cultural varieties throughout Serbia; its north leans towards the profile of Central Europe, while the south is characteristic of the wider Balkans and even the Mediterranean. The Byzantine influence on Serbia was profound, firstly through the introduction of Eastern Christianity in the Early Middle Ages. The Serbian Orthodox Church has had an enduring status in Serbia, with the many Serbian monasteries constituting the cultural monuments left from medieval Serbia. Serbia has seen influences of the Republic of Venice as well, mainly through trade, literature and Romanesque architecture.
Serbia has five cultural monuments inscribed on the UNESCO World Heritage list: the early medieval capital Stari Ras and the 13th-century monastery Sopoćani; the 12th-century Studenica monastery; the Roman complex of Gamzigrad–Felix Romuliana; the medieval tombstones known as stećci; and the endangered Medieval Monuments in Kosovo (the monasteries of Visoki Dečani, Our Lady of Ljeviš, Gračanica and the Patriarchal Monastery of Peć). There are two literary monuments on UNESCO's Memory of the World Programme: the 12th-century "Miroslav Gospel", and the archive of the scientist Nikola Tesla. The "slava" (patron saint veneration), the kolo (traditional folk dance) and singing to the accompaniment of the gusle are inscribed on the UNESCO Intangible Cultural Heritage Lists. The Ministry of Culture and Information is tasked with preserving the nation's cultural heritage and overseeing its development. Further activities supporting the development of culture are undertaken at local government level. Traces of Roman and early Byzantine architectural heritage are found in many royal cities and palaces in Serbia, such as Sirmium, Felix Romuliana and Justiniana Prima, since 535 the seat of the Archbishopric of Justiniana Prima. Serbian monasteries are the pinnacle of Serbian medieval art. At the beginning, they were under the influence of Byzantine art, which was particularly felt after the fall of Constantinople in 1204, when many Byzantine artists fled to Serbia. The most noted of these monasteries is Studenica (built around 1190). It was a model for later monasteries, such as Mileševa, Sopoćani, Žiča, Gračanica and Visoki Dečani. Numerous monuments and cultural sites were destroyed at various stages of Serbian history, with the destruction in Kosovo being the most recent example. At the end of the 14th century and during the 15th, an autochthonous architectural style known as the Morava style evolved in the area around the Morava Valley. A characteristic of this style was the rich decoration of the frontal church walls. Examples of this include the Manasija, Ravanica and Kalenić monasteries. Icons and fresco paintings are often considered the peak of Serbian art. The most famous frescoes are the "White Angel" (Mileševa monastery), the "Crucifixion" (Studenica monastery) and the "Dormition of the Virgin" (Sopoćani). The country is dotted with many well-preserved medieval fortifications and castles such as Smederevo Fortress (the largest lowland fortress in Europe), Golubac, Maglič, Soko grad, Belgrade Fortress, Ostrvica and Ram. During the time of Ottoman occupation, Serbian art was virtually non-existent, with the exception of several Serbian artists who lived in the lands ruled by the Habsburg Monarchy. Traditional Serbian art showed Baroque influences at the end of the 18th century, as shown in the works of Nikola Nešković, Teodor Kračun, Zaharije Orfelin and Jakov Orfelin. Serbian painting showed the influence of Biedermeier and Neoclassicism, as seen in works by Konstantin Danil, Arsenije Teodorović and Pavel Đurković. Many painters followed the artistic trends set by 19th-century Romanticism, notably Đura Jakšić, Stevan Todorović, Katarina Ivanović and Novak Radonić. Important Serbian painters of the first half of the 20th century were the Realists Paja Jovanović and Uroš Predić, the Cubist Sava Šumanović, the Impressionists Milena Pavlović-Barili and Nadežda Petrović, and the Expressionist Milan Konjović. Noted painters of the second half of the 20th century include Marko Čelebonović, Petar Lubarda, Milo Milunović, Ljubomir Popović and Vladimir Veličković.
Anastas Jovanović was one of the earliest photographers in the world, while Marina Abramović is one of the world's leading performance artists. The Pirot carpet is known as one of the most important traditional handicrafts in Serbia. There are around 180 museums in Serbia, of which the most prominent is the National Museum of Serbia, founded in 1844. It houses one of the largest art collections in the Balkans, including many foreign masterpieces. Other art museums of note are the Museum of Contemporary Art in Belgrade, the Museum of Vojvodina and the Gallery of Matica Srpska in Novi Sad. The beginnings of Serbian literacy date back to the activity of the brothers Cyril and Methodius in the Balkans. Monuments of Serbian literacy from the early 11th century survive, written in Glagolitic. Starting in the 12th century, books were written in Cyrillic. The oldest Serbian book in Cyrillic from this epoch is the Miroslav Gospels, from 1186. "The Miroslav Gospels" are considered to be the oldest book of Serbian medieval history and as such have entered UNESCO's Memory of the World Register. Due to the Ottoman occupation, when every aspect of formal literacy stopped, Serbia remained excluded from the Renaissance flowering of Western culture. However, the tradition of oral storytelling blossomed, shaping itself through epic poetry inspired by the then still recent Battle of Kosovo and folk tales deeply rooted in Slavic mythology. Serbian epic poetry of those times was seen as the most effective way of preserving national identity. The oldest known, entirely fictional poems make up the "Non-historic cycle"; this is followed by poems inspired by events before, during and after the Battle of Kosovo. Special cycles are dedicated to the legendary Serbian hero Marko Kraljević, to the hajduks and uskoks, and, last, to the liberation of Serbia in the 19th century. Some of the best known folk ballads are "The Death of the Mother of the Jugović Family" and "The Mourning Song of the Noble Wife of the Asan Aga" (1646), translated into European languages by Goethe, Walter Scott, Pushkin and Mérimée. One of the most notable tales from Serbian folklore is "The Nine Peahens and the Golden Apples". Baroque trends in Serbian literature emerged in the late 17th century. Notable Baroque-influenced authors were Gavril Stefanović Venclović, Jovan Rajić, Zaharije Orfelin, Andrija Zmajević and others. Dositej Obradović was a prominent figure of the Age of Enlightenment, while the most notable Classicist writer was Jovan Sterija Popović, although his works also contained elements of Romanticism. In the era of national revival, in the first half of the 19th century, Vuk Stefanović Karadžić collected Serbian folk literature and reformed the Serbian language and spelling, paving the way for Serbian Romanticism. The first half of the 19th century was dominated by Romanticism, with Petar II Petrović-Njegoš, Branko Radičević, Đura Jakšić, Jovan Jovanović Zmaj and Laza Kostić as its notable representatives, while the second half of the century was marked by Realist writers such as Milovan Glišić, Laza Lazarević, Simo Matavulj, Stevan Sremac, Vojislav Ilić, Branislav Nušić, Radoje Domanović and Borisav Stanković.
The 20th century was dominated by the prose writers Meša Selimović ("Death and the Dervish"), Miloš Crnjanski ("Migrations"), Isidora Sekulić ("The Chronicle of a Small Town Cemetery"), Branko Ćopić ("Eagles Fly Early"), Borislav Pekić ("The Time of Miracles"), Danilo Kiš ("The Encyclopedia of the Dead"), Dobrica Ćosić ("The Roots"), Aleksandar Tišma ("The Use of Man"), Milorad Pavić and others. Pavić is a widely acclaimed Serbian author of the beginning of the 21st century, most notably for his "Dictionary of the Khazars", which has been translated into 38 languages. Notable poets include Milan Rakić, Jovan Dučić, Vladislav Petković Dis, Rastko Petrović, Stanislav Vinaver, Dušan Matić, Branko Miljković, Vasko Popa, Oskar Davičo, Miodrag Pavlović, and Stevan Raičković. Notable contemporary authors include David Albahari, Svetislav Basara, Goran Petrović, Gordana Kuić, Vuk Drašković and Vladislav Bajac. Serbian comics emerged in the 1930s and the medium remains popular today. Ivo Andrić ("The Bridge on the Drina") is possibly the best-known Serbian author; he was awarded the Nobel Prize in Literature in 1961. The most beloved face of Serbian literature was Desanka Maksimović, who for seven decades remained "the leading lady of Yugoslav poetry". She is honoured with statues, postage stamps, and the names of streets across Serbia. There are 551 public libraries, the biggest of which are the National Library of Serbia in Belgrade, with holdings of about 6 million items, and the library of Matica Srpska (the oldest matica and Serbian cultural institution, founded in 1826) in Novi Sad, with nearly 3.5 million volumes. In 2010, 10,989 books and brochures were published. The book publishing market is dominated by several major publishers such as Laguna and Vulkan (both of which operate their own bookstore chains), and the industry's centrepiece event, the annual Belgrade Book Fair, is the most visited cultural event in Serbia, with 158,128 visitors in 2013. The highlight of the literary scene is the awarding of the NIN Prize, given every January since 1954 for the best newly published novel in the Serbian language. The composer and musicologist Stevan Stojanović Mokranjac is considered the founder of modern Serbian music. The first generation of Serbian composers, Petar Konjović, Stevan Hristić, and Miloje Milojević, maintained the national expression and modernised romanticism in the direction of impressionism. Other famous classical Serbian composers include Isidor Bajić, Stanislav Binički and Josif Marinković. There are three opera houses in Serbia: the Opera of the National Theatre and the Madlenianum Opera, both in Belgrade, and the Opera of the Serbian National Theatre in Novi Sad. Four symphony orchestras operate in the country: the Belgrade Philharmonic Orchestra, the Niš Symphony Orchestra, the Symphonic Orchestra of Radio Television of Serbia, and the Novi Sad Philharmonic Orchestra. The Choir of Radio Television of Serbia is a leading vocal ensemble in the country. BEMUS is one of the most prominent classical music festivals in Southeast Europe. Traditional Serbian music includes various kinds of bagpipes, flutes, horns, trumpets, lutes, psalteries, drums and cymbals. The "kolo" is the traditional collective folk dance, which has a number of varieties throughout the regions. The most popular are those from Užice and the Morava region. Sung epic poetry has been an integral part of Serbian and Balkan music for centuries.
In the highlands of Serbia these long poems are typically accompanied on a one-string fiddle called the "gusle", and concern themselves with themes from history and mythology. There are records of the "gusle" being played at the court of the 13th-century King Stefan Nemanjić. Pop music has mainstream popularity. Željko Joksimović took second place at the 2004 Eurovision Song Contest, Marija Šerifović won the 2007 Eurovision Song Contest with the song "Molitva", and Serbia hosted the 2008 edition of the contest. The most popular pop singers include the likes of Đorđe Balašević, Goca Tržan, Zdravko Čolić, Aleksandra Radović, Vlado Georgiev, Jelena Tomašević and Nataša Bekvalac, among others. Serbian rock, which was part of the former Yugoslav rock scene during the 1960s, 1970s and 1980s, used to be well developed and well covered in the media. During the 1990s and 2000s the popularity of rock music declined in Serbia, and although several major mainstream acts managed to sustain their popularity, an underground and independent music scene developed. The 2000s saw a revival of the mainstream scene and the appearance of a large number of notable acts. Notable Serbian rock acts include Bajaga i Instruktori, Disciplina Kičme, Ekatarina Velika, Električni Orgazam, Eva Braun, Kerber, Neverne Bebe, Partibrejkers, Ritam Nereda, Orthodox Celts, Rambo Amadeus, Riblja Čorba, S.A.R.S., Smak, Van Gogh, YU Grupa and others. Folk music in its original form has been a prominent style since World War I, following the early success of Sofka Nikolić. The music was further promoted by Danica Obrenić, Anđelija Milić and Nada Mamula, and later, during the 1960s and 1970s, by stars like Silvana Armenulić, Toma Zdravković, Lepa Lukić, Vasilija Radojčić, Vida Pavlović and Gordana Stojićević. Turbo-folk is a subgenre that developed in Serbia in the late 1980s and the beginning of the 1990s and has since enjoyed immense popularity through the acts of Dragana Mirković, Zorica Brunclik, Šaban Šaulić, Ana Bekuta, Sinan Sakić, Vesna Zmijanac, Mile Kitić, Snežana Đurišić, Šemsa Suljaković, and Nada Topčagić. It is a blend of folk music with pop and/or dance elements and can be seen as a result of the urbanisation of folk music. More recently, turbo-folk has featured even more pop music elements, and some of its performers have been labelled pop-folk. The most famous among them are Ceca (often considered the biggest music star of Serbia), Jelena Karleuša, Aca Lukas, Seka Aleksić, Dara Bubamara, Indira Radić, Saša Matić, Viki Miljković, Stoja and Lepa Brena, arguably the most prominent performer of the former Yugoslavia. Balkan brass, or "truba" ("trumpet"), is a popular genre, especially in Central and Southern Serbia, where it originated. The music has its tradition from the First Serbian Uprising. The trumpet was used as a military instrument to wake and gather soldiers and announce battles; during downtime it took on the role of entertainment, as soldiers used it to transpose popular folk songs. When the war ended and the soldiers returned to rural life, the music entered civilian life and eventually became a music style in its own right, accompanying births, baptisms, weddings, and funerals. There are two main varieties of this genre, one from Western Serbia and the other from Southern Serbia, with the brass musician Boban Marković being one of the most respected names among modern brass band bandleaders.
The most popular music festivals are the Guča Trumpet Festival, with over 300,000 annual visitors, and EXIT in Novi Sad (winner of the Best Major Festival award at the European Festivals Awards for 2013 and 2017), with 200,000 visitors in 2013. Other festivals include the Nišville Jazz Festival in Niš and the Gitarijada rock festival in Zaječar. Serbia has a well-established theatrical tradition, with Joakim Vujić considered the founder of modern Serbian theatre. Serbia has 38 professional theatres and 11 theatres for children, the most important of which are the National Theatre in Belgrade, the Serbian National Theatre in Novi Sad, the National Theatre in Subotica, the National Theatre in Niš and the Knjaževsko-srpski teatar in Kragujevac (the oldest theatre in Serbia, established in 1835). The Belgrade International Theatre Festival – BITEF, founded in 1967, is one of the oldest theatre festivals in the world, and it has become one of the five biggest European festivals. Sterijino pozorje, on the other hand, is a festival showcasing national drama. The most important Serbian playwrights were Jovan Sterija Popović and Branislav Nušić, while recent renowned names are Dušan Kovačević and Biljana Srbljanović. The foundation of Serbian cinema dates back to 1896 with the release of the oldest movie in the Balkans, "The Life and Deeds of the Immortal Vožd Karađorđe", a biopic about the Serbian revolutionary leader Karađorđe. Serbian cinema is one of the more dynamic smaller European cinematographies. Serbia's film industry is heavily subsidised by the government, mainly through grants approved by the Film Centre of Serbia. In 2011, 17 domestic feature films were produced. There are 22 operating cinemas in the country, of which 12 are multiplexes, with total attendance exceeding 2.6 million and a comparatively high 32.3% of all tickets sold for domestic films. The modern PFI Studios, located in Šimanovci, is nowadays Serbia's only major film studio complex; it consists of nine sound stages and attracts mainly international productions, primarily American and Western European. The Yugoslav Film Archive, formerly the national film archive of Yugoslavia and now of Serbia, holds over 100,000 film prints and is among the five largest film archives in the world. The famous Serbian filmmaker Emir Kusturica won two Golden Palms for Best Feature Film at the Cannes Film Festival, for "When Father Was Away on Business" in 1985 and again for "Underground" in 1995. Other renowned directors include Dušan Makavejev, Želimir Žilnik (winner of the Golden Bear in Berlin), Aleksandar Petrović, Živojin Pavlović, Goran Paskaljević, Goran Marković, Srđan Dragojević, Srdan Golubović and Mila Turajlić, among others. The Serbian-American screenwriter Steve Tesich won the Academy Award for Best Original Screenplay in 1979 for the movie "Breaking Away". Prominent film stars in Serbia have left a celebrated heritage in the cinema of Yugoslavia as well. Notable names include Zoran Radmilović, Pavle Vuisić, Ljubiša Samardžić, Olivera Marković, Mija Aleksić, Miodrag Petrović Čkalja, Ružica Sokić, Velimir Bata Živojinović, Danilo Bata Stojković, Seka Sablić, Olivera Katarina, Dragan Nikolić, Mira Stupica, Nikola Simić, Bora Todorović and others. Milena Dravić was one of the most celebrated actresses in Serbian cinema; she won the Best Actress Award at the 1980 Cannes Film Festival. The freedom of the press and the freedom of speech are guaranteed by the constitution of Serbia.
Serbia is ranked 90th out of 180 countries in the 2019 Press Freedom Index report compiled by Reporters Without Borders. The report noted that media outlets and journalists continue to face partisan and government pressure over editorial policies, and that the media are now more heavily dependent on advertising contracts and government subsidies to survive financially. According to AGB Nielsen Research in 2009, Serbs on average watch five hours of television per day, the highest average in Europe. There are seven nationwide free-to-air television channels, with the public broadcaster Radio Television of Serbia (RTS) operating three (RTS1, RTS2 and RTS3) and private broadcasters operating four (Pink, Happy, Prva, and O2). In 2017, the audience shares of these channels were as follows: 20.2% for RTS1, 14.1% for Pink, 9.4% for Happy, 9.0% for Prva, 4.7% for O2, and 2.5% for RTS2. There are 28 regional and 74 local television channels. Besides the terrestrial channels, there are dozens of Serbian television channels available only on cable or satellite. There are 247 radio stations in Serbia. Of these, six have national coverage, including two of the public broadcaster Radio Television of Serbia (Radio Belgrade 1 and Radio Belgrade 2/Radio Belgrade 3) and four private ones (Radio S1, Radio S2, Play Radio, and Radio Hit FM). There are also 34 regional stations and 207 local stations. There are 305 newspapers published in Serbia, of which 12 are dailies. The dailies "Politika" and "Danas" are Serbia's papers of record, the former being the oldest newspaper in the Balkans, founded in 1904. The highest-circulation newspapers are the tabloids "Večernje Novosti", "Blic", "Kurir", and "Informer", all selling more than 100,000 copies. There is one daily newspaper devoted to sports, "Sportski žurnal", one business daily, "Privredni pregled", two regional newspapers ("Dnevnik", published in Novi Sad, and "Narodne novine" from Niš), and one minority-language daily ("Magyar Szó", in Hungarian, published in Subotica). There are 1,351 magazines published in the country. These include the weekly news magazines "NIN", "Vreme" and "Nedeljnik", the popular science magazine "Politikin Zabavnik", the women's magazine "Lepota & Zdravlje", the auto magazine "SAT revija", and the IT magazine "Svet kompjutera". In addition, there is a wide selection of Serbian editions of international magazines, such as "Cosmopolitan", "Elle", "Men's Health", "National Geographic", "Le Monde diplomatique", "Playboy", and "Hello!", among others. The main news agencies are Tanjug, Beta and Fonet. Out of 432 web portals (mainly on the .rs domain), the most visited are the online editions of the printed dailies Blic and Kurir, the news portal B92, and the classifieds site KupujemProdajem. Serbian cuisine is largely heterogeneous in a way characteristic of the Balkans and, especially, the former Yugoslavia. It features foods characteristic of lands formerly under Turkish suzerainty as well as cuisine originating from other parts of Central Europe (especially Austria and Hungary). Food is very important in Serbian social life, particularly during religious holidays such as Christmas and Easter and on feast days, i.e. the slava. Staples of the Serbian diet include bread, meat, fruits, vegetables, and dairy products. Bread is the basis of all Serbian meals; it plays an important role in Serbian cuisine and can be found in religious rituals. A traditional Serbian welcome is to offer guests bread and salt. Meat is widely consumed, as is fish.
Serbian specialties include ćevapčići (caseless sausages of minced meat, grilled and seasoned), pljeskavica, sarma, kajmak (a dairy product similar to clotted cream), gibanica (cheese and kajmak pie), ajvar (a roasted red pepper spread), proja (cornbread), and kačamak (corn-flour porridge). Serbians claim their country as the birthplace of rakia ("rakija"), a highly alcoholic drink primarily distilled from fruit. Rakia in various forms is found throughout the Balkans, notably in Bulgaria, Croatia, Slovenia, Montenegro, Hungary and Turkey. Slivovitz ("šljivovica"), a plum brandy, is a type of rakia considered the national drink of Serbia. Winemaking traditions in Serbia date back to Roman times. Serbian wines are produced in 22 different geographical regions, with white wine dominating the total output. Besides rakia and beer, wine is a very popular alcoholic beverage in the country. Sports play an important role in Serbian society, and the country has a strong sporting history. The most popular sports in Serbia are football, basketball, tennis, volleyball, water polo and handball. Professional sports in Serbia are organised by sporting federations and leagues (in the case of team sports). One of the particularities of Serbian professional sport is the existence of many multi-sport clubs (called "sports societies"), the biggest and most successful of which are Red Star, Partizan and Beograd in Belgrade, Vojvodina in Novi Sad, Radnički in Kragujevac, and Spartak in Subotica. Football is the most popular sport in Serbia, and the Football Association of Serbia, with 146,845 registered players, is the largest sporting association in the country. FK Bačka 1901 is the oldest football club in Serbia and the former Yugoslavia. Dragan Džajić was officially recognised as "the best Serbian player of all times" by the Football Association of Serbia, and more recently the likes of Nemanja Vidić, Dejan Stanković, Branislav Ivanović, Aleksandar Kolarov and Nemanja Matić have played for elite European clubs, developing the nation's reputation as one of the world's biggest exporters of footballers. The Serbia national football team has had comparatively little success, although it qualified for three of the last four FIFA World Cups. Serbia's national youth football teams won the 2013 U-19 European Championship and the 2015 U-20 World Cup. The two main football clubs in Serbia are Red Star (winner of the 1991 European Cup) and Partizan (finalist of the 1966 European Cup), both from Belgrade. The rivalry between the two clubs is known as the "Eternal Derby", and is often cited as one of the most exciting sports rivalries in the world. Serbia is one of the traditional powerhouses of world basketball: the Serbian men's national basketball team has won two World Championships (in 1998 and 2002), three European Championships (1995, 1997, and 2001) and two Olympic silver medals (in 1996 and 2016). The women's national basketball team won the European Championship in 2015 and an Olympic bronze medal in 2016. A total of 31 Serbian players have played in the NBA in the last three decades, including Nikola Jokić (2019 All-NBA First Team), Predrag "Peja" Stojaković (2011 NBA champion and three-time NBA All-Star), and Vlade Divac (2001 NBA All-Star and Basketball Hall of Famer).
The renowned "Serbian coaching school" produced many of the most successful European basketball coaches of all times, such as Željko Obradović (who won a record 9 Euroleague titles as a coach), Dušan Ivković, Svetislav Pešić, and Igor Kokoškov (the first coach born and raised outside of North America to be hired as a head coach in the NBA). KK Partizan basketball club was the 1992 European champion. The Serbia men's national water polo team is the one of the most successful national teams, having won Olympic gold medal in 2016, three World Championships (2005, 2009 and 2015), and seven European Championships in 2001, 2003, 2006, 2012, 2014, 2016 and 2018, respectively. VK Partizan has won a joint-record seven European champion titles. Recent success of Serbian tennis players has led to an immense growth in the popularity of tennis in the country. Novak Djokovic has won seventeen Grand Slam singles title and has held the No. 1 spot in the ATP rankings for over 270 weeks. He became the eighth player in history to achieve the Career Grand Slam and the third man to hold all four major titles at once and the first ever to do so on three different surfaces. Ana Ivanovic (champion of 2008 French Open) and Jelena Janković were both ranked No. 1 in the WTA Rankings. There were two No. 1 ranked-tennis double players as well: Nenad Zimonjić (three-time men's double and four-time mixed double Grand Slam champion) and Slobodan Živojinović. The Serbia men's tennis national team won the 2010 Davis Cup and 2020 ATP Cup, while Serbia women's tennis national team reached the final at 2012 Fed Cup. Serbia is one of the leading volleyball countries in the world. Its men's national team won the gold medal at 2000 Olympics, the European Championship three times as well as the 2016 FIVB World League. The women's national volleyball team are current world Champions, has won European Championship three times as well as Olympic silver medal in 2016. Jasna Šekarić, sport shooter, is one of the athletes with the most appearances at the Olympic Games. She has won a total of five Olympic medals and also three World Championship gold medals. Other noted Serbian athletes include: swimmers Milorad Čavić (2009 World championships gold and silver medalist as well as 2008 Olympic silver medalist on 100-metre butterfly in historic race with American swimmer Michael Phelps) and Nađa Higl (2009 World champion in 200-metre breaststroke); track and field athletes Vera Nikolić (former world record holder in 800 metres) and Ivana Španović (long-jumper; four-time European champion, World indoor champion and bronze medalist at the 2016 Olympics); wrestler Davor Štefanek (2016 Olympic gold medalist and 2014 World champion), and taekwondoist Milica Mandić (2012 Olympic gold medalist and 2017 world champion). Serbia has hosted several major sport competitions, including the 2005 Men's European Basketball Championship, 2005 Men's European Volleyball Championship, 2006 and 2016 Men's European Water Polo Championships, 2009 Summer Universiade, 2012 European Men's Handball Championship, and 2013 World Women's Handball Championship. The most important annual sporting events held in the country are the Belgrade Marathon and the Tour de Serbie cycling race. Sources:
https://en.wikipedia.org/wiki?curid=29265
Relationship between religion and science Historians of science and of religion, philosophers, theologians, scientists, and others from various geographical regions and cultures have addressed numerous aspects of the relationship between religion and science. Critical questions in this debate include whether religion and science are compatible, whether religious beliefs can be conducive to science (or necessarily inhibit it), and what the nature of religious beliefs is. Even though the ancient and medieval worlds did not have conceptions resembling the modern understandings of "science" or of "religion", certain elements of modern ideas on the subject recur throughout history. The pair-structured phrases "religion and science" and "science and religion" first emerged in the literature in the 19th century. This coincided with the refining of "science" (from the studies of "natural philosophy") and of "religion" as distinct concepts in the preceding few centuries, partly due to the professionalization of the sciences, the Protestant Reformation, colonization, and globalization. Since then the relationship between science and religion has been characterized in terms of "conflict", "harmony", "complexity", and "mutual independence", among others. Both science and religion are complex social and cultural endeavors that vary across cultures and change over time. Most scientific (and technical) innovations prior to the scientific revolution were achieved by societies organized by religious traditions. Ancient pagan, Islamic, and Christian scholars pioneered individual elements of the scientific method. Roger Bacon, often credited with formalizing the scientific method, was a Franciscan friar. Hinduism has historically embraced reason and empiricism, holding that science brings legitimate but incomplete knowledge of the world and the universe. Confucian thought, whether religious or non-religious in nature, has held different views of science over time. Many 21st-century Buddhists view science as complementary to their beliefs. While the classification of the material world by the ancient Indians and Greeks into air, earth, fire and water was more metaphysical, and figures like Anaxagoras questioned certain popular views of Greek divinities, medieval Middle Eastern scholars empirically classified materials. Events in Europe such as the Galileo affair of the early 17th century, associated with the scientific revolution and the Age of Enlightenment, led scholars such as John William Draper to postulate a conflict thesis, suggesting that religion and science have been in conflict methodologically, factually and politically throughout history. Some contemporary scientists (such as Richard Dawkins, Lawrence Krauss, Peter Atkins, and Donald Prothero) subscribe to this thesis. However, the conflict thesis has lost favor among most contemporary historians of science. Many scientists, philosophers, and theologians throughout history, such as Francisco Ayala, Kenneth R. Miller and Francis Collins, have seen compatibility or interdependence between religion and science. The biologist Stephen Jay Gould, other scientists, and some contemporary theologians regard religion and science as non-overlapping magisteria, addressing fundamentally separate forms of knowledge and aspects of life. Some theologians and historians of science, including John Lennox, Thomas Berry, Brian Swimme and Ken Wilber, propose an interconnection between science and religion, while others such as Ian Barbour believe there are even parallels.
Public acceptance of scientific facts may sometimes be influenced by religious beliefs, as in the United States, where some reject the concept of evolution by natural selection, especially regarding human beings. Nevertheless, the American National Academy of Sciences has written that "the evidence for evolution can be fully compatible with religious faith", a view endorsed by many religious denominations. The concepts of "science" and "religion" are a recent invention: "religion" emerged in the 17th century in the midst of colonization, globalization and the Protestant Reformation, while "science" emerged in the 19th century in the midst of attempts to narrowly define those who studied nature. Originally what is now known as "science" was pioneered as "natural philosophy". It was in the 19th century that the terms "Buddhism", "Hinduism", "Taoism", "Confucianism" and "World Religions" first emerged. In the ancient and medieval world, the etymological Latin roots of both science ("scientia") and religion ("religio") were understood as inner qualities of the individual or virtues, never as doctrines, practices, or actual sources of knowledge. It was in the 19th century that the concept of "science" received its modern shape, with new titles emerging such as "biology" and "biologist", "physics" and "physicist", among other technical fields and titles; institutions and communities were founded, and unprecedented applications to and interactions with other aspects of society and culture occurred. The term "scientist" was coined by the naturalist-theologian William Whewell in 1834 and was applied to those who sought knowledge and understanding of nature. From the ancient world, starting with Aristotle, to the 19th century, the practice of studying nature was commonly referred to as "natural philosophy". Isaac Newton's book Philosophiae Naturalis Principia Mathematica (1687), whose title translates to "Mathematical Principles of Natural Philosophy", reflects the then-current use of the words "natural philosophy", akin to "systematic study of nature". Even in the 19th century, a treatise by Lord Kelvin and Peter Guthrie Tait, which helped define much of modern physics, was titled Treatise on Natural Philosophy (1867). It was in the 17th century that the concept of "religion" received its modern shape, despite the fact that ancient texts like the Bible, the Quran, and others did not have a concept of religion in their original languages, and neither did the people or the cultures in which these texts were written. In the 19th century, Max Müller noted that what is called ancient religion today would have been called "law" in antiquity. For example, there is no precise equivalent of "religion" in Hebrew, and Judaism does not distinguish clearly between religious, national, racial, or ethnic identities. The Sanskrit word "dharma", sometimes translated as "religion", also means law or duty. Throughout classical South Asia, the study of law consisted of concepts such as penance through piety and ceremonial as well as practical traditions. Medieval Japan at first had a similar union between "imperial law" and universal or "Buddha law", but these later became independent sources of power.
Throughout its long history, Japan had no concept of "religion", since there was no corresponding Japanese word, nor anything close to its meaning; but when American warships appeared off the coast of Japan in 1853 and forced the Japanese government to sign treaties demanding, among other things, freedom of religion, the country had to contend with this Western idea. The development of the sciences (especially natural philosophy) in Western Europe during the Middle Ages has considerable foundation in the works of the Arabs who translated Greek and Latin compositions. The works of Aristotle played a major role in the institutionalization, systematization, and expansion of reason. Christianity accepted reason within the ambit of faith. In Christendom, reason was considered subordinate to revelation, which contained the ultimate truth, and this truth could not be challenged. In medieval universities, the faculties of natural philosophy and theology were separate, and discussions pertaining to theological issues were often not allowed to be undertaken by the faculty of philosophy. Natural philosophy, as taught in the arts faculties of the universities, was seen as an essential area of study in its own right and was considered necessary for almost every area of study. It was an independent field, separated from theology, and enjoyed a good deal of intellectual freedom as long as it was restricted to the natural world. In general, there was religious support for natural science by the late Middle Ages and a recognition that it was an important element of learning. The extent to which medieval science led directly to the new philosophy of the scientific revolution remains a subject for debate, but it certainly had a significant influence. The Middle Ages laid the ground for the developments that took place in science during the Renaissance, which immediately succeeded it. By 1630, ancient authority from classical literature and philosophy, as well as its necessity, had started to erode, although scientists were still expected to be fluent in Latin, the international language of Europe's intellectuals. With the sheer success of science and the steady advance of rationalism, the individual scientist gained prestige. The inventions of this period, especially Johannes Gutenberg's printing press, allowed for the dissemination of the Bible in the languages of the common people (languages other than Latin). This allowed more people to read and learn from the scripture, leading to the Evangelical movement. The people who spread this message concentrated more on individual agency than on the structures of the Church. In the 17th century, founders of the Royal Society largely held conventional and orthodox religious views, and a number of them were prominent churchmen. While theological issues that had the potential to be divisive were typically excluded from formal discussions of the early Society, many of its fellows nonetheless believed that their scientific activities provided support for traditional religious belief. Clerical involvement in the Royal Society remained high until the mid-nineteenth century, when science became more professionalised. Albert Einstein supported the compatibility of some interpretations of religion with science.
In "Science, Philosophy and Religion, A Symposium" published by the Conference on Science, Philosophy and Religion in Their Relation to the Democratic Way of Life, Inc., New York in 1941, Einstein stated: Einstein thus expresses views of ethical non-naturalism (contrasted to ethical naturalism). Prominent modern scientists who are atheists include evolutionary biologist Richard Dawkins and Nobel Prize–winning physicist Steven Weinberg. Prominent scientists advocating religious belief include Nobel Prize–winning physicist and United Church of Christ member Charles Townes, evangelical Christian and past head of the Human Genome Project Francis Collins, and climatologist John T. Houghton. The kinds of interactions that might arise between science and religion have been categorized by theologian, Anglican priest, and physicist John Polkinghorne: (1) conflict between the disciplines, (2) independence of the disciplines, (3) dialogue between the disciplines where they overlap and (4) integration of both into one field. This typology is similar to ones used by theologians Ian Barbour and John Haught. More typologies that categorize this relationship can be found among the works of other science and religion scholars such as theologian and biochemist Arthur Peacocke. According to Guillermo Paz-y-Miño-C and Avelina Espinosa, the historical conflict between evolution and religion is intrinsic to the incompatibility between scientific rationalism/empiricism and the belief in supernatural causation. According to evolutionary biologist Jerry Coyne, views on evolution and levels of religiosity in some countries, along with the existence of books explaining reconciliation between evolution and religion, indicate that people have trouble in believing both at the same time, thus implying incompatibility. According to physical chemist Peter Atkins, "whereas religion scorns the power of human comprehension, science respects it." Planetary scientist Carolyn Porco describes a hope that "the confrontation between science and formal religion will come to an end when the role played by science in the lives of all people is the same played by religion today." Geologist and paleontologist Donald Prothero has stated that religion is the reason "questions about evolution, the age of the earth, cosmology, and human evolution nearly always cause Americans to flunk science literacy tests compared to other nations." However, Jon Miller, who studies science literacy across nations, states that Americans in general are slightly more scientifically literate than Europeans and the Japanese. According to cosmologist and astrophysicist Lawrence Krauss, compatibility or incompatibility is a theological concern, not a scientific concern. In Lisa Randall's view, questions of incompatibility or otherwise are not answerable, since by accepting revelations one is abandoning rules of logic which are needed to identify if there are indeed contradictions between holding certain beliefs. Daniel Dennett holds that incompatibility exists because religion is not problematic to a certain point before it collapses into a number of excuses for keeping certain beliefs, in light of evolutionary implications. According to theoretical physicist Steven Weinberg, teaching cosmology and evolution to students should decrease their self-importance in the universe, as well as their religiosity. Evolutionary developmental biologist PZ Myers' view is that all scientists should be atheists, and that science should never accommodate any religious beliefs. 
The physicist Sean M. Carroll claims that since religion makes supernatural claims, science and religion are incompatible. The evolutionary biologist Richard Dawkins is openly hostile to religion because he believes it actively debauches the scientific enterprise and education involving science. According to Dawkins, religion "subverts science and saps the intellect". He believes that when science teachers attempt to expound on evolution, they face hostility from parents who are skeptical because they believe it conflicts with their own religious beliefs, and that some textbooks have even had the word "evolution" systematically removed. He has campaigned against the negative effects that he believes religion has on science education. According to Renny Thomas' study of Indian scientists, atheistic scientists in India call themselves atheists even while accepting that their lifestyle is very much a part of tradition and religion. They thus differ from Western atheists in that, for them, following the lifestyle of a religion is not antithetical to atheism. Others, such as Francis Collins, George F. R. Ellis, Kenneth R. Miller, Katharine Hayhoe, George Coyne and Simon Conway Morris, argue for compatibility, since they do not agree that science is incompatible with religion and vice versa. They argue that science provides many opportunities to look for and find God in nature and to reflect on their beliefs. Kenneth Miller disagrees with Jerry Coyne's assessment and argues that since significant portions of scientists are religious and the proportion of Americans believing in evolution is much higher, it implies that both are indeed compatible. Elsewhere, Miller has argued that when scientists make claims on science and theism or atheism, they are not arguing scientifically at all and are stepping beyond the scope of science into discourses of meaning and purpose. What he finds particularly odd and unjustified is how atheists often invoke scientific authority for their non-scientific philosophical conclusions, such as there being no point or meaning to the universe, as the only viable option, when the scientific method and science have never had any way of addressing questions of meaning or God in the first place. Furthermore, he notes that since evolution made the brain and since the brain can handle both religion and science, there is no natural incompatibility between the concepts at the biological level. Karl Giberson argues that, when discussing compatibility, some scientific intellectuals often ignore the viewpoints of intellectual leaders in theology and instead argue against less informed masses, thereby letting religion be defined by non-intellectuals and slanting the debate unjustly. He argues that leaders in science sometimes shed older scientific baggage and that leaders in theology do the same, so once theological intellectuals are taken into account, people who represent extreme positions like Ken Ham and Eugenie Scott become irrelevant. Cynthia Tolman notes that religion does not have a method per se, partly because religions emerge through time from diverse cultures; but when it comes to Christian theology and ultimate truths, she notes that people often rely on scripture, tradition, reason, and experience to test and gauge what they experience and what they should believe.
The conflict thesis, which holds that religion and science have been in conflict continuously throughout history, was popularized in the 19th century by John William Draper's and Andrew Dickson White's accounts. It was in the 19th century that the relationship between science and religion became an actual formal topic of discourse; before this, no one had pitted science against religion or vice versa, though occasional complex interactions had occurred earlier. Most contemporary historians of science now reject the conflict thesis in its original form. It has been superseded by subsequent historical research, which has resulted in a more nuanced understanding. Historian of science Gary Ferngren has stated: "Although popular images of controversy continue to exemplify the supposed hostility of Christianity to new scientific theories, studies have shown that Christianity has often nurtured and encouraged scientific endeavour, while at other times the two have co-existed without either tension or attempts at harmonization. If Galileo and the Scopes trial come to mind as examples of conflict, they were the exceptions rather than the rule." Most historians today have moved away from a conflict model, which is based mainly on two historical episodes (Galileo and Darwin), toward compatibility theses (either the integration thesis or non-overlapping magisteria) or toward a "complexity" model, because religious figures were on both sides of each dispute and there was no overall aim by any party involved to discredit religion.

An often-cited example of conflict, clarified by historical research in the 20th century, was the Galileo affair, in which interpretations of the Bible were used to attack Copernicus's ideas on heliocentrism. By 1616 Galileo went to Rome to try to persuade Catholic Church authorities not to ban Copernicus's ideas. In the end, a decree of the Congregation of the Index was issued, declaring that the ideas that the Sun stood still and that the Earth moved were "false" and "altogether contrary to Holy Scripture", and suspending Copernicus's "De Revolutionibus" until it could be corrected. Galileo was found "vehemently suspect of heresy", namely of having held the opinions that the Sun lies motionless at the center of the universe and that the Earth is not at its centre and moves. He was required to "abjure, curse and detest" those opinions. However, before all this, Pope Urban VIII had personally asked Galileo to give arguments for and against heliocentrism in a book, and to be careful not to advocate heliocentrism as physically proven, since the scientific consensus at the time was that the evidence for heliocentrism was very weak. The Church had merely sided with the scientific consensus of the time. Pope Urban VIII also asked that his own views on the matter be included in Galileo's book. Only the latter request was fulfilled by Galileo. Whether unknowingly or deliberately, Simplicio, the defender of the Aristotelian/Ptolemaic geocentric view in "Dialogue Concerning the Two Chief World Systems", was often portrayed as an unlearned fool who lacked mathematical training. Although the preface of his book claims that the character is named after a famous Aristotelian philosopher (Simplicius in Latin, Simplicio in Italian), the name "Simplicio" in Italian also has the connotation of "simpleton". Unfortunately for his relationship with the Pope, Galileo put the words of Urban VIII into the mouth of Simplicio.
Most historians agree that Galileo did not act out of malice and felt blindsided by the reaction to his book. However, the Pope did not take the suspected public ridicule lightly, nor the advocacy of Copernicanism as physical fact. Galileo had alienated one of his biggest and most powerful supporters, the Pope, and was called to Rome to defend his writings. The actual evidence that finally proved heliocentrism came after Galileo's time: the stellar aberration of light observed by James Bradley in the 18th century, the orbital motions of binary stars observed by William Herschel in the 19th century, the accurate measurement of stellar parallax in the 19th century, and Newtonian mechanics in the 17th century. According to physicist Christopher Graney, Galileo's own observations did not actually support the Copernican view, but were more consistent with Tycho Brahe's hybrid model, in which the Earth did not move and everything else circled around it and the Sun. British philosopher A. C. Grayling still believes there is competition between science and religions, pointing to the origin of the universe, the nature of human beings and the possibility of miracles.

A modern view, described by Stephen Jay Gould as "non-overlapping magisteria" (NOMA), is that science and religion deal with fundamentally separate aspects of human experience and so, when each stays within its own domain, they co-exist peacefully. While Gould spoke of independence from the perspective of science, W. T. Stace viewed independence from the perspective of the philosophy of religion. Stace felt that science and religion, when each is viewed in its own domain, are both consistent and complete. They originate from different perceptions of reality, as Arnold O. Benz points out, but meet each other, for example, in the feeling of amazement and in ethics. The United States National Academy of Sciences supports the view that science and religion are independent. Science and religion are based on different aspects of human experience. In science, explanations must be based on evidence drawn from examining the natural world. Scientifically based observations or experiments that conflict with an explanation eventually must lead to modification or even abandonment of that explanation. Religious faith, in contrast, does not depend on empirical evidence, is not necessarily modified in the face of conflicting evidence, and typically involves supernatural forces or entities. Because they are not a part of nature, supernatural entities cannot be investigated by science. In this sense, science and religion are separate and address aspects of human understanding in different ways. Attempts to pit science and religion against each other create controversy where none needs to exist.

According to Archbishop John Habgood, both science and religion represent distinct ways of approaching experience, and these differences are sources of debate. He views science as descriptive and religion as prescriptive. He stated that if science and mathematics concentrate on what the world "ought to be", in the way that religion does, it may lead to improperly ascribing properties to the natural world, as happened among the followers of Pythagoras in the sixth century B.C. In contrast, proponents of a normative moral science take issue with the idea that science has "no" way of guiding "oughts". Habgood also stated that he believed the reverse situation, where religion attempts to be descriptive, can also lead to inappropriately assigning properties to the natural world.
A notable example is the now-defunct belief in the Ptolemaic (geocentric) planetary model that held sway until changes in scientific and religious thinking were brought about by Galileo and proponents of his views. In the view of the Lubavitcher rabbi Menachem Mendel Schneerson, non-Euclidean geometry such as Lobachevsky's hyperbolic geometry and Riemann's elliptic geometry proved that Euclid's axioms, such as "there is only one straight line between two points", are in fact arbitrary; therefore science, which relies on arbitrary axioms, can never refute the Torah, which is absolute truth.

According to Ian Barbour, Thomas S. Kuhn asserted that science is made up of paradigms that arise from cultural traditions, which is similar to the secular perspective on religion. Michael Polanyi asserted that it is merely a commitment to universality that protects against subjectivity, and has nothing at all to do with personal detachment as found in many conceptions of the scientific method. Polanyi further asserted that all knowledge is personal, and that the scientist therefore performs a very personal, if not necessarily subjective, role when doing science. Polanyi added that the scientist often merely follows intuitions of "intellectual beauty, symmetry, and 'empirical agreement'". Polanyi held that science requires moral commitments similar to those found in religion. Two physicists, Charles A. Coulson and Harold K. Schilling, both claimed that "the methods of science and religion have much in common." Schilling asserted that both fields, science and religion, have "a threefold structure—of experience, theoretical interpretation, and practical application." Coulson asserted that science, like religion, "advances by creative imagination" and not by "mere collecting of facts," while stating that religion should and does "involve critical reflection on experience not unlike that which goes on in science." Religious language and scientific language also show parallels (cf. rhetoric of science).

The "religion and science community" consists of those scholars who involve themselves with what has been called the "religion-and-science dialogue" or the "religion-and-science field." The community belongs to neither the scientific nor the religious community, but is said to be a third overlapping community of interested and involved scientists, priests, clergymen, theologians and engaged non-professionals. Institutions interested in the intersection between science and religion include the Center for Theology and the Natural Sciences, the Institute on Religion in an Age of Science, the Ian Ramsey Centre, and the Faraday Institute. Journals addressing the relationship between science and religion include "Theology and Science" and "Zygon". Eugenie Scott has written that the "science and religion" movement is, overall, composed mainly of theists who have a healthy respect for science and may be beneficial to the public understanding of science. She contends that the "Christian scholarship" movement is not a problem for science, but that the "theistic science" movement, which proposes abandoning methodological materialism, does cause problems in understanding the nature of science. The Gifford Lectures were established in 1885 to further the discussion between "natural theology" and the scientific community. This annual series continues and has included William James, John Dewey, Carl Sagan, and many other professors from various fields.
The modern dialogue between religion and science is rooted in Ian Barbour's 1966 book "Issues in Science and Religion". Since that time it has grown into a serious academic field, with academic chairs in the subject area and two dedicated academic journals, "Zygon" and "Theology and Science". Articles are also sometimes found in mainstream science journals such as the "American Journal of Physics".
https://en.wikipedia.org/wiki?curid=29266
Stephen Sondheim Stephen Joshua Sondheim (born March 22, 1930) is an American composer and lyricist known for his work in musical theatre. One of the most important figures in 20th-century musical theater, Sondheim has been praised as having "reinvented the American musical" with shows that tackle "unexpected themes that range far beyond the [genre's] traditional subjects" with "music and lyrics of unprecedented complexity and sophistication." His shows have been praised for addressing "darker, more harrowing elements of the human experience," with songs often tinged with "ambivalence" about various aspects of life. His best-known works as composer and lyricist include "A Funny Thing Happened on the Way to the Forum" (1962), "Company" (1970), "Follies" (1971), "A Little Night Music" (1973), "Sweeney Todd: The Demon Barber of Fleet Street" (1979), "Merrily We Roll Along" (1981), "Sunday in the Park with George" (1984), and "Into the Woods" (1987). He is also known for writing the lyrics for "West Side Story" (1957) and "Gypsy" (1959). He has received an Academy Award, eight Tony Awards (more than any other composer, including a Special Tony Award for Lifetime Achievement in the Theatre), eight Grammy Awards, a Pulitzer Prize, a Laurence Olivier Award, and a 2015 Presidential Medal of Freedom. In 2010, the former Henry Miller's Theater on Broadway was renamed the Stephen Sondheim Theatre; in 2019, it was announced that the Queen's Theatre in the West End of London would be renamed the Sondheim Theatre at the end of the year. Sondheim has also written film music, contributing "Goodbye for Now" to Warren Beatty's 1981 "Reds". He wrote five songs for 1990's "Dick Tracy", including "Sooner or Later (I Always Get My Man)", sung in the film by Madonna, which won the Academy Award for Best Original Song. Film adaptations of Sondheim's work include "West Side Story" (1961), "Sweeney Todd: The Demon Barber of Fleet Street" (2007), and "Into the Woods" (2014).

Sondheim was born into a Jewish family in New York City, the son of Etta Janet ("Foxy", née Fox; 1897–1992) and Herbert Sondheim (1895–1966). His father manufactured dresses designed by his mother. The composer grew up on the Upper West Side of Manhattan and, after his parents divorced, on a farm near Doylestown, Pennsylvania. As the only child of well-to-do parents living in the San Remo on Central Park West, he was described in Meryle Secrest's biography ("Stephen Sondheim: A Life") as an isolated, emotionally neglected child. When he lived in New York, Sondheim attended the Ethical Culture Fieldston School. He later attended the New York Military Academy and George School, a private Quaker preparatory school in Bucks County, Pennsylvania, where he wrote his first musical, "By George", and from which he graduated in 1946. Sondheim spent several summers at Camp Androscoggin. He later matriculated at Williams College, graduating in 1950. He traces his interest in theatre to "Very Warm for May", a Broadway musical he saw when he was nine. "The curtain went up and revealed a piano," Sondheim recalled. "A butler took a duster and brushed it up, tinkling the keys. I thought that was thrilling." When Sondheim was ten years old, his father (already a distant figure) left his mother for another woman (Alicia, with whom he had two sons). Herbert sought custody of Stephen but was unsuccessful. Sondheim explained to biographer Secrest that he was "what they call an institutionalized child, meaning one who has no contact with any kind of family.
You're in, though it's luxurious, you're in an environment that supplies you with everything but human contact. No brothers and sisters, no parents, and yet plenty to eat, and friends to play with and a warm bed, you know?" Sondheim detested his mother, who was said to be psychologically abusive and projected the anger from her failed marriage onto her son: "When my father left her, she substituted me for him. And she used me the way she used him, to come on to and to berate, beat up on, you see. What she did for five years was treat me like dirt, but come on to me at the same time." She once wrote him a letter saying that the "only regret [she] ever had was giving him birth". When his mother died in the spring of 1992, Sondheim did not attend her funeral; he had already been estranged from her for nearly 20 years.

When Sondheim was about ten years old (around the time of his parents' divorce), he became friends with James Hammerstein, son of lyricist and playwright Oscar Hammerstein II. The elder Hammerstein became Sondheim's surrogate father, influencing him profoundly and developing his love of musical theatre. Sondheim met Hal Prince, who would direct many of his shows, at the opening of "South Pacific", Hammerstein's musical with Richard Rodgers. The comic musical Sondheim wrote at George School, "By George", was a success among his peers and buoyed the young songwriter's self-esteem. When Sondheim asked Hammerstein to evaluate it as though he had no knowledge of its author, Hammerstein said it was the worst thing he had ever seen: "But if you want to know why it's terrible, I'll tell you." They spent the rest of the day going over the musical, and Sondheim later said, "In that afternoon I learned more about songwriting and the musical theater than most people learn in a lifetime." Hammerstein then designed a course of sorts for Sondheim on constructing a musical, having the young composer write four musicals, each under a different constraint. None of the "assignment" musicals was produced professionally. "High Tor" and "Mary Poppins" have never been produced: the rights holder for the original "High Tor" refused permission, and "Mary Poppins" was unfinished.

Sondheim began attending Williams College, a liberal arts college in Williamstown, Massachusetts, whose theatre program attracted him. His first teacher there was Robert Barrow: "... everybody hated him because he was very dry, and I thought he was wonderful because he was very dry. And Barrow made me realize that all my romantic views of art were nonsense. I had always thought an angel came down and sat on your shoulder and whispered in your ear 'dah-dah-dah-DUM.' Never occurred to me that art was something worked out. And suddenly it was skies opening up. As soon as you find out what a leading tone is, you think, Oh my God. What a diatonic scale is – Oh my God! The logic of it. And, of course, what that meant to me was: Well, I can do that. Because you just don't know. You think it's a talent, you think you're born with this thing. What I've found out and what I believed is that everybody is talented. It's just that some people get it developed and some don't." The composer told Meryle Secrest, "I just wanted to study composition, theory, and harmony without the attendant musicology that comes in graduate school. But I knew I wanted to write for the theatre, so I wanted someone who did not disdain theatre music."
Barrow suggested that Sondheim study with Milton Babbitt, whom Sondheim described as "a frustrated show composer" with whom he formed "a perfect combination". When they met, Babbitt was working on a musical for Mary Martin based on the myth of Helen of Troy. Sondheim and Babbitt would meet once a week in New York City for four hours (at the time, Babbitt was teaching at Princeton University). According to Sondheim, they spent the first hour dissecting Rodgers and Hart or George Gershwin, or studying Babbitt's favorites (Buddy DeSylva, Lew Brown and Ray Henderson). They then proceeded to other forms of music (such as Mozart's Jupiter Symphony), critiquing them the same way. Babbitt and Sondheim, both fascinated by mathematics, studied songs by a variety of composers (especially Jerome Kern). Sondheim told Secrest that Kern had the ability "to develop a single motif through tiny variations into a long and never boring line", and praised his "maximum development of the minimum of material". He said about Babbitt, "I am his maverick, his one student who went into the popular arts with all his serious artillery". At Williams, Sondheim wrote a musical adaptation of "Beggar on Horseback" (a 1924 play by George S. Kaufman and Marc Connelly, with permission from Kaufman) which had three performances. A member of the Beta Theta Pi fraternity, he graduated "magna cum laude" in 1950.

"A few painful years of struggle" followed, during which Sondheim auditioned songs, lived in his father's dining room to save money and spent time in Hollywood writing for the television series "Topper". He devoured 1940s and 1950s films, and has called cinema his "basic language"; his film knowledge got him through "The $64,000 Question" contestant tryouts. Sondheim dislikes movie musicals, favoring classic dramas such as "Citizen Kane", "The Grapes of Wrath" and "A Matter of Life and Death": "Studio directors like Michael Curtiz and Raoul Walsh ... were heroes of mine. They went from movie to movie to movie, and every third movie was good and every fifth movie was great. There wasn't any cultural pressure to make art".

At age 22, Sondheim had finished the four shows requested by Hammerstein. Julius and Philip Epstein's "Front Porch in Flatbush", unproduced at the time, was being shopped around by Lemuel (Lem) Ayers. Ayers had approached Frank Loesser and another composer, who turned him down. Ayers and Sondheim met as ushers at a wedding, and Ayers commissioned Sondheim to write three songs for the show; Julius Epstein flew in from California and hired Sondheim, who worked with him in California for four or five months. After eight auditions for backers, half the money needed was raised. The show, retitled "Saturday Night", was intended to open during the 1954–55 Broadway season; however, Ayers died of leukemia in his early forties. The rights transferred to his widow, Shirley, and due to her inexperience the show did not continue as planned; it eventually opened off-Broadway in 2000. Sondheim later said, "I don't have any emotional reaction to "Saturday Night" at all – except fondness. It's not bad stuff for a 23-year-old. There are some things that embarrass me so much in the lyrics – the missed accents, the obvious jokes. But I decided, leave it. It's my baby pictures. You don't touch up a baby picture – you're a baby!" Burt Shevelove invited Sondheim to a party; Sondheim arrived before him and knew no one else well. He saw a familiar face: Arthur Laurents, who had seen one of the auditions of "Saturday Night", and they began talking.
Laurents told him he was working on a musical version of "Romeo and Juliet" with Leonard Bernstein, but they needed a lyricist; Betty Comden and Adolph Green, who were supposed to write the lyrics, were under contract in Hollywood. He said that although he was not a big fan of Sondheim's music, he enjoyed the lyrics from "Saturday Night", and Sondheim could audition for Bernstein. The following day, Sondheim met and played for Bernstein, who said he would let him know. Sondheim wanted to write music as well as lyrics, but after consulting with Hammerstein he took the job, reasoning that he could write music later. In 1957, "West Side Story" opened; directed by Jerome Robbins, it ran for 732 performances. Sondheim has expressed dissatisfaction with his lyrics, saying that they do not always fit the characters and are sometimes too consciously poetic. Initially Bernstein was also credited as a co-writer of the lyrics; later, however, Bernstein offered Sondheim solo credit, as Sondheim had essentially written all of them. "The New York Times" review of the show never even mentioned the lyrics. Sondheim described the division of the royalties: Bernstein received three percent and he received one percent. Bernstein suggested evening the percentages at two percent each, but Sondheim refused because he was satisfied just getting the credit. Sondheim later said he wished "someone stuffed a handkerchief in my mouth because it would have been nice to get that extra percentage".

After "West Side Story" opened, Shevelove lamented the lack of "low-brow comedy" on Broadway and mentioned a possible musical based on Plautus' Roman comedies. When Sondheim expressed interest in the idea, he called a friend, Larry Gelbart, to co-write the script. The show went through a number of drafts, and was interrupted briefly by Sondheim's next project. In 1959, Sondheim was approached by Laurents and Robbins for a musical version of Gypsy Rose Lee's memoir after Irving Berlin and Cole Porter turned it down. Sondheim agreed, but Ethel Merman – cast as Mama Rose – had just finished "Happy Hunting" with an unknown composer (Harold Karr) and lyricist (Matt Dubey). Although Sondheim wanted to write both the music and the lyrics, Merman refused to let another first-time composer write for her and demanded that Jule Styne write the music. Sondheim, concerned that writing lyrics again would pigeonhole him as a lyricist, called his mentor for advice. Hammerstein told him he should take the job, because writing a vehicle for a star would be a good learning experience. Sondheim agreed; "Gypsy" opened on May 21, 1959, and ran for 702 performances.

In 1960, Sondheim lost his mentor and father figure, Oscar Hammerstein II. He remembered that shortly before Hammerstein's death, Hammerstein had given him a portrait of himself. Sondheim asked him to inscribe it, and said later about the request that it was "weird ... it's like asking your father to inscribe something". Reading the inscription ("For Stevie, My Friend and Teacher") choked up the composer, who said: "That describes Oscar better than anything I could say." When he walked away from the house that evening, Sondheim remembered a sad, sinking feeling that they had said their final goodbye. He never saw his mentor again; three days later, Hammerstein died of stomach cancer, and his protégé eulogized him at his funeral. The first musical for which Sondheim wrote both music and lyrics was "A Funny Thing Happened on the Way to the Forum", which opened in 1962 and ran for 964 performances.
The book, based on farces by Plautus, was written by Burt Shevelove and Larry Gelbart. Sondheim's score was not well received; although the show won several Tony Awards (including Best Musical), he did not receive a nomination. Sondheim had participated in three straight hits, but his next show – 1964's "Anyone Can Whistle" – was a nine-performance bomb (although it introduced Angela Lansbury to musical theatre). "Do I Hear a Waltz?", based on Arthur Laurents' 1952 play "The Time of the Cuckoo", was intended as another Rodgers and Hammerstein musical with Mary Martin in the lead. A new lyricist was needed, and Laurents and Richard Rodgers' daughter Mary asked Sondheim to fill in. Although Rodgers and Sondheim agreed that the original play did not lend itself to musicalization, they began writing the musical version. The project had many problems, Rodgers' alcoholism among them; Sondheim, calling it the one project he regretted, decided from then on to work only when he could write both music and lyrics. He asked author and playwright James Goldman to join him as bookwriter for a new musical. Inspired by a "New York Times" article about a gathering of former Ziegfeld Follies showgirls, it was entitled "The Girls Upstairs" (and would later become "Follies").

In 1966, Sondheim semi-anonymously provided lyrics for "The Boy From...", a parody of "The Girl from Ipanema" in the off-Broadway revue "The Mad Show". The song was credited to "Esteban Ria Nido", Spanish for "Stephen River Nest", and in the show's playbill the lyrics were credited to "Nom De Plume". That year Goldman and Sondheim hit a creative wall on "The Girls Upstairs", and Goldman asked Sondheim about writing a TV musical. The result was "Evening Primrose", with Anthony Perkins and Charmian Carr. Written for the anthology series "ABC Stage 67" and produced by Hubbell Robinson, it was broadcast on November 16, 1966. According to Sondheim and director Paul Bogart, the musical was written only because Goldman needed money for rent. The network disliked the title, and also Sondheim's suggested alternative, "A Little Night Music".

After Sondheim finished "Evening Primrose", Jerome Robbins asked him to adapt Bertolt Brecht's "The Measures Taken", despite the composer's general dislike of Brecht's work. Robbins then wanted to adapt another Brecht play, "The Exception and the Rule", and asked John Guare to adapt the book. Leonard Bernstein had not written for the stage in some time, and his contract as conductor of the New York Philharmonic was ending. Sondheim was invited to Robbins' house in the hope that Guare would convince him to write the lyrics for a musical version of "The Exception and the Rule"; according to Robbins, Bernstein would not work without Sondheim. When Sondheim agreed, Guare asked: "Why haven't you all worked together since "West Side Story"?" Sondheim answered, "You'll see". Guare said that working with Sondheim was like being with an old college roommate, and he depended on him to "decode and decipher their crazy way of working"; Bernstein worked only after midnight, and Robbins only in the early morning. Bernstein's score, which was supposed to be light, was influenced by his need to make a musical statement. Stuart Ostrow, who had worked with Sondheim on "The Girls Upstairs", agreed to produce the musical (now entitled "A Pray By Blecht" and, later, "The Race to Urga"). An opening night was scheduled, but during auditions Robbins asked to be excused for a moment.
When he did not return, a doorman said he had gotten into a limousine to go to John F. Kennedy International Airport. Bernstein burst into tears and said, "It's over." Sondheim later said of the experience: "I was ashamed of the whole project. It was arch and didactic in the worst way." He wrote one and a half songs and threw them away, the only time he has ever done that. Eighteen years later, Sondheim refused Bernstein and Robbins' request to retry the show.

He has lived in a Turtle Bay, Manhattan, brownstone since writing "Gypsy" in 1959. Ten years later, while he was playing music, he heard a knock on the door. His neighbor, Katharine Hepburn, was in "bare feet – this angry, red-faced lady" and told him, "You have been keeping me awake all night!" (she was practicing for her musical debut in "Coco"). When Sondheim asked why she had not asked him to play for her, she said she had lost his phone number. According to Sondheim, "My guess is that she wanted to stand there in her bare feet, suffering for her art".

After "Do I Hear a Waltz?", Sondheim devoted himself solely to writing both music and lyrics for the theater, and in 1970 he began a collaboration with director Harold Prince that would produce a body of work considered one of the high-water marks of musical theater history. Their first show with Prince as director was the 1970 concept musical "Company". A show about a single man and his married friends, "Company" (with a book by George Furth) lacked a straightforward plot and was instead centered on themes such as marriage and the difficulty of making an emotional connection with another person. It opened on April 26, 1970, at the Alvin Theatre, where it ran for 705 performances after seven previews, and won Tony Awards for Best Musical, Best Music and Best Lyrics. It was revived on Broadway in 1995 and 2006, and will be revived again in 2020 (in a version where the main character is gender-swapped).

"Follies" (1971), with a book by James Goldman, opened on April 4, 1971, at the Winter Garden Theatre and ran for 522 performances after 12 previews. The plot centers on a reunion, in a crumbling Broadway theatre scheduled for demolition, of performers in "Weismann's Follies" (a musical revue, based on the "Ziegfeld Follies", which played in that theatre between the world wars). The production, one of the most lavish of its time, also featured choreography and co-direction by Michael Bennett, who went on to create "A Chorus Line" (1975). The show enjoyed two revivals on Broadway, in 2001 and 2011.

"A Little Night Music" (1973), with a more traditional plot based on Ingmar Bergman's "Smiles of a Summer Night" and a score primarily in waltz time, was one of the composer's greatest commercial successes. "Time" magazine called it "Sondheim's most brilliant accomplishment to date". "Send in the Clowns", a song from the musical, was a hit for Judy Collins and became Sondheim's best-known song. The show opened on Broadway at the Shubert Theatre on February 25, 1973, and ran for 601 performances and 12 previews. It was revived on Broadway in 2009.

"Pacific Overtures" (1976), with a book by John Weidman, was the most non-traditional of the Sondheim–Prince collaborations: the show explored the westernization of Japan and was originally presented in Kabuki style. It closed after a run of 193 performances and was revived on Broadway in 2004.
"" (1979), Sondheim's most operatic score and libretto (which, with "Pacific Overtures" and "A Little Night Music", has been produced in opera houses), explores an unlikely topic: murderous revenge and cannibalism. The book, by Hugh Wheeler, is based on Christopher Bond's 1973 stage version of the Victorian original. The show has since been revived on Broadway twice (1989, 2005), and has been performed in musical theaters and opera houses alike. It ran off-Broadway at the Barrow Street Theatre until August 26, 2018. "Merrily We Roll Along" (1981), with a book by George Furth, is one of Sondheim's more traditional scores; Frank Sinatra and Carly Simon have recorded songs from the musical. According to Sondheim's music director, Paul Gemignani, "Part of Steve's ability is this extraordinary versatility." However, the show was not the success their previous collaborations had been: after a chaotic series of preview performances, the show opened to widely negative reviews, and closed after a run of less than two weeks. Due to the high quality of Sondheim's score, however, the show has been repeatedly revised and produced in the ensuing years. Martin Gottfried wrote, "Sondheim had set out to write traditional songs ... But [despite] that there is nothing ordinary about the music." Sondheim later said: "Did I feel betrayed? I'm not sure I would put it like that. What did surprise me was the feeling around the Broadway community – if you can call it that, though I guess I will for lack of a better word – that they wanted Hal and me to fail." "Merrily"s failure greatly affected Sondheim; he was ready to quit theatre and do movies, create video games or write mysteries: "I wanted to find something to satisfy myself that does not involve Broadway and dealing with all those people who hate me and hate Hal." Sondheim and Prince's collaboration was suspended from "Merrily" to the 2003 production of "Bounce", another failure. However, Sondheim decided "that there are better places to start a show" and found a new collaborator in James Lapine after he saw Lapine's "Twelve Dreams" off-Broadway in 1981: "I was discouraged, and I don't know what would have happened if I hadn't discovered "Twelve Dreams" at the Public Theatre"; Lapine has a taste "for the avant-garde and for visually-oriented theatre in particular". Their first collaboration was "Sunday in the Park with George" (1984), with Sondheim's music evoking Georges Seurat's pointillism. Sondheim and Lapine won the 1985 Pulitzer Prize for Drama for the play, and it was revived on Broadway in 2008, and again in a limited run in 2017. They collaborated on "Into the Woods" (1987), a musical based on several Brothers Grimm fairy tales. Although Sondheim has been called the first composer to bring rap music to Broadway (with the Witch in the opening number of "Into the Woods"), he attributes the first rap in theatre to Meredith Willson's "Rock Island" from "The Music Man". The show was revived on Broadway in 2002. Sondheim and Lapine's last work together was the rhapsodic "Passion" (1994), adapted from Ettore Scola's Italian film "Passione D'Amore". With a run of 280 performances, "Passion" was the shortest-running show to win a Tony Award for Best Musical. "Assassins" opened off-Broadway at Playwrights Horizons on December 18, 1990, with a book by John Weidman. The show explored, in revue form, a group of historical figures who tried (either with success or without) to assassinate the President of the United States. 
The musical closed on February 16, 1991, after 73 performances; it eventually received a Broadway production in 2004. "Saturday Night" was shelved until its 1997 production at London's Bridewell Theatre. The following year, its score was recorded; a revised version, with two new songs, ran off-Broadway at Second Stage Theatre in 2000 and at London's Jermyn Street Theatre in 2009.

During the late 1990s, Sondheim and Weidman reunited for "Wise Guys", a musical comedy following brothers Addison and Wilson Mizner. A Broadway production, starring Nathan Lane and Victor Garber, directed by Sam Mendes and planned for the spring of 2000, was delayed. Renamed "Bounce" in 2003, it was produced at the Goodman Theatre in Chicago and the Kennedy Center in Washington, D.C., in a production directed by Harold Prince, his first collaboration with Sondheim since 1981. After poor reviews, "Bounce" never reached Broadway, but a revised version opened off-Broadway as "Road Show" at the Public Theater on October 28, 2008. Directed by John Doyle, it closed on December 28, 2008.

Asked about writing new work, Sondheim replied in 2006: "No ... It's age. It's a diminution of energy and the worry that there are no new ideas. It's also an increasing lack of confidence. I'm not the only one. I've checked with other people. People expect more of you and you're aware of it and you shouldn't be." In December 2007 he said that in addition to continuing work on "Bounce", he was "nibbling at a couple of things with John Weidman and James Lapine". Lapine created a multimedia production, originally entitled "Sondheim: a Musical Revue", which was scheduled to open in April 2009 at the Alliance Theatre in Atlanta; however, it was canceled due to "difficulties encountered by the commercial producers attached to the project ... in raising the necessary funds". A revised version, "Sondheim on Sondheim", was produced at Studio 54 by the Roundabout Theatre Company; previews began on March 19, 2010, and it ran from April 22 to June 13. The revue's cast included Barbara Cook, Vanessa L. Williams, Tom Wopat, Norm Lewis and Leslie Kritzer.

Sondheim collaborated with Wynton Marsalis on "A Bed and a Chair: A New York Love Affair", an Encores! concert staged November 13–17, 2013, at New York City Center. Directed by John Doyle with choreography by Parker Esse, it consisted of "more than two dozen Sondheim compositions, each piece newly re-imagined by Marsalis". The concert featured Bernadette Peters, Jeremy Jordan, Norm Lewis, Cyrille Aimée, four dancers and the Jazz at Lincoln Center Orchestra conducted by David Loud. In "Playbill", Steven Suskin described the concert as "neither a new musical, a revival, nor a standard songbook revue; it is, rather, a staged-and-sung chamber jazz rendition of a string of songs ... Half of the songs come from "Company" and "Follies"; most of the other Sondheim musicals are represented, including the lesser-known "Passion" and "Road Show"".

For the 2014 film adaptation of "Into the Woods", Sondheim wrote a new song, "She'll Be Back", to be sung by the Witch, but it was eventually cut. In February 2012 it was announced that Sondheim would collaborate on a new musical with David Ives, and that he had "about 20–30 minutes of the musical completed". The show, tentatively called "All Together Now", was assumed to follow the format of "Merrily We Roll Along". Sondheim described the project as "two people and what goes into their relationship ... We'll write for a couple of months, then have a workshop.
It seemed experimental and fresh 20 years ago. I have a feeling it may not be experimental and fresh any more". On October 11, 2014, it was confirmed that the Sondheim and Ives musical would be based on two Luis Buñuel films ("The Exterminating Angel" and "The Discreet Charm of the Bourgeoisie") and would reportedly open in previews at the Public Theater in 2017. In August 2016, a reading for the musical was held at the Public Theater, and it was reported that only the first act was finished, casting doubt on the speculated 2017 start of previews. There was a workshop in November 2016, with the participation of Matthew Morrison, Shuler Hensley, Heidi Blickenstaff, Sierra Boggess, Gabriel Ebert, Sarah Stiles, Michael Cerveris and Jennifer Simard. The working title was reported to be "Buñuel" by the "New York Post" and other outlets, but Sondheim later clarified that this was an error and that they still had no title.
https://en.wikipedia.org/wiki?curid=29268
Self-determination The right of a people to self-determination is a cardinal principle in modern international law (commonly regarded as a "jus cogens" rule), binding, as such, on the United Nations as an authoritative interpretation of the Charter's norms. It states that peoples, based on respect for the principle of equal rights and fair equality of opportunity, have the right to freely choose their sovereignty and international political status with no interference.

The concept was first expressed in the 1860s and spread rapidly thereafter. During and after World War I, the principle was encouraged by both Vladimir Lenin and United States President Woodrow Wilson. Having announced his Fourteen Points on 8 January 1918, Wilson stated on 11 February 1918: "National aspirations must be respected; people may now be dominated and governed only by their own consent. 'Self determination' is not a mere phrase; it is an imperative principle of action." During World War II, the principle was included in the Atlantic Charter, declared on 14 August 1941 by Franklin D. Roosevelt, President of the United States, and Winston Churchill, Prime Minister of the United Kingdom, who pledged the Charter's eight principal points. It was recognized as an international legal right after it was explicitly listed as a right in the UN Charter.

The principle does not state how the decision is to be made, nor what the outcome should be, whether it be independence, federation, protection, some form of autonomy or full assimilation. Neither does it state what the delimitation between peoples should be, nor what constitutes a people. There are conflicting definitions and legal criteria for determining which groups may legitimately claim the right to self-determination. By extension, the term self-determination has come to mean the free choice of one's own acts without external compulsion.

The employment of imperialism, through the expansion of empires, and the concept of political sovereignty, as developed after the Treaty of Westphalia, also explain the emergence of self-determination during the modern era. During and after the Industrial Revolution, many groups of people recognized their shared history, geography, language, and customs. Nationalism emerged as a uniting ideology not only between competing powers, but also for groups that felt subordinated or disenfranchised inside larger states; in this situation, self-determination can be seen as a reaction to imperialism. Such groups often pursued independence and sovereignty over territory, but sometimes a different sense of autonomy has been pursued or achieved.

The world possessed several traditional, continental empires, such as the Ottoman, Russian, Austrian/Habsburg, and Qing empires. Political scientists often define competition in Europe during the modern era as a balance-of-power struggle, which also induced various European states to pursue colonial empires, beginning with the Spanish and Portuguese, and later including the British, French, Dutch, and German. During the early 19th century, competition in Europe produced multiple wars, most notably the Napoleonic Wars. After this conflict, the British Empire became dominant and entered its "imperial century", while nationalism became a powerful political ideology in Europe. Later, after the Franco-Prussian War of 1870, "New Imperialism" was unleashed, with France and later Germany establishing colonies in the Middle East, Southeast Asia, the South Pacific, and Africa. Japan also emerged as a new power.
Multiple theaters of competition developed across the world as the Ottoman Empire, Austrian Empire, Russian Empire, Qing Empire and the new Empire of Japan maintained themselves, often expanding or contracting at the expense of another empire. All ignored notions of self-determination for those they governed.

The revolt of New World British colonists in North America during the mid-1770s has been seen as the first assertion of the right of national and democratic self-determination, because of the explicit invocation of natural law, the natural rights of man, and the consent of, and sovereignty by, the people governed; these ideas were inspired particularly by John Locke's Enlightenment writings of the previous century. Thomas Jefferson further promoted the notion that the will of the people was supreme, especially through his authorship of the United States Declaration of Independence, which inspired Europeans throughout the 19th century. The French Revolution was similarly motivated and legitimized the ideas of self-determination on the Old World continent.

Within the New World during the early 19th century, most of the nations of Spanish America achieved independence from Spain. With the Monroe Doctrine, the United States supported that independence as hemispheric policy against European colonialism. The American public, organized associated groups, and Congressional resolutions often supported such movements, particularly the Greek War of Independence (1821–29) and the demands of Hungarian revolutionaries in 1848. Such support, however, never became official government policy, due to the balancing of other national interests. After the American Civil War, and with increasing capability, the United States government did not accept self-determination as a basis during its purchase of Alaska and attempted purchase of the West Indian islands of Saint Thomas and Saint John in the 1860s, or in its growing influence in the Hawaiian Islands, which led to annexation in 1898. With its victory in the Spanish–American War in 1898 and its growing stature in the world, the United States supported annexation of the former Spanish colonies of Guam, Puerto Rico and the Philippines, without the consent of their peoples, and it retained "quasi-suzerainty" over Cuba as well.

Nationalist sentiments emerged inside the traditional empires: Pan-Slavism in Russia; Ottomanism, Kemalist ideology and Arab nationalism in the Ottoman Empire; State Shintoism and Japanese identity in Japan; and Han identity in juxtaposition to the Manchurian ruling class in China. Meanwhile, in Europe itself there was a rise of nationalism, with nations such as Greece, Hungary, Poland and Bulgaria seeking or winning their independence. Karl Marx supported such nationalism, believing it might be a "prior condition" to social reform and international alliances. In 1914 Vladimir Lenin wrote: "[It] would be wrong to interpret the right to self-determination as meaning anything but the right to existence as a separate state." Woodrow Wilson revived America's commitment to self-determination, at least for European states, during World War I. When the Bolsheviks came to power in Russia in November 1917, they called for Russia's immediate withdrawal as a member of the Allies of World War I. They also supported the right of all nations, including colonies, to self-determination. The 1918 constitution of Soviet Russia acknowledged the right of secession for its constituent republics.
This presented a challenge to Wilson's more limited demands. In January 1918 Wilson issued his Fourteen Points, which, among other things, called for adjustment of colonial claims, insofar as the interests of colonial powers had equal weight with the claims of subject peoples. The Treaty of Brest-Litovsk in March 1918 led to Soviet Russia's exit from the war and the nominal independence of Armenia, Finland, Estonia, Latvia, Ukraine, Lithuania, Georgia and Poland, though in fact those territories were under German control. The end of the war led to the dissolution of the defeated Austro-Hungarian Empire, with Czechoslovakia and the union of the State of Slovenes, Croats and Serbs with the Kingdom of Serbia emerging as new states out of the wreckage of the Habsburg empire. However, this imposition of states in which some nationalities (especially Poles, Czechs, Serbs and Romanians) were given power over nationalities who disliked and distrusted them was eventually used as a pretext for German aggression in World War II.

Wilson publicly argued that the agreements made in the aftermath of the war would be a "readjustment of those great injustices which underlie the whole structure of European and Asiatic society", which he attributed to the absence of democratic rule. The new order emerging in the postwar period would, according to Wilson, place governments "in the hands of the people and taken out of the hands of coteries and of sovereigns, who had no right to rule over the people." The League of Nations was established as the symbol of the emerging postwar order; one of its earliest tasks was to legitimize the territorial boundaries of the new nation-states created in the territories of the former Ottoman Empire, Asia, and Africa. The principle of self-determination did not extend so far as to end colonialism: under the reasoning that the local populations were not civilized enough, the League of Nations was to assign each of the post-Ottoman, Asian and African states and colonies to a European power by the grant of a League of Nations mandate.

One of the German objections to the Treaty of Versailles was its somewhat selective application of the principle of self-determination: the majority of the people in Austria and in the Sudetenland region of Czechoslovakia wanted to join Germany, and the majority of people in Danzig wanted to remain within the "Reich", but the Allies ignored the German objections. Wilson's Fourteen Points had called for Polish independence to be restored and for Poland to have "secure access to the sea", which would imply that the German city of Danzig (modern Gdańsk, Poland), occupying a strategic location where the Vistula River flows into the Baltic Sea, be ceded to Poland. At the Paris peace conference in 1919, the Polish delegation, led by Roman Dmowski, asked Wilson to honor point 13 of the Fourteen Points by transferring Danzig to Poland, arguing that Poland would not be economically viable without it. However, as 90% of the people in Danzig in this period were German, the Allied leaders at the conference compromised by creating the Free City of Danzig, a city-state in which Poland had certain special rights. Though the city of Danzig was 90% German and 10% Polish, the countryside around it was overwhelmingly Polish, and the ethnically Polish rural areas included in the Free City of Danzig objected, arguing that they wanted to be part of Poland.
Neither the Poles nor the Germans were happy with this compromise, and the Danzig issue became a flash-point of German–Polish tension throughout the interwar period.

During the 1920s and 1930s there were some successful movements for self-determination at the beginning of the process of decolonization. In the Statute of Westminster the United Kingdom granted independence to Canada, New Zealand, Newfoundland, the Irish Free State, the Commonwealth of Australia, and the Union of South Africa, after the British parliament declared itself incapable of passing laws over them without their consent. Egypt, Afghanistan and Iraq also achieved independence from Britain, and Lebanon from France. Other efforts were unsuccessful, like the Indian independence movement. Meanwhile, Italy, Japan and Germany all initiated new efforts to bring certain territories under their control, leading to World War II. In particular, the National Socialist Program invoked this right of nations in the first of its 25 points, as publicly proclaimed on 24 February 1920 by Adolf Hitler.

In Asia, Japan became a rising power and gained more respect from Western powers after its victory in the Russo-Japanese War. Japan joined the Allied Powers in World War I and attacked German colonial possessions in the Far East, adding former German possessions to its own empire. In the 1930s, Japan gained significant influence in Inner Mongolia and Manchuria after it invaded Manchuria. It established Manchukuo, a puppet state in Manchuria and eastern Inner Mongolia. This was essentially the model Japan followed as it invaded other areas in Asia and established the Greater East Asia Co-Prosperity Sphere. Japan went to considerable trouble to argue that Manchukuo was justified by the principle of self-determination, claiming that the people of Manchuria wanted to break away from China and had asked the Kwantung Army to intervene on their behalf. However, the Lytton Commission, which had been appointed by the League of Nations to decide whether Japan had committed aggression, stated that the majority of people in Manchuria were Han Chinese who did not wish to leave China.

In 1912, the Republic of China officially succeeded the Qing dynasty, while Outer Mongolia, Tibet and Tuva proclaimed their independence. The independence was not accepted by the government of China. By the Treaty of Kyakhta (1915) Outer Mongolia recognized China's suzerainty. However, the Soviet threat of seizing parts of Inner Mongolia induced China to recognize Outer Mongolia's independence, provided that a referendum was held. The referendum took place on October 20, 1945, with (according to official numbers) 100% of the electorate voting for independence.

Many of East Asia's current disputes over sovereignty and self-determination stem from unresolved disputes from World War II. After its fall, the Empire of Japan renounced control over many of its former possessions, including Korea, Sakhalin Island, and Taiwan. In none of these areas were the opinions of the affected people consulted or given significant priority. Korea was specifically granted independence, but the receiver of various other areas was not stated in the Treaty of San Francisco, giving Taiwan "de facto" independence although its political status continues to be ambiguous. In 1941 the Allies of World War II declared the Atlantic Charter and accepted the principle of self-determination. In January 1942 twenty-six states signed the Declaration by United Nations, which accepted those principles.
The ratification of the United Nations Charter in 1945 at the end of World War II placed the right of self-determination within the framework of international law and diplomacy. On 14 December 1960, the United Nations General Assembly adopted Resolution 1514 (XV), subtitled "Declaration on the Granting of Independence to Colonial Countries and Peoples", which supported the granting of independence to colonial countries and peoples by providing an inevitable legal linkage between self-determination and its goal of decolonisation. It postulated a new international law-based right of freedom to exercise economic self-determination. Article 5 states: "Immediate steps shall be taken in Trust and Non-Self-Governing Territories, or all other territories which have not yet attained independence, to transfer all powers to the people of those territories, without any conditions or reservations, in accordance with their freely expressed will and desire, without any distinction as to race, creed or colour, in order to enable them to enjoy complete independence and freedom." On 15 December 1960 the General Assembly adopted Resolution 1541 (XV), subtitled "Principles which should guide members in determining whether or not an obligation exists to transmit the information called for under Article 73e of the United Nations Charter", which provided, in its Article 3, that "[t]he inadequacy of political, economic, social and educational preparedness should never serve as a pretext for delaying the right to self-determination and independence." To monitor the implementation of Resolution 1514, in 1961 the General Assembly created the Special Committee, referred to popularly as the Special Committee on Decolonization, to ensure that decolonization complied fully with the principles of self-determination in Resolution 1541 (XV).

However, the Charter and other resolutions did not insist on full independence as the best way of obtaining self-government, nor did they include an enforcement mechanism. Moreover, new states were recognized under the legal doctrine of uti possidetis juris, meaning that old administrative boundaries would become international boundaries upon independence even if they had little relevance to linguistic, ethnic, and cultural boundaries. Nevertheless, justified by the language of self-determination, between 1946 and 1960 thirty-seven new nations in Asia, Africa, and the Middle East gained independence from colonial powers. The territoriality issue would inevitably lead to more conflicts and independence movements within many states, and to challenges to the assumption that territorial integrity is as important as self-determination.

Decolonization in the world was contrasted by the Soviet Union's successful post-war expansionism. Tuva and several regional states in Eastern Europe, the Baltic, and Central Asia had been fully annexed by the Soviet Union during World War II. Now, it extended its influence by establishing the satellite states of East Germany and the countries of Eastern Europe, along with support for revolutionary movements in China and North Korea. Although the satellite states were nominally independent and possessed sovereignty, the Soviet Union violated principles of self-determination by suppressing the Hungarian revolution of 1956 and the Prague Spring Czechoslovak reforms of 1968. It invaded Afghanistan to support a communist government assailed by local tribal groups.
However, Marxism–Leninism and its theory of imperialism were also strong influences in the national emancipation movements of Third World nations rebelling against colonial or puppet regimes. In many Third World countries, communism became an ideology that united groups to oppose imperialism or colonization. Soviet actions were contained by the United States, which saw communism as a menace to its interests. Throughout the Cold War, the United States created, supported, and sponsored regimes, with varying success, that served its economic and political interests, among them anti-communist regimes such as that of Augusto Pinochet in Chile and Suharto in Indonesia. To achieve this, a variety of means were employed, including the orchestration of coups, the sponsoring of anti-communist countries, and military interventions. Consequently, many self-determination movements which spurned some type of anti-communist government were accused of being Soviet-inspired or controlled.

In Asia, the Soviet Union had already converted Mongolia into a satellite state, but it abandoned propping up the Second East Turkestan Republic and gave up its Manchurian claims to China. The new People's Republic of China had gained control of mainland China in the Chinese Civil War. The Korean War shifted the focus of the Cold War from Europe to Asia, where competing superpowers took advantage of decolonization to spread their influence.

In 1947, India gained independence from the British Empire. The empire was in decline but adapted to these circumstances by creating the British Commonwealth (since 1949 the Commonwealth of Nations), a free association of equal states. As India obtained its independence, multiple ethnic conflicts emerged in relation to the formation of statehood during the Partition of India, which resulted in Islamic Pakistan and secular India. Before the advent of the British, no empire based in mainland India had controlled any part of what now makes up the country's Northeast, which is part of the reason for the ongoing insurgency in Northeast India. In 1971 Bangladesh obtained independence from Pakistan. Burma also gained independence from the British Empire, but declined membership in the Commonwealth.

Indonesia gained independence from the Netherlands in 1949 after the latter failed to restore colonial control. As mentioned above, Indonesia also wanted a powerful position in the region that could be lessened by the creation of a united Malaysia. The Netherlands retained Dutch New Guinea, but Indonesia threatened to invade and annex it. A vote was supposedly taken under the UN-sponsored Act of Free Choice to allow West New Guineans to decide their fate, although many dispute its veracity. Later, Portugal relinquished control over East Timor in 1975, at which time Indonesia promptly invaded and annexed it.

The Cold War began to wind down after Mikhail Gorbachev assumed power in March 1985. With the cooperation of the American president Ronald Reagan, Gorbachev reduced the size of the Soviet Armed Forces and cut back nuclear arms in Europe, while liberalizing the economy. In 1989–90, the communist regimes of the Soviet satellite states collapsed in rapid succession in Poland, Hungary, Czechoslovakia, East Germany, Bulgaria, Romania, and Mongolia. East and West Germany united, Czechoslovakia peacefully split into the Czech Republic and Slovakia, and in 1991 Yugoslavia began a violent breakup into six states.
Kosovo, previously an autonomous unit of Serbia, declared independence in 2008, but has received only partial international recognition. In December 1991, Gorbachev resigned as president and the Soviet Union dissolved relatively peacefully into fifteen sovereign republics, all of which rejected communism and most of which adopted democratic reforms and free-market economies. Inside those new republics, four major areas have claimed their own independence but have not received widespread international recognition. After decades of civil war, Indonesia finally recognized the independence of East Timor in 2002.

In 1949, the Communists won the civil war and established the People's Republic of China in mainland China. The Kuomintang-led Republic of China government retreated to Taipei, its jurisdiction now limited to Taiwan and several outlying islands. Since then, the People's Republic of China has been involved in disputes with the ROC over issues of sovereignty and the political status of Taiwan.

As noted, self-determination movements remain strong in some areas of the world. Some areas possess "de facto" independence, such as Taiwan, North Cyprus, Kosovo, and South Ossetia, but their independence is disputed by one or more major states. Significant movements for self-determination also persist for locations that lack "de facto" independence, such as Kurdistan, Balochistan, Chechnya, and the State of Palestine. Since the early 1990s, the legitimization of the principle of national self-determination has led to an increase in the number of conflicts within states, as sub-groups seek greater self-determination and full secession, and as their conflicts for leadership within groups, with other groups, and with the dominant state turn violent. The international reaction to these new movements has been uneven and often dictated more by politics than principle. The 2000 United Nations Millennium Declaration failed to deal with these new demands, mentioning only "the right to self-determination of peoples which remain under colonial domination and foreign occupation."

In an issue of the "Macquarie University Law Journal", Associate Professor Aleksandar Pavkovic and Senior Lecturer Peter Radan outlined current legal and political issues in self-determination. There is not yet a recognized legal definition of "peoples" in international law. Vita Gudeleviciute of Vytautas Magnus University Law School, reviewing international law and UN resolutions, finds that in cases of non-self-governing peoples (colonized and/or indigenous) and foreign military occupation "a people" is the entire population of the occupied territorial unit, no matter their other differences. In cases where people lack representation by a state's government, the unrepresented become a separate people. Present international law does not recognize ethnic and other minorities as separate peoples, with the notable exception of cases in which such groups are systematically disenfranchised by the government of the state they live in. Other definitions offered are "peoples" being self-evident (from ethnicity, language, history, etc.), or defined by "ties of mutual affection or sentiment", i.e. "loyalty", or by mutual obligations among peoples. Or the definition may be simply that a people is a group of individuals who unanimously choose a separate state. If the "people" are unanimous in their desire for self-determination, it strengthens their claim.
For example, the populations of federal units of the Yugoslav federation were considered a people in the breakup of Yugoslavia, although some of those units had very diverse populations. Although there is no fully accepted definition of peoples, references are often made to a definition proposed by UN Special Rapporteur Martínez Cobo in his study on discrimination against indigenous populations. The UN Independent Expert on the Promotion of a Democratic and Equitable International Order, Alfred de Zayas, relied on the "Kirby definition" in his 2014 Report to the General Assembly A/69/272, which describes a people as "a group of persons with a common historical tradition, racial or ethnic identity, cultural homogeneity, linguistic unity, religious or ideological affinity, territorial connection, or common economic life. To this should be added a subjective element: the will to be identified as a people and the consciousness of being a people."

Abulof suggests that self-determination entails the "moral double helix" of duality (the personal right to align with a people, and the people's right to determine their politics) and mutuality (the right is as much the other's as the self's). Thus, self-determination grants individuals the right to form "a people," which then has the right to establish an independent state, as long as they grant the same to all other individuals and peoples. Criteria for the definition of a "people having the right of self-determination" were proposed during the 2010 Kosovo case decision of the International Court of Justice: (1) traditions and culture; (2) ethnicity; (3) historical ties and heritage; (4) language; (5) religion; (6) sense of identity or kinship; (7) the will to constitute a people; and (8) common suffering.

National self-determination appears to challenge the principle of territorial integrity (or sovereignty) of states, as it is the will of the people that makes a state legitimate. This implies a people should be free to choose their own state and its territorial boundaries. However, there are far more self-identified nations than there are existing states, and there is no legal process to redraw state boundaries according to the will of these peoples. According to the Helsinki Final Act of 1975, the UN, the ICJ and international law experts, there is no contradiction between the principles of self-determination and territorial integrity, with the latter taking precedence.

Pavkovic and Radan describe three theories of international relations relevant to self-determination. Allen Buchanan, author of seven books on self-determination and secession, supports territorial integrity as a moral and legal aspect of constitutional democracy. However, he also advances a "Remedial Rights Only Theory" under which a group has "a general right to secede if and only if it has suffered certain injustices, for which secession is the appropriate remedy of last resort." He would also recognize secession if the state grants, or the constitution includes, a right to secede. Vita Gudeleviciute holds that in cases of non-self-governing peoples and foreign military occupation the principle of self-determination trumps that of territorial integrity. In cases where people lack representation by a state's government, they also may be considered a separate people, but under current law they cannot claim the right to self-determination. On the other hand, she finds that secession within a single state is a domestic matter not covered by international law. Thus there are no rules in international law on what groups may constitute a seceding people.
A number of states have laid claim to territories which they allege were removed from them as a result of colonialism. This is justified by reference to Paragraph 6 of UN Resolution 1514 (XV), which states that any attempt "aimed at partial or total disruption of the national unity and the territorial integrity of a country is incompatible with the purposes and principles of the Charter". This, it is claimed, applies to situations where the territorial integrity of a state had been disrupted by colonisation, so that the people of a territory subject to a historic territorial claim are prevented from exercising a right to self-determination. This interpretation is rejected by many states, who argue that Paragraph 2 of UN Resolution 1514 (XV) states that "all peoples have the right to self-determination" and that Paragraph 6 cannot be used to justify territorial claims. The original purpose of Paragraph 6 was "to ensure that acts of self-determination occur within the established boundaries of colonies, rather than within sub-regions". Further, the use of the word "attempt" in Paragraph 6 denotes future action and cannot be construed to justify territorial redress for past action. An attempt sponsored by Spain and Argentina to qualify the right to self-determination in cases where there was a territorial dispute was rejected by the UN General Assembly, which reiterated that the right to self-determination was a universal right.

In order to accommodate demands for minority rights and avoid secession and the creation of a separate new state, many states decentralize or devolve greater decision-making power to new or existing subunits or autonomous areas. More limited measures might include restricting demands to the maintenance of national cultures or granting non-territorial autonomy in the form of national associations which would assume control over cultural matters. This would be available only to groups that abandoned secessionist demands, and political and judicial control would remain with the territorially organized state.

Pavković explores how national self-determination, in the form of the creation of a new state through secession, could override the principles of majority rule and of equal rights, which are primary liberal principles. This includes the question of how an unwanted state can be imposed upon a minority. He explores five contemporary theories of secession. In "anarcho-capitalist" theory, only landowners have the right to secede. In communitarian theory, only those groups that desire direct or greater political participation have the right, including groups deprived of rights, per Allen Buchanan. In two nationalist theories, only national cultural groups have a right to secede. Australian professor Harry Beran's democratic theory extends the right of secession equally to all types of groups. Unilateral secession against majority rule is justified if the group allows secession of any other group within its territory.

Most sovereign states do not recognize the right to self-determination through secession in their constitutions. Many expressly forbid it. However, there are several existing models of self-determination through greater autonomy and through secession. In liberal constitutional democracies the principle of majority rule has dictated whether a minority can secede. In the United States, Abraham Lincoln acknowledged that secession might be possible through amending the United States Constitution.
The Supreme Court in "Texas v. White" held that secession could occur "through revolution, or through consent of the States." The British Parliament in 1933 held that Western Australia could secede from Australia only upon a vote of a majority of the country as a whole; the previous two-thirds majority vote for secession via referendum in Western Australia was insufficient.

The Chinese Communist Party followed the Soviet Union in including the right of secession in its 1931 constitution in order to entice ethnic nationalities and Tibet into joining. However, the Party eliminated the right to secession in later years and had an anti-secession clause written into the Constitution before and after the founding of the People's Republic of China. The 1947 Constitution of the Union of Burma contained an express state right to secede from the union under a number of procedural conditions. It was eliminated in the 1974 constitution of the Socialist Republic of the Union of Burma (officially the "Union of Myanmar"). Burma still allows "local autonomy under central leadership".

As of 1996 the constitutions of Austria, Ethiopia, France, and Saint Kitts and Nevis have express or implied rights to secession. Switzerland allows for secession from current cantons and the creation of new ones. In the case of proposed Quebec separation from Canada, the Supreme Court of Canada ruled in 1998 that only a clear majority of the province, together with a constitutional amendment confirmed by all participants in the Canadian federation, could allow secession. The 2003 draft of the European Union Constitution allowed for the voluntary withdrawal of member states from the union, although the state that wanted to leave could not be involved in the vote deciding whether or not it could leave the Union. There was much discussion about such self-determination by minorities before the final document underwent the unsuccessful ratification process in 2005. As a result of the successful constitutional referendum held in 2003, every municipality in the Principality of Liechtenstein has the right to secede from the Principality by a vote of a majority of the citizens residing in that municipality.

In determining international borders between sovereign states, self-determination has yielded to a number of other principles. Once groups exercise self-determination through secession, the issue of the proposed borders may prove more controversial than the fact of secession. The bloody Yugoslav wars in the 1990s were related mostly to border issues, because the international community applied a version of uti possidetis juris in transforming the existing internal borders of the various Yugoslav republics into international borders, despite the conflicts of ethnic groups within those boundaries. In the 1990s the indigenous populations of the northern two-thirds of Quebec province opposed being incorporated into a Quebec nation and stated a determination to resist it by force.

The border between Northern Ireland and the Irish Free State was based on the borders of existing counties and did not include all of historic Ulster. A Boundary Commission was established to consider re-drawing it. Its proposals, which amounted to a small net transfer to Northern Ireland, were leaked to the press and then not acted upon. In December 1925, the governments of the Irish Free State, Northern Ireland, and the United Kingdom agreed to accept the existing border. There have been a number of notable cases of self-determination.
For more information on past movements, see the list of historical separatist movements and the lists of decolonized nations. Also see the list of autonomous areas by country and the lists of active separatist movements.

The Republic of Artsakh (Republic of Nagorno-Karabakh), in the Caucasus region, declared its independence on the basis of self-determination rights on September 2, 1991. It successfully defended its independence in the subsequent war with Azerbaijan, but remains largely unrecognized by UN states today. It is a member of the Community for Democracy and Rights of Nations along with three other post-Soviet disputed republics.

From 2003 onwards, self-determination has become the topic of some debate in Australia in relation to Aboriginal Australians and Torres Strait Islanders. In the 1970s, the Indigenous community approached the Federal Government and requested the right to administer their own communities. This encompassed basic local government functions, ranging from land dealings and management of community centres to road maintenance and garbage collection, as well as setting education programmes and standards in their local schools.

The traditional homeland of the Tuareg peoples was divided up by the modern borders of Mali, Algeria and Niger. Numerous rebellions occurred over the decades, but in 2012 the Tuaregs succeeded in occupying their land and declaring the independence of Azawad. However, their movement was hijacked by the Islamist terrorist group Ansar Dine.

The Basque Country as a cultural region (not to be confused with the homonymous Autonomous Community of the Basque Country) is a European region in the western Pyrenees that spans the border between France and Spain, on the Atlantic coast. It comprises the autonomous communities of the Basque Country and Navarre in Spain and the Northern Basque Country in France. Since the 19th century, Basque nationalism has demanded some form of right to self-determination. This desire for independence is particularly stressed among leftist Basque nationalists. The right of self-determination was asserted by the Basque Parliament in 1990, 2002 and 2006. Since self-determination is not recognized in the Spanish Constitution of 1978, some Basques abstained and some voted against it in the constitutional referendum of December 6 of that year. The constitution was approved by a clear majority at the Spanish level, and with 74.6% of the votes in the Basque Country. However, the overall turnout in the Basque Country was 45%, while the overall Spanish turnout was 67.9%. The derived autonomous regime for the Basque Autonomous Community was approved by the Spanish Parliament and also by the Basque citizens in a referendum. The autonomous statute of Navarre ("Amejoramiento del Fuero": "improvement of the charter") was approved by the Spanish Parliament and, like the statutes of 13 out of 17 Spanish autonomous communities, it did not need a referendum to enter into force.

"Euskadi Ta Askatasuna" or ETA was an armed Basque nationalist, separatist and terrorist organization. Founded in 1959, it evolved from a group advocating traditional cultural ways to a paramilitary group with the goal of Basque independence. Its ideology was Marxist–Leninist.

The Nigerian Civil War was fought between Biafran secessionists of the Republic of Biafra and the Nigerian central government. From 1999 to the present day, the indigenous people of Biafra have been agitating for independence to revive their country.
They have registered a human rights organization known as the Bilie Human Rights Initiative, both in Nigeria and at the United Nations, to advocate for their right to self-determination and to achieve independence by the rule of law.

After the 2012 Catalan march for independence, in which between 600,000 and 1.5 million citizens marched, the President of Catalonia, Artur Mas, called new parliamentary elections for 25 November 2012 to elect a new parliament that would exercise the right of self-determination for Catalonia, a right not recognised under the Spanish constitution. The Parliament of Catalonia voted to hold a vote in the next four-year legislature on the question of self-determination. The parliamentary decision was approved by a large majority of MPs: 84 voted for, 21 voted against, and 25 abstained. The Catalan Parliament applied to the Spanish Parliament for the power to call a referendum to be devolved, but this was turned down. In December 2013 the President of the Generalitat, Artur Mas, and the governing coalition agreed to set the referendum on self-determination for 9 November 2014, and legislation specifically stating that the consultation would not be a "referendum" was enacted, only to be blocked by the Spanish Constitutional Court at the request of the Spanish government. Given the block, the Government turned it into a simple "consultation to the people" instead.

The question in the consultation was "Do you want Catalonia to be a State?" and, if the answer to this question was yes, "Do you want this State to be an independent State?". However, as the consultation was not a formal referendum, these (printed) answers were just suggestions, and other answers were also accepted and catalogued as "other answers" rather than as null votes. The turnout in this consultation was about 2.3 million people out of the 6.2 million who were called to vote (this figure does not coincide with the census figure of 5.3 million for two main reasons: first, because the organisers had no access to an official census due to the non-binding character of the consultation, and second, because the legal voting age was set at 16 rather than 18). Due to the lack of an official census, potential voters were assigned to electoral tables according to home address and first family name. Participants had to sign up first with their full name and national ID in a voter registry before casting their ballot, which prevented participants from potentially casting multiple ballots. The overall result was 80.76% in favor of both questions, 11% in favor of the first question but not of the second, and 4.54% against both; the rest were classified as "other answers". The voter turnout was around 37% (most people against the consultation did not vote). Four top members of Catalonia's political leadership were barred from public office for having defied the Constitutional Court's last-minute ban.

Almost three years later (1 October 2017), the Catalan government called a referendum on independence under legislation adopted in September 2017 (despite its being blocked by the Constitutional Court of Spain), with the question "Do you want Catalonia to become an independent state in the form of a Republic?". On polling day, the Catalan police prevented voting in over 500 polling stations without incident, while the Spanish police confiscated ballot boxes and closed down 92 voting centres with violent truncheon charges. The opposition parties had called for non-participation.
The turnout (according to the votes that were counted) was 2.3 million out of 5.3 million (43.03% of the census), and 90.18% of the ballots were in favour of independence. The turnout, ballot count and results were similar to those of the 2014 "consultation".

Under Dzhokhar Dudayev, Chechnya declared independence as the Chechen Republic of Ichkeria, citing self-determination, Russia's history of mistreatment of Chechens, and a history of independence before the invasion by Russia as its main motives. Russia has restored control over Chechnya, but the separatist government still functions in exile, though it has split into two entities: the Akhmed Zakayev-run secular Chechen Republic (based in Poland, the UK and the US) and the Islamic Caucasus Emirate.

There is an active secessionist movement based on the self-determination of the residents of the Donetsk and Luhansk regions of eastern Ukraine, allegedly against the illegitimacy and corruption of the Ukrainian government. However, many in the international community assert that the referendums held there in 2014 regarding independence from Ukraine were illegitimate and undemocratic. Similarly, there are reports that presidential elections in May 2014 were prevented from taking place in the two regions after armed gunmen took control of polling stations, kidnapped election officials, and stole lists of electors, thus denying the population the chance to express their will in a free, fair, and internationally recognised election. There are also arguments that the de facto separation of eastern Ukraine from the rest of the country is not an expression of self-determination, but rather a manipulation through a revival of pro-Soviet sentiment and an invasion by neighbouring Russia, with Ukrainian President Petro Poroshenko claiming in 2015 that up to 9,000 Russian soldiers were deployed in Ukraine.

Self-determination is referred to in the Falkland Islands Constitution and is a factor in the Falkland Islands sovereignty dispute. The population has existed for over nine generations, continuously for over 185 years. In the 2013 referendum organised by the Falkland Islands Government, 99.8% voted to remain British. As administering power, the British Government considers that, since the majority of inhabitants wish to remain British, transfer of sovereignty to Argentina would be counter to their right to self-determination. Argentina states that the principle of self-determination is not applicable, since the current inhabitants are not aboriginal and were brought in to replace the Argentine population, which was expelled by an "act of force". This refers to the re-establishment of British rule in 1833, during which Argentina claims the existing population living in the islands was expelled. Argentina thus argues that, in the case of the Falkland Islands, the principle of territorial integrity should have precedence over self-determination. Historical records dispute Argentina's claims: whilst acknowledging that the garrison was expelled, they note that the existing civilian population remained at Port Louis and that there was no attempt to settle the islands until 1841.

The right to self-determination is referred to in the preamble of Chapter 1 of the Gibraltar constitution and, since the United Kingdom also gave assurances that the right to self-determination of Gibraltarians would be respected in any transfer of sovereignty over the territory, is a factor in the dispute with Spain over the territory.
The impact of the right to self-determination of Gibraltarians was seen in the 2002 Gibraltar sovereignty referendum, in which Gibraltarian voters overwhelmingly rejected a plan to share sovereignty over Gibraltar between the UK and Spain. However, the UK government differs with the Gibraltarian government in that it considers Gibraltarian self-determination to be limited by the Treaty of Utrecht, which prevents Gibraltar from achieving independence without the agreement of Spain, a position that the Gibraltarian government does not accept. The Spanish government denies that Gibraltarians have the right to self-determination, considering them to be "an artificial population without any genuine autonomy" and not "indigenous". However, the Partido Andalucista has agreed to recognise the right to self-determination of Gibraltarians.

Before the United Nations' adoption of resolution 2908 (XXVII) on 2 November 1972, the People's Republic of China vetoed the former British colony of Hong Kong's right to self-determination on 8 March 1972. This sparked protests from several nations, along with Great Britain's declaration on 14 December that the decision was invalid. Decades later, a nationalist independence movement, dubbed the Hong Kong independence movement, emerged in the now Chinese-controlled territory. It advocates that the autonomous region become a fully independent sovereign state. The city is considered a special administrative region (SAR) which, according to the PRC, enjoys a high degree of autonomy under the People's Republic of China (PRC), guaranteed under Article 2 of the Hong Kong Basic Law (which is ratified under the Sino-British Joint Declaration), since the transfer of the sovereignty of Hong Kong from the United Kingdom to the PRC in 1997. Since the handover, many Hongkongers have become increasingly concerned about Beijing's growing encroachment on the territory's freedoms and the failure of the Hong Kong government to deliver "true" democracy.

The 2014–15 Hong Kong electoral reform package deeply divided the city, as it would have allowed Hongkongers universal suffrage, but with Beijing retaining the authority to screen candidates for the office of Chief Executive of Hong Kong (CE), the highest-ranking official of the territory. This sparked 79 days of massive peaceful protests, dubbed the "Umbrella Revolution", and the pro-independence movement emerged on the Hong Kong political scene. Since then, localism has gained momentum, particularly after the failure of the peaceful Umbrella Movement. Young localist leaders have led numerous protest actions against pro-Chinese policies to raise awareness of the social problems of Hong Kong under Chinese rule. These include the sit-in protest against the bill to strengthen Internet censorship, demonstrations against Chinese political interference in the University of Hong Kong, the Recover Yuen Long protests, and the 2016 Mong Kok civil unrest. According to a survey conducted by the Chinese University of Hong Kong (CUHK) in July 2016, 17.4% of respondents supported the city becoming an independent entity after 2047, while 3.6% stated that it was "possible".

Ever since the inception of Pakistan and India in 1947, the legal status of Jammu and Kashmir, the territory lying between the two states, has been contested, dating from Britain's withdrawal from its rule over the subcontinent.
Maharaja Hari Singh, the ruler of Kashmir at the time of accession, signed the Instrument of Accession on October 26, 1947 as his territory was being attacked by Pakistani tribesmen. The signing of this instrument allowed Jammu and Kashmir to accede to India on legal terms. When the instrument was taken to Lord Mountbatten, the last viceroy of British India, he agreed to it and stated that a referendum needed to be held so that the citizens of Kashmir could vote on which state Kashmir should accede to. The referendum Mountbatten called for never took place and frames one of the legal disputes over Kashmir. In 1948 the United Nations intervened and ordered a plebiscite so that the Kashmiris could be heard on whether they would like to accede to Pakistan or India. This plebiscite, however, left out the option for Kashmiris to exercise self-determination by becoming an autonomous state. To this date the Kashmiris have faced numerous human rights violations committed by both India and Pakistan and have yet to gain the complete autonomy they have been seeking through self-determination.

The insurgency in Kashmir against Indian rule has existed in various forms. A widespread armed insurgency started in Kashmir against Indian rule in 1989 after allegations of rigging by the Indian government in the 1987 Jammu and Kashmir state election. This led some parties in the state assembly to form militant wings, which acted as a catalyst for the emergence of armed insurgency in the region. The conflict over Kashmir has resulted in tens of thousands of deaths. The Inter-Services Intelligence of Pakistan has been accused by India of supporting and training both pro-Pakistan and pro-independence militants to fight Indian security forces in Jammu and Kashmir, a charge that Pakistan denies. According to official figures released in the Jammu and Kashmir assembly, there were 3,400 disappearance cases, and the conflict had left between 47,000 and 100,000 people dead as of July 2009. However, violence in the state fell sharply after the start of a slow-moving peace process between India and Pakistan. After the peace process failed in 2008, mass demonstrations against Indian rule and low-scale militancy emerged again. However, despite boycott calls by separatist leaders in 2014, the Jammu and Kashmir Assembly elections saw the highest voter turnout in the 25 years since the insurgency erupted. According to the Indian government, turnout exceeded 65%, higher than the usual turnout in other state assembly elections of India, and the government considered this an increase in the Kashmiri people's faith in India's democratic process. However, activists say that the voter turnout is highly exaggerated and that elections are held under duress. Votes are cast because the people want stable governance of the state, and this cannot be mistaken for an endorsement of Indian rule.

Kurdistan is a historical region primarily inhabited by the Kurdish people of the Middle East. The territory is currently part of Turkey, Iraq, Syria and Iran. There are Kurdish self-determination movements in each of the four states. Iraqi Kurdistan has to date achieved the largest degree of self-determination through the formation of the Kurdistan Regional Government, an entity recognised by the Iraqi Federal Constitution.
Although the right of the Kurds to create a state was recognized following World War I in the Treaty of Sèvres, that treaty was annulled by the Treaty of Lausanne (1923). To date, two separate Kurdish republics and one Kurdish kingdom have declared sovereignty: the Republic of Ararat (Ağrı Province, Turkey), the Republic of Mahabad (West Azerbaijan Province, Iran) and the Kingdom of Kurdistan (Sulaymaniyah Governorate, Iraqi Kurdistan, Iraq). Each of these fledgling states was crushed by military intervention. The Patriotic Union of Kurdistan, which currently holds the Iraqi presidency, and the Kurdistan Democratic Party, which governs the Kurdistan Regional Government, both explicitly commit themselves to the development of Kurdish self-determination, but opinions vary as to whether self-determination should be sought within the current borders and countries. Efforts towards Kurdish self-determination are considered illegal separatism by the governments of Turkey and Iran, and the movement is politically repressed in both states. This is intertwined with Kurdish nationalist insurgencies in Iran and in Turkey, which in turn justify and are justified by the repression of peaceful advocacy. In Syria, a self-governing, local, Kurdish-dominated polity was established in 2012 amid the upheaval of the Syrian Civil War, but it has not been recognized by any foreign state.

Naga refers to a vaguely defined conglomeration of distinct tribes living on the border of India and Burma. Each of these tribes lived in a sovereign village before the arrival of the British, but they developed a common identity as the area was Christianized. After the British left India, a section of the Nagas under the leadership of Angami Zapu Phizo sought to establish a separate country for the Nagas. Phizo's group, the Naga National Council (NNC), claimed that 99.9% of the Nagas wanted an independent Naga country according to a referendum conducted by it. It waged a secessionist insurgency against the Government of India. The NNC collapsed after Phizo had his dissenters killed or forced them to seek refuge with the Government. Phizo escaped to London, while the NNC's successor secessionist groups continued to stage violent attacks against the Indian Government. The Naga People's Convention (NPC), another major Naga organization, was opposed to the secessionists. Its efforts led to the creation of a separate Nagaland state within India in 1963. The secessionist violence declined considerably after the Shillong Accord of 1975. However, three factions of the National Socialist Council of Nagaland (NSCN) continue to seek an independent country which would include parts of India and Burma. They envisage a sovereign, predominantly Christian nation called "Nagalim".

Another controversial episode, with perhaps more relevance, occurred as the British began their exit from British Malaya. It concerned the findings of a "United Nations Assessment Team" that visited the British territories of North Borneo and Sarawak in 1963 to determine whether or not the populations wished to become part of the new Malaysian Federation. The United Nations team's mission followed on from an earlier assessment by the British-appointed Cobbold Commission, which had arrived in the territories in 1962 and held hearings to determine public opinion. It also sifted through 1,600 letters and memoranda submitted by individuals, organisations and political parties.
Cobbold concluded that around two thirds of the population favoured the formation of Malaysia, while the remaining third wanted either independence or continuing control by the United Kingdom. The United Nations team largely confirmed these findings, which were later accepted by the General Assembly, and both territories subsequently became part of the new Federation of Malaysia. The conclusions of both the Cobbold Commission and the United Nations team were arrived at without any referendum on self-determination being held; unlike in Singapore, no referendum was ever conducted in Sarawak and North Borneo. As the new federation sought to consolidate several previously separately ruled entities, the Manila Accord was concluded: an agreement between the Philippines, the Federation of Malaya and Indonesia on 31 July 1963 to abide by the wishes of the people of North Borneo and Sarawak, within the context of United Nations General Assembly Resolution 1541 (XV), Principle 9 of the Annex, taking into account referendums in North Borneo and Sarawak that would be free and without coercion. This also triggered the Indonesia–Malaysia confrontation, as Indonesia regarded the agreements as having been violated.

Cyprus was settled by Mycenaean Greeks in two waves in the 2nd millennium BC. As a strategic location in the Middle East, it was subsequently occupied by several major powers, including the empires of the Assyrians, Egyptians and Persians, from whom the island was seized in 333 BC by Alexander the Great. Subsequent rulers included Ptolemaic Egypt, the Classical and Eastern Roman Empires, Arab caliphates for a short period, and the French Lusignan dynasty. Following the death in 1473 of James II, the last Lusignan king, the Republic of Venice assumed control of the island, while the late king's Venetian widow, Queen Catherine Cornaro, reigned as figurehead. Venice formally annexed the Kingdom of Cyprus in 1489, following the abdication of Catherine. The Venetians fortified Nicosia by building the Walls of Nicosia, and used it as an important commercial hub. Although the Lusignan French aristocracy remained the dominant social class in Cyprus throughout the medieval period, the former assumption that Greeks were treated only as serfs on the island is no longer considered by academics to be accurate. It is now accepted that the medieval period saw increasing numbers of Greek Cypriots elevated to the upper classes, a growing Greek middle class, and the Lusignan royal household even marrying Greeks, including King John II of Cyprus, who married Helena Palaiologina.

Throughout Venetian rule, the Ottoman Empire frequently raided Cyprus. In 1539 the Ottomans destroyed Limassol; fearing the worst, the Venetians also fortified Famagusta and Kyrenia. After invading in 1570, the Ottomans controlled and solely governed the whole island of Cyprus from 1571 until its leasing to the United Kingdom in 1878. Cyprus was placed under British administration under the Cyprus Convention in 1878 and formally annexed by Britain in 1914. While Turkish Cypriots made up 18% of the population, the partition of Cyprus and the creation of a Turkish state in the north became a policy of Turkish Cypriot leaders and Turkey in the 1950s. Politically, there was no majority/minority relation between Greek Cypriots and Turkish Cypriots; hence, in 1960, the Republic of Cyprus was founded by the constituent communities of Cyprus (Greek Cypriots and Turkish Cypriots) as a non-unitary state; the 1960 Constitution set both Turkish and Greek as the official languages.
During 1963–74, the island experienced ethnic clashes and turmoil, a coup aimed at unifying the island with Greece, and the eventual Turkish invasion in 1974. The Turkish Republic of Northern Cyprus was declared in 1983 and is recognized only by Turkey. Monroe Leigh argued in 1990, in "The Legal Status in International Law of the Turkish Cypriot and the Greek Cypriot Communities in Cyprus", that "[t]he Greek Cypriot and Turkish Cypriot regimes participating in these negotiations, and the respective communities which they represent, are presently entitled to exercise equal rights under international law, including rights of self-determination." Before Turkey's invasion in 1974, Turkish Cypriots were concentrated in Turkish Cypriot enclaves on the island.

Northern Cyprus fulfills all the classical criteria of statehood. The United Nations Peacekeeping Force in Cyprus (UNFICYP) operates according to the laws of Northern Cyprus in the north of the island. According to the European Court of Human Rights (ECtHR), the laws of Northern Cyprus are valid in the north of Cyprus. The ECtHR did "not" accept the claim that the courts of Northern Cyprus lacked "independence and/or impartiality", and it directed all Cypriots to exhaust the "domestic remedies" applied by Northern Cyprus before taking their cases to the ECtHR. In 2014, a United States federal court qualified the Turkish Republic of Northern Cyprus as a "democratic country". In 2017, the United Kingdom's High Court decided that "There was no duty in UK law upon the UK's Government to refrain from recognising Northern Cyprus. The United Nations itself works with Northern Cyprus law enforcement agencies and facilitates cooperation between the two parts of the island." The High Court also dismissed the claim that "cooperation between UK police and law agencies in northern Cyprus was illegal".

In Canada, many in the province of Quebec have wanted the province to separate from Confederation. The Parti Québécois has asserted Quebec's "right to self-determination." There is debate over the conditions under which this right would be realized. French-speaking Quebec nationalism and support for maintaining Québécois culture inspired Quebec nationalists, many of whom were supporters of the Quebec sovereignty movement during the late 20th century.

Scotland has a long-standing independence movement, with polls in January 2020 suggesting that 52% of eligible voters would vote for an independent Scotland. The country's largest political party, the SNP, campaigns for Scottish independence. A referendum on independence was held in 2014, in which independence was rejected by 55% of voters. The independence debate was reignited in the wake of the UK referendum on EU membership, in which Scotland voted overwhelmingly to remain a member of the EU; results in the rest of the UK, however, led to Scotland being taken out of the EU. In late 2019 the Scottish Government announced plans to hold another referendum on Scottish independence. This was given assent by the Scottish Parliament but, as of February 2020, the UK Prime Minister has refused to grant the powers required to hold the referendum.

Section 235 of the South African Constitution allows for the right to self-determination of a community, within the framework of "the right of the South African people as a whole to self-determination", and pursuant to national legislation. This section of the constitution was one of the negotiated settlements during the handing over of political power in 1994.
Supporters of an independent Afrikaner homeland have argued that their goals are reasonable under this new legislation. In Italy, South Tyrol/Alto Adige was annexed after the First World War. The German-speaking inhabitants of South Tyrol are protected by the Gruber-De Gasperi Agreement, but there are still supporters of the self-determination of South Tyrol, e.g. the party Die Freiheitlichen and the South Tyrolean independence movement. At the end of World War II the Allies offered to separate South Tyrol from Italy, but the South Tyrolean People's Party refused, preferring to obtain substantial fiscal and economic advantages from Rome.

The colonization of the North American continent and its Native American population has been the source of legal battles since the early 19th century. Many Native American tribes were resettled onto separate tracts of land (reservations), which have retained a certain degree of autonomy within the United States. The federal government recognizes tribal sovereignty and has established a number of laws attempting to clarify the relationship among the federal, state, and tribal governments. The Constitution and later federal laws recognize the local sovereignty of tribal nations but do not recognize full sovereignty equivalent to that of foreign nations, hence the term "domestic dependent nations" for the federally recognized tribes.

Certain Chicano nationalist groups seek to "recreate" an ethnic-based state to be called Aztlán, after the legendary homeland of the Aztecs. It would comprise the Southwestern United States, historic territory of indigenous peoples and their descendants, as well as of colonists and later settlers under the Spanish colonial and Mexican governments. Black nationalists have argued that, by virtue of slaves' unpaid labor and the harsh experiences of African Americans under slavery and Jim Crow, African Americans have a moral claim to the areas where the highest percentage of the population classified as black lives. They believe this area should be the basis for forming an independent state of New Afrika, designed to have an African-American majority and political control.

There are several active Hawaiian autonomy or independence movements, each with the goal of realizing some level of political control over one or several islands. The groups range from those seeking territorial units similar to Indian reservations under the United States, with the least amount of independent control, to the Hawaiian sovereignty movement, which seeks the greatest degree of independence. The Hawaiian sovereignty movement seeks to revive the Hawaiian nation under the Hawaiian constitution. Supporters of this concept say that Hawaii retained its sovereignty while under the control of the United States.

Since 1972, the U.N. Decolonization Committee has called for Puerto Rico's "decolonization" and for the US to recognize the island's right to self-determination and independence. In 2007 the Decolonization Subcommittee called for the United Nations General Assembly to review the political status of Puerto Rico, a power reserved by the 1953 Resolution. This followed the 1967 passage of a plebiscite act that provided for a vote on the status of Puerto Rico with three status options: continued commonwealth, statehood, and independence. In the first plebiscite, the commonwealth option won with 60.4% of the votes, but US congressional committees failed to enact legislation to address the status issue.
In subsequent plebiscites in 1993 and 1998, the status quo was favored. In a referendum held in November 2012, a majority of Puerto Rican residents voted to change the territory's relationship with the United States, with statehood being the preferred option. But a large number of ballots (one-third of all votes cast) were left blank on the question of the preferred alternative status. Supporters of commonwealth status had urged voters to leave this question blank. If the blank votes are counted as anti-statehood votes, the statehood option would have received less than 50% of all ballots cast. As of January 2014, Washington had not taken action to address the results of this plebiscite.

Many current US state, regional and city secession groups use the language of self-determination. A 2008 Zogby International poll revealed that 22% of Americans believe that "any state or region has the right to peaceably secede and become an independent republic." Since the late 20th century, some states have periodically discussed desires to secede from the United States. Unilateral secession was ruled unconstitutional by the US Supreme Court in "Texas v. White" (1869). In the case of Hawaii, the struggle for self-determination does not fall under secession, as it is less a break from federal administration than a return to the process through which cession was claimed to have occurred: namely, the ongoing occupation via a US-imposed military coup and/or the removal of Hawaii from the UN list of Non-Self-Governing Territories, a removal that critics argue failed to educate or properly inform the citizenry of Hawaii of its options for self-determination and sidestepped the guidelines laid out in UN General Assembly resolution 742 (1953).

The self-determination of the West Papuan people has been violently suppressed by the Indonesian government since the withdrawal of Dutch colonial rule under the Netherlands New Guinea in 1962. There is an active movement based on the self-determination of the Sahrawi people in the Western Sahara region. Morocco also claims the entire territory and maintains control of about two-thirds of the region.
https://en.wikipedia.org/wiki?curid=29269
Spinor
In geometry and physics, spinors are elements of a complex vector space that can be associated with Euclidean space. Like geometric vectors and more general tensors, spinors transform linearly when the Euclidean space is subjected to a slight (infinitesimal) rotation. However, when a sequence of such small rotations is composed (integrated) to form an overall final rotation, the resulting spinor transformation depends on which sequence of small rotations was used. Unlike vectors and tensors, a spinor transforms to its negative when the space is continuously rotated through a complete turn from 0° to 360°. This property characterizes spinors: spinors can be viewed as the "square roots" of vectors (although this is inaccurate and may be misleading; they are better viewed as "square roots" of sections of vector bundles: in the case of the exterior algebra bundle of the cotangent bundle, they thus become "square roots" of differential forms). It is also possible to associate a substantially similar notion of spinor with Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913. In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles.

Spinors are characterized by the specific way in which they behave under rotations. They change in different ways depending not just on the overall final rotation, but on the details of how that rotation was achieved (by a continuous path in the rotation group). There are two topologically distinguishable classes (homotopy classes) of paths through rotations that result in the same overall rotation, as illustrated by the belt trick puzzle. These two inequivalent classes yield spinor transformations of opposite sign. The spin group is the group of all rotations keeping track of the class. It doubly covers the rotation group, since each rotation can be obtained in two inequivalent ways as the endpoint of a path. The space of spinors is by definition equipped with a (complex) linear representation of the spin group, meaning that elements of the spin group act as linear transformations on the space of spinors, in a way that genuinely depends on the homotopy class. In mathematical terms, spinors are described by a double-valued projective representation of the rotation group SO(3).

Although spinors can be defined purely as elements of a representation space of the spin group (or its Lie algebra of infinitesimal rotations), they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis-independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with. The Clifford algebra acts on the spinor space, and the elements of the spinor space are spinors. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations. The spinors are the column vectors on which these matrices act.
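The sign reversal under a complete turn can be made concrete numerically. The following is a minimal sketch, assuming Python with NumPy (the function names are illustrative, not drawn from any library): it rotates an ordinary vector and a two-component spinor through 360° about the z-axis, using the standard spin-group element exp(-i θ σ_z / 2) to act on the spinor.

```python
import numpy as np

def spinor_rotation(theta):
    """Spin-group (SU(2)) element for a rotation by theta about the z-axis:
    exp(-i * theta * sigma_z / 2), which is diagonal for this axis."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def vector_rotation(theta):
    """Ordinary SO(3) rotation matrix by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

theta = 2 * np.pi                            # one complete turn
v = np.array([1.0, 0.0, 0.0])                # a geometric vector
psi = np.array([1.0, 0.0], dtype=complex)    # a two-component spinor

print(vector_rotation(theta) @ v)    # [1. 0. 0.]  -- unchanged (up to rounding)
print(spinor_rotation(theta) @ psi)  # [-1.+0.j  0.+0.j]  -- sign has flipped
```

At θ = 2π the vector has returned to itself, while the spinor has picked up the factor exp(-iπ) = -1; only after a 4π turn does the spinor also return to its starting value.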
In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices, and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, hence what precisely constitutes a "column vector" (or spinor), involves the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as (complex) column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.

What characterizes spinors and distinguishes them from geometric vectors and other tensors is subtle. Consider applying a rotation to the coordinates of a system. No object in the system itself has moved, only the coordinates have, so there will always be a compensating change in those coordinate values when applied to any object of the system. Geometrical vectors, for example, have components that will undergo "the same" rotation as the coordinates. More broadly, any tensor associated with the system (for instance, the stress of some medium) also has coordinate descriptions that adjust to compensate for changes to the coordinate system itself.

Spinors do not appear at this level of the description of a physical system, when one is concerned only with the properties of a single isolated rotation of the coordinates. Rather, spinors appear when we imagine that instead of a single rotation, the coordinate system is gradually (continuously) rotated between some initial and final configuration. For any of the familiar and intuitive ("tensorial") quantities associated with the system, the transformation law does not depend on the precise details of how the coordinates arrived at their final configuration. Spinors, on the other hand, are constructed in such a way that makes them "sensitive" to how the gradual rotation of the coordinates arrived there: they exhibit path-dependence. It turns out that, for any final configuration of the coordinates, there are actually two ("topologically") inequivalent "gradual" (continuous) rotations of the coordinate system that result in this same configuration. This ambiguity is called the homotopy class of the gradual rotation. The belt trick puzzle demonstrates two different rotations, one through an angle of 2π and the other through an angle of 4π, having the same final configurations but different classes. Spinors actually exhibit a sign-reversal that genuinely depends on this homotopy class. This distinguishes them from vectors and other tensors, none of which can feel the class.

Spinors can be exhibited as concrete objects using a choice of Cartesian coordinates. In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to (angular momenta about) the three coordinate axes. These are 2×2 matrices with complex entries, and the two-component complex column vectors on which these matrices act by matrix multiplication are the spinors. In this case, the spin group is isomorphic to the group of 2×2 unitary matrices with determinant one, which naturally sits inside the matrix algebra. This group acts by conjugation on the real vector space spanned by the Pauli matrices themselves, realizing it as a group of rotations among them, but it also acts on the column vectors (that is, the spinors).
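The conjugation action just described can also be checked directly. Below is a minimal sketch, again assuming Python with NumPy (variable names are illustrative): it verifies the canonical anticommutation relations of the Pauli matrices and shows that a 90° spin-group element rotates σ_x into σ_y by conjugation, with U and -U inducing the same rotation, which is the double-cover phenomenon in miniature.

```python
import numpy as np

# The Pauli matrices span the real 3-space on which SU(2) acts by conjugation.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Canonical anticommutation relations: {s_i, s_j} = 2 * delta_ij * I.
for a in (sx, sy, sz):
    for b in (sx, sy, sz):
        anti = a @ b + b @ a
        expected = 2 * np.eye(2) if a is b else np.zeros((2, 2))
        assert np.allclose(anti, expected)

theta = np.pi / 2
# Spin-group element exp(-i * theta * sigma_z / 2), expanded via Euler's formula.
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sz

# Conjugation rotates sigma_x into sigma_y (a 90-degree rotation about z) ...
assert np.allclose(U @ sx @ U.conj().T, sy)
# ... and U and -U induce exactly the same rotation: two spin-group elements
# sit over each rotation, which is the double cover in action.
assert np.allclose((-U) @ sx @ (-U).conj().T, sy)
```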
More generally, a Clifford algebra can be constructed from any vector space "V" equipped with a (nondegenerate) quadratic form, such as Euclidean space with its standard dot product or Minkowski space with its standard Lorentz metric. The space of spinors is the space of column vectors with 2^⌊dim "V"/2⌋ components. The orthogonal Lie algebra (i.e., the infinitesimal "rotations") and the spin group associated to the quadratic form are both (canonically) contained in the Clifford algebra, so every Clifford algebra representation also defines a representation of the Lie algebra and the spin group. Depending on the dimension and metric signature, this realization of spinors as column vectors may be irreducible or it may decompose into a pair of so-called "half-spin" or Weyl representations. When the vector space "V" is four-dimensional, the algebra is described by the gamma matrices. The space of spinors is formally defined as the fundamental representation of the Clifford algebra. (This may or may not decompose into irreducible representations.) The space of spinors may also be defined as a spin representation of the orthogonal Lie algebra. These spin representations are also characterized as the finite-dimensional projective representations of the special orthogonal group that do not factor through linear representations. Equivalently, a spinor is an element of a finite-dimensional group representation of the spin group on which the center acts non-trivially. There are essentially two frameworks for viewing the notion of a spinor. From a representation theoretic point of view, one knows beforehand that there are some representations of the Lie algebra of the orthogonal group that cannot be formed by the usual tensor constructions. These missing representations are then labeled the spin representations, and their constituents "spinors". From this view, a spinor must belong to a representation of the double cover of the rotation group SO("n"), or more generally of a double cover of the generalized special orthogonal group SO+("p", "q") on spaces with a metric signature of ("p", "q"). These double covers are Lie groups, called the spin groups Spin("n") or Spin("p", "q"). All the properties of spinors, and their applications and derived objects, are manifested first in the spin group. Representations of the double covers of these groups yield double-valued projective representations of the groups themselves. (This means that the action of a particular rotation on vectors in the quantum Hilbert space is only defined up to a sign.) From a geometrical point of view, one can explicitly construct the spinors and then examine how they behave under the action of the relevant Lie groups. This latter approach has the advantage of providing a concrete and elementary description of what a spinor is. However, such a description becomes unwieldy when complicated properties of the spinors, such as Fierz identities, are needed. The language of Clifford algebras (sometimes called geometric algebras) provides a complete picture of the spin representations of all the spin groups, and the various relationships between those representations, via the classification of Clifford algebras. It largely removes the need for "ad hoc" constructions. In detail, let "V" be a finite-dimensional complex vector space with nondegenerate bilinear form "g". The Clifford algebra Cℓ("V", "g") is the algebra generated by "V" along with the anticommutation relation "xy" + "yx" = 2"g"("x", "y"). It is an abstract version of the algebra generated by the gamma or Pauli matrices.
If "V" = "n", with the standard form we denote the Clifford algebra by Cℓ"n"(). Since by the choice of an orthonormal basis every complex vectorspace with non-degenerate form is isomorphic to this standard example, this notation is abused more generally if . If is even, Cℓ"n"() is isomorphic as an algebra (in a non-unique way) to the algebra of complex matrices (by the Artin-Wedderburn theorem and the easy to prove fact that the Clifford algebra is central simple). If is odd, Cℓ2"k"+1() is isomorphic to the algebra of two copies of the complex matrices. Therefore, in either case has a unique (up to isomorphism) irreducible representation (also called simple Clifford module), commonly denoted by Δ, of dimension 2["n"/2]. Since the Lie algebra is embedded as a Lie subalgebra in equipped with the Clifford algebra commutator as Lie bracket, the space Δ is also a Lie algebra representation of called a spin representation. If "n" is odd, this Lie algebra representation is irreducible. If "n" is even, it splits further into two irreducible representations called the Weyl or "half-spin representations". Irreducible representations over the reals in the case when "V" is a real vector space are much more intricate, and the reader is referred to the Clifford algebra article for more details. Spinors form a vector space, usually over the complex numbers, equipped with a linear group representation of the spin group that does not factor through a representation of the group of rotations (see diagram). The spin group is the group of rotations keeping track of the homotopy class. Spinors are needed to encode basic information about the topology of the group of rotations because that group is not simply connected, but the simply connected spin group is its double cover. So for every rotation there are two elements of the spin group that represent it. Geometric vectors and other tensors cannot feel the difference between these two elements, but they produce "opposite" signs when they affect any spinor under the representation. Thinking of the elements of the spin group as homotopy classes of one-parameter families of rotations, each rotation is represented by two distinct homotopy classes of paths to the identity. If a one-parameter family of rotations is visualized as a ribbon in space, with the arc length parameter of that ribbon being the parameter (its tangent, normal, binormal frame actually gives the rotation), then these two distinct homotopy classes are visualized in the two states of the belt trick puzzle (above). The space of spinors is an auxiliary vector space that can be constructed explicitly in coordinates, but ultimately only exists up to isomorphism in that there is no "natural" construction of them that does not rely on arbitrary choices such as coordinate systems. A notion of spinors can be associated, as such an auxiliary mathematical object, with any vector space equipped with a quadratic form such as Euclidean space with its standard dot product, or Minkowski space with its Lorentz metric. In the latter case, the "rotations" include the Lorentz boosts, but otherwise the theory is substantially similar. The constructions given above, in terms of Clifford algebra or representation theory, can be thought of as defining spinors as geometric objects in zero-dimensional space-time. To obtain the spinors of physics, such as the Dirac spinor, one extends the construction to obtain a spin structure on 4-dimensional space-time (Minkowski space). 
Effectively, one starts with the tangent manifold of space-time, each point of which is a 4-dimensional vector space with "SO"(3,1) symmetry, and then builds the spin group at each point. The neighborhoods of points are endowed with concepts of smoothness and differentiability: the standard construction is one of a fibre bundle, the fibers of which are affine spaces transforming under the spin group. After constructing the fiber bundle, one may then consider differential equations, such as the Dirac equation, or the Weyl equation on the fiber bundle. These equations (Dirac or Weyl) have solutions that are plane waves, having symmetries characteristic of the fibers, "i.e." having the symmetries of spinors, as obtained from the (zero-dimensional) Clifford algebra/spin representation theory described above. Such plane-wave solutions (or other solutions) of the differential equations can then properly be called fermions; fermions have the algebraic qualities of spinors. By general convention, the terms "fermion" and "spinor" are often used interchangeably in physics, as synonyms of one another. It appears that all fundamental particles in nature that are spin-1/2 are described by the Dirac equation, with the possible exception of the neutrino. There does not seem to be any "a priori" reason why this would be the case. A perfectly valid choice for spinors would be the non-complexified version of Cℓ1,3(R), the Majorana spinor. There also does not seem to be any particular prohibition against Weyl spinors appearing in nature as fundamental particles. The Dirac, Weyl, and Majorana spinors are interrelated, and their relation can be elucidated on the basis of real geometric algebra. Dirac and Weyl spinors are complex representations while Majorana spinors are real representations. Weyl spinors are insufficient to describe massive particles, such as electrons, since the Weyl plane-wave solutions necessarily travel at the speed of light; for massive particles, the Dirac equation is needed. The initial construction of the Standard Model of particle physics starts with both the electron and the neutrino as massless Weyl spinors; the Higgs mechanism gives electrons a mass; the classical neutrino remained massless, and was thus an example of a Weyl spinor. However, because of observed neutrino oscillation, it is now believed that neutrinos are not Weyl spinors, but perhaps instead Majorana spinors. It is not known whether Weyl spinor fundamental particles exist in nature. The situation for condensed matter physics is different: one can construct two- and three-dimensional "spacetimes" in a large variety of different physical materials, ranging from semiconductors to far more exotic materials. In 2015, an international team led by Princeton University scientists announced that they had found a quasiparticle that behaves as a Weyl fermion. One major mathematical application of the construction of spinors is to make possible the explicit construction of linear representations of the Lie algebras of the special orthogonal groups, and consequently spinor representations of the groups themselves. At a more profound level, spinors have been found to be at the heart of approaches to the Atiyah–Singer index theorem, and to provide constructions in particular for discrete series representations of semisimple groups. The spin representations of the special orthogonal Lie algebras are distinguished from the tensor representations given by Weyl's construction by the weights.
Whereas the weights of the tensor representations are integer linear combinations of the roots of the Lie algebra, those of the spin representations are half-integer linear combinations thereof. Explicit details can be found in the spin representation article. The spinor can be described, in simple terms, as "vectors of a space the transformations of which are related in a particular way to rotations in physical space". Several everyday analogies have been formulated in terms of the plate trick, tangloids and other examples of orientation entanglement. Nonetheless, the concept is generally considered notoriously difficult to understand, as illustrated by Michael Atiyah's remark, recounted by Dirac's biographer Graham Farmelo, that no one fully understands spinors. The most general mathematical form of spinors was discovered by Élie Cartan in 1913. The word "spinor" was coined by Paul Ehrenfest in his work on quantum physics. Spinors were first applied to mathematical physics by Wolfgang Pauli in 1927, when he introduced his spin matrices. The following year, Paul Dirac discovered the fully relativistic theory of electron spin by showing the connection between spinors and the Lorentz group. By the 1930s, Dirac, Piet Hein and others at the Niels Bohr Institute (then known as the Institute for Theoretical Physics of the University of Copenhagen) created toys such as Tangloids to teach and model the calculus of spinors. Spinor spaces were represented as left ideals of a matrix algebra in 1930, by G. Juvet and by Fritz Sauter. More specifically, instead of representing spinors as complex-valued 2D column vectors as Pauli had done, they represented them as complex-valued 2 × 2 matrices in which only the elements of the left column are non-zero. In this manner the spinor space became a minimal left ideal in the algebra of 2 × 2 complex matrices. In 1947 Marcel Riesz constructed spinor spaces as elements of a minimal left ideal of Clifford algebras. In 1966/1967, David Hestenes replaced spinor spaces by the even subalgebra Cℓ01,3(R) of the spacetime algebra Cℓ1,3(R). As of the 1980s, the theoretical physics group at Birkbeck College around David Bohm and Basil Hiley has been developing algebraic approaches to quantum theory that build on Sauter and Riesz' identification of spinors with minimal left ideals. Some simple examples of spinors in low dimensions arise from considering the even-graded subalgebras of the Clifford algebra Cℓ"p","q"(R). This is an algebra built up from an orthonormal basis of "n" = "p" + "q" mutually orthogonal vectors under addition and multiplication, "p" of which have norm +1 and "q" of which have norm −1, with the product rule for the basis vectors: ("e""i")² = +1 if "i" ≤ "p", ("e""i")² = −1 if "i" > "p", and "e""i""e""j" = −"e""j""e""i" when "i" ≠ "j". The Clifford algebra Cℓ2,0(R) is built up from a basis of one unit scalar, 1, two orthogonal unit vectors, "σ"1 and "σ"2, and one unit pseudoscalar "i" = "σ"1"σ"2. From the definitions above, it is evident that ("σ"1)² = ("σ"2)² = 1, and ("σ"1"σ"2)² = −1. The even subalgebra Cℓ02,0(R), spanned by "even-graded" basis elements of Cℓ2,0(R), determines the space of spinors via its representations. It is made up of real linear combinations of 1 and "σ"1"σ"2. As a real algebra, Cℓ02,0(R) is isomorphic to the field of complex numbers C.
As a result, it admits a conjugation operation (analogous to complex conjugation), sometimes called the "reverse" of a Clifford element, defined by ("a" + "b""σ"1"σ"2)∗ = "a" + "b""σ"2"σ"1, which, by the Clifford relations, can be written ("a" + "b""σ"1"σ"2)∗ = "a" − "b""σ"1"σ"2. The action of an even Clifford element "γ" on vectors, regarded as 1-graded elements of Cℓ2,0(R), is determined by mapping a general vector "u" = "a"1"σ"1 + "a"2"σ"2 to the vector "γ"("u") = "γ""u""γ"∗, where "γ"∗ is the conjugate of "γ", and the product is Clifford multiplication. In this situation, a spinor is an ordinary complex number. The action of "γ" on a spinor "φ" is given by ordinary complex multiplication: "γ"("φ") = "γφ". An important feature of this definition is the distinction between ordinary vectors and spinors, manifested in how the even-graded elements act on each of them in different ways. In general, a quick check of the Clifford relations reveals that even-graded elements conjugate-commute with ordinary vectors: "γ"("u") = "γ""u""γ"∗ = "γ"²"u". On the other hand, comparing with the action on spinors "γ"("φ") = "γφ", "γ" on ordinary vectors acts as the "square" of its action on spinors. Consider, for example, the implication this has for plane rotations. Rotating a vector through an angle of "θ" corresponds to "γ"² = exp("θ""σ"1"σ"2), so that the corresponding action on spinors is via "γ" = ± exp("θ""σ"1"σ"2/2). In general, because of logarithmic branching, it is impossible to choose a sign in a consistent way. Thus the representation of plane rotations on spinors is two-valued. In applications of spinors in two dimensions, it is common to exploit the fact that the algebra of even-graded elements (that is just the ring of complex numbers) is identical to the space of spinors. So, by abuse of language, the two are often conflated. One may then talk about "the action of a spinor on a vector." In a general setting, such statements are meaningless. But in dimensions 2 and 3 (as applied, for example, to computer graphics) they make sense. The Clifford algebra Cℓ3,0(R) is built up from a basis of one unit scalar, 1, three orthogonal unit vectors, "σ"1, "σ"2 and "σ"3, the three unit bivectors "σ"1"σ"2, "σ"2"σ"3, "σ"3"σ"1 and the pseudoscalar "i" = "σ"1"σ"2"σ"3. It is straightforward to show that ("σ"1)² = ("σ"2)² = ("σ"3)² = 1, and ("σ"1"σ"2)² = ("σ"2"σ"3)² = ("σ"3"σ"1)² = ("σ"1"σ"2"σ"3)² = −1. The sub-algebra of even-graded elements is made up of scalar dilations, "u"′ = "ρ""u", and vector rotations, "u"′ = "γ""u""γ"∗, where (1) "γ" = cos("θ"/2) − ("a"1"σ"2"σ"3 + "a"2"σ"3"σ"1 + "a"3"σ"1"σ"2) sin("θ"/2) corresponds to a vector rotation through an angle "θ" about an axis defined by a unit vector "v" = "a"1"σ"1 + "a"2"σ"2 + "a"3"σ"3. As a special case, it is easy to see that, if "v" = "σ"3, this reproduces the "σ"1"σ"2 rotation considered in the previous section; and that such rotation leaves the coefficients of vectors in the "σ"3 direction invariant, since "γ""σ"3"γ"∗ = "σ"3. The bivectors "σ"2"σ"3, "σ"3"σ"1 and "σ"1"σ"2 are in fact Hamilton's quaternions i, j, and k, discovered in 1843: i = −"σ"2"σ"3, j = −"σ"3"σ"1, k = −"σ"1"σ"2. With the identification of the even-graded elements with the algebra of quaternions, as in the case of two dimensions the only representation of the algebra of even-graded elements is on itself. Thus the (real) spinors in three dimensions are quaternions, and the action of an even-graded element on a spinor is given by ordinary quaternionic multiplication. Note that in the expression (1) for a vector rotation through an angle "θ", the angle appearing in "γ" was halved. Thus the spinor rotation (ordinary quaternionic multiplication) will rotate the spinor through an angle one-half the measure of the angle of the corresponding vector rotation. Once again, the problem of lifting a vector rotation to a spinor rotation is two-valued: the expression (1) with (180° + "θ"/2) in place of "θ"/2 will produce the same vector rotation, but the negative of the spinor rotation.
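The half-angle behavior and the two-valuedness can be demonstrated numerically. The sketch below (illustrative only) rotates a 3D vector with a unit quaternion q = cos(θ/2) + sin(θ/2)(axis components as i, j, k) via v ↦ q v q∗, and shows that q and −q give the same vector rotation.

    import numpy as np

    def q_mul(a, b):
        """Hamilton product of quaternions stored as (w, x, y, z)."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def q_conj(q):
        return q * np.array([1.0, -1.0, -1.0, -1.0])

    def rotate(q, v):
        """Rotate vector v by unit quaternion q: v -> q v q*."""
        qv = np.concatenate(([0.0], v))
        return q_mul(q_mul(q, qv), q_conj(q))[1:]

    theta = np.pi / 2                         # 90-degree rotation about the z-axis
    axis = np.array([0.0, 0.0, 1.0])
    q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))  # half angle

    v = np.array([1.0, 0.0, 0.0])
    print(rotate(q, v))     # ~ [0, 1, 0]
    print(rotate(-q, v))    # same result: q and -q give the same vector rotation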
The spinor/quaternion representation of rotations in 3D is becoming increasingly prevalent in computer geometry and other applications, because of the notable brevity of the corresponding spin matrix, and the simplicity with which they can be multiplied together to calculate the combined effect of successive rotations about different axes. A space of spinors can be constructed explicitly with concrete and abstract constructions. The equivalence of these constructions is a consequence of the uniqueness of the spinor representation of the complex Clifford algebra. For a complete example in dimension 3, see spinors in three dimensions. Given a vector space "V" and a quadratic form "g", an explicit matrix representation of the Clifford algebra can be defined as follows. Choose an orthonormal basis "e"1, …, "e""n" for "V", i.e. "g"("e""μ", "e""ν") = "η""μν" where "η""μμ" = ±1 and "η""μν" = 0 for "μ" ≠ "ν". Let "k" = ⌊"n"/2⌋. Fix a set of 2^"k" × 2^"k" matrices "γ"1, …, "γ""n" such that "γ""μ""γ""ν" + "γ""ν""γ""μ" = 2"η""μν"1 (i.e. fix a convention for the gamma matrices). Then the assignment "e""μ" ↦ "γ""μ" extends uniquely to an algebra homomorphism by sending the monomial "e""μ"1 ⋯ "e""μ""p" in the Clifford algebra to the product "γ""μ"1 ⋯ "γ""μ""p" of matrices and extending linearly. The space on which the gamma matrices act is now a space of spinors. One needs to construct such matrices explicitly, however. In dimension 3, defining the gamma matrices to be the Pauli sigma matrices gives rise to the familiar two-component spinors used in non-relativistic quantum mechanics. Likewise using the Dirac gamma matrices gives rise to the four-component Dirac spinors used in 3+1 dimensional relativistic quantum field theory. In general, in order to define gamma matrices of the required kind, one can use the Weyl–Brauer matrices. In this construction the representation of the Clifford algebra Cℓ("V", "g"), the Lie algebra so("V", "g"), and the spin group Spin("V", "g") all depend on the choice of the orthonormal basis and the choice of the gamma matrices. This can cause confusion over conventions, but invariants like traces are independent of choices. In particular, all physically observable quantities must be independent of such choices. In this construction a spinor can be represented as a vector of 2^"k" complex numbers and is denoted with spinor indices (usually "α", "β", "γ"). In the physics literature, abstract spinor indices are often used to denote spinors even when an abstract spinor construction is used. There are at least two different, but essentially equivalent, ways to define spinors abstractly. One approach seeks to identify the minimal ideals for the left action of Cℓ("V", "g") on itself. These are subspaces of the Clifford algebra of the form Cℓ("V", "g")"ω", admitting the evident action of Cℓ("V", "g") by left-multiplication: "c" : "xω" ↦ "cxω". There are two variations on this theme: one can either find a primitive element "ω" that is a nilpotent element of the Clifford algebra, or one that is an idempotent. The construction via nilpotent elements is more fundamental in the sense that an idempotent may then be produced from it. In this way, the spinor representations are identified with certain subspaces of the Clifford algebra itself. The second approach is to construct a vector space using a distinguished subspace of "V", and then specify the action of the Clifford algebra "externally" to that vector space. In either approach, the fundamental notion is that of an isotropic subspace "W". Each construction depends on an initial freedom in choosing this subspace. In physical terms, this corresponds to the fact that there is no measurement protocol that can specify a basis of the spin space, even if a preferred basis of "V" is given.
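Gamma matrices of the required kind can be produced in any even dimension by tensor products of Pauli matrices. The following is a sketch in the spirit of the Weyl–Brauer construction (an illustration assuming Euclidean signature, where all η entries are +1; not the article's own derivation), verifying the Clifford relations for n = 6.

    import numpy as np
    from functools import reduce

    I2 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def kron_all(mats):
        return reduce(np.kron, mats)

    def euclidean_gammas(k):
        """2k gamma matrices of size 2^k x 2^k:
        gamma_(2j)   = sz x ... x sz x sx x I x ... x I
        gamma_(2j+1) = sz x ... x sz x sy x I x ... x I
        The sz "string" makes gammas at different slots anticommute."""
        gammas = []
        for j in range(k):
            prefix, suffix = [sz] * j, [I2] * (k - j - 1)
            gammas.append(kron_all(prefix + [sx] + suffix))
            gammas.append(kron_all(prefix + [sy] + suffix))
        return gammas

    k = 3
    g = euclidean_gammas(k)       # n = 6 gammas acting on spinors in C^(2^3)
    for mu in range(len(g)):
        for nu in range(len(g)):
            anti = g[mu] @ g[nu] + g[nu] @ g[mu]
            assert np.allclose(anti, 2 * (mu == nu) * np.eye(2**k))
    print("Clifford relations verified for n =", len(g))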
As above, we let "V" be an "n"-dimensional complex vector space equipped with a nondegenerate bilinear form "g". If "V" is a real vector space, then we replace "V" by its complexification and let "g" denote the induced bilinear form on the complexification. Let "W" be a maximal isotropic subspace, i.e. a maximal subspace of "V" such that "g" vanishes identically on "W". If "n" = 2"k" is even, then let "W"′ be an isotropic subspace complementary to "W". If "n" = 2"k" + 1 is odd, let "W"′ be a maximal isotropic subspace with "W" ∩ "W"′ = 0, and let "U" be the orthogonal complement of "W" ⊕ "W"′. In both the even- and odd-dimensional cases "W" and "W"′ have dimension "k". In the odd-dimensional case, "U" is one-dimensional, spanned by a unit vector "u". Since "W" is isotropic, multiplication of elements of "W" inside the Clifford algebra is skew. Hence vectors in "W" anti-commute, and the subalgebra they generate is just the exterior algebra Λ∗"W". Consequently, the "k"-fold product of "W" with itself, "W""k", is one-dimensional. Let "ω" be a generator of "W""k". In terms of a basis "w"1, …, "w""k" of "W", one possibility is to set "ω" = "w"1"w"2 ⋯ "w""k". Note that "ω"² = 0 (i.e., "ω" is nilpotent of order 2), and moreover, "wω" = 0 for all "w" ∈ "W". It can be proven easily that Cℓ("V", "g")"ω" is a minimal left ideal. In detail, suppose for instance that "n" is even. Suppose that "I" is a non-zero left ideal contained in Cℓ("V", "g")"ω". We shall show that "I" must be equal to Cℓ("V", "g")"ω" by proving that it contains a nonzero scalar multiple of "ω". Fix a basis "w""i" of "W" and a complementary basis "w""i"′ of "W"′ so that "w""i""w""j"′ + "w""j"′"w""i" = "δ""ij" and ("w""i")² = ("w""i"′)² = 0. Note that any element of "I" must have the form "αω", by virtue of our assumption that "I" ⊂ Cℓ("V", "g")"ω". Let "αω" be any such element. Using the chosen basis, we may write "α" as a sum of monomials "a""i"1…"i""p" "w""i"1′ ⋯ "w""i""p"′ plus terms of the form "B""j""w""j", where the "a""i"1…"i""p" are scalars, and the "B""j" are auxiliary elements of the Clifford algebra. Observe now that in the product "αω" every term containing a factor "w""j" is killed, since "w""j""ω" = 0. Pick any nonzero monomial "a" in the expansion of "α" with maximal homogeneous degree in the elements "w""i"′: then left-multiplying "αω" by the reversed product "w""i""p" ⋯ "w""i"1 yields a nonzero scalar multiple of "ω", as required. Note that for "n" even, this computation also shows that Cℓ("V", "g")"ω" = (Λ∗"W"′)"ω" as a vector space. In the last equality we again used that "W" is isotropic. In physics terms, this shows that Δ is built up like a Fock space by creating spinors using anti-commuting creation operators acting on a vacuum "ω". The computations with the minimal ideal construction suggest that a spinor representation can also be defined directly using the exterior algebra of the isotropic subspace "W". Let Δ = Λ∗"W" denote the exterior algebra of "W" considered as vector space only. This will be the spin representation, and its elements will be referred to as spinors. The action of the Clifford algebra on Δ is defined first by giving the action of an element of "V" on Δ, and then showing that this action respects the Clifford relation and so extends to a homomorphism of the full Clifford algebra into the endomorphism ring End(Δ) by the universal property of Clifford algebras. The details differ slightly according to whether the dimension of "V" is even or odd. When dim("V") is even, "V" = "W" ⊕ "W"′, where "W"′ is the chosen isotropic complement. Hence any "v" ∈ "V" decomposes uniquely as "v" = "w" + "w"′ with "w" ∈ "W" and "w"′ ∈ "W"′. The action of "v" on a spinor "ψ" is given by "c"("v")"ψ" = √2("ε"("w") + "i"("w"′))"ψ", where "i"("w"′) is interior product with "w"′ using the nondegenerate quadratic form to identify "V" with "V"∗, and "ε"("w") denotes the exterior product. It may be verified that "c"("u")"c"("v") + "c"("v")"c"("u") = 2"g"("u", "v"), and so "c" respects the Clifford relations and extends to a homomorphism from the Clifford algebra to End(Δ). The spin representation Δ further decomposes into a pair of irreducible complex representations of the Spin group (the half-spin representations, or Weyl spinors) via the even and odd parts, Δ+ = Λeven"W" and Δ− = Λodd"W". When dim("V") is odd, "V" = "W" ⊕ "U" ⊕ "W"′, where "U" is spanned by a unit vector "u" orthogonal to "W".
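Before turning to the odd-dimensional case, the even-dimensional Fock-space picture can be made concrete. The sketch below (illustrative only; the Jordan-Wigner matrices and the normalization are my assumptions, chosen to match the anticommutation relation used above) realizes annihilation and creation operators as matrices and spot-checks the Clifford action c(v) = √2(ε(w) + i(w′)).

    import numpy as np
    from functools import reduce

    I2 = np.eye(2)
    sz = np.diag([1.0, -1.0])
    a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator

    def kron_all(mats):
        return reduce(np.kron, mats)

    def annihilator(j, k):
        """Jordan-Wigner: a_j = sz x ... x sz (j factors) x a x I x ... x I."""
        return kron_all([sz] * j + [a] + [I2] * (k - j - 1))

    k = 3                                     # dim W = 3; spinor space C^(2^3)
    ann = [annihilator(j, k) for j in range(k)]
    cre = [A.T for A in ann]                  # creation operators (adjoints)

    # Canonical anticommutation relations: {a_i, a_j} = 0, {a_i, a_j+} = delta_ij
    for i in range(k):
        for j in range(k):
            assert np.allclose(ann[i] @ ann[j] + ann[j] @ ann[i], 0)
            acr = ann[i] @ cre[j] + cre[j] @ ann[i]
            assert np.allclose(acr, (i == j) * np.eye(2**k))

    def clifford_action(w, w_prime):
        """c(v) = sqrt(2) * (exterior multiplication by w + interior product by w')."""
        ext = sum(wc * C for wc, C in zip(w, cre))
        intr = sum(pc * A for pc, A in zip(w_prime, ann))
        return np.sqrt(2) * (ext + intr)

    cv = clifford_action([1.0, 0, 0], [1.0, 0, 0])   # v = w_1 + w_1'
    assert np.allclose(cv @ cv, 2 * np.eye(2**k))    # c(v)^2 = 2 g(w_1, w_1') id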
The Clifford action "c" is defined as before on , while the Clifford action of (multiples of) "u" is defined by As before, one verifies that "c" respects the Clifford relations, and so induces a homomorphism. If the vector space "V" has extra structure that provides a decomposition of its complexification into two maximal isotropic subspaces, then the definition of spinors (by either method) becomes natural. The main example is the case that the real vector space "V" is a hermitian vector space , i.e., "V" is equipped with a complex structure "J" that is an orthogonal transformation with respect to the inner product "g" on "V". Then splits in the ±"i" eigenspaces of "J". These eigenspaces are isotropic for the complexification of "g" and can be identified with the complex vector space and its complex conjugate . Therefore, for a hermitian vector space the vector space Λ (as well as its complex conjugate Λ"V") is a spinor space for the underlying real euclidean vector space. With the Clifford action as above but with contraction using the hermitian form, this construction gives a spinor space at every point of an almost Hermitian manifold and is the reason why every almost complex manifold (in particular every symplectic manifold) has a Spinc structure. Likewise, every complex vector bundle on a manifold carries a Spinc structure. A number of Clebsch–Gordan decompositions are possible on the tensor product of one spin representation with another. These decompositions express the tensor product in terms of the alternating representations of the orthogonal group. For the real or complex case, the alternating representations are In addition, for the real orthogonal groups, there are three characters (one-dimensional representations) The Clebsch–Gordan decomposition allows one to define, among other things: If is even, then the tensor product of Δ with the contragredient representation decomposes as which can be seen explicitly by considering (in the Explicit construction) the action of the Clifford algebra on decomposable elements . The rightmost formulation follows from the transformation properties of the Hodge star operator. Note that on restriction to the even Clifford algebra, the paired summands are isomorphic, but under the full Clifford algebra they are not. There is a natural identification of Δ with its contragredient representation via the conjugation in the Clifford algebra: So also decomposes in the above manner. Furthermore, under the even Clifford algebra, the half-spin representations decompose For the complex representations of the real Clifford algebras, the associated reality structure on the complex Clifford algebra descends to the space of spinors (via the explicit construction in terms of minimal ideals, for instance). In this way, we obtain the complex conjugate of the representation Δ, and the following isomorphism is seen to hold: In particular, note that the representation Δ of the orthochronous spin group is a unitary representation. In general, there are Clebsch–Gordan decompositions In metric signature , the following isomorphisms hold for the conjugate half-spin representations Using these isomorphisms, one can deduce analogous decompositions for the tensor products of the half-spin representations . If is odd, then In the real case, once again the isomorphism holds Hence there is a Clebsch–Gordan decomposition (again using the Hodge star to dualize) given by There are many far-reaching consequences of the Clebsch–Gordan decompositions of the spinor spaces. 
The most fundamental of these pertain to Dirac's theory of the electron, among whose basic requirements are bilinear observables, such as the Dirac current, built from such tensor products of spinor representations.
https://en.wikipedia.org/wiki?curid=29276
Safety engineering Safety engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail. Analysis techniques can be split into two categories: qualitative and quantitative methods. Both approaches share the goal of finding causal dependencies between a hazard on system level and failures of individual components. Qualitative approaches focus on the question "What must go wrong, such that a system hazard may occur?", while quantitative methods aim at providing estimations about probabilities, rates and/or severity of consequences. Risk-reduction measures such as improved design and materials, planned inspections, fool-proof design, and backup redundancy decrease risk but add to the complexity and cost of the technical system. The risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels. Traditionally, safety analysis techniques rely solely on the skill and expertise of the safety engineer. In the last decade model-based approaches have become prominent. In contrast to traditional methods, model-based techniques try to derive relationships between causes and consequences from some sort of model of the system. The two most common fault modeling techniques are called failure mode and effects analysis and fault tree analysis. These techniques are just ways of finding problems and of making plans to cope with failures, as in probabilistic risk assessment. One of the earliest complete studies using this technique on a commercial nuclear plant was the WASH-1400 study, also known as the Reactor Safety Study or the Rasmussen Report. Failure Mode and Effects Analysis (FMEA) is a bottom-up, inductive analytical method which may be performed at either the functional or piece-part level. For functional FMEA, failure modes are identified for each function in a system or equipment item, usually with the help of a functional block diagram. For piece-part FMEA, failure modes are identified for each piece-part component (such as a valve, connector, resistor, or diode). The effects of the failure mode are described, and assigned a probability based on the failure rate and failure mode ratio of the function or component. This quantification is difficult for software: a bug either exists or does not, and the failure models used for hardware components do not apply. Temperature, age, and manufacturing variability affect a resistor; they do not affect software. Failure modes with identical effects can be combined and summarized in a Failure Mode Effects Summary. When combined with criticality analysis, FMEA is known as Failure Mode, Effects, and Criticality Analysis or FMECA, pronounced "fuh-MEE-kuh". Fault tree analysis (FTA) is a top-down, deductive analytical method. In FTA, initiating primary events such as component failures, human errors, and external events are traced through Boolean logic gates to an undesired top event such as an aircraft crash or nuclear reactor core melt. The intent is to identify ways to make top events less probable, and verify that safety goals have been achieved. Fault trees are a logical inverse of success trees, and may be obtained by applying de Morgan's theorem to success trees (which are directly related to reliability block diagrams). FTA may be qualitative or quantitative.
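Both sides of the analysis can be illustrated with a small sketch (illustrative only; the event names, gate layout, and probabilities are hypothetical and not from any real study or tool). Quantitatively, assuming independent events, OR gates combine as 1 − Π(1 − p) and AND gates as Π p; qualitatively, one enumerates minimal cut sets, the smallest combinations of basic events that trigger the top event.

    from itertools import product
    from functools import reduce

    # A fault tree: gates are ('AND', children) or ('OR', children);
    # leaves are basic events with hypothetical hourly failure probabilities.
    probs = {'power_loss': 1e-4, 'pump_failure': 1e-3,
             'valve1_failure': 1e-2, 'valve2_failure': 1e-2}
    tree = ('OR', ['power_loss',
                   ('AND', ['pump_failure', 'valve1_failure', 'valve2_failure'])])

    def top_probability(node):
        """Quantitative FTA: top event probability, assuming independence."""
        if isinstance(node, str):
            return probs[node]
        kind, children = node
        ps = [top_probability(c) for c in children]
        if kind == 'AND':
            return reduce(lambda x, y: x * y, ps)
        return 1 - reduce(lambda acc, p: acc * (1 - p), ps, 1.0)

    def cut_sets(node):
        """Qualitative FTA: cut sets as frozensets of basic events."""
        if isinstance(node, str):
            return [frozenset([node])]
        kind, children = node
        child_sets = [cut_sets(c) for c in children]
        if kind == 'OR':
            return [cs for sets in child_sets for cs in sets]
        # AND: a cut set must combine one cut set from every child
        return [frozenset().union(*combo) for combo in product(*child_sets)]

    all_sets = cut_sets(tree)
    minimal = [s for s in all_sets if not any(t < s for t in all_sets)]
    print(f"top event probability ~ {top_probability(tree):.1e} per hour")
    print("minimal cut sets:", [sorted(s) for s in minimal])
    # A singleton cut set ({'power_loss'}) flags a single point of failure.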
When failure and event probabilities are unknown, qualitative fault trees may be analyzed for minimal cut sets. For example, if any minimal cut set contains a single base event, then the top event may be caused by a single failure. Quantitative FTA is used to compute top event probability, and usually requires computer software such as CAFTA from the Electric Power Research Institute or SAPHIRE from the Idaho National Laboratory. Some industries use both fault trees and event trees. An event tree starts from an undesired initiator (loss of critical supply, component failure etc.) and follows possible further system events through to a series of final consequences. As each new event is considered, a new node on the tree is added with a split of probabilities of taking either branch. The probabilities of a range of "top events" arising from the initial event can then be seen. The offshore oil and gas industry uses a qualitative safety systems analysis technique to ensure the protection of offshore production systems and platforms. The analysis is used during the design phase to identify process engineering hazards together with risk mitigation measures. The methodology is described in the American Petroleum Institute Recommended Practice 14C "Analysis, Design, Installation, and Testing of Basic Surface Safety Systems for Offshore Production Platforms." The technique uses system analysis methods to determine the safety requirements to protect any individual process component, e.g. a vessel, pipeline, or pump. The safety requirements of individual components are integrated into a complete platform safety system, including liquid containment and emergency support systems such as fire and gas detection. The first stage of the analysis identifies individual process components; these can include flowlines, headers, pressure vessels, atmospheric vessels, fired heaters, exhaust heated components, pumps, compressors, pipelines and heat exchangers. Each component is subject to a safety analysis to identify undesirable events (equipment failure, process upsets, etc.) for which protection must be provided. The analysis also identifies a detectable condition (e.g. high pressure) which is used to initiate actions to prevent or minimize the effect of undesirable events. A Safety Analysis Table (SAT) for pressure vessels lists each undesirable event (e.g. over-pressure) together with its associated causes and detectable conditions. Other undesirable events for a pressure vessel are under-pressure, gas blowby, leak, and excess temperature, together with their associated causes and detectable conditions. Once the events, causes and detectable conditions have been identified, the next stage of the methodology uses a Safety Analysis Checklist (SAC) for each component. This lists the safety devices that may be required or factors that negate the need for such a device. For example, for the case of liquid overflow from a vessel (as above), the SAC identifies the applicable protective devices. The analysis ensures that two levels of protection are provided to mitigate each undesirable event. For example, for a pressure vessel subjected to over-pressure, the primary protection would be a PSH (pressure switch high) to shut off inflow to the vessel; secondary protection would be provided by a pressure safety valve (PSV) on the vessel. The next stage of the analysis relates all the sensing devices, shutdown valves (ESVs), trip systems and emergency support systems in the form of a Safety Analysis Function Evaluation (SAFE) chart, in which an X denotes that the detection device on the left (e.g. PSH) initiates the shutdown or warning action listed along the top (e.g. ESV closure).
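In code, a SAFE chart is naturally a sparse mapping from (component, sensing device) pairs to end actions. A minimal sketch (the vessel, device, and valve tags below are hypothetical, invented for illustration; this is not the format prescribed by API RP 14C):

    # Each "X" in a SAFE chart links a detection device to the action it initiates.
    safe_chart = {
        ("V-100", "PSH"): ["close ESV-101"],           # over-pressure: shut off inflow
        ("V-100", "PSV"): ["relieve to flare"],        # secondary pressure protection
        ("V-100", "LSH"): ["close ESV-101", "alarm"],  # liquid overflow
    }

    # Check the "two levels of protection" rule for the over-pressure event.
    pressure_devices = [d for (c, d) in safe_chart
                        if c == "V-100" and d in ("PSH", "PSV")]
    assert len(pressure_devices) >= 2, "over-pressure on V-100 lacks redundancy"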
The SAFE chart constitutes the basis of cause and effect charts, which relate the sensing devices to shutdown valves and plant trips, and which define the functional architecture of the process shutdown system. The methodology also specifies the systems testing that is necessary to ensure the functionality of the protection systems. API RP 14C was first published in June 1974. The 8th edition was published in February 2017. API RP 14C was adapted as ISO standard ISO 10418 in 1993, entitled "Petroleum and natural gas industries — Offshore production installations — Analysis, design, installation and testing of basic surface process safety systems." The latest, 2003, edition of ISO 10418 is currently (2019) undergoing revision. Typically, safety guidelines prescribe a set of steps, deliverable documents, and exit criteria focused on planning, analysis and design, implementation, verification and validation, configuration management, and quality assurance activities for the development of a safety-critical system. In addition, they typically formulate expectations regarding the creation and use of traceability in the project. For example, depending upon the criticality level of a requirement, the US Federal Aviation Administration guideline DO-178B/C requires traceability from requirements to design, and from requirements to source code and executable object code for software components of a system. Higher-quality traceability information can thereby simplify the certification process and help to establish trust in the maturity of the applied development process. Usually a failure in safety-certified systems is acceptable if, on average, less than one life per 10⁹ hours of continuous operation is lost to failure (as per FAA document AC 25.1309-1A). Most Western nuclear reactors, medical equipment, and commercial aircraft are certified to this level. The cost versus loss of lives has been considered appropriate at this level (by the FAA for aircraft systems under Federal Aviation Regulations). Once a failure mode is identified, it can usually be mitigated by adding extra or redundant equipment to the system. For example, nuclear reactors contain dangerous radiation, and nuclear reactions can cause so much heat that no substance could contain them. Therefore, reactors have emergency core cooling systems to keep the temperature down, shielding to contain the radiation, and engineered barriers (usually several, nested, surmounted by a containment building) to prevent accidental leakage. Safety-critical systems are commonly required to permit no single event or component failure to result in a catastrophic failure mode. Most biological organisms have a certain amount of redundancy: multiple organs, multiple limbs, etc. For any given failure, a fail-over or redundancy can almost always be designed and incorporated into a system. There are two categories of techniques to reduce the probability of failure: fault avoidance techniques increase the reliability of individual items (increased design margin, de-rating, etc.), while fault tolerance techniques increase the reliability of the system as a whole (redundancies, barriers, etc.). Safety engineering and reliability engineering have much in common, but safety is not reliability. If a medical device fails, it should fail safely; other alternatives will be available to the surgeon. If the engine on a single-engine aircraft fails, there is no backup.
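The arithmetic behind the redundancy argument is simple. A sketch under the crude assumptions of fully independent channels and one-hour exposure windows (the numbers are illustrative, not from any certification document):

    # One complex channel: MTBF of 100,000 hours ~ 1e-5 failures per hour.
    lam = 1e-5

    # Duplex redundancy: a catastrophic outcome requires both independent
    # channels to fail in the same hour.
    p_duplex = lam ** 2
    print(f"single channel: {lam:.0e}/h, duplex: {p_duplex:.0e}/h")  # 1e-05, 1e-10

On paper, two such channels comfortably beat the 10⁻⁹ per hour target that a single complex channel cannot reach.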
Electrical power grids are designed for both safety and reliability; telephone systems are designed for reliability, which becomes a safety issue when emergency (e.g. US "911") calls are placed. Probabilistic risk assessment has created a close relationship between safety and reliability. Component reliability, generally defined in terms of component failure rate, and external event probability are both used in quantitative safety assessment methods such as FTA. Related probabilistic methods are used to determine system Mean Time Between Failure (MTBF), system availability, or probability of mission success or failure. Reliability analysis has a broader scope than safety analysis, in that non-critical failures are considered. On the other hand, higher failure rates are considered acceptable for non-critical systems. Safety generally cannot be achieved through component reliability alone. Catastrophic failure probabilities of 10⁻⁹ per hour correspond to the failure rates of very simple components such as resistors or capacitors. A complex system containing hundreds or thousands of components might be able to achieve an MTBF of 10,000 to 100,000 hours, meaning it would fail at 10⁻⁴ or 10⁻⁵ per hour. If a system failure is catastrophic, usually the only practical way to achieve a 10⁻⁹ per hour failure rate is through redundancy. When adding equipment is impractical (usually because of expense), then the least expensive form of design is often "inherently fail-safe". That is, change the system design so its failure modes are not catastrophic. Inherent fail-safes are common in medical equipment, traffic and railway signals, communications equipment, and safety equipment. The typical approach is to arrange the system so that ordinary single failures cause the mechanism to shut down in a safe way (for nuclear power plants, this is termed a passively safe design, although more than ordinary failures are covered). Alternately, if the system contains a hazard source such as a battery or rotor, then it may be possible to remove the hazard from the system so that its failure modes cannot be catastrophic. The U.S. Department of Defense Standard Practice for System Safety (MIL–STD–882) places the highest priority on elimination of hazards through design selection. One of the most common fail-safe systems is the overflow tube in baths and kitchen sinks. If the valve sticks open, rather than causing an overflow and damage, the tank spills into an overflow. Another common example is that in an elevator the cable supporting the car keeps spring-loaded brakes open. If the cable breaks, the brakes grab rails, and the elevator cabin does not fall. Some systems can never be made fail-safe, as continuous availability is needed. For example, loss of engine thrust in flight is dangerous. Redundancy, fault tolerance, or recovery procedures are used for these situations (e.g. multiple independently controlled and fuel-fed engines). This also makes the system less sensitive to reliability prediction errors or quality-induced uncertainty in the separate items. On the other hand, failure detection and correction, and the avoidance of common cause failures, become increasingly important here to ensure system-level reliability.
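Why common-cause failures dominate such estimates can also be seen numerically. A sketch in the style of a simple beta-factor model (the coupling fraction is hypothetical, for illustration only):

    lam = 1e-5       # per-channel failure rate, as in the duplex example above
    beta = 0.01      # hypothetical fraction of failures that strike both channels

    # Coupled failures bypass the redundancy entirely and dominate the total.
    p_system = beta * lam + ((1 - beta) * lam) ** 2
    print(f"with 1% common cause: {p_system:.1e}/h")   # ~ 1.0e-07/h, not 1e-10/h

Even a one-percent coupling between "redundant" channels erases three orders of magnitude of the theoretical benefit, which is why diversity and physical separation matter as much as duplication.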
https://en.wikipedia.org/wiki?curid=29278
SIGGRAPH SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) is an annual conference on computer graphics (CG) organized by the ACM SIGGRAPH, starting in 1974. The main conference is held in North America; SIGGRAPH Asia, a second yearly conference, has been held since 2008 in countries throughout Asia. The conference incorporates both academic presentations and an industry trade show. Other events at the conference include educational courses and panel discussions on recent topics in computer graphics and interactive techniques. The SIGGRAPH conference proceedings, which are published in the ACM Transactions on Graphics, have one of the highest impact factors among academic publications in the field of computer graphics. The paper acceptance rate for SIGGRAPH has historically been between 17% and 29%, with an average acceptance rate of 27% between 2015 and 2019. The submitted papers are peer-reviewed under a process that was historically single-blind, but has recently changed to double-blind. Since 2003, the papers accepted for presentation at SIGGRAPH have been printed in a special issue of the "ACM Transactions on Graphics" journal. Prior to 1992, SIGGRAPH papers were printed as part of the "Computer Graphics" publication; between 1993 and 2001, there was a dedicated "SIGGRAPH Conference Proceedings" series of publications. SIGGRAPH has several awards programs to recognize contributions to computer graphics. The most prestigious is the Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics. It has been awarded every two years since 1983 to recognize an individual's lifetime achievement in computer graphics. The SIGGRAPH conference experienced significant growth starting in the 1970s, peaking around the turn of the century.
https://en.wikipedia.org/wiki?curid=29279
Lehi (militant group) Lehi (; "Lohamei Herut Israel – Lehi", "Fighters for the Freedom of Israel – Lehi"), often known pejoratively as the Stern Gang, was a Zionist paramilitary organization founded by Avraham ("Yair") Stern in Mandatory Palestine. Its avowed aim was to evict the British authorities from Palestine by resort to force, allowing unrestricted immigration of Jews and the formation of a Jewish state, a "new totalitarian Hebrew republic". It was initially called the National Military Organization in Israel, upon being founded in August 1940, but was renamed Lehi one month later. The group referred to its members as terrorists and admitted to having carried out terrorist attacks. Lehi split from the Irgun militant group in 1940 in order to continue fighting the British during World War II. Lehi initially sought an alliance with Fascist Italy and Nazi Germany, offering to fight alongside them against the British in return for the transfer of all Jews from Nazi-occupied Europe to Palestine. Believing that Nazi Germany was a lesser enemy of the Jews than Britain, Lehi twice attempted to form an alliance with the Nazis. During World War II, it declared that it would establish a Jewish state based upon "nationalist and totalitarian principles". After Stern's death in 1942, the new leadership of Lehi began to move it towards support for Joseph Stalin's Soviet Union. In 1944, Lehi officially declared its support for National Bolshevism, which it described as an amalgamation of left-wing and right-wing political elements (Stern had said Lehi incorporated elements of both the left and the right). However, this change was unpopular, and Lehi began to lose support as a result. Lehi and the Irgun were jointly responsible for the massacre in Deir Yassin. Lehi assassinated Lord Moyne, British Minister Resident in the Middle East, and made many other attacks on the British in Palestine. On 29 May 1948, the government of Israel, having inducted its activist members into the Israel Defense Forces, formally disbanded Lehi, though some of its members carried out one more terrorist act, the assassination of Folke Bernadotte some months later, an act condemned by Bernadotte's replacement as mediator, Ralph Bunche. After the assassination, the new Israeli government declared Lehi a terrorist organization, arresting some 200 members and convicting some of the leaders. Just before the first Israeli elections in January 1949, a general amnesty to Lehi members was granted by the government. In 1980, Israel instituted a military decoration, an "award for activity in the struggle for the establishment of Israel", the Lehi ribbon. Former Lehi leader Yitzhak Shamir became Prime Minister of Israel in 1983. Lehi was created in August 1940 by Avraham Stern. Stern had been a member of the Irgun ("Irgun Tsvai Leumi" – "National Military Organization") high command. Zeev Jabotinsky, then the Irgun's supreme commander, had decided that diplomacy and working with Britain would best serve the Zionist cause. World War II was in progress, and Britain was fighting Nazi Germany. The Irgun suspended its underground military activities against the British for the duration of the war. Stern argued that the time for Zionist diplomacy was over and that it was time for armed struggle against the British. Like other Zionists, he objected to the White Paper of 1939, which restricted both Jewish immigration and Jewish land purchases in Palestine.
For Stern, "no difference existed between Hitler and Chamberlain, between Dachau or Buchenwald and sealing the gates of Eretz Israel." Stern wanted to open Palestine to all Jewish refugees from Europe, and considered this as by far the most important issue of the day. Britain would not allow this. Therefore, he concluded, the "Yishuv" (Jews of Palestine) should fight the British rather than support them in the war. When the Irgun made a truce with the British, Stern left the Irgun to form his own group, which he called "Irgun Tsvai Leumi B'Yisrael" ("National Military Organization in Israel"), later "Lohamei Herut Israel" ("Fighters for the Freedom of Israel"). In September 1940, the organization was officially named "Lehi", the Hebrew acronym of the latter name. Stern and his followers believed that dying for the "foreign occupier" who was obstructing the creation of the Jewish State was useless. They differentiated between "enemies of the Jewish people" (the British) and "Jew haters" (the Nazis), believing that the former needed to be defeated and the latter manipulated. In 1940, the idea of the Final Solution was still "unthinkable", and Stern believed that Hitler wanted to make Germany "judenrein" through emigration, as opposed to extermination. In December 1940, Lehi even contacted Germany with a proposal to aid German conquest in the Middle East in return for recognition of a Jewish state open to unlimited immigration. Lehi had three main goals: Lehi believed in its early years that its goals would be achieved by finding a strong international ally that would expel the British from Palestine, in return for Jewish military help; this would require the creation of a broad and organised military force "demonstrating its desire for freedom through military operations." Lehi also referred to themselves as 'terrorists' and may have been one of the last organizations to do so. An article titled "Terror" in the Lehi underground newspaper "He Khazit" ("The Front") argued as follows: Neither Jewish ethics nor Jewish tradition can disqualify terrorism as a means of combat. We are very far from having any moral qualms as far as our national war goes. We have before us the command of the Torah, whose morality surpasses that of any other body of laws in the world: "Ye shall blot them out to the last man." But first and foremost, terrorism is for us a part of the political battle being conducted under the present circumstances, and it has a great part to play: speaking in a clear voice to the whole world, as well as to our wretched brethren outside this land, it proclaims our war against the occupier. We are particularly far from this sort of hesitation in regard to an enemy whose moral perversion is admitted by all. The article described the goals of terror: Yitzhak Shamir, one of the three leaders of Lehi after Avraham Stern's assassination, argued for the legitimacy of Lehi's actions: There are those who say that to kill [T.G.] Martin [a CID sergeant who had recognised Shamir in a lineup] is terrorism, but to attack an army camp is guerrilla warfare and to bomb civilians is professional warfare. But I think it is the same from the moral point of view. Is it better to drop an atomic bomb on a city than to kill a handful of persons? I don’t think so. But nobody says that President Truman was a terrorist. All the men we went for individually – Wilkin, Martin, MacMichael and others – were personally interested in succeeding in the fight against us. 
So it was more efficient and more moral to go for selected targets. In any case, it was the only way we could operate, because we were so small. For us it was not a question of the professional honor of a soldier, it was the question of an idea, an aim that had to be achieved. We were aiming at a political goal. There are many examples of what we did to be found in the Bible – Gideon and Samson, for instance. This had an influence on our thinking. And we also learned from the history of other peoples who fought for their freedom – the Russian and Irish revolutionaries, Giuseppe Garibaldi and Josip Broz Tito. Avraham Stern laid out the ideology of Lehi in the essay "18 Principles of Rebirth". Unlike the left-wing Haganah and right-wing Irgun, Lehi members were not a homogeneous collective with a single political, religious, or economic ideology. They were a combination of militants united by the goal of liberating the land of Israel from British rule. Most Lehi leaders defined their organization as an anti-imperialism movement and stated that their opposition to British colonial rule in Palestine was not based on a particular policy but rather on the presence of a foreign power over the homeland of the Jewish people. Avraham Stern defined the British Mandate as "foreign rule" regardless of British policies and took a radical position against such imperialism even if it were to be benevolent. In the early years of the state of Israel, Lehi veterans could be found supporting nearly all political parties, and some Lehi leaders founded a left-wing political party called the Fighters' List with Natan Yellin-Mor as its head. The party took part in the elections in January 1949 and won a single parliamentary seat. A number of Lehi veterans established the Semitic Action movement in 1956, which sought the creation of a regional federation encompassing Israel and its Arab neighbors on the basis of an anti-colonialist alliance with other indigenous inhabitants of the Middle East. Some writers have stated that Lehi's true goals were the creation of a totalitarian state. Perliger and Weinberg write that the organisation's ideology placed "its world view in the quasi-fascist radical Right, which is characterised by xenophobia, a national egotism that completely subordinates the individual to the needs of the nation, anti-liberalism, total denial of democracy and a highly centralised government." Perliger and Weinberg state that most Lehi members were admirers of the Italian Fascist movement. According to Kaplan and Penslar, Lehi's ideology was a mix of fascist and communist thought combined with racism and universalism. Others counter these claims. They note that when Lehi founder Avraham Stern went to study in fascist Italy, he refused to join the fascist organization for foreign students, even though members got large reductions in tuition. According to Yaacov Shavit, professor at the Department of Jewish History, Tel Aviv University, articles in publications by Lehi contained references to a Jewish "master race", contrasting the Jews with Arabs, who were seen as a "nation of slaves". Sasha Polakow-Suransky writes of Lehi: "Lehi was also unabashedly racist towards Arabs. Their publications described Jews as a master race and Arabs as a slave race." Lehi advocated mass expulsion of all Arabs from Palestine and Transjordan or even their physical annihilation. Many Lehi combatants had received military training. Some had attended the state military academy in Civitavecchia, in Fascist Italy.
Others received military training from instructors of the Polish Armed Forces in 1938–1939. This training was conducted in Trochenbrod (Zofiówka) in Wołyń Voivodeship, Podębin near Łódź, and the forests around Andrychów. They were taught how to use explosives. One of them reported later: "Poles treated terrorism as a science. We have mastered mathematical principles of demolishing constructions made of concrete, iron, wood, bricks and dirt." The group was initially unsuccessful. Early attempts to raise funds through criminal activities, including a bank robbery in Tel Aviv in 1940 and another robbery on 9 January 1942 in which Jewish passers-by were killed, brought about the temporary collapse of the group. An attempt to assassinate the head of the British secret police in Lod, in which three police personnel were killed (two Jewish and one British), elicited a severe response from the British and Jewish establishments, who collaborated against Lehi. Stern's group was seen as a terrorist organisation by the British authorities, who instructed the Defence Security Office (the colonial branch of MI5) to track down its leaders. In 1942, after being arrested, Stern was shot dead in disputed circumstances by Inspector Geoffrey J. Morton of the CID. The arrest of several other members led momentarily to the group's eclipse, until it was revived after the September 1942 escape of two of its leaders, Yitzhak Shamir and Eliyahu Giladi, aided by two other escapees, Natan Yellin-Mor (Friedman) and Israel Eldad (Sheib). (Giladi was later killed by Lehi under circumstances that remain mysterious.) Shamir's codename was "Michael", a reference to one of Shamir's heroes, Michael Collins. Lehi was guided by spiritual and philosophical leaders such as Uri Zvi Greenberg and Israel Eldad. After the killing of Giladi, the organization was led by a triumvirate of Eldad, Shamir, and Yellin-Mor. Lehi adopted a non-socialist platform of anti-imperialist ideology. It viewed the continued British rule of Palestine as a violation of the Mandate's provisions generally, and its restrictions on Jewish immigration to be an intolerable breach of international law. However, they also targeted Jews whom they regarded as traitors, and during the 1948 Arab-Israeli War they joined in operations with the Haganah and Irgun against Arab targets, for example Deir Yassin. According to a compilation by Nachman Ben-Yehuda, Lehi was responsible for 42 assassinations, more than twice as many as the Irgun and Haganah combined during the same period. Of those Lehi assassinations that Ben-Yehuda classified as political, more than half the victims were Jews. Lehi also rejected the authority of the Jewish Agency for Israel and related organizations, operating entirely on its own throughout nearly all of its existence. Lehi prisoners captured by the British generally refused to employ lawyers in their defense. The defendants would conduct their own defense, and would deny the right of the military court to try them, saying that in accordance with the Hague Convention they should be accorded the status of prisoners of war. For the same reason, Lehi prisoners refused to plead for amnesty, even when it was clear that this would have spared them the death penalty. Moshe Barazani, a Lehi member, and Meir Feinstein, an Irgun member, took their own lives in prison with a grenade smuggled inside an orange so the British could not hang them.
In mid-1940, Stern became convinced that the Italians were interested in the establishment of a fascist Jewish state in Palestine. He conducted what he believed were negotiations with the Italians via an intermediary, Moshe Rotstein, and drew up a document that became known as the "Jerusalem Agreement". In exchange for Italy's recognition of, and aid in obtaining, Jewish sovereignty over Palestine, Stern promised that Zionism would come under the aegis of Italian fascism, with Haifa as its base, and the Old City of Jerusalem under Vatican control, except for the Jewish quarter. In Heller's words, Stern's proposal would "turn the 'Kingdom of Israel' into a satellite of the Axis powers." However, the "intermediary" Rotstein was in fact an agent of the Irgun, conducting a sting operation under the direction of the Irgun intelligence leader in Haifa, Israel Pritzker, in cooperation with the British. Secret British documents about the affair were uncovered by historian Eldad Harouvi (now director of the Palmach Archives) and confirmed by former Irgun intelligence officer Yitzhak Berman. When Rotstein's role later became clear, Lehi sentenced him to death and assigned Yaacov Eliav to kill him, but the assassination never took place. However, Pritzker was killed by Lehi in 1943. Late in 1940, Lehi, having identified a common interest between the intentions of the new German order and Jewish national aspirations, proposed forming an alliance in World War II with Nazi Germany. The organization offered cooperation in the following terms: Lehi would rebel against the British, while Germany would recognize an independent Jewish state in Palestine/Eretz Israel, and all Jews leaving their homes in Europe, by their own will or because of government injunctions, could enter Palestine with no restriction of numbers. Lehi representative Naftali Lubenchik accordingly went to Beirut to meet the German official Werner Otto von Hentig. The Lehi documents outlined that its rule would be authoritarian and noted similarities between the organization and the Nazis. Israel Eldad, one of the leading members of Lehi, wrote of Hitler: "it is not Hitler who is the hater of the kingdom of Israel and the return to Zion, it is not Hitler who subjects us to the cruel fate of falling a second and a third time into Hitler's hands, but the British." Stern also proposed recruiting some 40,000 Jews from occupied Europe to invade Palestine with German support in order to oust the British. On 11 January 1941, Vice Admiral Ralf von der Marwitz, the German naval attaché in Turkey, filed a report (the "Ankara document") conveying an offer by Lehi to "actively take part in the war on Germany's side" in return for German support for "the establishment of the historic Jewish state on a national and totalitarian basis, bound by a treaty with the German Reich." According to Yellin-Mor: "Lubenchik did not take along any written memorandum for the German representatives. Had there been a need for one, he would have formulated it on the spot, since he was familiar with the episode of the Italian 'intermediary' and with the numerous drafts connected with it. Apparently one of von Hentig's secretaries noted down the essence of the proposal in his own words." According to Joseph Heller, "The memorandum arising from their conversation is an entirely authentic document, on which the stamp of the 'IZL in Israel' is clearly embossed."
Von der Marwitz delivered the offer, classified as secret, to the German ambassador in Turkey, and on 21 January 1941 it was sent to Berlin. There was never any response. A second attempt to contact the Nazis was made at the end of 1941, but it was even less successful: the emissary Yellin-Mor was arrested in Syria before he could carry out his mission. This proposed alliance with Nazi Germany cost Lehi and Stern much support. The Stern Gang also had links with, and support from, the Vichy France Sûreté's Lebanese offices. Even as the full scale of Nazi atrocities became more evident in 1943, Lehi refused to accept Hitler as the main foe (as opposed to Great Britain). As a group that never had more than a few hundred members, Lehi relied on audacious but small-scale operations to bring its message home. It adopted the tactics of groups such as the Socialist Revolutionaries and the Combat Organization of the Polish Socialist Party in Czarist Russia, and the Irish Republican Army. To this end, Lehi conducted small-scale operations such as individual assassinations of British officials (notable targets included Lord Moyne, CID detectives, and Jewish "collaborators") and random shootings of soldiers and police officers. Another strategy, adopted in 1946, was to send bombs in the mail to British politicians. Other actions included sabotaging infrastructure targets: bridges, railroads, telephone and telegraph lines, and oil refineries, as well as the use of vehicle bombs against British military, police, and administrative targets. Lehi financed its operations through private donations, extortion, and bank robbery. Its campaign of violence lasted from 1944 to 1948. Initially conducted together with the Irgun, the campaign included a six-month suspension to avoid being targeted by the Haganah during the Hunting Season; Lehi later operated jointly with the Haganah and Irgun under the Jewish Resistance Movement. After the Jewish Resistance Movement was dissolved, Lehi operated independently as part of the general Jewish insurgency in Palestine. On 6 November 1944, Lehi assassinated Lord Moyne, the British Minister Resident in the Middle East, in Cairo. Moyne was the highest-ranking British official in the region. Yitzhak Shamir later claimed that Moyne was assassinated because of his support for a Middle Eastern Arab federation and his anti-Semitic lectures, in which Arabs were held to be racially superior to Jews. The assassination rocked the British government and outraged Winston Churchill, the British Prime Minister. The two assassins, Eliahu Bet-Zouri and Eliahu Hakim, were captured and used their trial as a platform for their political propaganda. They were executed. In 1975 their bodies were returned to Israel and given a state funeral. In 1982, postage stamps were issued for 20 Olei Hagardom, including Bet-Zouri and Hakim, in a souvenir sheet called "Martyrs of the struggle for Israel's independence." On 25 April 1946, a Lehi unit attacked a car park in Tel Aviv occupied by the British 6th Airborne Division. Under a barrage of heavy covering fire, Lehi fighters broke into the car park, shot soldiers they encountered at close range, stole rifles from arms racks, laid mines to cover the retreat, and withdrew. Seven soldiers were killed in the attack, which caused widespread outrage among the British security forces in Palestine.
It resulted in retaliatory anti-Jewish violence by British troops, a punitive curfew on Tel Aviv's roads, and the closure of places of entertainment in the city by the British Army. On 12 January 1947, Lehi members drove a truckload of explosives into a British police station in Haifa, killing four and injuring 140, in what has been called "the world's first true truck bomb". Following the bombing of the British embassy in Rome in October 1946, a series of operations against targets in the United Kingdom was launched. On 7 March 1947, Lehi's only successful operation in Britain was carried out, when a Lehi bomb severely damaged the British Colonial Club, a London recreational facility for soldiers and students from Britain's colonies in Africa and the West Indies. On 15 April 1947, a bomb consisting of twenty-four sticks of explosives was planted in the Colonial Office, Whitehall. It failed to explode due to a fault in the timer. Five weeks later, on 22 May, five alleged Lehi members were arrested in Paris with bomb-making material, including explosives of the same type as those found in London. On 2 June, two Lehi members, Betty Knouth and Yaakov Levstein, were arrested crossing from Belgium to France. Envelopes addressed to British officials, containing detonators, batteries and a time fuse, were found in one of Knouth's suitcases. Knouth was sentenced to a year in prison, Levstein to eight months. The British security services identified Knouth as the person who had planted the bomb in the Colonial Office. Shortly after their arrest, 21 letter bombs addressed to senior British figures were intercepted. The letters had been posted in Italy, and the intended recipients included Bevin, Attlee, Churchill and Eden. Knouth was also known as Gilberte/Elizabeth Lazarus; Levstein was travelling as Jacob Elias, and his fingerprints connected him to the deaths of several Palestine policemen as well as to an attempt on the life of the British High Commissioner. In 1973, Margaret Truman wrote that letter bombs had also been posted to her father, U.S. President Harry S. Truman, in 1947. Former Lehi leader Yellin-Mor admitted that letter bombs had been sent to British targets but denied that any had been sent to Truman. Shortly after the 1947 publication of "The Last Days of Hitler", Lehi issued a death threat against the author, Hugh Trevor-Roper, for his portrayal of Hitler, feeling that Trevor-Roper had attempted to exonerate the German populace from responsibility. During the lead-up to the 1948 Arab–Israeli War, Lehi mined the Cairo–Haifa train several times. On 29 February 1948, Lehi mined the train north of Rehovot, killing 28 British soldiers and wounding 35. On 31 March, Lehi mined the train near Binyamina, killing 40 civilians and wounding 60. Shlomo Sand writes that, as a method of pressuring Arab villagers to abandon their settlements, Lehi planned a terror attack on Nablus and the Arab headquarters in the city; Lehi fighter Elisha Ibzov (Avraham Cohen) was captured with a truck filled with explosives on his way to the city. In response, Lehi fighters abducted four adult villagers and a youth from al-Sheikh Muwannis who had no connection to Ibzov's capture, and threatened to kill them. As rumours spread that the hostages had already been murdered, panic broke out among the villagers and the settlement was increasingly abandoned, despite the eventual release of the hostages. One of the most widely known acts of Lehi was the attack on the Palestinian Arab village of Deir Yassin.
In the months before the British evacuation from Palestine, the Arab League-sponsored Arab Liberation Army (ALA) occupied several strategic points along the road between Jerusalem and Tel Aviv, cutting off supplies to the Jewish part of Jerusalem. One of these points was Deir Yassin. By March 1948, the road was cut off and Jewish Jerusalem was under siege. The Haganah launched Operation Nachshon to break the siege. On 6 April, the Haganah attacked al-Qastal, a village two kilometers north of Deir Yassin, also overlooking the Jerusalem–Tel Aviv road. Then, on 9 April 1948, about 120 Lehi and Irgun fighters, acting in cooperation with the Haganah, attacked and captured Deir Yassin. The attack took place at night, the fighting was confused, and many civilian inhabitants of the village were killed. This action had great consequences for the war and has remained a cause célèbre for Palestinians ever since. Exactly what happened has never been established clearly. The Arab League reported a great massacre: 254 killed, with rape and lurid mutilations. Israeli investigations claimed that the actual number of dead was between 100 and 120 and that there were no mass rapes, but conceded that most of the dead were civilians and that some were killed deliberately. Lehi and the Irgun both denied an organized massacre. Accounts by Lehi veterans such as Ezra Yakhin note that many of the attackers were killed or wounded, assert that Arabs fired from every building, that Iraqi and Syrian soldiers were among the dead, and even that some Arab fighters dressed as women. However, Jewish authorities, including the Haganah, the Chief Rabbinate, the Jewish Agency, and David Ben-Gurion, also condemned the attack, lending credence to the charge of massacre. The Jewish Agency even sent a letter of condemnation, apology, and condolence to King Abdullah I of Jordan. Both the Arab reports and the Jewish responses had hidden motives: the Arab leaders wanted to encourage Palestinian Arabs to fight rather than surrender, to discredit the Zionists with international opinion, and to increase popular support in their countries for an invasion of Palestine, while the Jewish leaders wanted to discredit the Irgun and Lehi. Ironically, the Arab reports backfired in one respect: frightened Palestinian Arabs did not surrender, but did not fight either – they fled, allowing Israel to gain much territory with little fighting and without absorbing a large Arab population. Lehi similarly interpreted events at Deir Yassin as turning the tide of war in favor of the Jews. Lehi leader Israel Eldad later wrote in his memoirs of the underground period that "without Deir Yassin the State of Israel could never have been established". The Deir Yassin story did not much sway international opinion, but it did increase not only popular support for intervention but also pressure on Arab governments to intervene. Abdullah of Jordan was now compelled to join the invasion of Palestine after Israel's declaration of independence on 14 May. Although Lehi had stopped operating nationally after May 1948, the group continued to function in Jerusalem. On 17 September 1948, Lehi assassinated UN mediator Count Folke Bernadotte. The assassination was directed by Yehoshua Zettler and carried out by a four-man team led by Meshulam Makover; the fatal shots were fired by Yehoshua Cohen. The Security Council described the assassination as a "cowardly act which appears to have been committed by a criminal group of terrorists".
Three days after the assassination, the Israeli government passed the Ordinance to Prevent Terrorism and declared Lehi to be a terrorist organization. Many Lehi members were arrested, including the leaders Nathan Yellin-Mor and Matitiahu Schmulevitz, who were taken into custody on 29 September; Eldad and Shamir managed to escape arrest. Between 5 December 1948 and 25 January 1949, Yellin-Mor and Schmulevitz were tried in a military court on terrorism charges, accused of leadership of a terrorist organization. The prosecution raised the murder of Bernadotte, though they were not specifically charged with it. Senior officers of the IDF, including Yisrael Galili and David Shaltiel, told the court that Lehi had hindered, rather than assisted, the fight against the British and the Arabs. While the trial was in progress, some of the Lehi leadership founded a left-wing, USSR-leaning political party called the Fighters' List, with Yellin-Mor as its head and with Yellin-Mor and Schmulevitz heading its list of candidates. The party took part in the elections of January 1949 and won a single parliamentary seat, with only 1.2% of the vote. The trial verdict was handed down on 10 February, soon after the election: Yellin-Mor was sentenced to 8 years and Schmulevitz to 5 years imprisonment, but the court agreed to remit the sentences if the prisoners accepted a list of conditions, and the Provisional State Council soon announced a general amnesty for Lehi members, authorising their pardon and release. The party disbanded after several years and did not contest the 1951 elections. In 1956, some Lehi veterans established the Semitic Action movement, which sought the creation of a regional federation encompassing Israel and its Arab neighbors on the basis of an anti-colonialist alliance with other indigenous inhabitants of the Middle East. Not all Lehi alumni gave up political violence after independence: former members were involved in the activities of the Kingdom of Israel militant group, the 1957 assassination of Rudolf Kastner, and likely the 1952 attempted assassination of David-Zvi Pinkas. In 1980, Israel instituted the Lehi ribbon (red, black, grey, pale blue and white), which is awarded to former members of the Lehi underground who wished to carry it, "for military service towards the establishment of the State of Israel". The words and music of the song "Unknown Soldiers" (also translated "Anonymous Soldiers") were written by Avraham Stern in 1932, during the early days of the Irgun. It served as the Irgun's anthem until the split with Lehi in 1940, after which it became the Lehi anthem. A number of Lehi's members went on to play important roles in Israel's public life.
https://en.wikipedia.org/wiki?curid=29287
Server-side scripting Server-side scripting is a technique used in web development that involves employing scripts on a web server to produce a response customized for each user's (client's) request to the website. The alternative is for the web server itself to deliver a static web page. Scripts can be written in any of a number of server-side scripting languages that are available (see below). Server-side scripting is distinguished from client-side scripting, where embedded scripts, such as JavaScript, are run client-side in a web browser; the two techniques are often used together. Server-side scripting is often used to provide a customized interface for the user. These scripts may assemble client characteristics for use in customizing the response based on those characteristics, the user's requirements, access rights, etc. Server-side scripting also enables the website owner to hide the source code that generates the interface, whereas with client-side scripting the user has access to all the code received by the client. A downside to the use of server-side scripting is that the client needs to make further requests over the network to the server in order to show new information to the user via the web browser. These requests can slow down the experience for the user, place more load on the server, and prevent use of the application when the user is disconnected from the server. When the server serves data in a commonly used manner, for example according to the HTTP or FTP protocols, users may have their choice of a number of client programs (most modern web browsers can request and receive data using both of those protocols). In the case of more specialized applications, programmers may write their own server, client, and communications protocol, which can only be used with one another. Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be considered client-side operations. Netscape introduced an implementation of JavaScript for server-side scripting with Netscape Enterprise Server, first released in December 1994 (soon after releasing JavaScript for browsers). Server-side scripting was later used in early 1995 by Fred DuFresne while developing the first web site for Boston, MA television station WCVB. The technology is described in US patent 5835712. The patent was issued in 1998 and is now owned by Open Invention Network (OIN). In 2010 OIN named Fred DuFresne a "Distinguished Inventor" for his work on server-side scripting. Today, a variety of services use server-side scripting to deliver results back to a client as a paid or free service. An example is WolframAlpha, a computational knowledge engine that computes results outside the client's environment and returns the computed result to the client. A more commonly used service is Google's proprietary search engine, which searches millions of cached results related to the user-specified keyword and returns an ordered list of links back to the client. Apple's Siri application also employs server-side scripting outside of a web application: the application takes an input, computes a result, and returns the result to the client. In the early days of the web, server-side scripting was almost exclusively performed by using a combination of C programs, Perl scripts, and shell scripts using the Common Gateway Interface (CGI).
Those scripts were executed by the operating system, and the results were served back by the web server. Many modern web servers can directly execute scripting languages such as ASP, JSP, Perl, PHP and Ruby, either in the web server itself or via extension modules (e.g. mod_perl or mod_php) to the web server. For example, WebDNA includes its own embedded database system. Either form of scripting (i.e., CGI or direct execution) can be used to build up complex multi-page sites, but direct execution usually results in less overhead because of the lower number of calls to external interpreters. Dynamic websites sometimes use custom web application servers, such as GlassFish, Plack and Python's "Base HTTP Server" library, although some may not consider this to be server-side scripting. When using dynamic web-based scripting techniques, developers must have a keen understanding of the logical, temporal, and physical separation between the client and the server. For a user's action to trigger the execution of server-side code, a developer working with classic ASP, for example, must explicitly cause the user's browser to make a request back to the web server. Creating such interactions can easily consume much development time and lead to unreadable code. Server-side scripts are processed entirely by the server rather than by the client. When a client requests a page containing server-side scripts, the applicable server processes the scripts and returns an HTML page to the client. There are a number of server-side scripting languages available, including ASP, JSP, Perl, PHP, Python, and Ruby.
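As an illustration of the CGI mechanism described above, a server-side script simply reads the request data that the web server places in its environment and writes an HTTP response to standard output. The sketch below is a minimal Python example offered purely for illustration; the "name" query parameter is a made-up placeholder, not part of any particular site:

    #!/usr/bin/env python3
    # Minimal CGI-style script: the web server puts request data into
    # environment variables, runs this program, and relays whatever it
    # prints (headers, a blank line, then the body) back to the client.
    import os
    from html import escape
    from urllib.parse import parse_qs

    params = parse_qs(os.environ.get("QUERY_STRING", ""))
    name = escape(params.get("name", ["visitor"])[0])  # hypothetical parameter

    print("Content-Type: text/html")  # response header, consumed by the server
    print()                           # blank line terminates the header block
    print(f"<html><body><h1>Hello, {name}!</h1></body></html>")

Because each request spawns a fresh interpreter process to run such a script, this style carries exactly the per-request overhead that embedded modules such as mod_php and mod_perl were later introduced to avoid.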
https://en.wikipedia.org/wiki?curid=29288
Optical spectrometer An optical spectrometer (spectrophotometer, spectrograph or spectroscope) is an instrument used to measure properties of light over a specific portion of the electromagnetic spectrum, typically used in spectroscopic analysis to identify materials. The variable measured is most often the light's intensity but could also, for instance, be the polarization state. The independent variable is usually the wavelength of the light or a unit directly proportional to the photon energy, such as reciprocal centimeters or electron volts, which has a reciprocal relationship to wavelength. A spectrometer is used in spectroscopy for producing spectral lines and measuring their wavelengths and intensities. Spectrometers may also operate over a wide range of non-optical wavelengths, from gamma rays and X-rays into the far infrared. If the instrument is designed to measure the spectrum in absolute units rather than relative units, then it is typically called a spectrophotometer. The majority of spectrophotometers are used in spectral regions near the visible spectrum. In general, any particular instrument will operate over a small portion of this total range because of the different techniques used to measure different portions of the spectrum. Below optical frequencies (that is, at microwave and radio frequencies), the spectrum analyzer is a closely related electronic device. Spectrometers are used in many fields. For example, they are used in astronomy to analyze the radiation from astronomical objects and deduce chemical composition. The spectrometer uses a prism or a grating to spread the light from a distant object into a spectrum. This allows astronomers to detect many of the chemical elements by their characteristic spectral fingerprints. If the object is glowing by itself, it will show spectral lines caused by the glowing gas. These lines are named for the elements which cause them, such as the hydrogen alpha, beta, and gamma lines. Chemical compounds may also be identified by absorption. Typically these are dark bands in specific locations in the spectrum, caused by energy being absorbed as light from other objects passes through a gas cloud. Much of our knowledge of the chemical makeup of the universe comes from spectra. Spectroscopes are often used in astronomy and some branches of chemistry. Early spectroscopes were simply prisms with graduations marking wavelengths of light. Modern spectroscopes generally use a diffraction grating, a movable slit, and some kind of photodetector, all automated and controlled by a computer. Joseph von Fraunhofer developed the first modern spectroscope by combining a prism, diffraction slit and telescope in a manner that increased the spectral resolution and was reproducible in other laboratories. Fraunhofer went on to invent the first diffraction spectroscope. Gustav Robert Kirchhoff and Robert Bunsen discovered the application of spectroscopes to chemical analysis and used this approach to discover caesium and rubidium. Kirchhoff and Bunsen's analysis also enabled a chemical explanation of stellar spectra, including the Fraunhofer lines. When a material is heated to incandescence it emits light that is characteristic of the atomic makeup of the material. Particular light frequencies give rise to sharply defined bands on the scale which can be thought of as fingerprints.
For example, the element sodium has a very characteristic double yellow band known as the sodium D-lines at 588.9950 and 589.5924 nanometers, the color of which will be familiar to anyone who has seen a low-pressure sodium vapor lamp. In the original spectroscope design of the early 19th century, light entered a slit and a collimating lens transformed the light into a thin beam of parallel rays. The light then passed through a prism (in hand-held spectroscopes, usually an Amici prism) that refracted the beam into a spectrum, because different wavelengths were refracted by different amounts due to dispersion. This image was then viewed through a tube with a scale that was transposed upon the spectral image, enabling its direct measurement. With the development of photographic film, the more accurate spectrograph was created. It was based on the same principle as the spectroscope, but it had a camera in place of the viewing tube. In recent years, electronic circuits built around the photomultiplier tube have replaced the camera, allowing real-time spectrographic analysis with far greater accuracy. Arrays of photosensors are also used in place of film in spectrographic systems. Such spectral analysis, or spectroscopy, has become an important scientific tool for analyzing the composition of unknown material and for studying astronomical phenomena and testing astronomical theories. In modern spectrographs in the UV, visible, and near-IR spectral ranges, the spectrum is generally given in the form of photon number per unit wavelength (nm or μm), wavenumber (μm−1, cm−1), frequency (THz), or energy (eV), with the units indicated on the abscissa. In the mid- to far-IR, spectra are typically expressed in units of watts per unit wavelength (μm) or wavenumber (cm−1). In many cases, the spectrum is displayed with the units left implied (such as "digital counts" per spectral channel). A spectrograph is an instrument that separates light by its wavelengths and records this data. A spectrograph typically has a multi-channel detector system or camera that detects and records the spectrum of light. The term was first used in 1876 by Dr. Henry Draper when he invented the earliest version of this device, which he used to take several photographs of the spectrum of Vega. This earliest version of the spectrograph was cumbersome to use and difficult to manage. There are several kinds of machines referred to as "spectrographs", depending on the precise nature of the waves. The first spectrographs used photographic paper as the detector. The plant pigment phytochrome was discovered using a spectrograph that used living plants as the detector. More recent spectrographs use electronic detectors, such as CCDs, which can be used for both visible and UV light. The exact choice of detector depends on the wavelengths of light to be recorded. A spectrograph is sometimes called a polychromator, by analogy with monochromator. The star spectral classification and discovery of the main sequence, Hubble's law and the Hubble sequence were all made with spectrographs that used photographic paper. The forthcoming James Webb Space Telescope will contain both a near-infrared spectrograph (NIRSpec) and a mid-infrared spectrograph (MIRI). An Echelle spectrograph uses two diffraction gratings, rotated 90 degrees with respect to each other and placed close to one another. Therefore, an entrance point, rather than a slit, is used, and a 2D CCD chip records the spectrum.
One might expect the spectrum to be retrieved along the diagonal, but when one grating has a wide spacing and is blazed so that only the first order is visible, while the other is blazed so that many higher orders are visible, one gets a very fine spectrum nicely folded onto a small common CCD chip. The small chip also means that the collimating optics need not be optimized for coma or astigmatism, but the spherical aberration can be set to zero.
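To make the unit conventions mentioned above concrete, here is a small worked sketch in Python (purely illustrative; the constants are standard CODATA values) converting the sodium D1 wavelength quoted earlier into the energy-proportional units on which spectra are commonly plotted:

    # Convert the sodium D1 line (589.5924 nm) into wavenumber,
    # frequency, and photon energy.
    h = 6.62607015e-34    # Planck constant, J*s
    c = 2.99792458e8      # speed of light, m/s
    eV = 1.602176634e-19  # joules per electron volt

    wavelength_m = 589.5924e-9

    wavenumber = 0.01 / wavelength_m    # cm^-1 (1/lambda, with lambda in cm)
    frequency = c / wavelength_m        # Hz
    energy = h * c / wavelength_m / eV  # eV

    print(f"{wavenumber:.1f} cm^-1, {frequency / 1e12:.1f} THz, {energy:.3f} eV")
    # prints roughly: 16960.9 cm^-1, 508.5 THz, 2.103 eV

Halving the wavelength would double all three of these quantities, which is why reciprocal centimeters and electron volts can serve interchangeably as the abscissa of a spectrum.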
https://en.wikipedia.org/wiki?curid=29293
Quake II Quake II is a first-person shooter video game released in December 1997. It was developed by id Software and published by Activision. It is not a direct sequel to "Quake"; id decided to revert to an existing trademark because the game's fast-paced, tactile feel felt closer to a "Quake" game than to a new franchise. The game's storyline is continued in Quake 4. The soundtrack for "Quake II" was mainly provided by Sonic Mayhem, with some additional tracks by Bill Brown; the main theme was also composed by Bill Brown and Rob Zombie, and one track by Jer Sypult. The soundtrack for the Nintendo 64 version of the game was composed by Aubrey Hodges, credited as Ken "Razor" Richmond. "Quake II" is a first-person shooter, in which the player shoots enemies from the perspective of the main character. The gameplay is very similar to that featured in "Quake" in terms of movement and controls, although the player's movement speed has been slowed down, and the player now has the ability to crouch. The game retains four of the eight weapons from "Quake" (the Shotgun, Super Shotgun, Grenade Launcher, and Rocket Launcher), although they have been redesigned visually and made to function in slightly different ways. The remainder of "Quake"s eight weapons (the Axe, Nailgun, Super Nailgun, and Thunderbolt) are not present in "Quake II". The six newly introduced weapons are the Blaster, Machine Gun, Chain Gun, Hyperblaster, Railgun, and BFG10K. The Quad Damage power-up from "Quake" is present in "Quake II", and new power-ups include the Ammo Pack, Invulnerability, Bandolier, Enviro-Suit, Rebreather, and Silencer. The single-player game features a number of changes from "Quake". First, the player is given mission-based objectives that correspond to the storyline, including stealing a Tank Commander's head to open a door and calling down an air strike on a bunker. CGI cutscenes are used to illustrate the player's progress through the main objectives, although they are all essentially the same short piece of video, showing a computerized image of the player character as he moves through the game's levels. Another addition is the inclusion of a non-hostile character type: the player character's captured comrades. It is not possible to interact with these characters, however, as they have all been driven insane by their Strogg captors. The game features much larger levels than "Quake", with many more wide-open areas. There is also a hub system that allows the player to travel back and forth between levels, which is necessary to complete certain objectives. Some of the textures and symbols that appear in the game are very similar to some of those found in "Quake". Enemies demonstrate visible wounds after they have taken damage. The multiplayer portion is similar to that of "Quake". It can be played as a free-for-all deathmatch game mode, a cooperative version of the single-player game, or as a 1-vs-1 match of the kind used in official tournaments such as the Cyberathlete Professional League. It can also be played in Capture the Flag (CTF) mode. The deathmatch game benefited from the release of eight specifically designed levels that id Software added after the game's initial release. They were introduced to the game via one of the early patches, which were released free of charge. Prior to the release of these maps, players were limited to playing multiplayer games on the single-player levels, which, while functional as multiplayer levels, were not designed with deathmatch gameplay specifically in mind.
As in "Quake", it is possible to customize the way in which the player appears to other people in multiplayer games. However, whereas in "Quake", the only option was to change the color of the player's uniform unless third party modifications were used, now the game comes with a selection of three different player models: a male marine, a female marine, and a cyborg marine; choice of player model also affects the speech effects the player's character will make, such as exhaling in effort while jumping or groaning when injured. Each model can be customized from in the in-game menu via the selection of pre-drawn skins, which differ in many ways; for example, skin color, camouflage style, and application of facepaint. "Quake II" takes place in a science fiction environment. In the single-player game, the player assumes the role of a Marine named Bitterman taking part in "Operation Alien Overlord", a desperate attempt to prevent an alien invasion of Earth by launching a counterattack against the home planet of the hostile Strogg civilization. Most of the other soldiers are captured or killed as soon as they approach the planned landing zone. Bitterman survives only because another Marine's personal capsule collided with his upon launch, causing him to crash far short of the landing zone. It falls upon Bitterman to penetrate the Strogg capital city alone and assassinate the Strogg leader, the Makron. Originally, Quake II was supposed to be an entirely new game and IP; titles like "Strogg", "Lock and Load", and even just "Load" were toyed with in the early days of development. But after numerous failed attempts, the team at id decided to stick with 'Quake II' and forego the gothic Lovecraftian horror theme from the original in favor of a more sci-fi aesthetic. Artist and co-owner Adrian Carmack had said that Quake II is his favorite game in the series because "it was different and a cohesive project". Unlike "Quake", where hardware-accelerated graphics controllers were supported only with later patches, "Quake II" came with OpenGL support out of the box. Later downloads from id Software added support for AMD's 3DNow! instruction set for improved performance on their K6-2 processors, and Rendition released a native renderer for their V1000 graphics chip. The latest version is 3.21. This update includes numerous bug fixes and new levels designed for multiplayer deathmatch. Version 3.21, available as source code on id Software's FTP server, has no improved functionality over version 3.20 and is simply a slight modification to make compiling for Linux easier. "Quake II" uses an improved client–server model introduced in "Quake". The game code of "Quake II", which defines all the functionality for weapons, entities, and game mechanics, can be changed in any way because id Software published the source code of their own implementation that shipped with the game. "Quake II" uses the shared library functionality of the operating system to load the game library at run-time—this is how mod authors are able to alter the game and provide different gameplay mechanics, new weapons, and much more. The full source code to "Quake II" version 3.19 was released under the terms of the GNU GPL on December 22, 2001. Version 3.21 followed later. A LCC-friendly version was released on January 1, 2002 by a modder going by the name of Major Bitch. 
Since the release of the "Quake II" source code, several updates from third-party projects to the game engine have been created; the most prominent of these are projects focused on graphical enhancements to the game such as most notable "Yamagi Quake II", "Quake2maX", "EGL", "Quake II Evolved", and "KMQuake II". The source release also revealed numerous security flaws which can result in remote compromise of both the "Quake II" client and server. As id Software no longer maintains "Quake II", most third-party engines include fixes for these bugs. The unofficial patch 3.24 that fixes bugs and adds only meager tweaks is recommended for "Quake II" purists, as it is not intended to add new features or be an engine mod in its own right. The most popular server-side engine modification for multiplayer, "R1Q2", is generally recommended as a replacement for the 3.20 release for both clients and servers. In July 2003, Vertigo Software released a port of "Quake II" for the Microsoft .NET platform, using Managed C++, called "Quake II .NET". It became a poster application for the language, showcasing the powerful interoperability between .NET and standard C++ code. It remains one of the top downloads on the Visual C++ website. In May 2004, Bytonic Software released a port of "Quake II" (called "Jake2") written in Java using JOGL. In 2010 Google ported "Jake2" to HTML5, running in Safari and Chrome. "Quake II"s game engine was a popular license and formed the basis for several commercial and free games, such as "", "War§ow", "SiN", "Anachronox", "Heretic II", "Daikatana", "Soldier of Fortune", "", and "". Valve's 1998 video game "Half-Life" used the "Quake II" engine during early development stages. However, the final version runs on a heavily modified version of the "Quake" engine, "GoldSrc", with a small amount of the "Quake II" code. Ports of "Quake II" were released in 1999 on the Nintendo 64 (ported by Raster Productions) and PlayStation (ported by Hammerhead) video game consoles. In both cases, the core gameplay was largely identical; however, changes were made to the game sequence and split-screen multiplayer replaced network or Internet play. A Macintosh port was developed by Logicware and released in July 1999. "Quake II: Colossus" ("Quake II" with both official add-ons) was ported to Linux by id Software and published by Macmillan Digital Publishing in 1999. Be Inc. officially ported "Quake II: Colossus" to the BeOS to test their OpenGL acceleration in 1999, and provided the game files for free download at a later date—a Windows, Macintosh, or Linux install CD was required to install the game, with the official add-ons being optional. "Jake2" is a "Quake II" port shown by the JOGL team for JavaOne 2004, to present an example of Java-OpenGL interoperability. "Jake2" has since been used by Sun as an example of Java Web Start capabilities for games distribution over the Internet. In 2009, Tectoy Digital ported "Quake II" to the Brazilian gaming console Zeebo. The game is available for free, but does not feature CG movies or multiplayer support of any kind. The PlayStation version contains abridged versions of Units 1, 3, 6, 7, 8, and 10 of the PC version, redesigned to meet the console's technical limitations. For example, many short airlock-like corridors were added to maps to provide loading pauses inside what were contiguous areas in the PC version. In addition, part of the first mission of the N64 port is used as a prologue. 
Some enemy types were removed and two new enemies were added: the Arachnid, a human-spider cyborg with twin railgun arms, and the Guardian, a bipedal boss enemy. Saving the game is only possible between levels and at mid-level checkpoints where the game loads, whereas in the PC version the game can be saved and loaded at any time. The game supports the PlayStation Mouse peripheral to provide greater parity with the PC version's gameplay. The music used in this port is a combination of the original "Quake II" music score and tracks from the PC version's mission packs, while the opening and closing cut-scenes are taken from the Ground Zero expansion pack. The PlayStation version uses a new engine developed by Hammerhead for their future PlayStation projects and runs at a resolution of 512x240 at 30 frames per second. The developer was keen to retain visual parity with the PC version and avoid tricks such as the use of environmental fog. Colored lights for levels and enemies, and yellow highlights for gunfire and explosions, are carried across from the PC version, with the addition of lens flare effects located around the light sources on the original lightmaps. There is no skybox; instead, a flat Gouraud-shaded purple sky is drawn around the top of the level. The game uses particles to render blood, debris, and railgun beams analogously to the PC version. There is also a split-screen multiplayer mode for two to four players (a four-player game is possible using the PlayStation Multitap). The only available player avatar is a modified version of the male player avatar from the PC version, the most noticeable difference being the addition of a helmet. Players can only customize the color of their avatar's armor and change their name. The twelve multiplayer levels featured are unique to the PlayStation version, with none of the PC multiplayer maps being carried over. The Nintendo 64 version has completely different single-player levels and multiplayer maps, and features multiplayer support for up to four players. This version also has new lighting effects, mostly seen in gunfire, and uses the Expansion Pak for extra graphical detail. The port features an entirely new soundtrack, consisting mostly of dark ambient pieces, composed by Aubrey Hodges. A port of "Quake II" was included with "Quake 4" for the Xbox 360 on a bonus disc. This is a direct port of the original game, with some graphical improvements. It also allows for System Link play for up to sixteen players, split-screen for four players, and cooperative play in single-player for up to sixteen players, or four players with split-screen alone. On December 20, 2018, Polish programmer Krzysztof Kondrak released the original Quake 2 v3.21 source code with Vulkan support added. The port, called "vkQuake2", is available under the GPLv2. As with the original "Quake", "Quake II" was designed to allow players to easily create custom content. A large number of mods, maps, graphics (such as player models and skins), and sound effects were created and distributed free of charge via the Internet. Popular websites such as PlanetQuake and Telefragged allowed players to gain access to custom content. Another improvement over "Quake" was that it was easier to select custom player models, skins, and sound effects, because they could be chosen from an in-game menu.
Two unofficial expansions were released on CDs in 1998: "Zaero", developed by Team Evolve and published by Macmillan Digital Publishing, and "Juggernaut: The New Story", developed by Canopy Games and published by HeadGames Publishing. Other notable mods include "Action Quake 2", "Rocket Arena", "Weapons Factory", "Loki's Minions Capture the Flag", and "RailwarZ Insta-Gib Capture the Flag". "Quake II" was released on December 9, 1997 in the United States (one day short of four years after the release of "Doom") and on December 12, 1997 in Europe. Despite the title, "Quake II" is a sequel to the original "Quake" in name only: the scenario, enemies, and theme are entirely separate and do not fall into the same continuity as "Quake". id initially wanted to give the game an identity separate from "Quake", but for legal reasons (most of their suggested names were already taken) they decided to use the working title. "Quake II" was also adopted as a name to leverage the popularity of "Quake", according to Jennell Jaquays. "Quake II" has been released on Steam, but this version does not include the soundtrack. The game was released on a bonus disc included with the "Quake 4" Special Edition for the PC, along with both expansion packs; this version also lacks the soundtrack. "Quake II" is also available on a bonus disc with the Xbox 360 version of "Quake 4"; this version is a direct port featuring the original soundtrack and multiplayer maps. In 2015, "Quake II: Quad Damage", a bundle containing the original game with the mission packs, was released at GOG.com. Unlike the previous releases, this one contains a new customizable launcher and the official soundtrack in OGG format, which can be played in-game, making it the only digital release to include the music. The game has also been included in several official compilations. A remastered version of the game, titled "Quake II RTX", was announced by Nvidia on March 18, 2019 and released on June 6, 2019 for Windows and Linux on Steam. This remastered version requires an Nvidia RTX GPU, as it has been developed to utilize these cards' hardware ray-tracing functionality. The game, provided free of charge, includes the three levels present in the original "Quake II" demo, but can be used to play the full game if its data files are available. "Quake II Mission Pack: The Reckoning" is the first official expansion pack, released on May 31, 1998, and developed by Xatrix Entertainment. First announced in January 1998, it features eighteen new single-player levels, six new deathmatch levels, three new weapons (the Ion Ripper, Phalanx Particle Cannon, and Trap), a new power-up, two new enemies, seven modified versions of existing enemies, and five new music tracks. The storyline follows Joker, a member of an elite squad of marines on a mission to infiltrate a Strogg base on one of Stroggos' moons and destroy the Strogg fleet, which is preparing to attack. Joker crash-lands in the swamps outside the compound where his squad is waiting. He travels through the swamps, bypasses the compound's outer defenses and enters through the main gate, finding his squad just in time to watch them be executed by Strogg forces. Joker then escapes on his own to the fuel refinery, where he helps the Air Force destroy all fuel production, then infiltrates the Strogg spaceport, boards a cargo ship and reaches the Moon Base, destroying it and the Strogg fleet.
Notably, the section of the game that takes place on the Moon Base has low gravity, something that had previously been used in one secret level of the original "Quake". The Reckoning received mixed reviews: it holds a 69.50% average at GameRankings, and GameSpot gave it a score of 7.4/10. "Quake II Mission Pack: Ground Zero" is the second official expansion pack, released on September 11, 1998, and developed by Rogue Entertainment. It comes with fourteen new single-player levels, ten new multiplayer maps, five additional music tracks, five new enemies, seven new power-ups, and five new weapons. In the expansion's story, the Gravity Well has trapped the Earth fleet in orbit above the planet Stroggos. One of the marines who managed to land, Stepchild, must now make his way to the Gravity Well to destroy it, freeing the fleet above and disabling the planet's entire defenses. Ground Zero received average to mixed reviews, holding a 65.40% average at GameRankings. Patrick Baggatta of IGN gave the expansion 7.5/10, describing it as similar to the original but noting occasionally confusing map design. Elliott Chin of GameSpot gave the game 7.9/10, citing it as decent for an expansion and praising the monsters and enhanced AI. Johnny B. of Game Revolution rated the expansion D+, citing bad level design and few additions to the original game, and noted the multiplayer power-up gameplay as the only fun feature. "Quake II Netpack I: Extremities" contains, among other features, 11 game mods and 12 deathmatch maps. "Quake II" entered PC Data's monthly computer game sales rankings at #2 for December 1997, behind "Riven". The game's sales in the United States alone reached 240,913 copies by the end of 1997, following its release on December 9. According to PC Data, it was the country's 22nd-best-selling computer game of 1997. The following year, "Quake II" secured fifth place on PC Data's charts for January and February 1998, then dropped to #8 in March and #9 in April. It remained in PC Data's top 20 for another two months, before exiting in July 1998. "Quake II" surpassed 850,000 units shipped to retailers by May 1998, and 900,000 by June. According to PC Data, "Quake II" was the United States' 14th-best-selling computer game during the January–November 1998 period. It ultimately secured 15th place for the full year, with sales of 279,536 copies and revenues of $12.6 million. GameDaily reported in January 1999 that "Quake II"s sales in the United States had reached 550,000 units; this number rose to 610,000 units by December of that year. Worldwide, "Quake II" sold over 1 million copies by 2002. "Next Generation" reviewed the PC version of the game, rating it four stars out of five, and stated that "All in all, id should be commended for the advancement of its technology and improvement in its single-player level design, but it's going to be up to mod designers to provide the necessary additions to the multiplayer game in order to make it stand out from "Quake"." "Quake II" received positive reviews. Aggregating review website GameRankings gave the PC version 87%, the Nintendo 64 version 81%, and the PlayStation version 80%. "AllGame" editor Michael L. House praised "Quake II", stating that "the beauty of Quake II is not in the single-player game, it's in the multi-player feature". "GameSpot" editor Vince Broady described "Quake II" as "the only first-person shooter to render the original Quake entirely obsolete".
Daniel Erickson reviewed the N64 version of the game for "Next Generation", rating it four stars out of five, and stated that "A good first-person shooter with a great multiplayer mode; "GoldenEye" is no longer the only game in town." "Quake II" won "Macworld"s 1999 "Best Shoot-'Em-Up" award, and the magazine's Christopher Breen wrote, "In either single-player or multiplayer mode, for careening-through-corridor-carnage satisfaction, Quake II is a must-have." It also won "Computer Gaming World"s 1997 "Action Game of the Year" award. The editors wrote that "for pure adrenaline-pumping, visceral, instantly gratifying action, "Quake II" is the hands-down winner. No game gave us the rush that "Quake II" did." In 1998, "PC Gamer" declared it the 3rd-best computer game ever released, with the editors writing that "id's gun-happy masterpiece is the most sensational and subtle shooter ever, and one of the best games of any type ever created". In 1999, "Next Generation" listed "Quake 2" as number 5 on their "Top 50 Games of All Time", commenting that, ""Quake 2" is the standard for multiplayer shooting, and we've yet to see a ""Quake" killer" that can keep us from returning to multiplayer "Quake" for longer than a month or so."
https://en.wikipedia.org/wiki?curid=25216
Qi In traditional Chinese culture, qi or ch'i is believed to be a vital force forming part of any living entity. "Qi" translates as "air" and figuratively as "material energy", "life force", or "energy flow". "Qi" is the central underlying principle in Chinese traditional medicine and in Chinese martial arts. The practice of cultivating and balancing "qi" is called "qigong". Believers of "qi" describe it as a vital force, the flow of which must be unimpeded for health. "Qi" is a pseudoscientific, unverified concept, which has never been directly observed, and is unrelated to the concept of energy used in science (vital energy itself being an abandoned scientific notion). The cultural keyword "qì" is analyzable in terms of Chinese and Sino-Xenic pronunciations. Possible etymologies include the logographs 气, 氣, and 餼, with various meanings ranging from "vapor" to "anger", and the English loanword "qi" or "ch'i". The logograph 氣 is read with two Chinese pronunciations, the usual "qì" "air; vital energy" and the rare archaic "xì" "to present food" (later disambiguated with 餼). Pronunciations of 氣 in modern varieties of Chinese include: Standard Chinese "qì", Wu Chinese "qi", Southern Min "khì", Eastern Min "ké", Standard Cantonese "hei3", and Hakka Chinese "hi". Pronunciations of 氣 in Sino-Xenic borrowings include: Japanese "ki", Korean "gi", and Vietnamese "khi". Reconstructions of the Middle Chinese pronunciation of 氣, standardized to IPA transcription, include: /kʰe̯iH/ (Bernard Karlgren), /kʰĭəiH/ (Wang Li), /kʰiəiH/ (Li Rong), /kʰɨjH/ (Edwin Pulleyblank), and /kʰɨiH/ (Zhengzhang Shangfang). Reconstructions of the Old Chinese pronunciation of 氣, standardized to IPA transcription, include: /*kʰɯds/ (Zhengzhang Shangfang) and /*C.qʰəp-s/ (William H. Baxter and Laurent Sagart). The etymology of "qì" interconnects with Kharia "kʰis" "anger", Sora "kissa" "move with great effort", Khmer "kʰɛs" "strive after; endeavor", and Gyalrongic "kʰɐs" "anger". In the East Asian languages, "qì" has three logographs: traditional Chinese 氣, simplified Chinese 气, and Japanese shinjitai 気. In addition, 炁 "qì" is an uncommon character especially used in writing Daoist talismans. Historically, the word "qì" was generally written as 气 until the Han dynasty (206 BCE–220 CE), when it was replaced by the graph 氣, clarified with "mǐ" 米 "rice", indicating "steam (rising from rice as it cooks)". This primary logograph 气, the earliest written character for "qì", consisted of three wavy horizontal lines seen in Shang dynasty (c. 1600–1046 BCE) oracle bone script, Zhou dynasty (1046–256 BCE) bronzeware script and large seal script, and Qin dynasty (221–206 BCE) small seal script. These oracle, bronze, and seal script logographs were used in ancient times as a phonetic loan character to write "qǐ" "plead for; beg; ask", which did not have an early character. The vast majority of Chinese characters are classified as radical-phonetic characters. Such characters combine a semantically suggestive "radical character" with a phonetic element approximating ancient pronunciation. For example, the widely known word "dào" 道 "the Dao; the way" graphically combines the "walk" radical 辶 with a "shǒu" 首 "head" phonetic. Although the modern "dào" and "shǒu" pronunciations are dissimilar, the Old Chinese "*lˤuʔ-s" and "*l̥uʔ-s" were alike. The regular script character "qì" 氣 is unusual because "qì" 气 is both the "air radical" and the phonetic, with "mǐ" 米 "rice" semantically indicating "steam; vapor".
This "qì" "air/gas radical" was only used in a few native Chinese characters like "yīnyūn" "thick mist/smoke", but was also used to create new scientific characters for gaseous chemical elements. Some examples are based on pronunciations in European languages: "fú" (with a "fú" phonetic) "fluorine" and "nǎi" (with a "nǎi" phonetic) "neon". Others are based on semantics: "qīng" (with a "jīng" phonetic, abbreviating "qīng" "light-weight") "hydrogen (the lightest element)" and "lǜ" (with a "lù" phonetic, abbreviating "lǜ" "green") "(greenish-yellow) chlorine". "Qì" is the phonetic element in a few characters such as "kài" "hate" with the "heart-mind radical" or , "xì" "set fire to weeds" with the "fire radical" , and "xì" "to present food" with the "food radical" . The first Chinese dictionary of characters, the "Shuowen Jiezi"(121 CE) notes that the primary "qì" is a pictographic character depicting "cloudy vapors", and that the full combines "rice" with the phonetic "qi" , meaning "present provisions to guests" (later disambiguated as "xì" ). Qi is a polysemous word. The unabridged Chinese-Chinese character dictionary "Hanyu Da Cidian" defines it as "present food or provisions" for the "xì" pronunciation but also lists 23 meanings for the "qì" pronunciation. The modern "ABC Chinese-English Comprehensive Dictionary," which enters "xì" "grain; animal feed; make a present of food", and a "qì" entry with seven translation equivalents for the noun, two for bound morphemes, and three equivalents for the verb. n. ① air; gas ② smell ③ spirit; vigor; morale ④ vital/material energy (in Ch[inese] metaphysics) ⑤ tone; atmosphere; attitude ⑥ anger ⑦ breath; respiration b.f. ① weather "tiānqì" ② [linguistics] aspiration "sòngqì" v. ① anger ② get angry ③ bully; insult. Qi was an early Chinese loanword in English. It was romanized as "k'i" in Church Romanization in the early-19th century, as "ch'i" in Wade–Giles in the mid-19th century (sometimes misspelled "chi" omitting the apostrophe), and as qi in Pinyin in the mid-20th century. The "Oxford English Dictionary" entry for "qi" gives the pronunciation as , the etymology from Chinese "qì" "air; breath", and a definition of "The physical life-force postulated by certain Chinese philosophers; the material principle." It also gives eight usage examples, with the first recorded example of "k'í" in 1850 ("The Chinese Repository"), of "ch'i" in 1917 ("The Encyclopaedia Sinica"), and "qi" in 1971 (Felix Mann's "Acupuncture") References to concepts analogous to qi are found in many Asian belief systems. Philosophical conceptions of qi from the earliest records of Chinese philosophy (5th century BCE) correspond to Western notions of humours, the ancient Hindu yogic concept of "prana." An early form of qi comes from the writings of the Chinese philosopher Mencius (4th century BCE). The ancient Chinese described qi as "life force". They believed it permeated everything and linked their surroundings together. Qi was also linked to the flow of energy around and through the body, forming a cohesive functioning unit. By understanding the rhythm and flow of qi, they believed they could guide exercises and treatments to provide stability and longevity. Although the concept has been important within many Chinese philosophies, over the centuries the descriptions of qi have varied and have sometimes been in conflict. Until China came into contact with Western scientific and philosophical ideas, the Chinese had not categorized all things in terms of matter and energy. 
Qi and "li" (: "pattern") were 'fundamental' categories similar to matter and energy. Fairly early on, some Chinese thinkers began to believe that there were different fractions of qi—the coarsest and heaviest fractions formed solids, lighter fractions formed liquids, and the most ethereal fractions were the "lifebreath" that animated living beings. "Yuanqi" is a notion of innate or prenatal qi which is distinguished from acquired qi that a person may develop over their lifetime. The earliest texts that speak of qi give some indications of how the concept developed. In the Analects of Confucius qi could mean "breath". Combining it with the Chinese word for blood (making 血氣, "xue"–"qi", blood and breath), the concept could be used to account for motivational characteristics: The philosopher Mozi used the word qi to refer to noxious vapors that would eventually arise from a corpse were it not buried at a sufficient depth. He reported that early civilized humans learned how to live in houses to protect their qi from the moisture that troubled them when they lived in caves. He also associated maintaining one's qi with providing oneself with adequate nutrition. In regard to another kind of qi, he recorded how some people performed a kind of prognostication by observing qi (clouds) in the sky. Mencius described a kind of qi that might be characterized as an individual's vital energies. This qi was necessary to activity and it could be controlled by a well-integrated willpower. When properly nurtured, this qi was said to be capable of extending beyond the human body to reach throughout the universe. It could also be augmented by means of careful exercise of one's moral capacities. On the other hand, the qi of an individual could be degraded by adverse external forces that succeed in operating on that individual. Living things were not the only things believed to have qi. Zhuangzi indicated that wind is the "qi" of the Earth. Moreover, cosmic yin and yang "are the greatest of qi. He described qi as "issuing forth" and creating profound effects. He also said "Human beings are born [because of] the accumulation of "qi". When it accumulates there is life. When it dissipates there is death... There is one "qi" that connects and pervades everything in the world." Another passage traces life to intercourse between Heaven and Earth: "The highest Yin is the most restrained. The highest Yang is the most exuberant. The restrained comes forth from Heaven. The exuberant issues forth from Earth. The two intertwine and penetrate forming a harmony, and [as a result] things are born." The Guanzi essay "Neiye" (Inward Training) is the oldest received writing on the subject of the cultivation of vapor "[qi]" and meditation techniques. The essay was probably composed at the Jixia Academy in Qi in the late fourth century B.C. Xun Zi, another Confucian scholar of the Jixia Academy, followed in later years. At 9:69/127, Xun Zi says, "Fire and water have "qi" but do not have life. Grasses and trees have life but do not have perceptivity. Fowl and beasts have perceptivity but do not have "yi" (sense of right and wrong, duty, justice). Men have "qi", life, perceptivity, and "yi"." Chinese people at such an early time had no concept of radiant energy, but they were aware that one can be heated by a campfire from a distance away from the fire. They accounted for this phenomenon by claiming ""qi"" radiated from fire. At 18:62/122, he also uses ""qi"" to refer to the vital forces of the body that decline with advanced age. 
Among the animals, the gibbon and the crane were considered experts at inhaling the "qi". The Confucian scholar Dong Zhongshu (c. 150 BCE) wrote in Luxuriant Dew of the Spring and Autumn Annals: "The gibbon resembles a macaque, but he is larger, and his color is black. His forearms being long, he lives eight hundred years, because he is expert in controlling his breathing." Later, the syncretic text assembled under the direction of Liu An, the Huai Nan Zi, or "Masters of Huainan", has a passage that presages most of what is given greater detail by the Neo-Confucians. The "Huangdi Neijing" ("The Yellow Emperor's Classic of Medicine", c. 2nd century BCE) is historically credited with first establishing the pathways, called meridians, through which qi allegedly circulates in the human body. In traditional Chinese medicine, symptoms of various illnesses are believed to be either the product of disrupted, blocked, or unbalanced "qi" movement through meridians, or of deficiencies and imbalances of qi in the "Zang Fu" organs. Traditional Chinese medicine often seeks to relieve these imbalances by adjusting the circulation of "qi" using a variety of techniques including herbology, food therapy, physical training regimens (qigong, t'ai chi ch'uan, and other martial arts training), moxibustion, "tui na", and acupuncture. The nomenclature of qi in the human body differs depending on its sources, roles, and locations. For sources, there is a difference between so-called "Primordial Qi" (acquired at birth from one's parents) and qi acquired throughout one's life. Chinese medicine also differentiates between qi acquired from the air we breathe (so-called "Clean Air") and qi acquired from food and drink (so-called "Grain Qi"). Looking at roles, qi is divided into "Defensive Qi" and "Nutritive Qi". Defensive Qi's role is to defend the body against invasions, while Nutritive Qi's role is to provide sustenance for the body. Lastly, looking at locations, qi is also named after the Zang-Fu organ or the meridian in which it resides: "Liver Qi", "Spleen Qi", etc. A qi field ("chu-chong") refers to the cultivation of an energy field by a group, typically for healing or other benevolent purposes. A qi field is believed to be produced by visualization and affirmation. Qi fields are an important component of Wisdom Healing Qigong ("Zhineng Qigong"), founded by Grandmaster Ming Pang. Concepts similar to qi can be found in many cultures: "prana" in Hinduism and Indian culture, "chi" in the Igbo religion, "pneuma" in ancient Greece, "mana" in Hawaiian culture, "lüng" in Tibetan Buddhism, and "manitou" in the culture of the indigenous peoples of the Americas. Some elements of the "qi" concept can be found in the term "energy" when used in the context of various esoteric forms of spirituality and alternative medicine. The existence of qi has not been proven scientifically. A 1997 consensus statement on acupuncture by the United States National Institutes of Health noted that concepts such as qi "are difficult to reconcile with contemporary biomedical information".
The 2014 Skeptoid podcast episode titled "Your Body's Alleged Energy Fields" related a Reiki practitioner's report of what was happening as she passed her hands over a subject's body; author and scientific skeptic Brian Dunning evaluated these claims. The traditional Chinese art of geomancy, the placement and arrangement of space called feng shui, is based on calculating the balance of qi, interactions between the five elements, yin and yang, and other factors. The retention or dissipation of qi is believed to affect the health, wealth, energy level, luck, and many other aspects of the occupants. Attributes of each item in a space affect the flow of qi by slowing it down, redirecting it, or accelerating it. This is said to influence the energy level of the occupants. Positive qi flows in curved lines, whereas negative qi travels in straight lines. In order for qi to be nourishing and positive, it must flow neither too quickly nor too slowly. In addition, qi should not be blocked abruptly, because it would become stagnant and turn destructive. One use for a "luopan" is to detect the flow of qi. The quality of qi may rise and fall over time. Feng shui with a compass might be considered a form of divination that assesses the quality of the local environment. There are three kinds of qi, known as heaven qi ("tian qi" 天气), Earth qi ("di qi" 地气), and human qi ("ren qi" 人气). Heaven qi is composed of natural forces including the sun and rain. Earth qi is affected by heaven qi. For example, too much sun would lead to drought, and a lack of sun would cause plants to die off. Human qi is affected by earth qi, because the environment has effects on human beings. Feng shui is the balancing of heaven, Earth, and human qi. Qìgōng (气功 or 氣功) involves coordinated breathing, movement, and awareness. It is traditionally viewed as a practice to cultivate and balance qi. With roots in traditional Chinese medicine, philosophy, and martial arts, "qigong" is now practiced worldwide for exercise, healing, meditation, and training for martial arts. Typically a "qigong" practice involves rhythmic breathing, slow and stylized movement, a mindful state, and visualization of guiding qi. Qi is a didactic concept in many Chinese, Vietnamese, Korean, and Japanese martial arts. Martial qigong is a feature of both internal and external training systems in China and other East Asian cultures. The most notable of the qi-focused "internal" force (jin) martial arts are Baguazhang, Xing Yi Quan, T'ai Chi Ch'uan, Southern Praying Mantis, Snake Kung Fu, Southern Dragon Kung Fu, Aikido, Kendo, Hapkido, Aikijujutsu, Luohan Quan, and Liu He Ba Fa. Demonstrations of "qi" or "ki" are popular in some martial arts and may include the unraisable body, the unbendable arm, and other feats of power. Some of these feats can alternatively be explained using biomechanics and physics. Acupuncture is a part of traditional Chinese medicine that involves insertion of needles into superficial structures of the body (skin, subcutaneous tissue, muscles) at acupuncture points to balance the flow of qi. This is often accompanied by moxibustion, a treatment that involves burning mugwort on or near the skin at an acupuncture point.
https://en.wikipedia.org/wiki?curid=25217
Quantum computing Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. Computers that perform quantum computations are known as quantum computers. Quantum computers are believed to be able to solve certain computational problems, such as integer factorization (which underlies RSA encryption), substantially faster than classical computers. The study of quantum computing is a subfield of quantum information science. Quantum computing began in the early 1980s, when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine. Richard Feynman and Yuri Manin later suggested that a quantum computer had the potential to simulate things that a classical computer could not. In 1994, Peter Shor developed a quantum algorithm for factoring integers that had the potential to decrypt RSA-encrypted communications. Despite ongoing experimental progress since the late 1990s, most researchers believe that "fault-tolerant quantum computing [is] still a rather distant dream." In recent years, investment into quantum computing research has increased in both the public and private sectors. On 23 October 2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration (NASA), claimed to have performed a quantum computation that is infeasible on any classical computer. There are several models of quantum computing, including the quantum circuit model, quantum Turing machine, adiabatic quantum computer, one-way quantum computer, and various quantum cellular automata. The most widely used model is the quantum circuit. Quantum circuits are based on the quantum bit, or "qubit", which is somewhat analogous to the bit in classical computation. Qubits can be in a 1 or 0 quantum state, or they can be in a superposition of the 1 and 0 states. However, when qubits are measured, the result of the measurement is always either a 0 or a 1; the probabilities of these two outcomes depend on the quantum state that the qubits were in immediately prior to the measurement. Computation is performed by manipulating qubits with quantum logic gates, which are somewhat analogous to classical logic gates. There are currently two main approaches to physically implementing a quantum computer: analog and digital. Analog approaches are further divided into quantum simulation, quantum annealing, and adiabatic quantum computation. Digital quantum computers use quantum logic gates to do computation. Both approaches use quantum bits or qubits. There are currently a number of significant obstacles in the way of constructing useful quantum computers. In particular, it is difficult to maintain the quantum states of qubits as they are prone to quantum decoherence, and quantum computers require significant error correction as they are far more prone to errors than classical computers. Any computational problem that can be solved by a classical computer can also, in principle, be solved by a quantum computer. Conversely, quantum computers obey the Church–Turing thesis; that is, any computational problem that can be solved by a quantum computer can also be solved by a classical computer. While this means that quantum computers provide no additional advantages over classical computers in terms of computability, they do in theory enable the design of algorithms for certain problems that have significantly lower time complexities than known classical algorithms.
Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time, a feat known as "quantum supremacy". The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory. The prevailing model of quantum computation describes the computation in terms of a network of quantum logic gates. A memory consisting of $n$ bits of information has $2^n$ possible states. A vector representing all memory states thus has $2^n$ entries (one for each state). This vector is viewed as a "probability vector" and represents the likelihood that the memory is found in a particular state. In the classical view, one entry would have a value of 1 (i.e. a 100% probability of being in this state) and all other entries would be zero. In quantum mechanics, probability vectors are generalized to density operators. This is the technically rigorous mathematical foundation for quantum logic gates, but the intermediate quantum state vector formalism is usually introduced first because it is conceptually simpler. This article focuses on the quantum state vector formalism for simplicity. We begin by considering a simple memory consisting of only one bit. This memory may be found in one of two states: the zero state or the one state. We may represent the state of this memory using Dirac notation so that $|0\rangle := \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $|1\rangle := \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. A quantum memory may then be found in any quantum superposition $|\psi\rangle$ of the two classical states $|0\rangle$ and $|1\rangle$: $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, with $|\alpha|^2 + |\beta|^2 = 1$. In general, the coefficients $\alpha$ and $\beta$ are complex numbers. In this scenario, one qubit of information is said to be encoded into the quantum memory. The state $|\psi\rangle$ is not itself a probability vector but can be connected with a probability vector via a measurement operation. If the quantum memory is measured to determine if the state is $|0\rangle$ or $|1\rangle$ (this is known as a computational basis measurement), the zero state would be observed with probability $|\alpha|^2$ and the one state with probability $|\beta|^2$. The numbers $\alpha$ and $\beta$ are called quantum amplitudes. The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by a matrix $X := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Mathematically, the application of such a logic gate to a quantum state vector is modelled with matrix multiplication. Thus $X|0\rangle = |1\rangle$ and $X|1\rangle = |0\rangle$. The mathematics of single qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit whilst leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$. The CNOT gate can then be represented using the following matrix: $\mathrm{CNOT} := \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$. As a mathematical consequence of this definition, $\mathrm{CNOT}|00\rangle = |00\rangle$, $\mathrm{CNOT}|01\rangle = |01\rangle$, $\mathrm{CNOT}|10\rangle = |11\rangle$, and $\mathrm{CNOT}|11\rangle = |10\rangle$. In other words, the CNOT applies a NOT gate ($X$ from before) to the second qubit if and only if the first qubit is in the state $|1\rangle$. If the first qubit is $|0\rangle$, nothing is done to either qubit.
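The state vector arithmetic above is easy to reproduce numerically. The following short Python sketch (an illustration added here, not part of the original article) builds the basis states, checks the measurement probabilities |α|² and |β|², applies the NOT (X) gate, and verifies the CNOT behaviour on a two-qubit state.

```python
import numpy as np

# Computational basis states of a single qubit.
ket0 = np.array([1, 0], dtype=complex)  # |0>
ket1 = np.array([0, 1], dtype=complex)  # |1>

# A superposition a|0> + b|1>; amplitudes satisfy |a|^2 + |b|^2 = 1.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = a * ket0 + b * ket1

# Computational basis measurement: outcome probabilities are the
# squared magnitudes of the amplitudes.
print(np.abs(psi) ** 2)  # [0.5 0.5]

# The NOT (X) gate, applied by matrix multiplication.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)
print(X @ ket0)  # equals |1>

# Two-qubit states live in the tensor product space; |10> = |1> (x) |0>.
ket10 = np.kron(ket1, ket0)

# CNOT flips the second qubit iff the first qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
print(CNOT @ ket10)  # equals |11>
```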
In summary, a quantum computation can be described as a network of quantum logic gates and measurements. Any measurement can be deferred to the end of a quantum computation, though this deferment may come at a computational cost. Because of this possibility of deferring a measurement, most quantum circuits depict a network consisting only of quantum logic gates and no measurements. More information can be found in the following articles: universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch–Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction. Any quantum computation can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of a few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. For example, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security. However, other cryptographic algorithms do not appear to be broken by those algorithms. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography. Quantum-based cryptographic systems could, therefore, be more secure than traditional systems against quantum hacking.
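The effect of Grover's algorithm on symmetric key lengths reduces to simple arithmetic. The sketch below is illustrative only (the function name and sample key sizes are ours); it computes the effective brute-force work factor, in bits, with and without Grover's quadratic speedup.

```python
def effective_security_bits(key_bits: int, grover: bool = False) -> float:
    """log2 of the number of cipher invocations an exhaustive key search
    needs: 2^n classically, roughly 2^(n/2) using Grover's algorithm."""
    return key_bits / 2 if grover else float(key_bits)

for n in (128, 192, 256):
    print(f"{n}-bit key: classical {effective_security_bits(n):.0f} bits, "
          f"with Grover {effective_security_bits(n, grover=True):.0f} bits")
# AES-256 attacked with Grover (a 128-bit work factor) matches the classical
# security of AES-128, i.e. symmetric key lengths are effectively halved.
```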
Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. However, quantum computers also offer polynomial speedup for some problems. The best-known example of this is "quantum database search", which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees. Problems that can be addressed with Grover's algorithm share a small set of defining properties; for problems with all of these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem. In this instance, the "database" through which the algorithm is iterating is that of all possible answers. An example (and possible application) of this is a password cracker that attempts to guess the password or secret key for an encrypted file or system. Symmetric ciphers such as Triple DES and AES are particularly vulnerable to this kind of attack. This application of quantum computing is a major interest of government agencies. Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles under unusual conditions, such as the reactions inside a collider. Quantum annealing, or adiabatic quantum computation, relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which is slowly evolved to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough, the system will stay in its ground state at all times through the process. The quantum algorithm for linear systems of equations, or "HHL algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is expected to provide speedup over classical counterparts. John Preskill has introduced the term "quantum supremacy" to refer to the hypothetical speedup advantage that a quantum computer would have over a classical computer in a certain field. Google announced in 2017 that it expected to achieve quantum supremacy by the end of the year, though that did not happen.
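The adiabatic approach sketched above can be illustrated with a tiny numerical toy. In the following sketch (an illustration under simplified assumptions, not a description of real annealing hardware), the ground state of an interpolated Hamiltonian H(s) = (1 − s)·H0 + s·H1 is tracked from an easy starting Hamiltonian to a diagonal "problem" Hamiltonian whose ground state encodes the minimum of a small cost table.

```python
import numpy as np

# H0: easy Hamiltonian whose ground state is the uniform superposition
#     (minus the adjacency matrix of the 2-bit hypercube graph).
# H1: "problem" Hamiltonian; its ground state is the lowest-cost basis state.
H0 = -np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
costs = np.array([3.0, 1.0, 4.0, 2.0])   # minimum sits at index 1
H1 = np.diag(costs)

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    H = (1 - s) * H0 + s * H1
    eigenvalues, eigenvectors = np.linalg.eigh(H)
    ground = eigenvectors[:, 0]           # ground state of H(s)
    print(f"s = {s:.2f}: P(optimal state) = {ground[1] ** 2:.2f}")
# The probability of the minimum-cost state grows from 0.25 toward 1.00 as
# s -> 1, which is what a sufficiently slow physical evolution would track.
```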
IBM said in 2018 that the best classical computers will be beaten on some practical task within about five years and views the quantum supremacy test only as a potential future benchmark. Although skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved, in October 2019 a Sycamore processor created in conjunction with Google AI Quantum was reported to have achieved quantum supremacy, with calculations more than 3,000,000 times as fast as those of Summit, generally considered the world's fastest computer. Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would come into conflict with the cosmological information bound implied by the holographic principle. There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed the following requirements for a practical quantum computer: a scalable physical system with well-characterized qubits; the ability to initialize the qubits to a simple fiducial state; decoherence times much longer than the gate-operation time; a universal set of quantum gates; and the ability to measure individual qubits. Sourcing parts for quantum computers is also very difficult. Many quantum computers, like those constructed by Google and IBM, need Helium-3, a nuclear research byproduct, and special superconducting cables that are only made by the Japanese company Coax Co. One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology also called the "dephasing time"), typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence. As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions. These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter; an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time. As described in the quantum threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10^−3, assuming the noise is depolarizing. Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits.
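The rule of thumb relating error rates to the ratio of operation time and decoherence time can be checked with one line of arithmetic. The gate and coherence times in this sketch are illustrative assumptions, not measured values for any particular machine.

```python
# Rule of thumb from the text: error per gate ~ (operation time) / (decoherence
# time), to be compared with the oft-cited fault-tolerance threshold of 10^-3.
gate_time_s = 20e-9            # assumed 20 ns gate operation
decoherence_time_s = 100e-6    # assumed 100 us coherence time (T2)
threshold = 1e-3

error_per_gate = gate_time_s / decoherence_time_s
print(f"estimated error per gate: {error_per_gate:.0e}")   # 2e-04
print("within threshold" if error_per_gate < threshold else "too noisy")
```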
The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of binary digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction. With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2, or about 10^7 steps; at 1 MHz, this is about 10 seconds. A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates. Physicist Mikhail Dyakonov has expressed skepticism of quantum computing. There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are the quantum gate array, the one-way quantum computer, the adiabatic quantum computer, and the topological quantum computer. The quantum Turing machine is theoretically important, but the physical implementation of this model is not feasible. All four models of computation have been shown to be equivalent; each can simulate the other with no more than polynomial overhead. For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits, among them superconducting circuits, trapped ions, photonic systems, and quantum dots. The large number of candidates demonstrates that quantum computing, despite rapid progress, is still in its infancy. Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers. Conversely, any problem solvable by a quantum computer is also solvable by a classical computer; or more formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis. Quantum computers do not yet satisfy the strong Church thesis: while small-scale machines have been realized, a universal quantum computer has yet to be physically constructed. The strong version of Church's thesis requires a physical computer, and therefore there is no quantum computer that yet satisfies the strong Church thesis. While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve many problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers. However, the capacity of quantum computers to accelerate classical algorithms has rigid upper bounds, and the overwhelming majority of classical calculations cannot be accelerated by the use of quantum computers. The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with error probability of at most 1/3.
As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP, and it is widely suspected that BQP ⊈ BPP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity. The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊈ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP). The relationship of BQP to the basic classical complexity classes can be summarized as P ⊆ BPP ⊆ BQP ⊆ PP ⊆ PSPACE. It is also known that BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P), which is a subclass of PSPACE. It has been speculated that further advances in physics could lead to even faster computers. For instance, it has been shown that a non-local hidden variable quantum computer based on Bohmian mechanics could implement a search of an N-item database in at most O(∛N) steps, a slight speedup over Grover's algorithm, which runs in O(√N) steps. Note, however, that neither search method would allow quantum computers to solve NP-complete problems in polynomial time. Theories of quantum gravity, such as M-theory and loop quantum gravity, may allow even faster computers to be built. However, defining computation in these theories is an open problem due to the problem of time; that is, within these physical theories there is currently no obvious way to describe what it means for an observer to submit input to a computer at one point in time and then receive output at a later point in time.
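The query-count scalings quoted above (linear classically, √N for Grover, ∛N for the hypothetical Bohmian machine) are easy to tabulate; a brief illustrative sketch:

```python
import math

# Oracle-query scaling for searching an N-item database, per the text:
# ~N classically, ~sqrt(N) for Grover, ~N^(1/3) for the hypothetical
# non-local hidden-variable (Bohmian) machine.
for N in (10**4, 10**6, 10**8):
    print(f"N={N:>9}: classical ~{N}, Grover ~{round(math.sqrt(N))}, "
          f"Bohmian ~{round(N ** (1 / 3))}")
```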
https://en.wikipedia.org/wiki?curid=25220
Quasigroup In mathematics, especially in abstract algebra, a quasigroup is an algebraic structure resembling a group in the sense that "division" is always possible. Quasigroups differ from groups mainly in that they are not necessarily associative. A quasigroup with an identity element is called a loop. There are at least two structurally equivalent formal definitions of quasigroup. One defines a quasigroup as a set with one binary operation, and the other, from universal algebra, defines a quasigroup as having three primitive operations. The homomorphic image of a quasigroup defined with a single binary operation, however, need not be a quasigroup. We begin with the first definition. A quasigroup is a non-empty set "Q" with a binary operation ∗ (that is, a magma), obeying the Latin square property. This states that, for each "a" and "b" in "Q", there exist unique elements "x" and "y" in "Q" such that both "a" ∗ "x" = "b" and "y" ∗ "a" = "b" hold. (In other words: Each element of the set occurs exactly once in each row and exactly once in each column of the quasigroup's multiplication table, or Cayley table. This property ensures that the Cayley table of a finite quasigroup, and, in particular, of a finite group, is a Latin square.) The uniqueness requirement can be replaced by the requirement that the magma be cancellative. The unique solutions to these equations are written "x" = "a" \ "b" and "y" = "b" / "a". The operations '\' and '/' are called, respectively, left and right division. The empty set equipped with the empty binary operation satisfies this definition of a quasigroup. Some authors accept the empty quasigroup but others explicitly exclude it. Given some algebraic structure, an identity is an equation in which all variables are tacitly universally quantified, and in which all operations are among the primitive operations proper to the structure. Algebraic structures axiomatized solely by identities are called varieties. Many standard results in universal algebra hold only for varieties. Quasigroups are varieties if left and right division are taken as primitive. A quasigroup ("Q", ∗, \, /) is a type (2,2,2) algebra (i.e., equipped with three binary operations) satisfying the identities "y" = "x" ∗ ("x" \ "y"), "y" = "x" \ ("x" ∗ "y"), "y" = ("y" / "x") ∗ "x", and "y" = ("y" ∗ "x") / "x". In other words: Multiplication and division in either order, one after the other, on the same side by the same element, have no net effect. Hence if ("Q", ∗) is a quasigroup according to the first definition, then ("Q", ∗, \, /) is the same quasigroup in the sense of universal algebra. And vice versa: if ("Q", ∗, \, /) is a quasigroup according to the sense of universal algebra, then ("Q", ∗) is a quasigroup according to the first definition. A loop is a quasigroup with an identity element; that is, an element, "e", such that "x" ∗ "e" = "x" and "e" ∗ "x" = "x" for all "x" in "Q". It follows that the identity element, "e", is unique, and that every element of "Q" has unique left and right inverses (which need not be the same). A quasigroup with an idempotent element is called a pique ("pointed idempotent quasigroup"); this is a weaker notion than a loop but common nonetheless because, for example, given an abelian group ("A", +), taking its subtraction operation as quasigroup multiplication yields a pique with the group identity (zero) turned into a "pointed idempotent". (That is, there is a principal isotopy ("x", "y", "z") ↦ ("x", −"y", "z").) A loop that is associative is a group. A group can have a non-associative pique isotope, but it cannot have a non-associative loop isotope. There are weaker associativity properties that have been given special names. For instance, a Bol loop is a loop that satisfies either "x" ∗ ("y" ∗ ("x" ∗ "z")) = ("x" ∗ ("y" ∗ "x")) ∗ "z" for all "x", "y", "z" (a left Bol loop), or else (("z" ∗ "x") ∗ "y") ∗ "x" = "z" ∗ (("x" ∗ "y") ∗ "x") for all "x", "y", "z" (a right Bol loop). A loop that is both a left and right Bol loop is a Moufang loop.
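To make the Latin square definition concrete, here is a small Python sketch (an illustration, not part of the article) that checks the Latin square property for a finite Cayley table and recovers the left and right division operations whose existence the definition guarantees.

```python
from itertools import product

def is_quasigroup(table):
    """table[x][y] = x * y on elements 0..n-1; check the Latin square
    property: every element appears exactly once in each row and column."""
    n = len(table)
    elems = set(range(n))
    rows_ok = all(set(row) == elems for row in table)
    cols_ok = all({table[x][y] for x in range(n)} == elems for y in range(n))
    return rows_ok and cols_ok

def left_divide(table, a, b):
    """a \\ b: the unique x with a * x = b."""
    return table[a].index(b)

def right_divide(table, b, a):
    """b / a: the unique y with y * a = b."""
    return [table[y][a] for y in range(len(table))].index(b)

# Subtraction mod 3 is a quasigroup (but not a group: it is not associative).
sub3 = [[(x - y) % 3 for y in range(3)] for x in range(3)]
assert is_quasigroup(sub3)

# Verify the identities y = x * (x \ y) and y = (y / x) * x.
for x, y in product(range(3), repeat=2):
    assert sub3[x][left_divide(sub3, x, y)] == y
    assert sub3[right_divide(sub3, y, x)][x] == y
print("subtraction mod 3 is a quasigroup; division identities hold")
```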
This is equivalent to any one of the following single Moufang identities holding for all "x", "y", "z": "z" ∗ ("x" ∗ ("z" ∗ "y")) = (("z" ∗ "x") ∗ "z") ∗ "y", "x" ∗ ("z" ∗ ("y" ∗ "z")) = (("x" ∗ "z") ∗ "y") ∗ "z", ("z" ∗ "x") ∗ ("y" ∗ "z") = ("z" ∗ ("x" ∗ "y")) ∗ "z", or ("z" ∗ "x") ∗ ("y" ∗ "z") = "z" ∗ (("x" ∗ "y") ∗ "z"). Smith (2007) names a number of important properties and subclasses. A quasigroup is semisymmetric if the following equivalent identities hold: "x" ∗ "y" = "y" / "x", "y" ∗ "x" = "x" \ "y", "x" = ("y" ∗ "x") ∗ "y", and "x" = "y" ∗ ("x" ∗ "y"). Although this class may seem special, every quasigroup "Q" induces a semisymmetric quasigroup "Q"Δ on the direct product cube "Q"^3 via an operation defined componentwise from ∗ and the conjugate division operations "y" // "x" = "x" / "y" and "y" \\ "x" = "x" \ "y". A narrower class is the totally symmetric quasigroup (sometimes abbreviated TS-quasigroup), in which all conjugates coincide as one operation: "x" ∗ "y" = "x" / "y" = "x" \ "y". Another way to define (the same notion of) totally symmetric quasigroup is as a semisymmetric quasigroup which is also commutative, i.e. "x" ∗ "y" = "y" ∗ "x". Idempotent totally symmetric quasigroups are precisely (i.e. in a bijection with) Steiner triple systems, so such a quasigroup is also called a Steiner quasigroup, and sometimes the latter is even abbreviated as squag; the term sloop is defined similarly for a Steiner quasigroup that is also a loop. Without idempotency, totally symmetric quasigroups correspond to the geometric notion of extended Steiner triple, also called a Generalized Elliptic Cubic Curve (GECC). A quasigroup ("Q", ∗) is called totally anti-symmetric if for all "c", "x", "y" in "Q", both of the following implications hold: ("c" ∗ "x") ∗ "y" = ("c" ∗ "y") ∗ "x" implies "x" = "y", and "x" ∗ "y" = "y" ∗ "x" implies "x" = "y". It is called weakly totally anti-symmetric if only the first implication holds. This property is required, for example, in the Damm algorithm. Quasigroups have the cancellation property: if "a" ∗ "b" = "a" ∗ "c", then "b" = "c". This follows from the uniqueness of left division of "a" ∗ "b" or "a" ∗ "c" by "a". Similarly, if "b" ∗ "a" = "c" ∗ "a", then "b" = "c". The definition of a quasigroup can be treated as conditions on the left and right multiplication operators L("x") and R("x"), defined by L("x")("y") = "x" ∗ "y" and R("x")("y") = "y" ∗ "x". The definition says that both mappings are bijections from "Q" to itself. A magma "Q" is a quasigroup precisely when all these operators, for every "x" in "Q", are bijective. The inverse mappings are left and right division, that is, L("x")^−1("y") = "x" \ "y" and R("x")^−1("y") = "y" / "x". In this notation the identities among the quasigroup's multiplication and division operations (stated in the section on universal algebra) are L("x")L("x")^−1 = 1, L("x")^−1L("x") = 1, R("x")R("x")^−1 = 1, and R("x")^−1R("x") = 1, where 1 denotes the identity mapping on "Q". The multiplication table of a finite quasigroup is a Latin square: an "n" × "n" table filled with "n" different symbols in such a way that each symbol occurs exactly once in each row and exactly once in each column. Conversely, every Latin square can be taken as the multiplication table of a quasigroup in many ways: the border row (containing the column headers) and the border column (containing the row headers) can each be any permutation of the elements. See small Latin squares and quasigroups. For a countably infinite quasigroup "Q", it is possible to imagine an infinite array in which every row and every column corresponds to some element "q" of "Q", and where the element "a" ∗ "b" is in the row corresponding to "a" and the column corresponding to "b". In this situation too, the Latin square property says that each row and each column of the infinite array will contain every possible value precisely once. For an uncountably infinite quasigroup, such as the group of non-zero real numbers under multiplication, the Latin square property still holds, although the name is somewhat unsatisfactory, as it is not possible to produce the array of combinations to which the above idea of an infinite array extends, since the real numbers cannot all be written in a sequence.
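The total anti-symmetry conditions above can be tested by brute force on a finite Cayley table. In the sketch below (illustrative; the example tables are ours), x ∗ y = (3x + y) mod 5 passes both implications, while subtraction mod 5 fails the first.

```python
from itertools import product

def is_totally_antisymmetric(table):
    """Brute-force check of both implications from the text:
    (c*x)*y == (c*y)*x  implies  x == y, and
    x*y == y*x          implies  x == y."""
    n = len(table)
    for c, x, y in product(range(n), repeat=3):
        if x != y and table[table[c][x]][y] == table[table[c][y]][x]:
            return False
    return all(x == y or table[x][y] != table[y][x]
               for x, y in product(range(n), repeat=2))

# x * y = (3x + y) mod 5 defines a quasigroup that passes both conditions...
ta5 = [[(3 * x + y) % 5 for y in range(5)] for x in range(5)]
print(is_totally_antisymmetric(ta5))   # True

# ...whereas subtraction mod 5 fails the first one, since
# (c - x) - y always equals (c - y) - x.
sub5 = [[(x - y) % 5 for y in range(5)] for x in range(5)]
print(is_totally_antisymmetric(sub5))  # False
```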
Every loop element has a unique left and right inverse given by "x"^λ = "e" / "x" and "x"^ρ = "x" \ "e", which satisfy "x"^λ ∗ "x" = "e" and "x" ∗ "x"^ρ = "e". A loop is said to have ("two-sided") "inverses" if "x"^λ = "x"^ρ for all "x". In this case the inverse element is usually denoted by "x"^−1. There are some stronger notions of inverses in loops which are often useful: a loop has the left inverse property if "x"^λ ∗ ("x" ∗ "y") = "y" for all "x" and "y", the right inverse property if ("y" ∗ "x") ∗ "x"^ρ = "y" for all "x" and "y", the antiautomorphic inverse property if ("x" ∗ "y")^λ = "y"^λ ∗ "x"^λ (or equivalently ("x" ∗ "y")^ρ = "y"^ρ ∗ "x"^ρ), and the weak inverse property if ("x" ∗ "y") ∗ "z" = "e" holds if and only if "x" ∗ ("y" ∗ "z") = "e". A loop has the "inverse property" if it has both the left and right inverse properties. Inverse property loops also have the antiautomorphic and weak inverse properties. In fact, any loop which satisfies any two of the above four identities has the inverse property and therefore satisfies all four. Any loop which satisfies the left, right, or antiautomorphic inverse properties automatically has two-sided inverses. A quasigroup or loop homomorphism is a map "f" between two quasigroups such that "f"("x" ∗ "y") = "f"("x") ∗ "f"("y"). Quasigroup homomorphisms necessarily preserve left and right division, as well as identity elements (if they exist). Let "Q" and "P" be quasigroups. A quasigroup homotopy from "Q" to "P" is a triple (α, β, γ) of maps from "Q" to "P" such that α("x") ∗ β("y") = γ("x" ∗ "y") for all "x", "y" in "Q". A quasigroup homomorphism is just a homotopy for which the three maps are equal. An isotopy is a homotopy for which each of the three maps is a bijection. Two quasigroups are isotopic if there is an isotopy between them. In terms of Latin squares, an isotopy is given by a permutation of rows α, a permutation of columns β, and a permutation on the underlying element set γ. An autotopy is an isotopy from a quasigroup to itself. The set of all autotopies of a quasigroup forms a group with the automorphism group as a subgroup. Every quasigroup is isotopic to a loop. If a loop is isotopic to a group, then it is isomorphic to that group and thus is itself a group. However, a quasigroup which is isotopic to a group need not be a group. For example, the quasigroup on R with multiplication given by "x" ∗ "y" = ("x" + "y")/2 is isotopic to the additive group (R, +), but is not itself a group. Every medial quasigroup is isotopic to an abelian group by the Bruck–Toyoda theorem. Left and right division are examples of forming a quasigroup by permuting the variables in the defining equation. From the original operation ∗ (i.e., "x" ∗ "y" = "z") we can form five new operations: "x" ∘ "y" := "y" ∗ "x" (the opposite operation), / and \, and their opposites. That makes a total of six quasigroup operations, which are called the conjugates or parastrophes of ∗. Any two of these operations are said to be "conjugate" or "parastrophic" to each other (and to themselves). If the set "Q" has two quasigroup operations, ∗ and ·, and one of them is isotopic to a conjugate of the other, the operations are said to be isostrophic to each other. There are also many other names for this relation of "isostrophe", e.g., paratopy. An "n"-ary quasigroup is a set "Q" with an "n"-ary operation "f": "Q"^n → "Q" such that the equation "f"("x"_1, ..., "x"_n) = "y" has a unique solution for any one variable if all the other "n" variables are specified arbitrarily. Polyadic or multiary means "n"-ary for some nonnegative integer "n". A 0-ary, or nullary, quasigroup is just a constant element of "Q". A 1-ary, or unary, quasigroup is a bijection of "Q" to itself. A binary, or 2-ary, quasigroup is an ordinary quasigroup. An example of a multiary quasigroup is an iterated group operation, "y" = "x"_1 · "x"_2 · ⋯ · "x"_n; it is not necessary to use parentheses to specify the order of operations because the group is associative. One can also form a multiary quasigroup by carrying out any sequence of the same or different group or quasigroup operations, if the order of operations is specified.
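The midpoint example above is easy to probe numerically. This sketch (an illustration, not from the article) verifies that x ∗ y = (x + y)/2 admits unique left and right division but is not associative, so it is a quasigroup that is not a group.

```python
# The midpoint operation x * y = (x + y) / 2 on the reals.
def op(x, y):
    return (x + y) / 2

def left_div(a, b):   # the unique x with a * x = b
    return 2 * b - a

def right_div(b, a):  # the unique y with y * a = b
    return 2 * b - a  # same formula here, since the operation is commutative

a, b = 3.0, 7.5
assert op(a, left_div(a, b)) == b
assert op(right_div(b, a), a) == b

# But the operation is not associative, so this quasigroup is not a group:
x, y, z = 1.0, 2.0, 4.0
print(op(op(x, y), z), op(x, op(y, z)))  # 2.75 vs 2.0 -- not equal
```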
There exist multiary quasigroups that cannot be represented in any of these ways. An "n"-ary quasigroup is irreducible if its operation cannot be factored into the composition of two operations in the following way: "f"("x"_1, ..., "x"_n) = "g"("x"_1, ..., "x"_(i−1), "h"("x"_i, ..., "x"_j), "x"_(j+1), ..., "x"_n), where 1 ≤ "i" < "j" ≤ "n" and ("i", "j") ≠ (1, "n"). Finite irreducible "n"-ary quasigroups exist for all "n" > 2; see Akivis and Goldberg (2001) for details. An "n"-ary quasigroup with an "n"-ary version of associativity is called an n-ary group. A right-quasigroup is a type (2,2) algebra satisfying both identities: "y" = ("y" / "x") ∗ "x"; "y" = ("y" ∗ "x") / "x". Similarly, a left-quasigroup is a type (2,2) algebra satisfying both identities: "y" = "x" ∗ ("x" \ "y"); "y" = "x" \ ("x" ∗ "y"). The number of isomorphism classes of quasigroups and loops has been tabulated for small orders.
https://en.wikipedia.org/wiki?curid=25223
Quaestor A quaestor (Latin "quaestor"; "investigator") was a public official in Ancient Rome. The position served different functions depending on the period. In the Roman Kingdom, "quaestores parricidii" (quaestors with judicial powers) were appointed by the king to investigate and handle murders. In the Roman Republic, quaestors (Lat. "quaestores") were elected officials who supervised the state treasury and conducted audits. It was the lowest ranking position in the "cursus honorum" (course of offices); as a result, in the political environment of Rome, it was quite common for aspiring politicians to take the position of quaestor as an early rung on the political ladder. In the Roman Empire, the position, which was initially replaced by the "praefectus" (prefect), reemerged during the late empire as the "quaestor sacri palatii", a position appointed by the emperor to lead the imperial council and respond to petitioners. "Quaestor" derives from the Latin verb "quaero, quaerere", meaning "to inquire". The job title has traditionally been understood as deriving from the original investigative function of the "quaestores parricidii". Ancient authors, perhaps influenced by etymology, reasoned that the investigative role of the "quaestores parricidii" had evolved to include financial matters, giving rise to the similarly named later offices. However, this connection has been questioned by modern scholars. The earliest quaestors were the "quaestores parricidii" (quaestors with judicial power), an office dating back to the Kingdom of Rome. They were chosen to investigate capital crimes, and may have been appointed as needed rather than holding a permanent position. Ancient authors disagree on the exact manner of selection for this office as well as on its earliest institution, with some dating it to the mythical reign of Romulus. In the Roman Republic, quaestors were elected officials who supervised the treasury and financial accounts of the state, its armies and its officers. The quaestors tasked with financial supervision were also called "quaestores aerarii", because they oversaw the "aerarium" (public treasury) in the Temple of Saturn. The earliest origins of the office are obscure, but by about 420 BCE there were four quaestors elected each year by the popular assembly. After 267 BCE, the number was expanded to ten. The office of quaestor, usually held by a former broad-striped tribune, was adopted as the first official post of the "cursus honorum" (course of offices), the standard sequence that made up a career in public service. Once elected as quaestor, a Roman man earned the right to sit in the Senate and began progressing through the "cursus honorum". Quaestors were not provided any "lictores" (civil-servant bodyguards) while in the city of Rome, but while in the provinces, they were allowed to have the "fasces" (a bound bundle of wooden rods symbolizing a magistrate's authority and jurisdiction). Every Roman consul, the highest elected official in the state, and every provincial governor was appointed a quaestor. Some quaestors were assigned to work in the city and others in the provinces, where their responsibilities could include service with the military. Some provincial quaestors were assigned as staff to military generals or served as second-in-command to governors in the Roman provinces. Still others were assigned to oversee military finances. Lucius Cornelius Sulla's reforms in 81 BCE raised the number of quaestors to 20, and the minimum age for a quaestorship was set at 30 for patricians (members of ruling-class families) and 32 for plebeians (commoners).
Additionally, the reforms granted quaestors automatic membership in the Senate upon being elected, whereas previously, membership in the Senate was granted only after censors revised the Senate rolls, which occurred less frequently than the annual induction of quaestors. The relationship between a consul and a quaestor was similar to that between a patron and a client; the quaestor was essentially a client of his superior. There was some level of mutual respect between the two individuals, but also a defined sense of place and knowledge of each other's roles. This relationship often continued past the designated terms of either individual, and the quaestor could be called upon for assistance or other needs by the consul. Breaking this pact or doing harm to a former superior would make the quaestor seem dishonorable or even treasonous. Constantine the Great created the office of "quaestor sacri palatii" (quaestor of the sacred palace), whose holder functioned as the Roman Empire's senior legal official. Emperor Justinian I also created the offices of the "quaesitor", a judicial and police official for Constantinople, and the "quaestor exercitus" (quaestor of the army), a short-lived joint military-administrative post covering the border of the lower Danube. The "quaestor sacri palatii" survived long into the Byzantine Empire, although its duties were altered to match those of the "quaesitor". The term is last attested in 14th-century Byzantium as a purely honorific title. In the early republic, there were two quaestors, and their duties were maintaining the public treasury, both taking in funds and deciding whom to pay them to. This continued until 421 BCE, when the number of quaestors was doubled to four. While two continued with the same duties as those who had come before, the other two had additional responsibilities, each being in service to one of the consuls. When consuls went to war, each was assigned a quaestor. The quaestor's main responsibilities involved the distribution of war spoils between the "aerarium", or public treasury, and the army. The key responsibility of the quaestor was the administration of public funds to higher-ranking officials in order to pursue their goals, whether these involved military conquests requiring funding for armies or public works projects. The office of quaestor was bound to a superior, whether a consul, governor, or other magistrate, and its duties often reflected those of the superior. For example, Gaius Gracchus was quaestor under the consul Orestes in Sardinia, and many of his responsibilities involved leading military forces. While not in direct command of the army, the quaestor would be in charge of organizational and lesser duties that were a necessary part of the war machine. During the reign of the Emperor Constantine I, the office of quaestor was reorganized into a judicial position known as the "quaestor sacri palatii". The office functioned as the primary legal adviser to the emperor, and was charged with the creation of laws as well as with answering petitions to the emperor. From 440 onward, the office of the quaestor worked in conjunction with the praetorian prefect of the East to oversee the supreme tribunal, or supreme court, at Constantinople. There they heard appeals from the various subordinate courts and governors. Under the Emperor Justinian I, an additional office named quaestor was created to control police and judicial matters in Constantinople. In this new position, a quaestor was responsible for wills, for the supervision of complaints by tenants against their landlords, and for oversight of the homeless.
Following the death of his brother Tiberius Gracchus, Gaius Gracchus stayed out of the political spotlight for a period, until he was forced to defend a close friend named Vettius in court. Upon hearing his oratorical abilities, the Senate began to fear that Gaius would arouse the people in the same manner as his brother, and appointed him quaestor to Gnaeus Aufidius Orestes in Sardinia to prevent him from becoming a tribune. Gaius used his position as quaestor to defeat his enemies and to gain a large amount of loyalty among his troops. Following an incident in which Gaius won the support of a local village to provide for his troops, the Senate attempted to keep Gaius in Sardinia indefinitely by reappointing Orestes to stay there. Gaius was not pleased by this and returned to Rome demanding an explanation, actions which eventually led to his election as a tribune of the people. Marcus Antonius (Mark Antony), best known for his civil war with Octavian, began his political career in the position of quaestor, after serving as a prefect in Syria and then as one of Julius Caesar's legates in Gaul. Through a combination of Caesar's favor and his oratorical skill in defending the legacy of Publius Clodius, Antony was able to win the quaestorship in 51 BCE. Caesar's efforts to reward his ally then led to Antony's election as augur and as tribune of the people in 50 BCE. Julius Caesar served as quaestor to the governor of Hispania Ulterior. His time as quaestor was largely uneventful, although when he later returned to the province as its governor he took major military action against the region's rebellious tribes and settled its disputes. Marcus Tullius Cicero served as quaestor to the governor of Sicily. He addressed major agricultural problems in the region and improved the purchase and sale of grain. Afterwards, the farmers loved Cicero and traveled to Rome to vote for him in elections every year.
https://en.wikipedia.org/wiki?curid=25225
Q.E.D. Q.E.D. or QED (sometimes italicized ()) is an initialism of the Latin phrase "", literally meaning "what was to be shown". Traditionally, the abbreviation is placed at the end of a mathematical proof or philosophical argument in print publications to indicate that the proof or the argument is complete, and hence is used with the meaning "thus it has been demonstrated". The phrase "quod erat demonstrandum" is a translation into Latin from the Greek (; abbreviated as "ΟΕΔ"). Translating from the Latin phrase into English yields "what was to be demonstrated". However, translating the Greek phrase can produce a slightly different meaning. In particular, since the verb also means "to show" or "to prove", a different translation from the Greek phrase would read "The very thing it was required to have shown." The Greek phrase was used by many early Greek mathematicians, including Euclid and Archimedes. The translated Latin phrase (and its associated acronym) was subsequently used by many post-Renaissance mathematicians and philosophers, including Galileo, Spinoza, Isaac Barrow and Isaac Newton. During the European Renaissance, scholars often wrote in Latin, and phrases such as "Q.E.D." were often used to conclude proofs. Perhaps the most famous use of "Q.E.D." in a philosophical argument is found in the "Ethics" of Baruch Spinoza, published posthumously in 1677. Written in Latin, it is considered by many to be Spinoza's "magnum opus". The style and system of the book are, as Spinoza says, "demonstrated in geometrical order", with axioms and definitions followed by propositions. For Spinoza, this is a considerable improvement over René Descartes's writing style in the "Meditations", which follows the form of a diary. There is another Latin phrase with a slightly different meaning, usually shortened similarly, but being less common in use. , originating from the Greek geometers' closing (), meaning "which had to be done". Because of the difference in meaning, the two phrases should not be confused. Euclid used the Greek original of Quod Erat Faciendum (Q.E.F.) to close propositions that were not proofs of theorems, but constructions of geometric objects. For example, Euclid's first proposition showing how to construct an equilateral triangle, given one side, is concluded this way. Many times, mathematicians will only utilize () faciendia as a result of the results of previous definitions or demonstradums. An idea of this is expressed within Topics (Aristotle), where he goes over the difference between a proposition and a problem. " For if it be put in this way, "'An animal that walks on two feet" is the definition of man, is it not?' or '"Animal" is the genus of man, is it not?' the result is a proposition: but if thus, 'Is "an animal that walks on two feet" a definition of man or no?' (or 'Is "animal" his genus or no?') the result is a problem." This is parallel to the idea of the difference between a Q.E.D. and a Q.E.F. A proposition (Q.E.D.) like this functions exactly the same way as it does for Euclid: the proposition is intended to prove a particular property, the problem (Q.E.F.) on the other hand requires multiple propositions in order to prove, or even construct an entirely new category. The problems are the dialectic's objective to solve. In a similar fashion, there are many different ways to construct a mathematical system to construct a triangle. There is only one triangle, however, and the triangle has definite properties. 
In this way, truth is sought within mathematics and philosophy in a congruous way. Euclid's "Elements" could be thought of as a document whose objective is to construct an icosahedron and a dodecahedron (Propositions 16 and 17 of Book XIII). Apollonius' "Conics" Book I could be thought of as a document whose objective is to construct a pair of hyperbolas from two bisecting lines (Proposition 50 of Book I). Propositions have historically been used in logic and mathematics to work towards solving a problem, and both fields reflect this in their foundations through Euclid and Aristotle. There is no common formal English equivalent, although the end of a proof may be announced with a simple statement such as "this completes the proof", "as required", "as desired", "as expected", "hence proved", "ergo", or other similar locutions. WWWWW or W5 – an abbreviation of "Which Was What Was Wanted" – has been used similarly. Often this is considered to be more tongue-in-cheek than "Q.E.D." or the Halmos tombstone symbol (see below). Due to the paramount importance of proofs in mathematics, mathematicians since the time of Euclid have developed conventions to demarcate the beginning and end of proofs. In printed English-language texts, the formal statements of theorems, lemmas, and propositions are set in italics by tradition. The beginning of a proof usually follows immediately thereafter, and is indicated by the word "proof" in boldface or italics. On the other hand, several symbolic conventions exist to indicate the end of a proof. While some authors still use the classical abbreviation Q.E.D., it is relatively uncommon in modern mathematical texts. Paul Halmos pioneered the use of a solid black square at the end of a proof as a Q.E.D. symbol, a practice which has become standard, although not universal. Halmos adopted this use of a symbol from magazine typography customs, in which simple geometric shapes had been used to indicate the end of an article. This symbol was later called the "tombstone", the "Halmos symbol", or even a "halmos" by mathematicians. The Halmos symbol is often drawn on a chalkboard to signal the end of a proof during a lecture, although this practice is not as common as its use in printed text. The tombstone symbol is produced in (La)TeX by the command \blacksquare (filled square, ■) and sometimes by \square or \Box (hollow square, □). In the AMS theorem environment for LaTeX, the hollow square is the default end-of-proof symbol (a minimal example appears at the end of this entry). Unicode explicitly provides the "end of proof" character, U+220E (∎). Some authors use other Unicode symbols to note the end of a proof, including ▮ (U+25AE, a black vertical rectangle) and ‣ (U+2023, a triangular bullet). Other authors have adopted two forward slashes (//) or four forward slashes (////). In other cases, authors have elected to segregate proofs typographically, by displaying them as indented blocks. In Joseph Heller's book "Catch-22", the Chaplain, having been told to examine a forged letter allegedly signed by him (which he knew he didn't sign), verified that his "name" was in fact there. His investigator replied, "Then you wrote it. Q.E.D." The chaplain said he didn't write it and that it wasn't his handwriting, to which the investigator replied, "Then you signed your name in somebody else's handwriting again." In the 1978 science-fiction radio comedy, and later in the television, novel, and film adaptations of "The Hitchhiker's Guide to the Galaxy", "Q.E.D."
is referred to in the Guide's entry for the babel fish, when it is claimed that the babel fish – which serves the "mind-bogglingly" useful purpose of being able to translate any spoken language when inserted into a person's ear – is used as evidence for both the existence and the non-existence of God. The exchange from the novel is as follows: "'I refuse to prove I exist,' says God, 'for proof denies faith, and without faith I am nothing.' 'But,' says Man, 'The babel fish is a dead giveaway, isn't it? It could not have evolved by chance. It proves you exist, and so therefore, by your own arguments, you don't. QED.' 'Oh dear,' says God, 'I hadn't thought of that,' and promptly vanishes in a puff of logic." In Neal Stephenson's 1999 novel "Cryptonomicon", Q.E.D. is used as a punchline to several humorous anecdotes, in which characters go to great lengths to prove something non-mathematical. Singer-songwriter Thomas Dolby's 1988 song "Airhead" includes the lyric, "Quod erat demonstrandum, baby," referring to the self-evident vacuousness of the eponymous subject; and in response, a female voice squeals, delightedly, "Oooh... you speak French!"
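To make the typesetting conventions described above concrete, here is a minimal LaTeX sketch (an illustration, not drawn from any particular text) using the amsthm package: the theorem statement is set in italics, and the proof environment is closed automatically with the hollow-square end-of-proof symbol; the filled Halmos tombstone can be substituted by redefining \qedsymbol, as the commented line shows.

    \documentclass{article}
    \usepackage{amssymb}  % provides \blacksquare
    \usepackage{amsthm}   % provides the proof environment and \qedsymbol

    % The default end-of-proof mark is the hollow square; uncomment the
    % next line to use the filled Halmos tombstone instead.
    % \renewcommand{\qedsymbol}{$\blacksquare$}

    \newtheorem{theorem}{Theorem}

    \begin{document}

    \begin{theorem}
    The sum of two even integers is even.
    \end{theorem}

    \begin{proof}
    If $m = 2a$ and $n = 2b$ for integers $a$ and $b$, then
    $m + n = 2(a + b)$, which is even.
    \end{proof}  % amsthm appends the end-of-proof symbol automatically

    \end{document}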
https://en.wikipedia.org/wiki?curid=25228
Quagga The quagga ("Equus quagga quagga") was a subspecies of plains zebra that lived in South Africa until becoming extinct late in the 19th century. It was long thought to be a distinct species, but early genetic studies supported it being a subspecies of the plains zebra. A more recent study suggested that it was merely the southernmost cline or ecotype of the species. The name was derived from its call, which sounded like "kwa-ha-ha". The quagga is believed to have been about 257 cm (8 ft 5 in) long and 125–135 cm (4 ft 1 in – 4 ft 5 in) tall at the shoulder. It was distinguished from other zebras by its limited pattern of primarily brown and white stripes, mainly on the front part of the body. The rear was brown and without stripes, and therefore more horse-like. The distribution of stripes varied considerably between individuals. Little is known about the quagga's behaviour, but it may have gathered into herds of 30–50. Quaggas were said to be wild and lively, yet were also considered more docile than Burchell's zebra. They were once found in great numbers in the Karoo of Cape Province and the southern part of the Orange Free State in South Africa. After the Dutch settlement of South Africa began, the quagga was heavily hunted, as it competed with domesticated animals for forage. Some were taken to zoos in Europe, but breeding programmes were unsuccessful. The last wild population lived in the Orange Free State, and the quagga was extinct in the wild by 1878. The last captive specimen died in Amsterdam on 12 August 1883. Only one quagga was ever photographed alive, and only 23 skins are preserved today. In 1984, the quagga was the first extinct animal to have its DNA analysed, and the Quagga Project is trying to recreate the phenotype of hair coat pattern and related characteristics by selectively breeding Burchell's zebras. The name "quagga" is derived from the Khoikhoi word for "zebra" and is onomatopoeic, being said to resemble the quagga's call, variously transcribed as "kwa-ha-ha", "kwahaah", or "oug-ga". The name is still used colloquially for the plains zebra. The quagga was originally classified as a distinct species, "Equus quagga", in 1778 by the Dutch naturalist Pieter Boddaert. Traditionally, the quagga and the other plains and mountain zebras were placed in the subgenus "Hippotigris". Much debate has occurred over the status of the quagga in relation to the plains zebra. It is poorly represented in the fossil record, and the identification of these fossils is uncertain, as they were collected at a time when the name "quagga" referred to all zebras. Fossil skulls of "Equus mauritanicus" from Algeria have been claimed to show affinities with the quagga and the plains zebra, but they may be too badly damaged to allow definite conclusions to be drawn from them. Quaggas have also been identified in cave art attributed to the San. Reginald Innes Pocock was perhaps the first to suggest, in 1902, that the quagga was a subspecies of the plains zebra. As the quagga was scientifically described and named before the plains zebra, the trinomial name for the quagga becomes "E. quagga quagga" under this scheme, and the other subspecies of plains zebra are placed under "E. quagga" as well. Historically, quagga taxonomy was further complicated because the extinct southernmost population of Burchell's zebra ("Equus quagga burchellii", formerly "Equus burchellii burchellii") was thought to be a distinct subspecies (also sometimes thought a full species, "E. burchellii").
The extant northern population, the "Damara zebra", was later named "Equus quagga antiquorum", which means that it is today also referred to as "E. q. burchellii", after it was realised that the two were the same taxon. The extinct population was long thought very close to the quagga, since it also showed limited striping on its hind parts. As an example of this, Shortridge placed the two in the now disused subgenus "Quagga" in 1934. Most experts now suggest that the two subspecies represent two ends of a cline. Different subspecies of plains zebra were recognised as members of "Equus quagga" by early researchers, though much confusion existed over which species were valid. Quagga subspecies were described on the basis of differences in striping patterns, but these differences have since been attributed to individual variation within the same populations. Some subspecies and even species, such as "E. q. danielli" and "Hippotigris isabellinus", were based only on illustrations (iconotypes) of aberrant quagga specimens. Some authors have described the quagga as a kind of wild horse rather than a zebra, and one craniometric study from 1980 seemed to confirm its affiliation with the horse ("Equus caballus"). Early morphological studies have since been shown to be erroneous; using skeletons from stuffed specimens can be problematic, as early taxidermists sometimes used donkey and horse skulls inside their mounts when the originals were unavailable. The quagga was the first extinct animal to have its DNA analysed, and this 1984 study launched the field of ancient DNA analysis. It confirmed that the quagga was more closely related to zebras than to horses, with the quagga and mountain zebra ("Equus zebra") sharing an ancestor 3–4 million years ago. An immunological study published the following year found the quagga to be closest to the plains zebra. A 1987 study suggested that the mtDNA of the quagga diverged at a rate of roughly 2% per million years, similar to other mammal species, and again confirmed the close relation to the plains zebra. Later morphological studies came to conflicting conclusions. A 1999 analysis of cranial measurements found that the quagga was as different from the plains zebra as the latter is from the mountain zebra. A 2004 study of skins and skulls instead suggested that the quagga was not a distinct species, but a subspecies of the plains zebra. In spite of these findings, many authors subsequently kept the plains zebra and the quagga as separate species. A genetic study published in 2005 confirmed the subspecific status of the quagga. It showed that the quagga had little genetic diversity, and that it diverged from the other plains zebra subspecies only between 120,000 and 290,000 years ago, during the Pleistocene, and possibly the penultimate glacial maximum. Its distinct coat pattern perhaps evolved rapidly because of geographical isolation and/or adaptation to a drier environment. In addition, plains zebra subspecies tend to have less striping the further south they live, and the quagga was the most southern-living of them all. Other large African ungulates diverged into separate species and subspecies during this period as well, probably because of the same climate shift. In the simplified cladogram based on the 2005 analysis, some taxa shared haplotypes and could, therefore, not be differentiated. A 2018 genetic study of plains zebra populations confirmed the quagga as a member of the species.
The study found no evidence for subspecific differentiation based on morphological differences between southern populations of zebras, including the quagga. Modern plains zebra populations may have originated from southern Africa, and the quagga appears to be less divergent from neighbouring populations than the northernmost living population in northeastern Uganda. Instead, the study supported a north–south genetic continuum for plains zebras, with the Ugandan population being the most distinct. Zebras from Namibia appear to be the closest genetically to the quagga. The quagga is believed to have been about 257 cm (8 ft 5 in) long and 125–135 cm (4 ft 1 in – 4 ft 5 in) tall at the shoulder. Its coat pattern was unique among equids: zebra-like in the front but more like a horse in the rear. It had brown and white stripes on the head and neck, brown upper parts and a white belly, tail and legs. The stripes were boldest on the head and neck and became gradually fainter further down the body, blending with the reddish brown of the back and flanks, until disappearing along the back. It appears to have had a high degree of polymorphism, with some individuals having almost no stripes and others having patterns similar to the extinct southern population of Burchell's zebra, where the stripes covered most of the body except for the hind parts, legs and belly. It also had a broad dark dorsal stripe on its back. It had a standing mane with brown and white stripes. The only quagga to have been photographed alive was a mare at the Zoological Society of London's Zoo. Five photographs of this specimen are known, taken between 1863 and 1870. On the basis of photographs and written descriptions, many observers suggest that the stripes on the quagga were light on a dark background, unlike other zebras. Reinhold Rau, pioneer of the Quagga Project, claimed that this is an optical illusion: that the base colour is a creamy white and that the stripes are thick and dark. Embryological evidence supports zebras being dark coloured with white as an addition. Living at the very southern end of the plains zebra's range, the quagga had a thick winter coat that moulted each year. Its skull was described as having a straight profile and a concave diastema, and as being relatively broad with a narrow occiput. Like other plains zebras, the quagga did not have a dewlap on its neck as the mountain zebra does. The 2004 morphological study found that the skeletal features of the southern Burchell's zebra population and the quagga overlapped, and that they were impossible to distinguish. Some specimens also appeared to be intermediate between the two in striping, and the extant Burchell's zebra population still exhibits limited striping. It can therefore be concluded that the two subspecies graded morphologically into each other. Today, some stuffed specimens of quaggas and southern Burchell's zebras are so similar that they are impossible to identify definitively as either, since no location data was recorded. The female specimens used in the study were larger than the males on average. The quagga was the southernmost distributed plains zebra, mainly living south of the Orange River. It was a grazer, and its habitat range was restricted to the grasslands and arid interior scrubland of the Karoo region of South Africa, today forming parts of the provinces of Northern Cape, Eastern Cape, Western Cape and the Free State. These areas were known for distinctive flora and fauna and high amounts of endemism.
Little is known about the behaviour of quaggas in the wild, and it is sometimes unclear what exact species of zebra is referred to in old reports. The only source that unequivocally describes the quagga in the Free State is that of the English military engineer and hunter Major Sir William Cornwallis Harris. His 1840 account reads as follows: The geographical range of the quagga does not appear to extend to the northward of the river Vaal. The animal was formerly extremely common within the colony; but, vanishing before the strides of civilisation, is now to be found in very limited numbers and on the borders only. Beyond, on those sultry plains which are completely taken possession of by wild beasts, and may with strict propriety be termed the domains of savage nature, it occurs in interminable herds; and, although never intermixing with its more elegant congeners, it is almost invariably to be found ranging with the white-tailed gnu and with the ostrich, for the society of which bird especially it evinces the most singular predilection. Moving slowly across the profile of the ocean-like horizon, uttering a shrill, barking neigh, of which its name forms a correct imitation, long files of quaggas continually remind the early traveller of a rival caravan on its march. Bands of many hundreds are thus frequently seen during their migration from the dreary and desolate plains of some portion of the interior, which has formed their secluded abode, seeking for those more luxuriant pastures where, during the summer months, various herbs thrust forth their leaves and flowers to form a green carpet, spangled with hues the most brilliant and diversified. Quaggas have been reported gathering into herds of 30–50, and sometimes travelled in a linear fashion. They may have been sympatric with Burchell's zebra between the Vaal and Orange rivers. This is disputed, and there is no evidence that they interbred. The quagga could also have shared a small portion of its range with Hartmann's mountain zebra ("Equus zebra hartmannae"). Quaggas were said to be lively and highly strung, especially the stallions. During the 1830s, quaggas were used as harness animals for carriages in London, the males probably being gelded to mitigate their volatile nature. Local farmers used them as guards for their livestock, as they were likely to attack intruders. On the other hand, captive quaggas in European zoos were said to be tamer and more docile than Burchell's zebra. One specimen was reported to have lived in captivity for 21 years and 4 months, dying in 1872. Since the practical function of striping has not been determined for zebras in general, it is unclear why the quagga lacked stripes on its hind parts. A cryptic function for protection from predators (stripes obscure the individual zebra in a herd) and from biting flies (which are less attracted to striped objects), as well as various social functions, have been proposed for zebras in general. Differences in hindquarter stripes may have aided species recognition during stampedes of mixed herds, so that members of one subspecies or species would follow their own kind. It has also been suggested that zebras developed striping patterns for thermoregulation, to cool themselves down, and that the quagga lost them because it lived in a cooler climate, although one problem with this hypothesis is that the mountain zebra lives in similar environments and has a bold striping pattern.
A 2014 study strongly supported the biting-fly hypothesis, and the quagga appears to have lived in areas with less fly activity than other zebras. As it was easy to find and kill, the quagga was hunted by early Dutch settlers and later by Afrikaners to provide meat or for their skins. The skins were traded or used locally. The quagga was probably vulnerable to extinction due to its limited distribution, and it may have competed with domestic livestock for forage. The quagga had disappeared from much of its range by the 1850s. The last population in the wild, in the Orange Free State, was extirpated in the late 1870s. The last known wild quagga died in 1878. Quaggas were captured and shipped to Europe, where they were displayed in zoos. Lord Morton tried to save the animal from extinction by starting a captive-breeding programme. He was only able to obtain a single male, which in desperation he bred with a female horse. This produced a female hybrid with zebra stripes on its back and legs. Lord Morton's mare was sold and was subsequently bred with a black stallion, resulting in offspring that again had zebra stripes. An account of this was published in 1820 by the Royal Society. This led to new ideas on telegony, referred to as pangenesis by Charles Darwin. At the close of the 19th century, the Scottish zoologist James Cossar Ewart argued against these ideas and proved, with several cross-breeding experiments, that zebra stripes can appear as an atavistic trait at any time. The quagga was long regarded as a suitable candidate for domestication, as it was considered the most docile of the striped horses. The earliest Dutch colonists in South Africa had already fantasized about this possibility, because their imported work horses did not perform very well in the extreme climate and regularly fell prey to the feared African horse sickness. In 1843, the English naturalist Charles Hamilton Smith wrote that the quagga was 'unquestionably best calculated for domestication, both as regards strength and docility'. Only a few descriptions of tame or domesticated quaggas in South Africa have been given. In Europe, the only confirmed cases are two stallions driven in a phaeton by Joseph Wilfred Parkins, sheriff of London in 1819–1820, and the quaggas and their hybrid offspring of London Zoo, which were used to pull a cart and transport vegetables from the market to the zoo. Nevertheless, the reveries continued long after the death of the last quagga in 1883. In 1889, the naturalist Henry Bryden wrote: "That an animal so beautiful, so capable of domestication and use, and to be found not long since in so great abundance, should have been allowed to be swept from the face of the earth, is surely a disgrace to our latter-day civilization." The specimen in London died in 1872 and the one in Berlin in 1875. The last captive quagga, a female in Amsterdam's Natura Artis Magistra zoo, lived there from 9 May 1867 until it died on 12 August 1883; its origin and cause of death are unclear. Its death was not recognised as signifying the extinction of its kind at the time, and the zoo requested another specimen; hunters believed it could still be found "closer to the interior" in the Cape Colony. Since locals used the term quagga to refer to all zebras, this may have led to the confusion. The extinction of the quagga was internationally accepted by the 1900 Convention for the Preservation of Wild Animals, Birds and Fish in Africa. The last specimen was featured on a Dutch stamp in 1988.
There are 23 known stuffed and mounted quagga specimens throughout the world, including a juvenile, two foals, and a foetus. In addition, a mounted head and neck, a foot, seven complete skeletons, and samples of various tissues remain. A 24th mounted specimen was destroyed in Königsberg, Germany, during World War II, and various skeletons and bones have also been lost. After the very close relationship between the quagga and extant plains zebras was discovered, Reinhold Rau started the Quagga Project in 1987 in South Africa to create a quagga-like zebra population by selectively breeding for a reduced stripe pattern from plains zebra stock, with the eventual aim of introducing them to the quagga's former range. To differentiate them from the quagga, the zebras of the project are referred to as "Rau quaggas". The founding population consisted of 19 individuals from Namibia and South Africa, chosen because they had reduced striping on the rear body and legs. The first foal of the project was born in 1988. Once a sufficiently quagga-like population has been created, participants in the project plan to release them in the Western Cape. Introduction of these quagga-like zebras could be part of a comprehensive restoration programme, including such ongoing efforts as eradication of non-native trees. Quaggas, wildebeest, and ostriches, which occurred together during historical times in a mutually beneficial association, could be kept together in areas where the indigenous vegetation has to be maintained by grazing. In early 2006, the third- and fourth-generation animals produced by the project were considered to look much like the depictions and preserved specimens of the quagga. This type of selective breeding is called breeding back. The practice is controversial, since the resulting zebras will resemble the quagga only in external appearance, but will be genetically different. The technology to use recovered DNA for cloning has not yet been developed.
https://en.wikipedia.org/wiki?curid=25229
QuickTime QuickTime is an extensible multimedia framework developed by Apple Inc., capable of handling various formats of digital video, picture, sound, panoramic images, and interactivity. First released in 1991, the latest Mac version, QuickTime X, is currently available on Mac OS X Snow Leopard and newer. Apple ceased support for the Windows version of QuickTime in 2016, and ceased support for QuickTime 7 on macOS in 2018. As of Mac OS X Lion, the underlying media framework for QuickTime, QTKit, was deprecated in favor of a newer graphics framework, AVFoundation, and was completely discontinued as of macOS Catalina. QuickTime is bundled with macOS. QuickTime for Microsoft Windows is downloadable as a standalone installation, and was bundled with Apple's iTunes prior to iTunes 10.5, but it is no longer supported, and therefore security vulnerabilities will no longer be patched. At the time of the Windows version's discontinuation, two such zero-day vulnerabilities (both of which permitted arbitrary code execution) had already been identified and publicly disclosed by Trend Micro; consequently, Trend Micro strongly advised users to uninstall the product from Windows systems. Software development kits (SDK) for QuickTime are available to the public with an Apple Developer Connection (ADC) subscription. QuickTime itself is available free of charge for both the macOS and Windows operating systems. There are some other free player applications that rely on the QuickTime framework, providing features not available in the basic QuickTime Player. For example, iTunes can export audio in WAV, AIFF, MP3, AAC, and Apple Lossless. In addition, macOS has a simple AppleScript that can be used to play a movie in full-screen mode, but since version 7.2, full-screen viewing is supported in the non-Pro version. QuickTime Player 7 is limited to only basic playback operations unless a QuickTime Pro license key is purchased from Apple. Until recently, Apple's professional applications (e.g. Final Cut Studio, Logic Studio) included a QuickTime Pro license. Pro keys are specific to the major version of QuickTime for which they are purchased and unlock additional features of the QuickTime Player application on macOS or Windows. The Pro key does not require any additional downloads; entering the registration code immediately unlocks the hidden features. QuickTime 7 is still available for download from Apple, but as of mid-2016, Apple stopped selling registration keys for the Pro version. Features enabled by the Pro license include, but are not limited to, a range of additional editing and export capabilities. Mac OS X Snow Leopard includes QuickTime X. QuickTime Player X lacks cut, copy and paste and will only export to four formats, but its limited export feature is free. Users do not have an option to upgrade to a Pro version of QuickTime X, but those who have already purchased QuickTime 7 Pro and are upgrading to Snow Leopard from a previous version of Mac OS X will have QuickTime 7 stored in the Utilities or a user-defined folder. Otherwise, users have to install QuickTime 7 from the "Optional Installs" directory of the Snow Leopard DVD after installing the OS. Mac OS X Lion and later also include QuickTime X. No installer for QuickTime 7 is included with these software packages, but users can download the QuickTime 7 installer from the Apple support site. QuickTime X on later versions of macOS supports cut, copy and paste functions similarly to the way QuickTime 7 Pro did; the interface, however, has been significantly modified to simplify these operations.
On September 24, 2018, Apple ended support for QuickTime 7 and QuickTime Pro, and updated many download and support pages on its website to state that QuickTime 7 "will not be compatible with future macOS releases". The QuickTime framework provides a broad set of media-handling services. As of early 2008, the framework hides many older codecs from the user, although the option to "Show legacy encoders" exists in QuickTime Preferences to enable them. The framework supports a wide range of file types and codecs natively. Due to macOS Mojave being the last version to include support for 32-bit APIs and Apple's plans to drop 32-bit application support in future macOS releases, many codecs are no longer supported in newer macOS releases, starting with macOS Catalina, which was released on October 7, 2019. PictureViewer is a component of QuickTime for Microsoft Windows and the Mac OS 8 and Mac OS 9 operating systems. It is used to view picture files from the still image formats that QuickTime supports. In macOS, it is replaced by Preview. As of version 7.7.9, the Windows version requires the user to open the "Uninstall or Change a Program" screen and "modify" the QuickTime 7 installation to include the "Legacy QuickTime Feature" called "QuickTime PictureViewer". The native file format for QuickTime video, the QuickTime File Format, specifies a multimedia container file that contains one or more tracks, each of which stores a particular type of data: audio, video, effects, or text (e.g. for subtitles). Each track either contains a digitally encoded media stream (using a specific format) or a data reference to a media stream located in another file. The ability to contain abstract data references for the media data, and the separation of the media data from the media offsets and the track edit lists, mean that QuickTime is particularly suited for editing, as it is capable of importing and editing in place (without data copying); a sketch of the underlying atom layout appears below. Other file formats that QuickTime supports natively (to varying degrees) include AIFF, WAV, DV-DIF, MP3, and MPEG program stream. With additional QuickTime Components, it can also support ASF, DivX Media Format, Flash Video, Matroska, Ogg, and many others. On February 11, 1998, the ISO approved the QuickTime file format as the basis of the MPEG‑4 file format. The MPEG-4 file format specification was created on the basis of the QuickTime format specification published in 2001. The MP4 (.mp4) file format was published in 2001 as a revision of the MPEG-4 Part 1: Systems specification published in 1999 (ISO/IEC 14496-1:2001). In 2003, the first version of the MP4 format was revised and replaced by MPEG-4 Part 14: MP4 file format (ISO/IEC 14496-14:2003). The MP4 file format was generalized into the ISO Base Media File Format (ISO/IEC 14496-12:2004), which defines a general structure for time-based media files. It in turn is used as the basis for other multimedia file formats (for example 3GP and Motion JPEG 2000). A list of all registered extensions for the ISO Base Media File Format is published on the official registration authority website www.mp4ra.org. The registration authority for code points in "MP4 family" files is Apple Computer Inc., which is named in Annex D (informative) of MPEG-4 Part 12. By 2000, MPEG-4 formats had become industry standards; QuickTime first supported them with QuickTime 6 in 2002. Accordingly, the MPEG-4 container is designed to capture, edit, archive, and distribute media, unlike the simple file-as-stream approach of MPEG-1 and MPEG-2.
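To illustrate the container layout just described, the following sketch walks the top-level atoms of a file and reads the major brand of a leading "ftyp" atom, one common way of telling a classic QuickTime movie (brand "qt  ") from an MP4 (brands such as "isom" or "mp42"). This is a minimal reading of the published atom layout (a 4-byte big-endian size followed by a 4-byte type code, with the usual 64-bit and to-end-of-file escape values), not a use of Apple's APIs, and the file name in the usage comment is hypothetical.

    import struct

    def atoms(path):
        """Yield (type, size) for each top-level atom of a QuickTime or
        ISO BMFF file. Each atom starts with a 4-byte big-endian size
        and a 4-byte type code; size == 1 means a 64-bit size follows,
        and size == 0 means the atom extends to the end of the file."""
        with open(path, "rb") as f:
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                size, fourcc = struct.unpack(">I4s", header)
                if size == 1:    # 64-bit size stored after the header
                    size = struct.unpack(">Q", f.read(8))[0]
                    body = size - 16
                elif size == 0:  # atom runs to the end of the file
                    yield fourcc.decode("latin-1"), None
                    break
                else:
                    body = size - 8
                yield fourcc.decode("latin-1"), size
                f.seek(body, 1)  # skip the atom's payload

    def major_brand(path):
        """Return the major brand of a leading 'ftyp' atom, or None if
        the file has no such atom (common for older .mov files)."""
        with open(path, "rb") as f:
            header = f.read(8)
            if len(header) < 8:
                return None
            _, fourcc = struct.unpack(">I4s", header)
            if fourcc != b"ftyp":
                return None
            return f.read(4).decode("latin-1")

    # Hypothetical usage:
    # for kind, size in atoms("example.mov"):
    #     print(kind, size)
    # print(major_brand("example.mov"))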
QuickTime 6 added limited support for MPEG-4, specifically encoding and decoding using Simple Profile (SP). Advanced Simple Profile (ASP) features, like B-frames, were unsupported (in contrast with, for example, encoders such as XviD or 3ivx). QuickTime 7 supports the H.264 encoder and decoder. Because both MOV and MP4 containers can use the same MPEG-4 codecs, they are mostly interchangeable in a QuickTime-only environment. MP4, being an international standard, has more support. This is especially true on hardware devices, such as the Sony PSP and various DVD players; on the software side, most DirectShow / Video for Windows codec packs include an MP4 parser, but not one for MOV. In QuickTime Pro's MPEG-4 Export dialog, an option called "Passthrough" allows a clean export to MP4 without affecting the audio or video streams. QuickTime 7 supports multi-channel AAC-LC and HE-AAC audio (used, for example, in the high-definition trailers on Apple's site) for both .MOV and .MP4 containers. Apple released the first version of QuickTime on December 2, 1991 as a multimedia add-on for System 6 and later. The lead developer of QuickTime, Bruce Leak, ran the first public demonstration at the May 1991 Worldwide Developers Conference, where he played Apple's famous 1984 advertisement in a window at a resolution of 320×240 pixels. The original release included several video codecs. The first commercial project produced using QuickTime 1.0 was the CD-ROM From Alice to Ocean. The first publicly visible use of QuickTime was Ben & Jerry's interactive factory tour (dubbed "The Rik & Joe Show" after its in-house developers). "The Rik and Joe Show" was demonstrated onstage at MacWorld in San Francisco when John Sculley announced QuickTime. Apple released QuickTime 1.5 for Mac OS in the latter part of 1992. It added the SuperMac-developed Cinepak vector-quantization video codec (initially known as Compact Video), which could play video at 320×240 resolution at 30 frames per second on a 25 MHz Motorola 68040 CPU. It also added "text" tracks, which allowed for captioning, lyrics and other potential uses. Apple contracted the San Francisco Canyon Company to port QuickTime to the Windows platform. Version 1.0 of QuickTime for Windows provided only a subset of the full QuickTime API, including only movie playback functions driven through the standard movie controller. QuickTime 1.6 came out the following year. Version 1.6.2 first incorporated the "QuickTime PowerPlug", which replaced some components with PowerPC-native code when running on PowerPC Macs. Apple released QuickTime 2.0 for System Software 7 in June 1994, the only version never released for free. It added support for music tracks, which contained the equivalent of MIDI data and which could drive a sound-synthesis engine built into QuickTime itself (using a limited set of instrument sounds licensed from Roland), or any external MIDI-compatible hardware, thereby producing sounds using only small amounts of movie data. Following Bruce Leak's departure to Web TV, the leadership of the QuickTime team was taken over by Peter Hoddie. QuickTime 2.0 for Windows appeared in November 1994 under the leadership of Paul Charlton.
As part of the development effort for cross-platform QuickTime, Charlton (as architect and technical lead), along with Michael Kellner and a small team including Keith Gurganus, ported a subset of the Macintosh Toolbox to Intel and other platforms (notably MIPS and SGI Unix variants) as the enabling infrastructure for the QuickTime Media Layer (QTML), which was first demonstrated at the Apple Worldwide Developers Conference (WWDC) in May 1996. The QTML later became the foundation for the Carbon API, which allowed legacy Macintosh applications to run on the Darwin kernel in Mac OS X. The next versions, 2.1 and 2.5, reverted to the previous model of giving QuickTime away for free. They improved the music support and added sprite tracks, which allowed the creation of complex animations while adding little more than the static sprite images to the size of the movie. QuickTime 2.5 also fully integrated QuickTime VR 2.0.1 into QuickTime as a QuickTime extension. On January 16, 1997, Apple released the QuickTime MPEG Extension (PPC only) as an add-on to QuickTime 2.5, which added software MPEG-1 playback capabilities to QuickTime. In 1994, Apple filed suit against software developer San Francisco Canyon for intellectual property infringement and breach of contract. Apple alleged that San Francisco Canyon had helped develop Video for Windows using several hundred lines of unlicensed QuickTime source code, which was subsequently unilaterally removed. Microsoft and Intel were added to the lawsuit in 1995. The suit ended in a settlement in 1997. The release of QuickTime 3.0 for Mac OS on March 30, 1998 introduced the now-standard revenue model of releasing the software for free, but with additional features of the Apple-provided MoviePlayer application that end users could only unlock by buying a QuickTime Pro license code. Since the "Pro" features were the same as the existing features in QuickTime 2.5, any previous user of QuickTime could continue to use an older version of the central MoviePlayer application for the remaining lifespan of Mac OS, to 2002; indeed, since these additional features were limited to MoviePlayer, any other QuickTime-compatible application remained unaffected. QuickTime 3.0 added support for graphics importer components that could read images from GIF, JPEG, TIFF and other file formats, and video output components, which served primarily to export movie data via FireWire. Apple also licensed several third-party technologies for inclusion in QuickTime 3.0, including the Sorenson Video codec for advanced video compression, the QDesign Music codec for substantial audio compression, and the complete Roland Sound Canvas instrument set and GS Format extensions for improved playback of MIDI music files. It also added video "effects" which programmers could apply in real time to video tracks. Some of these effects would even respond to mouse clicks by the user, as part of the new movie interaction support (known as wired movies). During the development cycle for QuickTime 3.0, part of the engineering team was working on a more advanced version of QuickTime to be known as QuickTime interactive, or QTi. Although similar in concept to the wired movies feature released as part of QuickTime 3.0, QuickTime interactive was much more ambitious: it allowed any QuickTime movie to be a fully interactive and programmable container for media.
A special track type was added that contained an interpreter for a custom programming language based on 68000 assembly language. This supported a comprehensive user interaction model for mouse and keyboard event handling, based in part on the AML language from the Apple Media Tool. The QuickTime interactive movie was to have been the playback format for the next generation of the HyperCard authoring tool. Both the QuickTime interactive and HyperCard 3.0 projects were canceled in order to concentrate engineering resources on streaming support for QuickTime 4.0, and neither was ever released to the public. Apple released QuickTime 4.0 on June 8, 1999 for Mac OS 7.5.5 through 8.6 (later Mac OS 9) and Windows 95, Windows 98, and Windows NT. Three minor updates (versions 4.0.1, 4.0.2, and 4.0.3) followed. It introduced a number of features that most users now consider basic. On December 17, 1999, Apple provided QuickTime 4.1, the first major update to QuickTime 4. Two minor versions (4.1.1 and 4.1.2) followed. The 4.1.x family brought several notable improvements. QuickTime 5 was one of the shortest-lived versions of QuickTime, released in April 2001 and superseded by QuickTime 6 a little over a year later. This version was the last to have greater capabilities under Mac OS 9 than under Mac OS X, and the last version of QuickTime to support Mac OS versions 7.5.5 through 8.5.1 on a PowerPC Mac, as well as Windows 95. Version 5.0 was initially released only for Mac OS and Mac OS X on April 14, 2001, and version 5.0.1 followed shortly thereafter on April 23, 2001, supporting the classic Mac OS, Mac OS X, and Windows. Three more updates to QuickTime 5 (versions 5.0.2, 5.0.4, and 5.0.5) were released over its short lifespan, delivering a series of incremental enhancements. On July 15, 2002, Apple released QuickTime 6.0, which was initially available for Mac OS 8.6 – 9.x, Mac OS X (10.1.5 minimum), and Windows 98, Me, 2000, and XP. Development of QuickTime 6 for Mac OS slowed considerably in early 2003, after the release of Mac OS X v10.2 in August 2002. QuickTime 6 for Mac OS continued on the 6.0.x path, eventually stopping with version 6.0.3. QuickTime 6.1 and 6.1.1 for Mac OS X v10.1 and Mac OS X v10.2 (released October 22, 2002) and QuickTime 6.1 for Windows (released March 31, 2003) offered ISO-compliant MPEG-4 file creation and fixed the CAN-2003-0168 vulnerability. Apple released QuickTime 6.2 exclusively for Mac OS X on April 29, 2003 to provide support for iTunes 4, which allowed AAC encoding for songs in the iTunes library. (iTunes was not available for Windows until October 2003.) Apple released QuickTime 6.3 on June 3, 2003, and QuickTime 6.4 on October 16, 2003 for Mac OS X v10.2, Mac OS X v10.3, and Windows, each delivering further incremental features. On December 18, 2003, Apple released QuickTime 6.5, supporting the same systems as version 6.4; the 6.5 family added several new features. Versions 6.5.1 and 6.5.2 followed on April 28, 2004 and October 27, 2004. These versions would be the last to support Windows 98 and Me. QuickTime 6.5.3 was released on October 12, 2005 for Mac OS X v10.2.8, after the release of QuickTime 7.0, fixing a number of security issues. QuickTime 7.0 was initially released on April 29, 2005 in conjunction with Mac OS X v10.4 (for versions 10.3.9 and 10.4.x). After a couple of preview Windows releases, Apple released 7.0.2 as the first stable Windows release on September 7, 2005, for Windows 2000 and Windows XP.
Version 7.0.4, released on January 10, 2006, was the first universal binary version, but it suffered from numerous bugs, including a buffer overrun that proved problematic for many users. Apple dropped support for Windows 2000 with the release of QuickTime 7.2 on July 11, 2007. The last version available for Windows 2000, 7.1.6, contains numerous security vulnerabilities. References to this version have been removed from the QuickTime site, but it can be downloaded from Apple's support section. Apple has not indicated that it will provide any further security updates for older versions. QuickTime 7.2 is the first version for Windows Vista. Apple dropped support for Flash content in QuickTime 7.3, breaking content that relied on Flash for interactivity or animation tracks. Security concerns appear to have been part of the decision. Flash .flv files can still be played in QuickTime if the free Perian plugin is added. QuickTime 7.3 requires a processor that supports SSE; QuickTime 7.4 does not. Unlike versions 7.2 and 7.3, QuickTime 7.4 cannot be installed on Windows XP without service packs or with Service Pack 1/1A installed (its setup program checks whether Service Pack 2 is installed). QuickTime 7.5 was released on June 10, 2008. QuickTime 7.5.5 was released on September 9, 2008; it requires Mac OS X v10.4 or higher, dropping 10.3 support. QuickTime 7.6 was released on January 21, 2009. QuickTime 7.7 was released on August 23, 2011. QuickTime 7.6.6 is available for OS X versions from 10.6.3 Snow Leopard through 10.14 Mojave, as 10.15 Catalina only supports 64-bit applications. There is a 7.7 release of QuickTime 7 for OS X, but it is only for 10.5 Leopard. QuickTime 7.7.6 is the last release for Windows XP; as with all versions since 7.4, it can be installed only when Service Pack 2 or 3 is present. QuickTime 7.7.9 is the last Windows release of QuickTime; Apple stopped supporting QuickTime on Windows afterwards. Safari 12, released on September 17, 2018 for macOS Sierra and macOS High Sierra (and the default browser included on macOS Mojave, released on September 24, 2018), dropped support for NPAPI plug-ins (except for Adobe Flash), and with them its support for QuickTime 7's web plugin. On September 24, 2018, Apple dropped support for the macOS version of QuickTime 7. This effectively marked the end of the QuickTime 7 technology in Apple's codec and web development. Starting with macOS Catalina, QuickTime 7 applications and image, audio and video codecs are no longer compatible with macOS or supported by Apple. QuickTime X (pronounced "QuickTime Ten") was initially demonstrated at WWDC on June 8, 2009, and shipped with Mac OS X v10.6. It includes visual chapters, conversion, sharing to YouTube, video editing, capture of video and audio streams, screen recording, GPU acceleration, and live streaming. However, it removed support for various widely used formats; in particular, the omission of MIDI caused significant inconvenience to many musicians and their potential audiences. In addition, a screen recorder is featured which records whatever is on the screen. However, to prevent bootlegging, the user is unable to record video played in DVD Player or content purchased from iTunes; the recording controls are greyed out for such sources. The reason for the jump in numbering from 7 to 10 (X) was to indicate a break with previous versions of the product, similar to the one that Mac OS X represented.
QuickTime X is fundamentally different from previous versions in that it is provided as a Cocoa (Objective-C) framework and breaks compatibility with the C-based APIs used by QuickTime 7 and earlier. QuickTime X was completely rewritten to implement modern audio and video codecs in 64-bit. QuickTime X is a combination of two technologies: the QuickTime Kit framework (QTKit) and the QuickTime X Player. QTKit is used by the QuickTime player to display media. QuickTime X does not implement all of the functionality of the previous QuickTime, nor all of its codecs. When QuickTime X attempts to operate with a 32-bit codec or perform an operation not supported by QuickTime X, it starts a 32-bit helper process to perform the requested operation. The website "Ars Technica" revealed that QuickTime X uses QuickTime 7.x via QTKit to run older codecs that have not made the transition to 64-bit. QuickTime X does not support .SRT subtitle files. It has been suggested that using the program Subler, which can be downloaded from Bitbucket, to interleave the MP4 and SRT files will work around this omission. QuickTime 7 may still be required to support older formats on Snow Leopard, such as QTVR, interactive QuickTime movies, and MIDI files. In such cases, a compatible version of QuickTime 7 is included on the Snow Leopard installation disc and may be installed side by side with QuickTime X. Users who have a Pro license for QuickTime 7 can then activate their license. A Snow Leopard-compatible version of QuickTime 7 may also be downloaded from the Apple Support website. The software was updated with the release of Mavericks and, as of August 2018, the current version is v10.5. It contains more sharing options (email, YouTube, Facebook, Flickr, etc.), more export options (including web export in multiple sizes, and export for iPhone 4/iPad/Apple TV, but not Apple TV 2), a new way of fast-forwarding through a video, and mouse support for scrolling. Starting with macOS Catalina, Apple provides only QuickTime X; QuickTime 7 was never updated to 64-bit, which affects the many applications and image, audio and video formats that relied on it, as well as compatibility with those codecs in QuickTime X. QuickTime X previously provided the QTKit framework on Mac OS 10.6 through 10.14. Since the release of macOS 10.15, AVKit and AVFoundation are used instead (due to the removal of the 32-bit audio and video codecs, image formats and APIs supported by QuickTime 7). QuickTime consists of two major subsystems: the Movie Toolbox and the Image Compression Manager. The Movie Toolbox consists of a general API for handling time-based data, while the Image Compression Manager provides services for dealing with compressed raster data as produced by video and photo codecs. Developers can use the QuickTime software development kit (SDK) to develop multimedia applications for Mac or Windows with the C programming language or with the Java programming language (see QuickTime for Java), or, under Windows, using COM/ActiveX from a language supporting it. The COM/ActiveX option was introduced as part of QuickTime 7 for Windows and is intended for programmers who want to build standalone Windows applications using high-level QuickTime movie playback and control with some import, export, and editing capabilities. This is considerably easier than mastering the original QuickTime C API. QuickTime 7 for Mac introduced the QuickTime Kit (aka QTKit), a developer framework intended to replace previous APIs for Cocoa developers.
This framework is for Mac only, and exists as Objective-C abstractions around a subset of the C interface. Mac OS X v10.5 extends QTKit to full 64-bit support. QTKit multiplexes between QuickTime X and QuickTime 7 behind the scenes, so that the developer need not worry about which version of QuickTime is being used. QuickTime 7.4 was found to disable Adobe's video compositing program After Effects. This was due to the DRM built into version 7.4 to support movie rentals from iTunes; QuickTime 7.4.1 resolved the issue. Versions 4.0 through 7.3 contained a buffer overflow bug which could compromise the security of a PC using either the QuickTime Streaming Media client or the QuickTime player itself. The bug was fixed in version 7.3.1. QuickTime 7.5.5 and earlier are known to have a number of significant vulnerabilities that allow a remote attacker to execute arbitrary code or cause a denial of service (out-of-bounds memory access and application crash) on a targeted system. These include six types of buffer overflow, as well as data conversion, signed-versus-unsigned integer mismatch, and uninitialized memory pointer flaws. QuickTime 7.6 was found to disable Mac users' ability to play certain games, such as "Civilization IV" and "The Sims 2"; fixes are available from the publisher, Aspyr. QuickTime 7 lacks support for the H.264 Sample Aspect Ratio. QuickTime X does not have this limitation, but many Apple products (such as Apple TV) still use the older QuickTime 7 engine. iTunes previously utilized QuickTime 7, but as of October 2019 it no longer relies on the older engine. QuickTime 7.7.x on Windows fails to encode H.264 on multi-core systems with more than approximately 20 threads, e.g. an HP Z820 with two 8-core CPUs. A suggested workaround is to disable hyper-threading or limit the number of CPU cores. Encoding speed and stability depend on the scaling of the player window. On April 14, 2016, Christopher Budd of Trend Micro announced that Apple had ceased all security patching of QuickTime for Windows, and called attention to two Zero Day Initiative advisories, ZDI-16-241 and ZDI-16-242, issued by Trend Micro's subsidiary TippingPoint on that same day. Also on that same day, the United States Computer Emergency Readiness Team issued alert TA16-105A, encapsulating Budd's announcement and the Zero Day Initiative advisories. Apple responded with a statement that QuickTime 7 for Windows was no longer supported by Apple.
https://en.wikipedia.org/wiki?curid=25231
Quartz Quartz is a hard, crystalline mineral composed of silicon and oxygen atoms. The atoms are linked in a continuous framework of SiO4 silicon–oxygen tetrahedra, with each oxygen being shared between two tetrahedra; since each silicon is bonded to four oxygens and each shared oxygen counts only half toward a given silicon, the overall chemical formula is SiO2. Quartz is the second most abundant mineral in Earth's continental crust, behind feldspar. Quartz exists in two forms, the normal α-quartz and the high-temperature β-quartz, both of which are chiral. The transformation from α-quartz to β-quartz takes place abruptly at 573 °C. Since the transformation is accompanied by a significant change in volume, it can easily induce fracturing of ceramics or rocks passing through this temperature threshold. There are many different varieties of quartz, several of which are semi-precious gemstones. Since antiquity, varieties of quartz have been the most commonly used minerals in the making of jewelry and hardstone carvings, especially in Eurasia. The word "quartz" is derived from the German word "Quarz", which had the same form in the first half of the 14th century in Middle High German and in East Central German, and which came from the Polish dialect term "kwardy", corresponding to the Czech term "tvrdý" ("hard"). The Ancient Greeks referred to quartz as κρύσταλλος ("krustallos"), derived from the Ancient Greek "κρύος" ("kruos") meaning "icy cold", because some philosophers (including Theophrastus) apparently believed the mineral to be a form of supercooled ice. Today, the term "rock crystal" is sometimes used as an alternative name for the purest form of quartz. Quartz belongs to the trigonal crystal system. The ideal crystal shape is a six-sided prism terminating with six-sided pyramids at each end. In nature, quartz crystals are often twinned (with twin right-handed and left-handed quartz crystals), distorted, or so intergrown with adjacent crystals of quartz or other minerals as to show only part of this shape, or to lack obvious crystal faces altogether and appear massive. Well-formed crystals typically form in a 'bed' that has unconstrained growth into a void; usually the crystals are attached at the other end to a matrix, and only one termination pyramid is present. However, doubly terminated crystals do occur where they develop freely without attachment, for instance within gypsum. A quartz geode is such a situation, where the void is approximately spherical in shape, lined with a bed of crystals pointing inward. α-quartz crystallizes in the trigonal crystal system, space group "P"3121 or "P"3221 depending on the chirality. β-quartz belongs to the hexagonal system, space group "P"6222 or "P"6422, respectively. These space groups are truly chiral (each belongs to one of the 11 enantiomorphous pairs). Both α-quartz and β-quartz are examples of chiral crystal structures composed of achiral building blocks (SiO4 tetrahedra in the present case). The transformation between α- and β-quartz involves only a comparatively minor rotation of the tetrahedra with respect to one another, without a change in the way they are linked. Although many of the varietal names historically arose from the color of the mineral, current scientific naming schemes refer primarily to the microstructure of the mineral. Color is a secondary identifier for the cryptocrystalline minerals, although it is a primary identifier for the macrocrystalline varieties.
Pure quartz, traditionally called rock crystal or clear quartz, is colorless and transparent or translucent, and has often been used for hardstone carvings, such as the Lothair Crystal. Common colored varieties include citrine, rose quartz, amethyst, smoky quartz, milky quartz, and others. These color differentiations arise from the presence of impurities, which change the molecular orbitals and cause some electronic transitions to take place in the visible spectrum, producing color. Polymorphs of quartz include: α-quartz (low), β-quartz, tridymite, moganite, cristobalite, coesite, and stishovite. The most important distinction between types of quartz is that of "macrocrystalline" (individual crystals visible to the unaided eye) and the microcrystalline or cryptocrystalline varieties (aggregates of crystals visible only under high magnification). The cryptocrystalline varieties are either translucent or mostly opaque, while the transparent varieties tend to be macrocrystalline. Chalcedony is a cryptocrystalline form of silica consisting of fine intergrowths of both quartz and its monoclinic polymorph moganite. Other opaque gemstone varieties of quartz, or mixed rocks including quartz, often including contrasting bands or patterns of color, are agate, carnelian or sard, onyx, heliotrope, and jasper. Amethyst is a form of quartz that ranges from a bright vivid violet to a dark or dull lavender shade. The world's largest deposits of amethysts can be found in Brazil, Mexico, Uruguay, Russia, France, Namibia and Morocco. Sometimes amethyst and citrine are found growing in the same crystal, which is then referred to as ametrine. Amethyst forms when iron is present in the environment where the crystal grows. Blue quartz contains inclusions of fibrous magnesio-riebeckite or crocidolite. Inclusions of the mineral dumortierite within quartz pieces often result in silky-appearing splotches with a blue hue; shades of purple or grey are sometimes also found. "Dumortierite quartz" (sometimes called "blue quartz") will sometimes feature contrasting light and dark color zones across the material. Certain high-quality forms of blue quartz attract particular interest as collectible gemstones in India and the United States. Citrine is a variety of quartz whose color ranges from a pale yellow to brown due to ferric impurities. Natural citrines are rare; most commercial citrines are heat-treated amethysts or smoky quartzes. However, a heat-treated amethyst will have small lines in the crystal, as opposed to a natural citrine's cloudy or smoky appearance. It is nearly impossible to differentiate between cut citrine and yellow topaz visually, but they differ in hardness. Brazil is the leading producer of citrine, with much of its production coming from the state of Rio Grande do Sul. The name is derived from the Latin word "citrina", which means "yellow" and is also the origin of the word "citron". Sometimes citrine and amethyst can be found together in the same crystal, which is then referred to as ametrine. Citrine has been referred to as the "merchant's stone" or "money stone", due to a superstition that it would bring prosperity. Citrine was first appreciated as a golden-yellow gemstone in Greece between 300 and 150 BC, during the Hellenistic Age. Yellow quartz was used prior to that to decorate jewelry and tools, but it was not highly sought after. Milk quartz or milky quartz is the most common variety of crystalline quartz.
The white color is caused by minute fluid inclusions of gas, liquid, or both, trapped during crystal formation, making it of little value for optical and quality gemstone applications.

Rose quartz is a type of quartz which exhibits a pale pink to rose red hue. The color is usually attributed to trace amounts of titanium, iron, or manganese in the material. Some rose quartz contains microscopic rutile needles which produce an asterism in transmitted light. Recent X-ray diffraction studies suggest that the color is due to thin microscopic fibers of possibly dumortierite within the quartz. Additionally, there is a rare type of pink quartz (also frequently called crystalline rose quartz) whose color is thought to be caused by trace amounts of phosphate or aluminium. The color in these crystals is apparently photosensitive and subject to fading. The first crystals were found in a pegmatite near Rumford, Maine, US, and in Minas Gerais, Brazil.

Smoky quartz is a gray, translucent variety of quartz. It ranges in clarity from almost complete transparency to a brownish-gray crystal that is almost opaque. Some can also be black. The translucency results from natural irradiation creating free silicon within the crystal.

Prasiolite, also known as "vermarine", is a variety of quartz that is green in color. Since 1950, almost all natural prasiolite has come from a small Brazilian mine, but it is also seen in Lower Silesia in Poland. Naturally occurring prasiolite is also found in the Thunder Bay area of Canada. It is a rare mineral in nature; most green quartz is heat-treated amethyst.

Not all varieties of quartz are naturally occurring. Some clear quartz crystals can be treated using heat or gamma-irradiation to induce color where it would not otherwise have occurred naturally. Susceptibility to such treatments depends on the location from which the quartz was mined. Prasiolite, an olive-colored material, is produced by heat treatment; natural prasiolite has also been observed in Lower Silesia in Poland. Although citrine occurs naturally, the majority is the result of heat-treating amethyst or smoky quartz. Carnelian is widely heat-treated to deepen its color. Because natural quartz is often twinned, synthetic quartz is produced for use in industry. Large, flawless single crystals are synthesized in an autoclave via the hydrothermal process; emeralds are also synthesized in this fashion. Like other crystals, quartz may be coated with metal vapors to give it an attractive sheen.

Quartz is a defining constituent of granite and other felsic igneous rocks. It is very common in sedimentary rocks such as sandstone and shale. It is a common constituent of schist, gneiss, quartzite and other metamorphic rocks. Quartz has the lowest potential for weathering in the Goldich dissolution series and consequently is very common as a residual mineral in stream sediments and residual soils. Generally, a high proportion of quartz suggests a "mature" rock, since it indicates the rock has been heavily reworked and quartz was the primary mineral that endured heavy weathering. While the majority of quartz crystallizes from molten magma, much quartz also chemically precipitates from hot hydrothermal veins as gangue, sometimes with ore minerals like gold, silver and copper. Large crystals of quartz are found in magmatic pegmatites. Well-formed crystals may reach several meters in length and weigh hundreds of kilograms.
Naturally occurring quartz crystals of extremely high purity, necessary for the crucibles and other equipment used for growing silicon wafers in the semiconductor industry, are expensive and rare. A major mining location for high purity quartz is the Spruce Pine Gem Mine in Spruce Pine, North Carolina, United States. Quartz may also be found in Caldoveiro Peak, in Asturias, Spain. The largest documented single crystal of quartz was found near Itapore, Goiás, Brazil; it measured approximately 6.1×1.5×1.5 m and weighed 39,916 kilograms.

Quartz is extracted from open pit mines. Miners use explosives only on the rare occasions when they need to expose a deep seam of quartz. Although quartz is known for its hardness, it is easily damaged by a sudden change in temperature, such as that caused by a blast. Instead, mining operations use bulldozers and backhoes to remove soil and clay and expose the quartz crystal veins in the rock.

Tridymite and cristobalite are high-temperature polymorphs of SiO2 that occur in high-silica volcanic rocks. Coesite is a denser polymorph of SiO2 found in some meteorite impact sites and in metamorphic rocks formed at pressures greater than those typical of the Earth's crust. Stishovite is a yet denser and higher-pressure polymorph of SiO2 found in some meteorite impact sites. Lechatelierite is an amorphous silica glass SiO2 which is formed by lightning strikes in quartz sand.

As quartz is a form of silica, it is a possible cause for concern in various workplaces. Cutting, grinding, chipping, sanding, drilling, and polishing natural and manufactured stone products can release hazardous levels of very small, crystalline silica dust particles into the air that workers breathe. Crystalline silica of respirable size is a recognized human carcinogen and may lead to other diseases of the lungs such as silicosis and pulmonary fibrosis.

The word "quartz" comes from the German "Quarz", which is of Slavic origin (Czech miners called it "křemen"). Other sources attribute the word's origin to the Saxon word "Querkluftertz", meaning "cross-vein ore".

Quartz is the most common material identified as the mystical substance maban in Australian Aboriginal mythology. It is found regularly in passage tomb cemeteries in Europe in a burial context, such as Newgrange or Carrowmore in Ireland. The Irish word for quartz is "grianchloch", which means "sunstone". Quartz was also used in Prehistoric Ireland, as well as in many other countries, for stone tools; both vein quartz and rock crystal were knapped as part of the lithic technology of the prehistoric peoples.

While jade has been the most prized semi-precious stone for carving in East Asia and Pre-Columbian America since earliest times, in Europe and the Middle East the different varieties of quartz were the most commonly used for the various types of jewelry and hardstone carving, including engraved gems and cameo gems, rock crystal vases, and extravagant vessels. The tradition continued to produce objects that were very highly valued until the mid-19th century, when it largely fell from fashion except in jewelry. Cameo technique exploits the bands of color in onyx and other varieties.

Roman naturalist Pliny the Elder believed quartz to be water ice, permanently frozen after great lengths of time. (The word "crystal" comes from the Greek word "κρύσταλλος", "ice".)
He supported this idea by saying that quartz is found near glaciers in the Alps, but not on volcanic mountains, and that large quartz crystals were fashioned into spheres to cool the hands. This idea persisted until at least the 17th century. He also knew of the ability of quartz to split light into a spectrum.

In the 17th century, Nicolas Steno's study of quartz paved the way for modern crystallography. He discovered that regardless of a quartz crystal's size or shape, its long prism faces always joined at a perfect 60° angle. Quartz's piezoelectric properties were discovered by Jacques and Pierre Curie in 1880. The quartz oscillator or resonator was first developed by Walter Guyton Cady in 1921. George Washington Pierce designed and patented quartz crystal oscillators in 1923. Warren Marrison created the first quartz oscillator clock based on the work of Cady and Pierce in 1927.

Efforts to synthesize quartz began in the mid-nineteenth century as scientists attempted to create minerals under laboratory conditions that mimicked the conditions in which the minerals formed in nature: German geologist Karl Emil von Schafhäutl (1803–1890) was the first person to synthesize quartz when in 1845 he created microscopic quartz crystals in a pressure cooker. However, the quality and size of the crystals that were produced by these early efforts were poor. By the 1930s, the electronics industry had become dependent on quartz crystals. The only source of suitable crystals was Brazil; however, World War II disrupted the supplies from Brazil, so nations attempted to synthesize quartz on a commercial scale. German mineralogist Richard Nacken (1884–1971) achieved some success during the 1930s and 1940s. After the war, many laboratories attempted to grow large quartz crystals. In the United States, the U.S. Army Signal Corps contracted with Bell Laboratories and with the Brush Development Company of Cleveland, Ohio to synthesize crystals following Nacken's lead. (Prior to World War II, Brush Development produced piezoelectric crystals for record players.) By 1948, Brush Development had grown crystals that were 1.5 inches (3.8 cm) in diameter, the largest to date.
https://en.wikipedia.org/wiki?curid=25233
Quadrivium
In liberal arts education, the quadrivium (plural: quadrivia) consists of the four subjects or arts (namely arithmetic, geometry, music, and astronomy) taught after the trivium. The word is Latin, meaning "four ways", and its use for the four subjects has been attributed to Boethius or Cassiodorus in the 6th century. Together, the trivium and the quadrivium comprised the seven liberal arts (based on thinking skills), as distinguished from the practical arts (such as medicine and architecture).

The quadrivium consisted of arithmetic, geometry, music, and astronomy. These followed the preparatory work of the trivium, consisting of grammar, logic, and rhetoric. In turn, the quadrivium was considered the foundation for the study of philosophy (sometimes called the liberal art "par excellence") and theology. The quadrivium was the upper division of the medieval education in the liberal arts, which comprised arithmetic (number), geometry (number in space), music (number in time), and astronomy (number in space and time). Educationally, the trivium and the quadrivium imparted to the student the seven liberal arts (essential thinking skills) of classical antiquity. These four studies compose the secondary part of the curriculum outlined by Plato in "The Republic" and are described in the seventh book of that work (in the order Arithmetic, Geometry, Astronomy, Music).

The quadrivium is implicit in early Pythagorean writings and in the "De nuptiis" of Martianus Capella, although the term "quadrivium" was not used until Boethius, early in the sixth century. As Proclus wrote: The Pythagoreans considered all mathematical science to be divided into four parts: one half they marked off as concerned with quantity, the other half with magnitude; and each of these they posited as twofold. A quantity can be considered in regard to its character by itself or in its relation to another quantity, magnitudes as either stationary or in motion. Arithmetic, then, studies quantities as such, music the relations between quantities, geometry magnitude at rest, spherics [astronomy] magnitude inherently moving.

At many medieval universities, this would have been the course leading to the degree of Master of Arts (after the BA). After the MA, the student could enter for bachelor's degrees of the higher faculties (Theology, Medicine or Law). To this day, some postgraduate degree courses lead to the degree of Bachelor (the B.Phil. and B.Litt. degrees are examples in the field of philosophy). The study was eclectic, approaching its philosophical objectives by considering each aspect of the quadrivium within the general structure demonstrated by Proclus (AD 412–485), namely arithmetic and music on the one hand and geometry and cosmology on the other.

The subject of music within the quadrivium was originally the classical subject of harmonics, in particular the study of the proportions between the musical intervals created by the division of a monochord. A relationship to music as actually practised was not part of this study, but the framework of classical harmonics would substantially influence the content and structure of music theory as practised in both European and Islamic cultures.
In modern applications of the liberal arts as curriculum in colleges or universities, the quadrivium may be considered to be the study of number and its relationship to space or time: arithmetic was pure number, geometry was number in space, music was number in time, and astronomy was number in space and time. Morris Kline classified the four elements of the quadrivium as pure (arithmetic), stationary (geometry), moving (astronomy), and applied (music) number. This schema is sometimes referred to as "classical education", but it is more accurately a development of the 12th- and 13th-century Renaissance with recovered classical elements, rather than an organic growth from the educational systems of antiquity. The term continues to be used by the Classical education movement and at the independent Oundle School, in the United Kingdom.
https://en.wikipedia.org/wiki?curid=25234
Quadrupedalism
Quadrupedalism is a form of terrestrial locomotion in animals using four limbs or legs. An animal or machine that usually moves in a quadrupedal manner is known as a quadruped, meaning "four feet" (from the Latin "quattuor" for "four" and "pes" for "foot"). The majority of quadrupeds are vertebrate animals, including mammals such as cattle, dogs and cats, and reptiles such as lizards. Few other animals are quadrupedal, though a few birds, like the shoebill, sometimes use their wings to right themselves after lunging at prey.

Although the words "quadruped" and "tetrapod" are both derived from terms meaning "four-footed", they have distinct meanings. A tetrapod is any member of the taxonomic unit "Tetrapoda" (which is defined by descent from a specific four-limbed ancestor), whereas a quadruped actually uses four limbs for locomotion. Not all tetrapods are quadrupeds and not all quadrupeds are tetrapods. The distinction between quadrupeds and tetrapods is important in evolutionary biology, particularly in the context of tetrapods whose limbs have adapted to other roles (e.g., hands in the case of humans, wings in the case of birds, and fins in the case of whales). All of these animals are tetrapods, but none is a quadruped. Even snakes, whose limbs have become vestigial or lost entirely, are nevertheless tetrapods. Most quadrupedal animals are tetrapods, but there are a few exceptions. For instance, among the insects, the praying mantis is a quadruped.

In July 2005, in rural Turkey, scientists discovered five Kurdish siblings who had learned to walk naturally on their hands and feet. Unlike chimpanzees, which ambulate on their knuckles, the Kurdish siblings walked on their palms, allowing them to preserve the dexterity of their fingers. Many people, especially practitioners of parkour and freerunning and Georges Hébert's Natural Method, find benefit in quadrupedal movements to build full-body strength. Kenichi Ito is a Japanese man famous for speed running on four limbs. Quadrupedalism is sometimes referred to as being on all fours, and is observed in crawling, especially by infants.

BigDog is a dynamically stable quadruped robot created in 2005 by Boston Dynamics with Foster-Miller, the NASA Jet Propulsion Laboratory, and the Harvard University Concord Field Station. RoboSimian, also developed by NASA JPL in collaboration with the University of California, Santa Barbara Robotics Lab, emphasizes stability and deliberation; it has been demonstrated at the DARPA Robotics Challenge.

A related concept to quadrupedalism is pronogrady, or having a horizontal posture of the trunk. Although nearly all quadrupedal animals are pronograde, there are also bipedal animals with that posture, including many living birds and extinct dinosaurs. Non-human apes with orthograde (vertical) backs may walk quadrupedally in what is called knuckle-walking.
https://en.wikipedia.org/wiki?curid=25236
Quarantine
A quarantine is a restriction on the movement of people and goods which is intended to prevent the spread of disease or pests. It is often used in connection with disease and illness, preventing the movement of those who may have been exposed to a communicable disease but do not have a confirmed medical diagnosis. It is distinct from medical isolation, in which those confirmed to be infected with a communicable disease are isolated from the healthy population. Quarantine considerations are often one aspect of border control.

The concept of quarantine has been known since biblical times, and is known to have been practised through history in various places. Notable quarantines in modern history include that of the village of Eyam in 1665 during the bubonic plague outbreak in England; American Samoa during the 1918 flu pandemic; the 1972 Yugoslav smallpox outbreak; and the extensive quarantines applied throughout the world during the COVID-19 pandemic. Ethical and practical considerations need to be weighed when applying quarantine to people. Practice differs from country to country. In some countries, quarantine is just one of many measures governed by legislation relating to the broader concept of biosecurity; for example, Australian biosecurity is governed by the single overarching "Biosecurity Act 2015".

The word "quarantine" comes from "quarantena", meaning "forty days", used in the 14th–15th-century Venetian language and designating the period that all ships were required to be isolated before passengers and crew could go ashore during the Black Death plague epidemic; it followed the "trentino", or thirty-day isolation period, first imposed in 1377 in the Republic of Ragusa, Dalmatia (modern Dubrovnik in Croatia). Merriam-Webster gives various meanings to the noun form, including "a period of 40 days", several relating to ships, "a state of enforced isolation", and "a restriction on the movement of people and goods which is intended to prevent the spread of disease or pests". The word is also used as a verb.

Quarantine is distinct from medical isolation, in which those confirmed to be infected with a communicable disease are isolated from the healthy population. Quarantine may be used interchangeably with "cordon sanitaire", and although the terms are related, "cordon sanitaire" refers to the restriction of movement of people into or out of a defined geographic area, such as a community, in order to prevent an infection from spreading.

An early mention of isolation occurs in the Biblical book of Leviticus, written in the seventh century BC or perhaps earlier, which describes the procedure for separating out infected people to prevent the spread of disease under the Mosaic Law: "If the shiny spot on the skin is white but does not appear to be more than skin deep and the hair in it has not turned white, the priest is to isolate the affected person for seven days. On the seventh day the priest is to examine him, and if he sees that the sore is unchanged and has not spread in the skin, he is to isolate him for another seven days."

The Islamic prophet Muhammad advised quarantine: "Those with contagious diseases should be kept away from those who are healthy." Ibn Sina also recommended quarantine for patients with infectious diseases, especially tuberculosis. The mandatory hospital quarantine of special groups of patients, including those with leprosy, started early in Islamic history.
Between 706 and 707 the sixth Umayyad caliph Al-Walid I built the first hospital in Damascus and issued an order to isolate those infected with leprosy from other patients in the hospital. The practice of mandatory quarantine of leprosy in general hospitals continued until the year 1431, when the Ottomans built a leprosy hospital in Edirne. Incidents of quarantine occurred throughout the Muslim world, with evidence of voluntary community quarantine in some of these reported incidents. The first documented involuntary community quarantine was established by the Ottoman quarantine reform in 1838.

The word "quarantine" originates from "quarantena", the Venetian language form, meaning "forty days". This is due to the 40-day isolation of ships and people practised as a measure of disease prevention related to the plague. Between 1348 and 1359, the Black Death wiped out an estimated 30% of Europe's population, and a significant percentage of Asia's population. Such a disaster led governments to establish measures of containment to handle recurrent epidemics. A document from 1377 states that before entering the city-state of Ragusa in Dalmatia (modern Dubrovnik in Croatia), newcomers had to spend 30 days (a "trentine") in a restricted place (originally nearby islands) waiting to see whether the symptoms of Black Death would develop. In 1448 the Venetian Senate prolonged the waiting period to 40 days, thus giving birth to the term "quarantine". The forty-day quarantine proved to be an effective formula for handling outbreaks of the plague. Dubrovnik was the first city in Europe to set up quarantine sites, such as the Lazzarettos of Dubrovnik, where arriving ship personnel were held for up to 40 days. According to current estimates, the bubonic plague had a 37-day period from infection to death; therefore, the European quarantines would have been highly successful in determining the health of crews from potential trading and supply ships.

Other diseases lent themselves to the practice of quarantine before and after the devastation of the plague. Those afflicted with leprosy were historically isolated long-term from society, and attempts were made to check the spread of syphilis in northern Europe after 1492, the advent of yellow fever in Spain at the beginning of the 19th century, and the arrival of Asiatic cholera in 1831.

Venice took the lead in measures to check the spread of plague, having appointed three guardians of public health in the first years of the Black Death (1348). The next record of preventive measures comes from Reggio/Modena in 1374. Venice founded the first lazaret (on a small island adjoining the city) in 1403. In 1467 Genoa followed the example of Venice, and in 1476 the old leper hospital of Marseille was converted into a plague hospital. The great lazaret of Marseille, perhaps the most complete of its kind, was founded in 1526 on the island of Pomègues. The practice at all the Mediterranean lazarets did not differ from the English procedure in the Levantine and North African trade. On the arrival of cholera in 1831 some new lazarets were set up at western ports, notably a very extensive establishment near Bordeaux, afterwards turned to another use.

Epidemics of yellow fever ravaged urban communities in North America throughout the late-eighteenth and early-nineteenth centuries, the best-known examples being the 1793 Philadelphia yellow fever epidemic and outbreaks in Georgia (1856) and Florida (1888).
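A back-of-the-envelope reading of the forty-day rule described above: if the course of shipboard plague from infection to death ran roughly 37 days, a 40-day holding period exceeds it, so any infection contracted before arrival should have declared itself before release (a simplification that ignores infections acquired during the quarantine itself):

```latex
\[
t_{\text{quarantine}} = 40~\text{days} \;>\; t_{\text{infection}\rightarrow\text{death}} \approx 37~\text{days}
\]
```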
Cholera and smallpox epidemics continued throughout the nineteenth century, and plague epidemics affected Honolulu and San Francisco from 1899 until 1901. State governments generally relied on the "cordon sanitaire" as a geographic quarantine measure to control the movement of people into and out of affected communities.

During the 1918 influenza pandemic, some communities instituted protective sequestration (sometimes referred to as "reverse quarantine") to keep the infected from introducing influenza into healthy populations. Most Western countries implemented a range of containment strategies, including isolation, surveillance, and the closure of schools, churches, theatres and public events.

By the middle of the 19th century, the Ottoman Empire had established quarantine stations, including in Anatolia and the Balkans. For example, at the port of Izmir, all ships and their cargo would be inspected, and those suspected of carrying the plague would be towed to separate docks and their personnel housed in separate buildings for a determined period of time. In Thessaly, along the Greek-Turkish border, all travellers entering and exiting the Ottoman Empire would be quarantined for 9–15 days. Upon appearance of the plague, the quarantine stations would be militarised and the Ottoman army would be involved in border control and disease monitoring.

Beginning in 1852, several conferences were held involving European powers, with a view to uniform action in keeping out infection from the East and preventing its spread within Europe. All but that of 1897 were concerned with cholera. No result came of those at Paris (1852), Constantinople (1866), Vienna (1874), and Rome (1885), but each of the subsequent conferences marked a retreat from the doctrine of constructive infection of a ship as coming from a scheduled port, and an approximation to the principles advocated by Great Britain for many years. The principal countries which retained the old system at the time were Spain, Portugal, Turkey, Greece and Russia (the British possessions at the time, Gibraltar, Malta and Cyprus, being under the same influence). The aim of each international sanitary convention had been to bind the governments to a uniform minimum of preventive action, with further restrictions permissible to individual countries. The minimum specified by international conventions was very nearly the same as the British practice, which had been in turn adapted to continental opinion in the matter of the importation of rags.

The Venice convention of 30 January 1892 dealt with cholera by the Suez Canal route; that of Dresden of 15 April 1893, with cholera within European countries; that of Paris of 3 April 1894, with cholera by the pilgrim traffic; and that of Venice, on 19 March 1897, was in connection with the outbreak of plague in the East, and the conference met to settle on an international basis the steps to be taken to prevent, if possible, its spread into Europe. An additional convention was signed in Paris on 3 December 1903.

A multilateral international sanitary convention was concluded at Paris on 17 January 1912. This convention was most comprehensive and was designated to replace all previous conventions on that matter. It was signed by 40 countries, and consisted of 160 articles. Ratifications by 16 of the signatories were exchanged in Paris on 7 October 1920. Another multilateral convention was signed in Paris on 21 June 1926, to replace that of 1912. It was signed by 58 countries worldwide, and consisted of 172 articles.
In Latin America, a series of regional sanitary conventions were concluded. Such a convention was concluded in Rio de Janeiro on 12 June 1904. A sanitary convention between the governments of Argentina, Brazil, Paraguay and Uruguay was concluded in Montevideo on 21 April 1914. The convention covers cases of Asiatic cholera, oriental plague and yellow fever. It was ratified by the Uruguayan government on 13 October 1914, by the Paraguayan government on 27 September 1917 and by the Brazilian government on 18 January 1921.

Sanitary conventions were also concluded between European states. A Soviet-Latvian sanitary convention was signed on 24 June 1922, for which ratifications were exchanged on 18 October 1923. A bilateral sanitary convention was concluded between the governments of Latvia and Poland on 7 July 1922, for which ratifications were exchanged on 7 April 1925. Another was concluded between the governments of Germany and Poland in Dresden on 18 December 1922, and entered into effect on 15 February 1923. Another one was signed between the governments of Poland and Romania on 20 December 1922. Ratifications were exchanged on 11 July 1923. The Polish government also concluded such a convention with the Soviet government on 7 February 1923, for which ratifications were exchanged on 8 January 1924. A sanitary convention was also concluded between the governments of Poland and Czechoslovakia on 5 September 1925, for which ratifications were exchanged on 22 October 1926. A convention was signed between the governments of Germany and Latvia on 9 July 1926, for which ratifications were exchanged on 6 July 1927.

One of the first points to be dealt with in 1897 was to settle the incubation period for this disease, and the period to be adopted for administrative purposes. It was admitted that the incubation period was, as a rule, a comparatively short one, namely, of some three or four days. After much discussion ten days was accepted by a very large majority. The principle of disease notification was unanimously adopted. Each government had to notify other governments of the existence of plague within their several jurisdictions, and at the same time state the measures of prevention which were being carried out to prevent its diffusion. The area deemed to be infected was limited to the actual district or village where the disease prevailed, and no locality was deemed to be infected merely because of the importation into it of a few cases of plague while there had been no diffusion of the malady. As regards the precautions to be taken on land frontiers, it was decided that during the prevalence of plague every country had the inherent right to close its land frontiers against traffic.

As regards the Red Sea, it was decided after discussion that a healthy vessel could pass through the Suez Canal and continue its voyage in the Mediterranean during the incubation period of the disease in question. It was also agreed that vessels passing through the Canal in quarantine might, subject to the use of the electric light, coal in quarantine at Port Said by night as well as by day, and that passengers might embark in quarantine at that port. Infected vessels, if they carried a doctor and were provided with a disinfecting stove, had a right to navigate the Canal, in quarantine, subject only to the landing of those who were suffering from plague.
In the 21st century, people suspected of carrying infectious diseases have been quarantined, as in the cases of Andrew Speaker (multi-drug-resistant tuberculosis, 2007) and Kaci Hickox (Ebola, 2014); such measures had already been used in the late 20th century.

During the 1957–58 influenza pandemic and the 1968 flu pandemic, several countries implemented measures to control the spread of the disease. In addition, the World Health Organization operated a global influenza surveillance network. In the SARS epidemic, thousands of Chinese people were quarantined and checkpoints to take temperatures were set up. Moving infected patients to isolation wards and home-based self-quarantine of people potentially exposed was the main way the Western African Ebola virus epidemic was ended in 2016; members of the 8th WHO Emergency Committee criticised international travel restrictions imposed during the epidemic as ineffective, due to the difficulty of enforcement, and counterproductive, as they slowed down aid efforts.

The People's Republic of China employed mass quarantines – firstly of the city of Wuhan and subsequently of all of Hubei province (population 55.5 million) – in the COVID-19 pandemic. A few weeks later, the Italian government imposed a lockdown on the whole country (more than 60 million people) to stop the spread of the coronavirus. During the COVID-19 pandemic, India quarantined itself from the world for a period of one month. Most governments around the world restricted or advised against all non-essential travel to and from countries and areas affected by the outbreak. The virus had already spread within communities in large parts of the world, with many not knowing where or how they were infected.

Plain yellow, green, and even black flags have been used to symbolise disease on ships and in ports, with the colour yellow having the longer historical precedent, as a colour marking houses of infection before its use as a maritime marking colour for disease. The present flag used for the purpose is the "Lima" (L) flag, which is a mixture of the yellow and black flags previously used. It is sometimes called the "yellow jack", but this was also a name for yellow fever, which probably derives its common name from the flag, not the colour of the victims (cholera ships also used a yellow flag). The plain yellow flag ("Quebec" or Q in international maritime signal flags) probably derives its letter symbol from its initial use in "quarantine", but in modern times this flag indicates the opposite: a ship that "requests free pratique", i.e. that declares itself free of quarantinable disease and requests boarding and routine port inspection.

The quarantining of people often raises questions of civil rights, especially in cases of long confinement or segregation from society, such as that of Mary Mallon (also known as Typhoid Mary), a typhoid fever carrier who was arrested and quarantined in 1907 and later spent the last 23 years and 7 months of her life in medical isolation at Riverside Hospital on North Brother Island. Guidance on when and how human rights can be restricted to prevent the spread of infectious disease is found in the Siracusa Principles, a non-binding document developed by the Siracusa International Institute for Criminal Justice and Human Rights and adopted by the United Nations Economic and Social Council in 1984.
The Siracusa Principles state that restrictions on human rights under the International Covenant on Civil and Political Rights must meet standards of legality, evidence-based necessity, proportionality, and gradualism, noting that public health can be used as grounds for limiting certain rights if the state needs to take measures "aimed at preventing disease or injury or providing care for the sick and injured". Limitations on rights (such as quarantine) must be "strictly necessary". In addition, when quarantine is imposed, public health ethics specify further conditions on its conduct, and the state is ethically obligated to offer certain guarantees to those quarantined.

Quarantine can have negative psychological effects on those who are quarantined, including post-traumatic stress, confusion and anger. According to a "Rapid Review" published in The Lancet in response to the COVID-19 pandemic, "Stressors included longer quarantine duration, infection fears, frustration, boredom, inadequate supplies, inadequate information, financial loss, and stigma. Some researchers have suggested long-lasting effects. In situations where quarantine is deemed necessary, officials should quarantine individuals for no longer than required, provide clear rationale for quarantine and information about protocols, and ensure sufficient supplies are provided. Appeals to altruism by reminding the public about the benefits of quarantine to wider society can be favourable."

Quarantine periods can be very short, such as in the case of a suspected anthrax attack, in which people are allowed to leave as soon as they shed their potentially contaminated garments and undergo a decontamination shower. For example, an article entitled "Daily News workers quarantined" describes a brief quarantine that lasted until people could be showered in a decontamination tent. The February/March 2003 issue of "HazMat Magazine" suggests that people be "locked in a room until proper decon could be performed", in the event of "suspect anthrax". "Standard-Times" senior correspondent Steve Urbon (14 February 2003) describes such temporary quarantine powers: Civil rights activists in some cases have objected to people being rounded up, stripped and showered against their will. But Capt. Chmiel said local health authorities have "certain powers to quarantine people". The purpose of such quarantine-for-decontamination is to prevent the spread of contamination, and to contain the contamination such that others are not put at risk from a person fleeing a scene where contamination is suspected. It can also be used to limit exposure, as well as to eliminate a vector.

New developments in quarantine include new concepts in quarantine vehicles, such as the ambulance bus, mobile hospitals, and lockdown/invacuation (inverse evacuation) procedures, as well as docking stations for an ambulance bus to dock to a facility under lockdown.

Biosecurity in Australia is governed by the "Biosecurity Act 2015". The Australian Quarantine and Inspection Service (AQIS) is responsible for border inspection of products brought into Australia, and assesses the risk that those products might harm the Australian environment. No people, goods or vessels are permitted into Australia without clearance from AQIS. Visitors are required to fill in an information card on arriving in Australia. Besides other risk factors, visitors are required to declare what food and products made of wood and other natural materials they have.
Visitors who fail to do so may be subject to a fine of A$220, or may face criminal prosecution and be fined up to A$100,000 or imprisoned for up to 10 years. Australia has very strict quarantine standards. Quarantine in northern Australia is especially important because of its proximity to South-East Asia and the Pacific, which have many pests and diseases not present in Australia. For this reason, the region from Cairns to Broome, including the Torres Strait, is the focus for quarantine activities that protect all Australians. As Australia has been geographically isolated from other major continents for millions of years, it has an endemically unique ecosystem free of several severe pests and diseases that are present in many parts of the world. If products carrying pests and diseases were brought in, the ecosystem could be seriously damaged and millions in costs added to local agricultural businesses.

There are three quarantine Acts of Parliament in Canada: the "Quarantine Act" (humans), the "Health of Animals Act" (animals) and the "Plant Protection Act" (plants). The first is enforced by the Canada Border Services Agency, following a complete rewrite of the act in 2005. The second and third are enforced by the Canadian Food Inspection Agency. If a health emergency exists, the Governor in Council can prohibit importation of anything that it deems necessary under the "Quarantine Act". Under the "Quarantine Act", all travellers must submit to screening, and if they believe they might have come into contact with communicable diseases or vectors, they must disclose their whereabouts to a Border Services Officer. If the officer has reasonable grounds to believe that the traveller is or might have been infected with a communicable disease, or the traveller refuses to provide answers, a quarantine officer (QO) must be called and the person is to be isolated. If a person refuses to be isolated, any peace officer may arrest without warrant. A QO who, after the medical examination of a traveller, has reasonable grounds to believe that the traveller has or might have a communicable disease or is infested with vectors, can order the traveller into treatment or other measures to prevent the person from spreading the disease. A QO can detain any traveller who refuses to comply with his or her orders or to undergo health assessments as required by law. Under the "Health of Animals Act" and the "Plant Protection Act", inspectors can prohibit access to an infected area, and dispose of or treat any animals or plants that are infected or suspected of being infected. The Minister can order compensation to be given if animals or plants were destroyed pursuant to these acts. Each province also enacts its own quarantine/environmental health legislation.

In Hong Kong, under the "Prevention and Control of Disease Ordinance" (HK Laws. Chap 599), a health officer may seize articles they believe to be infectious or to contain infectious agents. All travellers, if requested, must submit themselves to a health officer. Failure to do so is against the law and is subject to arrest and prosecution. The law allows a health officer who has reasonable grounds to detain, isolate or quarantine anyone or anything believed to be infected, and to restrict any articles from leaving a designated quarantine area. He or she may also order the Civil Aviation Department to prohibit the landing or leaving, embarking or disembarking of an aircraft. This power also extends to land, sea or air crossings.
Under the same ordinance, any police officer, health officer, member of the Civil Aid Service, or member of the Auxiliary Medical Service can arrest a person who obstructs or escapes from detention.

To reduce the risk of introducing rabies from continental Europe, the United Kingdom used to require that dogs, and most other animals introduced to the country, spend six months in quarantine at an HM Customs and Excise pound; this policy was abolished in 2000 in favour of a scheme generally known as Pet Passports, under which animals can avoid quarantine if they have documentation showing they are up to date on their appropriate vaccinations.

The plague had disappeared from England for more than thirty years before the practice of quarantine against it was definitely established by the Quarantine Act 1710 ("9 Ann."). The first act was called for due to fears that the plague might be imported from Poland and the Baltic states. The second act, of 1721, was due to the prevalence of plague at Marseille and other places in Provence, France. It was renewed in 1733 after a new outbreak in continental Europe, and again in 1743, due to an epidemic in Messina. In 1752 a rigorous quarantine clause was introduced into an act regulating trade with the Levant, and various arbitrary orders were issued during the next twenty years to meet the supposed danger of infection from the Baltic states. Although no plague cases ever came to England during that period, the restrictions on traffic became more stringent, and in 1788 a very strict Quarantine Act was passed, with provisions affecting cargoes in particular. The act was revised in 1801 and 1805, and in 1823–24 an elaborate inquiry was followed by an act leaving quarantine to the discretion of the privy council, which recognised yellow fever or other highly infectious diseases as calling for quarantine, along with plague. The threat of cholera in 1831 was the last occasion in England of the use of quarantine restrictions. Cholera affected every country in Europe despite all efforts to keep it out. When cholera returned to England in 1849, 1853 and 1865–66, no attempt was made to seal the ports. In 1847 the privy council ordered all arrivals with a clean bill of health from the Black Sea and the Levant to be admitted, provided there had been no case of plague during the voyage, and afterwards the practice of quarantine was discontinued.

After the passing of the first Quarantine Act (1710) the protective practices in England were haphazard and arbitrary. In 1721 two vessels carrying cotton goods from Cyprus, then affected by the plague, were ordered to be burned with their cargoes, the owners receiving an indemnity. By a clause in the Levant Trade Act of 1752, ships arriving in the United Kingdom with a "foul bill" (i.e. coming from a country where plague existed) had to return to the lazarets of Malta, Venice, Messina, Livorno, Genoa or Marseille, to complete a quarantine or to have their cargoes opened and aired. Since 1741 Stangate Creek (on the Medway) had been the quarantine station, but it was available only for vessels with clean bills of health. In 1755 lazarets in the form of floating hulks were established in England for the first time, the cleansing of cargo (particularly by exposure to dews) having previously been done on the ship's deck. No medical inspections were conducted; control was the responsibility of the Officers of Royal Customs and quarantine.
In 1780, when plague was in Poland, even vessels with grain from the Baltic had to spend forty days in quarantine and unpack and air their cargoes, but due to complaints mainly from Edinburgh and Leith, an exception was made for grain after that date. About 1788 an order of the council required every ship liable to quarantine to hoist a yellow flag in the daytime and show a light at the main topmast head at night, in case of meeting any vessel at sea, or upon arriving within four leagues of the coast of Great Britain or Ireland. After 1800, ships from plague-affected countries (or with foul bills) were permitted to complete their quarantine in the Medway instead of at a Mediterranean port on the way, and an extensive lazaret was built on Chetney Hill near Chatham (although it was later demolished). The use of floating hulks as lazarets continued as before. In 1800 two ships with hides from Mogador in Morocco were ordered to be sunk with their cargoes at the Nore, the owners receiving an indemnity. Animal hides were suspected of harbouring infections, along with a long list of other items, and these had to be exposed on the ship's deck for twenty-one days or less (six days for each instalment of the cargo), and then transported to the lazaret, where they were opened and aired for another forty days. The whole detention of the vessel was from sixty to sixty-five days, including the time for reshipment of her cargo. Pilots had to pass fifteen days on board a convalescent ship.

From 1846 onwards the quarantine establishments in the United Kingdom were gradually reduced, and the last vestige of the British quarantine law was removed by the Public Health Act of 1896, which repealed the Quarantine Act of 1825 (with dependent clauses of other acts) and transferred from the privy council to the Local Government Board the powers to deal with ships arriving infected with yellow fever or plague. The powers to deal with cholera ships had already been transferred by the Public Health Act 1875.

British regulations of 9 November 1896 applied to yellow fever, plague and cholera. Officers of the Customs, as well as of the Royal Coast Guard and the Board of Trade (for signalling), were empowered to take the initial steps. They certified in writing the master of a supposedly infected ship, and detained the vessel provisionally for not more than twelve hours, giving notice meanwhile to the port sanitary authority. The medical officer of the port boarded the ship and examined every person in it. Every person found infected was taken to a hospital and quarantined under the orders of the medical officer, and the vessel remained under his orders. Every person suspected could be detained on board for 48 hours or removed to the hospital for a similar period. All others were free to land upon giving the addresses of their destinations to be sent to the respective local authorities, so that the dispersed passengers and crew could be kept individually under observation for a few days. The ship was then disinfected, dead bodies buried at sea, infected clothing, bedding, etc., destroyed or disinfected, and bilge-water and water-ballast pumped out at a suitable distance before the ship entered a dock or basin. Mail was subject to no detention. A stricken ship within 3 miles of the shore had to fly a yellow and black flag at the main mast from sunrise to sunset.

In the United States, authority to quarantine people with infectious diseases is split between the state and federal governments.
States (and tribal governments recognised by the federal government) have primary authority to quarantine people within their boundaries. Federal jurisdiction applies only to people moving across state or national borders, or people on federal property. Communicable diseases for which apprehension, detention, or conditional release of people are authorised must be specified in Executive Orders of the President. As of 2014, these include Executive Orders 13295, 13375, and 13674; the latest executive order specifies the following infectious diseases: cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral haemorrhagic fevers (Lassa, Marburg, Ebola, Crimean-Congo, South American, and others not yet isolated or named), severe acute respiratory syndromes (SARS), and influenza from a novel or re-emergent source. The Department of Health and Human Services is responsible for quarantine decisions, specifically the Centers for Disease Control and Prevention's Division of Global Migration and Quarantine. As of 21 March 2017, updated Centers for Disease Control and Prevention (CDC) regulations govern these procedures.

The Division of Global Migration and Quarantine (DGMQ) of the Centers for Disease Control and Prevention (CDC) operates small quarantine facilities at a number of US ports of entry. As of 2014, these included one land crossing (in El Paso, Texas) and 19 international airports. Besides the port of entry where it is located, each station is also responsible for quarantining potentially infected travellers entering through any ports of entry in its assigned region. These facilities are fairly small; each one is operated by a few staff members and capable of accommodating 1–2 travellers for a short observation period. Cost estimates for setting up a temporary larger facility, capable of accommodating 100 to 200 travellers for several weeks, were published in 2008 by the Airport Cooperative Research Program (ACRP) of the Transportation Research Board.

The United States puts immediate quarantines on imported products if a contagious disease is identified and can be traced back to a certain shipment or product. All imports will also be quarantined if the disease appears in other countries. Title 42 U.S.C. §§264 and 266 provide the Secretary of Health and Human Services peacetime and wartime authority to control the movement of people into and within the United States to prevent the spread of communicable disease.

Quarantine law began in Colonial America in 1663, when, in an attempt to curb an outbreak of smallpox, the city of New York established a quarantine. In the 1730s, the city built a quarantine station on Bedloe's Island. The Philadelphia Lazaretto was the first quarantine hospital in the United States, built in 1799, in Tinicum Township, Delaware County, Pennsylvania. There are similar national landmarks such as Swinburne Island and Angel Island. The Pest House in Concord, Massachusetts was used as early as 1752 to quarantine those suffering from cholera, tuberculosis and smallpox. In early June 1832, during the cholera epidemic in New York, Governor Enos Throop called a special session of the Legislature for 21 June, at which both Houses passed a Public Health Act. It included a strict quarantine along the upper and lower New York–Canadian frontier.
In addition, New York City Mayor Walter Browne established a quarantine against all peoples and products of Europe and Asia, which prohibited ships from approaching closer than 300 yards to the city, and all vehicles were ordered to stop 1.5 miles away. The Immigrant Inspection Station on Ellis Island, built in 1892, is often mistakenly assumed to have been a quarantine station; however, its marine hospital (Ellis Island Immigrant Hospital) qualified as a contagious disease facility only for less virulent diseases like measles, trachoma and less advanced stages of tuberculosis and diphtheria; those afflicted with smallpox, yellow fever, cholera, leprosy or typhoid fever could be neither received nor treated there.

Mary Mallon was quarantined in 1907 under the Greater New York Charter, Sections 1169–1170, which permitted the New York City Board of Health to "remove to a proper place…any person sick with any contagious, pestilential or infectious disease." During the 1918 flu pandemic, people were also quarantined. Most commonly, suspected cases of infectious diseases have been requested to quarantine themselves voluntarily, and federal and local quarantine statutes have been invoked only rarely since then, including for a suspected smallpox case in 1963.

The 1944 Public Health Service Act, "to apprehend, detain, and examine certain infected persons who are peculiarly likely to cause the interstate spread of disease", clearly established the federal government's quarantine authority for the first time. It gave the United States Public Health Service responsibility for preventing the introduction, transmission and spread of communicable diseases from foreign countries into the United States, and expanded quarantine authority to include incoming aircraft. The act states that "...any individual reasonably believed to be infected with a communicable disease in a qualifying stage and...if found to be infected, may be detained for such time and in such manner as may be reasonably necessary." No federal quarantine orders were issued from 1963 until 2020, when American citizens were evacuated from China during the COVID-19 pandemic.

Eyam, a village in England, imposed protective sequestration on itself to stop the spread of the bubonic plague in 1665. The plague ran its course over 14 months, and one account states that it killed at least 260 villagers. The church in Eyam has a record of 273 individuals who were victims of the plague.

On 28 July 1814, the convict ship "Surry" arrived in Sydney Harbour from England. Forty-six people had died of typhoid during the voyage, including 36 convicts, and the ship was placed in quarantine on the North Shore. Convicts were landed, and a camp was established in the immediate vicinity of what is now Jeffrey Street in Kirribilli. This was the first site in Australia to be used for quarantine purposes.

Mary Mallon was a cook who was found to be a carrier of Salmonella enterica subsp. enterica, the cause of typhoid fever, and was forcibly isolated from 1907 to 1910. At least 53 cases of the infection were traced to her, along with three deaths. She subsequently spent a further 23 years in isolation prior to her death in 1938. The presence of the bacteria in her gallbladder was confirmed on autopsy.

During the 1918 flu pandemic, the then Governor of American Samoa, John Martin Poyer, imposed a full quarantine of the islands from all incoming ships, successfully achieving zero deaths within the territory.
In contrast, the neighbouring New Zealand-controlled Western Samoa was among the hardest hit, with a 90% infection rate and over 20% of its adults dying from the disease. This failure by the New Zealand government to prevent and contain the Spanish flu subsequently rekindled Samoan anti-colonial sentiment and contributed to its eventual independence.

In 1942, during World War II, British forces tested their biological weapons program on Gruinard Island, infecting it with anthrax. A quarantine order was subsequently placed on the island. The quarantine was lifted in 1990, when the island was declared safe, and a flock of sheep was released onto the island.

Between 24 July 1969 and 9 February 1971, the astronauts of Apollo 11, Apollo 12, and Apollo 14 were quarantined (in each case for a total of 21 days) after returning to Earth, initially where they were recovered and then after transfer to the Lunar Receiving Laboratory, to prevent possible interplanetary contamination by microorganisms from the Moon. All lunar samples were also held in the biosecure environment of the Lunar Receiving Laboratory for initial assay.

The 1972 Yugoslav smallpox outbreak was the final outbreak of smallpox in Europe. The World Health Organization fought the outbreak with extensive quarantine, and the government instituted martial law.

In 2014, Kaci Hickox, a Doctors Without Borders nurse from Maine, legally battled 21-day quarantines imposed by the states of New Jersey and Maine after returning home from treating Ebola patients in Sierra Leone. "Hickox was sequestered in a medical tent for days because New Jersey announced new Ebola regulations the day she arrived. She eventually was allowed to travel to Maine, where the state sought to impose a 'voluntary quarantine' before trying and failing to create a buffer between her and others. A state judge rejected attempts to restrict her movements, saying she posed no threat as long as she wasn't demonstrating any symptoms of Ebola. Hickox said health care professionals like those at the U.S. Centers for Disease Control and Prevention – not politicians like New Jersey Gov. Chris Christie and Maine Gov. Paul LePage – should be in charge of making decisions that are grounded in science, not fear."

During the COVID-19 pandemic, multiple governmental actors enacted quarantines in an effort to curb the rapid spread of the virus. On 26 March 2020, 1.7 billion people worldwide were under some form of lockdown; this increased to 2.6 billion people two days later, around a third of the world's population.

In Hubei, the origin of the epidemic, a "cordon sanitaire" was imposed on Wuhan and other major cities in China to limit the rate of spread of the disease, affecting around 500 million people, a scale unprecedented in human history. The 'lockdown' of Wuhan, and subsequently a wider-scale 'lockdown' throughout Hubei province, began on 23 January 2020. At this stage, the spread of the virus in mainland China was running at approximately 50% growth in cases per day. On 8 February, the daily rate of spread fell below 10%. For figures, see COVID-19 pandemic in Mainland China.

As the outbreak spread in Italy, beginning 22 February 2020, a "cordon sanitaire" was imposed on a group of at least 10 municipalities in Northern Italy, effectively quarantining more than 50,000 people. This followed a second consecutive day in which the number of detected cases leapt enormously (the period from 21 to 23 February saw daily increases of 567%, 295% and 90% respectively).
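To put the daily growth percentages quoted above on an intuitive scale, a constant daily growth rate r corresponds to a doubling time of ln 2 / ln(1 + r). A minimal Python sketch, using illustrative rates taken from the figures above and the simplifying assumption that growth is constant:

```python
import math

def doubling_time(daily_growth: float) -> float:
    """Days for case counts to double at a constant daily growth rate."""
    return math.log(2) / math.log(1 + daily_growth)

# Rates quoted above: ~50% per day in mainland China at the start of the
# outbreak, falling below 10% per day by 8 February.
for r in (0.50, 0.10):
    print(f"{r:.0%}/day -> cases double every {doubling_time(r):.1f} days")
# 50%/day doubles roughly every 1.7 days; 10%/day roughly every 7.3 days.
```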
A week later, the rate of increase of cases in Italy had slowed significantly (the period from 29 February to 4 March saw daily increases of 27%, 50%, 20%, 23% and 23%). On 8 March 2020, a much wider region of Northern Italy was placed under quarantine restrictions, involving around 16 million people. The next day, the quarantine was extended to the whole of Italy, effective 10 March 2020, placing roughly 60 million people under quarantine. A team of Chinese experts, together with some 31 tonnes of supplies, arrived in Rome on 13 March 2020 to help Italy fight the virus. On 22 March 2020, Russia sent nine Ilyushin 76 planes with expert virologists, epidemiologists, medical equipment and pharmaceuticals in a humanitarian aid operation that Italian media dubbed "From Russia With Love". Eventually the lockdown was extended until 3 May, although from 14 April stationery shops, bookshops and children's clothing shops were allowed to reopen. On 26 April, the so-called "Phase 2" was announced, to start from 4 May. Movements across regions were still forbidden, while movements between municipalities were allowed only to visit relatives or for work and health reasons. Moreover, closed factories could reopen, but schools, bars, restaurants and barbers remained closed. As of 4 May, when new cases were running at around 0.5% per day (roughly 1,600 people) and consistently falling, it was expected that museums and retailers might reopen from 18 May, while hairdressers, bars and restaurants were expected to reopen fully on 1 June. As cases of the virus spread to and took hold in more European countries, many followed the earlier examples of China and Italy and began instituting lockdowns. Notable among these were Ireland (where schools were closed for the rest of March and limits set on the sizes of meetings), Spain (where a lockdown was announced on 14 March), the Czech Republic, Norway, Denmark, Iceland, Poland, Turkey and France, while the United Kingdom noticeably lagged behind in adopting such measures. As of 18 March, more than 250 million people were in lockdown across Europe. In the immediate context of the start of the pandemic in Wuhan, countries neighbouring or close to China adopted a cautious approach. For example, Sri Lanka, Macau, Hong Kong, Vietnam, Japan and South Korea had all imposed some degree of lockdown by 19 February. As countries across the world reported escalating case numbers and deaths, more and more began to announce travel restrictions and lockdowns. Africa and Latin America were relatively delayed in the spread of the virus, but even on these continents countries began to impose travel bans and lockdowns. Brazil and Mexico began lockdowns in late March, and much of the rest of Latin America followed suit in early April. Much of Africa was on lockdown by the start of April. Kenya, for example, blocked certain international flights and subsequently placed a ban on 'global' meetings. By early April 2020, more than 280 million people, or about 86% of the population, were under some form of lockdown in the United States, 59 million people were in lockdown in South Africa, and 1.3 billion people were in lockdown in India. Self-quarantine (or self-isolation) is a popular term that emerged during the COVID-19 pandemic, which spread to most countries in 2020. Citizens able to do so were encouraged to stay home to curb the spread of the disease. U.S. President John F. Kennedy euphemistically referred to the U.S.
Navy's interdiction of shipping en route to Cuba during the Cuban Missile Crisis as a "quarantine" rather than a blockade, because a quarantine is a legal act in peacetime, whereas a blockade is defined as an act of aggression under the U.N. Charter. In computer science, "quarantining" describes the practice of moving files infected by computer viruses into a special directory, so as to neutralise the threat they pose without irreversibly deleting them. The Spanish term for quarantine, "(la) cuarentena", also refers to the period of postpartum confinement in which a new mother and her baby are sheltered from the outside world.
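To make the computer-science sense concrete, the sketch below shows the basic idea of quarantining a file: the flagged file is moved, not deleted, and made non-executable. The directory path and naming scheme are illustrative assumptions, not the behaviour of any particular antivirus product (real products typically also encode the stored file so it cannot run or be re-detected).

    import os
    import shutil
    import stat

    QUARANTINE_DIR = "/var/quarantine"  # hypothetical location for this sketch

    def quarantine_file(path):
        # Move a flagged file into the quarantine directory without deleting it,
        # so that it can later be restored or analysed.
        os.makedirs(QUARANTINE_DIR, exist_ok=True)
        dest = os.path.join(QUARANTINE_DIR, os.path.basename(path) + ".quarantined")
        shutil.move(path, dest)
        os.chmod(dest, stat.S_IRUSR)  # owner read-only: the file can no longer execute
        return dest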
https://en.wikipedia.org/wiki?curid=25237
Quasar A quasar (also known as a quasi-stellar object, abbreviated QSO) is an extremely luminous active galactic nucleus (AGN), in which a supermassive black hole with mass ranging from millions to billions of times the mass of the Sun is surrounded by a gaseous accretion disk. As gas in the disk falls towards the black hole, energy is released in the form of electromagnetic radiation, which can be observed across the electromagnetic spectrum. The power radiated by quasars is enormous: the most powerful quasars have luminosities thousands of times greater than that of a galaxy such as the Milky Way. The term originated as a contraction of "quasi-stellar [star-like] radio source", because quasars were first identified during the 1950s as sources of radio-wave emission of unknown physical origin, and when identified in photographic images at visible wavelengths they resembled faint star-like points of light. High-resolution images of quasars, particularly from the Hubble Space Telescope, have demonstrated that quasars occur in the centers of galaxies, and that some host galaxies are strongly interacting or merging galaxies. As with other categories of AGN, the observed properties of a quasar depend on many factors, including the mass of the black hole, the rate of gas accretion, the orientation of the accretion disk relative to the observer, the presence or absence of a jet, and the degree of obscuration by gas and dust within the host galaxy. Quasars are found over a very broad range of distances, and quasar discovery surveys have demonstrated that quasar activity was more common in the distant past. The peak epoch of quasar activity was approximately 10 billion years ago. As of 2017, the most distant known quasar is ULAS J1342+0928 at redshift "z" = 7.54; light observed from this quasar was emitted when the universe was only 690 million years old. The supermassive black hole in this quasar, estimated at 800 million solar masses, is the most distant black hole identified to date. The term "quasar" was first used in a paper by Taiwanese-born U.S. astrophysicist Hong-Yee Chiu in May 1964, in "Physics Today", to describe certain astronomically puzzling objects. Between 1917 and 1922, it became clear from work by Heber Curtis, Ernst Öpik and others that some objects ("nebulae") seen by astronomers were in fact distant galaxies like our own. But when radio astronomy commenced in the 1950s, astronomers detected, among the galaxies, a small number of anomalous objects with properties that defied explanation. The objects emitted large amounts of radiation at many frequencies, but no source could be located optically, or in some cases only a faint, point-like object somewhat like a distant star could be seen. The spectral lines of these objects, which identify the chemical elements of which the object is composed, were also extremely strange and defied explanation. Some of them changed their luminosity very rapidly in the optical range and even more rapidly in the X-ray range, suggesting an upper limit on their size, perhaps no larger than our own Solar System. This implies an extremely high power density. Considerable discussion took place over what these objects might be. They were described as "quasi-stellar radio sources" or "quasi-stellar objects" (QSOs), names which reflected their unknown nature, and this became shortened to "quasar". The first quasars (3C 48 and 3C 273) were discovered in the late 1950s, as radio sources in all-sky radio surveys.
They were first noted as radio sources with no corresponding visible object. Using small telescopes and the Lovell Telescope as an interferometer, they were shown to have a very small angular size. By 1960, hundreds of these objects had been recorded and published in the Third Cambridge Catalogue while astronomers scanned the skies for their optical counterparts. In 1963, a definite identification of the radio source 3C 48 with an optical object was published by Allan Sandage and Thomas A. Matthews. Astronomers had detected what appeared to be a faint blue star at the location of the radio source and obtained its spectrum, which contained many unknown broad emission lines. The anomalous spectrum defied interpretation. British-Australian astronomer John Bolton made many early observations of quasars, including a breakthrough in 1962. Another radio source, 3C 273, was predicted to undergo five occultations by the Moon. Measurements taken by Cyril Hazard and John Bolton during one of the occultations using the Parkes Radio Telescope allowed Maarten Schmidt to find a visible counterpart to the radio source and obtain an optical spectrum using the Hale Telescope on Mount Palomar. This spectrum revealed the same strange emission lines. Schmidt was able to demonstrate that these were likely to be the ordinary spectral lines of hydrogen redshifted by 15.8%, a high redshift at the time (with only a handful of much fainter galaxies known with higher redshift). If this was due to the physical motion of the "star", then 3C 273 was receding at an enormous velocity, around 47,000 km/s, far beyond the speed of any known star and defying any obvious explanation. Nor would an extreme velocity help to explain 3C 273's huge radio emissions. If the redshift was cosmological (now known to be correct), the large distance implied that 3C 273 was far more luminous than any galaxy, but much more compact. Also, 3C 273 was bright enough to detect on archival photographs dating back to the 1900s; it was found to be variable on yearly timescales, implying that a substantial fraction of the light was emitted from a region less than 1 light-year in size, tiny compared to a galaxy. Although it raised many questions, Schmidt's discovery quickly revolutionized quasar observation. The strange spectrum of 3C 48 was quickly identified by Schmidt, Greenstein and Oke as hydrogen and magnesium redshifted by 37%. Shortly afterwards, two more quasar spectra in 1964 and five more in 1965 were also confirmed as ordinary light that had been redshifted to an extreme degree. While the observations and redshifts themselves were not doubted, their correct interpretation was heavily debated, and Bolton's suggestion that the radiation detected from quasars consisted of ordinary spectral lines from distant, highly redshifted sources receding at extreme velocity was not widely accepted at the time. An extreme redshift could imply great distance and velocity but could also be due to extreme mass or perhaps some other unknown laws of nature. Extreme velocity and distance would also imply immense power output, which lacked explanation. The small sizes were confirmed by interferometry, by observing the speed with which the quasar as a whole varied in output, and by their inability to be seen in even the most powerful visible-light telescopes as anything more than faint starlike points of light. But if they were small and far away in space, their power output would have to be immense and difficult to explain.
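For reference, the redshift figures quoted above follow the standard textbook definition (not specific to this article):

\[
z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}},
\qquad
\lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{emit}} .
\]

For 3C 273, z = 0.158, so every spectral line appears at a wavelength 15.8% longer than in the laboratory; read naively as a Doppler shift, v ≈ cz ≈ 0.158 × 299,792 km/s ≈ 47,000 km/s, the velocity mentioned above.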
Equally, if they were very small and much closer to our galaxy, it would be easy to explain their apparent power output, but less easy to explain their redshifts and lack of detectable movement against the background of the universe. Schmidt noted that redshift is also associated with the expansion of the universe, as codified in Hubble's law. If the measured redshift was due to expansion, then this would support an interpretation of very distant objects with extraordinarily high luminosity and power output, far beyond any object seen to date. This extreme luminosity would also explain the large radio signal. Schmidt concluded that 3C 273 could either be an individual star around 10 km wide within (or near to) our galaxy, or a distant active galactic nucleus. He stated that a distant and extremely powerful object seemed more likely to be correct. Schmidt's explanation for the high redshift was not widely accepted at the time. A major concern was the enormous amount of energy these objects would have to be radiating if they were distant. In the 1960s no commonly accepted mechanism could account for this. The currently accepted explanation, that it is due to matter in an accretion disc falling into a supermassive black hole, was only suggested in 1964 by Edwin Salpeter and Yakov Zel'dovich, and even then it was rejected by many astronomers, because in the 1960s the existence of black holes was still widely seen as theoretical and too exotic, and because it was not yet confirmed that many galaxies (including our own) have supermassive black holes at their centers. The strange spectral lines in their radiation, and the speed of change seen in some quasars, also suggested to many astronomers and cosmologists that the objects were comparatively small and therefore perhaps bright, massive and not far away; accordingly, their redshifts would not be due to distance or velocity, but to some other reason or an unknown process, meaning that the quasars were not really powerful objects nor at extreme distances, as their redshifted light implied. A common alternative explanation was that the redshifts were caused by extreme mass (gravitational redshifting, explained by general relativity) and not by extreme velocity (explained by special relativity). Various explanations were proposed during the 1960s and 1970s, each with its own problems. It was suggested that quasars were nearby objects, and that their redshift was not due to the expansion of space but rather to light escaping a deep gravitational well. This would require a massive object, which would also explain the high luminosities. However, a star of sufficient mass to produce the measured redshift would be unstable and in excess of the Hayashi limit. Quasars also show forbidden spectral emission lines, previously seen only in hot gaseous nebulae of low density, which would be too diffuse both to generate the observed power and to fit within a deep gravitational well. There were also serious concerns regarding the idea of cosmologically distant quasars. One strong argument against them was that they implied energies that were far in excess of known energy conversion processes, including nuclear fusion. There were some suggestions that quasars were made of some hitherto unknown form of stable antimatter, and that this might account for their brightness. Others speculated that quasars were the white-hole end of a wormhole, or a chain reaction of numerous supernovae.
Eventually, starting from about the 1970s, many lines of evidence (including the first X-ray space observatories, knowledge of black holes and modern models of cosmology) gradually demonstrated that the quasar redshifts are genuine and due to the expansion of space, that quasars are in fact as powerful and as distant as Schmidt and some other astronomers had suggested, and that their energy source is matter from an accretion disc falling onto a supermassive black hole. This included crucial evidence from optical and X-ray viewing of quasar host galaxies, the finding of "intervening" absorption lines, which explained various spectral anomalies, observations from gravitational lensing, Peterson and Gunn's 1971 finding that galaxies containing quasars showed the same redshift as the quasars, and Kristian's 1973 finding that the "fuzzy" surroundings of many quasars were consistent with a less luminous host galaxy. This model also fits well with other observations suggesting that many or even most galaxies have a massive central black hole. It would also explain why quasars are more common in the early universe: as a quasar draws matter from its accretion disc, there comes a point when there is less matter nearby, and energy production falls off or ceases, as the quasar becomes a more ordinary type of galaxy. The accretion-disc energy-production mechanism was finally modeled in the 1970s, and black holes were also directly detected (including evidence showing that supermassive black holes could be found at the centers of our own and many other galaxies), which resolved the concerns that quasars were too luminous to be very distant objects and that no suitable energy-production mechanism could be confirmed to exist in nature. By 1987 it was "well accepted" that this was the correct explanation for quasars, and the cosmological distance and energy output of quasars was accepted by almost all researchers. Later it was found that not all quasars have strong radio emission; in fact only about 10% are "radio-loud". Hence the name "QSO" (quasi-stellar object) is used, in addition to "quasar", to refer to these objects, further categorised into the "radio-loud" and the "radio-quiet" classes. The discovery of the quasar had large implications for the field of astronomy in the 1960s, including drawing physics and astronomy closer together. In 1979 the gravitational lens effect predicted by Albert Einstein's general theory of relativity was confirmed observationally for the first time with images of the double quasar 0957+561. It is now known that quasars are distant but extremely luminous objects, so any light that reaches the Earth is redshifted due to the metric expansion of space. Quasars inhabit the centers of active galaxies and are among the most luminous, powerful, and energetic objects known in the universe, emitting up to a thousand times the energy output of the Milky Way, which contains 200–400 billion stars. This radiation is emitted across the electromagnetic spectrum, almost uniformly, from X-rays to the far infrared, with a peak in the ultraviolet-optical bands, with some quasars also being strong sources of radio emission and of gamma-rays. With high-resolution imaging from ground-based telescopes and the Hubble Space Telescope, the "host galaxies" surrounding the quasars have been detected in some cases. These galaxies are normally too dim to be seen against the glare of the quasar, except with special techniques.
Most quasars, with the exception of 3C 273, whose average apparent magnitude is 12.9, cannot be seen with small telescopes. Quasars are believed, and in many cases confirmed, to be powered by accretion of material into supermassive black holes in the nuclei of distant galaxies, as suggested in 1964 by Edwin Salpeter and Yakov Zel'dovich. Light and other radiation cannot escape from within the event horizon of a black hole. The energy produced by a quasar is generated "outside" the black hole, by gravitational stresses and immense friction within the material nearest to the black hole, as it orbits and falls inward. The huge luminosity of quasars results from the accretion discs of central supermassive black holes, which can convert between 6% and 32% of the mass of an object into energy, compared to just 0.7% for the p–p chain nuclear fusion process that dominates the energy production in Sun-like stars. Central masses of 10^5 to 10^9 solar masses have been measured in quasars by using reverberation mapping. Several dozen nearby large galaxies, including our own Milky Way galaxy, that do not have an active center and do not show any activity similar to a quasar, are confirmed to contain a similar supermassive black hole in their nuclei (galactic center). Thus it is now thought that all large galaxies have a black hole of this kind, but only a small fraction have sufficient matter in the right kind of orbit at their center to become active and power radiation in such a way as to be seen as quasars. This also explains why quasars were more common in the early universe, as this energy production ends when the supermassive black hole consumes all of the gas and dust near it. This means that it is possible that most galaxies, including the Milky Way, have gone through an active stage, appearing as a quasar or some other class of active galaxy that depended on the black-hole mass and the accretion rate, and are now quiescent because they lack a supply of matter to feed into their central black holes to generate radiation. The matter accreting onto the black hole is unlikely to fall directly in, but will have some angular momentum around the black hole, which will cause the matter to collect into an accretion disc. Quasars may also be ignited or re-ignited when normal galaxies merge and the black hole is infused with a fresh source of matter. In fact, it has been suggested that a quasar could form when the Andromeda Galaxy collides with our own Milky Way galaxy in approximately 3–5 billion years. In the 1980s, unified models were developed in which quasars were classified as a particular kind of active galaxy, and a consensus emerged that in many cases it is simply the viewing angle that distinguishes them from other active galaxies, such as blazars and radio galaxies. The highest-redshift quasar known (as of 2017) is ULAS J1342+0928, with a redshift of 7.54, which corresponds to a comoving distance of approximately 29.36 billion light-years from Earth (these distances are much larger than the distance light could travel in the universe's 13.8-billion-year history because space itself has also been expanding). Hundreds of thousands of quasars have been found, most from the Sloan Digital Sky Survey. All observed quasar spectra have redshifts between 0.056 and 7.54 (as of 2017). Applying Hubble's law to these redshifts, it can be shown that they are between 600 million and 29.36 billion light-years away (in terms of comoving distance).
Because of the great distances to the farthest quasars and the finite velocity of light, they and their surrounding space appear as they existed in the very early universe. The power of quasars originates from supermassive black holes that are believed to exist at the core of most galaxies. The Doppler shifts of stars near the cores of galaxies indicate that they are rotating around tremendous masses with very steep gravity gradients, suggesting black holes. Although quasars appear faint when viewed from Earth, they are visible from extreme distances, being the most luminous objects in the known universe. The brightest quasar in the sky is 3C 273 in the constellation of Virgo. It has an average apparent magnitude of 12.8 (bright enough to be seen through a medium-size amateur telescope), but it has an absolute magnitude of −26.7. From a distance of about 33 light-years, this object would shine in the sky about as brightly as our Sun. This quasar's luminosity is, therefore, about 4 trillion (4×10^12) times that of the Sun, or about 100 times that of the total light of giant galaxies like the Milky Way. This assumes that the quasar is radiating energy in all directions, but the active galactic nucleus is believed to be radiating preferentially in the direction of its jet. In a universe containing hundreds of billions of galaxies, most of which had active nuclei billions of years ago but are only seen as such today (because their light is only now reaching Earth), it is statistically certain that thousands of energy jets should be pointed toward the Earth, some more directly than others. In many cases it is likely that the brighter the quasar, the more directly its jet is aimed at the Earth. Such quasars are called blazars. The hyperluminous quasar APM 08279+5255 was, when discovered in 1998, given an absolute magnitude of −32.2. High-resolution imaging with the Hubble Space Telescope and the 10 m Keck Telescope revealed that this system is gravitationally lensed. A study of the gravitational lensing of this system suggests that the light emitted has been magnified by a factor of ~10. It is still substantially more luminous than nearby quasars such as 3C 273. Quasars were much more common in the early universe than they are today. This discovery by Maarten Schmidt in 1967 was early strong evidence against steady-state cosmology and in favor of Big Bang cosmology. Quasars show the locations where massive black holes are growing rapidly (by accretion). These black holes grow in step with the mass of stars in their host galaxy in a way not understood at present. One idea is that jets, radiation and winds created by the quasars shut down the formation of new stars in the host galaxy, a process called "feedback". The jets that produce strong radio emission in some quasars at the centers of clusters of galaxies are known to have enough power to prevent the hot gas in those clusters from cooling and falling onto the central galaxy. Quasars' luminosities are variable, with time scales that range from months to hours. This means that quasars generate and emit their energy from a very small region, since each part of the quasar would have to be in causal contact with the other parts on such a time scale in order to coordinate the luminosity variations. This would mean that a quasar varying on a time scale of a few weeks cannot be larger than a few light-weeks across. The emission of large amounts of power from a small region requires a power source far more efficient than the nuclear fusion that powers stars.
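The size limit described above is the standard light-crossing-time argument: a source cannot vary coherently faster than light can cross it, so

\[
R \;\lesssim\; c\,\Delta t .
\]

As a worked example with illustrative numbers, a variability time scale of one week (Δt ≈ 6.05 × 10^5 s) gives R ≲ (3 × 10^8 m/s)(6.05 × 10^5 s) ≈ 1.8 × 10^14 m, about 1,200 astronomical units (one light-week), minuscule by galactic standards.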
The conversion of gravitational potential energy to radiation by matter falling towards a black hole can convert between 6% and 32% of the mass to energy, compared to 0.7% for the conversion of mass to energy in a star like our Sun. It is the only process known that can produce such high power over a very long term. (Stellar explosions such as supernovas and gamma-ray bursts, and direct matter–antimatter annihilation, can also produce very high power output, but supernovae only last for days, and the universe does not appear to have had large amounts of antimatter at the relevant times.) Since quasars exhibit all the properties common to other active galaxies such as Seyfert galaxies, the emission from quasars can be readily compared to that of smaller active galaxies powered by smaller supermassive black holes. To create a luminosity of 10^40 watts (the typical brightness of a quasar), a supermassive black hole would have to consume the material equivalent of 10 stars per year. The brightest known quasars devour 1000 solar masses of material every year. The largest known is estimated to consume matter equivalent to 10 Earths per second. Quasar luminosities can vary considerably over time, depending on their surroundings. Since it is difficult to fuel quasars for many billions of years, after a quasar finishes accreting the surrounding gas and dust, it becomes an ordinary galaxy. Radiation from quasars is partially "nonthermal" (i.e., not due to black-body radiation), and approximately 10% are observed to also have jets and lobes like those of radio galaxies that also carry significant (but poorly understood) amounts of energy in the form of particles moving at relativistic speeds. Extremely high energies might be explained by several mechanisms (see Fermi acceleration and Centrifugal mechanism of acceleration). Quasars can be detected over the entire observable electromagnetic spectrum, including radio, infrared, visible light, ultraviolet, X-ray and even gamma rays. Most quasars are brightest in their rest-frame ultraviolet, at the 121.6 nm Lyman-alpha emission line of hydrogen, but due to the tremendous redshifts of these sources, that peak luminosity has been observed as far to the red as 900.0 nm, in the near infrared. A minority of quasars show strong radio emission, which is generated by jets of matter moving close to the speed of light. When viewed along the jet axis, these appear as blazars and often have regions that seem to move away from the center faster than the speed of light (superluminal expansion). This is an optical illusion due to the properties of special relativity. Quasar redshifts are measured from the strong spectral lines that dominate their visible and ultraviolet emission spectra. These lines are brighter than the continuous spectrum. They exhibit Doppler broadening corresponding to mean speeds of several percent of the speed of light. Fast motions strongly indicate a large mass. Emission lines of hydrogen (mainly of the Lyman series and Balmer series), helium, carbon, magnesium, iron and oxygen are the brightest lines. The atoms emitting these lines range from neutral to highly ionized, i.e. stripped of many of their electrons, leaving them highly charged. This wide range of ionization shows that the gas is highly irradiated by the quasar, not merely hot, and not ionized by stars, which cannot produce such a wide range of ionization. Like all (unobscured) active galaxies, quasars can be strong X-ray sources.
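The luminosity and consumption figures above are tied together by the radiative efficiency relation for accretion. Writing the efficiency as η,

\[
L = \eta\,\dot{m}\,c^{2}
\quad\Longrightarrow\quad
\dot{m} = \frac{L}{\eta c^{2}} ,
\]

so for L = 10^40 W and η = 0.1 (inside the 6%–32% range quoted above), ṁ ≈ 10^40 / (0.1 × 9 × 10^16) ≈ 1.1 × 10^24 kg/s, or roughly 18 solar masses per year; across the full 6%–32% range this works out to roughly 5 to 30 solar masses per year, the same order as the "10 stars per year" figure quoted above.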
Radio-loud quasars can also produce X-rays and gamma rays by inverse Compton scattering of lower-energy photons by the radio-emitting electrons in the jet. "Iron quasars" show strong emission lines resulting from low-ionization iron (Fe II), such as IRAS 18508-7815. Quasars also provide some clues as to the end of the Big Bang's reionization. The oldest known quasars ("z" = 6) display a Gunn–Peterson trough and have absorption regions in front of them, indicating that the intergalactic medium at that time was neutral gas. More recent quasars show no absorption region, but rather their spectra contain a spiky area known as the Lyman-alpha forest; this indicates that the intergalactic medium has undergone reionization into plasma, and that neutral gas exists only in small clouds. The intense production of ionizing ultraviolet radiation is also significant, as it would provide a mechanism for reionization to occur as galaxies form. Despite this, current theories suggest that quasars were not the primary source of reionization; the primary causes of reionization were probably the earliest generations of stars, known as Population III stars (possibly 70%), and dwarf galaxies (very early small high-energy galaxies) (possibly 30%). Quasars show evidence of elements heavier than helium, indicating that galaxies underwent a massive phase of star formation, creating Population III stars, between the time of the Big Bang and the first observed quasars. Light from these stars may have been observed in 2005 using NASA's Spitzer Space Telescope, although this observation remains to be confirmed. The taxonomy of quasars includes various subtypes representing subsets of the quasar population having distinct properties. Because quasars are extremely distant, bright, and small in apparent size, they are useful reference points in establishing a measurement grid on the sky. The International Celestial Reference System (ICRS) is based on hundreds of extra-galactic radio sources, mostly quasars, distributed around the entire sky. Because they are so distant, they are apparently stationary to the precision of current technology, yet their positions can be measured with the utmost accuracy by very-long-baseline interferometry (VLBI). The positions of most are known to 0.001 arcsecond or better, which is orders of magnitude more precise than the best optical measurements. A grouping of two or more quasars on the sky can result from a chance alignment, where the quasars are not physically associated, from actual physical proximity, or from the effects of gravity bending the light of a single quasar into two or more images by gravitational lensing. When two quasars appear to be very close to each other as seen from Earth (separated by a few arcseconds or less), they are commonly referred to as a "double quasar". When the two are also close together in space (i.e. observed to have similar redshifts), they are termed a "quasar pair", or a "binary quasar" if they are close enough that their host galaxies are likely to be physically interacting. As quasars are overall rare objects in the universe, the probability of three or more separate quasars being found near the same physical location is very low, and determining whether the system is closely separated physically requires significant observational effort. The first true triple quasar was found in 2007 by observations at the W. M. Keck Observatory on Mauna Kea, Hawaii. LBQS 1429-008 (or QQQ J1432-0106) was first observed in 1989 and at the time was found to be a double quasar.
When astronomers discovered the third member, they confirmed that the sources were separate and not the result of gravitational lensing. This triple quasar has a redshift of "z" = 2.076. The components are separated by an estimated 30–50 kpc, which is typical for interacting galaxies. In 2013, the second true triplet of quasars, QQQ J1519+0627, was found with a redshift "z" = 1.51, the whole system fitting within a physical separation of 25 kpc. The first true quadruple quasar system was discovered in 2015 at a redshift "z" = 2.0412 and has an overall physical scale of about 200 kpc. A multiple-image quasar is a quasar whose light undergoes gravitational lensing, resulting in double, triple or quadruple images of the same quasar. The first such gravitational lens to be discovered was the double-imaged quasar Q0957+561 (or Twin Quasar) in 1979. An example of a triply lensed quasar is PG1115+08. Several quadruple-image quasars are known, including the Einstein Cross and the Cloverleaf Quasar, with the first such discoveries happening in the mid-1980s.
https://en.wikipedia.org/wiki?curid=25239
Quinquagesima Quinquagesima is one of the names used in the Western Church for the Sunday before Ash Wednesday. It is also called Quinquagesima Sunday, Quinquagesimae, Estomihi, Shrove Sunday, or the Sunday next before Lent. The name Quinquagesima originates from Latin "quinquagesimus" (fiftieth). This is in reference to the fifty days before Easter Day counted inclusively, i.e. counting both Sundays (normal counting would count only one of them). Since the forty days of Lent do not include Sundays, the first day of Lent, Ash Wednesday, follows Quinquagesima Sunday by only three days. The name Estomihi is derived from the incipit, or opening words, of the Introit for the Sunday, "Esto mihi in Deum protectorem, et in locum refugii, ut salvum me facias" ("Be Thou unto me a God, a Protector, and a place of refuge, to save me"). The earliest date on which Quinquagesima Sunday can occur is February 1 and the latest is March 7. In the Roman Catholic Church, the terms for this Sunday (and the two immediately before it, Sexagesima and Septuagesima Sundays) were eliminated in the reforms following the Second Vatican Council, and these Sundays are part of Ordinary Time. According to the reformed Roman Rite Roman Catholic calendar, this Sunday is now known by its number within Ordinary Time (fourth through ninth, depending upon the date of Easter). The earlier form of the Roman Rite, with its references to Quinquagesima Sunday, and to the Sexagesima and Septuagesima Sundays, continues to be observed in some communities. In traditional lectionaries, the Sunday concentrates on Luke 18:31–43: "Jesus took the twelve aside and said, 'Lo, we go to Jerusalem, and everything written by the prophets about the Son of Man shall be fulfilled' ... The disciples, however, understood none of this," which from verse 35 is followed by Luke's version of the healing of the blind man near Jericho. The passage presages the themes of Lent and Holy Week. In most churches, palms blessed on Palm Sunday of the previous year are burned on this day after the last mass of the day; the ashes of these burned palms are used for the liturgy of Ash Wednesday. This Sunday has different names in the two different calendars used in the Church of England: in the "Book of Common Prayer" calendar (1662) this Sunday is known as "Quinquagesima", while in the "Common Worship" calendar (2000) it is known as the "Sunday next before Lent". In this latter calendar it is part of the period of Ordinary Time that falls between the feasts of the Presentation of Christ in the Temple (the end of the Epiphany season) and Ash Wednesday. In the Revised Common Lectionary the Sunday before Lent is designated "Transfiguration Sunday", and the gospel reading is the story of the Transfiguration of Jesus from Matthew, Mark, or Luke. Some churches whose lectionaries derive from the RCL, e.g. the Church of England, use these readings but do not designate the Sunday "Transfiguration Sunday". In the Eastern Orthodox Church, its equivalent, the Sunday before Great Lent, is called "Forgiveness Sunday", "Maslenitsa Sunday", or "Cheesefare Sunday". The latter name arises because this Sunday concludes Maslenitsa, the week in which butter and cheese may be eaten, both being prohibited during Great Lent. The former name derives from the fact that this Sunday is followed by a special Vespers called "Forgiveness Vespers", which opens Great Lent.
On this day, Eastern Orthodox Christians at the liturgy listen to the Gospel reading on the forgiveness of sins, fasting, and the laying up of treasures in heaven. On this day, all Orthodox Christians ask each other for forgiveness, so as to begin Great Lent with a good heart, to focus on the spiritual life, to purify the heart from sin in confession, and to meet Easter, the day of the Resurrection of Jesus, with a pure heart. This is the last day before Lent on which non-lenten food is eaten. In Lutheranism, the Gospel reading is combined with 1 Corinthians 13 (Paul's praise of love). Several composers, including Johann Sebastian Bach, wrote cantatas for Estomihi Sunday.
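The inclusive count described above is easy to check programmatically: Quinquagesima falls 49 calendar days (50 counting both Sundays inclusively) before Easter Day, and Ash Wednesday three days later. A minimal Python sketch using the third-party python-dateutil package, whose easter() helper computes the date of Western Easter:

    from datetime import timedelta
    from dateutil.easter import easter  # pip install python-dateutil

    def quinquagesima(year):
        # 49 days before Easter Day, i.e. the seventh Sunday before Easter
        return easter(year) - timedelta(days=49)

    def ash_wednesday(year):
        # Ash Wednesday follows Quinquagesima Sunday by three days
        return easter(year) - timedelta(days=46)

    for year in (2024, 2025, 2026):
        print(year, quinquagesima(year), ash_wednesday(year))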
https://en.wikipedia.org/wiki?curid=25240
Quisling Quisling is a term originating in Norway, used in the Scandinavian languages and in English for a person who collaborates with an enemy occupying force, or more generally as a synonym for traitor. The word originates from the surname of the Norwegian wartime leader Vidkun Quisling, who headed a domestic Nazi collaborationist regime during World War II. Use of Quisling's surname as a term predates World War II. The first recorded use of the term was by Norwegian Labour Party politician Oscar Torp in a 2 January 1933 newspaper interview, where he used it as a general term for followers of Vidkun Quisling. Quisling was at this point in the process of establishing the Nasjonal Samling (National Unity) party, a fascist party modelled on the German Nazi Party. Further uses of the term were made by Aksel Sandemose, in a newspaper article in "Dagbladet" in 1934, and by the newspaper "Vestfold Arbeiderblad" in 1936. The term with the opposite meaning, a Norwegian patriot, is Jøssing. The use of the name as a term for collaborators or traitors in general probably came about upon Quisling's unsuccessful 1940 coup d'état, when he attempted to seize power and make Norway cease resisting the invading Germans. The term was widely introduced to an English-speaking audience by the British newspaper "The Times". It published an editorial on 19 April 1940 titled "Quislings everywhere", which asserted: "To writers, the word Quisling is a gift from the gods. If they had been ordered to invent a new word for traitor... they could hardly have hit upon a more brilliant combination of letters. Aurally it contrives to suggest something at once slippery and tortuous." The "Daily Mail" picked up the term four days after "The Times" editorial was published. "The War Illustrated" discussed "potential Quislings" among the Dutch during the German invasion of the Netherlands. Subsequently, the BBC brought the word into common use internationally. Chips Channon described how, during the Norway Debate of 7–8 May 1940, he and other Conservative MPs who supported Prime Minister of the United Kingdom Neville Chamberlain called those who voted against a motion of no confidence "Quislings". Chamberlain's successor Winston Churchill used the term during an address to the Allied Delegates at St. James's Palace on 21 June 1941, when he said: "A vile race of Quislings—to use a new word which will carry the scorn of mankind down the centuries—is hired to fawn upon the conqueror, to collaborate in his designs and to enforce his rule upon their fellow countrymen while grovelling low themselves." He used the term again in an address to both houses of Congress in the United States of America on 26 December 1941. Commenting upon the effect of a number of Allied victories against Axis forces, and moreover the United States' decision to enter the war, Churchill opined: "Hope has returned to the hearts of scores of millions of men and women, and with that hope there burns the flame of anger against the brutal, corrupt invader. And still more fiercely burn the fires of hatred and contempt for the filthy Quislings whom he has suborned." The term subsequently entered the language and became a target for political cartoonists. In the United States it was used often; for example, in the Warner Bros. cartoon "Tom Turk and Daffy" (1944), it was uttered by a Thanksgiving turkey whose presence is betrayed to Porky Pig by Daffy Duck.
In the American film "Edge of Darkness" (1943), about the resistance in Norway, the heroine's brother is often described as a quisling. A back-formed verb, "to quisle", existed, and gave rise to a much less common version of the noun: "quisler". However, H. L. Mencken, an authority on American English (he wrote "The American Language", a multi-volume scholarly work), appeared even in 1944 not to be aware of the existence of the verb form, and "to quisle" has entirely disappeared from contemporary usage. "Quisling" was applied to some Communist figures who participated in the establishment of Communist regimes. As an illustration, the renegade socialist Zdeněk Fierlinger of Czechoslovakia was frequently derided as "Quislinger" for his collaboration with the Communist Party of Czechoslovakia. "The Patriot Game", one of the best-known songs to emerge from the Irish nationalist struggle, includes the line "...those quislings who sold out the Patriot Game" in some versions (although the original uses "cowards" and other versions substitute "rebels" or "traitors"). In the Norwegian television series "Occupied", Norwegians who are seen as collaborating with the Russian invaders, and later with European Union peacekeepers, are called Quislings. In the epilogue of "Farnham's Freehold" by Robert A. Heinlein, a sign is posted listing available goods and services; one of the items listed is "Jerked Quisling (by the neck)". In the early 21st century, the term demonstrated continued currency as it was used by some anti-Trump American writers to describe President Donald Trump and his associates. In a June 2018 "New York Times" column, Nobel laureate Paul Krugman called US President Trump a "quisling", in reference to what Krugman described as Trump's "serv[ing] the interests of foreign masters at his own country's expense" and "defend[ing] Russia while attacking our closest allies". Other publications also applied the term: for instance, Joe Scarborough in the "Washington Post" ("These are desperate times for the quislings of Trump"), Rich Lowry in "Politico" ("The GOP elite... is the quisling establishment"), Robert Zubrin in "Is Donald Trump A Russian Quisling?" in the conservative "The Federalist", former United States Mint director Philip N. Diehl in "The Hill" ("The historical reference that more aptly applies to pro-Trump Republicans is that of the Quislings"), David Driesen in "History News Network" ("Trump seeks a government of quislings"), and Dick Polman on NPR station WHYY-FM ("Ever since last summer, most Republicans have marinated in their cowardice... The next step toward home-grown tyranny – the quisling phase – has already begun").
https://en.wikipedia.org/wiki?curid=25243
Quill A quill pen is a writing tool made from a moulted flight feather (preferably a primary wing feather) of a large bird. Quills were used for writing with ink before the invention of the dip pen, the metal-nibbed pen, the fountain pen, and, eventually, the ballpoint pen. The hand-cut goose quill is rarely used as a calligraphy tool today, because many papers are now derived from wood pulp and wear down the quill very quickly. However, it is still the tool of choice for a few scribes, who note that quills provide an unmatched sharp stroke as well as greater flexibility than a steel pen. In a carefully prepared quill, the slit does not widen through wetting and drying with ink. It retains its shape adequately, requires only infrequent sharpening, and can be used time and time again until there is little left of it. The hollow shaft of the feather (the "calamus") acts as an ink reservoir, and ink flows to the tip by capillary action. The strongest quills come from the primary flight feathers discarded by birds during their annual moult. Generally, feathers from the left wing are (it is supposed) favored by the right-handed majority of writers because the feather curves away from the sight line, over the back of the hand. In practice, however, the quill barrel is cut to six or seven inches in length, so no such consideration of curvature or 'sight line' is necessary. Additionally, writing with the left hand was discouraged in the era in which the quill was popular, and quills were never sold as left- and right-handed, only by their size and species. Goose feathers are most commonly used; scarcer, more expensive swan feathers are used for larger lettering. Depending on the availability and strength of the feather, as well as the quality and characteristics of the line wanted by the writer, other feathers used for quill-pen making include those from the crow, eagle, owl, hawk, and turkey. Crow feathers were particularly useful as quills when fine work, such as accounting books, was required. Each bird could supply only about 10 to 12 good-quality quills. On a true quill, the barbs are stripped off completely on the trailing edge. (The pinion, for example, only has significant barbs on one side of the barrel.) Later, a fashion developed for stripping partially and leaving a decorative top of a few barbs. The fancy, fully plumed quill is mostly a Hollywood invention and has little basis in reality. Most, if not all, manuscript illustrations of scribes show a quill devoid of decorative barbs, or at least mostly stripped. Quill pens were used to write the vast majority of medieval manuscripts. Quill pens were also used to write the "Magna Carta" and the Declaration of Independence. U.S. President Thomas Jefferson bred geese specially at Monticello to supply his tremendous need for quills. Quill pens are still used today, mainly by professional scribes and calligraphers. Quills are also used as the plectrum material in string instruments, particularly the harpsichord. From the 17th to 19th centuries, the central tube of the quill was used as a priming tube (filled with gunpowder) to fire cannon. Quills were the primary writing instrument in the western world from the 6th to the 19th century. The best quills were usually made from goose, swan, and later turkey feathers. Quills went into decline after the invention of the metal pen, whose mass production began in Great Britain as early as 1822 with John Mitchell of Birmingham. In the Middle East and much of the Islamic world, quills were not used as writing implements.
Only reed pens were used instead. Quill pens were the instrument of choice during the medieval era due to their compatibility with parchment and vellum. Before this, the reed pen had been used, but a finer letter was achieved on animal skin using a cured quill. Other than for written text, quills were often used to create figures, decorations, and images on manuscripts, although many illuminators and painters preferred fine brushes for their work. The variety of different strokes in formal hands was accomplished by good penmanship, as the tip was square-cut and rigid, exactly as it is today with modern steel pens. It was much later, in the 1600s, with the increased popularity of writing, especially in the copperplate script promoted by the many printed manuals available from the 'Writing Masters', that quills became more pointed and flexible. According to the Supreme Court Historical Society, 20 goose-quill pens, neatly crossed, are placed at the four counsel tables each day the U.S. Supreme Court is in session; "most lawyers appear before the Court only once, and gladly take the quills home as souvenirs." This has been done since the earliest sessions of the Court. Quills are denominated by the order in which they are fixed in the wing; the first is favoured by the expert calligrapher, and the second and third quills are very satisfactory also, as is the pinion feather. 'Flags', the 5th and 6th feathers, are also used. No other feather on the wing would be considered suitable by a professional scribe. Information can be obtained on the techniques of curing and cutting quills. One traditional recipe runs: in order to harden a quill that is soft, thrust the barrel into hot ashes, stirring it till it is soft; then, taking it out, press it almost flat upon your knees with the back of a penknife, and afterwards reduce it to a roundness with your fingers. If you have a number to harden, set water and alum over the fire; and while it is boiling put in a handful of quills, the barrels only, for a minute, and then lay them by. An accurate account of the Victorian process by William Bishop, from researches with one of the last London quill dressers, is recorded in the "Calligrapher's Handbook". In the Jewish tradition, quill pens are used by scribes to write Torah scrolls, mezuzot, and tefillin. From the 19th century, quills have been used in radical and socialist symbolism to represent clerks and the intelligentsia. Some notable examples are the Radical Civic Union, the Czech National Social Party (in combination with the hammer, symbol of the labour movement), and the Democratic Party of Socialists of Montenegro. A quill knife was the original primary tool used for cutting and sharpening quills, a process known as "dressing". Following the decline of the quill in the 1820s, after the introduction of the maintenance-free, mass-produced steel dip nib by John Mitchell, such knives were still manufactured but became known as desk knives, stationery knives or, latterly, as the name stuck, "pen knives". There is a small but significant difference between a pen knife and a quill knife, in that the quill knife has a blade that is flat on one side and convex on the other, which facilitates the round cuts required to shape a quill. A "pen" knife, by contrast, has two flat sides. This distinction is not recognised by modern traders, dealers or collectors, who define a quill knife as any small knife with a fixed or hinged blade, including such items as ornamental fruit knives.
Plectra for psalteries and lutes can be cut similarly to writing pens. For harpsichords, the rachis (the portion of the stem that bears the barbs, rather than the calamus) of the primary flight feathers of birds of the crow family was preferred. In modern instruments plastic is more common, but such plectra are often still called "quills". The lesiba uses a quill attached to a string to produce sound.
https://en.wikipedia.org/wiki?curid=25247
Qantas Qantas Airways Limited is the flag carrier of Australia and its largest airline by fleet size, international flights and international destinations. It is the third-oldest airline in the world, after KLM and Avianca, having been founded in November 1920; it began international passenger flights in May 1935. The Qantas name comes from "QANTAS", an acronym for its original name, "Queensland and Northern Territory Aerial Services", and it is nicknamed "The Flying Kangaroo". Qantas is a founding member of the Oneworld airline alliance. The airline is based in the Sydney suburb of Mascot, adjacent to its main hub at Sydney Airport. Qantas had a 65% share of the Australian domestic market and carried 14.9% of all passengers travelling in and out of Australia. Various subsidiary airlines operate to regional centres and on some trunk routes within Australia under the QantasLink banner. Qantas also owns Jetstar Airways, a low-cost airline that operates both international services from Australia and domestic services within Australia and New Zealand, and holds stakes in a number of other Jetstar-branded airlines. Qantas was founded in Winton, Queensland on 16 November 1920 by Hudson Fysh, Paul McGinness and Fergus McMaster as Queensland and Northern Territory Aerial Services Limited. The airline's first aircraft was an Avro 504K. It moved its headquarters to Longreach, Queensland in 1921 and to Brisbane, Queensland in 1930. In 1934, QANTAS and Britain's Imperial Airways (a forerunner of British Airways) formed a new company, Qantas Empire Airways Limited (QEA). The new airline commenced operations in December 1934, flying between Brisbane and Darwin. QEA flew internationally from May 1935, when the service from Darwin was extended to Singapore (Imperial Airways operated the rest of the service through to London). When World War II began, enemy action and accidents destroyed half of the fleet of ten aircraft, and most of the fleet was taken over by the Australian government for war service. Flying boat services were resumed in 1943, with flights between the Swan River at Crawley in Perth, Western Australia and Koggala Lake in Ceylon (now Sri Lanka). This linked up with the British Overseas Airways Corporation (BOAC, the successor airline to Imperial Airways) service to London. Qantas' kangaroo logo was first used on the "Kangaroo Route", begun in 1944, from Sydney to Karachi, where BOAC crews took over for the rest of the journey to the UK. In 1947, QEA was nationalised by the Australian government led by Labor Prime Minister Ben Chifley. QANTAS Limited was then wound up. After nationalisation, Qantas' remaining domestic network, in Queensland, was transferred to the also nationally owned Trans-Australia Airlines, leaving Qantas with a purely international network. Shortly after nationalisation, QEA began its first services outside the British Empire, to Tokyo. Services to Hong Kong began around the same time. In 1957 a head office, Qantas House, opened in Sydney. In June 1959 Qantas entered the jet age when its first Boeing 707-138 was delivered. In 1992, Qantas merged with the nationally owned domestic airline Australian Airlines (renamed from Trans-Australia Airlines in 1986). The airline began to be rebranded as Qantas in the following year. Qantas was gradually privatised between 1993 and 1997. Under the legislation passed to allow the privatisation, Qantas must be at least 51% owned by Australian shareholders.
In 1998, Qantas co-founded the Oneworld alliance with American Airlines, British Airways, Canadian Airlines and Cathay Pacific, with other airlines joining subsequently. With the entry of the new budget airline Virgin Blue (now Virgin Australia) into the domestic market in 2000, Qantas' market share fell. Qantas created the budget carrier Jetstar Airways in 2003 to compete. The main domestic competitor to Qantas, Ansett Australia, collapsed on 14 September 2001. Market share for Qantas immediately neared 90%, but competition with Virgin increased as it expanded; the market share of the Qantas Group eventually settled at a relatively stable position of about 65%, with about 30% for Virgin and other regional airlines accounting for the rest of the market. Qantas briefly revived the Australian Airlines name for a short-lived international budget airline between 2002 and 2006, but this subsidiary was shut down in favour of expanding Jetstar internationally, including to New Zealand. In 2004, the Qantas group expanded into the Asian budget airline market with Jetstar Asia Airways, in which Qantas owns a minority stake. A similar model was used for the investment in Jetstar Pacific, headquartered in Vietnam, in 2007, and Jetstar Japan, launched in 2012. In December 2006, Qantas was the subject of a failed bid from a consortium calling itself Airline Partners Australia. Merger talks with British Airways in 2008 also did not proceed to an agreement. In 2011, an industrial relations dispute between Qantas and the Transport Workers Union of Australia resulted in the grounding of all Qantas aircraft and a lock-out of the airline's staff for two days. On 25 March 2018, a Qantas Boeing 787 Dreamliner became the first aircraft to operate a scheduled non-stop commercial flight between Australia and Europe, with the inaugural arrival in London of Flight 9 (QF9). QF9 was a 17-hour, 14,498 km (9,009-mile) journey from Perth Airport in Western Australia to London Heathrow. On 20 October 2019, Qantas completed the longest flight by a commercial airline to date, between New York and Sydney, using a Boeing 787-9 Dreamliner, in 19 hours and 20 minutes. On 19 March 2020, Qantas confirmed it would suspend about 60% of domestic flights, put two-thirds of its employees on leave, suspend all international flights and ground more than 150 of its aircraft from the end of March until at least 31 May 2020, following expanded government travel restrictions due to the COVID-19 pandemic. To survive the pandemic, Qantas announced that it would cut 6,000 jobs. Qantas also announced it would offload its 30% stake in Jetstar Pacific to Vietnam Airlines, thereby retiring the Jetstar brand in Vietnam. Qantas' headquarters are located at the Qantas Centre in the suburb of Mascot, Sydney, New South Wales. The headquarters underwent a redevelopment which was completed in December 2013. Qantas has operated a number of passenger airline subsidiaries since its inception. Qantas operates a freight service under the name Qantas Freight (which uses aircraft operated by Qantas subsidiary Express Freighters Australia and also leases aircraft from Atlas Air) and also wholly owns the logistics-and-air-freight company Australian air Express. Qantas, through its Aboriginal and Torres Strait Islander Programme, has some links with the Aboriginal Australian community.
As of 2007, the company had run the programme for more than ten years, and 1–2% of its staff were Aboriginal and Torres Strait Islander people. Qantas employs a full-time Diversity Coordinator, who is responsible for the programme. Qantas has also bought and donated Aboriginal art. In 1993, the airline bought the painting "Honey Ant and Grasshopper Dreaming" from the Central Australian desert region. As of 2007, this painting is on permanent loan to the Yiribana Gallery at the Art Gallery of New South Wales. In 1996, Qantas donated five further bark paintings to the gallery. Qantas has also sponsored and supported Aboriginal artists in the past. An early television campaign, starting in 1969 and running for several decades, was aimed at American audiences; it featured a live koala, voiced by Howard Morris, who complained that too many tourists were coming to Australia and concluded "I hate Qantas." The koala ads have been ranked among the greatest commercials of all time. A long-running advertising campaign features renditions by children's choirs of Peter Allen's "I Still Call Australia Home" at various famous landmarks in Australia and foreign locations such as Venice. The song has also been used in Qantas's safety videos since 2018. Qantas is the main sponsor of the Australia national rugby union team. It also sponsors the Socceroos, Australia's national association football team. Qantas was the naming rights sponsor for the Formula One Australian Grand Prix from 2010 until 2012. On 26 December 2011, Qantas signed a four-year deal with Australian cricket's governing body, Cricket Australia, to be the official carrier of the Australia national cricket team. Qantas management has expressed strong support for marriage equality and LGBTIQ issues, with CEO Alan Joyce said to be "arguably the most prominent corporate voice in the marriage equality campaign." As official airline partner for the Sydney Mardi Gras, Qantas decorated one of its aircraft with rainbow wording and positioned a rainbow flag next to the tail's flying kangaroo. Qantas also served pride cookies to its passengers and entered a rainbow roo float in the Mardi Gras parade. There has been criticism of Qantas using its corporate power to advance the private political interests of its executives and staff. Peter Dutton has said that chief executives such as Alan Joyce at Qantas should "stick to their knitting" rather than using the company's brand to advocate for political causes; a senior church leader has made similar comments. Despite the criticism, Qantas said it would continue to advocate for marriage equality, including by offering customers specially commissioned rings bearing the phrase "until we all belong". The phrase would also appear on Qantas boarding passes and other paraphernalia, and the cost of the campaign by Qantas and other participating companies was expected to be more than $5 million. Joyce has pledged that Qantas will "continue social-justice campaigning", a stance he reiterated in relation to a rugby player sacked by Rugby Australia, which is financially supported by Qantas, following the player's social media postings on homosexuality. In August 2011, the company announced that, following financial losses of A$200 million ($209 million) for the year ending June 2011 and a decline in market share, major structural changes would be made. One planned change that did not come to fruition was a proposal to create a new Asia-based premium airline that would operate under a different name. In addition to this plan, Qantas announced it planned to cut 1,000 jobs.
The reforms included route changes, in particular the cessation of services to London via Hong Kong and Bangkok. While Qantas still operated to these cities, onward flights to London would be via its Oneworld partner British Airways under a code-share arrangement. The following year Qantas reported an A$245 million full-year loss to the end of June 2012, citing high fuel prices, intense competition and industrial disputes. This was the first full-year loss since Qantas was fully privatised 17 years previously, in 1995, and led to the airline cancelling its order for 35 new Boeing 787 Dreamliner aircraft to reduce its spending. Qantas subsequently divested itself of its 50% holding in StarTrack, Australia's largest road freight company, in part exchange for acquiring full ownership of Australian air Express. On 26 March 2012, Qantas set up Jetstar Hong Kong with China Eastern Airlines Corporation; the venture was intended to begin flights in 2013, but became embroiled in a protracted approval process. Qantas and Emirates began an alliance on 31 March 2013, under which the two carriers together offered 98 flights per week to Dubai and which saw bookings increase six-fold. In September 2013, following the announcement that the carrier expected another net loss of about A$250 million for the half-year period ending 31 December, and the implementation of further cost-cutting measures that would cut 1,000 jobs within a year, S&P downgraded Qantas' credit rating from BBB− (the lowest investment grade) to BB+. Moody's applied a similar downgrade a month later. Losses continued into the 2014 reporting year, with the Qantas Group reporting a half-year loss of A$235 million and an eventual full-year loss of A$2.84 billion. In February 2014, additional cost-cutting measures to save A$2 billion were announced, including the loss of 5,000 jobs, which would lower the workforce from 32,000 to 27,000 by 2017. In May 2014 the company stated it expected to shed 2,200 jobs by June 2014, including those of 100 pilots. The carrier also reduced the size of its fleet by retiring aircraft and deferring deliveries, and planned to sell some of its assets. With 2,200 employees laid off by June 2014, another 1,800 positions were planned to be cut by June 2015. Also during 2014 the "Qantas Sale Act", under which the airline was privatised, was amended to repeal parts of section 7. That act limits foreign ownership of Qantas to 49 percent; the repealed provisions had imposed further restrictions on foreign airlines, including a 35-percent limit for all foreign airline shareholdings combined and a 25-percent cap on any single foreign entity's holding. The airline returned to profit in 2015, announcing a A$557 million after-tax profit in August 2015, in contrast with the A$2.84 billion loss of the year earlier. In 2015, Qantas sold its lease of Terminal 3 at Sydney Airport, which was due to run until 2019, back to Sydney Airport Corporation for $535 million. This meant Sydney Airport resumed operational responsibility for the terminal, including the lucrative retail areas. Paris-based Australian designer Martin Grant designed the new Qantas staff uniforms that were publicly unveiled on 16 April 2013. These replaced the previous uniforms, colloquially dubbed "Morrissey" by staff after their designer, Peter Morrissey. The new outfits combine the colours of navy blue, red and fuchsia pink.
Qantas chief executive Alan Joyce stated that the new design "speaks of Australian style on the global stage" at the launch event, which involved Qantas employees modelling the uniforms. Grant consulted with Qantas staff members over the course of a year to finalise the 35 styles that were eventually created. Not all employees were happy with the new uniform, however, with one flight attendant quoted as saying: "The uniforms are really tight and they are simply not practical for the very physical job we have to do." Qantas operates flightseeing charters to Antarctica on behalf of Croydon Travel. It first flew Antarctic flightseeing trips in 1977. They were suspended for a number of years after the crash of Air New Zealand Flight 901 on Mount Erebus in 1979; Qantas restarted the flights in 1994. Although these flights do not touch down, they require specific polar operations and crew training due to factors like sector whiteout, which contributed to the 1979 Air New Zealand disaster. With Flights 7 and 8 – a non-stop service between Sydney and Dallas/Fort Worth operated by the Airbus A380 – commencing on 29 September 2014, Qantas operated the world's longest passenger flight on the world's largest passenger aircraft. This was overtaken on 1 March 2016 by Emirates' new Auckland–Dubai service. After it ordered Boeing 787 aircraft, Qantas announced an intention to launch non-stop flights between Australia and the United Kingdom in March 2018, from Perth, Western Australia to London. The inaugural flight left Perth on 24 March. Qantas has codeshare agreements with a number of airlines and has also entered into joint ventures with several partner carriers. Qantas and its subsidiaries operated 297 aircraft, including 71 operated by Jetstar Airways, 90 by the various QantasLink-branded airlines and six by Express Freighters Australia (on behalf of Qantas Freight, which also wet-leases three Atlas Air Boeing 747-400Fs). On 22 August 2012, Qantas announced that, due to losses and to conserve capital, it had cancelled its 35-aircraft Boeing 787-9 order while keeping the 15-aircraft 787-8 order for Jetstar Airways and bringing forward 50 purchase rights. On 20 August 2015 Qantas announced that it had ordered eight Boeing 787-9s for delivery from 2017. In February 2019, Qantas cancelled its remaining orders for a further eight Airbus A380-800 aircraft. In June 2019, during the Paris Air Show, the Qantas Group converted 26 Airbus A321neo orders to the A321XLR variant and another ten A321neo orders to the A321LR variant, and ordered an additional ten A321XLRs. This brought the Qantas Group's total Airbus A320neo-family order to 109 aircraft, consisting of 45 A320neos, 28 A321LRs and 36 A321XLRs. At the time of the announcement, Qantas CEO Alan Joyce stated that a decision had not yet been made on how the aircraft would be distributed between Qantas and Jetstar Airways, or whether they were to be used for network growth or the replacement of older aircraft.
In December 2019, Qantas announced it had selected the Airbus A350-1000 for its Project Sunrise program of non-stop flights from Sydney, Melbourne and Brisbane to cities such as London, New York, Paris, Rio de Janeiro, Cape Town and Frankfurt. No orders were placed at the time, but Qantas said it would work closely with Airbus to prepare contract terms for up to 12 aircraft ahead of a final decision by the Qantas board. Due to the impact of the COVID-19 pandemic on aviation, plans for Project Sunrise were put on hold indefinitely. Qantas has named its aircraft since 1926. Themes have included Greek gods, stars, people in Australian aviation history and Australian birds. Since 1959, the majority of Qantas aircraft have been named after Australian cities. The Airbus A380 series, the flagship of the airline, is named after Australian aviation pioneers, with the first A380 named "Nancy-Bird Walton". Two Qantas aircraft are currently decorated with Indigenous Australian art schemes. One aircraft, a Boeing 737-800, wears a livery called "Mendoowoorrji", which was revealed in November 2013; the design was drawn from the work of the late West Australian Aboriginal artist Paddy Bedford. A Boeing 787-9 Dreamliner is adorned in a paint scheme inspired by the late Emily Kame Kngwarreye's 1991 painting "Yam Dreaming". The adaptation of "Yam Dreaming" to the aircraft, led by Balarinji, a Sydney-based and Aboriginal-owned design firm, incorporates the red Qantas tailfin into the design, which includes white dots with red and orange tones. The design depicts the yam plant, an important and culturally significant symbol in Kngwarreye's Dreaming stories and a staple food source in her home region of Utopia. The design was applied to the aircraft during manufacture, prior to its delivery in March 2018 to Alice Springs Airport, situated some 230 kilometres from Utopia, where the aircraft was met by Kngwarreye's descendants, the local community and Qantas executives. The aircraft would later operate Qantas' inaugural non-stop services between Perth and London Heathrow, and between Melbourne and San Francisco, both scheduled with Boeing 787 aircraft. Australian Aboriginal art designs have previously adorned some Qantas aircraft. The first design, "Wunala Dreaming", was unveiled in 1994 and was painted on now-retired Boeing 747-400 and 747-400ER aircraft between 1994 and 2012; the motif was an overall-red design depicting ancestral spirits in the form of kangaroos travelling in the outback. The second design, "Nalanji Dreaming", was carried by a Boeing 747-300 from 1995 until its retirement in 2005; it was a bright blue design inspired by rainforest landscapes and tropical seas. The third design, "Yananyi Dreaming", featured a depiction of Uluru. The scheme was designed by Uluru-based artist Rene Kulitja in collaboration with Balarinji, and was painted on a Boeing 737 at the Boeing factory prior to its delivery in 2002; it was repainted into the standard livery in 2014. In November 2014 the airline revealed that its 75th Boeing 737-800 would carry a retro livery based on the airline's 1971 "ochre" colour scheme, featuring the iconic Flying Kangaroo on its tail and other aspects drawn from its 1970s fleet. The aircraft was delivered on 17 November. Qantas announced that a second 737-800 would receive a "retro roo" livery in October 2015.
On 16 November 2015 the airline unveiled the second "retro roo" 737, bearing a replica of its 1959 livery to celebrate the airline's 95th birthday. Several Qantas aircraft have been decorated with promotional liveries, promoting telecommunications company Optus; the Disney motion picture "Planes"; the Australian national association football team, the Socceroos; and the Australian national rugby union team, the Wallabies. Two aircraft – an Airbus A330-200 and a Boeing 747-400ER – were decorated with special liveries promoting the Oneworld airline alliance (of which Qantas is a member) in 2009. On 29 September 2014, non-stop Airbus A380 service to Dallas/Fort Worth International Airport was inaugurated using an A380 decorated with a commemorative cowboy hat and bandana on the kangaroo tail logo. Prior to the 2017 Sydney Mardi Gras, Qantas decorated one of its Airbus A330-300 aircraft with rainbow lettering and a rainbow flag on the tail. Qantas domestic flights are primarily operated by Boeing 737-800 and Airbus A330-200 aircraft; Airbus A330-300s sometimes operate domestically as well. A two-class configuration (Business and Economy) is offered. Domestic Business Class is offered on all Boeing 737 and Airbus A330 aircraft. On the Boeing 737, Business is available exclusively in the first three rows of the cabin, with a 2-2 seat configuration, seat recline and a larger pitch between seats. As the A330s also operate international flights, Business Suites are sometimes available on domestic routes; these seats feature all-aisle access in a 1-2-1 configuration and a fully flat bed. Domestic Economy Class is offered on all Boeing 737 and Airbus A330 aircraft, with seat pitch and width varying by aircraft type. Layouts are 3-3 on the 737 and 2-4-2 on the A330. Qantas international flights are primarily operated on Airbus A380s, A330-300s, Boeing 747s and 787s, and sometimes on Airbus A330-200s and Boeing 737-800s. Passenger class configuration varies by aircraft, with the Airbus A330-300 offering a two-class configuration of Business and Economy on short- to medium-haul flights. This compares to the Airbus A380, which offers a four-class configuration of First, Business, Premium Economy and Economy on selected long-haul flights. First class is offered exclusively on the Airbus A380. It comprises 14 individual suites in a 1-1-1 layout. The seats rotate, facing forward for takeoff but turning to the side for dining and sleeping, with an 83.5 in seat pitch (extending to a 212 cm fully flat bed). Each suite has a widescreen HD monitor with 1,000 AVOD programs. In addition to 110 V AC power outlets, USB ports are offered for connectivity. Passengers are also able to make use of the on-board business lounge on the upper deck. Complimentary access to both the first class and business class lounges (or affiliated lounges) is offered. Updated versions of this seat, featuring refreshed cushioning and larger entertainment screens, were fitted to the airline's refurbished Airbus A380 aircraft from late 2019. International Business class is offered on all Qantas mainline passenger aircraft. On all international and selected domestic flights, Qantas offers two different types of Business Class seats, described below. Business Suites are offered on all Boeing 787 and Airbus A330-300 aircraft, and on selected Airbus A330-200 and A380 aircraft. These seats convert to fully flat beds and are arranged in a 1-2-1 configuration.
The Business Suite was introduced on the A330 in October 2014 and includes a Panasonic eX3 entertainment system with a touchscreen. By the end of 2016, the business class seats of Qantas' entire Airbus A330 fleet had been refitted. Airbus A330 Business Suites are available on Asian routes, transcontinental routes across Australia and shorter routes such as the East Coast triangle. Newer versions of this seat were fitted to the airline's new Boeing 787 fleet in late 2017. Business Skybeds are offered on all Boeing 747 and selected A380 aircraft. On the Boeing 747, seating is in a 2-3-2 configuration on the main deck and a 2-2 configuration on the upper deck. Older versions of the lie-flat Skybed had passengers sleeping at a distinct slope to the cabin floor; later versions have a longer pitch and lie fully horizontal. Skybed seats on Boeing 747s feature a touchscreen monitor with 400 AVOD programs. The Boeing 747 Business Skybeds are available on Asian, African and South American routes. On the Airbus A380, 64 fully flat Skybed seats are available, converting to a 200 cm long bed. These seats are located on the upper deck in a 2-2-2 configuration in two separate cabins. Features include a 30 cm touchscreen monitor with 1,000 AVOD programmes and an on-board lounge. Airbus A380 Business Skybeds are available on Qantas' flagship routes, such as Australia to and from London via Singapore, Los Angeles, Dallas and Hong Kong (seasonal). In 2019, Qantas began retrofitting its Airbus A380 aircraft with the new Business Suites offered on Airbus A330 and Boeing 787 aircraft. Each retrofitted aircraft gains six business class seats compared to the previous configuration, with all aircraft due to be completed by the end of 2020. Complimentary access to the Qantas business class lounge (or affiliated lounges) is also offered. Premium economy class is offered exclusively on Airbus A380, Boeing 787-9 and Boeing 747-400 aircraft, with seat pitch and width varying by aircraft type. On the Boeing 747, it is configured in a 2-3-2 seating arrangement around the middle of the main deck, whilst on the A380 it is in a 2-3-2 arrangement at the rear of the upper deck. On the Boeing 787, it is configured in a 2-3-2 seating arrangement around the middle of the aircraft. The total number of seats depends on the aircraft type: A380s have 35 seats, 747s have 36 seats and 787s have 28 seats. Qantas premium economy is presented as a lighter business class product rather than, as at most other airlines, a higher economy class. However, Qantas premium economy does not offer access to premium lounges, and meals are only a slightly uprated version of economy class meals. In 2019, Qantas began retrofitting its Airbus A380 aircraft with the new Premium Economy seats offered on Boeing 787 aircraft. Each retrofitted aircraft gains 25 premium economy seats compared to the previous configuration, with all aircraft due to be completed by the end of 2020. International Economy class is available on all Qantas mainline passenger aircraft, with seat pitch and width varying by aircraft type. Layouts are 3-3 on the 737, 2-4-2 on the A330, 3-3-3 on the 787-9 and 3-4-3 on the 747. On the A380, the layout is 3-4-3 and there are four self-service snack bars located between cabins.
In 2019, Qantas began retrofitting its Airbus A380 aircraft with new Economy seats featuring new seat cushions and improved inflight entertainment, as offered on Boeing 787 aircraft. These aircraft will have fewer economy seats than the previous configuration due to an increase in the number of premium seats. Every Qantas mainline aircraft has some form of audio-video entertainment. Qantas has several types of in-flight entertainment (IFE) systems installed on its aircraft and refers to the in-flight experience as "On:Q". The "Total Entertainment System" by Rockwell Collins was featured on selected domestic and international aircraft between 2000 and 2019. This AVOD system included personal LCD screens in all classes, located in the seat back for economy and business class, and in the armrest for premium economy and first class. The Mainscreen System is featured on selected Boeing 737-800 aircraft. This entertainment system, introduced between 2002 and 2011, has overhead video screens as the main form of entertainment. Movies are shown on the screens on lengthier flights, and TV programmes on shorter flights; a news telecast usually features at the start of the flight. Audio options are less varied than on the Q, iQ or Total Entertainment systems. The "iQ" inflight entertainment system by Panasonic Avionics Corporation is featured on all Boeing 747 and selected Airbus A380 and Boeing 737-800 aircraft. This audio video on demand (AVOD) experience, introduced in 2008, is based on the Panasonic Avionics system and features expanded entertainment options; touch screens; new communications-related features such as Wi-Fi and mobile phone functionality; and increased support for personal electronics (such as USB and iPod connectivity). The "Q" inflight entertainment system, by Panasonic Avionics Corporation in collaboration with Massive Interactive, is featured on all Airbus A330-300, A330-200 and Boeing 787 aircraft, and on selected Airbus A380 aircraft. This AVOD experience, introduced in 2014 and updated in 2018 on selected aircraft, is based on the Panasonic eX3 system and features extensive entertainment options; enhanced touch screens; communications-related features such as Wi-Fi and mobile phone functionality; and increased support for personal electronics. A "my flight" feature offers access to maps, playlists and a service timeline showing when drinks and meals will be served and the best times for resting on long-haul flights. Q Streaming is an in-flight entertainment system in which entertainment is streamed to iPads or personal devices, available in all classes on selected aircraft; a selection of movies, TV programmes, music and children's content is available. In 2007, Qantas conducted a three-month trial of AeroMobile mobile telephone use on a Boeing 767 operating domestic services. During the trial, passengers were allowed to send and receive text messages and emails, but were not able to make or receive calls. Since 2014, Sky News Australia has provided multiple news bulletins both in-flight and in Qantas-branded lounges. Previously, the Australian Nine Network provided a news bulletin for Qantas entitled "Nine's Qantas Inflight News", which was the same broadcast as Nine's "Early Morning News"; Nine subsequently lost the contract to Sky News.
In July 2015, Qantas signed a deal with American cable network HBO to provide over 120 hours of the network's television programming in-flight, updated monthly, as well as original lifestyle and entertainment programming from both Foxtel and the National Geographic Channel. In 2017 Qantas commenced rolling out complimentary high-speed Wi-Fi on domestic aircraft. The service utilises NBN Co Sky Muster satellites to deliver higher speeds than generally offered by onboard Wi-Fi. Previously, in July 2007, Qantas had announced that Wi-Fi would be available on its long-haul A380s and 747-400s, although that system ultimately did not proceed beyond trials. "Qantas The Australian Way" is the airline's in-flight magazine. In mid-2015, the magazine ended a 14-year publishing deal with Bauer Media, switching its publisher to Medium Rare. The Qantas Club is the airline lounge for Qantas, with airport locations around Australia and the world. Additionally, Qantas operates dedicated international first-class lounges in Sydney, Melbourne, Auckland, Los Angeles and Singapore. Domestically, Qantas also offers dedicated Business Lounges at Sydney, Melbourne, Brisbane, Canberra and Perth for domestic Business Class passengers, Qantas Platinum and Platinum One members, and Oneworld Emerald frequent flyers. In April 2013, Qantas opened its new flagship lounge in Singapore, the Qantas Singapore Lounge. This replaced the former separate first- and business-class lounges as a result of the new Emirates alliance. Similar combined lounges were also opened in Hong Kong in April 2014 and in Brisbane in October 2016. These new lounges provide the same service currently offered by Sofitel in the flagship First lounges in Sydney and Melbourne, and a dining experience featuring Neil Perry's Spice Temple-inspired dishes and signature cocktails. Qantas Club members, Gold frequent flyers and Oneworld Sapphire holders are permitted to enter domestic Qantas Clubs when flying on Qantas or Jetstar flights, along with one guest who need not be travelling. Platinum and Oneworld Emerald members are permitted to bring in two guests, who do not need to be travelling. Internationally, members use Qantas International Business Class lounges (or the Oneworld equivalent); guests of the member must be travelling to gain access to international lounges. When flying with American Airlines, members have access to Admirals Club lounges, and when flying on British Airways, members have access to British Airways' Terraces and Galleries lounges. Platinum frequent flyers had previously been able to access the Qantas Club in Australian domestic terminals at any time, regardless of whether they were flying that day. Travellers holding Oneworld Sapphire or Emerald status are also allowed in Qantas Club lounges worldwide. Access to Qantas First lounges is open to passengers travelling internationally in first class on Qantas or Oneworld flights, as well as Qantas Platinum and Oneworld Emerald frequent flyers. Emirates first-class passengers are also eligible for access to the Qantas first lounges in Sydney and Melbourne. The Qantas Club also offers membership by paid subscription (one, two or four years) or by achievement of Gold or Platinum frequent flyer status. Benefits of membership include lounge access, priority check-in, priority luggage handling and increased luggage allowances. The Qantas frequent-flyer program is aimed at rewarding customer loyalty.
The program is long-standing, although the date of its actual inception has generated some commentary. Qantas states that the program launched in 1987, although other sources claim that the current program was launched in the early 1990s, with a Captain's Club program existing before that. Points are accrued based on distance flown, with bonuses that vary by travel class. Points can also be earned on other Oneworld airlines as well as through other non-airline partners. Points can be redeemed for flights or upgrades on flights operated by Qantas, Oneworld airlines and other partners. Other partners include credit cards, car rental companies, hotels and many others. Flights with Qantas and selected partner airlines earn Status Credits; accumulating these allows progression to Silver status (Oneworld Ruby), Gold status (Oneworld Sapphire), and Platinum and Platinum One status (Oneworld Emerald). Membership of the program has grown significantly since 2000, when the program had 2.4 million members. By 2005 membership had grown to 4.3 million, then to 7.2 million by 2010 and 10.8 million in 2015. As of 2018, the program had 12.3 million members, approaching the equivalent of half of the Australian population. Qantas has faced criticism regarding the availability of seats for members redeeming points. In 2004, the Australian Competition and Consumer Commission directed Qantas to provide greater disclosure to members regarding the availability of frequent-flyer seats. In March 2008, an analyst at JPMorgan Chase suggested that the Qantas frequent-flyer program could be worth A$2 billion (US$1.9 billion), representing more than a quarter of the total market value of Qantas. On 1 July 2008 a major overhaul of the program was announced, with two key new features. The first was Any Seat rewards, in which members could now redeem any seat on an aircraft, rather than just selected seats, at a price. The second was Points Plus Pay, which enabled members to use a combination of cash and points to redeem an award. Additionally, the Frequent Flyer store was expanded to include a greater range of products and services. Announcing the revamp, Qantas confirmed it would seek to raise about A$1 billion in 2008 by selling up to 40% of the frequent flyer program. However, in September 2008, it stated it would defer the float, citing volatile market conditions. It is often claimed that Qantas has never had an aircraft crash. While it is true that the company has neither lost a jet airliner nor had any jet fatalities, it had eight fatal accidents and an aircraft shot down between 1927 and 1945, with the loss of 63 people. Half of these accidents and the shoot-down occurred during World War II, when Qantas aircraft were operating on behalf of Allied military forces. Post-war, it lost another four aircraft (one of which was owned by BOAC and operated by Qantas in a pooling arrangement), with a total of 21 people killed. The last fatal accidents suffered by Qantas were in 1951, with three fatal crashes in five months. Qantas' safety record has seen it named the world's safest airline for seven years in a row, from 2012 to 2019. Since the end of World War II, a number of accidents and incidents have occurred. On 26 May 1971 Qantas received a call from a "Mr. Brown" claiming that a bomb had been planted on a Hong Kong-bound jet and demanding $500,000 in unmarked $20 notes.
The caller and his threat were taken seriously when he directed police to an airport locker where a functional bomb was found. Arrangements were made to pick up the money in front of the airline's head office in the heart of the Sydney business district. Qantas paid the money and it was collected, after which Mr. Brown called again, advising that the "bomb on the plane" story was a hoax. The initial pursuit of the perpetrator was bungled by the New South Wales Police Force which, despite having been advised of the matter from the time of the first call, failed to establish adequate surveillance of the pick-up of the money. Directed not to use their radios (for fear of being overheard), the police were unable to communicate adequately. Tipped off by a still-unidentified informer, the police arrested an Englishman, Peter Macari, finding more than $138,000 hidden in an Annandale property. Convicted and sentenced to 15 years in prison, Macari served nine years before being deported to Britain. More than $224,000 remains unaccounted for. The 1986 telemovie "Call Me Mr. Brown", directed by Scott Hicks and produced by Terry Jennings, relates to this incident. On 4 July 1997 a copycat extortion attempt was thwarted by police and Qantas security staff. In November 2005 it was revealed that Qantas had a policy of not seating adult male passengers next to unaccompanied children, which led to accusations of discrimination. The policy came to light following an incident in 2004 when Mark Wolsay, who was seated next to a young boy on a Qantas flight in New Zealand, was asked to change seats with a female passenger. A steward informed him that "it was the airline's policy that only women were allowed to sit next to unaccompanied children". Cameron Murphy, president of the NSW Council for Civil Liberties, criticised the policy and stated that "there was no basis for the ban", saying it was wrong to assume that all adult males posed a danger to children. The policy has also been criticised for failing to take female abusers into consideration. In 2010, when British Airways was successfully sued and changed its child seating policy, Qantas argued again that banning men from sitting next to unaccompanied children "reflected parents' concerns". In August 2012, the controversy resurfaced when a male passenger had to swap seats with a female passenger after the crew noticed he was sitting next to an unrelated girl travelling alone. The man said he felt discriminated against and humiliated in front of the other passengers, as if he were a possible paedophile. A Qantas spokesman defended the policy as consistent with that of other airlines in Australia and around the globe. In 2006 a class action lawsuit, alleging price-fixing on air cargo freight, was commenced in Australia. The lawsuit was settled early in 2011, with Qantas agreeing to pay in excess of $21 million. Qantas has pleaded guilty to participating in a cartel that fixed the price of air cargo. Qantas Airways Ltd. was fined CAD$155,000 after it admitted that its freight division fixed surcharges on cargo exported on certain routes from Canada between May 2002 and February 2006. In July 2007, Qantas pleaded guilty in the United States to price fixing and was fined a total of $61 million following a Department of Justice investigation. The executive in charge was jailed for six months; other Qantas executives were granted immunity after the airline agreed to co-operate with authorities.
In 2008 the Australian Competition and Consumer Commission fined the airline $20 million for breaches of consumer protection law. In November 2010 Qantas was fined 8.8 million euros for its part in an air cargo cartel involving up to 11 other airlines. Qantas was fined NZ$6.5 million in April 2011 when it pleaded guilty in the New Zealand High Court to the cartel operation. In response to ongoing industrial unrest over failed negotiations involving three unions (the Australian Licensed Aircraft Engineers Association (ALAEA), the Australian and International Pilots Association (AIPA) and the Transport Workers Union of Australia (TWU)), the company grounded its entire domestic and international fleet from 5 pm AEDT on 29 October 2011, with the employees involved to be locked out from 8 pm AEDT on 31 October. It was reported that the grounding would have a daily financial impact of A$20 million. In the early hours of 31 October, Fair Work Australia ordered that all industrial action taken by Qantas and the involved trade unions be terminated immediately. The order was requested by the federal government amid fears that an extended period of grounding would do significant damage to the national economy, especially the tourism and mining sectors. The grounding affected an estimated 68,000 customers worldwide. Qantas has been subject to protests in relation to asylum seeker deportations, leading to disruptions of flights. In 2015 activists prevented the transfer of a Tamil man from Melbourne to Darwin (from where he was to be deported to Colombo) by refusing to take their seats on a Qantas flight. It was reported that Qantas banned one of the activists, a Melbourne student, from taking Qantas flights in the future; an unnamed Qantas head of security emailed her saying that her "actions are unacceptable and will not be tolerated by the Qantas Group or the Jetstar Group". Also in 2015, another Tamil man was to be sent from Melbourne to Darwin for later deportation; a protest by the man led to him not being put on the plane. A spokesman for Qantas said flight QF838 was delayed almost two hours. The delays reportedly caused inconvenience to multiple passengers, especially those with connecting flights. A spokesperson from Qantas stated that "[s]afety and security is the number-one priority for all airlines and an aircraft is not the right place for people to conduct protests." Campaigners also asked Qantas to rule out deporting an Iraqi man, Saeed, in 2017, and have asked Qantas not to participate in the high-profile deportation case of the Nadesalingam family. In response, a Qantas spokesperson stated: "We appreciate that this is a sensitive issue. The government and courts are best placed to make decisions on complex immigration matters, not airlines". In 2009, Qantas was one of the inaugural inductees into the Queensland Business Leaders Hall of Fame.
https://en.wikipedia.org/wiki?curid=25254
QED (text editor) QED is a line-oriented computer text editor that was developed by Butler Lampson and L. Peter Deutsch for the Berkeley Timesharing System running on the SDS 940. It was implemented by L. Peter Deutsch and Dana Angluin between 1965 and 1966. QED (for "quick editor") addressed teleprinter usage, but systems "for CRT displays [were] not considered, since many of their design considerations [were] quite different." Ken Thompson later wrote a version for CTSS; this version was notable for introducing regular expressions. Thompson rewrote QED in BCPL for Multics. The Multics version was ported to the GE-600 system used at Bell Labs in the late 1960s under GECOS, and later GCOS after Honeywell took over GE's computer business. The GECOS-GCOS port used I/O routines written by A. W. Winklehoff. Dennis Ritchie, Ken Thompson and Brian Kernighan wrote the QED manuals used at Bell Labs. Given that these authors were the primary developers of the Unix operating system, it is natural that QED had a strong influence on the classic UNIX text editors ed and sed and their descendants such as ex and sam, and more distantly on AWK and Perl. A version of QED named FRED (Friendly Editor) was written at the University of Waterloo for Honeywell systems by Peter Fraser. A University of Toronto team consisting of Tom Duff, Rob Pike, Hugh Redelmeier and David Tilbrook implemented a version of QED that runs on UNIX; David Tilbrook later included QED as part of his QEF tool set. QED was also used as a character-oriented editor on the Norwegian-made Norsk Data systems, first under Nord TSS, then Sintran III. It was implemented for the Nord-1 computer in 1971 by Bo Lewendal, who, after working with Deutsch and Lampson at Project Genie and at the Berkeley Computer Corporation, had taken a job with Norsk Data (and who developed the Nord TSS later in 1971).
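QED's most influential legacy is the regular-expression-driven line editing that Thompson's version introduced and that ed and sed inherited. As a purely illustrative sketch (not QED's actual command set or implementation), the following Python fragment shows the general shape of an ed/QED-style "s/pattern/replacement/" substitution applied to a buffer of lines:

```python
import re

def substitute(buffer, line_no, command):
    """Apply an ed/QED-style s/pattern/replacement/ command to one line.

    `buffer` is a list of text lines; `line_no` is a 0-based index.
    This is a toy model of the regex substitution popularised by
    Ken Thompson's QED, not a reconstruction of QED itself; it does
    not handle escaped '/' characters inside the pattern.
    """
    # A command such as "s/old/new/" splits into ["s", "old", "new", ""].
    _, pattern, replacement, _ = command.split("/")
    buffer[line_no] = re.sub(pattern, replacement, buffer[line_no])
    return buffer

lines = ["the quick brown fox", "jumps over the lazy dog"]
substitute(lines, 0, "s/quick/slow/")
print(lines[0])  # -> "the slow brown fox"
```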
https://en.wikipedia.org/wiki?curid=25256
Qusay Hussein Qusay Saddam Hussein al-Tikriti (or Qusai; c. 1967 – 22 July 2003) was an Iraqi politician and the second son of former Iraqi President Saddam Hussein. He was appointed as his father's heir apparent in 2000 and was also in charge of the military. Hussein was born in Baghdad around 1966 to the Ba'athist revolutionary Saddam Hussein, who was in prison at the time, and his wife and cousin, Sajida Talfah; some sources give the birth year as 1967, others as 1968. As a child, his father would take him and his brother to watch executions. Unlike other members of his family and the government, little is known about Hussein, politically or personally. He married Sahar Maher Abd al-Rashid, the daughter of Maher Abd al-Rashid, a top-ranking military official, and had three sons: Mustapha Qusay (3 January 1989 – 22 July 2003), Yahya Qusay (born 1991) and Yaqub Qusay (birth year unknown). Hussein played a role in crushing the Shiite uprising in the aftermath of the 1991 Gulf War and is also thought to have masterminded the destruction of the southern marshes of Iraq. The wholesale destruction of these marshes ended a centuries-old way of life among the Shiite Marsh Arabs who made the wetlands their home, and ruined the habitat of dozens of species of migratory birds. The Iraqi government stated that the action was intended to produce usable farmland, though a number of outside observers believe the destruction was aimed at the Marsh Arabs as retribution for their participation in the 1991 uprising. Hussein's older brother Uday was viewed as their father's heir apparent until he sustained serious injuries in a 1996 assassination attempt. Unlike Uday, who was known for extravagance and erratic, violent behavior, Qusay kept a low profile, so details regarding his actions and roles are obscure. Iraqi dissidents claim that Hussein was responsible for the killing of many political activists. "The Sunday Times" reported that Hussein ordered the killing of Khalis Mohsen al-Tikriti, an engineer at the military industrialization organization, because he believed Mohsen was planning to leave Iraq. In 1998, Iraqi opposition groups accused Hussein of ordering the execution of thousands of political prisoners, after hundreds of inmates had similarly been executed to make room for new prisoners in crowded jails. Hussein's service in the Iraqi Republican Guard began in 2000. It is believed that he became the supervisor of the Guard and the head of internal security forces (possibly the Special Security Organization (SSO)), and had authority over other Iraqi military units. On the afternoon of 22 July 2003, troops of the 101st Airborne Division's 3/327th Infantry (headquarters and C Company), aided by U.S. Special Forces, killed Hussein, his 14-year-old son Mustapha and his older brother Uday during a raid on a house in the northern Iraqi city of Mosul. Acting on a tip from Hussein's cousin, a special forces team attempted to apprehend everyone in the house at the time. After being fired on, the special forces pulled back and called for backup. After Task Force 121 members were wounded, the 3/327th Infantry surrounded the house and fired on it with TOW missiles, a Mark 19 automatic grenade launcher, M2 .50 caliber machine guns and small arms. After about four hours of battle (the whole operation lasted six hours), the soldiers entered the house and found four dead, including the two brothers and their bodyguard.
There were reports that Hussein's 14-year-old son Mustapha was the fourth body found. Brigadier General Frank Helmick, the assistant commander of the 101st Airborne, commented that all occupants of the house died during the gun battle before U.S. troops were able to enter. Soldiers tried to enter the house three times, encountering resistance from AK-47 fire and grenades in the first two attempts. Uday, Qusay and their bodyguard defended the street side and the first floor from a bathroom at the front of the house, while Qusay's son took cover and fought from a bedroom at the back. American forces then repeatedly fired on the house, including with missiles. Three of the adults are thought to have died as a result of the TOW missiles fired into the front of the house. In the third attempt, the soldiers killed Qusay's 14-year-old son, the last remaining defender, after he fired on them. Brigade commander Col. Joe Anderson said an announcement in Arabic was made at 10 am that day, calling on the people inside to come out peacefully; the answer he received was a barrage of bullets. An experienced team of commandos tried to storm the building, but had to retreat under fire, and four American soldiers were injured. Anderson then ordered his men to fire with .50 caliber heavy machine guns. Uday and Qusay refused to surrender even after a helicopter fired a rocket and the Strike Brigade fired 40 mm grenades at them. The colonel decided that more firepower was necessary to take down the brothers, leading to 12 TOW missiles being fired into the building. After his sons' deaths, Saddam Hussein recorded a tape in which he said: "Beloved Iraqis, your brothers Uday and Qusay, and Mustafa, the son of Qusay, took a stand of faith, which pleases God, makes a friend happy, and makes an enemy angry. They stood in the arena of jihad in Mosul, after a valiant battle with the enemy that lasted six hours. The armies of aggression mobilised all types of weapons of the ground forces against them and succeeded to harm them only when they used planes against the house where they were. Thus, they adopted a stand with which God has honoured this Hussein family so that the present would be a continuation of the brilliant, genuine, faithful, and honourable past. We thank God for what he has ordained for us when he honoured us with their martyrdom for his sake. We ask Almighty God to satisfy them and all the righteous martyrs after they satisfied him with their faithful Jihadist stand. Had Saddam Hussein had 100 children, other than Uday and Qusay, Saddam Hussein would have sacrificed them on the same path. God honoured us by their martyrdom. If you had killed Uday, Qusay, Mustafa, and another mujahideen man with them, all the youths of our nation and the youths of Iraq are Uday, Qusay, and Mustafa in the fields of jihad." On 23 July 2003, the American command stated that dental records had conclusively identified two of the dead men as Saddam Hussein's sons. It also announced that the informant (possibly Nawaf al-Zaidan, the owner of the villa in Mosul in which the brothers were killed) would receive the combined $30 million reward previously offered for their apprehension. Because many Iraqis were skeptical of the news of the deaths, the U.S. government released photos of the corpses and allowed Iraq's governing council to identify the bodies, despite earlier U.S. objections to the broadcasting of images of American dead on Arab television.
Afterwards, the bodies were reconstructed by morticians; for example, Qusay's beard was shaved and gashes from the battle were removed. Hussein was the ace of clubs in the coalition forces' most-wanted Iraqi playing cards; his father was the ace of spades and his brother the ace of hearts. Hussein's other two sons, Yahya Qusay and Yaqub Qusay, are presumed alive, but their whereabouts are unknown.
https://en.wikipedia.org/wiki?curid=25257
Quantum chromodynamics In theoretical physics, quantum chromodynamics (QCD) is the theory of the strong interaction between quarks and gluons, the fundamental particles that make up composite hadrons such as the proton, neutron and pion. QCD is a type of quantum field theory called a non-abelian gauge theory, with symmetry group SU(3). The QCD analog of electric charge is a property called "color". Gluons are the force carriers of the theory, as photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics. A large body of experimental evidence for QCD has been gathered over the years. QCD exhibits two main properties: color confinement, meaning that the force between color charges does not diminish as they are separated, so that quarks and gluons cannot be isolated from hadrons; and asymptotic freedom, a steady reduction in the strength of the interaction as the energy scale increases. Asymptotic freedom was discovered in 1973 by David Gross and Frank Wilczek, and independently by David Politzer in the same year. For this work all three shared the 2004 Nobel Prize in Physics. Physicist Murray Gell-Mann coined the word "quark" in its present sense. It originally comes from the phrase "Three quarks for Muster Mark" in "Finnegans Wake" by James Joyce. On June 27, 1978, Gell-Mann wrote a private letter to the editor of the "Oxford English Dictionary", in which he related that he had been influenced by Joyce's words: "The allusion to three quarks seemed perfect." (Originally, only three quarks had been discovered.) The three kinds of charge in QCD (as opposed to one in quantum electrodynamics, or QED) are usually referred to as "color charge", by loose analogy to the three kinds of color (red, green and blue) perceived by humans. Other than this nomenclature, the quantum parameter "color" is completely unrelated to the everyday, familiar phenomenon of color. The force between quarks is known as the colour force (or color force) or strong interaction, and is responsible for the strong nuclear force. Since the theory of electric charge is dubbed "electrodynamics", the Greek word χρῶμα ("chroma", "color") is applied to the theory of color charge, "chromodynamics". With the invention of bubble chambers and spark chambers in the 1950s, experimental particle physics discovered a large and ever-growing number of particles called hadrons. It seemed that such a large number of particles could not all be fundamental. First, the particles were classified by charge and isospin by Eugene Wigner and Werner Heisenberg; then, in 1953–56, according to strangeness by Murray Gell-Mann and Kazuhiko Nishijima (see Gell-Mann–Nishijima formula). To gain greater insight, the hadrons were sorted into groups having similar properties and masses using the "eightfold way", invented in 1961 by Gell-Mann and Yuval Ne'eman. Gell-Mann and George Zweig, correcting an earlier approach of Shoichi Sakata, went on to propose in 1963 that the structure of the groups could be explained by the existence of three flavors of smaller particles inside the hadrons: the quarks. Perhaps the first remark that quarks should possess an additional quantum number was made as a short footnote in a preprint by Boris Struminsky, in connection with the Ω− hyperon being composed of three strange quarks with parallel spins (this situation was peculiar, because, since quarks are fermions, such a combination is forbidden by the Pauli exclusion principle). Boris Struminsky was a PhD student of Nikolay Bogolyubov, who suggested the problem considered in this preprint and advised Struminsky in the research. In the beginning of 1965, Nikolay Bogolyubov, Boris Struminsky and Albert Tavkhelidze wrote a preprint with a more detailed discussion of the additional quark quantum degree of freedom.
This work was also presented by Albert Tavkhelidze, without obtaining the consent of his collaborators, at an international conference in Trieste, Italy, in May 1965. A similarly mysterious situation arose with the Δ++ baryon; in the quark model, it is composed of three up quarks with parallel spins. In 1964–65, Greenberg, and Han and Nambu, independently resolved the problem by proposing that quarks possess an additional SU(3) gauge degree of freedom, later called color charge. Han and Nambu noted that quarks might interact via an octet of vector gauge bosons: the gluons. Since free quark searches consistently failed to turn up any evidence for the new particles, and because an elementary particle back then was "defined" as a particle which could be separated and isolated, Gell-Mann often said that quarks were merely convenient mathematical constructs, not real particles. The meaning of this statement was usually clear in context: he meant that quarks are confined, but he was also implying that the strong interactions could probably not be fully described by quantum field theory. Richard Feynman argued that high-energy experiments showed quarks are real particles: he called them "partons" (since they were parts of hadrons). By particles, Feynman meant objects which travel along paths, elementary particles in a field theory. The difference between Feynman's and Gell-Mann's approaches reflected a deep split in the theoretical physics community. Feynman thought the quarks have a distribution of position or momentum, like any other particle, and he (correctly) believed that the diffusion of parton momentum explained diffractive scattering. Although Gell-Mann believed that certain quark charges could be localized, he was open to the possibility that the quarks themselves could not be localized because space and time break down. This was the more radical approach of S-matrix theory. James Bjorken proposed that pointlike partons would imply certain relations in deep inelastic scattering of electrons and protons, which were verified in experiments at SLAC in 1969. This led physicists to abandon the S-matrix approach for the strong interactions. In 1973 the concept of color as the source of a "strong field" was developed into the theory of QCD by physicists Harald Fritzsch and Heinrich Leutwyler, together with physicist Murray Gell-Mann. In particular, they employed the general field theory developed in 1954 by Chen Ning Yang and Robert Mills (see Yang–Mills theory), in which the carrier particles of a force can themselves radiate further carrier particles. (This is different from QED, where the photons that carry the electromagnetic force do not radiate further photons.) The discovery of asymptotic freedom in the strong interactions by David Gross, David Politzer and Frank Wilczek allowed physicists to make precise predictions of the results of many high-energy experiments using the quantum field theory technique of perturbation theory. Evidence of gluons was discovered in three-jet events at PETRA in 1979. These experiments became more and more precise, culminating in the verification of perturbative QCD at the level of a few percent at the LEP collider at CERN. The other side of asymptotic freedom is confinement. Since the force between color charges does not decrease with distance, it is believed that quarks and gluons can never be liberated from hadrons. This aspect of the theory is verified within lattice QCD computations, but is not mathematically proven.
One of the Millennium Prize Problems announced by the Clay Mathematics Institute requires a claimant to produce such a proof. Other aspects of non-perturbative QCD are the exploration of phases of quark matter, including the quark–gluon plasma. The relation between the short-distance particle limit and the confining long-distance limit is one of the topics recently explored using string theory, the modern form of S-matrix theory. Every field theory of particle physics is based on certain symmetries of nature whose existence is deduced from observations. These can be local symmetries, which act independently at each point in spacetime and are the basis of gauge theories, or global symmetries, whose operations must be applied simultaneously at all points of spacetime. QCD is a non-abelian gauge theory (or Yang–Mills theory) of the SU(3) gauge group, obtained by taking the color charge to define a local symmetry. Since the strong interaction does not discriminate between different flavors of quark, QCD has an approximate flavor symmetry, which is broken by the differing masses of the quarks. There are additional global symmetries whose definitions require the notion of chirality, the discrimination between left- and right-handed. If the spin of a particle has a positive projection on its direction of motion then it is called right-handed; otherwise, it is left-handed. Chirality and handedness are not the same, but become approximately equivalent at high energies. As mentioned, "asymptotic freedom" means that at large energies – corresponding also to short distances – there is practically no interaction between the particles. This is in contrast – more precisely, one would say "dual" – to what one is used to, since one usually associates the absence of interactions with large distances. However, as already noted in the original paper of Franz Wegner, a solid-state theorist who in 1971 introduced simple gauge-invariant lattice models, the high-temperature behaviour of the original model, e.g. the strong decay of correlations at large distances, corresponds to the low-temperature behaviour of the (usually ordered!) dual model, namely the asymptotic decay of non-trivial correlations (e.g. short-range deviations from almost perfect arrangements) at short distances. Here, in contrast to Wegner, we have only the dual model, which is the one described in this article. The color group SU(3) corresponds to the local symmetry whose gauging gives rise to QCD. The electric charge labels a representation of the local symmetry group U(1) which is gauged to give QED: this is an abelian group. If one considers a version of QCD with "Nf" flavors of massless quarks, then there is a global (chiral) flavor symmetry group SUL("Nf") × SUR("Nf") × UB(1) × UA(1). The chiral symmetry is spontaneously broken by the QCD vacuum to the vector (L+R) subgroup SUV("Nf") with the formation of a chiral condensate. The vector symmetry UB(1) corresponds to the baryon number of quarks and is an exact symmetry. The axial symmetry UA(1) is exact in the classical theory, but broken in the quantum theory, an occurrence called an anomaly. Gluon field configurations called instantons are closely related to this anomaly. There are two different types of SU(3) symmetry: there is the symmetry that acts on the different colors of quarks, which is an exact gauge symmetry mediated by the gluons, and there is also a flavor symmetry which rotates different flavors of quarks into each other, or "flavor SU(3)". Flavor SU(3) is an approximate symmetry of the vacuum of QCD, and is not a fundamental symmetry at all. It is an accidental consequence of the small masses of the three lightest quarks.
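The symmetry-breaking pattern just described can be summarised compactly in standard notation (added here for clarity; the notation, though not this block, follows the text above):

```latex
% Chiral symmetry of QCD with N_f massless quarks and its spontaneous breaking
SU(N_f)_L \times SU(N_f)_R \times U(1)_B \times U(1)_A
\;\longrightarrow\;
SU(N_f)_V \times U(1)_B ,
\qquad \langle \bar{q} q \rangle \neq 0 .
% U(1)_A is broken separately by the quantum anomaly, and the
% N_f^2 - 1 broken axial generators yield pseudo-Goldstone bosons
% (the pions, for N_f = 2).
```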
In the QCD vacuum there are vacuum condensates of all the quarks whose mass is less than the QCD scale. This includes the up and down quarks, and to a lesser extent the strange quark, but not any of the others. The vacuum is symmetric under SU(2) isospin rotations of up and down, and to a lesser extent under rotations of up, down and strange, or the full flavor group SU(3), and the observed particles make isospin and SU(3) multiplets. The approximate flavor symmetries do have associated gauge bosons, observed particles like the rho and the omega, but these particles are nothing like the gluons and they are not massless. They are emergent gauge bosons in an approximate string description of QCD. The dynamics of the quarks and gluons are controlled by the quantum chromodynamics Lagrangian. The gauge invariant QCD Lagrangian is

$$\mathcal{L}_{\mathrm{QCD}} = \bar{\psi}_i \left( i (\gamma^\mu D_\mu)_{ij} - m\, \delta_{ij} \right) \psi_j - \frac{1}{4} G^a_{\mu\nu} G^{\mu\nu}_a$$

where $\psi_i(x)$ is the quark field, a dynamical function of spacetime, in the fundamental representation of the SU(3) gauge group, indexed by $i$, $j$, ...; $D_\mu$ is the gauge covariant derivative; the $\gamma^\mu$ are Dirac matrices connecting the spinor representation to the vector representation of the Lorentz group. The symbol $G^a_{\mu\nu}$ represents the gauge invariant gluon field strength tensor, analogous to the electromagnetic field strength tensor, $F_{\mu\nu}$, in quantum electrodynamics. It is given by:

$$G^a_{\mu\nu} = \partial_\mu \mathcal{A}^a_\nu - \partial_\nu \mathcal{A}^a_\mu + g f^{abc} \mathcal{A}^b_\mu \mathcal{A}^c_\nu$$

where $\mathcal{A}^a_\mu(x)$ are the gluon fields, dynamical functions of spacetime, in the adjoint representation of the SU(3) gauge group, indexed by $a$, $b$, ...; and $f^{abc}$ are the structure constants of SU(3). Note that the rules to move up or pull down the $a$, $b$, or $c$ indices are trivial, (+, ..., +), so that $f^{abc} = f_{abc} = f^{a}{}_{bc}$, whereas for the $\mu$ or $\nu$ indices one has the non-trivial relativistic rules corresponding to the metric signature (+ − − −). The variables $m$ and $g$ correspond to the quark mass and coupling of the theory, respectively, which are subject to renormalization. An important theoretical concept is the "Wilson loop" (named after Kenneth G. Wilson). In lattice QCD, the final term of the above Lagrangian is discretized via Wilson loops, and more generally the behavior of Wilson loops can distinguish confined and deconfined phases. Quarks are massive spin-1/2 fermions which carry a color charge whose gauging is the content of QCD. Quarks are represented by Dirac fields in the fundamental representation 3 of the gauge group SU(3). They also carry electric charge (either −1/3 or +2/3 of the elementary charge) and participate in weak interactions as part of weak isospin doublets. They carry global quantum numbers including the baryon number, which is 1/3 for each quark, hypercharge and one of the flavor quantum numbers. Gluons are spin-1 bosons which also carry color charges, since they lie in the adjoint representation 8 of SU(3). They have no electric charge, do not participate in the weak interactions, and have no flavor. They lie in the singlet representation 1 of all these symmetry groups. Every quark has its own antiquark. The charge of each antiquark is exactly the opposite of that of the corresponding quark. According to the rules of quantum field theory, and the associated Feynman diagrams, the above theory gives rise to three basic interactions: a quark may emit (or absorb) a gluon, a gluon may emit (or absorb) a gluon, and two gluons may directly interact. This contrasts with QED, in which only the first kind of interaction occurs, since photons have no charge. Diagrams involving Faddeev–Popov ghosts must be considered too (except in the unitarity gauge). 
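Expanding the Lagrangian above makes these three basic interactions explicit; up to sign and normalization conventions, the interaction terms read

$$\mathcal{L}_{\mathrm{int}} = g\, \bar{\psi}_i \gamma^\mu (T^a)_{ij} \psi_j\, \mathcal{A}^a_\mu \;-\; g f^{abc} (\partial_\mu \mathcal{A}^a_\nu)\, \mathcal{A}^{b\,\mu} \mathcal{A}^{c\,\nu} \;-\; \frac{g^2}{4} f^{abe} f^{cde}\, \mathcal{A}^a_\mu \mathcal{A}^b_\nu\, \mathcal{A}^{c\,\mu} \mathcal{A}^{d\,\nu},$$

corresponding, in turn, to the quark–gluon vertex, the three-gluon vertex, and the four-gluon vertex; here $T^a$ are the SU(3) generators in the fundamental representation. The latter two terms, which have no analogue in QED, arise entirely from the non-abelian $g f^{abc} \mathcal{A}^b_\mu \mathcal{A}^c_\nu$ piece of the field strength tensor.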
Detailed computations with the above-mentioned Lagrangian show that the effective potential between a quark and its anti-quark in a meson contains a term that increases in proportion to the distance $r$ between the quark and anti-quark ($\propto r$), which represents some kind of "stiffness" of the interaction between the particle and its anti-particle at large distances, similar to the entropic elasticity of a rubber band (see below). This leads to "confinement" of the quarks to the interior of hadrons, i.e. mesons and nucleons, with typical radii R_c, corresponding to the earlier "bag models" of the hadrons. The order of magnitude of the "bag radius" is 1 fm (= 10^−15 m). Moreover, the above-mentioned stiffness is quantitatively related to the so-called "area law" behaviour of the expectation value of the Wilson loop product P_W of the ordered coupling constants around a closed loop W; i.e. $\ln \langle P_W \rangle$ is proportional to the area enclosed by the loop. For this behaviour the non-abelian behaviour of the gauge group is essential. Further analysis of the content of the theory is complicated. Various techniques have been developed to work with QCD. Some of them are discussed briefly below. The perturbative approach is based on asymptotic freedom, which allows perturbation theory to be used accurately in experiments performed at very high energies. Although limited in scope, this approach has resulted in the most precise tests of QCD to date. Among non-perturbative approaches to QCD, the most well established one is lattice QCD. This approach uses a discrete set of spacetime points (called the lattice) to reduce the analytically intractable path integrals of the continuum theory to a very difficult numerical computation which is then carried out on supercomputers like the QCDOC, which was constructed for precisely this purpose. While it is a slow and resource-intensive approach, it has wide applicability, giving insight into parts of the theory inaccessible by other means, in particular into the explicit forces acting between quarks and antiquarks in a meson. However, the numerical sign problem makes it difficult to use lattice methods to study QCD at high density and low temperature (e.g. nuclear matter or the interior of neutron stars). A well-known approximation scheme, the 1/N expansion, starts from the idea that the number of colors is infinite, and makes a series of corrections to account for the fact that it is not. Until now, it has been the source of qualitative insight rather than a method for quantitative predictions. Modern variants include the AdS/CFT approach. For specific problems, effective theories may be written down which give qualitatively correct results in certain limits. In the best of cases, these may then be obtained as systematic expansions in some parameter of the QCD Lagrangian. One such effective field theory is chiral perturbation theory or ChiPT, which is the QCD effective theory at low energies. More precisely, it is a low energy expansion based on the spontaneous chiral symmetry breaking of QCD, which is an exact symmetry when quark masses are equal to zero, but which, for the u, d and s quarks, which have small masses, is still a good approximate symmetry. Depending on the number of quarks which are treated as light, one uses either SU(2) ChiPT or SU(3) ChiPT. Other effective theories are heavy quark effective theory (which expands around heavy quark mass near infinity), and soft-collinear effective theory (which expands around large ratios of energy scales). 
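The perturbative approach mentioned above rests on the running of the strong coupling with energy; at one loop, a standard textbook result (for $n_f$ active quark flavors) is

$$\alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\, \ln\!\left(Q^2 / \Lambda_{\mathrm{QCD}}^2\right)},$$

which vanishes logarithmically as the momentum transfer $Q$ grows (asymptotic freedom) and blows up as $Q$ approaches the scale $\Lambda_{\mathrm{QCD}}$, signaling the breakdown of perturbation theory at low energies.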
In addition to effective theories, models like the Nambu–Jona-Lasinio model and the chiral model are often used when discussing general features. Based on an operator product expansion, one can derive sets of relations that connect different observables with each other. In one of his works, Kei-Ichi Kondo derived, as a low-energy limit of QCD, a theory linked to the Nambu–Jona-Lasinio model, since it is basically a particular non-local version of the Polyakov–Nambu–Jona-Lasinio model. The latter is, in its local version, nothing but the Nambu–Jona-Lasinio model in which one has included the Polyakov loop effect, in order to describe a "certain confinement". The Nambu–Jona-Lasinio model in itself is, among many other things, used because it is a "relatively simple" model of chiral symmetry breaking, a phenomenon present under certain conditions (the chiral limit, i.e. massless fermions) in QCD itself. In this model, however, there is no confinement. In particular, the energy of an isolated quark in the physical vacuum turns out to be well defined and finite. The notion of quark flavors was prompted by the necessity of explaining the properties of hadrons during the development of the quark model. The notion of color was necessitated by the puzzle of the Δ++ baryon. This has been dealt with in the section on the history of QCD. The first evidence for quarks as real constituent elements of hadrons was obtained in deep inelastic scattering experiments at SLAC. The first evidence for gluons came in three-jet events at PETRA. Several good quantitative tests of perturbative QCD exist. Quantitative tests of non-perturbative QCD are fewer, because the predictions are harder to make. The best is probably the running of the QCD coupling as probed through lattice computations of heavy-quarkonium spectra. There is a recent claim about the mass of the heavy meson Bc. Other non-perturbative tests are currently at the level of 5% at best. Continuing work on masses and form factors of hadrons and their weak matrix elements are promising candidates for future quantitative tests. The whole subject of quark matter and the quark–gluon plasma is a non-perturbative test bed for QCD which still remains to be properly exploited. One qualitative prediction of QCD is that there exist composite particles made solely of gluons called glueballs that have not yet been definitively observed experimentally. A definitive observation of a glueball with the properties predicted by QCD would strongly confirm the theory. In principle, if glueballs could be definitively ruled out, this would be a serious experimental blow to QCD. But, as of 2013, scientists are unable to confirm or deny the existence of glueballs definitively, despite the fact that particle accelerators have sufficient energy to generate them. There are unexpected cross-relations to condensed matter physics. For example, the notion of gauge invariance forms the basis of the well-known Mattis spin glasses, which are systems with the usual spin degrees of freedom $s_i = \pm 1$ for $i$ = 1, ..., N, with the special fixed "random" couplings $J_{i,k} = \epsilon_i\, \epsilon_k\, J_0$. Here the $\epsilon_i$ and $\epsilon_k$ quantities can independently and "randomly" take the values ±1, which corresponds to a most-simple gauge transformation $s_i \to \epsilon_i s_i$, $J_{i,k} \to \epsilon_i J_{i,k}\, \epsilon_k$, $s_k \to \epsilon_k s_k$. This means that thermodynamic expectation values of measurable quantities, e.g. of the energy $\mathcal{H} = -\sum_{i,k} s_i\, J_{i,k}\, s_k$, are invariant. However, here the coupling degrees of freedom $J_{i,k}$, which in QCD correspond to the gluons, are "frozen" to fixed values (quenching). 
In contrast, in QCD they "fluctuate" (annealing), and through the large number of gauge degrees of freedom the entropy plays an important role (see below). For positive $J_0$ the thermodynamics of the Mattis spin glass corresponds in fact simply to a "ferromagnet in disguise", just because these systems have no "frustration" at all. This term is a basic measure in spin glass theory. Quantitatively it is identical with the loop product $P_W = \prod_{\langle i,k \rangle \in W} J_{i,k}$ along a closed loop W. However, for a Mattis spin glass – in contrast to "genuine" spin glasses – the quantity $P_W$ never becomes negative. The basic notion of "frustration" in the spin glass is actually similar to the Wilson loop quantity of QCD. The only difference is again that in QCD one is dealing with SU(3) matrices, and that one is dealing with a "fluctuating" quantity. Energetically, perfect absence of frustration should be non-favorable and atypical for a spin glass, which means that one should add the loop product to the Hamiltonian as some kind of penalty term. In QCD, by contrast, the Wilson loop is essential to the Lagrangian from the outset. The relation between QCD and "disordered magnetic systems" (the spin glasses belong to them) was additionally stressed in a paper by Fradkin, Huberman and Shenker, which also stresses the notion of duality. A further analogy consists in the already mentioned similarity to polymer physics, where, analogously to Wilson loops, so-called "entangled nets" appear, which are important for the formation of the entropy-elasticity (force proportional to the length) of a rubber band. The non-abelian character of SU(3) corresponds thereby to the non-trivial "chemical links", which glue different loop segments together, and "asymptotic freedom" means in the polymer analogy simply the fact that in the short-wave limit, i.e. for $\lambda_w \ll R_c$ (where $R_c$ is a characteristic correlation length for the glued loops, corresponding to the above-mentioned "bag radius", while $\lambda_w$ is the wavelength of an excitation), any non-trivial correlation vanishes totally, as if the system had crystallized. There is also a correspondence between confinement in QCD – the fact that the color field is only different from zero in the interior of hadrons – and the behaviour of the usual magnetic field in the theory of type-II superconductors: there the magnetism is confined to the interior of the Abrikosov flux-line lattice, i.e., the London penetration depth $\lambda$ of that theory is analogous to the confinement radius $R_c$ of quantum chromodynamics. Mathematically, this correspondence is supported by the second term, $-\tfrac{1}{4} G^a_{\mu\nu} G^{\mu\nu}_a$, on the r.h.s. of the Lagrangian.
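The gauge invariance of the Mattis model described above is easy to check numerically. The following sketch (plain JavaScript; all names are illustrative, not from any library) verifies that the energy computed with the Mattis couplings equals the energy of the gauge-transformed, purely ferromagnetic configuration – the "ferromagnet in disguise":

// Mattis spin glass: couplings J_ik = eps_i * eps_k * J0 with fixed random signs eps.
// The gauge transformation s_i -> eps_i * s_i maps it onto a plain ferromagnet
// (all couplings equal to J0) with exactly the same energy.
const N = 8, J0 = 1;
const randomSign = () => (Math.random() < 0.5 ? -1 : 1);
const eps = Array.from({ length: N }, randomSign); // fixed "random" gauge signs
const s = Array.from({ length: N }, randomSign);   // a spin configuration

// H = - sum_{i<k} J(i,k) * s_i * s_k
function energy(spins, J) {
  let H = 0;
  for (let i = 0; i < N; i++)
    for (let k = i + 1; k < N; k++) H -= J(i, k) * spins[i] * spins[k];
  return H;
}

const mattisJ = (i, k) => eps[i] * eps[k] * J0;  // "frozen" Mattis couplings
const ferroJ = () => J0;                         // gauge-transformed couplings
const gaugedSpins = s.map((si, i) => eps[i] * si);

// Both energies agree, illustrating the gauge invariance described in the text.
console.log(energy(s, mattisJ) === energy(gaugedSpins, ferroJ)); // true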
https://en.wikipedia.org/wiki?curid=25264
Queue (abstract data type) In computer science, a queue is a collection of entities that are maintained in a sequence and can be modified by the addition of entities at one end of the sequence and the removal of entities from the other end of the sequence. By convention, the end of the sequence at which elements are added is called the back, tail, or rear of the queue, and the end at which elements are removed is called the head or front of the queue, analogously to the words used when people line up to wait for goods or services. The operation of adding an element to the rear of the queue is known as "enqueue", and the operation of removing an element from the front is known as "dequeue". Other operations may also be allowed, often including a "peek" or "front" operation that returns the value of the next element to be dequeued without dequeuing it. The operations of a queue make it a first-in-first-out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed. This is equivalent to the requirement that once a new element is added, all elements that were added before have to be removed before the new element can be removed. A queue is an example of a linear data structure, or more abstractly a sequential collection. Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or in object-oriented languages as classes. Common implementations are circular buffers and linked lists. Queues provide services in computer science, transport, and operations research where various entities such as data, objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the function of a buffer. Another usage of queues is in the implementation of breadth-first search. Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again. Fixed-length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the queue. The simple trick of turning the array into a closed circle and letting the head and tail drift around endlessly in that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing indices modulo n will turn the array into a circle. This is still the conceptually simplest way to construct a queue in a high-level language, though it does slow things down a little, because the array indices must be compared to zero and to the array size – a cost comparable to that of the bounds check that some languages perform on every array access. Even so, it is the method of choice for a quick and dirty implementation, or for any high-level language that does not have pointer syntax; a sketch of this scheme is given below. The array size must be declared ahead of time, but some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement or come with libraries for dynamic lists. Such data structures may have no fixed capacity limit apart from memory constraints. Queue "overflow" results from trying to add an element onto a full queue and queue "underflow" happens when trying to remove an element from an empty queue. 
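As a concrete illustration, here is a minimal sketch of the circular-array ("ring buffer") queue just described, written in JavaScript; the class and method names are illustrative rather than taken from any particular library:

// A fixed-capacity FIFO queue backed by a circular array.
// Indices wrap modulo the array size, so stored items are never moved.
class RingQueue {
  constructor(capacity) {
    this.items = new Array(capacity);
    this.head = 0;  // index of the next element to dequeue
    this.size = 0;  // number of elements currently stored
  }
  enqueue(x) {
    if (this.size === this.items.length) throw new Error("overflow");
    const tail = (this.head + this.size) % this.items.length;
    this.items[tail] = x;
    this.size++;
  }
  dequeue() {
    if (this.size === 0) throw new Error("underflow");
    const x = this.items[this.head];
    this.items[this.head] = undefined; // release the slot for garbage collection
    this.head = (this.head + 1) % this.items.length;
    this.size--;
    return x;
  }
}

Both operations run in O(1) time; the two thrown errors correspond to the "overflow" and "underflow" conditions defined above.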
A "bounded queue" is a queue limited to a fixed number of items. There are several efficient implementations of FIFO queues. An efficient implementation is one that can perform the operations—enqueuing and dequeuing—in O(1) time. Queues may be implemented as a separate data type, or maybe considered a special case of a double-ended queue (deque) and not implemented separately. For example, Perl and Ruby allow pushing and popping an array from both ends, so one can use push and unshift functions to enqueue and dequeue a list (or, in reverse, one can use shift and pop), although in some cases these operations are not efficient. C++'s Standard Template Library provides a "codice_1" templated class which is restricted to only push/pop operations. Since J2SE5.0, Java's library contains a interface that specifies queue operations; implementing classes include and (since J2SE 1.6) . PHP has an SplQueue class and third party libraries like beanstalk'd and Gearman. A simple queue implemented in JavaScript: class Queue { Queues can also be implemented as a purely functional data structure. Two versions of the implementation exist. The first one, called real-time queue, presented below, allows the queue to be persistent with operations in O(1) worst-case time, but requires lazy lists with memoization. The second one, with no lazy lists nor memoization is presented at the end of the sections. Its amortized time is formula_1 if the persistency is not used; but its worst-time complexity is formula_2 where "n" is the number of elements in the queue. Let us recall that, for formula_3 a list, formula_4 denotes its length, that "NIL" represents an empty list and formula_5 represents the list whose head is "h" and whose tail is "t". The data structure used to implement our queues consists of three linked lists formula_6 where "f" is the front of the queue, "r" is the rear of the queue in reverse order. The invariant of the structure is that "s" is the rear of "f" without its formula_7 first elements, that is formula_8. The tail of the queue formula_9 is then almost formula_6 and inserting an element "x" to formula_6 is almost formula_12. It is said almost, because in both of those results, formula_13. An auxiliary function formula_14 must then be called for the invariant to be satisfied. Two cases must be considered, depending on whether formula_15 is the empty list, in which case formula_16, or not. The formal definition is formula_17 and formula_18 where formula_19 is "f" followed by "r" reversed. Let us call formula_20 the function which returns "f" followed by "r" reversed. Let us furthermore assume that formula_16, since it is the case when this function is called. More precisely, we define a lazy function formula_22 which takes as input three list such that formula_16, and return the concatenation of "f", of "r" reversed and of "a". Then formula_24. The inductive definition of rotate is formula_25 and formula_26. Its running time is formula_27, but, since lazy evaluation is used, the computation is delayed until the results is forced by the computation. The list "s" in the data structure has two purposes. This list serves as a counter for formula_28, indeed, formula_29 if and only if "s" is the empty list. This counter allows us to ensure that the rear is never longer than the front list. Furthermore, using "s", which is a tail of "f", forces the computation of a part of the (lazy) list "f" during each "tail" and "insert" operation. Therefore, when formula_29, the list "f" is totally forced. 
If this were not the case, the internal representation of f could be some append of append of ... of append, and forcing would no longer be a constant-time operation. Note that, without the lazy part of the implementation, the real-time queue would be a non-persistent implementation of a queue in O(1) amortized time. In this case, the list s can be replaced by the integer |f| − |r|, and the reverse function would be called when the counter reaches 0.
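For comparison, here is a minimal sketch of the non-lazy, amortized variant just mentioned, in JavaScript (using a hand-rolled cons list; all names are illustrative):

// Persistent-style two-list queue: "front" holds the head of the queue,
// "rear" holds the tail in reverse order. Enqueue is O(1); dequeue is
// amortized O(1), since each element is reversed at most once.
const cons = (head, tail) => ({ head, tail }); // tail === null means NIL

function reverseList(list) {
  let out = null;
  for (let node = list; node !== null; node = node.tail) out = cons(node.head, out);
  return out;
}

class TwoListQueue {
  constructor(front = null, rear = null) {
    this.front = front;
    this.rear = rear;
  }
  enqueue(x) { // returns a new queue, leaving this one untouched; O(1)
    return new TwoListQueue(this.front, cons(x, this.rear));
  }
  dequeue() {  // returns [element, new queue]
    let { front, rear } = this;
    if (front === null) {       // move the rear to the front, reversed
      front = reverseList(rear);
      rear = null;
    }
    if (front === null) throw new Error("underflow");
    return [front.head, new TwoListQueue(front.tail, rear)];
  }
}

As the text above notes, the O(1) bound here is only amortized and only holds when old versions of the queue are not reused; the lazy, incremental rotate of the real-time queue is what upgrades this to worst-case O(1) with full persistence.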
https://en.wikipedia.org/wiki?curid=25265
Quake (video game) Quake is a first-person shooter video game developed by id Software and published by GT Interactive in 1996. It is the first game in the "Quake" series. In the game, players must find their way through various maze-like, medieval environments while battling a variety of monsters using an array of weaponry. The overall atmosphere is dark and gritty, with many stone textures and a rusty, capitalized font. The successor to id Software's "Doom" series, "Quake" built upon the technology and gameplay of its predecessor. Unlike the "Doom" engine before it, the "Quake" engine offered full real-time 3D rendering and had early support for 3D acceleration through OpenGL. After "Doom" helped to popularize multiplayer deathmatches in 1993, "Quake" added various multiplayer options. Online multiplayer became increasingly common, with the QuakeWorld update and software such as QuakeSpy making the process of finding and playing against others on the Internet easier and more reliable. "Quake" features music composed by Trent Reznor and his band, Nine Inch Nails. In "Quake"s single-player mode, players explore and navigate to the exit of each Gothic and dark level, facing monsters and finding secret areas along the way. Usually there are switches to activate or keys to collect in order to open doors before the exit can be reached. Reaching the exit takes the player to the next level. Before entering an episode, the player chooses among three pathways corresponding to the easy, medium, and hard skill levels. The fourth skill level, "Nightmare", was "so bad that it was hidden, so people won't wander in by accident"; the player must drop through water before the episode four entrance and go into a secret passage to access it. "Quake"s single-player campaign is organized into four individual episodes with seven to eight levels in each (including one secret level per episode, one of which is a "low gravity" level that challenges the player's abilities in a different way). As items are collected, they are carried to the next level. If the player's character dies, he must restart at the beginning of the level. The game may be saved at any time in the PC versions and between levels in the console versions. Upon completing an episode, the player is returned to the hub "START" level, where another episode can be chosen. Each episode starts the player from scratch, without any previously collected items. Episode one (which formed the shareware or downloadable demo version of "Quake") follows the most traditional formula, with a boss in the last level. The ultimate objective at the end of each episode is to recover a magic rune. After all of the runes are collected, the floor of the hub level opens up to reveal an entrance to the "END" level which contains a final puzzle. In multiplayer mode, players on several computers connect to a server (which may be a dedicated machine or on one of the player's computers), where they can either play the single-player campaign together in co-op (cooperative) mode, or play against each other in multiplayer. When players die in multiplayer mode, they can immediately respawn, but will lose any items that were collected. Similarly, items that have been picked up previously respawn after some time, and may be picked up again. The most popular multiplayer modes are all forms of deathmatch. Deathmatch modes typically consist of either "free-for-all" (no organization or teams involved), one-on-one "duels", or organized "teamplay" with two or more players per team (or clan). 
Teamplay is also frequently played with one or another mod. Monsters are not normally present in teamplay, as they serve no purpose other than to get in the way and reveal the positions of the players. The gameplay in "Quake" was considered unique for its time because of the different ways the player can maneuver through the game. For example, bunny hopping or strafe jumping can be used to move faster than normal, while rocket jumping enables the player to reach otherwise-inaccessible areas at the cost of some self-damage. The player can start and stop moving suddenly, jump unnaturally high, and change direction while moving through the air. Many of these non-realistic behaviors contribute to "Quake"s appeal. Multiplayer "Quake" was one of the first games singled out as a form of electronic sport. A notable participant was Dennis Fong, who won John Carmack's Ferrari 328 at the Microsoft-sponsored Red Annihilation tournament in 1997. In the single-player game, the player takes the role of the protagonist known as Ranger (voiced by Trent Reznor) who was sent into a portal in order to stop an enemy code-named "Quake". The government had been experimenting with teleportation technology and developed a working prototype called a "Slipgate"; the mysterious Quake compromised the Slipgate by connecting it with its own teleportation system, using it to send death squads to the "Human" dimension in order to test the martial capabilities of humanity. The sole surviving protagonist in "Operation Counterstrike" is Ranger, who must advance, starting each of the four episodes from an overrun human military base, before fighting his way into other dimensions, reaching them via the Slipgate or their otherworld equivalent. After passing through the Slipgate, Ranger's main objective is to collect four magic runes from four dimensions of Quake; these are the key to stopping the enemy later discovered as Shub-Niggurath and ending the invasion of Earth. The single-player campaign consists of 30 separate levels, or "maps", divided into four episodes (with a total of 26 regular maps and four secret ones), as well as a hub level to select a difficulty setting and episode, and the game's final boss level. Each episode represents individual dimensions that the player can access through magical portals (as opposed to the technological Slipgate) that are discovered over the course of the game. The various realms consist of a number of gothic, medieval, and lava-filled caves and dungeons, with a recurring theme of hellish and satanic imagery reminiscent of "Doom" (such as pentagrams and images of demons on the walls). The game's setting is inspired by several dark fantasy influences, most notably that of H. P. Lovecraft. Dimensional Shamblers appear as enemies, the "Spawn" enemies are called "Formless Spawn of Tsathoggua" in the manual, the boss of the first episode is named Chthon, and the main villain is named Shub-Niggurath (though actually resembling a Dark Young). Some levels have Lovecraftian names, such as the Vaults of Zin and The Nameless City. In addition, six levels exclusively designed for multiplayer deathmatch are also included. Originally, the game was supposed to include more Lovecraftian bosses, but this concept was scrapped due to time constraints. A preview included with id's very first release, 1990's "Commander Keen", advertised a game entitled "The Fight for Justice" as a follow-up to the "Commander Keen" trilogy. 
It would feature a character named Quake, "the strongest, most dangerous person on the continent", armed with thunderbolts and a "Ring of Regeneration". Conceived as a VGA full-color side-scrolling role-playing game, "The Fight for Justice" was never released. Lead designer and director John Romero later conceived of "Quake" as an action game taking place in a fully 3D world, inspired by Sega AM2's 3D fighting game "Virtua Fighter". "Quake" was also intended to feature "Virtua Fighter"-influenced third-person melee combat. However, id Software considered it to be risky. Because the project was taking too long, the third-person melee was eventually dropped. This led to creative differences between Romero and id Software, and eventually his departure from the company after "Quake" was released. Even though he led the project, Romero did not receive any money from "Quake". In 2000, Romero released "Daikatana", the game he envisioned as what "Quake" was supposed to be. Despite its troubled development and its reputation as one of the worst games of all time, he said "Daikatana" was "more fun to make than "Quake"" due to the lack of creative interference. "Quake" was given as a title to the game that id Software was working on shortly after the release of "Doom II". The earliest information released described "Quake" as focusing on a Thor-like character who wields a giant hammer, and is able to knock away enemies by throwing the hammer (complete with real-time inverse kinematics). Initially, the levels were supposed to be designed in an Aztec style, but the choice was dropped some months into the project. Early screenshots then showed medieval environments and dragons. The plan was for the game to have more RPG-style elements. However, work was very slow on the engine, since John Carmack, the main programmer of "Quake", was not only developing a fully 3D engine, but also a TCP/IP networking model (Carmack later said that he should have done two separate projects which developed those things). Working with a game engine that was still in development presented difficulties for the designers. Eventually, the whole id Software team began to think that the original concept may not have been as wise a choice as they first believed. Thus, the final game was very stripped down from its original intentions, and instead featured gameplay similar to "Doom" and its sequel, although the levels and enemies were closer to medieval RPG style rather than science-fiction. In a December 1, 1994 post to an online bulletin board, John Romero wrote, "Okay, people. It seems that everyone is speculating on whether Quake is going to be a slow, RPG-style light-action game. Wrong! What does id do best and dominate at? Can you say "action"? I knew you could. Quake will be constant, hectic action throughout – probably more so than Doom." "Quake" was programmed by John Carmack, Michael Abrash, and John Cash. The levels and scenarios were designed by American McGee, Sandy Petersen, John Romero, and Tim Willits, and the graphics were designed by Adrian Carmack, Kevin Cloud and Paul Steed. Cloud created the monster and player graphics using Alias. The game engine developed for "Quake", the "Quake" engine, popularized several major advances in the first-person shooter genre: polygonal models instead of prerendered sprites; full 3D level design instead of a 2.5D map; prerendered lightmaps; and allowing end users to partially program the game (in this case with QuakeC), which popularized fan-created modifications (mods). 
Before the release of the full game or the shareware version of "Quake", id Software released "QTest" on February 24, 1996. It was described as a technology demo and was limited to three multiplayer maps. There was no single-player support and some of the gameplay and graphics were unfinished or different from their final versions. "QTest" gave gamers their first peek into the filesystem and modifiability of the "Quake" engine, and many entity mods (that placed monsters in the otherwise empty multiplayer maps) and custom player skins began appearing online before the full game was even released. Initially, the game was designed so that when the player ran out of ammunition, the player character would hit enemies with a gun-butt. Shortly before release this was replaced with an axe. "Quake"s music and sound design was done by Trent Reznor and Nine Inch Nails, using ambient soundscapes and synthesized drones to create atmospheric tracks. In an interview, Reznor remarked that the "Quake" soundtrack "is not music, it's textures and ambiences and whirling machine noises and stuff. We tried to make the most sinister, depressive, scary, frightening kind of thing... It's been fun." The game also has some ammo boxes decorated with the Nine Inch Nails logo. Digital re-releases lack the CD soundtrack that came with the original shareware release. Players can download the soundtrack online to recover it. The first port to be completed was the Linux port Quake 0.91 by id Software employee Dave D. Taylor on July 5, 1996, followed by a SPARC Solaris port later that year, also by Taylor. The first commercially released port was the port to Mac OS, done by MacSoft and Lion Entertainment, Inc. (the latter company ceased to exist just prior to the port's release, leading to MacSoft's involvement) in late August 1997. ClickBOOM announced a version for Amiga computers in 1998. Finally, in 1999, a retail version of the Linux port was distributed by Macmillan Digital Publishing USA in a bundle with the three add-ons as "Quake: The Offering". "Quake" was also ported to home console systems. On December 2, 1997, the game was released for the Sega Saturn. Initially GT Interactive was to publish this version itself, but it later cancelled the release and the Saturn rights were picked up by Sega. Sega then took the project away from the original development team, who had been encountering difficulties getting the port to run at a decent frame rate, and assigned it to Lobotomy Software. The Sega Saturn port used Lobotomy Software's own 3D game engine, "SlaveDriver" (the same game engine that powered the Sega Saturn versions of "PowerSlave" and "Duke Nukem 3D"), instead of the original "Quake" engine. It is the only version of "Quake" that is rated "T" for Teen instead of "M" for Mature. "Quake" had also been ported to the Sony PlayStation by Lobotomy Software, but the port was cancelled due to difficulties in finding a publisher. A port of "Quake" for the Atari Jaguar was also advertised as 30% complete in a May 1996 issue of "Ultimate Future Games" magazine, but it was never released. Another port of "Quake" was also slated for the Panasonic M2, but it never materialized due to the cancellation of the system. On March 24, 1998, the game was released for the Nintendo 64 by Midway Games. This version was developed by the same programming team that worked on "Doom 64", at id Software's request. 
The Nintendo 64 version was originally slated to be released in 1997, but Midway delayed it until March 1998 to give the team time to implement the deathmatch modes. Both console ports required compromises because of the limited CPU power and ROM storage space for levels. For example, the levels were rebuilt in the Saturn version in order to simplify the architecture, thereby reducing demands on the CPU. The Sega Saturn version includes 28 of the 32 single-player levels from the original PC version of the game, though the secret levels, Ziggurat Vertigo (E1M8), The Underearth (E2M7), The Haunted Halls (E3M7), and The Nameless City (E4M8), were removed. Instead, it has four exclusive secret levels: Purgatorium, Hell's Aerie, The Coliseum, and Watery Grave. It also contains an exclusive unlockable, "Dank & Scuz", which is a story set in the "Quake" milieu and presented in the form of a slide show with voice acting. There are no multiplayer modes in the Sega Saturn version; as a result of this, all of the deathmatch maps from the PC version were removed from the Sega Saturn port. The Nintendo 64 version includes 25 single-player levels from the PC version, though it is missing The Grisly Grotto (E1M4), The Installation (E2M1), The Ebon Fortress (E2M4), The Wind Tunnels (E3M5), The Sewage System (E4M1), and Hell's Atrium (E4M5) levels. It also does not use the hub "START" map where the player chooses a difficulty level and an episode; the difficulty level is chosen from a menu when starting the game, and all of the levels are played in sequential order from The Slipgate Complex (E1M1) to Shub Niggurath's Pit (END). The Nintendo 64 version, while lacking the cooperative multiplayer mode, includes two-player deathmatch. All six of the deathmatch maps from the PC version are in the Nintendo 64 port, and an exclusive deathmatch level, The Court of Death, is also included. Two ports of "Quake" for the Nintendo DS exist, "QuakeDS" and "CQuake". Both run well; however, multiplayer does not work in "QuakeDS". Since the source code for "Quake" was released, a number of unofficial ports have been made available for PDAs and mobile phones, such as PocketQuake, as well as versions for the Symbian S60 series of mobile phones and Android mobile phones. The Rockbox project also distributes a version of "Quake" that runs on some MP3 players. In 2005, id Software signed a deal with publisher Pulse Interactive to release a version of "Quake" for mobile phones. The game was engineered by Californian company Bear Naked Productions. It was initially due to be released on only two mobile phones: the Samsung Nexus (for which it was to be an embedded game) and the LG VX360. "Quake Mobile" was reviewed by GameSpot on the Samsung Nexus, which cited its US release as October 2005; GameSpot also gave it a "Best Mobile Game" award in its E3 2005 Editor's Choice Awards. It is unclear whether the game actually shipped with the Samsung Nexus. The game is only available for the DELL x50v and x51v, both of which are PDAs, not mobile phones. "Quake Mobile" does not feature the Nine Inch Nails soundtrack due to space constraints. "Quake Mobile" runs the most recent version of GL Quake (Quake v.1.09 GL 1.00) at 800x600 resolution and 25 fps. The most recent version of "Quake Mobile" is v.1.20, which has stylus support; an earlier version, v.1.19, lacked stylus support. The two "Quake" expansion packs, "Scourge of Armagon" and "Dissolution of Eternity", are also available for "Quake Mobile". 
A Flash-based version of the game by Michael Rennie runs "Quake" at full speed in any Flash-enabled web browser. Based on the shareware version of the game, it includes only the first episode and is available for free on the web. "Quake" can be heavily modified by altering the graphics, audio, or scripting in QuakeC, and has been the focus of many fan-created "mods". The first mods were small gameplay fixes and patches initiated by the community, usually enhancements to weapons or gameplay with new enemies. Later mods were more ambitious and resulted in "Quake" fans creating versions of the game that were drastically different from id Software's original release. The first major "Quake" mod was "Team Fortress". This mod consists of Capture the Flag gameplay with a class system for the players. Players choose a class, which creates various restrictions on weapons and armor types available to that player, and also grants special abilities. For example, the bread-and-butter "Soldier" class has medium armor, medium speed, and a well-rounded selection of weapons and grenades, while the "Scout" class is lightly armored, very fast, has a scanner that detects nearby enemies, but has very weak offensive weapons. One of the other differences from CTF is the fact that the flag is not returned automatically when a player drops it: running over one's flag in "Threewave CTF" would return the flag to the base, while in "TF" the flag remains in the same spot for a preconfigured time and has to be defended in remote locations. This caused a shift in defensive tactics compared to "Threewave CTF". "Team Fortress" maintained its standing as the most-played online "Quake" modification for many years. Another popular mod was "Threewave Capture the Flag" (CTF), primarily authored by Dave 'Zoid' Kirsch. "Threewave CTF" is a partial conversion consisting of new levels, a new weapon (a grappling hook), power-ups, new textures, and new gameplay rules. Typically, two teams (red and blue) would compete in a game of Capture the Flag, though a few maps with up to four teams (red, blue, green, and yellow) were created. Capture the Flag soon became a standard game mode included in most popular multiplayer games released after "Quake". "Rocket Arena" provides the ability for players to face each other in small, open arenas with changes in the gameplay rules so that item collection and detailed level knowledge are no longer factors. A series of short rounds, with the surviving player in each round gaining a point, instead tests the player's aiming and dodging skills and reflexes. "Clan Arena" is a further modification that provides team play using "Rocket Arena" rules. One mod category, "bots", was introduced to provide surrogate players in multiplayer mode. "Arcane Dimensions" is a single-player mod. It is a partial conversion with breakable objects and walls, an enhanced particle system, numerous visual improvements, and new enemies and weapons. The level design is much more complex in terms of geometry and gameplay than in the original game. There are a large number of custom levels that have been made by users and fans of "Quake". New maps are still being made, more than twenty years after the game's release. Custom maps are new maps that are playable by loading them into the original game. Custom levels of various gameplay types have been made, but most are in the single-player and deathmatch genres. More than 1500 single-player and a similar number of deathmatch maps have been made for "Quake". 
According to David Kushner in "Masters of Doom", id Software released a retail shareware version of "Quake" before the game's full retail distribution by GT Interactive. These shareware copies could be converted into complete versions through passwords purchased via phone. However, Kushner wrote that "gamers wasted no time hacking the shareware to unlock the full version of the game for free." This problem, combined with the scale of the operation, led id Software to cancel the plan. As a result, the company was left with 150,000 unsold shareware copies in storage. The venture damaged "Quake"s initial sales and caused its retail push by GT Interactive to miss the holiday shopping season. Following the game's full release, Kushner remarked that its early "sales were good — with 250,000 units shipped — but not a phenomenon like "Doom II"." In the United States, "Quake" placed sixth on PC Data's monthly computer game sales charts for November and December 1996. Its shareware edition was the sixth-best-selling computer game of 1996 overall, while its retail SKU claimed 20th place. It remained in PC Data's monthly top 10 from January to April 1997, but was absent by May. During its first 12 months, "Quake" sold 373,000 retail copies and earned $18 million in the United States, according to PC Data. Its final retail sales for 1997 were 273,936 copies, which made it the country's 16th-highest computer game seller for the year. Sales of "Quake" reached 550,000 units in the United States alone by December 1999. Worldwide, it sold 1.1 million units by that date. "Quake" was critically acclaimed on the PC. Aggregating review websites GameRankings and Metacritic gave the original PC version 93% and 94/100, and the Nintendo 64 port 76% and 74/100. A "Next Generation" critic lauded the game's realistic 3D physics and genuinely unnerving sound effects. "GamePro" said "Quake" had been over-hyped but was excellent nonetheless, particularly in its usage of its advanced 3D engine. The review also praised the sound effects, atmospheric music, and graphics, though it criticized that the polygons used to construct the enemies are too obvious at close range. Many critics have cited "Quake" as one of the best video games ever made. "Next Generation" listed it as number 9 on their "Top 100 Games of All Time", saying that it is similar to "Doom" but supports a maximum of eight players instead of four. In 1996, "Computer Gaming World" declared "Quake" the 36th-best computer game ever released, and listed "telefragged" as #1 on its list of "the 15 best ways to die in computer gaming". In 1997, the Game Developers Choice Awards gave "Quake" three spotlight awards, for Best Sound Effects, Best Music or Soundtrack, and Best On-Line/Internet Game. "Entertainment Weekly" gave the game a B+ and called it "an extended bit of subterranean mayhem that offers three major improvements over its immediate predecessor ["Doom"]." "Next Generation" reviewed the Macintosh version of the game, rating it four stars out of five, and stated that "Though replay value is limited by the lack of interactive environments or even the semblance of a plot, there's no doubt that "Quake" and its engine are something powerful and addictive." "Next Generation" reviewed the Saturn version of the game, rating it three stars out of five, and stated that ""Quake" for Saturn is simply a latecoming showpiece for the system's power." 
"Next Generation" reviewed the Nintendo 64 version of the game, rating it three stars out of five, and stated that "As a whole, "Quake 64" doesn't live up to the experience offered by the high-end, 3D-accelerated PC version; it is, however, an entertaining gaming experience that is worthy of a close look and a nice addition to the blossoming number of first-person shooters for Nintendo 64." "Next Generation" reviewed the arcade version of the game, rating it three stars out of five, and stated that "For those who don't have LAN or internet capabilities, check out arcade "Quake". It's a blast." In 1998, "PC Gamer" declared it the 28th-best computer game ever released, and the editors called it "one of the most addictive, adaptable, and pulse-pounding 3D shooters ever created". As an example of the dedication that "Quake" has inspired in its fan community, a group of expert players recorded speedrun demos (replayable recordings of the player's movement) of "Quake" levels completed in record time on the "Nightmare" skill level. The footage was edited into a continuous 19 minutes, 49 seconds demo called "Quake done Quick" and released on June 10, 1997. Owners of "Quake" could replay this demo in the game engine, watching the run unfold as if they were playing it themselves. This involved a number of players recording run-throughs of individual levels, using every trick and shortcut they could discover in order to minimize the time it took to complete, usually to a degree that even the original level designers found difficult to comprehend, and in a manner that often bypassed large areas of the level. Stitching a series of the fastest runs together into a coherent whole created a demonstration of the entire game. "Recamming" is also used with speedruns in order to make the experience more movie-like, with arbitrary control of camera angles, editing, and sound that can be applied with editing software after the runs are first recorded. However, the fastest possible time for a given level will not necessarily result in the fastest time used to contribute to "running" the entire game. One example is acquiring the grenade launcher in an early level, an act that slows down the time for that level over the best possible, but speeds up the overall game time by allowing the runner to bypass a large area in a later level that they could not otherwise do. A second attempt, "Quake done Quicker", reduced the completion time to 16 minutes, 35 seconds (a reduction of 3 minutes, 14 seconds). "Quake done Quicker" was released on September 13, 1997. One of the levels included was the result of an online competition to see who could get the fastest time. The culmination of this process of improvement was "Quake done Quick with a Vengeance". Released three years to the day after "Quake done Quicker", this pared down the time taken to complete all four episodes, on Nightmare (hardest) difficulty, to 12 minutes, 23 seconds (a further reduction of 4 minutes, 12 seconds), partly by using techniques that had formerly been shunned in such films as being less aesthetically pleasing. This run was recorded as an in-game demo, but interest was such that an .avi video clip was created to allow those without the game to see the run. Most full-game speedruns are a collaborative effort by a number of runners (though some have been done by single runners on their own). 
Although each particular level is credited to one runner, the ideas and techniques used are iterative and collaborative in nature, with each runner picking up tips and ideas from the others, so that speeds keep improving beyond what was thought possible as the runs are further optimized and new tricks or routes are discovered. Further time improvements of the continuous whole game run were achieved into the 21st century. In addition, many thousands of individual level runs are kept at Speed Demos Archive's "Quake" section, including many on custom maps. Speedrunning is a counterpart to multiplayer modes in making "Quake" one of the first games promoted as a virtual sport. The source code of the "Quake" and "QuakeWorld" engines was licensed under the GPL on December 21, 1999. The id Software maps, objects, textures, sounds, and other creative works remain under their original proprietary license. The shareware distribution of "Quake" is still freely redistributable and usable with the GPLed engine code. One must purchase a copy of "Quake" in order to receive the registered version of the game, which includes more single-player episodes and the deathmatch maps. Based on the success of the first "Quake" game, id Software later published "Quake II" and "Quake III Arena"; "Quake 4" was released in October 2005, developed by Raven Software using the "Doom 3" engine. "Quake" was the game primarily responsible for the emergence of the machinima artform of films made in game engines, thanks to edited "Quake" demos such as "Ranger Gone Bad" and "Blahbalicious", the in-game film "The Devil's Covenant", and the in-game-rendered, four-hour epic film "The Seal of Nehahra". June 22, 2006 marked ten years since the game was originally uploaded to the cdrom.com archives. Many Internet forums had topics about it, and it was a front-page story on Slashdot. On October 11, 2006, John Romero released the original map files for all of the levels in "Quake" under the GPL. "Quake" has four sequels: "Quake II", "Quake III Arena", "Quake 4", and "Enemy Territory: Quake Wars". In 2002, a version of "Quake" was produced for mobile phones. "Quake" was also released in a 2001 compilation labeled "Ultimate Quake", which included the original "Quake", "Quake II", and "Quake III Arena" and was published by Activision. In 2008, "Quake" was honored at the 59th Annual Technology & Engineering Emmy Awards for advancing the art form of user-modifiable games. John Carmack accepted the award. Years after its original release, "Quake" is still regarded by many critics as one of the greatest and most influential games ever made. There were two official expansion packs released for "Quake". The expansion packs pick up where the first game left off, include all of the same weapons, power-ups, monsters, and gothic atmosphere/architecture, and continue/finish the story of the first game and its protagonist. An unofficial third expansion pack, "Abyss of Pandemonium", was developed by the Impel Development Team, published by Perfect Publishing, and released on April 14, 1998; an updated version, version 2.0, titled "Abyss of Pandemonium – The Final Mission" was released as freeware. An authorized expansion pack, "Q!ZONE", was developed and published by WizardWorks, and released in 1996. In honor of "Quake"s 20th anniversary, MachineGames, an internal development studio of ZeniMax Media, who are the current owners of the "Quake" IP, released online a new expansion pack for free, called "Episode 5: Dimension of the Past". "Quake Mission Pack No. 
1: Scourge of Armagon" was the first official mission pack, released on March 5, 1997. Developed by Hipnotic Interactive, it features three episodes divided into seventeen new single-player levels (three of which are secret), a new multiplayer level, a new soundtrack composed by Jeehun Hwang, and gameplay features not originally present in "Quake", including rotating structures and breakable walls. Unlike the main "Quake" game and Mission Pack No. 2, "Scourge" does away with the episode hub, requiring the three episodes to be played sequentially. The three new enemies include Centroids, large cybernetic scorpions with nailguns; Gremlins, small goblins that can steal weapons and multiply by feeding on enemy corpses; and Spike Mines, floating orbs that detonate when near the player. The three new weapons include the Mjolnir, a large lightning-emitting hammer; the Laser Cannon, which shoots bouncing bolts of energy; and the Proximity Mine Launcher, which fires grenades that attach to surfaces and detonate when an opponent comes near. The three new power-ups include the Horn of Conjuring, which summons an enemy to protect the player; the Empathy Shield, which splits the damage taken between the player and the attacking enemy; and the Wetsuit, which renders the player invulnerable to electricity and allows the player to stay underwater for a period of time. The storyline follows Armagon, a general of Quake's forces, who plans to invade Earth via a portal known as the 'Rift'. Armagon resembles a giant gremlin with cybernetic legs and a combined rocket launcher/laser cannon for arms. Tim Soete of "GameSpot" gave it a score of 8.6 out of 10. "Quake Mission Pack No. 2: Dissolution of Eternity" was the second official mission pack, released on March 19, 1997. Developed by Rogue Entertainment, it features two episodes divided into fifteen new single-player levels, a new multiplayer level, a new soundtrack, and several new enemies and bosses. Notably, the pack lacks secret levels. The eight new enemies include Electric Eels, Phantom Swordsmen, Multi-Grenade Ogres (which fire cluster grenades), Hell Spawn, Wraths (floating, robed undead), Guardians (resurrected ancient Egyptian warriors), Mummies, and statues of various enemies that can come to life. The four new types of bosses include Lava Men, Overlords, large Wraths, and a dragon guarding the "temporal energy converter". The two new power-ups include the Anti Grav Belt, which allows the player to jump higher; and the Power Shield, which lowers the damage the player receives. Rather than offering new weapons, the mission pack gives the player four new types of ammo for existing weapons, such as "lava nails" for the Nailgun, cluster grenades for the Grenade Launcher, rockets that split into four in a horizontal line for the Rocket Launcher, and plasma cells for the Thunderbolt, as well as a grappling hook to help with moving around the levels. Tim Soete of "GameSpot" gave it a score of 7.7 out of 10. In late 1996, id Software released "VQuake", a port of the "Quake" engine to support hardware-accelerated rendering on graphics cards using the Rendition Vérité chipset. Aside from the expected benefit of improved performance, "VQuake" offered numerous visual improvements over the original software-rendered "Quake". 
It boasted full 16-bit color, bilinear filtering (reducing pixelation), improved dynamic lighting, optional anti-aliasing, and improved source code clarity, as the improved performance finally allowed the use of gotos to be abandoned in favor of proper loop constructs. As the name implied, "VQuake" was a proprietary port specifically for the Vérité; consumer 3D acceleration was in its infancy at the time, and there was no standard 3D API for the consumer market. After completing "VQuake", John Carmack vowed to never write a proprietary port again, citing his frustration with Rendition's Speedy3D API. To improve the quality of online play, id Software released "QuakeWorld" on December 17, 1996, a build of "Quake" that featured significantly revamped network code, including the addition of client-side prediction. The original "Quake" network code would not show the player the results of his actions until the server sent back a reply acknowledging them. For example, if the player attempted to move forward, his client would send the request to move forward to the server, and the server would determine whether the client was actually able to move forward or if he ran into an obstacle, such as a wall or another player. The server would then respond to the client, and only then would the client display movement to the player. This was fine for play on a LAN, a high-bandwidth, very-low-latency connection, but the latency over a dial-up Internet connection is much larger than on a LAN, and this caused a noticeable delay between when a player tried to act and when that action was visible on the screen. This made gameplay much more difficult, especially since the unpredictable nature of the Internet made the amount of delay vary from moment to moment. Players would experience jerky, laggy motion that sometimes felt like ice skating, where they would slide around with seemingly no ability to stop, due to a build-up of previously-sent movement requests. John Carmack has admitted that this was a serious problem which should have been fixed before release, but it was not caught because he and other developers had high-speed Internet access at home. With the help of client-side prediction, which allowed players to see their own movement immediately without waiting for a response from the server, "QuakeWorld" network code allowed players with high-latency connections to control their character's movement almost as precisely as when playing in single-player mode. The netcode parameters could be adjusted by the user so that "QuakeWorld" performed well for users with both high and low latency. The trade-off to client-side prediction was that sometimes other players or objects would no longer be quite where they had appeared to be, or, in extreme cases, that the player would be pulled back to a previous position when the client received a late reply from the server which overrode movement the client had already previewed; this was known as "warping". As a result, some serious players, particularly in the U.S., still preferred to play online using the original "Quake" engine (commonly called "NetQuake") rather than "QuakeWorld". However, the majority of players, especially those on dial-up connections, preferred the newer network model, and "QuakeWorld" soon became the dominant form of online play. Following the success of "QuakeWorld", client-side prediction has become a standard feature of nearly all real-time online games. 
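The mechanism just described can be sketched in a few lines of JavaScript. This is a toy model of the general technique, not the actual QuakeWorld code; applyInput, network, and the message format are all invented for illustration:

// Toy model of client-side prediction with server reconciliation.
// The client applies its own inputs immediately, keeps them until the
// server acknowledges them, and replays the unacknowledged ones on top
// of each authoritative update.
const network = { send: (msg) => { /* stand-in for the real transport */ } };
const applyInput = (state, input) => ({ x: state.x + input.dx }); // toy "physics"

let predictedState = { x: 0 }; // what the player sees, updated instantly
let pendingInputs = [];        // inputs sent to the server, not yet acknowledged
let nextSequence = 0;

function onLocalInput(input) {
  const msg = { seq: nextSequence++, input };
  pendingInputs.push(msg);
  network.send(msg);
  predictedState = applyInput(predictedState, input); // no round-trip wait
}

function onServerUpdate(update) {
  // update = { ackSeq, state }: authoritative state after input ackSeq was applied.
  pendingInputs = pendingInputs.filter((m) => m.seq > update.ackSeq);
  predictedState = update.state;                          // snap to the server's truth...
  for (const m of pendingInputs) {
    predictedState = applyInput(predictedState, m.input); // ...then re-predict the rest
  }
}

The visible "warping" described above corresponds to the snap-and-replay step in onServerUpdate, which becomes noticeable whenever the authoritative state disagrees with what the client had predicted.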
As with all other "Quake" upgrades, "QuakeWorld" was released as a free, unsupported add-on to the game and was updated numerous times through 1998. On January 22, 1997, id Software released "GLQuake". This was designed to use the OpenGL 3D API to access hardware 3D graphics acceleration cards to rasterize the graphics, rather than having the computer's CPU fill in every pixel. In addition to higher framerates for most players, "GLQuake" provided higher resolution modes and texture filtering. "GLQuake" also experimented with reflections, transparent water, and even rudimentary shadows. "GLQuake" came with a driver enabling the subset of OpenGL used by the game to function on the 3dfx "Voodoo Graphics" card, the only consumer-level card at the time capable of running "GLQuake" well. Previously, John Carmack had experimented with a version of "Quake" specifically written for the Rendition Vérité chip used in the Creative Labs "PCI 3D Blaster" card. This version had met with only limited success, and Carmack decided to write for generic APIs in the future rather than tailoring for specific hardware. On March 11, 1997, id Software released "WinQuake", a version of the non-OpenGL engine designed to run under Microsoft Windows; the original "Quake" had been written for DOS, allowing for launch from Windows 95, but could not run under Windows NT-based operating systems because it required direct access to hardware. "WinQuake" instead accessed hardware via Win32-based APIs such as DirectSound, DirectInput, and DirectDraw that were supported on Windows 95, Windows NT 4.0, and later releases. Like "GLQuake", "WinQuake" also allowed higher resolution video modes. This removed the last barrier to the game's widespread popularity. In 1998, LBE Systems and Laser-Tron released "Quake: Arcade Tournament Edition" in arcades in limited quantities. To celebrate "Quake"'s 20th anniversary, a mission pack was developed by MachineGames and released on June 24, 2016. It features 10 new single-player levels and a new multiplayer level, but does not use new gameplay additions from "Scourge of Armagon" and "Dissolution of Eternity". Chronologically, it is set between the main game and the expansions. "Quake" never received a direct sequel. After the departure of Sandy Petersen, the remaining id employees chose to change the thematic direction substantially for "Quake II", making the design more technological and futuristic, rather than maintaining the focus on Lovecraftian fantasy. "Quake 4" followed the design themes of "Quake II", whereas "Quake III Arena" mixed these styles; it had a parallel setting that housed several "id all-stars" from various games as playable characters. The mixed settings occurred because "Quake II" originally began as a separate product line. The id designers fell back on the project's nickname of "Quake II" because the game's fast-paced, tactile feel felt closer to a "Quake" game than a new franchise. Since any sequel to the original "Quake" had already been vetoed, it became a way of continuing the series without continuing the storyline or setting of the first game. In June 2011, John Carmack made an offhand comment that id Software was considering going back to the "...mixed up Cthulhu-ish Quake 1 world and rebooting [in] that direction." Another entry in the series, "Quake Live", was also released. At E3 2016, "Quake Champions" was announced at the Bethesda press conference. The game is to be a multiplayer-only shooter in the style of "Quake III Arena" and is exclusively for Windows.
On July 20, 2016, Axel Gneiting, an id Tech employee responsible for implementing the Vulkan rendering path in the id Tech 6 engine used in "Doom" (2016), released a port called "vkQuake" under the GPLv2.
https://en.wikipedia.org/wiki?curid=25266
Quantum field theory In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity and quantum mechanics, but "not" general relativity's description of gravity. QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. QFT treats particles as excited states (also called quanta) of their underlying fields, which are more fundamental than the particles. Interactions between particles are described by interaction terms in the Lagrangian involving their corresponding fields. Each interaction can be visually represented by Feynman diagrams according to perturbation theory in quantum mechanics. As a successful theoretical framework today, quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory — quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory. Quantum field theory is the result of the combination of classical field theory, quantum mechanics, and special relativity. A brief overview of these theoretical precursors is in order. The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise "Philosophiæ Naturalis Principia Mathematica". The force of gravity as described by Newton is an "action at a distance" — its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact." It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields — a numerical quantity (a vector) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick. Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day. The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. 
Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. Action-at-a-distance was thus conclusively refuted. Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths. Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization. Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles. In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave-particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances. Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli. In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformation, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred. It was proposed that all physical laws must be the same for observers at different velocities, "i.e." that physical laws be invariant under Lorentz transformations. Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity — it treats time as an ordinary number while promoting spatial coordinates to linear operators. Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s. 
Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators. With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world. In his seminal 1927 paper "The quantum theory of the emission and absorption of radiation", Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence, as well as non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations. In 1928, Dirac wrote down a wave equation that described relativistic electrons — the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron "g"-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation. The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. 
Atomic nuclei do not contain electrons "per se", but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom. It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory. QFT naturally incorporated antiparticles in its formalism. Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields, suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta. It was not until 20 years later that a systematic approach to remove such infinities was developed. A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Unfortunately, such achievements were not understood and recognized by the theoretical community. Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables ("e.g." the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions. In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2"S"1/2 and 2"P"1/2 energy levels of the hydrogen atom, also called the Lamb shift. By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift. Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations. The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Feynman, Freeman Dyson, and Shinichiro Tomonaga. 
The main idea is to replace the initial, so-called "bare" parameters (mass, electric charge, etc.), which have no physical meaning, by their finite measured values. To cancel the apparently infinite parameters, one has to introduce additional, infinite, "counterterms" into the Lagrangian. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory. By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron "g"-factor from 2) and vacuum polarisation. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities". At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams. The latter can be used to visually and intuitively organise and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram. It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework. Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades. The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities. The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant \(\alpha \approx 1/137\), which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods. With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as a set of guiding principles, but not as a basis for quantitative calculations.
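The counterterm bookkeeping described at the start of this passage can be written schematically. The following lines are a generic textbook sketch of mass and charge renormalization, not the particular scheme of any of the authors named above; the symbols \(\delta m\) and \(\delta e\) are illustrative.

% The bare parameters appearing in the Lagrangian are split into finite
% measured values plus formally infinite counterterms:
m_0 = m + \delta m, \qquad e_0 = e + \delta e.
% The counterterms \delta m and \delta e are fixed order by order in
% perturbation theory so that they cancel the divergences of the loop
% integrals; all physical predictions are then expressed solely in terms
% of the finite measured values m and e.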
In 1954, Yang Chen-Ning and Robert Mills generalised the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups. In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge. Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable. Peter Higgs, Robert Brout, and François Englert proposed in 1964 that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass. By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored, until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion. Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible. These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades. The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model. The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov "et al.". These objects are inaccessible through perturbation theory. Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. 
Supersymmetry only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973. Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory, itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be "the" quantum theory of gravity. Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter. Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle — phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems. Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect. For simplicity, natural units are used in the following sections, in which the reduced Planck constant and the speed of light are both set to one. A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity and the electric field and magnetic field in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom. Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantisation and path integrals are two common formulations of QFT. To motivate the fundamentals of QFT, an overview of classical field theory is in order. The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as \(\phi(\mathbf{x}, t)\), where \(\mathbf{x}\) is the position vector and \(t\) is the time. Suppose the Lagrangian of the field is

\[ L = \int d^3x \left[ \tfrac{1}{2}\dot\phi^2 - \tfrac{1}{2}(\nabla\phi)^2 - \tfrac{1}{2}m^2\phi^2 \right], \]

where \(\dot\phi\) is the time-derivative of the field, \(\nabla\) is the gradient operator, and \(m\) is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation to the Lagrangian, we obtain the equations of motion for the field, which describe the way it varies in time and space:

\[ \frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi + m^2\phi = 0. \]

This is known as the Klein–Gordon equation. The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows:

\[ \phi(\mathbf{x}, t) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2\omega_{\mathbf{p}}}} \left( a_{\mathbf{p}}\, e^{i(\mathbf{p}\cdot\mathbf{x} - \omega_{\mathbf{p}} t)} + a_{\mathbf{p}}^*\, e^{-i(\mathbf{p}\cdot\mathbf{x} - \omega_{\mathbf{p}} t)} \right), \]

where \(a_{\mathbf{p}}\) is a complex number (normalised by convention), \(*\) denotes complex conjugation, and \(\omega_{\mathbf{p}}\) is the frequency of the normal mode:

\[ \omega_{\mathbf{p}} = \sqrt{|\mathbf{p}|^2 + m^2}. \]
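As a quick symbolic check of the normal-mode expansion just given, this Python sketch (using the sympy library) verifies in one spatial dimension that a plane wave with the stated frequency satisfies the Klein–Gordon equation; the variable names are illustrative.

import sympy as sp

x, t, p, m = sp.symbols('x t p m', real=True, positive=True)
omega = sp.sqrt(p**2 + m**2)           # frequency of the normal mode
phi = sp.exp(sp.I * (p*x - omega*t))   # plane-wave normal mode (one spatial dimension)

# Klein-Gordon operator applied to phi: d^2/dt^2 - d^2/dx^2 + m^2
kg = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2 * phi
print(sp.simplify(kg))                 # prints 0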
The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background. The correlation functions and physical predictions of a QFT depend on the spacetime metric \(g_{\mu\nu}\). For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric. QFTs in curved spacetime generally change according to the "geometry" (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the "topology" (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity. Applications of TQFT include the fractional quantum Hall effect and topological quantum computers. The world line trajectory of fractionalized particles (known as anyons) can form a link configuration in the spacetime, which relates the braiding statistics of anyons in physics to the link invariants in mathematics. Topological quantum field theories applicable to the frontier research of topological quantum matter include Chern–Simons–Witten gauge theories in 2+1 spacetime dimensions, as well as other new exotic TQFTs in 3+1 spacetime dimensions and beyond. Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as the 't Hooft–Polyakov monopole, domain wall, flux tube, and instanton. Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory and the Thirring model. In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined. Since the 1950s, theoretical physicists and mathematicians have attempted to organise all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics, which has led to such results as the CPT theorem, the spin–statistics theorem, and Goldstone's theorem.
Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms. Algebraic quantum field theory is another approach to the axiomatisation of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include the Wightman axioms and the Haag–Kastler axioms. One way to construct theories satisfying the Wightman axioms is to use the Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real-time theory to be obtained from an imaginary-time theory by analytic continuation (Wick rotation). Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. The full problem statement is as follows: prove that for any compact simple gauge group \(G\), a non-trivial quantum Yang–Mills theory exists on \(\mathbb{R}^4\) and has a mass gap \(\Delta > 0\).
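The Wick rotation invoked in the Osterwalder–Schrader reconstruction above can be stated schematically. The following lines are a standard textbook sketch in natural units, with \(S_M\) and \(S_E\) denoting the Minkowski and Euclidean actions; they are not part of the formal axiom systems themselves.

% Substituting imaginary time t = -i*tau turns the oscillatory
% path-integral weight of the real-time theory into a damped
% Euclidean weight:
e^{i S_M[\phi]} \;\longrightarrow\; e^{-S_E[\phi]},
% so real-time correlation functions can be recovered from
% imaginary-time (Euclidean) ones by analytic continuation whenever
% the Osterwalder-Schrader conditions are satisfied.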
https://en.wikipedia.org/wiki?curid=25267
Quantum electrodynamics In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction. In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen. The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who (during the 1920s) was able to compute the coefficient of spontaneous emission of an atom. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and an elegant formulation of quantum electrodynamics due to Enrico Fermi, physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics. Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift and magnetic moment of the electron. These experiments exposed discrepancies which the theory was unable to explain. A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization. Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. 
Shin'ichirō Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area. Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning at certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus". QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Gerald Guralnik, Dick Hagen, and Tom Kibble, Peter Higgs, Jeffrey Goldstone, and others, Sheldon Lee Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), "QED: The Strange Theory of Light and Matter", a classic non-mathematical exposition of QED from the point of view articulated below. The key components of Feynman's presentation of QED are three basic actions. These actions are represented in the form of visual shorthand by the three basic elements of Feynman diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. These can all be seen in the adjacent diagram. As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of the total probability amplitude. If a photon moves from one place and time "A" to another place and time "B", the associated quantity is written in Feynman's shorthand as "P"("A" to "B"). The similar quantity for an electron moving from "C" to "D" is written "E"("C" to "D"). The quantity that tells us about the probability amplitude for the emission or absorption of a photon he calls "j". This is related to, but not the same as, the measured electron charge "e". QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction.
It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above ("P"("A" to "B"), "E"("C" to "D") and "j") acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman. The basic rules of probability amplitudes that will be used are: (a) if an event can happen in alternative ways, the probability amplitudes for each way are added; and (b) if a process consists of independent sub-processes, their probability amplitudes are multiplied. Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label "A") and a photon at another place and time (given the label "B"). A typical question from a physical standpoint is: "What is the probability of finding an electron at "C" (another place and a later time) and a photon at "D" (yet another place and time)?". The simplest process to achieve this end is for the electron to move from "A" to "C" (an elementary action) and for the photon to move from "B" to "D" (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – "E"("A" to "C") and "P"("B" to "D") – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability. But there are other ways in which the end result could come about. The electron might move to a place and time "E", where it absorbs the photon; then move on before emitting another photon at "F"; then move on to "C", where it is detected, while the new photon moves on to "D". The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertices – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of "E" and "F". We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for "E" and "F". (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to "G", where it emits a photon, which goes on to "D", while the electron moves on to "H", where it absorbs the first photon, before moving on to "C". Again, we can calculate the probability amplitude of these possibilities (for all points "G" and "H"). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering. There is an "infinite number" of other intermediate processes in which more and more photons are absorbed and/or emitted. For each of these possibilities, there is a Feynman diagram describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED.
To calculate the probability of "any" interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude. That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is "not" true in full quantum electrodynamics. There is a possibility of an electron at "A", or a photon at "B", moving as a basic action to "any other place and time in the universe". That includes places that could only be reached at speeds greater than that of light and also "earlier times". (An electron moving backwards in time can be viewed as a positron moving forward in time.) Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers. Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the "square" of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, "v" and "w", are involved, the probability of the process will be given either by \(|v + w|^2\) or \(|vw|^2\). The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers. Addition and multiplication are common operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction. That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, "P"("A" to "B") consists of 16 complex numbers, or probability amplitude arrows. There are also some minor changes to do with the quantity "j", which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping.
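Since Feynman's arrows are just complex numbers, the two rules above can be illustrated directly. A minimal Python sketch, with arbitrary toy values for the amplitudes:

# Amplitudes for two alternatives of the same process (toy values):
v = 0.3 + 0.4j
w = 0.1 - 0.2j

p_alternatives = abs(v + w) ** 2   # rule a): add the arrows, then square the length
p_successive   = abs(v * w) ** 2   # rule b): multiply the arrows, then square the length

print(p_alternatives)   # lengths combine head-to-tail, so the angles matter
print(p_successive)     # lengths multiply and angles add

Placing arrows head-to-tail is exactly complex addition, and multiplying lengths while adding angles is exactly complex multiplication, which is why the arrow picture is an accurate representation rather than a mere analogy.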
Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at "A" and "B" ending at "C" and "D". The amplitude would be calculated as the "difference", \(E(A\text{ to }D) \times E(B\text{ to }C) - E(A\text{ to }C) \times E(B\text{ to }D)\), where we would expect, from our everyday idea of probabilities, that it would be a sum. Finally, one has to compute "P"("A" to "B") and "E"("C" to "D") corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and Maxwell's equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows:

\[ P(A\text{ to }B) \to D_F(x_B - x_A), \qquad E(C\text{ to }D) \to S_F(x_D - x_C), \]

where a shorthand symbol such as \(x_A\) stands for the four real numbers that give the time and position in three dimensions of the point labeled "A". A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from "A" to "B", we must take into account "all" the possible ways: all possible Feynman diagrams with those endpoints. Thus there will be a way in which the electron travels to "C", emits a photon there and then absorbs it again at "D" before moving on to "B". Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on "ad infinitum". This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to "infinite" probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process". Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem." Mathematically, QED is an abelian gauge theory with the symmetry group U(1). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field.
The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field is given in natural units by the real part of

\[ \mathcal{L} = \bar\psi \left( i\gamma^\mu D_\mu - m \right) \psi - \frac{1}{4} F_{\mu\nu} F^{\mu\nu}, \]

where \(\gamma^\mu\) are the Dirac matrices, \(\psi\) is a bispinor field of spin-1/2 particles, \(\bar\psi = \psi^\dagger \gamma^0\) is its Dirac adjoint, \(D_\mu = \partial_\mu + ieA_\mu\) is the gauge covariant derivative, \(e\) is the coupling constant (equal to the electric charge of the bispinor field), \(m\) is the mass of the electron, \(A_\mu\) is the covariant four-potential of the electromagnetic field, and \(F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu\) is the electromagnetic field tensor. Substituting the definition of \(D_\mu\) into the Lagrangian gives

\[ \mathcal{L} = i\bar\psi\gamma^\mu\partial_\mu\psi - e\bar\psi\gamma^\mu A_\mu \psi - m\bar\psi\psi - \frac{1}{4} F_{\mu\nu} F^{\mu\nu}. \]

From this Lagrangian, the equations of motion for the \(\psi\) and \(A\) fields can be obtained. The derivatives of the Lagrangian with respect to \(\psi\) are

\[ \frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)} = i\bar\psi\gamma^\mu, \qquad \frac{\partial\mathcal{L}}{\partial\psi} = -e\bar\psi\gamma^\mu A_\mu - m\bar\psi. \]

Inserting these into the Euler–Lagrange equation results in

\[ i\partial_\mu\bar\psi\gamma^\mu + e\bar\psi\gamma^\mu A_\mu + m\bar\psi = 0, \]

with Hermitian conjugate

\[ i\gamma^\mu\partial_\mu\psi - e\gamma^\mu A_\mu\psi - m\psi = 0. \]

Bringing the middle term to the right-hand side yields

\[ i\gamma^\mu\partial_\mu\psi - m\psi = e\gamma^\mu A_\mu\psi. \]

The left-hand side is like the original Dirac equation, and the right-hand side is the interaction with the electromagnetic field. For the \(A\) field, the derivatives this time are

\[ \frac{\partial\mathcal{L}}{\partial(\partial_\nu A_\mu)} = -F^{\nu\mu}, \qquad \frac{\partial\mathcal{L}}{\partial A_\mu} = -e\bar\psi\gamma^\mu\psi. \]

Substituting back into the Euler–Lagrange equation leads to

\[ \partial_\nu F^{\nu\mu} = e\bar\psi\gamma^\mu\psi. \]

Now, if we impose the Lorenz gauge condition \(\partial_\mu A^\mu = 0\), the equations reduce to

\[ \Box A^\mu = e\bar\psi\gamma^\mu\psi, \]

which is a wave equation for the four-potential, the QED version of the classical Maxwell equations in the Lorenz gauge. (The square represents the D'Alembert operator, \(\Box = \partial_\mu\partial^\mu\).) This theory can be straightforwardly quantized by treating bosonic and fermionic sectors as free. This permits us to build a set of asymptotic states that can be used to start computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator, which for a given initial state \(|i\rangle\) will give a final state \(\langle f|\) in such a way as to have

\[ M_{fi} = \langle f | U | i \rangle. \]

This technique is also known as the S-matrix. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above:

\[ V = e \int d^3x\, \bar\psi\gamma^\mu\psi A_\mu, \]

and so one has

\[ U = T \exp\left[ -i \int_{t_0}^{t} dt'\, V(t') \right], \]

where \(T\) is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the development parameter. This series is called the Dyson series. Despite the conceptual clarity of this Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators. Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles' momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then "look" the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta. Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. In this case, the rules for drawing are the following: internal photon and electron lines contribute their respective propagators, each vertex contributes a factor \(-ie\gamma^\mu\), external lines contribute the appropriate spinors or polarization vectors, and energy–momentum is conserved at every vertex. To these rules we must add a further one for closed loops that implies an integration over the loop momentum, \(\int d^4p/(2\pi)^4\), since these internal ("virtual") particles are not constrained to any specific energy–momentum, even that usually required by special relativity (see Propagator for details). From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering.
The Feynman diagrams in this case are the two tree-level diagrams, with the incoming photon absorbed before the outgoing photon is emitted in one and after it in the other, and so we are able to get the corresponding amplitude at the first order of a perturbation series for the S-matrix:

\[ M_{fi} = -e^2\, \bar u(\vec p\,', s') \left( \slashed\epsilon\,' \frac{\slashed p + \slashed k + m_e}{(p + k)^2 - m_e^2} \slashed\epsilon + \slashed\epsilon \frac{\slashed p - \slashed k' + m_e}{(p - k')^2 - m_e^2} \slashed\epsilon\,' \right) u(\vec p, s), \]

where \(p, s\) and \(p', s'\) are the momenta and spins of the incoming and outgoing electron, \(k\) and \(k'\) those of the incoming and outgoing photon, and \(\epsilon\) and \(\epsilon'\) their polarizations, from which we can compute the cross section for this scattering. The predictive success of quantum electrodynamics largely rests on the use of perturbation theory, expressed in Feynman diagrams. However, quantum electrodynamics also leads to predictions beyond perturbation theory. In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, thus causing the decay of the field. This process, called the Schwinger effect, cannot be understood in terms of any finite number of Feynman diagrams and hence is described as nonperturbative. Mathematically, it can be derived by a semiclassical approximation to the path integral of quantum electrodynamics. Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing the following simpler ones that, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory being meaningful after renormalization is that the number of diverging diagrams is finite. In this case, the theory is said to be "renormalizable". The reason for this is that to get observables renormalized, one needs a finite number of constants to maintain the predictive value of the theory untouched. This is exactly the case for quantum electrodynamics, which displays just three diverging diagrams. This procedure gives observables in very close agreement with experiment, as seen, e.g., for the electron gyromagnetic ratio. Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is only conjectural and presently under very active research, are renormalizable theories. An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero. The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that "like" charges would "attract" and "unlike" charges would "repel". This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is "sick" for any negative value of the coupling constant, the series does not converge but is at best an asymptotic series. From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy. The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory.
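As a numeric illustration of the agreement cited above for the electron gyromagnetic ratio, this short Python sketch evaluates the first-order (Schwinger) contribution to the electron's anomalous magnetic moment; the constants are rounded values quoted for comparison only.

import math

alpha = 1 / 137.035999        # fine-structure constant (approximate)
a_e_one_loop = alpha / (2 * math.pi)   # Schwinger term: a_e = (g - 2)/2
print(a_e_one_loop)           # ~0.0011614

# The measured value is approximately a_e ~ 0.00115965, so even the
# single first-order diagram reproduces the measurement to roughly
# 0.15 percent; higher-order diagrams account for the remainder.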
https://en.wikipedia.org/wiki?curid=25268
Quine (computing) A quine is a computer program which takes no input and produces a copy of its own source code as its only output. The standard terms for these programs in the computability theory and computer science literature are "self-replicating programs", "self-reproducing programs", and "self-copying programs". A quine is a fixed point of an execution environment, when the execution environment is viewed as a function transforming programs into their outputs. Quines are possible in any Turing complete programming language, as a direct consequence of Kleene's recursion theorem. For amusement, programmers sometimes attempt to develop the shortest possible quine in any given programming language. The name "quine" was coined by Douglas Hofstadter, in his popular science book "Gödel, Escher, Bach", in honor of philosopher Willard Van Orman Quine (1908–2000), who made an extensive study of indirect self-reference, and in particular for the following paradox-producing expression, known as Quine's paradox: "Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation. The idea of self-reproducing automata came from the dawn of computing, if not before. John von Neumann theorized about them in the 1940s. Later, Paul Bratley and Jean Millo's article "Computer Recreations: Self-Reproducing Automata" discussed them in 1972. Bratley first became interested in self-reproducing programs after seeing the first known such program written in Atlas Autocode at Edinburgh in the 1960s by the University of Edinburgh lecturer and researcher Hamish Dewar. The "download source" requirement of the Affero General Public License is based on the idea of a quine. In general, the method used to create a quine in any programming language is to have, within the program, two pieces: (a) code used to do the actual printing and (b) data that represents the textual form of the code. The code functions by using the data to print the code (which makes sense since the data represents the textual form of the code), but it also uses the data, processed in a simple way, to print the textual representation of the data itself. Here are three small examples in Python 3:

a='a=%s%s%s;print(a%%(chr(39),a,chr(39)))';print(a%(chr(39),a,chr(39)))
b='b={}{}{};print(b.format(chr(39),b,chr(39)))';print(b.format(chr(39),b,chr(39)))
c='c=%r;print(c%%c)';print(c%c)

In Python 3.8:

exec(s:='print("exec(s:=%r)"%s)')

The following Java code demonstrates the basic structure of a quine.

public class Quine
{
  public static void main(String[] args)
  {
    char q = 34;      // Quotation mark character
    String[] l = {    // Array of source code
    "public class Quine",
    "{",
    "  public static void main(String[] args)",
    "  {",
    "    char q = 34;      // Quotation mark character",
    "    String[] l = {    // Array of source code",
    "    ",
    "    };",
    "    for(int i = 0; i < 6; i++)           // Print opening code",
    "        System.out.println(l[i]);",
    "    for(int i = 0; i < l.length; i++)    // Print string array",
    "        System.out.println(l[6] + q + l[i] + q + ',');",
    "    for(int i = 7; i < l.length; i++)    // Print this code",
    "        System.out.println(l[i]);",
    "  }",
    "}",
    };
    for(int i = 0; i < 6; i++)           // Print opening code
        System.out.println(l[i]);
    for(int i = 0; i < l.length; i++)    // Print string array
        System.out.println(l[6] + q + l[i] + q + ',');
    for(int i = 7; i < l.length; i++)    // Print this code
        System.out.println(l[i]);
  }
}

The source code contains a string array of itself, which is output twice, once inside quotation marks. This code was adapted from an original post from c2.com, where the author, Jason Wilson, posted it as a minimalistic version of a Quine, without Java comments. Some programming languages have the ability to evaluate a string as a program. Quines can take advantage of this feature. For example, this Ruby quine:

eval s="print 'eval s=';p s"

In many functional languages, including Scheme and other Lisps, and interactive languages such as APL, numbers are self-evaluating. In TI-BASIC, if the last line of a program is value returning, the returned value is displayed on the screen. Therefore, in such languages a program containing a single digit results in a 1-byte quine. Since such code does not "construct" itself, this is often considered cheating.
In some languages, particularly scripting languages but also C, an empty source file is a fixed point of the language, being a valid program that produces no output. Such an empty program, submitted as "the world's smallest self reproducing program", once won the "worst abuse of the rules" prize in the International Obfuscated C Code Contest. The program was not actually compiled, but used cp to copy the file into another file, which could be executed to print nothing. Other questionable techniques include making use of compiler messages; for example, in the GW-BASIC environment, entering "Syntax Error" will cause the interpreter to respond with "Syntax Error". Quines, per definition, cannot receive "any" form of input, including reading a file, which means a quine is considered to be "cheating" if it looks at its own source code. The following shell script is therefore not a quine:

cat $0

The quine concept can be extended to multiple levels of recursion, giving rise to "ouroboros programs", or quine-relays; these should not be confused with multiquines (below). As an example, a Java program can output the source for a C++ program that outputs the original Java code; the paired Java and C++ listings are elided here. Such programs have been produced with various cycle lengths. David Madore, creator of Unlambda, describes multiquines as follows: "A multiquine is a set of r different programs (in r different languages — without this condition we could take them all equal to a single quine), each of which is able to print any of the r programs (including itself) according to the command line argument it is passed. (Note that cheating is not allowed: the command line arguments must not be too long — passing the full text of a program is considered cheating)." A multiquine consisting of 2 languages (or biquine) can thus be seen as a set of two programs, both of which are able to print either of the two, depending on the command line argument supplied. Theoretically, there is no limit on the number of languages in a multiquine: a 5-part multiquine (or pentaquine) has been produced with Python, Perl, C, NewLISP, and F#, and there is also a 25-language multiquine. A radiation-hardened quine is a quine that can have any single character removed and still produce the original program with no missing character. Of necessity, such quines are much more convoluted than ordinary quines, as is seen by the following example in Ruby:

eval='eval$q=%q(puts %q(10210/#{1 1 if 1==21}}/.i rescue##/ 1 1"[13,213].max_by{|s|s.size}#"##").gsub(/\d/){["=\47eval$q=%q(#$q)#\47##\47 exit)#'##'
instance_eval='eval$q=%q(puts %q(10210/#{1 1 if 1==21}}/.i rescue##/ 1 1"[13,213].max_by{|s|s.size}#"##").gsub(/\d/){["=\47eval$q=%q(#$q)#\47##\47 exit)#'##'
/#{eval eval if eval==instance_eval}}/.i rescue##/
eval eval"[eval||=9,instance_eval||=9].max_by{|s|s.size}#"##"
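The fixed-point characterization given at the start of this entry suggests a direct test. The following sketch (ours; it assumes the candidate is a Python source file run with the same interpreter) checks whether a program's output equals its own source:

import subprocess
import sys

def is_quine(path):
    """Return True if running the Python file at `path` prints exactly its own source."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.stdout == source

Note that by the conventions above, the checked program itself must not read its own file; the checker may, since the checker is not the quine.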
https://en.wikipedia.org/wiki?curid=25270
Field of fractions In abstract algebra, the field of fractions of an integral domain is the smallest field in which it can be embedded. The elements of the field of fractions of the integral domain "R" are equivalence classes (see the construction below) written as "a"/"b" with "a" and "b" in "R" and "b" ≠ 0. The field of fractions of "R" is sometimes denoted by Frac("R") or Quot("R"). Mathematicians refer to this construction as the field of fractions, fraction field, field of quotients, or quotient field. All four are in common usage. The expression "quotient field" may sometimes run the risk of confusion with the quotient of a ring by an ideal, which is a quite different concept. Let "R" be any integral domain. For "a", "b" ∈ "R" with "b" ≠ 0, the fraction "a"/"b" denotes the equivalence class of the pair ("a", "b"), where ("a", "b") is equivalent to ("c", "d") if and only if "ad" = "bc". The "field of fractions" Frac("R") is defined as the set of all such fractions "a"/"b". The sum of "a"/"b" and "c"/"d" is defined as ("ad" + "bc")/("bd"), and the product of "a"/"b" and "c"/"d" is defined as ("ac")/("bd") (one checks that these are well defined). The embedding of "R" in Frac("R") maps each "a" in "R" to the fraction ("ea")/"e" for any nonzero "e" in "R" (the equivalence class is independent of the choice of "e"). This is modelled on the identity "a"/1 = "a". The field of fractions of "R" is characterised by the following universal property: if "h" : "R" → "F" is an injective ring homomorphism from "R" into a field "F", then there exists a unique ring homomorphism "g" : Frac("R") → "F" which extends "h". There is a categorical interpretation of this construction. Let C be the category of integral domains and injective ring maps. The functor from C to the category of fields which takes every integral domain to its fraction field and every homomorphism to the induced map on fields (which exists by the universal property) is the left adjoint of the forgetful functor from the category of fields to C. A multiplicative identity is not required for the role of the integral domain; this construction can be applied to any nonzero commutative rng "R" with no nonzero zero divisors. The embedding is given by "a" ↦ ("ab")/"b" for any nonzero "b" in "R". For any commutative ring "R" and any multiplicative set "S" in "R", the localization "S"⁻¹"R" is the commutative ring consisting of fractions "a"/"s" with "a" ∈ "R" and "s" ∈ "S", where now ("a", "s") is equivalent to ("b", "t") if and only if there exists "u" ∈ "S" such that "u"("at" − "bs") = 0. Two special cases of this are notable: taking "S" to be the nonzero elements of an integral domain recovers the field of fractions, and taking "S" to be the non-zero-divisors of an arbitrary commutative ring yields the total quotient ring. The semifield of fractions of a commutative semiring with no zero divisors is the smallest semifield in which it can be embedded. The elements of the semifield of fractions of the commutative semiring "R" are equivalence classes written as "a"/"b" with "a" and "b" in "R".
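As a worked instance (our illustration, in the notation above): the field of fractions of the integers is the rationals,

\[ \operatorname{Frac}(\mathbb{Z}) \cong \mathbb{Q}, \qquad (a,b) \sim (c,d) \iff ad = bc, \qquad \tfrac{1}{2} = \tfrac{2}{4} \ \text{since}\ 1 \cdot 4 = 2 \cdot 2, \]

with sum and product \(\tfrac{a}{b} + \tfrac{c}{d} = \tfrac{ad+bc}{bd}\) and \(\tfrac{a}{b} \cdot \tfrac{c}{d} = \tfrac{ac}{bd}\), e.g. \(\tfrac{1}{2} + \tfrac{1}{3} = \tfrac{5}{6}\). Likewise, the field of fractions of a polynomial ring "K"["x"] over a field "K" is the field "K"("x") of rational functions.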
https://en.wikipedia.org/wiki?curid=25271
Quadratic reciprocity In number theory, the law of quadratic reciprocity is a theorem about modular arithmetic that gives conditions for the solvability of quadratic equations modulo prime numbers. Due to its subtlety, it has many formulations, but the most standard statement is: for distinct odd primes "p" and "q", the Legendre symbols satisfy ("p"/"q")("q"/"p") = (−1)^(((p−1)/2)·((q−1)/2)). This law, together with its supplements, allows the easy calculation of any Legendre symbol, making it possible to determine whether there is an integer solution for any quadratic equation of the form x² ≡ "a" (mod "p") for "p" an odd prime; that is, to determine the "perfect squares" mod "p". However, this is a non-constructive result: it gives no help at all for "finding" a specific solution; for this, one needs other ideas. (For example, in the case "p" ≡ 3 (mod 4), using Euler's criterion one can give an explicit formula for the "square roots" modulo "p" of a quadratic residue "a", namely ±"a"^((p+1)/4): indeed, (±a^((p+1)/4))² = a^((p+1)/2) = a·a^((p−1)/2) ≡ a (mod p), the last step by Euler's criterion. Note that this formula only works if it is known in advance that "a" is a quadratic residue, which can be checked using the law of quadratic reciprocity.) The theorem was conjectured by Euler and Legendre and first proved by Gauss. He refers to it as the "fundamental theorem" in the "Disquisitiones Arithmeticae" and his papers; privately he referred to it as the "golden theorem." He published six proofs, and two more were found in his posthumous papers. There are now over 240 published proofs. The shortest known proof is included below, together with short proofs of the law's supplements (the Legendre symbols of −1 and 2). Since Gauss, generalizing the reciprocity law has been a leading problem in mathematics, and has been crucial to the development of much of the machinery of modern algebra, number theory, and algebraic geometry, culminating in Artin reciprocity, class field theory, and the Langlands program. Quadratic reciprocity arises from certain subtle factorization patterns involving perfect square numbers. In this section, we give examples which lead to the general case. Consider the polynomial f(n) = n² − 5 and its values for n = 1, 2, 3, …. The prime factorizations of these values begin −4 = −2², −1, 4 = 2², 11, 20 = 2²·5, 31, 44 = 2²·11, 59, 76 = 2²·19, 95 = 5·19, 116 = 2²·29, 139, 164 = 2²·41, … The prime factors "p" dividing f(n) are 2, 5, and every prime whose final digit is 1 or 9; no primes ending in 3 or 7 ever appear. Now, "p" is a prime factor of some f(n) whenever f(n) ≡ 0 (mod "p"), i.e. whenever n² ≡ 5 (mod "p"), i.e. whenever 5 is a quadratic residue modulo "p". This happens for "p" = 2, 5 and those primes with "p" ≡ 1, 4 (mod 5), and note that the latter numbers 1 and 4 are precisely the quadratic residues modulo 5. Therefore, except for "p" = 2, 5, we have that 5 is a quadratic residue modulo "p" iff "p" is a quadratic residue modulo 5. The law of quadratic reciprocity gives a similar characterization of prime divisors of n² − "q" for any prime "q", which leads to a characterization for any integer. Let "p" be an odd prime. A number modulo "p" is a quadratic residue whenever it is congruent to a square (mod "p"); otherwise it is a quadratic non-residue. ("Quadratic" can be dropped if it is clear from the context.) Here we exclude zero as a special case. Then as a consequence of the fact that the multiplicative group of a finite field of order "p" is cyclic of order "p" − 1, the following statements hold: exactly half of the nonzero classes, ("p" − 1)/2 of them, are quadratic residues; the product of two residues or of two non-residues is a residue; and the product of a residue and a non-residue is a non-residue. For the avoidance of doubt, these statements do "not" hold if the modulus is not prime.
For example, there are only 2 quadratic residues (1 and 4) in the multiplicative group modulo 15. Moreover, although 7 and 8 are quadratic non-residues, their product 7 × 8 = 56 ≡ 11 (mod 15) is also a quadratic non-residue, in contrast to the prime case. Quadratic residues can be arranged in a table whose row "p" lists the residues modulo "p"; such a table is complete for odd primes less than 50. To check whether a number "m" is a quadratic residue mod one of these primes "p", find "a" ≡ "m" (mod "p") and 0 ≤ "a" < "p". If "a" is in row "p", then "m" is a residue (mod "p"); if "a" is not in row "p" of the table, then "m" is a nonresidue (mod "p"). The quadratic reciprocity law is the statement that certain patterns found in the table are true in general. Trivially 1 is a quadratic residue for all primes. The question becomes more interesting for −1. Examining the table, we find −1 in rows 5, 13, 17, 29, 37, and 41 but not in rows 3, 7, 11, 19, 23, 31, 43 or 47. The former set of primes are all congruent to 1 modulo 4, and the latter are congruent to 3 modulo 4. Examining the table, we find 2 in rows 7, 17, 23, 31, 41, and 47, but not in rows 3, 5, 11, 13, 19, 29, 37, or 43. The former primes are all ≡ ±1 (mod 8), and the latter are all ≡ ±3 (mod 8). This leads to the second supplement: 2 is a quadratic residue modulo "p" if and only if "p" ≡ ±1 (mod 8). −2 is in rows 3, 11, 17, 19, 41, 43, but not in rows 5, 7, 13, 23, 29, 31, 37, or 47. The former are ≡ 1 or ≡ 3 (mod 8), and the latter are ≡ 5, 7 (mod 8). 3 is in rows 11, 13, 23, 37, and 47, but not in rows 5, 7, 17, 19, 29, 31, 41, or 43. The former are ≡ ±1 (mod 12) and the latter are all ≡ ±5 (mod 12). −3 is in rows 7, 13, 19, 31, 37, and 43 but not in rows 5, 11, 17, 23, 29, 41, or 47. The former are ≡ 1 (mod 3) and the latter ≡ 2 (mod 3). Since the only residue (mod 3) is 1, we see that −3 is a quadratic residue modulo every prime which is a residue modulo 3. 5 is in rows 11, 19, 29, 31, and 41 but not in rows 3, 7, 13, 17, 23, 37, 43, or 47. The former are ≡ ±1 (mod 5) and the latter are ≡ ±2 (mod 5). Since the only residues (mod 5) are ±1, we see that 5 is a quadratic residue modulo every prime which is a residue modulo 5. −5 is in rows 3, 7, 23, 29, 41, 43, and 47 but not in rows 11, 13, 17, 19, 31, or 37. The former are ≡ 1, 3, 7, 9 (mod 20) and the latter are ≡ 11, 13, 17, 19 (mod 20). The observations about −3 and 5 continue to hold: −7 is a residue modulo "p" if and only if "p" is a residue modulo 7, −11 is a residue modulo "p" if and only if "p" is a residue modulo 11, 13 is a residue (mod "p") if and only if "p" is a residue modulo 13, etc. The more complicated-looking rules for the quadratic characters of 3 and −5, which depend upon congruences modulo 12 and 20 respectively, are simply the ones for −3 and 5 combined with the first supplement. The generalization of the rules for −3 and 5 is Gauss's statement of quadratic reciprocity. Another way to organize the data is to see which primes are residues mod which other primes, as illustrated in a second table whose entry in row "p", column "q" is R if "q" is a quadratic residue (mod "p") and N if it is a nonresidue. If the row, or the column, or both, are ≡ 1 (mod 4) the entry is blue or green; if both row and column are ≡ 3 (mod 4), it is yellow or orange. The blue and green entries are symmetric around the diagonal: The entry for row "p", column "q" is R (resp N) if and only if the entry at row "q", column "p", is R (resp N). The yellow and orange ones, on the other hand, are antisymmetric: The entry for row "p", column "q" is R (resp N) if and only if the entry at row "q", column "p", is N (resp R). A script that regenerates both tables and checks these patterns is sketched below.
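A short script along these lines (our sketch; names are illustrative) regenerates the residue rows and verifies the symmetry and antisymmetry patterns just described:

# Rows of the residue table: row p lists the distinct nonzero squares mod p.
def quadratic_residues(p):
    return sorted({x * x % p for x in range(1, p)})

# Legendre symbol (a/p) for odd prime p, via Euler's criterion a^((p-1)/2) mod p.
def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
for p in primes:
    print(p, quadratic_residues(p))

# Reciprocity pattern: (p/q)(q/p) = -1 exactly when p and q are both 3 (mod 4).
for p in primes:
    for q in primes:
        if p != q:
            expected = -1 if (p % 4 == 3 and q % 4 == 3) else 1
            assert legendre(p, q) * legendre(q, p) == expected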
The reciprocity law states that these patterns hold for all "p" and "q". Quadratic Reciprocity (Gauss's statement). If "q" ≡ 1 (mod 4), then the congruence x² ≡ "p" (mod "q") is solvable if and only if x² ≡ "q" (mod "p") is solvable. If "q" ≡ 3 (mod 4), then the congruence x² ≡ "p" (mod "q") is solvable if and only if x² ≡ −"q" (mod "p") is solvable. Quadratic Reciprocity (combined statement). Define "q"* = (−1)^((q−1)/2)·"q". Then the congruence x² ≡ "p" (mod "q") is solvable if and only if x² ≡ "q"* (mod "p") is solvable. Quadratic Reciprocity (Legendre's statement). If "p" or "q" are congruent to 1 modulo 4, then: x² ≡ "q" (mod "p") is solvable if and only if x² ≡ "p" (mod "q") is solvable. If "p" and "q" are congruent to 3 modulo 4, then: x² ≡ "q" (mod "p") is solvable if and only if x² ≡ "p" (mod "q") is not solvable. The last is immediately equivalent to the modern form stated in the introduction above. It is a simple exercise to prove that Legendre's and Gauss's statements are equivalent – it requires no more than the first supplement and the facts about multiplying residues and nonresidues. The following proof, from The American Mathematical Monthly, is apparently the shortest one known. Let formula_52, where formula_53 is the Legendre symbol. Note that for an odd formula_54 and any formula_55 In particular, substituting formula_57 and formula_4 a nonresidue, we get formula_59, and setting formula_60, we get formula_61; and by similar reasoning, Furthermore, and, recalling that Therefore, for odd formula_54 we have Since formula_68, by induction for odd formula_54 Therefore, by Euler's criterion, for an odd prime "q", Now, the "q" cyclic shifts of a given "q"-tuple formula_75 are distinct unless all formula_76 are equal, since the period of its repeated single-position cyclic shift divides "q", and so is "q" or 1. When they are distinct, their total contribution to the sum defining formula_79 is formula_80, which is divisible by "q". Therefore, modulo "q" (we take formula_83), So and formula_86 are congruent to formula_79, and thus to each other, modulo "q" – but they both are numbers of the form ±1, so they are equal, which is the law of quadratic reciprocity. The value of the Legendre symbol of −1 (used in the proof above) follows directly from Euler's criterion: (−1/"p") ≡ (−1)^((p−1)/2) (mod "p") by Euler's criterion, but both sides of this congruence are numbers of the form ±1, so they must be equal. Whether 2 is a quadratic residue can be concluded if we know the number of solutions of the equation x² + y² = 2 with x, y in Z/"p"Z, which can be solved by standard methods. Namely, all its solutions where xy(x² − y²) ≠ 0 can be grouped into octuplets of the form (±x, ±y), (±y, ±x), and what is left are four solutions of the form (±1, ±1) and possibly four additional solutions where xy = 0 (and then x² = 2 or y² = 2), which exist precisely if 2 is a quadratic residue. That is, 2 is a quadratic residue precisely if the number of solutions of this equation is divisible by 8. And this equation can be solved in just the same way here as over the rational numbers: substitute formula_104, where we demand that formula_105 (leaving out the two solutions formula_106); then the original equation transforms into a rational equation in a parameter "t". Here "t" can have any value that does not make the denominator zero - for which there are formula_109 possibilities (i.e. 2 if −1 is a residue, 0 if not) - and also does not make "x" zero, which excludes one more option.
Thus the count of possibilities for "t", together with the two excluded solutions, gives overall formula_117 solutions of the original equation. Therefore, 2 is a residue modulo "p" if and only if 8 divides formula_121. This is a reformulation of the condition stated above. The theorem was formulated in many ways before its modern form: Euler and Legendre did not have Gauss's congruence notation, nor did Gauss have the Legendre symbol. In this article "p" and "q" always refer to distinct positive odd primes, and "x" and "y" to unspecified integers. Fermat proved (or claimed to have proved) a number of theorems about expressing a prime by a quadratic form. He did not state the law of quadratic reciprocity, although the cases −1, ±2, and ±3 are easy deductions from these and other of his theorems. He also claimed to have a proof that if the prime number "p" ends in 7 (in base 10) and the prime number "q" ends in 3, and "p" ≡ "q" ≡ 3 (mod 4), then "pq" = x² + 5y². Euler conjectured, and Lagrange proved, companion statements, such as that a prime "p" ≡ 1, 9 (mod 20) is itself of the form x² + 5y². Proving these and other statements of Fermat was one of the things that led mathematicians to the reciprocity theorem. Translated into modern notation, Euler stated that for distinct odd primes "p" and "q", whether "q" is a quadratic residue modulo "p" depends only on the residue class of "p" modulo 4"q". This is equivalent to quadratic reciprocity. He could not prove it, but he did prove the second supplement. Fermat proved that if "p" is a prime number and "a" is an integer, then "a"^"p" ≡ "a" (mod "p"). Thus if "p" does not divide "a", then "a"^(p−1) ≡ 1 (mod "p"); using the non-obvious fact (see for example Ireland and Rosen below) that the residues modulo "p" form a field, and therefore in particular that the multiplicative group is cyclic, there can be at most two solutions to a quadratic equation, so that "a"^((p−1)/2) ≡ ±1 (mod "p"). Legendre lets "a" and "A" represent positive primes ≡ 1 (mod 4) and "b" and "B" positive primes ≡ 3 (mod 4), and sets out a table of eight theorems that together are equivalent to quadratic reciprocity. He says that since expressions of the form "N"^((c−1)/2) (mod "c") will come up so often, he will abbreviate them as ("N"/"c"). This is now known as the Legendre symbol, and an equivalent definition is used today: for all integers "a" and all odd primes "p", ("a"/"p") is 0 if "p" divides "a", 1 if "a" is a quadratic residue mod "p", and −1 if "a" is a quadratic nonresidue mod "p". He notes that these can be combined into the single formula ("p"/"q")("q"/"p") = (−1)^(((p−1)/2)·((q−1)/2)). A number of proofs, especially those based on Gauss's Lemma, explicitly calculate this formula. Legendre's attempt to prove reciprocity is based on a theorem of his concerning the solvability of equations of the form ax² + by² + cz² = 0. Example. Theorem I is handled by letting "a" ≡ 1 and "b" ≡ 3 (mod 4) be primes and assuming that formula_136 and, contrary to the theorem, that formula_137 Then formula_138 has a solution, and taking congruences (mod 4) leads to a contradiction. This technique doesn't work for Theorem VIII. Let "b" ≡ "B" ≡ 3 (mod 4), and assume the corresponding hypotheses. Then if there is another prime "p" ≡ 1 (mod 4) such that the solvability of formula_141 leads to a contradiction (mod 4). But Legendre was unable to prove there has to be such a prime "p"; he was later able to show that all that is required is the existence of a prime in a suitable arithmetic progression, but he couldn't prove that either. Hilbert symbol (below) discusses how techniques based on the existence of solutions to formula_143 can be made to work. Gauss first proves the supplementary laws. He sets the basis for induction by proving the theorem for ±3 and ±5. Noting that it is easier to state for −3 and +5 than it is for +3 or −5, he states the general theorem in the form: Introducing the notation "a" R "b" (resp. "a" N "b") to mean "a" is a quadratic residue (resp. nonresidue) (mod "b"), and letting "a", "a"′, etc. represent positive primes ≡ 1 (mod 4) and "b", "b"′, etc.
positive primes ≡ 3 (mod 4), he breaks it out into the same 8 cases as Legendre. In the next Article he generalizes this to what are basically the rules for the Jacobi symbol (below). Letting "A", "A"′, etc. represent any (prime or composite) positive numbers ≡ 1 (mod 4) and "B", "B"′, etc. positive numbers ≡ 3 (mod 4), all of these cases take the form "if a prime is a residue (mod a composite), then the composite is a residue or nonresidue (mod the prime), depending on the congruences (mod 4)". He proves that these follow from cases 1) - 8). Gauss needed, and was able to prove, a lemma similar to the one Legendre needed. The proof of quadratic reciprocity uses complete induction. Gauss's version in Legendre symbols can be combined into a single formula; a number of proofs of the theorem, especially those based on Gauss sums or on the splitting of primes in algebraic number fields, derive this formula. Note that the statements in this section are equivalent to quadratic reciprocity: if, for example, Euler's version is assumed, the Legendre-Gauss version can be deduced from it, and vice versa. This can be proven using Gauss's lemma. Gauss's fourth proof consists of proving this theorem (by comparing two formulas for the value of Gauss sums) and then restricting it to two primes. He then gives an example: Let "a" = 3, "b" = 5, "c" = 7, and "d" = 11. Three of these, 3, 7, and 11 ≡ 3 (mod 4), so "m" ≡ 3 (mod 4). 5×7×11 R 3; 3×7×11 R 5; 3×5×11 R 7;  and  3×5×7 N 11, so there are an odd number of nonresidues. The Jacobi symbol is a generalization of the Legendre symbol; the main difference is that the bottom number has to be positive and odd, but does not have to be prime. If it is prime, the two symbols agree. It obeys the same rules of manipulation as the Legendre symbol. In particular, the two supplementary laws continue to hold, and if both numbers are positive and odd the reciprocity formula holds in the same form (this is sometimes called "Jacobi's reciprocity law"). However, if the Jacobi symbol is 1 but the denominator is not a prime, it does not necessarily follow that the numerator is a quadratic residue of the denominator. Gauss's cases 9) - 14) above can be expressed in terms of Jacobi symbols, and since "p" is prime the left hand side is a Legendre symbol, and we know whether "M" is a residue modulo "p" or not. The formulas listed in the preceding section are true for Jacobi symbols as long as the symbols are defined. Euler's formula may be written in the same notation. Example. 2 is a residue modulo the primes 7, 23 and 31, each of which is ≡ ±1 (mod 8). But 2 is not a quadratic residue modulo 5, so it can't be one modulo 15. This is related to the problem Legendre had: if formula_164 then "a" is a non-residue modulo every prime in the arithmetic progression "m" + 4"a", "m" + 8"a", ..., if there "are" any primes in this series, but that wasn't proved until decades after Legendre. Eisenstein's formula requires relative primality conditions (which are true if the numbers are prime). The quadratic reciprocity law can be formulated in terms of the Hilbert symbol ("a","b")_"v", where "a" and "b" are any two nonzero rational numbers and "v" runs over all the non-trivial absolute values of the rationals (the archimedean one and the "p"-adic absolute values for primes "p"). The Hilbert symbol ("a","b")_"v" is 1 or −1. It is defined to be 1 if and only if the equation z² = ax² + by² has a solution in the completion of the rationals at "v" other than x = y = z = 0. The Hilbert reciprocity law states that ("a","b")_"v", for fixed "a" and "b" and varying "v", is 1 for all but finitely many "v" and the product of ("a","b")_"v" over all "v" is 1.
(This formally resembles the residue theorem from complex analysis.) The proof of Hilbert reciprocity reduces to checking a few special cases, and the non-trivial cases turn out to be equivalent to the main law and the two supplementary laws of quadratic reciprocity for the Legendre symbol. There is no kind of reciprocity in the Hilbert reciprocity law; its name simply indicates the historical source of the result in quadratic reciprocity. Unlike quadratic reciprocity, which requires sign conditions (namely positivity of the primes involved) and a special treatment of the prime 2, the Hilbert reciprocity law treats all absolute values of the rationals on an equal footing. Therefore, it is a more natural way of expressing quadratic reciprocity with a view towards generalization: the Hilbert reciprocity law extends with very few changes to all global fields and this extension can rightly be considered a generalization of quadratic reciprocity to all global fields. The early proofs of quadratic reciprocity are relatively unilluminating. The situation changed when Gauss used Gauss sums to show that quadratic fields are subfields of cyclotomic fields, and implicitly deduced quadratic reciprocity from a reciprocity theorem for cyclotomic fields. His proof was cast in modern form by later algebraic number theorists. This proof served as a template for class field theory, which can be viewed as a vast generalization of quadratic reciprocity. Robert Langlands formulated the Langlands program, which gives a conjectural vast generalization of class field theory. There are also quadratic reciprocity laws in rings other than the integers. In his second monograph on quartic reciprocity Gauss stated quadratic reciprocity for the ring Z["i"] of Gaussian integers, saying that it is a corollary of the biquadratic law in Z["i"], but did not provide a proof of either theorem. Dirichlet showed that the law in Z["i"] can be deduced from the law for Z without using biquadratic reciprocity. For an odd Gaussian prime π and a Gaussian integer α relatively prime to π, the quadratic character for Z["i"] is defined by the analogue of Euler's criterion: [α/π] ≡ α^((Nπ−1)/2) (mod π). Let λ = "a" + "bi" and μ = "c" + "di" be distinct Gaussian primes where "a" and "c" are odd and "b" and "d" are even; then the characters [λ/μ] and [μ/λ] satisfy a reciprocity formula. Consider the following third root of unity: ω = (−1 + √−3)/2. The ring of Eisenstein integers is Z[ω]. For an Eisenstein prime π and an Eisenstein integer α relatively prime to π, define the quadratic character for Z[ω] by the analogous formula. Let λ = "a" + "b"ω and μ = "c" + "d"ω be distinct Eisenstein primes where "a" and "c" are not divisible by 3 and "b" and "d" are divisible by 3. Eisenstein proved the corresponding reciprocity formula for these characters. The above laws are special cases of more general laws that hold for the ring of integers in any imaginary quadratic number field. Let "k" be an imaginary quadratic number field with ring of integers O_"k". For a prime ideal 𝔭 with odd norm N𝔭 and an element α of O_"k" prime to 𝔭, define the quadratic character for O_"k" by the analogue of Euler's criterion; for an arbitrary ideal 𝔞 factored into prime ideals 𝔭₁ ⋯ 𝔭_"n", define the character multiplicatively; and for β in O_"k", define the character of the principal ideal (β). Let ω be chosen so that
{1, ω} is an integral basis for O_"k". For ν in O_"k" with odd norm Nν, one defines (ordinary) integers "a", "b", "c", "d" by certain equations, and an auxiliary function of ν. If "m" = "Nμ" and "n" = "Nν" are both odd, Herglotz proved a reciprocity formula relating the two characters, together with a supplementary relation that holds under additional congruence conditions. Let "F" be a finite field with "q" = "p"^"n" elements, where "p" is an odd prime number and "n" is positive, and let "F"["x"] be the ring of polynomials in one variable with coefficients in "F". If "f", "g" ∈ "F"["x"] and "f" is irreducible, monic, and has positive degree, define the quadratic character ("g"/"f") for "F"["x"] in the usual manner: ("g"/"f") is 1 if "g" is a nonzero square modulo "f", −1 if it is a non-square modulo "f", and 0 if "f" divides "g". If "g" = "f"₁ ⋯ "f"ₙ is a product of monic irreducibles, let ("h"/"g") be the product of the symbols ("h"/"f"ᵢ). Dedekind proved that if "f" and "g" are monic and have positive degrees, then ("f"/"g")("g"/"f") = (−1)^(((q−1)/2)·deg "f"·deg "g"). The attempt to generalize quadratic reciprocity for powers higher than the second was one of the main goals that led 19th century mathematicians, including Carl Friedrich Gauss, Peter Gustav Lejeune Dirichlet, Carl Gustav Jakob Jacobi, Gotthold Eisenstein, Richard Dedekind, Ernst Kummer, and David Hilbert to the study of general algebraic number fields and their rings of integers; specifically Kummer invented ideals in order to state and prove higher reciprocity laws. The ninth in the list of 23 unsolved problems which David Hilbert proposed to the Congress of Mathematicians in 1900 asked for the "Proof of the most general reciprocity law [f]or an arbitrary number field". In 1923 Emil Artin, building upon work by Philipp Furtwängler, Teiji Takagi, Helmut Hasse and others, discovered a general theorem for which all known reciprocity laws are special cases; he proved it in 1927. The links below provide more detailed discussions of these theorems. The "Disquisitiones Arithmeticae" has been translated (from Latin) into English and German. The German edition includes all of Gauss's papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. Footnotes referencing the "Disquisitiones Arithmeticae" are of the form "Gauss, DA, Art. "n"". The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, § "n"". These are in Gauss's "Werke", Vol II, pp. 65–92 and 93–148. German translations are in pp. 511–533 and 534–586 of "Untersuchungen über höhere Arithmetik." Every textbook on elementary number theory (and quite a few on algebraic number theory) has a proof of quadratic reciprocity. Two are especially noteworthy: Franz Lemmermeyer's "Reciprocity Laws: From Euler to Eisenstein" has "many" proofs (some in exercises) of both quadratic and higher-power reciprocity laws and a discussion of their history. Its immense bibliography includes literature citations for 196 different published proofs for the quadratic reciprocity law. Kenneth Ireland and Michael Rosen's "A Classical Introduction to Modern Number Theory" also has many proofs of quadratic reciprocity (and many exercises), and covers the cubic and biquadratic cases as well. Exercise 13.26 (p. 202) says it all.
https://en.wikipedia.org/wiki?curid=25272
Quantum information In physics and computer science, quantum information is the information of the state of a quantum system. It is the basic entity of study in quantum information theory, and can be manipulated using quantum information processing techniques. Quantum information refers to both the technical definition in terms of Von Neumann entropy and the general computational term. Quantum information, like classical information, can be processed using digital computers, transmitted from one location to another, manipulated with algorithms, and analyzed with computer science and mathematics. Recently, the field of quantum computing has become an active research area because of its potential to disrupt modern computation, communication, and cryptography. Quantum information differs strongly from classical information, epitomized by the bit, in many striking and unfamiliar ways. While the fundamental unit of classical information is the bit, the most basic unit of quantum information is the qubit. Classical information is measured using Shannon entropy, while the quantum mechanical analogue is Von Neumann entropy. Given a statistical ensemble of quantum mechanical systems with the density matrix ρ, it is given by S(ρ) = −Tr(ρ log ρ). Many of the same entropy measures in classical information theory can also be generalized to the quantum case, such as Holevo entropy and the conditional quantum entropy. Unlike classical digital states (which are discrete), a qubit is continuous-valued, describable by a direction on the Bloch sphere. Despite being continuously valued in this way, a qubit is the "smallest" possible unit of quantum information, and despite the qubit state being continuously valued, it is impossible to measure the value precisely. Five famous theorems (the no-teleportation, no-cloning, no-deleting, no-broadcasting, and no-hiding theorems) describe the limits on manipulation of quantum information. These theorems prove that quantum information within the universe is conserved. They open up possibilities in quantum information processing. The state of a qubit contains all of its information. This state is frequently expressed as a vector on the Bloch sphere. This state can be changed by applying linear transformations or quantum gates to it. These unitary transformations are described as rotations on the Bloch sphere. While classical gates correspond to the familiar operations of Boolean logic, quantum gates are physical unitary operators. The study of all of the above topics and differences comprises quantum information theory. Quantum mechanics is the study of how microscopic physical systems change dynamically in nature. In the field of quantum information theory, the quantum systems studied are abstracted away from any real-world counterpart. A qubit might for instance physically be a photon in a linear optical quantum computer, an ion in a trapped ion quantum computer, or it might be a large collection of atoms as in a superconducting quantum computer. Regardless of the physical implementation, the limits and features of qubits implied by quantum information theory hold, as all these systems are mathematically described by the same apparatus of density matrices over the complex numbers. Another important difference from quantum mechanics is that, while quantum mechanics often studies infinite-dimensional systems such as the harmonic oscillator, quantum information theory is concerned with both continuous-variable systems and finite-dimensional systems. Many journals publish research in quantum information science, although only a few are dedicated to this area.
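To make the entropy definition above concrete, here is a small numerical sketch (ours, using NumPy; the function name is illustrative):

import numpy as np

def von_neumann_entropy(rho, base=2):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]  # treat 0*log(0) as 0
    return float(-np.sum(eigenvalues * np.log(eigenvalues)) / np.log(base))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure state |0><0|: entropy 0 bits
mixed = np.eye(2) / 2.0                    # maximally mixed qubit: entropy 1 bit
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))  # 0.0 1.0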
https://en.wikipedia.org/wiki?curid=25274
Quarterback The quarterback (commonly abbreviated "QB"), colloquially known as the "signal caller", is a position in gridiron football. Quarterbacks are members of the offensive team and line up directly behind the offensive line. In modern American football, the quarterback is usually considered the leader of the offensive team, and is often responsible for calling the play in the huddle. The quarterback also touches the ball on almost every offensive play, and is almost always the offensive player who throws forward passes. When the QB is tackled behind the line of scrimmage, it is called a sack. In modern American football, the quarterback is usually the leader of the offense, and his successes and failures can have a significant impact on the fortunes of his team. Accordingly, the quarterback is among the most glorified, scrutinized and highest-paid positions in team sports. "Bleacher Report" describes the signing of a starting quarterback as a Catch-22, where "NFL teams cannot maintain success without excellent quarterback play. But excellent quarterback play is usually so expensive that it prevents NFL teams from maintaining success", since a star quarterback's high salary may prevent the signing of other expensive star players as the team has to stay under the hard salary cap. The quarterback touches the ball on almost every offensive play. Prior to each play, the quarterback will usually tell the rest of his team which play the team will run; this is done in the huddle. However, when there isn't much time left, teams forgo the huddle and the quarterback calls plays on the run. After the team is lined up, the center will pass the ball back to the quarterback (a process called the snap). Usually on a running play, the quarterback will then hand or pitch the ball backwards to a halfback or fullback. On a passing play, the quarterback is almost always the player responsible for trying to throw the ball downfield to an eligible receiver. Additionally, the quarterback will often run with the football himself, which could be part of a designed play like the option run or quarterback sneak, or it could be an effort to avoid being sacked by the defense. Depending on the offensive scheme of his team, the quarterback's role can vary. In systems like the triple option the quarterback will only pass the ball a few times per game, if at all, while the pass-heavy spread offense as run by schools like Texas Tech requires quarterbacks to throw the ball on most plays. The passing game is emphasized heavily in the Canadian Football League (CFL), where there are only three downs as opposed to the four downs used in American football, a larger field of play and an extra eligible receiver. Different skillsets are required of the quarterback depending upon the offensive system. Quarterbacks that perform well in a pass-heavy spread offensive system, a popular offensive scheme in the NCAA and NFHS, rarely perform well in the National Football League (NFL), as the fundamentals of the pro-style offense used in the NFL are very different from those in the spread system, while quarterbacks in Canadian football need to be able to throw the ball often and accurately. In general, quarterbacks need to have physical skills such as arm strength, mobility, and a quick throwing motion, in addition to intangibles such as competitiveness, leadership, intelligence, and downfield vision. In the NFL, quarterbacks are required to wear a uniform number between 1 and 19.
In the National Collegiate Athletic Association (NCAA) and National Federation of State High School Associations (NFHS), quarterbacks are required to wear a uniform number between 1 and 49; in the NFHS, the quarterback can also wear a number between 80 and 89. In the CFL, the quarterback can wear any number from 0 to 49 and 70 to 99. Because of their numbering, quarterbacks are eligible receivers in the NCAA, NFHS, and CFL; in the NFL, quarterbacks are eligible receivers if they are not lined up directly under center. Often compared to captains in other team sports, the starting quarterback was, before the implementation of NFL team captains in 2007, usually the "de facto" team leader and a well-respected player on and off the field. Since 2007, when the NFL allowed teams to designate several captains to serve as on-field leaders, the starting quarterback has usually been one of the team captains as the leader of the team's offense. In the NFL, while the starting quarterback has no other responsibility or authority, he may, depending on the league or individual team, have various informal duties, such as participation in pre-game ceremonies, the coin toss, or other events outside the game. For instance, the starting quarterback is the first player (and third person, after the team owner and head coach) to be presented with the Lamar Hunt Trophy/George Halas Trophy (after winning the AFC/NFC Conference title) and the Vince Lombardi Trophy (after a Super Bowl victory). The starting quarterback of the victorious Super Bowl team is often chosen for the "I'm going to Disney World!" campaign (which includes a trip to Walt Disney World for them and their families), whether they are the Super Bowl MVP or not; examples include Joe Montana (XXIII), Trent Dilfer (XXXV), Peyton Manning (50), and Tom Brady and Julian Edelman (LIII). Dilfer was chosen even though teammate Ray Lewis was the MVP of Super Bowl XXXV, due to the bad publicity from Lewis' murder trial the prior year. Being able to rely on a quarterback is vital to team morale. San Diego Chargers safety Rodney Harrison called the 1998 season a "nightmare" because of poor play by Ryan Leaf and Craig Whelihan and, from the rookie Leaf, obnoxious behavior toward teammates. Although their 1999 season replacements Jim Harbaugh and Erik Kramer were not stars, linebacker Junior Seau said "you can't imagine the security we feel as teammates knowing we have two quarterbacks who have performed in this league and know how to handle themselves as players and as leaders". Commentators have noted the "disproportionate importance" of the quarterback, describing it as the "most glorified -- and scrutinized -- position" in team sports. It is believed that "there is no other position in sports that 'dictates the terms' of a game the way quarterback does", whether that impact is positive or negative: "Everybody feeds off of what the quarterback can and cannot do ... Defensively, offensively, everybody reacts to what threats or non-threats the quarterback has. Everything else is secondary". "An argument can be made that quarterback is the most influential position in team sports, considering he touches the ball on virtually every offensive play of a far shorter season than baseball, basketball or hockey -- a season in which every game is vitally important".
Most consistently successful NFL teams (for instance, those with multiple Super Bowl appearances within a short period of time) have been centered around a single starting quarterback; the one exception was the Washington Redskins under head coach Joe Gibbs, who won three Super Bowls with three different starting quarterbacks from 1982 to 1991. On a team's defense, the middle linebacker is regarded as the "quarterback of the defense" and is often the defensive leader, since he must be as smart as he is athletic. The middle linebacker (MLB), sometimes known as the "Mike", is the only inside linebacker in the 4–3 scheme. Compared to other positions in gridiron football, the backup quarterback gets considerably less playing time than the starting quarterback. A backup quarterback, besides being dressed for a game in case of an injury to the starter, may also have additional roles, such as serving as a holder on placekicks or as a punter, and often helps to prepare the defense in practice. Backup quarterbacks typically have the career of a journeyman quarterback, with short stints at multiple teams; a notable exception was Frank Reich, who backed up Jim Kelly for nine years with the Buffalo Bills. A capable backup quarterback, however, may threaten the starting quarterback's place on the team (see Platooning quarterbacks below); Aaron Rodgers was drafted by the Green Bay Packers as the eventual successor to Brett Favre, though Rodgers served in a backup role for a few years in order to develop sufficiently for the team to give him the starting job. A quarterback controversy results when a team has two capable quarterbacks competing for the starting position. Dallas Cowboys head coach Tom Landry alternated Roger Staubach and Craig Morton on each play, sending in the quarterbacks with the play call from the sideline; Morton started Super Bowl V, which his team lost, while Staubach started Super Bowl VI the next year and won. Although Morton played most of the 1972 season due to Staubach's injury, Staubach took back the starting job when he rallied the Cowboys to a come-from-behind win in the playoffs, and Morton was subsequently traded; Staubach and Morton later faced each other in Super Bowl XII. Another notable quarterback controversy involved the San Francisco 49ers, who had three capable starters: Joe Montana, Steve Young, and Steve Bono. Montana suffered a season-ending injury that cost him the 1991 NFL season and was supplanted by Young. Young was injured midway through the season, but Bono held the starting job (despite Young's recovery) until Bono's own injury let Young reclaim it. Montana also missed most of the 1992 NFL season, making only one appearance, and was then traded at his request to the Kansas City Chiefs, where he took over as the starter; upon his retirement he was succeeded by Bono as the Chiefs' starting quarterback. In addition to their main role, quarterbacks are occasionally used in other roles. Most teams utilize a backup quarterback as their holder on placekicks. A benefit of using quarterbacks as holders is that it is easier to pull off a fake field goal attempt, but many coaches prefer to use punters as holders because a punter will have far more time in practice sessions to work with the kicker than any quarterback would. In the Wildcat, a formation where a halfback lines up behind the center and the quarterback lines up out wide, the quarterback can be used as a receiving target or a blocker. A rarer use for a quarterback is to punt the ball himself, a play known as a quick kick.
Denver Broncos quarterback John Elway was known to perform quick kicks occasionally, typically when the Broncos were facing a third-and-long situation. Philadelphia Eagles quarterback Randall Cunningham, an All-America punter in college, was also known to punt the ball occasionally, and was assigned as the team's default punter for certain situations, such as when the team was backed up inside its own five-yard line. As Roger Staubach's back-up, Dallas Cowboys quarterback Danny White was also the team's punter, opening strategic possibilities for coach Tom Landry. Ascending to the starting role upon Staubach's retirement, White kept his position as the team's punter for several seasons, a double duty he had performed to All-American standard at Arizona State University. White also had two touchdown receptions as a Dallas Cowboy, both from the halfback option. If a quarterback is uncomfortable with the formation the defense is using, he may call an audible to change the play. For example, if a quarterback receives the call to execute a running play, but he notices that the defense is ready to blitz—that is, to send additional defensive backs across the line of scrimmage in an attempt to tackle the quarterback or hurt his ability to pass—the quarterback may want to change the play. To do this, the quarterback yells a special code, like "Blue 42" or "Texas 29", which tells the offense to switch to a specific play or formation; it all depends on the quarterback's judgment of the defense's alignment. Quarterbacks can also "spike" (throw the football at the ground) to stop the official game clock. For example, if a team is down by a field goal with only seconds remaining, a quarterback may spike the ball to prevent the game clock from running out. This usually allows the field goal unit to come onto the field, or the offense to attempt a final "Hail Mary" pass. However, if a team is winning, a quarterback can keep the clock running by kneeling after the snap. This is normally done when the opposing team has no timeouts and there is little time left in the game, as it allows a team to burn up the remaining time on the clock without risking a turnover or injury. A dual-threat quarterback possesses the skills and physique to run with the ball if necessary. With the rise of several blitz-heavy defensive schemes and increasingly faster defensive players, the importance of a mobile quarterback has been redefined. While arm power, accuracy, and pocket presence – the ability to successfully operate from within the "pocket" formed by his blockers – are still the most important quarterback virtues, the ability to elude or run past defenders creates an additional threat that allows greater flexibility in a team's passing and running game. Dual-threat quarterbacks have historically been more prolific at the college level. Typically, a quarterback with exceptional quickness is used in an option offense, which allows the quarterback to hand the ball off, run it himself, or pitch it to the running back following him at a distance of three yards outside and one yard behind. This type of offense forces defenders to commit to the running back up the middle, the quarterback around the end, or the running back trailing the quarterback. It is then that the quarterback has the "option" to identify which match-up is most favorable to the offense as the play unfolds and exploit that defensive weakness. In the college game, many schools employ several plays that are designed for the quarterback to run with the ball.
This is much less common in professional football, except for a quarterback sneak, a play that involves the quarterback diving forward behind the offensive line to gain a small amount of yardage, but there is still an emphasis on being mobile enough to escape a heavy pass rush. Historically, high-profile dual-threat quarterbacks in the NFL were uncommon, Steve Young and John Elway being among the notable exceptions, leading their teams to three and five Super Bowl appearances respectively; and Michael Vick, whose rushing ability was a rarity in the early 2000s, although he never led his team to a Super Bowl. In recent years, quarterbacks with dual-threat capabilities have become more popular. Current NFL quarterbacks considered to be dual-threats include Russell Wilson, Lamar Jackson and Josh Allen. Some teams employ a strategy which involves the use of more than one quarterback during the course of a game. This is more common at lower levels of football, such as high school or small college, but rare in major college or professional football. There are four circumstances in which a two-quarterback system may be used. The first is when a team is in the process of determining which quarterback will eventually be the starter, and may choose to use each quarterback for part of the game in order to compare the performances. For instance, the Seattle Seahawks' Pete Carroll used the pre-season games in 2012 to select Russell Wilson as the starting quarterback over Matt Flynn and Tarvaris Jackson. The second is a starter–reliever system, in which the starting quarterback splits the regular season playing time with the backup quarterback, although the former will start playoff games. This strategy is rare, and was last seen in the NFL in the "WoodStrock" combination of Don Strock and David Woodley, which took the Miami Dolphins to the Epic in Miami in 1982 and Super Bowl XVII the following year. The starter-reliever system is distinct from a one-off situation in which a starter is benched in favor of the back-up because the switch is part of the game plan (usually if the starter is playing poorly for that game), and the expectation is that the two players will assume the same roles game after game. The third is if a coach decides that the team has two quarterbacks who are equally effective and proceeds to rotate the quarterbacks at predetermined intervals, such as after each quarter or after each series. Southern California high school football team Corona Centennial operated this model during the 2014 football season, rotating quarterbacks after every series. In a game against the Chicago Bears in the seventh week of the 1971 season, Dallas Cowboys head coach Tom Landry alternated Roger Staubach and Craig Morton on each play, sending in the quarterbacks with the play call from the sideline. The fourth, still occasionally seen in major-college football, is the use of different quarterbacks in different game or down/distance situations. Generally this involves a running quarterback and a passing quarterback in an option or wishbone offense. In Canadian football, quarterback sneaks or other runs in short-yardage situations tend to be successful as a result of the distance between the offensive and defensive lines being one yard. Drew Tate, a quarterback for the Calgary Stampeders, was primarily used in short-yardage situations and led the CFL in rushing touchdowns during the 2014 season with ten scores as the backup to Bo Levi Mitchell. 
This strategy had all but disappeared from professional American football, but returned to some extent with the advent of the "wildcat" offense. There is a great debate within football circles as to the effectiveness of the so-called "two-quarterback system". Many coaches and media personnel remain skeptical of the model. Teams such as USC (Southern California), OSU (Oklahoma State), Northwestern, and smaller West Georgia have utilized the two-quarterback system; West Georgia, for example, uses the system due to the skill sets of its quarterbacks. Teams like these use the system because of the advantages it gives them against opposing defenses, which are unable to adjust to a single game plan. The quarterback position dates to the late 1800s, when American Ivy League schools playing a form of rugby union imported from the United Kingdom began to put their own spin on the game. Walter Camp, a prominent athlete and rugby player at Yale University, pushed through a change in rules at a meeting in 1880 that established a line of scrimmage and allowed for the football to be snapped to a quarterback. The change was meant to allow teams to strategize their play more thoroughly and retain possession more easily than was possible in the chaos of a scrummage in rugby. In Camp's formulation, the "quarter-back" was the person who received a ball snapped back with another player's foot; originally he was not allowed to run forward of the line of scrimmage. The quarterback in this context was often called the "blocking back", as his duties usually involved blocking after the initial handoff. The "fullback" was the furthest back behind the line of scrimmage. The "halfback" was halfway between the fullback and the line of scrimmage, and the "quarter-back" was halfway between the halfback and the line of scrimmage. Hence, he was called a "quarter-back" by Walter Camp. The requirement to stay behind the line of scrimmage was soon rescinded, but it was later re-imposed in six-man football. The exchange between the person snapping the ball (typically the center) and the quarterback was initially an awkward one because it involved a kick. At first, centers gave the ball a small boot, and then picked it up and handed it to the quarterback. By 1889, Yale center Bert Hanson was bouncing the ball on the ground to the quarterback between his legs. The following year, a rule change officially made snapping the ball using the hands between the legs legal. Several years later, Amos Alonzo Stagg at the University of Chicago invented the lift-up snap: the center passed the ball off the ground and between his legs to a standing quarterback. A similar set of changes was later adopted in Canadian football as part of the Burnside rules, a set of rules proposed by John Meldrum "Thrift" Burnside, the captain of the University of Toronto's football team. The change from a scrummage to a "scrimmage" made it easier for teams to decide what plays they would run before the snap. At first, the captains of college teams were put in charge of play-calling, indicating with shouted codes which players would run with the ball and how the men on the line were supposed to block. Yale later used visual signals, including adjustments of the captain's knit hat, to call plays. Centers could also signal plays based on the alignment of the ball before the snap. In 1888, however, Princeton University began to have its quarterback call plays using number signals.
That system caught on, and quarterbacks began to act as directors and organizers of offensive play. Early on, quarterbacks were used in a variety of formations. Harvard's team put seven men on the line of scrimmage, with three halfbacks who alternated at quarterback and a lone fullback. Princeton put six men on the line and had one designated quarterback, while Yale used seven linemen, one quarterback and two halfbacks who lined up on either side of the fullback. This was the origin of the T-formation, an offensive set that remained in use for many decades afterward and gained popularity in professional football starting in the 1930s. In 1906, the forward pass was legalized in American football; Canadian football did not adopt the forward pass until 1929. Despite the legalization of the forward pass, the most popular formations of the early 20th century focused mostly on the rushing game. The single-wing formation, a run-oriented offensive set, was invented by football coach Glenn "Pop" Warner around the year 1908. In the single-wing, the quarterback was positioned behind the line of scrimmage and was flanked by a tailback, fullback and wingback. He served largely as a blocking back; the tailback typically took the snap, either running forward with the ball or making a lateral pass to one of the other players in the backfield. The quarterback's job was usually to make blocks upfield to help the tailback or fullback gain yards. Passing plays were rare in the single-wing, an unbalanced power formation where four linemen lined up to one side of the center and two lined up to the other. The tailback was the focus of the offense, and was often a triple-threat man who would either pass, run or kick the ball. Offensive play-calling continued to focus on rushing up through the 1920s, when professional leagues began to challenge the popularity of college football. In the early days of the professional National Football League (NFL), which was founded in 1920, games were largely low-scoring affairs. Two-thirds of all games in the 1920s were shutouts, and quarterbacks/tailbacks usually passed only out of desperation. In addition to a reluctance to risk turnovers by passing, various rules existed that limited the effectiveness of the forward pass: passers were required to drop back five yards behind the line of scrimmage before they could attempt a pass, and incomplete passes in the end zone resulted in a change of possession and a touchback. Additionally, the rules required the ball to be snapped from the location on the field where it was ruled dead; if a play ended with a player going out of bounds, the center had to snap the ball from the sideline, an awkward place to start a play. Despite these constraints, player-coach Curly Lambeau of the Green Bay Packers, along with several other NFL figures of his era, was a consistent proponent of the forward pass. The Packers found success in the 1920s and 1930s using variations on the single-wing that emphasized the passing game. Packers quarterback Red Dunn and New York Giants and Brooklyn Dodgers quarterback Benny Friedman were the leading passers of their era, but passing remained a relative rarity among other teams; between 1920 and 1932, there were three times as many running plays as there were passing plays. Early NFL quarterbacks typically were responsible for calling the team's offensive plays with signals before the snap. 
The use of the huddle to call plays originated with Stagg in 1896, but only began to be used regularly in college games in 1921. In the NFL, players were typically assigned numbers, as were the gaps between offensive linemen. One player, usually the quarterback, would call signals indicating which player was to run the ball and which gap he would run toward. Play-calling or any other kind of coaching from the sidelines was not permitted during this period, leaving the quarterback to devise the offensive strategy (often, the quarterback doubled as head coach during this era). Substitutions were limited, and quarterbacks often played on both offense and defense. The period between 1933 and 1945 was marked by numerous changes for the quarterback position. The rule requiring a quarterback/tailback to be five yards behind the line of scrimmage to pass was abolished. Hash marks were added to the field, establishing a central zone within which the ball was placed before each snap and making offensive formations more flexible. Additionally, incomplete passes in the end zone were no longer counted as turnovers and touchbacks. The single-wing continued to be in wide use throughout this period, and a number of forward-passing tailbacks became stars, including Sammy Baugh of the Washington Redskins. In 1939, University of Chicago head football coach Clark Shaughnessy made modifications to the T-formation, a formation that put the quarterback behind the center and had him receive the snap directly. Shaughnessy altered the formation by having the linemen spaced further apart, and he began having players go in motion behind the line of scrimmage before the snap to confuse defenses. These changes were picked up by Chicago Bears coach George Halas, a close friend of Shaughnessy, and they quickly caught on in the professional ranks. Utilizing the T-formation and led by quarterback Sid Luckman, the Bears reached the NFL championship game in 1940 and beat the Redskins by a score of 73–0. The blowout led other teams across the league to adopt variations on the T-formation, including the Philadelphia Eagles, Cleveland Rams and Detroit Lions. Baugh and the Redskins converted to the T-formation and continued to succeed. Thanks in part to the emergence of the T-formation and rule changes that liberalized the passing game, passing from the quarterback position became more common in the 1940s. As teams switched to the T-formation, passing tailbacks such as Sammy Baugh lined up as quarterbacks instead. Over the course of the decade, passing yards began to exceed rushing yards for the first time in the history of football. The Cleveland Browns of the late 1940s in the All-America Football Conference (AAFC), a professional league created to challenge the NFL, were one of the teams of that era that relied most on passing. Quarterback Otto Graham helped the Browns win four AAFC championships in the late 1940s in head coach Paul Brown's T-formation offense, which emphasized precision timing passes. Cleveland, along with several other AAFC teams, was absorbed by the NFL when the AAFC dissolved in 1950. By the end of the 1940s, all NFL teams aside from the Pittsburgh Steelers used the T-formation as their primary offensive formation. As late as the 1960s, running plays occurred more frequently than passes. NFL quarterback Milt Plum later stated that during his career (1957–1969), passes typically occurred only on third downs and sometimes on first downs.
Quarterbacks only increased in importance as rules changed to favor passing and higher scoring and as football gained popularity on television after the 1958 NFL Championship Game, often referred to as "The Greatest Game Ever Played". Early modern offenses evolved around the quarterback as a passing threat, boosted by rule changes in 1978 and 1979 that made it a penalty for defensive backs to interfere with receivers downfield and allowed offensive linemen to pass-block using their arms and open hands; previous rules had limited them to blocking with their hands held to their chests. Average passing yards per game rose from 283.3 in 1977 to 408.7 in 1979. The NFL continues to be a pass-heavy league, in part due to further rule changes that prescribed harsher penalties for hitting the quarterback and for hitting defenseless receivers as they awaited passes. Passing in wide-open offenses has also been an emphasis at the high school and college levels, and professional coaches have devised schemes to fit the talents of new generations of quarterbacks. While quarterbacks and team captains usually called plays in football's early years, today coaches often decide which plays the offense will run. Some teams use an offensive coordinator, an assistant coach whose duties include offensive game-planning and often play-calling. In the NFL, coaches are allowed to communicate with quarterbacks and call plays using audio equipment built into the player's helmet. Quarterbacks are allowed to hear, but not talk to, their coaches until there are fifteen seconds left on the play clock. Once the quarterback receives the call, he may relay it to other players via signals or in a huddle. Dallas Cowboys head coach Tom Landry was an early advocate of taking play-calling out of the quarterback's hands. Although quarterbacks calling their own plays remained common in the NFL through the 1970s, fewer did so by the 1980s, and even Hall of Famers like Joe Montana did not call their own plays. Buffalo Bills QB Jim Kelly was one of the last to regularly call plays. Peyton Manning, formerly of the Indianapolis Colts and Denver Broncos, was the best modern example of a quarterback who called his own plays, primarily using an uptempo, no-huddle-based attack. Manning had almost complete control over the offense. Former Baltimore Ravens quarterback Joe Flacco retained a high degree of control over the offense as well, particularly when running a no-huddle scheme, as does Ben Roethlisberger of the Pittsburgh Steelers. During the 2013 season, 67 percent of NFL players were African American (blacks make up 13 percent of the US population), yet only 17 percent of quarterbacks were; 82 percent of quarterbacks were white, and just one percent were from other races. In 2017, the New York Giants benched longtime starter Eli Manning in favor of Geno Smith; until then, the Giants had been the last team never to have fielded a black starting quarterback during an NFL season. Since the inception of the game, only three quarterbacks with known black ancestry have led their team to a Super Bowl victory: Doug Williams in 1988, Russell Wilson, who is multiracial, in 2014, and Patrick Mahomes in 2020. Some black quarterbacks claim to have experienced bias for or against them due to their race. Despite his ability to both pass and run effectively, current Houston Texans signal-caller Deshaun Watson dislikes being called a dual-threat quarterback because he believes the term is often used to stereotype black quarterbacks.
https://en.wikipedia.org/wiki?curid=25277
Quadrilateral In Euclidean plane geometry, a quadrilateral is a polygon with four edges (or sides) and four vertices or corners. Sometimes, the term quadrangle is used, by analogy with triangle, and sometimes tetragon for consistency with pentagon (5-sided) and hexagon (6-sided), or 4-gon for consistency with "k"-gons for arbitrary values of "k". The word "quadrilateral" is derived from the Latin words "quadri", a variant of four, and "latus", meaning "side". Quadrilaterals are simple (not self-intersecting) or complex (self-intersecting), also called crossed. Simple quadrilaterals are either convex or concave. The interior angles of a simple (and planar) quadrilateral "ABCD" add up to 360 degrees of arc, that is, ∠A + ∠B + ∠C + ∠D = 360°. This is a special case of the "n"-gon interior angle sum formula, ("n" − 2) × 180°. All non-self-crossing quadrilaterals tile the plane by repeated rotation around the midpoints of their edges. Any quadrilateral that is not self-intersecting is a simple quadrilateral. In a convex quadrilateral, all interior angles are less than 180° and the two diagonals both lie inside the quadrilateral. In a concave quadrilateral, one interior angle is bigger than 180° and one of the two diagonals lies outside the quadrilateral. A self-intersecting quadrilateral is called variously a cross-quadrilateral, crossed quadrilateral, butterfly quadrilateral or bow-tie quadrilateral. In a crossed quadrilateral, the four "interior" angles on either side of the crossing (two acute and two reflex, all on the left or all on the right as the figure is traced out) add up to 720°. The two diagonals of a convex quadrilateral are the line segments that connect opposite vertices. The two bimedians of a convex quadrilateral are the line segments that connect the midpoints of opposite sides. They intersect at the "vertex centroid" of the quadrilateral (see Remarkable points below). The four maltitudes of a convex quadrilateral are the perpendiculars to a side through the midpoint of the opposite side. There are various general formulas for the area "K" of a convex quadrilateral "ABCD" with sides "a" = "AB", "b" = "BC", "c" = "CD" and "d" = "DA". The area can be expressed in trigonometric terms as K = pq sin θ / 2, where the lengths of the diagonals are "p" and "q" and the angle between them is "θ". In the case of an orthodiagonal quadrilateral (e.g. rhombus, square, and kite), this formula reduces to K = pq/2 since "θ" is 90°. The area can be also expressed in terms of bimedians as K = mn sin φ, where the lengths of the bimedians are "m" and "n" and the angle between them is "φ". Bretschneider's formula expresses the area in terms of the sides and two opposite angles: K = √((s − a)(s − b)(s − c)(s − d) − abcd cos²((A + C)/2)), where the sides in sequence are "a", "b", "c", "d", where "s" is the semiperimeter, and "A" and "C" are two (in fact, any two) opposite angles. This reduces to Brahmagupta's formula for the area of a cyclic quadrilateral, when "A" + "C" = 180°.
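As a quick numerical check of the two formulas just given, here is a minimal Python sketch (illustrative only; the function names are invented for this example):

```python
import math

def area_from_diagonals(p, q, theta):
    # K = p*q*sin(theta)/2, for diagonals p and q meeting at angle theta
    return 0.5 * p * q * math.sin(theta)

def area_bretschneider(a, b, c, d, A, C):
    # K = sqrt((s-a)(s-b)(s-c)(s-d) - a*b*c*d*cos((A+C)/2)**2)
    s = (a + b + c + d) / 2
    term = (s - a) * (s - b) * (s - c) * (s - d) \
           - a * b * c * d * math.cos((A + C) / 2) ** 2
    return math.sqrt(term)

# Unit square: sides 1, right angles, diagonals sqrt(2) crossing at 90 degrees
print(area_from_diagonals(math.sqrt(2), math.sqrt(2), math.pi / 2))  # 1.0
print(area_bretschneider(1, 1, 1, 1, math.pi / 2, math.pi / 2))      # 1.0
```

Both calls return 1.0, the area of the unit square, as expected.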
Another area formula in terms of the sides and angles, with angle "C" being between sides "b" and "c", and "A" being between sides "a" and "d", is K = (ad sin A + bc sin C)/2. In the case of a cyclic quadrilateral, the latter formula becomes K = (ad + bc) sin A / 2. In a parallelogram, where both pairs of opposite sides and angles are equal, this formula reduces to K = ab sin A. Alternatively, we can write the area in terms of the sides and the intersection angle "θ" of the diagonals, so long as this angle is not 90°: K = |tan θ| · |a² + c² − b² − d²| / 4. In the case of a parallelogram, the latter formula becomes K = |tan θ| · |a² − b²| / 2. Another area formula involves the sides "a", "b", "c", "d", the distance "x" between the midpoints of the diagonals, and the angle "φ" between the bimedians. The last trigonometric area formula, involving the sides "a", "b", "c", "d" and the angle "α" between "a" and "b", can also be used for the area of a concave quadrilateral (having the concave part opposite to angle "α") just by changing the first sign + to −. The following two formulas express the area in terms of the sides "a", "b", "c", "d", the semiperimeter "s", and the diagonals "p", "q": K = √((s − a)(s − b)(s − c)(s − d) − (ac + bd + pq)(ac + bd − pq)/4) and K = √(4p²q² − (a² + c² − b² − d²)²) / 4. The first reduces to Brahmagupta's formula in the cyclic quadrilateral case, since then "pq" = "ac" + "bd". The area can also be expressed in terms of the bimedians "m", "n" and the diagonals "p", "q". In fact, any three of the four values "m", "n", "p", and "q" suffice for determination of the area, since in any quadrilateral the four values are related by p² + q² = 2(m² + n²). The corresponding expressions can be given when the lengths of two bimedians and one diagonal are known, or when the lengths of two diagonals and one bimedian are known. The area of a quadrilateral "ABCD" can be calculated using vectors. Let vectors AC and BD form the diagonals from "A" to "C" and from "B" to "D". The area of the quadrilateral is then K = |AC × BD| / 2, which is half the magnitude of the cross product of vectors AC and BD. In two-dimensional Euclidean space, expressing vector AC as a free vector in Cartesian space equal to (x₁, y₁) and BD as (x₂, y₂), this can be rewritten as K = |x₁y₂ − x₂y₁| / 2. For the most basic quadrilaterals one can ask whether the diagonals bisect each other, whether they are perpendicular, and whether they are equal in length; the following notes apply to the most general cases, and exclude named subsets. "Note 1: The most general trapezoids and isosceles trapezoids do not have perpendicular diagonals, but there are infinite numbers of (non-similar) trapezoids and isosceles trapezoids that do have perpendicular diagonals and are not any other named quadrilateral." "Note 2: In a kite, one diagonal bisects the other. The most general kite has unequal diagonals, but there is an infinite number of (non-similar) kites in which the diagonals are equal in length (and the kites are not any other named quadrilateral)." The lengths of the diagonals in a convex quadrilateral "ABCD" can be calculated using the law of cosines on each triangle formed by one diagonal and two sides of the quadrilateral. Thus p = √(a² + b² − 2ab cos B) and q = √(a² + d² − 2ad cos A). Other, more symmetric formulas for the lengths of the diagonals also exist. In any convex quadrilateral "ABCD", the sum of the squares of the four sides is equal to the sum of the squares of the two diagonals plus four times the square of the line segment connecting the midpoints of the diagonals. Thus a² + b² + c² + d² = p² + q² + 4x², where "x" is the distance between the midpoints of the diagonals. This is sometimes known as Euler's quadrilateral theorem and is a generalization of the parallelogram law.
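The cross-product formula translates directly into code. The following sketch (a hypothetical helper, valid for a convex quadrilateral with vertices given in order) computes the area from the four vertices:

```python
def quad_area(A, B, C, D):
    # K = |AC x BD| / 2: half the cross product of the diagonal vectors
    ac = (C[0] - A[0], C[1] - A[1])
    bd = (D[0] - B[0], D[1] - B[1])
    return abs(ac[0] * bd[1] - ac[1] * bd[0]) / 2

print(quad_area((0, 0), (1, 0), (1, 1), (0, 1)))  # unit square -> 1.0
```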
The German mathematician Carl Anton Bretschneider derived in 1842 the following generalization of Ptolemy's theorem, regarding the product of the diagonals in a convex quadrilateral: p²q² = a²c² + b²d² − 2abcd cos (A + C). This relation can be considered to be a law of cosines for a quadrilateral. In a cyclic quadrilateral, where "A" + "C" = 180°, it reduces to "pq = ac + bd". Since cos ("A" + "C") ≥ −1, it also gives a proof of Ptolemy's inequality. If "X" and "Y" are the feet of the normals from "B" and "D" to the diagonal "AC" = "p" in a convex quadrilateral "ABCD" with sides "a" = "AB", "b" = "BC", "c" = "CD", "d" = "DA", then XY = |a² + c² − b² − d²| / (2p). In a convex quadrilateral "ABCD" where the diagonals intersect at "E", further relations hold between the sides and the segments "e" = "AE", "f" = "BE", "g" = "CE", and "h" = "DE". The shape and size of a convex quadrilateral are fully determined by the lengths of its sides in sequence and of one diagonal between two specified vertices. The two diagonals "p, q" and the four side lengths "a, b, c, d" of a quadrilateral are related by the Cayley–Menger determinant, as follows:

det | 0   a²  p²  d²  1 |
    | a²  0   b²  q²  1 |
    | p²  b²  0   c²  1 |
    | d²  q²  c²  0   1 |
    | 1   1   1   1   0 |  =  0.

The internal angle bisectors of a convex quadrilateral either form a cyclic quadrilateral (that is, the four intersection points of adjacent angle bisectors are concyclic) or they are concurrent. In the latter case the quadrilateral is a tangential quadrilateral. In quadrilateral "ABCD", if the angle bisectors of "A" and "C" meet on diagonal "BD", then the angle bisectors of "B" and "D" meet on diagonal "AC". The bimedians of a quadrilateral are the line segments connecting the midpoints of the opposite sides. The intersection of the bimedians is the centroid of the vertices of the quadrilateral. The midpoints of the sides of any quadrilateral (convex, concave or crossed) are the vertices of a parallelogram called the Varignon parallelogram. Among its properties: the two bimedians in a quadrilateral and the line segment joining the midpoints of the diagonals in that quadrilateral are concurrent and are all bisected by their point of intersection. In a convex quadrilateral with sides "a", "b", "c" and "d", the length of the bimedian that connects the midpoints of the sides "a" and "c" is m = √(−a² + b² − c² + d² + p² + q²) / 2, where "p" and "q" are the lengths of the diagonals. The length of the bimedian that connects the midpoints of the sides "b" and "d" is n = √(a² − b² + c² − d² + p² + q²) / 2. Hence m² + n² = (p² + q²)/2. This is also a corollary to the parallelogram law applied in the Varignon parallelogram. The lengths of the bimedians can also be expressed in terms of two opposite sides and the distance "x" between the midpoints of the diagonals. This is possible when using Euler's quadrilateral theorem in the above formulas. Whence m = √(2b² + 2d² − 4x²) / 2 and n = √(2a² + 2c² − 4x²) / 2. Note that the two opposite sides in these formulas are not the two that the bimedian connects. In a convex quadrilateral, there is the following dual connection between the bimedians and the diagonals: the two bimedians have equal length if and only if the two diagonals are perpendicular, and the two bimedians are perpendicular if and only if the two diagonals have equal length. The four angles of a simple quadrilateral "ABCD" satisfy several trigonometric identities; in the tangent-based identities, no angle is allowed to be a right angle, since tan 90° is not defined. If a convex quadrilateral has the consecutive sides "a", "b", "c", "d" and the diagonals "p", "q", then its area "K" satisfies a number of sharp inequalities. From Bretschneider's formula it directly follows that the area of a quadrilateral satisfies K ≤ √((s − a)(s − b)(s − c)(s − d)), with equality if and only if the quadrilateral is cyclic or degenerate such that one side is equal to the sum of the other three (it has collapsed into a line segment, so the area is zero).
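To make the diagonal formulas concrete, here is a short Python sketch (names invented for the example); angle_B is the interior angle at vertex B between sides a and b, and angle_A is the interior angle at vertex A between sides a and d:

```python
import math

def diagonal_lengths(a, b, d, angle_A, angle_B):
    p = math.sqrt(a * a + b * b - 2 * a * b * math.cos(angle_B))  # p = AC
    q = math.sqrt(a * a + d * d - 2 * a * d * math.cos(angle_A))  # q = BD
    return p, q

# Unit square: both diagonals should come out as sqrt(2)
print(diagonal_lengths(1, 1, 1, math.pi / 2, math.pi / 2))
```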
The area of any quadrilateral also satisfies the inequality K ≤ L²/16, where "L" denotes the perimeter, with equality only in the case of a square. The area of a convex quadrilateral also satisfies K ≤ pq/2 for diagonal lengths "p" and "q", with equality if and only if the diagonals are perpendicular. Let "a", "b", "c", "d" be the lengths of the sides of a convex quadrilateral "ABCD" with the area "K" and diagonals "AC = p", "BD = q"; further inequalities bound "K" in terms of these quantities. A corollary to Euler's quadrilateral theorem is the inequality a² + b² + c² + d² ≥ p² + q², where equality holds if and only if the quadrilateral is a parallelogram. Euler also generalized Ptolemy's theorem, which is an equality in a cyclic quadrilateral, into an inequality for a convex quadrilateral. It states that pq ≤ ac + bd, where there is equality if and only if the quadrilateral is cyclic. This is often called Ptolemy's inequality. In any convex quadrilateral the bimedians "m, n" and the diagonals "p, q" are related by the inequality pq ≤ m² + n², with equality holding if and only if the diagonals are equal. This follows directly from the quadrilateral identity p² + q² = 2(m² + n²). The sides "a", "b", "c", and "d" of any quadrilateral satisfy a² + b² + c² > d²/3 and a⁴ + b⁴ + c⁴ > d⁴/27. Among all quadrilaterals with a given perimeter, the one with the largest area is the square. This is called the "isoperimetric theorem for quadrilaterals". It is a direct consequence of the area inequality K ≤ L²/16, where "K" is the area of a convex quadrilateral with perimeter "L". Equality holds if and only if the quadrilateral is a square. The dual theorem states that of all quadrilaterals with a given area, the square has the shortest perimeter. The quadrilateral with given side lengths that has the maximum area is the cyclic quadrilateral. Of all convex quadrilaterals with given diagonals, the orthodiagonal quadrilateral has the largest area. This is a direct consequence of the fact that the area of a convex quadrilateral satisfies K = pq sin θ / 2 ≤ pq/2, where "θ" is the angle between the diagonals "p" and "q". Equality holds if and only if "θ" = 90°. If "P" is an interior point in a convex quadrilateral "ABCD", then AP + BP + CP + DP ≥ AC + BD. From this inequality it follows that the point inside a quadrilateral that minimizes the sum of distances to the vertices is the intersection of the diagonals. Hence that point is the Fermat point of a convex quadrilateral. The centre of a quadrilateral can be defined in several different ways. The "vertex centroid" comes from considering the quadrilateral as being empty but having equal masses at its vertices. The "side centroid" comes from considering the sides to have constant mass per unit length. The usual centre, called just centroid (centre of area), comes from considering the surface of the quadrilateral as having constant density. These three points are in general not all the same point. The "vertex centroid" is the intersection of the two bimedians. As with any polygon, the "x" and "y" coordinates of the vertex centroid are the arithmetic means of the "x" and "y" coordinates of the vertices. The "area centroid" of quadrilateral "ABCD" can be constructed in the following way. Let "Ga", "Gb", "Gc", "Gd" be the centroids of triangles "BCD", "ACD", "ABD", "ABC" respectively. Then the "area centroid" is the intersection of the lines "GaGc" and "GbGd". In a general convex quadrilateral "ABCD", there are no natural analogies to the circumcenter and orthocenter of a triangle. But two such points can be constructed in the following way.
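To illustrate that the vertex centroid and the area centroid are in general different points, here is a small numpy sketch (all names invented; the area centroid is computed with the standard shoelace-based polygon formula rather than the GaGc/GbGd construction described above):

```python
import numpy as np

def vertex_centroid(P):
    # Vertex centroid: plain average of the four vertices
    return np.mean(P, axis=0)

def area_centroid(P):
    # Area centroid of a simple polygon via the shoelace-based formula
    x, y = P[:, 0], P[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    A = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * A)
    cy = ((y + yn) * cross).sum() / (6.0 * A)
    return np.array([cx, cy])

quad = np.array([(0.0, 0.0), (4.0, 0.0), (4.0, 1.0), (0.0, 3.0)])
print(vertex_centroid(quad))  # (2.0, 1.0)
print(area_centroid(quad))    # roughly (1.67, 1.08): a different point
```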
Let "Oa", "Ob", "Oc", "Od" be the circumcenters of triangles "BCD", "ACD", "ABD", "ABC" respectively; and denote by "Ha", "Hb", "Hc", "Hd" the orthocenters in the same triangles. Then the intersection of the lines "OaOc" and "ObOd" is called the quasicircumcenter, and the intersection of the lines "HaHc" and "HbHd" is called the "quasiorthocenter" of the convex quadrilateral. These points can be used to define an Euler line of a quadrilateral. In a convex quadrilateral, the quasiorthocenter "H", the "area centroid" "G", and the quasicircumcenter "O" are collinear in this order, and "HG" = 2"GO". There can also be defined a "quasinine-point center" "E" as the intersection of the lines "EaEc" and "EbEd", where "Ea", "Eb", "Ec", "Ed" are the nine-point centers of triangles "BCD", "ACD", "ABD", "ABC" respectively. Then "E" is the midpoint of "OH". Another remarkable line in a convex non-parallelogram quadrilateral is the Newton line, which connects the midpoints of the diagonals, the segment connecting these points being bisected by the vertex centroid. One more interesting line (in some sense dual to the Newton's one) is the line connecting the point of intersection of diagonals with the vertex centroid. The line is remarkable by the fact that it contains the (area) centroid. The vertex centroid divides the segment connecting the intersection of diagonals and the (area) centroid in the ratio 3:1. For any quadrilateral "ABCD" with points "P" and "Q" the intersections of "AD" and "BC" and "AB" and "CD", respectively, the circles "(PAB), (PCD), (QAD)," and "(QBC)" pass through a common point "M", called a Miquel point. For a convex quadrilateral "ABCD" in which "E" is the point of intersection of the diagonals and "F" is the point of intersection of the extensions of sides "BC" and "AD", let ω be a circle through "E" and "F" which meets "CB" internally at "M" and "DA" internally at "N". Let "CA" meet ω again at "L" and let "DB" meet ω again at "K". Then there holds: the straight lines "NK" and "ML" intersect at point "P" that is located on the side "AB"; the straight lines "NL" and "KM" intersect at point "Q" that is located on the side "CD". Points "P" and "Q" are called ”Pascal points” formed by circle ω on sides "AB" and "CD". A hierarchical taxonomy of quadrilaterals is illustrated by the figure to the right. Lower classes are special cases of higher classes they are connected to. Note that "trapezoid" here is referring to the North American definition (the British equivalent is a trapezium). Inclusive definitions are used throughout. A non-planar quadrilateral is called a skew quadrilateral. Formulas to compute its dihedral angles from the edge lengths and the angle between two adjacent edges were derived for work on the properties of molecules such as cyclobutane that contain a "puckered" ring of four atoms. Historically the term gauche quadrilateral was also used to mean a skew quadrilateral. A skew quadrilateral together with its diagonals form a (possibly non-regular) tetrahedron, and conversely every skew quadrilateral comes from a tetrahedron where a pair of opposite edges is removed.
https://en.wikipedia.org/wiki?curid=25278
Quantum teleportation Quantum teleportation is a process in which quantum information (e.g. the exact state of an atom or photon) can be transmitted (exactly, in principle) from one location to another, with the help of classical communication and previously shared quantum entanglement between the sending and receiving location. Because it depends on classical communication, which can proceed no faster than the speed of light, it cannot be used for faster-than-light transport or communication of classical bits. While it has proven possible to teleport one or more qubits of information between two (entangled) quanta, this has not yet been achieved between anything larger than molecules. Although the name is inspired by the teleportation commonly used in fiction, quantum teleportation is limited to the transfer of information rather than matter itself. Quantum teleportation is not a form of transportation, but of communication: it provides a way of immediately transferring a qubit from one location to another without having to move a physical particle along with it. The term was coined by physicist Charles Bennett. The seminal paper first expounding the idea of quantum teleportation was published by C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters in 1993. Quantum teleportation was first realized in single photons, later being demonstrated in various material systems such as atoms, ions, electrons and superconducting circuits. The latest reported record distance for quantum teleportation was set by the group of Jian-Wei Pan using the Micius satellite for space-based quantum teleportation. In matters relating to quantum or classical information theory, it is convenient to work with the simplest possible unit of information: the two-state system. In classical information, this is a bit, commonly represented using one or zero (or true or false). The quantum analog of a bit is a quantum bit, or qubit. Qubits encode a type of information, called quantum information, which differs sharply from "classical" information. For example, quantum information can be neither copied (the no-cloning theorem) nor destroyed (the no-deleting theorem). Quantum teleportation provides a mechanism of moving a qubit from one location to another, without having to physically transport the underlying particle to which that qubit is normally attached. Much like the invention of the telegraph allowed classical bits to be transported at high speed across continents, quantum teleportation holds the promise that one day, qubits could be moved likewise. The quantum states of single photons, photon modes, single atoms, atomic ensembles, defect centers in solids, single electrons, and superconducting circuits have all been employed as information bearers. The movement of qubits does not require the movement of "things" any more than communication over the internet does: no quantum object needs to be transported, but it is necessary to communicate two classical bits per teleported qubit from the sender to the receiver. The actual teleportation protocol requires that an entangled quantum state or Bell state be created, and its two parts shared between two locations (the source and destination, or Alice and Bob). In essence, a certain kind of quantum channel between two sites must be established first, before a qubit can be moved. Teleportation also requires a classical information channel to be established, as two classical bits must be transmitted to accompany each qubit.
The reason for this is that the results of the measurements must be communicated between the source and destination so as to reconstruct the qubit, or else the state of the destination qubit would not be known to the source, and any attempt to reconstruct the state would be random; this must be done over ordinary classical communication channels. The need for such classical channels may, at first, seem disappointing, but it is also why teleportation is limited to the speed of transfer of information, i.e., the speed of light. The main advantage is that Bell states can be shared using photons from lasers, and so teleportation is achievable through open space, i.e., without the need to send information through cables or optical fibers. The quantum states of single atoms have been teleported. Quantum states can be encoded in various degrees of freedom of atoms. For example, qubits can be encoded in the degrees of freedom of electrons surrounding the atomic nucleus or in the degrees of freedom of the nucleus itself. It is inaccurate to say "an atom has been teleported": it is the quantum state of an atom that is teleported. Thus, performing this kind of teleportation requires a stock of atoms at the receiving site, available for having qubits imprinted on them. The importance of teleporting the nuclear state is unclear: the nuclear state does affect the atom, e.g. in hyperfine splitting, but whether such a state would need to be teleported in some futuristic "practical" application is debatable. An important aspect of quantum information theory is entanglement, which imposes statistical correlations between otherwise distinct physical systems by creating or placing two or more separate particles into a single, shared quantum state. These correlations hold even when measurements are chosen and performed independently, out of causal contact from one another, as verified in Bell test experiments. Thus, an observation resulting from a measurement choice made at one point in spacetime seems to instantaneously affect outcomes in another region, even though light has not yet had time to travel the distance, a conclusion seemingly at odds with special relativity (the EPR paradox). However, such correlations can never be used to transmit any information faster than the speed of light, a statement encapsulated in the no-communication theorem. Thus, teleportation, as a whole, can never be superluminal, as a qubit cannot be reconstructed until the accompanying classical information arrives. Understanding quantum teleportation requires a good grounding in finite-dimensional linear algebra, Hilbert spaces and projection matrices. A qubit is described using a two-dimensional complex-valued vector space (a Hilbert space), which is the primary setting for the formal manipulations given below. A working knowledge of quantum mechanics is not absolutely required to understand the mathematics of quantum teleportation, although without such acquaintance, the deeper meaning of the equations may remain quite mysterious. The prerequisites for quantum teleportation are a qubit that is to be teleported, a conventional communication channel capable of transmitting two classical bits (i.e., one of four states), and means of generating an entangled EPR pair of qubits, transporting each of these to two different locations, A and B, performing a Bell measurement on one of the EPR pair qubits, and manipulating the quantum state of the other qubit of the pair.
The protocol is then as follows: Alice performs a joint Bell measurement on the qubit to be teleported and her half of the entangled pair, sends the two-bit outcome to Bob over the classical channel, and Bob applies the corresponding correction (identity, X, Z, or both) to his half of the pair, which leaves it in the original state. It is worth noting that the above protocol assumes that the qubits are individually addressable, meaning that the qubits are distinguishable and physically labeled. However, there can be situations where two identical qubits are indistinguishable due to the spatial overlap of their wave functions. Under this condition, the qubits cannot be individually controlled or measured. Nevertheless, a teleportation protocol analogous to that described above can still be (conditionally) implemented by exploiting two independently-prepared qubits, with no need of an initial EPR pair. This can be done by addressing the internal degrees of freedom of the qubits (e.g., spins or polarizations) with spatially localized measurements performed in separated regions A and B shared by the wave functions of the two indistinguishable qubits. Work in 1998 verified the initial predictions, and the distance of teleportation was increased in August 2004 to 600 meters, using optical fiber. Subsequently, the record distance for quantum teleportation has been gradually increased to , then to , and is now , set in open air experiments in the Canary Islands, done between the two astronomical observatories of the Instituto de Astrofísica de Canarias. There has been a recent record set () using superconducting nanowire detectors that reached the distance of over optical fiber. For material systems, the record distance is . A variant of teleportation called "open-destination" teleportation, with receivers located at multiple locations, was demonstrated in 2004 using five-photon entanglement. Teleportation of a composite state of two single qubits has also been realized. In April 2011, experimenters reported that they had demonstrated teleportation of wave packets of light up to a bandwidth of 10 MHz while preserving strongly nonclassical superposition states. In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported. On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods. On 26 February 2015, scientists at the University of Science and Technology of China in Hefei, led by Chao-yang Lu and Jian-Wei Pan, carried out the first experiment teleporting multiple degrees of freedom of a quantum particle. They managed to teleport the quantum information from one ensemble of rubidium atoms to another ensemble of rubidium atoms over a distance of using entangled photons. In 2016, researchers demonstrated quantum teleportation with two independent sources separated by in the Hefei optical fiber network. In September 2016, researchers at the University of Calgary demonstrated quantum teleportation over the Calgary metropolitan fiber network over a distance of . Researchers have also successfully used quantum teleportation to transmit information between clouds of gas atoms, notable because the clouds of gas are macroscopic atomic ensembles. In 2018, physicists at Yale demonstrated a deterministic teleported CNOT operation between logically encoded qubits. There are a variety of ways in which the teleportation protocol can be written mathematically. Some are very compact but abstract, and some are verbose but straightforward and concrete. The presentation below is of the latter form: verbose, but with the benefit of showing each quantum state simply and directly. Later sections review more compact notations.
The teleportation protocol begins with a quantum state or qubit |ψ⟩, in Alice's possession, that she wants to convey to Bob. This qubit can be written generally, in bra–ket notation, as |ψ⟩_C = α|0⟩_C + β|1⟩_C, with complex amplitudes α and β satisfying |α|² + |β|² = 1. The subscript "C" above is used only to distinguish this state from "A" and "B", below. Next, the protocol requires that Alice and Bob share a maximally entangled state. This state is fixed in advance, by mutual agreement between Alice and Bob, and can be any one of the four Bell states shown. It does not matter which one. In the following, assume that Alice and Bob share the state |Φ⁺⟩_AB = (|0⟩_A ⊗ |0⟩_B + |1⟩_A ⊗ |1⟩_B)/√2. Alice obtains one of the particles in the pair, with the other going to Bob. (This is implemented by preparing the particles together and shooting them to Alice and Bob from a common source.) The subscripts "A" and "B" in the entangled state refer to Alice's or Bob's particle. At this point, Alice has two particles ("C", the one she wants to teleport, and "A", one of the entangled pair), and Bob has one particle, "B". In the total system, the state of these three particles is given by |ψ⟩_C ⊗ |Φ⁺⟩_AB. Alice will then make a local measurement in the Bell basis (i.e. the four Bell states) on the two particles in her possession. To make the result of her measurement clear, it is best to write the state of Alice's two qubits as superpositions of the Bell basis. This is done by using the following general identities, which are easily verified: |00⟩ = (|Φ⁺⟩ + |Φ⁻⟩)/√2 and |11⟩ = (|Φ⁺⟩ − |Φ⁻⟩)/√2, and |01⟩ = (|Ψ⁺⟩ + |Ψ⁻⟩)/√2 and |10⟩ = (|Ψ⁺⟩ − |Ψ⁻⟩)/√2. One applies these identities with "A" and "C" subscripts. The total three particle state, of "A", "B" and "C" together, thus becomes the following four-term superposition: (1/2)[ |Φ⁺⟩_CA ⊗ (α|0⟩ + β|1⟩)_B + |Φ⁻⟩_CA ⊗ (α|0⟩ − β|1⟩)_B + |Ψ⁺⟩_CA ⊗ (α|1⟩ + β|0⟩)_B + |Ψ⁻⟩_CA ⊗ (α|1⟩ − β|0⟩)_B ]. The above is just a change of basis on Alice's part of the system. No operation has been performed and the three particles are still in the same total state. The actual teleportation occurs when Alice measures her two qubits C, A in the Bell basis. Experimentally, this measurement may be achieved via a series of laser pulses directed at the two particles. Given the above expression, evidently the result of Alice's (local) measurement is that the three-particle state would collapse to one of the four terms of the superposition above, with equal probability 1/4 of obtaining each. Alice's two particles are now entangled to each other, in one of the four Bell states, and the entanglement originally shared between Alice's and Bob's particles is now broken. Bob's particle takes on one of the four superposition states shown above. Note how Bob's qubit is now in a state that resembles the state to be teleported. The four possible states for Bob's qubit are unitary images of the state to be teleported. The result of Alice's Bell measurement tells her which of the above four states the system is in. She can now send her result to Bob through a classical channel. Two classical bits can communicate which of the four results she obtained. After Bob receives the message from Alice, he will know which of the four states his particle is in. Using this information, he performs a unitary operation on his particle to transform it to the desired state |ψ⟩: if the result was |Φ⁺⟩ he does nothing; if |Φ⁻⟩ he applies the Z gate; if |Ψ⁺⟩ he applies the X gate; and if |Ψ⁻⟩ he applies first X and then Z to his qubit to recover the state. Teleportation is thus achieved. The above-mentioned three gates correspond to rotations of π radians (180°) about appropriate axes (X, Y and Z) in the Bloch sphere picture of a qubit. Some remarks: in the usual circuit-diagram numbering, Alice's state in qubit 2 is transferred to Bob's qubit 0 using a previously entangled pair of qubits between Alice and Bob, qubits 1 and 0. There are a variety of different notations in use that describe the teleportation protocol.
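The derivation above can be checked mechanically. The following numpy sketch (an illustrative simulation, not from the original article) prepares |ψ⟩_C ⊗ |Φ⁺⟩_AB, applies Alice's basis change as a CNOT followed by a Hadamard, and verifies that each of the four measurement outcomes, followed by Bob's correction Z^m1 X^m2, returns the original state:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

alpha, beta = 0.6, 0.8j                      # any normalized amplitudes
psi = np.array([alpha, beta])                # state to teleport (qubit C)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+> on qubits A, B
state = np.kron(psi, bell)                   # qubit ordering: C, A, B

# Alice's Bell-basis rotation: CNOT (control C, target A), then H on C
state = np.kron(CNOT, I2) @ state
state = np.kron(np.kron(H, I2), I2) @ state

for m1 in (0, 1):                            # outcome of measuring C
    for m2 in (0, 1):                        # outcome of measuring A
        bob = state.reshape(2, 2, 2)[m1, m2, :]   # Bob's unnormalized state
        prob = np.vdot(bob, bob).real             # probability, always 1/4
        bob = bob / np.sqrt(prob)
        fixed = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2) @ bob
        print(m1, m2, prob, np.allclose(fixed, psi))  # True in all four cases
```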
One common one is by using the notation of quantum gates. In the above derivation, the unitary transformation that is the change of basis (from the standard product basis into the Bell basis) can be written using quantum gates. Direct calculation shows that this gate is given by G = (H ⊗ I) · CNOT, where "H" is the one-qubit Walsh–Hadamard gate and CNOT is the controlled NOT gate; G maps each Bell state to a distinct product basis state, so a Bell measurement can be implemented by applying G and then measuring in the product basis. Teleportation can be applied not just to pure states, but also mixed states, that can be regarded as the state of a single subsystem of an entangled pair. The so-called entanglement swapping is a simple and illustrative example. If Alice has a particle which is entangled with a particle owned by Bob, and Bob teleports it to Carol, then afterwards, Alice's particle is entangled with Carol's. A more symmetric way to describe the situation is the following: Alice has one particle, Bob two, and Carol one. Alice's particle and Bob's first particle are entangled, and so are Bob's second particle and Carol's particle. Now, if Bob does a projective measurement on his two particles in the Bell state basis and communicates the results to Carol, as per the teleportation scheme described above, the state of Bob's first particle can be teleported to Carol's. Although Alice and Carol never interacted with each other, their particles are now entangled. A detailed diagrammatic derivation of entanglement swapping has been given by Bob Coecke, presented in terms of categorical quantum mechanics. An important application of entanglement swapping is distributing Bell states for use in entanglement distributed quantum networks. A technical description of the entanglement swapping protocol for pure Bell states proceeds along the same lines as the teleportation protocol above. The basic teleportation protocol for a qubit described above has been generalized in several directions, in particular regarding the dimension of the system teleported and the number of parties involved (either as sender, controller, or receiver). A generalization to "d"-level systems (so-called qudits) is straightforward and was already discussed in the original paper by Bennett "et al.": the maximally entangled state of two qubits has to be replaced by a maximally entangled state of two qudits and the Bell measurement by a measurement defined by a maximally entangled orthonormal basis. All possible such generalizations were discussed by Werner in 2001. The generalization to infinite-dimensional so-called continuous-variable systems was also proposed, and led to the first teleportation experiment that worked unconditionally. The use of multipartite entangled states instead of a bipartite maximally entangled state allows for several new features: the sender can teleport information to several receivers, either sending the same state to all of them (which reduces the amount of entanglement needed for the process), teleporting multipartite states, or sending a single state in such a way that the receiving parties need to cooperate to extract the information. A different way of viewing the latter setting is that some of the parties can control whether the others can teleport. In general, mixed states ρ may be transported, and a linear transformation ω applied during teleportation, thus allowing data processing of quantum information. This is one of the foundational building blocks of quantum information processing. This is demonstrated below. A general teleportation scheme can be described as follows. Three quantum systems are involved. System 1 is the (unknown) state "ρ" to be teleported by Alice.
Systems 2 and 3 are in a maximally entangled state "ω" that are distributed to Alice and Bob, respectively. The total system is then in the state ρ ⊗ ω. A successful teleportation process is a LOCC quantum channel Φ that satisfies (Tr₁₂ ∘ Φ)(ρ ⊗ ω) = ρ, where Tr₁₂ is the partial trace operation with respect to systems 1 and 2, and ∘ denotes the composition of maps. This describes the channel in the Schrödinger picture. Taking adjoint maps in the Heisenberg picture, the success condition becomes Tr[Φ(ρ ⊗ ω)(I ⊗ I ⊗ O)] = Tr[ρO] for all observables "O" on Bob's system; the tensor factor in I ⊗ I ⊗ O ranges over systems 1, 2 and 3, while that of ρO is system 3 alone. The proposed channel Φ can be described more explicitly. To begin teleportation, Alice performs a local measurement on the two subsystems (1 and 2) in her possession. Assume the local measurement has "effects" F_i. If the measurement registers the "i"-th outcome, the overall state collapses to (F_i ⊗ I)(ρ ⊗ ω)(F_i ⊗ I)*, up to normalization; here the tensor factor splits the space into Alice's systems 1 and 2 and Bob's system 3. Bob then applies a corresponding local operation Ψ_i on system 3. On the combined system, this is described by Id ⊗ Ψ_i, where "Id" is the identity map on the composite system formed by systems 1 and 2. Therefore, the channel Φ is defined by Φ(σ) = Σ_i (Id ⊗ Ψ_i)[(F_i ⊗ I) σ (F_i ⊗ I)*]. Notice Φ satisfies the definition of LOCC. As stated above, the teleportation is said to be successful if, for all observables "O" on Bob's system, the equality Tr[Φ(ρ ⊗ ω)(I ⊗ I ⊗ O)] = Tr[ρO] holds. Expanding the left hand side using the definition of Φ, writing Ψ_i* for the adjoint of Ψ_i in the Heisenberg picture, and assuming all objects are finite dimensional, this becomes Σ_i Tr[(ρ ⊗ ω)(F_i*F_i ⊗ Ψ_i*(O))] = Tr[ρO], which is the success criterion for teleportation. A local explanation of quantum teleportation is put forward by David Deutsch and Patrick Hayden, with respect to the many-worlds interpretation of quantum mechanics. Their paper asserts that the two bits that Alice sends Bob contain "locally inaccessible information" resulting in the teleportation of the quantum state. "The ability of quantum information to flow through a classical channel "[…]", surviving decoherence, is "[…]" the basis of quantum teleportation."
https://en.wikipedia.org/wiki?curid=25280
Qubit In quantum computing, a qubit () or quantum bit (sometimes qbit) is the basic unit of quantum information—the quantum version of the classical binary bit physically realized with a two-state device. A qubit is a two-state (or two-level) quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics. Examples include: the spin of the electron in which the two levels can be taken as spin up and spin down; or the polarization of a single photon in which the two states can be taken to be the vertical polarization and the horizontal polarization. In a classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a coherent superposition of both states simultaneously, a property which is fundamental to quantum mechanics and quantum computing. The coining of the term "qubit" is attributed to Benjamin Schumacher. In the acknowledgments of his 1995 paper, Schumacher states that the term "qubit" was created in jest during a conversation with William Wootters. The paper describes a way of compressing states emitted by a quantum source of information so that they require fewer physical resources to store. This procedure is now known as Schumacher compression. A binary digit, characterized as 0 and 1, is used to represent information in classical computers. A binary digit can represent up to one bit of Shannon information, where a bit is the basic unit of information. However, in this article, the word bit is synonymous with binary digit. In classical computer technologies, a "processed" bit is implemented by one of two levels of low DC voltage, and whilst switching from one of these two levels to the other, a so-called forbidden zone must be passed as fast as possible, as electrical voltage cannot change from one level to another instantaneously. There are two possible outcomes for the measurement of a qubit—usually taken to have the value "0" and "1", like a bit or binary digit. However, whereas the state of a bit can only be either 0 or 1, the general state of a qubit according to quantum mechanics can be a coherent superposition of both. Moreover, whereas a measurement of a classical bit would not disturb its state, a measurement of a qubit would destroy its coherence and irrevocably disturb the superposition state. It is possible to fully encode one bit in one qubit. However, a qubit can hold more information, e.g. up to two bits using superdense coding. For a system of "n" components, a complete description of its state in classical physics requires only "n" bits, whereas in quantum physics it requires 2^"n" − 1 complex numbers. In quantum mechanics, the general quantum state of a qubit can be represented by a linear superposition of its two orthonormal basis states (or basis vectors): |ψ⟩ = α|0⟩ + β|1⟩, where the complex amplitudes α and β satisfy |α|² + |β|² = 1. These basis vectors are usually denoted as |0⟩ and |1⟩. They are written in the conventional Dirac—or "bra–ket"—notation; |0⟩ and |1⟩ are pronounced "ket 0" and "ket 1", respectively. These two orthonormal basis states, together called the computational basis, span the two-dimensional Hilbert space of the qubit. In a paper entitled "Solid-state quantum memory using the 31P nuclear spin", published in the October 23, 2008, issue of the journal "Nature", a team of scientists from the U.K. and U.S. reported the first relatively long (1.75 seconds) and coherent transfer of a superposition state in an electron spin "processing" qubit to a nuclear spin "memory" qubit.
This event can be considered the first relatively consistent quantum data storage, a vital step towards the development of quantum computing. Recently, a modification of similar systems (using charged rather than neutral donors) has dramatically extended this time, to 3 hours at very low temperatures and 39 minutes at room temperature. Room temperature preparation of a qubit based on electron spins instead of nuclear spin was also demonstrated by a team of scientists from Switzerland and Australia.
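As a concrete illustration of superposition and measurement (a minimal Python sketch, not from the article; the particular state chosen is arbitrary), the Born rule gives the outcome probabilities |α|² and |β|²:

```python
import numpy as np

rng = np.random.default_rng(0)

# |psi> = alpha|0> + beta|1> as a normalized complex 2-vector
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])
assert np.isclose(np.vdot(psi, psi).real, 1.0)   # |alpha|^2 + |beta|^2 = 1

# Born rule: a computational-basis measurement yields 0 or 1
probs = np.abs(psi) ** 2
samples = rng.choice(2, size=10_000, p=probs)
print(probs, np.bincount(samples) / len(samples))  # both near [0.5, 0.5]
```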
https://en.wikipedia.org/wiki?curid=25284
Quechuan languages Quechua (, ; ), usually called ("people's language") in Quechuan languages, is an indigenous language family spoken by the Quechua peoples, primarily living in the Peruvian Andes. Derived from a common ancestral language, it is the most widely spoken pre-Columbian language family of the Americas, with a total of probably some 8–10 million speakers. Approximately 25% (7.7 million) of Peruvians speak a Quechuan language. It is perhaps most widely known for being the main language family of the Inca Empire. The Spaniards encouraged its use until the Peruvian struggle for independence of the 1780s. As a result, Quechua variants are still widely spoken today, being the co-official language of many regions and the second most spoken language family in Peru. Quechua had already expanded across wide ranges of the central Andes long before the expansion of the Inca Empire. The Inca were one among many peoples in present-day Peru who already spoke a form of Quechua. In the Cusco region, Quechua was influenced by neighboring languages such as Aymara, which caused it to develop into a distinct form. In similar ways, diverse dialects developed in different areas, borrowing from local languages, when the Inca Empire ruled and imposed Quechua as the official language. After the Spanish conquest of Peru in the 16th century, Quechua continued to be used widely by the indigenous peoples as the "common language". It was officially recognized by the Spanish administration, and many Spaniards learned it in order to communicate with local peoples. Clergy of the Catholic Church adopted Quechua to use as the language of evangelization. The oldest written records of the language are by the missionary Domingo de Santo Tomás, who arrived in Peru in 1538 and learned the language from 1540. He published his "Grammatica o arte de la lengua general de los indios de los reynos del Perú" (Grammar or Art of the General Language of the Indians of the Royalty of Peru) in 1560. Given its use by the Catholic missionaries, the range of Quechua continued to expand in some areas. In the late 18th century, colonial officials ended administrative and religious use of Quechua. They banned it from public use in Peru after the Túpac Amaru II rebellion of indigenous peoples. The Crown banned even "loyal" pro-Catholic texts in Quechua, such as Garcilaso de la Vega's "Comentarios Reales". Despite a brief revival of the language immediately after the Latin American nations achieved independence in the 19th century, the prestige of Quechua had decreased sharply. Gradually its use declined so that it was spoken mostly by indigenous people in the more isolated and conservative rural areas. Nevertheless, in the 21st century, Quechua language speakers number 8 to 10 million people across South America, the most speakers of any indigenous language. As a result of Inca expansion into Central Chile, there were bilingual Quechua-Mapudungu Mapuche in Central Chile at the time of the Spanish arrival. It has been argued that Mapuche, Quechua, and Spanish coexisted in Central Chile, with significant bilingualism, during the 17th century. Quechua is the indigenous language that has influenced Chilean Spanish the most. In 2016, the first thesis defense in Quechua in Europe was given by the Peruvian Carmen Escalante Gutiérrez at Pablo de Olavide University. The same year, Pablo Landeo wrote the first novel in Quechua without a Spanish translation.
A Peruvian student, Roxana Quispe Collantes of the University of San Marcos, completed and defended the first thesis in the language group in 2019; it concerned the works of a Quechua poet, and it was also the first thesis in a non-Spanish native language done at that university. Currently, there are different initiatives that promote Quechua in the Andes and across the world: many universities offer Quechua classes, community-based organizations such as Elva Ambía's Quechua Collective of New York promote the language, and governments are training interpreters in Quechua so that they can serve in healthcare, justice, and other public facilities. In 1975, Peru became the first country to recognize Quechua as one of its official languages. Ecuador conferred official status on the language in its 2006 constitution, and in 2009, Bolivia adopted a new constitution that recognized Quechua and several other indigenous languages as official languages of the country. The major obstacle to the usage and teaching of Quechuan languages is the lack of written materials in the languages, such as books, newspapers, software, and magazines. The Bible has been translated into Quechua and is distributed by certain missionary groups. Quechua, along with Aymara and minor indigenous languages, remains essentially a spoken language. In recent years, Quechua has been introduced in intercultural bilingual education (IBE) in Peru, Bolivia, and Ecuador. Even in these areas, the governments are reaching only a part of the Quechua-speaking populations. Some indigenous people in each of the countries are having their children study in Spanish for the purposes of social advancement. Radio Nacional del Perú broadcasts news and agrarian programs in Quechua for periods in the mornings. Quechua and Spanish are now heavily intermixed in much of the Andean region, with many hundreds of Spanish loanwords in Quechua. Similarly, Quechua phrases and words are commonly used by Spanish speakers. In southern rural Bolivia, for instance, many Quechua words such as "wawa" (infant), "misi" (cat), and "waska" (strap or thrashing) are as commonly used as their Spanish counterparts, even in entirely Spanish-speaking areas. Quechua has also had a profound influence on other native languages of the Americas, such as Mapuche. The number of speakers given varies widely according to the sources. The total in "Ethnologue" 16 is 10 million, mostly based on figures published 1987–2002, but with a few dating from the 1960s. The figure for Imbabura Highland Quechua in "Ethnologue", for example, is 300,000, an estimate from 1977. The missionary organization FEDEPI, on the other hand, estimated one million Imbabura dialect speakers (published 2006). Census figures are also problematic, due to under-reporting. The 2001 Ecuador census reports only 500,000 Quechua speakers, compared to the estimate in most linguistic sources of more than 2 million. The censuses of Peru (2007) and Bolivia (2001) are thought to be more reliable. Additionally, there are an unknown number of speakers in emigrant communities, including Queens, New York, and Paterson, New Jersey, in the United States. There are significant differences among the varieties of Quechua spoken in the central Peruvian highlands and the peripheral varieties of Ecuador, as well as those of southern Peru and Bolivia. They can be labeled Quechua I (or Quechua B, central) and Quechua II (or Quechua A, peripheral). Within the two groups, there are few sharp boundaries, making them dialect continua.
However, there is a secondary division in Quechua II between the grammatically simplified northern varieties of Ecuador, Quechua II-B, known there as Kichwa, and the generally more conservative varieties of the southern highlands, Quechua II-C, which include the old Inca capital of Cusco. The closeness is at least in part because of the influence of Cusco Quechua on the Ecuadorean varieties in the Inca Empire. Because northern nobles were required to educate their children in Cusco, Cusco Quechua was maintained as the prestige dialect in the north. Speakers from different points within any of the three regions can generally understand one another reasonably well. There are nonetheless significant local-level differences across each. (Wanka Quechua, in particular, has several very distinctive characteristics that make the variety more difficult to understand, even for other Central Quechua speakers.) Speakers from different major regions, particularly Central or Southern Quechua, are not able to communicate effectively. The lack of mutual intelligibility among the dialects is the basic criterion that defines Quechua not as a single language, but as a language family. The complex and progressive nature of how speech varies across the dialect continua makes it nearly impossible to differentiate discrete varieties; "Ethnologue" lists 45 varieties, divided into two groups: Central and Peripheral. Because the two groups are not mutually intelligible, their varieties are all classified as separate languages. As a reference point, the overall degree of diversity across the family is a little less than that of the Romance or Germanic families, and more on the order of Slavic or Arabic. The greatest diversity is within Central Quechua, or Quechua I, which is believed to lie close to the homeland of the ancestral Proto-Quechua language. Alfredo Torero devised the traditional classification: the three divisions above, plus a fourth, a northern or Peruvian branch. The latter causes complications in the classification, however, as the northern dialects (Cajamarca–Cañaris, Pacaraos, and Yauyos–Chincha) have features of both Quechua I and Quechua II, and so are difficult to assign to either; Torero grouped them into this fourth branch. Willem Adelaar adheres to the Quechua I / Quechua II (central/peripheral) bifurcation, but, partially following later modifications by Torero, reassigns part of Quechua II-A to Quechua I. Landerman (1991) does not believe a truly genetic classification is possible and divides Quechua II so that the family has four geographical–typological branches: Northern, North Peruvian, Central, and Southern. He includes Chachapoyas and Lamas in North Peruvian Quechua, so Ecuadorian is synonymous with Northern Quechua. Quechua I (Central Quechua, "Waywash") is spoken in Peru's central highlands, from the Ancash Region to Huancayo. It is the most diverse branch of Quechua, to the extent that its divisions are commonly considered different languages. Quechua II (Peripheral Quechua, "Wamp'una", "Traveler") comprises all the remaining varieties, spoken both north and south of the Quechua I zone. Quechua shares a large amount of vocabulary, and some striking structural parallels, with Aymara, and the two families have sometimes been grouped together as a "Quechumaran family". That hypothesis is generally rejected by specialists, however. The parallels are better explained by mutual influence and borrowing through intensive and long-term contact.
Many Quechua–Aymara cognates are close, often closer than intra-Quechua cognates, and there is little relationship in the affixal system. The Puquina language of the Tiwanaku Empire is a possible source for some of the shared vocabulary between Quechua and Aymara. Jolkesky (2016) notes that there are lexical similarities with the Kunza, Leko, Mapudungun, Mochika, Uru-Chipaya, Zaparo, Arawak, Kandoshi, Muniche, Pukina, Pano, Barbakoa, Cholon-Hibito, Jaqi, Jivaro, and Kawapana language families due to contact. Quechua has borrowed a large number of Spanish words, such as "piru" (from "pero", "but"), "bwenu" (from "bueno", "good"), "iskwila" (from "escuela", "school"), "waka" (from "vaca", "cow") and "wuru" (from "burro", "donkey"). A number of Quechua words have entered English and French via Spanish, including "coca", "condor", "guano", "jerky", "llama", "pampa", "poncho", "puma", "quinine", "quinoa", "vicuña" ("vigogne" in French), and, possibly, "gaucho". The word "lagniappe" comes from the Quechuan word "yapay" 'to increase, to add'. The word first came into Spanish, then Louisiana French, with the French or Spanish article "la" in front of it: "la ñapa" in Louisiana French or Creole, or "la yapa" in Spanish. A rare instance of a Quechua word being taken into general Spanish use is given by "carpa" for "tent" (Quechua "karpa"). The Quechua influence on Latin American Spanish includes such borrowings as "papa" "potato", "chuchaqui" "hangover" in Ecuador, and diverse borrowings for "altitude sickness": "suruqch'i" in Bolivia, "sorojchi" in Ecuador, and "soroche" in Peru. In Bolivia, particularly, Quechua words are used extensively even by non-Quechua speakers. These include "wawa" "baby, infant", "ch'aki" "hangover", "misi" "cat", "juk'ucho" "mouse", "q'omer uchu" "green pepper", "jacu" "let's go", "chhiri" and "chhurco" "curly haired", among many others. Quechua grammar also enters Bolivian Spanish, such as the use of the suffix -ri. In Bolivian Quechua, -ri is added to verbs to signify that an action is performed with affection or, in the imperative, as a rough equivalent of "please". In Bolivia, -ri is often included in the Spanish imperative to imply "please" or to soften commands. For example, the standard "pásame" "pass me [something]" becomes "pasarime". At first, Spaniards referred to the language of the Inca empire as the "lengua general", the "general language". The name "quichua" was first used in 1560 by Domingo de Santo Tomás in his "Grammatica o arte de la lengua general de los indios de los reynos del Perú". It is not known what name the native speakers gave to their language before colonial times, or whether it was the Spaniards who called it "quechua". There are two possible etymologies of Quechua as the name of the language. There is a possibility that the name Quechua was derived from "*qiĉ.wa", the native word which originally referred to the "temperate valley" altitude ecological zone in the Andes (suitable for maize cultivation) and to its inhabitants. Alternatively, Pedro Cieza de León and Inca Garcilaso de la Vega, the early Spanish chroniclers, mention the existence of a people called Quichua in the present Apurímac Region, and it could be inferred that their name was given to the entire language. The Hispanicised spellings "Quechua" and "Quichua" have been used in Peru and Bolivia since the 17th century, especially after the Third Council of Lima. Today the local pronunciation of "Quechua Simi" varies from region to region.
Another name that native speakers give to their own language is "runa simi", "language of man/people"; it also seems to have emerged during the colonial period. The description below applies to Cusco Quechua; there are significant differences in other varieties of Quechua. Quechua has only three vowel phonemes: /a/, /i/ and /u/, as in Aymara (including Jaqaru). Monolingual speakers pronounce them as [æ], [ɪ] and [ʊ] respectively, but Spanish realizations [a], [i] and [u] may also be found. When the vowels appear adjacent to the uvular consonants (/q/, /qʼ/ and /qʰ/), they are rendered more like [ɑ], [ɛ] and [ɔ], respectively. About 30% of the modern Quechua vocabulary is borrowed from Spanish, and some Spanish sounds (such as /f/, /b/, /d/ and /ɡ/) may have become phonemic even among monolingual Quechua-speakers. Voicing is not phonemic in Cusco Quechua. Cusco Quechua, North Bolivian Quechua, and South Bolivian Quechua are the only varieties to have glottalized consonants. They, along with certain kinds of Ecuadorian Kichwa, are the only varieties which have aspirated consonants. Because reflexes of a given Proto-Quechua word may have different stops in neighboring dialects (Proto-Quechua "*čaki" 'foot' becomes "č'aki", while "čaka" 'bridge' remains "čaka"), they are thought to be innovations in Quechua from Aymara, borrowed independently after branching off from Proto-Quechua. Gemination of the tap /ɾ/ results in a trill [r]. Stress is penultimate in most dialects of Quechua. In some varieties, factors such as apocope of word-final vowels may cause exceptional final stress. Quechua has been written using the Roman alphabet since the Spanish conquest of the Inca Empire. However, written Quechua is rarely used by Quechua speakers because of the lack of printed material in Quechua. Until the 20th century, Quechua was written with a Spanish-based orthography, for example "Inca, Huayna Cápac, Collasuyo, Mama Ocllo, Viracocha, quipu, tambo, condor". This orthography is the most familiar to Spanish speakers, and so it has been used for most borrowings into English, which essentially always happen through Spanish. In 1975, the Peruvian government of Juan Velasco Alvarado adopted a new orthography for Quechua. This is the system preferred by the Academia Mayor de la Lengua Quechua, which results in the following spellings of the examples listed above: "Inka, Wayna Qhapaq, Qollasuyu, Mama Oqllo, Wiraqocha, khipu, tampu, kuntur". In 1985, a variation of this system was adopted by the Peruvian government that uses the Quechuan three-vowel system, resulting in the following spellings: "Inka, Wayna Qhapaq, Qullasuyu, Mama Uqllu, Wiraqucha, khipu, tampu, kuntur". The different orthographies are still highly controversial in Peru. Advocates of the traditional system believe that the new orthographies look too foreign and believe that they make Quechua harder to learn for people who have first been exposed to written Spanish. Those who prefer the new system maintain that it better matches the phonology of Quechua, and they point to studies showing that teaching the five-vowel system to children later causes reading difficulties in Spanish. For more on this, see Quechuan and Aymaran spelling shift. Writers differ in the treatment of Spanish loanwords. These are sometimes adapted to the modern orthography and sometimes left as in Spanish. For instance, "I am Roberto" could be written "Robertom kani" or "Ruwirtum kani". (The "-m" is not part of the name; it is an evidential suffix, showing how the information is known: firsthand, in this case.)
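The 1975-to-1985 change amounts to a regular vowel merger: the mid vowels written "e" and "o", which are uvular-conditioned variants of /i/ and /u/, are respelled "i" and "u". The following is only a minimal sketch of that correspondence, not an official transliteration tool; in particular it ignores Spanish loanwords, whose treatment writers disagree on.

```python
# Illustrative sketch: convert 1975 five-vowel Quechua spellings to the
# 1985 three-vowel norm by merging the mid vowels into i and u.
# Toy rule only; loanwords and other exceptions are not handled.

FIVE_TO_THREE = str.maketrans({"e": "i", "o": "u", "E": "I", "O": "U"})

def to_three_vowel(word: str) -> str:
    """Rewrite mid vowels e/o as i/u, as in Qollasuyu -> Qullasuyu."""
    return word.translate(FIVE_TO_THREE)

# Examples taken from the spellings quoted above:
for w in ["Qollasuyu", "Mama Oqllo", "Wiraqocha"]:
    print(w, "->", to_three_vowel(w))
# Qollasuyu -> Qullasuyu, Mama Oqllo -> Mama Uqllu, Wiraqocha -> Wiraqucha
```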
The Peruvian linguist Rodolfo Cerrón Palomino has proposed an orthographic norm for all of Southern Quechua: this Standard Quechua ("el Quechua estándar" or "Hanan Runasimi") conservatively integrates features of the two widespread dialects Ayacucho Quechua and Cusco Quechua. The Spanish-based orthography is now in conflict with Peruvian law. According to article 20 of the decree "Decreto Supremo No 004-2016-MC", which approves regulations relative to Law 29735, published in the official newspaper El Peruano on July 22, 2016, adequate spellings of the toponyms in the normalized alphabets of the indigenous languages must progressively be proposed, with the aim of standardizing the spellings used by the National Geographic Institute "(Instituto Geográfico Nacional, IGN)". The IGN implements the necessary changes on the official maps of Peru. Quechua is an agglutinating language, meaning that words are built up from basic roots followed by several suffixes, each of which carries a single meaning. The large number of suffixes changes both the overall meaning of a word and its subtle shades of meaning. All varieties of Quechua are very regular agglutinative languages, as opposed to isolating or fusional ones [Thompson]. Their normal sentence order is SOV (subject–object–verb). Notable grammatical features include bipersonal conjugation (verbs agree with both subject and object), evidentiality (indication of the source and veracity of knowledge), a set of topic particles, and suffixes indicating who benefits from an action and the speaker's attitude toward it, but some varieties may lack some of these characteristics. In Quechua, there are seven pronouns. First-person plural pronouns (equivalent to "we") may be inclusive or exclusive, meaning, respectively, that the addressee ("you") is or is not part of the "we". Quechua also adds the suffix "-kuna" to the second and third person singular pronouns "qam" and "pay" to create the plural forms, "qam-kuna" and "pay-kuna". Adjectives in Quechua are always placed before nouns. They lack gender and number and are not declined to agree with substantives. Noun roots accept suffixes that indicate person (defining possession, not identity), number, and case. In general, the personal suffix precedes that of number. In the Santiago del Estero variety, however, the order is reversed. From variety to variety, suffixes may change. Adverbs can be formed by adding "-ta" or, in some cases, "-lla" to an adjective: "allin – allinta" ("good – well"), "utqay – utqaylla" ("quick – quickly"). They are also formed by adding suffixes to demonstratives: "chay" ("that") – "chaypi" ("there"), "kay" ("this") – "kayman" ("hither"). There are several original adverbs. For Europeans, it is striking that the adverb "qhipa" means both "behind" and "future", and "ñawpa" means both "ahead, in front" and "past". Local and temporal concepts of adverbs in Quechua (as well as in Aymara) are associated with each other in reverse, compared to European languages: for speakers of Quechua, we are moving backwards into the future (we cannot see it: it is unknown), facing the past (we can see it: it is remembered). The infinitive forms have the suffix "-y" (e.g., "much'a" 'kiss'; "much'a-y" 'to kiss'). The indicative endings usually indicate the subject; the person of the object is also indicated by a suffix ("-a-" for first person and "-su-" for second person), which precedes the subject suffixes.
In such cases, the plural suffixes ("-chik" and "-ku") can be used to express the number of the object rather than the subject. Various suffixes are added to the stem to change the meaning. For example, "-chi" is a causative suffix and "-ku" is a reflexive suffix (example: "wañuy" 'to die'; "wañuchiy" 'to kill'; "wañuchikuy" 'to commit suicide'); "-naku" is used for mutual action (example: "marq'ay" 'to hug'; "marq'anakuy" 'to hug each other'), and "-chka" is a progressive, used for an ongoing action (e.g., "mikhuy" 'to eat'; "mikhuchkay" 'to be eating'). Particles are indeclinable: they do not accept suffixes. They are relatively rare, but the most common are "arí" 'yes' and "mana" 'no', although "mana" can take some suffixes, such as "-n"/"-m" ("manan"/"manam"), "-raq" ("manaraq" 'not yet') and "-chu" ("manachu?" 'or not?'), to intensify the meaning. Other particles are "yaw" 'hey, hi', and certain loan words from Spanish, such as "piru" (from Spanish "pero" 'but') and "sinuqa" (from "sino" 'rather'). The Quechuan languages have three different morphemes that mark evidentiality. Evidentiality refers to a morpheme whose primary purpose is to indicate the source of information. In Quechuan languages, evidentiality is a three-term system: there are three evidential morphemes that mark varying levels of source information. The markers can apply to first, second, and third persons. In Wanka Quechua, the three morphemes are the direct evidential "-m(i)", the inferential "-chr(a)", and the reportative "-sh(i)". The parentheses around the vowels indicate that the vowel can be dropped when following an open vowel. For the sake of cohesiveness, these forms are used below to discuss the evidential morphemes; dialectal variations of the forms are noted in the following descriptions. The following sentences provide examples of the three evidentials and further discuss the meaning behind each of them. Regional variations: in Cusco Quechua, the direct evidential presents itself as "–mi" and "–n". The evidential "–mi" indicates that the speaker has a "strong personal conviction of the veracity of the circumstance expressed". It has the basis of direct personal experience. A Wanka Quechua example translates as "I saw them with my own eyes." Depending on the variety, the inference morpheme appears as "-ch(i)", "-ch(a)" or "-chr(a)". The "-chr(a)" evidential indicates that the utterance is an inference or form of conjecture. That inference relays the speaker's non-commitment to the truth-value of the statement. It also appears in cases such as acquiescence, irony, interrogative constructions, and first person inferences. These uses constitute nonprototypical use and will be discussed later in the "changes in meaning and other uses" section. A Wanka Quechua example translates as "I think they will probably come back." Regional variations: it can appear as "–sh(i)" or "–s(i)" depending on the dialect. With the use of this morpheme, the speaker "serves as a conduit through which information from another source passes". The information being related is hearsay or revelatory in nature. It also works to express the uncertainty of the speaker regarding the situation. However, it also appears in other constructions that are discussed in the "changes in meaning" section. A Wanka Quechua example translates as "(I was told) Shanti borrowed it." Hintz discusses an interesting case of evidential behavior found in the Sihuas dialect of Ancash Quechua. The author postulates that, instead of three single evidential markers, that Quechuan variety contains three pairs of evidential markers.
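As a concrete illustration of the agglutinative derivation described above, the sketch below composes verb forms from the root and suffixes quoted in this section. It is a deliberately simplified toy: real Quechua applies morphophonemic adjustments that this plain string concatenation ignores.

```python
# Minimal sketch of Quechua-style agglutination, using the suffixes
# cited above. Simplified: morphophonemic rules are not modeled.

# Derivational verb suffixes quoted in this section:
VERB_SUFFIXES = {
    "chi": "causative",
    "ku": "reflexive",
    "naku": "mutual action",
    "chka": "progressive",
}
INFINITIVE = "y"

def derive_verb(root: str, *suffixes: str) -> str:
    """Attach derivational suffixes to a verb root, then the infinitive -y."""
    for s in suffixes:
        assert s in VERB_SUFFIXES, f"unknown suffix: {s}"
    return root + "".join(suffixes) + INFINITIVE

print(derive_verb("wañu"))               # wañuy      'to die'
print(derive_verb("wañu", "chi"))        # wañuchiy   'to kill'
print(derive_verb("wañu", "chi", "ku"))  # wañuchikuy 'to commit suicide'
```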
The evidential morphemes have been referred to both as markers and as morphemes. The literature differs on whether the evidential morphemes act as affixes or as clitics (in some cases, such as Wanka Quechua, enclitics). Lefebvre and Muysken (1998) discuss this issue in terms of case but remark that the line between affix and clitic is not clear. Both terms are used interchangeably throughout these sections. Evidentials in the Quechuan languages are "second position enclitics", which usually attach to the first constituent in the sentence, as in an example that translates as "Once, there were an old man and an old woman." They can, however, also occur on a focused constituent, as in an example translating as "It is now that Pedro is building the house." Sometimes, the affix is described as attaching to the focus, particularly in the Tarma dialect of Yaru Quechua, but this does not hold true for all varieties of Quechua. In Huánuco Quechua, the evidentials may follow any number of topics, marked by the topic marker "–qa", and the element with the evidential must precede the main verb or be the main verb. However, there are exceptions to that rule, and the more topics there are in a sentence, the more likely the sentence is to deviate from the usual pattern. One such exceptional example translates as "When she (the witch) reached the peak, God had already taken the child up into heaven." Evidentials can be used to relay different meanings depending on the context and perform other functions. The following examples are restricted to Wanka Quechua. The direct evidential "-m(i)" appears in wh-questions and yes/no questions. By considering the direct evidential in terms of prototypical semantics, it seems somewhat counterintuitive to have a direct evidential, basically an evidential that confirms the speaker's certainty about a topic, in a question. However, if one focuses less on the structure and more on the situation, some sense can be made. The speaker is asking the addressee for information, so the speaker assumes the addressee knows the answer. That assumption is where the direct evidential comes into play. The speaker holds a certain amount of certainty that the addressee will know the answer. The speaker interprets the addressee as being in "direct relation" to the proposed content; the situation is the same as when, in regular sentences, the speaker assumes direct relation to the proposed information. An example translates as "When did he come back from Huancayo?" The direct evidential affix is also seen in yes/no questions, similar to the situation with wh-questions. Floyd describes yes/no questions as being "characterized as instructions to the addressee to assert one of the propositions of a disjunction." Once again, the burden of direct evidence is being placed on the addressee, not on the speaker. The question marker in Wanka Quechua, "-chun", is derived from the negative "–chu" marker and the direct evidential (realized as "–n" in some dialects). An example translates as "Is he going to Tarma?" While "–chr(a)" is usually used in an inferential context, it has some non-prototypical uses. "Mild exhortation": in these constructions the evidential works to reaffirm and encourage the addressee's actions or thoughts, as in an example translating as "Yes, tell them, 'I've gone farther.'" This example comes from a conversation between husband and wife, discussing the reactions of their family and friends after they have been gone for a while. The husband says he plans to stretch the truth and tell them about distant places to which he has gone, and his wife (in the example above) echoes and encourages his thoughts.
"Acquiescence" With these, the evidential is used to highlight the speaker's assessment of inevitability of an event and acceptance of it. There is a sense of resistance, diminished enthusiasm, and disinclination in these constructions. I suppose I'll pay you then. This example comes from a discourse where a woman demands compensation from the man (the speaker in the example) whose pigs ruined her potatoes. He denies the pigs as being his but finally realizes he may be responsible and produces the above example. "Interrogative" Somewhat similar to the "–mi" evidential, the inferential evidential can be found in content questions. However, the salient difference between the uses of the evidentials in questions is that in the "–m(i)" marked questions, an answer is expected. That is not the case with "–chr(a)" marked questions. I wonder what we will give our families when we arrive. "Irony" Irony in language can be a somewhat complicated topic in how it functions differently in languages, and by its semantic nature, it is already somewhat vague. For these purposes, it is suffice to say that when irony takes place in Wanka Quechua, the "–chr(a)" marker is used. (I suppose) That's how you learn [that is the way in which you will learn]. This example comes from discourse between a father and daughter about her refusal to attend school. It can be interpreted as a genuine statement (perhaps one can learn by resisting school) or as an ironic statement (that is an absurd idea). Aside from being used to express hearsay and revelation, this affix also has other uses. "Folktales, myths, and legends" Because folktales, myths, and legends are, in essence, reported speech, it follows that the hearsay marker would be used with them. Many of these types of stories are passed down through generations, furthering this aspect of reported speech. A difference between simple hearsay and folktales can be seen in the frequency of the "–sh(i)" marker. In normal conversation using reported speech, the marker is used less, to avoid redundancy. "Riddles" Riddles are somewhat similar to myths and folktales in that their nature is to be passed by word of mouth. In certain grammatical structures, the evidential marker does not appear at all. In all Quechuan languages the evidential will not appear in a dependent clause. Sadly, no example was given to depict this omission. Omissions occur in Quechua. The sentence is understood to have the same evidentiality as the other sentences in the context. Quechuan speakers vary as to how much they omit evidentials, but they occur only in connected speech. An interesting contrast to omission of evidentials is overuse of evidentials. If a speaker uses evidentials too much with no reason, competence is brought into question. For example, the overuse of –m(i) could lead others to believe that the speaker is not a native speaker or, in some extreme cases, that one is mentally ill. By using evidentials, the Quechua culture has certain assumptions about the information being relayed. Those who do not abide by the cultural customs should not be trusted. A passage from Weber (1986) summarizes them nicely below: Evidentials also show that being precise and stating the source of one's information is extremely important in the language and the culture. Failure to use them correctly can lead to diminished standing in the community. Speakers are aware of the evidentials and even use proverbs to teach children the importance of being precise and truthful. 
Precision and information source are of the utmost importance; the evidentials are a powerful and resourceful device of human communication. Although the body of literature in Quechua is not as sizable as its historical and current prominence would suggest, it is nevertheless not negligible. As in the case of pre-Columbian Mesoamerica, there are a number of surviving Andean documents in the local language that were written down in Latin characters after the European conquest, but they express, to a great extent, the culture of pre-Conquest times. That type of Quechua literature is somewhat scantier, but nevertheless significant. It includes the so-called Huarochirí Manuscript (1598), describing the mythology and religion of the valley of Huarochirí, as well as Quechua poems quoted within the Spanish-language texts of some chronicles dealing with the pre-Conquest period. There are a number of anonymous or signed Quechua dramas dating from the post-conquest period (starting from the 17th century), some of which deal with the Inca era, while most are on religious topics and of European inspiration. The most famous dramas are "Ollantay" and the plays describing the death of Atahualpa. Juan de Espinosa Medrano, for example, wrote several dramas in the language. Poems in Quechua were also composed during the colonial period. There is at least one Quechuan version of the Bible. Dramas and poems continued to be written in the 19th and especially the 20th century; in addition, in the 20th century and more recently, more prose has been published. Few new literary forms appeared in the 19th century, however, as European influences limited literary criticism. While some of that literature consists of original compositions (poems and dramas), the bulk of 20th century Quechua literature consists of traditional folk stories and oral narratives. Johnny Payne has translated two sets of Quechua oral short stories, one into Spanish and the other into English. Demetrio Túpac Yupanqui wrote a Quechuan version of "Don Quixote", under the title "Yachay sapa wiraqucha dun Qvixote Manchamantan". A news broadcast in Quechua, "Ñuqanchik" ("all of us"), began in Peru in 2016. Many Andean musicians write and sing in their native languages, including Quechua and Aymara. Notable musical groups are Los Kjarkas, Kala Marka, J'acha Mallku, Savia Andina, Wayna Picchu, Wara, Alborada, Uchpa and many others.
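To summarize the evidential system described above in schematic form, the sketch below attaches one of the three Wanka-style evidential enclitics to the first constituent of a clause, applying the vowel-dropping alternation noted earlier (full form after a consonant, short form after a vowel, as in "Robertom kani"). This is a simplification for illustration only, not a grammar of any particular variety.

```python
# Schematic sketch of Quechua evidential marking: a second-position
# enclitic attaches to the first constituent, and its vowel drops
# after a vowel-final host (e.g. Roberto + -mi -> Robertom).

EVIDENTIALS = {
    "direct":    ("mi", "m"),      # witnessed firsthand
    "inference": ("chra", "chr"),  # conjecture / inference
    "hearsay":   ("shi", "sh"),    # reported information
}
VOWELS = set("aeiou")  # e/o occur in loanwords such as "Roberto"

def mark(words: list[str], source: str) -> str:
    """Attach the evidential for the given knowledge source to the
    first word of the clause."""
    full, short = EVIDENTIALS[source]
    host = words[0]
    suffix = short if host[-1] in VOWELS else full
    return " ".join([host + suffix] + words[1:])

print(mark(["Roberto", "kani"], "direct"))  # Robertom kani ('I am Roberto')
```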
https://en.wikipedia.org/wiki?curid=25286
Protein quaternary structure Protein quaternary structure is the number and arrangement of multiple folded protein subunits in a multi-subunit complex. It includes organizations from simple dimers to large homooligomers and complexes with defined or variable numbers of subunits. It can also refer to biomolecular complexes of proteins with nucleic acids and other cofactors. Many proteins are actually assemblies of multiple polypeptide chains. The quaternary structure refers to the number and arrangement of the protein subunits with respect to one another. Examples of proteins with quaternary structure include hemoglobin, DNA polymerase, and ion channels. Enzymes composed of subunits with diverse functions are sometimes called holoenzymes, in which some parts may be known as regulatory subunits and the functional core is known as the catalytic subunit. Other assemblies referred to instead as multiprotein complexes also possess quaternary structure. Examples include nucleosomes and microtubules. Changes in quaternary structure can occur through conformational changes within individual subunits or through reorientation of the subunits relative to each other. It is through such changes, which underlie cooperativity and allostery in "multimeric" enzymes, that many proteins undergo regulation and perform their physiological function. The above definition follows a classical approach to biochemistry, established at times when the distinction between a protein and a functional, proteinaceous unit was difficult to elucidate. More recently, people refer to protein–protein interaction when discussing quaternary structure of proteins and consider all assemblies of proteins as protein complexes. The number of subunits in an oligomeric complex is described using names that end in -mer (Greek for "part, subunit"). Formal and Greco-Latinate names are generally used for the first ten types and can be used for up to twenty subunits, whereas higher order complexes are usually described by the number of subunits, followed by -meric. Although complexes higher than octamers are rarely observed for most proteins, there are some important exceptions. Viral capsids are often composed of multiples of 60 proteins. Several molecular machines are also found in the cell, such as the proteasome (four heptameric rings = 28 subunits), the transcription complex and the spliceosome. The ribosome is probably the largest molecular machine, and is composed of many RNA and protein molecules. In some cases, proteins form complexes that then assemble into even larger complexes. In such cases, one uses the nomenclature, e.g., "dimer of dimers" or "trimer of dimers", to suggest that the complex might dissociate into smaller sub-complexes before dissociating into monomers. Protein quaternary structure can be determined using a variety of experimental techniques that require a sample of protein in a variety of experimental conditions. The experiments often provide an estimate of the mass of the native protein and, together with knowledge of the masses and/or stoichiometry of the subunits, allow the quaternary structure to be predicted with a given accuracy. It is not always possible to obtain a precise determination of the subunit composition for a variety of reasons. The number of subunits in a protein complex can often be determined by measuring the hydrodynamic molecular volume or mass of the intact complex, which requires native solution conditions. 
For "folded" proteins, the mass can be inferred from its volume using the partial specific volume of 0.73 ml/g. However, volume measurements are less certain than mass measurements, since "unfolded" proteins appear to have a much larger volume than folded proteins; additional experiments are required to determine whether a protein is unfolded or has formed an oligomer. Some bioinformatics methods were developed for predicting the quaternary structural attributes of proteins based on their sequence information by using various modes of pseudo amino acid composition (see, e.g., refs.). Methods that measure the mass or volume under unfolding conditions (such as MALDI-TOF mass spectrometry and SDS-PAGE) are generally not useful, since non-native conditions usually cause the complex to dissociate into monomers. However, these may sometimes be applicable; for example, the experimenter may apply SDS-PAGE after first treating the intact complex with chemical cross-link reagents. Proteins are capable of forming very tight complexes. For example, ribonuclease inhibitor binds to ribonuclease A with a roughly 20 fM dissociation constant. Other proteins have evolved to bind specifically to unusual moieties on another protein, e.g., biotin groups (avidin), phosphorylated tyrosines (SH2 domains) or proline-rich segments (SH3 domains). Protein-protein interactions can be engineered to favor certain oligomerization states.
https://en.wikipedia.org/wiki?curid=25291
Quest for Glory Quest for Glory is a series of hybrid adventure/role-playing video games designed by Corey and Lori Ann Cole. The series was created in the Sierra Creative Interpreter, a toolset developed at Sierra specifically to assist with adventure game development. The series combines humor, puzzle elements, themes and characters borrowed from various legends, puns, and memorable characters, forming a five-part series in the Sierra stable. The series was originally titled "Hero's Quest". However, Sierra failed to trademark the name. The Milton Bradley Company successfully trademarked an electronic version of their unrelated joint Games Workshop board game, "HeroQuest", which forced Sierra to change the series' title to "Quest for Glory". This decision meant that all future games in the series (as well as newer releases of "Hero's Quest I") used the new name. Lori Cole pitched Quest for Glory to Sierra as a "rich, narrative-driven, role-playing experience". The series consisted of five games, each of which followed directly upon the events of the last. New games frequently referred to previous entries in the series, often in the form of cameos by recurring characters. The objective of the series is to transform the player character from an average adventurer into a hero by completing non-linear quests. The series was also revolutionary in its character import system, which allowed players to import their individual character, including the skills and wealth they had acquired, from one game to the next. Hybrids in their gameplay and themes, the games feature serious stories leavened with humor throughout. There are real dangers to face, and true heroic feats to perform, but silly details and overtones creep in (when the drama of adventuring does not force them out). Cheap word play is particularly frequent, to the point that the second game's ending refers to itself as the hero's "latest set of adventures and miserable puns." The games have recurring story elements. For example, each installment in the series requires the player to create a dispel potion. The games include a number of easter eggs, including a number of allusions to other Sierra games. For example, if a player types "pick nose" in the first game (or clicks the lockpick icon on the player in the remake) and their lock-picking skill is high enough, the game responds: "Success! You now have an open nose". If the skill is too low, the player could insert the lock pick too far, killing himself. Another example is Dr. Cranium, an allusion to "The Castle of Dr. Brain", in the fourth game. Each game draws its inspiration from a different culture and mythology (in order: Germanic fairy tale; Middle Eastern/Arabian Nights; Egyptian/African; Slavic folklore; and finally Greco-Mediterranean), with the hero facing increasingly powerful opponents with help from characters who become more familiar from game to game. Each game varies somewhat from the tradition it is derived from; for example, Baba Yaga, a character borrowed from Slavic folklore, appears in the first game, which is based on German mythology. The second game, which uses Middle Eastern folklore, introduces several Arab and African-themed characters who reappear in the third game, based on Egyptian mythology. Characters from every game and genre in the series reappear in the fourth and fifth games.
In addition to deviating from the player's expectations of the culture represented in each game, the series also includes a number of intentional anachronisms, such as the pizza-loving mad scientists in the later games. Many CRPG enthusiasts consider the "Quest for Glory" series to be among the best in the genre, and the series is lauded for its non-linearity. The games are notable for blending the mechanics of adventure video games and role-playing video games, their unique tone which combines pathos and humour, and game systems which were ahead of their time, such as day-night cycles, non-playable characters who adhered to their own schedules within the games, and character improvement through both skill practice and point investiture. The website Polygon and the Kotaku blog have characterised the games as a precursor to modern-day RPGs. Fraser Brown of the Destructoid blog considers the games "one of the greatest adventure series of all time". Rowan Kaiser of the blog Engadget credits the games' hybrid adventure and role-playing systems for the series' success. "The binary succeed/fail form of adventure game puzzles tended to either make those games too easy or too hard," he wrote, "But most puzzles in "Quest For Glory" involved some kind of skill check for your hero. This meant that you could succeed at most challenges by practicing or exploring, instead of getting stuck on bizarre item-combination puzzles". The first four games are hybrid adventure/role-playing video games with real-time combat, while the fifth game switches to the action/RPG genre. The gameplay standards established in earlier Sierra adventure games are enhanced by the player's ability to choose his character's career path from among the three traditional role-playing game backgrounds: fighter, magic user/wizard and thief. Further variation is added by the ability to customize the Hero's abilities, including the option of selecting skills normally reserved for another character class, leading to unique combinations often referred to as "hybrid characters". During the second or third game, a character can be initiated as a Paladin by performing honorable actions, changing his class and abilities, and receiving a unique sword. This applies when the character is exported into later games. Any character that finishes any game in the series (except "Dragon Fire", the last in the series) can be exported to a later game ("Shadows of Darkness" has a glitch which allows one to import characters from the same game), keeping the character's statistics and parts of its inventory. If the character received the paladin sword, he keeps the magic sword (Soulforge or Piotyr's sword) and special paladin magic abilities. A character imported into a later game in the series from any other game can be assigned any character class, including Paladin. Each career path has its own strengths and weaknesses, and scenarios unique to the class because of the skills associated with it. Each class also has its own distinct way to solve various in-game puzzles, which encourages replay: some puzzles have up to four different solutions. For instance, if a door is closed, instead of lockpicking or casting an open spell, the fighter can simply knock down the door. The magic user and the thief are both non-confrontational characters, as they lack the close-range ability of the fighter, but are better able to attack from a distance, using daggers or spells. An example of these separate paths can be seen early in the first game.
A gold ring belonging to the healer rests in a nest on top of a tree; fighters might make it fall by hurling rocks, thieves may want to climb the tree, while a magic user can simply cast the fetch spell to retrieve the nest. Then, while the fighter and magic user return the ring for a reward, the thief can choose between returning it or selling the same ring in the thieves' guild (which is not available to those not possessing the "thieving" skills). It is also possible to build, over the course of several games, a character that has points in every skill in the game, and can therefore perform nearly every task. Each character class features special abilities unique to that class, as well as a shared set of attributes which can be developed by performing tasks and completing quests. In general, for a particular game, the maximum value which can be reached for an ability is 100 times the number of that game in the series (for example, 300 in the third game). "Quest for Glory V" allows stat bonuses which can push an attribute over the maximum, and lets certain classes raise certain attributes beyond the normal limits. "Quest for Glory V" also features special kinds of equipment which lower some stats while raising others. At the beginning of each game, the player may assign points to certain attributes, and certain classes only have specific attributes enabled, although skills can be added for an extra cost. General attributes influence all character classes and how they interact with objects and other people in the game; high values in strength allow movement of heavier objects, and communication helps with bargaining with sellers. These attributes are changed by performing actions related to the skill: climbing a tree eventually increases the skill value in climbing, running increases vitality, and so on. There are also complementary skills which are associated with only some classes; parry (the ability to block a blow with the sword), for instance, is mainly used by fighters and paladins, lock picking and sneaking are the thief's trade, and the ability to cast magic spells is usually associated with the magic user. Vital statistics are depleted by performing some actions. Health (determined by strength and vitality) determines the hit points of the character, which decrease when the player is attacked or harms himself. Stamina (based on agility and vitality) limits the number of actions (exercise, fighting, running, etc.) the character is able to perform before needing rest or risking injury. Mana is only required by characters with skill in magic, and is calculated according to the character's intelligence and magic attributes. Puzzle and experience points mainly show the development of the player and his progress in the game, though in the first game experience also affects the kind of random encounters a player faces, as some monsters only appear after a certain level of experience is reached. In the valley barony of Spielburg, the evil ogress Baba Yaga has cursed the land and the baron who tried to drive her off. His children have disappeared, while the land is ravaged by monsters and brigands. The Valley of Spielburg is in need of a Hero able to solve these problems. The original game was released in 1989, while a VGA remake was released in 1992. "Quest for Glory II: Trial by Fire" takes place in the land of Shapeir, in the world of Gloriana. Directly following from the events of the first game, the newly proclaimed Hero of Spielburg travels by flying carpet with his friends Abdulla Doo, Shameen and Shema to the desert city of Shapeir.
The city is threatened by magical elementals, while the Emir Arus al-Din of Shapeir's sister city Raseir is missing and his city has fallen under tyranny. "Quest for Glory II" is the only game in the series never to have been remade by Sierra beyond the EGA graphics engine, but AGD Interactive released a VGA fan remake of the game using the Adventure Game Studio engine on August 24, 2008. Rakeesh the Paladin brings the Hero (and Prince of Shapeir), along with Uhura and her son Simba, to his homeland, the town of Tarna, in a jungle and savannah country called Fricana that resembles central African ecosystems. Tarna is on the brink of war; the Simbani, the tribe of Uhura, are ready to do battle with the Leopardmen. Each tribe has stolen a sacred relic from the other, and both refuse to return it until the other side does. The Hero must prevent the war and then thwart a demon who may be loosed upon the world. Drawn without warning from his victory in Fricana, the Hero arrives without equipment or explanation in the middle of the hazardous Dark One Caves in the distant land of Mordavia. While struggling to survive in this land plagued with undead, the Hero must prevent a dark power from summoning eternal darkness into the world. Erasmus introduces the player character, the Hero, to the Greece-like kingdom of Silmaria, whose king was recently assassinated. Thus, the traditional Rites of Rulership are due to commence, and the victor will be crowned king. The Hero enters the contest with the assistance of Erasmus, Rakeesh, and many old friends from previous entries in the series. The Hero competes against rivals including the Silmarian guard Kokeeno Pookameeso, the warlord Magnum Opus, the hulking Gort, and the warrior Elsa Von Spielburg. Originally, the series was to be a tetralogy, consisting of four games built around cycles of four: the four cardinal directions, the four classical elements, the four seasons and four different mythologies. However, when "Shadows of Darkness" was designed, it was thought that it would be too difficult for the hero to go straight from Shapeir to Mordavia and defeat the Dark One. To solve the problem, a new game, "Wages of War", was inserted into the canon, resulting in a renumbering of the series. Evidence for this can be found at the end of "Trial by Fire": the player is told that the next game will be "Shadows of Darkness", and a fanged vampiric moon is shown to hint at the next game's theme. The developers discussed this in the Fall 1992 issue of Sierra's "InterAction" magazine, and in an online chat room: Somewhere between finishing "Trial by Fire" and cranking up the design process for "Shadows of Darkness", the husband-and-wife team realized a fifth chapter would have to be added to bridge the games. That chapter became "Wages of War". The concept of seasons in the games represents the maturation of the Hero as he moves from story to story. It's a critical component in a series that – from the very beginning – was designed to be a defined quartet of stories, representing an overall saga with a distinct beginning, middle, and end. In the first episode, the player is a new graduate of the Famous Adventurer's Correspondence School, ready to venture out into the springtime of his career and build a rep. It's a light-hearted, exhilarating journey into the unknown that can be replayed three times with three distinct outlooks on puzzle-solving.
In the second chapter – "Trial by Fire" – the Hero enters the summer of his experience, facing more difficult challenges with more highly developed skills. While the episode is more serious and dangerous than its predecessor, it retains the enchanting mixture of fantasy, challenge, and humor that made the first game a hit with so many fans. Of all the reasons Lori and Corey found for creating a bridge between "Trial by Fire" and "Shadows of Darkenss", the most compelling was the feeling that the Hero character simply hadn't matured enough to face the very grim challenges awaiting him in Transylvania. Along with the Hero, several recurring characters appear and re-appear throughout the series including: Rakeesh Sah Tarna, Baba Yaga, Abdullah Doo, Elsa von Spielburg, the evil Ad Avis, and others. The fictional world in which the Quest for Glory series takes place includes the town of Spielburg (based on German folklore), the desert city of Shapeir (based on the Arabia of "One Thousand and One Nights"), the jungle city of Tarna (based on African mythology, especially Egypt), the hamlet of Mordavia (based on Slavic mythology) and Silmaria (based on Greek mythology). Adventures, monsters and story of the games are usually drawn from legends of the respective mythology on which a title is based, although there are several cross-over exceptions, like the Eastern European Baba Yaga also appearing in the first game, which is distinctly German.
https://en.wikipedia.org/wiki?curid=25292
Quango A quango or QUANGO (less often QuANGO or QANGO) is a quasi non-governmental organisation. It is typically an organisation to which a government has devolved power, but which is still partly controlled and/or financed by government bodies. As its name suggests, a quango is a hybrid form of organisation, with elements of both non-governmental organisations (NGOs) and public sector bodies. The concept is most often applied in the United Kingdom and, to a lesser degree, Australia, Canada, Ireland, New Zealand, the United States, and other English-speaking countries. In the UK, the term quango covers different "arm's-length" government bodies, including "non-departmental public bodies" (NDPBs), non-ministerial government departments, and executive agencies. One UK example is the Forestry Commission, a non-ministerial government department responsible for forestry in England. The term has spawned the derivative quangocrat; the Taxpayers' Alliance faulted a majority of quangocrats for not making declarations of political activity. The acronym has been extended to cover government agencies of all kinds, often being spelt out as quasi-autonomous national government organisation and sometimes modified to qango. In 2006, there were 832 quangos in Ireland (482 at national and 350 at local level) with a total of 5,784 individual appointees and a combined annual budget of €13 billion. The Irish majority party, Fine Gael, had promised to eliminate 145 quangos should they become the governing party in the 2016 election. Since coming to power, they have reduced the overall number of quangos by 17, a reduction that also included agencies which the former government had already planned to remove. Despite a "commitment" from the Conservative party elected in 1979 to curb the growth of unelected bodies, their numbers grew rapidly during its time in power through the 1980s. The Cabinet Office's 2009 report on non-departmental public bodies found that there were 766 NDPBs sponsored by the UK government. The number has been falling: there were 790 in 2008 and 827 in 2007. The number of NDPBs has fallen by over 10% since 1997. Staffing and expenditure of NDPBs have nonetheless increased: they employed 111,000 people in 2009 and spent £46.5 billion, of which £38.4 billion was directly funded by the government. Since the coalition government of Conservatives and Liberal Democrats was formed in May 2010, numerous NDPBs have been abolished under Conservative plans to reduce the overall budget deficit by reducing the size of the public sector. As of the end of July 2010, the government had abolished at least 80 NDPBs and warned many others that they faced mergers or deep cuts. In September 2010, "The Telegraph" published a leaked Cabinet Office list suggesting that a further 94 could be abolished, while four would be privatised and 129 merged. In August 2012, Cabinet Office minister Francis Maude said the government was on course to abolish 204 public bodies by 2015, and said this would create a net saving of at least £2.6 billion. Use of the term quango is less common, and therefore more controversial, in the United States, owing to the country's commitment to limited government and electoral accountability. However, Paul Krugman has stated that the US Federal Reserve is, effectively, "what the British call a quango... Its complex structure divides power between the federal government and the private banks that are its members, and in effect gives substantial autonomy to a governing board of long-term appointees."
Two other U.S.-based organizations that might be described as quangos are the Internet Corporation for Assigned Names and Numbers (ICANN) and the National Center for Missing and Exploited Children (NCMEC). The term "quasi non-governmental organisation" was created in 1967 by Alan Pifer of the US-based Carnegie Foundation, in an essay on the independence and accountability of publicly funded bodies that are incorporated in the private sector. The essay caught the attention of David Howell, a Conservative MP in Britain, who then organized an Anglo-American project with Pifer to examine the pros and cons of such enterprises. The lengthy term was shortened to the acronym QUANGO (later lowercased to quango) by a British participant in the joint project, Anthony Barker, during one of the conferences on the subject. It describes an ostensibly non-governmental organisation performing governmental functions, often in receipt of funding or other support from government, whereas mainstream NGOs mostly get their donations or funds from the public and other organisations that support their cause. Numerous quangos were created from the 1980s onwards. Examples in the United Kingdom include those engaged in the regulation of various commercial and service sectors, such as the Water Services Regulation Authority. An essential feature of a quango in the original definition was that it should not be a formal part of the state structure. The term was then extended to apply to a range of organisations, such as executive agencies providing (from 1988) health, education and other services. Particularly in the UK, this occurred in a polemical atmosphere in which it was alleged that the proliferation of such bodies was undesirable and should be reversed. In this context, the original acronym was often replaced by a backronym spelt out as "quasi-autonomous national government organisation", and often rendered as "qango". This spawned the related acronym "qualgo", a quasi-autonomous "local" government organisation: "London Waste Regulation Authority, the first 'qualgo' formed after abolition of the Greater London Council... The new body is a joint board of councilors from London boroughs. 'Qualgo' stands for 'quasi-autonomous local government organization', the municipal equivalent of a quango, in which members are appointed by other councilors". The less contentious term non-departmental public body (NDPB) is often employed to identify numerous organisations with devolved governmental responsibilities; the UK government issued its own definition of a non-departmental public body, or quango, in 1997. "The Times" has accused quangos of bureaucratic waste and excess. In 2005, Dan Lewis, author of "The Essential Guide to Quangos", claimed that the UK had 529 quangos, many of which were useless and duplicated the work of others. Quangos are filled with appointed members; unlike elected officials, they do not need to seek re-election. This is a major criticism in a liberal democracy, as members of quangos have governmental power and influence without having been legitimised by the electorate. They also do not have the same level of accountability as elected officials, a problem worsened by the lack of media coverage of their work.
https://en.wikipedia.org/wiki?curid=25293
Quiver A quiver is a container for holding arrows, bolts, darts, or javelins. It can be carried on an archer's body, the bow, or the ground, depending on the type of shooting and the archer's personal preference. Quivers were traditionally made of leather, wood, furs, and other natural materials, but are now often made of metal or plastic. The English word quiver has its origins in Old French, written as quivre, cuevre or coivre. The most common style of quiver is a flat or cylindrical container suspended from the belt. They are found across many cultures from North America to China. Many variations of this type exist, such as being canted forwards or backwards, and being carried on the dominant-hand side, off-hand side, or the small of the back. Some variants enclose almost the entire arrow, while minimalist "pocket quivers" consist of little more than a small stiff pouch that only covers the first few inches. The Bayeux Tapestry shows that most bowmen in medieval Europe used belt quivers. Back quivers are secured to the archer's back by leather straps, with the nock ends protruding above the dominant hand's shoulder. Arrows can be drawn over the shoulder rapidly by the nock. This style of quiver was used by native peoples of North America and Africa, and was also commonly depicted in bas-reliefs from ancient Assyria. While popular in cinema and 20th-century art for depictions of medieval European characters (such as Robin Hood), this style of quiver was rarely used in medieval Europe. A ground quiver is used for either target shooting or warfare when the archer is shooting from a fixed location. Ground quivers can be simply stakes in the ground with a ring at the top to hold the arrows, or more elaborate designs that hold the arrows within reach without the archer having to lean down to draw. A modern invention, the bow quiver attaches directly to the bow's limbs and holds the arrows steady with a clip of some kind. It is popular with compound bow hunters, as it allows one piece of equipment to be carried in the field without encumbering the hunter's body. A style used by medieval English longbowmen and several other cultures, an arrow bag is a simple drawstring cloth sack with a leather spacer at the top to keep the arrows divided. When not in use, the drawstring could be closed, completely covering the arrows so as to protect them from rain and dirt. Some had straps or rope sewn to them for carrying, but many were either tucked into the belt or set on the ground before battle to allow easier access. In Japanese archery, yebira refers to a variety of quiver designs. The yazutsu is a different type, used in kyudo; arrows are removed from it before shooting and held in the hand, so it is mainly used to transport and protect arrows.
https://en.wikipedia.org/wiki?curid=25295
Quinine Quinine is a medication used to treat malaria and babesiosis. This includes the treatment of malaria due to "Plasmodium falciparum" that is resistant to chloroquine when artesunate is not available. While sometimes used for restless legs syndrome, quinine is not recommended for this purpose due to the risk of serious side effects. It can be taken by mouth or intravenously. Malaria resistance to quinine occurs in certain areas of the world. Quinine is also the ingredient in tonic water that gives it its bitter taste. Common side effects include headache, ringing in the ears, trouble seeing, and sweating. More severe side effects include deafness, low blood platelets, and an irregular heartbeat. Use can make one more prone to sunburn. While it is unclear if use during pregnancy causes harm to the baby, treating malaria during pregnancy with quinine when appropriate is still recommended. Quinine is an alkaloid, a naturally occurring chemical compound. How it works as a medicine is not entirely clear. Quinine was first isolated in 1820 from the bark of a cinchona tree, which is native to Peru. Bark extracts had been used to treat malaria since at least 1632, and the bark was introduced to Spain as early as 1636 by Jesuit missionaries from the New World. Quinine is on the World Health Organization's List of Essential Medicines, the safest and most effective medicines needed in a health system. The wholesale price in the developing world is about US$1.70 to $3.40 per course of treatment. In the United States, a course of treatment costs more than $200. As of 2006, quinine is no longer recommended by the World Health Organization (WHO) as a first-line treatment for malaria, because there are other substances that are equally effective with fewer side effects. They recommend that it be used only when artemisinins are not available. Quinine is also used to treat lupus and arthritis. Quinine was frequently prescribed as an off-label treatment for leg cramps at night, but this has become less common due to a warning from the US Food and Drug Administration (FDA) that such practice is associated with life-threatening side effects. Quinine is a basic amine and is usually provided as a salt. Various existing preparations include the hydrochloride, dihydrochloride, sulfate, bisulfate and gluconate. In the United States, quinine sulfate is commercially available in 324-mg tablets under the brand name Qualaquin. All quinine salts may be given orally or intravenously (IV); quinine gluconate may also be given intramuscularly (IM) or rectally (PR). The main problem with the rectal route is that the dose can be expelled before it is completely absorbed; in practice, this is corrected by giving a further half dose. No injectable preparation of quinine is licensed in the US; quinidine is used instead. Quinine is a flavor component of tonic water and bitter lemon drink mixers. On the soda gun behind many bars, tonic water is designated by the letter "Q", representing quinine. According to tradition, because of the bitter taste of anti-malarial quinine tonic, British colonials in India mixed it with gin to make it more palatable, thus creating the gin and tonic cocktail, which is still popular today. In France, quinine is an ingredient of an apéritif known as quinquina, or "Cap Corse", and of the wine-based apéritif Dubonnet. In Spain, quinine (also known as "Peruvian bark" for its origin from the native cinchona tree) is sometimes blended into sweet Malaga wine, which is then called "Malaga Quina".
In Italy, the traditional flavoured wine Barolo Chinato is infused with quinine and local herbs, and is served as a digestif. In Canada and Italy, quinine is an ingredient in the carbonated chinotto beverages Brio and San Pellegrino. In Scotland, the company A.G. Barr uses quinine as an ingredient in the carbonated and caffeinated beverage Irn-Bru. In Uruguay and Argentina, quinine is an ingredient of a PepsiCo tonic water named Paso de los Toros. In Denmark, it is used as an ingredient in the carbonated sports drink Faxe Kondi made by Royal Unibrew. As a flavouring agent in drinks, quinine is limited to less than 83 parts per million in the United States, with a similar limit in the European Union. Quinine and quinidine are used as the chiral moieties for the ligands used in Sharpless asymmetric dihydroxylation, as well as for numerous other chiral catalyst backbones. Because of its relatively constant and well-known fluorescence quantum yield, quinine is used in photochemistry as a common fluorescence standard. Because of the narrow difference between its therapeutic and toxic effects, quinine is a common cause of drug-induced disorders, including thrombocytopenia and thrombotic microangiopathy. Even at the low levels found in common beverages, quinine can have severe adverse effects involving multiple organ systems, among which are immune system effects and fever, hypotension, hemolytic anemia, acute kidney injury, liver toxicity, and blindness. In people with atrial fibrillation, conduction defects, or heart block, quinine can cause heart arrhythmias and should be avoided. Quinine can cause hemolysis in G6PD deficiency (an inherited deficiency), but this risk is small and the physician should not hesitate to use quinine in people with G6PD deficiency when there is no alternative. Quinine can cause unpredictable serious and life-threatening blood and cardiovascular reactions, including low platelet count and hemolytic-uremic syndrome/thrombotic thrombocytopenic purpura (HUS/TTP), long QT syndrome and other serious cardiac arrhythmias including torsades de pointes, blackwater fever, disseminated intravascular coagulation, leukopenia, and neutropenia. Some people who have developed TTP due to quinine have gone on to develop kidney failure. It can also cause serious hypersensitivity reactions, including anaphylactic shock, urticaria, serious skin rashes (including Stevens–Johnson syndrome and toxic epidermal necrolysis), angioedema, facial edema, bronchospasm, granulomatous hepatitis, and itchiness. The most common adverse effects involve a group of symptoms called cinchonism, which can include headache, vasodilation and sweating, nausea, tinnitus, hearing impairment, vertigo or dizziness, blurred vision, and disturbance in color perception. More severe cinchonism includes vomiting, diarrhea, abdominal pain, deafness, blindness, and disturbances in heart rhythms. Cinchonism is much less common when quinine is given by mouth, but oral quinine is not well tolerated (quinine is exceedingly bitter and many people will vomit after ingesting quinine tablets). Other drugs, such as Fansidar (sulfadoxine with pyrimethamine) or Malarone (proguanil with atovaquone), are often used when oral therapy is required. Quinine ethyl carbonate is tasteless and odourless, but is available commercially only in Japan. Blood glucose, electrolyte and cardiac monitoring are not necessary when quinine is given by mouth.
Quinine has diverse unwanted interactions with numerous prescription drugs, such as potentiating the anticoagulant effects of warfarin. Quinine is used for its toxicity to the malarial pathogen "Plasmodium falciparum", interfering with the parasite's ability to dissolve and metabolize hemoglobin. As with other quinoline antimalarial drugs, the precise mechanism of action of quinine has not been fully resolved, although in vitro studies indicate it inhibits nucleic acid and protein synthesis, and inhibits glycolysis in "P. falciparum". The most widely accepted hypothesis of its action is based on the well-studied and closely related quinoline drug, chloroquine. This model involves the inhibition of hemozoin biocrystallization in the heme detoxification pathway, which leads to the aggregation of cytotoxic heme. Free cytotoxic heme accumulates in the parasites, causing their deaths. Quinine may also target the malarial purine nucleoside phosphorylase enzyme. The UV absorption of quinine peaks around 350 nm (in UVA). Fluorescent emission peaks at around 460 nm (a bright blue/cyan hue). Quinine is highly fluorescent (quantum yield ~0.58) in 0.1 M sulfuric acid solution. Cinchona trees remain the only economically practical source of quinine. However, under wartime pressure during World War II, research towards its synthetic production was undertaken. A formal chemical synthesis was accomplished in 1944 by American chemists R.B. Woodward and W.E. Doering. Since then, several more efficient quinine total syntheses have been achieved, but none of them can compete in economic terms with isolation of the alkaloid from natural sources. The first synthetic organic dye, mauveine, was discovered by William Henry Perkin in 1856 while he was attempting to synthesize quinine. In the first step of quinine biosynthesis, the enzyme strictosidine synthase catalyzes a stereoselective Pictet–Spengler reaction between tryptamine and secologanin to yield strictosidine. Suitable modification of strictosidine leads to an aldehyde: hydrolysis and decarboxylation first remove one carbon from the iridoid portion, producing corynantheal. The tryptamine side-chain is then cleaved adjacent to the nitrogen, and this nitrogen is bonded to the acetaldehyde function to yield cinchonaminal. Ring opening of the indole heterocycle generates new amine and keto functions. The new quinoline heterocycle is then formed by combining this amine with the aldehyde produced in the tryptamine side-chain cleavage, giving cinchonidinone. In the last step, hydroxylation and methylation give quinine. Quinine was used as a muscle relaxant by the Quechua people, who are indigenous to Peru, Bolivia and Ecuador, to halt shivering due to low temperatures. The Quechua would mix the ground bark of cinchona trees with sweetened water to offset the bark's bitter taste, thus producing something similar to tonic water. Spanish Jesuit missionaries were the first to bring cinchona to Europe. The Spanish had observed the Quechua's use of cinchona and were aware of the medicinal properties of cinchona bark by the 1570s or earlier: Nicolás Monardes (1571) and Juan Fragoso (1572) both described a tree, subsequently identified as the cinchona tree, whose bark was used to produce a drink to treat diarrhea.
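The comparative method that makes quinine useful as a fluorescence standard can be sketched in a few lines of Python. The formula below is the standard single-point relative quantum yield relation; the measured values (intensities, absorbances, refractive indices) are hypothetical placeholders, and the 0.58 reference yield is the figure for quinine in 0.1 M sulfuric acid quoted above.

```python
def relative_quantum_yield(i_sample, a_sample, n_sample,
                           i_ref, a_ref, n_ref, phi_ref=0.58):
    """Estimate a fluorophore's quantum yield against a quinine reference.

    Comparative (single-point) method:
        Phi_x = Phi_ref * (I_x / I_ref) * (A_ref / A_x) * (n_x**2 / n_ref**2)
    where I is the integrated emission intensity, A is the absorbance at the
    excitation wavelength (kept low, ~0.05, to limit inner-filter effects),
    and n is the refractive index of each solvent.
    """
    return (phi_ref * (i_sample / i_ref) * (a_ref / a_sample)
            * (n_sample ** 2 / n_ref ** 2))

# Hypothetical readings: an unknown dye in water vs. quinine in 0.1 M H2SO4.
phi = relative_quantum_yield(i_sample=4.2e5, a_sample=0.048, n_sample=1.333,
                             i_ref=5.0e5, a_ref=0.050, n_ref=1.339)
print(f"Estimated quantum yield: {phi:.2f}")  # ~0.50
```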
https://en.wikipedia.org/wiki?curid=25297
Quasispecies model The quasispecies model is a description of the process of the Darwinian evolution of certain self-replicating entities within the framework of physical chemistry. A quasispecies is a large group or "cloud" of related genotypes that exist in an environment of high mutation rate (at stationary state), where a large fraction of offspring are expected to contain one or more mutations relative to the parent. This is in contrast to a species, which from an evolutionary perspective is a more-or-less stable single genotype, most of the offspring of which will be genetically accurate copies. The model is useful mainly in providing a qualitative understanding of the evolutionary processes of self-replicating macromolecules such as RNA or DNA or simple asexual organisms such as bacteria or viruses (see also viral quasispecies), and is helpful in explaining something of the early stages of the origin of life. Quantitative predictions based on this model are difficult because the parameters that serve as its input are impossible to obtain from actual biological systems. The quasispecies model was put forward by Manfred Eigen and Peter Schuster based on initial work done by Eigen. When evolutionary biologists describe competition between species, they generally assume that each species is a single genotype whose descendants are mostly accurate copies. (Such genotypes are said to have a high reproductive "fidelity".) In evolutionary terms, we are interested in the behavior and fitness of that one species or genotype over time. Some organisms or genotypes, however, may exist in circumstances of low fidelity, where most descendants contain one or more mutations. A group of such genotypes is constantly changing, so discussions of which single genotype is the most fit become meaningless. Importantly, if many closely related genotypes are only one mutation away from each other, then genotypes in the group can mutate back and forth into each other. For example, with one mutation per generation, a child of the sequence AGGT could be AGTT, and a grandchild could be AGGT again. Thus we can envision a "cloud" of related genotypes that is rapidly mutating, with sequences going back and forth among different points in the cloud. Though the proper definition is mathematical, that cloud, roughly speaking, is a quasispecies. Quasispecies behavior arises for large numbers of individuals at a certain (high) range of mutation rates. In a species, though reproduction may be mostly accurate, periodic mutations will give rise to one or more competing genotypes. If a mutation results in greater replication and survival, the mutant genotype may out-compete the parent genotype and come to dominate the species. Thus, the individual genotypes (or species) may be seen as the units on which selection acts, and biologists will often speak of a single genotype's fitness. In a quasispecies, however, mutations are ubiquitous and so the fitness of an individual genotype becomes meaningless: if one particular mutation generates a boost in reproductive success, it cannot amount to much because that genotype's offspring are unlikely to be accurate copies with the same properties. Instead, what matters is the "connectedness" of the cloud. For example, the sequence AGGT has 12 (3+3+3+3) possible single-point mutants: AGGA, AGGG, and so on.
If 10 of those mutants are viable genotypes that may reproduce (and some of whose offspring or grandchildren may mutate back into AGGT again), we would consider that sequence a well-connected node in the cloud. If instead only two of those mutants are viable, the rest being lethal mutations, then that sequence is poorly connected and most of its descendants will not reproduce. The analog of fitness for a quasispecies is the tendency of nearby relatives within the cloud to be well-connected, meaning that more of the mutant descendants will be viable and give rise to further descendants within the cloud. When the fitness of a single genotype becomes meaningless because of the high rate of mutations, the cloud as a whole, or quasispecies, becomes the natural unit of selection. The quasispecies model represents the evolution of high-mutation-rate viruses such as HIV, and sometimes of single genes or molecules within the genomes of other organisms. Quasispecies models have also been proposed by Jose Fontanari and Emmanuel David Tannenbaum to model the evolution of sexual reproduction. Quasispecies behavior was also shown in compositional replicators (based on the GARD model for abiogenesis), and the concept has also been suggested to be applicable to cell replication, which among other things requires the maintenance and evolution of the internal composition of the parent and bud. The model rests on four basic assumptions about how sequences are copied, mutate, and are selected. In the quasispecies model, mutations occur through errors made in the process of copying already existing sequences. Further, selection arises because different types of sequences tend to replicate at different rates, which leads to the suppression of sequences that replicate more slowly in favor of sequences that replicate faster. However, the quasispecies model does not predict the ultimate extinction of all but the fastest replicating sequence. Although the sequences that replicate more slowly cannot sustain their abundance level by themselves, they are constantly replenished as sequences that replicate faster mutate into them. At equilibrium, removal of slowly replicating sequences due to decay or outflow is balanced by replenishing, so that even relatively slowly replicating sequences can remain present in finite abundance. Due to the ongoing production of mutant sequences, selection does not act on single sequences, but on mutational "clouds" of closely related sequences, referred to as "quasispecies". In other words, the evolutionary success of a particular sequence depends not only on its own replication rate, but also on the replication rates of the mutant sequences it produces, and on the replication rates of the sequences of which it is a mutant. As a consequence, the sequence that replicates fastest may even disappear completely in selection-mutation equilibrium, in favor of more slowly replicating sequences that are part of a quasispecies with a higher average growth rate. Mutational clouds as predicted by the quasispecies model have been observed in RNA viruses and in "in vitro" RNA replication. The mutation rate and the general fitness of the molecular sequences and their neighbors are crucial to the formation of a quasispecies. If the mutation rate is zero, there is no exchange by mutation, and each sequence is its own species. If the mutation rate is too high, exceeding what is known as the error threshold, the quasispecies will break down and be dispersed over the entire range of available sequences.
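The mutant-counting arithmetic above is easy to make concrete. The following Python sketch enumerates the 3L single-point mutants of a length-L sequence and counts how many are viable; the viability rule passed in is a made-up stand-in for real fitness data, used only for illustration.

```python
ALPHABET = "ACGT"

def single_point_mutants(seq):
    """Yield every sequence differing from `seq` at exactly one position.

    A length-L sequence over a 4-letter alphabet has 3*L such mutants
    (three alternative letters at each of the L positions).
    """
    for i, original in enumerate(seq):
        for letter in ALPHABET:
            if letter != original:
                yield seq[:i] + letter + seq[i + 1:]

def connectedness(seq, is_viable):
    """Count viable single-point mutants of `seq`, a crude measure of how
    well-connected the genotype is within its quasispecies cloud."""
    return sum(1 for m in single_point_mutants(seq) if is_viable(m))

print(len(list(single_point_mutants("AGGT"))))  # 12, i.e. 3+3+3+3

# Toy viability rule for illustration: any mutant containing "TT" is lethal.
print(connectedness("AGGT", lambda s: "TT" not in s))  # 11 of the 12 survive
```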
A simple mathematical model for a quasispecies is as follows: let there be $S$ possible sequences and let there be $n_i$ organisms with sequence $i$. Let's say that each of these organisms asexually gives rise to $A_i$ offspring. Some are duplicates of their parent, having sequence $i$, but some are mutant and have some other sequence. Let the mutation rate $q_{ij}$ correspond to the probability that a $j$-type parent will produce an $i$-type organism. Then the expected fraction of offspring generated by $j$-type organisms that would be $i$-type organisms is $q_{ij}$, where $\sum_i q_{ij} = 1$. Then the total number of $i$-type organisms after the first round of reproduction, written $n_i'$, is

$$n_i' = \sum_j A_j q_{ij} n_j = \sum_j W_{ij} n_j, \qquad W_{ij} = A_j q_{ij}.$$

Sometimes a death rate term $D_j$ is included, so that

$$n_i' = \sum_j W_{ij} n_j, \qquad W_{ij} = A_j q_{ij} - D_j \delta_{ij},$$

where $\delta_{ij}$ is equal to 1 when $i = j$ and is zero otherwise. Note that the $n$-th generation can be found by just taking the $n$-th power of $W$ and substituting it in place of $W$ in the above formula. This is just a system of linear equations. The usual way to solve such a system is to first diagonalize the $W$ matrix. Its diagonal entries will be eigenvalues corresponding to certain linear combinations of certain subsets of sequences, which will be eigenvectors of the $W$ matrix. These subsets of sequences are the quasispecies. Assuming that the matrix $W$ is a primitive matrix (irreducible and aperiodic), then after very many generations only the eigenvector with the largest eigenvalue will prevail, and it is this quasispecies that will eventually dominate. The components of this eigenvector give the relative abundance of each sequence at equilibrium. $W$ being primitive means that for some integer $k$, the power $W^k$ is $> 0$, i.e. all its entries are positive. If $W$ is primitive then each type can, through a sequence of mutations (i.e. powers of $W$), mutate into all the other types after some number of generations. $W$ is not primitive if it is periodic, where the population can perpetually cycle through different disjoint sets of compositions, or if it is reducible, where the dominant species (or quasispecies) that develops can depend on the initial population, as is the case in the simple example given below.

The quasispecies formulae may be expressed as a set of linear differential equations. If we consider the difference between the new state $n_i'$ and the old state $n_i$ to be the state change over one moment of time, then we can state that the time derivative of $n_i$ is given by this difference, $\dot{n}_i = n_i' - n_i$, and we can write

$$\dot{n}_i = \sum_j \left( W_{ij} - \delta_{ij} \right) n_j.$$

The quasispecies equations are usually expressed in terms of concentrations $x_i$, where

$$x_i = \frac{n_i}{\sum_j n_j}.$$

The above equations for the quasispecies then become, for the discrete version,

$$x_i' = \frac{\sum_j W_{ij} x_j}{\sum_{j,k} W_{jk} x_k},$$

or, for the continuum version,

$$\dot{x}_i = \sum_j W_{ij} x_j - x_i \sum_{j,k} W_{jk} x_k.$$

The quasispecies concept can be illustrated by a simple system consisting of 4 sequences. Sequences [0,0], [0,1], [1,0], and [1,1] are numbered 1, 2, 3, and 4, respectively. Let's say the [0,0] sequence never mutates and always produces a single offspring. Let's say the other 3 sequences all produce, on average, $a$ replicas of themselves, and $b$ of each of the other two types, where $a + 2b > 1$. The $W$ matrix is then:

$$W = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & a & b & b \\ 0 & b & a & b \\ 0 & b & b & a \end{bmatrix}.$$

The diagonalized matrix is:

$$\begin{bmatrix} a - b & 0 & 0 & 0 \\ 0 & a - b & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & a + 2b \end{bmatrix},$$

and the eigenvectors corresponding to these eigenvalues are $[0, -1, 0, 1]$, $[0, -1, 1, 0]$, $[1, 0, 0, 0]$, and $[0, 1, 1, 1]$, respectively. Only the eigenvalue $a + 2b$ is more than unity. For the $n$-th generation, the corresponding eigenvalue will be $(a + 2b)^n$ and so will increase without bound as time goes by.
This eigenvalue corresponds to the eigenvector $[0, 1, 1, 1]$, which represents the quasispecies consisting of sequences 2, 3, and 4, which will be present in equal numbers after a very long time. Since all population numbers must be positive, the first two quasispecies (those for the repeated eigenvalue $a - b$) are not legitimate. The third quasispecies consists of only the non-mutating sequence 1. We see that even though sequence 1 is the most fit, in the sense that it reproduces more of itself than any other sequence, the quasispecies consisting of the other three sequences will eventually dominate (assuming that the initial population was not composed exclusively of sequence 1).
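The four-sequence example can be checked numerically. The NumPy sketch below uses illustrative values a = 0.5 and b = 0.3 (so that a + 2b = 1.1 > 1): it recovers the eigenvalues 1, a - b (twice) and a + 2b, shows that the dominant eigenvector is proportional to [0, 1, 1, 1], and confirms that iterating n' = Wn from a mixed initial population converges to that quasispecies.

```python
import numpy as np

a, b = 0.5, 0.3  # illustrative replication/mutation rates, a + 2b = 1.1 > 1
# W matrix from the example: sequence 1 copies itself exactly once;
# sequences 2-4 each produce `a` copies of themselves and `b` of the others.
W = np.array([[1, 0, 0, 0],
              [0, a, b, b],
              [0, b, a, b],
              [0, b, b, a]], dtype=float)

eigvals, eigvecs = np.linalg.eig(W)
print(np.round(eigvals, 3))  # the set {1, a-b (twice), a+2b} = {1, 0.2, 0.2, 1.1}

# Dominant eigenvector, scaled by its largest-magnitude entry: [0, 1, 1, 1].
v = eigvecs[:, np.argmax(eigvals)]
print(np.round(v / v[np.abs(v).argmax()], 3))

# Iterating n' = W n from a mixed population: sequences 2-4 take over in
# equal proportions, even though sequence 1 replicates itself perfectly.
n = np.array([1.0, 1.0, 0.0, 0.0])
for _ in range(200):
    n = W @ n
print(np.round(n / n.sum(), 3))  # ~[0, 1/3, 1/3, 1/3]
```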
https://en.wikipedia.org/wiki?curid=25308
Qing dynasty The Qing dynasty, officially the Great Qing (大清), was the last imperial dynasty of China. It was established in 1636, and ruled China proper from 1644 to 1912. It was preceded by the Ming dynasty and succeeded by the Republic of China. The Qing multi-cultural empire lasted for almost three centuries and formed the territorial base for modern China. It was the fifth largest empire in world history in terms of territorial size. The dynasty was founded by the Manchu Aisin Gioro clan in Manchuria. In the late sixteenth century, Nurhaci, originally a Ming vassal, began organizing "Banners", military-social units that included Manchu, Han, and Mongol elements. Nurhaci united Manchu clans and officially proclaimed the Later Jin dynasty in 1616. His son Hong Taiji began driving Ming forces out of the Liaodong Peninsula and declared a new dynasty, the Qing, in 1636. As Ming control disintegrated, peasant rebels led by Li Zicheng conquered the capital Beijing in 1644. Ming general Wu Sangui refused to serve them, but opened the Shanhai Pass to the Banner Armies led by the regent Prince Dorgon, who defeated the rebels and seized the capital. Dorgon served as Prince Regent under the Shunzhi Emperor and implemented policies to consolidate Qing rule. Resistance from Ming loyalists in the south and the Revolt of the Three Feudatories led by Wu Sangui delayed complete conquest until 1683, under the Kangxi Emperor (1661–1722). The Ten Great Campaigns of the Qianlong Emperor from the 1750s to the 1790s extended Qing control into Inner Asia. During the peak of the Qing dynasty, the empire ruled over the entirety of today's Mainland China, Hainan, Taiwan, Mongolia, Outer Manchuria and Outer Northwest China. The early Qing rulers maintained their Manchu customs, and while their title was Emperor, they used "Bogd khaan" when dealing with the Mongols, and they were patrons of Tibetan Buddhism. They governed using Confucian styles and institutions of bureaucratic government and retained the imperial examinations to recruit Han Chinese to work under or in parallel with Manchus. They also adapted the ideals of the Chinese tributary system in asserting superiority over peripheral countries such as Korea and Vietnam, while annexing neighboring territories such as Tibet and Mongolia. The dynasty reached its high point in the late 18th century, then gradually declined in the face of challenges from abroad, internal revolts, population growth, disruption of the economy, corruption, and the reluctance of ruling elites to change their mindsets. The population rose to some 400 million, but taxes and government revenues were fixed at a low rate, leading to a fiscal crisis. Following the Opium Wars, European powers led by Great Britain imposed "unequal treaties", free trade, extraterritoriality and treaty ports under foreign control. The Taiping Rebellion (1850–1864) and the Dungan Revolt (1862–1877) in Central Asia led to the deaths of some 20 million people due to famine, disease, and war. In spite of these disasters, in the Tongzhi Restoration of the 1860s, Han Chinese elites rallied to the defense of the Confucian order and the Manchu rulers. The initial gains of the Self-Strengthening Movement were lost in the First Sino-Japanese War of 1895, in which the Qing lost its influence over Korea and the possession of Taiwan.
New Armies were organized, but the ambitious Hundred Days' Reform of 1898 was turned back in a coup by the conservative Empress Dowager Cixi (1835–1908), who was the dominant voice in the national government (with one interruption) after 1861. When the Scramble for Concessions by foreign powers triggered the violently anti-foreign Boxer movement in 1900, in which many foreigners and Chinese Christians were killed, the foreign powers invaded China. Cixi sided with the Boxers, and the Qing were decisively defeated by the eight invading powers, forcing the Imperial Court to flee to Xi'an. After agreeing to sign the Boxer Protocol, the government initiated unprecedented fiscal and administrative reforms, including elections, a new legal code, and the abolition of the examination system. Sun Yat-sen and other revolutionaries competed with constitutional monarchists such as Kang Youwei and Liang Qichao to transform the Qing Empire into a modern nation. After the deaths of the Guangxu Emperor and Cixi in 1908, the hardline Manchu court alienated reformers and local elites alike by obstructing social reform. The Wuchang Uprising on 10 October 1911 led to the Xinhai Revolution. General Yuan Shikai negotiated the abdication of Puyi, the last emperor, on 12 February 1912. Thereafter, Qing troops in Tibet and Xinjiang were defeated as well. Nurhaci declared himself the "Bright Khan" of the "Jin" (lit. "gold"; known in Chinese historiography as the "Later Jin") state in honor both of the 12th–13th century Jurchen-led Jin dynasty and of his Aisin Gioro clan ("Aisin" being Manchu for the Chinese 金, "jīn", "gold"). His son Hong Taiji renamed the dynasty "Great Qing" in 1636. There are competing explanations of the meaning of "Qīng" (lit. "clear" or "pure"). The name may have been selected in reaction to the name of the Ming dynasty (明), which consists of the Chinese characters for "sun" (日) and "moon" (月), both associated with the fire element of the Chinese five-element system. The character "Qīng" (清) is composed of "water" (水) and "azure" (青), both associated with the water element. This association would justify the Qing conquest as the defeat of fire by water. The water imagery of the new name may also have had Buddhist overtones of perspicacity and enlightenment, and connections with the Bodhisattva Manjusri. The Manchu name "daicing", which sounds like a phonetic rendering of "Dà Qīng" or "Dai Ching", may in fact have been derived from the Mongolian word "дайчин" ("daichin"), which means "warrior". "Daicing gurun" may therefore have meant "warrior state", a pun that was only intelligible to Manchu and Mongol people. In the later part of the dynasty, however, even the Manchus themselves had forgotten this possible meaning. After conquering "China proper", the Manchus identified their state as "China" (中國, "Zhōngguó"; "Middle Kingdom"), and referred to it as "Dulimbai Gurun" in Manchu ("Dulimbai" means "central" or "middle," "gurun" means "nation" or "state"). The emperors equated the lands of the Qing state (including present-day Northeast China, Xinjiang, Mongolia, Tibet and other areas) with "China" in both the Chinese and Manchu languages, defining China as a multi-ethnic state and rejecting the idea that "China" only meant Han areas. The Qing emperors proclaimed that both Han and non-Han peoples were part of "China".
They used both "China" and "Qing" to refer to their state in official documents, international treaties (as the Qing was known internationally as "China" or the "Chinese Empire") and foreign affairs, and "Chinese language" (Manchu: "Dulimbai gurun i bithe") included Chinese, Manchu, and Mongol languages, and "Chinese people" (中國之人 "Zhōngguó zhī rén"; Manchu: "Dulimbai gurun i niyalma") referred to all subjects of the empire. In the Chinese-language versions of its treaties and its maps of the world, the Qing government used "Qing" and "China" interchangeably. The Qing dynasty was founded not by Han Chinese, who constitute the majority of the Chinese population, but by a sedentary farming people known as the Jurchen, a Tungusic people who lived around the region now comprising the Chinese provinces of Jilin and Heilongjiang. The Manchus are sometimes mistaken for a nomadic people, which they were not. What was to become the Manchu state was founded by Nurhaci, the chieftain of a minor Jurchen tribethe Aisin Gioroin Jianzhou in the early 17th century. Nurhaci may have spent time in a Chinese household in his youth, and became fluent in Chinese as well as Mongol, and read the Chinese novels Romance of the Three Kingdoms and Water Margin. Originally a vassal of the Ming emperors, Nurhaci embarked on an intertribal feud in 1582 that escalated into a campaign to unify the nearby tribes. By 1616, he had sufficiently consolidated Jianzhou so as to be able to proclaim himself Khan of the Great Jin in reference to the previous Jurchen dynasty. Two years later, Nurhaci announced the "Seven Grievances" and openly renounced the sovereignty of Ming overlordship in order to complete the unification of those Jurchen tribes still allied with the Ming emperor. After a series of successful battles, he relocated his capital from Hetu Ala to successively bigger captured Ming cities in Liaodong: first Liaoyang in 1621, then Shenyang (Mukden) in 1625. When the Jurchens were reorganized by Nurhaci into the Eight Banners, many Manchu clans were artificially created as a group of unrelated people founded a new Manchu clan (mukun) using a geographic origin name such as a toponym for their hala (clan name). The irregularities over Jurchen and Manchu clan origin led to the Qing trying to document and systematize the creation of histories for Manchu clans, including manufacturing an entire legend around the origin of the Aisin Gioro clan by taking mythology from the northeast. Relocating his court from Jianzhou to Liaodong provided Nurhaci access to more resources; it also brought him in close contact with the Khorchin Mongol domains on the plains of Mongolia. Although by this time the once-united Mongol nation had long since fragmented into individual and hostile tribes, these tribes still presented a serious security threat to the Ming borders. Nurhaci's policy towards the Khorchins was to seek their friendship and cooperation against the Ming, securing his western border from a powerful potential enemy. Furthermore, the Khorchin proved a useful ally in the war, lending the Jurchens their expertise as cavalry archers. To guarantee this new alliance, Nurhaci initiated a policy of inter-marriages between the Jurchen and Khorchin nobilities, while those who resisted were met with military action. This is a typical example of Nurhaci's initiatives that eventually became official Qing government policy. During most of the Qing period, the Mongols gave military assistance to the Manchus. 
Nurhaci's other important contributions include ordering the creation of a written Manchu script based on the Mongolian script, after the earlier Jurchen script, itself derived from Khitan and Chinese, had been forgotten. Nurhaci also created the civil and military administrative system that eventually evolved into the Eight Banners, the defining element of Manchu identity and the foundation for transforming the loosely knit Jurchen tribes into a nation. There were too few ethnic Manchus to conquer China proper, so they gained strength by defeating and absorbing Mongols. More importantly, they added Han Chinese to the Eight Banners. The Manchus had to create an entire "Jiu Han jun" (Old Han Army) due to the massive number of Han Chinese soldiers who were absorbed into the Eight Banners by both capture and defection. Ming artillery was responsible for many victories against the Manchus, so the Manchus established an artillery corps made up of Han Chinese soldiers in 1641, and the swelling of Han Chinese numbers in the Eight Banners led in 1642 to all eight Han Banners being created. Armies of Han Chinese who had defected from the Ming conquered southern China for the Qing. Han Chinese played a massive role in the Qing conquest of China. Han Chinese generals who defected to the Manchus were often given women from the imperial Aisin Gioro family in marriage, while the ordinary soldiers who surrendered were often given non-royal Manchu women as wives. Jurchen (Manchu) women married Han Chinese in Liaodong. Manchu Aisin Gioro princesses were also given in marriage to the sons of Han Chinese officials. The unbroken series of Nurhaci's military successes ended in January 1626, when he was defeated by Yuan Chonghuan while laying siege to Ningyuan. He died a few months later and was succeeded by his eighth son, Hong Taiji, who emerged after a short political struggle amongst other contenders as the new Khan. Although Hong Taiji was an experienced leader and the commander of two Banners at the time of his succession, his reign did not start well on the military front. The Jurchens suffered yet another defeat in 1627 at the hands of Yuan Chonghuan. As before, this defeat was, in part, due to the Ming's newly acquired Portuguese cannons. To redress the technological and numerical disparity, Hong Taiji in 1634 created his own artillery corps, the "ujen cooha" ("heavy troops"), from among his existing Han soldiers, who cast their own cannons in the European design with the help of defecting Chinese metallurgists. One of the defining events of Hong Taiji's reign was the official adoption of the name "Manchu" for the united Jurchen people in November 1635. In 1635, the Manchus' Mongol allies were fully incorporated into a separate Banner hierarchy under direct Manchu command. Hong Taiji conquered the territory north of Shanhai Pass held by the Ming dynasty, and defeated Ligdan Khan in Inner Mongolia. In April 1636, the Mongol nobility of Inner Mongolia, the Manchu nobility, and Han mandarins held a kurultai in Shenyang and recommended that the khan of the Later Jin be proclaimed emperor of the Great Qing empire. One of the Yuan dynasty's jade seals was also presented to the emperor (Bogd Setsen Khan) by the nobility. After reportedly being presented with the imperial seal of the Yuan dynasty following the defeat of the last Khagan of the Mongols, Hong Taiji renamed his state from "Great Jin" to "Great Qing" and elevated his position from Khan to Emperor, signalling imperial ambitions beyond unifying the Manchu territories. Hong Taiji then proceeded in 1636 to invade Korea again.
The change of the name from Jurchen to Manchu was made to hide the fact that the ancestors of the Manchus, the Jianzhou Jurchens, had been ruled by the Chinese. The Qing dynasty carefully hid the original editions of the books "Qing Taizu Wu Huangdi Shilu" and the "Manzhou Shilu Tu" (Taizu Shilu Tu) in the Qing palace, forbidden from public view because they showed that the Manchu Aisin Gioro family had been ruled by the Ming dynasty and had followed many customs that seemed "uncivilized" in later eyes. In the Ming period, the Koreans of Joseon regarded the Jurchen-inhabited lands north of the Korean peninsula, above the Yalu and Tumen rivers, as part of Ming China, the "superior country" (sangguk), as they called Ming China. The Qing deliberately excluded from the History of Ming references and information that showed the Jurchens (Manchus) as subservient to the Ming dynasty, in order to hide this former subservient relationship. For this reason, the Veritable Records of Ming were not used as a source for content on the Jurchens during Ming rule in the History of Ming. After the Second Manchu invasion of Korea, Joseon Korea was forced to give several of its royal princesses as concubines to the Qing Manchu regent Prince Dorgon. In 1650, Dorgon married the Korean Princess Uisun. The creation of Hong Taiji's artillery corps in the 1630s was followed by that of the first two Han Banners in 1637 (increasing to eight in 1642). Together these military reforms enabled Hong Taiji to resoundingly defeat Ming forces in a series of battles from 1640 to 1642 for the territories of Songshan and Jinzhou. This final victory resulted in the surrender of many of the Ming dynasty's most battle-hardened troops, the death of Yuan Chonghuan at the hands of the Chongzhen Emperor (who thought Yuan had betrayed him), and the complete and permanent withdrawal of the remaining Ming forces north of the Great Wall. Meanwhile, Hong Taiji set up a rudimentary bureaucratic system based on the Ming model. He established six boards, or executive-level ministries, in 1631 to oversee finance, personnel, rites, military affairs, punishments, and public works. However, these administrative organs played very little role initially, and it was not until the eve of completing the conquest ten years later that they fulfilled their government roles. Hong Taiji's bureaucracy was staffed with many Han Chinese, including many newly surrendered Ming officials. The Manchus' continued dominance was ensured by an ethnic quota for top bureaucratic appointments. Hong Taiji's reign also saw a fundamental change of policy towards his Han Chinese subjects. Nurhaci had treated Han in Liaodong differently according to how much grain they had: those with less than 5 to 7 sin were treated badly, while those with more were rewarded with property. Due to a revolt by Han in Liaodong in 1623, Nurhaci, who had previously given concessions to conquered Han subjects there, turned against them and ordered that they no longer be trusted. He enacted discriminatory policies and killings against them, while ordering that Han who had assimilated to the Jurchen (in Jilin) before 1619 be treated equally, as Jurchens were, and not like the conquered Han in Liaodong. Hong Taiji recognized that the Manchus needed to attract Han Chinese, explaining to reluctant Manchus why he needed to treat the Ming defector General Hong Chengchou leniently. Hong Taiji instead incorporated conquered Han into the Jurchen "nation" as full (if not first-class) citizens, obligated to provide military service.
By 1648, less than one-sixth of the bannermen were of Manchu ancestry. This change of policy not only increased Hong Taiji's manpower and reduced his military dependence on banners not under his personal control, but also greatly encouraged other Han Chinese subjects of the Ming dynasty to surrender and accept Jurchen rule when they were defeated militarily. Through these and other measures Hong Taiji was able to centralize power in the office of the Khan, which in the long run prevented the Jurchen federation from fragmenting after his death. Hong Taiji died suddenly in September 1643. As the Jurchens had traditionally "elected" their leader through a council of nobles, the Qing state did not have a clear succession system. The leading contenders for power were Hong Taiji's oldest son Hooge and Hong Taiji's half brother Dorgon. A compromise installed Hong Taiji's five-year-old son, Fulin, as the Shunzhi Emperor, with Dorgon as regent and de facto leader of the Manchu nation. Meanwhile, Ming government officials fought against each other, against fiscal collapse, and against a series of peasant rebellions. They were unable to capitalise on the Manchu succession dispute and the presence of a minor as emperor. In April 1644, the capital, Beijing, was sacked by a coalition of rebel forces led by Li Zicheng, a former minor Ming official, who established a short-lived Shun dynasty. The last Ming ruler, the Chongzhen Emperor, committed suicide when the city fell to the rebels, marking the official end of the dynasty. Li Zicheng then led a collection of rebel forces numbering some 200,000 to confront Wu Sangui, the general commanding the Ming garrison at Shanhai Pass, a key pass of the Great Wall located fifty miles northeast of Beijing, which defended the capital. Wu Sangui, caught between a rebel army twice the size of his own and an enemy he had fought for years, cast his lot with the foreign but familiar Manchus. Wu Sangui may have been influenced by Li Zicheng's mistreatment of wealthy and cultured officials, including Wu's own family; it was said that Li took Wu's concubine Chen Yuanyuan for himself. Wu and Dorgon allied in the name of avenging the death of the Chongzhen Emperor. Together, the two former enemies met and defeated Li Zicheng's rebel forces in battle on May 27, 1644. The newly allied armies captured Beijing on 6 June. The Shunzhi Emperor was invested as the "Son of Heaven" on 30 October. The Manchus, who had positioned themselves as political heirs to the Ming emperor by defeating Li Zicheng, completed the symbolic transition by holding a formal funeral for the Chongzhen Emperor. However, conquering the rest of China proper took another seventeen years of battling Ming loyalists, pretenders and rebels. The last Ming pretender, Prince Gui, sought refuge with the King of Burma, Pindale Min, but was turned over to a Qing expeditionary army commanded by Wu Sangui, who had him brought back to Yunnan province and executed in early 1662. The Qing had taken shrewd advantage of Ming civilian government discrimination against the military and encouraged the Ming military to defect by spreading the message that the Manchus valued their skills. Banners made up of Han Chinese who defected before 1644 were classed among the Eight Banners, giving them social and legal privileges in addition to being acculturated to Manchu traditions.
Han defectors swelled the ranks of the Eight Banners so greatly that ethnic Manchus became a minority: only 16% in 1648, with Han Bannermen dominating at 75% and Mongol Bannermen making up the rest. Gunpowder weapons like muskets and artillery were wielded by the Chinese Banners. Normally, Han Chinese defector troops were deployed as the vanguard, while Manchu Bannermen acted as reserve forces or in the rear and were used predominantly for quick strikes with maximum impact, so as to minimize ethnic Manchu losses. This multi-ethnic force conquered China for the Qing. The three Liaodong Han Bannermen officers who played key roles in the conquest of southern China were Shang Kexi, Geng Zhongming, and Kong Youde, who went on to govern southern China autonomously as viceroys for the Qing after the conquest. Han Chinese Bannermen made up the majority of governors in the early Qing, and they governed and administered China after the conquest, stabilizing Qing rule. Han Bannermen dominated the posts of governor-general and governor in the time of the Shunzhi and Kangxi Emperors, largely excluding ordinary Han civilians from these posts. To promote ethnic harmony, a 1648 decree allowed Han Chinese civilian men to marry Manchu women from the Banners with the permission of the Board of Revenue if they were registered daughters of officials or commoners, or with the permission of their banner company captain if they were unregistered commoners. Later in the dynasty the policies allowing intermarriage were done away with. The head of the southern cadet branch of Confucius's descendants, who held the title "Wujing boshi" (Doctor of the Five Classics), and the 65th-generation descendant in the northern branch, who held the title Duke Yansheng, both had their titles confirmed by the Shunzhi Emperor upon the Qing entry into Beijing on 31 October. The Kong family's title of Duke was maintained in later reigns. The first seven years of the Shunzhi Emperor's reign were dominated by the regent prince Dorgon. Because of his own political insecurity, Dorgon followed Hong Taiji's example by ruling in the name of the emperor at the expense of rival Manchu princes, many of whom he demoted or imprisoned under one pretext or another. Although the period of his regency was relatively short, Dorgon's precedents and example cast a long shadow over the dynasty. First, the Manchus had entered "South of the Wall" because Dorgon responded decisively to Wu Sangui's appeal. Then, after capturing Beijing, instead of sacking the city as the rebels had done, Dorgon insisted, over the protests of other Manchu princes, on making it the dynastic capital and reappointing most Ming officials. Choosing Beijing as the capital had not been a straightforward decision, since no major Chinese dynasty had directly taken over its immediate predecessor's capital. Keeping the Ming capital and bureaucracy intact helped quickly stabilize the regime and sped up the conquest of the rest of the country. Dorgon then drastically reduced the influence of the eunuchs, a major force in the Ming bureaucracy, and directed Manchu women not to bind their feet in the Chinese style. However, not all of Dorgon's policies were equally popular or as easy to implement. The controversial July 1645 edict (the "haircutting order") forced adult Han Chinese men, on pain of death, to shave the front of their heads and comb the remaining hair into the queue hairstyle worn by Manchu men.
The popular description of the order was: "To keep the hair, you lose the head; to keep your head, you cut the hair." To the Manchus, this policy was a test of loyalty and an aid in distinguishing friend from foe. For the Han Chinese, however, it was a humiliating reminder of Qing authority that challenged traditional Confucian values. The "Classic of Filial Piety" ("Xiaojing") held that "a person's body and hair, being gifts from one's parents, are not to be damaged". Under the Ming dynasty, adult men did not cut their hair but instead wore it in the form of a top-knot. The order triggered strong resistance to Qing rule in Jiangnan and massive killing of Han Chinese. It was Han Chinese defectors who carried out massacres against people refusing to wear the queue. Li Chengdong, a Han Chinese general who had served the Ming but surrendered to the Qing, ordered his Han troops to carry out three separate massacres in the city of Jiading within a month, resulting in tens of thousands of deaths. At the end of the third massacre, there was hardly a living person left in the city. Jiangyin also held out against about 10,000 Han Chinese Qing troops for 83 days. When the city wall was finally breached on 9 October 1645, the Han Chinese Qing army, led by the Han Chinese Ming defector Liu Liangzuo (劉良佐), who had been ordered to "fill the city with corpses before you sheathe your swords", massacred the entire population, killing between 74,000 and 100,000 people. The queue was the only aspect of Manchu culture which the Qing forced on the common Han population. The Qing required people serving as officials to wear official Qing clothing, but allowed non-official Han civilians to continue wearing Hanfu (Han clothing). Han Chinese did not object to wearing the queue braid on the back of the head, as they traditionally wore all their hair long, but they fiercely objected to shaving the forehead; the Qing government therefore focused on forcing people to shave the forehead rather than on the wearing of the braid. Han rebels in the first half of the Qing period who objected to the Qing hairstyle wore the braid but defied orders to shave the front of the head. One man was executed for refusing to shave the front of his head even though he had willingly braided the back of his hair. Only later did westernized revolutionaries, influenced by Western hairstyles, begin to view the braid as backward and advocate adopting short Western haircuts. Han rebels against the Qing, such as the Taiping, even retained their queue braids on the back; the symbol of their rebellion was instead the growing of hair on the front of the head. This led the Qing government to view shaving the front of the head as the primary sign of loyalty, rather than the wearing of the braid on the back, which did not violate Han customs and to which traditional Han did not object. Koxinga mocked and criticized the Qing hairstyle, comparing the shaven pate to a fly. Koxinga and his men objected to shaving when the Qing demanded that they shave in exchange for recognizing Koxinga as a feudatory. The Qing likewise demanded that Zheng Jing and his men on Taiwan shave in order to receive recognition as a fiefdom; his men and the Ming prince Zhu Shugui fiercely objected to shaving. On 31 December 1650, Dorgon suddenly died during a hunting expedition, marking the official start of the Shunzhi Emperor's personal rule.
Because the emperor was only 12 years old at that time, most decisions were made on his behalf by his mother, Empress Dowager Xiaozhuang, who turned out to be a skilled political operator. Although his support had been essential to Shunzhi's ascent, Dorgon had centralised so much power in his hands as to become a direct threat to the throne, so much so that upon his death he was bestowed the extraordinary posthumous title of Emperor Yi, the only instance in Qing history in which a Manchu "prince of the blood" was so honored. Two months into Shunzhi's personal rule, however, Dorgon was not only stripped of his titles, but his corpse was disinterred and mutilated to atone for multiple "crimes", one of which was persecuting to death Shunzhi's agnate eldest brother, Hooge. More importantly, Dorgon's symbolic fall from grace also led to the purge of his family and associates at court, thus reverting power back to the person of the emperor. After a promising start, Shunzhi's reign was cut short by his early death in 1661 at the age of 24 from smallpox. He was succeeded by his third son Xuanye, who reigned as the Kangxi Emperor. The Manchus sent Han Bannermen to fight against Koxinga's Ming loyalists in Fujian. They removed the population from coastal areas in order to deprive the loyalists of resources. This led to a misunderstanding that the Manchus were "afraid of water". In fact, Han Bannermen carried out the fighting and killing, casting doubt on the claim that fear of the water led to the coastal evacuation and the ban on maritime activities. Even though a poem refers to the soldiers carrying out massacres in Fujian as "barbarians", both the Han Green Standard Army and Han Bannermen were involved and carried out the worst slaughter. 400,000 Green Standard Army soldiers were used against the Three Feudatories, in addition to 200,000 Bannermen. The sixty-one-year reign of the Kangxi Emperor was the longest of any Chinese emperor. Kangxi's reign is also celebrated as the beginning of an era known as the "High Qing", during which the dynasty reached the zenith of its social, economic and military power. Kangxi's long reign started when he was eight years old, upon the untimely demise of his father. To prevent a repeat of Dorgon's dictatorial monopolizing of power during the regency, the Shunzhi Emperor, on his deathbed, hastily appointed four senior cabinet ministers to govern on behalf of his young son. The four ministers – Sonin, Ebilun, Suksaha, and Oboi – were chosen for their long service, but also to counteract each other's influences. Most important, the four were not closely related to the imperial family and laid no claim to the throne. However, as time passed, through chance and machination, Oboi, the most junior of the four, achieved such political dominance as to be a potential threat. Even though Oboi's loyalty was never an issue, his personal arrogance and political conservatism led him into an escalating conflict with the young emperor. In 1669 Kangxi, through trickery, disarmed and imprisoned Oboi – a significant victory for a fifteen-year-old emperor over a wily politician and experienced commander. The early Manchu rulers established two foundations of legitimacy that help to explain the stability of their dynasty. The first was the bureaucratic institutions and the neo-Confucian culture that they adopted from earlier dynasties. Manchu rulers and Han Chinese scholar-official elites gradually came to terms with each other.
The examination system offered a path for ethnic Han to become officials. Imperial patronage of the Kangxi Dictionary demonstrated respect for Confucian learning, while the Sacred Edict of 1670 effectively extolled Confucian family values. Kangxi's attempts to discourage Chinese women from foot binding, however, were unsuccessful. The second major source of stability was the Central Asian aspect of the Manchu identity, which allowed the rulers to appeal to Mongol, Tibetan and Uighur constituents. The ways in which the Qing legitimized its rule differed for the Chinese, Mongolian and Tibetan peoples. This contradicted the traditional Chinese worldview, which required the acculturation of "barbarians"; the Qing emperors, on the contrary, sought to prevent such acculturation in regard to the Mongols and Tibetans. The Qing used the title of Emperor (Huangdi) in Chinese, while among Mongols the Qing monarch was referred to as Bogda khan (wise Khan), and in Tibet as Gong Ma. The Qianlong Emperor propagated the image of himself as a Buddhist sage ruler, a patron of Tibetan Buddhism. In the Manchu language, the Qing monarch was alternately referred to as either Huwangdi (Emperor) or Khan, with no special distinction between the two usages. The Kangxi Emperor also welcomed to his court Jesuit missionaries, who had first come to China under the Ming. Missionaries including Tomás Pereira, Martino Martini, Johann Adam Schall von Bell, Ferdinand Verbiest and Antoine Thomas held significant positions as military weapons experts, mathematicians, cartographers, astronomers and advisers to the emperor. The relationship of trust was, however, lost in the later Chinese Rites controversy. Yet controlling the "Mandate of Heaven" was a daunting task. The vastness of China's territory meant that there were only enough banner troops to garrison key cities forming the backbone of a defense network that relied heavily on surrendered Ming soldiers. In addition, three surrendered Ming generals were singled out for their contributions to the establishment of the Qing dynasty, ennobled as feudal princes (藩王), and given governorships over vast territories in southern China. The chief of these was Wu Sangui, who was given the provinces of Yunnan and Guizhou, while generals Shang Kexi and Geng Jingzhong were given Guangdong and Fujian provinces respectively. As the years went by, the three feudal lords and their extensive territories became increasingly autonomous. Finally, in 1673, Shang Kexi petitioned Kangxi for permission to retire to his hometown in Liaodong province and nominated his son as his successor. The young emperor granted his retirement but denied the heredity of his fief. In reaction, the two other generals decided to petition for their own retirements to test Kangxi's resolve, thinking that he would not risk offending them. The move backfired as the young emperor called their bluff by accepting their requests and ordering that all three fiefdoms be reverted to the crown. Faced with the stripping of their powers, Wu Sangui, later joined by Geng Jingzhong and by Shang Kexi's son Shang Zhixin, felt they had no choice but to revolt. The ensuing Revolt of the Three Feudatories lasted for eight years. Wu attempted, ultimately in vain, to rekindle the embers of Ming loyalty in southern China by restoring Ming customs, but he then declared himself emperor of a new dynasty instead of restoring the Ming. At the peak of the rebels' fortunes, they extended their control as far north as the Yangtze River, nearly establishing a divided China.
Wu then hesitated to push further north, unable to coordinate strategy with his allies, and Kangxi was able to unify his forces for a counterattack led by a new generation of Manchu generals. By 1681, the Qing government had established control over a ravaged southern China, which took several decades to recover. Manchu generals and Bannermen had initially been put to shame by the better performance of the Han Chinese Green Standard Army. Kangxi accordingly assigned generals Sun Sike, Wang Jinbao, and Zhao Liangdong to crush the rebels, since he thought that Han Chinese were superior to Bannermen at battling other Han people. Similarly, in north-western China against Wang Fuchen, the Qing used Han Chinese Green Standard Army soldiers and Han Chinese generals as the primary military forces. This choice was due to the rocky terrain, which favoured infantry troops over cavalry, to the desire to keep Bannermen in reserve, and, again, to the belief that Han troops were better at fighting other Han people. Also because of the mountainous terrain, Sichuan and southern Shaanxi were retaken by the Green Standard Army in 1680, with Manchus participating only in logistics and provisions. 400,000 Green Standard Army soldiers and 150,000 Bannermen served on the Qing side during the war, with 213 Han Chinese Banner companies and 527 companies of Mongol and Manchu Banners mobilized during the revolt. The Qing forces had been crushed by Wu from 1673 to 1674. The Qing had the support of the majority of Han Chinese soldiers and the Han elite against the Three Feudatories, since these refused to join Wu Sangui in the revolt, while the Eight Banners and Manchu officers fared poorly against Wu Sangui; the Qing therefore responded by using a massive army of more than 900,000 Han Chinese (non-Banner) troops, instead of the Eight Banners, to fight and crush the Three Feudatories. Wu Sangui's forces were ultimately crushed by the Green Standard Army, made up of defected Ming soldiers. To extend and consolidate the dynasty's control in Central Asia, the Kangxi Emperor personally led a series of military campaigns against the Dzungars in Outer Mongolia. The Kangxi Emperor was able to successfully expel Galdan's invading forces from these regions, which were then incorporated into the empire. Galdan was eventually killed in the Dzungar–Qing War. In 1683, Qing forces received the surrender of Formosa (Taiwan) from Zheng Keshuang, grandson of Koxinga (who had conquered Taiwan from the Dutch colonists as a base against the Qing). Zheng Keshuang was awarded the title "Duke Haicheng" (海澄公) and was inducted into the Han Chinese Plain Red Banner of the Eight Banners when he moved to Beijing. Several Ming princes had accompanied Koxinga to Taiwan in 1661–1662, including the Prince of Ningjing, Zhu Shugui, and Prince Zhu Honghuan (朱弘桓), son of Zhu Yihai; they lived there in the Kingdom of Tungning. In 1683, the Qing sent the 17 Ming princes still living on Taiwan back to mainland China, where, spared from execution, they spent the rest of their lives in exile. Winning Taiwan freed Kangxi's forces for a series of battles over Albazin, the far eastern outpost of the Tsardom of Russia. Zheng's former soldiers on Taiwan, such as the rattan-shield troops, were also inducted into the Eight Banners and used by the Qing against Russian Cossacks at Albazin.
The 1689 Treaty of Nerchinsk was China's first formal treaty with a European power and kept the border peaceful for the better part of two centuries. After Galdan's death, his followers, as adherents of Tibetan Buddhism, attempted to control the choice of the next Dalai Lama. Kangxi dispatched two armies to Lhasa, the capital of Tibet, and installed a Dalai Lama sympathetic to the Qing. By the end of the 17th century, China was at its greatest height of confidence and political control since the Ming dynasty. The reigns of the Yongzheng Emperor (r. 1723–1735) and his son, the Qianlong Emperor (r. 1735–1796), marked the height of Qing power. During this period, the Qing Empire ruled over 13 million square kilometers of territory. Yet, as the historian Jonathan Spence puts it, the empire by the end of the Qianlong reign was "like the sun at midday". In the midst of "many glories", he writes, "signs of decay and even collapse were becoming apparent". After the death of the Kangxi Emperor in the winter of 1722, his fourth son, Prince Yong (雍親王), became the Yongzheng Emperor. In the later years of Kangxi's reign, Yongzheng and his brothers had fought over the succession, and there were rumours that he had usurped the throne – most of the rumours held that Yongzheng's brother Yinti (Kangxi's 14th son) was the real successor chosen by the Kangxi Emperor, and that Yongzheng and his confidant Longkodo had tampered with Kangxi's testament on the night Kangxi died, though there was little evidence for these charges. In fact, his father had trusted him with delicate political issues and discussed state policy with him. When Yongzheng came to power at the age of 45, he felt a sense of urgency about the problems that had accumulated in his father's later years, and he did not need instruction on how to exercise power. In the words of one recent historian, he was "severe, suspicious, and jealous, but extremely capable and resourceful", and in the words of another, he turned out to be an "early modern state-maker of the first order". Yongzheng moved rapidly. First, he promoted Confucian orthodoxy and reversed what he saw as his father's laxness by cracking down on unorthodox sects and by decapitating an anti-Manchu writer his father had pardoned. In 1723 he outlawed Christianity and expelled Christian missionaries, though some were allowed to remain in the capital. Next, he moved to control the government. He expanded his father's system of Palace Memorials, which brought frank and detailed reports on local conditions directly to the throne without being intercepted by the bureaucracy, and he created a small Grand Council of personal advisors, which eventually grew into the emperor's "de facto" cabinet for the rest of the dynasty. He shrewdly filled key positions with Manchu and Han Chinese officials who depended on his patronage. When he began to realize that the financial crisis was even greater than he had thought, Yongzheng rejected his father's lenient approach to local landowning elites and mounted a campaign to enforce collection of the land tax. The increased revenues were to be used for "money to nourish honesty" among local officials and for local irrigation, schools, roads, and charity. Although these reforms were effective in the north, in the south and the lower Yangzi valley, where Kangxi had wooed the elites, there were long-established networks of officials and landowners.
Yongzheng dispatched experienced Manchu commissioners to penetrate the thickets of falsified land registers and coded account books, but they were met with tricks, passivity, and even violence. The fiscal crisis persisted. Yongzheng also inherited diplomatic and strategic problems. A team made up entirely of Manchus drew up the Treaty of Kyakhta (1727) to solidify the diplomatic understanding with Russia. In exchange for territory and trading rights, the Qing would have a free hand dealing with the situation in Mongolia. Yongzheng then turned to that situation, where the Zunghars threatened to re-emerge, and to the southwest, where local Miao chieftains resisted Qing expansion. These campaigns drained the treasury but established the emperor's control of the military and military finance. The Yongzheng Emperor died in 1735. His 24-year-old son, Prince Bao (寶親王), then became the Qianlong Emperor. Qianlong personally led military campaigns near Xinjiang and Mongolia, putting down revolts and uprisings in Sichuan and parts of southern China while expanding control over Tibet. The Qianlong Emperor launched several ambitious cultural projects, including the compilation of the "Siku Quanshu", or "Complete Repository of the Four Branches of Literature". With a total of over 3,400 books, 79,000 chapters, and 36,304 volumes, the "Siku Quanshu" is the largest collection of books in Chinese history. Nevertheless, Qianlong used the literary inquisition to silence opposition. Accusations began with the emperor's own interpretation of the true meaning of the words in question; if he decided these were derogatory or cynical towards the dynasty, persecution would begin. The literary inquisition began with isolated cases under the Shunzhi and Kangxi Emperors, but became a pattern under Qianlong's rule, during which there were 53 cases of literary persecution. Beneath outward prosperity and imperial confidence, the later years of Qianlong's reign were marked by rampant corruption and neglect. Heshen, the emperor's handsome young favorite, took advantage of the emperor's indulgence to become one of the most corrupt officials in the history of the dynasty. Qianlong's son, the Jiaqing Emperor (r. 1796–1820), eventually forced Heshen to commit suicide. China also began suffering from mounting overpopulation during this period. Population growth was stagnant for the first half of the 17th century due to civil wars and epidemics, but prosperity and internal stability gradually reversed this trend. The introduction of new crops from the Americas, such as the potato and peanut, also improved the food supply, so that the total population of China ballooned from 100 million to 300 million people during the 18th century. Soon all available farmland was used up, forcing peasants to work ever-smaller plots ever more intensively. The Qianlong Emperor once bemoaned the country's situation by remarking, "The population continues to grow, but the land does not." The only remaining part of the empire with arable farmland was Manchuria, where the provinces of Jilin and Heilongjiang had been walled off as a Manchu homeland, and the emperor decreed for the first time that Han Chinese civilians were forbidden to settle there.
The Qing forbade Mongols from crossing the borders of their banners, even into other Mongol banners, and from crossing into the neidi (the eighteen Han Chinese provinces), imposing serious punishments on those who did, in order to keep the Mongols divided against one another to the dynasty's benefit. Mongols wishing to leave their banner's borders for religious reasons such as pilgrimage had to apply for passports granting them permission. Select groups of Han Chinese bannermen were transferred en masse into Manchu Banners by the Qing, changing their ethnicity from Han Chinese to Manchu: Han Chinese bannermen of Tai Nikan 台尼堪 (watchpost Chinese) and Fusi Nikan 抚顺尼堪 (Fushun Chinese) backgrounds were moved into the Manchu banners in 1740 by order of the Qianlong Emperor. It was between 1618 and 1629 that the Han Chinese from Liaodong who later became the Fushun Nikan and Tai Nikan defected to the Jurchens (Manchus). These Han Chinese origin Manchu clans continue to use their original Han surnames and are marked as of Han origin on Qing lists of Manchu clans. Despite officially prohibiting Han Chinese settlement on the Manchu and Mongol lands, by the 18th century the Qing decided to settle Han refugees from northern China, who were suffering from famine, floods, and drought, in Manchuria and Inner Mongolia. Han Chinese then streamed into Manchuria, both illegally and legally, over the Great Wall and Willow Palisade. As Manchu landlords desired Han Chinese to rent their land and grow grain, most Han Chinese migrants were not evicted. During the eighteenth century Han Chinese farmed 500,000 hectares of privately owned land in Manchuria and 203,583 hectares of lands that were part of courier stations, noble estates, and Banner lands. In garrisons and towns in Manchuria, Han Chinese made up 80% of the population. In 1796, open rebellion broke out by the White Lotus Society against the Qing government. The White Lotus Rebellion continued for eight years, until 1804, and marked a turning point in the history of the Qing dynasty. At the start of the dynasty, the Chinese empire continued to be the hegemonic power in East Asia. Although there was no formal ministry of foreign relations, the Lifan Yuan was responsible for relations with the Mongols and Tibetans in Central Asia, while the tributary system, a loose set of institutions and customs taken over from the Ming, in theory governed relations with East and Southeast Asian countries. The Treaty of Nerchinsk (1689) stabilized relations with Czarist Russia. In the Jahriyya revolt, sectarian violence between two suborders of the Naqshbandi Sufis, the Jahriyya Sufi Muslims and their rivals the Khafiyya Sufi Muslims, led to a Jahriyya rebellion, which the Qing dynasty crushed with the help of the Khafiyya. The Eight Trigrams uprising broke out in 1813. However, during the 18th century European empires gradually expanded across the world, as European states developed economies built on maritime trade. The dynasty was confronted with newly developing concepts of the international system and state-to-state relations. European trading posts expanded into territorial control in nearby India and on the islands that are now Indonesia. The Qing response, successful for a time, was to establish the Canton System in 1756, which restricted maritime trade to that city (modern-day Guangzhou) and gave monopoly trading rights to private Chinese merchants.
The British East India Company and the Dutch East India Company had long before been granted similar monopoly rights by their governments. In 1793, the British East India Company, with the support of the British government, sent a delegation to China under Lord George Macartney in order to open free trade and put relations on a basis of equality. The imperial court viewed trade as of secondary interest, whereas the British saw maritime trade as the key to their economy. The Qianlong Emperor told Macartney "the kings of the myriad nations come by land and sea with all sorts of precious things", and "consequently there is nothing we lack ..." Since China's economy was essentially self-sufficient, the country had little need to import goods or raw materials from the Europeans, so demand in Europe for Chinese goods such as silk, tea, and ceramics could only be met if European companies funneled their limited supplies of silver into China. In the late 1700s, the governments of Britain and France were deeply concerned about the imbalance of trade and the drain of silver, and to meet the growing Chinese demand for opium, the British East India Company greatly expanded its production in Bengal. The Daoguang Emperor, concerned both over the outflow of silver and the damage that opium smoking was causing to his subjects, ordered Lin Zexu to end the opium trade. Lin confiscated the stocks of opium without compensation in 1839, leading Britain to send a military expedition the following year. The First Opium War revealed the outdated state of the Chinese military. The Qing navy, composed entirely of wooden sailing junks, was severely outclassed by the modern tactics and firepower of the British Royal Navy. British soldiers, using advanced muskets and artillery, easily outmanoeuvred and outgunned Qing forces in ground battles. The Qing surrender in 1842 marked a decisive, humiliating blow to China. The Treaty of Nanjing, the first of the "unequal treaties", demanded war reparations, forced China to open up the Treaty Ports of Canton, Amoy, Fuchow, Ningpo and Shanghai to Western trade and missionaries, and to cede Hong Kong Island to Britain. It revealed weaknesses in the Qing government and provoked rebellions against the regime. In 1842, the Qing dynasty also fought a war with the Sikh Empire (the last independent kingdom of India), resulting in a negotiated peace and a return to the "status quo ante bellum". The Taiping Rebellion in the mid-19th century was the first major instance of anti-Manchu sentiment. Amid widespread social unrest and worsening famine, the rebellion not only posed the most serious threat towards Qing rulers, it has also been called the "bloodiest civil war of all time"; during its fourteen-year course from 1850 to 1864 between 20 and 30 million people died. Hong Xiuquan, a failed civil service candidate, launched an uprising in Guangxi province in 1851 and established the Taiping Heavenly Kingdom with himself as king. Hong announced that he had visions of God and that he was the brother of Jesus Christ. Slavery, concubinage, arranged marriage, opium smoking, footbinding, judicial torture, and the worship of idols were all banned. However, success led to internal feuds, defections and corruption. In addition, British and French troops, equipped with modern weapons, had come to the assistance of the Qing imperial army. It was not until 1864 that Qing armies under Zeng Guofan succeeded in crushing the revolt.
After the outbreak of this rebellion, there were also revolts by the Muslims and Miao people of China against the Qing dynasty, most notably in the Miao Rebellion (1854–73) in Guizhou, the Panthay Rebellion (1856–1873) in Yunnan and the Dungan Revolt (1862–77) in the northwest. The Western powers, largely unsatisfied with the Treaty of Nanjing, gave grudging support to the Qing government during the Taiping and Nian Rebellions. China's income fell sharply during the wars as vast areas of farmland were destroyed, millions of lives were lost, and countless armies were raised and equipped to fight the rebels. In 1854, Britain tried to re-negotiate the Treaty of Nanjing, inserting clauses allowing British commercial access to Chinese rivers and the creation of a permanent British embassy at Beijing. In 1856, Qing authorities, searching for a pirate, boarded a ship, the "Arrow", which the British claimed had been flying the British flag, an incident which led to the Second Opium War. In 1858, facing no other options, the Xianfeng Emperor agreed to the Treaty of Tientsin, which contained clauses deeply insulting to the Chinese, such as a demand that all official Chinese documents be written in English and a proviso granting British warships unlimited access to all navigable Chinese rivers. Ratification of the treaty in the following year led to a resumption of hostilities. In 1860, with Anglo-French forces marching on Beijing, the emperor and his court fled the capital for the imperial hunting lodge at Rehe. Once in Beijing, the Anglo-French forces looted the Old Summer Palace and, in an act of revenge for the arrest of several Englishmen, burnt it to the ground. Prince Gong, a younger half-brother of the emperor, who had been left as his brother's proxy in the capital, was forced to sign the Convention of Beijing. The humiliated emperor died the following year at Rehe. Yet the dynasty rallied. Chinese generals and officials such as Zuo Zongtang led the suppression of rebellions and stood behind the Manchus. When the Tongzhi Emperor came to the throne at the age of five in 1861, these officials rallied around him in what was called the Tongzhi Restoration. Their aim was to adopt Western military technology in order to preserve Confucian values. Zeng Guofan, in alliance with Prince Gong, sponsored the rise of younger officials such as Li Hongzhang, who put the dynasty back on its feet financially and instituted the Self-Strengthening Movement. The reformers then proceeded with institutional reforms, including China's first unified ministry of foreign affairs, the Zongli Yamen; allowing foreign diplomats to reside in the capital; the establishment of the Imperial Maritime Customs Service; the formation of modernized armies, such as the Beiyang Army, as well as a navy; and the purchase from Europeans of armament factories. The dynasty lost control of peripheral territories bit by bit. In return for promises of support against the British and the French, the Russian Empire took large chunks of territory in the Northeast in 1860. The period of cooperation between the reformers and the European powers ended with the Tientsin Massacre of 1870, in which the belligerence of local French diplomats helped incite a mob that murdered French nuns. Starting with the Cochinchina Campaign in 1858, France expanded control of Indochina. By 1883, France was in full control of the region and had reached the Chinese border.
The Sino-French War began with a surprise attack by the French on the Chinese southern fleet at Fuzhou, after which China declared war on France. A French invasion of Taiwan was halted and the French were defeated on land in Tonkin at the Battle of Bang Bo. However, Japan threatened to enter the war against China due to the Gapsin Coup, and China chose to end the war through negotiation. The war ended in 1885 with the Treaty of Tientsin (1885) and Chinese recognition of the French protectorate in Vietnam. In 1884, pro-Japanese Koreans in Seoul had led the Gapsin Coup, and tensions between China and Japan rose after China intervened to suppress the uprising. Japanese Prime Minister Itō Hirobumi and Li Hongzhang signed the Convention of Tientsin, an agreement to withdraw troops simultaneously, but the First Sino-Japanese War of 1894–1895 was a military humiliation. The Treaty of Shimonoseki recognized Korean independence and ceded Taiwan and the Pescadores to Japan. The terms might have been harsher, but when a Japanese citizen attacked and wounded Li Hongzhang, an international outcry shamed the Japanese into revising them. The original agreement stipulated the cession of the Liaodong Peninsula to Japan, but Russia, with its own designs on the territory, along with Germany and France in the Triple Intervention, successfully put pressure on the Japanese to abandon the peninsula. These years saw an evolution in the participation of Empress Dowager Cixi (Wade–Giles: Tz'u-Hsi) in state affairs. She entered the imperial palace in the 1850s as a concubine to the Xianfeng Emperor (r. 1850–1861) and came to power in 1861 after her five-year-old son, the Tongzhi Emperor, ascended the throne. She, the Empress Dowager Ci'an (who had been Xianfeng's empress), and Prince Gong (a son of the Daoguang Emperor) staged a coup that ousted several regents for the boy emperor. Between 1861 and 1873, she and Ci'an served as regents, choosing the reign title "Tongzhi" (ruling together). Following the emperor's death in 1875, Cixi's nephew, the Guangxu Emperor, took the throne, in violation of the dynastic custom that the new emperor be of the next generation, and another regency began. In the spring of 1881, Ci'an suddenly died, aged only forty-three, leaving Cixi as sole regent. From 1889, when Guangxu began to rule in his own right, to 1898, the Empress Dowager lived in semi-retirement, spending the majority of the year at the Summer Palace. On 1 November 1897, two German Roman Catholic missionaries were murdered in the southern part of Shandong province (the Juye Incident). Germany used the murders as a pretext for a naval occupation of Jiaozhou Bay. The occupation prompted a "scramble for concessions" in 1898, which included the German lease of Jiaozhou Bay, the Russian acquisition of Liaodong, and the British lease of the New Territories of Hong Kong. In the wake of these external defeats, the Guangxu Emperor initiated the Hundred Days' Reform of 1898. Newer, more radical advisers such as Kang Youwei were given positions of influence. The emperor issued a series of edicts, and plans were made to reorganize the bureaucracy, restructure the school system, and appoint new officials. Opposition from the bureaucracy was immediate and intense. Although she had been involved in the initial reforms, the Empress Dowager stepped in to call them off, arrested and executed several reformers, and took over day-to-day control of policy. Yet many of the plans stayed in place, and the goals of reform were implanted.
Widespread drought in North China, combined with the imperialist designs of European powers and the instability of the Qing government, created conditions that led to the emergence of the Righteous and Harmonious Fists, or "Boxers". In 1900, local groups of Boxers proclaiming support for the Qing dynasty murdered foreign missionaries and large numbers of Chinese Christians, then converged on Beijing to besiege the Foreign Legation Quarter. A coalition of European, Japanese, and Russian armies (the Eight-Nation Alliance) then entered China without diplomatic notice, much less permission. Cixi declared war on all of these nations, only to lose control of Beijing after a short but hard-fought campaign. She fled to Xi'an. The victorious allies drew up scores of demands on the Qing government, including compensation for their expenses in invading China and the execution of complicit officials. By the early 20th century, mass civil disorder had begun in China, and it was growing continuously. To overcome such problems, Empress Dowager Cixi issued an imperial edict in 1901 calling for reform proposals from the governors-general and governors and initiated the era of the dynasty's "New Policies", also known as the "Late Qing Reform". The edict paved the way for the most far-reaching reforms in terms of their social consequences, including the creation of a national education system and the abolition of the imperial examinations in 1905. The Guangxu Emperor died on 14 November 1908, and on 15 November 1908, Cixi also died. Rumors held that she or Yuan Shikai had ordered trusted eunuchs to poison the Guangxu Emperor, and an autopsy conducted nearly a century later confirmed lethal levels of arsenic in his corpse. Puyi, the oldest son of Zaifeng, Prince Chun, and nephew to the childless Guangxu Emperor, was appointed successor at the age of two, leaving Zaifeng with the regency. This was followed by the dismissal of General Yuan Shikai from his former positions of power. In April 1911 Zaifeng created a cabinet with two vice-premiers. Nonetheless, this cabinet was also known by contemporaries as "The Royal Cabinet" because, among the thirteen cabinet members, five were members of the imperial family or Aisin Gioro relatives. This drew a wide range of negative opinions from senior officials. The Wuchang Uprising of 10 October 1911 was a success; by 14 November, 15 provinces had rejected Qing rule. This led to the creation of a new central government, the Republic of China, in Nanjing, with Sun Yat-sen as its provisional head. Many provinces soon began "separating" from Qing control. Seeing a desperate situation unfold, the Qing government brought Yuan Shikai back to military power. He took control of his Beiyang Army to crush the revolution in Wuhan at the Battle of Yangxia. After taking the position of Prime Minister and creating his own cabinet, Yuan Shikai went as far as to ask for the removal of Zaifeng from the regency; the removal was later carried out on the instructions of Empress Dowager Longyu. Yuan Shikai was now a dictator and the effective ruler of China; the Manchu dynasty had lost all power and formally abdicated in early 1912. Premier Yuan Shikai and his Beiyang commanders decided that going to war would be unreasonable and costly. Similarly, Sun Yat-sen wanted a republican constitutional reform, for the benefit of China's economy and populace.
With permission from Empress Dowager Longyu, Yuan Shikai began negotiating with Sun Yat-sen, who decided that his goal had been achieved in forming a republic, and that therefore he could allow Yuan to step into the position of President of the Republic of China. On 12 February 1912, after rounds of negotiations, Longyu issued an imperial edict bringing about the abdication of the child emperor Puyi. This brought an end to over 2,000 years of Imperial China and began an extended period of instability and warlord factionalism. Disorganized political and economic systems, combined with widespread criticism of Chinese culture, led to questioning and doubt about the future. Some Qing loyalists organized themselves as a "Royalist Party" and tried to use militant activism and open rebellions to restore the monarchy, but to no avail. In July 1917, there was an abortive attempt to restore the Qing dynasty led by Zhang Xun, which was quickly reversed by republican troops. In the 1930s, the Empire of Japan invaded Northeast China and founded Manchukuo in 1932, with Puyi as its emperor. After the invasion by the Soviet Union, Manchukuo fell in 1945. The early Qing emperors adopted the bureaucratic structures and institutions of the preceding Ming dynasty but split rule between Han Chinese and Manchus, with some positions also given to Mongols. Like previous dynasties, the Qing recruited officials via the imperial examination system, until the system was abolished in 1905. The Qing divided positions into civil and military appointments, each having nine grades or ranks, each subdivided into a and b categories. Civil appointments ranged from attendant to the emperor or Grand Secretary in the Forbidden City at the highest to prefectural tax collector, deputy jail warden, deputy police commissioner, or tax examiner at the lowest. Military appointments ranged from field marshal or chamberlain of the imperial bodyguard down to third class sergeant, corporal, or first or second class private. The formal structure of the Qing government centered on the Emperor as the absolute ruler, who presided over six Boards (Ministries), each headed by two presidents and assisted by four vice presidents. In contrast to the Ming system, however, Qing ethnic policy dictated that appointments were split between Manchu noblemen and Han officials who had passed the highest levels of the state examinations. The Grand Secretariat, which had been an important policy-making body under the Ming, lost its importance during the Qing and evolved into an imperial chancery. The institutions inherited from the Ming formed the core of the Qing "Outer Court", which handled routine matters and was located in the southern part of the Forbidden City. In order not to let the routine administration take over the running of the empire, the Qing emperors made sure that all important matters were decided in the "Inner Court", which was dominated by the imperial family and Manchu nobility and which was located in the northern part of the Forbidden City. The core institution of the inner court was the Grand Council. It emerged in the 1720s under the reign of the Yongzheng Emperor as a body charged with handling Qing military campaigns against the Mongols, but it soon took over other military and administrative duties and served to centralize authority under the crown. The Grand Councillors served as a sort of privy council to the emperor.
The Six Ministries were the Board of Civil Appointments, the Board of Revenue, the Board of Rites, the Board of War, the Board of Punishments, and the Board of Works. From the early Qing, the central government was characterized by a system of dual appointments, by which each position in the central government had a Manchu and a Han Chinese assigned to it. The Han Chinese appointee was required to do the substantive work and the Manchu to ensure Han loyalty to Qing rule. In addition to the six boards, there was a Lifan Yuan unique to the Qing government. This institution was established to supervise the administration of Tibet and the Mongol lands. As the empire expanded, it took over administrative responsibility for all minority ethnic groups living in and around the empire, including early contacts with Russia, then seen as a tribute nation. The office had the status of a full ministry and was headed by officials of equal rank. However, appointments were at first restricted to candidates of Manchu and Mongol ethnicity, and only later opened to Han Chinese as well. Even though the Board of Rites and the Lifan Yuan performed some duties of a foreign office, they fell short of developing into a professional foreign service. It was not until 1861 – a year after losing the Second Opium War to the Anglo-French coalition – that the Qing government bowed to foreign pressure and created a proper foreign affairs office known as the Zongli Yamen. The office was originally intended to be temporary and was staffed by officials seconded from the Grand Council. However, as dealings with foreigners became increasingly complicated and frequent, the office grew in size and importance, aided by revenue from customs duties which came under its direct jurisdiction. There was also another government institution, the Imperial Household Department, which was unique to the Qing dynasty. It was established before the fall of the Ming, but it matured only after 1661, following the death of the Shunzhi Emperor and the accession of his son, the Kangxi Emperor. The department's original purpose was to manage the internal affairs of the imperial family and the activities of the inner palace (in which tasks it largely replaced eunuchs), but it also played an important role in Qing relations with Tibet and Mongolia, engaged in trading activities (jade, ginseng, salt, furs, etc.), managed textile factories in the Jiangnan region, and even published books. Relations with the Salt Superintendents and salt merchants, such as those at Yangzhou, were particularly lucrative, especially since they were direct and did not go through absorptive layers of bureaucracy. The department was manned by "booi", or "bondservants", from the Upper Three Banners. By the 19th century, it managed the activities of at least 56 subagencies. Qing China reached its largest extent during the 18th century, when it ruled China proper (eighteen provinces) as well as the areas of present-day Northeast China, Inner Mongolia, Outer Mongolia, Xinjiang and Tibet, at approximately 13 million square kilometers in size. There were originally 18 provinces, all of them in China proper, but later this number was increased to 22, with Manchuria and Xinjiang being divided into or turned into provinces. Taiwan, originally part of Fujian province, became a province of its own in the 19th century, but was ceded to the Empire of Japan following the First Sino-Japanese War at the end of the century.
In addition, many surrounding countries, such as Korea (under the Joseon dynasty) and Vietnam, frequently paid tribute to China during much of this period. The Katoor dynasty of Afghanistan also paid tribute to the Qing dynasty until the mid-19th century. During the Qing dynasty the Chinese claimed suzerainty over the Taghdumbash Pamir in the south-west of Taxkorgan Tajik Autonomous County but permitted the Mir of Hunza to administer the region in return for a tribute; until 1937 the inhabitants paid tribute to the Mir of Hunza, who exercised control over the pastures. The Khanate of Kokand was forced to submit as a protectorate and pay tribute to the Qing between 1774 and 1798. The Qing organization of provinces was based on the fifteen administrative units set up by the Ming dynasty, later made into eighteen provinces by splitting, for example, Huguang into Hubei and Hunan provinces. The provincial bureaucracy continued the Yuan and Ming practice of three parallel lines: civil, military, and censorate, or surveillance. Each province was administered by a governor (巡抚, "xunfu") and a provincial military commander (提督, "tidu"). Below the province were prefectures (府, "fu") operating under a prefect (知府, "zhifu"), followed by subprefectures under a subprefect. The lowest unit was the county, overseen by a county magistrate. The eighteen provinces are also known as "China proper". The position of viceroy or governor-general (总督, "zongdu") was the highest rank in the provincial administration. There were eight regional viceroys in China proper, each of whom usually took charge of two or three provinces. The Viceroy of Zhili, who was responsible for the area surrounding the capital Beijing, is usually considered the most honorable and powerful of the eight. By the mid-18th century, the Qing had successfully put outer regions such as Inner and Outer Mongolia, Tibet and Xinjiang under its control. Imperial commissioners and garrisons were sent to Mongolia and Tibet to oversee their affairs. These territories were also under the supervision of a central government institution, the Lifan Yuan. Qinghai was also put under the direct control of the Qing court. Xinjiang, also known as Chinese Turkestan, was subdivided into the regions north and south of the Tian Shan mountains, known today as Dzungaria and the Tarim Basin respectively, but the post of Ili General was established in 1762 to exercise unified military and administrative jurisdiction over both regions. Dzungaria was fully opened to Han migration by the Qianlong Emperor from the beginning. Han migrants were at first forbidden from permanently settling in the Tarim Basin, but the ban was lifted after the invasion by Jahangir Khoja in the 1820s. Likewise, Manchuria was also governed by military generals until its division into provinces, though some areas of Xinjiang and Northeast China were lost to the Russian Empire in the mid-19th century. Manchuria was originally separated from China proper by the Inner Willow Palisade, a ditch and embankment planted with willows intended to restrict the movement of the Han Chinese, as the area was off-limits to civilian Han Chinese until the government started colonizing the area, especially from the 1860s. With respect to these outer regions, the Qing maintained imperial control, with the emperor acting as Mongol khan, patron of Tibetan Buddhism and protector of Muslims. However, Qing policy changed with the establishment of Xinjiang province in 1884.
During the Great Game era, taking advantage of the Dungan revolt in northwest China, Yaqub Beg invaded Xinjiang from Central Asia with support from the British Empire and made himself ruler of the kingdom of Kashgaria. The Qing court sent forces to defeat Yaqub Beg, reconquered Xinjiang, and then formally applied the political system of China proper to the region. The Kumul Khanate, which had been incorporated into the Qing empire as a vassal after helping the Qing defeat the Zunghars in 1757, maintained its status after Xinjiang became a province, through the end of the dynasty in the Xinhai Revolution, and up until 1930. In the early 20th century, Britain sent an expeditionary force to Tibet and forced the Tibetans to sign a treaty. The Qing court responded by asserting Chinese sovereignty over Tibet, resulting in the 1906 Anglo-Chinese Convention signed between Britain and China. The British agreed not to annex Tibetan territory or to interfere in the administration of Tibet, while China undertook not to permit any other foreign state to interfere with the territory or internal administration of Tibet. Furthermore, similarly to Xinjiang, which had been converted into a province earlier, the Qing government turned Manchuria into three provinces in the early 20th century, officially known as the "Three Northeast Provinces", and established the post of Viceroy of the Three Northeast Provinces to oversee them, bringing the total number of regional viceroys to nine. The early Qing military was rooted in the Eight Banners, first developed by Nurhaci to organize Jurchen society beyond petty clan affiliations. There were eight banners in all, differentiated by color. The yellow, bordered yellow, and white banners were known as the "Upper Three Banners" and were under the direct command of the emperor. Only Manchus belonging to the Upper Three Banners, and selected Han Chinese who had passed the highest level of martial exams, could serve as the emperor's personal bodyguards. The remaining Banners were known as the "Lower Five Banners". They were commanded by hereditary Manchu princes descended from Nurhaci's immediate family, known informally as the "Iron cap princes". Together they formed the ruling council of the Manchu nation as well as the high command of the army. Nurhaci's son Hong Taiji expanded the system to include mirrored Mongol and Han Banners. After capturing Beijing in 1644, the relatively small Banner armies were further augmented by the Green Standard Army, made up of those Ming troops who had surrendered to the Qing, which eventually outnumbered Banner troops three to one. They maintained their Ming-era organization and were led by a mix of Banner and Green Standard officers. Banner armies were organized along ethnic lines, namely Manchu and Mongol, but included non-Manchu bondservants registered under the household of their Manchu masters. The years leading up to the conquest increased the number of Han Chinese under Manchu rule, leading Hong Taiji to create the Han Banners, and around the time of the Qing takeover of Beijing their numbers rapidly swelled. Han Bannermen held high status and power in the early Qing period, especially immediately after the conquest during the Shunzhi and Kangxi reigns, when they dominated governor-generalships and governorships across China at the expense of both Manchu Bannermen and Han civilians. Han also numerically dominated the Banners up until the mid-18th century. European visitors in Beijing called them "Tartarized Chinese" or "Tartarified Chinese".
The Qianlong Emperor, concerned about maintaining Manchu identity, re-emphasized Manchu ethnicity, ancestry, language, and culture in the Eight Banners and started a mass discharge of Han Bannermen from the Eight Banners, either asking them to voluntarily resign from the Banner rolls or striking their names off. This led to a change from a Han majority to a Manchu majority within the Banner system, and previous Han Bannermen garrisons in southern China, such as those at Fuzhou, Zhenjiang, and Guangzhou, were replaced by Manchu Bannermen in the purge, which started in 1754. The purge fell most heavily on Han Banner garrisons stationed in the provinces and affected Han Bannermen in Beijing less, leaving a larger proportion of remaining Han Bannermen in Beijing than in the provinces. From that point on, the status of Han Bannermen declined while that of the Manchu Banners rose. Han Bannermen made up 75% of the Banners in 1648, during Shunzhi's reign, and 72% in 1723, during Yongzheng's reign, but only 43% in 1796, the first year of Jiaqing's reign, after Qianlong's purge. Qianlong directed most of his ire at those Han Bannermen descended from defectors who had joined the Qing after it passed through the Great Wall at Shanhai Pass in 1644, deeming their ancestors traitors to the Ming and therefore untrustworthy, while retaining Han Bannermen descended from defectors who had joined the Qing before 1644 in Liaodong and marched through Shanhai Pass, known as those who had "followed the Dragon through the pass". After a century of peace the Manchu Banner troops lost their fighting edge. Before the conquest, the Manchu banner had been a "citizen" army whose members were farmers and herders obligated to provide military service in times of war. The decision to turn the banner troops into a professional force whose every need was met by the state brought wealth, corruption, and decline as a fighting force. The Green Standard Army declined in a similar way. Early during the Taiping Rebellion, Qing forces suffered a series of disastrous defeats culminating in the loss of the regional capital city of Nanjing in 1853. Shortly thereafter, a Taiping expeditionary force penetrated as far north as the suburbs of Tianjin, in the imperial heartlands. In desperation the Qing court ordered a Chinese official, Zeng Guofan, to organize regional and village militias into an emergency army called tuanlian. Zeng Guofan's strategy was to rely on local gentry to raise a new type of military organization from those provinces that the Taiping rebels directly threatened. This new force became known as the Xiang Army, named after the Hunan region where it was raised. The Xiang Army was a hybrid of local militia and a standing army. It was given professional training, but was paid for out of regional coffers and funds its commanders – mostly members of the Chinese gentry – could muster. The Xiang Army and its successor, the Huai Army, created by Zeng Guofan's colleague and protégé Li Hongzhang, were collectively called the "Yong Ying" (Brave Camp). Zeng Guofan had no prior military experience. Being a classically educated official, he took his blueprint for the Xiang Army from the Ming general Qi Jiguang, who, because of the weakness of regular Ming troops, had decided to form his own "private" army to repel raiding Japanese pirates in the mid-16th century.
Qi Jiguang's doctrine was based on Neo-Confucian ideas of binding troops' loyalty to their immediate superiors and to the regions in which they were raised. Zeng Guofan's original intention for the Xiang Army was simply to eradicate the Taiping rebels. However, the success of the Yongying system led to its becoming a permanent regional force within the Qing military, which in the long run created problems for the beleaguered central government. First, the Yongying system signaled the end of Manchu dominance in the Qing military establishment. Although the Banners and Green Standard armies lingered on as a drain on resources, henceforth the Yongying corps became the Qing government's de facto first-line troops. Second, the Yongying corps were financed through provincial coffers and were led by regional commanders, weakening the central government's grip on the whole country. Finally, the nature of the Yongying command structure fostered nepotism and cronyism amongst its commanders, sowing the seeds of the regional warlordism of the first half of the 20th century. By the late 19th century, even the most conservative elements within the Qing court could no longer ignore China's military weakness. In 1860, during the Second Opium War, the capital Beijing had been captured and the Summer Palace sacked by a relatively small Anglo-French coalition force numbering 25,000. The advent of modern weaponry resulting from the European Industrial Revolution had rendered China's traditionally trained and equipped army and navy obsolete. The government's attempts to modernize during the Self-Strengthening Movement were initially successful, but yielded few lasting results because of the central government's lack of funds, lack of political will, and unwillingness to depart from tradition. Losing the First Sino-Japanese War of 1894–1895 was a watershed. Japan, a country long regarded by the Chinese as little more than an upstart nation of pirates, annihilated the Qing government's modernized Beiyang Fleet, then deemed to be the strongest naval force in Asia. The Japanese victory occurred a mere three decades after the Meiji Restoration set a feudal Japan on course to emulate the Western nations in their economic and technological achievements. Finally, in December 1894, the Qing government took concrete steps to reform military institutions and to re-train selected units in Westernized drills, tactics and weaponry. These units were collectively called the New Army. The most successful of these was the Beiyang Army under the overall supervision and control of a former Huai Army commander, General Yuan Shikai, who used his position to build networks of loyal officers and eventually become President of the Republic of China. The most significant development of early and mid-Qing social history was growth in population, population density, and mobility. The population in 1700, according to widely accepted estimates, was roughly 150 million, about what it had been under the late Ming a century before; it then doubled over the next century and reached a height of 450 million on the eve of the Taiping Rebellion in 1850. One reason for this growth was the spread of New World crops like peanuts, sweet potatoes, and potatoes, which helped to sustain the people during shortages of harvest for crops such as rice or wheat. These crops could be grown under harsher conditions and were thus cheaper as well, which led to their becoming staples for poorer farmers, decreasing the number of deaths from malnutrition.
Diseases such as smallpox, widespread in the seventeenth century, were brought under control by an increase in inoculations. In addition, infant mortality greatly decreased due to improvements in birthing techniques and childcare performed by doctors and midwives and through an increase in medical books available to the public. Government campaigns decreased the incidence of infanticide. Unlike Europe, where population growth in this period was greatest in the cities, in China growth in the cities and the lower Yangzi was low. The greatest growth was in the borderlands and the highlands, where farmers could clear large tracts of marshlands and forests. The population was also remarkably mobile, perhaps more so than at any time in Chinese history. Indeed, the Qing government did far more to encourage mobility than to discourage it. Millions of Han Chinese migrated to Yunnan and Guizhou in the 18th century, and also to Taiwan. After the conquests of the 1750s and 1760s, the court organized agricultural colonies in Xinjiang. Migration might be permanent, for resettlement, or the migrants (in theory at least) might regard the move as a temporary sojourn. The latter included an increasingly large and mobile workforce. Merchant groups organized by local origin also moved freely. This mobility also included the organized movement of Qing subjects overseas, largely to Southeast Asia, in search of trade and other economic opportunities. According to statute, Qing society was divided into relatively closed estates, of which in most general terms there were five. Apart from the estates of the officials, the comparatively minuscule aristocracy, and the degree-holding literati, there also existed a major division among ordinary Chinese between commoners and people of inferior status. They were divided into two categories: the good "commoner" people, and the "mean" people, who were seen as debased and servile. The majority of the population belonged to the first category and were described as "liangmin", a legal term meaning good people, as opposed to "jianmin", meaning the mean (or ignoble) people. Qing law explicitly stated that the traditional four occupational groups of scholars, farmers, artisans and merchants were "good", having the status of commoners. On the other hand, slaves or bondservants, entertainers (including prostitutes and actors), tattooed criminals, and low-level employees of government officials were the "mean people". Mean people were considered legally inferior to commoners and suffered unequal treatment: they were forbidden to take the imperial examination, were usually not allowed to marry free commoners, and were often required to acknowledge their abasement in society through actions such as bowing. However, throughout the Qing dynasty, the emperor and his court, as well as the bureaucracy, worked toward reducing the distinctions between the debased and the free, though even at the end of the dynasty they had not completely succeeded in merging the two classifications. Although there had been no powerful hereditary aristocracy since the Song dynasty, the gentry ("shenshi"), like their British counterparts, enjoyed imperial privileges and managed local affairs. The status of the scholar-official was defined by passing at least the first level of civil service examinations and holding a degree, which qualified him to hold imperial office, although he might not actually do so.
The gentry member could legally wear gentry robes and could talk to officials as equals. Officials who had served for one or two terms could then retire to enjoy the glory of their status. Informally, the gentry presided over local society and could use their connections to influence the magistrate, acquire land, and maintain large households. The gentry thus included not only the males holding degrees but also their wives, descendants, and some of their relatives. The Qing gentry were defined as much by their refined lifestyle as by their legal status. They lived more refined and comfortable lives than the commoners and used sedan-chairs to travel any significant distance. They were usually highly literate and often showed off their learning. They commonly collected objects such as scholars' stones, porcelain or pieces of art for their beauty, which set them apart from less cultivated commoners. In Qing society, women did not enjoy the same rights as men. The Confucian moral system, which was built by and thus favored men, restrained their rights, and they were often seen as a type of "merchandise" that could be traded away by their family. Once a woman married, she essentially became the property of her husband's family and could not divorce her husband except under very specific circumstances, such as severe physical harm or an attempt to sell her into prostitution. Men, on the other hand, could divorce their wives for trivial matters such as excessive talkativeness. Furthermore, women were severely restricted in owning and inheriting property and were essentially confined to their homes, stripped of social interaction and mobility. Mothers often bound their young daughters' feet, a practice that was seen as a standard of feminine beauty and a necessity for marriageability, but which also restricted a woman's physical movement in society. By the early Qing, the romanticized courtesan culture, which had been much more popular in the late Ming among men who sought a model of refinement and literacy missing from their marriage partners, had mostly disappeared. This decline was the result of the Qing's reinforced defense of fundamental Confucian family values as well as an attempt to put a stop to the cultural transformation then under way. The court thus began to crack down heavily on practices such as prostitution, pornography, rape, and homosexuality. However, by the time of the Qianlong Emperor, red-light districts had once again become centers of tasteful and fashionable courtesanship. In economically diverse port cities such as Tianjin, Chongqing, and Hankou, the sex trade became a large business, supplying a finely graded hierarchy of prostitutes to all classes of men. Shanghai, which grew rapidly in the late nineteenth century, became a city where male patrons fawned over and gossiped about prostitutes of different ranks, some of whom became recognized nationally as icons of femininity. Another rising phenomenon, especially during the eighteenth century, was the cult of widow chastity. The fact that many young women were betrothed in early adolescence, coupled with the high rate of early mortality, resulted in a significant number of young widows. This posed a problem, as most of these women had already moved into their husbands' households and, upon their husbands' deaths, essentially became burdens who could never fulfill their original duty of producing a male heir.
Widow chastity came to be seen as a form of devotion analogous to other loyalties, including loyalty to the emperor, and the Qing court attempted to reward families that resisted selling off their unneeded daughters-in-law, in order to underline such women's virtue. However, the system began to decline when families abused it for social competition, and authorities suspected that some families coerced their young widows into committing suicide at the time of their husbands' deaths in order to obtain more honors. Such corruption showed a lack of respect for human life and was greatly disapproved of by officials, who then chose to reward families more sparingly. One of the main reasons for a shift in gender roles was the unprecedentedly high incidence of men leaving their homes to travel, which in turn gave women more freedom of action. Wives of such men often became the ones to run the household, especially in financial matters. Elite women also began to pursue fashionable activities, such as writing poetry, and a new frenzy of female sociability appeared. Women started to leave their households to attend local opera performances and temple festivals, and some even began to form small societies to visit famous sacred sites with other restless women, which helped to shape a new view of the conventional norms on how women should behave. Patrilineal kinship had compelling power socially and culturally; local lineages became the building blocks of society. A person's success or failure depended, people believed, on guidance from a father, from which the family's success and prosperity also grew. The patrilineal kinship structure, that is, descent through the male line, was often translated as "clan" in earlier scholarship. By the Qing, the patrilineage had become the primary organizational device in society. This change began during the Song dynasty, when the civil service examination became a means of gaining status, displacing nobility and inherited status. Elite families began to shift their marital practices, identity and loyalty. Instead of intermarrying within aristocratic elites of the same social status, they tended to form marital alliances with nearby families of the same or greater wealth, and put local interests first and foremost, which helped to form intermarried townships. The Neo-Confucian ideology adopted by the Qing, in particular Cheng-Zhu thinking, placed emphasis on patrilineal families and genealogy in society. The emperors exhorted families to compile genealogies in order to strengthen local society. Inner Mongols and Khalkha Mongols in the Qing rarely knew their ancestors beyond four generations, and, contrary to what was commonly thought, Mongol tribal society was not organized along patrilineal clans; its basic units of organization included unrelated people. The Qing tried but failed to promote the Chinese Neo-Confucian ideology of organizing society along patrimonial clans among the Mongols. Qing lineages claimed to be based on biological descent, but they were often purposefully crafted. When a member of a lineage gained office or became wealthy, he might look back to identify a "founding ancestor", sometimes using considerable creativity in selecting a prestigious local figure. Once such a person had been chosen, a Chinese character was assigned to be used in the given name of each male of each succeeding generation.
A written genealogy was compiled to record the lineage's history, biographies of respected ancestors, a chart of all the family members of each generation, rules for the members to follow, and often copies of title contracts for collective property as well. Lastly, an ancestral hall was built to serve as the lineage's headquarters and a place for the annual ancestral sacrifice. Such worship was intended to ensure that the ancestors remained content and benevolent spirits ("shen") who would keep watch over and protect the family. Later observers felt that the ancestral cult focused on the family and lineage, rather than on more public matters such as community and nation. Catholic missionaries, mostly Jesuits, had arrived during the Ming dynasty. By 1701 there were 117 Catholic missionaries, and at most 300,000 converts out of hundreds of millions. There were many persecutions and reverses in the 18th century, and by 1800 there was little help from the main supporters in France, Spain and Portugal. The impact on Chinese society was hard to see, apart from some contributions to mathematics, astronomy and the calendar. By the 1840s China was again becoming a major destination for Protestant and Catholic missionaries from Europe and the United States. They encountered significant opposition from local elites, who were committed to Confucianism and resented Western ethical systems, and missionaries were often seen as part of Western imperialism. The educated gentry were afraid for their own power: the mandarins' claim to power lay in their knowledge of the Chinese classics, since all government officials had to pass extremely difficult tests on Confucianism, and the elite in power feared the classics might be replaced by the Bible, scientific training and Western education. Indeed, the examination system was abolished in the early 20th century by reformers who admired Western models of modernization. Catholic missionaries in the 19th century arrived primarily from France. While they arrived somewhat later than the Protestants, their congregations grew at a faster rate. By 1900 there were about 1400 Catholic priests and nuns in China serving nearly 1 million Catholics. Over 3000 Protestant missionaries were active among the 250,000 Protestant Christians in China. Missionaries, like all foreigners, enjoyed extraterritorial legal rights. Their main goal was conversions, but they made relatively few. They were much more successful in setting up schools, as well as hospitals and dispensaries. They usually avoided Chinese politics, but were opponents of foot-binding and opium. Western governments could protect them in the treaty ports, but outside those limited areas they were at the mercy of local government officials, and threats were common. Chinese elites often associated missionary activity with the imperialistic exploitation of China, and with promoting "new technology and ideas that threatened their positions." Historian John K. Fairbank says, "To most Chinese, Christian missionaries seem to be the ideological arm of foreign aggression... To the scholar-gentry, missionaries were foreign subversives, whose immoral conduct and teachings were backed by gunboats. Conservative patriots hated and feared these alien intruders." The missionaries and their converts were a prime target of attack and murder by the Boxers in 1900. Medical missions by the late 19th century laid the foundations for modern medicine in China.
Western medical missionaries established the first modern clinics and hospitals, provided the first training for nurses, and opened the first medical schools in China. By 1901, China was the most popular destination for medical missionaries: 150 foreign physicians operated 128 hospitals and 245 dispensaries, treating 1.7 million patients. In 1894, male medical missionaries comprised 14 percent of all missionaries; women doctors were four percent. Modern medical education in China started in the early 20th century at hospitals run by international missionaries. They began establishing nurse training schools in China in the late 1880s, but the nursing of sick men by female nurses was rejected by local traditions, so the number of Chinese students was small until the practice became accepted in the 1930s. There was also a level of distrust on the part of traditional evangelical missionaries, who thought hospitals were diverting needed resources away from the primary goal of conversion. Appointed by the London Missionary Society (LMS), Robert Morrison (1782–1834) was the pioneering Protestant missionary to China. Before his departure on January 31, 1807, he received missionary training from David Bogue (1750–1825) at the Gosport Academy. Bogue's missionary strategy comprised three steps: mastering the native language after arriving at the mission locale, prioritizing the translation and publishing of the Bible above all, and establishing a local seminary to train native Christians. Upon his arrival at Canton on September 6, 1807, Morrison followed Bogue's instruction, proceeding with translation and publication work on the Bible after learning the Chinese language. Morrison, assisted by William Milne (1785–1822), who was sent by the LMS, finished the translation of the entire Bible in 1819. Meanwhile, in 1818 they founded the first Asian Protestant seminary (the Anglo-Chinese College) in Malacca, which adopted the Gosport curriculum. Afterward, Liang Afa (1789–1855), the Morrison-trained Chinese convert, carried on and extended the evangelization mission into inner China. In retrospect, Bogue's three-part strategy was implemented through Morrison and Milne's mission to China. The two Opium Wars (1839–1860) marked the watershed of the Protestant Christian mission in China. The period from 1724 to 1858 was one of proscription. In 1724, the Yongzheng Emperor (1678–1735) announced that Christianity was a "heterodox teaching" and proscribed it. In 1811, Christian religious activities were further criminalized by the Jiaqing Emperor (1760–1820). It was against this background that Morrison arrived at Canton, where he experienced not only difficulty in carrying out missionary work but also a high cost of living. To sustain himself and secure legal residence in Canton, Morrison obtained approval from the LMS and accepted employment with the East India Company, working as a translator from 1809. His decision did not go unchallenged, however. In 1823, a newly arrived young missionary found that he could not comply with Morrison's practice of accepting a salary from a company that profited from the opium trade, denouncing the trade as contrary to Christian morality. According to Platt's study of the existing records, aside from this exceptional case, neither Morrison nor the foreigners who benefited from selling opium discussed the trade in anything but financial terms.
After the Opium Wars, a new world order arose between Qing China and the Western states. Following the Treaty of Nanjing signed in 1842, the American and French treaties signed in 1844, and the Treaty of Tianjin signed in 1858, Christianity was distinguished from the other local religions and protected by the treaties. Subsequently, Chinese popular cults, such as the White Lotus and the Eight Trigrams, at times attached themselves to Christianity. Meanwhile, the lifting of the proscription made room for the emergence of the Christian-inspired Taiping Movement in the Yangtze River Delta. According to Reilly, the Chinese Bible translated by Morrison, as well as Liang Afa's evangelistic pamphlet, significantly shaped the formation of the Taiping movement and its religious thought. At the outset of the twentieth century, as the Western states sought to justify their military invasions and plunder, missionary publications served as a medium that shaped prevailing narratives of the Boxer Uprising that “continue to circulate into the present.” In the Boxer Uprising of 1900, Chinese in northern China stormed areas that had been barred to them, such as the missionary stations and the legation quarter in Beijing. In 1901, shortly after the suppression of the uprising, a wave of Protestant missionary accounts was published, pioneered by Arthur Smith (1845–1932). This missionary discourse reiterated, on the one hand, the “Chinese antiforeignism” underpinned by the Qing government, and on the other highlighted the missionaries' sacrifices in preserving the Christian religion in the face of “pagan barbarism.” According to Hevia, although the conflicting and inconsistent accounts given by witnesses leave the truth open to question, these works helped make the Western military retaliation against the “Chinese brutality” appear reasonable. The ongoing creation and circulation of such narratives and memories therefore solidified images of “Chinese savagery” and of the victimized and heroized Western states. By the end of the 17th century, the Chinese economy had recovered from the devastation caused by the wars in which the Ming dynasty was overthrown and the resulting breakdown of order. In the following century, markets continued to expand as in the late Ming period, but with more trade between regions, a greater dependence on overseas markets and a greatly increased population. By the end of the 18th century the population had risen to 300 million from approximately 150 million during the late Ming dynasty. The dramatic rise in population was due to several reasons, including the long period of peace and stability in the 18th century and the import of new crops that China received from the Americas, including peanuts, sweet potatoes and maize. New species of rice from Southeast Asia led to a huge increase in production. Merchant guilds proliferated in all of the growing Chinese cities and often acquired great social and even political influence. Rich merchants with official connections built up huge fortunes and patronized literature, theater and the arts. Textile and handicraft production boomed. The government broadened land ownership by returning land that had been sold to large landowners in the late Ming period by families unable to pay the land tax. 
To give people more incentive to participate in the market, the Qing reduced the tax burden in comparison with the late Ming, and replaced the corvée system with a head tax used to hire laborers. The administration of the Grand Canal was made more efficient, and transport was opened to private merchants. A system of monitoring grain prices eliminated severe shortages, and enabled the price of rice to rise slowly and smoothly through the 18th century. Wary of the power of wealthy merchants, Qing rulers limited their trading licenses and usually refused them permission to open new mines, except in poor areas. These restrictions on domestic resource exploration, as well as on foreign trade, are held by some scholars as a cause of the Great Divergence, by which the Western world overtook China economically. During the Ming–Qing period (1368–1911) the biggest development in the Chinese economy was its transition from a command to a market economy, the latter becoming increasingly pervasive throughout the Qing's rule. From roughly 1550 to 1800 China proper experienced a second commercial revolution, developing naturally from the first commercial revolution of the Song period, which had seen the emergence of long-distance inter-regional trade in luxury goods. During the second commercial revolution, for the first time, a large percentage of farming households began producing crops for sale in the local and national markets rather than for their own consumption or barter in the traditional economy. Surplus crops were placed on the national market for sale, integrating farmers into the commercial economy from the ground up. This naturally led to regions specializing in certain cash crops for export, as China's economy became increasingly reliant on inter-regional trade in bulk staple goods such as cotton, grain, beans, vegetable oils, forest products, animal products, and fertilizer. Perhaps the most important factor in the development of the second commercial revolution was the mass influx of silver that entered the country through foreign trade. After the Spanish conquered the Philippines in the 1570s, they mined silver throughout the New World, greatly expanding the circulating supply of silver. Foreign trade stimulated the spread of the silver standard: after the re-opening of the southeast coast, which had been closed in the late 17th century, foreign trade was quickly re-established and expanded at 4% per annum throughout the latter part of the 18th century. China continued to export tea, silk and manufactures, creating a large, favorable trade balance with the West. The resulting inflow of silver expanded the money supply, facilitating the growth of competitive and stable markets. During the mid-Ming, China had gradually shifted to silver as the standard currency for large-scale transactions, and by the late Kangxi reign the assessment and collection of the land tax was done in silver. Once the land tax was collected in silver, landlords followed suit and began accepting rent payments only in silver rather than in crops, which in turn incentivized farmers to produce crops for sale in local and national markets rather than for their own consumption or barter. Unlike the copper coins, "qian" or cash, used mainly for smaller peasant transactions, silver was not minted into a coin but was traded in designated units of weight: the "liang" or "tael", which equaled roughly 1.3 ounces of silver. 
Since it was never minted, a third party had to be brought in to assess the weight and purity of the silver, resulting in an extra "meltage fee" added to the price of a transaction. Furthermore, since the meltage fee was unregulated until the reign of the Yongzheng emperor, it was the source of much corruption at each level of the bureaucracy. The Yongzheng emperor cracked down on the corrupt meltage fees, legalizing and regulating them so that they could be collected as a tax, "returning meltage fees to the public coffer." From this newly increased public coffer, the Yongzheng emperor increased the salaries of the officials who collected them, further legitimizing silver as the standard currency of the Qing economy. The second commercial revolution also had a profound effect on the dispersion of the Qing populace. Up until the late Ming there was a stark contrast between the rural countryside and the city metropoles, and very few mid-sized cities existed, because the extraction of surplus crops from the countryside was traditionally done by the state rather than by commercial organizations. However, as commercialization expanded exponentially in the late Ming and early Qing, mid-sized cities began to spring up to direct the flow of domestic commercial trade. Some towns of this nature had such a large volume of trade and merchants flowing through them that they developed into full-fledged market towns. Some of these more active market towns even developed into small cities and became home to the new rising merchant class. The proliferation of these mid-sized cities was only made possible by advancements in long-distance transportation and methods of communication. As more and more Chinese citizens travelled the country conducting trade, they increasingly found themselves far from home and in need of a place to stay; in response, the market saw the expansion of guild halls to house these merchants. A key distinguishing feature of the Qing economy was the emergence of guild halls around the nation. As inter-regional trade and travel became ever more common during the Qing, guild halls dedicated to facilitating commerce, "huiguan", gained prominence in the urban landscape. Exchanges between two merchants were usually mediated by a third-party broker who served a variety of roles for the market and local citizenry, including bringing together buyers and sellers, guaranteeing the good faith of both parties, standardizing the weights, measurements, and procedures of the two parties, collecting tax for the government, and operating inns and warehouses. It was these brokers and their places of commerce that were expanded during the Qing into full-fledged trade guilds, which, among other things, issued regulatory codes and price schedules, and provided a place for travelling merchants to stay and conduct their business. The first recorded trade guild set up to facilitate inter-regional commerce was in Hankou in 1656. Along with the "huiguan" trade guilds, guild halls dedicated to more specific professions, "gongsuo", began to appear and to control commercial craft or artisanal industries such as carpentry, weaving, banking, and medicine. 
By the nineteenth century guild halls did much more for local communities than simply facilitate trade: they transformed urban areas into cosmopolitan, multi-cultural hubs, staged theatre performances open to the general public, developed real estate by pooling funds together in the style of a trust, and some even facilitated the development of social services such as maintaining streets, water supply, and sewage facilities. In 1685 the Kangxi emperor legalized private maritime trade along the coast, establishing a series of customs stations in major port cities. The customs station at Canton became by far the most active in foreign trade, and by the late Kangxi reign more than forty mercantile houses specializing in trade with the West had appeared. In 1725 the Yongzheng emperor combined those forty individual houses into a parent organization known as the Cohong system. Firmly established by 1757, the Canton Cohong was an association of thirteen business firms that had been awarded exclusive rights to conduct trade with Western merchants in Canton. Until its abolition after the Opium War in 1842, the Canton Cohong system was the only permitted avenue of Western trade into China, and thus became a booming hub of international trade by the early eighteenth century. By the eighteenth century China's most significant export was tea. British demand for tea increased exponentially until the British learned how to grow it for themselves in the hills of northern India in the 1880s. By the end of the eighteenth century, tea exports going through the Canton Cohong system amounted to one-tenth of the revenue from taxes collected from the British and nearly the entire revenue of the British East India Company, and until the early nineteenth century tea comprised ninety percent of exports leaving Canton. Chinese scholars, court academies, and local officials carried on late Ming dynasty strengths in astronomy, mathematics, and geography, as well as technologies in ceramics, metallurgy, water transport, and printing. Contrary to stereotypes in some Western writing, 16th and 17th century Qing dynasty officials and literati eagerly explored the technology and science introduced by Jesuit missionaries. Manchu leaders employed Jesuits to use cannon and gunpowder to great effect in the conquest of China, and the court sponsored their research in astronomy. The aim of these efforts, however, was to reform and improve inherited science and technology, not to replace it. Scientific knowledge advanced during the Qing, but there was not a change in the way this knowledge was organized, or in the way scientific evidence was defined or its truth tested. The powerful official Ruan Yuan, at the end of the eighteenth and the beginning of the nineteenth centuries, for instance, supported a community of scientists and compiled the "Chouren zhuan" (畴人传; Biographies of mathematical scientists), a collection of biographies that eventually included nearly 700 Chinese and over 200 Western scientists. His attempt to reconcile Chinese science with the Western science introduced by the Jesuits by arguing that both had originated in ancient China did not succeed, but he did show that science could be conceived and practiced separately from humanistic scholarship. Those who studied the physical universe shared their findings with each other and identified themselves as men of science, but they did not have a separate and independent professional role with its own training and advancement. They were still literati. 
The Opium Wars, however, demonstrated the power of the steam engine and of military technology that had only recently been put into practice in the West. During the Self-Strengthening Movement of the 1860s and 1870s, Confucian officials in several coastal provinces established an industrial base in military technology. The introduction of railroads into China raised questions that were more political than technological. A British company built the twelve-mile Shanghai–Woosung line in 1876, obtaining the land under false pretenses, and it was soon torn up. Court officials feared local public opinion and worried that railways would help invaders, harm farmlands, and obstruct feng shui. To keep development in Chinese hands, the Qing government borrowed 34 billion taels of silver from foreign lenders for railway construction between 1894 and 1911. As late as 1900, only 292 miles were in operation, with 4,000 more miles in the planning stage. Eventually, 5,200 miles of railway were completed. After 1905, the British and French were finally able to open lines to Burma and Vietnam. By the 1830s, Protestant missionaries had translated and printed Western science and medical textbooks. The textbooks found homes in the rapidly enlarging network of missionary schools and universities, and opened learning possibilities for the small number of Chinese students interested in science, and a very small number interested in technology. After 1900, Japan had a greater role in bringing modern science and technology to Chinese audiences, but even then these reached chiefly the children of the rich landowning gentry, who seldom engaged in industrial careers. Under the Qing, inherited forms of art flourished and innovations occurred at many levels and in many types. High levels of literacy, a successful publishing industry, prosperous cities, and the Confucian emphasis on cultivation all fed a lively and creative set of cultural fields. By the end of the nineteenth century, national artistic and cultural worlds had begun to come to terms with the cosmopolitan culture of the West and Japan. The decision to stay within old forms or to welcome Western models was now a conscious choice rather than an unchallenged acceptance of tradition. Classically trained Confucian scholars such as Liang Qichao and Wang Guowei read widely and broke aesthetic and critical ground later cultivated in the New Culture Movement. The Qing emperors were generally adept at poetry and often skilled in painting, and offered their patronage to Confucian culture. The Kangxi and Qianlong Emperors, for instance, embraced Chinese traditions both to control them and to proclaim their own legitimacy. The Kangxi Emperor sponsored the "Peiwen Yunfu", a rhyme dictionary published in 1711, and the "Kangxi Dictionary" published in 1716, which remains to this day an authoritative reference. The Qianlong Emperor sponsored the largest collection of writings in Chinese history, the "Siku Quanshu", completed in 1782. Court painters made new versions of the Song masterpiece, Zhang Zeduan's "Along the River During the Qingming Festival", whose depiction of a prosperous and happy realm demonstrated the beneficence of the emperor. The emperors undertook tours of the south and commissioned monumental scrolls to depict the grandeur of the occasion. Imperial patronage also encouraged the industrial production of ceramics and Chinese export porcelain. Peking glassware became popular after European glass-making processes were introduced to Beijing by Jesuits. 
Yet the most impressive aesthetic works were done among the scholars and urban elite. Calligraphy and painting remained a central interest to both court painters and scholar-gentry, who considered the Four Arts part of their cultural identity and social standing. The painting of the early years of the dynasty included such painters as the orthodox Four Wangs and the individualists Bada Shanren (1626–1705) and Shitao (1641–1707). The nineteenth century saw such innovations as the Shanghai School and the Lingnan School, which used the technical skills of tradition to set the stage for modern painting. Traditional learning flourished, especially among Ming loyalists such as Dai Zhen and Gu Yanwu, but scholars in the school of evidential learning made innovations in skeptical textual scholarship. Scholar-bureaucrats, including Lin Zexu and Wei Yuan, developed a school of practical statecraft which rooted bureaucratic reform and restructuring in classical philosophy. Philosophy and literature grew to new heights in the Qing period. Poetry continued as a mark of the cultivated gentleman, but women wrote in larger and larger numbers and came from all walks of life. The poetry of the Qing dynasty is a lively field of research, being studied (along with the poetry of the Ming dynasty) for its association with Chinese opera, developmental trends of Classical Chinese poetry, the transition to a greater role for vernacular language, and for poetry by women. The Qing dynasty was a period of literary editing and criticism, and many of the modern popular versions of Classical Chinese poems were transmitted through Qing dynasty anthologies, such as the "Quan Tangshi" and the "Three Hundred Tang Poems". Although fiction did not have the prestige of poetry, novels flourished. Pu Songling brought the short story to a new level in his "Strange Stories from a Chinese Studio", published in the mid-18th century, and Shen Fu demonstrated the charm of the informal memoir in "Six Chapters of a Floating Life", written in the early 19th century but published only in 1877. The art of the novel reached a pinnacle in Cao Xueqin's "Dream of the Red Chamber", but its combination of social commentary and psychological insight was echoed in highly skilled novels such as Wu Jingzi's "Rulin waishi" (1750) and Li Ruzhen's "Flowers in the Mirror" (1827). In drama, Kong Shangren's Kunqu opera "The Peach Blossom Fan", completed in 1699, portrayed the tragic downfall of the Ming dynasty in romantic terms. The most prestigious form became the so-called Peking opera, though local and folk opera were also widely popular. Cuisine aroused a cultural pride in the richness of a long and varied past. The gentleman gourmet, such as Yuan Mei, applied aesthetic standards to the art of cooking, eating, and appreciation of tea at a time when New World crops and products entered everyday life. Yuan's "Suiyuan Shidan" expounded culinary aesthetics and theory, along with a range of recipes. The Manchu Han Imperial Feast originated at the court. Although this banquet was probably never common, it reflected an appreciation of Manchu culinary customs. Nevertheless, culinary traditionalists such as Yuan Mei lambasted the opulence of the Manchu Han Feast. Yuan wrote that the feast was caused in part by the "vulgar habits of bad chefs" and that "displays this trite are useful only for welcoming new relations through one's gates or when the boss comes to visit". 
(皆惡廚陋習。只可用之於新親上門,上司入境) After 1912, writers, historians and scholars in China and abroad generally deprecated the failures of the late imperial system. However, in the 21st century, a favorable view has emerged in popular culture. Building pride in Chinese history, nationalists have portrayed Imperial China as benevolent, strong and more advanced than the West. They blame ugly wars and diplomatic controversies on imperialist exploitation by Western nations and Japan. Although officially still communist and Maoist, in practice China's rulers have used this grassroots sentiment to proclaim that their current policies are restoring China's historical glory. Chinese Communist Party General Secretary Xi Jinping has sought parity between Beijing and Washington and promised to restore China to its historical glory. The New Qing History is a revisionist historiographical trend, starting in the mid-1990s, that emphasizes the Manchu nature of the dynasty. Earlier historians had emphasized the power of Han Chinese to "sinicize" their conquerors, that is, to assimilate them and make them Chinese in their thought and institutions. In the 1980s and early 1990s, American scholars began to learn Manchu and took advantage of newly opened Chinese- and Manchu-language documents in the archives. This research found that the Manchu rulers manipulated their subjects and that, from the 1630s through at least the 18th century, emperors developed a sense of Manchu identity and used Central Asian models of rule as much as Confucian ones. According to the new school, the Manchu ruling class regarded "China" as only a part, although a very important part, of a much wider empire that extended into the Inner Asian territories of Mongolia, Tibet, Manchuria and Xinjiang. Ping-ti Ho criticized the new approach for exaggerating the Manchu character of the dynasty and argued for the sinification of its rule. Some scholars in China accused the American group of imposing American concerns with race and identity, or even of imperialist misunderstanding intended to weaken China. Still others in China agree that this scholarship has opened new vistas for the study of Qing history. The "New Qing History" school is not related to the "History of Qing", a multi-volume history of the Qing dynasty that was authorized by the Chinese State Council in 2003.
https://en.wikipedia.org/wiki?curid=25310
Quantum gravity Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics; it addresses settings where quantum effects cannot be ignored, such as the vicinity of black holes or similar compact astrophysical objects where the effects of gravity are strong. Three of the four fundamental forces of physics are described within the framework of quantum mechanics and quantum field theory. The current understanding of the fourth force, gravity, is based on Albert Einstein's general theory of relativity, which is formulated within the entirely different framework of classical physics. However, that description is incomplete: when describing the gravitational field of a black hole in the general theory of relativity, physical quantities such as the spacetime curvature diverge at the center of the black hole. This signals the breakdown of the general theory of relativity and the need for a theory that goes beyond general relativity into the quantum. At distances very close to the center of the black hole (closer than the Planck length), quantum fluctuations of spacetime are expected to play an important role. To describe these quantum effects a theory of quantum gravity is needed. Such a theory should allow the description to be extended closer to the center and might even allow an understanding of physics at the center of a black hole. On more formal grounds, one can argue that a classical system cannot consistently be coupled to a quantum one. The field of quantum gravity is actively developing, and theorists are exploring a variety of approaches to the problem of quantum gravity, the most popular being string theory and loop quantum gravity. All of these approaches aim to describe the quantum behavior of the gravitational field. This does not necessarily include unifying all fundamental interactions into a single mathematical framework. However, many approaches to quantum gravity, such as string theory, try to develop a framework that describes all fundamental forces. Such theories are often referred to as a theory of everything. Others, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. One of the difficulties of formulating a quantum gravity theory is that quantum gravitational effects only appear at length scales near the Planck scale, around 10⁻³⁵ meters, a scale far smaller, and hence only accessible with far higher energies, than those currently available in high energy particle accelerators. Therefore, physicists lack experimental data which could distinguish between the competing theories that have been proposed, and thus thought experiment approaches are suggested as a testing tool for these theories. Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make on how the universe works. General relativity models gravity as curvature of spacetime: in the slogan of John Archibald Wheeler, "Spacetime tells matter how to move; matter tells spacetime how to curve." On the other hand, quantum field theory is typically formulated in the "flat" spacetime used in special relativity. No theory has yet proven successful in describing the general situation where the dynamics of matter, modeled with quantum mechanics, affect the curvature of spacetime. 
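For orientation, the Planck length quoted above is the unique length that can be built from the constants governing gravity, quantum mechanics and relativity; a quick check of the figure:

$$ \ell_P = \sqrt{\frac{\hbar G}{c^3}} = \sqrt{\frac{(1.05\times 10^{-34}\,\mathrm{J\,s})\,(6.67\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}})}{(3.00\times 10^{8}\,\mathrm{m\,s^{-1}})^{3}}} \approx 1.6\times 10^{-35}\ \mathrm{m}. $$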
If one attempts to treat gravity as simply another quantum field, the resulting theory is not renormalizable. Even in the simpler case where the curvature of spacetime is fixed "a priori," developing quantum field theory becomes more mathematically challenging, and many ideas physicists use in quantum field theory on flat spacetime are no longer applicable. It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior of black holes and the origin of the universe. The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as the "graviton". Gravitons would act as force carriers similar to the photon of the electromagnetic interaction. Under mild assumptions, the structure of general relativity requires them to follow the quantum mechanical description of interacting theoretical spin-2 massless particles. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. The Weinberg–Witten theorem places some constraints on theories in which the graviton is a composite particle. While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly. General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a corresponding quantum field theory. However, gravity is perturbatively nonrenormalizable. For a quantum field theory to be well defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of "finitely many" parameters, which could, in principle, be set by experiment. For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale. On the other hand, in quantizing gravity there are, in perturbation theory, "infinitely many independent parameters" (counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinite experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of the renormalization group tells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, then "every one" of the infinitely many unknown parameters would begin to matter, and we could make no predictions at all. It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really "is" a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, finding a reliable answer is difficult, and is pursued in the asymptotic safety program. 
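A standard power-counting sketch makes the renormalizability problem explicit. In natural units ($\hbar = c = 1$) Newton's constant has mass dimension $-2$,

$$ G_N = \frac{1}{M_P^{2}}, \qquad M_P \simeq 1.2\times 10^{19}\ \mathrm{GeV}, $$

so the effective dimensionless coupling of graviton exchange grows with energy,

$$ \alpha_{\mathrm{grav}}(E) \sim G_N E^{2} = \left(\frac{E}{M_P}\right)^{2}, $$

and each additional loop order demands counterterms of ever higher dimension. By contrast, the coupling of quantum electrodynamics is dimensionless, which is why it needs only the finitely many parameters mentioned above.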
Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries. In an effective field theory, all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory. Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally. By treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses. A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While easy to grasp in principle, this is the hardest idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in space-time. On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory. String theory can be seen as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to space-time in a dynamical way. Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory that may exhibit a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence) which is a weak form of background dependence. Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory. Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks. 
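The "well-known calculation" referred to above is the one-loop correction to the Newtonian potential obtained in the effective-field-theory framework by Bjerrum-Bohr, Donoghue and Holstein (2003); with the caveat that conventions for separating the classical and quantum pieces vary across the literature, the corrected potential takes the form

$$ V(r) = -\frac{G m_1 m_2}{r}\left[\,1 + 3\,\frac{G\,(m_1+m_2)}{r c^{2}} + \frac{41}{10\pi}\,\frac{G\hbar}{r^{2} c^{3}} + \cdots\right], $$

where the middle term is a classical post-Newtonian correction and the last term is the genuinely quantum one, far too small to measure at any accessible distance.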
Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation. Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles). See Quantum field theory in curved spacetime for a more complete discussion. A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time. In contrast, general relativity treats time as a dynamical variable which interacts directly with matter, and moreover requires the Hamiltonian constraint to vanish, removing any possibility of employing a notion of time similar to that in quantum theory. There are a number of proposed quantum gravity theories. Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available. The central idea of string theory is to replace the classical concept of a point particle in quantum field theory with a quantum theory of one-dimensional extended objects: string theory. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. As presently understood, however, string theory admits a very large number (10⁵⁰⁰ by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge. Loop quantum gravity seriously considers general relativity's insight that spacetime is a dynamical field and is therefore a quantum object. 
Its second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) also affects the structure of space. The main result of loop quantum gravity is the derivation of a granular structure of space at the Planck length. This is derived from the following considerations: in the case of electromagnetism, the quantum operator representing the energy of each frequency of the field has a discrete spectrum; thus the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have a discrete spectrum. Thus area and volume of any portion of space are also quantized, where the quanta are elementary quanta of space. It follows, then, that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory. The quantum state of spacetime is described in the theory by means of a mathematical structure called spin networks. Spin networks were initially introduced by Roger Penrose in abstract form, and later shown by Carlo Rovelli and Lee Smolin to derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime. The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields. In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps. The dynamics of the theory is today constructed in several versions. One version starts with the canonical quantization of general relativity; the analogue of the Schrödinger equation is a Wheeler–DeWitt equation, which can be defined within the theory. In the covariant, or spinfoam, formulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks. There are a number of other approaches to quantum gravity; the approaches differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified. As was emphasized above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, the possibility of experimentally testing quantum gravity had not received much attention prior to the late 1990s. However, in the past decade, physicists have realized that evidence for quantum gravitational effects can guide the development of the theory. Since theoretical development has been slow, the field of phenomenological quantum gravity, which studies the possibility of experimental tests, has received increased attention. The most widely pursued possibilities for quantum gravity phenomenology include violations of Lorentz invariance, imprints of quantum gravitational effects in the cosmic microwave background (in particular its polarization), and decoherence induced by fluctuations in the space-time foam. ESA's INTEGRAL satellite measured the polarization of photons of different wavelengths and was able to place a limit on the granularity of space of less than 10⁻⁴⁸ m, or 13 orders of magnitude below the Planck scale. 
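The discreteness described earlier in this section can be stated quantitatively. In the usual conventions of loop quantum gravity, a surface punctured by spin-network links carrying spins $j_i$ has the area eigenvalues

$$ A = 8\pi\gamma\,\ell_P^{2} \sum_i \sqrt{j_i\,(j_i+1)}, $$

where $\gamma$ is the Immirzi parameter and $\ell_P$ the Planck length; the smallest nonzero eigenvalue sets an elementary quantum of area of order $\ell_P^{2}$.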
The BICEP2 experiment detected what was initially thought to be primordial B-mode polarization caused by gravitational waves in the early universe. Had the signal in fact been primordial in origin, it could have been an indication of quantum gravitational effects, but it soon transpired that the polarization was due to interstellar dust interference. As explained above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, thought experiments are becoming an important theoretical tool. An important aspect of quantum gravity relates to the question of the coupling of spin and spacetime. While spin and spacetime are expected to be coupled, the precise nature of this coupling is currently unknown. In particular, and most importantly, it is not known how quantum spin sources gravity, or what the correct characterization of the spacetime of a single spin-half particle is. To analyze this question, thought experiments in the context of quantum information have been suggested. This work shows that, in order to avoid a violation of relativistic causality, the measurable spacetime around a spin-half particle (in its rest frame) must be spherically symmetric; that is, either the spacetime is spherically symmetric, or measurements of the spacetime (e.g., time-dilation measurements) must create some sort of back action that affects and changes the quantum spin.
https://en.wikipedia.org/wiki?curid=25312
Quality of service Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, and jitter. In the field of computer networking and other packet-switched telecommunication networks, quality of service refers to traffic prioritization and resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priorities to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow. Quality of service is particularly important for the transport of traffic with special requirements. In particular, developers have introduced Voice over IP technology to allow computer networks to become as useful as telephone networks for audio conversations, as well as supporting new applications with even stricter network performance requirements. In the field of telephony, quality of service was defined by the ITU in 1994. Quality of service comprises requirements on all the aspects of a connection, such as service response time, loss, signal-to-noise ratio, crosstalk, echo, interrupts, frequency response, loudness levels, and so on. A subset of telephony QoS is grade of service (GoS) requirements, which comprises aspects of a connection relating to capacity and coverage of a network, for example guaranteed maximum blocking probability and outage probability. In teletraffic engineering, these terms likewise refer to the control mechanisms rather than the achieved quality: for example, a required bit rate, delay, delay variation, packet loss or bit error rate may be guaranteed. Quality of service is important for real-time streaming multimedia applications such as voice over IP, multiplayer online games and IPTV, since these often require a fixed bit rate and are delay sensitive. Quality of service is especially important in networks where the capacity is a limited resource, for example in cellular data communication. A network or protocol that supports QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes, for example during a session establishment phase. During the session it may monitor the achieved level of performance, for example the data rate and delay, and dynamically control scheduling priorities in the network nodes. It may release the reserved capacity during a tear-down phase. A best-effort network or service does not support quality of service. An alternative to complex QoS control mechanisms is to provide high quality communication over a best-effort network by over-provisioning the capacity so that it is sufficient for the expected peak traffic load. The resulting absence of network congestion reduces or eliminates the need for QoS mechanisms. QoS is sometimes used as a quality measure, with many alternative definitions, rather than referring to the ability to reserve resources. 
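As a rough illustration of how the metrics listed above are quantified in practice, the following sketch computes packet loss, throughput and interarrival jitter (the jitter smoothed in the spirit of the RTP estimator of RFC 3550) from hypothetical per-packet records; the record layout and the numbers are illustrative assumptions, not any standard API.

```python
# Illustrative QoS metric computation over per-packet records.
# Each record: (send_time_s, recv_time_s or None if lost, size_bytes).
packets = [
    (0.00, 0.030, 1200),
    (0.02, 0.055, 1200),
    (0.04, None, 1200),   # lost in transit
    (0.06, 0.095, 1200),
]

received = [(s, r, n) for (s, r, n) in packets if r is not None]

loss_rate = 1 - len(received) / len(packets)

span = received[-1][1] - received[0][1]
throughput_bps = sum(n for _, _, n in received) * 8 / span

# Interarrival jitter, smoothed as in the RFC 3550 estimator:
# J += (|D| - J) / 16, where D is the change in one-way transit time.
jitter = 0.0
for (s0, r0, _), (s1, r1, _) in zip(received, received[1:]):
    d = (r1 - s1) - (r0 - s0)
    jitter += (abs(d) - jitter) / 16

print(f"loss={loss_rate:.1%}  throughput={throughput_bps:.0f} bit/s  jitter={jitter * 1000:.2f} ms")
```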
Quality of service sometimes refers to the level of quality of service, i.e. the guaranteed service quality. High QoS is often confused with a high level of performance, for example high bit rate, low latency and low bit error rate. QoS is sometimes used in application layer services such as telephony and streaming video to describe a metric that reflects or predicts the subjectively experienced quality. In this context, QoS is the acceptable cumulative effect on subscriber satisfaction of all imperfections affecting the service. Other terms with similar meaning are quality of experience (QoE), mean opinion score (MOS), perceptual speech quality measure (PSQM) and perceptual evaluation of video quality (PEVQ). See also Subjective video quality. A number of layer 2 technologies that add QoS tags to the data gained popularity in the past. Examples are frame relay, asynchronous transfer mode (ATM) and multiprotocol label switching (MPLS) (a technique between layer 2 and 3). Although these network technologies remain in use today, they lost attention after the advent of Ethernet networks. Today Ethernet is, by far, the most popular layer 2 technology. Conventional Internet routers and LAN switches operate on a best-effort basis. This equipment is less expensive, less complex and faster, and thus more popular, than earlier more complex technologies that provided QoS mechanisms. Ethernet optionally uses 802.1p to signal the priority of a frame. Four "type of service" bits and three "precedence" bits were originally provided in each IP packet header, but they were not generally respected. These bits were later re-defined as differentiated services code points (DSCP). With the advent of IPTV and IP telephony, QoS mechanisms are increasingly available to the end user. In packet-switched networks, quality of service is affected by various factors, which can be divided into human and technical factors. Human factors include stability of service quality, availability of service, waiting times and user information. Technical factors include reliability, scalability, effectiveness, maintainability and network congestion. Many things can happen to packets as they travel from origin to destination, resulting in problems as seen from the point of view of the sender and receiver. A defined quality of service may be desired or required for certain types of network traffic. These types of service are called "inelastic", meaning that they require a certain minimum bit rate and a certain maximum latency to function. By contrast, "elastic" applications can take advantage of however much or little bandwidth is available. Bulk file transfer applications that rely on TCP are generally elastic. Circuit-switched networks, especially those intended for voice transmission, such as Asynchronous Transfer Mode (ATM) or GSM, have QoS in the core protocol: resources are reserved at each step on the network as the call is set up, and there is no need for additional procedures to achieve the required performance. Shorter data units and built-in QoS were some of the unique selling points of ATM for applications such as video on demand. 
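The DSCP redefinition mentioned above reuses the same eight-bit field: the upper six bits of the old IPv4 ToS byte now carry the code point, while the lower two are used for ECN, so the two encodings are related by a simple shift. A small illustration, using the standard Expedited Forwarding code point (DSCP 46, RFC 3246):

```python
import socket

# The DS field reuses the old IPv4 ToS byte: upper 6 bits = DSCP, lower 2 = ECN.
EF_DSCP = 46                  # Expedited Forwarding (RFC 3246)
tos_byte = EF_DSCP << 2       # 0xB8, the value written into the header
assert tos_byte == 0xB8
dscp_back = tos_byte >> 2     # recover the DSCP from the ToS byte

# On platforms that support it, a live socket can be marked the same way:
# sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
print(hex(tos_byte), dscp_back)
```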
When the expense of mechanisms to provide QoS is justified, network customers and providers can enter into a contractual agreement termed a service-level agreement (SLA), which guarantees the ability of a connection to deliver agreed performance in terms of throughput or latency based on mutually agreed measures. An alternative to complex QoS control mechanisms is to provide high quality communication by generously over-provisioning a network so that capacity is based on peak traffic load estimates. This approach is simple for networks with predictable peak loads. The calculation may need to account for demanding applications that can compensate for variations in bandwidth and delay with large receive buffers, which is often possible, for example, in video streaming. Over-provisioning can be of limited use in the face of transport protocols (such as TCP) that over time increase the amount of data placed on the network until all available bandwidth is consumed and packets are dropped. Such greedy protocols tend to increase latency and packet loss for all users. The amount of over-provisioning in interior links required to replace QoS depends on the number of users and their traffic demands. This limits the usability of over-provisioning. Newer, more bandwidth-intensive applications and the addition of more users consume the surplus capacity of over-provisioned networks. This then requires a physical upgrade of the relevant network links, which is an expensive process. Thus over-provisioning cannot be blindly assumed on the Internet. Commercial VoIP services are often competitive with traditional telephone service in terms of call quality even without QoS mechanisms in use on the user's connection to their ISP and the VoIP provider's connection to a different ISP. Under high load conditions, however, VoIP may degrade to cell-phone quality or worse. The mathematics of packet traffic indicate that a network requires just 60% more raw capacity under conservative assumptions. Unlike single-owner networks, the Internet is a series of exchange points interconnecting private networks. Hence the Internet's core is owned and managed by a number of different network service providers, not a single entity. Its behavior is much more unpredictable. There are two principal approaches to QoS in modern packet-switched IP networks: a parameterized system based on an exchange of application requirements with the network, and a prioritized system where each packet identifies a desired service level to the network. Early work used the integrated services (IntServ) philosophy of reserving network resources. In this model, applications used RSVP to request and reserve resources through a network. While IntServ mechanisms do work, it was realized that in a broadband network typical of a larger service provider, core routers would be required to accept, maintain, and tear down thousands or possibly tens of thousands of reservations. It was believed that this approach would not scale with the growth of the Internet, and in any event was antithetical to the end-to-end principle, the notion of designing networks so that core routers do little more than simply switch packets at the highest possible rates. Under DiffServ, packets are marked either by the traffic sources themselves or by the edge devices where the traffic enters the network. In response to these markings, routers and switches use various queuing strategies to tailor performance to requirements. 
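One classic mechanism used at such edges to police a traffic contract is the token bucket, which admits a sustained rate while tolerating bounded bursts. The following is a minimal sketch; the rate and burst parameters are illustrative assumptions:

```python
import time

class TokenBucket:
    """Minimal token-bucket policer: admits traffic at a sustained
    rate of `rate_bps` with bursts of up to `burst_bytes`."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True       # conforming: forward (or mark in-profile)
        return False          # non-conforming: drop or remark

bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=15_000)
print(bucket.allow(1500))     # True while the burst allowance lasts
```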
At the IP layer, DSCP markings use the 6-bit DS field in the IP packet header. At the MAC layer, VLAN IEEE 802.1Q can be used to carry 3 bits of essentially the same information. Routers and switches supporting DiffServ configure their network scheduler to use multiple queues for packets awaiting transmission from bandwidth-constrained (e.g., wide area) interfaces. Router vendors provide different capabilities for configuring this behavior, including the number of queues supported, the relative priorities of queues, and the bandwidth reserved for each queue. In practice, when a packet must be forwarded from an interface with queuing, packets requiring low jitter (e.g., VoIP or videoconferencing) are given priority over packets in other queues. Typically, some bandwidth is allocated by default to network control packets (such as Internet Control Message Protocol and routing protocols), while best-effort traffic might simply be given whatever bandwidth is left over. At the Media Access Control (MAC) layer, VLAN IEEE 802.1Q and IEEE 802.1p can be used to distinguish between Ethernet frames and classify them. Queueing theory models have been developed for the performance analysis and QoS of MAC layer protocols. Cisco IOS NetFlow and the Cisco Class Based QoS (CBQoS) Management Information Base (MIB) are marketed by Cisco Systems. One compelling example of the need for QoS on the Internet relates to congestive collapse. The Internet relies on congestion avoidance protocols, primarily as built into the Transmission Control Protocol (TCP), to reduce traffic under conditions that would otherwise lead to congestive collapse. QoS applications, such as VoIP and IPTV, require largely constant bitrates and low latency; therefore they cannot use TCP and cannot otherwise reduce their traffic rate to help prevent congestion. Service-level agreements limit traffic that can be offered to the Internet and thereby enforce traffic shaping that can prevent it from becoming overloaded, and are hence an indispensable part of the Internet's ability to handle a mix of real-time and non-real-time traffic without collapse. Several QoS mechanisms and schemes exist for IP networking, and QoS capabilities are available in a number of network technologies. End-to-end quality of service can require a method of coordinating resource allocation between one autonomous system and another. The Internet Engineering Task Force (IETF) defined the Resource Reservation Protocol (RSVP) for bandwidth reservation as a proposed standard in 1997. RSVP is an end-to-end bandwidth reservation and admission control protocol. RSVP was not widely adopted due to scalability limitations. The more scalable traffic engineering version, RSVP-TE, is used in many networks to establish traffic-engineered Multiprotocol Label Switching (MPLS) label-switched paths. The IETF also defined Next Steps in Signaling (NSIS) with QoS signalling as a target. NSIS is a development and simplification of RSVP. Research consortia such as "end-to-end quality of service support over heterogeneous networks" (EuQoS, from 2004 through 2007) and fora such as the IPsphere Forum developed more mechanisms for handshaking QoS invocation from one domain to the next. IPsphere defined the Service Structuring Stratum (SSS) signaling bus in order to establish, invoke and (attempt to) assure network services. EuQoS conducted experiments to integrate Session Initiation Protocol, Next Steps in Signaling and IPsphere's SSS with an estimated cost of about 15.6 million Euro and published a book. 
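The strict-priority queuing behavior described earlier in this section can be sketched in a few lines; the three traffic classes and their ordering are illustrative assumptions, not taken from any standard:

```python
from collections import deque

# One FIFO per traffic class, listed from highest to lowest priority.
PRIORITY_ORDER = ("voice", "network-control", "best-effort")
queues = {cls: deque() for cls in PRIORITY_ORDER}

def enqueue(cls, packet):
    queues[cls].append(packet)

def dequeue():
    # Strict priority: always serve the highest non-empty queue first.
    for cls in PRIORITY_ORDER:
        if queues[cls]:
            return cls, queues[cls].popleft()
    return None

enqueue("best-effort", "bulk transfer segment")
enqueue("voice", "RTP voice frame")
print(dequeue())   # ('voice', 'RTP voice frame') although it arrived second
```

A pure strict-priority scheduler can starve the lower queues, which is one reason real routers combine it with the per-queue bandwidth reservations described above.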
A research project Multi Service Access Everywhere (MUSE) defined another QoS concept in a first phase from January 2004 through February 2006, and a second phase from January 2006 through 2007. Another research project named PlaNetS was proposed for European funding circa 2005. A broader European project called "Architecture and design for the future Internet", known as 4WARD, had a budget estimated at 23.4 million Euro and was funded from January 2008 through June 2010. It included a "Quality of Service Theme" and published a book. Another European project, called WIDENS (Wireless Deployable Network System), proposed a bandwidth reservation approach for mobile wireless multirate ad hoc networks. Strong cryptography network protocols such as Secure Sockets Layer, I2P, and virtual private networks obscure the data transferred using them. As all electronic commerce on the Internet requires the use of such strong cryptography protocols, unilaterally downgrading the performance of encrypted traffic creates an unacceptable hazard for customers. Yet, encrypted traffic is otherwise unable to undergo deep packet inspection for QoS. Protocols like ICA and RDP may encapsulate other traffic (e.g. printing, video streaming) with varying requirements that can make optimization difficult. The Internet2 project found, in 2001, that the QoS protocols were probably not deployable inside its Abilene Network with the equipment available at that time, which relied on software to implement QoS. The group also predicted that “logistical, financial, and organizational barriers will block the way toward any bandwidth guarantees” by protocol modifications aimed at QoS. They believed that the economics would encourage network providers to deliberately erode the quality of best-effort traffic as a way to push customers to higher-priced QoS services. Instead they proposed over-provisioning of capacity as more cost-effective at the time. The Abilene network study was the basis for the testimony of Gary Bachula to the US Senate Commerce Committee's hearing on network neutrality in early 2006. He expressed the opinion that adding more bandwidth was more effective than any of the various schemes for accomplishing QoS they examined. Bachula's testimony has been cited by proponents of a law banning quality of service as proof that no legitimate purpose is served by such an offering. This argument depends on the assumption that over-provisioning is not a form of QoS and that it is always possible. Cost and other factors affect the ability of carriers to build and maintain permanently over-provisioned networks. Mobile cellular service providers may offer mobile QoS to customers just as fixed-line PSTN service providers and Internet service providers (ISPs) may offer QoS. QoS mechanisms are always provided for circuit-switched services, and are essential for non-elastic services, for example streaming multimedia. Mobility adds complications to the QoS mechanisms for several reasons. Quality of service in the field of telephony was first defined in 1994 in the ITU-T Recommendation E.800. This definition is very broad, listing six primary components: Support, Operability, Accessibility, Retainability, Integrity and Security. A 1995 recommendation, X.902, included a QoS definition as part of its reference model for open distributed processing. In 1998 the ITU published a document discussing QoS in the field of data networking. 
X.641 offers a means of developing or enhancing standards related to QoS and provides concepts and terminology that assist in maintaining the consistency of related standards.

Some QoS-related IETF Requests for Comments (RFCs) are RFC 2474 (defining the Differentiated Services field) and RFC 2205 (defining RSVP); both are discussed above. The IETF has also published two RFCs giving background on QoS: RFC 2990 and RFC 3714. In addition, the IETF has published RFC 4594, an informative "best practices" document about the practical aspects of designing a QoS solution for a DiffServ network. It tries to identify which types of applications are commonly run over an IP network, groups them into traffic classes, studies what treatment each of these classes needs from the network, and suggests which of the QoS mechanisms commonly available in routers can be used to implement those treatments.
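The class-based treatment recommended in such guidelines can be sketched as a simple lookup from traffic class to code point. In the sketch below (Python), the code point values follow widely used DiffServ conventions (EF = 46, AF41 = 34, CS1 = 8, default = 0), but the class names, the classify helper, and the application-to-class table are illustrative inventions, not part of any standard API.

    # Illustrative mapping of traffic classes to DiffServ code points.
    DSCP_BY_CLASS = {
        "telephony": 46,          # EF: low loss, low latency, low jitter
        "interactive_video": 34,  # AF41: videoconferencing
        "low_priority_data": 8,   # CS1: backups, bulk transfers
        "best_effort": 0,         # default forwarding
    }

    # Hypothetical application-to-class assignments.
    CLASS_BY_APP = {
        "voip": "telephony",
        "webconference": "interactive_video",
        "backup": "low_priority_data",
    }

    def classify(app_name: str) -> int:
        """Return the DSCP for an application, defaulting to best effort."""
        return DSCP_BY_CLASS[CLASS_BY_APP.get(app_name, "best_effort")]

    print(classify("voip"))     # 46
    print(classify("browser"))  # 0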
https://en.wikipedia.org/wiki?curid=25315
Quadrature amplitude modulation

Quadrature amplitude modulation (QAM) is the name of a family of digital modulation methods, and of a related family of analog modulation methods, widely used in modern telecommunications to transmit information. It conveys two analog message signals, or two digital bit streams, by changing ("modulating") the amplitudes of two carrier waves, using the amplitude-shift keying (ASK) digital modulation scheme or the amplitude modulation (AM) analog modulation scheme. The two carrier waves of the same frequency are out of phase with each other by 90°, a condition known as orthogonality or quadrature. The transmitted signal is created by adding the two carrier waves together. At the receiver, the two waves can be coherently separated (demodulated) because of their orthogonality. Another key property is that the modulations are low-frequency/low-bandwidth waveforms compared to the carrier frequency, which is known as the narrowband assumption.

Phase modulation (analog PM) and phase-shift keying (digital PSK) can be regarded as special cases of QAM, in which the amplitude of the transmitted signal is constant but its phase varies. This view extends to frequency modulation (FM) and frequency-shift keying (FSK), since these can be regarded as special cases of phase modulation.

QAM is used extensively as a modulation scheme for digital telecommunication systems, such as in 802.11 Wi-Fi standards. Arbitrarily high spectral efficiencies can be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the communications channel. QAM is also being used in optical fiber systems as bit rates increase; QAM16 and QAM64 can be optically emulated with a 3-path interferometer.

In a QAM signal, one carrier lags the other by 90°. The amplitude modulation of the leading carrier is customarily referred to as the in-phase component, denoted I(t); the modulating function of the lagging carrier is the quadrature component, Q(t). The composite waveform is then mathematically modeled as:

s(t) = I(t)·cos(2π f_c t) + Q(t)·sin(2π f_c t),

where f_c is the carrier frequency. At the receiver, a coherent demodulator multiplies the received signal separately by a cosine and a sine signal to produce estimates of I(t) and Q(t). For example, multiplying by the cosine carrier gives:

s(t)·cos(2π f_c t) = I(t)·cos²(2π f_c t) + Q(t)·sin(2π f_c t)·cos(2π f_c t).

Using standard trigonometric identities, this can be written as:

s(t)·cos(2π f_c t) = ½·I(t) + ½·[I(t)·cos(4π f_c t) + Q(t)·sin(4π f_c t)].

Low-pass filtering removes the high-frequency terms (those containing 4π f_c t), leaving only the ½·I(t) term. This filtered signal is unaffected by Q(t), showing that the in-phase component can be received independently of the quadrature component. Similarly, multiplying by a sine wave and then low-pass filtering extracts Q(t).

The addition of two sinusoids is a linear operation that creates no new frequency components, so the bandwidth of the composite signal is comparable to the bandwidth of its DSB (double-sideband) components. Effectively, the spectral redundancy of DSB enables a doubling of the information capacity using this technique. This comes at the expense of demodulation complexity. In particular, a DSB signal has zero-crossings at a regular frequency, which makes it easy to recover the phase of the carrier sinusoid; it is said to be self-clocking. But the sender and receiver of a quadrature-modulated signal must share a clock or otherwise send a clock signal. If the clock phases drift apart, the demodulated I and Q signals bleed into each other, yielding crosstalk. In this context, the clock signal is called a "phase reference". Clock synchronization is typically achieved by transmitting a burst subcarrier or a pilot signal.
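The mixing-and-filtering argument above can be checked numerically. The following sketch (Python with NumPy; the carrier frequency, sample rate, and message waveforms are arbitrary choices for the demonstration, and a simple moving average stands in for a proper low-pass filter) modulates two messages onto quadrature carriers and recovers the in-phase one, assuming a perfectly synchronized phase reference.

    import numpy as np

    fs = 100_000                       # sample rate (Hz)
    fc = 10_000                        # carrier frequency (Hz)
    t = np.arange(0, 0.01, 1 / fs)

    # Two low-bandwidth messages (the narrowband assumption).
    I = np.cos(2 * np.pi * 300 * t)    # in-phase message
    Q = np.sin(2 * np.pi * 500 * t)    # quadrature message

    # Composite QAM waveform: s(t) = I cos(2*pi*fc*t) + Q sin(2*pi*fc*t).
    s = I * np.cos(2 * np.pi * fc * t) + Q * np.sin(2 * np.pi * fc * t)

    # Coherent demodulation of the in-phase branch: mix with the cosine
    # carrier, then low-pass filter to remove the terms at 2*fc.
    mixed = s * np.cos(2 * np.pi * fc * t)
    k = fs // fc                       # moving average over one carrier period
    lpf = np.convolve(mixed, np.ones(k) / k, mode="same")

    # The filtered output approximates I(t)/2 away from the edges.
    err = np.max(np.abs(2 * lpf[k:-k] - I[k:-k]))
    print(f"max recovery error: {err:.3f}")

Doubling the filtered output and comparing it against I(t) confirms the ½·I(t) result derived above; the small residual error comes from the crude filter, while the quadrature message largely cancels.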
In NTSC television, for example, the phase reference is included within the colorburst signal. Analog QAM is used in NTSC and PAL television systems, where the quadrature carriers convey the chrominance components, and in the C-QUAM AM stereo system.

In the frequency domain, QAM has a spectral pattern similar to that of DSB-SC modulation. Applying Euler's formula to the sinusoids in s(t), the positive-frequency portion of s(t) (its analytic representation) is:

s₊(t) = ½·[I(t) − i·Q(t)]·e^(i 2π f_c t),

whose Fourier transform is ½·[I(f − f_c) − i·Q(f − f_c)], where I(f) and Q(f) denote the Fourier transforms of I(t) and Q(t). This result represents the sum of two DSB-SC signals with the same center frequency. The factor −i (= e^(−iπ/2)) represents the 90° phase shift that enables their individual demodulations.

As in many digital modulation schemes, the constellation diagram is useful for QAM. In QAM, the constellation points are usually arranged in a square grid with equal vertical and horizontal spacing, although other configurations are possible (e.g. cross-QAM). Since in digital telecommunications the data is usually binary, the number of points in the grid is usually a power of 2 (2, 4, 8, …). Since QAM constellations are usually square, sizes that are not even powers of two (such as 8-QAM or 32-QAM) are rarer; the most common forms are 16-QAM, 64-QAM and 256-QAM.

By moving to a higher-order constellation, it is possible to transmit more bits per symbol. However, if the mean energy of the constellation is to remain the same (by way of making a fair comparison), the points must be closer together and are thus more susceptible to noise and other corruption; this results in a higher bit error rate, and so higher-order QAM can deliver more data less reliably than lower-order QAM, for constant mean constellation energy. Using higher-order QAM without increasing the bit error rate requires a higher signal-to-noise ratio (SNR), obtained by increasing signal energy, reducing noise, or both.

If data rates beyond those offered by 8-PSK are required, it is more usual to move to QAM, since it achieves a greater distance between adjacent points in the I-Q plane by distributing the points more evenly. The complicating factor is that the points are no longer all the same amplitude, so the demodulator must correctly detect both phase and amplitude, rather than just phase.

64-QAM and 256-QAM are often used in digital cable television and cable modem applications. In the United States, 64-QAM and 256-QAM are the mandated modulation schemes for digital cable (see QAM tuner), as standardised by the SCTE in the standard ANSI/SCTE 07 2013. These schemes are often referred to in marketing material as QAM-64 and QAM-256. In the UK, 64-QAM is used for digital terrestrial television (Freeview) whilst 256-QAM is used for Freeview-HD.

Communication systems designed to achieve very high levels of spectral efficiency usually employ very dense QAM constellations. For example, current HomePlug AV2 500-Mbit/s powerline Ethernet devices use 1024-QAM and 4096-QAM, as do devices using the ITU-T G.hn standard for networking over existing home wiring (coaxial cable, phone lines and power lines); 4096-QAM provides 12 bits/symbol. Another example is ADSL technology for copper twisted pairs, whose constellation size goes up to 32768-QAM (in ADSL terminology this is referred to as bit-loading, or bits per tone, 32768-QAM being equivalent to 15 bits per tone). Ultra-high-capacity microwave backhaul systems also use 1024-QAM; with 1024-QAM, adaptive coding and modulation (ACM) and XPIC, vendors can obtain gigabit capacity in a single 56 MHz channel.
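The energy/spacing trade-off described above is easy to quantify. The sketch below (Python with NumPy; normalizing to unit mean symbol energy is one common convention) builds square constellations and reports how the minimum distance between points shrinks as the order grows.

    import numpy as np

    def square_qam(M: int) -> np.ndarray:
        """Square M-QAM constellation, normalized to unit mean symbol energy."""
        n = int(np.sqrt(M))
        assert n * n == M, "square QAM requires M to be an even power of 2"
        levels = np.arange(-(n - 1), n, 2)   # e.g. [-3, -1, 1, 3] for 16-QAM
        points = (levels[None, :] + 1j * levels[:, None]).ravel()
        return points / np.sqrt(np.mean(np.abs(points) ** 2))

    for M in (16, 64, 256):
        pts = square_qam(M)
        # Pairwise distances, ignoring the zero diagonal.
        dists = np.abs(pts[:, None] - pts[None, :])
        d_min = np.min(dists[~np.eye(M, dtype=bool)])
        print(f"{M}-QAM: {int(np.log2(M))} bits/symbol, min distance {d_min:.3f}")

At constant mean energy, each step from 16-QAM to 64-QAM to 256-QAM adds 2 bits/symbol while roughly halving the minimum distance, which is the geometric origin of the higher SNR requirement.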
In moving to a higher-order QAM constellation (higher data rate and mode) in hostile RF/microwave QAM application environments, such as in broadcasting or telecommunications, multipath interference typically increases. There is a spreading of the spots in the constellation, decreasing the separation between adjacent states and making it difficult for the receiver to decode the signal appropriately. In other words, there is reduced noise immunity. Several test parameter measurements, such as the carrier-to-noise ratio and the modulation error ratio, help determine an optimal QAM mode for a specific operating environment.
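The modulation error ratio just mentioned, and the closely related error vector magnitude (EVM), quantify this spreading of constellation points. The sketch below (Python with NumPy; the 64-QAM construction matches the earlier example, and the noise level is an arbitrary choice) measures RMS EVM and the corresponding MER for noisy symbols.

    import numpy as np

    rng = np.random.default_rng(0)

    # Unit-energy 64-QAM constellation (same construction as above).
    n = 8
    levels = np.arange(-(n - 1), n, 2)
    const = (levels[None, :] + 1j * levels[:, None]).ravel()
    const /= np.sqrt(np.mean(np.abs(const) ** 2))

    # Transmit random symbols and add complex white Gaussian noise.
    tx = rng.choice(const, size=10_000)
    rx = tx + 0.02 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))

    # Error vector: received symbol minus nearest ideal constellation point.
    nearest = const[np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)]
    ev = rx - nearest

    # RMS EVM relative to RMS reference power; MER is the inverse ratio in dB.
    evm = np.sqrt(np.mean(np.abs(ev) ** 2) / np.mean(np.abs(nearest) ** 2))
    print(f"EVM: {100 * evm:.1f}%   MER: {-20 * np.log10(evm):.1f} dB")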
https://en.wikipedia.org/wiki?curid=25316
Quetzalcoatlus

"Quetzalcoatlus northropi" is a pterosaur known from the Late Cretaceous of North America (Maastrichtian stage) and one of the biggest known flying animals of all time. It is a member of the family Azhdarchidae, a family of advanced toothless pterosaurs with unusually long, stiffened necks. Its name comes from the Aztec feathered serpent god, Quetzalcoatl.

The first "Quetzalcoatlus" fossils were discovered in Texas, United States, from the Maastrichtian Javelina Formation at Big Bend National Park (dated to around 68 million years ago) in 1971 by Douglas A. Lawson, a geology graduate student from the Jackson School of Geosciences at the University of Texas at Austin. The specimen consisted of a partial wing (in pterosaurs composed of the forearm and elongated fourth finger), from an individual later estimated at over in wingspan. Lawson discovered a second site of the same age, about from the first, where between 1972 and 1974 he and Professor Wann Langston Jr. of the Texas Memorial Museum unearthed three fragmentary skeletons of much smaller individuals. Lawson announced the find in an article in "Science" in 1975. That same year, in a subsequent letter to the same journal, he made the original large specimen, TMM 41450-3, the holotype of a new genus and species, "Quetzalcoatlus northropi". The genus name refers to the Aztec feathered serpent god, Quetzalcoatl. The specific name honors John Knudsen Northrop, the founder of Northrop, who drove the development of large tailless flying wing aircraft resembling "Quetzalcoatlus".

At first it was assumed that the smaller specimens were juvenile or subadult forms of the larger type. Later, when more remains were found, it was realized they could have been a separate species. This possible second species from Texas was provisionally referred to as "Quetzalcoatlus" sp. by Alexander Kellner and Langston in 1996, indicating that its status was too uncertain to give it a full new species name. The smaller specimens are more complete than the "Q. northropi" holotype, and include four partial skulls, though they are much less massive, with an estimated wingspan of .

The holotype specimen of "Q. northropi" has yet to be properly described and diagnosed, and the current status of the genus "Quetzalcoatlus" has been identified as problematic. Mark Witton and colleagues (2010) noted that the type material of the genus (the fragmentary wing bones comprising "Q. northropi") represents elements which are typically considered undiagnostic to generic or specific level, and that this complicates interpretations of azhdarchid taxonomy. For instance, Witton "et al." (2010) suggested that the "Q. northropi" type material is of generalised enough morphology to be nearly identical to that of other giant azhdarchids, such as the overlapping elements of the contemporary Romanian giant azhdarchid "Hatzegopteryx". This being the case, and assuming "Q. northropi" can be distinguished from other pterosaurs (i.e., if it is not a "nomen dubium"), perhaps "Hatzegopteryx" should be regarded as a European occurrence of "Quetzalcoatlus". However, Witton "et al." also noted that the skull material of "Hatzegopteryx" and "Q." sp. differs enough that they cannot be regarded as the same animal, but that the significance of this cannot be ascertained given uncertainty over the relationships of "Quetzalcoatlus" specimens. These issues can only be resolved by "Q. northropi" being demonstrated as a valid taxon and its relationships with "Q." sp. being investigated.
An additional complication to these discussions is the likelihood that huge pterosaurs such as "Q. northropi" could have made long, transcontinental flights, suggesting that locations as disparate as North America and Europe could have shared giant azhdarchid species.

An azhdarchid neck vertebra, discovered in 2002 from the Maastrichtian-age Hell Creek Formation, may also belong to "Quetzalcoatlus". The specimen (BMR P2002.2) was recovered accidentally when it was included in a field jacket prepared to transport part of a "Tyrannosaurus" specimen. Despite this association with the remains of a large carnivorous dinosaur, the vertebra shows no evidence of having been chewed on by the dinosaur. The bone came from an individual azhdarchid pterosaur estimated to have had a wingspan of .

When it was first named as a new species in 1975, scientists estimated that the largest "Quetzalcoatlus" fossils came from an individual with a wingspan as large as . Three extrapolations from the proportions of other pterosaurs gave estimates of 11 m (36 ft), 15.5 m (50.85 ft), and 21 m (68.9 ft), and the middle value was chosen. In 1981, further advanced studies lowered these estimates to . More recent estimates based on greater knowledge of azhdarchid proportions place its wingspan at . Remains found in Texas in 1971 indicate that this reptile had a minimum wingspan of about . Generalized height in a bipedal stance, based on its wingspan, would have been at least high at the shoulder.

Weight estimates for giant azhdarchids are extremely problematic because no existing species share a similar size or body plan, and in consequence published results vary widely. Generalized weight, based on some studies that have historically found extremely low weight estimates for "Quetzalcoatlus", was as low as for a individual. A majority of estimates published since the 2000s have been substantially higher, around .

Skull material (from smaller specimens, possibly a related species) shows that "Quetzalcoatlus" had a very sharp and pointed beak. That is contrary to some earlier reconstructions that showed a blunter snout, based on the inadvertent inclusion of jaw material from another pterosaur species, possibly a tapejarid or a form related to "Tupuxuara". A skull crest was also present, but its exact form and size are still unknown. Below is a cladogram showing the phylogenetic placement of "Quetzalcoatlus" within Neoazhdarchia, from Andres and Myers (2013).

"Quetzalcoatlus" was abundant in Texas during the Lancian in a fauna dominated by "Alamosaurus". The "Alamosaurus"-"Quetzalcoatlus" association probably represents semi-arid inland plains. "Quetzalcoatlus" had precursors in North America, and its apparent rise to widespread distribution may represent the expansion of its preferred habitat rather than an immigration event, as some experts have suggested.

A number of different ideas have been proposed about the lifestyle of "Quetzalcoatlus". Because the fossil site was four hundred kilometers removed from the coastline, and there were no indications of large rivers or deep lakes nearby at the end of the Cretaceous, Lawson in 1975 rejected a fish-eating lifestyle, instead suggesting that "Quetzalcoatlus" scavenged like the marabou stork (which will scavenge, but is more of a terrestrial predator of small animals), though on the carcasses of titanosaur sauropods such as "Alamosaurus".
Lawson had found the remains of the giant pterosaur while searching for the bones of this dinosaur, which formed an important part of its ecosystem. In 1996, Lehman and Langston rejected the scavenging hypothesis, pointing out that the lower jaw bent so strongly downwards that even when it closed completely, a gap of over five centimeters remained between it and the upper jaw, very different from the hooked beaks of specialized scavenging birds. They suggested that, with its long neck vertebrae and long toothless jaws, "Quetzalcoatlus" fed like modern-day skimmers, catching fish during flight while cleaving the waves with its beak. While this skim-feeding view became widely accepted, it was not subjected to scientific scrutiny until 2007, when a study showed that skimming was not a viable method for such large pterosaurs because the energy costs would be too high due to excessive drag.

In 2008, pterosaur workers Mark Witton and Darren Naish published an examination of the possible feeding habits and ecology of azhdarchids. Witton and Naish noted that most azhdarchid remains are found in inland deposits far from the seas or other large bodies of water required for skimming. Additionally, the beak, jaw, and neck anatomy are unlike those of any known skimming animal. Rather, they concluded that azhdarchids were more likely terrestrial stalkers, similar to modern storks, and probably hunted small vertebrates on land or in small streams. Though "Quetzalcoatlus", like other pterosaurs, was a quadruped when on the ground, it and other azhdarchids have fore- and hindlimb proportions more similar to those of modern running ungulate mammals than to those of their smaller cousins, implying that they were uniquely suited to a terrestrial lifestyle.

The nature of flight in "Quetzalcoatlus" and other giant azhdarchids was poorly understood until serious biomechanical studies were conducted in the 21st century. One early (1984) experiment by Paul MacCready used practical aerodynamics to test the flight of "Quetzalcoatlus". MacCready constructed a model flying machine, or ornithopter, with a simple computer functioning as an autopilot. The model successfully flew with a combination of soaring and wing flapping; however, it was based on a then-current weight estimate of around , far lower than more modern estimates of over . The method of flight in these pterosaurs depends largely on weight, which has been controversial, and widely differing masses have been favored by different scientists. Some researchers have suggested that these animals employed slow, soaring flight, while others have concluded that their flight was fast and dynamic.

In 2010, Donald Henderson argued that the mass of "Q. northropi" had been underestimated, even by the highest estimates, and that it was too massive to have achieved powered flight; he estimated its mass in his 2010 paper as . Henderson argued that it may have been flightless. Other flight-capability estimates have disagreed with Henderson's research, suggesting instead an animal superbly adapted to long-range, extended flight. In 2010, Mike Habib, a professor of biomechanics at Chatham University, and Mark Witton, a British paleontologist, undertook further investigation into the claims of flightlessness in large pterosaurs. After factoring in wingspan, body weight, and aerodynamics, computer modelling led the two researchers to conclude that "Q. northropi" was capable of flight up to for 7 to 10 days at altitudes of . Habib further suggested a maximum flight range of for "Q. northropi".
Henderson's work was further criticized by Witton and Habib in another study, which pointed out that although Henderson used excellent mass estimations, they were based on outdated pterosaur models; this caused Henderson's mass estimates to be more than double those Habib used in his own calculations. Anatomical study of the forelimbs of "Q. northropi" and other big pterosaurs also showed a higher degree of robustness than would be expected if they were purely quadrupedal. The study proposed that large pterosaurs most likely used a short burst of powered flight before transitioning to thermal soaring.

In 1975, the artist Giovanni Caselli depicted "Quetzalcoatlus" as a small-headed scavenger with an extremely long neck in the book "The Evolution and Ecology of the Dinosaurs" by British palaeontologist Beverly Halstead. Over the next twenty-five years, before later discoveries, this image inspired similar depictions in various books, a recurring motif colloquially known as a "paleomeme", as noted by Darren Naish.

In June 2010, several life-sized models of "Q. northropi" were put on display on London's South Bank as the centerpiece exhibit for the Royal Society's 350th-anniversary exhibition. The models, which included both flying and standing individuals with wingspans of , were intended to help build public interest in science. They were created by scientists from the University of Portsmouth and engineers from Griffon Hoverwork. The display featured the most accurate pterosaur models constructed at the time, taking into account the latest evidence from the skeletal and trace fossils of related pterosaurs.

In 1985, the US Defense Advanced Research Projects Agency (DARPA) and AeroVironment used "Quetzalcoatlus northropi" as the basis for an experimental ornithopter unmanned aerial vehicle (UAV). They produced a half-scale model weighing , with a wingspan of . Coincidentally, Douglas A. Lawson, who discovered "Q. northropi" in Texas in 1971, had named it after John "Jack" Northrop, a developer of tailless flying wing aircraft in the 1940s. The replica of "Q. northropi" incorporates a "flight control system/autopilot which processes pilot commands and sensor inputs, implements several feedback loops, and delivers command signals to its various servo-actuators". It is on exhibit at the National Air and Space Museum.
https://en.wikipedia.org/wiki?curid=25319
QRP operation

In amateur radio, QRP operation refers to transmitting at reduced power while attempting to maximize one's effective range. QRP operation is a specialized pursuit within the hobby that was first popularized in the early 1920s. QRP operators generally limit their transmitted RF output power to 5 watts or less for CW operation and 10 watts or less for SSB operation. Reliable two-way communication at such low power levels can be challenging due to changing radio propagation and the difficulty of receiving the relatively weak transmitted signals. QRP enthusiasts may employ optimized antenna systems, enhanced operating skills, and a variety of special modes in order to maximize their ability to make and maintain radio contact. Since the late 1960s, commercial transceivers specially designed for QRP operation have evolved from vacuum tube to solid-state technology. A number of organizations dedicated to QRP operation exist, and aficionados participate in various contests designed to test their skill in making long-distance contacts at low power levels.

The term QRP derives from the standard Q code used in radio communications, where "QRP" and "QRP?" are used to request "Reduce power" and to ask "Should I reduce power?" respectively. The opposite of QRP is QRO, or increased-power operation.

Most amateur transceivers are capable of transmitting approximately 100 watts, and in some parts of the world, such as the U.S., amateurs can transmit up to 1,500 watts. QRP enthusiasts contend that such power is not always necessary, and that using it wastes energy, increases the likelihood of causing interference to nearby televisions, radios, and telephones, and, for United States amateurs, is incompatible with the FCC's Part 97 rules, which state that one must use "the minimum power necessary to carry out the desired communications". QRP can also be used for emergency communications during disaster recovery.

The practice of operating with low power was popularized as early as 1924, with a variety of reports, editorials and articles published in U.S. amateur radio magazines and journals that encouraged amateurs to lower power output, both for purposes of experimentation and for improving operating conditions by reducing interference.

There is not complete agreement on what constitutes QRP power. Most amateur organizations agree that for CW, AM, FM, and data modes, the transmitter output power should be 5 watts or less. The maximum output power for SSB (single sideband) is not always agreed upon: some believe that the power should be no more than 10 watts peak envelope power (PEP), while others hold strongly that the limit should be 5 watts. QRPers sometimes use even lower power, operating with as little as 100 milliwatts or below; extremely low power (1 watt and below) is often referred to by hobbyists as QRPp.

Communicating using QRP can be difficult, since the QRPer must face the same challenges of radio propagation faced by amateurs using higher power levels, but with the inherent disadvantage of a weaker signal on the receiving end, all other things being equal. QRP aficionados try to make up for this through more efficient antenna systems and enhanced operating skills, and may use special modes that employ technology and software designed to enhance reception of the relatively weak transmitted signals resulting from low power levels.
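On a logarithmic scale the handicap is smaller than the raw wattage suggests, which is part of why QRP contacts are feasible at all. A quick calculation (Python; the 6 dB-per-S-unit convention follows the common IARU recommendation for HF) compares a typical 100-watt rig with a 5-watt QRP signal:

    import math

    def db_ratio(p1_watts: float, p2_watts: float) -> float:
        """Power ratio between two transmitter levels, in decibels."""
        return 10 * math.log10(p1_watts / p2_watts)

    diff = db_ratio(100, 5)                  # 100 W rig vs. 5 W QRP
    print(f"{diff:.1f} dB")                  # 13.0 dB
    print(f"about {diff / 6:.1f} S-units")   # ~2 S-units at 6 dB per S-unit

On this convention, dropping from 100 W to 5 W costs about two S-units at the receiving station.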
Many of the larger, more powerful commercial transceivers permit the operator to lower the output to QRP levels. Commercial transceivers specially designed to operate at or near QRP power levels have been available since the late 1960s. In 1969 the American manufacturer Ten-Tec produced the Powermite-1, one of its first assembled transceivers, which featured modular construction with all stages on individual circuit boards. The transmitter was capable of about one or two watts of RF, and the receiver was a direct-conversion unit, similar to that found in the Heathkit HW-7 and HW-8 lines, which introduced many amateurs to QRP'ing and led to the popularity of the mode.

Enthusiasts operate QRP radios on the HF bands in portable settings, usually carrying the radios in backpacks and using whip antennas. Some QRPers prefer to construct their equipment from kits or published plans, or to homebrew it from scratch. Many popular designs are based on the NE612 mixer IC, e.g. the K1, K2, and ATS series and the Softrock SDR.

Amateur radio organizations dedicated to QRP include QRP Amateur Radio Club International (QRP ARCI), the American QRP Club, the G-QRP Club based in the United Kingdom, and The Adventure Radio Society, which emphasizes portable QRP operation. Major QRP gatherings are held yearly at hamfests such as the Dayton Hamvention, Pacificon, and Friedrichshafen.

There are specific operating awards, contests, clubs, and conventions devoted to QRP enthusiasts. In the United States, the November Sweepstakes, the June and September VHF QSO Parties, the January VHF Sweepstakes, and the ARRL International DX Contest, as well as many major international contests, have designated special QRP categories. For example, during the ARRL's annual Field Day contest, making a QSO (ham-to-ham contact) using "QRP battery power" is worth five times as many points as a contact made by conventional means. The QRP ARCI club sponsors 12 contests during the year specifically for QRP operators. Typical awards include the QRP ARCI club's "thousand-miles-per-watt" award, available to anyone presenting evidence of a qualifying contact. QRP ARCI also offers special awards for achieving the ARRL's Worked All States, Worked All Continents, and DX Century Club awards under QRP conditions. Other QRP clubs offer similar versions of these awards, as well as general QRP operating achievement awards.
https://en.wikipedia.org/wiki?curid=25323