Software engineering
Software engineering is the systematic application of engineering approaches to the development of software. It is a branch of computer science.
When the first digital computers appeared in the early 1940s, the instructions to make them operate were wired into the machine. Practitioners quickly realized that this design was not flexible and came up with the "stored program architecture" or von Neumann architecture. Thus the division between "hardware" and "software" began with abstraction being used to deal with the complexity of computing.
Programming languages started to appear in the early 1950s, another major step in abstraction. Major languages such as Fortran, ALGOL, PL/I, and COBOL were released in the late 1950s and 1960s to address scientific, algorithmic, and business problems. David Parnas introduced the key concepts of modularity and information hiding in 1972 to help programmers deal with the ever-increasing complexity of software systems.
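As a minimal illustration of information hiding in the sense Parnas described (a hypothetical sketch, not drawn from the article), a module can expose a small, stable interface while keeping its data representation private, so the representation can change without breaking client code:

```python
# Minimal sketch of information hiding: clients use push/pop and never
# touch the underlying representation, so it can later change (say, to a
# linked structure) without affecting any caller.

class Stack:
    def __init__(self):
        self._items = []  # leading underscore marks this as a hidden detail

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()


stack = Stack()
stack.push(1)
stack.push(2)
assert stack.pop() == 2  # clients depend only on the interface
```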
The origins of the term "software engineering" have been attributed to various sources. The term appeared in a list of services offered by companies in the June 1965 issue of COMPUTERS and AUTOMATION, and was used more formally in the August 1966 issue of Communications of the ACM (Volume 9, number 8), in a "letter to the ACM membership" by ACM President Anthony A. Oettinger. It is also associated with the title of a 1968 NATO conference organized by Professor Friedrich L. Bauer, the first conference on software engineering. Independently, Margaret Hamilton named the discipline "software engineering" during the Apollo missions to give what they were doing legitimacy. At the time there was perceived to be a "software crisis". The 40th International Conference on Software Engineering (ICSE 2018) celebrated 50 years of "software engineering" with plenary keynotes by Frederick Brooks and Margaret Hamilton.
In 1984, the Software Engineering Institute (SEI) was established as a federally funded research and development center headquartered on the campus of Carnegie Mellon University in Pittsburgh, Pennsylvania, United States. Watts Humphrey founded the SEI Software Process Program, aimed at understanding and managing the software engineering process. The Process Maturity Levels introduced there would become the Capability Maturity Model Integration for Development (CMMI-DEV), which has defined how the US Government evaluates the abilities of a software development team.
Modern, generally accepted best-practices for software engineering have been collected by the ISO/IEC JTC 1/SC 7 subcommittee and published as the Software Engineering Body of Knowledge (SWEBOK).
Requirements engineering is the elicitation, analysis, specification, and validation of requirements for software.
Software design is the process of defining the architecture, components, interfaces, and other characteristics of a system or component. This is also called software architecture.
Software development, the main activity of software construction, is the combination of programming (also known as coding), verification, software testing, and debugging. A software development process is the definition, implementation, assessment, measurement, management, change, and improvement of the software life cycle process itself. It relies heavily on software configuration management, which systematically controls changes to the configuration and maintains the integrity and traceability of the configuration and code throughout the system life cycle. Modern processes use software versioning.
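As a concrete example of software versioning (a minimal hypothetical sketch, not part of the original text), semantic version identifiers of the form MAJOR.MINOR.PATCH can be parsed into tuples that order correctly, which is what lets configuration-management tooling decide whether one build supersedes another:

```python
# Minimal sketch: compare semantic versions (MAJOR.MINOR.PATCH) by parsing
# them into integer tuples, which Python orders element by element.

def parse_version(text):
    major, minor, patch = (int(part) for part in text.split("."))
    return (major, minor, patch)


assert parse_version("1.10.0") > parse_version("1.9.3")   # 10 > 9 numerically
assert parse_version("2.0.0") > parse_version("1.99.99")  # major version wins
```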
Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the product or service under test, using approaches such as unit testing and integration testing. It is one aspect of software quality.
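For illustration (a minimal hypothetical sketch, assuming nothing beyond Python's standard library), a unit test exercises one small unit of code in isolation, while an integration test would combine several such units:

```python
import unittest

def average(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("average() of an empty sequence")
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(average([1, 2, 3]), 2)

    def test_empty_sequence_raises(self):
        with self.assertRaises(ValueError):
            average([])

if __name__ == "__main__":
    unittest.main()
```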
Software maintenance refers to the activities required to provide cost-effective support after shipping the software product.
Knowledge of computer programming is a prerequisite for becoming a software engineer. In 2004 the IEEE Computer Society produced the SWEBOK, which has been published as ISO/IEC Technical Report 19759:2005, describing the body of knowledge that they recommend be mastered by a graduate software engineer with four years of experience.
Many software engineers enter the profession by obtaining a university degree or training at a vocational school. One standard international curriculum for undergraduate software engineering degrees was defined by the Joint Task Force on Computing Curricula of the IEEE Computer Society and the Association for Computing Machinery, and updated in 2014. A number of universities have software engineering degree programs; in the United States there were 244 campus Bachelor of Software Engineering programs, 70 online programs, 230 master's-level programs, 41 doctorate-level programs, and 69 certificate-level programs.
In addition to university education, many companies sponsor internships for students wishing to pursue careers in information technology. These internships can introduce the student to interesting real-world tasks that typical software engineers encounter every day. Similar experience can be gained through military service in software engineering.
Legal requirements for the licensing or certification of professional software engineers vary around the world. In the UK, there is no licensing or legal requirement to assume or use the job title Software Engineer. In some areas of Canada, such as Alberta, British Columbia, Ontario, and Quebec, software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation. In Europe, Software Engineers can obtain the European Engineer (EUR ING) professional title.
Since 2013, the United States has offered an NCEES Professional Engineer exam for software engineering, allowing software engineers to be licensed and recognized. NCEES ended the exam after April 2019, citing lack of participation. Mandatory licensing is still widely debated and perceived as controversial. In some parts of the US, such as Texas, the use of the term "engineer" is regulated by law and reserved for individuals who hold a Professional Engineer license.
The IEEE Computer Society and the ACM, the two main US-based professional organizations of software engineering, publish guides to the profession of software engineering. The IEEE's "Guide to the Software Engineering Body of Knowledge – 2004 Version", or SWEBOK, defines the field and describes the knowledge the IEEE expects a practicing software engineer to have. The most current version, SWEBOK v3, was released in 2014. The IEEE also promulgates a "Software Engineering Code of Ethics".
The U.S. Bureau of Labor Statistics counted 1,365,500 software developers holding jobs in the U.S. in 2018. Employment of computer and information technology occupations is projected to grow 13 percent from 2016 to 2026, faster than the average for all occupations, adding about 557,100 new jobs. Demand for these workers will stem from greater emphasis on cloud computing, the collection and storage of big data, and information security. Yet the BLS also notes that employment in some of these occupations is slowing: employment of computer programmers is projected to decline 7 percent from 2016 to 2026, since computer programming can be done from anywhere in the world and companies sometimes hire programmers in countries where wages are lower. Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and many software engineers hold computer science degrees.
Many software engineers work as employees or contractors, with businesses, government agencies (civilian or military), and non-profit organizations; some work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process, while others require software engineers to do many or all of them. In large projects, people may specialize in only one role; in small projects, people may fill several or all roles at the same time. Specializations in industry include analysts, architects, developers, testers, technical support, middleware analysts, and managers; specializations in academia include educators and researchers.
Most software engineers and programmers work 40 hours a week, but about 15 percent of software engineers and 11 percent of programmers worked more than 50 hours a week in 2008. Injuries are possible in these occupations because, like other workers who spend long periods sitting in front of a computer terminal typing at a keyboard, engineers and programmers are susceptible to eyestrain, back discomfort, and hand and wrist problems such as carpal tunnel syndrome.
The Software Engineering Institute offers certifications on specific topics like security, process improvement and software architecture. IBM, Microsoft and other companies also sponsor their own certification examinations. Many IT certification programs are oriented toward specific technologies, and managed by the vendors of these technologies. These certification programs are tailored to the institutions that would employ people who use these technologies.
Broader certification of general software engineering skills is available through various professional societies. The IEEE had certified over 575 software professionals as a Certified Software Development Professional (CSDP). In 2008 it added an entry-level certification known as the Certified Software Development Associate (CSDA). The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. The ACM examined the possibility of professional certification of software engineers in the late 1990s, but eventually decided that such certification was inappropriate for the professional industrial practice of software engineering.
In the U.K., the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified members (MBCS). Software engineers may be eligible for membership of the Institution of Engineering and Technology and so qualify for Chartered Engineer status. In Canada, the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP). In Ontario, software engineers who graduate from a Canadian Engineering Accreditation Board (CEAB)-accredited program, successfully complete the Professional Practice Examination (PPE) of Professional Engineers Ontario (PEO), and have at least 48 months of acceptable engineering experience are eligible to be licensed through the PEO and can become Professional Engineers (P.Eng). However, the PEO does not recognize online or distance education, and does not consider computer science programs to be equivalent to software engineering programs despite the tremendous overlap between the two. This has sparked controversy and a certification war, and has kept the number of P.Eng holders in the profession exceptionally low. The vast majority of working professionals in the field hold a degree in computer science, not software engineering, and given the difficult certification path for holders of non-SE degrees, most never bother to pursue the license.
The initial impact of outsourcing, and the relatively lower cost of international human resources in developing countries, led to a massive migration of software development activities from corporations in North America and Europe to India and later China, Russia, and other developing countries. This approach had some flaws, mainly the distance and time-zone differences that hindered human interaction between clients and developers, as well as the massive job transfer. This had a negative impact on many aspects of the software engineering profession. For example, some students in the developed world avoid education related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers. Although statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected. Nevertheless, the ability to smartly leverage offshore and near-shore resources via the follow-the-sun workflow has improved the overall operational capability of many organizations. When North Americans are leaving work, Asians are just arriving; when Asians are leaving work, Europeans are arriving. This provides a continuous ability to have human oversight of business-critical processes 24 hours per day, without paying overtime compensation or disrupting a key human resource: sleep patterns.
While global outsourcing has several advantages, global – and generally distributed – development can run into serious difficulties resulting from the distance between developers. Key elements of this type of distance have been identified as geographical, temporal, cultural, and communicational (the last including the use of different languages and dialects of English in different locations). Research has been carried out in the area of global software development over the last 15 years, and an extensive body of relevant work has been published highlighting the benefits and problems associated with this complex activity. As with other aspects of software engineering, research is ongoing in this and related areas.
Software engineering sees its practitioners as individuals who follow well-defined engineering approaches to problem-solving. These approaches are specified in various software engineering books and research papers, always with the connotations of predictability, precision, mitigated risk and professionalism. This perspective has led to calls for licensing, certification and codified bodies of knowledge as mechanisms for spreading the engineering knowledge and maturing the field.
Software craftsmanship has been proposed by a body of software developers as an alternative that emphasizes the coding skills and accountability of the software developers themselves, without professionalism or any prescribed curriculum, leading to ad-hoc problem-solving (craftsmanship) without engineering (lacking predictability, precision, and risk mitigation, with informal and poorly defined methods). The Software Craftsmanship Manifesto extends the Agile Software Manifesto and draws a metaphor between modern software development and the apprenticeship model of medieval Europe.
Software engineering extends engineering and draws on the engineering model, i.e. engineering process, engineering project management, engineering requirements, engineering design, engineering construction, and engineering validation. The concept is so new that it is rarely understood, and it is widely misinterpreted, including in software engineering textbooks, papers, and among the communities of programmers and crafters.
One of the core issues in software engineering is that its approaches are not empirical enough, because real-world validation of approaches is usually absent or very limited; hence software engineering is often misinterpreted as feasible only in a "theoretical environment".
Edsger Dijkstra, the founder of many of the concepts used within software development today, rejected the idea of "software engineering" up until his death in 2002, arguing that those terms were poor analogies for what he called the "radical novelty" of computer science.
Software Engineering Institute
The Software Engineering Institute (SEI) is an American research and development center headquartered in Pittsburgh, Pennsylvania. Its activities cover cybersecurity, software assurance, software engineering and acquisition, and component capabilities critical to the Department of Defense.
The Carnegie Mellon Software Engineering Institute is a federally funded research and development center headquartered on the campus of Carnegie Mellon University in Pittsburgh, Pennsylvania, United States. The SEI also has offices in Washington, DC and Los Angeles, California. The SEI operates with major funding from the U.S. Department of Defense. The SEI also works with industry and academia through research collaborations.
On November 14, 1984, the U.S. Department of Defense selected Carnegie Mellon University as the host site of the Software Engineering Institute. The institute was founded with an initial allocation of $6 million, with another $97 million to be allocated over the subsequent five years. The SEI's contract with the Department of Defense is subject to review and renewal every five years.
The SEI program of work is conducted in several principal areas: cybersecurity, software assurance, software engineering and acquisition, and component capabilities critical to the Department of Defense.
The SEI defines specific initiatives aimed at improving organizations' software engineering capabilities.
Organizations need to effectively manage the acquisition, development, and evolution (ADE) of software-intensive systems. Success in software engineering management practices helps organizations predict and control quality, schedule, cost, cycle time, and productivity. The best-known example of SEI in management practices is the SEI's Capability Maturity Model (CMM) for Software (now Capability Maturity Model Integration (CMMI)). The CMMI approach consists of models, appraisal methods, and training courses that have been proven to improve process performance. In 2006, Version 1.2 of the CMMI Product Suite included the release of CMMI for Development. CMMI for Development was the first of three constellations defined in Version 1.2: the others include CMMI for Acquisition and CMMI for Services. The CMMI for Services constellation was released in February 2009. Another management practice developed by CERT, which is part of the SEI, is the Resilience Management Model (CERT-RMM). The CERT-RMM is a capability model for operational resilience management. Version 1.0 of the Resilience Management Model was released in May 2010.
SEI work in engineering practices increases the ability of software engineers to analyze, predict, and control selected functional and non-functional properties of software systems. Key SEI tools and methods include the Architecture Tradeoff Analysis Method (ATAM), the SEI Framework for Software Product Line Practice, and the SEI Service Migration and Reuse Technique (SMART).
The SEI is also the home of the CERT/CC (CERT Coordination Center), a federally funded computer security organization. The SEI CERT Program's primary goals are to ensure that appropriate technology and systems-management practices are used to resist attacks on networked systems and to limit damage and ensure continuity of critical services in spite of successful attacks, accidents, or failures. The SEI CERT program is working with US-CERT to produce the Build Security In (BSI) website, which provides guidelines for building security into every phase of the software development lifecycle. The SEI has also conducted research on insider threats and computer forensics. Results of this research and other information now populate the CERT Virtual Training Environment.
Carnegie Mellon, Capability Maturity Model, CMM, CMMI, Architecture Tradeoff Analysis Method, ATAM, and CERT are registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.
The SEI Partner Network helps the SEI disseminate software engineering best practices. Organizations and individuals in the SEI Partner Network are selected, trained, and licensed by the SEI to deliver authentic SEI services, which include courses, consulting methods, and management processes. The network currently consists of nearly 250 partner organizations worldwide.
The SEI sponsors national and international conferences, workshops, and user-group meetings. Other events cover subjects including acquisition of software-intensive systems, commercial off-the-shelf (COTS)-based systems, network security and survivability, software process research, software product lines, CMMI, and the SEI Team Software Process.
SEI courses are currently offered at the SEI's locations in the United States and Europe. In addition, using licensed course materials, SEI Partners train individuals.
The SEI Membership Program helps the software engineering community to network. SEI Members include small business owners, software and systems programmers, CEOs, directors, and managers from both Fortune 500 companies and government organizations.
Through the SEI Affiliate Program, organizations place technical experts with the SEI for periods ranging from 12 months to four years. Affiliates currently are working on projects with the SEI to identify, develop, and demonstrate improved software engineering practices.
In order to recognize outstanding achievement in improving an organization's ability to create and evolve software-dependent systems, the SEI and IEEE Computer Society created the Software Process Achievement Award program. In addition to rewarding excellence, the purpose of this award is to foster continuous advancement in the practice of software engineering and to disseminate insights, experiences, and proven practices throughout the relevant research and practitioner communities.
The SEI publishes reports that offer new technical information about software engineering topics, whether theoretical or applied. The SEI also publishes books on software engineering for industry, government and military applications and practices.
In addition, the SEI offers public courses, workshops, and conferences in process improvement, software architecture and product lines, and security.
On November 11, 2015, the head of the Tor Project accused the Software Engineering Institute of aiding the Federal Bureau of Investigation in uncovering the identities of users of the Tor network. Later prosecution showed that the research behind the attack was paid for by the Department of Defense and that the FBI obtained the results by subpoena.
SEI has been an occasional site of anti-war movement and peace movement protests, many of which have been organized by Pittsburgh's Thomas Merton Center.
Software crisis
Software crisis is a term used in the early days of computing science for the difficulty of writing useful and efficient computer programs in the required time. The software crisis was due to the rapid increases in computer power and the complexity of the problems that could now be tackled. With the increase in the complexity of the software, many software problems arose because existing methods were inadequate.
The term "software crisis" was coined by some attendees at the first NATO Software Engineering Conference in 1968 at Garmisch, Germany. Edsger Dijkstra's 1972 ACM Turing Award Lecture makes reference to this same problem:
The causes of the software crisis were linked to the overall complexity of hardware and the software development process; the crisis manifested itself in several ways.
The main cause is that improvements in computing power had outpaced the ability of programmers to effectively utilize those capabilities. Various processes and methodologies have been developed over the last few decades to improve software quality management, such as procedural programming and object-oriented programming. However, software projects that are large, complicated, poorly specified, or involve unfamiliar aspects are still vulnerable to large, unanticipated problems.
Swedish Academy
The Swedish Academy (), founded in 1786 by King Gustav III, is one of the Royal Academies of Sweden. Its 18 members, who are elected for life, comprise the highest Swedish language authority. Outside Scandinavia, it is best known as the body that chooses the laureates for the annual Nobel Prize in Literature, awarded in memory of the donor Alfred Nobel.
The Swedish Academy was founded in 1786 by King Gustav III. Modelled after the Académie française, it has 18 members. It is said that Gustav III originally intended there to be twenty members, half the number of those in the French Academy, but eventually decided on eighteen because the Swedish expression "De Aderton" – 'The Eighteen' – had such a fine solemn ring. The academy's motto is "Talent and Taste" ("Snille och Smak" in Swedish). The academy's primary purpose is to further the "purity, strength, and sublimity of the Swedish language" ("Svenska Språkets renhet, styrka och höghet") (Walshe, 1965). To that end the academy publishes two dictionaries. The first is a one-volume glossary called "Svenska Akademiens ordlista" ("SAOL"). The second is a multi-volume dictionary, edited on principles similar to those of the "Oxford English Dictionary", entitled "Svenska Akademiens Ordbok" ("SAOB"). The "SAOL" has reached its 14th edition, while the first volume of the "SAOB" was published in 1898 and, as of 2017, work has progressed to words beginning with the letter "V".
The building now known as the Stockholm Stock Exchange Building was built for the bourgeoisie. The bottom floor was used as a trading exchange (this later became the stock exchange), and the upper floor was used for balls, New Year's Eve parties, etc. When the academy was founded, the ballroom was the biggest room in Stockholm that could be heated and thus used in the winter, so the King asked if he could borrow it.
The academy has had its annual meeting there every year since, attended by members of the Swedish royal family. However, it was not until 1914 that the academy gained permanent use of the upper floor as their own. It is here that the academy meets and, amongst other business, announces the names of Nobel Prize laureates. This task arguably makes the academy one of the world's most influential literary bodies.
Members are elected by a secret ballot in the Academy, and before the result is made public it must be submitted to the Academy's Patron, the King of Sweden, for his approval. Members of the Academy include writers, linguists, literary scholars, historians and a prominent jurist. Initially writers were in the minority in the Academy, but during the twentieth century the number of writers grew to represent more than half of The Eighteen. The Swedish Academy has a long history of being a heavily male-dominated institution, but it has recently moved towards better equality. Since 20 December 2019 one third of the chairs have belonged to female Academy members.
Prior to 2018 it was not possible for members of the academy to resign; membership was for life, although the academy could decide to exclude members. This happened twice to Gustaf Mauritz Armfelt, who was excluded in 1794, re-elected in 1805 and excluded again in 1811. In 1989, Werner Aspenström, Kerstin Ekman and Lars Gyllensten chose to stop participating in the meetings of the academy, over its refusal to express support for Salman Rushdie when Ayatollah Khomeini condemned him to death for "The Satanic Verses", and in 2005, Knut Ahnlund made the same decision, as a protest against the choice of Elfriede Jelinek as Nobel laureate for 2004. On 25 November 2017, Lotta Lotass said in an interview that she had not participated in the meetings of the academy for more than two years and did not consider herself a member any more.
Dag Hammarskjöld's former farm at Backåkra, close to Ystad in southern Sweden, was bought in 1957 as a summer residence by Hammarskjöld, then Secretary-General of the United Nations (1953–1961). The south wing of the farm is reserved as a summer retreat for the 18 members of the Swedish Academy, of which Hammarskjöld was a member.
On 11 April 2019, the academy published its financial statements for the first time in its history. According to it, the academy owned financial assets worth 1.58 billion Swedish kronor at the end of 2018 (equal to $170M, €150M, or £130M).
In April 2018, three members of the academy board resigned in response to a sexual-misconduct investigation involving author Jean-Claude Arnault, husband of board member Katarina Frostenson. Arnault was accused by at least 18 women of sexual assault and harassment; he denied all accusations. The three members resigned in protest over the lack of appropriate action against Arnault. Two former permanent secretaries, Sture Allén and Horace Engdahl, called the current leader, Sara Danius, a weak leader.
On 10 April, Danius resigned from her position with the academy, bringing the number of empty seats to four. Frostenson voluntarily agreed to withdraw from participating in the academy, bringing the total of withdrawals to five. Because two other seats were still vacant after the Rushdie affair, this left only 11 active members. The scandal was widely seen as damaging to the credibility of the Nobel prize in Literature and the authority of the academy. "With this scandal you cannot possibly say that this group of people has any kind of solid judgment," noted Swedish journalist Björn Wiman.
On 27 April 2018, the Swedish Economic Crime Authority opened a preliminary investigation regarding financial crime linked to an association run by Arnault and Frostenson, which had received funding from the academy.
On 2 May 2018, the Swedish King amended the rules of the academy and made it possible for members to resign. The new rules also state that a member who has been inactive in the work of the academy for more than two years can be asked to resign. Following the new rules, the first members to formally be granted permission to leave the academy and vacate their chairs were Kerstin Ekman, Klas Östergren, Sara Stridsberg and Lotta Lotass.
On 4 May 2018, the Swedish Academy announced that following the preceding internal struggles the Nobel laureate for literature selected in 2018 would be postponed until 2019, when two laureates would be selected.
Since 1901, the Swedish Academy has annually decided who will be the laureate for the Nobel Prize in Literature, awarded in memory of the donor Alfred Nobel.
The Swedish Academy annually awards nearly 50 different prizes and scholarships, most of them for domestic Swedish authors. Common to all is that they are awarded without competition and without application. The Dobloug Prize, the largest of these at $40,000, is a literature prize awarded for Swedish and Norwegian fiction.
The Stora Priset (Swedish for "the Big Prize") was instituted by King Gustav III. The prize, which consists of a single gold medal, is the most prestigious award that can be bestowed by the Swedish Academy. It has been awarded to, among others, Selma Lagerlöf (1904 and 1909), Herbert Tingsten (1966), Astrid Lindgren (1971), Evert Taube (1972) and Tove Jansson (1994).
Sture Allén
Sture Allén (born 31 December 1928) is a retired Swedish professor of computational linguistics at the University of Gothenburg, who was the permanent secretary of the Swedish Academy between 1986 and 1999. Born in Gothenburg, he was elected to chair 3 of the Swedish Academy in 1980. He is also a member of the Norwegian Academy of Science and Letters.
South Korea
South Korea (Korean: 한국, RR: "Hanguk"; literally 남한, RR: "Namhan", or 남조선, MR: "Namchosŏn" in North Korean usage), officially the Republic of Korea (ROK; Korean: 대한민국, RR: "Daehan Minguk"), is a country in East Asia, constituting the southern part of the Korean Peninsula and sharing a land border with North Korea.
The name "Korea" is derived from Goguryeo, which was one of the great powers in East Asia during its time, ruling most of the Korean Peninsula, Manchuria, parts of the Russian Far East and Inner Mongolia under Gwanggaeto the Great. Its capital, Seoul, is a major global city and half of South Korea's over 51 million people live in the Seoul Capital Area, the fourth largest metropolitan economy in the world.
The Korean Peninsula was inhabited as early as the Lower Paleolithic period. Its first kingdom was noted in Chinese records in the early 7th century BC. Following the unification of the Three Kingdoms of Korea into Silla and Balhae in the late 7th century, Korea was ruled by the Goryeo dynasty (918–1392) and the Joseon dynasty (1392–1897). The succeeding Korean Empire was annexed into the Empire of Japan in 1910. After World War II, Korea was divided into Soviet and U.S.-administered zones, with the latter becoming the Republic of Korea in August 1948. In 1950, a North Korean invasion began the Korean War and after its end in 1953, the country's economy began to soar, recording the fastest rise in average GDP per capita in the world between 1980 and 1990. The June Struggle led to the end of authoritarian rule in 1987 and the country is now the most advanced democracy and has the highest level of press freedom in Asia. It has the 10th highest social mobility in the world, with 17% of children born to parents in the bottom half of educational attainment ending up in the top quarter. South Korea is a member of the OECD's Development Assistance Committee, the G20 and the Paris Club.
South Korea is a highly developed country and the world's 12th-largest economy by nominal GDP. Its citizens enjoy the world's fastest Internet connection speeds and the densest high-speed railway network. It was named the second-best country in the world to raise kids in the 2020 UN Child Flourishing Index, with the best chance at survival, thriving and well-being due to good healthcare, education, and nutrition. The world's 5th largest exporter and 8th largest importer, South Korea is a global leader in many technology and innovation-driven fields. Since 2014, South Korea has been named the world's most innovative country by the Bloomberg Innovation Index for 6 consecutive years. In the 21st century, South Korea has been renowned for its globally influential pop culture, particularly in music (K-pop), TV dramas and cinema, a phenomenon referred to as the Korean Wave.
The name "Korea" derives from the name "Goryeo". The name "Goryeo" itself was first used by the ancient kingdom of Goguryeo in the 5th century as a shortened form of its name. The 10th-century kingdom of Goryeo succeeded Goguryeo, and thus inherited its name, which was pronounced by the visiting Persian merchants as "Korea". The modern spelling of Korea first appeared in the late 17th century in the travel writings of the Dutch East India Company's Hendrick Hamel. Despite the coexistence of the spellings "Corea" and "Korea" in 19th century publications, some Koreans believe that Imperial Japan, around the time of the Japanese occupation, intentionally standardised the spelling on "Korea", making Japan appear first alphabetically.
After Goryeo was replaced by Joseon in 1392, Joseon became the official name for the entire territory, though it was not universally accepted. The new official name has its origin in the ancient kingdom of Gojoseon (2333 BC). In 1897, the Joseon dynasty changed the official name of the country from "Joseon" to "Daehan Jeguk" (Korean Empire). The name "Daehan" (Great Han) derives from Samhan (Three Han), referring to the Three Kingdoms of Korea, not the ancient confederacies in the southern Korean Peninsula. However, the name "Joseon" was still widely used by Koreans to refer to their country, though it was no longer the official name. Under Japanese rule, the two names "Han" and "Joseon" coexisted. There were several groups who fought for independence, the most notable being the Provisional Government of the Republic of Korea.
Following the surrender of Japan, in 1945, the "Republic of Korea" was adopted as the legal English name for the new country. However, it is not a direct translation of the Korean name. As a result, the Korean name "Daehan Minguk" is sometimes used by South Koreans as a metonym to refer to the Korean ethnicity (or "race") as a whole, rather than just the South Korean state.
Since the government only controlled the southern part of the Korean Peninsula, the informal term "South Korea" was coined, becoming increasingly common in the Western world. While South Koreans use "Han" (or "Hanguk") to refer to both Koreas collectively, North Koreans and ethnic Koreans living in China and Japan use the term "Joseon" instead.
The Korean Peninsula was inhabited as early as the Lower Paleolithic period. The history of Korea begins with the founding of Joseon (also known as "Gojoseon", or Old Joseon, to differentiate it with the 14th century dynasty) in 2333 BCE by Dangun, according to Korea's foundation mythology.
Saint George
Saint George (, "Geṓrgios"; ; d. 23 April 303), also George of Lydda, was a soldier of Cappadocian Greek origins, member of the Praetorian Guard for Roman emperor Diocletian, who was sentenced to death for refusing to recant his Christian faith. He became one of the most venerated saints and megalomartyrs in Christianity, and he has been especially venerated as a military saint since the Crusades.
In hagiography, as one of the Fourteen Holy Helpers and one of the most prominent military saints, he is immortalised in the legend of Saint George and the Dragon. His memorial, Saint George's Day, is traditionally celebrated on 23 April. (See under "Feast days" below for the use of the Julian calendar by the Eastern Orthodox Church.)
England, Ethiopia, Georgia, the Autonomous Communities of Catalonia and Aragon in Spain, and several other nation states, cities, universities, professions and organisations all claim George as their patron.
Very little is known about George's life, but it is thought he was a Roman officer of Greek descent from Cappadocia who was martyred in one of the pre-Constantinian persecutions. Beyond this, early sources give conflicting information.
There are two main versions of the legend, a Greek and a Latin version, which can both be traced to the 5th or 6th century. The saint's veneration dates to the 5th century with some certainty, and possibly still to the 4th. The addition of the dragon legend dates to the 11th century.
The earliest text which preserves fragments of George's narrative is in a Greek hagiography which is identified by Hippolyte Delehaye of the scholarly Bollandists to be a palimpsest of the 5th century.
An earlier work by Eusebius, "Church history", written in the 4th century, contributed to the legend but did not name George or provide significant detail.
The work of the Bollandists Daniel Papebroch, Jean Bolland, and Godfrey Henschen in the 17th century was one of the first pieces of scholarly research to establish the saint's historicity via their publications in "Bibliotheca Hagiographica Graeca". Pope Gelasius I stated in 494 that George was among those saints "whose names are justly reverenced among men, but whose actions are known only to God."
The most complete version of the fifth-century Greek text survives in a translation into Syriac from about 600. From text fragments preserved in the British Library a translation into English was published in 1925.
In the Greek tradition, George was born to Greek Christian parents, in Cappadocia.
His father died for the faith when George was fourteen, and his mother returned with George to her homeland of Syria Palaestina.
A few years later, George's mother died. George travelled to the eastern imperial capital, Nicomedia, where he joined the Roman army.
George was persecuted by one "Dadianus". In later versions of the Greek legend, this name is rationalised to Diocletian, and George's martyrdom is placed in the Diocletian persecution of AD 303. The setting in Nicomedia is also secondary, and inconsistent with the earliest cultus of the saint being located in Diospolis.
George was executed by decapitation before Nicomedia's city wall, on 23 April 303. A witness of his suffering convinced Empress Alexandra of Rome to become a Christian as well, so she joined George in martyrdom. His body was returned to Lydda for burial, where Christians soon came to honour him as a martyr.
The Latin "Acta Sancti Georgii" (6th century) follows the general course of the Greek legend, but Diocletian here becomes "Dacian, Emperor of the Persians". George lived and died in Melitene in Cappadocia.
His martyrdom was greatly extended to more than twenty separate tortures over the course of seven years.
Over the course of his martyrdom, 40,900 pagans were converted to Christianity, including the empress Alexandra. When George finally died, the wicked Dacian was carried away in a whirlwind of fire.
In later Latin versions, the persecutor is the Roman emperor Decius, or a Roman judge named Dacian serving under Diocletian.
There is little information on the early life of George. Herbert Thurston in "The Catholic Encyclopedia" states that based upon an ancient cultus, narratives of the early pilgrims, and the early dedications of churches to George, going back to the fourth century, "there seems, therefore, no ground for doubting the historical existence of St. George", although no faith can be placed in either the details of his history or his alleged exploits.
Although the Diocletianic Persecution of 303, associated with military saints because the persecution was aimed at Christians among the professional soldiers of the Roman army, is of undisputed historicity, the identity of George as a historical individual cannot be ascertained with any confidence.
According to Donald Attwater, "No historical particulars of his life have survived, ... The widespread veneration for St George as a soldier saint from early times had its centre in Palestine at Diospolis, now Lydda. St George was apparently martyred there, at the end of the third or the beginning of the fourth century; that is all that can be reasonably surmised about him."
Edward Gibbon argued that George, or at least the legend from which the above is distilled, is based on George of Cappadocia, a notorious 4th-century Arian bishop who was Athanasius of Alexandria's most bitter rival, and that it was he who in time became George of England.
This identification is seen as highly improbable. Bishop George was slain by Gentile Greeks for exacting onerous taxes, especially inheritance taxes. J. B. Bury, who edited the 1906 edition of Gibbon's "The Decline and Fall", wrote "this theory of Gibbon's has nothing to be said for it." He adds that: "the connection of St. George with a dragon-slaying legend does not relegate him to the region of the myth".
Saint George in all likelihood was martyred before the year 290.
The legend of Saint George and the Dragon was first recorded in the 11th century, in a Georgian source.
It reached Catholic Europe in the 12th century. In the "Golden Legend", by 13th-century Archbishop of Genoa Jacobus da Varagine, George's death was at the hands of Dacian, and about the year 287.
The tradition tells that a fierce dragon was causing panic at the city of Silene, Libya, at the time George arrived there. To prevent the dragon from devastating the city, the people gave it two sheep each day, and when the sheep were not enough they were forced to sacrifice humans instead. The victim was chosen by the city's own people, and on one occasion the king's daughter was selected, with no one willing to take her place. George saved the girl by slaying the dragon with a lance. The king was so grateful that he offered George treasures as a reward for saving his daughter's life, but George refused and instead gave them to the poor. The people of the city were so amazed at what they had witnessed that they became Christians and were all baptized.
The "Golden Legend" offered a historicised narration of George's encounter with a dragon.
This account was very influential and it remains the most familiar version in English owing to William Caxton's 15th-century translation.
In the mediaeval romances, the lance with which George slew the dragon was called Ascalon, after the Levantine city of Ashkelon, today in Israel. The name "Ascalon" was used by Winston Churchill for his personal aircraft during World War II, according to records at Bletchley Park. In Sweden, the princess rescued by George is held to represent the kingdom of Sweden, while the dragon represents an invading army. Several sculptures of George battling the dragon can be found in Stockholm, the earliest inside Storkyrkan ("The Great Church") in the Old Town. Iconography of the horseman with spear overcoming evil was widespread throughout the Christian period.
George (, "Jiriyas" or "Girgus") is included in some Muslim texts as a prophetic figure. The Islamic sources state that he lived among a group of believers who were in direct contact with the last apostles of Jesus. He is described as a rich merchant who opposed erection of Apollo's statue by Mosul's king Dadan. After confronting the king, George was tortured many times to no effect, was imprisoned and was aided by the angels. Eventually, he exposed that the idols were possessed by Satan, but was martyred when the city was destroyed by God in a rain of fire.
Muslim scholars had tried to find a historical connection of the saint due to his popularity. According to Muslim legend, he was martyred under the rule of Diocletian and was killed three times but resurrected every time. The legend is more developed in the Persian version of al-Tabari wherein he resurrects the dead, makes trees sprout and pillars bear flowers. After one of his deaths, the world is covered by darkness which is lifted only when he is resurrected. He is able to convert the queen but she is put to death. He then prays to God to allow him to die, which prayer is granted.
Al-Tha`labi states that he was from Palestine and lived in the times of some disciples of Jesus. He was killed many times by the king of Mosul, and resurrected each time. When the king tried to starve him, he touched a piece of dry wood brought by a woman and turned it green, with varieties of fruits and vegetables growing from it. After his fourth death, the city was burnt along with him. Ibn al-Athir's account of one of his deaths is parallel to the crucifixion of Jesus, stating, "When he died, God sent stormy winds and thunder and lightning and dark clouds, so that darkness fell between heaven and earth, and people were in great wonderment..." The account adds that the darkness was lifted after his resurrection.
A titular church built in Lydda during the reign of Constantine the Great (reigned 306–37) was consecrated to "a man of the highest distinction", according to the church history of Eusebius; the name of the "titulus" "patron" was not disclosed, but later he was asserted to have been George.
The veneration of George spread from Syria Palaestina through Lebanon to the rest of the Byzantine Empire—though the martyr is not mentioned in the Syriac "Breviarium"—and the region east of the Black Sea.
By the 5th century, the veneration of George had reached the Christian Western Roman Empire, as well: in 494, George was canonized as a saint by Pope Gelasius I, among those "whose names are justly reverenced among men, but whose acts are known only to [God]."
The early cult of the saint was localized in Diospolis (Lydda), in Palestine.
The first description of Lydda as a pilgrimage site where George's relics were venerated is "De Situ Terrae Sanctae" by
the archdeacon Theodosius, written between 518 and 530.
By the end of the 6th century, the center of his veneration appears to have shifted to Cappadocia.
The "Life" of Saint Theodore of Sykeon, written in the 7th century, mentions the veneration of the relics of the saint in Cappadocia.
By the time of the early Muslim conquests of the mostly Christian and Zoroastrian Middle East, a basilica in Lydda dedicated to George existed. The church was destroyed by Muslims in 1010, but was later rebuilt and dedicated to George by the Crusaders. In 1191 and during the conflict known as the Third Crusade (1189–92), the church was again destroyed by the forces of Saladin, Sultan of the Ayyubid dynasty (reigned 1171–93). A new church was erected in 1872 and is still standing. In England, he was mentioned among the martyrs by the 8th-century monk Bede. The "Georgslied" is an adaptation of his legend in Old High German, composed in the late 9th century. The earliest dedication to the saint in England is a church at Fordington, Dorset that is mentioned in the will of Alfred the Great. George did not rise to the position of "patron saint" of England, however, until the 14th century, and he was still obscured by Edward the Confessor, the traditional patron saint of England, until in 1552 during the reign of Edward VI all saints' banners other than George's were abolished in the English Reformation.
Belief in an apparition of George heartened the Franks at the Battle of Antioch in 1098, and a similar appearance occurred the following year at Jerusalem. The chivalric military Order of Sant Jordi d'Alfama was established by king Peter the Catholic from the Crown of Aragon in 1201, Republic of Genoa, Kingdom of Hungary (1326), and by Frederick III, Holy Roman Emperor. Edward III of England put his Order of the Garter under the banner of George, probably in 1348. The chronicler Jean Froissart observed the English invoking George as a battle cry on several occasions during the Hundred Years' War. In his rise as a national saint, George was aided by the very fact that the saint had no legendary connection with England, and no specifically localized shrine, as that of Thomas Becket at Canterbury: "Consequently, numerous shrines were established during the late fifteenth century," Muriel C. McClendon has written, "and his did not become closely identified with a particular occupation or with the cure of a specific malady."
In the wake of the Crusades, George became a model of chivalry in works of literature, including medieval romances. In the 13th century, Jacobus de Voragine, Archbishop of Genoa, compiled the "Legenda Sanctorum", ("Readings of the Saints") also known as "Legenda Aurea" (the "Golden Legend"). Its 177 chapters (182 in some editions) include the story of George, among many others. After the invention of the printing press, the book became a bestseller.
The establishment of George as a popular saint and protective giant in the West that had captured the medieval imagination was codified by the official elevation of his feast to a "festum duplex" at a church council in 1415, on the date that had become associated with his martyrdom, 23 April. There was wide latitude from community to community in celebration of the day across late medieval and early modern England, and no uniform "national" celebration elsewhere, a token of the popular and vernacular nature of George's "cultus" and its local horizons, supported by a local guild or confraternity under George's protection, or the dedication of a local church. When the English Reformation severely curtailed the saints' days in the calendar, Saint George's Day was among the holidays that continued to be observed.
In April 2019, the parish church of São Jorge, in São Jorge, Madeira Island, Portugal, solemnly received the relics of George, patron saint of the parish, during the celebrations of the 504th anniversary of its foundation. The relics were brought by the new Bishop of Funchal, D. Nuno Brás.
George is renowned throughout the Middle East, as both saint and prophet. His veneration by Christians and Muslims lies in his composite personality combining several Biblical, Quranic and other ancient mythical heroes.
William Dalrymple, who reviewed the literature in 1999, tells us that J. E. Hanauer, in his 1907 book "Folklore of the Holy Land: Muslim, Christian and Jewish", mentioned a shrine in the village of Beit Jala, beside Bethlehem, which at the time was frequented by Christians who regarded it as the birthplace of George and by some Jews who regarded it as the burial place of the Prophet Elias. According to Hanauer, in his day the monastery was "a sort of madhouse. Deranged persons of all the three faiths are taken thither and chained in the court of the chapel, where they are kept for forty days on bread and water, the Eastern Orthodox priest at the head of the establishment now and then reading the Gospel over them, or administering a whipping as the case demands." In the 1920s, according to Taufiq Canaan's "Mohammedan Saints and Sanctuaries in Palestine", nothing seemed to have changed, and all three communities were still visiting the shrine and praying together.
Dalrymple himself visited the place in 1995. "I asked around in the Christian Quarter in Jerusalem, and discovered that the place was very much alive. With all the greatest shrines in the Christian world to choose from, it seemed that when the local Arab Christians had a problem—an illness, or something more complicated—they preferred to seek the intercession of George in his grubby little shrine at Beit Jala rather than praying at the Church of the Holy Sepulchre in Jerusalem or the Church of the Nativity in Bethlehem." He asked the priest at the shrine "Do you get many Muslims coming here?" The priest replied, "We get hundreds! Almost as many as the Christian pilgrims. Often, when I come in here, I find Muslims all over the floor, in the aisles, up and down."
The "Encyclopædia Britannica" quotes G. A. Smith in his "Historic Geography of the Holy Land" p. 164 saying "The Mahommedans who usually identify St. George with the prophet Elijah, at Lydda confound his legend with one about Christ himself. Their name for Antichrist is Dajjal, and they have a tradition that Jesus will slay Antichrist by the gate of Lydda. The notion sprang from an ancient bas-relief of George and the Dragon on the Lydda church. But Dajjal may be derived, by a very common confusion between "n" and "l", from Dagon, whose name two neighbouring villages bear to this day, while one of the gates of Lydda used to be called the Gate of Dagon."
George is described as a prophetic figure in Islamic sources. George is venerated by some Christians and Muslims because of his composite personality, combining several Biblical, Quranic and other ancient mythical heroes. In some sources he is identified with Elijah or Mar Elis, in others with George or Mar Jirjus, and in still others with al-Khidr. The last epithet, meaning the "green prophet", is common to both Christian and Muslim folk piety. Samuel Curtiss, who visited an artificial cave dedicated to him where he is identified with Elijah, reports that childless Muslim women used to visit the shrine to pray for children. Per tradition, he was brought to his place of martyrdom in chains; thus priests of the Church of St. George chain the sick, especially the mentally ill, overnight or longer for healing. This is sought after by both Muslims and Christians.
The mosque of Nabi Jurjis, which was restored by Timur in the 14th century, was located in Mosul and supposedly contained the tomb of George. It was, however, destroyed in July 2014 by the Islamic State of Iraq and the Levant, which also destroyed the Mosque of the Prophet Sheeth (Seth) and the Mosque of the Prophet Younis (Jonah). The militants claimed such mosques had become places for apostasy instead of prayer.
George or "Hazrat" Jurjays was the patron saint of Mosul. Along with Theodosius, he was revered by both Christian and Muslim communities of Jazira and Anatolia. The wall paintings of Kırk Dam Altı Kilise at Belisırma dedicated to him are dated between 1282–1304. These paintings depict him as a mounted knight appearing between donors including a Georgian lady called Thamar and her husband, the Emir and Consul Basil, while the Seljuk Sultan Mesud II and Byzantine Emperor Androncius II are also named in the inscriptions.
A shrine attributed to prophet George can be found in Diyarbakir, Turkey. Evliya Celebi states in his "Seyahatname" that he visited the tombs of prophet Jonah and prophet George in the city.
In the General Roman Calendar, the feast of George is on 23 April. In the Tridentine Calendar of 1568, it was given the rank of "Semidouble". In Pope Pius XII's 1955 calendar this rank was reduced to "Simple", and in Pope John XXIII's 1960 calendar to a "Commemoration". Since Pope Paul VI's 1969 revision, it appears as an optional "Memorial". In some countries, such as England, the rank is higher. In England, it is a Solemnity (Roman Catholic) or Feast (Church of England): if it falls between Palm Sunday and the Second Sunday of Easter inclusive, it is transferred to the Monday after the Second Sunday of Easter.
George is very much honoured by the Eastern Orthodox Church, wherein he is referred to as a "Great Martyr", and in Oriental Orthodoxy overall. His major feast day is on 23 April (Julian calendar 23 April currently corresponds to Gregorian calendar 6 May). If, however, the feast occurs before Easter, it is celebrated on Easter Monday instead. The Russian Orthodox Church also celebrates two additional feasts in honour of George. One is on 3 November, commemorating the consecration of a cathedral dedicated to him in Lydda during the reign of Constantine the Great (305–37). When the church was consecrated, the relics of George were transferred there. The other feast is on 26 November for a church dedicated to him in Kiev, circa 1054.
In Bulgaria, George's day is celebrated on 6 May, when it is customary to slaughter and roast a lamb. George's day is also a public holiday.
In Serbia and Bosnia and Herzegovina, the Serbian Orthodox Church refers to George as "Sveti Djordje" ("Свети Ђорђе") or "Sveti Georgije" ("Свети Георгије"). George's day ("Đurđevdan") is celebrated on 6 May, and is a common slava (patron saint day) among ethnic Serbs.
In Egypt, the Coptic Orthodox Church of Alexandria refers to George as the "Prince of Martyrs" and celebrates his martyrdom on the 23rd of Paremhat of the Coptic calendar, equivalent to 1 May.
The Copts also celebrate the consecration of the first church dedicated to him on the seventh of the month of Hatour of the Coptic calendar, usually equivalent to 17 November.
In India, the Syro-Malabar Catholic Church, one of the Eastern Catholic Churches, and the Malankara Orthodox Church venerate George. The main pilgrim centres of the saint in India are at Aruvithura and Puthuppally in Kottayam district, Edathua in Alappuzha district, and Edappally in Ernakulam district of the southern state of Kerala. The saint is commemorated each year from 27 April to 14 May at Edathua. On 27 April, after the flag-hoisting ceremony by the parish priest, the statue of the saint is taken from one of the altars and placed at the extension of the church to be venerated by the devotees until 14 May. The main feast day is 7 May, when the statue of the saint, along with those of other saints, is taken in procession around the church. Intercession to George of Edathua is believed to be efficacious in repelling snakes and in curing mental ailments. The sacred relics of George were brought to Antioch from Mardin in 900 and were taken from Antioch to Kerala, India, in 1912 by Mar Dionysius of Vattasseril and kept in the Orthodox seminary at Kundara, Kerala. H.H. Mathews II Catholicos later gave the relics to the St. George churches at Puthupally, Kottayam district, and Chandanappally, Pathanamthitta district.
George is a highly celebrated saint in both the Western and Eastern Christian churches, and many Patronages of Saint George exist throughout the world.
George is the patron saint of England. His cross forms the national flag of England, and features within the Union Flag of the United Kingdom, and other national flags containing the Union Flag, such as those of Australia and New Zealand. By the 14th century, the saint had been declared both the patron saint and the protector of the royal family.
George is the patron saint of Ethiopia. He is also the patron saint of the Ethiopian Orthodox Church, and George slaying the dragon is one of the most frequently used subjects of icons in the church.
The country of Georgia, where devotions to the saint date back to the fourth century, is not technically named after the saint, although many towns and cities around the world are; the association with George is a well-attested back-formation of the English name. George is one of the patron saints of Georgia; the name Georgia ("Sakartvelo" in Georgian) is an anglicisation of "Gurj", ultimately derived from the Persian word "gurj"/"gurjān" ("wolf"). Chronicles describing the land as "Georgie" or "Georgia" in French and English date from the early Middle Ages, as written by the travellers John Mandeville and Jacques de Vitry, "because of their special reverence for Saint George", but these accounts have been seen as folk etymology and are rejected by the scholarly community. Exactly 365 Orthodox churches in Georgia are named after George, corresponding to the number of days in a year. According to legend, George was cut into 365 pieces after he fell in battle, and every piece was spread throughout the entire country.
George is also one of the patron saints of the Mediterranean islands of Malta and Gozo. In a battle between the Maltese and the Moors, George was alleged to have been seen with Saint Paul and Saint Agatha, protecting the Maltese. George is the protector of the island of Gozo and the patron of Gozo's largest city, Victoria. The St. George's Basilica in Victoria is dedicated to him.
Devotions to George in Portugal date back to the 12th century. Nuno Álvares Pereira attributed the victory of the Portuguese in the battle of Aljubarrota in 1385 to George. During the reign of John I of Portugal (1357–1433), George became the patron saint of Portugal, and the King ordered that the saint's image on horseback be carried in the "Corpus Christi" procession. The flag of George (white with a red cross) was also carried by the Portuguese troops and hoisted in their fortresses during the 15th century. "Portugal and Saint George" became the battle cry of the Portuguese troops, and it remains the battle cry of the Portuguese Army today, with simply "Saint George" being the battle cry of the Portuguese Navy.
George is also the patron saint of the region of Aragon, in Spain, where his feast day is celebrated on 23 April and is known as "Aragon Day" ("Día de Aragón" in Spanish). He became the patron saint of the former Kingdom of Aragon and Crown of Aragon when King Pedro I of Aragon won the Battle of Alcoraz in 1096. Legend has it that victory eventually fell to the Christian armies when George appeared to them on the battlefield, helping them secure the reconquest of the city of Huesca, which had been under the Muslim control of the Taifa of Zaragoza. The battle, which had begun two years earlier in 1094, was long and arduous, and had also taken the life of King Pedro's own father, King Sancho Ramirez. With the Aragonese spirits flagging, it is said that George, descending from heaven on his charger and bearing a dark red cross, appeared at the head of the Christian cavalry, leading the knights into battle. Interpreting this as a sign of protection from God, the Christian militia returned emboldened to the battlefield, more energized than ever, convinced theirs was the banner of the one true faith. Defeated, the Moors rapidly abandoned the battlefield. After two years under siege, Huesca was liberated and King Pedro made his triumphal entry into the city. To celebrate this victory, the cross of George was adopted as the coat of arms of Huesca and Aragon, in honour of their saviour. After the taking of Huesca, King Pedro aided the military leader and nobleman Rodrigo Díaz de Vivar, otherwise known as El Cid, with a coalition army from Aragon in the long reconquest of the Kingdom of Valencia.
Tales of King Pedro's success at Huesca and in leading his armies with El Cid against the Moors, under the auspices of George on his standard, spread quickly throughout the realm and beyond the Crown of Aragon, and Christian armies throughout Europe quickly began adopting George as their protector and patron during the subsequent Crusades to the Holy Lands. By 1117, the military order of the Templars had adopted the Cross of George as a simple unifying sign for the international Christian militia, embroidered above the heart on the left-hand side of their tunics.
The Cross of George, also known in Aragon as The Cross of Alcoraz, continues to emblazon the flags of all of Aragon's provinces.
The association of George with chivalry and noblemen in Aragon continued through the ages. Indeed, even the author Miguel de Cervantes, in "Don Quixote", mentions the jousting events that took place at the festival of George in Zaragoza, in Aragon, where one could gain international renown by winning a joust against any of the knights of Aragon.
In Valencia, Catalonia, the Balearics, Malta, Sicily and Sardinia, the origins of the veneration of George go back to their shared history as territories under the Crown of Aragon, thereby sharing the same legend.
One of the highest civil distinctions awarded in Catalonia is the George's Cross ("Creu de Sant Jordi"). The Sant Jordi Awards have been awarded in Barcelona since 1957.
George ("Sant Jordi" in Catalan) is also the patron saint of Catalonia. His cross appears in many buildings and local flags, including the flag of Barcelona, the Catalan capital. A Catalan variation to the traditional legend places George's life story as having occurred in the town of Montblanc, near Tarragona.
It became fashionable in the 15th century, with the full development of classical heraldry, to provide attributed arms to saints and other historical characters from the pre-heraldic ages. The widespread attribution to George of the red cross on a white field in western art – "Saint George's Cross" – probably first arose in Genoa, which had adopted this image for its flag and George as its patron saint in the 12th century. A "vexillum beati Georgii" is mentioned in the Genovese annals for the year 1198, referring to a red flag with a depiction of George and the dragon. An illumination of this flag is shown in the annals for the year 1227. The Genoese flag with the red cross was used alongside this "George's flag" from at least 1218, and was known as the "insignia cruxata comunis Janue" ("cross ensign of the commune of Genoa"). The flag showing the saint himself was the city's principal war flag, but the flag showing the plain cross was used alongside it in the 1240s.
In 1348 Edward III of England chose George as the patron saint of his Order of the Garter, and also took to using a red-on-white cross in the hoist of his Royal Standard.
The term "Saint George's cross" was at first associated with any plain Greek cross touching the edges of the field (not necessarily red on white). Thomas Fuller in 1647 spoke of "the plain or St George's cross" as "the mother of all the others" (that is, the other heraldic crosses).
George is most commonly depicted in early icons, mosaics, and frescos wearing armour contemporary with the depiction, executed in gilding and silver colour, intended to identify him as a Roman soldier. Particularly after the Fall of Constantinople and George's association with the crusades, he is often portrayed mounted upon a white horse. Thus, a 2003 Vatican stamp (issued on the anniversary of the Saint's death) depicts an armoured George atop a white horse, killing the dragon.
Eastern Orthodox iconography also permits George to ride a black horse, as in a Russian icon in the British Museum collection.
In the south Lebanese village of Mieh Mieh, the Saint George Church for Melkite Catholics commissioned, for its 75th jubilee in 2012 and under the guidance of Mgr Sassine Gregoire, the only icons in the world portraying the whole life of George, as well as scenes of his torture and martyrdom, drawn in eastern iconographic style.
George may also be portrayed with Saint Demetrius, another early soldier saint. When the two saintly warriors are together and mounted upon horses, they may resemble earthly manifestations of the archangels Michael and Gabriel. Eastern traditions distinguish the two as George rides a white horse and Demetrius a red horse (the red pigment may appear black if it has bituminized).
George can also be identified by his spearing a dragon, whereas Demetrius may be spearing a human figure, representing Maximian.
| https://en.wikipedia.org/wiki?curid=29010 |
Secular humanism
Secular humanism is a philosophy or life stance that embraces human reason, secular ethics, and philosophical naturalism while specifically rejecting religious dogma, supernaturalism, and superstition as the basis of morality and decision making.
Secular humanism posits that human beings are capable of being ethical and moral without religion or belief in a deity. It does not, however, assume that humans are either inherently good or evil, nor does it present humans as being superior to nature. Rather, the humanist life stance emphasizes the unique responsibility facing humanity and the ethical consequences of human decisions. Fundamental to the concept of secular humanism is the strongly held viewpoint that ideology—be it religious or political—must be thoroughly examined by each individual and not simply accepted or rejected on faith. Along with this, an essential part of secular humanism is a continually adapting search for truth, primarily through science and philosophy. Many secular humanists derive their moral codes from a philosophy of utilitarianism, ethical naturalism, or evolutionary ethics, and some advocate a science of morality.
Humanists International is the world union of more than one hundred humanist, rationalist, irreligious, atheist, Bright, secular, Ethical Culture, and freethought organizations in more than 40 countries. The "Happy Human" is recognized as the official symbol of humanism internationally, used by secular humanist organizations in every part of the world. Those who call themselves humanists are estimated to number between four and five million people worldwide.
The meaning of the phrase "secular humanism" has evolved over time. The phrase has been used since at least the 1930s by Anglican priests, and in 1943 the then Archbishop of Canterbury, William Temple, was reported as warning that the "Christian tradition... was in danger of being undermined by a 'Secular Humanism' which hoped to retain Christian values without Christian faith." During the 1960s and 1970s the term was embraced by some humanists who considered themselves anti-religious, as well as by those who, although not critical of religion in its various guises, preferred a non-religious approach. The release in 1980 of "A Secular Humanist Declaration" by the newly formed Council for Democratic and Secular Humanism (CODESH, later the Council for Secular Humanism; together with CSICOP it formed the Center for Inquiry in 1991, and in 2015 both ceased separate operations, becoming CFI programs) gave secular humanism an organisational identity within the United States, but no overall organisation involved currently uses a name featuring "secular humanism".
However, many adherents of the approach reject the use of the word "secular" as obfuscating and confusing, and consider that the term "secular humanism" has been "demonized by the religious right... All too often secular humanism is reduced to a sterile outlook consisting of little more than secularism slightly broadened by academic ethics. This kind of 'hyphenated humanism' easily becomes more about the adjective than its referent". Adherents of this view, including Humanists International and the American Humanist Association, consider that the unmodified but capitalized word Humanism should be used. The endorsement by the International Humanist and Ethical Union (IHEU) of the capitalization of the word "Humanism", and the dropping of any adjective such as "secular", is quite recent. The American Humanist Association began to adopt this view in 1973, and the IHEU formally endorsed the position in 1989. In 2002 the IHEU General Assembly unanimously adopted the Amsterdam Declaration, which represents the official defining statement of World Humanism for Humanists. This declaration makes exclusive use of capitalized "Humanist" and "Humanism", which is consistent with IHEU's general practice and recommendations for promoting a unified Humanist identity. To further promote Humanist identity, these words are also free of any adjectives, as recommended by prominent members of IHEU. Such usage is not universal among IHEU member organizations, though most of them do observe these conventions.
Historical use of the term humanism (reflected in some current academic usage) is related to the writings of pre-Socratic philosophers. These writings were lost to European societies until Renaissance scholars rediscovered them through Muslim sources and translated them from Arabic into European languages. Thus the term humanist can mean a humanities scholar, as well as refer to Enlightenment and Renaissance intellectuals and those who agree with the pre-Socratics, as distinct from secular humanists.
In 1851 George Holyoake coined the term "secularism" to describe "a form of opinion which concerns itself only with questions, the issues of which can be tested by the experience of this life".
The modern secular movement coalesced around Holyoake, Charles Bradlaugh and their intellectual circle. The first secular society, the Leicester Secular Society, dates from 1851. Similar regional societies came together to form the National Secular Society in 1866.
Holyoake's secularism was strongly influenced by Auguste Comte, the founder of positivism and of modern sociology. Comte believed human history would progress in a "law of three stages" from a theological phase, to the "metaphysical", toward a fully rational "positivist" society. In later life, Comte had attempted to introduce a "religion of humanity" in light of growing anti-religious sentiment and social malaise in revolutionary France. This religion would necessarily fulfil the functional, cohesive role that supernatural religion once served.
Although Comte's religious movement was unsuccessful in France, the positivist philosophy of science itself played a major role in the proliferation of secular organizations in 19th-century England. Richard Congreve visited Paris shortly after the French Revolution of 1848, where he met Auguste Comte and was heavily influenced by his positivist system. He founded the London Positivist Society in 1867, which attracted Frederic Harrison, Edward Spencer Beesly, Vernon Lushington, and James Cotter Morison amongst others.
In 1878, the Society established the Church of Humanity under Congreve's direction. There they introduced sacraments of the Religion of Humanity and published a co-operative translation of Comte's Positive Polity. When Congreve repudiated their Paris co-religionists in 1878, Beesly, Harrison, Bridges, and others formed their own positivist society, with Beesly as president, and opened a rival centre, Newton Hall, in a courtyard off Fleet Street.
The New York City version of the church was established by English immigrant Henry Edger. The American version of the "Church of Humanity" was largely modeled on the English church. Like the English version, it was not atheistic and had sermons and sacramental rites. At times the services included readings from conventional religious works like the Book of Isaiah. It was not as significant as the church in England, but did include several educated people.
Another important precursor was the ethical movement of the 19th century. The South Place Ethical Society was founded in 1793 as the South Place Chapel on Finsbury Square, on the edge of the City of London, and in the early nineteenth century was known as "a radical gathering-place". At that point it was a Unitarian chapel, and that movement, like the Quakers, supported female equality. Under the leadership of Reverend William Johnson Fox, it lent its pulpit to activists such as Anna Wheeler, one of the first women to campaign for feminism at public meetings in England, who spoke in 1829 on "rights of women". In later decades, the chapel changed its name to the South Place Ethical Society, now the Conway Hall Ethical Society. Today Conway Hall explicitly identifies itself as a humanist organisation, albeit one primarily focused on concerts, events, and the maintenance of its humanist library and archives. It bills itself as "The landmark of London’s independent intellectual, political and cultural life."
In America, the ethical movement was propounded by Felix Adler, who established the New York Society for Ethical Culture in 1877. By 1886, similar societies had sprouted up in Philadelphia, Chicago and St. Louis.
These societies all adopted the same statement of principles:
In effect, the movement responded to the religious crisis of the time by replacing theology with unadulterated morality. It aimed to "disentangle moral ideas from religious doctrines, metaphysical systems, and ethical theories, and to make them an independent force in personal life and social relations." Adler was also particularly critical of the religious emphasis on creed, believing it to be the source of sectarian bigotry. He therefore attempted to provide a universal fellowship devoid of ritual and ceremony, for those who would otherwise be divided by creeds. For the same reasons the movement at that time adopted a neutral position on religious beliefs, advocating neither atheism nor theism, agnosticism nor deism.
The first ethical society along these lines in Britain was founded in 1886. By 1896 the four London societies formed the Union of Ethical Societies, and between 1905 and 1910 there were over fifty societies in Great Britain, seventeen of which were affiliated with the Union. The Union of Ethical Societies would later incorporate as the Ethical Union, a registered charity, in 1928. Under the leadership of Harold Blackham, it renamed itself the British Humanist Association in 1967. It became Humanists UK in 2017.
In the 1930s, "humanism" was generally used in a religious sense by the Ethical movement in the United States, and not much favoured among the non-religious in Britain. Yet "it was from the Ethical movement that the non-religious philosophical sense of "Humanism" gradually emerged in Britain, and it was from the convergence of the Ethical and Rationalist movements that this sense of "Humanism" eventually prevailed throughout the Freethought movement".
As an organized movement, Humanism itself is quite recent – born at the University of Chicago in the 1920s, and made public in 1933 with the publication of the first Humanist Manifesto. The American Humanist Association was incorporated as an Illinois non-profit organization in 1943. The International Humanist and Ethical Union was founded in 1952, when a gathering of world Humanists met under the leadership of Sir Julian Huxley. The British Humanist Association took that name in 1967, but had developed from the Union of Ethical Societies which had been founded by Stanton Coit in 1896.
Humanists have put together various Humanist Manifestos, in attempts to unify the Humanist identity.
The original signers of the first Humanist Manifesto of 1933 declared themselves to be religious humanists. Because, in their view, traditional religions were failing to meet the needs of their day, the signers declared it a necessity to establish a religion that was a dynamic force capable of meeting those needs. However, this "religion" did not profess a belief in any god. Since then, two additional Manifestos have been written to replace the first. In the Preface of Humanist Manifesto II (1973), the authors Paul Kurtz and Edwin H. Wilson assert that faith and knowledge are required for a hopeful vision for the future. Manifesto II includes a section on Religion, stating that traditional religion renders a disservice to humanity. Manifesto II recognizes the following groups to be part of its naturalistic philosophy: "scientific", "ethical", "democratic", "religious", and "Marxist" humanism.
In 2002, the IHEU General Assembly unanimously adopted the Amsterdam Declaration 2002 which represents the official defining statement of World Humanism.
All member organisations of the International Humanist and Ethical Union are required by bylaw 5.1 to accept the "Minimum Statement on Humanism":
Humanism is a democratic and ethical life stance, which affirms that human beings have the right and responsibility to give meaning and shape to their own lives. It stands for the building of a more humane society through an ethic based on human and other natural values in the spirit of reason and free inquiry through human capabilities. It is not theistic, and it does not accept supernatural views of reality.
To promote and unify "Humanist" identity, prominent members of the IHEU have endorsed the following statements on Humanist identity:
According to the Council for Secular Humanism, within the United States, the term "secular humanism" describes a world view with the following elements and principles:
"A Secular Humanist Declaration" was issued in 1980 by the Council for Secular Humanism's predecessor, CODESH. It lays out ten ideals: Free inquiry as opposed to censorship and imposition of belief; separation of church and state; the ideal of freedom from religious control and from jingoistic government control; ethics based on critical intelligence rather than that deduced from religious belief; moral education; religious skepticism; reason; a belief in science and technology as the best way of understanding the world; evolution; and education as the essential method of building humane, free, and democratic societies.
A general outline of Humanism is also set out in the "Humanist Manifesto" prepared by the American Humanist Association.
In the 20th and 21st centuries, members of Humanist organizations have disagreed as to whether Humanism is a religion. They categorize themselves in one of three ways. Religious humanism, in the tradition of the earliest Humanist organizations in the UK and US, attempts to fulfill the traditional social role of religion. Secular humanism considers all forms of religion, including religious humanism, to be superseded. In order to sidestep disagreements between these two factions, recent Humanist proclamations define Humanism as a "life stance"; proponents of this view make up the third faction. All three types of Humanism (and all three of the American Humanist Association's manifestos) reject deference to supernatural beliefs, promoting the practical, methodological naturalism of science but also going further and supporting the philosophical stance of metaphysical naturalism. The result is an approach to issues in a secular way. Humanism addresses ethics without reference to the supernatural as well, attesting that ethics is a human enterprise (see naturalistic ethics).
Secular humanism does not prescribe a specific theory of morality or code of ethics. As stated by the Council for Secular Humanism,
Secular humanism affirms that with the present state of scientific knowledge, dogmatic belief in an absolutist moral/ethical system (e.g. Kantian, Islamic, Christian) is unreasonable. However, it affirms that individuals engaging in rational moral/ethical deliberations can discover some universal "objective standards".
Many Humanists adopt principles of the Golden Rule. Some believe that universal moral standards are required for the proper functioning of society. However, they believe such necessary universality can and should be achieved by developing a richer notion of morality through reason, experience and scientific inquiry rather than through faith in a supernatural realm or source.
Humanism is compatible with atheism and agnosticism, but being atheist or agnostic does not automatically make one a humanist. Nevertheless, humanism is diametrically opposed to state atheism.
According to Paul Kurtz, considered by some to be the founder of the American secular humanist movement, one of the differences between Marxist–Leninist atheists and humanists is the latter's commitment to "human freedom and democracy" while stating that the militant atheism of the Soviet Union consistently violated basic human rights.
Kurtz also stated that the "defense of religious liberty is as precious to the humanist as are the rights of the believers". Greg M. Epstein states that, "modern, organized Humanism began, in the minds of its founders, as nothing more nor less than a religion without a God".
Many Humanists address ethics from the point of view of ethical naturalism, and some support an actual science of morality.
Secular humanist organizations are found in all parts of the world. Those who call themselves humanists are estimated to number between four and five million people worldwide in 31 countries, but there is uncertainty because of the lack of a universal definition throughout censuses. Humanism is a non-theistic belief system and, as such, it could be a sub-category of "Religion" only if that term is defined to mean "Religion and (any) belief system". This is the case in the International Covenant on Civil and Political Rights on freedom of religion "and" beliefs. Many national censuses contentiously define Humanism as a further sub-category of the sub-category "No Religion", which typically includes atheist, rationalist and agnostic thought. In England and Wales, 25% of people specify that they have 'No religion', up from 15% in 2001, and in Australia around 30% of the population specifies "No Religion" in the national census. In the US, the decennial census does not inquire about religious affiliation or its lack; surveys report the figure at roughly 13%. In the 2001 Canadian census, 16.5% of the populace reported having no religious affiliation. In the 2011 Scottish census, 37% stated they had no religion, up from 28% in 2001. One of the largest Humanist organizations in the world (relative to population) is Norway's "Human-Etisk Forbund", which had over 86,000 members out of a population of around 4.6 million in 2013 – approximately 2% of the population.
The International Humanist and Ethical Union (IHEU) is the worldwide umbrella organization for those adhering to the Humanist life stance. It represents the views of over three million Humanists organized in over 100 national organizations in 30 countries. Originally based in the Netherlands, the IHEU now operates from London. Some regional groups that adhere to variants of the Humanist life stance, such as the humanist subgroup of the Unitarian Universalist Association, do not belong to the IHEU. Although the European Humanist Federation is also separate from the IHEU, the two organisations work together and share an agreed protocol.
Starting in the mid-20th century, religious fundamentalists and the religious right began using the term "secular humanism" in a hostile fashion. Francis A. Schaeffer, an American theologian based in Switzerland, seizing upon the exclusion of the divine from most humanist writings, argued in his book "How Should We Then Live: The Rise and Decline of Western Thought and Culture" (1976) that rampant secular humanism would lead to moral relativism and ethical bankruptcy. Schaeffer portrayed secular humanism as pernicious and diabolical, and warned it would undermine the moral and spiritual fabric of America. His themes have been very widely repeated in Fundamentalist preaching in North America. Toumey (1993) found that secular humanism is typically portrayed as a vast evil conspiracy, deceitful and immoral, responsible for feminism, pornography, abortion, homosexuality, and New Age spirituality. In certain areas of the world, Humanism finds itself in conflict with religious fundamentalism, especially over the issue of the separation of church and state. Many Humanists see religions as superstitious, repressive and closed-minded, while religious fundamentalists may see Humanists as a threat to the values set out in their sacred texts.
In recent years, humanists such as Dwight Gilbert Jones and R. Joseph Hoffmann have decried the over-association of Humanism with affirmations of non-belief and atheism. Jones cites a lack of new ideas being presented or debated outside of secularism, while Hoffmann is unequivocal: "I regard the use of the term 'humanism' to mean secular humanism or atheism to be one of the greatest tragedies of twentieth century movementology, perpetrated by second-class minds and perpetuated by third-class polemicists and village atheists. The attempt to sever humanism from the religious and the spiritual was a flatfooted, largely American way of taking on the religious right. It lacked finesse, subtlety, and the European sense of history."
Some Humanists celebrate official religion-based public holidays, such as Christmas or Easter, but as secular holidays rather than religious ones. Many Humanists also celebrate the winter and summer solstices, the former of which (in the northern hemisphere) coincides closely with the religiously oriented celebration of Christmas, and the equinoxes, of which the vernal equinox is associated with Christianity's Easter and with other springtime festivals of renewal, and the autumnal equinox with celebrations such as Halloween and All Souls' Day. The Society for Humanistic Judaism celebrates most Jewish holidays in a secular manner.
The IHEU endorses World Humanist Day (21 June), Darwin Day (12 February), Human Rights Day (10 December) and HumanLight (23 December) as official days of Humanist celebration, though none are yet a public holiday.
In many countries, humanist celebrants (officiants) perform celebrancy services for weddings, funerals, child namings, coming of age ceremonies, and other rituals.
The issue of whether and in what sense secular humanism might be considered a religion, and what the implications of this would be, has become the subject of legal maneuvering and political debate in the United States. The first reference to "secular humanism" in a US legal context was in 1961, although church-state separation lawyer Leo Pfeffer had referred to it in his 1958 book, "Creeds in Competition".
The Education for Economic Security Act of 1984 included a section, Section 20 U.S.C.A. 4059, which initially read: "Grants under this subchapter ['Magnet School Assistance'] may not be used for consultants, for transportation or for any activity which does not augment academic improvement." With no public notice, Senator Orrin Hatch tacked onto the proposed exclusionary subsection the words "or for any course of instruction the substance of which is Secular Humanism". Implementation of this provision ran into practical problems because neither the Senator's staff, nor the Senate's Committee on Labor and Human Resources, nor the Department of Justice could propose a definition of what would constitute a "course of instruction the substance of which is Secular Humanism". So, this determination was left up to local school boards. The provision provoked a storm of controversy which within a year led Senator Hatch to propose, and Congress to pass, an amendment to delete from the statute all reference to secular humanism. While this episode did not dissuade fundamentalists from continuing to object to what they regarded as the "teaching of Secular Humanism", it did point out the vagueness of the claim.
The phrase "secular humanism" became prominent after it was used in the United States Supreme Court case "Torcaso v. Watkins." In the 1961 decision, Justice Hugo Black commented in a footnote, "Among religions in this country which do not teach what would generally be considered a belief in the existence of God are Buddhism, Taoism, Ethical Culture, Secular Humanism, and others."
The footnote in "Torcaso v. Watkins" referenced "Fellowship of Humanity v. County of Alameda", a 1957 case in which an organization of humanists sought a tax exemption on the ground that they used their property "solely and exclusively for religious worship." Despite the group's non-theistic beliefs, the court determined that the activities of the "Fellowship of Humanity", which included weekly Sunday meetings, were analogous to the activities of theistic churches and thus entitled to an exemption. The "Fellowship of Humanity" case itself referred to "Humanism" but did not mention the term "secular humanism". Nonetheless, this case was cited by Justice Black to justify the inclusion of secular humanism in the list of religions in his note. Presumably Justice Black added the word "secular" to emphasize the non-theistic nature of the "Fellowship of Humanity" and distinguish their brand of humanism from that associated with, for example, Christian humanism.
Another case alluded to in the "Torcaso v. Watkins" footnote, and said by some to have established secular humanism as a religion under the law, is the 1957 tax case of "Washington Ethical Society v. District of Columbia", 249 F.2d 127 (D.C. Cir. 1957). The "Washington Ethical Society" functions much like a church, but regards itself as a non-theistic religious institution, honoring the importance of ethical living without mandating a belief in a supernatural origin for ethics. The case involved denial of the Society's application for tax exemption as a religious organization. The U.S. Court of Appeals reversed the Tax Court's ruling, defined the Society as a religious organization, and granted its tax exemption. The Society terms its practice Ethical Culture. Though Ethical Culture is based on a humanist philosophy, it is regarded by some as a type of religious humanism. Hence, it would seem most accurate to say that this case affirmed that a religion need not be theistic to qualify as a religion under the law, rather than asserting that it established generic secular humanism as a religion.
In the cases of both the "Fellowship of Humanity" and the "Washington Ethical Society," the court decisions turned not so much on the particular beliefs of practitioners as on the function and form of the practice being similar to the function and form of the practices in other religious institutions.
The implication in Justice Black's footnote that secular humanism is a religion has been seized upon by religious opponents of the teaching of evolution, who have made the argument that teaching evolution amounts to teaching a religious idea. The claim that secular humanism could be considered a religion for legal purposes was examined by the United States Court of Appeals for the Ninth Circuit in "Peloza v. Capistrano School District", 37 F.3d 517 (9th Cir. 1994), "cert. denied", 515 U.S. 1173 (1995). In this case, a science teacher argued that, by requiring him to teach evolution, his school district was forcing him to teach the "religion" of secular humanism. The Court responded, "We reject this claim because neither the Supreme Court, nor this circuit, has ever held that evolutionism or Secular Humanism are 'religions' for Establishment Clause purposes." The Supreme Court refused to review the case.
The decision in a subsequent case, "Kalka v. Hawk et al.", offered this commentary:
The Court's statement in "Torcaso" does not stand for the proposition that humanism, no matter in what form and no matter how practiced, amounts to a religion under the First Amendment. The Court offered no test for determining what system of beliefs qualified as a "religion" under the First Amendment. The most one may read into the "Torcaso" footnote is the idea that a particular non-theistic group calling itself the "Fellowship of Humanity" qualified as a religious organization under California law.
Decisions about tax status have been based on whether an organization functions like a church. On the other hand, Establishment Clause cases turn on whether the ideas or symbols involved are inherently religious. An organization can function like a church while advocating beliefs that are not necessarily inherently religious. Author Marci Hamilton has pointed out: "Moreover, the debate is not between secularists and the religious. The debate is believers and non-believers on the one side debating believers and non-believers on the other side. You've got citizens who are [...] of faith who believe in the separation of church and state and you have a set of believers who do not believe in the separation of church and state."
In the 1987 case of "Smith v. Board of School Commissioners of Mobile County" a group of plaintiffs brought a case alleging that the school system was teaching the tenets of an anti-religious religion called "secular humanism" in violation of the Establishment Clause. The complainants asked that 44 different elementary through high school level textbooks (including books on home economics, social science and literature) be removed from the curriculum. Federal judge William Brevard Hand ruled for the plaintiffs agreeing that the books promoted secular humanism, which he ruled to be a religion. The Eleventh Circuit Court unanimously reversed him, with Judge Frank stating that Hand held a "misconception of the relationship between church and state mandated by the establishment clause," commenting also that the textbooks did not show "an attitude antagonistic to theistic belief. The message conveyed by these textbooks is one of neutrality: the textbooks neither endorse theistic religion as a system of belief, nor discredit it".
There are numerous Humanist Manifestos and Declarations, including the following: | https://en.wikipedia.org/wiki?curid=29021 |
Game Gear
The Game Gear is an 8-bit, fourth-generation handheld game console released by Sega on October 6, 1990 in Japan, in April 1991 throughout North America and Europe, and during 1992 in Australia. The Game Gear primarily competed with Nintendo's Game Boy, the Atari Lynx, and NEC's TurboExpress. It shares much of its hardware with the Master System, and can play Master System games by the use of an adapter. Sega positioned the Game Gear, which had a full-color backlit screen with a landscape format, as a technologically superior handheld to the Game Boy.
Though the Game Gear was rushed to market, its unique game library and price point gave it an edge over the Atari Lynx and TurboExpress. However, due to its short battery life, lack of original games, and weak support from Sega, the Game Gear was unable to surpass the Game Boy, selling 10.62 million units by March 1996. The Game Gear was discontinued in 1997. It was re-released as a budget system by Majesco Entertainment in 2000, under license from Sega.
Reception of the Game Gear was mixed, with praise for its full-color backlit screen and processing power for its time, criticisms over its large size and short battery life, and questions over the quality of its game library. A microconsole, the Game Gear Micro, was announced in June 2020.
Developed under the name "Project Mercury", the Game Gear was first released in Japan on October 6, 1990, in North America and Europe in 1991, and in Australia in 1992. Originally retailing at JP¥19,800 in Japan, US$149.99 in North America, and GB£99.99 in Europe, the Game Gear was developed to compete with the Game Boy, which Nintendo had released in 1989. The console had been designed as a portable version of the Master System, and featured more powerful systems than the Game Boy, including a full-color screen, in contrast to the monochromatic screen of its rival. According to former Sega console hardware research and development head Hideki Sato, Sega saw the Game Boy's black and white screen as "a challenge to make our own color handheld system."
To improve upon the design of its competition, Sega modeled the Game Gear on the shape of a Genesis controller, with the idea that the curved surfaces and longer length would make the Game Gear more comfortable to hold than the Game Boy. The console's mass was carefully considered from the beginning of development, aiming for a total mass between that of the Game Boy and the Atari Lynx, another competing product with a full-color screen. Despite the similarities the Game Gear shared with the Master System, Master System games were not directly playable on the Game Gear, and could only be played on the handheld with an accessory called the Master Gear Converter. The original Game Gear pack-in game was "Columns", which was similar to the "Tetris" cartridge that Nintendo had included when it launched the Game Boy.
With a late start into the handheld gaming market, Sega rushed to get the Game Gear into stores quickly, having lagged behind Nintendo in sales without a handheld on the market. As one method of doing so, Sega based the hardware of the Game Gear on the Master System, albeit with a much larger color palette than its predecessor: the Game Gear supported 4096 colors, compared to the 64 colors supported by the Master System. Part of the intention of this move was to make Master System games easy to port to the Game Gear. Though the Game Gear was designed to be technologically superior to the Game Boy, its design came at a cost of battery life: whereas the Game Boy could run for more than 30 hours on four AA batteries, the Game Gear required six AA batteries and could only run for three to five hours. With its quick launch in Japan, the handheld sold 40,000 units in its first two days, 90,000 within a month, and the number of back orders for the system was over 600,000. According to Sega of America marketing director Robert Botch, "there is clearly a need for a quality portable system that provides features other systems have failed to deliver. This means easy-to-view, full-color graphics and exciting quality games that appeal to all ages."
Before the Game Gear's launch in 1990, Sega had success marketing its 16-bit home console, the Sega Genesis, by advertising it as a "more mature" option for gamers. In keeping with this approach, Sega positioned the Game Gear as a "grown-up" option compared to the Game Boy. While Sega's marketing in Japan did not take this perspective, instead opting for advertisements with Japanese women featuring the handheld, Sega's worldwide advertising prominently positioned the Game Gear as the "cooler" console than the Game Boy.
In North America, marketing for the Game Gear included side-by-side comparisons of Sega's new handheld with the Game Boy and likened Game Boy players to the obese and uneducated. One Sega advertisement featured the quote, "If you were color blind and had an IQ of less than 12, then you wouldn't mind which portable you had." Such advertising drew fire from Nintendo, who sought to have protests organized against Sega for insulting disabled persons. Sega responded with a statement from Sega of America president Tom Kalinske saying that Nintendo "should spend more time improving their products and marketing rather than working on behind-the-scenes coercive activities". Ultimately, this debate would have little impact on sales for the Game Gear.
Europe and Australia were the last regions to receive the Game Gear. Due to the delays in receiving the new handheld, some importers paid as much as £200 in order to have the new system. Upon the Game Gear's release in Europe, video game distributor Virgin Mastertronic unveiled its price as £99.99, positioning it as more expensive than the Game Boy but less expensive than the Atari Lynx, which was also a full-color system. Marketing in the United Kingdom included the use of the slogan "To be this good takes Sega" and advertisements featuring a biker with a Game Gear.
Support for the Game Gear by Sega was hurt by its primary focus on its home console systems. In addition to the success of the Genesis, Sega was also supporting two peripherals for its home system, the Sega CD and the 32X, as well as developing its new 32-bit system, the Sega Saturn. Despite selling 10.62 million units by March 1996 (including 1.78 million in Japan), the Game Gear was never able to match the success of its main rival, the Game Boy, which sold over ten times that number. The system's late sales were further hurt by Nintendo's release of the Game Boy Pocket, a smaller version of the Game Boy which could run on two AAA batteries.
Plans for a 16-bit successor to the Game Gear were made to bring Sega's handheld gaming into the fifth generation of video games, but a new handheld system never materialized for Sega, leaving only the Genesis Nomad, a portable version of the Genesis, to take its place. Moreover, the Nomad was intended to supplement the Game Gear rather than replace it; in press coverage leading up to the Nomad's release, Sega representatives said the company was not dropping support for the Game Gear in favor of the Nomad, and that "We believe the two can co-exist". Though the Nomad had been released in 1995, Sega did not officially end support for the Game Gear until 1996 in Japan, and 1997 worldwide.
Though Sega no longer supported the system, in 2000 third-party developer Majesco Entertainment released a version of the Game Gear at US$30, with games retailing at US$15. New games were released, such as a port of "Super Battletank". This version was also compatible with all previous Game Gear games, but was incompatible with the TV Tuner and some Master System converters. Over ten years later, on March 2, 2011, Nintendo announced that its 3DS Virtual Console service on the Nintendo eShop would feature games from the Game Gear.
A handheld game console, the Game Gear was designed to be played while held horizontally. The console contains an 8-bit 3.5 MHz Zilog Z80 chip as its central processing unit, the same as the Master System. Its screen measures 3.2 inches on the diagonal and is able to display up to 32 colors at a time from a total palette of 4096 colors, at a frame rate of approximately 59.92 Hz and a display resolution of 160 × 144 non-square pixels. The screen is backlit to allow gamers to play in low-light situations. Powered by six AA batteries, the Game Gear has an approximate battery life of three to five hours. To lengthen this duration and to save money for consumers, Sega also released two types of external rechargeable battery packs for the Game Gear. The system contains 8 kB of RAM and an additional 16 kB of video RAM. It produces sound using a Texas Instruments SN76489 PSG, which was also used in the Master System; unlike the Master System, however, stereo sound can be supplied through a headphone output. Physically, the Game Gear measures 210 mm across, 113 mm high, and 38 mm deep.
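The palette figure follows directly from the bit depth of each palette entry: with 4 bits per RGB channel there are 16 × 16 × 16 = 4096 representable colors, versus the Master System's 2 bits per channel (4 × 4 × 4 = 64). The short Python sketch below illustrates the arithmetic; the specific bit layout shown (red in the low nibble, blue in the high nibble) follows common emulator descriptions of the hardware and should be read as an assumption for illustration, not as official Sega documentation.

def pack_color(r: int, g: int, b: int) -> int:
    """Pack 4-bit R, G, B components into one 12-bit palette entry."""
    # Layout assumed for illustration: blue in the high nibble, red in the low.
    assert all(0 <= c <= 15 for c in (r, g, b)), "each channel is 4 bits"
    return (b << 8) | (g << 4) | r

def unpack_color(entry: int) -> tuple[int, int, int]:
    """Recover the 4-bit R, G, B components from a 12-bit entry."""
    return entry & 0xF, (entry >> 4) & 0xF, (entry >> 8) & 0xF

assert pack_color(15, 15, 15) == 0xFFF
assert unpack_color(0xFFF) == (15, 15, 15)
print(16 ** 3)  # 4096 possible colors, of which only 32 can be on screen at once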
Several accessories were created for the Game Gear during its lifespan. A TV Tuner accessory with a whip antenna plugs into the system's cartridge slot, allowing the viewing of analog television stations over-the-air on the Game Gear's screen. Released at $105.88 ($186 in 2016), the add-on was expensive but unique for collectors and contributed to the system's popularity. Another accessory, the Super Wide Gear, magnifies the Game Gear screen to compensate for its relatively small size. Also released were the Car Gear adapter, which plugs into a car's cigarette lighter socket to power the system while traveling, and the Gear to Gear Cable, which establishes a data connection between two Game Gear systems running the same multiplayer game and lets users play against each other.
Over the course of its lifespan, the Game Gear also received a number of variations. Later releases included several different colors for the console, including a blue "sports" variation released in North America bundled with "World Series Baseball '95" or "The Lion King". A white version was also released, sold in a bundle with a TV tuner. Other versions included a red Coca-Cola-themed unit, bundled with the game "Coca-Cola Kid", and the Kids Gear, a Japan-only variation targeted toward children.
Over 300 games were released for the Game Gear, although at the time of the console's launch there were only six games available. Prices for game cartridges initially ranged from $24.99 to $29.99 each. The casings were molded black plastic with a rounded front to aid in removal. Games for the system included "Sonic the Hedgehog", "The GG Shinobi", "Space Harrier", and "Land of Illusion Starring Mickey Mouse", which was considered the best game for the system by "GamesRadar+". Later games took advantage of the success of the Genesis, Sega's 16-bit video game console, with releases from franchises that originated on the Genesis. A large part of the Game Gear's library consists of Master System ports. Because of the landscape orientation of the Game Gear's screen and the similarities in hardware between the handheld console and the Master System, it was easy for developers to port Master System games to the Game Gear.
Due to Nintendo's licensing practices during the lifespan of the Game Gear, few third-party developers were available to create games for Sega's system. This was a contributing factor to the large number of Master System ports for the Game Gear. At the same time, the Game Gear library contained many games that were not available on other handhelds, pulling sales away from the Atari Lynx and NEC TurboExpress and helping to establish the Game Gear's position in the market. While the Game Gear's library consisted of over 300 games, however, the Game Boy's library contained over 1000 individual games. Several Game Gear games were released years later on the Nintendo 3DS's Virtual Console service on the Nintendo eShop. The emulator for the Virtual Console releases was handled by M2.
The Game Gear surpassed the Atari Lynx and NEC TurboExpress, but lagged far behind the Game Boy in the handheld marketplace. Retrospective reception of the Game Gear is mixed. In 2008, "GamePro" listed the Game Gear as 10th on its list of the "10 Worst-Selling Handhelds of All Time" and criticized aspects of the implementation of its technology, but also stated that the Game Gear could be considered a success for having nearly 11 million units sold. According to "GamePro" reviewer Blake Snow, "Unlike the Game Boy, the Game Gear rocked the landscape holding position, making it less cramped for human beings with two hands to hold. And even though the Game Gear could be considered a success, its bulky frame, relative high price, constant consumption of AA batteries, and a lack of appealing games ultimately kept Sega from releasing a true successor." In speaking with "Famitsu DC" for their November 1998 issue, Sato stated that the Game Gear did take a significant piece of the handheld console market share, but that "Nintendo’s Game Boy was such a runaway success, and had gobbled up so much of the market, that our success was still seen as a failure, which I think is a shame."
"GamesRadar+" offered some praise for the system and its library, stating, "With its 8-bit processor and bright color screen, it was basically the Sega Master System in your hands. How many batteries did we suck dry playing Sonic, Madden and Road Rash on the bus or in the car, or in the dark when we were supposed to be sleeping? You couldn't do that on a Game Boy!" By contrast, "IGN" reviewer Levi Buchanan stated the Game Gear's biggest fault was its game library when compared to the Game Boy, stating, "the software was completely lacking compared to its chief rival, which was bathed in quality games. It didn't matter that the Game Gear was more powerful. The color screen did not reverse any fortunes. Content and innovation beat out technology, a formula that Nintendo is using right now with the continued ascendance of the DS and Wii." Buchanan later went on to praise some parts of the Game Gear's library, however, stating "Some of those Master System tweaks were very good games, and fun is resilient against time." "Retro Gamer" praised Sega's accomplishment in surviving against the competition of Nintendo in the handheld console market with the Game Gear, noting that "for all the handhelds that have gone up against the might of Nintendo and ultimately lost out, Sega's Game Gear managed to last the longest, only outdone in sales by the Sony PSP. For its fans, it will remain a piece of classic gaming hardware whose legacy lives on forever."
On June 3, 2020, as part of its 60th anniversary, Sega revealed a Game Gear retroconsole, the Game Gear Micro. The Micro is scheduled for release in Japan on October 6, 2020 through Japanese storefronts in four different versions, varying in color and game selection, with each containing four separate Game Gear games. Each unit is otherwise the same size, and is powered by 2 AAA batteries or through a separate USB charger. Each unit also includes a headphone jack. A magnifying accessory modeled after the original system's Big Window accessory will be offered to customers who preorder all four variations. An international release has yet to be announced.
32X
The 32X is an add-on for the Sega Genesis video game console. Codenamed "Project Mars", the 32X was designed to expand the power of the Genesis and serve as a transitional console into the 32-bit era until the release of the Sega Saturn. Independent of the Genesis, the 32X uses its own ROM cartridges and has its own library of games. It was distributed under the name Super 32X in Japan, Genesis 32X in North America, Mega Drive 32X in the PAL region, and Mega 32X in Brazil.
Unveiled by Sega at June 1994's Consumer Electronics Show, the 32X was presented as a low-cost option for consumers looking to play 32-bit games. It was developed in response to the Atari Jaguar and concerns that the Saturn would not make it to market by the end of 1994. Though it was conceived as an entirely new console, at the suggestion of Sega of America executive Joe Miller and his team, it was converted into an add-on for the Genesis and made more powerful. The final design contained two 32-bit central processing units and a 3D graphics processor.
The 32X failed to attract third-party video game developers and consumers because of the announcement of the Saturn's simultaneous release in Japan. Sega's efforts to rush the 32X to market cut into time for game development, resulting in a weak library of 40 games, including Genesis ports, that did not fully use the hardware. Sega produced 800,000 32X units and sold an estimated 665,000 by the end of 1994, selling the rest at steep discounts until the add-on was discontinued in 1996 as Sega turned its focus to the Saturn.
The 32X is considered a commercial failure. Initial reception was positive, highlighting the low price and the power expansion it gave the Genesis. Later reviews, both contemporary and retrospective, have been mostly negative because of its shallow game library, poor market timing, and the resulting fragmentation of the Genesis market.
The Sega Genesis, initially released in Japan as the Mega Drive in 1988, was Sega's entry into the 16-bit era of video game consoles. The console was then released as the Genesis in 1989 for the North American market, with releases in other regions following a year later.
Although the earlier release of the Sega CD add-on had been commercially disappointing, Sega began to develop a stop-gap solution that would bridge the gap between the Genesis and the Sega Saturn, serving as a less expensive entry into the 32-bit era. The decision to create a new system was made by Sega CEO Hayao Nakayama and broadly supported by Sega of America employees. According to former Sega of America producer Scot Bayless, Nakayama was worried that the Saturn would not be available until after 1994, and about the recent release of the 64-bit Atari Jaguar. As a result, the direction given was to bring this second device to market by the end of the year.
During the Winter Consumer Electronics Show in January 1994, Sega of America research and development head Joe Miller took a phone call in his Las Vegas hotel suite from Nakayama, in which Nakayama stressed the importance of coming up with a quick response to the Jaguar. Included on this call were Bayless, Sega hardware team head Hideki Sato, and Sega of America vice president of technology Marty Franz. One potential idea for this came from a concept from Sega of Japan, later known as "Project Jupiter", an entirely new independent console. Project Jupiter was initially slated to be a new version of the Genesis, with an upgraded color palette and a lower cost than the upcoming Saturn, as well as with some limited 3D capabilities thanks to integration of ideas from the development of the Sega Virtua Processor chip. Miller suggested an alternative strategy, citing concerns with releasing a new console with no previous design specifications within six to nine months. According to former Sega of America producer Michael Latham, Miller said, "Oh, that's just a horrible idea. If all you're going to do is enhance the system, you should make it an add-on. If it's a new system with legitimate new software, great. But if the only thing it does is double the colors..." Miller, however, insists that the decision was made collectively to talk about alternative solutions. One idea was to leverage the existing Genesis as a way to keep from alienating Sega customers, who would otherwise be required to discard their Genesis systems entirely to play 32-bit games, and to control the cost of the new system. This would come in the form of an add-on. From these discussions, Project Jupiter was discontinued and the new add-on, codenamed "Project Mars", was advanced.
At the suggestion of Miller and his team, Sega designed the 32X as a peripheral for the existing Genesis, expanding its power with two 32-bit SuperH-2 processors. The SH-2 had been developed in 1993 as a joint venture between Sega and the Japanese electronics company Hitachi. According to Bayless, the original design for the 32X add-on was sketched on a cocktail napkin, though Miller insists this was not the case. At the end of the Consumer Electronics Show, with the basic design of the 32X in place, Sega of Japan invited Sega of America to assist in the development of the new add-on.
Although the new unit was a stronger console than originally proposed, it was not compatible with Saturn games. This was justified by Sega's statement that both platforms would run at the same time, and that the 32X would be aimed at players who could not afford the more expensive Saturn. Bayless praised the potential of this system at this point, calling it "a coder's dream for the day" with its twin processors and 3D capabilities. Sega of America headed up the development of the 32X, with some assistance from Sato's team at Sega of Japan. Shortages of processors due to the same 32-bit chips being used in both the 32X and the Saturn hindered the development of the 32X, as did the language barrier between the teams in Japan and the United States.
Before the 32X could be launched, Sega announced the Saturn's release date as November 1994 in Japan, coinciding with the 32X's target launch date in North America. Sega of America was now faced with marketing the 32X while the Saturn's Japanese release occurred simultaneously. Their answer was to position the 32X as a "transitional device" between the Genesis and the Saturn; Bayless said of the strategy, "[f]rankly, it just made us look greedy and dumb to consumers."
The unveiling of the 32X to the public came at the Summer Consumer Electronics Show in June 1994 in Chicago. Promoted as the "poor man's entry into 'next generation' games", the 32X was marketed at its US$159 price point as a less-expensive alternative to the Saturn. However, Sega would not say whether a Genesis console equipped with both a Sega CD and a 32X would be able to run Saturn software. Trip Hawkins, founder of The 3DO Company, was willing to point out that it would not, stating, "Everyone knows that 32X is a Band-Aid. It's not a 'next generation system.' It's fairly expensive. It's not particularly high-performance. It's hard to program for, and it's not compatible with the Saturn." In response to these comments, Sega executive Richard Brudvik-Lindner pointed out that the 32X would play Genesis games, and had the same system architecture as the Saturn.
In August of that year, "GamePro" highlighted the advantages of the upcoming add-on in its 32-bit processors and significantly lower price, noting that "[n]o doubt gotta-get-it-now gamers will spend the big bucks to grab Saturn or PlayStation systems and games from Japan. For the rest of us, however, 32X may well be the system of choice in '94." In promotion for the new system, Sega promised 12 games available at launch and 50 games due for release in 1995 from third-party developers.
The 32X was released on November 21, 1994, in North America, in time for that year's holiday season. As announced, it retailed for US$159.99, and had a reasonably successful launch. Demand among retailers was high, and Sega could not keep up with orders for the new system. Over 1,000,000 orders had been placed for 32X units, but Sega had only managed to ship 600,000 units by January 1995. Launching at about the same price as a Genesis console, the 32X cost less than half of what the Saturn would at its launch. Despite Sega's initial promises, only six games were available at the North American launch, including "Doom", "Star Wars Arcade", "Virtua Racing Deluxe", and "Cosmic Carnage". Although "Virtua Racing Deluxe" was considered strong, "Cosmic Carnage" "looked and played so poorly that reporters made jokes about it". Games were available at a retail price of US$69.95. Advertising for the system included images of the 32X being connected to a Genesis console to create an "arcade system". Japan received the 32X on December 3, 1994. The system's PAL release came in January 1995, at a price of GB£169.99, and also experienced high initial demand.
Despite the console's positioning as an inexpensive entry into 32-bit gaming, Sega had a difficult time convincing third-party developers to create games for the new system. Top developers were already aware of the coming arrival of the Sega Saturn, Nintendo 64, and PlayStation, and did not believe the 32X would be capable of competing with any of those systems. The quick development time of the 32X also made game development difficult, according to Franz. Not wanting to create games for an add-on that was "a technological dead-end", many developers decided not to make games for the system. The time crunch to release the 32X also plagued games developed in-house. According to Bayless, "games in the queue were effectively jammed into a box as fast as possible, which meant massive cutting of corners in every conceivable way. Even from the outset, designs of those games were deliberately conservative because of the time crunch. By the time they shipped they were even more conservative; they did nothing to show off what the hardware was capable of."
Journalists were similarly concerned about Sega's tactic of selling two similar consoles at different prices and attempting to support both, likening Sega's approach to that of General Motors in segmenting the market for its consoles. In order to convince the press that the 32X was a worthwhile console, Sega flew journalists from around the country to San Francisco for a party at a local nightclub. The event featured a speech from Tom Kalinske, live music from a local rapper praising the 32X, and 32X games on exhibition. However, the event turned out to be a bust: journalists attempting to leave the party because of its loud music and the unimpressive games on display found that the buses that had brought them to the nightclub had just left and would not return until the scheduled end of the party.
Though the system had a successful launch, demand soon disappeared. Over the first three months of 1995, several of the 32X's third-party publishers, including Capcom and Konami, cancelled their 32X projects to focus on producing games for the Saturn and PlayStation. The 32X failed to catch on with the public and is considered a commercial failure. By 1995, the Genesis had still not proven successful in Japan, where it was known as the Mega Drive, while the Saturn was outselling the PlayStation, so Sega CEO Hayao Nakayama decided to force Sega of America to focus on the Saturn and cut support for Genesis products, executing a surprise early launch of the Saturn in the early summer of 1995. Before this, Sega was supporting five different consoles (Saturn, Genesis, Game Gear, Pico, and the Master System) as well as the Sega CD and 32X add-ons. Sales estimates for the 32X stood at 665,000 units at the end of 1994. Despite assurances from Sega that many games would be developed for the system, in early 1996 Sega finally conceded that it had promised too much of the add-on and decided to discontinue the 32X in order to focus on the Saturn. In September 1995, the retail price of the 32X dropped to US$99, and the remaining inventory was later cleared out of stores at US$19.95, with 800,000 units sold in total.
The Sega Neptune is an unproduced two-in-one Genesis and 32X console that Sega planned to release in fall 1995 at a retail price under US$200. It was featured in the press as early as March 1995, with "Sega Magazine" saying the console "shows [Sega's] commitment to the hardware." Sega cancelled the Neptune in October 1995, citing fears that it would dilute the marketing for the Saturn while being priced too close to the Saturn to be a viable competitor. "Electronic Gaming Monthly" used the Sega Neptune as an April Fools' Day prank in its April 2001 issue, which included a small article announcing that Sega had found a warehouse full of old Neptunes and was selling them on a website for US$199.
The 32X can be used only in conjunction with a Genesis system. It is inserted into the system like a standard game cartridge. The add-on requires its own separate power supply, a connection cable linking it to the Genesis, and an additional conversion cable for the original model of the Genesis. As well as playing its own library of cartridges, the 32X is backwards-compatible with Genesis games, and can also be used in conjunction with the Sega CD to play games that use both add-ons. The 32X also came with a spacer so it would fit properly with the second model of the Genesis; an optional spacer was offered for use with the Sega Genesis CDX system, but ultimately never shipped due to risks of electric shock when the 32X and CDX were connected. Installation of the 32X also requires the insertion of two included electromagnetic shield plates into the Genesis' cartridge slot.
Seated on top of the Genesis, the 32X contains two Hitachi SH-2 32-bit RISC processors with a clock speed of 23 MHz, which Sega claimed would allow the system to work 40 times faster than a stand-alone Genesis. Its graphics processing unit is capable of producing 32,768 colors and rendering 50,000 polygons per second, a noticeable improvement over the polygon rendering of the Genesis. The 32X also includes 256 kilobytes of random-access memory (RAM), along with 256 kilobytes of video RAM. Sound is supplied through a pulse-width modulation sound source. Video and audio output is supplied to a television set via a provided A/V cable carrying composite video and stereo audio, or through an RF modulator. Stereo audio can also be played through headphones via a headphone jack on the attached Genesis.
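The 32,768-color figure corresponds to 15-bit RGB color, with 5 bits for each of the red, green, and blue channels. The short Python sketch below is included purely to illustrate that arithmetic; the variable names are ours, not taken from any Sega documentation:

    # 15-bit "high color": 5 bits per channel across 3 channels.
    bits_per_channel = 5
    channels = 3

    levels_per_channel = 2 ** bits_per_channel      # 32 intensity levels
    total_colors = levels_per_channel ** channels   # 32 * 32 * 32

    print(total_colors)  # 32768, matching the 32X's quoted palette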
The 32X library consists of 40 games, including six that required both the Sega 32X and Sega CD. Among them were ports of arcade games "After Burner", "Space Harrier", and "Star Wars Arcade", a sidescroller with a hummingbird as a main character in "Kolibri", and a 32X-exclusive "Sonic the Hedgehog" spinoff "Knuckles' Chaotix". Several of the games released for the 32X are enhanced ports of Genesis games, including "NFL Quarterback Club" and "World Series Baseball '95". In a retrospective review of the console, "Star Wars Arcade" was considered the best game for the 32X by "IGN" for its cooperative play, soundtrack, and faithful reproduction of the experiences of "Star Wars". In a separate review, "IGN"'s Buchanan praised the 32X game "Shadow Squadron" as superior to "Star Wars Arcade". "Retro Gamer" writer Damien McFerran, however, praised "Virtua Fighter" as "the jewel in the 32X's crown", and "GamesRadar+" named "Knuckles' Chaotix" as the best game for the system. "Next Generation" called "Virtua Fighter" "the colorful wreath on 32X's coffin", reflecting the consensus among contemporary critics that the game was at once arguably the 32X's best release and a clear harbinger of the platform's imminent discontinuation, since it was clearly inferior to the Saturn versions of "Virtua Fighter Remix" (which had already been released) and "Virtua Fighter 2" (which was due out in just a few months). In response to fan inquiries, Sega stated that the 32X architecture was not powerful enough to handle a port of "Virtua Fighter 2".
Although the console used 32-bit processing and was capable of better graphics and sound than the Genesis alone, most games for the 32X did not take advantage of its hardware. "Doom" for the 32X received near perfect reviews from gaming magazines upon launch, but was later criticized as an inferior version of the game compared to the releases for the PC and the Atari Jaguar, with the 32X version faulted for missing levels, poor graphic and audio quality, jerky movement, and running within a window on the screen. Though the system had enhanced audio capabilities, 32X games generally did not use them, which Franz believes was due to developers being unwilling to invest in designing games to work with the new audio enhancements. One source of these problems was the rush to release games for the 32X's launch; former Sega of America executive producer Michael Latham explained, in reference to 32X launch game "Cosmic Carnage", "We were rushed. We had to get games out for the 32X and it was going to be such a close cycle. When "Cosmic Carnage" showed up, we didn't even want to ship it. It took a lot of convincing, you know, to ship that title." Likewise with "Doom", id Software's John Carmack rushed to have the port ready for release at the 32X's launch and had to cut a third of the game's levels in order to meet the publishing deadline. Because of these time limitations, game designs were intentionally conservative and did not show what the 32X hardware was able to do. In an interview at the end of 1995, Sega vice president of marketing Mike Ribero, while insisting that Sega was not abandoning the 32X, acknowledged that first party software support for the system had been lackluster: "I won't lie to you, we screwed up with 32X. We overpromised and underdelivered."
Initial reception to the 32X and its games upon the launch of the add-on was very positive. Four reviewers from "Electronic Gaming Monthly" scored the add-on 8, 7, 8, and 8 out of 10 in their 1995 Buyer's Guide, highlighting the add-on's enhancements to the Genesis but questioning how long the system would be supported. "GamePro" commented that the 32X's multiple input and power cords make it "as complicated as setting up your VCR" and noted some performance glitches with the prototype such as freezes and overheating, but expressed confidence that the production models would perform well and gave the add-on their overall approval. Reviews of its launch games, such as "Doom", were likewise positive.
By late 1995, feedback to the add-on had soured. In its 1996 Buyer's Guide, "Electronic Gaming Monthly"'s four reviewers scored the add-on 3, 3, 3, and 2 out of 10, criticizing the game library and Sega's abandonment of the system in favor of the Saturn. A review in "Next Generation" panned the 32X for its weak polygon processing, the tendency of developers to show off its capabilities with garishly colored games, and its apparent function as "simply a way of grabbing extra 1994 mind and market share while waiting for Saturn". The review gave it one out of five stars. "Game Players" assessed it as so much less powerful than the Saturn and PlayStation that its lower price could not be considered an enticement, and said that the vast majority of its games could have been done just as well on the Super NES. Additionally commenting that both first party and third party software support had been weak, they concluded, "The lack of support [and] good games, and the release of Saturn make the 32X a system that never was."
Retrospectively, the 32X is widely criticized as having been under-supported and a poor idea in the wake of the release of the Sega Saturn. "1UP.com"'s Jeremy Parish stated that the 32X "tainted just about everything it touched." "GamesRadar+" also panned the system, placing it as their ninth-worst console with reviewer Mikel Reparaz criticizing that "it was a stopgap system that would be thrown under the bus when the Sega Saturn came out six months later, and everyone seemed to know it except for die-hard Sega fans and the company itself." "Retro Gamer'"s Damien McFerran offered some praise for the power increase of the 32X to offer ports of "Space Harrier", "After Burner", and "Virtua Fighter" that were accurate to the original arcade versions, as well as the add-on's price point, stating, "If you didn't have deep enough pockets to afford a Saturn, then the 32X was a viable option; it's just a shame that it sold so poorly because the potential was there for true greatness." Levi Buchanan, writing for "IGN", saw some sense in the move for Sega to create the 32X but criticized its implementation. According to Buchanan, "I actually thought the 32X was a better idea than the SEGA CD... The 32X, while underpowered, at least advanced the ball. Maybe it only gained a few inches in no small part due to a weak library, but at least the idea was the right one."
In particular, the console's status as an add-on and poor timing after the announcement of the Saturn has been identified by reviewers as being responsible factors for fracturing the audience for Sega's video game consoles in terms of both developers and consumers. "Allgame"'s Scott Alan Marriott states that "[e]very add-on whittled away at the number of potential buyers and discouraged third-party companies from making the games necessary to boost sales." "GamePro" criticized the concept of the add-on, noting the expenses involved in purchasing the system. According to reviewer Blake Snow, "Just how many 16-bit attachments did one need? All in all, if you were one of the unlucky souls who completely bought into Sega's add-on frenzy, you would have spent a whopping $650 for something that weighed about as much as a small dog." Writing for "GamesRadar+", Reparaz noted that "developers—not wanting to waste time on a technological dead-end—abandoned the 32X in droves. Gamers quickly followed suit, turning what was once a promising idea into an embarrassing footnote in console history, as well as an object lesson in why console makers shouldn't split their user base with pricey add-ons." Reparaz went on to criticize Sega's decision to release the 32X, noting that "(u)ltimately, the 32X was the product of boneheaded short-sightedness: its existence put Sega into competition with itself once the Saturn rolled out." Writing for "IGN", Buchanan points out, "Notice that we haven't seen many add-ons like the 32X since 1994? I think the 32X killed the idea of an add-on like this—a power booster—permanently. And that's a good thing. Because add-ons, if not implemented properly, just splinter an audience."
Former executives at Sega have mixed opinions of the 32X. Bayless believes firmly that the 32X serves as a warning to the video game industry not to risk splintering the market for consoles by creating add-ons, and was critical of the Kinect and PlayStation Move for doing so. Franz places the 32X's commercial failure on its inability to function without an attached Genesis and lack of a CD drive, despite its compatibility with the Sega CD, stating, "The 32X was destined to die because it didn't have a CD drive and was an add-on. An add-on device is never as well thought out as a built-from-scratch device." Miller, on the other hand, remembers the 32X positively, stating, "I think the 32X actually was an interesting, viable platform. The timing was wrong, and certainly our ability to stick with it, given what we did with Saturn, was severely limited. There were a whole bunch of reasons why we couldn't ultimately do what we had to do with that platform, without third party support and with the timing of Saturn, but I still think the project was a success for a bunch of other reasons. In hindsight, it was not a great idea for a whole bunch of other reasons."
Severan dynasty
The Severan dynasty was a Roman imperial dynasty, which ruled the Roman Empire between 193 and 235. The dynasty was founded by the general Septimius Severus, who rose to power as the victor of the Civil War of 193–197.
Although Septimius Severus successfully restored peace following the upheaval of the late 2nd century, the dynasty was disturbed by highly unstable family relationships, as well as constant political turmoil foreshadowing the imminent Crisis of the Third Century.
It was one of the last lineages of the Principate, the system of imperial rule founded by Augustus.
Lucius Septimius Severus was born to a family of Phoenician equestrian rank in Leptis Magna, in the Roman province of Africa Proconsularis, in modern-day Libya. He rose through military service to consular rank under the later Antonines. He married the Syrian noblewoman Julia Domna and had two sons with her, Caracalla and Geta. He was proclaimed emperor in 193 by his legionaries in Pannonia during the political unrest that followed the death of Commodus, and secured sole rule over the empire in 197 after defeating his last rival, Clodius Albinus, at the Battle of Lugdunum.
Severus fought a successful war against the Parthians and campaigned with success against barbarian incursions in Roman Britain, rebuilding Hadrian's Wall. In Rome, his relations with the Senate were poor, but he was popular with the commoners, as well as with his soldiers, whose salary he raised. Starting in 197, his Praetorian prefect Gaius Fulvius Plautianus exercised a negative influence over him; Plautianus was executed in 205. One of Plautianus's successors was the jurist Papinian. Severus continued the official persecution of Christians and Jews, as they were the only two groups who would not assimilate their beliefs into the official syncretistic creed.
Severus died while campaigning in Britain. He was succeeded by his sons Caracalla and Geta, who reigned under the influence of their mother, Julia Domna.
The eldest son of Severus, Caracalla was born Lucius Septimius Bassianus in Lugdunum, Gaul. "Caracalla" was a nickname referring to the Gallic hooded tunic he habitually wore even when he slept. Upon his father's death, Caracalla was proclaimed co-emperor with his brother Geta. Conflict between the two culminated in the assassination of the latter less than a year after their father's death. Reigning alone, Caracalla was noted for lavish bribes to the legionaries and unprecedented cruelty, authorizing numerous assassinations of perceived enemies and rivals. He campaigned with indifferent success against the Alamanni. The Baths of Caracalla in Rome are the most enduring monument of his rule. He was assassinated by a Praetorian guardsman while en route to a campaign against the Parthians.
Younger son of Severus, Geta was made co-emperor with his older brother Caracalla upon his father's death. Unlike the much more successful joint reign of Marcus Aurelius and his brother Lucius Verus in the previous century, relations were hostile between the two Severan brothers from the very start. Geta was assassinated in his mother's apartments by order of Caracalla, who thereafter ruled as sole Augustus.
Marcus Opelius Macrinus was born in 164 at Caesarea Mauretaniae (modern-day Cherchell, Algeria). Although he came from a humble background with no dynastic relation to the Severans, he rose through the imperial household until, under the emperor Caracalla, he was made prefect of the Praetorian Guard. On account of the cruelty and treachery of the emperor, Macrinus became involved in a conspiracy to kill him, and ordered the Praetorian Guard to do so. On April 8, 217, Caracalla was assassinated while travelling to Carrhae. Three days later, Macrinus was declared Augustus.
His most significant early decision was to make peace with the Parthians, but many thought that the terms were degrading to the Romans. However, his downfall was his refusal to award the pay and privileges promised to the eastern troops by Caracalla. He also kept those forces wintered in Syria, where they became attracted to the young Elagabalus. After months of mild rebellion by the bulk of the army in Syria, Macrinus took his loyal troops to meet the army of Elagabalus near Antioch. Despite a good fight by the Praetorian Guard, his soldiers were defeated. Macrinus managed to escape to Chalcedon but his authority was lost: he was betrayed and executed after a short reign of just 14 months.
Marcus Opelius Diadumenianus (known as Diadumenian) was the son of Macrinus, born in 208. He was given the title Caesar in 217, when his father became Emperor. After his father's defeat outside Antioch, he tried to escape east to Parthia, but was captured and killed before he could achieve this.
Elagabalus was born Varius Avitus Bassianus in 204, and later became known as Marcus Aurelius Antoninus. The name "Elagabalus" follows the Latin nomenclature for the Syrian sun god Elagabal, of whom he had become a priest at an early age. Elagabal was represented by a large, dark rock called a baetyl. Elagabalus's grandmother, Julia Maesa, Julia Domna's sister and sister-in-law of Emperor Septimius Severus, arranged for the restoration of the Severan dynasty. Using her enormous wealth, as well as the claim that Caracalla had slept with her daughter and that the boy was his illegitimate son, she persuaded soldiers of the Third Legion "Gallica", stationed near Emesa, to swear fealty to Elagabalus. He was later brought, alongside his mother and grandmother, to the military camp, clad in imperial purple, and crowned as emperor by the soldiers.
His reign in Rome has long been known for being outrageous, although the historical sources are few, and in many cases not to be fully trusted. He is said to have smothered guests at a banquet by flooding the room with rose petals, married his male lover (who was thereafter referred to as the "Empress's husband"), and married a vestal virgin. Dio suggests he was transgender, and that he offered large sums to the physician who could give him female genitalia.
The running of the Empire during this time was mainly left to his grandmother and his mother, Julia Soaemias. Seeing that her grandson's outrageous behavior could mean the loss of power, Julia Maesa persuaded Elagabalus to accept his young cousin Severus Alexander as Caesar (and thus the nominal emperor-to-be). Alexander was popular with the troops, who increasingly objected to Elagabalus's behaviour. Jealous of this popularity, Elagabalus removed the title of Caesar from his cousin, enraging Alexander's protectors, the Praetorian Guard. Elagabalus and his mother were then assassinated in a Praetorian Guard camp mutiny.
Born Marcus Julius Gessius Bassianus Alexianus in around 208, Alexander was adopted as heir apparent by his slightly older and deeply unpopular cousin, the Emperor Elagabalus, at the urging of the influential and powerful Julia Maesa, who was grandmother of both cousins and had arranged for the emperor's acclamation by the Third Legion.
On March 6, 222, when Alexander was just fourteen, a rumor went around the city's troops that Alexander had been killed, which triggered his ascension as emperor. The eighteen-year-old Emperor Elagabalus and his mother were both taken from the palace, dragged through the streets, murdered and thrown in the river Tiber by the Praetorian Guard, who then proclaimed Alexander Severus as Augustus.
Ruling from the age of fourteen under the influence of his able mother, Julia Avita Mamaea, Alexander restored, to some extent, the moderation that had characterized the rule of Septimius Severus. The rising strength of the Sasanian Empire (226–651) heralded perhaps the greatest external challenge that Rome faced in the 3rd century. His prosecution of the war against a Germanic invasion of Gaul led to his overthrow by the troops he was leading there, whose regard the twenty-seven-year-old emperor had lost during the campaign.
His death was the epochal event beginning the troubled Crisis of the Third Century, during which a succession of briefly reigning military emperors, rebellious generals, and counter-claimants presided over governmental chaos, civil war, general instability, and great economic disruption. He was succeeded by Maximinus Thrax, the first of a series of weak emperors, each ruling on average only two to three years; the period ended fifty years later when the Emperor Diocletian divided the administration of the Eastern and Western Roman Empires.
The women of the Severan dynasty, beginning with Septimius Severus's wife Julia Domna, were notably active in advancing the careers of their male relatives. Other notable women who exercised power behind the scenes in this period include Julia Maesa, sister of Julia Domna, and Maesa's two daughters Julia Soaemias, mother of Elagabalus, and Julia Avita Mamaea, mother of Alexander Severus. Also of interest is Publia Fulvia Plautilla, daughter of the Praetorian prefect Gaius Fulvius Plautianus; she was married to but despised by Caracalla, who had her exiled and eventually executed.
Sega CD
The Sega CD, released as the Mega-CD in most regions outside North America and Brazil, is a CD-ROM accessory for the Mega Drive/Genesis designed and produced by Sega as part of the fourth generation of video game consoles. It was released on December 12, 1991 in Japan, October 15, 1992 in North America, and April 2, 1993 in Europe. The Sega CD plays CD-based games and adds hardware functionality such as a faster central processing unit and graphic enhancements like sprite scaling and rotation. It can also play audio CDs and CD+G discs.
The main benefit of CD technology was greater storage, which allowed for games to be nearly 320 times larger than Genesis cartridges. This benefit manifested as full motion video (FMV) games such as the controversial "Night Trap", which became a focus of the 1993 congressional hearings on issues of video game violence and ratings. Sega of Japan partnered with JVC to design the Sega CD and refused to consult with Sega of America until the project was complete. Sega of America assembled parts from various "dummy" units to obtain a working prototype. It was redesigned several times by Sega and licensed third-party developers.
While the Sega CD became known for several well received games such as "Sonic CD", its game library contained many Genesis ports and poorly received FMV games. By March 1996, 2.24 million Sega CD units had been sold, after which Sega discontinued the system to focus on the Sega Saturn. Retrospective reception is mixed, with praise for individual games and the added functionality, but criticism for the system's lack of deep games, high price, and limited support from Sega.
Released in 1988, the Genesis (known as the Mega Drive in Europe and Japan) was Sega's entry into the fourth generation of video game consoles. In mid-1990, Sega CEO Hayao Nakayama hired Tom Kalinske as CEO of Sega of America. Kalinske developed a four-point plan for sales of the Genesis: cut the console's price, develop games for the American market with a new American team, continue aggressive advertising campaigns, and ship "Sonic the Hedgehog" with the Genesis as a pack-in game. The Japanese board of directors initially disapproved of the plan, but all four points were approved by Nakayama, who told Kalinske, "I hired you to make the decisions for Europe and the Americas, so go ahead and do it." Magazines praised "Sonic" as one of the greatest games yet made, and Sega's console finally took off as customers who had been waiting for the Super Nintendo Entertainment System (SNES) decided to purchase a Genesis instead.
By the early 1990s, compact discs (CDs) were making significant headway as a storage medium for music and video games. NEC had been the first to use CD technology in a video game console with their PC Engine CD-ROM² System add-on in October 1988 in Japan (launched in North America as the TurboGrafx-CD the following year), which sold 80,000 units in six months. That year, Nintendo announced a partnership with Sony to develop its own CD-ROM peripheral for the SNES. Commodore International released their CD-based CDTV multimedia system in early 1991, while the CD-i from Philips arrived towards the end of that year.
Shortly after the release of the Genesis, Sega's Consumer Products Research and Development Labs, led by manager Tomio Takami, were tasked with creating a CD-ROM add-on, which became the Sega CD. The Sega CD was originally intended to equal the capabilities of the TurboGrafx-CD, but with twice as much random-access memory (RAM), and to sell for about JP¥20,000 (or US$150). In addition to relatively short loading times, Takami's team planned for the device to feature hardware scaling and rotation similar to that of Sega's arcade games, which required a dedicated digital signal processor (DSP).
However, two changes made later in development contributed to the final unit's higher-than-expected price. Because the Genesis' Motorola 68000 CPU was too slow to handle the Sega CD's new graphical capabilities, an additional 68000 CPU was incorporated. In addition, upon hearing rumors that NEC planned a memory upgrade to the TurboGrafx-CD, which would bring its available RAM from 0.5 Mbit to between 2 and 4 Mbit, Sega decided to increase the Sega CD's available RAM from 1 Mbit to 6 Mbit. This proved to be one of the greatest technical challenges since the CD's access speed was initially too slow to run programs effectively. The cost of the device was now estimated at $370, but market research convinced Sega executives that consumers would be willing to pay more for a state-of-the-art machine. Sega partnered with JVC, which had been working with Warner New Media to develop a CD player under the CD+G standard.
Until mid-1991, Sega of America had been kept largely uninformed of the details of the project, without a functioning unit to test (although Sega of America was provided with preliminary technical documents earlier in the year). According to former Sega of America executive producer Michael Latham, "When you work at a multinational company, there are things that go well and there are things that don't. They didn't want to send us working Sega CD units. They wanted to send us dummies and not send us the working CD units until the last minute because they were concerned about what we would do with it and if it would leak out. It was very frustrating." Latham and Sega of America vice president of licensing Shinobu Toyoda put together a functioning Sega CD by acquiring a ROM for the system and installing it in a dummy unit.
Sega of America staff were also frustrated by the Sega CD's construction. Former Sega of America senior producer Scot Bayless said: "The Mega-CD was designed with a cheap, consumer-grade audio CD drive, not a CD-ROM. Quite late in the run-up to launch, the quality assurance teams started running into severe problems with many of the units—and when I say severe, I mean units literally bursting into flames. We worked around the clock, trying to catch the failure in-progress, and after about a week we finally realized what was happening," citing the need for games to use more time seeking data than the CD drive was designed to provide.
Sega announced the release of the Mega-CD in Japan for late 1991, and North America (as the Sega CD) for 1992. It was unveiled to the public for the first time at the 1991 Tokyo Toy Show, to positive reception from critics. It was released in Japan on December 12, 1991, initially retailing at JP¥49,800. Though the unit sold quickly at first, the small install base of the Mega Drive in Japan meant that sales declined rapidly. Within its first year in Japan, the Mega-CD sold only 100,000 units. Third-party development of games for the new system suffered because Sega took a long time to release software development kits. Other factors affecting sales included the high launch price of the Mega-CD in Japan and the fact that only two games were available at launch.
On October 15, 1992, the Sega CD was released in North America, with a retail price of US$299. Advertising included one of Sega's slogans, "Welcome to the Next Level". Though only 50,000 units were available at launch due to production problems, the Sega CD sold over 200,000 units by the end of 1992. As part of Sega's sales, Blockbuster LLC purchased Sega CD units for rental in their stores. The Mega-CD was launched in Europe in the spring of 1993, starting with the United Kingdom on April 2, 1993, at a price of GB£269.99. The European version was packaged with "Sol-Feace" and "Cobra Command" in a two-disc set, along with a compilation CD of five Mega Drive games. Only 70,000 units were initially available in the UK, but 60,000 units were sold by August 1993.
Sega of America emphasized the benefits of the Sega CD's additional storage space, which allowed for a large number of full motion video (FMV) games, with Digital Pictures becoming an important partner for Sega. After the initial competition between Sega and Nintendo to develop a CD-based add-on, Nintendo eventually canceled its own peripheral, which it had developed first in partnership with Sony and then with Philips.
Sega released a second model, the Sega CD 2 (Mega-CD 2), on April 23, 1993 in Japan at a price of JP¥29,800. It was released in North America several months later at the reduced price of US$229, bundled with one of the system's best-selling games, "Sewer Shark". Designed to bring down the manufacturing costs of the Sega CD, the newer model is smaller and does not use a motorized disc tray. A limited number of games were developed that used the Sega CD and the 32X add-on, released in November 1994.
On December 9, 1993, the United States Congress began to hold hearings on video game violence and the marketing of violent video games to children. One game at the center of this controversy was the Sega CD's "Night Trap", a full-motion video adventure game by Digital Pictures. "Night Trap" had been brought to the attention of United States Senator Joe Lieberman, who said: "It ends with this attack scene on this woman in lingerie, in her bathroom. I know that the creator of the game said it was all meant to be a satire of "Dracula"; but nonetheless, I thought it sent out the wrong message." Lieberman's research concluded that the average video game player was between seven and twelve years old and that video game publishers were marketing violence to children.
In the United Kingdom, former Sega of Europe development director Mike Brogan noted that ""Night Trap" got Sega an awful lot of publicity... Questions were even raised in the UK Parliament about its suitability. This came at a time when Sega was capitalizing on its image as an edgy company with attitude, and this only served to reinforce that image." Despite increased sales as a result of the hearings, Sega recalled "Night Trap" and rereleased it with revisions in 1994. Following these hearings, video game manufacturers came together in 1994 to establish a unified rating system, the Entertainment Software Rating Board.
Newer CD-based consoles such as the 3DO Interactive Multiplayer rendered the Sega CD technically obsolete, reducing public interest. In late 1993, less than a year after the Sega CD's launches in North America and Europe, the media reported that Sega was no longer accepting in-house development proposals for the Mega-CD in Japan. In early 1995, Sega shifted its focus to the Sega Saturn and discontinued advertising for Genesis hardware, including the Sega CD. Sega officially discontinued the Sega CD in the first quarter of 1996, saying that it needed to concentrate on fewer platforms and felt the Sega CD could not compete due to its high price and outdated single-speed drive. The last games scheduled to be released for the Sega CD, "Myst" and "Brain Dead 13", were cancelled. 2.24 million Sega CD units were sold worldwide, including 400,000 in Japan.
The Sega CD can only be used in conjunction with a Genesis system, attaching through an expansion slot on the side of the main console. It requires its own power supply. In addition to playing its own library of games in CD-ROM format, the Sega CD can also play compact discs and karaoke CD+G discs, and can be used in conjunction with the 32X to play 32-bit games that use both add-ons. The second model, also known as the Sega CD 2, includes a steel joining plate to be screwed into the bottom of the Genesis and an extension spacer to work with the original Genesis model.
The main CPU of the Sega CD is a 12.5 MHz 16-bit Motorola 68000 processor, which runs 5 MHz faster than the Genesis processor. It contains 1 Mbit of boot ROM, allocated to the CD game BIOS, CD player software, and compatibility with CD+G discs. 6 Mbit of RAM are allocated to data for programs, pictures, and sounds; 512 Kbit to PCM waveform memory; 128 Kbit to CD-ROM data cache memory; and an additional 64 Kbit to backup memory. Additional backup memory in the form of a 1 Mbit Backup RAM Cartridge was available as a separate purchase, released near the end of the system's life. Audio is supplied through the Ricoh RF5C164 sound chip, and two RCA pin jacks allow the Sega CD to output stereophonic sound separately from the Genesis. Combining stereo sound from a Genesis with either version of the Sega CD requires a cable between the Genesis's headphone jack and an input jack on the back of the CD unit. This is not required for the second model of the Genesis.
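Since the memory figures above mix megabits and kilobits, the following Python sketch converts each allocation to kilobytes using the usual binary convention (1 Kbit = 128 bytes); it is our illustration of the quoted figures, not Sega documentation:

    # Sega CD memory allocations, converted from bits to kilobytes.
    def kbit(n): return n * 128          # one kilobit = 128 bytes
    def mbit(n): return n * 1024 * 128   # one megabit = 1024 kilobits

    allocations = {
        "program/picture/sound RAM": mbit(6),   # 768 KB
        "PCM waveform memory": kbit(512),       # 64 KB
        "CD-ROM data cache": kbit(128),         # 16 KB
        "backup memory": kbit(64),              # 8 KB
    }
    for name, size in allocations.items():
        print(f"{name}: {size // 1024} KB")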
Though the Sega CD offers a faster processor, its main purpose is to expand the storage available for games. Whereas ROM cartridges of the day typically contained 8 to 16 megabits of data, a CD-ROM disc can hold more than 640 megabytes of data, more than 320 times the storage of a Genesis cartridge. This allows the Sega CD to run games containing full motion video.
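To make that comparison concrete, the short Python sketch below works through the arithmetic implied above, using only the figures from the text (a 16-megabit cartridge versus a 640-megabyte disc); the names are illustrative only:

    # Capacity of the largest typical Genesis cartridge versus a CD-ROM.
    bytes_per_megabit = 1_000_000 // 8   # 125,000 bytes
    bytes_per_megabyte = 1_000_000

    cartridge_bytes = 16 * bytes_per_megabit    # 16 Mbit = 2 MB
    cd_rom_bytes = 640 * bytes_per_megabyte     # "more than 640 megabytes"

    print(cd_rom_bytes // cartridge_bytes)  # 320 -> "more than 320 times"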
The Sega CD went through several variations during its lifetime, three of which were built by Sega. The original model used a front-loading motorized disc tray and sat underneath the Genesis. Sega later released the second model of the Sega CD, which was redesigned to sit next to the Genesis console and featured a top-loading disc tray in place of the motorized tray of the original model. In addition to the add-on models, Sega also released the Genesis CDX (Multi-Mega in Europe), a combination of the Genesis and Sega CD in one unit that initially retailed at US$399. Unique to this model was its additional functionality as a portable compact disc player.
Three additional system models were created by other electronics companies. Working with Sega, JVC released the Wondermega on April 1, 1992, in Japan, at an initial retail price of JP¥82,800 (or US$620). The system was later redesigned by JVC and released as the X'Eye in North America in September 1994. Designed by JVC as a Genesis and Sega CD combination with high-quality audio, the Wondermega was kept out of the hands of average consumers by its high price. The Pioneer LaserActive, developed by Pioneer Corporation, similarly required an attachment developed by Sega, known as the Mega-LD pack, in order to play Genesis and Sega CD games. Though the LaserActive was lined up to compete with the 3DO Interactive Multiplayer, the combined system and Mega-LD pack retailed at nearly US$1,600, making it a very expensive option for Sega CD players. Aiwa also released the CSD-GM1, a combination Genesis/Sega CD unit built into a boombox.
The Sega CD supports a library of over 200 games created by Sega and third-party publishers. Included in this library are six games which, while receiving individual Sega CD releases, also received separate versions that used both the Sega CD and 32X add-ons. Among the games were a number of FMV games, including "Sewer Shark" and "Fahrenheit". Well regarded games include "Sonic CD", "Popful Mail", and "Snatcher", as well as the controversial "Night Trap". Although Sega created "Streets of Rage" for the Genesis to compete against the SNES port of the arcade hit "Final Fight", the Sega CD received an enhanced version of "Final Fight" that has been praised for its greater faithfulness to the arcade original. Another title was noted for its impressive use of the Sega CD hardware as well as its violent content. In particular, "Sonic CD" garnered acclaim for its graphics and time travel gameplay, which improved upon the traditional "Sonic" formula. The Sega CD also received enhanced ports of Genesis games including "Batman Returns" and "Ecco the Dolphin".
Given the large number of FMV games and Genesis ports, the Sega CD's game library has been criticized for its lack of depth. Full-motion video quality was substandard on the Sega CD due to poor video compression software and limited color palette, and the concept never caught on with the public. According to Digital Pictures founder Tom Zito, "Sega CD could only put up 32 colors at a time, so you had this horrible grainy look to the images," though the system was able to put up 64 colors at one time. Likewise, most Genesis ports for the Sega CD featured additional full motion video sequences, extra levels, and enhanced audio, but were otherwise identical to their Genesis release. The video quality in these sequences has also been criticized as comparable to an old VHS tape.
Near the time of its release, the Sega CD was awarded Best New Peripheral of 1992 by "Electronic Gaming Monthly". Four separate reviews scored the add-on 8, 9, 8, and 8 out of 10; reviewers cited its upgrades to the Genesis as well as its high-quality and expanding library of games. Later reception in 1995 by "Electronic Gaming Monthly" showed a more mixed response to the peripheral, with four reviewers scoring it 5 out of 10, citing its game library issues and substandard video quality. "GamePro" also criticized the weak games library and substandard video quality, noting that many of the games were simple ports of cartridge games with minimal enhancements and commenting that "The Sega CD could have been an upgrade, but it's essentially a big memory device with CD sound." They gave it a "thumbs sideways" and recommended that Genesis fans buy an SNES before even considering a Sega CD. Likewise, in a special Game Machine Cross Review in May 1995, "Famicom Tsūshin" scored the Japanese Mega-CD 2 a 17 out of 40.
Retrospective reception of the Sega CD is mixed, praising certain games but criticizing its low value for money and limitations on the benefits it provides to the Genesis. "GamePro" listed the Sega CD as the 7th-worst selling video game console of all time, with reviewer Blake Snow noting that "The problem was threefold: the device was expensive at $299, it arrived late in the 16-bit life cycle, and it didn't do much (if anything) to enhance the gameplay experience." Snow went on to note, however, that the Sega CD did have in its library "the greatest "Sonic" game of all time" in "Sonic CD". IGN's Levi Buchanan criticized Sega's implementation of CD technology for the Genesis, noting, "What good is the extra storage space if there is nothing inventive to be done with it? No new gameplay concepts emerged from the SEGA CD—it just offered more of the same. In fact, with few exceptions like "Sonic CD", it often offered some of the 16-bit generation's worst games, like "Demolition Man"." Jeremy Parish of USgamer pointed out that "Sega was hardly the only company to muddy its waters with a CD add-on in the early '90s" and highlighted some "gems" for the system, but cautioned "the benefits offered by the Sega CD had to be balanced against the fact that the add-on more than doubled the price (and complexity) of the [Genesis]." Writing for "Retro Gamer", Damien McFerran cited various reasons for the Sega CD's limited sales, including the add-on's high price, lack of significant enhancement to the Genesis console, and lack of ability to function without a console attached. "Retro Gamer" writer Aaron Birch, however, defended the Sega CD and wrote that "the single biggest cause of the Mega-CD's failure was the console itself. When the system came out, CD-ROM technology was still in its infancy and companies had yet to get to grips with the possibilities it offered... quite simply, the Mega-CD was a console ahead of its time."
The poor support for the Sega CD has often been criticized as the first link in the devaluation of the Sega brand. Writing for IGN, Buchanan described an outside perspective on Sega's decision to release the Sega CD with its poor library and console support, stating, "[T]he SEGA CD instead looked like a strange, desperate move—something designed to nab some ink but without any real, thought-out strategy. Genesis owners that invested in the add-on were sorely disappointed, which undoubtedly helped sour the non-diehards on the brand." In reviewing for "GamePro", Snow commented that "[the] Sega CD marked the first of several Sega systems that saw very poor support; something that devalued the once-popular Sega brand in the eyes of consumers, and something that would ultimately lead to the company's demise as a hardware maker."
Former Sega of America senior producer Scot Bayless attributes the add-on's commercial failure to a lack of direction from Sega. According to Bayless, "It was a fundamental paradigm shift with almost no thought given to consequences. I honestly don't think anyone at Sega asked the most important question: 'Why?' There's a rule I developed during my time as an engineer in the military aviation business: never fall in love with your tech. I think that's where the Mega-CD went off the rails. The whole company fell in love with the idea without ever really asking how it would affect the games you made." Sega of America producer Michael Latham offers a contrasting view of the add-on, however, stating "I loved the Sega CD. I always thought the platform was under-appreciated and that it was hurt by an over-concentration of trying to make Hollywood interactive film games versus using its storage and extended abilities to make just plain great video games." Former Sega of Europe president Nick Alexander commented on the Mega-CD, saying "The Mega CD was interesting but probably misconceived and was seen very much as the interim product it was. I am afraid I cannot recall the sales numbers, but it was not a success."
Sega Pico
The Sega Pico, also known as the Kids Computer Pico, is an educational video game console by Sega Toys. Marketed as "edutainment", the main focus of the Pico was educational video games for children between 3 and 7 years old. The Pico was released in June 1993 in Japan and November 1994 in North America and Europe, later reaching China. It was succeeded by the Advanced Pico Beena, which was released in Japan in 2005. Though the Pico was sold continuously in Japan through the release of the Beena, in North America and Europe the Pico was less successful and was discontinued in early 1998, before being re-released by Majesco Entertainment. Releases for the Pico focused on education for children and included titles supported by licensed franchised animated characters, including Sega's own "Sonic the Hedgehog" series. Overall, Sega claims sales of 3.4 million Pico consoles and 11.2 million game cartridges, and over 350,000 Beena consoles and 800,000 cartridges.
Powered by the same hardware used in the Sega Genesis, the Pico was physically shaped to resemble a laptop. Included with the Pico are a stylus called the "Magic Pen" and a pad to draw on. Games for the system are controlled either by using the Magic Pen like a mouse or by pressing the directional buttons on the console. The Pico does not include its own screen or RF output; instead, it must be connected to a monitor through composite video, or through a VCR to be played on an RF screen. Touching the pen to the pad either allows drawing or animates a character on the screen.
Cartridges for the system were referred to as "Storyware", and take the form of picture books with a cartridge slot on the bottom. The Pico changes the television display and the set of tasks for the player to accomplish each time a page is turned. Sound, including voices and music, also accompanies every page. Games for the Pico focused on education, including subjects such as music, counting, spelling, reading, matching, and coloring. Titles included licensed animated characters from various franchises, such as "Disney's The Lion King: Adventures at Pride Rock" and "A Year at Pooh Corner". Sega also released titles featuring its mascot, Sonic the Hedgehog, such as "Sonic Gameworld" and "Tails and the Music Maker".
According to former Sega console hardware research and development head Hideki Sato, the development of the Sega Pico was possible due to the company's past work on the MyCard cartridges developed for the SG-1000, as well as on drawing tablets. The sensor technology used in the pad came from that developed for 1987 arcade game "World Derby", while its CPU and graphics chip came from the Genesis.
The Pico was released in Japan in June 1993 at a price of JP¥13,440. In North America, Sega unveiled the Pico at the 1994 American International Toy Fair, showcasing its drawing and display abilities before releasing it in November. The console was advertised at a price of approximately US$160 but was eventually released at US$139. "Storyware" cartridges sold for US$39.99 to US$49.99. The Pico's slogan was: "The computer that thinks it's a toy." The Sega Pico won several awards, including the "National Parenting Seal of Approval", a "Platinum Seal Award", and a gold medal from the National Association of Parenting Publications Awards.
After a lack of success, Sega discontinued the Pico in North America in early 1998. Later, a remake of the Pico made by Majesco Entertainment was released in North America in August 1999 at a price of US$49.99, with Storyware selling at $19.99. The Pico would later be released in China in 2002, priced at CN¥690.
In 2000, Sega claimed that the Pico had sold 2.5 million units. As of April 2005, Sega claims that 3.4 million Pico consoles and 11.2 million software cartridges had been sold worldwide. In 1995, the Pico was listed among Dr. Toy's 100 Best Products and named in "Child" as one of the best computer games available. According to Joseph Szadkowski of "The Washington Times", "Pico has enough power to be a serious learning aid that teaches counting, spelling, matching, problem-solving, memory, logic, hand/eye coordination and important, basic computer skills." Former Sega of America vice president of product development Joe Miller claims that he named his dog after the system because of his passion for the console. By contrast, Steven L. Kent claims that Sega of Japan CEO Hayao Nakayama watched the Pico "utterly fail" in North America. According to Warren Buckleitner of "Children's Software Revue", the Pico failed in North America due to a lack of credibility in the product.
The Advanced Pico Beena, also known simply as the Beena, is an educational console for young children sold by Sega Toys, released in Japan in 2005. It is the successor to the Pico and was marketed around the "learn while playing" concept. According to Sega Toys, the focus of the Advanced Pico Beena is on learning in a new social environment, and the company lists it as its upper-end product. Topics listed as educational focuses for the Beena include intellectual, moral, physical, dietary, and safety education. The name of the console was chosen to sound like the first syllables of "Be Natural".
Compared to the Pico, the Beena adds several functions. The Beena can be played without a television, supports multiplayer via a separately sold second Magic Pen, and can save data. Playtime can be limited through system settings. Some Beena games also offer adaptive difficulty, adjusting the challenge to the player's skill level. The Beena Lite, a more affordable version of the console, was released on July 17, 2008. At the time of the Beena Lite's release, Sega estimated that 350,000 Beena consoles and 800,000 game cartridges had been sold. The Beena is technically the last Sega console. | https://en.wikipedia.org/wiki?curid=29033 |
Sega VR
The Sega VR is a virtual reality headset developed by Sega in the early 1990s. Versions were planned for arcades, Genesis, and Saturn. Only the arcade version was released, and the home console versions were canceled.
The Sega VR's design was based on an IDEO virtual reality head-mounted display containing LCD screens in the visor and stereo headphones. Inertial sensors in the headset allow the system to track and react to the movements of the user's head.
Sega, flush with funds from the success of its Mega Drive/Genesis, announced the peripheral in 1991. It was shown in early 1993 at the Winter Consumer Electronics Show (CES), where "Electronic Gaming Monthly" noted it was an adaptation of a similar headset that Sega was already using for arcades. The magazine stated that a Mega Drive/Genesis version was planned for release in late 1993 with four launch games, including a port of the arcade game "Virtua Racing". Sega later announced its release schedule for early 1994, according to "Electronic Games".
Because of development difficulties, the Sega VR headset remained only a prototype and was never released to the general public. Then-CEO Tom Kalinske stated that the system would not be released due to it inducing motion sickness and severe headaches in users. It was last seen at the 1993 Summer CES, where it was demonstrated by Alan Hunter. It vanished from release schedules in 1994. Four games were apparently developed for the system, each using 16 MB cartridges that were to be bundled with the headset.
The company claimed to have terminated the project because the virtual reality effect was too realistic, so users might move while wearing the headset and injure themselves. The limited processing power of the system makes this claim unlikely, although there were reports of testers developing headaches and motion sickness. Mark Pesce, who worked on the Sega VR project, says SRI International, a research institute, warned Sega of the "hazards of prolonged use".
Only five original games were known to be in development.
In addition, Sega announced a port of Sega AM2's hit 1992 arcade game "Virtua Racing" as a launch game for the device.
Sega went on to other VR projects for arcades, and a similar peripheral for the Saturn was reported but never shown. The project prompted a brief flurry of VR products from other companies.
In 1994, Sega VR technology was used for the Sega VR-1 motion simulator arcade attraction, which was available at SegaWorld arcades. It tracks head movement and features 3D polygon graphics displayed in stereoscopic 3D. A scaled-down version, "Dennoo Senki Net Merc", was demonstrated at Japan's 1995 AOU (Amusement Operators Union) show, using the Sega Model 1 arcade system board to produce the 3D graphics. However, the game's flat-shaded graphics were compared unfavorably to the Sega Model 2's texture-filtered graphics. | https://en.wikipedia.org/wiki?curid=29034 |
Sega Saturn
The Sega Saturn is a home video game console developed by Sega and released on November 22, 1994 in Japan, May 11, 1995 in North America, and July 8, 1995 in Europe. Part of the fifth generation of video game consoles, it was the successor to the successful Sega Genesis. The Saturn has a dual-CPU architecture and eight processors. Its games are in CD-ROM format, and its game library contains several ports of arcade games as well as original games.
Development of the Saturn began in 1992, the same year Sega's groundbreaking 3D Model 1 arcade hardware debuted. The system was designed around a new CPU from Japanese electronics company Hitachi. Sega added another video display processor in early 1994 to better compete with Sony's forthcoming PlayStation.
The Saturn was initially successful in Japan but failed to sell in large numbers in the United States after its surprise May 1995 launch, four months before its scheduled release date. After the debut of the Nintendo 64 in late 1996, the Saturn rapidly lost market share in the U.S., where it was discontinued in 1998. Having sold 9.26 million units worldwide, the Saturn is considered a commercial failure. The failure to release a game in the "Sonic the Hedgehog" series, known in development as "Sonic X-treme", has been considered a factor in the console's poor performance.
Although the Saturn is remembered for several well regarded games, including "Nights into Dreams", the "Panzer Dragoon" series, and the "Virtua Fighter" series, its reputation is mixed due to its complex hardware design and limited third-party support. Sega's management has been criticized for its decisions during the system's development and discontinuation.
Released in 1988, the Genesis (known as the Mega Drive in Europe, Japan and Australia) was Sega's entry into the fourth generation of video game consoles. In mid-1990, Sega CEO Hayao Nakayama hired Tom Kalinske as president and CEO of Sega of America. Kalinske developed a four-point plan for sales of the Genesis: lower the price of the console, create a U.S.-based team to develop games targeted at the American market, continue aggressive advertising campaigns, and sell "Sonic the Hedgehog" with the console. The Japanese board of directors initially disapproved of the plan, but all four points were approved by Nakayama, who told Kalinske, "I hired you to make the decisions for Europe and the Americas, so go ahead and do it." Magazines praised "Sonic" as one of the greatest games ever made, and Sega's console finally took off as customers who had been waiting for the Super Nintendo Entertainment System (SNES) decided to purchase a Genesis instead. However, the release of a CD-based add-on for the Genesis, the Sega CD (known as Mega-CD outside of North America), was commercially disappointing.
Sega also experienced success with arcade games. In 1992 and 1993, the new Sega Model 1 arcade system board showcased Sega AM2's "Virtua Racing" and "Virtua Fighter" (the first 3D fighting game), which played a crucial role in popularizing 3D polygonal graphics. In particular, "Virtua Fighter" garnered praise for its simple three-button control scheme, with strategy coming from the intuitively observed differences between characters that felt and acted differently rather than the more ornate combos of two-dimensional competitors. Despite its crude visuals—with characters composed of fewer than 1,200 polygons—"Virtua Fighter"s fluid animation and relatively realistic depiction of distinct fighting styles gave its combatants a lifelike presence considered impossible to replicate with sprites. The Model 1 was an expensive system board, and porting its games to the Genesis required more power than the console's hardware could deliver. Several alternatives helped to bring Sega's newest arcade games to the console, such as the Sega Virtua Processor chip used for "Virtua Racing", and eventually the Sega 32X add-on.
Development of the Saturn was supervised by Hideki Sato, Sega's director and deputy general manager of research and development. According to Sega project manager Hideki Okamura, the project started over two years before the Saturn was showcased at the Tokyo Toy Show in June 1994. The name "Saturn" was initially only the codename during development. "Computer Gaming World" in March 1994 reported a rumor that "the Sega "Saturn" ... will release in Japan before the end of the year" for $250–300.
In 1993, Sega and Japanese electronics company Hitachi formed a joint venture to develop a new CPU for the Saturn, which resulted in the creation of the "SuperH RISC Engine" (or SH-2) later that year. The Saturn was designed around a dual-SH2 configuration. According to Kazuhiro Hamada, Sega's section chief for Saturn development during the system's conception, "the SH-2 was chosen for reasons of cost and efficiency. The chip has a calculation system similar to a DSP [digital signal processor], but we realized that a single CPU would not be enough to calculate a 3D world." Although the Saturn's design was largely finished before the end of 1993, reports in early 1994 of the technical capabilities of Sony's upcoming PlayStation console prompted Sega to include another video display processor (VDP) to improve the system's 2D performance and texture-mapping. CD-ROM-based and cartridge-only versions of the Saturn hardware were considered for simultaneous release during the system's development, but this idea was discarded due to concerns over the lower quality and higher price of cartridge-based games.
According to Kalinske, Sega of America "fought against the architecture of Saturn for quite some time". Seeking an alternative graphics chip for the Saturn, Kalinske attempted to broker a deal with Silicon Graphics, but Sega of Japan rejected the proposal. Silicon Graphics subsequently collaborated with Nintendo on the Nintendo 64. Kalinske, Sony Electronic Publishing's Olaf Olafsson, and Sony America's Mickey Schulhof had discussed development of a joint "Sega/Sony hardware system", which never came to fruition due to Sega's desire to create hardware that could accommodate both 2D and 3D visuals and Sony's competing notion of focusing on 3D technology. Publicly, Kalinske defended the Saturn's design: "Our people feel that they need the multiprocessing to be able to bring to the home what we're doing next year in the arcades."
In 1993, Sega restructured its internal studios in preparation for the Saturn's launch. To ensure high-quality 3D games would be available early in the Saturn's life, and to create a more energetic working environment, developers from Sega's arcade division were asked to create console games. New teams, such as "Panzer Dragoon" developer Team Andromeda, were formed during this time.
In January 1994, Sega began to develop an add-on for the Genesis, the Sega 32X, which would serve as a less expensive entry into the 32-bit era. The decision to create the add-on was made by Nakayama and widely supported by Sega of America employees. According to former Sega of America producer Scot Bayless, Nakayama was worried that the Saturn would not be available until after 1994 and that the recently released Atari Jaguar would reduce Sega's hardware sales. As a result, Nakayama ordered his engineers to have the system ready for launch by the end of the year. The 32X would not be compatible with the Saturn, but Sega executive Richard Brudvik-Lindner pointed out that the 32X would play Genesis games and had the same system architecture as the Saturn. Sega justified this by stating that both platforms would be on the market at the same time, and that the 32X would be aimed at players who could not afford the more expensive Saturn. According to Sega of America research and development head Joe Miller, the 32X helped development teams familiarize themselves with the dual SH-2 architecture also used in the Saturn. Because both machines shared many of the same parts and were preparing to launch around the same time, tensions emerged between Sega of America and Sega of Japan when the Saturn was given priority.
Sega released the Saturn in Japan on November 22, 1994, at a price of ¥44,800. "Virtua Fighter", a faithful port of the popular arcade game, sold at a nearly one-to-one ratio with the Saturn console at launch and was crucial to the system's early success in Japan. Though Sega had wanted to launch with "Clockwork Knight" and "Panzer Dragoon," the only other first-party game available at launch was "Wan Chai Connection". Fueled by the popularity of "Virtua Fighter", Sega's initial shipment of 200,000 Saturn units sold out on the first day. Sega waited until the December 3 launch of the PlayStation to ship more units; when both were sold side by side, the Saturn proved more popular.
Meanwhile, Sega released the 32X on November 21, 1994 in North America, December 3, 1994 in Japan, and January 1995 in PAL territories, selling it at less than half of the Saturn's launch price. After the holiday season, however, interest in the 32X rapidly declined. By the end of 1994, 500,000 Saturn units had been sold in Japan (compared to 300,000 PlayStation units), and sales exceeded 1 million within the following six months. There were conflicting reports that the PlayStation enjoyed a higher sell-through rate, and the system gradually began to overtake the Saturn in sales during 1995. Sony attracted many third-party developers to the PlayStation with a liberal $10 licensing fee, excellent development tools, and the introduction of a 7- to 10-day order system that allowed publishers to meet demand more efficiently than the 10- to 12-week lead times for cartridges that had previously been standard in the Japanese video game industry.
In March 1995, Sega of America CEO Tom Kalinske announced that the Saturn would be released in the U.S. on "Saturnday" (Saturday) September 2, 1995. However, Sega of Japan mandated an early launch to give the Saturn an advantage over the PlayStation. At the first Electronic Entertainment Expo (E3) in Los Angeles on May 11, 1995, Kalinske gave a keynote presentation in which he revealed the release price of US$399 (including a copy of "Virtua Fighter") and described the features of the console. Kalinske also revealed that, due to "high consumer demand", Sega had already shipped 30,000 Saturns to Toys "R" Us, Babbage's, Electronics Boutique, and Software Etc. for immediate release. The announcement upset retailers who were not informed of the surprise release, including Best Buy and Walmart; KB Toys responded by dropping Sega from its lineup. Sony subsequently unveiled the retail price for the PlayStation: Olaf Olafsson, the head of Sony Computer Entertainment America (SCEA), summoned Steve Race to the stage; Race simply said "$299" and walked away to applause. The Saturn's release in Europe also came before the previously announced North American date, on July 8, 1995, at a price of £399.99. European retailers and press did not have time to promote the system or its games, harming sales. The PlayStation launched in Europe on September 29, 1995; by November, it had already outsold the Saturn by a factor of three in the United Kingdom, where Sony had allocated £20 million of marketing during the holiday season compared to Sega's £4 million.
The Saturn's U.S. launch was accompanied by a reported $50 million advertising campaign that included coverage in publications such as "Wired" and "Playboy". Early advertising for the system was targeted at a more mature, adult audience than the Sega Genesis ads. Because of the early launch, the Saturn had only six games (all published by Sega) available to start as most third-party games were slated to be released around the original launch date. "Virtua Fighter"s relative lack of popularity in the West, combined with a release schedule of only two games between the surprise launch and September 1995, prevented Sega from capitalizing on the Saturn's early timing. Within two days of its September 9, 1995 launch in North America, the PlayStation (backed by a large marketing campaign) sold more units than the Saturn had in the five months following its surprise launch, with almost all of the initial shipment of 100,000 units being sold in advance, and the rest selling out across the U.S.
A high-quality port of the Namco arcade game "Ridge Racer" contributed to the PlayStation's early success, and garnered favorable media coverage in comparison to the Saturn version of Sega's "Daytona USA", which was considered inferior to its arcade counterpart. Namco, a longtime arcade competitor with Sega, also unveiled the Namco System 11 arcade board, based on raw PlayStation hardware. Although the System 11 was technically inferior to Sega's Model 2 arcade board, its lower price made it attractive to smaller arcades. Following a 1994 acquisition of Sega developers, Namco released "Tekken" for the System 11 and PlayStation. Directed by former "Virtua Fighter" designer Seiichi Ishii, "Tekken" was intended to be fundamentally similar to "Virtua Fighter", with the addition of detailed textures and twice the frame rate. "Tekken" surpassed "Virtua Fighter" in popularity due to its superior graphics and nearly arcade-perfect console port, becoming the first million-selling PlayStation game.
On October 2, 1995, Sega announced a Saturn price reduction to $299. High-quality Saturn ports of the Sega Model 2 arcade hits "Sega Rally Championship", "Virtua Cop", and "Virtua Fighter 2" (running at 60 frames per second at a high resolution) were available by the end of the year, and were generally regarded as superior to competitors on the PlayStation. Notwithstanding a subsequent increase in Saturn sales during the 1995 holiday season, the games were not enough to reverse the PlayStation's decisive lead. By 1996, the PlayStation had a considerably larger library than the Saturn, although Sega hoped to generate interest with upcoming exclusives such as "Nights into Dreams". An informal survey of retailers showed that the Saturn and PlayStation sold in roughly equal numbers during the first quarter of 1996. Within its first year, the PlayStation secured over 20% of the entire U.S. video game market. On the first day of the May 1996 E3 show, Sony announced a PlayStation price reduction to $199, a reaction to the release of the Model 2 Saturn in Japan at a price roughly equivalent to $199. On the second day, Sega announced it would match this price, though Saturn hardware was more expensive to manufacture.
Despite the launch of the PlayStation and Saturn, sales of 16-bit games and consoles continued to account for 64% of the video game market in 1995. Sega underestimated the continued popularity of the Genesis, and did not have the inventory to meet demand. Sega was able to capture 43% of the dollar share of the U.S. video game market and sell more than 2 million Genesis units in 1995, but Kalinske estimated that "we could have sold another 300,000 Genesis systems in the November/December timeframe." Nakayama's decision to focus on the Saturn over the Genesis, based on the systems' relative performance in Japan, has been cited as the major contributing factor in this miscalculation.
Due to long-standing disagreements with Sega of Japan, Kalinske lost most of his interest in his work as CEO of Sega of America. By the spring of 1996, rumors were circulating that Kalinske planned to leave Sega, and a July 13 article in the press reported speculation that Sega of Japan was planning significant changes to Sega of America's management team. On July 16, 1996, Sega announced that Shoichiro Irimajiri had been appointed chairman and CEO of Sega of America, while Kalinske would be leaving Sega after September 30 of that year. A former Honda executive, Irimajiri had been actively involved with Sega of America since joining Sega in 1993. Sega also announced that David Rosen and Nakayama had resigned from their positions as chairman and co-chairman of Sega of America, though both men remained with the company. Bernie Stolar, a former executive at Sony Computer Entertainment of America, was named Sega of America's executive vice president in charge of product development and third-party relations. Stolar, who had arranged a six-month PlayStation exclusivity deal for "Mortal Kombat 3" and helped build close relations with Electronic Arts while at Sony, was perceived as a major asset by Sega officials. Finally, Sega of America made plans to expand its PC software business.
Stolar was not supportive of the Saturn due to his belief that the hardware was poorly designed, and publicly announced at E3 1997 that "The Saturn is not our future." While Stolar had "no interest in lying to people" about the Saturn's prospects, he continued to emphasize quality games for the system, and subsequently reflected that "we tried to wind it down as cleanly as we could for the consumer." At Sony, Stolar opposed the localization of certain Japanese PlayStation games that he felt would not represent the system well in North America, and advocated a similar policy for the Saturn during his time at Sega, although he later sought to distance himself from this perception. These changes were accompanied by a softer image that Sega was beginning to portray in its advertising, including removing the "Sega!" scream and holding press events for the education industry. Marketing for the Saturn in Japan also changed with the introduction of "Segata Sanshiro" (played by Hiroshi Fujioka) as a character in a series of TV advertisements starting in 1997; the character would eventually star in a Saturn video game.
Temporarily abandoning arcade development, Sega AM2 head Yu Suzuki began developing several Saturn-exclusive games, including a role-playing game in the "Virtua Fighter" series. Initially conceived as an obscure prototype "The Old Man and the Peach Tree" and intended to address the flaws of contemporary Japanese RPGs (such as poor non-player character artificial intelligence routines), "Virtua Fighter RPG" evolved into a planned 11-part, 45-hour "revenge epic in the tradition of Chinese cinema", which Suzuki hoped would become the Saturn's killer app. The game was eventually released as "Shenmue" for the Saturn's successor, the Dreamcast.
As Sonic Team was working on "Nights into Dreams", Sega tasked the U.S.-based Sega Technical Institute (STI) with developing what would have been the first fully 3D entry in its popular "Sonic the Hedgehog" series. The game, "Sonic X-treme", was moved to the Saturn after several prototypes for other hardware (including the 32X) were discarded. It featured a fisheye lens camera system that rotated levels with Sonic's movement. After Nakayama ordered the game be reworked around the engine created for its boss battles, the developers were forced to work between 16 and 20 hours a day to meet their December 1996 deadline. Weeks of development were wasted after Stolar rescinded STI's access to Sonic Team's "Nights into Dreams" engine following an ultimatum by "Nights" programmer Yuji Naka. After programmer Ofer Alon quit and designers Chris Senn and Chris Coffin became ill, "Sonic X-treme" was cancelled in early 1997. Sonic Team started work on an original 3D "Sonic" game for the Saturn, but development was shifted to the Dreamcast and the game became "Sonic Adventure". STI was disbanded in 1996 as a result of changes in management at Sega of America.
Journalists and fans have speculated about the impact a completed "X-treme" might have had on the market. David Houghton of GamesRadar described the prospect of "a good 3D "Sonic" game" on the Saturn as "a 'What if...' situation on a par with the dinosaurs not becoming extinct". "IGN"'s Travis Fahs called "X-treme" "the turning point not only for Sega's mascot and their 32-bit console, but for the entire company", but noted that the game served as "an empty vessel for Sega's ambitions and the hopes of their fans". Dave Zdyrko, who operated a prominent Saturn fan website during the system's lifespan, said: "I don't know if ["X-treme"] could've saved the Saturn, but ... "Sonic" helped make the Genesis and it made absolutely no sense why there wasn't a great new "Sonic" title ready at or near the launch of the [Saturn]". In a 2007 retrospective, producer Mike Wallis maintained that "X-treme" "definitely would have been competitive" with Nintendo's "Super Mario 64". "Next Generation" reported in late 1996 that "X-treme" would have harmed Sega's reputation if it did not compare well to contemporary competition. Naka said he had been relieved by the cancellation, feeling that the game was not promising.
From 1993 to early 1996, although Sega's revenue declined as part of an industry-wide slowdown, the company retained control of 38% of the U.S. video game market (compared to Nintendo's 30% and Sony's 24%). 800,000 PlayStation units were sold in the U.S. by the end of 1995, compared to 400,000 Saturn units. In part due to an aggressive price war, the PlayStation outsold the Saturn by two-to-one in 1996, while Sega's 16-bit sales declined markedly. By the end of 1996, the PlayStation had sold 2.9 million units in the U.S., more than twice the 1.2 million Saturn units sold. The Christmas 1996 "Three Free" pack, which bundled the Saturn with "Daytona USA", "Virtua Fighter 2", and "Virtua Cop," drove sales dramatically and ensured the Saturn remained a competitor into 1997.
However, the Saturn failed to take the lead. After the launch of the Nintendo 64 in 1996, sales of the Saturn and its games were sharply reduced, while the PlayStation outsold the Saturn by three-to-one in the U.S. in 1997. The 1997 release of "Final Fantasy VII" significantly increased the PlayStation's popularity in Japan. As of August 1997, Sony controlled 47% of the console market, Nintendo 40%, and Sega only 12%. Neither price cuts nor high-profile game releases proved helpful. Reflecting decreased demand for the system, worldwide Saturn shipments during March to September 1997 declined from 2.35 million to 600,000 versus the same period in 1996; shipments in North America declined from 800,000 to 50,000. Due to the Saturn's poor performance in North America, 60 of Sega of America's 200 employees were laid off in the fall of 1997.
As a result of Sega's deteriorating financial situation, Nakayama resigned as president in January 1998 in favor of Irimajiri. Stolar subsequently became president of Sega of America. Following five years of generally declining profits, in the fiscal year ending March 31, 1998 Sega suffered its first parent and consolidated financial losses since its 1988 listing on the Tokyo Stock Exchange. Due to a 54.8% decline in consumer product sales (including a 75.4% decline overseas), the company reported a net loss of ¥43.3 billion (US$327.8 million) and a consolidated net loss of ¥35.6 billion (US$269.8 million).
Shortly before announcing its financial losses, Sega announced that it was discontinuing the Saturn in North America to prepare for the launch of its successor. Only 12 Saturn games were released in North America in 1998 ("Magic Knight Rayearth" was the final official release), compared to 119 in 1996. The Saturn would last longer in Japan. Rumors about the upcoming Dreamcast—spread mainly by Sega itself—were leaked to the public before the last Saturn games were released. The Dreamcast was released on November 27, 1998 in Japan and on September 9, 1999 in North America. The decision to abandon the Saturn effectively left the Western market without Sega games for over one year. Sega suffered an additional ¥42.881 billion consolidated net loss in the fiscal year ending March 1999, and announced plans to eliminate 1,000 jobs, nearly a quarter of its workforce.
Worldwide Saturn sales include at least the following amounts in each territory: 5.75 million in Japan (surpassing the Genesis' sales of 3.58 million there), 1.8 million in the United States, 1 million in Europe, and 530,000 elsewhere. With lifetime sales of 9.26 million units, the Saturn is considered a commercial failure, although its install base in Japan surpassed the Nintendo 64's 5.54 million. Lack of distribution has been cited as a significant factor contributing to the Saturn's failure, as the system's surprise launch damaged Sega's reputation with key retailers. Conversely, Nintendo's long delay in releasing a 3D console and damage caused to Sega's reputation by poorly supported add-ons for the Genesis are considered major factors allowing Sony to gain a foothold in the market.
Featuring eight processors, the Saturn's main central processing units are two Hitachi SH-2 microprocessors clocked at 28.6 MHz and capable of 56 MIPS. It uses a Motorola 68EC000 running at 11.3 MHz as a sound controller; a custom sound processor with an integrated Yamaha FH1 DSP running at 22.6 MHz capable of up to 32 sound channels with both FM synthesis and 16-bit PCM sampling at a maximum rate of 44.1 kHz; and two video display processors: the VDP1 (which handles sprites, textures and polygons) and the VDP2 (which handles backgrounds). Its double-speed CD-ROM drive is controlled by a dedicated Hitachi SH-1 processor to reduce load times. The System Control Unit (SCU), which controls all buses and functions as a co-processor of the main SH-2 CPU, has an internal DSP running at 14.3 MHz. It features a cartridge slot that allows for memory expansion, 16 Mbit of work random-access memory (RAM), 12 Mbit of video RAM, 4 Mbit of RAM for sound functions, 4 Mbit of CD buffer RAM and 256 Kbit (32 KB) of battery backup RAM. Its video output, provided by a stereo AV cable, displays at resolutions from 320×224 to 704×224 pixels, and can display up to 16.77 million colors simultaneously. The Saturn was sold packaged with an instruction manual, one control pad, a stereo AV cable, and its 100 V AC power supply, with a power consumption of approximately 15 W.
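The quoted 56 MIPS figure follows roughly from the clock speed of the paired CPUs. As a back-of-the-envelope check (our illustrative arithmetic, assuming the SH-2's roughly one-instruction-per-cycle RISC pipeline and ignoring cache misses and shared-bus contention):

```latex
% Illustrative only: assumes ~1 instruction per cycle per SH-2,
% ignoring memory stalls and contention on the shared bus.
\underbrace{2}_{\text{CPUs}} \times 28.6~\text{MHz} \times 1~\tfrac{\text{instruction}}{\text{cycle}} \approx 57.2~\text{MIPS}
```

which is in the neighborhood of the quoted 56 MIPS once pipeline stalls are accounted for.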
The Saturn had technically impressive hardware at the time of its release, but its complexity made harnessing this power difficult for developers accustomed to conventional programming. The greatest disadvantage was that both CPUs shared the same bus and were unable to access system memory at the same time. Making full use of the 4 kB of cache memory in each CPU was critical to maintaining performance. For example, "Virtua Fighter" used one CPU for each character, while "Nights" used one CPU for 3D environments and the other for 2D objects. The Visual Display Processor 2 (VDP2), which can generate and manipulate backgrounds, has also been cited as one of the system's most important features.
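A minimal sketch of this work-partitioning pattern follows, written in C with POSIX threads standing in for the Saturn's second SH-2; none of these names are Saturn library functions, and the split mirrors the "Nights" example above rather than any documented source code:

```c
/* Illustrative only: models the Saturn's two-CPU work split using POSIX
 * threads. Real Saturn code used Sega's own libraries and a fixed
 * master/slave SH-2 arrangement; all names here are made up. */
#include <pthread.h>
#include <stdio.h>

/* The slave CPU's share of the frame (as in "Nights"): 2D objects. */
static void *update_2d_objects(void *arg)
{
    (void)arg;
    puts("slave: sprites, HUD, backgrounds");
    return NULL;
}

/* The master CPU's share: the 3D environment. */
static void update_3d_world(void)
{
    puts("master: camera, geometry, collisions");
}

int main(void)
{
    pthread_t slave;

    /* Hand one whole subsystem to the second CPU for the frame. On real
     * hardware, keeping each task's hot data inside its CPU's 4 kB cache
     * was critical, because the two SH-2s shared a single bus and could
     * not access main memory at the same time. */
    pthread_create(&slave, NULL, update_2d_objects, NULL);
    update_3d_world();          /* runs in parallel with the slave */
    pthread_join(slave, NULL);  /* synchronize before drawing */
    return 0;
}
```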
The Saturn's design elicited mixed commentary among game developers and journalists. Developers quoted by "Next Generation" in December 1995 described the Saturn as "a real coder's machine" for "those who love to get their teeth into assembly and really hack the hardware", with "more flexibility" and "more calculating power than the PlayStation". The sound board was also widely praised. By contrast, Lobotomy Software programmer Ezra Dreisbach described the Saturn as significantly slower than the PlayStation, whereas Kenji Eno of WARP observed little difference. In particular, Dreisbach criticized the Saturn's use of quadrilaterals as its basic geometric primitive, in contrast to the triangles rendered by the PlayStation and the Nintendo 64. Ken Humphries of Time Warner Interactive remarked that compared to the PlayStation, the Saturn was worse at generating polygons but better at sprites. Third-party development was initially hindered by the lack of useful software libraries and development tools, requiring developers to write in assembly language. During early Saturn development, programming in assembly could offer a two-to-fivefold speed increase over higher-level languages such as C.
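The quadrilateral-versus-triangle criticism can be made concrete. In this illustrative C sketch (ours, not Saturn or PlayStation API code), a quad decomposes losslessly into two triangles, while a triangle can only be expressed on quad-based hardware by repeating a vertex, producing the degenerate edges that complicated ports to and from the Saturn:

```c
#include <stdio.h>

typedef struct { float x, y, z; } Vertex;
typedef struct { Vertex v[3]; } Triangle;
typedef struct { Vertex v[4]; } Quad;

/* A triangle drawn on quad-based hardware: the fourth corner repeats a
 * vertex, wasting fill work and distorting texture mapping. */
static Quad triangle_as_quad(Triangle t)
{
    Quad q = { { t.v[0], t.v[1], t.v[2], t.v[2] } }; /* degenerate edge */
    return q;
}

/* The reverse direction is lossless: one quad becomes two triangles,
 * one reason triangles became the industry-standard primitive. */
static void quad_as_triangles(Quad q, Triangle out[2])
{
    out[0] = (Triangle){ { q.v[0], q.v[1], q.v[2] } };
    out[1] = (Triangle){ { q.v[0], q.v[2], q.v[3] } };
}

int main(void)
{
    Quad unit = { { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} } };
    Triangle tris[2];

    quad_as_triangles(unit, tris);
    Quad degenerate = triangle_as_quad(tris[0]);
    printf("repeated corner: (%g, %g, %g)\n",
           degenerate.v[3].x, degenerate.v[3].y, degenerate.v[3].z);
    return 0;
}
```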
The Saturn hardware is extremely difficult to emulate. Sega responded to complaints about the difficulty of programming for the Saturn by writing new graphics libraries which were claimed to make development easier. Sega of America also purchased a United Kingdom-based development firm, Cross Products, to produce the Saturn's development system. Despite these challenges, Treasure CEO Masato Maegawa stated that the Nintendo 64 was more difficult to develop for than the Saturn. Traveller's Tales founder Jon Burton felt that while the PlayStation was easier "to get started on ... you quickly reach [its] limits", whereas the Saturn's "complicated" hardware had the ability to "improve the speed and look of a game when all used together correctly". A major criticism was the Saturn's use of 2D sprites to generate polygons and simulate 3D space. The PlayStation functioned similarly, but also featured a dedicated "Geometry Transfer Engine" that rendered additional polygons. As a result, several analysts described the Saturn as an "essentially" 2D system. For example, Steven L. Kent stated: "Although Nintendo and Sony had true 3D game machines, Sega had a 2D console that did a good job with 3D objects but wasn't optimized for 3D environments."
Several Saturn models were produced in Japan. An updated model in a recolored light gray (officially white) was released at ¥20,000 to reduce the system's cost and raise its appeal among women and younger children. Two models were released by third parties: Hitachi released the Hi-Saturn (a smaller model equipped with a car navigation function), while JVC released the V-Saturn. Saturn controllers came in various color schemes to match different models of the console. The system also supports several accessories. A wireless controller powered by AA batteries connects using an infrared signal. Designed to work with "Nights", the Saturn 3D Pad includes both a control pad and an analog stick for directional input. Sega also released several versions of arcade sticks as peripherals, including the Virtua Stick, the Virtua Stick Pro, the Mission Analog Stick, and the Twin Stick. Sega also created a light gun peripheral, the Virtua Gun, for shooting games such as "Virtua Cop" and "The Guardian", and the Arcade Racer, a wheel for racing games. The Play Cable allows two Saturn consoles to be connected for multiplayer gaming across two screens, while a multitap allows up to six players to play on the same console. The Saturn was designed to support up to 12 players on a single console by using two multitaps. RAM cartridges expand the memory. Other accessories include a keyboard, mouse, floppy disk drive, and movie card.
Like the Genesis, the Saturn had an internet-based gaming service. The Sega NetLink was a 28.8 kbit/s modem that fit into the Saturn's cartridge slot for direct-dial multiplayer. In Japan, a pay-to-play service was used. It could also be used for web browsing, sending email, and online chat. Because the NetLink was released before the Saturn keyboard, Sega produced a series of CDs containing hundreds of website addresses so that Saturn owners could browse with the joypad. NetLink-compatible games included "Daytona USA", "Duke Nukem 3D", "Saturn Bomberman", and "Sega Rally". In 1995, Sega announced a variant of the Saturn featuring a built-in NetLink modem under the code name "Sega Pluto", but it was never released.
Sega developed an arcade board based on the Saturn's hardware, the Sega ST-V (or Titan), intended as an affordable alternative to Sega's Model 2 arcade board and as a testing ground for upcoming Saturn software. The Titan was criticized for its comparatively weak performance by Sega AM2's Yu Suzuki and was overproduced by Sega's arcade division. Because Sega already had the "Die Hard" license, members of Sega AM1 working at the Sega Technical Institute developed "Die Hard Arcade" for the Titan to clear excess inventory. "Die Hard Arcade" became the most successful Sega arcade game produced in the United States at that point. Other games released for the Titan include "Virtua Fighter Kids".
Much of the Saturn's library comes from Sega's arcade ports, including "Daytona USA", "The House of the Dead", "Last Bronx", "Sega Rally Championship", the "Virtua Cop" series, the "Virtua Fighter" series, and "Virtual-On". Saturn ports of 2D Capcom fighting games including "Darkstalkers 3", "Marvel Super Heroes vs. Street Fighter", and "Street Fighter Alpha 3" were noted for their faithfulness to their arcade counterparts. "Fighters Megamix", developed by Sega AM2 for the Saturn rather than arcades, combined characters from "Fighting Vipers" and "Virtua Fighter" to positive reviews. Highly rated Saturn exclusives include "Panzer Dragoon Saga", "Dragon Force", "Guardian Heroes", "Nights", "Panzer Dragoon II Zwei", and "Shining Force III". PlayStation games such as "Resident Evil" and "Wipeout 2097" received Saturn ports with mixed results. Lobotomy Software's "PowerSlave" featured some of the most impressive 3D graphics on the system, leading Sega to contract the studio to produce Saturn ports of "Duke Nukem 3D" and "Quake". While Electronic Arts' limited support for the Saturn and Sega's failure to develop a football game for the 1996 fall season gave Sony the lead in the sports genre, "Sega Sports" published Saturn sports games including the well-regarded "World Series Baseball" and "Sega Worldwide Soccer" series. With about 600 official releases, the Saturn's library is nearly twice as large as the Nintendo 64's.
Due to the cancellation of "Sonic X-treme", the Saturn lacks an exclusive "Sonic the Hedgehog" platformer; instead it received a graphically enhanced port of the Genesis game "Sonic 3D Blast", the compilation "Sonic Jam", and a racing game, "Sonic R". The platformer "Bug!" received attention for its eponymous main character being a potential mascot for the Saturn, but it failed to catch on as the "Sonic" series had. Sonic Team's "Nights into Dreams", considered one of the most important Saturn releases, is a score-attack game that attempted to simulate both the joy of flying and the fleeting sensation of dreams. The gameplay of "Nights" involves steering the imp-like androgynous protagonist, Nights, as it flies on a mostly 2D plane across surreal stages broken into four segments each. The levels repeat for as long as an in-game time limit allows, while flying over or looping around various objects in rapid succession earns additional points. Although it lacked the fully 3D environments of Nintendo's "Super Mario 64", "Nights"' emphasis on unfettered movement and graceful acrobatic techniques showcased the intuitive potential of analog control. Sonic Team's "Burning Rangers", a fully 3D action-adventure game involving a team of outer-space firefighters, garnered praise for its transparency effects and distinctive art direction, but was released in limited quantities late in the Saturn's lifespan and criticized for its short length.
Some of the games that made the Saturn popular in Japan, such as "Grandia" and the "Sakura Wars" series, never saw a Western release due to Sega of America's policy of not localizing RPGs and other Japanese games that might have damaged the system's reputation in North America. Despite appearing first on the Saturn, games such as "Dead or Alive" and "Grandia" only saw a Western release on the PlayStation. Working Designs localized several Japanese Saturn games before a public feud between Sega of America's Bernie Stolar and Working Designs president Victor Ireland resulted in the company switching its support to the PlayStation. "Panzer Dragoon Saga" was praised as perhaps the finest RPG for the system due to its cinematic presentation, evocative plot, and unique battle system—with a tactical emphasis on circling around opponents to identify weak points and the ability to "morph" the physical attributes of the protagonist's dragon companion during combat—but Sega released fewer than 20,000 retail copies of the game in North America in what IGN's Levi Buchanan characterized as one example of the Saturn's "ignominious send-off" in the region. Similarly, only the first of three installments of "Shining Force III" was released outside Japan. The Saturn's library also garnered criticism for its lack of sequels to high-profile Genesis-era Sega franchises, with Sega of Japan's cancellation of a planned third installment in Sega of America's popular "Eternal Champions" series cited as a significant source of controversy.
Later ports of Saturn games including "Guardian Heroes" and "Nights" continued to garner positive reviews. Partly due to rarity, Saturn games such as "Panzer Dragoon Saga" and "Radiant Silvergun" have been noted for their cult following. Due to the system's commercial failure and hardware limitations, planned Saturn releases such as "Resident Evil 2", "Shenmue", "Sonic Adventure", and "Virtua Fighter 3" were cancelled and moved to the Dreamcast.
At the time of the Saturn's release, "Famicom Tsūshin" awarded it 24 out of 40, higher than the PlayStation's 19 out of 40. In June 1995, Dennis Lynch of the "Chicago Tribune" and Albert Kim of "Entertainment Weekly" praised the Saturn as the most advanced console available; Lynch praised the double-speed CD-ROM drive and "intense surround-sound capabilities" and Kim cited "Panzer Dragoon" as a "lyrical and exhilarating epic" demonstrating the ability of new technology to "transform" the industry. In December 1995, "Next Generation" gave the Saturn three and a half stars out of five, highlighting Sega's marketing and arcade background as strengths but the system's complexity as a weakness. Four critics in "Electronic Gaming Monthly"s December 1996 Buyer's Guide rated the Saturn 8, 6, 7, and 8 out of 10 and the PlayStation 9, 10, 9, and 9. By December 1998, "EGM"s reviews were more mixed, with reviewers citing the lack of games as a major problem. According to "EGM" reviewer Crispin Boyer, "the Saturn is the only system that can thrill me one month and totally disappoint me the next".
Retrospective feedback on the Saturn is mixed, but generally praises its game library. According to Greg Sewart of 1UP.com, "the Saturn will go down in history as one of the most troubled, and greatest, systems of all time". In 2009, IGN named the Saturn the 18th best console of all time, praising its unique game library. According to the reviewers, "While the Saturn ended up losing the popularity contest to both Sony and Nintendo ... "Nights into Dreams", the "Virtua Fighter" and "Panzer Dragoon" series are all examples of exclusive titles that made the console a fan favorite." "Edge" noted "hardened loyalists continue to reminisce about the console that brought forth games like "Burning Rangers", "Guardian Heroes", "Dragon Force" and "Panzer Dragoon Saga"." In 2015, "The Guardian"s Keith Stuart wrote that "the Saturn has perhaps the strongest line up of 2D shooters and fighting games in console history".
"Retro Gamer"s Damien McFerran wrote: "Even today, despite the widespread availability of sequels and re-releases on other formats, the Sega Saturn is still a worthwhile investment for those who appreciate the unique gameplay styles of the companies that supported it." IGN's Adam Redsell wrote "[Sega's] devil-may-care attitude towards game development in the Saturn and Dreamcast eras is something that we simply do not see outside of the indie scene today." Necrosoft Games director Brandon Sheffield felt that "the Saturn was a landing point for games that were too 'adult' in content for other systems, as it was the only one that allowed an 18+ rating for content in Japan ... some games, like "Enemy Zero" used it to take body horror to new levels, an important step toward the expansion of games and who they served." Sewart praised the Saturn's first-party games as "Sega's shining moment as a game developer", with Sonic Team demonstrating its creative range and AM2 producing numerous technically impressive arcade ports. He also commented on the many Japan-exclusive Saturn releases, which he connected with a subsequent boom in the game import market. IGN's Travis Fahs was critical of the Saturn library's lack of "fresh ideas" and "precious few high-profile franchises", in contrast to what he described as Sega's more creative Dreamcast output.
Sega has been criticized for its management of the Saturn. McFerran felt its management staff had "fallen out of touch with both the demands of the market and the industry". Stolar has also been criticized; according to Fahs, "Stolar's decision to abandon the Saturn made him a villain to many Sega fans, but ... it was better to regroup than to enter the next fight battered and bruised. Dreamcast would be Stolar's redemption." Stolar defended his decision, saying, "I felt Saturn was hurting the company more than helping it. That was a battle that we weren't going to win." Sheffield said that the Saturn's quadrilaterals undermined third-party support, but because "nVidia invested in quads" at the same time, there had been "a remote possibility" they could have "become the standard instead of triangles ... if somehow, magically, the Saturn were the most popular console of that era." Speaking more positively, former Working Designs president Victor Ireland described the Saturn as "the start of the future of console gaming" because it "got the better developers thinking and designing with parallel-processing architecture in mind for the first time". In GamesRadar, Justin Towell wrote that the Saturn's 3D Pad "set the template for every successful controller that followed, with analog shoulder triggers and left thumbstick ... I don't see any three-pronged controllers around the office these days."
Douglass C. Perry of Gamasutra noted that, from its surprise launch to its ultimate failure, the Saturn "soured many gamers on Sega products". Sewart and IGN's Levi Buchanan cited the failure of the Saturn as the major reason for Sega's downfall as a hardware manufacturer, but USgamer's Jeremy Parish described it as "more a symptom ... than a cause" of the decline, which began with add-ons for the Genesis that fragmented the market and continued with Sega of America's and Sega of Japan's competing designs for the Dreamcast. Sheffield portrayed Sega's mistakes with the Saturn as emblematic of the broader decline of the Japanese gaming industry: "They thought they were invincible, and that structure and hierarchy were necessary for their survival, but more flexibility, and a greater participation with the West could have saved them." According to Stuart, Sega "didn't see ... the roots of a prevailing trend, away from arcade conversions and traditional role-playing adventures and toward a much wider console development community with fresh ideas about gameplay and structure." Pulp365 reviews editor Matt Paprocki concluded that "the Saturn is a relic, but an important one, which represents the harshness of progress and what it can leave in its wake". | https://en.wikipedia.org/wiki?curid=29035 |
Dreamcast
The Dreamcast is a home video game console released by Sega on November 27, 1998 in Japan, September 9, 1999 in North America, and October 14, 1999 in Europe. It was the first in the sixth generation of video game consoles, preceding Sony's PlayStation 2, Nintendo's GameCube and Microsoft's Xbox. The Dreamcast was Sega's final home console, marking the end of the company's 18 years in the console market.
In contrast to the expensive hardware of the unsuccessful Sega Saturn, the Dreamcast was designed to reduce costs with "off-the-shelf" components, including a Hitachi SH-4 CPU and an NEC PowerVR2 GPU. Released in Japan to a subdued reception, the Dreamcast enjoyed a successful U.S. launch backed by a large marketing campaign, but interest in the system steadily declined as Sony built hype for the upcoming PlayStation 2. Sales did not meet Sega's expectations despite several price cuts, and the company continued to incur significant financial losses. After a change in leadership, Sega discontinued the Dreamcast on March 31, 2001, withdrawing from the console business and restructuring itself as a third-party publisher. 9.13 million Dreamcast units were sold worldwide.
Although the Dreamcast had a short lifespan and limited third-party support, reviewers have considered the console ahead of its time. Its library contains many games considered creative and innovative, including "Crazy Taxi", "Jet Set Radio" and "Shenmue", as well as high-quality ports from Sega's NAOMI arcade system board. The Dreamcast was also the first console to include a built-in modular modem for internet support and online play.
Released in 1988, the Sega Genesis (known as the Mega Drive in Japan, Europe and Brazil) was Sega's entry into the fourth generation of video game consoles. Selling 30.75 million units worldwide, the Genesis was the most successful console Sega ever released. The successor to the Genesis, the Sega Saturn, was released in Japan in 1994. The Saturn was a CD-ROM-based console that displayed both 2D and 3D computer graphics, but its complex dual-CPU architecture made it more difficult to program for than its chief competitor, the Sony PlayStation. Although the Saturn debuted before the PlayStation in both Japan and the United States, its surprise U.S. launch—which came four months earlier than originally scheduled—was marred by a lack of distribution, which remained a problem throughout the system's life. Moreover, Sega's early release was undermined by Sony's simultaneous announcement that the PlayStation would retail for US$299—compared to the Saturn's initial price of $399. Nintendo's long delay in releasing a competing 3D console and the damage done to Sega's reputation by poorly supported add-ons for the Genesis (particularly the Sega 32X) allowed Sony to establish a foothold in the market. The PlayStation was immediately successful in the U.S., in part due to a massive advertising campaign and strong third-party support engendered by Sony's excellent development tools and liberal $10 licensing fee. Sony's success was further aided by a price war in which Sega lowered the price of the Saturn from $399 to $299 and then from $299 to $199 to match the price of the PlayStation, even though Saturn hardware was more expensive to manufacture and the PlayStation enjoyed a larger software library. Losses on the Saturn hardware contributed to Sega's financial problems, which saw the company's revenue decline between 1992 and 1995 as part of an industry-wide slowdown. Furthermore, Sega's focus on the Saturn over the Genesis prevented it from fully capitalizing on the continued strength of the 16-bit market.
Due to long-standing disagreements with Sega of Japan, Sega of America CEO Tom Kalinske became less interested in his position. On July 16, 1996, Sega announced that Shoichiro Irimajiri had been appointed chairman and CEO of Sega of America, while Kalinske would be leaving Sega after September 30 of that year. Sega also announced that Sega Enterprises cofounder David Rosen and Sega of Japan CEO Hayao Nakayama had resigned from their positions as chairman and co-chairman of Sega of America, though both men remained with the company. Bernie Stolar, a former executive at Sony Computer Entertainment of America, was named Sega of America's executive vice president in charge of product development and third-party relations. Stolar did not support the Saturn due to his belief that the hardware was poorly designed and publicly announced at E3 1997 that "The Saturn is not our future." After the launch of the Nintendo 64, sales of the Saturn and Sega's 32-bit software were sharply reduced. As of August 1997, Sony controlled 47 percent of the console market, Nintendo controlled 40 percent, and Sega controlled only 12 percent. Neither price cuts nor high-profile games proved helpful to the Saturn. Due to the Saturn's poor performance in North America, Sega of America laid off 60 of its 200 employees in the fall of 1997.
As a result of the company's deteriorating financial situation, Nakayama resigned as president of Sega in January 1998 in favor of Irimajiri. Stolar subsequently became CEO and president of Sega of America. Following five years of generally declining profits, in the fiscal year ending March 31, 1998, Sega suffered its first parent and consolidated financial losses since its 1988 listing on the Tokyo Stock Exchange. Due to a 54.8% decline in consumer product sales (including a 75.4% decline overseas), the company reported a consolidated net loss of ¥35.6 billion (US$269.8 million). Shortly before announcing its financial losses, Sega revealed that it was discontinuing the Saturn in North America to prepare for the launch of its successor. This decision effectively left the Western market without Sega games for over one year. Rumors about the upcoming Dreamcast—spread mainly by Sega itself—leaked to the public before the last Saturn games were released.
As early as 1995, reports surfaced that Sega would collaborate with Lockheed Martin, The 3DO Company, Matsushita, or Alliance Semiconductor to create a new graphics processing unit, which conflicting accounts said would be used for a 64-bit "Saturn 2" or an add-on peripheral. Development of the Dreamcast was wholly unrelated to this rumored project. In light of the Saturn's poor market performance, Irimajiri decided to start looking outside of the company's internal hardware development division to create a new console. In 1997, Irimajiri enlisted the services of IBM's Tatsuo Yamamoto to lead an 11-man team to work on a secret hardware project in the United States, which was referred to as "Blackbelt". Accounts vary on how an internal team led by Hideki Sato also began development on Dreamcast hardware; one account specifies that Sega of Japan tasked both teams, while another suggests that Sato was bothered by Irimajiri's choice to begin development externally and chose to have his hardware team begin development. Sato and his group chose the Hitachi SH-4 processor architecture and the VideoLogic PowerVR2 graphics processor, manufactured by NEC, in the production of their mainboard. Initially known as "Whitebelt", this project was later codenamed "Dural", after the metallic female fighter from Sega's "Virtua Fighter" series.
Yamamoto's group opted to use 3dfx Voodoo 2 and Voodoo Banshee graphics processors alongside a Motorola PowerPC 603e central processing unit (CPU), but Sega management later asked them to also use the SH-4 chip. Both processors have been described as "off the shelf" components. In 1997, 3dfx filed for an initial public offering and, as a result of its legal disclosure obligations, revealed its contracts with Sega, including the development of the new console. This angered Sega of Japan executives, who eventually decided to use the Dural chipset and cut ties with 3dfx. According to former Sega of America vice president of communications and former NEC brand manager Charles Bellfield, presentations of games using the NEC solution showcased the performance and low cost delivered by the SH-4 and PowerVR architecture. He further stated that "Sega's relationship with NEC, a Japanese company, probably made a difference [in Sega's decision to adopt the Japanese team's design] too." Stolar, on the other hand, "felt the U.S. version, the 3Dfx version, should have been used. Japan wanted the Japanese version, and Japan won." As a result, 3dfx filed a lawsuit against both Sega and NEC claiming breach of contract, which was eventually settled out of court. The choice to use the PowerVR architecture concerned Electronic Arts (EA), a longtime developer for Sega's consoles. EA had invested in 3dfx but was unfamiliar with the selected architecture, which was reportedly less powerful. As recounted by Shiro Hagiwara (a general manager at Sega's hardware division) and Ian Oliver (the managing director of Sega subsidiary Cross Products), the SH-4 was chosen while it was still in development and following a lengthy deliberation process because it was the only available processor that "could adapt to deliver the 3D geometry calculation performance necessary." By February 1998, Sega had renamed the Dural "Katana" (after the Japanese sword), although certain hardware specifications such as random access memory (RAM) were not yet finalized.
Knowing the Sega Saturn had been set back by its high production costs and complex hardware, Sega took a different approach with the Dreamcast. Like previous Sega consoles, the Dreamcast was designed around intelligent subsystems working in parallel with one another, but the hardware selections were more in line with what was common in personal computers than video game consoles, reducing the system's cost. According to Damien McFerran, "the motherboard was a masterpiece of clean, uncluttered design and compatibility." Chinese economist and future Sega.com CEO Brad Huang convinced Sega chairman Isao Okawa to include a modem with every Dreamcast despite significant opposition from Okawa's staff over the additional $15 cost per unit. To account for rapid changes in home data delivery, Sega designed the Dreamcast's modem to be modular. Sega selected the GD-ROM media format for the system. The GD-ROM, which was jointly developed by Sega and Yamaha Corporation, could be mass-produced at a similar price to a normal CD-ROM, thus avoiding the greater expense of DVD-ROM technology. As the GD-ROM format can hold about 1 GB of data, illegally copying Dreamcast games onto a 650 MB CD-ROM sometimes required the removal of certain game features, although this did not prevent copying of Dreamcast software. Microsoft developed a custom Dreamcast version of Windows CE with the DirectX API and dynamic-link libraries, making it easy to port PC games to the platform, although programmers would ultimately favor Sega's development tools over those from Microsoft.
Sega held a public competition to name its new system and considered over 5,000 different entries before choosing "Dreamcast"—a portmanteau of "dream" and "broadcast". According to Katsutoshi Eguchi, Japanese game developer Kenji Eno submitted the name and created the Dreamcast's spiral logo, but this claim has not been verified by Sega. The Dreamcast's start-up sound was composed by the Japanese musician Ryuichi Sakamoto. Because the Saturn had tarnished Sega's reputation, the company planned to remove its name from the console entirely and establish a new gaming brand similar to Sony's PlayStation, but Irimajiri's management team ultimately decided to retain Sega's logo on the Dreamcast's exterior. Sega spent US$50–80 million on hardware development, $150–200 million on software development, and $300 million on worldwide promotion—a sum which Irimajiri, a former Honda executive, humorously compared to the investments required to design new automobiles.
Despite taking massive losses on the Saturn, including a 75 percent drop in half-year profits just before the Japanese launch of the Dreamcast, Sega felt confident about its new system. The Dreamcast attracted significant interest and drew many pre-orders. Sega announced that "Sonic Adventure", the next game starring company mascot Sonic the Hedgehog, would arrive in time for the Dreamcast's launch and promoted the game with a large-scale public demonstration at the Tokyo Kokusai Forum Hall. However, Sega could not achieve its shipping goals for the Dreamcast's Japanese launch due to a shortage of PowerVR chipsets caused by a high failure rate in the manufacturing process. As more than half of its limited stock had been pre-ordered, Sega stopped pre-orders in Japan. On November 27, 1998, the Dreamcast launched in Japan at a price of JP¥29,000, and the entire stock sold out by the end of the day. However, of the four games available at launch, only one—a port of "Virtua Fighter 3", the most successful arcade game Sega ever released in Japan—sold well. Sega estimated that an additional 200,000–300,000 Dreamcast units could have been sold with sufficient supply. Key Dreamcast games "Sonic Adventure" and "Sega Rally Championship 2", which had been delayed, arrived within the following weeks, but sales continued to be slower than expected. Irimajiri hoped to sell over 1 million Dreamcast units in Japan by February 1999, but fewer than 900,000 were sold, undermining Sega's attempts to build up a sufficient installed base to ensure the Dreamcast's survival after the arrival of competition from other manufacturers. There were reports of disappointed Japanese consumers returning their Dreamcasts and using the refund to purchase additional PlayStation software. "Seaman", released in July 1999, was considered the Dreamcast's first major hit in Japan. Prior to the Western launch, Sega reduced the price of the Dreamcast to JP¥19,900, effectively making the hardware unprofitable but increasing sales. The price reduction and the release of Namco's "Soulcalibur" helped boost Sega's share price by 17 percent.
Before the Dreamcast's release, Sega was dealt a blow when EA—the largest third-party video game publisher—announced it would not develop games for the system. EA chief creative officer Bing Gordon said that Sega "had flip-flopped on the configuration [over whether to include a modem, and picking the then-unknown PowerVR over an established player like 3Dfx], and because the Dreamcast became the system that EA developers least wanted to work on in the history of systems at EA, that was pretty much it. In the end, it felt like Sega was not acting like a competent hardware company". Gordon also claimed, "[Sega] couldn't afford to give us [EA] the same kind of license that EA has had over the last five years". Stolar had a different account of the breakdown in negotiations with EA, recalling that EA president Larry Probst specifically wanted "exclusive rights to be the only sports brand on Dreamcast", which Stolar could not accept due to Sega's recent $10 million purchase of sports game developer Visual Concepts. While EA's "Madden NFL" series did have established brand power, Stolar regarded "NFL 2K" as far superior, providing "a breakthrough experience" with which to launch the Dreamcast. While the Dreamcast would have none of EA's popular sports games, "Sega Sports" games developed mainly by Visual Concepts helped to fill that void.
Working closely with Midway Games (which developed four launch games for the system) and taking advantage of the ten months following the Dreamcast's release in Japan, Sega of America prepared to ensure a more successful U.S. launch with a minimum of 15 launch games. Despite lingering bitterness over the Saturn's early release, Stolar successfully repaired relations with major U.S. retailers, with whom Sega presold 300,000 Dreamcast units. In addition, a pre-launch promotion enabled consumers to rent the system from Hollywood Video in the months preceding its September launch. Sega of America's senior vice president of marketing Peter Moore, a fan of the attitude previously associated with Sega's brand, worked with Foote, Cone & Belding and Access Communications to develop the "It's Thinking" campaign of 15-second television commercials, which emphasized the Dreamcast's hardware power. According to Moore, "We needed to create something that would really intrigue consumers, somewhat apologize for the past, but invoke all the things we loved about Sega, primarily from the Genesis days." On August 11, Sega of America confirmed that Stolar had been fired, leaving Moore to direct the launch.
The Dreamcast launched in North America on September 9, 1999 at a price of $199—which Sega's marketing dubbed "9/9/99 for $199". Eighteen launch games were available for the Dreamcast in the U.S. Sega set a new sales record by selling more than 225,132 Dreamcast units in 24 hours, earning the company $98.4 million in what Moore called "the biggest 24 hours in entertainment retail history". Within two weeks, U.S. Dreamcast sales exceeded 500,000. By Christmas, Sega held 31 percent of the North American video game market share. Significant launch games included "Soul Calibur", an arcade fighting game graphically enhanced for the system, which went on to sell one million units, and Visual Concepts' high-quality football simulation "NFL 2K". On November 4, Sega announced it had sold over one million Dreamcast units. Nevertheless, the launch was marred by a glitch at one of Sega's manufacturing plants, which produced defective GD-ROMs.
Sega released the Dreamcast in Europe on October 14, 1999, at a price of GB£200. By November 24, 400,000 consoles had been sold in Europe. By Christmas of 1999, Sega of Europe reported selling 500,000 units, placing it six months ahead of schedule. Sales did not continue at this pace, and by October 2000, Sega had sold only about 1 million units in Europe. As part of Sega's promotions of the Dreamcast in Europe, the company sponsored four European football clubs: Arsenal F.C. (England), AS Saint-Étienne (France), U.C. Sampdoria (Italy), and Deportivo de La Coruña (Spain).
Though the Dreamcast launch had been successful, Sony still held 60 percent of the overall video game market share in North America with the PlayStation at the end of 1999. On March 2, 1999, in what one report called a "highly publicized, vaporware-like announcement", Sony revealed the first details of its "next generation PlayStation", which Ken Kutaragi claimed would allow video games to convey unprecedented emotions. At the center of Sony's marketing plan, and of the upcoming PlayStation 2 itself, was a new CPU (clocked at 294 MHz) jointly developed by Sony and Toshiba—the "Emotion Engine"—which Kutaragi announced would feature a graphics processor with 1,000 times more bandwidth than contemporary PC graphics processors and a floating-point calculation performance of 6.2 gigaflops, rivaling most supercomputers. Sony, which invested $1.2 billion in two large-scale integration semiconductor fabrication plants to manufacture the PlayStation 2's "Emotion Engine" and "Graphics Synthesizer", designed the machine to push more raw polygons than any video game console in history. Sony claimed the PlayStation 2 could render 75 million raw polygons per second with absolutely no effects, and 38 million without accounting for features such as textures, artificial intelligence, or physics. With such effects, Sony estimated the PlayStation 2 could render 7.5 million to 16 million polygons per second, whereas independent estimates ranged from 3 million to 20 million, compared to Sega's estimates of more than 3 million to 6 million for the Dreamcast. The system would also utilize the DVD-ROM format, which could hold substantially more data than the Dreamcast's GD-ROM format. Because it could connect to the Internet and play movies, music, and video games, Sony hyped the PlayStation 2 as the future of home entertainment. Rumors spread that the PlayStation 2 was a supercomputer capable of guiding missiles and displaying "Toy Story"-quality graphics, while Kutaragi boasted its online capabilities would give consumers the ability to "jack into 'The Matrix'!" In addition, Sony emphasized the PlayStation 2 would be backwards compatible with hundreds of popular PlayStation games. Sony's specifications appeared to render the Dreamcast obsolete months before its U.S. launch, although reports later emerged that the PlayStation 2 was not as powerful as expected and distinctly difficult to program games for. The same year, Nintendo announced its next-generation console would meet or exceed anything on the market, and Microsoft began development of its own console.
Sega's initial momentum proved fleeting as U.S. Dreamcast sales—which exceeded 1.5 million by the end of 1999—began to decline as early as January 2000. Poor Japanese sales contributed to Sega's ¥42.88 billion ($404 million) consolidated net loss in the fiscal year ending March 2000, which followed a similar loss of ¥42.881 billion the previous year and marked Sega's third consecutive annual loss. Although Sega's overall sales for the term increased 27.4%, and Dreamcast sales in North America and Europe greatly exceeded the company's expectations, this increase in sales coincided with a decrease in profitability due to the investments required to launch the Dreamcast in Western markets and poor software sales in Japan. At the same time, increasingly poor market conditions reduced the profitability of Sega's Japanese arcade business, prompting the company to close 246 locations. Knowing that "they have to fish where the fish are biting", Sega of America president Peter Moore (who assumed his position after Stolar had been fired) and Sega of Japan's developers focused on the U.S. market to prepare for the upcoming launch of the PS2. To that end, Sega of America launched its own Internet service provider, Sega.com, led by CEO Brad Huang. On September 7, 2000, Sega.com launched SegaNet, the Dreamcast's Internet gaming service, at a subscription price of $21.95 per month. Although Sega had previously released only one Dreamcast game in the U.S. that featured online multiplayer ("ChuChu Rocket!", a puzzle game developed by Sonic Team), the launch of SegaNet (which allowed users to chat, send email, and surf the web) combined with "NFL 2K1" (a football game including a robust online component) was intended to increase demand for the Dreamcast in the U.S. market. The service would later support games including "Bomberman Online", "Quake III Arena", and "Unreal Tournament". The September 7 launch coincided with a new advertising campaign to promote SegaNet, including via the MTV Video Music Awards of the same day, which Sega sponsored for the second consecutive year. Sega employed aggressive pricing strategies with relation to online gaming. In Japan, every Dreamcast sold included a free year of Internet access, which Okawa personally paid for. Prior to the launch of SegaNet, Sega had already offered a $200 rebate to any Dreamcast owner who purchased two years of Internet access from Sega.com. To increase SegaNet's appeal in the U.S., Sega dropped the price of the Dreamcast to $149 (compared to the PS2's U.S. launch price of $299) and offered a rebate for the full $149 price of a Dreamcast (and a free Dreamcast keyboard) with every 18-month SegaNet subscription.
Moore stated that the Dreamcast would need to sell 5 million units in the U.S. by the end of 2000 in order to remain a viable platform, but Sega ultimately fell short of this goal with some 3 million units sold. Moreover, Sega's attempts to spur increased Dreamcast sales through lower prices and cash rebates caused escalating financial losses. Instead of an expected profit, for the six months ending September 2000, Sega posted a ¥17.98 billion ($163.11 million) loss, with the company projecting a year-end loss of ¥23.6 billion. This estimate was more than doubled to ¥58.3 billion, and in March 2001, Sega posted a consolidated net loss of ¥51.7 billion ($417.5 million). While the PS2's October 26 U.S. launch was marred by shortages, this did not benefit the Dreamcast as much as expected; many consumers continued to wait for a PS2, while the PSone, a remodeled version of the original PlayStation, was the best-selling console in the U.S. at the start of the 2000 holiday season. According to Moore, "the PlayStation 2 effect that we were relying upon did not work for us ... people will hang on for as long as possible ... What effectively happened is the PlayStation 2 lack of availability froze the marketplace." Eventually, Sony and Nintendo held 50 and 35 percent of the U.S. video game market, respectively, while Sega held only 15 percent. According to Bellfield, Dreamcast software sold at an 8-to-1 ratio with the hardware, but this ratio "on a small install base didn't give us the revenue ... to keep this platform viable in the medium to long term."
On May 22, 2000, Okawa replaced Irimajiri as president of Sega. Okawa had long advocated that Sega abandon the console business. His sentiments were not unique; Sega co-founder David Rosen had "always felt it was a bit of a folly for them to be limiting their potential to Sega hardware", and Stolar had previously suggested Sega should have sold their company to Microsoft. In September 2000, in a meeting with Sega's Japanese executives and the heads of the company's major Japanese game development studios, Moore and Bellfield recommended that Sega abandon its console business and focus on software—prompting the studio heads to walk out.
Nevertheless, on January 31, 2001, Sega announced the discontinuation of the Dreamcast after March 31 and the restructuring of the company as a "platform-agnostic" third-party developer. The decision was Moore's. Sega also announced a Dreamcast price reduction to $99 to eliminate its unsold inventory, which was estimated at 930,000 units as of April 2001. After a further reduction to $79, the Dreamcast was cleared out of stores at $49.95. The final Dreamcast unit manufactured was autographed by the heads of all nine of Sega's internal game development studios as well as the heads of Visual Concepts and Wave Master and given away with 55 first-party Dreamcast games through a competition organized by "GamePro" magazine. Okawa, who had previously loaned Sega $500 million in the summer of 1999, died on March 16, 2001; shortly before his death, he forgave Sega's debts to him and returned his $695 million worth of Sega and CSK stock, helping the company survive the third-party transition. As part of this restructuring, nearly one-third of Sega's Tokyo workforce was laid off in 2001.
In total, 9.13 million Dreamcast units were sold worldwide. After the Dreamcast's discontinuation, commercial games were still developed and released for the system, particularly in Japan. In the United States, game releases continued until the end of the first half of 2002. Sega of Japan continued to repair Dreamcast units until 2007. As of 2014, the console was still supported through various independent releases using the MIL-CD format. After five consecutive years of financial losses, Sega finally posted a profit for the fiscal year ending March 2003.
Reasons cited for the failure of the Dreamcast include hype for the PS2; a lack of support from EA and Squaresoft, considered the most popular third parties in the U.S. and Japan respectively; disagreement among Sega executives over the company's future, and Okawa's lack of commitment to the product; Sega's lack of advertising money, with Bellfield doubting that Sega spent even "half" the $100 million it had pledged to promote the Dreamcast in the U.S.; that the market was not yet ready for online gaming; Sega's focus on "hardcore" gamers over the mainstream consumer; and poor timing. Perhaps the most frequently cited reason is the damage to Sega's reputation caused by several previous poorly supported Sega platforms. Writing for "GamePro", Blake Snow stated "the much beloved console launched years ahead of the competition but ultimately struggled to shed the negative reputation [Sega] had gained during the Saturn, Sega 32X, and Sega CD days. As a result, casual gamers and jaded third-party developers doubted Sega's ability to deliver." Eurogamer's Dan Whitehead noted that the "wait and see" approach of consumers and the lack of support from EA were symptoms rather than the cause of Sega's decline, concluding "Sega's misadventures during the 1990s had left both gamers and publishers wary of any new platform bearing its name." According to 1UP.com's Jeremy Parish, "While it would be easy to point an accusatory finger at Sony and blame them for killing the Dreamcast by overselling the PS2 ... there's a certain level of intellectual dishonesty in such a stance ... [Sega]'s poor U.S. support for hardware like the Sega CD, the 32X, and the Saturn made gamers gun shy. Many consumers felt burned after investing in expensive Sega machines and finding the resulting libraries comparatively lacking".
The announcement of Sega's third-party transition was met with widespread enthusiasm. According to IGN's Travis Fahs, "Sega was a creatively fertile company with a rapidly expanding stable of properties to draw from. It seemed like they were in a perfect position to start a new life as a developer/publisher." Former Working Designs president Victor Ireland wrote, "It's actually a good thing ... because now Sega will survive, doing what they do best: software." The staff of "Newsweek" remarked "From "Sonic" to "Shenmue", Sega's programmers have produced some of the most engaging experiences in the history of interactive media ... Unshackled by a struggling console platform, this platoon of world-class software developers can do what they do best for any machine on the market". Rosen predicted "they have the potential to catch Electronic Arts". "Game Informer", commenting on Sega's tendency to produce under-appreciated cult classics, stated: "Let us rejoice in the fact that Sega is making games equally among the current console crop, so that history will not repeat itself."
Physically, the Dreamcast measures approximately 190 mm × 196 mm × 76 mm and weighs approximately 1.5 kg. The Dreamcast's main CPU is a two-way 360 MIPS superscalar Hitachi SH-4 32-bit RISC processor clocked at 200 MHz with an 8 Kbyte instruction cache and 16 Kbyte data cache and a 128-bit graphics-oriented floating-point unit delivering 1.4 GFLOPS. Its 100 MHz NEC PowerVR2 rendering engine, integrated with the system's ASIC, is capable of drawing more than 3 million polygons per second and of deferred shading. Sega estimated the Dreamcast was theoretically capable of rendering 7 million raw polygons per second, or 6 million with textures and lighting, but noted that "game logic and physics reduce peak graphic performance." Graphics hardware effects include trilinear filtering, Gouraud shading, z-buffering, spatial anti-aliasing, per-pixel translucency sorting and bump mapping. The system can output approximately 16.77 million colors simultaneously and displays interlaced or progressive scan video at 640 × 480 video resolution. Its 67 MHz Yamaha AICA sound processor, with a 32-bit ARM7 RISC CPU core, can generate 64 voices with PCM or ADPCM, providing ten times the performance of the Saturn's sound system. The Dreamcast has 16 MB main RAM, along with an additional 8 MB of RAM for graphic textures and 2 MB of RAM for sound. The system reads media using a 12x speed Yamaha GD-ROM drive. In addition to Windows CE, the Dreamcast supports several Sega and middleware application programming interfaces. In most regions, the Dreamcast included a removable modem for online connectivity, which was modular for future upgrades. The original Japanese model and all PAL models had a transfer rate of 33.6 kbit/s, while consoles sold in the US and in Japan after September 9, 1999 featured a 56 kbit/s dial-up modem.
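A few of the quoted figures can be sanity-checked with simple arithmetic. The short Python sketch below is purely illustrative: the constants are taken from the spec list above, and the 60 frames-per-second frame budget is an assumption introduced for demonstration.

```python
# Back-of-the-envelope checks on the Dreamcast specifications quoted above.

# 24-bit color output: 2**24 distinct values, i.e. the "approximately
# 16.77 million colors" figure.
print(f"24-bit palette: {2 ** 24:,} colors")  # 16,777,216

# Total RAM: 16 MB main + 8 MB texture + 2 MB sound.
print(f"Total RAM: {16 + 8 + 2} MB")  # 26 MB

# Per-frame geometry budget at an assumed 60 frames per second, using
# Sega's 6 million textured-and-lit polygons-per-second estimate.
print(f"Polygons per frame at 60 fps: {6_000_000 // 60:,}")  # 100,000
```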
Sega constructed numerous Dreamcast models, most of which were exclusive to Japan. A refurbished Dreamcast known as the R7 was originally used as a network console in Japanese pachinko parlors. Another model, the Divers 2000 CX-1, possesses a shape similar to Sonic's head and includes a television and software for teleconferencing. A "Hello Kitty" version, limited to a production run of 2,000 units, was targeted at Japanese female gamers. Special editions were created for "Seaman" and "Resident Evil Code: Veronica". Color variations were sold through a service called "Dreamcast Direct" in Japan. Toyota also offered special edition Dreamcast units at 160 of its dealers in Japan. In North America, a limited edition black Dreamcast was released with a Sega Sports logo on the lid, which included matching Sega Sports-branded black controllers and two games.
The Dreamcast controller includes an analog stick, a digital pad, four action buttons, a start button, and two analog triggers. The system has four ports for controller inputs, although it was bundled with only one controller. The design of the Dreamcast's controller, described by the staff of "Edge" as "an ugly evolution of Saturn's 3D controller," was called "[not] that great" by 1UP.com's Sam Kennedy and "lame" by "Game Informer"'s Andy McNamara. The staff of IGN wrote that "unlike most controllers, Sega's pad forces the user's hands into an uncomfortable parallel position." In contrast to the Sega CD and Sega Saturn, which included internal backup memory, the Dreamcast uses a 128 Kbyte memory card called the VMU (or "Visual Memory Unit") for data storage. The VMU features a small LCD screen, audio output from a one-channel PWM sound source, non-volatile memory, a directional pad, and four buttons. The VMU can present game information, be used as a minimal handheld gaming device, and connect to certain Sega arcade machines. For example, players use the VMU to call plays in "NFL 2K" or raise virtual pets in "Sonic Adventure". Sega officials noted that the VMU could be used "as a private viewing area, the absence of which has prevented effective implementation of many types of games in the past." After a VMU slot was incorporated into the controller's design, Sega's engineers found many additional uses for it, so a second slot was added. This slot was generally used for vibration packs providing force feedback, such as Sega's "Jump Pack" and Performance's "Tremor Pack", although it could also be used for other peripherals, including a microphone enabling voice control and player communication. Various third-party cards provide storage, and some also include an LCD screen. Iomega announced a Dreamcast-compatible Zip drive that could store up to 100 MB of data on removable discs, but it was never released.
Various third-party controllers from companies like Mad Catz include additional buttons and other extra features; third parties also manufactured arcade-style joysticks for fighting games, such as Agetech's Arcade Stick and Interact's Alloy Arcade Stick. Mad Catz and Agetec created racing wheels for racing games. Sega decided against releasing its official light gun in the U.S., but some third-party light guns were available. The Dreamcast supports a Sega fishing "reel and rod" motion controller and a keyboard for text entry. Although the fishing controller was designed for fishing games such as "Sega Bass Fishing", it could also be used to play "Soul Calibur", translating vertical and horizontal movements into on-screen swordplay in a manner that was retroactively cited as a predecessor to the Wii Remote. The Japanese Dreamcast port of Sega's "Cyber Troopers Virtual-On Oratorio Tangram" supported a "Twin Sticks" peripheral, but the game's American publisher, Activision, opted not to release it in the U.S. The Dreamcast could connect to SNK's Neo Geo Pocket Color, predating Nintendo's GameCube–Game Boy Advance link cable. Sega also produced the Dreameye, a digital camera that could be connected to the Dreamcast and used to exchange pictures and participate in video chat over the system's Internet connection. Sega hoped developers would use the Dreameye for future software, as some later did with Sony's similar EyeToy peripheral. In addition, Sega investigated systems that would have allowed users to make telephone calls with the Dreamcast, and discussed with Motorola the development of an Internet-enabled cell phone that would have used technology from the console to enable quick downloads of games and other data.
The console can supply video through several different accessories. The console came with A/V cables, considered at the time to be the standard for video and audio connectivity. Sega and various third parties also manufactured RF modulator connectors and S-Video cables. A VGA adapter allows Dreamcast games to be played on computer displays or enhanced-definition television sets in 480p.
Before the launch of the Dreamcast in Japan, Sega announced the release of its New Arcade Operation Machine Idea (NAOMI) arcade board, which served as a cheaper alternative to the Sega Model 3. NAOMI shared the same technology as the Dreamcast—albeit with twice as much system, video, and audio memory and a 160 Mbyte flash ROM board in place of a GD-ROM drive—allowing nearly identical home conversions of arcade games. Games were ported from NAOMI to the Dreamcast by several leading Japanese arcade companies, including Capcom ("Marvel vs. Capcom 2" and "Project Justice"), Tecmo ("Dead or Alive 2"), Treasure ("Ikaruga"), and Sega itself ("F355 Challenge" and "Crazy Taxi").
In what has been called "a brief moment of remarkable creativity", in 2000, Sega restructured its arcade and console development teams into nine semi-autonomous studios headed by the company's top designers. Studios included United Game Artists (UGA) (headed by former "Sega Rally Championship" producer Tetsuya Mizuguchi), Hitmaker (headed by "Crazy Taxi" creator and future Sega president Hisao Oguchi), Smilebit (headed by Shun Arai and including many former "Panzer Dragoon" and future "Yakuza" developers from Team Andromeda), Overworks (headed by Noriyoshi Oba and composed of developers from Sega franchises including "Sakura Wars", "Shinobi" and "Streets of Rage"), Sega AM2 (Sega's most famous arcade studio and the developer of Sega's "Virtua Fighter" fighting game series, headed by the company's top developer, Yu Suzuki), and Sonic Team (the developer of Sega's flagship series, "Sonic the Hedgehog", headed by Yuji Naka). Sega's design houses were encouraged to experiment and benefited from a relatively lax approval process, resulting in games such as "Rez" (an attempt to simulate synaesthesia in the form of a rail shooter), "The Typing of the Dead" (a version of "The House of the Dead 2" remade into a touch typing trainer), "Seaman" (a pet simulator in which players use a microphone to interact with a grotesque humanoid fish whose growth is narrated by Leonard Nimoy), and "Segagaga" (a Japan-exclusive role-playing game employing commentary on the perceived over-abundance of sequels produced by the video game industry, in which players are tasked with preventing Sega from going out of business). Sega also revived franchises from the Genesis era, such as "Ecco the Dolphin". Sega's internal studios were consolidated starting in 2003, with Mizuguchi leaving the company following the merger of UGA with Sonic Team.
UGA created the music game "Space Channel 5", in which players help a female outer space news reporter named Ulala fight aliens with "groove energy" by dancing. Intended for a "female casual" audience, "Space Channel 5" is considered one of Sega's "most daring and beloved" original properties, combining a "defiantly retro" and "uplifting" soundtrack with "dazzling" and "colorful" visual presentation—despite "a lack of real gameplay substance." Neither "Space Channel 5" nor UGA's "Rez" was commercially successful, and "Rez" was only available in the U.S. market through a PS2 port released in limited quantities. Hitmaker's arcade ports included "Crazy Taxi"—an open-world arcade racing game known for its addictive gameplay, which sold over one million copies and has been frequently cited as one of the best Dreamcast games—and "Virtua Tennis"—which revitalized the tennis game genre with a simple two-button control scheme and use of minigames to test the player's technique. Smilebit's "Jet Set Radio"—in which players control a Tokyo-based gang of youthful, rebellious inline skaters called the "GGs", who use graffiti to claim territory from rival gangs while evading an oppressive police force—has been cited as a major example of Sega's commitment to original game concepts during the Dreamcast's lifespan. Lauded for composer Hideki Naganuma's "punchy, psychedelic" soundtrack incorporating elements of "J-pop and electro-funk" as well as its message of "self-expression and non-violent dissent", the game also popularized cel-shaded graphics. Despite wide praise for its style, some criticized "Jet Set Radio"'s gameplay as mediocre, and it failed to meet Sega's sales expectations. Produced by Rieko Kodama, the Overworks-developed traditional role-playing game "Skies of Arcadia" was acclaimed for its surreal Jules Verne-inspired fantasy world of floating islands and sky pirates, charming protagonists, unique emphasis on the environmental properties of weapons, exciting airship battles, and memorable plot (including a sequence viewed from multiple perspectives).
AM2 developed what Sega hoped would be the Dreamcast's killer app, "Shenmue", a "revenge epic in the tradition of Chinese cinema." The action-adventure game involved the quest of protagonist Ryo Hazuki to avenge his father's murder, but its main selling point was its rendition of the Japanese city of Yokosuka, which included a level of detail considered unprecedented for a video game. Incorporating a simulated day/night cycle with variable weather, non-player characters with regular schedules, and the ability to pick up and examine detailed objects (also introducing the Quick-time event in its modern form), "Shenmue" went over budget and was rumored to have cost Sega over $50 million. Originally planned as the first installment in an 11-part saga, "Shenmue" was eventually downsized to a trilogy—and only one sequel was ever released. While "Shenmue" was lauded for its innovation, visuals and music, its critical reception was mixed; points of criticism included "invisible walls" which limited the player's sense of freedom, boredom caused by the inability to progress without waiting for events scheduled to occur at specific times, excessive in-game cutscenes and a lack of challenge. According to Moore, "Shenmue" sold "extremely well", but the game had no chance of making a profit due to the Dreamcast's limited installed base. "Shenmue II" "was completed for a much more reasonable sum", while Sato defended "Shenmue" as an "investment [which] will someday be recouped" because "the development advances we learned ... can be applied to other games". In addition to the mixed reception for "Shenmue", IGN's Travis Fahs stated that "the [Dreamcast] era wasn't as kind to [AM2] as earlier years"—citing (among others) "F355 Challenge" as an "acclaimed" arcade game that "didn't do much at home", and Genki's port of "Virtua Fighter 3" as inferior to the arcade version, "which was already a couple years old and never as popular as its predecessors." The "Virtua Fighter" series would experience a "tremendous comeback" with the universally acclaimed "Virtua Fighter 4"—which saw a console release exclusively on PS2.
As the first fully 3D platforming game starring Sega's mascot, Sonic the Hedgehog, Sonic Team's "Sonic Adventure" was considered "the centerpiece of the [Dreamcast] launch". "Adventure" garnered criticism for technical problems including erratic camera angles and glitches, but was praised for its "luscious" visuals, "vast, twisting environments" and iconic set pieces—including a segment in which Sonic runs down the side of a skyscraper—and has been described as the "Sonic" series' creative apex. However, it failed "to catch on with players in nearly the way that [Nintendo's] "Mario 64" had done", perhaps due to a perceived lack of gameplay depth. Distinguished by its innovative use of multiple storylines with varied forms of play, "Adventure" sold 2.5 million copies, making it the Dreamcast's best-selling game. Sonic Team also developed the Dreamcast's first online game—"ChuChu Rocket!"—which was widely complimented for its addictive puzzle gameplay and "frantic" multiplayer matches, and the critically successful music game "Samba de Amigo", which was noted for its expensive maracas peripheral and colorful aesthetic. Perhaps the most influential of Sonic Team's Dreamcast releases was "Phantasy Star Online", the first online console RPG. Developed after Okawa requested an online game from Sonic Team, "PSO" was heavily influenced by the PC action RPG "Diablo", but refined and simplified its style of gameplay to appeal to console audiences.
In sports, Visual Concepts' "NFL 2K" football series and its "NBA 2K" basketball series were critically acclaimed. "NFL 2K" was considered an outstanding launch game for its high-quality visuals and "insightful, context-friendly, and, yes, even funny commentary", while "NFL 2K1" featured groundbreaking online multiplayer earlier than its chief competitor, EA's "Madden NFL" series. "Madden" and "2K" continued to compete on other platforms through 2004—with the "2K" series introducing innovations such as a first-person perspective new to the genre, and eventually launching "ESPN NFL 2K5" at the aggressively low price point of $19.95—until EA signed an exclusive agreement with the National Football League, "effectively putting every other pro-football game out of business." After Sega sold Visual Concepts for $24 million in 2005, the "NBA 2K" series continued with publisher Take-Two Interactive. During the Dreamcast's lifespan, Visual Concepts also collaborated with "Sonic the Hedgehog" level designer Hirokazu Yasuhara on the action-adventure game "Floigan Bros." and developed the critically successful action game "Ooga Booga".
To appeal to the European market, Sega formed a French affiliate called No Cliché, which developed games such as "Toy Commander". Sega Europe also approached Bizarre Creations to develop the critically successful racing game "Metropolis Street Racer", which featured detailed recreations of London, Tokyo, and San Francisco—complete with consistent time zones and fictional radio stations—and 262 individual race tracks.
Although Acclaim, SNK, Ubisoft, Midway, Activision, Infogrames, and Capcom supported the system during its first year, third-party developer support proved difficult to obtain due to the failure of the Sega Saturn and the profitability of publishing for the PlayStation. Namco's "Soul Calibur", for example, was released for the Dreamcast because of the relative unpopularity of the "Soul" series at the time; Namco's more successful "Tekken" franchise was associated with the PlayStation console and PlayStation-based arcade boards. Nevertheless, "Soul Calibur" received overwhelming critical acclaim and has been frequently described as one of the best games for the system. Capcom produced a number of fighting games for the system, including the "Power Stone" series, in addition to a temporary exclusive in the popular "Resident Evil" series called "Resident Evil Code: Veronica". The Dreamcast is also known for several shoot 'em ups, most notably Treasure's "Bangai-O" and "Ikaruga".
In January 2000, three months after the system's North American launch, "Electronic Gaming Monthly" offered praise for the game library, stating, "...with triple-A stuff like "Soul Calibur", "NBA 2K", and soon "Crazy Taxi" to kick around, we figure you're happy you took the 128-bit plunge." In a retrospective, "PC Magazine"'s Jeffrey L. Wilson referred to Dreamcast's "killer library" and emphasized Sega's creative influence and visual innovation as being at its peak during the lifetime of the system. The staff of "Edge" agreed with this assessment on Dreamcast's original games, as well as Sega's arcade conversions, stating that the system "delivered the first games that could meaningfully be described as arcade perfect." "GamePro" writer Blake Snow referred to the library as being "much celebrated". Damien McFerran of "Retro Gamer" praised Dreamcast's NAOMI arcade ports, opining "The thrill of playing "Crazy Taxi" in the arcade knowing full well that a pixel-perfect conversion (and not some cut-down port) was set to arrive on the Dreamcast is an experience gamers are unlikely to witness again." Nick Montfort and Mia Consalvo, writing in "Loading... The Journal of the Canadian Game Studies Association", argued that "the Dreamcast hosted a remarkable amount of videogame development that went beyond the odd and unusual and is interesting when considered as avant-garde ... it is hard to imagine a commercial console game expressing strong resistance to the commodity perspective and to the view that game production is commerce. But even when it comes to resisting commercialization, it is arguable that Dreamcast games came closer to expressing this attitude than any other console games have." 1UP.com's Jeremy Parish favorably compared Sega's Dreamcast output, which included some of "the most varied, creative, and fun [games] the company had ever produced", with its "enervated" status as a third-party. Fahs noted, "The Dreamcast's life was fleeting, but it was saturated with memorable titles, most of which were completely new properties." According to author Steven L. Kent, "From "Sonic Adventure" and "Shenmue" to "Space Channel 5" and "Seaman", Dreamcast delivered and delivered and delivered."
In December 1999, "Next Generation" rated the Dreamcast 4 out of 5 stars and stated, "If you want the most powerful system available now, showcasing the best graphics at a reasonable price, this system is for you." However, "Next Generation" rated the Dreamcast's future prognosis as 3 stars out of 5 in the same article, noting that Sony would ship a superior hardware product in the PlayStation 2 in the next year, and that Nintendo had said it would do the same with the GameCube. At the beginning of 2000, "Electronic Gaming Monthly" had five reviewers score the Dreamcast 8.5, 8.5, 8.5, 8.0, and 9.0 out of 10 points. By 2001, the reviewers for "Electronic Gaming Monthly" gave the Dreamcast scores of 9.0, 9.0, 9.0, 9.0, and 9.5 out of 10. "BusinessWeek" recognized the Dreamcast as one of the best products of 1999.
In 2009, IGN named the Dreamcast the 8th greatest video game console of all time, giving credit to the innovations and software for the system. According to IGN, "The Dreamcast was the first console to incorporate a built-in modem for online play, and while the networking lacked the polish and refinement of its successors, it was the first time users could seamlessly power on and play with users around the globe." In 2010, "PC Magazine"'s Jeffrey L. Wilson named the Dreamcast the greatest video game console, emphasizing that the system was "gone too soon". In 2013, "Edge" named the Dreamcast the 10th best console of the last 20 years, highlighting innovations that it added to console video gaming, including in-game voice chat, downloadable content, and second screen technology through the use of VMUs. "Edge" explained the system's poor performance by stating, "Sega's console was undoubtedly ahead of its time, and it suffered at retail for that reason... [b]ut its influence can still be felt today." Writing in "1001 Video Games You Must Play Before You Die", Duncan Harris noted "One of the reasons that older gamers mourned the loss of the Dreamcast was that it signaled the demise of arcade gaming culture ... Sega's console gave hope that things were not about to change for the worse and that the tenets of fast fun and bright, attractive graphics were not about to sink into a brown and green bog of realistic war games." Parish, writing for USgamer, contrasted the Dreamcast's diverse library with the "suffocating sense of conservatism" that pervaded the gaming industry in the following decade. Dan Whitehead of Eurogamer, discussing the Dreamcast's portrayal "as a small, square, white plastic JFK", commented that the system's short lifespan "may have sealed its reputation as one of the greatest consoles ever": "Nothing builds a cult like a tragic demise". According to IGN's Travis Fahs, "Many hardware manufacturers have come and gone, but it's unlikely any will go out with half as much class as Sega." | https://en.wikipedia.org/wiki?curid=29036 |
SH3 domain
The SRC Homology 3 Domain (or SH3 domain) is a small protein domain of about 60 amino acid residues. Initially, SH3 was described as a conserved sequence in the viral adaptor protein v-Crk. This domain is also present in the molecules of phospholipase and several cytoplasmic tyrosine kinases such as Abl and Src. It has also been identified in several other protein families such as PI3 kinase, Ras GTPase-activating protein, CDC24 and cdc25. SH3 domains are found in proteins of signaling pathways regulating the cytoskeleton, the Ras protein, the Src kinase, and many others. SH3 proteins interact with adaptor proteins and tyrosine kinases; when interacting with tyrosine kinases, SH3 domains usually bind far away from the active site. Approximately 300 SH3 domains are found in proteins encoded in the human genome. The SH3 domain also mediates protein–protein interactions in signal transduction pathways and regulates interactions of proteins involved in cytoplasmic signaling.
The SH3 domain has a characteristic beta-barrel fold that consists of five or six β-strands arranged as two tightly packed anti-parallel β sheets. The linker regions may contain short helices. The SH3-type fold is an ancient fold found in eukaryotes as well as prokaryotes.
The classical SH3 domain is usually found in proteins that interact with other proteins and mediate assembly of specific protein complexes, typically via binding to proline-rich peptides in their respective binding partner. Classical SH3 domains are restricted in humans to intracellular proteins, although the small human MIA family of extracellular proteins also contains a domain with an SH3-like fold.
Many SH3-binding epitopes of proteins have a consensus sequence that can be represented as a regular expression or short linear motif of the form Ψ1-P2-x3-Ψ4-P5 (the classical "PxxP" core),
with 1 and 4 being aliphatic amino acids, 2 and 5 always and 3 sometimes being proline. The sequence binds to the hydrophobic pocket of the SH3 domain. More recently, SH3 domains that bind to a core consensus motif R-x-x-K have been described. Examples are the C-terminal SH3 domains of adaptor proteins like Grb2 and Mona (a.k.a. Gads, Grap2, Grf40, GrpL etc.). Other SH3 binding motifs have emerged and are still emerging in the course of various molecular studies, highlighting the versatility of this domain.
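As a rough illustration, the motif described above can be encoded as a regular expression and scanned against a protein sequence. The following Python sketch is illustrative only: the aliphatic character class [AVLIP] and the sample peptide are assumptions chosen for demonstration, not a validated motif definition.

```python
import re

# Rough encoding of the PxxP-style SH3 ligand described above:
# position 1 aliphatic, 2 proline, 3 any residue, 4 aliphatic, 5 proline.
SH3_MOTIF = re.compile(r"[AVLIP]P.[AVLIP]P")

# Hypothetical peptide sequence, used purely for demonstration.
sequence = "MSARALPPLPQRTGKVPAVPLPNDK"

for match in SH3_MOTIF.finditer(sequence):
    print(f"Candidate SH3-binding site at position {match.start()}: {match.group()}")
```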
Studies of SH3 domain-mediated protein–protein interaction networks, i.e., SH3 interactomes, revealed that the worm SH3 interactome resembles the analogous yeast network in that it is significantly enriched for proteins with roles in endocytosis. Nevertheless, orthologous SH3 domain-mediated interactions are highly rewired between worm and yeast. | https://en.wikipedia.org/wiki?curid=29039 |
Stokes' theorem
In vector calculus, and more generally differential geometry, Stokes' theorem (sometimes spelled Stokes's theorem, and also called the generalized Stokes theorem or the Stokes–Cartan theorem) is a statement about the integration of differential forms on manifolds, which both simplifies and generalizes several theorems from vector calculus. Stokes' theorem says that the integral of a differential form ω over the boundary ∂Ω of some orientable manifold Ω is equal to the integral of its exterior derivative dω over the whole of Ω, i.e., ∫_∂Ω ω = ∫_Ω dω.
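For concreteness, here is how the general identity specializes to two classical results, written in LaTeX with standard notation (F a smooth vector field, S a surface and V a solid region in R³; the usual correspondences between forms and fields are assumed):

```latex
% Generalized (Stokes–Cartan) form: \omega an (n-1)-form on an oriented
% n-manifold \Omega with boundary \partial\Omega.
\int_{\partial \Omega} \omega = \int_{\Omega} \mathrm{d}\omega

% Kelvin–Stokes theorem: \Omega = S, a surface in \mathbb{R}^3, with \omega
% the 1-form associated to the vector field \mathbf{F}.
\oint_{\partial S} \mathbf{F} \cdot \mathrm{d}\mathbf{r}
  = \iint_{S} (\nabla \times \mathbf{F}) \cdot \mathrm{d}\mathbf{S}

% Divergence theorem: \Omega = V, a solid region in \mathbb{R}^3, with \omega
% the 2-form associated to \mathbf{F}.
\iint_{\partial V} \mathbf{F} \cdot \mathrm{d}\mathbf{S}
  = \iiint_{V} (\nabla \cdot \mathbf{F}) \, \mathrm{d}V
```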
Stokes' theorem was formulated in its modern form by Élie Cartan in 1945, following earlier work on the generalization of the theorems of vector calculus by Vito Volterra, Édouard Goursat, and Henri Poincaré.
This modern form of Stokes' theorem is a vast generalization of a classical result that Lord Kelvin communicated to George Stokes in a letter dated July 2, 1850. | https://en.wikipedia.org/wiki?curid=29040 |
Superfetation
Superfetation (also spelled superfoetation – see fetus) is the simultaneous occurrence of more than one stage of developing offspring in the same animal.
In mammals, it manifests as the formation of an embryo from a different menstrual cycle while another embryo or fetus is already present in the uterus. When two separate instances of fertilisation occur during the same menstrual cycle, it is known as superfecundation.
Superfetation is claimed to be common in some species of animals. In mammals, it can occur only where there are two uteri, or where the estrous cycle continues through pregnancy.
Animals that have been claimed to be subject to superfetation include rodents (mice and rats), rabbits, horses, sheep, marsupials (kangaroos and sugar gliders), felines, and primates (humans). Superfetation has also been clearly demonstrated and is normal for some species of poeciliid fishes.
While proposed cases of superfetation have been reported in humans, the existence of this phenomenon in humans is deemed unlikely. Better explanations include differential growth between twins due to various reasons such as twin-to-twin transfusion syndrome. Artificially induced superfetation has been demonstrated, although only up to a short period after insemination.
In 2017, it was reported that an American woman who had agreed to act as a surrogate for a Chinese couple birthed two babies initially believed to be twins. Before the adoptive parents could return home to China, however, it was discovered that one of the babies was, in fact, the biological son of the surrogate. Doctors confirmed that the birth-mother had become pregnant with her and her partner's child roughly three weeks after becoming pregnant with the Chinese couple's child.
Multiple cases have been reported to US doctors of twins differing in age by a week or less, and of women who report two surges of ovulation occurring within a few days of each other. Though rare, this condition is believed to affect as many as 0.3% of women, but because one twin is often lost, the true numbers are not known. Research has found that 10% of women ovulate twice per month. | https://en.wikipedia.org/wiki?curid=29042 |
Speciesism
Speciesism or specism denotes discrimination based on species membership. Such discrimination involves treating members of one species as morally more important than members of other species.
Philosophers such as Peter Singer argue that speciesism plays a role in the practice of factory farming, the use of animals for entertainment such as in bullfighting and rodeos, the taking of animals' fur and skin, experimentation on animals, and the refusal to aid wild animals that suffer due to natural processes. Singer explains that it is not speciesist if there is another reason for giving greater consideration to a member of one species over another, such as if the individual has certain traits such as consciousness.
Some possible examples of speciesism include:
Philosophers continue to debate the ethics, morality and concept of speciesism.
The term "speciesism", and the argument that it is simply a prejudice, first appeared in 1970 in a privately printed pamphlet written by British psychologist Richard D. Ryder. Ryder was a member of a group of academics in Oxford, England, the nascent animal rights community, now known as the Oxford Group. One of the group's activities was distributing pamphlets about areas of concern; the pamphlet titled "Speciesism" was written to protest against animal experimentation.
Ryder stated in the pamphlet that "[s]ince Darwin, scientists have agreed that there is no 'magical' essential difference between humans and other animals, biologically-speaking. Why then do we make an almost total distinction morally? If all organisms are on one physical continuum, then we should also be on the same moral continuum." He wrote that, at that time in the UK, 5,000,000 animals were being used each year in experiments, and that attempting to gain benefits for our own species through the mistreatment of others was "just 'speciesism' and as such it is a selfish emotional argument rather than a reasoned one". Ryder used the term again in an essay, "Experiments on Animals", in "Animals, Men and Morals" (1971), a collection of essays on animal rights edited by philosophy graduate students Stanley and Roslind Godlovitch and John Harris, who were also members of the Oxford Group. Ryder wrote:
In as much as both "race" and "species" are vague terms used in the classification of living creatures according, largely, to physical appearance, an analogy can be made between them. Discrimination on grounds of race, although most universally condoned two centuries ago, is now widely condemned. Similarly, it may come to pass that enlightened minds may one day abhor "speciesism" as much as they now detest "racism." The illogicality in both forms of prejudice is of an identical sort. If it is accepted as morally wrong to deliberately inflict suffering upon innocent human creatures, then it is only logical to also regard it as wrong to inflict suffering on innocent individuals of other species. ... The time has come to act upon this logic.
Those who claim that speciesism is unfair to individuals of nonhuman species have often invoked mammals and chickens in the context of research or farming. There is not yet a clear definition or line agreed upon by a significant segment of the movement as to which species are to be treated equally with humans or in some ways additionally protected: mammals, birds, reptiles, arthropods, insects, bacteria, etc. This question is all the more complex since a study by Miralles et al. (2019) has brought to light the evolutionary component of human empathic and compassionate reactions and the influence of anthropomorphic mechanisms in our affective relationship with the living world as a whole: the more an organism is evolutionarily distant from us, the less we recognize ourselves in it and the less we are moved by its fate.
The term was popularized by the Australian philosopher Peter Singer in his book "Animal Liberation" (1975). Singer had known Ryder from his own time as a graduate philosophy student at Oxford. He credited Ryder with having coined the term and used it in the title of his book's fifth chapter: "Man's Dominion ... "a short history of speciesism"", defining it as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species":
Racists violate the principle of equality by giving greater weight to the interests of members of their own race when there is a clash between their interests and the interests of those of another race. Sexists violate the principle of equality by favouring the interests of their own sex. Similarly, speciesists allow the interests of their own species to override the greater interests of members of other species. The pattern is identical in each case.
Writing from a preference-utilitarian perspective, Singer argued that speciesism violates the principle of equal consideration of interests, the idea based on Jeremy Bentham's principle: "each to count for one, and none for more than one." Singer stated that, although there may be differences between humans and nonhumans, they share the capacity to suffer, and we must give equal consideration to that suffering. Any position that allows similar cases to be treated in a dissimilar fashion fails to qualify as an acceptable moral theory. The term caught on; Singer wrote that it was an awkward word but that he could not think of a better one. It became an entry in the "Oxford English Dictionary" in 1985, defined as "discrimination against or exploitation of animal species by human beings, based on an assumption of mankind's superiority." In 1994 the "Oxford Dictionary of Philosophy" offered a wider definition: "By analogy with racism and sexism, the improper stance of refusing respect to the lives, dignity, or needs of animals of other than the human species."
More recently, animal rights groups such as Farm Animal Rights Movement and People for the Ethical Treatment of Animals have attempted to popularize the concept by promoting a World Day Against Speciesism on June 5.
Paola Cavalieri writes that the current humanist paradigm is that only human beings are members of the moral community, and that all are worthy of equal protection. Species membership, she writes, is "ipso facto" moral membership. The paradigm has an inclusive side (all human beings deserve equal protection) and an exclusive one (only human beings have that status).
She writes that it is not only philosophers who have difficulty with this concept. Richard Rorty (1931–2007) stated that most human beings – those outside what he called our "Eurocentric human rights culture" – are unable to understand why membership of a species would in itself be sufficient for inclusion in the moral community: "Most people live in a world in which it would be just too risky – indeed, it would often be insanely dangerous – to let one's sense of moral community stretch beyond one's family, clan or tribe." Rorty wrote:
Such people are "morally" offended by the suggestion that they should treat someone who is not kin as if he were a brother, or a nigger as if he were white, or a queer as if he were normal, or an infidel as if she were a believer. They are offended by the suggestion that they treat people whom they do not think of as human as if they were human. When utilitarians tell them that all pleasures and pains felt by members of our biological species are equally relevant to moral deliberation, or when Kantians tell them that the ability to engage in such deliberation is sufficient for membership in the moral community, they are incredulous. They rejoin that these philosophers seem oblivious to blatantly obvious moral distinctions, distinctions that any decent person will draw.
Much of humanity is similarly offended by the suggestion that the moral community be extended to nonhumans. Nonhumans do possess some moral status in many societies, but it generally extends only to protection against what Cavalieri calls "wanton cruelty". Anti-speciesists state that the extension of moral membership to all humanity, regardless of individual properties such as intelligence, while denying it to nonhumans, also regardless of individual properties, is internally inconsistent. According to the argument from marginal cases, if infants, the senile, the comatose, and the cognitively disabled (marginal-case human beings) have a certain moral status, then nonhuman animals must be awarded that status too, since there is no morally relevant ability that the marginal-case humans have that nonhumans lack.
American legal scholar Steven M. Wise states that speciesism is a bias as arbitrary as any other. He cites the philosopher R.G. Frey (1941–2012), a leading animal rights critic, who wrote in 1983 that, if forced to choose between abandoning experiments on animals and allowing experiments on "marginal-case" humans, he would choose the latter, "not because I begin a monster and end up choosing the monstrous, but because I cannot think of anything at all compelling that cedes all human life of any quality greater value than animal life of any quality".
Richard Dawkins, the evolutionary biologist, argued against speciesism in "The Blind Watchmaker" (1986), "The Great Ape Project" (1993), and "The God Delusion" (2006), elucidating the connection with evolutionary theory. He compares former racist attitudes and assumptions to their present-day speciesist counterparts. In the chapter "The one true tree of life" in "The Blind Watchmaker", he argues that it is not only zoological taxonomy that is saved from awkward ambiguity by the extinction of intermediate forms, but also human ethics and law. Dawkins states that what he calls the "discontinuous mind" is ubiquitous, dividing the world into units that reflect nothing but our use of language, and animals into discontinuous species:
The director of a zoo is entitled to "put down" a chimpanzee that is surplus to requirements, while any suggestion that he might "put down" a redundant keeper or ticket-seller would be greeted with howls of incredulous outrage. The chimpanzee is the property of the zoo. Humans are nowadays not supposed to be anybody's property, yet the rationale for discriminating against chimpanzees is seldom spelled out, and I doubt if there is a defensible rationale at all. Such is the breathtaking speciesism of our Christian-inspired attitudes, the abortion of a single human zygote (most of them are destined to be spontaneously aborted anyway) can arouse more moral solicitude and righteous indignation than the vivisection of any number of intelligent adult chimpanzees! ... The only reason we can be comfortable with such a double standard is that the intermediates between humans and chimps are all dead.
Dawkins elaborated in a discussion with Singer at The Center for Inquiry in 2007, when asked whether he continues to eat meat: "It's a little bit like the position which many people would have held a couple of hundred years ago over slavery. Where lots of people felt morally uneasy about slavery but went along with it because the whole economy of the South depended upon slavery."
David Sztybel states in his paper, "Can the Treatment of Animals Be Compared to the Holocaust?" (2006), that the racism of the Nazis is comparable to the speciesism inherent in eating meat or using animal by-products, particularly those produced on factory farms. Y. Michael Barilan, an Israeli physician, states that speciesism is not the same thing as Nazi racism, because the latter extolled the abuser and condemned the weaker and the abused. He describes speciesism as the recognition of rights on the basis of group membership, rather than solely on the basis of moral considerations.
"Libertarian extension" is the idea that the intrinsic value of nature can be extended beyond sentient beings. This seeks to apply the principle of individual rights not only to all animals but also to objects without a nervous system such as trees, plants, and rocks. Ryder rejects this argument, writing that "value cannot exist in the absence of consciousness or potential consciousness. Thus, rocks and rivers and houses have no interests and no rights of their own. This does not mean, of course, that they are not of value to us, and to many other painients, including those who need them as habitats and who would suffer without them."
A common theme in defending speciesism is the argument that humans have the right to exploit other species to defend their own. Philosopher Carl Cohen stated in 1986: "Speciesism is not merely plausible; it is essential for right conduct, because those who will not make the morally relevant distinctions among species are almost certain, in consequence, to misapprehend their true obligations." Cohen writes that racism and sexism are wrong because there are no relevant differences between the sexes or races. Between people and animals, he states, there are significant differences; his view is that animals do not qualify for Kantian personhood, and as such have no rights.
Nel Noddings, the American feminist, has criticized Singer's concept of speciesism for being simplistic, and for failing to take into account the context of species preference, as concepts of racism and sexism have taken into account the context of discrimination against humans. Peter Staudenmaier has stated that comparisons between speciesism and racism or sexism are trivializing:
The central analogy to the civil rights movement and the women's movement is trivializing and ahistorical. Both of those social movements were initiated and driven by members of the dispossessed and excluded groups themselves, not by benevolent men or white people acting on their behalf. Both movements were built precisely around the idea of reclaiming and reasserting a shared humanity in the face of a society that had deprived it and denied it. No civil rights activist or feminist ever argued, "We're sentient beings too!" They argued, "We're fully human too!" Animal liberation doctrine, far from extending this humanist impulse, directly undermines it.
A similar argument was made by Bernard Williams, who observed that a difference between speciesism and racism or sexism is that racists and sexists deny any input from those of a different race or sex when it comes to questioning how they should be treated. Conversely, when it comes to how animals should be treated by humans, only humans are capable of discussing that question. Williams observed that being a human being is often used as an argument against discrimination on the grounds of race or sex, whereas racism and sexism are seldom deployed to counter discrimination.
Williams also argued in favour of speciesism (which he termed "humanism"), asking: "Why are fancy properties which are grouped under the label of personhood 'morally relevant' to issues of destroying a certain kind of animal, while the property of being a human being is not?" Williams stated that to respond by arguing that these are properties considered valuable by human beings does not undermine speciesism, as humans also consider human beings to be valuable, thus justifying speciesism. Williams then stated that the only way to resolve this would be to argue that these properties are "simply better", but in that case one would need to justify why these properties are better, if not because of human attachment to them. Christopher Grau supported Williams, arguing that if one used properties like rationality, sentience, and moral agency as criteria for moral status, as an alternative to species-based moral status, then it would need to be shown why these particular properties are to be used instead of others; there must be something that gives them special status. Grau states that to claim these are simply better properties would require the existence of an impartial observer, an "enchanted picture of the universe", to declare them so; thus such properties have no greater justification as criteria for moral status than membership of a species does. Grau also states that even if such an impartial perspective existed, it still would not necessarily count against speciesism, since it is entirely possible that an impartial observer could give humans reasons to care about humanity. Grau further observes that if an impartial observer existed and valued only the minimization of suffering, it would likely be overcome with horror at the suffering of all individuals and would rather have humanity annihilate the planet than allow the suffering to continue. Grau thus concludes that those endorsing the idea of deriving values from an impartial observer do not seem to have seriously considered the conclusions of such an idea.
Another criticism of animal-type anti-speciesism is based on the distinction between demanding rights one wants and being placed under rights one may not want. Many people who are now over 18, but remember their time as minors as a period when their alleged children's rights amounted to legalized torture, doubt whether animal rights do animals any good, especially since animals cannot even say what they consider horrible. A distinction is made between people who are extrinsically denied the possibility of saying what they think (by age-of-majority limits, psychiatric diagnoses based on domain-specific hypotheses, or other constructed laws) on the one hand, and marginal-case humans intrinsically incapable of opining about their situation on the other. The former case is considered comparable to racism and sexism; the latter is considered comparable to animals. This extends to questioning and rejecting the very definition of "wanton cruelty". One example that has been pointed out is that, since we do not know whether animals are aware of death, all ethical considerations about putting animals down are benighted. Advocates of this way of partly accepting speciesism generally do not subscribe to arguments about alleged dehumanization or other legalistic arguments, and have no problem accepting possible future encounters with extraterrestrial intelligence or artificial intelligence as equals.
Ayn Rand's Objectivism holds that humans are the only beings who have what Rand called a conceptual consciousness, and the ability to reason and develop a moral system. She stated that humans are therefore the only species entitled to rights. Objectivist philosopher Leonard Peikoff stated: "By its nature and throughout the animal kingdom, life survives by feeding on life. To demand that man defer to the 'rights' of "other" species is to deprive man himself of the right to life. This is 'other-ism,' i.e. altruism, gone mad."
Douglas Maclean agreed that Singer raised important questions and challenges, particularly with his argument from marginal cases. However, Maclean questioned whether different species can be fitted into human morality, observing that animals are generally held exempt from morality; if a man were to kidnap and kill a woman, most people would be outraged and anyone who intervened would be lauded as a hero, yet if a hawk captured and killed a marmot, most people would react in awe of nature and criticize anyone who tried to intervene. Maclean thus suggests that morality only makes sense within human relations, and that the further one gets from them, the less it can be applied. Maclean further stated that species membership is used to humanize other people and to create concepts such as dignity, respect, and the capacity to be treated as something more than creatures driven by survival and reproduction.
The British philosopher, Roger Scruton, regards the emergence of the animal rights and anti-speciesism movement as "the strangest cultural shift within the liberal worldview", because the idea of rights and responsibilities is, he states, distinctive to the human condition, and it makes no sense to spread them beyond our own species. Scruton states that if animals have rights, then they also have duties, which animals would routinely violate, with almost all of them being "habitual law-breakers" and predatory animals such as foxes, wolves and killer whales being "inveterate murderers" who "should be permanently locked up". He accuses anti-speciesism advocates of "pre-scientific" anthropomorphism, attributing traits to animals that are, he says, Beatrix Potter-like, where "only man is vile." It is, he states, a fantasy, a world of escape.
Thomas Wells, while agreeing that humans should have duties towards the natural world, states that Singer's arguments are incoherent. Wells states that Singer's call for ending animal suffering would justify simply exterminating every animal on the planet in order to prevent the numerous ways in which they suffer, as they could then no longer feel any pain. Wells also states that by focusing on the suffering humans inflict on animals while ignoring the suffering animals inflict upon themselves or that inflicted by nature, Singer creates a hierarchy in which some suffering is more important than other suffering, despite claiming to be committed to equality of suffering. Wells further states that the capacity to suffer, Singer's criterion for moral status, is a matter of degree rather than of absolute categories: Singer denies moral status to plants on the grounds that they cannot subjectively feel anything (even though they react to stimuli), yet, Wells states, there is no indication that animals feel pain and suffering the way humans do. Wells thus concludes: "The inconvenient topography of sentience, and the hierarchy of interests it implies has to be flattened out, lest the reader conclude that something more sophisticated than hedonic utilitarianism is required."
Robert Nozick notes that if species membership is irrelevant, then this would mean that endangered animals have no special claim.
The Rev. John Tuohey, founder of the Providence Center for Health Care Ethics, writes that the logic behind the anti-speciesism critique is flawed, and that, although the animal rights movement in the United States has been influential in slowing animal experimentation, and in some cases halting particular studies, no one has offered a compelling argument for species equality.
Some proponents of speciesism believe that animals exist so that humans may make use of them. They state that this special status conveys special rights, such as the right to life, and also unique responsibilities, such as stewardship of the environment. This belief in human exceptionalism is often rooted in the Abrahamic religions, such as Genesis 1:26: "Then God said, 'Let Us make man in Our image, according to Our likeness; and let them rule over the fish of the sea and over the birds of the sky and over the cattle and over all the earth, and over every creeping thing that creeps on the earth.'" Animal rights advocates state that dominion refers to stewardship, not ownership. Jesus Christ taught that a person is worth more than many sparrows, though the Imago Dei may be personhood itself, even if humans have only succeeded in educating and otherwise acculturating other humans. Proverbs 12:10 says that "Whoever is righteous has regard for the life of his beast, but the mercy of the wicked is cruel."
Psychologists have also examined speciesism as a specific psychological construct or attitude (as opposed to speciesism as a philosophy). Studies have found that speciesism is a stable construct that differs between personalities and correlates with other variables. For example, speciesism has been found to have a weak positive correlation with homophobia and right-wing authoritarianism, as well as slightly stronger correlations with political conservatism, racism, and system justification. Moderate positive correlations were found with social dominance orientation and sexism. Social dominance orientation was theorised to underpin most of these correlations; controlling for it reduces all the correlations substantially and renders many of them statistically non-significant. Speciesism likewise predicts levels of prosociality toward animals and behavioural food choices.
Some research has found that laypeople are aware of a connection between speciesism and other forms of prejudice; laypeople tend to infer similar personality traits and prejudices from a speciesist as they do from a racist, sexist, or homophobe. However, it is not clear if there is a link between speciesism and non-traditional forms of prejudice such as negative attitudes towards the overweight or towards Christians.
Psychological studies have furthermore argued that people tend to "morally value individuals of certain species less than others even when beliefs about intelligence and sentience are accounted for."
The first major statute addressing animal protection in the United States, titled "An Act for the More Effectual Prevention of Cruelty to Animals", was enacted in 1867. It provided the means to prosecute and enforce protections with regard to animal cruelty. The act, which has since been revised state by state to suit modern cases, originally addressed such things as animal neglect, abandonment, torture, fighting, transport, impound standards, and licensing standards. Although an animal rights movement had already started as early as the late 1800s, some of the laws that would shape the way animals would be treated as industry grew were enacted around the same time that Richard Ryder was bringing the notion of speciesism into the conversation. Legislation was being proposed and passed in the U.S. that would reshape animal welfare in industry and science. The Humane Slaughter Act, created to alleviate some of the suffering felt by livestock during slaughter, was passed in 1958. Later, the Animal Welfare Act of 1966, passed by the 89th United States Congress and signed into law by President Lyndon B. Johnson, was designed to put much stricter regulations and supervision on the handling of animals used in laboratory experimentation and exhibition; it has since been amended and expanded. These groundbreaking laws foreshadowed and influenced the shifting attitudes toward nonhuman animals and their right to humane treatment, which Richard D. Ryder and Peter Singer would later popularize in the 1970s and 1980s.
Great ape personhood is the idea that the attributes of nonhuman great apes are such that their sentience and personhood should be recognized by the law, rather than simply protecting them as a group under animal cruelty legislation. Awarding personhood to nonhuman primates would require that their individual interests be taken into account. | https://en.wikipedia.org/wiki?curid=29045 |
Steelman language requirements
The Steelman language requirements were a set of requirements which a high-level general-purpose programming language should meet, created by the United States Department of Defense in "The Department of Defense Common High Order Language program" in 1978. The predecessors of this document were called, in order, "Strawman", "Woodenman", "Tinman" and "Ironman".
The requirements focused on the needs of embedded computer applications, and emphasised reliability, maintainability, and efficiency. Notably, they included exception handling facilities, run-time checking, and parallel computing.
It was concluded that no existing language met these criteria to a sufficient extent, so a contest was called to create a language that would be closer to fulfilling them. The design that won this contest became the Ada programming language.
The resulting language followed the Steelman requirements closely, though not exactly.
The Ada 95 revision of the language went beyond the Steelman requirements, targeting general-purpose systems in addition to embedded ones, and adding features supporting object-oriented programming. | https://en.wikipedia.org/wiki?curid=29047 |
Single-sideband modulation
In radio communications, single-sideband modulation (SSB) or single-sideband suppressed-carrier modulation (SSB-SC) is a type of modulation used to transmit information, such as an audio signal, by radio waves. A refinement of amplitude modulation, it uses transmitter power and bandwidth more efficiently. Amplitude modulation produces an output signal the bandwidth of which is twice the maximum frequency of the original baseband signal. Single-sideband modulation avoids this bandwidth increase, and the power wasted on a carrier, at the cost of increased device complexity and more difficult tuning at the receiver.
Radio transmitters work by mixing a radio frequency (RF) signal of a specific frequency, the carrier wave, with the audio signal to be broadcast. In AM transmitters this mixing usually takes place in the final RF amplifier (high level modulation). It is less common and much less efficient to do the mixing at low power and then amplify it in a linear amplifier. Either method produces a set of frequencies with a strong signal at the carrier frequency and with weaker signals at frequencies extending above and below the carrier frequency by the maximum frequency of the input signal. Thus the resulting signal has a spectrum whose bandwidth is twice the maximum frequency of the original input audio signal.
SSB takes advantage of the fact that the entire original signal is encoded in each of these "sidebands": it is not necessary to transmit both sidebands plus the carrier, as a suitable receiver can extract the entire original signal from either the upper or lower sideband. There are several methods for eliminating the carrier and one sideband from the transmitted signal. Producing this single-sideband signal is too complicated to be done in the final amplifier stage as with AM; SSB modulation must be done at a low level and amplified in a linear amplifier, where lower efficiency partially offsets the power advantage gained by eliminating the carrier and one sideband. Nevertheless, SSB transmissions use the available amplifier energy considerably more efficiently, providing longer-range transmission for the same power output. In addition, the occupied spectrum is less than half that of a full-carrier AM signal.
SSB reception requires frequency stability and selectivity well beyond that of inexpensive AM receivers, which is why broadcasters have seldom used it. In point-to-point communications, where expensive receivers are already in common use, they can successfully be adjusted to receive whichever sideband is being transmitted.
The first U.S. patent application for SSB modulation was filed on December 1, 1915 by John Renshaw Carson. The U.S. Navy experimented with SSB over its radio circuits before World War I. SSB first entered commercial service on January 7, 1927, on the longwave transatlantic public radiotelephone circuit between New York and London. The high-power SSB transmitters were located at Rocky Point, New York, and Rugby, England. The receivers were in very quiet locations in Houlton, Maine, and Cupar, Scotland.
SSB was also used over long distance telephone lines, as part of a technique known as frequency-division multiplexing (FDM). FDM was pioneered by telephone companies in the 1930s. With this technology, many simultaneous voice channels could be transmitted on a single physical circuit, for example in L-carrier. With SSB, channels could be spaced (usually) only 4,000 Hz apart, while offering a speech bandwidth of nominally 300 Hz to 3,400 Hz.
Amateur radio operators began serious experimentation with SSB after World War II. The Strategic Air Command established SSB as the radio standard for its aircraft in 1957. It has become a de facto standard for long-distance voice radio transmissions since then.
Single-sideband has the mathematical form of quadrature amplitude modulation (QAM) in the special case where one of the baseband waveforms is derived from the other, instead of being independent messages:

$$s_{\text{usb}}(t) = m(t)\cos(2\pi f_0 t) - \hat{m}(t)\sin(2\pi f_0 t),$$

where $m(t)$ is the message (real-valued), $\hat{m}(t)$ is its Hilbert transform, and $f_0$ is the radio carrier frequency.
To understand this formula, we may express $m(t)$ as the real part of a complex-valued function, with no loss of information:

$$m(t) = \operatorname{Re}\{m_a(t)\} = \operatorname{Re}\{m(t) + i\,\hat{m}(t)\},$$

where $i$ represents the imaginary unit. $m_a(t)$ is the analytic representation of $m(t)$, which means that it comprises only the positive-frequency components of $m(t)$:

$$\tfrac{1}{2} M_a(f) = M(f)\,u(f),$$

where $M_a(f)$ and $M(f)$ are the respective Fourier transforms of $m_a(t)$ and $m(t)$, and $u(f)$ is the unit step function. Therefore, the frequency-translated function $M_a(f - f_0)$ contains only one side of $M(f)$. Since it also has only positive-frequency components, its inverse Fourier transform is the analytic representation of $s_{\text{usb}}(t)$:

$$s_{\text{usb}}(t) + i\,\hat{s}_{\text{usb}}(t) = \mathcal{F}^{-1}\{M_a(f - f_0)\} = m_a(t)\,e^{i 2\pi f_0 t},$$

and again the real part of this expression causes no loss of information. With Euler's formula to expand $e^{i 2\pi f_0 t}$, we obtain:

$$s_{\text{usb}}(t) = \operatorname{Re}\{m_a(t)\,e^{i 2\pi f_0 t}\} = m(t)\cos(2\pi f_0 t) - \hat{m}(t)\sin(2\pi f_0 t).$$
Coherent demodulation of $s_{\text{usb}}(t)$ to recover $m(t)$ is the same as for AM: multiply by $\cos(2\pi f_0 t)$ and lowpass-filter to remove the "double-frequency" components around frequency $2 f_0$. If the demodulating carrier is not in the correct phase (cosine phase here), then the demodulated signal will be some linear combination of $m(t)$ and $\hat{m}(t)$, which is usually acceptable in voice communications. If the demodulating carrier frequency is slightly off, the phase drifts cyclically, which is again usually acceptable in voice communications if the frequency error is small enough; amateur radio operators are sometimes tolerant of even larger frequency errors, which cause unnatural-sounding pitch shifts.
$m(t)$ can also be recovered as the real part of the complex conjugate, $m_a^*(t) = m(t) - i\,\hat{m}(t)$, which represents the negative-frequency portion of $M(f)$. When $f_0$ is large enough that the translated spectrum has no negative frequencies, the product $m_a^*(t)\,e^{i 2\pi f_0 t}$ is another analytic signal, whose real part is the actual lower-sideband transmission:

$$s_{\text{lsb}}(t) = \operatorname{Re}\{m_a^*(t)\,e^{i 2\pi f_0 t}\} = m(t)\cos(2\pi f_0 t) + \hat{m}(t)\sin(2\pi f_0 t).$$

The sum of the two sideband signals is:

$$s_{\text{usb}}(t) + s_{\text{lsb}}(t) = 2 m(t)\cos(2\pi f_0 t),$$

which is the classic model of suppressed-carrier double-sideband AM.
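These identities are easy to check numerically. Below is a minimal NumPy/SciPy sketch, not part of the original article; the test tones, sample rate, and 10 kHz carrier are arbitrary assumptions. scipy.signal.hilbert returns the analytic signal $m(t) + i\,\hat{m}(t)$, so its imaginary part supplies the Hilbert transform used in the formulas above (this is also the essence of the phasing method described below).

```python
import numpy as np
from scipy.signal import hilbert

fs = 48_000                        # sample rate (Hz)
t = np.arange(fs) / fs             # one second of samples
m = np.cos(2*np.pi*300*t) + 0.5*np.cos(2*np.pi*1200*t)   # test message
f0 = 10_000                        # carrier frequency (Hz)

m_hat = hilbert(m).imag            # hilbert() returns m(t) + i*m_hat(t)
c = np.cos(2*np.pi*f0*t)
s = np.sin(2*np.pi*f0*t)
s_usb = m*c - m_hat*s              # upper sideband
s_lsb = m*c + m_hat*s              # lower sideband

# The sum collapses to classic suppressed-carrier DSB: 2*m(t)*cos(2*pi*f0*t).
assert np.allclose(s_usb + s_lsb, 2*m*c)

# Spectral check: the USB energy sits only above f0, at f0 + 300 and f0 + 1200 Hz.
f = np.fft.rfftfreq(len(t), 1/fs)
spectrum = np.abs(np.fft.rfft(s_usb))
print(f[spectrum > spectrum.max()/4])   # -> [10300. 11200.]
```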
One method of producing an SSB signal is to remove one of the sidebands via filtering, leaving only either the upper sideband (USB), the sideband with the higher frequency, or less commonly the lower sideband (LSB), the sideband with the lower frequency. Most often, the carrier is reduced or removed entirely (suppressed), being referred to in full as single sideband suppressed carrier (SSBSC). Assuming both sidebands are symmetric, which is the case for a normal AM signal, no information is lost in the process. Since the final RF amplification is now concentrated in a single sideband, the effective power output is greater than in normal AM (the carrier and redundant sideband account for well over half of the power output of an AM transmitter). Though SSB uses substantially less bandwidth and power, it cannot be demodulated by a simple envelope detector like standard AM.
An alternate method of generation known as a Hartley modulator, named after R. V. L. Hartley, uses phasing to suppress the unwanted sideband. To generate an SSB signal with this method, two versions of the original signal are generated, mutually 90° out of phase for any single frequency within the operating bandwidth. Each one of these signals then modulates carrier waves (of one frequency) that are also 90° out of phase with each other. By either adding or subtracting the resulting signals, a lower or upper sideband signal results. A benefit of this approach is to allow an analytical expression for SSB signals, which can be used to understand effects such as synchronous detection of SSB.
Shifting the baseband signal 90° out of phase cannot be done simply by delaying it, as it contains a large range of frequencies. In analog circuits, a wideband 90-degree phase-difference network is used. The method was popular in the days of vacuum tube radios, but later gained a bad reputation due to poorly adjusted commercial implementations. Modulation using this method is again gaining popularity in the homebrew and DSP fields. This method, utilizing the Hilbert transform to phase shift the baseband audio, can be done at low cost with digital circuitry.
Another variation, the Weaver modulator, uses only lowpass filters and quadrature mixers, and is a favored method in digital implementations.
In Weaver's method, the band of interest is first translated to be centered at zero, conceptually by modulating with a complex exponential $e^{-i 2\pi f_1 t}$, where $f_1$ is a frequency in the middle of the voiceband, but implemented by a quadrature pair of sine and cosine modulators at that frequency (e.g. 2 kHz). This complex signal, or pair of real signals, is then lowpass filtered to remove the undesired sideband that is not centered at zero. Then, the single-sideband complex signal centered at zero is upconverted to a real signal, by another pair of quadrature mixers, to the desired center frequency.
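As a rough illustration, here is a sketch of Weaver's method using complex arithmetic in place of the two quadrature mixer pairs. It is not from the original article: the 2 kHz sub-carrier, 1.7 kHz cutoff, test tones, and 10 kHz output carrier are assumed values, and a practical implementation would use a sharper filter for better rejection of the unwanted sideband.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000
t = np.arange(fs) / fs
m = np.cos(2*np.pi*500*t) + np.cos(2*np.pi*2900*t)   # test "voice" tones

f1 = 2000     # sub-carrier in the middle of the voiceband
f0 = 10_000   # desired carrier of the SSB output
# Step 1: center the voiceband at 0 Hz (one quadrature mixer pair in hardware).
z = m * np.exp(-2j*np.pi*f1*t)
# Step 2: lowpass both rails at roughly half the voice bandwidth, keeping only
# the sideband now centered at zero.
b, a = butter(6, 1700/(fs/2))
z = filtfilt(b, a, z.real) + 1j*filtfilt(b, a, z.imag)
# Step 3: upconvert with a second quadrature pair and take the real part.
s_usb = np.real(z * np.exp(2j*np.pi*(f0 + f1)*t))
# The 500 Hz tone now appears at 10.5 kHz and the 2900 Hz tone at 12.9 kHz (USB).
# Using exp(+2j*pi*f1*t) in step 1 and exp(2j*pi*(f0 - f1)*t) in step 3
# selects the lower sideband instead.
```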
Conventional amplitude-modulated signals can be considered wasteful of power and bandwidth because they contain a carrier signal and two identical sidebands. Therefore, SSB transmitters are generally designed to minimize the amplitude of the carrier signal. When the carrier is removed from the transmitted signal, it is called "suppressed-carrier SSB".
However, in order for a receiver to reproduce the transmitted audio without distortion, it must be tuned to exactly the same frequency as the transmitter. Since this is difficult to achieve in practice, SSB transmissions can sound unnatural, and if the error in frequency is great enough, it can cause poor intelligibility. In order to correct this, a small amount of the original carrier signal can be transmitted so that receivers with the necessary circuitry to synchronize with the transmitted carrier can correctly demodulate the audio. This mode of transmission is called reduced-carrier single-sideband.
In other cases, it may be desirable to maintain some degree of compatibility with simple AM receivers, while still reducing the signal's bandwidth. This can be accomplished by transmitting single-sideband with a normal or slightly reduced carrier. This mode is called "compatible (or full-carrier) SSB" or "amplitude modulation equivalent (AME)". In typical AME systems, harmonic distortion can reach 25%, and intermodulation distortion can be much higher than normal, but minimizing distortion in receivers with envelope detectors is generally considered less important than allowing them to produce intelligible audio.
A second, and perhaps more correct, definition of "compatible single sideband" (CSSB) refers to a form of amplitude and phase modulation in which the carrier is transmitted along with a series of sidebands that are predominantly above or below the carrier term. Since phase modulation is present in the generation of the signal, energy is removed from the carrier term and redistributed into the sideband structure, similar to what occurs in analog frequency modulation. The signals feeding the phase modulator and the envelope modulator are further phase-shifted by 90° with respect to each other. This places the information terms in quadrature with each other; the Hilbert transform of the information to be transmitted is utilized to cause constructive addition of one sideband and cancellation of the opposite primary sideband. Since phase modulation is employed, higher-order terms are also generated, and several methods have been employed to reduce their amplitude.

In one system, the phase-modulated term is actually the log of the carrier level plus the phase-shifted audio/information term. This produces an ideal CSSB signal, where at low modulation levels only a first-order term on one side of the carrier is predominant. As the modulation level is increased, the carrier level is reduced while a second-order term increases substantially in amplitude. At the point of 100% envelope modulation, 6 dB of power is removed from the carrier term, and the second-order term is identical in amplitude to the carrier term. The first-order sideband has increased in level until it is now at the same level as the formerly unmodulated carrier. At the point of 100% modulation, the spectrum appears identical to a normal double-sideband AM transmission, with the center term (now the primary audio term) at a 0 dB reference level and both terms on either side of the primary sideband at −6 dB. The difference is that what appears to be the carrier has shifted by the audio-frequency term towards the "sideband in use". At levels below 100% modulation, the sideband structure appears quite asymmetric.

When voice is conveyed by a CSSB source of this type, low-frequency components are dominant, while higher-frequency terms are lower by as much as 20 dB at 3 kHz. The result is that the signal occupies approximately half the normal bandwidth of a full-carrier DSB signal. There is one catch: the audio term utilized to phase-modulate the carrier is generated based on a log function that is biased by the carrier level. At negative 100% modulation, the term is driven to zero and the modulator becomes undefined, so strict modulation control must be employed to maintain stability of the system and avoid splatter. This system is of Russian origin and was described in the late 1950s. It is uncertain whether it was ever deployed.
A second series of approaches was designed and patented by Leonard R. Kahn. The various Kahn systems removed the hard limit imposed by the use of the strict log function in the generation of the signal. Earlier Kahn systems utilized various methods to reduce the second-order term through the insertion of a predistortion component. One example of this method was also used to generate one of the Kahn independent-sideband (ISB) AM stereo signals; it was known as the STR-77 exciter method, having been introduced in 1977.

Later, the system was further improved by use of an arcsine-based modulator that included a 1 − 0.52E term in the denominator of the arcsine generator equation, where E represents the envelope term. Roughly half the modulation term applied to the envelope modulator is utilized to reduce the second-order term of the arcsine "phase"-modulated path, thus reducing the second-order term in the undesired sideband. A multi-loop modulator/demodulator feedback approach was used to generate an accurate arcsine signal. This approach was introduced in 1984 and became known as the STR-84 method. It was sold by Kahn Research Laboratories, later Kahn Communications, Inc. of New York. An additional audio processing device further improved the sideband structure by selectively applying pre-emphasis to the modulating signals.

Since the envelope of all the signals described remains an exact copy of the information applied to the modulator, it can be demodulated without distortion by an envelope detector such as a simple diode. In a practical receiver, some distortion may be present, usually at a low level (in AM broadcast, always below 5%). It arises from sharp filtering and nonlinear group delay in the IF filters of the receiver, which act to truncate the compatibility sideband (those terms that are not the result of a linear process of simply envelope-modulating the signal, as would be the case in full-carrier DSB-AM) and to rotate the phase of these compatibility terms such that they no longer cancel the quadrature distortion term caused by a first-order SSB term along with the carrier. The small amount of distortion caused by this effect is generally quite low and acceptable.
The Kahn CSSB method was also briefly used by Airphone as the modulation method employed for early consumer telephone calls that could be placed from an aircraft to ground. This was quickly supplanted by digital modulation methods to achieve even greater spectral efficiency.
While CSSB is seldom used today in the AM/MW broadcast bands worldwide, some amateur radio operators still experiment with it.
The front end of an SSB receiver is similar to that of an AM or FM receiver, consisting of a superheterodyne RF front end that produces a frequency-shifted version of the radio frequency (RF) signal within a standard intermediate frequency (IF) band.
To recover the original signal from the IF SSB signal, the single sideband must be frequency-shifted down to its original range of baseband frequencies, by using a product detector which mixes it with the output of a beat frequency oscillator (BFO). In other words, it is just another stage of heterodyning. For this to work, the BFO frequency must be exactly adjusted.
If the BFO frequency is off, the output signal will be frequency-shifted (up or down), making speech sound strange and "Donald Duck"-like, or unintelligible.
For audio communications, there is a common agreement about a BFO oscillator shift of 1.7 kHz. A voice signal is sensitive to a shift of about 50 Hz, with up to 100 Hz still bearable. Some receivers use a carrier recovery system, which attempts to automatically lock onto the exact IF frequency. Carrier recovery does not eliminate the frequency shift, but it yields a better signal-to-noise ratio at the detector output.
As an example, consider an IF SSB signal centered at frequency $F_{\text{if}}$ = 45000 Hz. The baseband frequency it needs to be shifted to is $F_b$ = 2000 Hz. The BFO output waveform is $\cos(2\pi F_{\text{bfo}} t)$. When the signal is multiplied by (aka "heterodyned with") the BFO waveform, it shifts the signal to $F_{\text{if}} - F_{\text{bfo}}$, "and" to $F_{\text{if}} + F_{\text{bfo}}$, which is known as the "beat frequency" or "image frequency". The objective is to choose an $F_{\text{bfo}}$ that results in $|F_{\text{if}} - F_{\text{bfo}}|$ = 2000 Hz. (The unwanted components at $F_{\text{if}} + F_{\text{bfo}}$ can be removed by a lowpass filter, for which an output transducer or even the human ear may serve.)
There are two choices for $F_{\text{bfo}}$: 43000 Hz and 47000 Hz, called "low-side" and "high-side" injection. With high-side injection, the spectral components that were distributed around 45000 Hz will be distributed around 2000 Hz in the reverse order, also known as an inverted spectrum. That is in fact desirable when the IF spectrum is also inverted, because the BFO inversion restores the proper relationships. One reason for an inverted IF spectrum is that it is the output of an inverting stage in the receiver. Another is that the SSB signal is actually a lower sideband instead of an upper sideband. If both reasons are true, then the IF spectrum is not inverted, and the non-inverting BFO (43000 Hz) should be used.
If $F_{\text{bfo}}$ is off by a small amount, then the beat frequency is not exactly $F_b$, which can lead to the speech distortion mentioned earlier.
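A few lines of arithmetic make the two injection choices concrete; this sketch is illustrative only, using the example numbers above.

```python
# Low-side vs. high-side BFO injection for a 45 kHz IF and a 2 kHz target.
F_if, F_b = 45_000, 2_000
for F_bfo in (F_if - F_b, F_if + F_b):   # 43 000 Hz (low-side), 47 000 Hz (high-side)
    beat = abs(F_if - F_bfo)             # the wanted baseband product: 2000 Hz
    image = F_if + F_bfo                 # unwanted product, removed by lowpass filtering
    side = "high-side (spectrum inverted)" if F_bfo > F_if else "low-side (upright)"
    print(f"BFO {F_bfo} Hz: beat {beat} Hz, image {image} Hz, {side}")
```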
SSB techniques can also be adapted to frequency-shift and frequency-invert baseband waveforms (voice inversion). This voice-scrambling method was implemented by passing audio modulated on one sideband through a receiver set to the opposite sideband (e.g. running an LSB-modulated audio sample through a radio running in USB mode), as the sketch below illustrates.
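Here is a minimal sketch of the same inversion idea in baseband form, not taken from the article: mixing the audio with a tone just above the voiceband and keeping only the difference products maps each frequency $f$ to $f_{\text{inv}} - f$, and applying the identical operation a second time restores the original band. The 3.3 kHz inversion carrier is an assumed value.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000
t = np.arange(fs) / fs
audio = np.cos(2*np.pi*500*t)                 # a 500 Hz test tone

f_inv = 3300                                   # inversion carrier
mixed = audio * np.cos(2*np.pi*f_inv*t)        # products at 3300 -/+ 500 Hz
b, a = butter(8, 3000/(fs/2))                  # keep the difference product only
scrambled = 2 * filtfilt(b, a, mixed)          # the tone now sits at 2800 Hz
# Repeating the same operation descrambles: 2800 Hz maps back to ~500 Hz.
descrambled = 2 * filtfilt(b, a, scrambled * np.cos(2*np.pi*f_inv*t))
```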
These effects were used, in conjunction with other filtering techniques, during World War II as a simple method for speech encryption. Radiotelephone conversations between the US and Britain were intercepted and "decrypted" by the Germans; they included some early conversations between Franklin D. Roosevelt and Churchill. In fact, the signals could be understood directly by trained operators. Largely to allow secure communications between Roosevelt and Churchill, the SIGSALY system of digital encryption was devised.
Today, such simple inversion-based speech encryption techniques are easily decrypted using simple techniques and are no longer regarded as secure.
Since single-sideband modulation is suited to voice signals but not to video/TV signals, vestigial sideband modulation is used instead. A vestigial sideband (in radio communication) is a sideband that has been only partly cut off or suppressed. Television broadcasts (in analog video formats) use this method when the video is transmitted in AM, due to the large bandwidth used. It may also be used in digital transmission, such as the ATSC standardized 8VSB.
The broadcast or transport channel for TV in countries that use NTSC or ATSC has a bandwidth of 6 MHz. To conserve bandwidth, SSB would be desirable, but the video signal has significant low-frequency content (average brightness) and has rectangular synchronising pulses. The engineering compromise is vestigial-sideband transmission. In vestigial sideband, the full upper sideband of bandwidth W2 = 4.0 MHz is transmitted, but only W1 = 0.75 MHz of the lower sideband is transmitted, along with a carrier. The carrier frequency is 1.25 MHz above the lower edge of the 6 MHz-wide channel. This effectively makes the system AM at low modulation frequencies and SSB at high modulation frequencies. The absence of the lower sideband components at high frequencies must be compensated for, and this is done in the IF amplifier.
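The channel budget implied by these figures can be tallied directly; this is a back-of-envelope sketch, and the note about the sound carrier occupying the remainder is standard NTSC practice rather than something stated above.

```python
# NTSC 6 MHz channel budget implied by the VSB figures (values in MHz).
channel = 6.0
carrier_offset = 1.25        # visual carrier above the lower channel edge
upper_sideband = 4.0         # W2: transmitted in full
vestige = 0.75               # W1: surviving part of the lower sideband
video_top = carrier_offset + upper_sideband   # 5.25 MHz above the lower edge
remainder = channel - video_top               # 0.75 MHz for sound carrier and guard
print(video_top, remainder)                   # 5.25 0.75
```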
When single-sideband is used in amateur radio voice communications, it is common practice that for frequencies below 10 MHz, lower sideband (LSB) is used and for frequencies of 10 MHz and above, upper sideband (USB) is used. For example, on the 40 m band, voice communications often take place around 7.100 MHz using LSB mode. On the 20 m band at 14.200 MHz, USB mode would be used.
An exception to this rule applies to the five discrete amateur channels on the 60-meter band (near 5.3 MHz) where FCC rules specifically require USB.
Extended single sideband is any J3E (SSB-SC) mode that exceeds the audio bandwidth of standard or traditional 2.9 kHz SSB J3E modes (ITU 2K90J3E) to support higher-quality sound.
Amplitude-companded single sideband (ACSSB) is a narrowband modulation method using a single sideband with a pilot tone, allowing an expander in the receiver to restore the amplitude that was severely compressed by the transmitter. It offers improved effective range over standard SSB modulation while simultaneously retaining backwards compatibility with standard SSB radios. ACSSB also offers reduced bandwidth and improved range for a given power level compared with narrow band FM modulation.
The generation of standard SSB modulation results in large envelope overshoots well above the average envelope level for a sinusoidal tone (even when the audio signal is peak-limited). The standard SSB envelope peaks are due to truncation of the spectrum and nonlinear phase distortion from the approximation errors of the practical implementation of the required Hilbert transform. It was recently shown that suitable overshoot compensation (so-called controlled-envelope single-sideband modulation or CESSB) achieves about 3.8 dB of peak reduction for speech transmission. This results in an effective average power increase of about 140%.
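The two quoted figures are consistent: at a fixed peak-envelope limit, a 3.8 dB reduction in envelope peaks allows the average power to rise by a factor of $10^{3.8/10} \approx 2.4$, i.e. roughly 140% more average power.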
Although the generation of the CESSB signal can be integrated into the SSB modulator, it is feasible to separate the generation of the CESSB signal (e.g. in form of an external speech preprocessor) from a standard SSB radio. This requires that the standard SSB radio's modulator be linear-phase and have a sufficient bandwidth to pass the CESSB signal. If a standard SSB modulator meets these requirements, then the envelope control by the CESSB process is preserved.
In 1982, the International Telecommunication Union (ITU) designated the types of amplitude modulation: | https://en.wikipedia.org/wiki?curid=29048 |
Szlachta
The szlachta (exonym: "nobility") was a legally privileged noble class in the Kingdom of Poland and in the Grand Duchy of Lithuania. After the Union of Lublin in 1569, the Grand Duchy and its neighbouring Kingdom became a single state, the Polish–Lithuanian Commonwealth.
The origins of the szlachta are obscure and the subject of several theories. Traditionally, its members were landowners, often in the form of "manorial estates" or so-called "folwarks". The nobility won substantial and increasing political and legal privileges for itself throughout its entire history, until the decline and end of the Polish–Lithuanian Commonwealth in the late 18th century. Apart from providing officers for the army, among its chief civic obligations were electing the monarch and filling advisory and honorary roles at court, e.g., "Stolnik" ("Master of the King's Pantry") or his assistant, "Podstoli", and in the state government, e.g., "Podskarbi" ("Minister of the Treasury"). They served as elected representatives in the Sejm (national parliament) and in local "sejmiki" assemblies, appointing officials and overseeing judicial and financial governance, including tax-raising, at the provincial level. Their offices included "Voivode", "Marshal of Voivodeship", "Castellan", and "Starosta".
The szlachta gained considerable institutional privileges between 1333 and 1370 in the Kingdom of Poland during the reign of King Casimir III the Great. In 1413, following a series of tentative personal unions between the Grand Duchy of Lithuania and the Crown of the Kingdom of Poland, the existing Lithuanian-Ruthenian nobility formally joined this class. As the Polish-Lithuanian Commonwealth (1569–1795) evolved and expanded in territory, its membership grew to include the leaders of Ducal Prussia and Livonia. During the Partitions of Poland from 1772 to 1795, minor szlachta began to lose these legal privileges and social status, while elites became part of the nobility of partitioning countries.
Although szlachta members had greatly unequal status due to wealth and political influence, few official distinctions existed between elites and common nobility. The juridic principle of szlachta equality existed because land held by szlachta was allodial, not feudal, carrying no requirement of feudal service to a liege lord; this in turn produced a disdain for distinction by way of titles. The relatively few hereditary titles in the Kingdom of Poland were bestowed by foreign monarchs, including personal hereditary titles granted by the Pope (see Feliks Sobański). In the Grand Duchy of Lithuania, Ruthenia, and Samogitia, princely titles were mostly inherited by descendants of old Lithuanian-Ruthenian Rurikid and Gediminid princely families, or by princely dynasties of Tatar origin settled there.
The Polish term "szlachta" is derived from the Old High German word "slahta". In modern German "Geschlecht" - which originally came from the Proto-Germanic *"slagiz", "blow", "strike", and shares the Anglo-Saxon root for "slaughter" or the verb "to slug" – means "breeding" or gender. Like many other Polish words pertaining to nobility, it derives from Germanic words: the Polish for a "knight" is ""rycerz"", a cognate of the German ""Ritter"". The Polish word for "coat of arms" is ""herb"" from the German ""Erbe"" or "heritage".
17th-century Poles assumed "szlachta" came from the German "schlachten", "to slaughter" or "to butcher", and was therefore related to the German word for battle, "Schlacht". Some early Polish historians thought the term might have derived from the name of the legendary proto-Polish chief, Lech, mentioned in Polish and Czech writings. In Gaelic society, large landholders with hereditary tenures formed the principal lineages of their own "clans" or "septs", and they were traced to a "sliocht", or branch, of the ruling families. In ancient Ireland, "sliocht" meant progeny; the posterity of Hugh Slany, ruler of the Kingdom of Meath, were known as "Sliocht Aodha Slaine", the progeny of King Hugh Slany. (In Shakespeare's play, Macbeth says of King Duncan, "... First, as I am his kinsman and his subject, ...")
A few exceptionally wealthy and powerful szlachta members constituted the "magnateria" and were known as magnates (see Magnates of Poland and Lithuania).
The Polish term ""szlachta"" designated the formalized, hereditary noble class of the Polish-Lithuanian Commonwealth, which constituted the nation itself, and ruled without competition. In official Latin documents of the old Commonwealth, the hereditary szlachta were referred to as ""nobilitas"" from the Latin term, and could be compared in legal status to English or British peers of the realm, or to the ancient Roman idea of "cives", "citizen". Until the second half of the 19th century, the Polish term "obywatel" (wiktionary:obywatel) ("Citizen") was used as a synonym for szlachta landlords.
Today the word "szlachta" simply translates as "nobility". In its broadest sense, it can also denote some non-hereditary honorary knighthoods and baronial titles granted by other European monarchs, including the Holy See. Occasionally, 19th-century landowners of non-noble descent were referred to as "szlachta" by courtesy or error, when they owned manorial estates, but were not in fact noble by birth. "Szlachta" also denotes the Ruthenian and Lithuanian nobility from before the old-Commonwealth.
In the past, a misconception sometimes led to the mistranslation of "szlachta" as "gentry" rather than "nobility". This mistaken practice began due to the inferior economic status of many szlachta members compared to that of the nobility in other European countries (see also Estates of the Realm regarding wealth and nobility). The szlachta ranged from those rich and powerful enough to be magnates down to the indigent with a noble lineage but no land, no castle, no money, no village, and no subject peasants. At least 60,000 families belonged to the nobility, but only about 100 of them (fewer than 0.167%) were wealthy; all the rest (more than 99.83%) were poor.
Over time, most of the numerically dominant "lesser" szlachta became, or had always been, poorer than their few rich peers in their social class, and many were worse off than the non-noble gentry. They were called "szlachta zagrodowa", that is, "farm nobility", from "zagroda", a farm, often little different from a peasant's dwelling; they were also sometimes referred to as "drobna szlachta", "petty nobles", or "szlachta okoliczna", meaning "local nobility". Particularly impoverished szlachta families were often forced to become tenants of their wealthier peers and were then described as "szlachta czynszowa", or "tenant nobles", who paid rent. In doing so, they nevertheless retained all their constitutional and lawful prerogatives, because Polish nobility was determined by aristocratic lineage and hereditary juridical status, not by wealth or lifestyle (which the gentry could achieve).
An individual nobleman was called a "szlachcic", a noblewoman a "szlachcianka".
The origins of the szlachta, while ancient, have always been considered obscure; as a result, its members often referred to it as "odwieczna" (perennial). Two popular historical theories about its origins were put forward by its members and early historians and chroniclers. The first involved a presumed descent from the ancient Iranian tribe known as the Sarmatians, who in the 2nd century AD occupied lands in Eastern Europe and the Middle East. The second involved a presumed szlachta descent from Japheth, one of Noah's sons. By contrast, the peasantry were said to be the offspring of another son of Noah, Ham, and hence subject to bondage under the Curse of Ham; the Jews were considered the offspring of Shem. Other fanciful theories included its foundation by Julius Caesar, Alexander the Great, or regional leaders who had not mixed their bloodlines with those of 'slaves, prisoners, or aliens'.
Another theory describes its derivation from a non-Slavic warrior class that formed a distinct element, known as the Lechici/Lekhi ("Lechitów"), within the ancient Polonic tribal groupings (compare Indo-European caste systems). This hypothesis, similar to Nazi racial ideology, which held that the Polish elite was largely of Nordic origin (the szlachta's Boreyko coat of arms features a swastika), states that this upper class was not of Slavonic extraction and was of a different origin than the Slavonic peasants over whom it ruled.
In old Poland, there were said to be two nations: the nobles and the peasants, and the szlachta were sharply differentiated from the rural population. In the harshly stratified and elitist Polish society, the nobleman's sense of distinction led to practices that in later periods would be characterized as racism. Wacław Potocki, herbu Śreniawa (1621–1696), proclaimed that peasants "by nature" are "chained to the land and plow", and that even an educated peasant would always remain a peasant, because "it is impossible to transform a dog into a lynx." The szlachta were noble in the Aryan sense (see "Alans"): "noble" in contrast to the people over whom they ruled after coming into contact with them.
The szlachta traced their descent from Lech/Lekh, who allegedly founded the Polish kingdom in about the fifth century. Lechia was the name of Poland in antiquity, and the szlachta's own name for themselves was Lechici/Lekhi. An exact counterpart of szlachta society was the Meerassee system of tenure of southern India, an aristocracy of equality, settled as conquerors among a separate race. The Polish state paralleled the Roman Empire in that full rights of citizenship were limited to the szlachta, and, like old Poland, Rome devoted its attention nearly exclusively to agriculture. The szlachta ideal also paralleled that of a Greek polis: a body of citizens, a small merchant class, and a multitude of laborers. The szlachta had the exclusive right to enter the clergy until the time of the three partitions of Poland, and the szlachta and clergy believed they were genetically superior to peasants, whom they regarded as a lower species. Quoting the Bishop of Poznań, Wawrzyniec Goślicki, herbu Grzymała (born between 1530 and 1540, died 1607):
"The kingdome of Polonia doth also consist of the said three sortes, that is, the king, nobility and people. But it is to be noted, that this word people includeth only knights and gentlemen. ... The gentlemen of Polonia doe represent the popular state, for in them consisteth a great part of the government, and they are as a Seminarie from whence Councellors and Kinges are taken."
The szlachta were a caste, a military caste, as in Hindu society. In the year 1244, Bolesław, Duke of Masovia, identified members of the knights' clan as members of a "genealogia":
"I received my good servitors [Raciborz and Albert] from the land of [Great] Poland, and from the clan ["genealogia"] called Jelito, with my well-disposed knowledge [i.e., consent and encouragement] and the cry ["vocitatio"], [that is], the "godło," [by the name of] "Nagody," and I established them in the said land of mine, Masovia, [on the military tenure described elsewhere in the charter]."
The documentation regarding Raciborz and Albert's tenure is the earliest surviving record of the use of the clan name and cry to define the honorable status of Polish knights. The names of knightly "genealogiae" came to be associated with heraldic devices only later, in the Middle Ages and the early modern period. The Polish clan name and cry ritualized the "ius militare", i.e., the power to command an army, and they had been used some time before 1244 to define knightly status.
"In Poland, the Radwanice were noted relatively early (1274) as the descendants of Radwan, a knight [more properly a "rycerz" from the German "ritter"] active a few decades earlier. ..."
Escutcheons and hereditary coats of arms with eminent privileges attached were an honor derived from the ancient Germans; where Germans did not settle and German customs were unknown, no such custom existed. The usage of coats of arms in Poland was brought in by knights arriving from Silesia, Lusatia, Meissen, and Bohemia, the most frequent sources of migration in the thirteenth and fourteenth centuries. However, unlike other European chivalry, coats of arms were associated with the names and war cries ("godło") of Polish knights' clans ("genealogiae"), and heraldic devices came to be held in common by entire clans, fighting in regiments.
Around the 14th century, there was little difference between knights and the "szlachta" in Poland. Members of the szlachta had the personal obligation to defend the country ("pospolite ruszenie"), thereby becoming the kingdom's most privileged social class. Inclusion in the class was almost exclusively based on inheritance.
Concerning the early Polish tribes, geography contributed to long-standing traditions. The Polish tribes were internally organized around a unifying religious cult, governed by the "wiec", an assembly of free tribesmen. Later, when safety required power to be consolidated, an elected prince was chosen to govern. The election privilege was usually limited to elites.
The tribes were ruled by clans ("ród") consisting of people related by blood or marriage and theoretically descending from a common ancestor, giving the ród/clan a highly developed sense of solidarity. (See "gens".) The "starosta" (or "starszyna") had judicial and military power over the ród/clan, although this power was often exercised with an assembly of elders. Strongholds called "gród" were built where the religious cult was powerful, where trials were conducted, and where clans gathered in the face of danger. The "opole" was the territory occupied by a single tribe, and the family unit of a tribe was the "rodzina".
Mieszko I of Poland (c. 935 – 25 May 992) established an elite knightly retinue from within his army, which he depended upon for success in uniting the Lekhitic tribes and preserving the unity of his state. Documented proof exists of Mieszko I's successors utilizing such a retinue, as well.
Another class of knights were granted land by the prince, giving them the economic means to serve the prince militarily. Prior to the 15th century, a Polish nobleman was referred to as a "rycerz", very roughly equivalent to the English "knight", the critical difference being that the status of "rycerz" was almost strictly hereditary; the class of all such individuals was known as the "rycerstwo". Representing the wealthier families of Poland and itinerant knights from abroad seeking their fortunes, this other class of rycerstwo, which became the szlachta ("szlachta" became the proper term for the Polish nobility beginning about the 15th century), gradually formed apart from Mieszko I's and his successors' elite retinues. This rycerstwo/nobility obtained more privileges, granting them favored status; they were absolved from particular burdens and obligations under ducal law, resulting in the belief that only rycerstwo (those combining military prowess with high/noble birth) could serve as officials in state administration.
Certain rycerstwo were distinguished above the rest, because they descended from past tribal dynasties or because early Piasts' endowments made them favored beneficiaries. These rycerstwo of great wealth were called możni (Magnates). Socially they were not a distinct class from the rycerstwo from which they all originated and to which they would return were their wealth lost.
The Period of Division, from 1138 to 1314, which included nearly 200 years of feudal fragmentation and which stemmed from Bolesław III's division of Poland among his sons, was the genesis of the social structure which saw the economic elevation of the great landowning feudal nobles (możni/Magnates, both ecclesiastical and lay) above the rycerstwo from which they originated. The prior social structure was one of Polish tribes united into the historic Polish nation under a state ruled by the Piast dynasty, which appeared circa 850 A.D.
Some możni (Magnates) descending from past tribal dynasties regarded themselves as co-proprietors of Piast realms, even though the Piasts attempted to deprive them of their independence. These możni constantly sought to undermine princely authority. Gall Anonym's chronicle notes the nobility's alarm when the Palatine Sieciech "elevated those of a lower class over those who were noble born", entrusting them with state offices.
In Lithuania Propria and in Samogitia, prior to the creation of the Kingdom of Lithuania by Mindaugas, nobles were called "die beste leuten" in German sources. In Lithuanian, nobles were named "ponai". The higher nobility were named "kunigai" or "kunigaikščiai" (dukes) — a loanword from Scandinavian "konung". They were the established local leaders and warlords. During the development of the state, they gradually became subordinated to higher dukes, and later to the King of Lithuania. Because of Lithuanian expansion into the lands of Ruthenia in the middle of the 14th century, a new term for nobility appeared — "bajorai", from Ruthenian "бояре". This word is used to this day in Lithuania to refer to nobility in general, including those from abroad.
After the Union of Horodło, the Lithuanian nobility acquired equal status with its Polish counterparts. Over time they became increasingly polonized, although they did preserve their national consciousness, and in most cases recognition of their Lithuanian family roots. In the 16th century, some of the Lithuanian nobility claimed that they were descended from the Romans, and that the Lithuanian language was derived from Latin. This led to a conundrum: Polish nobility claimed its own ancestry from Sarmatian tribes, but Sarmatians were considered enemies of the Romans. Thus, a new Roman-Sarmatian theory was created. Strong cultural ties with Polish nobility led to a new term for Lithuanian nobility appearing in the 16th century — "šlėkta", a direct loanword from Polish "szlachta". Recently, Lithuanian linguists advocated dropping the usage of this Polish loanword.
The process of polonization took place over a lengthy period. At first only the leading members of the nobility were involved. Gradually the wider population became affected. Major effects on the lesser Lithuanian nobility occurred after various sanctions were imposed by the Russian Empire, such as removing "Lithuania" from the names of the "Gubernyas" shortly after the November Uprising. After the January Uprising the sanctions went further, and Russian officials announced that "Lithuanians were actually Russians seduced by Poles and Catholicism" and began to intensify russification, and to ban the printing of books in Lithuanian.
After the principalities of Halych and Volhynia became integrated with the Grand Duchy, Ruthenia's nobility gradually rendered loyalty to the multilingual and cultural melting pot that was the Grand Duchy of Lithuania. Many noble Ruthenian families intermarried with Lithuanians.
The rights of Orthodox nobles were nominally equal to those enjoyed by the Polish and Lithuanian nobility, but they came under cultural pressure to convert to Catholicism, a pressure greatly eased in 1596 by the Union of Brest. See, for example, the careers of Senator Adam Kisiel and Jerzy Franciszek Kulczycki.
In Polish "dąb" means "oak." "Dąbrowa" means "oak forest," and "Dąbrówka" means "little oak forest" (or grove). In antiquity, the nobility used topographic surnames to identify themselves. The expression "z" (meaning "from" sometimes "at") plus the name of one's patrimony or estate (dominion) carried the same prestige as "de" in French names such as "de Châtellerault", and "von" or "zu" in German names such as "von Weizsäcker" or "zu Rhein". In Polish "z Dąbrówki" and "Dąbrowski" mean the same thing: "of, from Dąbrówka." More precisely, "z Dąbrówki" means owning the patrimony or estate Dąbrówka, not necessarily originating from. Almost all the surnames of genuine Polish szlachta can be traced back to a patrimony or locality, despite time scattering most families far from their original home. John of Zamość called himself John Zamoyski, Stephen of Potok called himself Potocki.
At least since the 17th century, the surnames/cognomens of noble families became fixed and were inherited by following generations, remaining in that form until today. Prior to that time, a member of the family would simply use his Christian name (e.g., Jakub, Jan, Mikołaj, etc.) and the name of the coat of arms common to all members of his clan. A member of the family would be identified as, for example, "Jakub z Dąbrówki, herbu Radwan" (Jacob of/at [owning] Dąbrówka, of the knights' clan of the Radwan coat of arms), or "Jakub z Dąbrówki, Żądło, herbu Radwan" (with the distinguishing name Żądło, a cognomen, later a "przydomek"/nickname/agnomen), or simply "Jakub Żądło, herbu Radwan".
The Polish state paralleled the Roman Empire in that full rights of citizenship were limited to the nobility/szlachta. The nobility/szlachta in Poland, where Latin was written and spoken far and wide, used the Roman naming convention of the tria nomina (praenomen, nomen, and cognomen) to distinguish Polish citizens/nobles/szlachta from the peasantry and from foreigners; hence multiple surnames are associated with many Polish coats of arms.
Example: Jakub Radwan Żądło-Dąbrowski (sometimes Jakub Radwan Dąbrowski-Żądło)
Praenomen: Jakub
Nomen (nomen gentile, the name of the gens or knights' clan): Radwan
Cognomen (the name of the family branch/sept within the Radwan gens): Dąbrowski; other examples include Braniecki, Czcikowski, Dostojewski, Górski, Nicki, Zebrzydowski, etc.
Agnomen (nickname, Polish "przydomek"): Żądło (prior to the 17th century, a cognomen)
Bartosz Paprocki gives the example of the Rościszewski family, whose branches took different surnames from the names of the various patrimonies or estates they owned: the branch that settled in Chrapunia became the Chrapunski family, the branch that settled in Strykwina became the Strykwinski family, and the branch that settled in Borkow became the Borkowski family. Sharing a common ancestor and belonging to the same knights' clan, all bore the same coat of arms as the Rościszewski family.
Each knights' clan/gens/ród had its own coat of arms, and there were only a limited number of them; almost without exception, there were no coats of arms of individual families. Each coat of arms bore a name, the clan's call word, and in most instances belonged to many families within the clan. The Polish nobility thus had a different origin and a different structure in law from Western Europe's feudal nobility. The clan/gens/ród system survived the whole of Polish history.
The number of legally granted ennoblements after the 15th century was minimal.
In the Kingdom of Poland and later in the Polish-Lithuanian Commonwealth, ennoblement ("nobilitacja") may be equated with an individual given legal status as a "szlachcic" member of the Polish nobility. Initially, this privilege could be granted by the monarch, but from 1641 onward, this right was reserved for the sejm. Most often the individual being ennobled would join an existing noble szlachta clan and assume the undifferentiated coat of arms of that clan.
According to heraldic sources, the total number of legal ennoblements issued between the 14th century and the mid-18th century is estimated at approximately 800. This is an average of only about two ennoblements per year, or only 0.00000014–0.000001 of the historical population (compare the historical demography of Poland). Charles-Joseph, 7th Prince of Ligne, when trying to obtain Polish noble status, supposedly said in 1784: "It is easier to become a duke in Germany than to be counted among Polish nobles."
The close of the 18th century (see below) saw a definite increase in the number of ennoblements, most readily explained by the ongoing decline and eventual collapse of the Commonwealth and the resulting need for soldiers and other military leaders (see: Partitions of Poland, King Stanisław August Poniatowski).
According to heraldic sources, the total number of legal ennoblements throughout the history of the Kingdom of Poland and the Polish-Lithuanian Commonwealth from the 14th century onward is estimated at about 1,600, half of which were performed in the final years of the late 18th century.
In the late 14th century, in the Grand Duchy of Lithuania, Vytautas the Great reformed the Grand Duchy's army: instead of calling all men to arms, he created forces comprising professional warriors, "bajorai" ("nobles"; see the cognate "boyar"). As there were not enough nobles, Vytautas trained suitable men, relieving them of labor on the land and of other duties; for their military service to the Grand Duke, they were granted land that was worked by hired men ("veldams"). The newly formed noble families generally took up, as their family names, the Lithuanian pagan given names of their ennobled ancestors; this was the case with the Goštautai, Radvilos, Astikai, Kęsgailos and others. These families were granted their coats of arms under the Union of Horodlo (1413).
In 1506, King Sigismund I the Old confirmed the position of the Lithuanian Council of Lords in state politics and limited entry into the nobility.
Significant legislative changes in the status of the szlachta, as defined by Robert Bideleux and Ian Jeffries, consist of its 1374 exemption from the land tax, a 1425 guarantee against the 'arbitrary arrests and/or seizure of property' of its members, a 1454 requirement that military forces and new taxes be approved by provincial Sejms, and statutes issued between 1496 and 1611 that prescribed the rights of commoners.
Nobles were born into a noble family, adopted by a noble family (a practice abolished in 1633), or achieved noble rank through ennoblement by a king or Sejm for reasons such as bravery in combat or services to the state. Yet this last proved to be the rarest means of gaining noble status. Many nobles were, in fact, usurpers: commoners who had moved to another part of the country and falsely claimed noble status. Hundreds of such "false nobles" were denounced by Hieronim Nekanda Trepka in his "Liber generationis plebeanorum" (or "Liber chamorum") in the first half of the 17th century. The law forbade non-nobles to own "folwarks" and promised such estates as a reward to denouncers. Trepka was himself an impoverished nobleman who lived a town dweller's life and documented hundreds of such false claims, hoping to take over one of the usurped estates; he does not seem to have succeeded, despite his employment as the king's secretary. Many sejms issued decrees over the centuries in an attempt to resolve this issue, but with little success. It is unknown what percentage of the Polish nobility came from the 'lower orders' of society, but most historians agree nobles of such base origins formed a 'significant' element of the szlachta.
Self-promotion and aggrandizement were not confined to commoners. Often, members of the lower szlachta sought further ennoblement from foreign, therefore less verifiable, sources. That is, they might acquire by legitimate means or otherwise, such as by purchase, one of a selection of foreign titles ranging from Baron, Marchese, Freiherr to Comte, all readily translatable into the Polish "Hrabia". Alternatively, they would simply appropriate a title by conferring it upon themselves. An example of this is cited in the case of the last descendant of the Ciechanowiecki family, who managed to restore a genuinely old Comital title, but whose actual origins are shrouded in 18th-century mystery.
Polish nobility enjoyed many rights that were not available to their equivalents in other countries. Over time, each new monarch ceded further privileges to them. Those privileges became the basis of the "Golden Liberty" in the Polish–Lithuanian Commonwealth. Despite having a king, Poland was considered the 'nobility's Commonwealth' because royal elections in Poland were in the hands of members of a hereditary class. Poland was therefore the domain of this class, and not that of the king or the ruling dynasty. This arose in part because of the extinction of male heirs in the original royal dynasties: first the Piasts, then the Jagiellons. As a result, the nobility took it upon itself to choose "the Polish king" from among the dynasties' matrilineal descendants.
Poland's successive kings granted privileges to the nobility upon their election to the throne (the privileges having been specified in the king-elect's Pacta conventa) and at other times, in exchange for "ad hoc" leave to raise an extraordinary tax or a "pospolite ruszenie", a military call-up. Poland's nobility thus accumulated a growing array of privileges and immunities.
In 1355 in Buda King Casimir III the Great issued the first country-wide privilege for the nobility, in exchange for their agreeing that if Casimir had no male heirs, the throne would pass to his nephew, Louis I of Hungary. Casimir further decreed that the nobility would no longer be subject to 'extraordinary' taxes or have to use their own funds for foreign military expeditions. Casimir also promised that when the royal court toured, the king and the court would cover all expenses, instead of requiring facilities to be provided by the local nobility.
In 1374 King Louis of Hungary approved the Privilege of Koszyce ("przywilej koszycki") to guarantee the Polish throne for his daughter Jadwiga. He broadened the definition of membership of the nobility and exempted the entire class from all but one tax ("łanowy"), limited to 2 groszes per "łan" (an Old Polish unit of land measurement). In addition, the King's right to raise taxes was effectively abolished: no new taxes could be levied without the agreement of the nobility. Henceforth, district offices were also reserved exclusively for local nobility, as the Privilege of Koszyce forbade the king to grant official posts and major Polish castles to foreign knights. Finally, the privilege obliged the king to pay indemnities to nobles injured or taken captive during a war outside Polish borders.
In 1422 King Władysław II Jagiełło was constrained by the Privilege of Czerwińsk ("przywilej czerwiński"), which established the inviolability of nobles' property: their estates could not be confiscated except upon the verdict of a court. It also made him cede some jurisdiction over fiscal policy, including the right to mint coinage, to the Royal Council, later the Senate of Poland.
In 1430, with the Privileges of Jedlnia, confirmed at Kraków in 1433, Polish: "przywileje jedlneńsko-krakowskie", based partially on his earlier Brześć Kujawski privilege (April 25, 1425), King Władysław II Jagiełło granted the nobility a guarantee against arbitrary arrest, similar to the English Magna Carta's habeas corpus, known from its own Latin name as "neminem captivabimus nisi jure victum". Henceforth, no member of the nobility could be imprisoned without a warrant from a court of justice. The king could neither punish nor imprison any noble on a whim. King Władysław's "quid pro quo" for the easement was the nobles' guarantee that the throne would be inherited by one of his sons, who would be bound to honour the privileges granted earlier to the nobility. On May 2, 1447 the same king issued the "Wilno Pact, or Wilno Privilege", which gave the Lithuanian boyars the same rights as those already secured by the Polish "szlachta".
In 1454, King Casimir IV granted the Nieszawa Statutes (Polish: "statuty cerkwicko-nieszawskie"), clarifying the legal basis of voivodship sejmiks (local parliaments). The king could promulgate new laws, raise taxes, or call a mass military levy ("pospolite ruszenie") only with the consent of the sejmiks, and the nobility were protected from judicial abuses. The Nieszawa Statutes also curbed the power of the magnates, as the Sejm, the national parliament, had the right to elect many officials, including judges, voivods and castellans. These privileges were demanded by the "szlachta" in exchange for their participation in the Thirteen Years' War.
The first "free election" (Polish: "wolna elekcja") of a king took place in 1492. In fact, some earlier Polish kings had been elected with help from assemblies such as those that put Casimir II on the throne, thereby setting a precedent for free elections. Only senators voted in the 1492 free election, which was won by John I Albert. For the duration of the Jagiellonian Dynasty, only members of that royal family were considered for election. Later, there would be no restrictions on the choice of candidates.
In 1493 the Sejm began meeting every two years at Piotrków. It comprised two chambers: the Senate, made up of the realm's bishops and other dignitaries, and a Chamber of Deputies, made up of envoys elected by the nobility at the local sejmiks.
The numbers of senators and deputies later increased.
On April 26, 1496, King John I Albert granted the Privilege of Piotrków. The Statutes of Piotrków increased the nobility's feudal power over serfs: they bound the peasant to the land, and only one son, though not the eldest, was permitted to leave the village. Townsfolk ("mieszczaństwo") were prohibited from owning land. Positions in the Church hierarchy were restricted to nobles.
On 23 October 1501, the Polish–Lithuanian union was reformed by the Union of Mielnik. It was there that the tradition of a coronation Sejm was founded. Here again, the lesser nobility, lesser in wealth only - not in rank - attempted to reduce the power of the Magnates with a law that made them impeachable before the Senate for malfeasance. However, the Act of Mielnik of 25 October did more to strengthen the Magnate-dominated Senate of Poland than the lesser nobility. Nobles as a whole were given the right to disobey the King or his representatives — "non praestanda oboedientia", and to form confederations, armed opposition against the king or state officials if the nobles found that the law or their legitimate privileges were being infringed.
On 3 May 1505 King Alexander I Jagiellon granted the Act of "Nihil novi nisi commune consensu" - "I accept nothing new except by common consent". This forbade the king to pass new laws without the consent of the representatives of the nobility in the assembled Sejm, thus greatly strengthening the nobility's powers. Essentially, this act marked the transfer of legislative power from the king to the Sejm. It also marks the beginning of the First Rzeczpospolita, the period of a "szlachta"-run "Commonwealth".
In 1520 the Act of Bydgoszcz granted the Sejm the right to convene every four years, with or without the king's permission. At about that time the "Executionist Movement", seeking to oversee law enforcement, began to take shape. Its members sought to curb the power of the Magnates at the Sejm and to strengthen the power of the monarch. In 1562, at the Sejm in Piotrków, they forced the Magnates to return many leased crown lands to the king, and the king to create a standing army ("wojsko kwarciane"). One of the most famous members of this movement was Jan Zamoyski.
Until the death of Sigismund II Augustus, the last king of the Jagiellonian dynasty, all monarchs had to be elected from within the royal family. However, from 1573, practically any Polish noble or foreigner of royal blood could potentially become a Polish–Lithuanian monarch. Every newly elected king was supposed to sign two documents: the "Pacta conventa", the king's "pre-election pact", and the "Henrician articles", named after the first freely elected king, Henry of Valois. The latter document was a virtual "Polish constitution" and contained the basic laws of the Commonwealth.
In 1578 King Stefan Batory created the Crown Tribunal to reduce the enormous pressure on the Royal Court. This placed much of the monarch's juridical power in the hands of the elected szlachta deputies, further strengthening the nobility as a class. In 1581 the Crown Tribunal was joined by a counterpart in Lithuania, the Lithuanian Tribunal.
For many centuries, wealthy and powerful members of the szlachta sought to gain legal privileges over their peers. Few szlachta were wealthy enough to be known as Magnates, "karmazyni", the "Crimsons" - from the crimson colour of their boots. A true Magnate had to be able to trace his ancestry for many generations and own at least 20 villages or estates. He also had to hold high office in the Commonwealth.
Thus, out of about one million szlachta, only 200–300 persons could be classed as great Magnates with country-wide possessions and influence, and of these, some 30–40 were considered as having significant impact on Poland's politics. Magnates often received gifts from monarchs, which greatly increased their wealth. Although such gifts were only temporary leases, the Magnates often never returned them. This gave rise in the 16th century to a self-policing trend by the szlachta, known as the "ruch egzekucji praw" (movement for the enforcement of the law), directed against usurping Magnates to force them to return leased lands to their rightful owner, the monarch.
One of the most important victories of the Magnates was the late 16th century right to create "Ordynacjas", similar to Fee tails under English law, which ensured that a family which gained landed wealth could more easily preserve it. The "Ordynacjas" that belonged to families such as the Radziwiłł, Zamoyski, Potocki or Lubomirskis often rivalled the estates of the king and were important power bases for them.
Very high offices of the Polish crown were de facto hereditary and guarded by the magnateria of Poland, leaving the lower offices for the "middling" nobility (the "baronage"; see Offices in the Polish-Lithuanian Commonwealth for a sense of the hierarchy). The prestige of the lower offices depended on the wealth of the region; the Masovia region of Poland had a long-standing reputation of being rather poor due to the condition of its soil.
The difference between the "magnateria" and the rest of the szlachta was primarily one of wealth and lifestyle, as both belonged to the same legally defined class, being members of the same clans. Consequently, any power wrested from the king by the magnates trickled down to the entirety of the szlachta, which often meant that the rest of the szlachta tended to cooperate with the magnates rather than struggle against them.
The notion of the szlachta's accrued sovereignty ended in 1795 with the final Partitions of Poland, and until 1918 their legal status was dependent on the policies of the Russian Empire, the Kingdom of Prussia or the Habsburg Monarchy. A project begun in the Golden Age of Poland was finally eclipsed, but arguably the memory of it has lingered in succeeding generations.
In the 1840s Nicholas I reduced 64,000 of the lesser szlachta to a particular commoner status known as "odnodvortsy" (literally "single-householders"). Despite this, 62.8% of all Russia's nobles were Polish szlachta in 1858, and still 46.1% in 1897.
Serfdom was abolished in Russian Poland on February 19, 1864, on terms deliberately designed to ruin the szlachta. Only in the Russian Partition did peasants pay the market price for land redemption; the average for the rest of the Russian Empire was 34% above market rates. All land taken from Polish peasants since 1846 was to be returned to them without redemption payments. The ex-serfs could only sell land to other peasants, not to szlachta. 90% of the ex-serfs in the empire who actually gained land after 1861 lived in the eight western provinces. Along with Romania, Poland was the only area where landless or domestic serfs were given land after serfdom was abolished. All this was meant to punish the szlachta for its role in the uprisings of 1830 and 1863.
By 1864, 80% of the szlachta were "déclassé", experiencing downward social mobility: one quarter of petty nobles were worse off than the average serf, and while 48.9% of the land in Russian Poland was in peasant hands, nobles still held onto 46%. In the Second Polish Republic the privileges of the nobility were legally abolished by the March Constitution of 1921 and as such were not reinstated by any succeeding Polish law.
Despite preoccupations with warfare, politics and status, the szlachta in Poland, like people from all social classes, played its part in fields ranging from literature, art and architecture, philosophy, education, agriculture and the many branches of science to technology and industry. Perhaps foremost among the cultural determinants of the nobility in Poland were its continuing international connections with the Rome-based Catholic Church; it was from the ranks of the szlachta that the church's leading prelates were drawn until the 20th century. Other international influences came through more or less secretive and powerful Christian and lay organisations such as the Sovereign Military Order of Malta, focused on hospital and other charitable activity. The most notable Polish Maltese Knight was the Poznań commander Bartłomiej Nowodworski, founder in 1588 of the oldest school in Poland. One alumnus was John III Sobieski.
In the 18th century, after several false starts, international Freemasonry ("wolnomularstwo") from western lodges became established among the higher echelons of the szlachta and, in spite of the membership of some clergy, was intermittently but strongly opposed by the Catholic Church. After the partitions it became a cover for opposition to the occupying powers. Also in the 18th century there was a marked development in patronage of the arts during the reign of Stanisław August Poniatowski, himself a freemason, and, with the growth of social awareness, in philanthropy.
High-born women in the Polish-Lithuanian Commonwealth exerted political and cultural influence throughout history in their own country and abroad, as queens, princesses, and the wives or widows of magnates. Their cultural activities came into sharper relief in the 18th century with their hosting of salons in the French manner, and they went on to publish as translators and writers and as facilitators of educational and social projects.
The szlachta, no less than the rest of the population, placed particular emphasis on food, which stood at the centre of courtly and estate entertaining and, in good times, at the heart of village life. During the Age of Enlightenment, King Stanisław August Poniatowski emulated the French salons by holding his famed Thursday Lunches for intellectuals and artists, drawn chiefly from the szlachta. His Wednesday Lunches were gatherings for policy makers in science, education and politics.
There was a tradition, particularly in Mazovia and kept until the 20th century, of estate owners laying on a festive banquet for their staff at the completion of the harvest, known as "Dożynki", as a way of acknowledging their work; it was the equivalent of a harvest festival. Polish food varied according to region, as elsewhere in Europe, and was influenced by settlers, especially Jewish cuisine, and by occupying armies.
One of the favourite szlachta pastimes was hunting ("łowiectwo"). Before the formation of Poland as a state, hunting was accessible to everyone. With the introduction of rulers and rules, big game (generically "zwierzyna": aurochs, bison, deer and boar) became the preserve of kings and princes, on penalty of death for poachers. From the 13th century on, the king would appoint a high-ranking courtier to the role of Master of the Hunt ("łowczy"). In time, the penalties for poaching were commuted to fines, and from around the 14th century landowners acquired the right to hunt on their land. Small game (foxes, hare, badger, stoat, etc.) remained 'fair game' to all comers. Hunting became one of the most popular social activities of the szlachta until the partitions, when the three occupying powers introduced different sets of restrictions with a view to curbing social interaction among the subject Poles. Over the centuries, at least two breeds of specialist hounds were bred in Poland: the Polish Hunting Dog, the "brach", and the Ogar Polski. Count Xavier Branicki was so nostalgic about Polish hunting that, when he settled in France in the mid-19th century and restored his estate at the Château de Montrésor, he ordered a brace of Ogar Polski hounds from the Polish breeder and "szlachcic" Piotr Orda.
The szlachta differed in many respects from the nobility in other countries. The most important difference was that, while in most European countries the nobility lost power as the ruler strove for absolute monarchy, in the Polish-Lithuanian Commonwealth a reverse process occurred: the nobility actually gained power at the expense of the king, and enabled the political system to evolve into an oligarchy.
Szlachta members were also proportionately more numerous than their equivalents in all other European countries, constituting 6–12% of the entire population; by contrast, nobles in other European countries, except for Spain, amounted to a mere 1–3%. Most of the szlachta were "minor nobles" or smallholders, and in Lithuania the minor nobility made up as much as three-quarters of the total szlachta population. By the mid-16th century the szlachta class consisted of at least 500,000 persons (some 25,000 families) and was possibly a million strong in 1795.
The proportion of nobles in the population varied across regions. In the 16th century, the highest proportions of nobles lived in the Płock Voivodeship (24.6%) and in Podlachia (26.7%), while Galicia had numerically the largest szlachta population. In districts such as Wizna and Łomża, the szlachta constituted nearly half of the population. The regions with the lowest percentage of nobles were the Kraków Voivodeship (1.7%), Royal Prussia (3%) and the Sieradz Voivodeship (4.6%). Before the Union of Lublin, inequality among nobles in terms of wealth and power was far greater in the Grand Duchy of Lithuania than in the Polish Kingdom. The further south and east one went, the more the territory was dominated by magnate families and other nobles. In the Lithuanian and Ruthenian palatinates, poor nobles were more likely to rent smallholdings from magnates than to own land themselves.
It has been said that the ruling elites were the only socio-political milieu to whom a sense of national consciousness could be attributed. All szlachta members, irrespective of their cultural/ethnic background, were regarded as belonging to a single "political nation" within the Commonwealth. Arguably, a common culture, the Catholic religion and the Polish language were seen as the main unifying factors in the dual state. Prior to the Partitions there was said to have been no Polish national identity as such. Only szlachta members, irrespective of their ethnicity or culture of origin, were considered as "Poles".
Despite polonisation in Lithuania and Ruthenia in the 17th and 18th centuries, a large part of the lower szlachta managed to retain its cultural identity in various ways. Due to poverty, most of the local szlachta never had access to formal education or to Polish-language teaching and hence could not be expected to self-identify as Poles. It was common even for wealthy and in practice polonised szlachta members still to refer to themselves as Lithuanian ("Litwin") or Ruthenian ("Rusyn").
According to Polish estimates from the 1930s, 300,000 members of the common nobility ("szlachta zagrodowa") inhabited the subcarpathian region of the Second Polish Republic, out of 800,000 in the whole country. 90% of them were Ukrainian-speaking and 80% were Ukrainian Greek Catholics. In other parts of Ukraine with a significant szlachta population, such as the Bar or Ovruch regions, the situation was similar despite russification and earlier polonization.
However, the era of sovereign rule by the szlachta ended earlier than in other countries (excluding France), in 1795 (see Partitions of Poland). Since then their legitimacy and fate depended on the legislation and policies of the Russian Empire, the Kingdom of Prussia and the Habsburg Monarchy. Their privileges became increasingly limited and were ultimately dissolved by the March Constitution of Poland in 1921.
There were a number of avenues to upward social mobility and the attainment of nobility. The szlachta was not rigidly exclusive or closed as a class, but according to heraldic sources, the total number of legal ennoblements issued between the 14th and mid-18th century is estimated at approximately 800. This is an average of about two ennoblements per year, or 0.00000014–0.000001 of the historical population.
Richard Holt Hutton and Walter Bagehot, two English journalists writing on the subject in 1864, and the sociologist and historian Jerzy Ryszard Szacki both commented on this openness. Others, among them the historian Adam Zamoyski, assert that the szlachta were not a social class but a caste.
Low-born individuals, including townsfolk ("mieszczanie") and peasants ("chłopi"), but not Jews ("Żydzi"), could and did rise to official ennoblement in Commonwealth society, although Charles-Joseph, 7th Prince of Ligne, while trying to obtain Polish noble status, is supposed to have said in 1784: "It is easier to become a duke in Germany than to be counted among Polish nobles."
Each "szlachcic" was said to hold enormous potential influence over the country's politics, far greater than that enjoyed by the citizens of modern democratic countries. Between 1652 and 1791, any nobleman could potentially nullify all the proceedings of a given "sejm" or "sejmik" by exercising his individual right of "liberum veto" - Latin for "I do not allow" - except in the case of a confederated sejm or confederated sejmik.
In old Poland, a nobleman could only marry a noblewoman, as intermarriage between "castes" was fraught with difficulties (endogamy); but children of a legitimate marriage followed the condition of the father, never the mother, and therefore only the father transmitted his nobility to his children (see patrilineality). A noblewoman married to a commoner could not transmit her nobility to her husband or their children. Any individual could attain ennoblement ("nobilitacja") for special services to the state. A foreign noble might be naturalized as a Polish noble through a mechanism called the "indygenat", certified by the king; later, from 1641, this could only be done by a general sejm. By the eighteenth century all these trends contributed to the great increase in the proportion of szlachta in the total population.
In theory all szlachta members were social equals and were formally legal peers. Those who held civic appointments were more privileged, but their roles were not hereditary; those who held honorary appointments were superior in the hierarchy, but these positions were granted only for a lifetime. Some tenancies became hereditary and went with both privilege and title. Nobles who were not direct lessees of the Crown but held land from other lords were only peers "de iure". The poorest enjoyed the same rights as the wealthiest magnate. The exceptions were a few symbolically privileged families such as the Radziwiłł, Lubomirski and Czartoryski, who held honorary aristocratic titles bestowed by foreign courts and recognised in Poland, which granted them the use of titles such as "Prince" or "Count" (see also The Princely Houses of Poland). All other szlachta simply addressed each other by their given name or as "Panie bracie" ("Brother, Sir") or the feminine equivalent. The other forms of address were "Illustrious and Magnificent Lord", "Magnificent Lord", "Generous Lord" or "Noble Lord", in descending order, or simply "His/Her Grace Lord/Lady".
The notion that all Polish nobles were social equals, regardless of their financial status or offices held, is enshrined in a traditional Polish adage: "Szlachcic na zagrodzie równy wojewodzie", equivalent to: "The noble on his farm plot is the voivode's equal."
According to their wealth, the nobility were divided into a number of strata, ranging from the great magnates down to the landless.
Landed szlachta ("ziemianie" or "ziemiaństwo") meant any nobleman who owned land: magnates, the lesser nobility, and those who owned at least part of a village. Since titular manorial lordships were also open to burghers of certain privileged cities with a royal charter, not all landed gentry had a hereditary title to noble status.
Coats of arms were very important to the szlachta. Its heraldic system evolved together with neighbouring states in Central Europe, while differing in many ways from the heraldry of other European countries. Polish Knighthood had its counterparts, links and roots in Moravia, e.g. Poraj coat of arms and in Germany, e.g. Junosza coat of arms.
Families who had a common origin would also share a coat of arms. They would also share their crest with families adopted into the clan. Sometimes unrelated families would be falsely attributed to a clan on the basis of similarity of crests. Some noble families inaccurately claimed clan membership. The number of coats of arms in this system was comparatively low and did not exceed 200 in the late Middle Ages. There were 40,000 in the late 18th century.
At the Union of Horodło, forty-seven families of Catholic Lithuanian lords and boyars were adopted by Polish szlachta families and allowed to use Polish coats of arms.
The tradition of differentiating between a coat of arms proper and a lozenge granted to women did not develop in Poland; by the 17th century, men and women alike invariably inherited the coat of arms of their father. When mixed marriages between commoners and members of the nobility developed after the partitions, children could, as a courtesy, claim a coat of arms from the distaff side, but this was only tolerated and could not be passed on to the next generation. The brisure was rarely used. All children would inherit the coat of arms and title of their father. This partly accounts for the relatively large proportion of Polish families with a claim to a coat of arms by the 18th century. Another factor was the arrival of titled foreign settlers, especially from the German lands and the Habsburg Empire.
Illegitimate children could adopt the mother's surname and title by the consent of the mother's father, but would sometimes be adopted and raised by the natural father's family, thereby acquiring the father's surname, though not the title or arms.
The "szlachta"s prevalent ideology, especially in the 17th and 18th centuries, was manifested in its adoption of "Sarmatism", a word derived from the legend that its origins reached back to the ancient tribe of an Iranic people, the Sarmatians. This nostalgic belief system embracing chivalry and courtliness, became an important part of "szlachta" culture and affected all aspects of their lives. It was popularized by poets who exalted traditional village life, peace and pacifism. It was also manifested in oriental-style apparel, the "żupan", "kontusz", "sukmana", "pas kontuszowy", "delia" and made the scimitar-like "szabla" a near-obligatory item of everyday "szlachta" apparel. Sarmatism served to integrate a nobility of disparate provenance, as it sought to create a sense of national unity and pride in the szlachta's "Golden Liberty" "złota wolność". It was marked furthermore by a linguistic affectation among the "szlachta" of mixing Polish and Latin vocabulary, producing a form of Polish Dog Latin peppered with "macaronisms" in everyday conversation.
Prior to the Reformation, the Polish nobility were either Roman Catholic or Orthodox, with a small group of Muslims (see, for example, Haroun Tazieff, of princely Tartar extraction). Many families, however, went on to adopt the Reformed Christian faith. Jan Łaski, or "Johannes Alasco" (1499-1560), was a cleric whose uncle, the eponymous Jan Łaski (1456-1531), was Grand Chancellor of the Crown, Archbishop of Gniezno and Primate of Poland. The nephew was an early convert to Calvinism and had a hand in implementing the Reformation in England (c. 1543-1555), where he is known as "John Laski".
After the Counter-Reformation, when the Roman Catholic Church regained power in Poland, the nobility became almost exclusively Catholic. Approximately 45% of the population were Roman Catholic or members of Protestant denominations, 36% were Greek Catholic, and 4% were Orthodox, some being members of the Armenian Apostolic, Armenian Catholic or Georgian Orthodox Churches. The remaining 15% were made up of a substantial minority of Jews. In the 18th century, the followers of Jacob Frank were ennobled as a result of their conversion to Roman Catholicism. Although Judaism "per se" had not been a bar to noble status, in practice there were laws that favoured religious conversion to Christianity by rewarding it with ennoblement (see: Neophyte). A later example, in 1839, of certifying the noble status of converts is the Wołowski family with the Bawół coat of arms.
"a." Estimates of the proportion of szlachta vary widely: 10–12% of the total population of historic Polish–Lithuanian Commonwealth, around 8% of the total population in 1791 (up from 6.6% in the 16th century) or 6-8%. | https://en.wikipedia.org/wiki?curid=29050 |
Syntactic sugar
In computer science, syntactic sugar is syntax within a programming language that is designed to make things easier to read or to express. It makes the language "sweeter" for human use: things can be expressed more clearly, more concisely, or in an alternative style that some may prefer.
For example, many programming languages provide special syntax for referencing and updating array elements. Abstractly, an array reference is a procedure of two arguments, an array and a subscript vector, which could be expressed as get_array(Array, vector(i, j)). Instead, many languages provide syntax such as Array[i, j]. Similarly, an array element update is a procedure of three arguments, for example set_array(Array, vector(i, j), value), but many languages provide syntax such as Array[i, j] = value.
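To make the correspondence concrete, here is a minimal Java sketch. The bracket syntax is the sugared form, while the standard library class java.lang.reflect.Array happens to expose the same operations as ordinary procedures taking (array, index) and (array, index, value); the class name ArraySugar is ours, chosen for the example:

    import java.lang.reflect.Array;

    public class ArraySugar {
        public static void main(String[] args) {
            int[] a = new int[4];

            // Sugared forms: special bracket syntax built into the language.
            a[2] = 7;
            int x = a[2];

            // Desugared view: element access as plain procedures,
            // getInt(array, index) and setInt(array, index, value).
            Array.setInt(a, 3, 9);
            int y = Array.getInt(a, 3);

            System.out.println(x + " " + y); // prints "7 9"
        }
    }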
A construct in a language is called "syntactic sugar" if it can be removed from the language without any effect on what the language can do: functionality and expressive power will remain the same.
Language processors, including compilers and static analyzers, often expand sugared constructs into more fundamental constructs before processing, a process sometimes called "desugaring".
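Java's enhanced for loop is a convenient illustration: the language specification defines it in terms of an explicit iterator loop, and a compiler effectively performs the rewrite shown below. The sketch places both forms side by side; the second loop is only a close approximation of the compiler's internal expansion, and the class name Desugaring is ours:

    import java.util.Iterator;
    import java.util.List;

    public class Desugaring {
        public static void main(String[] args) {
            List<String> words = List.of("foo", "bar", "baz");

            // Sugared: the enhanced for loop.
            for (String w : words) {
                System.out.println(w);
            }

            // Desugared: roughly what the compiler expands the loop into.
            for (Iterator<String> it = words.iterator(); it.hasNext(); ) {
                String w = it.next();
                System.out.println(w);
            }
        }
    }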
The term "syntactic sugar" was coined by Peter J. Landin in 1964 to describe the surface syntax of a simple ALGOL-like programming language which was defined semantically in terms of the applicative expressions of lambda calculus, centered on lexically replacing λ with "where".
Later programming languages, such as CLU, ML and Scheme, extended the term to refer to syntax within a language which could be defined in terms of a language core of essential constructs; the convenient, higher-level features could be "desugared" and decomposed into that subset. This is, in fact, the usual mathematical practice of building up from primitives.
Building on Landin's distinction between essential language constructs and syntactic sugar, in 1991, Matthias Felleisen proposed a codification of "expressive power" to align with "widely held beliefs" in the literature. He defined "more expressive" to mean that without the language constructs in question, a program would have to be completely reorganized.
Some programmers feel that these syntax usability features are either unimportant or outright frivolous. Notably, special syntactic forms make a language less uniform and its specification more complex, and may cause problems as programs become large and complex. This view is particularly widespread in the Lisp community, as Lisp has very simple and regular syntax, and the surface syntax can easily be modified.
For example, Alan Perlis once quipped in "Epigrams on Programming", in a reference to bracket-delimited languages, that "syntactic sugar causes cancer of the semicolon".
The metaphor has been extended by coining the term "syntactic salt", which indicates a feature designed to make it harder to write bad code. Specifically, syntactic salt is a hoop that programmers must jump through just to prove that they know what is going on, rather than to express a program action. For example, in Java and Pascal, assigning a float value to a variable declared as an int without additional syntax explicitly stating that intention will result in a compile error, while C and C++ will automatically truncate any floats assigned to an int. However, this is a matter of semantics rather than syntax.
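The Java half of that comparison can be shown in a few lines. The cast is the "salt": a small but mandatory piece of syntax that proves the truncation is intentional (the class name SaltDemo is ours):

    public class SaltDemo {
        public static void main(String[] args) {
            float f = 3.75f;

            // int i = f;      // compile error: possible lossy conversion from float to int
            int i = (int) f;   // the explicit cast states the intention to truncate

            System.out.println(i); // prints "3"
        }
    }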
In C#, when hiding an inherited class member, a compiler warning is issued unless the "new" keyword is used to specify that the hiding is intentional. To avoid potential bugs owing to the similarity of the switch statement syntax with that of C or C++, C# requires a "break" for each non-empty "case" label of a "switch" (unless "goto", "return", or "throw" is used), even though it does not allow implicit "fall-through". (Using "goto" and specifying the subsequent label produces a C/C++-like "fall-through".)
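The hazard that C#'s mandatory "break" rules out is easy to reproduce in a language that kept C-style implicit fall-through, such as Java; this small sketch (method and class names are ours) shows the classic bug:

    public class FallThrough {
        static String describe(int n) {
            String result = "";
            switch (n) {
                case 1:
                    result = "one";
                    // Missing "break": control falls through into the next case,
                    // which is exactly what C#'s required break/goto/return prevents.
                case 2:
                    result = "two";
                    break;
                default:
                    result = "many";
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(describe(1)); // prints "two", not "one"
            System.out.println(describe(2)); // prints "two"
        }
    }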
Syntactic salt may defeat its purpose by making the code unreadable and thus worsen its quality – in extreme cases, the essential part of the code may be shorter than the overhead introduced to satisfy language requirements.
An alternative to syntactic salt is generating compiler warnings when there is high probability that the code is a result of a mistake – a practice common in modern C/C++ compilers.
Other extensions are "syntactic saccharin" and "syntactic syrup", meaning gratuitous syntax that does not make programming any easier.
Data types with core syntactic support are said to be "sugared types."
Common examples include quote-delimited strings, curly braces for object and record types, and square brackets for arrays.
Sonic the Hedgehog (character)
Sonic the Hedgehog is the protagonist of the "Sonic the Hedgehog" video game series released by Sega, as well as numerous spin-off comics, animations, and other media. Sonic is a blue anthropomorphic hedgehog who can run at supersonic speeds and curl into a ball, primarily to attack enemies. In most games, Sonic must race through levels, collecting power-up rings and avoiding obstacles and enemies.
Programmer Yuji Naka and artist Naoto Ohshima are generally credited with creating Sonic. Most of the games are developed by Sonic Team. The original "Sonic the Hedgehog" (1991) was released to provide Sega with a mascot to rival Nintendo's flagship character Mario. Sonic was redesigned by Yuji Uekawa for "Sonic Adventure" (1998), with a more mature look designed to appeal to older players.
Sonic is one of the world's best-known video game characters and a gaming icon. His series had sold more than 80 million copies by 2011. In 2005, Sonic was one of the first game character inductees into the Walk of Game alongside Mario and Link.
While Sega was seeking a flagship series to compete with Nintendo's Mario series, several character designs were submitted by its research and development department. Many results came forth from their experiments with character design, including an armadillo (who was later developed into Mighty the Armadillo), a dog, a Theodore Roosevelt look-alike in pajamas (who would later be the basis of Dr. Robotnik/Eggman's design), and a rabbit (who would use its extendable ears to collect objects, an aspect later incorporated in "Ristar"). Naoto Ohshima took some of these internal designs with him on a trip to New York City and sought feedback by asking random passersby at Central Park their opinions; of the designs, the spiky teal hedgehog, initially codenamed "Mr. Needlemouse", led this informal poll, followed by Eggman and the dog character. Ohshima felt that people selected it because it "transcends race and gender and things like that". On return to Japan, Ohshima pitched this to the department, and the hedgehog was ultimately selected as the new mascot.
The detailed design of Sonic was aimed to be something that could be easily drawn by children and be familiar, as well as exhibit a "cool" attitude, representative of the United States at the time. Sonic's blue pigmentation was chosen to match Sega's cobalt blue logo, and his shoes evolved from a design inspired by Michael Jackson's boots with the addition of the color red, which was inspired by both Santa Claus and the contrast of those colors on Jackson's 1987 album "Bad"; his personality was based on then-presidential candidate and later President of the United States Bill Clinton's "Get it done" attitude during the 1992 presidential campaign. To help sell the idea to Sega's higher-ups, Ohshima pitched the concept framed by a fictional fighter pilot who had earned the name "Hedgehog" due to his spiky hair, and had decorated his plane with images of Sonic. When this pilot retired, he married a children's book author, who wrote stories about the Sonic character, the first of which became the plot for the first "Sonic" game; Ohshima stated that this influence can be seen in the logo of the game, which features Sonic in a pilot's wing emblem.
The origins of "Sonic" can be traced farther back to a tech demo created by Yuji Naka, who had developed an algorithm that allowed a sprite to move smoothly on a curve by determining its position with a dot matrix. Naka's original prototype was a platform game that involved a fast-moving character rolling in a ball through a long winding tube, and this concept was subsequently fleshed out with Oshima's character design and levels conceived by designer Hirokazu Yasuhara.
Sonic was created without the ability to swim because of Yuji Naka's mistaken assumption that hedgehogs cannot swim. A group of fifteen people started working on the first "Sonic the Hedgehog" game and renamed themselves Sonic Team. The game's soundtrack was composed by Masato Nakamura of the band Dreams Come True. Sega sponsored the group's "Wonder 3" tour, painting Sonic on the tour bus, distributing pamphlets advertising the game, and having footage of the game broadcast above the stage prior to its release. The original concepts gave Sonic fangs and put him in a band with a human girlfriend named Madonna. However, a team from Sega of America, led by Madeline Schroeder, who calls herself "Sonic's mother", "softened" the character up for an American audience by removing those elements. This sparked a heated dispute with Sonic Team. Naka later admitted that it was probably for the best.
Sonic's appearance varies greatly depending on the medium and the style in which he is drawn. In the video games, Sonic's original design by Ohshima was short and round, with short quills, a round body, and no visible irises. Artwork featuring this design and drawn by Akira Watanabe was displayed on the package artwork for "Sonic the Hedgehog". Sonic's proportions changed for the release of "Sonic the Hedgehog 2" on the Mega Drive: his head-to-height ratio was changed from 1:2 to 1:2.5. For the 1998 release of "Sonic Adventure", Sonic was redesigned by Yuji Uekawa as a character with longer legs and a less spherical body, longer and more drooping quills, and green-colored irises. For the 2006 game, Sonic was redesigned to look more adult-like and taller, to appeal to the next generation of players; this was also done because Sonic would interact with humans more often and his design was supposed to fit. An alternative "Werehog" form was introduced in "Sonic Unleashed", placing more emphasis on Sonic's melee skills rather than his speed. Although Tetsu Katano acknowledged the large negative fan response to the Werehog, he believes it could return in a future game.
Bob Rafei, CEO of "Sonic Boom" developer Big Red Button, stated that "Sonic Boom" Sonic is "very different ... both in tone and art direction".
Different actors have provided Sonic's voice in his game appearances. Sonic originally had a few voice samples in "Sonic CD", provided by designer Masato Nishimura. Sonic's first true voice actor was Takeshi Kusao, for the arcade game "SegaSonic the Hedgehog"; Junichi Kanemaru has voiced the role since the release of "Sonic Adventure". Kanemaru also voices Sonic in "Sonic X", "Sonic Boom", and the Japanese dub of the "Wreck-It Ralph" films. In "Sonic Unleashed", Sonic was voiced by Tomokazu Seki while in Werehog form.
Starting with "Sonic Adventure", Sonic was voiced in English by Ryan Drummond. Sonic was cast to Jason Griffith starting from "Sonic X", with Griffith voicing Sonic within the games starting with "Shadow The Hedgehog" in 2005. Griffith was later replaced by Roger Craig Smith, starting with "Sonic Free Riders" and "Sonic Colors" in November 2010. Actor Jaleel White voiced the character in all of the DiC-produced animated series: "Adventures of Sonic the Hedgehog", "Sonic SatAM", and "Sonic Underground" as well as the Christmas special, "Sonic Christmas Blast". In "Underground", White also voiced Sonic's brother and sister, Manic and Sonia. Actor Ben Schwartz voiced the character in the Paramount Pictures feature film, which was released on February 14, 2020.
Sonic's first appearance in a video game was in the 1991 arcade racing game "Rad Mobile", as a decorative ornament hanging from a rearview mirror. His first playable appearance was in the platform game "Sonic the Hedgehog" for the Sega Mega Drive/Genesis, which also introduced his nemesis Dr. Robotnik. His two-tailed fox friend Tails joined him in the game's 1992 sequel, "Sonic the Hedgehog 2". "Sonic CD", released in 1993, introduced Sonic's self-appointed girlfriend Amy Rose and recurring robotic doppelgänger Metal Sonic as Sonic traveled through time to ensure a good future for the world. "Sonic 3" and its direct sequel "Sonic & Knuckles", both released in 1994, saw Sonic and Tails battle Robotnik again, with the additional threat of Knuckles, who is tricked by Robotnik into thinking Sonic is a threat. "Sonic the Hedgehog 4" (2010–2012) continues where the story of "Sonic 3" left off, with Sonic as the only playable character, and was released in episodic installments. The second episode sees the return of both Tails as Sonic's sidekick and Metal Sonic as a recurring enemy.
Other two-dimensional platformers starring Sonic include "Sonic Chaos" (1993), "Sonic Triple Trouble" (1994), "Sonic Blast" (1996), "Sonic the Hedgehog Pocket Adventure" (1999), "Sonic Advance" (2001), "Sonic Advance 2" (2002), "Sonic Advance 3" (2004), "Sonic Rush" (2005), "Sonic Rush Adventure" (2007), "Sonic Colors" (2010), and "Sonic Generations" (2011), all of which were released for handheld consoles.
"Sonic Adventure" (1998) was Sonic Team's return to the character for a major game. It featured Sonic returning from vacation to find the city of Station Square under attack by a new foe named Chaos, under the control of Dr. Robotnik (now known as Dr. Eggman). It was also the first Sonic game to feature a complete voice-over. "Sonic Adventure 2" (2001) placed Sonic on-the-run from the military (G.U.N.) after being mistaken for Shadow the Hedgehog. "Sonic Heroes" (2003) featured Sonic teaming up with Tails and Knuckles, along with other character teams like Team Rose and Chaotix, against the newly rebuilt Metal Sonic, who had betrayed his master with intentions of world domination. "Sonic the Hedgehog" (2006) features Sonic in the city of water, "Soleanna," where he must rescue Princess Elise from Dr. Eggman while trying to avoid a new threat to his own life, Silver the Hedgehog. He is the only playable character in "Sonic Unleashed" (2008), in which he unwillingly gains a new personality, "Sonic the Werehog," the result of Sonic being fused with Dark Gaia's power. He gains strength and flexibility in exchange for his speed, and new friends including a strange creature named Chip who helps him along the way. In "Sonic Colors" (2010), Eggman tries to harness the energy of alien beings known as "Wisps" for a mind-control beam. "Sonic Generations" (2011) features two playable incarnations of Sonic: the younger "classic" Sonic, whose gameplay is presented in a style reminiscent of the Mega Drive/Genesis games, and present-day "modern" Sonic, who uses the gameplay style present in "Unleashed" and "Colors", going through stages from past games to save their friends. "Sonic Generations" features various theme songs including modern and retro versions that are able to be selected from throughout Sonic's twenty-year history. In April 2013, Sega announced that "Sonic Lost World" would launch in October 2013 for the Wii U and Nintendo 3DS.
"Sonic and the Secret Rings" (2007) features Sonic in the storybook world of "One Thousand and One Nights". A sequel, "Sonic and the Black Knight" (2009), continued the storybook theme, this time taking place within the realm of the Arthurian legend.
Sonic has also been featured in games of many genres other than 2D and 3D platformers. These include "Sonic Spinball" (1993) and "Sonic Labyrinth" (1995), the racing games "Sonic Drift" (1994), "Sonic Drift 2" (1995), "Sonic R" (1997), "Sonic Riders" (2006), "Sonic Rivals" (2006), "Sonic Rivals 2" (2007), "Sonic Riders: Zero Gravity" (2008), and "Sonic Free Riders" (2010), the fighting games "Sonic the Fighters" (1996) and "Sonic Battle" (2003), the mobile game "Sonic Jump" (2005), and the role-playing video game "Sonic Chronicles: The Dark Brotherhood" (2008).
Video games such as "Dr. Robotnik's Mean Bean Machine" (1993), "Knuckles' Chaotix" (1995), "Tails' Skypatrol" (1995), "Tails Adventure" (1995), and "Shadow the Hedgehog" (2005) starred supporting characters of the "Sonic" series, although Sonic himself cameos in most of them.
Sonic has made many cameo appearances in other games, most notably other Sega titles, such as being a power-up in "Billy Hatcher and the Giant Egg", walking around the main hallway in "Phantasy Star Universe" on the anniversary of his first game's release (June 23), and appearing in the 2008 remake of "Samba de Amigo". He is also a playable character in "Christmas NiGHTS into Dreams". Nintendo, Sega's former rival, made reference to Sonic in "Donkey Kong Country 2: Diddy's Kong Quest" by showing Sonic's shoes next to a trash can that reads "No Hopers" on the Cranky's Video Game Heroes screen.
Sonic has appeared in several crossover games, including playable appearances in "Super Smash Bros. Brawl" (2008), "Super Smash Bros. for Nintendo 3DS and Wii U" (2014), and "Super Smash Bros. Ultimate" (2018). He appears in the crossover party game "Mario & Sonic at the Olympic Games" and in its sequels "Mario & Sonic at the Olympic Winter Games", "Mario & Sonic at the London 2012 Olympic Games", "Mario & Sonic at the Sochi 2014 Olympic Winter Games", "Mario & Sonic at the Rio 2016 Olympic Games" and "Mario & Sonic at the Olympic Games Tokyo 2020". Sonic is also a playable character in all three "Sega Superstars" games.
In June 2016, it was announced that Sonic would be a playable character in the second wave of characters coming to "Lego Dimensions".
The first animated series, "Adventures of Sonic the Hedgehog", aired in 1993. It was a comical take on Sonic and Tails' adventures battling Robotnik, filled with slapstick humor and loosely based upon the plot of the games. Pierre De Celles, an animator who worked on "Adventures of Sonic the Hedgehog", described the show as "fun and humorous." Also premiering in 1993 was "Sonic the Hedgehog". It was a more dramatic series which portrayed Sonic as a member of a band of Freedom Fighters that fight to free their world from the evil dictator, Dr. Robotnik.
In 1996, a two-part OVA, "Sonic the Hedgehog", was released in Japan. For the American release, the two episodes were combined and released as "Sonic the Hedgehog: The Movie" by ADV Films.
A third American animated series, "Sonic Underground", debuted in 1999. It introduced Sonic's siblings, Sonia the Hedgehog and Manic the Hedgehog, and Sonic's mother, Queen Aleena, who together with her children must defeat Robotnik and rule Mobius as the "Council of Four". The show ran for one season in syndication on the Bohbot Kids Network block before it was cancelled.
A new series, "Sonic X," began airing in 2003. The 78-episode anime series detailed Sonic's struggle to protect the Chaos Emeralds from Eggman and new villains. Featuring a cross-world and interstellar journey, "Sonic X" depicted Sonic and his friend Chris Thorndyke in quests to save the world. "Sonic: Night of the Werehog" is a short film produced by Sega's VE Animation Studio, released to coincide with the release of "Sonic Unleashed". In the film, Sonic and Chip enter a haunted house, and must deal with two ghosts trying to scare them. Sonic also makes multiple cameo appearances in the Disney films, "Wreck-It Ralph" and its sequel "Ralph Breaks the Internet".
In October 2013, Sega announced a new animated series, "Sonic Boom". The show ran for 104 11-minute episodes between 2014 and 2017 on Cartoon Network in the US and Canal J and Gulli in France. Sonic makes a guest appearance in the "OK K.O.! Let's Be Heroes" episode "Let's Meet Sonic" and the "Hi-sCoool! SeHa Girls" episode "Eggman vs. Sonic with the Sega Hard Girls".
On June 10, 2014, a film based on the "Sonic" series was announced. Titled simply "Sonic the Hedgehog", it was produced by Neal Moritz under his Original Film banner alongside Takeshi Ito and Mie Onishi, with Toby Ascher executive producing. The film was written by Pat Casey and Josh Miller and produced as a joint venture between Paramount Pictures and Marza Animation Planet. The film is a live-action and CGI hybrid. It was filmed in 2018, with a release date initially set for November 8, 2019. Upon the release of the film's first trailer in late April 2019, however, Sonic's appearance was heavily criticized, leading the director, Jeff Fowler, to announce a redesign of the character and pushing the release date back to February 14, 2020. The second trailer, released on November 12, 2019, featured the redesign and drew a far more positive response from fans and critics alike.
Sonic's first comic appearance was in a promotional comic printed in "Disney Adventures" magazine (and also given away as a free pull-out with a copy of "Mean Machines" magazine), which established a backstory for the character involving the origin of his color and abilities and the transformation of the kindly scientist Dr. Ovi Kintobor into the evil Dr. Ivo Robotnik. Numerous British publications, including the Sega handbook "Stay Sonic" (1993), four novels published by Virgin Books (1993–1994), and the comic book "Sonic the Comic" (1993–2001), published by Fleetway Publications/Egmont Publishing, used this premise as their basis.
The American comics published by Archie Comics, "Sonic the Hedgehog" (1993–2017), "Sonic X" (2005–2008), and "Sonic Universe" (2009–2017), are based on the settings established by earlier animated TV series: the ABC "SatAM" cartoon, the "Sonic X" anime, and an expansion to the series, respectively. The first of these is the second longest-running licensed comic series in the history of American comic books, second only to Marvel's Conan series (first issue released in 1970).
In France, two comic books named "Sonic Adventures" were published by Sirène in 1994. "Guinness World Records" recognized the Sonic comic as the longest-running comic based on a video game. Archie Comics also released a twelve-part crossover with Mega Man beginning in 2013.
Sonic has also been featured in two different manga. One series was simply called "Sonic the Hedgehog", and featured a story about a normal hedgehog boy named Nicky Parlouzer who can change into Sonic. The other series was a compilation of short stories and was separated into two volumes, the first being called "Dash and Spin", and the other called "Super Fast Sonic!!".
According to various official materials from Sega, Sonic is described as a character who is "like the wind": a drifter who lives as he wants, and makes life a series of events and adventures. Sonic hates oppression and staunchly defends freedom. Although he is mostly quick-witted and easygoing, he has a short temper and is often impatient with slower things. Sonic is a habitual daredevil hedgehog who is honest, loyal to friends, keeps his promises, and dislikes tears. He took the young Tails under his wing like a little brother, and is uninterested in marital proposals from Amy Rose. In times of crisis, he focuses intensely on the challenge as if his personality had undergone an astonishing change.
Sonic's greatest strength is his running speed; he is known in the games' universe as the world's fastest hedgehog. Many of his abilities are variations on the tendency of hedgehogs to roll into tight balls for protection, with the addition of spinning his body. Since his introduction in 1991's "Sonic the Hedgehog", Sonic's primary offensive maneuver has been the basic "Spin Attack" (or "Sonic Spin Attack"). Later games in the series expanded on this basic attack, and two of these enhancements have become mainstays: the Spin Dash, introduced in "Sonic the Hedgehog 2", in which Sonic spins on the spot before blasting off at full speed, and the Homing Attack, officially introduced in "Sonic Adventure", in which Sonic dashes toward a target in midair. Sonic's only weakness is that he cannot swim, sinking like a rock if plunged into a deep body of water; the only exception is that he can swim in the Sonic the Hedgehog Adventure Gamebooks. When the seven Chaos Emeralds are collected and used, Sonic transforms into "Super Sonic", a faster and invulnerable version of himself that can fly. In Super Sonic form, Sonic's irises turn red and his body becomes golden.
As Sega's mascot and one of the key reasons for the company's success during the 16-bit era of video game consoles, Sonic is one of the most famous video game characters in the world. In 1993, Sonic became the first video game character to have a balloon in Macy's Thanksgiving Day Parade. In 1996, Sonic was also the first video game character to be seen in a Rose Parade. Sonic was one of the three game characters inducted in the inaugural Walk of Game class in 2005, along with former rivals Mario and Link (both from Nintendo). One of a class of genes involved in fruit fly embryonic development, the hedgehog genes, has been named "sonic hedgehog" after him. He is also named in the song "Abiura di me" by the Italian rapper Caparezza.
On the other hand, Sonic's apparent romantic relationship with Princess Elise in the 2006 video game drew major criticism. Sonic's characterization and relationship with Eggman in "Sonic Boom" earned a positive response from Patrick Lee of "The A.V. Club" and Emily Ashby of Common Sense Media.
Sonic has also been used as a symbol for Sega's various sponsorships. Between 1993 and 1997, Sega sponsored the JEF United Ichihara Chiba football team, during which period Sonic appeared on the team's uniform. During the 1993 Formula One championship, Sega sponsored the Williams Grand Prix team, which won the Constructors' Championship that year, with the team's lead driver, Alain Prost, winning the Drivers' Championship. Sonic was featured on the cars and helmets, and rivals McLaren used to paint a squashed hedgehog on their cars after winning races over Williams. The 1993 European Grand Prix featured a Sonic balloon and Sonic billboards. In 1992, according to Sega of America marketing director Al Nilsen, Sonic was found to be more recognizable than Mickey Mouse among six- to eleven-year-olds, based on the characters' respective Q Scores, although this claim could not be confirmed by Q Score developer Marketing Evaluations, Inc.
Nintendo Power listed Sonic as their sixth-favorite hero, stating that while he was originally Mario's nemesis, he seems at home on Nintendo platforms, and adding that he has remained one of gaming's greatest icons. In 2004, the character won a Golden Joystick Award for "The Sun Ultimate Gaming Hero". The character's popularity declined in the mid-1990s, and Sonic failed to place in "Electronic Gaming Monthly"s Coolest Mascot of 1996 in either the editors' or readers' picks, beaten not only by competitors Mario and Crash Bandicoot but also by Sega's own Nights. However, in a 2008 poll of 500 people, Sonic was voted the most popular video game character in the UK with 24% of the vote, while his old rival Mario came second with 21%. Later that year, Sonic was ranked as the most iconic video game character in an MSN rankings list. In 2011, "Empire" ranked him the 14th-greatest video game character, and he was voted 10th out of the top 50 video game characters of all time in the "Guinness World Records" 2011 Gamers' Edition. Sonic ranked ninth on GameDaily's Top 10 Smash Bros characters list. GameDaily also listed his "next-generation stumble" in their list of video game characters' worst moments, citing his relationship with a human female as one of its worst aspects.
Ken Balough, Sega's former associate brand manager, said that Sonic's appeal endured because the character is "a gaming legend, first and foremost" who originated "from a series of games that defined a generation in gaming history, and his iconic personality was the epitome of speed in the early ‘90s, pushing the limits of what gamers knew and expected from high-speed action and platforming games."
A Japanese team developing the Radio & Plasma Wave Investigation (RPWI) instrumentation for the upcoming "Jupiter Icy Moons Explorer" spacecraft, to be launched by ESA and Airbus in 2022, obtained Sega's approval to use Sonic as the mascot for the device.
An Internet meme called "Sanic", based on a poorly drawn Sonic, typically pairs one of Sonic's catchphrases with poor grammar. Sega's official Sonic Twitter account has made numerous references to it, and it appeared in official downloadable content for "Sonic Forces" on in-game shirts. The meme also appears as a drawing in the theatrical film.
Satyr
In Greek mythology, a satyr, also known as a silenos, is a male nature spirit with ears and a tail resembling those of a horse, as well as a permanent, exaggerated erection. Early artistic representations sometimes include horse-like legs, but, by the sixth century BC, they were more often represented with human legs. Comically hideous, they have mane-like hair, bestial faces, and snub noses and are always shown naked. Satyrs were characterized by their ribaldry and were known as lovers of wine, music, dancing, and women. They were companions of the god Dionysus and were believed to inhabit remote locales, such as woodlands, mountains, and pastures. They often attempted to seduce or rape nymphs and mortal women alike, usually with little success. They are sometimes shown masturbating or engaging in bestiality.
In classical Athens, satyrs made up the chorus in a genre of play known as a "satyr play", which was a parody of tragedy and was known for its bawdy and obscene humor. The only complete surviving play of this genre is "Cyclops" by Euripides, although a significant portion of Sophocles's "Ichneutae" has also survived. In mythology, the satyr Marsyas is said to have challenged the god Apollo to a musical contest and been flayed alive for his hubris. Though superficially ridiculous, satyrs were also thought to possess useful knowledge, if they could be coaxed into revealing it. The satyr Silenus was the tutor of the young Dionysus and a story from Ionia told of a "silenos" who gave sound advice when captured.
Over the course of Greek history, satyrs gradually became portrayed as more human and less bestial. They also began to acquire goat-like characteristics in some depictions as a result of conflation with the Pans, plural forms of the god Pan with the legs and horns of goats. The Romans identified satyrs with their native nature spirits, fauns. Eventually the distinction between the two was lost entirely. Since the Renaissance, satyrs have been most often represented with the legs and horns of goats. Representations of satyrs cavorting with nymphs have been common in western art, with many famous artists creating works on the theme. Since the beginning of the twentieth century, satyrs have generally lost much of their characteristic obscenity, becoming more tame and domestic figures. They commonly appear in works of fantasy and children's literature, in which they are most often referred to as "fauns".
The etymology of the name "satyr" is unclear, and several different etymologies have been proposed for it, including a possible Pre-Greek origin. Some scholars have linked the second part of the name to the root of the Greek word θηρίον ("thēríon"), meaning "wild animal". This proposal may be supported by the fact that Euripides at one point refers to satyrs as "theres". Another proposed etymology derives the name from an ancient Peloponnesian word meaning "the full ones", alluding to their permanent state of sexual arousal. Eric Partridge suggested that the name may be related to the root "sat-", meaning "to sow", which has also been proposed as the root of the name of the Roman god Saturn. Satyrs are usually indistinguishable from "silenoi", whose iconography is virtually identical. According to "Brewer's Dictionary of Phrase and Fable", the name "satyr" is sometimes derogatorily applied to a "brutish or lustful man". The term satyriasis refers to a medical condition in males characterized by excessive sexual desire; it is the male equivalent of nymphomania.
According to classicist Martin Litchfield West, satyrs and silenoi in Greek mythology are similar to a number of other entities appearing in other Indo-European mythologies, indicating that they probably go back, in some vague form, to Proto-Indo-European mythology. Like satyrs, these other Indo-European nature spirits are often human-animal hybrids, frequently bearing specifically equine or asinine features. Human-animal hybrids known as Kiṃpuruṣas or Kiṃnaras are mentioned in the "Rāmāyaṇa", an Indian epic poem written in Sanskrit. According to Augustine of Hippo (354 – 430 AD) and others, the ancient Celts believed in "dusii", which were hairy demons believed to occasionally take human form and seduce mortal women. Later figures in Celtic folklore, including the Irish "bocánach", the Scottish "ùruisg" and "glaistig", and the Manx "goayr heddagh", are part human and part goat. The lexicographer Hesychius of Alexandria (fifth or sixth century AD) records that the Illyrians believed in satyr-like creatures called "Deuadai". The Slavic "lešiy" also bears similarities to satyrs, since he is described as being covered in hair and having "goat's horns, ears, feet, and long clawlike fingernails."
Like satyrs, these similar creatures in other Indo-European mythologies are often also tricksters, mischief-makers, and dancers. The "lešiy" was believed to trick travelers into losing their way. The Armenian Pay(n) were a group of male spirits said to dance in the woods. In Germanic mythology, elves were also said to dance in woodland clearings and leave behind fairy rings. They were also thought to play pranks, steal horses, tie knots in people's hair, and steal children and replace them with changelings. West notes that satyrs, elves, and other nature spirits of this variety are a "motley crew" and that it is difficult to reconstruct a prototype behind them. Nonetheless, he concludes that "we can recognize recurrent traits" and that they can probably be traced back to the Proto-Indo-Europeans in some form.
On the other hand, a number of commentators have noted that satyrs are also similar to beings in the beliefs of ancient Near Eastern cultures. Various demons of the desert are mentioned in ancient Near Eastern texts, although the iconography of these beings is poorly attested. Beings possibly similar to satyrs called "śě’îrîm" are mentioned several times in the Hebrew Bible. "Śĕ’îr" was the standard Hebrew word for "he-goat", but it could also apparently sometimes refer to demons in the forms of goats. They were evidently subjects of veneration, because the Hebrew Bible forbids Israelites from making sacrificial offerings to them and mentions that a special cult was established for the "śě’îrîm" of Jeroboam I. Like satyrs, they were associated with desolate places and with some variety of dancing. One passage in Isaiah predicts, in Karen L. Edwards's translation: "But "wild animals" ["ziim"] will lie down there, and its houses will be full of "howling creatures" ["ohim"]; there ostriches will live, and there "goat-demons" ["śĕ’îr"] will dance." Similarly, another passage in Isaiah declares: ""Wildcats" ["ziim"] shall meet with "hyenas" ["iim"], "goat-demons" ["śĕ’îr"] shall call to each other; there too "Lilith" ["lilit"] shall repose and find a place to rest." "Śě’îrîm" were understood by at least some ancient commentators to be goat-like demons of the wilderness. In the Latin Vulgate translation of the Old Testament, "śĕ’îr" is translated as ""pilosus"", which also means "hairy". Jerome, the translator of the Vulgate, equated these figures with satyrs. Both satyrs and "śě’îrîm" have also been compared to the jinn of Pre-Islamic Arabia, who were envisioned as hairy demons in the forms of animals who could sometimes change into other forms, including human-like ones.
In archaic and classical Greek art, satyrs are shown with the ears and tails of horses. They walk upright on two legs, like human beings. They are usually shown with bestial faces, snub noses, and manelike hair. They are often bearded and balding. Like other Greek nature spirits, satyrs are always depicted nude. Sometimes they also have the legs of horses, but, in ancient art, including both vase paintings and in sculptures, satyrs are most often represented with human legs and feet.
Satyrs' genitals are always depicted as either erect or at least extremely large. Their erect phalli represent their association with wine and women, which were the two major aspects of their god Dionysus's domain. In some cases, satyrs are portrayed as very human-like, lacking manes or tails. As time progressed, this became the general trend, with satyrs losing aspects of their original bestial appearance over the course of Greek history and gradually becoming more and more human. In the most common depictions, satyrs are shown drinking wine, dancing, playing flutes, chasing nymphs, or consorting with Dionysus. They are also frequently shown masturbating or copulating with animals. In scenes from ceramic paintings depicting satyrs engaging in orgies, satyrs standing by and watching are often shown masturbating.
One of the earliest written sources for satyrs is the "Catalogue of Women", which is attributed to the Boeotian poet Hesiod. Here satyrs are born alongside the nymphs and Kouretes and are described as "good-for-nothing, prankster Satyrs". Satyrs were widely seen as mischief-makers who routinely played tricks on people and interfered with their personal property. They had insatiable sexual appetites and often sought to seduce or ravish both nymphs and mortal women alike, though these attempts were not always successful. Satyrs almost always appear in artwork alongside female companions of some variety. These female companions may be clothed or nude, but the satyrs always treat them as mere sexual objects. A single elderly satyr named Silenus was believed to have been the tutor of Dionysus on Mount Nysa. After Dionysus grew to maturity, Silenus became one of his most devout followers, remaining perpetually drunk.
This image was reflected in the classical Athenian satyr play. Satyr plays were a genre of plays defined by the fact that their choruses were invariably made up of satyrs. These satyrs are always led by Silenus, who is their "father". According to Carl A. Shaw, the chorus of satyrs in a satyr play were "always trying to get a laugh with their animalistic, playfully rowdy, and, above all, sexual behavior." The satyrs play an important role in driving the plot of the production, without any of them actually being the lead role, which was always reserved for a god or tragic hero. Many satyr plays are named for the activity in which the chorus of satyrs engage during the production, such as Δικτυουλκοί ("Diktyoulkoí"; "Net-Haulers"), Θεωροὶ ἢ Ἰσθμιασταί ("Theōroì ē Isthmiastaí"; "Spectators or Competitors at the Isthmian Games"), and Ἰχνευταί ("Ichneutaí"; "Searchers"). Like tragedies, but unlike comedies, satyr plays were set in the distant past and dealt with mythological subjects. The third- or second-century BC philosopher Demetrius of Phalerum famously characterized the satyric genre in his treatise "De Elocutione" as the middle ground between tragedy and comedy: a "playful tragedy" ("tragōdía paízdousa").
The only complete extant satyr play is Euripides's "Cyclops", which is a burlesque of a scene from the eighth-century BC epic poem, the "Odyssey", in which Odysseus is captured by the Cyclops Polyphemus in a cave. In the play, Polyphemus has captured a tribe of satyrs led by Silenus, who is described as their "Father", and forced them to work for him as his slaves. After Polyphemus captures Odysseus, Silenus attempts to play Odysseus and Polyphemus off each other for his own benefit, primarily by tricking them into giving him wine. As in the original scene, Odysseus manages to blind Polyphemus and escape. Approximately 450 lines, most of which are fragmentary, have survived of Sophocles's satyr play "Ichneutae" ("Tracking Satyrs"). In the surviving portion of the play, the chorus of satyrs are described as "lying on the ground like hedgehogs in a bush, or like a monkey bending over to fart at someone." The character Cyllene scolds them: "All you [satyrs] do you do for the sake of fun!... Cease to expand your smooth phallus with delight. You should not make silly jokes and chatter, so that the gods will make you shed tears to make me laugh."
In Dionysius's fragmentary satyr play "Limos" ("Starvation"), Silenus attempts to give the hero Heracles an enema. A number of vase paintings depict scenes from satyr plays, including the Pronomos Vase, which depicts the entire cast of a victorious satyr play, dressed in costume, wearing shaggy leggings, erect phalli, and horse tails. The genre's reputation for crude humor is alluded to in other texts as well. In Aristophanes's comedy "Thesmophoriazusae", the tragic poet Agathon declares that a dramatist must be able to adopt the "personae" of his characters in order to successfully portray them on stage. In lines 157–158, Euripides's unnamed relative retorts: "Well, let me know when you're writing satyr plays; I'll get behind you with my hard-on and show you how." This is the only extant reference to the genre of satyr plays from a work of ancient Greek comedy and, according to Shaw, it effectively characterizes satyr plays as "a genre of 'hard-ons.'"
In spite of their bawdy behavior, however, satyrs were still revered as semi-divine beings and companions of the god Dionysus. They were thought to possess their own kind of wisdom that was useful to humans if they could be convinced to share it. In Plato's "Symposium", Alcibiades praises Socrates by comparing him to the famous satyr Marsyas. He resembles him physically, since he is balding and has a snub-nose, but Alcibiades contends that he resembles him mentally as well, because he is "insulting and abusive", in possession of irresistible charm, "erotically inclined to beautiful people", and "acts as if he knows nothing". Alcibiades concludes that Socrates's role as a philosopher is similar to that of the paternal satyr Silenus, because, at first, his questions seem ridiculous and laughable, but, upon closer inspection, they are revealed to be filled with much wisdom. One story, mentioned by Herodotus in his "Histories" and in a fragment by Aristotle, recounts that King Midas once captured a silenus, who provided him with wise philosophical advice.
According to classicist William Hansen, although satyrs were popular in classical art, they rarely appear in surviving mythological accounts. Different classical sources present conflicting accounts of satyrs' origins. According to a fragment from the Hesiodic "Catalogue of Women", satyrs are sons of the five granddaughters of Phoroneus and therefore siblings of the Oreads and the Kouretes. The satyr Marsyas, however, is described by mythographers as the son of either Olympos or Oiagros. Hansen observes that "there may be more than one way to produce a satyr, as there is to produce a Cyclops or a centaur." The classical Greeks recognized that satyrs obviously could not self-reproduce since there were no female satyrs, but they seem to have been unsure whether satyrs were mortal or immortal.
Rather than appearing "en masse" as in satyr plays, when satyrs appear in myths it is usually in the form of a single, famous character. The comic playwright Melanippides of Melos (c. 480–430 BC) tells the story in his lost comedy "Marsyas" of how, after inventing the "aulos", the goddess Athena looked in the mirror while she was playing it. She saw how blowing into it puffed up her cheeks and made her look silly, so she threw the aulos away and cursed it so that whoever picked it up would meet an awful death. The aulos was picked up by the satyr Marsyas, who challenged Apollo to a musical contest. They both agreed beforehand that whoever won would be allowed to do whatever he wanted to the loser. Marsyas played the aulos and Apollo played the lyre. Apollo turned his lyre upside-down and played it. He asked Marsyas to do the same with his instrument. Since he could not, Apollo was deemed the victor. Apollo hung Marsyas from a pine tree and flayed him alive to punish him for his hubris in daring to challenge one of the gods. Later, this story became accepted as canonical, and the Athenian sculptor Myron created a group of bronze sculptures based on it, which was installed before the western front of the Parthenon in around 440 BC. Surviving retellings of the legend are found in the "Library" of Pseudo-Apollodorus, Pausanias's "Guide to Greece", and the "Fabulae" of Pseudo-Hyginus.
In a myth referenced in multiple classical texts, including the "Bibliotheke" of Pseudo-Apollodorus and the "Fabulae" of Pseudo-Hyginus, a satyr from Argos once attempted to rape the nymph Amymone, but she called to the god Poseidon for help and he launched his trident at the satyr, knocking him to the ground. This myth may have originated from Aeschylus's lost satyr play "Amymone". Scenes of one or more satyrs chasing Amymone became a common trope in Greek vase paintings starting in the late fifth century BC. Among the earliest depictions of the scene come from a bell krater in the style of the Peleus Painter from Syracuse (PEM 10, pl. 155) and a bell krater in the style of the Dinos Painter from Vienna (DM 7).
The iconography of satyrs was gradually conflated with that of the Pans, plural forms of the god Pan, who were regularly depicted with the legs and horns of a goat. By the Hellenistic Period (323–31 BC), satyrs were beginning to sometimes be shown with goat-like features. Meanwhile, both satyrs and Pans also continued to be shown as more human and less bestial. Scenes of satyrs and centaurs were very popular during the Hellenistic Period. They often appear dancing or playing the aulos. The maenads that often accompany satyrs in Archaic and Classical representations are often replaced in Hellenistic portrayals with wood nymphs.
Artists also began to widely represent scenes of nymphs repelling the unwanted advances of amorous satyrs. Scenes of this variety were used to express the dark, beastly side of human sexuality at a remove by attributing that sexuality to satyrs, who were part human and part animal. In this way, satyrs became vehicles of a metaphor for a phenomenon extending far beyond the original narrative purposes they served during earlier periods of Greek history. Some variants on this theme represent a satyr being rebuffed by a hermaphrodite, who, from the satyr's perspective, appears to be a beautiful young girl. These sculptures may have been intended as a kind of sophisticated erotic joke.
The Athenian sculptor Praxiteles's statue "Pouring Satyr" represented the eponymous satyr as very human-like. The satyr was shown as very young, in line with Praxiteles's frequent agenda of representing deities and other figures as adolescents. This tendency is also attested in the descriptions of his sculptures of Dionysus and the Archer Eros written in the third or fourth century AD by the art critic Callistratus. The original statue is widely assumed to have depicted the satyr in the act of pouring an "oinochoe" over his head into a cup, probably a "kantharos". Antonio Corso describes the satyr in this sculpture as a "gentle youth" and "a precious and gentle being" with "soft and velvety" skin. The only hints at his "feral nature" were his ears, which were slightly pointed, and his small tail.
The shape of the sculpture was an S-shape, shown in three-quarter view. The satyr had short, boyish locks, derived from those of earlier Greek athletic sculpture. Although the original statue has been lost, a representation of the pouring satyr appears in a late classical relief sculpture from Athens and twenty-nine alleged "copies" of the statue from the time of the Roman Empire have also survived. Olga Palagia and J. J. Pollitt argue that, although the "Pouring Satyr" is widely accepted as a genuine work of Praxiteles, it may not have been a single work at all and the supposed "copies" of it may merely be Roman sculptures repeating the traditional Greek motif of pouring wine at "symposia".
The Romans identified satyrs with their own nature spirits, fauns. Although generally similar to satyrs, fauns differed in that they were usually seen as "shy, woodland creatures" rather than the drunk and boisterous satyrs of the classical Greeks. Also, fauns generally lacked the association Greek satyrs had with secret wisdom. Unlike classical Greek satyrs, fauns were unambiguously goat-like; they had the upper bodies of men, but the legs, hooves, and horns of goats. The first-century BC Roman poet Lucretius mentions in his lengthy poem "De rerum natura" that people of his time believed in "goat-legged" (""capripedes"") satyrs, along with nymphs who lived in the mountains and fauns who played rustic music on stringed instruments and pipes.
In Roman-era depictions, satyrs and fauns are both often associated with music and depicted playing the Pan pipes or "syrinx". The poet Virgil, who flourished during the early years of the Roman Empire, recounts a story in his sixth "Eclogue" about two boys who tied up the satyr Silenus while he was in a drunken stupor and forced him to sing them a song about the beginning of the universe. The first-century AD Roman poet Ovid makes Jupiter, the king of the gods, express worry that the viciousness of humans will leave fauns, nymphs, and satyrs without a place to live, so he gives them a home in the forests, woodlands, and mountains, where they will be safe. Ovid also retells the story of Marsyas's hubris. He describes a musical contest between Marsyas, playing the aulos, and the god Apollo, playing the lyre. Marsyas loses and Apollo flays him as punishment.
The Roman naturalist and encyclopedist Pliny the Elder conflated satyrs with gibbons, which he describes using the word "satyrus", a Latinized form of the Greek "satyros". He characterizes them as "a savage and wild people; distinct voice and speech they have none, but in steed thereof, they keep a horrible gnashing and hideous noise: rough they are and hairie all over their bodies, eies they have red like the houlets [owls] and toothed they be like dogs."
The second-century Greek Middle Platonist philosopher Plutarch records a legendary incident in his "Life of Sulla", in which the soldiers of the Roman general Sulla are reported to have captured a satyr sleeping during a military campaign in Greece in 89 BC. Sulla's men brought the satyr to him and he attempted to interrogate it, but it spoke only in an unintelligible sound: a cross between the neighing of a horse and the bleating of a goat. The second-century Greek travel writer Pausanias reports having seen the tombs of deceased silenoi in Judaea and at Pergamon. Based on these sites, Pausanias concludes that silenoi must be mortal.
The third-century Greek biographer Philostratus records a legend in his "Life of Apollonius of Tyana" of how the ghost of an Aethiopian satyr was deeply enamored with the women from the local village and had killed two of them. Then, the philosopher Apollonius of Tyana set a trap for it with wine, knowing that, after drinking it, the ghost-satyr would fall asleep forever. The wine diminished from the container before the onlookers' eyes, but the ghost-satyr himself remained invisible. Once all the wine had vanished, the ghost-satyr fell asleep and never bothered the villagers again. Amira El-Zein notes similarities between this story and later Arabic accounts of jinn. The treatise "Saturnalia" by the fifth-century AD Roman poet Macrobius connects both the word "satyr" and the name "Saturn" to the Greek word for "penis". Macrobius explains that this is on account of satyrs' sexual lewdness. Macrobius also equates Dionysus and Apollo as the same deity and states that a festival in honor of Bacchus is held every year atop Mount Parnassus, at which many satyrs are often seen.
Starting in late antiquity, Christian writers began to portray satyrs and fauns as dark, evil, and demonic. Jerome (c. 347 – 420 AD) described them as symbols of Satan on account of their lasciviousness. Despite this, however, satyrs were sometimes clearly distinguished from demons and sometimes even portrayed as noble. Because Christians believed that the distinction between humans and animals was spiritual rather than physical, it was thought that even a satyr could attain salvation. Isidore of Seville (c. 560 – 636) records an anecdote, later recounted in the "Golden Legend", that Anthony the Great encountered a satyr in the desert who asked to pray with him to their common God. During the Early Middle Ages, features and characteristics of satyrs and the god Pan, who resembled a satyr, became absorbed into traditional Christian iconography of Satan.
Medieval storytellers in Western Europe also frequently conflated satyrs with wild men. Both satyrs and wild men were conceived as part human and part animal and both were believed to possess unrestrained sexual appetites. Stories of wild men during the Middle Ages often had an erotic tone and were primarily told orally by peasants, since the clergy officially disapproved of them. In this form, satyrs are sometimes described and represented in medieval bestiaries, where a satyr is often shown dressed in an animal skin, carrying a club and a serpent. In the "Aberdeen Bestiary", the "Ashmole Bestiary", and MS Harley 3244, a satyr is shown as a nude man holding a wand resembling a jester's club and leaning back, crossing his legs. Satyrs are sometimes juxtaposed with apes, which are characterized as "physically disgusting and akin to the Devil". In other cases, satyrs are usually shown nude, with enlarged phalli to emphasize their sexual nature. In the Second-Family Bestiary, the name "satyr" is used as the name of a species of ape, which is described as having a "very agreeable face, restless, however, in its twitching movements."
During the Renaissance, satyrs and fauns began to reappear in works of European art. In this period, no distinction was made between satyrs and fauns, and both were usually given human and goat-like features in whatever proportion the artist deemed appropriate. A goat-legged satyr appears at the base of Michelangelo's statue "Bacchus" (1497). Renaissance satyrs still sometimes appear in scenes of drunken revelry like those from antiquity, but they also sometimes appear in family scenes, alongside female and infant or child satyrs. This trend towards more familial, domestic satyrs may have resulted from conflation with wild men, who, especially in Renaissance depictions from Germany, were often portrayed as living relatively peaceful lives with their families in the wilderness. The most famous representation of a domestic satyr is Albrecht Dürer's 1505 engraving "The Satyr's Family", which has been widely reproduced and imitated. This popular portrayal of satyrs and wild men may have also helped give rise to the later European concept of the noble savage.
Satyrs occupied a paradoxical, liminal space in Renaissance art, not only because they were part human and part beast, but also because they were both antique and natural. They were of classical origin, but had an iconographical canon of their own very different from the standard representations of gods and heroes. They could be used to embody what Stephen J. Campbell calls a "monstrous double" of the category in which human beings often placed themselves. It is in this aspect that satyrs appear in Jacopo de' Barbari's 1495 series of prints depicting satyrs and naked men in combat and in Piero di Cosimo's "Stories of Primitive Man", inspired by Lucretius. Satyrs became seen as "pre-human", embodying all the traits of savagery and barbarism associated with animals, but in human-like bodies. Satyrs also became used to question early modern humanism in ways which some scholars have seen as similar to present-day posthumanism, as in Titian's "Flaying of Marsyas" (c. 1570–1576). "The Flaying of Marsyas" depicts the scene from Ovid's "Metamorphoses" in which the satyr Marsyas is flayed alive. According to Campbell, the people performing the flaying are shown calmly absorbed in their task, while Marsyas himself even displays "an unlikely patience". The painting reflects a broad continuum between the divine and the bestial.
In the 1560 Geneva Bible, the word "sa’ir" in both of the instances in Isaiah is translated into English as "satyr". The 1611 King James Version follows this translation and likewise renders "sa’ir" as "satyr". Edwards states that the King James Version's translation of this phrase and others like it was intended to reduce the strangeness and unfamiliarity of the creatures described in the original Hebrew text by rendering them as names of familiar entities. Edmund Spenser refers to a group of woodland creatures as Satyrs in his epic poem "The Faerie Queene". In Canto VI, Una is wandering through the forest when she stumbles upon a "troupe of Fauns and Satyrs far away Within the wood were dancing in a round." Although satyrs are often negatively characterized in Greek and Roman mythology, the Satyrs in this poem are docile, helpful creatures, as is evident from the way they help protect Una from Sansloy. Sylvanus, their leader, and the rest of the Satyrs become enamored of Una's beauty and begin to worship her as if she were a deity. However, the Satyrs prove to be simple-minded creatures, as they begin to worship the donkey she was riding.
In the seventeenth century, satyrs became identified with great apes. In 1699, the English anatomist Edward Tyson (1651–1708) published an account of his dissection of a creature that scholars have now identified as a chimpanzee. In this account, Tyson argued that stories of satyrs, wild men, and other hybrid mythological creatures had all originated from the misidentification of apes or monkeys. The French materialist philosopher Julien Offray de La Mettrie (1709–1751) included a section titled "On savage men, called Satyrs" in his "Oeuvres philosophiques", in which he describes great apes, identifying them with both satyrs and wild men. Many early accounts of the orangutan describe the males as being sexually aggressive towards human women and towards females of their own species, much like classical Greek satyrs. The first scientific name given to this ape was "Simia satyrus".
Relationships between satyrs and nymphs of this period are often portrayed as consensual. This trend is exemplified by the 1623 painting "Satyr and Nymph" by Gerard van Honthorst, which depicts a satisfied satyr and nymph lasciviously fondling each other after engaging in obviously consensual sex. Both are smiling and the nymph is showing her teeth, a sign commonly used by painters of the era to signify that the woman in question is of loose morals. The satyr's tongue is visible as the nymph playfully tugs on his goat beard and he strokes her chin. Even during this period, however, depictions of satyrs uncovering sleeping nymphs are still common, indicating that their traditional associations with rape and sexual violence had not been forgotten.
During the nineteenth century, satyrs and nymphs came to often function as a means of representing sexuality without offending Victorian moral sensibilities. In the novel "The Marble Faun" (1860) by the American author Nathaniel Hawthorne, the Italian count Donatello is described as bearing a remarkable resemblance to one of Praxiteles's marble satyr statues. Like the satyrs of Greek legend, Donatello has a carefree nature. His association with satyrs is further cemented by his intense sexual attraction to the American woman Miriam.
Satyrs and nymphs provided a classical pretext which allowed sexual depictions of them to be seen as objects of high art rather than mere pornography. The French emperor Napoleon III awarded the Academic painter Alexandre Cabanel the Legion of Honour, partly on account of his painting "Nymph Abducted by a Faun". In 1873, another French Academicist William-Adolphe Bouguereau painted "Nymphs and Satyr", which depicts four nude nymphs dancing around "an unusually submissive satyr", gently coaxing him into the water of a nearby stream. This painting was bought that same year by an American named John Wolfe, who displayed it publicly in a prominent location in the bar at the Hoffman House, a hotel he owned on Madison Square and Broadway. Despite its risqué subject, many women came to the bar to view the painting. The painting was soon mass reproduced on ceramic tiles, porcelain plates, and other luxury items in the United States.
In 1876, Stéphane Mallarmé wrote "The Afternoon of a Faun", a first-person narrative poem about a faun who attempts to kiss two beautiful nymphs while they are sleeping together. He accidentally wakes them up. Startled, they transform into white water birds and fly away, leaving the faun to play his pan pipes alone. Claude Debussy composed a symphonic poem "Prélude à l'après-midi d'un faune" ("Prelude to the Afternoon of a Faun"), which was first performed in 1894.
The late nineteenth-century German Existentialist philosopher Friedrich Nietzsche was either unaware of or chose to ignore the fact that, in all the earliest representations, satyrs are depicted as horse-like. He accordingly defined a satyr as a "bearded" creature "who derived his name and attributes from the goat." Nietzsche excluded the horse-like satyrs of Greek tradition from his consideration entirely and argued that tragedy had originated from a chorus of men dressed up as satyrs or goats ("tragoi"). Thus, Nietzsche held that tragedy had begun as a Dionysian activity. His rejection of the early evidence for horse-like satyrs was a mistake for which his critics excoriated him. Nonetheless, he was the first modern scholar to recognize the full importance of satyrs in Greek culture and tradition, as Dionysian symbols of humanity's close ties to the animal kingdom. Like the Greeks, Nietzsche envisioned satyrs as essentially humans stripped down to their most basic and bestial instincts.
In 1908, the French painter Henri Matisse produced his own "Nymph and Satyr" painting in which the animal nature of the satyr is drastically minimized. The satyr is given human legs, but is exceptionally hairy. The seduction element is removed altogether; the satyr simply extends his arms towards the nymph, who lies on the ground, defeated. Penny Florence writes that the "generic scene displays little sensuality" and that the main factor distinguishing it is its tone, because "It does not seem convincing as a rape, despite the nymph's reluctance." In 1912, Vaslav Nijinsky choreographed Debussy's symphonic poem "Prelude to the Afternoon of a Faun" as a ballet and danced in it as the lead role of the faun. The choreography of the ballet and Nijinsky's performance were both highly erotic and sexually charged, causing widespread scandal among upper-class Parisians. In the 1980 biographical film "Nijinsky", directed by Herbert Ross, Nijinsky, who is played by George de la Peña, is portrayed as actually masturbating on stage in front of the entire live audience during the climax of the dance.
The 1917 Italian silent film "Il Fauno", directed by Febo Mari, is about a statue of a faun who comes to life and falls in love with a female model. Fauns appear in the animated dramatization of Ludwig van Beethoven's Symphony No. 6 (1808) in the 1940 Disney animated film "Fantasia". Their goat-legs are portrayed as brightly colored, but their hooves are black. They play the Pan pipes and, like traditional satyrs and fauns, are portrayed as mischievous. One young faun plays hide-and-seek with a unicorn and imitates a statue of a faun atop a pedestal. Though the fauns are not portrayed as overtly sexual, they do assist the Cupids in pairing the centaurs into couples. A drunken Bacchus appears in the same scene.
A faun named Mr. Tumnus appears in the classic juvenile fantasy novel "The Lion, the Witch and the Wardrobe" (1950) by C. S. Lewis. Mr. Tumnus has goat legs and horns, but also a tail long enough for him to carry it draped over his arm to prevent it from dragging in the snow. He is a domesticated figure who lacks the bawdiness and hypersexuality that characterized classical satyrs and fauns. Instead, Mr. Tumnus wears a scarf and carries an umbrella and lives in a cozy cave with a bookshelf with works such as "The Life and Letters of Silenus", "Nymphs and their Ways", and "Is Man a Myth?". He entertains Lucy Pevensie, the first child to visit Narnia, hoping to put her to sleep so he can give her over to the White Witch, but his conscience stops him and he instead escorts her back home. Later, the children discover him missing from his home and, eventually, they discover that the White Witch has turned him to stone for his disobedience.
The satyr has appeared in all five editions of the "Dungeons & Dragons" role-playing game, having been introduced in the earliest edition in Supplement IV: "Gods, Demi-Gods & Heroes" (1976), then in the first edition of the Monster Manual (1977), where it is described as a sylvan woodland inhabitant primarily interested in sport such as frolicking, piping, and chasing wood nymphs. The life history of satyrs was further detailed in "Dragon" No. 155 (March 1990), in "The Ecology of the Satyr". The satyr was later detailed as a playable character race in "The Complete Book of Humanoids" (1993) and presented as a playable character race again in "Player's Option: Skills & Powers" (1995). The satyr appears in the Monster Manual for the 3.0 edition. "Savage Species" (2003) presented the satyr as both a race and a playable class. The satyr appears in the revised Monster Manual for version 3.5, in the Monster Manual for the 4th edition, and as a playable character race in the "Heroes of the Feywild" sourcebook (2011).
Matthew Barney's art video "Drawing Restraint 7" (1993) includes two satyrs wrestling in the backseat of a moving limousine. A satyr named Grover Underwood appears in the young adult fantasy novel "The Lightning Thief" (2005) by American author Rick Riordan, as well as in subsequent novels in the series "Percy Jackson & the Olympians". Though consistently referred to as a "satyr", Grover is described as having goat legs, pointed ears, and horns. Grover is not portrayed with the sexually obscene traits that characterized classical Greek satyrs. Instead, he is the loyal protector to the main character Percy Jackson, who is the son of a mortal woman and the god Poseidon.
Sturgeon-class submarine
The "Sturgeon" class (known colloquially in naval circles as the 637 class) was a class of nuclear-powered fast attack submarines (SSN) in service with the United States Navy from the 1960s until 2004. They were the "workhorses" of the Navy's attack submarine fleet throughout much of the Cold War. The boats were phased out in the 1990s and early 21st century, as their successors, the , followed by the and -class boats, entered service.
The "Sturgeon"s were essentially lengthened and improved variants of the "Thresher/Permit" class that directly preceded them. The five-compartment arrangement of the "Permit"s was retained, including the bow compartment, operations compartment, reactor compartment, auxiliary machinery room no. 2, and the engine room. The extra length was in the operations compartment, including longer torpedo racks to accommodate additional Mark 37 torpedoes, the most advanced in service at the time of the class's design in the late 1950s. The class was designed to SUBSAFE requirements, with seawater, main ballast, and other systems redesigned for improved safety. The biggest difference was the much larger sail, which permitted a second periscope and additional intelligence-gathering masts. The fairwater planes mounted on the sail could rotate 90 degrees, allowing the submarine to surface through thin ice. Because the S5W reactor was used, the same as in the "Skipjack"s and "Thresher/Permit"s, and the displacement was increased, the "Sturgeon"s' top speed was , 2 knots slower than the "Thresher/Permit"s.
The last nine "Sturgeon"s were lengthened to provide more space for electronic equipment and habitability. The extra space also helped facilitate the use of dry deck shelters first deployed in 1982.
The class received mid-life upgrades in the 1980s, including the BQQ-5 sonar suite with a retractable towed array, Mk 117 torpedo fire control equipment, and other electronics upgrades.
The "Sturgeon"-class boats were equipped to carry the Harpoon missile, the Tomahawk cruise missile, the UUM-44 SUBROC, the Mark 67 SLMM and Mark 60 CAPTOR mines, and the MK-48 and ADCAP torpedoes. Torpedo tubes were located amidships to accommodate the bow-mounted sonar. The bow covering the sonar sphere was made from steel or glass reinforced plastic (GRP), both varieties having been produced both booted and not booted. Booted domes are covered with a half-inch layer of rubber. The GRP domes improved the bow sonar sphere performance; though for intelligence gathering missions, the towed-array sonar was normally used as it was a much more sensitive array.
Several "Sturgeon" boats and related submarines were modifications of the original designs to test ways to reduce noise.
Later units of this class had a longer hull, giving them more living and working space than previous submarines. "Parche" received an additional hull extension containing cable-tapping equipment that further increased her length. A number of the long-hull "Sturgeon"-class SSNs, including "Parche", "L. Mendel Rivers", and "Richard B. Russell", were involved in top-secret reconnaissance missions, including cable-tap operations in the Barents and Okhotsk seas. "Parche" received nine Presidential Unit Citations for successful missions.
A total of seven boats were modified to carry the SEAL Dry Deck Shelter (DDS). The DDS is a submersible launch hangar with a lockout chamber attached to the ship's midships weapons shipping hatch, facilitating the use of SEAL Delivery Vehicles. DDS-equipped boats were tasked with the covert insertion of special forces.
From "Register of Ships of the US Navy, 1775-1990".
Two other Navy vessels, both considered one-ship classes, were based on the "Sturgeon" hull but were modified for experimental purposes: "Narwhal", which tested a natural-circulation reactor plant, and "Glenard P. Lipscomb", which tested turbo-electric drive. | https://en.wikipedia.org/wiki?curid=29068 |
Seawolf-class submarine
The "Seawolf" class is a class of nuclear-powered fast attack submarines (SSN) in service with the United States Navy. The class was the intended successor to the , and design work began in 1983. A fleet of 29 submarines was to be built over a ten-year period, but that was reduced to 12 submarines. The end of the Cold War and budget constraints led to the cancellation of any further additions to the fleet in 1995, leaving the "Seawolf" class limited to just three boats. This, in turn, led to the design of the smaller . The "Seawolf" class cost about $3 billion per unit ($3.5 billion for USS "Jimmy Carter"), making it the most expensive SSN submarine and second most expensive submarine ever, after the French SSBN .
The "Seawolf" design was intended to combat the threat of advanced Soviet ballistic missile submarines such as the , and attack submarines such as the in a deep-ocean environment. "Seawolf"-class hulls are constructed from HY-100 steel, which is stronger than the HY-80 steel employed in previous classes, in order to withstand water pressure at greater depths.
"Seawolf" submarines are larger, faster, and significantly quieter than previous "Los Angeles"-class submarines; they also carry more weapons and have twice as many torpedo tubes. The boats are able to carry up to 50 UGM-109 Tomahawk cruise missiles for attacking land and sea surface targets. The boats also have extensive equipment to allow shallow water operations. The class uses the more advanced ARCI Modified AN/BSY-2 combat system, which includes a larger spherical sonar array, a wide aperture array (WAA), and a new towed-array sonar. Each boat is powered by a single S6W nuclear reactor, delivering to a low-noise pump-jet.
As a result of their advanced design, however, "Seawolf" submarines were much more expensive. The projected cost for 12 submarines of this class was $33.6 billion, but construction was stopped at three boats when the Cold War ended.
"Jimmy Carter" is roughly 100 feet longer than the other two boats of her class, due to the insertion of a section known as the Multi-Mission Platform (MMP), which allows launch and recovery of remotely operated underwater vehicles (ROVs) and Navy SEALs. The MMP may also be used as an underwater splicing chamber for tapping undersea fiber-optic cables. This role was formerly filled by the decommissioned "Parche". "Jimmy Carter" was modified for this role by General Dynamics Electric Boat at a cost of $887 million. | https://en.wikipedia.org/wiki?curid=29069 |
SunOS
SunOS is a Unix-branded operating system developed by Sun Microsystems for their workstation and server computer systems. The "SunOS" name is usually only used to refer to versions 1.0 to 4.1.4, which were based on BSD, while versions 5.0 and later are based on UNIX System V Release 4, and are marketed under the brand name "Solaris".
SunOS 1 only supported the Sun-2 series systems, including Sun-1 systems upgraded with Sun-2 (68010) CPU boards. SunOS 2 supported Sun-2 and Sun-3 (68020) series systems. SunOS 4 supported Sun-2 (until release 4.0.3), Sun-3 (until 4.1.1), Sun386i (4.0, 4.0.1 and 4.0.2 only) and Sun-4 (SPARC) architectures. Although SunOS 4 was intended to be the first release to fully support Sun's new SPARC processor, there was also a SunOS 3.2 release with preliminary support for Sun-4 systems.
SunOS 4.1.2 introduced support for Sun's first sun4m-architecture multiprocessor machines (the SPARCserver 600MP series); since it had only a single lock for the kernel, only one CPU at a time could execute in the kernel.
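The effect of such a single "giant" kernel lock can be sketched in a few lines of Python; this is purely illustrative, with hypothetical names, and bears no relation to the actual SunOS kernel source:

    import threading

    kernel_lock = threading.Lock()  # the one "giant" lock guarding the kernel

    def syscall(work):
        # Every CPU entering the kernel must acquire the same lock, so
        # kernel execution is serialized even on a multiprocessor machine.
        with kernel_lock:
            work()

    # Threads may run user code in parallel, but their kernel work
    # (the calls to syscall) executes strictly one at a time.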
The last release of SunOS 4 was 4.1.4 (Solaris 1.1.2) in 1994. The sun4, sun4c and sun4m architectures were supported in 4.1.4; sun4d was not supported.
Sun continued to ship SunOS 4.1.3 and 4.1.4 until December 27, 1998; they were supported until September 30, 2003.
In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix flavors on the market at that time: BSD (including many of the features then unique to SunOS), System V, and Xenix. This would become System V Release 4 (SVR4).
On September 4, 1991, Sun announced that its next major OS release would switch from its BSD-derived source base to one based on SVR4. Although the internal designation of this release would be "SunOS 5", from this point Sun began using the marketing name "Solaris". The justification for this new "overbrand" was that it encompassed not only SunOS, but also the OpenWindows desktop environment and Open Network Computing (ONC) functionality.
Even though the new SVR4-based OS was not expected to ship in volume until the following year, Sun immediately began using the new "Solaris" name to refer to the currently shipping SunOS 4 release (also including OpenWindows). Thus SunOS 4.1.1 was rebranded "Solaris 1.0"; SunOS 5.0 would be considered a part of Solaris 2.0. SunOS 4.1.x micro versions continued to be released through 1994, and each of these was also given a Solaris 1.x equivalent name. In practice, these were often still referred to by customers and even Sun personnel by their SunOS release names. Matching the version numbers was not straightforward:
Today, SunOS 5 is universally known as "Solaris", although the "SunOS" name is still visible within the OS itself in the startup banner, the output of the uname command, and man page footers, among other places.
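The lineage is easy to observe in practice. A minimal sketch using Python's standard os.uname() call, assuming it is run on a Solaris system (the exact release string depends on the installed version):

    import os

    # On Solaris, the kernel still identifies itself as SunOS:
    # Solaris 11, for example, reports sysname "SunOS" and release "5.11".
    info = os.uname()
    print(info.sysname)  # e.g. "SunOS"
    print(info.release)  # e.g. "5.11"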
Matching a SunOS 5.x release to its corresponding Solaris marketing name is simple: each Solaris release name includes its corresponding SunOS 5 minor version number. For example, Solaris 2.4 incorporated SunOS 5.4. There is one small twist: after Solaris 2.6, the "2." was dropped from the Solaris name and the SunOS minor number appears by itself. The latest Solaris release is named "Solaris 11" and incorporates SunOS 5.11.
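That naming rule is simple enough to express as a short Python function; a minimal sketch covering only the releases described above (Solaris 2.0 through Solaris 11), with a hypothetical function name:

    def solaris_name(sunos_minor: int) -> str:
        # Solaris 2.0 through 2.6 kept the "2." prefix; from SunOS 5.7
        # onward the minor number stands alone in the marketing name.
        if sunos_minor <= 6:
            return f"Solaris 2.{sunos_minor}"
        return f"Solaris {sunos_minor}"

    assert solaris_name(4) == "Solaris 2.4"    # SunOS 5.4
    assert solaris_name(11) == "Solaris 11"    # SunOS 5.11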
GUI environments bundled with earlier versions of SunOS included SunTools (later SunView) and NeWS. In 1989, Sun released OpenWindows, an OPEN LOOK-compliant X11-based environment which also supported SunView and NeWS applications. This became the default SunOS GUI in SunOS 4.1.1. | https://en.wikipedia.org/wiki?curid=29071 |
SANS Institute
The SANS Institute (officially the Escal Institute of Advanced Technologies) is a private U.S. for-profit company founded in 1989 that specializes in information security, cybersecurity training, and selling certificates. Topics available for training include cyber and network defenses, penetration testing, incident response, digital forensics, and auditing. The information security courses are developed through a consensus process involving administrators, security managers, and information security professionals. The courses cover security fundamentals and technical aspects of information security. The institute has been recognized for its training programs and certification programs. SANS stands for SysAdmin, Audit, Network, and Security.
The SANS Institute sponsors the Internet Storm Center, an internet monitoring system staffed by a global community of security practitioners, and the "SANS Reading Room", a research archive of information security policy and research documents. SANS is one of the founding organizations of the Center for Internet Security.
SANS offers news and analysis through Twitter feeds and e-mail newsletters. Additionally, there is a weekly news and vulnerability digest available to subscribers.
When originally organized in 1989, SANS training events functioned like traditional technical conferences showcasing technical presentations. By the mid-1990s, SANS offered events which combined training with tradeshows. Beginning in 2006, SANS offered asynchronous online training (SANS OnDemand) and a virtual, synchronous classroom format (SANS vLive). Free webcasts and email newsletters (@Risk, Newsbites, Ouch!) have been developed in conjunction with security vendors. The actual content behind SANS training courses and training events remains "vendor-agnostic." Vendors cannot pay to offer their own official SANS course, although they can teach a SANS "hosted" event via sponsorship.
In 1999, the SANS Institute formed Global Information Assurance Certification (GIAC), an independent entity that grants certifications in information security topics.
It has developed and operates "NetWars", a suite of interactive learning tools for simulating scenarios such as cyberattacks. NetWars is in use by the US Air Force and the US Army.
The majority of SANS faculty are not SANS employees, but industry professionals and respected experts in the field of information security. The faculty is organized into six different levels: Mentors, Community, Certified Instructors, Principal Instructors, Senior Instructors, and Fellows.
SANS established the SANS Technology Institute, an accredited college based on SANS training and GIAC certifications. On November 21, 2013, SANS Technology Institute was granted regional accreditation by the Middle States Commission on Higher Education.
SANS Technology Institute focuses exclusively on cybersecurity, offering a Master of Science degree program in Information Security Engineering (MSISE), five post-baccalaureate certificate programs (Penetration Testing & Ethical Hacking, Incident Response, Industrial Control Systems, Cyber Defense Operations, and Cybersecurity Engineering (Core)), and an upper-division undergraduate certificate program (Applied Cybersecurity).
SANS continues to offer free security content via the SANS Technology Institute Leadership Lab, along with IT- and security-related leadership information. | https://en.wikipedia.org/wiki?curid=29072 |
Sun Myung Moon
Sun Myung Moon (Korean 문선명 Mun Seon-myeong; born Mun Yong-myeong; 6 January 1920 – 3 September 2012) was a Korean religious leader, also known for his business ventures and support for political causes. A messiah claimant, he was the founder of the Unification movement (members of which considered him and his wife Hak Ja Han to be their "True Parents"), and of its widely noted "Blessing" or mass wedding ceremony, and the author of its unique theology the "Divine Principle". He was an opponent of communism and an advocate for Korean reunification, for which he was recognized by the governments of both North and South Korea. Businesses he promoted included News World Communications, an international news media corporation known for its American subsidiary "The Washington Times", and Tongil Group, a South Korean business group (chaebol), as well as various related organizations.
Moon was born in what is now North Korea. When he was a child, his family converted to Christianity. In 1947, he was convicted by the North Korean government of spying for South Korea and given a five-year sentence to the Hŭngnam labor camp. In 1954, he founded the Holy Spirit Association for the Unification of World Christianity in Seoul, South Korea based on conservative, family-oriented teachings from new interpretations of the Bible. In 1971, he moved to the United States and became well known after giving a series of public speeches on his beliefs. In the 1982 case "United States v. Sun Myung Moon" he was found guilty of willfully filing false federal income tax returns and sentenced to 18 months in federal prison. His case generated protests from clergy and civil libertarians, who said that the trial was biased against him.
Moon was criticized for making high demands of his followers. His wedding ceremonies also drew criticism, especially after they involved members of other churches, including Roman Catholic archbishop Emmanuel Milingo. He was also criticized for his relationships with political and religious figures, including U.S. Presidents Richard Nixon, George H. W. Bush and George W. Bush, Soviet President Mikhail Gorbachev, North Korean President Kim Il Sung, and Nation of Islam leader Louis Farrakhan.
Sun Myung Moon was born Moon Yong Myeong on 6 January 1920, in modern-day North P'yŏng'an Province, North Korea, at a time when Korea was under Japanese rule. He was the younger of two sons in a farming family of eight children. Moon's family followed Confucianist beliefs until he was around 10 years old, when they converted to Christianity and joined the Presbyterian Church.
In 1941, Moon began studying electrical engineering at Waseda University in Japan. During this time he cooperated with Communist Party members in the Korean independence movement against Imperial Japan. In 1943, he returned to Seoul and married Sun Kil Choi on 28 April 1945. On 2 April 1946 their son, Sung Jin Moon was born. In the 1940s, Moon attended a church in Sangdo dong that was led by Baek Moon Kim, who said that he had been given by Jesus the mission to spread the message of a "new Israel" throughout the world. Around this time Moon changed his given name to Sun Myung.
Following World War II, Korea was divided along the 38th parallel into two occupation zones, administered by the United States in the south and the Soviet Union in the north. Pyongyang was the center of Christian activity in Korea until 1945. From the late 1940s, 166 priests and other religious figures were killed or disappeared in concentration camps, including Francis Hong Yong-ho, bishop of Pyongyang, and all the monks of Tokwon abbey. In 1947 Moon was convicted by the North Korean government of spying for South Korea and given a five-year sentence to the Hŭngnam labor camp. In 1950, during the Korean War, United Nations troops raided Hŭngnam, and the guards fled. Moon escaped and traveled to Busan, South Korea.
Moon emerged from his years in the labor camp as a staunch anti-communist. His teachings viewed the Cold War between democracy and communism as the final conflict between God and Satan, with divided Korea as its primary front line. In 1954, Moon formally founded the Holy Spirit Association for the Unification of World Christianity in Seoul. He quickly drew young acolytes who helped to build the foundations of church affiliated business and cultural organizations. At his new church, he preached a conservative, family-oriented value system and his interpretation of the Bible. On 8 January 1957, Moon and Choi divorced.
Moon has said that when he was fifteen years old Jesus anointed him to carry out his unfinished work by becoming parent to all of humanity. The "Divine Principle" or "Exposition of the Divine Principle" (Korean 원리강론/原理講論, translit. ) is the main theological textbook of the Unification movement. It was co-written by Moon and early disciple Hyo Won Eu and first published in 1966. A translation entitled "Divine Principle" was published in English in 1973. The book lays out the core of Unification theology, and is held to have the status of scripture by believers. Following the format of systematic theology, it includes (1) God's purpose in creating human beings, (2) the fall of man, and (3) restoration – the process through history by which God is working to remove the ill effects of the fall and restore humanity back to the relationship and position that God originally intended.
God is viewed as the creator, whose nature combines both masculinity and femininity, and is the source of all truth, beauty, and goodness. Human beings and the universe reflect God's personality, nature, and purpose. "Give-and-take action" (reciprocal interaction) and "subject and object position" (initiator and responder) are "key interpretive concepts", and the self is designed to be God's object. The purpose of human existence is to return joy to God. The "four-position foundation" (Origin, Subject, Object and Union) is "another important and interpretive concept", and explains in part the emphasis on the family.
Moon married his second wife, Hak Ja Han, who was 17 at the time, on 11 April 1960, soon after Moon turned 40 years old, in a ceremony called the Holy Marriage. Han is called "Mother" or "True Mother". She and Moon together are referred to as the "True Parents" by members of the Unification Church, and their family as the "True Family". In Unification teaching, Jesus was divine but not God; he was supposed to be the second Adam, who would create a perfect family by joining with the ideal wife and creating a pure family that would have begun humanity's liberation from its sinful condition. When Jesus was crucified before marrying, he redeemed mankind spiritually but not physically. That task was left to the "True Parents", Moon and Han, who would link married couples and their families to God.
Blessing ceremonies have attracted attention in the press and in the public imagination, often being labeled "mass weddings". Some couples are already married and those that are engaged are later legally married according to the laws of their own countries. Meant to highlight the church's emphasis on traditional morality, they brought Moon both fame and controversy.
36 couples participated in the first ceremony in 1961 for members of the early church in Seoul, South Korea. The ceremonies continued to grow in scale; over 2,000 couples participated in the 1982 one at New York's Madison Square Garden, the first outside South Korea. In 1997, about 30,000 couples took part in a ceremony in Washington, DC.
Moon said that he matched couples from differing races and nationalities because of his belief that all of humanity should be united: "International and intercultural marriages are the quickest way to bring about an ideal world of peace. People should marry across national and cultural boundaries with people from countries they consider to be their enemies so that the world of peace can come that much more quickly."
In 1971, Moon moved to the United States, which he had first visited in 1965. He remained a citizen of the Republic of Korea and maintained a residence in South Korea. In 1972, Moon founded the International Conference on the Unity of the Sciences, a series of scientific conferences. The first conference had 20 participants, while the largest, in Seoul in 1982, had 808 participants from over 100 countries. Participants included Nobel laureates John Eccles (Physiology or Medicine 1963), who chaired the 1976 conference, and Eugene Wigner (Physics 1963).
In 1974, Moon asked church members in the United States to support President Richard Nixon during the Watergate scandal when Nixon was being pressured to resign his office. Church members prayed and fasted in support of Nixon for three days in front of the United States Capitol, under the motto: "Forgive, Love and Unite." On 1 February 1974 Nixon publicly thanked them for their support and officially received Moon. This brought the church into widespread public and media attention.
In the 1970s, Moon, who had seldom before spoken to the general public, gave a series of public speeches to audiences in the United States, Japan, and South Korea. The largest were a rally in 1975 against North Korean aggression in Seoul and a speech at an event organized by the Unification Church in Washington D.C.
In 1982, Moon was convicted in the United States of filing false federal income tax returns and conspiracy. His conviction was upheld on appeal in a split decision. Moon was given an 18-month sentence and a $15,000 fine. He served 13 months of the sentence at the Federal Correctional Institution, Danbury before being released on good behavior to a halfway house.
The case was the center of national freedom of religion and free speech debates. Prof. Laurence H. Tribe of the Harvard University Law School argued that the trial by jury had "doomed (Moon) to conviction based on religious prejudice." The American Baptist Churches in the U.S.A, the National Council of Churches, the National Black Catholic Clergy Caucus, and the Southern Christian Leadership Conference filed briefs in support of Moon. Many notable clergy, including Jerry Falwell and Joseph Lowery, signed petitions protesting the government's case and spoke out in defense of Moon.
In 1982 "The Washington Times" was founded by News World Communications, an international media conglomerate associated with Moon which also owned newspapers in South Korea, Japan, and South America, as well as the news agency United Press International. The political views of "The Washington Times" have often been described as conservative. The "Times" was read by many Washington DC insiders, including Ronald Reagan. By 2002 Moon had invested roughly $1.7 billion to support the "Times", which he called "the instrument in spreading the truth about God to the world".
In 2000, Moon sponsored a United Nations conference which proposed the formation of "a religious assembly, or council of religious representatives, within the structure of the United Nations."
In 2003, Moon sponsored the first Peace Cup international club football tournament. The Los Angeles Galaxy, which competes in Major League Soccer, played in South Korea in the Peace Cup. During the event Pelé, the former Brazilian Sports Minister widely regarded as the best soccer player of all time, met with Moon.
In 2009, Moon's autobiography, "As a Peace-Loving Global Citizen" (), was published by Gimm-Young Publishers in South Korea. The book became a best-seller in Korea and Japan. Said to be the inspiration of Gimm-Young CEO Eun Ju Park, a devout Buddhist, the book focused more on Moon's role as a Korean patriot and an international peace advocate than as a religious figure.
By 2010, Moon had given much of the responsibility for the Family Federation for World Peace and Unification's religious and business activities to his children, who were then in their 30s and 40s. In 2012, the South Korean press reported that Moon traveled worldwide in his private jet, which cost $50 million.
On 14 August 2012, after suffering from pneumonia earlier in the month, Moon was admitted to Saint Mary's Hospital at The Catholic University of Korea in Seoul. On 15 August 2012, he was reported to be gravely ill and was put on a ventilator at the intensive care unit of St. Mary’s Hospital. On 31 August 2012, Moon was transferred to a church-owned hospital near his home in Gapyeong, northeast of Seoul, after suffering multiple organ failure. Moon died on the morning of 3 September 2012 (1:54 am KST) at the age of 92.
In 1964 Moon founded the Korean Culture and Freedom Foundation, which promoted the interests of South Korea and sponsored Radio Free Asia. Former U.S. Presidents Harry S Truman, Dwight D. Eisenhower and Richard Nixon were honorary presidents or directors at various times.
In 1972 Moon predicted the decline of communism, based on the teachings of the "Divine Principle": "After 7,000 biblical years — 6,000 years of restoration history plus the millennium, the time of completion — communism will fall in its 70th year. Here is the meaning of the year 1978. Communism, begun in 1917, could maintain itself approximately 60 years and reach its peak. So 1978 is the border line and afterward communism will decline; in the 70th year it will be altogether ruined. This is true. Therefore, now is the time for people who are studying communism to abandon it."
In 1980, Moon asked church members to found CAUSA International as an anti-communist educational organization, based in New York. In the 1980s, it was active in 21 countries. In the United States it sponsored educational conferences for Christian leaders as well as seminars and conferences for Senate staffers and other activists. In 1986, it produced the anti-communist documentary film "Nicaragua Was Our Home". CAUSA supported the Nicaraguan Contras.
In August 1985 the Professors World Peace Academy, an organization founded by Moon, sponsored a conference in Geneva to debate the theme "The situation in the world after the fall of the communist empire." In April 1990, Moon visited the Soviet Union and met with President Mikhail Gorbachev. Moon expressed support for the political and economic transformations under way in the Soviet Union. At the same time the Unification Church was expanding into formerly communist nations. After the dissolution of the Soviet Union in 1991, some American conservatives criticized Moon for his softening of his previous anti-communist stance.
In 1991, Moon met with Kim Il Sung, the North Korean President, to discuss ways to achieve peace on the Korean peninsula, as well as on international relations, tourism, etc. In 1994, Moon was officially invited to the funeral of Kim Il Sung, in spite of the absence of diplomatic relations between North Korea and South Korea. Moon and his church are known for their efforts to promote Korean unification.
In 2003, Korean Unification Church members started a political party in South Korea. It was named "The Party for God, Peace, Unification, and Home." In its inauguration declaration, the new party said it would focus on preparing for Korean reunification by educating the public about God and peace. Moon was a member of the Honorary Committee of the Unification Ministry of the Republic of Korea. In 2012 Moon was posthumously awarded North Korea's National Reunification Prize.
In 2005, Rev. Sun Myung Moon and his wife, Dr. Hak Ja Han Moon, founded the Universal Peace Federation (UPF), an NGO in Special Consultative Status with the United Nations Economic and Social Council (ECOSOC). In its own words: "We support and promote the work of the United Nations and the achievement of the Sustainable Development Goals."
Brazilian MPs have lobbied for Moon's projects in the National Congress of Brazil. Moon also held dialogues between members of the Israeli Knesset and the Palestinian Parliament as part of his Middle East Peace Initiatives.
Tongil Group is a South Korean business group (chaebol; "Tongil" is Korean for "unification", and the Unification Church's Korean name is "Tongilgyo"), founded in 1963 by Moon as a nonprofit organization to provide revenue for the church. Its core focus was manufacturing, but in the 1970s and 1980s it expanded by founding or acquiring businesses in pharmaceuticals, tourism, and publishing. Among Tongil Group's chief holdings are: The Ilwha Company, which produces ginseng and related products; Ilshin Stone, building materials; and Tongil Heavy Industries, machine parts including hardware for the South Korean military.
News World Communications is an international news media corporation founded by Moon in 1976. It owns United Press International, "The World and I", "Tiempos del Mundo" (Latin America), "The Segye Ilbo" (South Korea), "The Sekai Nippo" (Japan), the "Zambezi Times" (South Africa), and "The Middle East Times" (Egypt). Until 2008 it published the Washington D.C.-based newsmagazine "Insight on the News". Until 2010, it owned the "Washington Times". On 2 November 2010, Sun Myung Moon and a group of former "Times" editors purchased the "Times" from News World.
In 1982, Moon sponsored the film "Inchon", an historical drama about the Battle of Inchon during the Korean War. It was not successful critically or financially, and was criticized for its unfair treatment of the North Korean government.
In 1989, Moon founded Seongnam Ilhwa Chunma, the most successful soccer club in Korean football, having won a record 7 league titles, 2 FA Cups, 3 League Cups, and 2 AFC Champions League titles.
The church is the largest owner of sushi restaurants in the United States and is the largest employer in the Kodiak region of Alaska. The church also owns the only automobile manufacturing plant in North Korea, Pyeonghwa Motors, and is the second largest exporter of Korean goods.
In 2011, construction of the $18 million Yeosu Expo Hotel was completed; the hotel is located at the Moon-owned Ocean Resort in Yeosu, the venue of Expo 2012. The opening ceremony was attended by the governor of the province. Another hotel, The Ocean Hotel, was completed in February 2012. The Moon-owned Yeongpyeong Resort, The Ocean Resort, and Pineridge Resort were scheduled to host events for Expo 2012, the 2018 Winter Olympics, and Formula 1. Moon also managed the FIFA-accredited Peace Cup; FIFA itself has contributed more than $2 million to the Peace Cup since 2003.
Moon took a strong stance against racism and racial discrimination. In 1974 he urged Unification Church members to support an African American president of the United States: "We have had enough of white presidents. So, let's this time elect a president from the Negro race. What will you do if I say so? There's no question there. We must never forget that we are brothers and sisters in a huge human family. In any level of community, we must become like a family."
In 1981 he said that he himself was a victim of racial prejudice in the United States (concerning his prosecution on tax charges in United States v. Sun Myung Moon), saying: "I would not be standing here today if my skin were white or my religion were Presbyterian. I am here today only because my skin is yellow and my religion is Unification Church. The ugliest things in this beautiful country of America are religious bigotry and racism."
Several African American organizations and individuals spoke out in defense of Moon at this time including the National Black Catholic Clergy Caucus, the Southern Christian Leadership Conference, the National Conference of Black Mayors, and Joseph Lowery who was then the head of the Southern Christian Leadership Conference.
In a later controversy over the use of the word "Moonie" by the American news media, which was said to be offensive, Moon's position was supported by civil rights activists Ralph Abernathy and James Bevel.
In 2000 Moon and The Nation of Islam leader Louis Farrakhan got together to sponsor the Million Family March, a rally in Washington D.C. to celebrate family unity and racial and religious harmony; as well as to address other issues, including abortion, capital punishment, health care, education, welfare and Social Security reform, substance abuse prevention, and overhaul of the World Bank and International Monetary Fund. In his keynote speech Farrakhan called for racial harmony.
In 1962, Moon and other church members founded the Little Angels Children's Folk Ballet of Korea, a children's dance troupe which presents traditional Korean folk dances. He said that this was to project a positive image of South Korea to the world. In 1984, Moon founded the $8-million Universal Ballet project, with Soviet-born Oleg Vinogradov as its art director and Moon's daughter-in-law Julia as its prima ballerina. It was described by "The New York Times" as the top ballet company in Asia. In 1989, Moon founded the Universal Ballet Academy in Washington, D.C., which later changed its name to the Kirov Academy of Ballet.
Moon held honorary degrees from more than ten universities and colleges worldwide; at least one of which, the University of Bridgeport, received significant funding from his organizations. He was a member of the Honorary Committee of the Unification Ministry of the Republic of Korea. In 1985, he and his wife received Doctor of Divinity degrees from Shaw University.
In 2004, at an event in the Dirksen Senate Office Building in Washington, D.C., Moon was honored as the Messiah. This attracted much public attention and was criticized by "The New York Times" and "The Washington Post" as a possible violation of the principle of separation of church and state in the United States. Some of the political figures who had attended the event later told reporters that they had been misled as to its nature.
Several months after his death, an award named after him and his wife, the Sunhak Peace Prize, was proposed to carry on his stated wish to "recognize and empower innovations in human development, conflict resolution and ecological conservation." Its laureates receive a certificate, a medal, and US$1 million.
Moon was posthumously awarded North Korea's National Reunification Prize in 2012 and a meritorious award by K-League. On the first anniversary of Moon's death, North Korean leader Kim Jong-un expressed condolences to Han and the family saying: "Kim Jong-un prayed for the repose of Moon, who worked hard for national concord, prosperity and reunification and world peace."
In 2013, Zimbabwean Prime Minister Morgan Tsvangirai stated: "I remain greatly inspired by people like Reverend Dr. Sun Myung Moon, whose work and life across continents continue to impact positively on the lives of millions of others in the world."
Moon's claim to be the Messiah and the Second Coming of Christ has been disputed by both Christian and Jewish scholars. The "Divine Principle" was labeled as heretical by Protestant churches in South Korea, including Moon’s own Presbyterian Church. In the United States it was rejected by ecumenical organizations as being non-Christian. Protestant commentators have also criticized Moon's teachings as being contrary to the Protestant doctrine of salvation by faith alone. In their influential book "The Kingdom of the Cults" (first published in 1965), Walter Ralston Martin and Ravi K. Zacharias disagreed with the "Divine Principle" on the issues of the divinity of Christ, the virgin birth of Jesus, Moon's belief that Jesus should have married, the necessity of the crucifixion of Jesus, a literal resurrection of Jesus, as well as a literal second coming of Jesus. Commentators have criticized the "Divine Principle" for saying that the First World War, the Second World War, the Holocaust, and the Cold War served as indemnity conditions to prepare the world for the establishment of the Kingdom of God. In 2003, George D. Chryssides of the University of Wolverhampton criticized Moon for introducing doctrines which tended to divide the Christian church rather than uniting it, which was his stated purpose in founding the Unification movement (originally named the Holy Spirit Association for the Unification of World Christianity). In his 2009 autobiography Moon himself wrote that he did not originally intend on founding a separate denomination.
During the Cold War Moon was criticized by both the mainstream media and the alternative press for his anti-communist activism, which many said could lead to World War Three and a nuclear holocaust. Moon's anti-communist activities received financial support from controversial Japanese millionaire and activist Ryōichi Sasakawa. In 1977 the Subcommittee on International Organizations of the Committee on International Relations of the United States House of Representatives, while investigating the Koreagate scandal, found that the Korean Central Intelligence Agency (KCIA) had worked with the Unification Church to gain political influence within the United States, with some members working as volunteers in Congressional offices. Together they founded the Korean Cultural Freedom Foundation, a nonprofit organization which undertook public diplomacy for the Republic of Korea. The committee also investigated possible KCIA influence on Moon's campaign in support of Richard Nixon.
In the 1990s when Moon began to offer the Unification marriage blessing ceremony to members of other churches and religions he was criticized for creating possible confusion. In 1998, journalist Peter Maass reported that some Unification members were dismayed and grumbled when Moon extended the Blessing to non-members because they had not gone through the same course that members had. In 2001, Moon came into conflict with the Roman Catholic Church when Catholic archbishop Emmanuel Milingo and Maria Sung, a 43-year-old Korean acupuncturist, married in a blessing ceremony, presided over by Rev. and Mrs. Moon. Following his marriage the Archbishop was called to the Vatican by Pope John Paul II, where he was asked not to see his wife anymore, and to move to a Capuchin monastery. Sung went on a hunger strike to protest their separation. This attracted much media attention. Milingo is now an advocate of the removal of the requirement for celibacy by priests in the Catholic Church. He is the founder of Married Priests Now!.
In 2000 Moon was criticized, including by some members of his church, for his support of controversial Nation of Islam leader Louis Farrakhan's Million Family March. Moon was also criticized for his relationship with controversial Jewish scholar Richard L. Rubenstein, an advocate of the "death of God theology" of the 1960s. Rubenstein was a defender of the Unification Church and served on its advisory council, as well as on the board of directors of the church-owned "Washington Times" newspaper. In the 1990s, he served as president of the University of Bridgeport, which was then affiliated with the church. In 1998 the Egyptian newspaper "Al-Ahram" criticized Moon's possible relationship with Israeli prime minister Benjamin Netanyahu and wrote that the "Washington Times" editorial policy was "rabidly anti-Arab, anti-Muslim and pro-Israel."
Moon opposed homosexuality and compared gay people to "dirty dung-eating dogs". He said that "gays will be eliminated" in a "purge on God's orders". In 2009 Moon's support for the Japan–Korea Undersea Tunnel was criticized in Japan and South Korea as a possible threat to both nations' interests and national identities. Other criticisms include: Moon's neglect of his wife, Hak Ja Han, and his appointments of their children and their spouses to leadership positions in the church and related businesses, including their daughter In Jin Moon to the presidency of the Unification Church of the United States against the wishes of many church members; his support of conservatives within the government of South Korea, his assignment of movement members and resources to business projects and political activism, including "The Washington Times"; as well as the relationship between the Unification Church and Islam, especially following the September 11 attacks in New York City. Moon has also been criticized for his advocacy of a world-wide "automatic theocracy," as well as for telling his followers that they should become "crazy for God."
The "Divine Principle" itself says about Moon: "With the fullness of time, God has sent one person to this earth to resolve the fundamental problems of human life and the universe. His name is Sun Myung Moon. For several decades he wandered through the spirit world so vast as to be beyond imagining. He trod a bloody path of suffering in search of the truth, passing through tribulations that God alone remembers. Since he understood that no one can find the ultimate truth to save humanity without first passing through the bitterest of trials, he fought alone against millions of devils, both in the spiritual and physical worlds, and triumphed over them all. Through intimate spiritual communion with God and by meeting with Jesus and many saints in Paradise, he brought to light all the secrets of Heaven."
In 1978 Rodney Sawatsky wrote in an article in "Theology Today": "Why trust Rev. Moon's dreams and visions of the new age and his role in it, we ask? Most converts actually have had minimal contact with him. Frederick Sontag (Sun Myung Moon and the Unification Church, Abingdon, 1977) in his interviews with Moon appears to have found a pleasant but not an overwhelming personality. Charisma, as traditionally understood, seems hardly applicable here. Rather, Moon provides a model. He suffered valiantly, he knows confidently, he prays assuredly, he lives lovingly, say his followers. The Divine Principle is not an unrealizable ideal; it is incarnate in a man, it lives, it is imitable. His truth is experienced to be their truth. His explanation of the universe becomes their understanding of themselves and the world in which they live."
In 1980 sociologist Irving Louis Horowitz commented: "The Reverend Moon is a fundamentalist with a vengeance. He has a belief system that admits of no boundaries or limits, an all-embracing truth. His writings exhibit a holistic concern for the person, society, nature, and all things embraced by the human vision. In this sense the concept underwriting the Unification church is apt, for its primary drive and appeal is unity, urging a paradigm of essence in an overly complicated world of existence. It is a ready-made doctrine for impatient young people and all those for whom the pursuit of the complex has become a tiresome and fruitless venture."
In 1998 investigative journalist Peter Maass wrote in an article in "The New Yorker": "There are, certainly, differing degrees of devotion among Moon's followers; the fact that they bow at the right moment or shout "Mansei!" in unison doesn't mean they believe everything Moon says, or do precisely what he commands. Even on important issues, like Moon's claiming to be the messiah, there are church members whom I met, including a close aide to Moon, who demur. A religious leader whom they respect and whose theology they believe, yes; the messiah, perhaps not."
In his 2004 book "The New Religious Movement Experience in America" Eugene V. Gallagher wrote: "The "Divine Principle's" analysis of the Fall sets the stage for the mission of Rev. Moon, who in the last days brings a revelation that offers humankind the chance to return to an Edenic state. The account in the "Divine Principle" offers Unificationists a comprehensive context for understanding human suffering." | https://en.wikipedia.org/wiki?curid=29074 |
Statute of frauds
The statute of frauds refers to the requirement that certain kinds of contracts be memorialized in writing, signed by the party to be charged, with sufficient content to evidence the contract.
The term "statute of frauds" comes from an Act of the Parliament of England (29 Chas. 2 c. 3) passed in 1677 (authored by Lord Nottingham assisted by Sir Matthew Hale, Sir Francis North and Sir Leoline Jenkins. and passed by the Cavalier Parliament), the title of which is An Act for Prevention of Frauds and Perjuries. Many common law jurisdictions have made similar statutory provisions, while a number of civil law jurisdictions have equivalent legislation incorporated into their civil codes. The original English statute itself may still be in effect in a number of Canadian provinces, depending on the constitutional or reception statute of English law, and any subsequent legislative developments.
Traditionally, the statute of frauds requires a signed writing in the following circumstances:
In an action for specific performance of a contract to convey land, the agreement must be in writing to satisfy the statute of frauds. The statute is satisfied if the contract to convey is evidenced by a writing or writings containing the essential terms of a purchase and sale agreement and signed by the party against whom the contract is to be enforced. If there is no written agreement, a court of equity can specifically enforce an oral agreement to convey only if the part performance doctrine is satisfied. In most jurisdictions, part performance is proven when the purchaser pays the purchase price, takes possession of the land, and makes improvements on the land, all with the permission of the seller. In no jurisdiction is payment of the purchase price alone sufficient to establish part performance.
Under common law, the statute of frauds also applies to contract modifications. For example, suppose there is an oral agreement to lease a car for nine months. Immediately after taking possession, the lessee decides that he really likes the car and makes an oral offer to the lessor to extend the term of the lease by an additional six months. Although neither agreement alone comes under the statute of frauds, the oral extension modifies the original contract to make it a fifteen-month lease (nine months plus the additional six), thereby bringing it under the statute, as the contract now exceeds twelve months in duration. In theory, the same principle works in reverse as well, such that an agreement to reduce a lease from fifteen months to nine months would not require a writing. However, many jurisdictions have enacted statutes that require a writing for such situations.
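As a toy illustration of the arithmetic above (not a statement of law; the twelve-month trigger is a simplification of the actual "cannot be performed within one year" rule, and the function name is hypothetical):

    def modified_lease_needs_writing(original_months: int, extension_months: int) -> bool:
        # Under the simplified rule, a lease whose total term exceeds
        # twelve months falls within the statute of frauds.
        return original_months + extension_months > 12

    assert modified_lease_needs_writing(9, 6)        # 15 months: writing required
    assert not modified_lease_needs_writing(9, 0)    # 9 months: oral lease suffices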
A defendant in a contract case who wants to use the statute of frauds as a defense must raise it as an affirmative defense in a timely manner. The burden of proving that a written contract exists comes into play only when a statute of frauds defense is raised by the defendant.
An agreement may be enforced even if it does not comply with the statute of frauds in the following situations:
The Statute of Frauds recites that it was enacted for the ". . . prevention of many fraudulent practices which are commonly endeavored to be upheld by perjury . . .". The mischief arising from claimants asserting oral agreements was to be avoided by requiring that certain contracts be evidenced by "some memorandum or note thereof . . . in writing and signed by the party to be charged therewith . . .". Contracts respecting land "created by livery and seisen only or by parole" would not be enforced absent such a writing.
It quickly became apparent to the common law judges that the Statute might itself become an instrument of fraud (or at least injustice) if it was strictly enforced with respect to contracts that were wholly or partly performed.
The courts developed the concept of "part performance" as an exception. If a contract concerning land was partly performed, that could displace the need for a note or memorandum in writing signed by the party to be charged.
It was one thing to create an exception that displaced the need for a memorandum in writing, but something else to completely nullify the Statute's operation. The thrust of the Statute was that contracts concerning land could not be proved by parol evidence alone. Thus, part performance might be an exception, but it could not, in effect, mean that the underlying contract could be proven by parol evidence. In developing the "part performance" exception, a balancing of the competing considerations was required. An important factor in the case law became that the part performance must be "unequivocally" related to the alleged contract.
The Statute of Frauds was passed in 1695 in Ireland. The statute is one of the few pre-Independence laws that survived the Statute Law Revision (Pre-1922) Act 2005 and the Statute Law Revision Act 2007, and remains largely in force today.
Some effects of the law have been softened by equity, for example the requirement that all contracts for sale of land be evidenced in writing can be circumvented by reliance on the doctrine of part performance.
The Statute of Frauds (1677) was largely repealed in England and Wales by the Law Reform (Enforcement of Contracts) Act 1954 (2 & 3 Eliz 2 c 34). The only provision of it extant is part of Section 4 which means that contracts of guarantee (surety for another's debt) are unenforceable unless evidenced in writing. This requirement is clarified by section 3 of the Mercantile Law Amendment Act 1856 (19 & 20 Vict 97) which provides that the consideration for the guarantee need not appear in writing or by necessary inference from a written document.
Section 6 of the Statute of Frauds Amendment Act 1828 (9 Geo 4 c 14) (commonly known as Lord Tenterden's Act) was enacted to prevent Section 4 being circumvented by bringing an action against a verbal guarantor for the tort of deceit (the tort in "Pasley v. Freeman"). A common summary of the law is "a verbal guarantee (for a debt) isn't worth the paper it is written on".
Provisions in section 4 as to formalities for contracts for the sale of land were repealed by Schedule 7 to the Law of Property Act 1925 (15 Geo 5 c 20), however the requirement that contracts for the sale of land be evidenced in writing was maintained by section 40 of that Act, subsequently replaced by section 2 of the Law of Property (Miscellaneous Provisions) Act 1989 (c 34).
Section 6 of the Mercantile Law Amendment Act Scotland 1856 was derived from those parts of section 4 of the Statute of Frauds (1677) which relate to contracts of guarantee and from section 6 of the Statute of Frauds Amendment Act 1828.
It was repealed on 1 August 1995 by the Requirements of Writing (Scotland) Act 1995, sections 14(2) and Schedule 5 (with ss. 9(3)(5)(7), 13, 14(3)).
In the United States, for contracts for the sale of goods that fall under the Uniform Commercial Code, additional exceptions may apply:
Every state has a statute that requires certain types of contracts to be in writing and signed by the party to be charged. The most common requirements are for contracts that involve the sale or transfer of land, and contracts that cannot be completed within one year. When the statute of frauds applies, a typical statute requires that the writing commemorating the agreement identify the contracting parties, recite the subject matter of the contract so that it is reasonably identifiable, and include the important terms and conditions of agreement.
The statute of frauds in various states comes in three types:
In addition to the statute of frauds as conventionally defined, the State of Texas has two rules that govern the litigation process, each of which also has the character of a statute of frauds. One is a rule of general applicability and requires agreements between counsel (or a party, if self-represented) to be in writing to be enforceable. Tex. R. Civ. P. 11.
Agreements under Texas Rule of Civil Procedure 11 are called "Rule 11 Agreements" and may either concern settlement or any procedural aspect, such as an agreement regarding scheduling, continuances of trial settings, or discovery matters. The rule has existed since 1840 and has contained the filing requirement since 1877. The number designation can cause confusion to non-Texas attorneys because the federal rule 11 is the sanctions rule, whose state-court counterpart has the number designation 13 under the Texas Rules of Civil Procedure (TRCP).
The other rule that is in the nature of a statute of frauds governs fee agreements with clients when the attorney is to be compensated based on the outcome of the case. The Texas Government Code requires that "[a] contingent fee contract for legal services must be in writing and signed by the attorney and client." TEX. GOV'T CODE ANN. § 82.065(a).
The classic example is a contingent fee contract in a personal injury case that provides for the claimant's lawyer to receive a certain percentage of the settlement amount (or of the amount awarded by judgment) net of litigation costs, with the percentages typically staggered and increasing based on whether a settlement was obtained before lawsuit is filed, after a lawsuit was filed but before trial, or whether a judgment favorable to the client was obtained through trial. The other scenario is a contingency fee contract based on cost savings achieved (for a client who is a defendant sued for a money judgment) or based on other specified litigation objectives. In those cases, the client will not recover any money from his opponent in the lawsuit, and will have to pay his attorney from his or her own funds in accordance with the terms of the agreement, once the matter is concluded favorably. When the client does not pay, some attorneys then sue the client on the contingency fee contract, or in quantum meruit in the alternative. See, e.g., Shamoun & Norman, LLP v. Hill, 483 S.W.3d 767 (Tex. App.-Dallas 2016), reversed on other grounds by Hill v. Shamoun & Norman, LLP, No. 16-0107 (Tex. April 13, 2018). The attorney-vs-client fee-dispute issue generally does not arise in personal injury cases because the settlement funds from the settling party or judgment-debtor are disbursed through the attorney of the party entitled to them, net of costs and the contingency fee component.
In addition to general statutes of frauds, under Article 2 of the Uniform Commercial Code (UCC), every state except Louisiana has adopted an additional statute of frauds that relates to the sale of goods. Pursuant to the UCC, contracts for the sale of goods where the price equals $500 or more fall under the statute of frauds, with the exceptions for professional merchants performing their normal business transactions, and for any custom-made items designed for one specific buyer.
The application of the statute of frauds to dealings between merchants has been modified by provisions of the UCC. There is a "catch-all" provision in the UCC for personal property not covered by any other specific law, stating that a contract for the sale of such property where the purchase price exceeds $500 is not enforceable unless memorialized by a signed writing. The most recent UCC revision increases the triggering point for the UCC Statute of Frauds to $5,000, but states have been slow to amend their versions of the statute to increase the trigger point.
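A hedged sketch of that price trigger in Python (illustrative only; the $500 and $5,000 figures come from the paragraph above, real applicability turns on the merchant and custom-goods exceptions among many other facts, and the function name is hypothetical):

    def ucc_writing_required(price: float, revised_statute: bool = False) -> bool:
        # UCC Article 2: a contract for the sale of goods at or above the
        # trigger price is unenforceable without a signed writing, subject
        # to exceptions (e.g., merchants, custom-made goods, admissions).
        trigger = 5000 if revised_statute else 500
        return price >= trigger

    assert ucc_writing_required(500)                            # classic $500 trigger
    assert not ucc_writing_required(500, revised_statute=True)  # revised $5,000 trigger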
For purposes of the UCC, a defendant who admits the existence of the contract in his pleadings, under oath in a deposition or affidavit, or at trial, may not use the statute of frauds as a defense. However, a statute of frauds defense may still be available under a state's general statute.
With respect to securities transactions, the Uniform Commercial Code has abrogated the statute of frauds. The drafters of the most recent revision commented that "with the increasing use of electronic means of communication, the statute of frauds is unsuited to the realities of the securities business." | https://en.wikipedia.org/wiki?curid=29079 |
Sovereign immunity
Sovereign immunity, or crown immunity, is a legal doctrine whereby a sovereign or state cannot commit a legal wrong and is immune from civil suit or criminal prosecution; strictly speaking, in modern texts, this immunity applies in the sovereign's own courts. A similar, stronger rule as regards foreign courts is named state immunity.
In its older sense, sovereign immunity is the original forebear of state immunity based on the classical concept of sovereignty in the sense that a sovereign could not be subjected without his or her approval to the jurisdiction of another.
There are two forms of sovereign immunity: immunity from suit and immunity from enforcement.
Immunity from suit means that neither a sovereign/head of state in person nor any "in absentia" or representative form (nor, to a lesser extent, the state) can be a defendant or subject of court proceedings, or of most equivalent forums such as arbitration awards and tribunal awards/damages. Immunity from enforcement means that even if a person succeeds in some way against their sovereign or state, the resulting judgment may be without means of enforcement. Separation of powers or natural justice, coupled with a political status other than a totalitarian state, dictates that there be broad exceptions to immunity, such as statutes which expressly bind the state (a prime example being constitutional laws) and judicial review.
Furthermore, the sovereign immunity of a state entity may be waived.
In constitutional monarchies the sovereign is the historical origin of the authority which creates the courts. Thus the courts had no power to compel the sovereign to be bound by them as they were created by the sovereign for the protection of his or her subjects. This rule was commonly expressed by the popular legal maxim "rex non potest peccare", meaning "the king can do no wrong".
There is no automatic Crown immunity in Australia, and the Australian Constitution does not establish a state of unfettered immunity of the Crown in respect of the States and the Commonwealth. The Constitution of Australia establishes matters on which the States and the Commonwealth legislate independently of each other, in practice resulting in the States legislating on some things and the Commonwealth on others. In some circumstances this can create ambiguity as to the applicability of legislation where there is no clearly established Crown immunity. The Australian Constitution does, however, declare in s. 109 that, "When a law of a State is inconsistent with a law of the Commonwealth, the latter shall prevail, and the former shall, to the extent of the inconsistency, be invalid." Based on this, and depending on the context of application and on whether a particular statute infringes on the executive powers of the State or the Commonwealth, the Crown may or may not be immune from any particular statute.
Many Acts passed in Australia, at both the State and the Federal level, contain a section declaring whether the Act binds the Crown and, if so, in what respect.
Whilst there is no ambiguity surrounding the first aspect of this declaration, binding the Crown in respect of the State in question, there have been several cases concerning the interpretation of the second aspect, which extends it to the Crown in its other capacities. Rulings by the High Court of Australia on specific matters of conflict between the application of State laws to Commonwealth agencies have established the interpretation that the Crown in all of its other capacities includes the Commonwealth; therefore, if a State Act contains this text, the Act may bind the Commonwealth, subject to the s. 109 test of inconsistency.
A landmark case which set a precedent for challenging broad Crown immunity and established tests for the applicability of State laws to the Commonwealth was "Henderson v Defence Housing Authority" in 1997. The case involved the arbitration of a dispute between Mr. Henderson and the Defence Housing Authority (DHA). Mr. Henderson owned a house which the DHA had leased to provide housing to members of the Australian Defence Force (ADF). Under the NSW "Residential Tenancies Act 1997", Mr. Henderson sought orders from the Residential Tenancies Tribunal allowing him to enter the premises for the purpose of conducting inspections. In response, the DHA claimed that, as a Commonwealth agency, the legislation of NSW did not apply to it, and further sought writs of prohibition attempting to restrain Mr. Henderson from pursuing the matter further. Up until this point, the Commonwealth and its agencies had claimed an unfettered immunity from State legislation and had used s. 109 to justify this position, arguing specifically that the NSW Act was in conflict with the Act which created the DHA and that s. 109 of the Constitution applied. Mr. Henderson took the case to the High Court, where a panel of seven justices heard the matter. By a majority decision of 6:1, the court ruled that the DHA was bound by the NSW Act on the basis that the NSW Act did not limit, deny or restrict the activities of the DHA but sought to regulate them, an important distinction which was further explained in the rulings of several of the justices. It was ruled that the NSW Act was one of general application and that the Crown (in respect of the Commonwealth) therefore could not be immune from it, citing other cases in which the same ruling had been made and noting that immunity from laws of general application would be contrary to the rule of law. As a result of this case, the Commonwealth cannot claim a broad constitutional immunity from State legislation.
In practice, three tests have been developed to determine whether a State law applies to the Commonwealth (and vice versa).
If these three tests are satisfied, the Act binds the Crown in respect of the Commonwealth. It is important to note that in Australia there is no clear automatic Crown immunity or lack thereof; rather, there is a rebuttable presumption that the Crown is not bound by a statute, as noted in "Bropho v State of Western Australia". The Crown's immunity may also extend to other parties in certain circumstances, as held in "Australian Competition and Consumer Commission v Baxter Healthcare".
Article 88 of the Constitution of Belgium states: "The King’s person is inviolable; his ministers are accountable."
According to the constitution of Bhutan, the monarch is not answerable in a court of law for his or her actions.
Canada inherited the common-law version of Crown immunity from British law. However, over time the scope of Crown immunity has been steadily reduced by statute law. As of 1994, section 14 of the "Alberta Interpretation Act" stated, "No enactment is binding on Her Majesty or affects Her Majesty or Her Majesty's rights or prerogatives in any manner, unless the enactment expressly states that it binds Her Majesty." However, in more recent times "All Canadian provinces ... and the federal government (the Crown Liability Act) have now rectified this anomaly by passing legislation which leaves the 'Crown' liable in tort as a normal person would be. Thus, the tort liability of the government is a relatively new development in Canada, statute-based, and is not a fruit of common law."
Since 1918, it has been held that provincial legislatures cannot bind the federal Crown, as Fitzpatrick CJ noted in "Gauthier v The King".
It has also been a constitutional convention that the Crown in right of each province is immune from the jurisdiction of the courts in other provinces. However, this is now in question.
Lieutenant Governors do not enjoy the same immunity as the Sovereign in matters not relating to the powers of the office. In 2013, the Supreme Court refused to hear the request of former Lieutenant Governor of Quebec Lise Thibault to have charges against her dropped. She was being prosecuted by the Attorney General of Quebec for misappropriation of public funds but invoked royal immunity on the basis that "the Queen can do no wrong". As per convention, the court did not disclose its reasons for not considering the matter. Thibault later petitioned the Court of Quebec on the same grounds. Judge St-Cyr again rejected her request, noting that constitutional law does not grant a lieutenant governor the same benefits as the Queen and that, in her case, royal immunity would apply only to actions involving official state functions, not personal ones. She was eventually found guilty and sentenced to 18 months in jail but was granted conditional release after serving six months.
China has consistently claimed that a basic principle of international law is for states and their property to have absolute sovereign immunity, and it objects to restrictive sovereign immunity. The Chinese position holds that a state can waive its immunity by voluntarily stating so, but that a government's intervention in a suit (e.g. to make protests) should not be viewed as a waiver of immunity. Chinese state-owned companies considered instrumental to the state have previously claimed sovereign immunity in lawsuits brought against them in foreign courts; China's view is that sovereign immunity is a lawful right and interest that its enterprises are entitled to protect. Examples of Chinese state-owned companies that have claimed sovereign immunity in foreign lawsuits include the Aviation Industry Corporation of China (AVIC) and China National Building Material.
Article 13 of the Constitution of Denmark states: "The King shall not be answerable for his actions; his person shall be sacrosanct. The Ministers shall be responsible for the conduct of the government; their responsibility shall be determined by Statute." Accordingly, the monarch cannot be sued in his or her personal capacity. On the other hand, this immunity from lawsuits does not extend to the state as such, and article 63 explicitly authorises the courts to judge the executive authority: "The courts of justice shall be empowered to decide any question relating to the scope of the executive's authority; though any person wishing to question such authority shall not, by taking the case to the courts of justice, avoid temporary compliance with orders given by the executive authority." Furthermore, no other member of the royal family can be prosecuted for any crime, under Article 25 of the old absolutist constitution Lex Regia (The King's Law), which is still in force and states: "They shall answer to no magistrate judges, but their first and last Judge shall be the King, or to whom He to that decrees."
The Holy See, of which the current pope is head (often referred to by metonymy as the Vatican or Vatican City State, a distinct entity), claims sovereign immunity for the pope, supported by many international agreements.
In 2011, the Hong Kong Court of Final Appeal ruled that absolute sovereign immunity applies in Hong Kong, finding that Hong Kong, as a Special Administrative Region of China, could not have policies on state immunity that were inconsistent with China's. The ruling was an outcome of the "Democratic Republic of the Congo v FG Hemisphere Associates" case in 2011.
The Democratic Republic of the Congo and its state-owned electricity company, Société nationale d'électricité (SNEL), defaulted on payments of a debt owed to an energy company, Energoinvest. In arbitration, Energoinvest was awarded damages against the Congolese government and SNEL; the award was then assigned by Energoinvest to FG Hemisphere Associates LLC.
FG Hemisphere subsequently learned that the Congolese government had later entered into a separate joint venture with Chinese companies, under which the Congolese government would be paid US$221 million in mining entry fees. FG Hemisphere therefore applied to collect these fees in order to enforce the earlier arbitral award. The Congolese government asserted sovereign immunity in the legal proceedings, and the matter was eventually brought to the Hong Kong Court of Final Appeal when the Congolese government fought to overturn an earlier Court of Appeal decision that had ruled against its claim of immunity.
The Hong Kong Court of Final Appeal ruled 3:2 that the Congolese government had not waived its immunity in the Hong Kong courts, and that, as a Special Administrative Region of China, Hong Kong could not have policies on state immunity that were inconsistent with China's. Therefore, the doctrine of sovereign immunity applied in Hong Kong should be absolute, and may be invoked when jurisdiction is sought in the foreign court in relation to an application to enforce a foreign judgment or arbitral award, or when execution is sought against assets in the foreign state. This means that sovereign states are absolutely immune from the jurisdiction of Hong Kong courts, including in commercial claims, unless a state waives its immunity. To waive immunity, there must be an express, unequivocal submission to the jurisdiction of the Hong Kong courts "in the face of the court". Claimants must establish that the state party has waived its entitlement to immunity at the relevant stage before proceedings can occur in court.
According to article 11 of the Constitution of Iceland the president can only be held accountable and be prosecuted with the consent of parliament.
According to Article 361 of the Constitution of India, no legal action can be taken in a court of law against the President of India or the Governors of Indian states as long as that person holds office. However, the officeholder can be impeached and then sued for his or her actions.
In "Byrne v Ireland", the Irish Supreme Court declared that sovereign immunity had not survived the creation of the Irish Free State in 1922, and that accordingly the state could be sued for and held vicariously liable for the acts and omissions of its servants and agents.
According to the Constitution, the President of the Italian Republic is not accountable and is not responsible for any act of his office, unless he has committed high treason or attempted to subvert the Constitution, as stated in Article 90.
The Italian Penal Code makes it a criminal offence to insult the honor and prestige of the President (Art. 278), and until 2006 it was an offence to publicly give the President responsibility for actions of the Government (Art. 279 – abrogated).
The Italian Constitutional Court has declared partially incompatible with the Italian Constitution a law that forced courts to delay all trials against the Italian Prime Minister while he is in office. Under the revised version, trial hearings have to be scheduled in agreement between the judge and the Government.
In Malaysia, an amendment to the constitution in 1993 made it possible to bring proceedings against the king or any ruler of a component state in the Special Court. Prior to 1993, rulers, in their personal capacity, were immune from any proceedings brought against them.
Section 308 of the Nigerian constitution of 1999 provides elected executive officers (namely the President and Vice President and the Governors and Deputy Governors of the states) with immunity from court proceedings that would compel their attendance. This immunity extends to acts done in their official capacities, so that they are not personally responsible for acts done on behalf of the state. However, the immunity does not extend to acts done in abuse of the powers of their office, for which they are liable upon the expiration of their tenure. The judiciary, it should be noted, has absolute immunity for actions and decisions taken in its official capacity.
Article 5 of the Constitution of Norway states: "The King's person is sacred; he cannot be censured or accused. The responsibility rests with his Council."
Accordingly, the monarch cannot be prosecuted or sued in his or her personal capacity, but this immunity does not extend to the state as such. Neither does the immunity extend to the monarch in his capacity as an owner of or stakeholder in real property, or as an employer, provided that the suit does not allege personal responsibility on the monarch's part.
Article XVI, Section 3 of the Philippines Constitution states: "The State may not be sued without its consent."
The Spanish monarch is personally immune from prosecution for acts committed by government ministers in the King's name, according to Title II, Section 56, Subsection 3 of the Spanish Constitution of 1978.
At the time of the June 2014 abdication of King Juan Carlos, the Spanish constitution did not state whether an abdicated monarch retains legal immunity, but the government planned to make changes to allow this. Legislation has since been passed; unlike his previous immunity, however, it does not completely shield the former sovereign. Juan Carlos must answer to the supreme court, a type of protection similar to that afforded to many high-ranking civil servants and politicians in Spain. The legislation stipulates that all outstanding legal matters relating to the former king be suspended and passed "immediately" to the supreme court.
Under the Constitution of Sri Lanka, the President of Sri Lanka has sovereign immunity for the duration of the period in office.
Chapter 5, Article 8 of the Swedish Constitution states: "The King or Queen who is Head of State cannot be prosecuted for his or her actions. Nor can a Regent be prosecuted for his or her actions as Head of State." This concerns only the King as a private person, since he does not appoint the government, nor do any public officials act in his name. It does not concern other members of the Royal Family, except when they exercise the office of Regent because the King is unable to serve. It is a disputed matter among Swedish constitutional lawyers whether the article also implies that the King is immune from lawsuits in civil cases not involving prosecution.
In Singapore, state immunities are codified in the State Immunity Act of 1979, which closely resembles the United Kingdom's State Immunity Act 1978. Singapore's State Immunity Act has phrases identical to those of Section 9 of the United Kingdom's State Immunity Act, and does not allow a foreign state which has agreed to submit a dispute to arbitration to claim jurisdictional immunity in judicial proceedings relating to the agreed arbitration, i.e. "where a State has agreed in writing to submit a dispute which has arisen, or may arise, to arbitration, the state is not immune as respects proceedings in the courts in Singapore which relate to the arbitration".
The President of Singapore has, to a certain extent, sovereign immunity, subject to clause 22k(4). (See Part V, under government, regarding the President of Singapore.)
Historically, the general rule in the United Kingdom has been that the Crown could never be prosecuted or proceeded against in either criminal or civil cases. The only means by which civil proceedings could be brought were by petition of right or by suit against the Attorney General for a declaration.
The position was drastically altered by the Crown Proceedings Act 1947 which made the Crown (when acting as the government) liable as of right in proceedings where it was previously only liable by virtue of a grant of a fiat. With limited exceptions, this had the effect of allowing proceedings for tort and contract to be brought against the Crown. Proceedings to bring writs of mandamus and prohibition were always available against ministers, because their actions derive from the royal prerogative.
Criminal proceedings are still prohibited from being brought against Her Majesty's Government unless expressly permitted by the Crown Proceedings Act.
As the Crown Proceedings Act only affected the law in respect of acts carried on by or on behalf of the British government, the monarch remains personally immune from criminal and civil actions. However, civil proceedings can, in theory, still be brought using the two original mechanisms outlined above – by petition of right or by suit against the Attorney General for a declaration.
The monarch is immune from arrest in all cases; members of the royal household are immune from arrest in civil proceedings. No arrest can be made "in the monarch's presence", or within the "verges" of a royal palace. When a royal palace is used as a residence (regardless of whether the monarch is actually living there at the time), judicial processes cannot be executed within that palace.
The monarch's goods cannot be taken under a writ of execution, nor can distress be levied on land in their possession. Chattels owned by the Crown, but present on another's land, cannot be taken in execution or for distress. The Crown is not subject to foreclosure.
In United States law, state, federal and tribal governments generally enjoy immunity from lawsuits. Local governments typically enjoy immunity from some forms of suit, particularly in tort.
In the US, sovereign immunity falls into two categories: absolute immunity and qualified immunity.
In some situations, sovereign immunity may have been waived by law.
Judicial immunity is a specific form of absolute immunity.
The federal government has sovereign immunity and may not be sued anywhere in the United States unless it has waived its immunity or consented to suit. The United States has waived sovereign immunity to a limited extent, mainly through the Federal Tort Claims Act, which waives the immunity if a tortious act of a federal employee causes damage, and the Tucker Act, which waives the immunity over claims arising out of contracts to which the federal government is a party. The United States as a sovereign is immune from suit unless it unequivocally consents to being sued. The United States Supreme Court in "Price v. United States" observed: "It is an axiom of our jurisprudence. The government is not liable to suit unless it consents thereto, and its liability in suit cannot be extended beyond the plain language of the statute authorizing it." "Price v. United States", 174 U.S. 373, 375-76 (1899).
In "Hans v. Louisiana" (1890), the Supreme Court of the United States held that the Eleventh Amendment (1795) re-affirms that states possess sovereign immunity and are therefore generally immune from being sued in federal court without their consent. In later cases, the Supreme Court has strengthened state sovereign immunity considerably. In "Blatchford v. Native Village of Noatak" (1991), the court explained that
In "Alden v. Maine" (1999), the Court explained that while it has
Writing for the Court in "Alden", Justice Anthony Kennedy argued that in view of this, and given the limited nature of congressional power delegated by the original unamended Constitution, the court could not "conclude that the specific Article I powers delegated to Congress necessarily include, by virtue of the Necessary and Proper Clause or otherwise, the incidental authority to subject the States to private suits as a means of achieving objectives otherwise within the scope of the enumerated powers".
However, a "consequence of [the] Court's recognition of preratification sovereignty as the source of immunity from suit is that "only" States and "arms of the State" possess immunity from suits authorized by federal law". "Northern Insurance Company of New York v. Chatham County" (2006, emphasis added). Thus, cities and municipalities lack sovereign immunity, "Jinks v. Richland County" (2003), and counties are not generally considered to have sovereign immunity, even when they "exercise a 'slice of state power. "Lake Country Estates, Inc. v. Tahoe Regional Planning Agency" (1979). Nor are school districts, per "Mt. Healthy City School District Board of Education v. Doyle" (1977).
Additionally, Congress can abrogate state sovereign immunity when it acts pursuant to powers delegated to it by any amendments ratified after the Eleventh Amendment. The abrogation doctrine, established by the Supreme Court in "Fitzpatrick v. Bitzer" (1976), is most often implicated in cases that involve Section 5 of the Fourteenth Amendment, which explicitly allows Congress to enforce its guarantees on the states. | https://en.wikipedia.org/wiki?curid=29081 |
Social geography
Social geography is the branch of human geography that is most closely related to social theory in general and sociology in particular, dealing with the relation of social phenomena and its spatial components. Though the term itself has a tradition of more than 100 years, there is no consensus on its explicit content. In 1968, Anne Buttimer noted that "[w]ith some notable exceptions, (...) social geography can be considered a field created and cultivated by a number of individual scholars rather than an academic tradition built up within particular schools". Since then, despite some calls for convergence centred on the structure and agency debate, its methodological, theoretical and topical diversity has spread even more, leading to numerous definitions of social geography and, therefore, contemporary scholars of the discipline identifying a great variety of different "social geographies". However, as Benno Werlen remarked, these different perceptions are nothing else than different answers to the same two (sets of) questions, which refer to the spatial constitution of society on the one hand, and to the spatial expression of social processes on the other.
The different conceptions of social geography have also been overlapping with other sub-fields of geography and, to a lesser extent, sociology. When the term emerged within the Anglo-American tradition during the 1960s, it was basically applied as a synonym for the search for patterns in the distribution of social groups, thus being closely connected to urban geography and urban sociology. In the 1970s, the focus of debate within American human geography lay on political economic processes (though there also was a considerable number of accounts for a phenomenological perspective on social geography), while in the 1990s, geographical thought was heavily influenced by the "cultural turn". Both times, as Neil Smith noted, these approaches "claimed authority over the 'social'". In the American tradition, the concept of cultural geography has a much more distinguished history than social geography, and encompasses research areas that would be conceptualized as "social" elsewhere. In contrast, within some continental European traditions, social geography was and still is considered an approach to human geography rather than a sub-discipline, or even as identical to human geography in general.
The term "social geography" (or rather "géographie sociale") originates from France, where it was used both by geographer Élisée Reclus and by sociologists of the Le Play School, perhaps independently from each other. In fact, the first proven occurrence of the term derives from a review of Reclus' "Nouvelle géographie universelle" from 1884, written by Paul de Rousiers, a member of the Le Play School. Reclus himself used the expression in several letters, the first one dating from 1895, and in his last work "L'Homme et la terre" from 1905. The first person to employ the term as part of a publication's title was Edmond Demolins, another member of the Le Play School, whose article "Géographie sociale de la France" was published in 1896 and 1897. After the death of Reclus as well as the main proponents of Le Play's ideas, and with Émile Durkheim turning away from his early concept of social morphology, Paul Vidal de la Blache, who noted that geography "is a science of places and not a science of men", remained the most influential figure of French geography. One of his students, Camille Vallaux, wrote the two-volume book "Géographie sociale", published in 1908 and 1911. Jean Brunhes, one of Vidal's most influential disciples, included a level of (spatial) interactions among groups into his fourfold structure of human geography. Until the Second World War, no more theoretical framework for social geography was developed, though, leading to a concentration on rather descriptive rural and regional geography. However, Vidal's works were influential for the historical Annales School, who also shared the rural bias with the contemporary geographers, and Durkheim's concept of social morphology was later developed and set in connection with social geography by sociologists Marcel Mauss and Maurice Halbwachs.
The first person in the Anglo-American tradition to use the term "social geography" was George Wilson Hoke, whose paper "The Study of Social Geography" was published in 1907, yet there is no indication it had any academic impact. Le Play's work, however, was taken up in Britain by Patrick Geddes and Andrew John Herbertson. Percy M. Roxby, a former student of Herbertson, in 1930 identified social geography as one of human geography's four main branches. By contrast, the American academic geography of that time was dominated by the Berkeley School of Cultural Geography led by Carl O. Sauer, while the spatial distribution of social groups was already studied by the Chicago School of Sociology. Harlan H. Barrows, a geographer at the University of Chicago, nevertheless regarded social geography as one of the three major divisions of geography.
Another pre-war concept that combined elements of sociology and geography was the one established by Dutch sociologist Sebald Rudolf Steinmetz and his Amsterdam School of Sociography. However, it lacked a definitive subject, being a combination of geography and ethnography created as the more concrete counterpart to the rather theoretical sociology. In contrast, the Utrecht School of Social geography, which emerged in the early 1930s, sought to study the relationship between social groups and their living spaces.
In the German-language geography, this focus on the connection between social groups and the landscape was further developed by Hans Bobek and Wolfgang Hartke after the Second World War. For Bobek, groups of "Lebensformen" (patterns of life)—influenced by social factors—that formed the landscape, were at the center of his social geographical analysis. In a similar approach, Hartke considered the landscape a source for indices or traces of certain social groups' behaviour. The best-known example of this perspective was the concept of "Sozialbrache" (social-fallow), i.e. the abandoning of tillage as an indicator for occupational shifts away from agriculture.
Though the French "Géographie Sociale" had been a great influence especially on Hartke's ideas, no such distinct school of thought formed within the French human geography. Nonetheless, Albert Demangeon paved the way for a number of more systematic conceptualizations of the field with his (posthumously published) notion that social groups ought to be within the center of human geographical analysis. That task was carried out by Pierre George and Maximilien Sorre, among others. Then a Marxist, George's stance was dominated by a socio-economic rationale, but without the structuralist interpretations found in the works of some the French sociologists of the time. However, it was another French Marxist, the sociologist Henri Lefebvre, who introduced the concept of the (social) production of space. He had written on that and related topics since the 1930s, but fully expounded it in "La Production de L'Espace" as late as 1974. Sorre developed a schema of society related to the ecological idea of habitat, which was applied to an urban context by the sociologist Paul-Henry Chombart de Lauwe. For the Dutch geographer Christiaan van Paassen, the world consisted of socio-spatial entities of different scales formed by what he referred to as a "syn-ecological complex", an idea influenced by existentialism.
A more analytical ecological approach on human geography was the one developed by Edgar Kant in his native Estonia in the 1930s and later at Lund University, which he called "anthropo-ecology". His awareness of the temporal dimension of social life would lead to the formation of time geography through the works of Torsten Hägerstrand and Sven Godlund. | https://en.wikipedia.org/wiki?curid=29082 |
Segway
Segway is a two-wheeled, self-balancing personal transporter invented by Dean Kamen and brought to market in 2001 as the Segway HT, subsequently as the Segway PT, and manufactured by Segway Inc. "HT" is an initialism for "human transporter" and "PT" for "personal transporter".
Ninebot Inc., a rival Beijing-based transportation robotics startup, acquired Segway in April 2015, broadened the company to include other transportation devices, and announced in June 2020 that it would no longer make a two-wheeled, self-balancing product.
The Segway PT, referred to during development and initial marketing as the Segway HT, was developed from the self-balancing iBOT wheelchair, which was initially developed at the University of Plymouth in conjunction with BAE Systems and Sumitomo Precision Products. Segway's first patent was filed in 1994 and granted in 1997, followed by others, including one submitted in June 1999 and granted in October 2001.
Just prior to its introduction, a book about the invention, development, and financing of the Segway leaked information about the device, leading to speculation about it and its importance. John Doerr speculated that it would be more important than the Internet. "South Park" devoted an episode to making fun of the hype before the product was released. Steve Jobs was quoted as saying that it was "as big a deal as the PC", though he later retracted this, saying that it "sucked" (presumably referring to the design) and questioning the boutique price: "You're "sure" your market is upscale consumers for transportation?" The device was unveiled on 3 December 2001, following months of public speculation, in Bryant Park, New York City, on the ABC News morning program "Good Morning America", with the first units delivered to customers in early 2002.
The original Segway models featured three speed settings, one of which allowed faster turning. Steering of early versions was controlled using a twist grip that varied the speeds of the two motors. The p-Series used a nickel metal hydride (NiMH) battery with a recharge time of 4–6 hours. In September 2003, the Segway PT was recalled because, if users ignored repeated low-battery warnings, the PT could ultimately let them fall. With a software patch to version 12.0, the PT would automatically slow down and stop in response to detecting low battery power.
In August 2006, Segway discontinued all previous models and introduced the i2 and x2, which were steered by leaning the handlebars to the right or left, reached a maximum speed of 12.5 mph (20 km/h) from a pair of brushless DC electric motors with regenerative braking, and had a range of up to 24 mi (38 km), depending on terrain, riding style and the state of the batteries.
Recharging took 8–10 hours. The i2 and x2 also introduced the wireless InfoKey, which could show mileage and a trip odometer, and could put the Segway into security mode, locking the wheels and setting off an alarm if the unit was moved; the InfoKey could also be used to turn on the PT remotely.
Several versions of the product were released prior to 2011.
In March 2014, Segway announced third generation designs, including the i2 SE and x2 SE sport, new LeanSteer frame and powerbase designs, with integrated lighting.
Ninebot Inc., a Beijing-based transportation robotics startup and a Segway rival, acquired Segway in April 2015, having raised $80M from Xiaomi and Sequoia Capital.
In June 2016 the company launched the Segway miniPRO, a smaller self-balancing scooter.
In June 2020, Ninebot, the owner of the Segway brand, announced that it would no longer make the namesake two-wheeled, self-balancing product.
A relatively small number of units, about 140,000, were sold during the lifetime of the product, and the Segway PT made up only 1.5% of total company profit. Factors contributing to the end of production include the price ($5,000 US at launch) and the learning curve involved in balancing on a Segway, which led to notable accidents involving Usain Bolt, George W. Bush, and Segway company owner Jimi Heselden. While the Segway has remained popular for security and tourism, electric scooters have been more popular for personal mobility.
As of 2020, the following self-balancing scooters were available from Segway. (For other Segway products, see Segway Inc.)
The dynamics of the Segway PT are similar to a classic control problem, the inverted pendulum. It uses a brushless DC electric motor in each wheel, powered by lithium-ion batteries, with balance achieved using tilt sensors and gyroscopic sensors developed by BAE Systems' Advanced Technology Centre. The wheels are driven forward or backward as needed to return its pitch to upright.
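The balancing principle can be sketched in a few lines of simulation code. The following Python snippet is a simplified illustration only, not Segway's actual control software; the proportional-derivative gains, time step, and linearized single-axis pendulum model are all assumptions chosen for readability:

    # Minimal single-axis inverted-pendulum balance loop (illustrative only).
    g, l, dt = 9.81, 1.0, 0.01       # gravity, effective length, sensor period (assumed)
    Kp, Kd = 40.0, 8.0               # hand-tuned PD gains (assumed)

    theta, theta_dot = 0.1, 0.0      # initial pitch (rad) and pitch rate (rad/s)

    for _ in range(500):             # five simulated seconds
        # Controller: command wheel acceleration against the measured tilt,
        # standing in for the tilt sensors, gyroscopes, and motor drive.
        u = -(Kp * theta + Kd * theta_dot)

        # Plant: pendulum dynamics linearized about the upright position.
        theta_ddot = (g * theta + u) / l
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt

    print(f"final pitch: {theta:.6f} rad")  # approaches 0 when the loop is stable

Driving the wheels in the direction of the lean appears here as the commanded acceleration u opposing the growth of the pitch angle; with the gains above, the simulated pitch decays smoothly toward zero.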
In 2011, the Segway i2 was marketed to the emergency medical services community. Special police forces trained to protect the public during the 2008 Summer Olympics used the Segway for mobility.
In 2018, the Stockholm police adopted Segways as a permanent transportation method for patrols of the old town.
The Segway miniPro can also be used as the mobility section of a robot.
Disability Rights Advocates for Technology worked to supply Segway PTs to veterans who had trouble walking. Segway Inc. cannot market its devices in the US as medical devices; Kamen sold the intellectual property rights for medical purposes to Johnson & Johnson, makers of the iBOT, a self-balancing wheelchair.
The maximum speed of the Segway PT is 12.5 mph (20 km/h). The product is capable of covering up to 24 mi (38 km) on a fully charged lithium-ion battery, depending on terrain, riding style, and the condition of the batteries. The U.S. Consumer Product Safety Commission does not have Segway-specific recommendations but does say that bicycle helmets are adequate for "low-speed, motor-assisted" scooters. | https://en.wikipedia.org/wiki?curid=29083 |
Slayers
In the "Slayers" universe, the ultimate being is the Lord of Nightmares, the creator of at least four parallel worlds. An artifact known as the Claire Bible contains information about the Lord of Nightmares' task to regain its "true form", which is only attainable by destroying these worlds and returning them to the chaos (sea of darkness) that it itself is. For unexplained reasons, though, the Lord of Nightmares has not acted upon this desire by itself so far. On each of these worlds are gods ("shinzoku", lit. "godly race") and monsters ("mazoku", lit. "demon race"), fighting without end. Should the gods win the war in a world, that world will be at peace. Should the monsters win, the world will be destroyed and returned to the Sea of Chaos.
In the world where the "Slayers" takes place, Flare Dragon Ceiphied and the Ruby-Eye Shabranigdo are, respectively, the supreme god and monster. Long ago, their war ended more or less in a stalemate, when Ceiphied was able to split Shabranigdo's existence into seven pieces in order to prevent him from coming back to life, then seal them within human souls. As the souls are reincarnated, the individual fragments would wear down until Shabranigdo himself would be destroyed. However, Ceiphied was so exhausted by this that he himself sank into the Sea of Chaos, leaving behind four parts of himself in the world. A millennium before the events in "Slayers", one of Ruby-Eye's fragments (which was sealed in the body of Lei Magnus, a very powerful sorcerer) revived and began the against one of the parts of Ceiphied, the Water Dragon King, also known as Aqualord Ragradia. Ultimately, the piece of Shabranigdo won, but Aqualord, using the last remnants of her power, sealed him into a block of magical ice within the Kataart Mountains. Nevertheless, Shabranigdo's lieutenants remained at liberty, sealing a part of the world within a magical barrier, through which only mazoku could pass.
There are four types of magic within the "Slayers" universe: Black, White, Shamanistic, and Holy. Black magic spells, such as the famous Dragon Slave, call directly on the powers of the mazoku and are capable of causing enormous damage. White magic spells are of an obscure origin and are used for healing or protection. Shamanistic magic is focused on manipulation and alteration of the basic elements of the natural world (earth, wind, fire, water and spirit) and contains spells for both offense and convenience, such as Raywing, Fireball, or Elmekia Lance. Holy magic uses the power of the shinzoku, but the aforementioned barrier made its usage impossible for anyone inside before the death of the mazoku Hellmaster Phibrizzo. As a rule, mazoku can only be harmed by spiritual (astral) shamanistic magic, holy magic, or black magic which draws power from another mazoku with greater might than the target.
Above all other magic, however, are the immensely destructive spells drawing power from the Lord of Nightmares. The two spells of this class are the Ragna Blade, capable of cutting through any obstacle or being, and the Giga Slave, which can kill any opponent, but which could also destroy the world itself if the spell is miscast. Some have claimed that these terrible spells, drawing their power directly from the Lord of Nightmares, constitute a fifth form of magic: Chaos magic.
"Slayers" was originally serialized in "Dragon Magazine" in 1989 as a short story series written by Hajime Kanzaka, and with artwork by Rui Araizumi. The serialized chapters were then published as "Slayers" light novels across 15 volumes from January 25, 1990 to May 15, 2000. On September 7, 2004, Tokyopop began publishing the light novels in English, ending with the release of Volume 8 on January 2, 2008. On October 20, 2018, volume 16 was published by Fujimi Shobo under their Fujimi Fantasia Bunko imprint. On October 19, 2019, volume 17 was published.
Between July 26, 2008 and March 2009, a new manga series entitled "Slayers Light Magic" (スレイヤーズ ライト・マジック) was serialised in Kadokawa Shoten's "Kerokero Ace". The series was written by Yoshijirō Muramatsu and illustrated by Shin Sasaki, and is set in a technological world instead of a fantasy world.
In July 1998, Central Park Media announced they had licensed the manga for distribution in North America. On June 15, 1999, "Slayers: Medieval Mayhem" was released. The four-volume series "Slayers Special" was published between October 12, 2002, and June 25, 2003. A seven-volume series "Super-Explosive Demon Story" followed between July 9, 2002 and December 1, 2004. Finally, "Slayers Premium" was published in North America on July 5, 2005.
The self-titled first season of the anime adapts volumes 1 and 3 of the light novel.
The second season, "Slayers NEXT", adapts volumes 2, 4, 5, 7, and 8 of the light novel.
The third season, "Slayers TRY", is an original story.
A fourth season, "Slayers AGAIN", was rumored following the success of "TRY", but early scheduling conflicts caused interest in the project to dissipate.
A fourth anime series, "Slayers Revolution", premiered in Japan on July 2, 2008. Megumi Hayashibara, the voice actress for main character Lina Inverse, performed both the opening and ending theme songs. The new plot is told across two 13-episode arcs and follows an original storyline that has subplots based on events in the novels, with series director Takashi Watanabe and production studio J.C.Staff reprising their duties from the three original TV series. A fifth "Slayers" series titled "Slayers Evolution-R" is the second 13-episode arc of "Slayers Revolution" and was aired on AT-X starting on January 12, 2009 in Japan.
Central Park Media licensed and distributed the anime in North America under the Software Sculptors label on VHS and Laserdisc between 1996 and 1998, collected in eight volumes. It was a commercial success for Central Park, which led them to license "Slayers NEXT" and "Slayers TRY"; "NEXT" shipped beginning in April 1999 in a similar format. A box set of the first four volumes was released in July 1999, and a box set of the second four volumes in October. "Slayers TRY" was released later in 2000. The first three seasons were subsequently re-released on DVD (in season box sets). Months before Central Park's license for the anime properties expired, FUNimation Entertainment was able to obtain the license, and the series aired as part of the new owner's programming block on CoLours TV, as well as the FUNimation Channel. The first bilingual DVD box set after FUNimation's rescue of the license was released on August 27, 2007, retaining the Software Sculptors-produced English dub. A box set of "Slayers", "NEXT" and "TRY" was released by Funimation on August 4, 2009.
Fox Kids won the rights to broadcast "Slayers" but ultimately did not air the anime, since editing it for content would have been too heavy a task. The first North American television broadcast of "The Slayers" was on February 17, 2002 on the International Channel. In 2009, MVM Films began releasing the series in the United Kingdom on a monthly basis. The first series was released on four DVDs between January 5 and April 6, 2009. The first volume of "Slayers NEXT" was released on May 11, 2009. Episodes have also been made available on the streaming video sites Hulu, YouTube, Crackle, Anime News Network, Netflix, and Funimation's website.
FUNimation licensed both "Slayers Revolution" and "Slayers Evolution-R" for American release; the episodes in Japanese with English subtitles were uploaded to YouTube, as well as Funimation's website in July 2009. Funimation contracted NYAV Post to produce the English version of the series, with dialogue being recorded in both New York City and Los Angeles. NYAV Post was able to reunite most of the original Central Park Media main character cast for the new season. However, Michael Sinterniklaas replaced David Moo as Xellos. Other notable characters, such as Sylphiel, Prince Phil, and Naga the Serpent were also recast with new voice actors. In December 2009, Funimation announced that the first "Slayers Revolution" boxset would be released on March 16, 2010. Funimation released the first four English-dubbed episodes of "Slayers Revolution" to YouTube on January 19, 2010. They have also uploaded the first two English-dubbed episodes of "Evolution-R" to YouTube and released "Evolution-R" on DVD in June 2010. Funimation released both "Slayers Revolution" and "Evolution-R" on Blu-ray on September 21, 2010 Both seasons were later re-released together in a DVD/Blu-ray combo pack. Both "Revolution" and "Evolution-R" made their North American television debut when they began airing on the FUNimation Channel on September 6, 2010.
In North America, "Slayers Special" was initially sold as two separate titles, "Slayers: Dragon Slave" and "Slayers: Explosion Array" on VHS by licensee ADV Films. All three episodes were later compiled into "Slayers: The Book of Spells", shipped on November 21, 2000. ADV Films released all the OVAs to VHS and DVD in both North America and the UK.
Most of the films were produced by J.C.Staff and licensed for home video release in North America by ADV Films. "Slayers Return" was adapted into a manga version. "Slayers Premium", however, was animated by Hal Film Maker.
The series was adapted into an add-on for the Japanese role-playing game "MAGIUS" ("Slayers MAGIUS RPG"). In 2003, Guardians of Order published a licensed role-playing game, "The Slayers d20", using the d20 System, as well as three guidebooks including pages of game statistics in their ""Big Eyes, Small Mouth"" game system for the TV series' major characters, spells and weapons. A collectible card game, "Slayers Fight" (スレイヤーズふぁいと), was developed by ORG and published by Kadokawa Shoten between 1999 and 2001.
A series of five "Slayers" role-playing video games were released exclusively in Japan between 1994 and 1998 for different platforms. There are two different 16-bit games released in 1994 and titled simply "Slayers" (including one for the Super Famicom), followed by three 32-bit console games: 1997's "Slayers Royal", and 1998's "Slayers Royal 2" and "Slayers Wonderful". In addition, some "Slayers" characters are featured in 2012's "Heroes Phantasia" and in the doujin game "Magical Battle Arena".
The first volume of the "Slayers" series was published in January 1990. The series comprises 50 volumes in total (original series: 15 volumes; Special series: 30 volumes; Smash series: five volumes), and its total print run reached over 18 million copies as of July 2015. As of 2018, it had sold 20 million copies.
Of the various media which make up the "Slayers" franchise, the anime has by far reached the largest audience and is considered to be one of the most popular series of the 1990s. As it is a parody of the high fantasy genre, the series's driving force lies in comic scenarios alluding to other specific anime, or more general genre tropes and clichés. Due to the series' comedic nature, less development is given to plot and characters, which some consider predictable. Nevertheless, the series' focus on humor and entertainment and "old school" anime feel make it a nostalgic classic to many.
In "Anime Essentials: Every Thing a Fan Needs to Know", Gilles Poitras wrote: "More humorous and less serious looking than the characters in the "Lodoss War" series, the stars of "Slayers" provide action and laughs." In "", Helen McCarthy similarly called it "the antidote to the deadly serious "Record of Lodoss War", with a cynical cast modeled on argumentative role-players. (...) Ridiculing its own shortcomings, "Slayers" has successfully kept a strong following that watches for what some might call biting satire, and others bad workmen blaming their tools."
Joseph Luster of "Otaku USA" called it "the very definition of an all-encompassing media franchise. (...) "Slayers" certainly has that in its memorable lineup, and they'll likely cast some sort of spell on you, regardless of age." Paul Thomas Chapman from the same magazine opined it is a "franchise whose remarkable longevity and popularity is matched only by its remarkable averageness," especially regarding the various aspects of the TV series, but still appealing to him and making him return to it when he looks for a light entertainment. | https://en.wikipedia.org/wiki?curid=29084 |
Security through obscurity
Security through obscurity (or security by obscurity) is the reliance in security engineering on design or implementation secrecy as the main method of providing security to a system or component. Security experts have rejected this view as far back as 1851, and advise that obscurity should never be the only security mechanism.
An early opponent of security through obscurity was the locksmith Alfred Charles Hobbs, who in 1851 demonstrated to the public how state-of-the-art locks could be picked. In response to concerns that exposing security flaws in the design of locks could make them more vulnerable to criminals, he said: "Rogues are very keen in their profession, and know already much more than we can teach them".
There is scant formal literature on the issue of security through obscurity. Books on security engineering cite Kerckhoffs' doctrine from 1883, if they cite anything at all. For example, in a discussion about secrecy and openness in Nuclear Command and Control:
[T]he benefits of reducing the likelihood of an accidental war were considered to outweigh the possible benefits of secrecy. This is a modern reincarnation of Kerckhoffs' doctrine, first put forward in the nineteenth century, that the security of a system should depend on its key, not on its design remaining obscure.
In the field of legal academia, Peter Swire has written about the trade-off between the notion that "security through obscurity is an illusion" and the military notion that "loose lips sink ships" as well as how competition affects the incentives to disclose.
The principle of security through obscurity was more generally accepted in cryptographic work in the days when essentially all well-informed cryptographers were employed by national intelligence agencies, such as the National Security Agency. Now that cryptographers often work at universities, where researchers publish many or even all of their results, and publicly test others' designs, or in private industry, where results are more often controlled by patents and copyrights than by secrecy, the argument has lost some of its former popularity. An early example was PGP, whose source code is publicly available to anyone. The security technology in some of the best commercial browsers is also considered highly secure despite being open source.
There are conflicting stories about the origin of this term. Fans of MIT's Incompatible Timesharing System (ITS) say it was coined in opposition to Multics users down the hall, for whom security was far more an issue than on ITS. Within the ITS culture the term referred, self-mockingly, to the poor coverage of the documentation and obscurity of many commands, and to the attitude that by the time a tourist figured out how to make trouble he'd generally got over the urge to make it, because he felt part of the community. One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as $$^D. Typing Alt Alt Control-D set a flag that would prevent patching the system even if the user later got it right.
In January 2020, NPR reported that party officials in Iowa declined to share information regarding the security of its caucus app, to "make sure we are not relaying information that could be used against us." Cybersecurity experts replied that "to withhold the technical details of its app doesn't do much to protect the system."
Security by obscurity alone is discouraged and not recommended by standards bodies. The National Institute of Standards and Technology (NIST) in the United States sometimes recommends against this practice: "System security should not depend on the secrecy of the implementation or its components."
The technique stands in contrast with security by design and open security, although many real-world projects include elements of all strategies.
Knowledge of how the system is built differs from concealment and camouflage. The efficacy of obscurity in operations security depends on whether the obscurity lives on top of other good security practices or is being used alone. When used as an independent layer, obscurity is considered a valid security tool.
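A minimal sketch of obscurity layered on top of a real control is shown below in Python. It is a hypothetical example rather than any product's API: the randomized endpoint name is the obscurity layer, which merely raises an attacker's search cost, while the HMAC check remains the actual security mechanism:

    import hmac, hashlib, secrets

    KEY = secrets.token_bytes(32)                       # real control: shared secret key
    HIDDEN_PATH = "/api/" + secrets.token_urlsafe(16)   # obscurity: unadvertised endpoint

    def sign(body: bytes) -> str:
        return hmac.new(KEY, body, hashlib.sha256).hexdigest()

    def handle(path: str, body: bytes, tag: str) -> str:
        if path != HIDDEN_PATH:                         # obscurity filters blind scanners
            return "404 Not Found"
        if not hmac.compare_digest(tag, sign(body)):    # authentication is the real gate
            return "403 Forbidden"
        return "200 OK"

    msg = b"reboot=now"
    print(handle(HIDDEN_PATH, msg, sign(msg)))          # 200 OK: knows path and key
    print(handle("/api/admin", msg, sign(msg)))         # 404: obscure path not guessed
    print(handle(HIDDEN_PATH, msg, "bad-tag"))          # 403: HMAC still required

Removing the hidden path would weaken nothing essential; removing the HMAC would, which is the sense in which obscurity here is complementary rather than load-bearing.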
In recent years, security through obscurity has gained support as a methodology in cybersecurity through Moving Target Defense and cyber deception. NIST's cyber resiliency framework, 800-160 Volume 2, recommends the usage of security through obscurity as a complementary part of a resilient and secure computing environment. The research firm Forrester recommends the usage of environment concealment to protect messages against Advanced Persistent Threats. | https://en.wikipedia.org/wiki?curid=29087 |
Snuff film
A snuff film is a film in a purported genre that shows scenes of actual homicide. The promotion of such films depends on sensational claims which are generally impossible to prove, not least because sophisticated special effects can simulate murder.
A snuff film, or snuff movie, is "a movie in a purported genre of movies in which a person is actually murdered or commits suicide". It may or may not be made for financial gain, but is supposedly "circulated amongst a jaded few for the purpose of entertainment". Some filmed records of executions and murders exist, but in those cases the death was not specifically staged for financial gain or entertainment.
The first known use of the term "snuff movie" is in a 1971 book by Ed Sanders, "The Family: The Story of Charles Manson's Dune Buggy Attack Battalion". He alleges that the Manson Family was involved in making such a film in California to record their murders.
The noun "snuff" originally meant the part of a candle wick that has already burned; the verb "snuff" meant to cut this off, and by extension to extinguish or kill. The word has been used in this sense in English slang for hundreds of years. It was defined in 1874 as a "term very common among the lower orders of London, meaning to die from disease or accident".
Film studies professor Boaz Hagin argues that the concept of snuff films originated decades earlier than is commonly believed, at least as early as 1907. That year, Polish-French writer Guillaume Apollinaire published the short story "A Good Film" about newsreel photojournalists who stage and film a murder due to public fascination with crime news; in the story, the public believes the murder is real but police determine that the crime was faked. Hagin also proposes that the film "Network" (1976) contains an explicit (fictional) snuff film depiction when television news executives orchestrate the on-air murder of a news anchor to boost ratings.
According to film critic Geoffrey O'Brien, "whether or not commercially distributed 'snuff' movies actually exist, the possibility of such movies is implicit in the stock B-movie motif of the mad artist killing his models, as in "A Bucket of Blood" (1959), "Color Me Blood Red" (1965), or "Decoy for Terror" (1967) also known as "Playgirl Killer". The concept of "snuff films" being made for profit became more widely known with the commercial film "Snuff" (1976). This low-budget exploitation horror film, originally titled "Slaughter", was directed by Michael and Roberta Findlay. In an interview decades later, Roberta Findlay said the film's distributor Allan Shackleton had read about snuff films being imported from South America and retitled "Slaughter" to "Snuff", to exploit the idea; he also added a new ending that depicted an actress being murdered on a film set. The promotion of "Snuff" on its second release suggested it featured the murder of an actress: "The film that could only be made in South America... where life is CHEAP", but that was false advertising. Shackleton put out false newspaper clippings that reported a citizens group's crusading against the film and hired people to act as protesters to picket screenings.
The first two films in the Japanese "Guinea Pig" series are designed to look like snuff films; the video is grainy and unsteady, as if recorded by amateurs, and extensive practical and special effects are used to imitate such features as internal organs and graphic wounds. The sixth film in the series, "Mermaid in a Manhole", allegedly served as an inspiration for Japanese serial killer Tsutomu Miyazaki, who murdered several preschool girls in the late 1980s.
In 1991, actor Charlie Sheen became convinced that "Flower of Flesh and Blood" (1985), the second film in the series, depicted an actual homicide and contacted the FBI. The Bureau initiated an investigation but closed it after the series' producers released a "making of" film demonstrating the special effects used to simulate the murders.
The Italian director Ruggero Deodato was charged after rumors that the depictions of the killing of the main actors in his film "Cannibal Holocaust" (1980) were real. He was able to clear himself of the charges after the actors made an appearance in court.
Other than graphic gore, the film contains several scenes of sexual violence and the genuine deaths of six animals onscreen and one off screen, issues which keep "Cannibal Holocaust" controversial to this day. It has also been claimed that "Cannibal Holocaust" is banned in over 50 countries, although this has never been verified. In 2006, "Entertainment Weekly" magazine named "Cannibal Holocaust" the 20th most controversial film of all time.
One trilogy of films, purportedly portraying amateur footage made by a serial killer and his friends and depicting gore, sex, torture, and murders, has had some of its scenes distributed on the darknet as if the footage were real. | https://en.wikipedia.org/wiki?curid=29089 |
Software testing
Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use.
Software testing involves the execution of a software component or system to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test:
As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources. As a result, software testing typically (but not exclusively) attempts to execute a program or application with the intent of finding software bugs (errors or other defects). Testing is an iterative process: when one bug is fixed, it can illuminate other, deeper bugs, or can even create new ones.
Software testing can provide objective, independent information about the quality of software and risk of its failure to users or sponsors.
Software testing can be conducted as soon as executable software (even if partially complete) exists. The overall approach to software development often determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and then implemented in testable programs. In contrast, under an agile approach, requirements, programming, and testing are often done concurrently.
Although software testing can determine the correctness of software under the assumption of some specific hypotheses (see the hierarchy of testing difficulty below), testing cannot identify all the defects within the software. Instead, it furnishes a "criticism" or "comparison" that compares the state and behavior of the product against test oracles—principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.
A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions. The scope of software testing often includes the examination of code, the execution of that code in various environments and conditions, and the examination of what the code does: does it do what it is supposed to do and what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing assists in making this assessment.
Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, i.e., unrecognized requirements that result in errors of omission by the program designer. Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security.
Software faults occur through the following processes: a programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interaction with different software. A single defect may result in a wide range of failure symptoms.
A fundamental problem with software testing is that testing under "all" combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to "be" versus what it is supposed to "do")—usability, scalability, performance, compatibility, reliability—can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.
Software developers cannot test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want; a sketch of the idea appears below. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases.
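As an illustration, the following sketch greedily selects a small set of test configurations that covers every pair of parameter values (all-pairs testing). The parameter names and values are hypothetical, and a production tool would use a more sophisticated covering-array algorithm:

```python
# A minimal greedy all-pairs sketch; parameter names/values are illustrative.
from itertools import combinations, product

parameters = {
    "os": ["linux", "windows", "macos"],
    "browser": ["chrome", "firefox"],
    "locale": ["en", "de", "ja"],
}
names = list(parameters)

# Every pair of values, across every pair of parameters, that must co-occur.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in parameters[a]
    for vb in parameters[b]
}

candidates = [dict(zip(names, vals)) for vals in product(*parameters.values())]

def pairs_of(test):
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

tests = []
while uncovered:
    # Pick the configuration that covers the most still-uncovered pairs.
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

print(f"{len(tests)} pairwise tests instead of {len(candidates)} exhaustive ones")
```

For these three parameters the exhaustive product contains 18 configurations, while the greedy pass typically selects around nine that still exercise every pairwise interaction.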
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.
Outsourcing software testing because of costs is very common, with China, the Philippines and India being preferred destinations.
Software testing can be done by dedicated software testers; until the 1980s, the term "software tester" was used generally, but later it was also seen as a separate profession. Reflecting the different phases and goals of software testing, different roles have been established, such as "test manager", "test lead", "test analyst", "test designer", "tester", "automation developer", and "test administrator". Software testing can also be performed by non-dedicated software testers.
Glenford J. Myers initially introduced the separation of debugging from testing in 1979. Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error.") it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.
Static testing is often implicit, like proofreading; in addition, programming tools and text editors check source code structure, and compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; it is applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.
Static testing involves verification, whereas dynamic testing also involves validation.
Passive testing means verifying the system behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to reach some kind of decision. This is related to offline runtime verification and log analysis.
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984, defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing may also be applied to software testing methodology. With the concept of grey-box testing—which develops tests from specific design elements—gaining prominence, this "arbitrary distinction" between black- and white-box testing has faded somewhat.
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:
100% statement coverage ensures that every statement (though not necessarily every path or branch of the control flow) is executed at least once. This is helpful in ensuring correct functionality, but not sufficient, since the same code may process different inputs correctly or incorrectly; a short illustration follows. Pseudo-tested functions and methods are those that are covered but not specified (it is possible to remove their body without breaking any test case).
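A short, hypothetical illustration: the single test below achieves 100% statement coverage of `mean`, yet a failure-inducing input (the empty list) is never exercised:

```python
# Hypothetical example: full statement coverage, yet a defect remains.
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # every statement runs in the test below

def test_mean():
    assert mean([2, 4, 6]) == 4  # 100% statement coverage...
    # ...but mean([]) still raises ZeroDivisionError, and no test catches it.
```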
Black-box testing (also known as functional testing) treats the software as a "black box," examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.
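As a sketch of two of these methods, equivalence partitioning and boundary value analysis, consider a hypothetical validity rule accepting ages from 0 to 120; the tests take one representative per partition plus the values at and just beside each boundary:

```python
# Hypothetical rule under test: ages 0..120 are valid.
def is_valid_age(age: int) -> bool:
    return 0 <= age <= 120

# One representative per equivalence class, plus boundary values.
cases = {
    -1: False,    # invalid partition, just below the lower boundary
    0: True,      # lower boundary
    30: True,     # interior of the valid partition
    120: True,    # upper boundary
    121: False,   # invalid partition, just above the upper boundary
}

def test_is_valid_age():
    for age, expected in cases.items():
        assert is_valid_age(age) is expected
```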
Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case.
Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.
One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight." Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case or leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.
Component interface testing
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component. The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered as "message packets" and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit.
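A minimal sketch of this idea, with hypothetical units and ranges: each value crossing the interface is timestamped into a separate log and range-checked before being handed to the next unit:

```python
# Hypothetical interface check between two units; names are illustrative.
import time

interface_log = []   # separate log of data items passed between units

def pass_between_units(value, low=0.0, high=100.0):
    interface_log.append((time.time(), value))   # timestamped for later analysis
    if not (low <= value <= high):
        raise ValueError(f"out-of-range value at interface: {value!r}")
    return value

def producing_unit():
    return 42.0            # stand-in for data generated by one unit

def consuming_unit(value):
    return value * 2       # stand-in for the unit receiving the data

result = consuming_unit(pass_between_units(producing_unit()))
```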
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information she or he requires, and the information is expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence she or he requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of the tester(s) to base testing on documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes. However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary." Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages. Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.
Broadly speaking, there are at least three levels of testing: unit testing, integration testing, and system testing. However, a fourth level, acceptance testing, may be included by developers. This may be in the form of operational acceptance testing or be simple end-user (beta) testing, testing to ensure the software meets functional expectations. Based on the ISTQB Certified Test Foundation Level syllabus, test levels include these four levels, and the fourth level is named acceptance testing. Tests are frequently grouped into one of these levels by where they are added in the software development process, or by the level of specificity of the test.
Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently from each other.
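A minimal sketch of such a developer-written unit test, using Python's standard `unittest` module; the function under test is a hypothetical example chosen for its corner cases:

```python
import unittest

def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_corner_cases(self):
        self.assertTrue(leap_year(2024))    # ordinary leap year
        self.assertFalse(leap_year(2023))   # ordinary non-leap year
        self.assertFalse(leap_year(1900))   # century corner case
        self.assertTrue(leap_year(2000))    # 400-year corner case

if __name__ == "__main__":
    unittest.main()
```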
Unit testing is a software development process that involves a synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development life cycle. Unit testing aims to eliminate construction errors before code is promoted to additional testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, unit testing might include static code analysis, data-flow analysis, metrics analysis, peer code reviews, code coverage analysis and other software testing practices.
Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed.
Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
Integration tests usually involve a lot of code, and produce traces that are larger than those produced by unit tests. This has an impact on the ease of localizing the fault when an integration test fails. To overcome this issue, it has been proposed to automatically cut the large tests in smaller pieces to improve fault localization.
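As a hypothetical sketch, the integration test below exercises two units together through their real interface, with no stubs, so a mismatch in the data they exchange would surface here even if each unit's own tests pass:

```python
class Parser:
    def parse(self, line: str) -> dict:
        key, _, value = line.partition("=")
        return {key.strip(): value.strip()}

class Config:
    def __init__(self, parser: Parser):
        self.parser, self.entries = parser, {}

    def load(self, lines):
        for line in lines:
            self.entries.update(self.parser.parse(line))

def test_parser_config_integration():
    config = Config(Parser())   # integrate the real Parser, not a stub
    config.load(["host = example.org", "port = 8080"])
    assert config.entries == {"host": "example.org", "port": "8080"}
```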
System testing tests a completely integrated system to verify that the system meets its requirements. For example, a system test might involve testing a login interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.
Commonly this level of acceptance testing includes the following four types:
User acceptance testing, alpha testing, and beta testing are described in the Testing types section below.
Operational acceptance testing is used to assess the operational readiness (pre-release) of a product, service, or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the "non-functional" aspects of the system.
In addition, software testing should ensure that the system is portable and works as expected, and that it does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.
Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the regulations relevant to the software product. Both of these tests can be performed by users or independent testers. Regulatory acceptance testing sometimes involves the regulatory agencies auditing the test results.
Different labels and ways of grouping testing may be testing types, software testing tactics or techniques.
Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known as installation testing.
A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.
Sanity testing determines whether it is reasonable to proceed with further testing.
Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test.
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with previously existing code. Regression testing is typically the largest test effort in commercial software development, due to checking numerous details in prior software features; even new software can be developed while using some old test cases to test parts of the new design, ensuring prior functionality is still supported.
Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged; a minimal sketch appears below. The depth of testing depends on the phase in the release process and the risk of the added features. Testing can either be complete, for changes added late in the release or deemed to be risky, or very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. In regression testing, it is important to have strong assertions on the existing behavior. For this, it is possible to generate and add new assertions to existing test cases; this is known as automatic test amplification.
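A minimal sketch, with a hypothetical function and bug report: one test pins the current behavior, and a second test is kept permanently for a previously fixed fault so that re-running the suite detects its return:

```python
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def test_basic_behavior():
    assert slugify("Hello World") == "hello-world"

def test_regression_issue_42():
    # Hypothetical fixed bug: repeated spaces once produced "a--b".
    assert slugify("a  b") == "a-b"
```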
Acceptance testing can mean one of two things:
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
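A minimal fuzzing sketch, with a hypothetical parser under test: randomly generated inputs are fed in, and the only expectation is that the code fails gracefully (here, with `ValueError`) rather than crashing or hanging:

```python
import random
import string

def parse_version(text: str):
    major, minor = text.split(".", 1)   # naive parser under test
    return int(major), int(minor)

for _ in range(10_000):
    garbage = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_version(garbage)
    except ValueError:
        pass   # rejecting malformed input with ValueError is acceptable
    # Any other exception here would indicate missing input validation.
```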
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
"Load testing" is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity of when performed as a non-functional activity is often referred to as "endurance testing". "Volume testing" is a way to test software functions even when certain components (for example a file or database) increase radically in size. "Stress testing" is a way to test reliability under unexpected or rare workloads. "Stability testing" (often referred to as load or endurance testing) checks to see if the software can continuously function well in or above an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.
Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.
Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled UI designers.
Accessibility testing may include compliance with standards such as:
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.
The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."
Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and make it easier to identify when the localization process may introduce new bugs into the product.
Globalization testing verifies that the software is adapted for a new culture (such as different currencies or time zones).
Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
Development Testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development Testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, Development Testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices.
A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment) and data is collected to determine which version is better at achieving the desired outcome.
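A minimal sketch of the routing and comparison, with fabricated numbers: users are assigned to control or treatment by a stable hash of their ID, and the collected conversion rates are compared (a real experiment would also apply a statistical significance test, such as a two-proportion z-test, before concluding the treatment is better):

```python
import hashlib

def variant(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode()).digest()
    return "treatment" if digest[0] % 2 else "control"   # stable 50/50 split

# Fabricated outcome counts per variant: (users, conversions).
results = {"control": (1000, 100), "treatment": (1000, 130)}
for name, (users, conversions) in results.items():
    print(f"{name}: {conversions / users:.1%} conversion rate")
```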
Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling.
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
Comparing a displayed output with an expected output, whether as a data comparison of text or as screenshots of the UI, is sometimes called snapshot testing or golden master testing. Unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies.
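A hypothetical sketch: output is compared byte-for-byte against a stored golden master file, and on mismatch a human decides whether the change is a regression or an intentional update that warrants re-recording the snapshot:

```python
from pathlib import Path

SNAPSHOT = Path("golden/report.txt")   # illustrative snapshot location

def render_report() -> str:
    return "Total: 3 items\nStatus: OK\n"   # stand-in for the real renderer

def test_report_matches_snapshot():
    actual = render_report()
    if not SNAPSHOT.exists():              # first run records the master
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(actual)
    assert actual == SNAPSHOT.read_text(), "output differs from golden master"
```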
A common practice in waterfall development is that testing is performed by an independent group of testers. This can happen:
However, even in the waterfall development model, unit testing is often done by the software development team even when further testing is done by a separate team.
In contrast, some emerging software disciplines such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). The tests are expected to fail initially. Each failing test is followed by writing just enough code to make it pass. This means the test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).
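A minimal red-green sketch of the cycle, with a deliberately simple hypothetical example: the test class is written first and fails, and the function above it represents the "just enough code" added afterward to make it pass:

```python
import unittest

def fizzbuzz(n: int) -> str:   # written second: just enough to pass the test
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):   # written first; initially failing
    def test_examples(self):
        self.assertEqual(fizzbuzz(3), "fizz")
        self.assertEqual(fizzbuzz(5), "buzz")
        self.assertEqual(fizzbuzz(15), "fizzbuzz")
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()
```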
The ultimate goals of this test process are to support continuous integration and to reduce defect rates.
This methodology increases the testing effort done by development, before reaching any formal testing team. In some other development models, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
Although variations exist between organizations, there is a typical cycle for testing. The sample below is common among organizations employing the Waterfall development model. The same practices are commonly found in other development models, but might not be as clear or explicit.
Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.
While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.
Program testing and fault detection can be aided significantly by testing tools and debuggers.
Testing/debug tools include features such as:
Some of these features may be incorporated into a single composite tool or an Integrated Development Environment (IDE).
Quality measures include such topics as correctness, completeness, security and ISO/IEC 9126 requirements such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.
Based on the amount of test cases required to construct a complete test suite in each context (i.e. a test suite such that, if it is applied to the implementation under test, then we collect enough information to precisely determine whether the system is correct or incorrect according to some specification), a hierarchy of testing difficulty has been proposed.
It includes the following testability classes:
It has been proved that each class is strictly included in the next. For instance, testing when we assume that the behavior of the implementation under test can be denoted by a deterministic finite-state machine for some known finite sets of inputs and outputs and with some known number of states belongs to Class I (and all subsequent classes). However, if the number of states is not known, then it only belongs to all classes from Class II on. If the implementation under test must be a deterministic finite-state machine failing the specification for a single trace (and its continuations), and its number of states is unknown, then it only belongs to classes from Class III on. Testing temporal machines where transitions are triggered if inputs are produced within some real-bounded interval only belongs to classes from Class IV on, whereas testing many non-deterministic systems only belongs to Class V (but not all, and some even belong to Class I). The inclusion into Class I does not require the simplicity of the assumed computation model, as some testing cases involving implementations written in any programming language, and testing implementations defined as machines depending on continuous magnitudes, have been proved to be in Class I. Other elaborated cases, such as the testing framework by Matthew Hennessy under must semantics, and temporal machines with rational timeouts, belong to Class II.
A software testing process can produce several artifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organisational needs.
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. Note that a few practitioners argue that the testing field is not ready for certification, as mentioned in the Controversy section.
Some of the major software testing controversies include:
It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was found. For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:
The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points.
Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.
Software testing is used in association with verification and validation:
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology:
And, according to the ISO 9000 standard:
The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings.
In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification, are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below).
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification.
So, when these words are defined in common terms, the apparent contradiction disappears.
Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them try it.
Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document.
Software testing may be considered a part of a software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers. | https://en.wikipedia.org/wiki?curid=29090 |
Ship-Submarine Recycling Program
The Ship/Submarine Recycling Program (SRP) is the process that the United States Navy uses to dispose of decommissioned nuclear vessels. SRP takes place only at the Puget Sound Naval Shipyard (PSNS) in Bremerton, Washington, but the preparations can begin elsewhere.
Before SRP can begin, the vessel's nuclear fuel must be removed, and defueling usually coincides with decommissioning. Until the fuel is removed, the vessel is referred to as "USS Name", but afterward the "USS" is dropped and it is referred to as "ex-Name". Reusable equipment is removed at the same time as the fuel.
Spent nuclear fuel is shipped by rail to the Naval Reactor Facility in the Idaho National Laboratory (INL), located northwest of Idaho Falls, Idaho, where it is stored in special canisters.
At PSNS the SRP proper begins. The salvage workers cut the submarine into three or four pieces: the aft section, the reactor compartment, the missile compartment if one exists, and the forward section. Missile compartments are dismantled according to the provisions of the Strategic Arms Reduction Treaty.
Until 1991, the forward and aft sections of the submarines were rejoined and placed in floating storage. Various proposals for disposal of those hulls were considered, including sinking them at sea, but none was economically practical. Some submarines built prior to the 1978 ban on polychlorinated biphenyl products (PCBs) had the chemicals on board, which are considered hazardous materials by the Environmental Protection Agency and United States Coast Guard, requiring their removal. Since then, and to help reduce costs, the remaining submarine sections are recycled, returning reusable materials to production. In the process of submarine recycling, all hazardous and toxic wastes are identified and removed, and reusable equipment is removed and put into inventory. Scrap metals and all other materials are sold to private companies or reused. The overall process is not profitable, but does provide some cost relief. Disposal of submarines by the SRP costs the Navy US$25–50 million per submarine.
Once the reactor compartment is removed, it is sealed at both ends and shipped by barge and multiple-wheel high-capacity trailers to the Department of Energy's Hanford Nuclear Reservation in Washington state, where they are currently kept in open dry storage and slated to be eventually buried. Russian submarine reactor compartments are stored in similar fashion near Murmansk. The burial trenches have been evaluated to be secure for at least 600 years before the first pinhole penetration of some lead containment areas of the reactor compartment packages occurs, and several thousand years before leakage becomes possible.
In 1959, the US Navy removed a nuclear reactor from a submarine and replaced it with a new type. The removed reactor was scuttled in the Atlantic Ocean, east of Delaware.
In 1972, the London Dumping Convention restricted ocean disposal of radioactive waste and in 1993, ocean disposal of radioactive waste was completely banned. The US Navy began a study on scrapping nuclear submarines; two years later shallow land burial of reactor compartments was selected as the most suitable option.
The first US nuclear-powered submarine was scrapped in 1990.
By the end of 2005, 195 nuclear submarines had been ordered or built in the US (including the NR-1 Deep Submergence Craft, but none of the later classes). The last of the regular attack boats was decommissioned in 2001, and a highly modified "Sturgeon" was decommissioned in 2004. The last of the initial "41 for Freedom" fleet ballistic missile (FBM) submarines was decommissioned in 2002. Decommissioning of the boats began in 1995. Additionally, a handful of nuclear-powered cruisers have entered the program, and their dismantling is ongoing. The first aircraft carrier planned to enter the SRP was withdrawn in 2013. Hulls waiting for or already processed by the recycling program are listed below.
† A dagger after a completion date indicates that portions of the hull were preserved as memorials. See the individual articles for details.
(note) ex-"Long Beach" has been partially dismantled and remains moored in Puget Sound Naval Shipyard in 2018.
Some of these submarines (the "George Washington" class) were fleet ballistic missile boats for the vast majority of their careers. However, they were briefly converted to SSNs before decommissioning and arrival at PSNS, and so are listed under that designation here.
‡ Date given for ex-"Parche" is official date used to secure FY2004 funding; work did not begin until 19 October.
"La Jolla" (SSN-701) is currently undergoing conversion to a moored training ship at Norfolk Naval Shipyard. San Francisco (SSN-711) will be converted after decommissioning.
Some of these submarines (the "Lafayette" class) were fleet ballistic missile boats for the vast majority of their careers. However, they were converted to SSNs for use as moored training platforms and are not currently scheduled for recycling.
Because the program is underway, this list is almost certainly incomplete.
Note for ships marked with refit:
"Sam Rayburn" (SSBN-635) was converted into a training platform – Moored Training Ship (MTS-635). "Sam Rayburn" arrived for conversion on 1 February 1986, and on 29 July 1989 the first moored training ship achieved initial criticality. Modifications included special mooring arrangements including a mechanism to absorb power generated by the main propulsion shaft. "Daniel Webster" (SSBN-626) was converted to the second Moored Training Ship (MTS-2 / MTS-626) in 1993. The Moored Training Ship Site is located at Naval Weapons Station Charleston in Goose Creek, South Carolina. "Sam Rayburn" is scheduled to operate as an MTS until 2014 while undergoing shipyard availabilities at four-year intervals. | https://en.wikipedia.org/wiki?curid=29091 |
Shaolin Monastery
The Shaolin Monastery (), also known as the Shaolin Temple, is a Chan ("Zen") Buddhist temple in Dengfeng County, Henan Province, China. Believed to have been founded in the fifth century, the Shaolin Temple is the main temple of the Shaolin school of Buddhism to this day.
Located west of the city of Zhengzhou, the Shaolin Monastery and its Pagoda Forest were inscribed as a UNESCO World Heritage Site in 2010 as part of the "Historic Monuments of Dengfeng".
The name refers to the woods of Shaoshi () mountain, one of the seven peaks of the Song mountains. The first Shaolin Monastery abbot was Batuo (also called "Fotuo" or "Buddhabhadra"), a dhyāna master who came to ancient China from ancient India or from Central Asia in 464 AD to spread Buddhist teachings.
According to the "Continued Biographies of Eminent Monks" (645 AD) by Daoxuan, Shaolin Monastery was built on the north side of Shaoshi, the central peak of Mount Song, one of the Sacred Mountains of China, by Emperor Xiaowen of the Northern Wei dynasty in 477 AD, to accommodate the Indian master beside the capital Luoyang city. Yang Xuanzhi, in the "Record of the Buddhist Monasteries of Luoyang" (547 AD), and Li Xian, in the "Ming Yitongzhi" (1461), concur with Daoxuan's location and attribution. The "Jiaqing Chongxiu Yitongzhi" (1843) specifies that this monastery, located in the province of Henan, was built in the 20th year of the "Taihe" era of the Northern Wei dynasty, that is, the monastery was built in 495 AD.
As the center of Chan Buddhism, the Shaolin Temple attracted many emperors' attention in China's history. During the Tang dynasty (618–907 AD), Empress Wu Zetian (AD 625–705) paid several visits to the Shaolin Temple to discuss Chan philosophy with the senior monk Tan Zong. According to legend, Emperor Taizong granted the Shaolin Temple extra land and a special "imperial dispensation" to consume meat and alcohol during the Tang dynasty. If true, this would have made Shaolin the only temple in China that did not prohibit alcohol. Regardless of historical veracity, these rituals are not practiced today. This legend is not corroborated in any period documents, such as the Shaolin Stele erected in 728 AD. The stele does not list any such imperial dispensation as a reward for the monks' assistance during the campaign against Wang Shichong; only land and a water mill are granted. The founder of the Yuan dynasty, Kublai Khan (AD 1215–1294), ordered all Buddhist temples in China to be led by the Shaolin Temple; eight princes during the Ming dynasty converted to Shaolin.
Traditionally Bodhidharma is credited as founder of the martial arts at the Shaolin Temple. However, martial arts historians have shown this legend stems from a 17th-century qigong manual known as the "Yijin Jing".
The authenticity of the "Yi Jin Jing" has been discredited by some historians including Tang Hao, Xu Zhen and Ryuchi Matsuda. This argument is summarized by modern historian Lin Boyuan in his "Zhongguo wushu shi":
The oldest available copy was published in 1827. The composition of the text itself has been dated to 1624. Even then, the association of Bodhidharma with martial arts only became widespread as a result of the 1904–1907 serialization of the novel "The Travels of Lao Ts'an" in "Illustrated Fiction Magazine":
Other scholars see an earlier connection between Da Mo and the Shaolin Monastery. Scholars generally accept the historicity of Da Mo (Bodhidharma) who arrived in China from his country India around 480. Da Mo (Bodhidharma) and his disciples are said to have lived at a spot about a mile from the Shaolin Temple that is now a small nunnery.
In the 6th century, around 547 AD, the "Record of the Buddhist Monasteries" says Da Mo visited the area near Mount Song. In 645 AD, the "Continuation of the Biographies of Eminent Monks" describes him as being active in the Mount Song region. Around 710 AD, Da Mo is identified specifically with the Shaolin Temple (in the "Precious Record of Dharma's Transmission", or "Chuanfa Baoji"), which writes of his sitting facing a wall in meditation for many years. It also speaks of Huike's many trials in his efforts to receive instruction from Da Mo. In the 11th century (1004 AD), a work embellishes the Da Mo legends with great detail. A stele inscription at the Shaolin Monastery dated 728 AD reveals Da Mo residing on Mount Song. Another stele, from 798 AD, speaks of Huike seeking instruction from Da Mo. Another engraving, dated 1209, depicts the barefoot saint holding a shoe according to the ancient legend of Da Mo. A plethora of 13th- and 14th-century steles feature Da Mo in various roles. One 13th-century image shows him riding a fragile stalk across the Yangtze River. In 1125, a special temple was constructed in his honor at the Shaolin Monastery.
The monastery has been destroyed and rebuilt many times. During the Red Turban Rebellion in the 14th century, bandits ransacked the monastery for its real or supposed valuables, destroying much of the temple and driving the monks away. The monastery was likely abandoned from 1351 or 1356 (the most likely dates for the attack) to at least 1359, when government troops retook Henan. The events of this period would later figure heavily in 16th-century legends of the temple's patron saint Vajrapani, with the story being changed to claim a victory for the monks, rather than a defeat.
In 1641, rebel forces led by Li Zicheng sacked the monastery due to the monks' support of the Ming dynasty and the possible threat they posed to the rebels. This effectively destroyed the temple's fighting force. The temple fell into ruin and was home to only a few monks until the early 18th century, when the government of the Qing dynasty patronized and restored the temple.
Perhaps the best-known story of the Temple's destruction is that it was destroyed by the Qing government for supposed anti-Qing activities. Variously said to have taken place in 1647 under the Shunzhi Emperor, in 1674, 1677, or 1714 under the Kangxi Emperor, or in 1728 or 1732 under the Yongzheng Emperor, this destruction is also supposed to have helped spread Shaolin martial arts throughout China by means of the five fugitive monks. Some accounts claim that a supposed southern Shaolin Temple was destroyed instead of, or in addition to, the temple in Henan: Ju Ke, in the "Qing bai lei chao" (1917), locates this temple in Fujian province. These stories commonly appear in legendary or popular accounts of martial history, and in "wuxia" fiction.
While these latter accounts are popular among martial artists, and often serve as origin stories for various martial arts styles, they are viewed by scholars as fictional. The accounts are known through often inconsistent 19th-century secret society histories and popular literature, and also appear to draw on both Fujianese folklore and popular narratives such as the classical novel "Water Margin". Modern scholarly attention to the tales is mainly concerned with their role as folklore.
There is evidence of Shaolin martial arts being exported to Japan beginning in the 18th century. The Okinawan Shōrin-ryū style of karate, for example, has a name meaning "Shaolin School", and the Japanese Shorinji Kempo translates as "Shaolin Temple Fist Method". Other similarities can be seen in centuries-old Chinese and Japanese martial arts manuals.
In 1928, the warlord Shi Yousan set fire to the monastery, burning it for over 40 days, destroying a significant portion of the buildings, including many manuscripts of the temple library.
The Cultural Revolution, launched in 1966, targeted religious orders, including the monastery. The monks present when the Red Guards attacked were shackled and made to wear placards declaring the crimes charged against them. After being publicly flogged, the monks were imprisoned and paraded through the streets as people threw rubbish at them. The film crew for the Jet Li movie "Martial Arts of Shaolin" was shocked to find, when filming at the monastery complex in 1986, that the remaining monks had for a time left the compound.
Martial arts groups from around the world have made donations for the upkeep of the temple and grounds, and are subsequently honored with carved stones near the entrance of the temple. In the past, many tried to capitalise on the Shaolin Monastery's fame by building their own schools on Mount Song; however, the Chinese government eventually outlawed this, and the schools were moved to nearby towns.
A dharma gathering was held on 19–20 August 1999 in the Shaolin Monastery for Shi Yongxin's assumption of office as abbot. Over the next two decades the monastery grew into a global business empire. In March 2006, Russian President Vladimir Putin became the first foreign leader to visit the monastery. In 2007, the Chinese government partially lifted the 300-year ban on the Jieba, the ancient ceremony of the nine marks burned onto the head with sticks of incense; the ban was lifted only for those who were mentally and physically prepared to participate in the tradition.
Two modern bathrooms, reportedly costing three million yuan to build, were recently added to the temple for use by monks and tourists. Films featuring the temple have also been released, such as "Shaolin Temple" and, more recently, "Shaolin", starring Andy Lau.
In 1994, the temple registered its name as a trademark. In the late 2000s, Shi Yongxin began authorizing Shaolin branches outside mainland China in what has been called a franchise scheme. The branches are run by current and former monks and allow the dispersion of Shaolin culture and the study of Shaolin kung fu around the globe. As of January 2011, Yongxin and the temple operated over 40 companies in cities across the world, including London and Berlin, which have purchased land and property.
In 2018, for the first time in its 1500-year history, the Shaolin Monastery raised the national flag as a part of a "patriotism drive" under the new National Religious Affairs Administration, a part of the United Front Work Department which "oversees propaganda efforts as well as relations with the global Chinese diaspora". Senior theology lecturer Sze Chi Chan of Hong Kong Baptist University analyzes this move as General Secretary Xi Jinping making an example of the Shaolin Monastery to send a message to other temples and the Chinese Catholic Church.
The Shaolin Monastery was historically led by an abbot, but the communist era restrictions on religious expression and independence have since changed this ancient system. The monastery is currently led by a committee composed primarily of government officials. The treasurer is appointed by the government, and as such the abbot has little control over finances. Profits are split with Dengfeng; the municipality takes two thirds of the profits and the monastery retains one third.
The temple has seven main halls on its central axis and seven other halls around them, with several yards around the halls.
A number of traditions make reference to a Southern Shaolin Monastery located in Fujian province. There has also been a Northern Shaolin monastery in northern China. Associated with stories of the supposed burning of Shaolin by the Qing government and with the tales of the Five Elders, this temple, sometimes known by the name Changlin, is often claimed to have been either the target of Qing forces or a place of refuge for monks displaced by attacks on the Shaolin Monastery in Henan. Besides the debate over the historicity of the Qing-era destruction, it is currently unknown whether there was a true southern temple, with several locations in Fujian given as the location for the monastery. Fujian does have a historic monastery called Changlin, and a monastery referred to as a "Shaolin cloister" has existed in Fuqing, Fujian, since the Song dynasty, but whether these have an actual connection to the Henan monastery or a martial tradition is still unknown. The Southern Temple has been a popular subject of "wuxia" fiction, first appearing in the 1893 novel "Shengchao Ding Sheng Wannian Qing", where it is attacked by the Qianlong Emperor with the help of the White Eyebrow Taoist.
Seymour Cray
Seymour Roger Cray (September 28, 1925 – October 5, 1996) was an American electrical engineer and supercomputer architect who designed a series of computers that were the fastest in the world for decades, and founded Cray Research which built many of these machines. Called "the father of supercomputing", Cray has been credited with creating the supercomputer industry. Joel S. Birnbaum, then chief technology officer of Hewlett-Packard, said of him: "It seems impossible to exaggerate the effect he had on the industry; many of the things that high performance computers now do routinely were at the farthest edge of credibility when Seymour envisioned them." Larry Smarr, then director of the National Center for Supercomputing Applications at the University of Illinois said that Cray is "the Thomas Edison of the supercomputing industry."
Cray was born in 1925 in Chippewa Falls, Wisconsin, to Seymour R. and Lillian Cray. His father was a civil engineer who fostered Cray's interest in science and engineering. As early as the age of ten he was able to build a device out of Erector Set components that converted punched paper tape into Morse code signals. The basement of the family home was given over to the young Cray as a "laboratory".
Cray graduated from Chippewa Falls High School in 1943 before being drafted for World War II as a radio operator. He saw action in Europe, and then moved to the Pacific theatre, where he worked on breaking Japanese naval codes. On his return to the United States he earned a B.Sc. in electrical engineering at the University of Minnesota, graduating in 1949, followed by an M.Sc. in applied mathematics in 1951.
In 1951, Cray joined Engineering Research Associates (ERA) in Saint Paul, Minnesota. ERA had formed out of a former United States Navy laboratory that had built codebreaking machines, a tradition ERA carried on when such work was available. ERA was introduced to computer technology during one such effort, but in other times had worked on a wide variety of basic engineering as well.
Cray quickly came to be regarded as an expert on digital computer technology, especially following his design work on the ERA 1103, the first commercially successful scientific computer. He remained at ERA when it was bought by Remington Rand and then Sperry Corporation in the early 1950s. At the newly formed Sperry Rand, ERA became the "scientific computing" arm of their UNIVAC division.
Cray, along with William Norris, later became dissatisfied with how the former ERA was run within Sperry Rand. In 1957, they founded a new company, Control Data Corporation.
By 1960 he had completed the design of the CDC 1604, an improved low-cost successor to the ERA 1103 that had impressive performance for its price range. Even as the CDC 1604 was starting to ship to customers in 1960, Cray had already moved on to designing other computers. He first worked on the design of an upgraded version (the CDC 3000 series), but company management wanted these machines targeted toward "business and commercial" data processing for average customers. Cray did not enjoy working on such "mundane" machines, constrained to low-cost construction so that CDC could sell them in volume. His desire was to "produce the largest [fastest] computer in the world". So after some basic design work on the CDC 3000 series, he turned that over to others and went on to work on the CDC 6600. Nonetheless, several special features of the 6600 first appeared in the 3000 series.
Although in terms of hardware the 6600 was not on the leading edge, Cray invested considerable effort into the design of the machine in an attempt to enable it to run as fast as possible. Unlike most high-end projects, Cray realized that there was considerably more to performance than simple processor speed, that I/O bandwidth had to be maximized as well in order to avoid "starving" the processor of data to crunch. He later noted, "Anyone can build a fast CPU. The trick is to build a fast system."
The 6600 was the first commercial supercomputer, outperforming everything then available by a wide margin. While expensive, for those who needed the absolute fastest computer available there was nothing else on the market that could compete. When other companies (namely IBM) attempted to create machines with similar performance, they stumbled (the IBM 7030 Stretch). Indeed, the 6600 solved a critical design problem, "imprecise interrupts", that was largely responsible for IBM's failure. Cray did this by replacing I/O interrupts with polled requests issued by one of ten so-called peripheral processors, built-in mini-computers that handled all transfers in and out of the 6600's central memory. He then raised the bar further with the later-released CDC 7600, which was five times faster.
In 1963, in a "Business Week" article announcing the CDC 6600, Seymour Cray clearly expressed an idea that is often misattributed to Herb Grosch as so-called Grosch's law.
During this period Cray had become increasingly annoyed at what he saw as interference from CDC management. Cray always demanded an absolutely quiet work environment with a minimum of management overhead, but as the company grew he found himself constantly interrupted by middle managers who – according to Cray – did little but gawk and use him as a sales tool by introducing him to prospective customers.
Cray decided that in order to continue development he would have to move from St. Paul: far enough away that a "quick visit" would be too long a drive and long-distance telephone charges would deter most calls, yet close enough that real visits or board meetings could be attended without too much difficulty. After some debate, Norris backed him and set up a new laboratory on land Cray owned in his hometown of Chippewa Falls. Part of the reason for the move may also have been Cray's worries about an impending nuclear war, which he felt made Minneapolis a serious safety concern. His house, built a few hundred yards from the new CDC laboratory, included a huge bomb shelter.
The new Chippewa Lab was set up during the middle of the 6600 project, although it does not seem to have delayed the project. After the 6600 shipped, the successor CDC 7600 system was the next product to be developed in Chippewa Falls, offering peak computational speeds ten times that of the 6600. The failed follow-on to the 7600, the CDC 8600, was the project that finally ended his run of successes at CDC in 1972.
Although the 6600 and 7600 had been huge successes in the end, both projects had almost bankrupted the company while they were being designed. The 8600 was running into similar difficulties and Cray eventually decided that the only solution was to start over fresh. This time Norris was not willing to take the risk, and another project within the company, the CDC STAR-100, seemed to be progressing more smoothly. Norris said he was willing to keep the project alive at a low level until the STAR was delivered, at which point full funding could be put into the 8600. Cray was unwilling to work under these conditions and left the company.
The split was fairly amicable, and when he started Cray Research in a new laboratory on the same Chippewa property a year later, Norris invested $250,000 in start-up money. Like CDC's organization, Cray R&D was based in Chippewa Falls and business headquarters were in Minneapolis. Unlike CDC, Cray's manufacturing was also in Chippewa Falls.
At first there was some question as to what exactly the new company should do; it did not seem that there would be any way for it to afford to develop a new computer, given that even the now-large CDC had been unable to support more than one. But when the president in charge of financing traveled to Wall Street to look for seed money, he was surprised to find that Cray's reputation was very well known. Far from having to struggle for some role to play in the market, the financial world was more than willing to provide Cray with all the money he would need to develop a new machine.
After several years of development, their first product was released in 1976 as the Cray-1. As with earlier Cray designs, the Cray-1 made sure that the "entire" computer was fast, as opposed to just the processor. When it was released it easily beat almost every machine in terms of speed, including the STAR-100 that had beaten the 8600 for funding. The only machine able to perform on the same sort of level was the ILLIAC IV, a specialized one-off machine that rarely operated near its maximum performance, except on very specific tasks. In general, the Cray-1 beat anything on the market by a wide margin.
Serial number 001 was "lent" to Los Alamos National Laboratory in 1976, and that summer the first full system was sold to the National Center for Atmospheric Research (NCAR) for $8.8 million. The company's early estimates had suggested that they might sell a dozen such machines, based on sales of similar machines from the CDC era, so the price was set accordingly. Eventually, well over 80 Cray-1s were sold, the company was a huge financial success, and Cray's innovations with supercomputers won him the nickname "The Wizard of Chippewa Falls".
Follow-up success was not as easy. While he worked on the Cray-2, other teams delivered the two-processor Cray X-MP, another huge success, and later a four-processor version. When the Cray-2 was finally released after six years of development, it was only marginally faster than the X-MP, its main advantage being a very large and fast main memory, and it thus sold in much smaller numbers. The Cray-2 ran at 250 MHz with a very deep pipeline, making it harder to write efficient code for than the shorter-pipeline X-MP.
As the Cray-3 project started, he found himself once again being "bothered" too much with day-to-day tasks. In order to concentrate on design, Cray left the CEO position of Cray Research in 1980 to become an independent contractor. In 1988 he moved the Cray-3 project from Chippewa Falls to a laboratory in Colorado Springs, Colorado.
In 1989 Cray was faced with a repeat of history when the Cray-3 started to run into difficulties. An upgrade of the X-MP using high-speed memory from the Cray-2 was under development and seemed to be making real progress, and once again management was faced with two projects and limited budgets. They eventually decided to take the safer route, releasing the new design as the Cray Y-MP.
Cray decided to spin off the Colorado Springs laboratory to form Cray Computer Corporation. This new entity took the Cray-3 project with them.
The 500 MHz Cray-3 proved to be Cray's second major failure. In order to provide the tenfold increase in performance that he always demanded of his newest machines, Cray decided that the machine would have to be built using gallium arsenide semiconductors. In the past Cray had always avoided anything even near the state of the art, preferring well-known solutions and designing a fast machine based on them; in this case, Cray was developing every part of the machine, even the chips inside it.
Nevertheless, the team was able to get the machine working and delivered the first example to NCAR on 24 May 1993.
The machine was still essentially a prototype, and the company was using the installation to debug the design. By this time a number of massively parallel machines were coming onto the market at price/performance ratios the Cray-3 could not touch. Cray responded with "brute force", starting design of the Cray-4, which would run at 1 GHz and outperform these machines regardless of price.
By 1995 there had been no further sales of the Cray-3, and the end of the Cold War made it unlikely anyone would buy enough Cray-4s to offer a return on the development funds. The company ran out of money and filed for Chapter 11 bankruptcy on 24 March 1995.
Cray had always resisted the massively parallel solution to high-speed computing, offering a variety of reasons that it would never work as well as one very fast processor. He famously quipped: "If you were plowing a field, which would you rather use: two strong oxen or 1024 chickens?" By the mid-1990s this argument was becoming increasingly difficult to justify, and modern compiler technology made developing programs on such machines not much more difficult than for their simpler counterparts.
Cray set up a new company, SRC Computers, and started the design of his own massively parallel machine. The new design concentrated on communications and memory performance, the bottleneck that hampered many parallel designs. Design had just started when Cray died suddenly as a result of a car accident. SRC Computers carried on development and specialized in reconfigurable computing.
Cray frequently cited two important aspects to his design philosophy: remove heat, and ensure that all signals that are supposed to arrive somewhere at the same time do indeed arrive at the same time.
His computers were equipped with built-in cooling systems, extending ultimately to coolant channels cast into the mainframes and thermally coupled to metal plates within the circuit boards, and to systems immersed in coolants. In a story he told about himself, he realized early in his career that he should interlock the computers with the cooling systems so that the computers would not operate unless the cooling systems were operational. It did not originally occur to him to interlock in the other direction until a customer reported that localized power outages had shut down their computer, but left the cooling system running — so they arrived in the morning to find the machine encased in ice.
Cray addressed the problem of skew by ensuring that every signal path in his later computers was the same electrical length, so that values that were to be acted upon at a particular time were indeed all valid values. When required, he would run the traces back and forth on the circuit boards until the desired length was achieved, and he employed Maxwell's equations in design of the boards to ensure that any radio frequency effects which altered the signal velocity and hence the electrical path length were accounted for.
When asked what kind of CAD tools he used for the Cray-1, Cray said that he liked #3 pencils with 8-1/2" x 11" quadrille paper pads. Cray recommended using the backs of the pages so that the lines were not so dominant.
Cray avoided publicity, and there are a number of unusual tales about his life away from work, termed "Rollwagenisms", from then-CEO of Cray Research, John A. Rollwagen. He enjoyed skiing, windsurfing, tennis, and other sports. Another favorite pastime was digging a tunnel under his home; he attributed the secret of his success to "visits by elves" while he worked in the tunnel: "While I'm digging in the tunnel, the elves will often come to me with solutions to my problem."
One story has it that when Cray was asked by management to provide detailed one-year and five-year plans for his next machine, he simply wrote, "Five-year goal: Build the biggest computer in the world. One year goal: One-fifth of the above." And another time, when expected to write a multi-page detailed status report for the company executives, Cray's two sentence report read: "Activity is progressing satisfactorily as outlined under the June plan. There have been no significant changes or deviations from the June plan."
Cray died on October 5, 1996, two weeks after his automobile was struck on the highway and rolled several times.
The IEEE Computer Society's Seymour Cray Computer Engineering Award, established in late 1997, recognizes innovative contributions to high performance computing systems exemplifying Cray's creative spirit.
He married Geri Harrand; he had one son and two daughters.
Signature block
A signature block (often abbreviated as signature, sig block, sig file, .sig, dot sig, siggy, or just sig) is a personalized block of text automatically appended at the bottom of an email message, Usenet article, or forum post.
An email signature is a block of text appended to the end of an email message often containing the sender's name, address, phone number, disclaimer or other contact information.
"Traditional" internet cultural .sig practices assume the use of monospaced ASCII text because they pre-date MIME and the use of HTML in email. In this tradition, it is common practice for a signature block to consist of one or more lines containing some brief information on the author of the message such as phone number and email address, URLs for sites owned or favoured by the author—but also often a quotation (occasionally automatically generated by such tools as fortune), or an ASCII art picture.
Among some groups of people it has been common to include .
Most email clients, including Mozilla Thunderbird, the built-in mail tool of the web browser Opera, Microsoft Outlook and Outlook Express, and Eudora, can be configured to automatically append an email signature with each new message. A shortened form of a signature block (sometimes called a "signature line"), only including one's name, often with some distinguishing prefix, can be used to simply indicate the end of a post or response. Most email servers can be configured to append email signatures to all outgoing mail as well.
An email signature generator is an app or an online web app that allows users to create a designed email signature using a pre-made template (with no need for HTML coding skills).
Signature blocks are also used in the Usenet discussion system.
Businesses often automatically append signature blocks to messages—or have policies mandating a certain style. Generally they resemble standard business cards in their content—and often in their presentation—with company logos and sometimes even the exact appearance of a business card. In some cases, a vCard is automatically attached.
In addition to these standard items, email disclaimers of various sorts are often automatically appended. These are typically couched in legal jargon, but it is unclear what weight they have in law, and they are routinely lampooned.
Business emails may also use some signature block elements mandated by local laws.
While criticized by some as overly bureaucratic, these regulations only extend existing laws for paper business correspondence to email.
The Usenet standard RFC 3676 specifies that a signature block should be displayed as plain text in a fixed-width font (no HTML, images, or other rich text), and should be delimited from the body of the message by a single line consisting of exactly two hyphens, followed by a space, followed by the end of line (i.e., in C notation: "-- \n"). This latter prescription, which goes by many names, including "sig dashes", "signature cut line", "sig-marker", "sig separator" and "signature delimiter", allows software to automatically mark or remove the sig block as the receiver desires.
Most email and Usenet clients (including, for example, Mozilla Thunderbird and K-9) will recognize the “dash dash space” delimiter and cut off the signature below it when inserting a quote of the original message into the composition window for a reply.
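As a brief illustration, a minimal sketch in Python (not taken from any actual mail client) of how this delimiter can be used to strip a signature before quoting a message:

def strip_signature(body):
    # Split on newlines and look for the standard "-- " delimiter line.
    lines = body.split("\n")
    for i, line in enumerate(lines):
        if line == "-- ":            # exactly dash, dash, space, end of line
            return "\n".join(lines[:i]).rstrip("\n")
    return body                      # no delimiter found; keep the body intact

# The message text and name below are invented for the example.
print(strip_signature("Thanks for the patch!\n-- \nAda Lovelace\nExample Corp"))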
On web forums, the rules are often less strict on how a signature block is formatted, as Web browsers typically are not operated within the same constraints as text interface applications. Users will typically define their signature as part of their profile. Depending on the board's capabilities, signatures may range from a simple line or two of text to an elaborately constructed HTML piece. Images are often allowed as well, including dynamically updated images usually hosted remotely and modified by a server-side script. In some cases avatars or hackergotchis take over some of the role of signatures.
With FidoNet, echomail and netmail software would often add an origin line at the end of a message. This would indicate the FidoNet address and name of the originating system (not the user). The user posting the message would generally not have any control over the origin line. However, single-line taglines, added under user control, would often contain a humorous or witty saying. Multi-line user signature blocks were rare.
However, a tearline standard for FidoNet was included in FTS-0004 and clarified in FSC-0068 as three dashes, optionally followed by a space, optionally followed by text.
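A minimal sketch in Python of recognizing this tearline format (one reading of the specification, which leaves some ambiguity about whether trailing text requires the preceding space):

import re

# Three dashes, then optionally a space followed by optional text.
tearline = re.compile(r"^---( .*)?$")

for line in ("---", "--- Blue Wave/386", "-- ", "----"):
    print(repr(line), bool(tearline.match(line)))
# Matches the first two; "-- " is a signature delimiter rather than a
# tearline, and "----" has four dashes.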
Semantics
Semantics (from "sēmantikós", "significant") is the linguistic and philosophical study of meaning in language, programming languages, formal logic, and semiotics. It is concerned with the relationship between "signifiers"—like words, phrases, signs, and symbols—and what they stand for in reality, their denotation.
In the international scientific vocabulary semantics is also called "semasiology". The word "semantics" was first used by Michel Bréal, a French philologist. It denotes a range of ideas—from the popular to the highly technical. It is often used in ordinary language for denoting a problem of understanding that comes down to word selection or connotation. This problem of understanding has been the subject of many formal enquiries, over a long period of time, especially in the field of formal semantics. In linguistics, it is the study of the interpretation of signs or symbols used in agents or communities within particular circumstances and contexts. Within this view, sounds, facial expressions, body language, and proxemics have semantic (meaningful) content, and each comprises several branches of study. In written language, things like paragraph structure and punctuation bear semantic content; other forms of language bear other semantic content.
The formal study of semantics intersects with many other fields of inquiry, including lexicology, syntax, pragmatics, etymology and others. Independently, semantics is also a well-defined field in its own right, often with synthetic properties. In the philosophy of language, semantics and reference are closely connected. Further related fields include philology, communication, and semiotics. The formal study of semantics can therefore be manifold and complex.
Semantics contrasts with syntax, the study of the combinatorics of units of a language (without reference to their meaning), and pragmatics, the study of the relationships between the symbols of a language, their meaning, and the users of the language. Semantics as a field of study also has significant ties to various representational theories of meaning including truth theories of meaning, coherence theories of meaning, and correspondence theories of meaning. Each of these is related to the general philosophical study of reality and the representation of meaning.
In linguistics, semantics is the subfield that is devoted to the study of meaning, as inherent at the levels of words, phrases, sentences, and larger units of discourse (termed "texts", or "narratives"). The study of semantics is also closely linked to the subjects of representation, reference and denotation. The basic study of semantics is oriented to the examination of the meaning of signs, and the study of relations between different linguistic units and compounds: homonymy, synonymy, antonymy, hypernymy, hyponymy, meronymy, metonymy, holonymy, paronymy. A key concern is how meaning attaches to larger chunks of text, possibly as a result of the composition from smaller units of meaning. Traditionally, semantics has included the study of sense and reference, truth conditions, argument structure, thematic roles, discourse analysis, and the linkage of all of these to syntax.
Model-theoretic semantics originates from Montague's work (see below): a highly formalized theory of natural language semantics in which expressions are assigned denotations (meanings) such as individuals, truth values, or functions from one of these to another. The truth of a sentence, and its logical relation to other sentences, is then evaluated relative to a model.
Truth-conditional semantics, pioneered by the philosopher Donald Davidson, is another formalized theory, which aims to associate each natural language sentence with a meta-language description of the conditions under which it is true; for example: 'Snow is white' is true if and only if snow is white. The challenge is to arrive at the truth conditions for any sentence from fixed meanings assigned to the individual words and fixed rules for how to combine them. In practice, truth-conditional semantics is similar to model-theoretic semantics; conceptually, however, they differ in that truth-conditional semantics seeks to connect language with statements about the real world (in the form of meta-language statements), rather than with abstract models.
Conceptual semantics is an effort to explain properties of argument structure. The assumption behind this theory is that syntactic properties of phrases reflect the meanings of the words that head them. With this theory, linguists can better deal with the fact that subtle differences in word meaning correlate with other differences in the syntactic structure that the word appears in. This is done by looking at the internal structure of words; the small parts that make up this internal structure are termed "semantic primitives".
Cognitive semantics approaches meaning from the perspective of cognitive linguistics. In this framework, language is explained via general human cognitive abilities rather than a domain-specific language module. The techniques native to cognitive semantics are typically used in lexical studies such as those put forth by Leonard Talmy, George Lakoff, Dirk Geeraerts, and Bruce Wayne Hawkins. Some cognitive semantic frameworks, such as that developed by Talmy, take syntactic structures into account as well. Through modern research, semantics can be linked to Wernicke's area of the brain and can be measured using the event-related potential (ERP), a rapid electrical response recorded with small disc electrodes placed on a person's scalp.
Lexical semantics is a linguistic theory that investigates word meaning. This theory holds that the meaning of a word is fully reflected by its context; the meaning of a word is constituted by its contextual relations. Therefore, a distinction is made between degrees of participation as well as modes of participation. In order to accomplish this distinction, any part of a sentence that bears a meaning and combines with the meanings of other constituents is labeled a semantic constituent. Semantic constituents that cannot be broken down into more elementary constituents are labeled minimal semantic constituents.
Various fields or disciplines have long been contributing to cross-cultural semantics. Are words like "love", "truth", and "hate" universals? Is even the word "sense" – so central to semantics – a universal, or a concept entrenched in a long-standing but culture-specific tradition? These are the kind of crucial questions that are discussed in cross-cultural semantics. Translation theory, ethnolinguistics, linguistic anthropology and cultural linguistics specialize in the field of comparing, contrasting, and translating words, terms and meanings from one language to another (see Herder, W. von Humboldt, Boas, Sapir, and Whorf). But philosophy, sociology, and anthropology have long-established traditions in contrasting the different nuances of the terms and concepts we use. And online encyclopaedias such as the "Stanford Encyclopedia of Philosophy" (https://plato.stanford.edu), and increasingly Wikipedia itself, have greatly facilitated the possibilities of comparing the background and usages of key cultural terms. In recent years the question of whether key terms are translatable or untranslatable has increasingly come to the fore of global discussions, especially since the publication of Barbara Cassin's "Dictionary of Untranslatables: A Philosophical Lexicon" in 2014.
Computational semantics is focused on the processing of linguistic meaning. To do this, concrete algorithms and architectures are described. Within this framework the algorithms and architectures are also analyzed in terms of decidability, time/space complexity, the data structures they require, and communication protocols.
Many of the formal approaches to semantics in mathematical logic and computer science originated in philosophy of language. Initially, the most influential semantic theory stemmed from Gottlob Frege and Bertrand Russell. Frege and Russell are seen as the originators of a tradition in analytic philosophy to explain meaning via syntax and mathematical functionality. Ludwig Wittgenstein, a former student of Russell, is also seen as one of the seminal figures in the analytic tradition. All three of these early philosophers of language were concerned with how sentences expressed information in the form of propositions and with the truth values or truth conditions a given sentence has in virtue of the proposition it expresses.
In the late 1960s, Richard Montague proposed a system for defining semantic entries in the lexicon in terms of the lambda calculus. In these terms, the syntactic parse of the sentence "John ate every bagel" would consist of a subject ("John") and a predicate ("ate every bagel"); Montague demonstrated that the meaning of the sentence altogether could be decomposed into the meanings of its parts, combined via relatively few rules of combination. The logical predicate thus obtained would be elaborated further, e.g. using truth theory models, which ultimately relate meanings to a set of Tarskian universals, which may lie outside the logic. The notion of such meaning atoms or primitives is basic to the language of thought hypothesis from the 1970s.
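A toy extensional sketch of this compositional idea in Python (Montague's actual system used typed intensional logic; the small domain and denotations below are invented for illustration):

# Toy model: the things that exist, and who ate what.
bagels = {"b1", "b2"}
ate = {("John", "b1"), ("John", "b2"), ("Mary", "b1")}

# "every bagel" denotes a function from predicates to truth values.
every_bagel = lambda pred: all(pred(x) for x in bagels)

# "ate every bagel" is a predicate on subjects; applying it to "John"
# yields the truth value of the whole sentence.
ate_every_bagel = lambda subj: every_bagel(lambda obj: (subj, obj) in ate)
print(ate_every_bagel("John"))   # True
print(ate_every_bagel("Mary"))   # False: Mary did not eat b2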
Despite its elegance, Montague grammar was limited by the context-dependent variability in word sense, and led to several attempts at incorporating context.
Metasemantics is the study of the foundations of natural language semantics.
In computer science, the term "semantics" refers to the meaning of language constructs, as opposed to their form (syntax). According to Euzenat, semantics "provides the rules for interpreting the syntax which do not provide the meaning directly but constrains the possible interpretations of what is declared."
The semantics of programming languages and other languages is an important issue and area of study in computer science. Like the syntax of a language, its semantics can be defined exactly.
For instance, the following statements use different syntaxes, but cause the same instructions to be executed, namely, perform an arithmetical addition of 'y' to 'x' and store the result in a variable called 'x':
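Representative examples (a sampling chosen for illustration; many other languages would serve equally well):

x += y;          // C, C++, Java
x := x + y;      { Pascal }
LET X = X + Y    ' early BASIC
ADD Y TO X.      *> COBOL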
Various ways have been developed to describe the semantics of programming languages formally, building on mathematical logic. The principal approaches are operational semantics (the meaning of a construct is specified by the computation it induces when executed), denotational semantics (each construct is mapped to a mathematical object, such as a function), and axiomatic semantics (meaning is specified by logical assertions that hold before and after a construct executes).
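As a small illustration of the operational style, here is a sketch in Python of a big-step evaluator for a toy expression language (the AST encoding is invented for this example):

def evaluate(expr, env):
    # Each branch implements one evaluation rule of the toy semantics.
    kind = expr[0]
    if kind == "num":                 # a literal evaluates to its value
        return expr[1]
    if kind == "var":                 # a variable evaluates to its binding
        return env[expr[1]]
    if kind == "add":                 # evaluate both operands, then sum
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    raise ValueError("unknown construct: " + kind)

print(evaluate(("add", ("var", "x"), ("num", 2)), {"x": 40}))   # 42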
The Semantic Web refers to the extension of the World Wide Web via embedding added semantic metadata, using semantic data modeling techniques such as Resource Description Framework (RDF) and Web Ontology Language (OWL).
On the Semantic Web, terms such as "semantic network" and "semantic data model" are used to describe particular types of data model characterized by the use of directed graphs in which the vertices denote concepts or entities in the world and their properties, and the arcs denote relationships between them. These can formally be described as description logic concepts and roles, which correspond to OWL classes and properties.
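A minimal sketch in Python of this directed-graph data model, with vertices as concepts and labeled arcs as relationships (the vocabulary is invented for the example; real systems would use RDF or OWL identifiers):

triples = {
    ("Canary", "subClassOf", "Bird"),
    ("Bird", "subClassOf", "Vertebrate"),
    ("Canary", "hasColor", "Yellow"),
}

def superclasses(concept):
    # Follow subClassOf arcs transitively through the graph.
    direct = {o for (s, p, o) in triples if s == concept and p == "subClassOf"}
    return direct | {a for d in direct for a in superclasses(d)}

print(superclasses("Canary"))   # {'Bird', 'Vertebrate'}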
In psychology, "semantic memory" is memory for meaning – in other words, the aspect of memory that preserves only the "gist", the general significance, of remembered experience – while episodic memory is memory for the ephemeral details – the individual features, or the unique particulars of experience. The term "episodic memory" was introduced by Tulving and Schacter in the context of "declarative memory", which involved simple association of factual or objective information concerning its object. Word meanings are measured by the company they keep, i.e. the relationships among words themselves in a semantic network. Memories may be transferred intergenerationally or isolated in one generation due to a cultural disruption. Different generations may have different experiences at similar points in their own time-lines. This may then create a vertically heterogeneous semantic net for certain words in an otherwise homogeneous culture. In a network created by people analyzing their understanding of the word (such as WordNet) the links and decomposition structures of the network are few in number and kind, and include "part of", "kind of", and similar links. In automated ontologies the links are computed vectors without explicit meaning. Various automated technologies are being developed to compute the meaning of words: latent semantic indexing and support vector machines, as well as natural language processing, artificial neural networks and predicate calculus techniques.
Ideasthesia is a psychological phenomenon in which activation of concepts evokes sensory experiences. For example, in synesthesia, activation of a concept of a letter (e.g., that of the letter "A") evokes sensory-like experiences (e.g., of red color).
In the 1960s, psychosemantic studies became popular after Charles E. Osgood's massive cross-cultural studies using his semantic differential (SD) method, which used thousands of nouns and adjective bipolar scales. A specific form of the SD, the Projective Semantics method, uses only the most common and neutral nouns that correspond to the 7 groups (factors) of adjective scales most consistently found in cross-cultural studies (Evaluation, Potency, Activity as found by Osgood, and Reality, Organization, Complexity, Limitation as found in other studies). In this method, seven groups of bipolar adjective scales corresponded to seven types of nouns, so the method was thought to have object-scale symmetry (OSS) between the scales and the nouns evaluated on them. For example, the nouns corresponding to the listed 7 factors would be: Beauty, Power, Motion, Life, Work, Chaos, Law. Beauty was expected to be assessed unequivocally as "very good" on adjectives of Evaluation-related scales, Life as "very real" on Reality-related scales, etc. However, deviations in this symmetric and very basic matrix might show underlying biases of two types: scale-related bias and object-related bias. This OSS design was meant to increase the sensitivity of the SD method to any semantic biases in the responses of people within the same culture and educational background.
Another set of concepts related to fuzziness in semantics is based on prototypes. The work of Eleanor Rosch in the 1970s led to the view that natural categories are not characterizable in terms of necessary and sufficient conditions, but are graded (fuzzy at their boundaries) and inconsistent as to the status of their constituent members. One may compare this with Jung's archetype, though the concept of the archetype remains a static one. Some post-structuralists are against the fixed or static meaning of words; Derrida, following Nietzsche, talked about slippages in fixed meanings.
Systems of categories are not objectively "out there" in the world but are rooted in people's experience. These categories evolve as learned concepts of the world – meaning is not an objective truth, but a subjective construct, learned from experience, and language arises out of the "grounding of our conceptual systems in shared embodiment and bodily experience".
A corollary of this is that the conceptual categories (i.e. the lexicon) will not be identical for different cultures, or indeed, for every individual in the same culture. This leads to another debate (see the Sapir–Whorf hypothesis or Eskimo words for snow).
Semantic network
A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields.
Typical standardized semantic networks are expressed as semantic triples.
Semantic networks are used in natural language processing applications such as semantic parsing and word-sense disambiguation.
The use of semantic networks in logic, and of directed acyclic graphs as a mnemonic tool, dates back centuries, the earliest documented use being the Greek philosopher Porphyry's commentary on Aristotle's categories in the third century AD.
In computing history, "Semantic Nets" for the propositional calculus were first implemented for computers by Richard H. Richens of the Cambridge Language Research Unit in 1956 as an "interlingua" for machine translation of natural languages, although the importance of this work and of the CLRU was only belatedly realized.
Semantic networks were also independently implemented by Robert F. Simmons and Sheldon Klein, using the first-order predicate calculus as a base, after being inspired by a demonstration of Victor Yngve. The "line of research was originated by the first President of the Association [Association for Computational Linguistics], Victor Yngve, who in 1960 had published descriptions of algorithms for using a phrase structure grammar to generate syntactically well-formed nonsense sentences. Sheldon Klein and I about 1962-1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text." Other researchers, most notably M. Ross Quillian and others at the System Development Corporation, helped contribute to their work in the early 1960s as part of the SYNTHEX project. It is these publications at SDC that most modern derivatives of the term "semantic network" cite as their background. Later prominent work was done by Allan M. Collins and Quillian (e.g., Collins and Quillian; Collins and Loftus). Still later, in 2006, Hermann Helbig fully described MultiNet.
In the late 1980s, two Netherlands universities, Groningen and Twente, jointly began a project called "Knowledge Graphs", which are semantic networks but with the added constraint that edges are restricted to be from a limited set of possible relations, to facilitate algebras on the graph. In the subsequent decades, the distinction between semantic networks and knowledge graphs was blurred. In 2012, Google gave their knowledge graph the name Knowledge Graph.
The Semantic Link Network has been systematically studied as a social semantic networking method. Its basic model consists of semantic nodes, semantic links between nodes, and a semantic space that defines the semantics of nodes and links along with reasoning rules on semantic links. The systematic theory and model were published in 2004. This research direction can be traced to the definition of inheritance rules for efficient model retrieval in 1998 and to the Active Document Framework (ADF). Since 2003, research has developed toward social semantic networking. This work is a systematic innovation for the age of the World Wide Web and global social networking, rather than an application or simple extension of the Semantic Net; its purpose and scope differ from those of the Semantic Net. Rules for reasoning and evolution, and the automatic discovery of implicit links, play an important role in the Semantic Link Network. More recently it has been developed to support Cyber-Physical-Social Intelligence, and it was used for creating a general summarization method. The self-organised Semantic Link Network was integrated with a multi-dimensional category space to form a semantic space supporting advanced applications with multi-dimensional abstractions and self-organised semantic links. It has been verified that the Semantic Link Network plays an important role in understanding and representation through text summarisation applications. To investigate special social semantics, competition and symbiosis relations, as well as their roles in an evolving society, were studied under the emerging topic of Cyber-Physical-Social Intelligence.
More specialized forms of semantic networks have been created for specific uses. For example, in 2008, Fawsy Bendeck's PhD thesis formalized the Semantic Similarity Network (SSN), which contains specialized relationships and propagation algorithms to simplify semantic similarity representation and calculations.
A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another.
Most semantic networks are cognitively based. They also consist of arcs and nodes which can be organized into a taxonomic hierarchy. Semantic networks contributed ideas of spreading activation, inheritance, and nodes as proto-objects.
A small fragment of such a network can be represented in Lisp using an association list:
(setq *database*        ; toy knowledge base; entries completed for illustration
      '((canary (is-a bird)
                (color yellow))
        (bird (is-a vertebrate))))
You would use the "assoc" function with a key of "canary" to extract all the information about the "canary" type.
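For example, looking up the "canary" entry (Common Lisp, continuing the snippet above):

(assoc 'canary *database*)
;; => (CANARY (IS-A BIRD) (COLOR YELLOW))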
An example of a semantic network is WordNet, a lexical database of English. It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. Some of the most common semantic relations defined are meronymy (A is a meronym of B if A is part of B), holonymy (B is a holonym of A if B contains A), hyponymy (or troponymy) (A is subordinate of B; A is kind of B), hypernymy (A is superordinate of B), synonymy (A denotes the same as B) and antonymy (A denotes the opposite of B).
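A brief sketch of querying such relations programmatically, assuming a Python environment with NLTK and its WordNet corpus installed:

from nltk.corpus import wordnet as wn

sense = wn.synsets("canary")[0]     # the first synset for "canary"
print(sense.definition())           # its short, general definition
print(sense.hypernyms())            # superordinate synsets (hypernymy)
print(sense.part_meronyms())        # "part of" relations (meronymy), if any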
WordNet properties have been studied from a network theory perspective and compared to other semantic networks created from Roget's Thesaurus and word association tasks. From this perspective, all three have a small-world structure.
It is also possible to represent logical descriptions using semantic networks such as the existential graphs of Charles Sanders Peirce or the related conceptual graphs of John F. Sowa. These have expressive power equal to or exceeding standard first-order predicate logic. Unlike WordNet or other lexical or browsing networks, semantic networks using these representations can be used for reliable automated logical deduction. Some automated reasoners exploit the graph-theoretic features of the networks during processing.
Other examples of semantic networks are Gellish models. Gellish English with its Gellish English dictionary, is a formal language that is defined as a network of relations between concepts and names of concepts. Gellish English is a formal subset of natural English, just as Gellish Dutch is a formal subset of Dutch, whereas multiple languages share the same concepts. Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of (binary) relations between things. Each relation in the network is an expression of a fact that is classified by a relation type. Each relation type itself is a concept that is defined in the Gellish language dictionary. Each related thing is either a concept or an individual thing that is classified by a concept. The definitions of concepts are created in the form of definition models (definition networks) that together form a Gellish Dictionary. A Gellish network can be documented in a Gellish database and is computer interpretable.
SciCrunch is a collaboratively edited knowledge base for scientific resources. It provides unambiguous identifiers (Research Resource IDentifiers or RRIDs) for software, lab tools, etc., and it also provides options to create links between RRIDs and communities.
Another example of semantic networks, based on category theory, is ologs. Here each type is an object, representing a set of things, and each arrow is a morphism, representing a function. Commutative diagrams also are prescribed to constrain the semantics.
In the social sciences people sometimes use the term semantic network to refer to co-occurrence networks. The basic idea is that words that co-occur in a unit of text, e.g. a sentence, are semantically related to one another. Ties based on co-occurrence can then be used to construct semantic networks.
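A minimal sketch in Python of constructing such a network, linking words that co-occur within the same sentence (the sample text is invented):

from itertools import combinations
from collections import Counter

sentences = [
    "semantic networks represent relations between concepts",
    "concepts and relations form a directed graph",
]
edges = Counter()
for sentence in sentences:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        edges[(a, b)] += 1           # edge weight = number of co-occurrences

print(edges[("concepts", "relations")])   # 2: the pair co-occurs in both sentences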
There are also elaborate types of semantic networks connected with corresponding sets of software tools used for lexical knowledge engineering, like the Semantic Network Processing System (SNePS) of Stuart C. Shapiro or the MultiNet paradigm of Hermann Helbig, especially suited for the semantic representation of natural language expressions and used in several NLP applications.
Semantic networks are used in specialized information retrieval tasks, such as plagiarism detection. They provide information on hierarchical relations in order to employ semantic compression to reduce language diversity and enable the system to match word meanings, independently from sets of words used.
The Knowledge Graph proposed by Google in 2012 is an application of semantic networks in a search engine.
Modeling multi-relational data like semantic networks in low-dimensional spaces through forms of embedding has benefits in expressing entity relationships as well as in extracting relations from media such as text. There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently TransE (NIPS 2013). Applications of embedding knowledge base data include social network analysis and relationship extraction.
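A minimal sketch of the TransE scoring idea in Python with NumPy (the entities, relation, and embedding size are invented; a trained model learns vectors so that head + relation ≈ tail for true triples):

import numpy as np

rng = np.random.default_rng(0)
emb = {name: rng.normal(size=4) for name in ("canary", "is_a", "bird")}

def transe_score(head, relation, tail):
    # Lower is better: distance between the translated head and the tail.
    return np.linalg.norm(emb[head] + emb[relation] - emb[tail])

print(transe_score("canary", "is_a", "bird"))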
Stockholm Bloodbath
The Stockholm Bloodbath (Swedish: Stockholms blodbad, Danish: Det Stockholmske Blodbad) was a trial that led to a series of executions in Stockholm between 7 and 9 November 1520. The events were initiated directly after the coronation of Christian II (who after the bloodbath became known in Sweden as "Kristian Tyrann", 'Christian the Tyrant') as the new king of Sweden, when the guests at the coronation feast were invited to a meeting at the castle. Archbishop Gustav Trolle's demand for economic compensation for, among other things, the demolition of Almarestäket's fortress led to the question of whether the former Swedish regent Sten Sture the Younger and his supporters had been guilty of heresy. Supported by canon law, nearly 100 people were executed in the days following the meeting. Among the executed were many people from the aristocracy who had supported the "Sture Party" in the previous years.
The Stockholm Bloodbath was a consequence of conflict between Swedish pro-unionists (in favour of the Kalmar Union, then dominated by Denmark) and anti-unionists (supporters of Swedish independence), and also between the anti-unionists and the Danish aristocracy, which in other aspects was opposed to King Christian. The anti-unionist party was headed by Sten Sture the Younger, and the pro-unionist party by the archbishop Gustavus Trolle.
King Christian, who had already taken measures to isolate Sweden politically, intervened to help Archbishop Trolle, who was under siege in his fortress at Stäket, but he was defeated by Sture and his peasant soldiers at Vedila, and forced to return to Denmark. A second attempt to bring Sweden back under his control in 1518 was also countered by Sture's victory at Brännkyrka. Eventually, a third attempt made in 1520 with a large army of French, German and Scottish mercenaries proved successful.
Sture was mortally wounded at the Battle of Bogesund, on 19 January. The Danish army, unopposed, was approaching Uppsala, where the members of the Swedish Riksdag of the Estates had already assembled. The senators agreed to render homage to Christian, on condition that he give a full amnesty for past actions and a guarantee that Sweden should be ruled according to Swedish laws and customs. A convention to this effect was confirmed by the king and the Danish Privy Council on 31 March. Sture's widow, Lady Kristina, was still resisting in Stockholm with support from the peasants of central Sweden, and defeated the Danes at Balundsås on 19 March. Eventually, her forces were defeated at the Battle of Uppsala ("långfredagsslaget vid Uppsala") on Good Friday, 6 April.
In May, the Danish fleet arrived and Stockholm was attacked by land and sea. Lady Kristina resisted for four months longer, and by the beginning of autumn the tide of war had started to turn in her favor. The inhabitants of Stockholm had a large supply of food and fared relatively well, while Christian realized that his own stockpile was dwindling and that maintaining the siege throughout the winter would doom his army. Through Bishop Mattias, Hemming Gadh and other Swedes of high stature, Christian sent a proposal for a settlement that was very advantageous for the Swedes. During a meeting on what is thought to be Beckholmen, outside Djurgården, Christian swore that all acts against him would be forgotten, and gave pardons to several named persons (including Gustav Vasa, who had escaped to Denmark, where he had been held hostage). Lady Kristina would be given Hörningsholm and all of Mörkön as a fief, and was also promised Tavastehus in Finland. When this had been put down on paper, the mayor of the city delivered the keys to the city on Södermalm, and Christian held his grand entry. Shortly after, he sailed back to Denmark, to return in October for his coronation.
On 4 November, Christian was anointed by Gustavus Trolle in Storkyrkan Cathedral and took the usual oath to rule the kingdom through native-born Swedes only. A banquet was held for the next three days.
On 7 November, the events of the Stockholm bloodbath began to unfold. On the evening of that day, Christian summoned many Swedish leaders to a private conference at the palace. At dusk on 8 November, Danish soldiers, with lanterns and torches, entered a great hall of the royal palace and took away several noble guests. Later in the evening, many more of the king's guests were imprisoned. All these people had previously been marked down on Archbishop Trolle's proscription list.
The following day, 9 November, a council, headed by Archbishop Trolle, sentenced the proscribed to death for being heretics; the main point of accusation was their having united in a pact to depose Trolle a few years earlier. However, many of them were also leading men of the Sture party and thus potential opponents of the Danish king. At noon, the anti-unionist bishops of Skara and Strängnäs were led out into the great square and beheaded. Fourteen noblemen, three burgomasters, fourteen town councillors and about twenty common citizens of Stockholm were then hanged or beheaded.
The executions continued throughout the following day (10 November). According to the chief executioner, Jörgen Homuth, 82 people were executed. It has been claimed that Christian also took revenge on Sten Sture's body, having it dug up and burnt, as well as the body of his child. Sture's widow Lady Kristina, and many other noblewomen, were taken as prisoners to Denmark.
Christian justified the massacre in a proclamation to the Swedish people as a measure necessary to avoid a papal interdict, but, when apologising to the Pope for the decapitation of the bishops, he blamed his troops for performing unauthorised acts of vengeance.
If the intention behind the executions had been to frighten the anti-unionist party into submission, it proved wholly counterproductive. Gustav Vasa was a son of Erik Johansson, one of the victims of the executions. Vasa, upon hearing of the massacre, travelled north to the province of Dalarna to seek support for a new revolt. The population, informed of what had happened, rallied to his side. They were ultimately able to defeat Christian's forces in the Swedish War of Liberation. The massacre became the catalyst that permanently separated Sweden from Denmark.
The Stockholm Bloodbath precipitated a lengthy hostility towards Danes in Sweden, and thenceforth the two nations were almost continuously hostile toward each other. These hostilities, developing into a struggle for hegemony in the Scandinavian and North German area, lasted for nearly three hundred years. Memory of the Bloodbath served to let Swedes depict themselves (and often, actually regard themselves) as the wronged and aggrieved party, even when they were the ones who eventually took the political and military lead, as in the conquest and annexation of Scania, sealed by the Treaty of Roskilde in 1658.
The event earned Christian II the nickname "Kristian Tyrann" ("Christian Tyrant") in Sweden, which he retains to the present day. It is a common misconception in Sweden that King Christian II is, by contrast, bynamed "Christian den Gode" ("Christian the Good") in Denmark, but this is apocryphal.
According to Danish historians, no bynames have been given to Christian II in Danish historical tradition. In an interview with Richardson in 1979, the Danish historian Mikael Venge, author of the article about Christian II in "Dansk Biografisk Leksikon", said: "I think you ought to protest the next time the Swedish radio claims anything so utterly unfounded that could be understood as if the Danes approved of the Stockholm bloodbath." Despite this, even today, tourist guides in Stockholm spice up their tours of the Old Town (Gamla Stan) with the story of Christian II's "rehabilitation" back in Denmark.
The event is depicted in the 1901 novel "Kongens Fald" ("The Fall of the King") by Nobel laureate Johannes V. Jensen. The bloodbath forms a large part of the 1948 historical novel "The Adventurer" (original title "Mikael Karvajalka") by the Finnish writer Mika Waltari; the events are depicted as seen by Mikael Karvajalka, a young Finn in Stockholm at the time. A number of references to the Stockholm Bloodbath appear in "Freddy's Book" (1980) by the American novelist John Gardner. The 2005 book "Bruden fra Gent" (translated into Dutch as "De Gentse Bruid", "The Bride from Ghent") by the Danish writer Dorrit Willumsen also references these events. It illuminates the life of Christian II as seen through his relationships with his mistress, the Dutch Dyveke, and his wife Isabella of Austria, sister of Charles V.
Signals intelligence
Signals intelligence (SIGINT) is intelligence-gathering by interception of signals, whether communications between people (communications intelligence—abbreviated to COMINT) or from electronic signals not directly used in communication (electronic intelligence—abbreviated to ELINT). Signals intelligence is a subset of intelligence collection management.
As sensitive information is often encrypted, signals intelligence in turn involves the use of cryptanalysis to decipher the messages. Traffic analysis—the study of who is signaling whom and in what quantity—is also used to derive information.
Electronic interceptions appeared as early as 1900, during the Boer War of 1899–1902. The British Royal Navy had installed wireless sets produced by Marconi on board their ships in the late 1890s, and the British Army used some limited wireless signalling. The Boers captured some wireless sets and used them to make vital transmissions. Since the British were the only people transmitting at the time, no special interpretation of the intercepted signals was necessary.
The birth of signals intelligence in a modern sense dates from the Russo-Japanese War of 1904–1905. As the Russian fleet prepared for conflict with Japan in 1904, the British ship HMS "Diana" stationed in the Suez Canal intercepted Russian naval wireless signals being sent out for the mobilization of the fleet, for the first time in history.
Over the course of the First World War, the new method of signals intelligence reached maturity. Failure to properly protect its communications fatally compromised the Russian Army in its advance early in World War I and led to its disastrous defeat by the Germans under Ludendorff and Hindenburg at the Battle of Tannenberg. In 1918, French intercept personnel captured a message written in the new ADFGVX cipher, which was cryptanalyzed by Georges Painvin. This gave the Allies advance warning of the German 1918 Spring Offensive.
The British in particular built up great expertise in the newly emerging field of signals intelligence and codebreaking. On the declaration of war, Britain cut all German undersea cables. This forced the Germans to use either a telegraph line that connected through the British network and could be tapped, or radio, which the British could then intercept. Rear-Admiral Henry Oliver appointed Sir Alfred Ewing to establish an interception and decryption service at the Admiralty: Room 40. An interception service known as the 'Y' service, together with the post office and Marconi stations, grew rapidly to the point where the British could intercept almost all official German messages.
The German fleet was in the habit each day of wirelessing the exact position of each ship and giving regular position reports when at sea. It was possible to build up a precise picture of the normal operation of the High Seas Fleet, to infer from the routes they chose where defensive minefields had been placed and where it was safe for ships to operate. Whenever a change to the normal pattern was seen, it immediately signalled that some operation was about to take place and a warning could be given. Detailed information about submarine movements was also available.
The use of radio receiving equipment to pinpoint the location of the transmitter was also developed during the war.
Captain H.J. Round, working for Marconi, began carrying out experiments with direction-finding radio equipment for the army in France in 1915. By May 1915, the Admiralty was able to track German submarines crossing the North Sea. Some of these stations also acted as 'Y' stations to collect German messages, but a new section was created within Room 40 to plot the positions of ships from the directional reports.
Room 40 played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea. The Battle of Dogger Bank was won in no small part due to the intercepts that allowed the Navy to position its ships in the right place. It played a vital role in subsequent naval clashes, including the Battle of Jutland, when the British fleet was sent out to intercept the German fleet. The direction-finding capability allowed for the tracking and location of German ships, submarines and Zeppelins. The system was so successful that, by the end of the war, over 80 million words, comprising the totality of German wireless transmission over the course of the war, had been intercepted by the operators of the Y-stations and decrypted. However, its most astonishing success was in decrypting the Zimmermann Telegram, a telegram from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico.
With the importance of interception and decryption firmly established by the wartime experience, countries established permanent agencies dedicated to this task in the interwar period. In 1919, the British Cabinet's Secret Service Committee, chaired by Lord Curzon, recommended that a peace-time codebreaking agency should be created. The Government Code and Cypher School (GC&CS) was the first peace-time codebreaking agency, with a public function "to advise as to the security of codes and cyphers used by all Government departments and to assist in their provision", but also with a secret directive to "study the methods of cypher communications used by foreign powers". GC&CS officially formed on 1 November 1919, having produced its first decrypt on 19 October. By 1940, GC&CS was working on the diplomatic codes and ciphers of 26 countries, tackling over 150 diplomatic cryptosystems.
The US Cipher Bureau was established in 1919 and achieved some success at the Washington Naval Conference in 1921, through cryptanalysis by Herbert Yardley. Secretary of War Henry L. Stimson closed the US Cipher Bureau in 1929 with the words "Gentlemen do not read each other's mail."
The use of SIGINT had even greater implications during World War II. The combined effort of intercepts and cryptanalysis for the whole of the British forces in World War II came under the code name "Ultra" managed from Government Code and Cypher School at Bletchley Park. Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities which made Bletchley's attacks feasible.
Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions.
Winston Churchill was reported to have told King George VI: "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!" Supreme Allied Commander Dwight D. Eisenhower, at the end of the war, described Ultra as having been "decisive" to Allied victory. Sir Harry Hinsley, the official historian of British intelligence in World War II, argued that Ultra shortened the war "by not less than two years and probably by four years"; and that, in the absence of Ultra, it is uncertain how the war would have ended.
The United States Department of Defense has defined the term "signals intelligence" as a category of intelligence comprising, either individually or in combination, all communications intelligence, electronic intelligence, and foreign instrumentation signals intelligence, however transmitted.
Being a broad field, SIGINT has many sub-disciplines. The two main ones are communications intelligence (COMINT) and electronic intelligence (ELINT).
A collection system has to know to look for a particular signal. "System", in this context, has several nuances. Targeting is an output of the process of developing "collection requirements":
First, atmospheric conditions, sunspots, the target's transmission schedule and antenna characteristics, and other factors create uncertainty that a given signal intercept sensor will be able to "hear" the signal of interest, even with a geographically fixed target and an opponent making no attempt to evade interception. Basic countermeasures against interception include frequent changing of radio frequency, polarization, and other transmission characteristics. An intercept aircraft could not get off the ground if it had to carry antennas and receivers for every possible frequency and signal type to deal with such countermeasures.
Second, locating the transmitter's position is usually part of SIGINT. Triangulation and more sophisticated radio location techniques, such as time of arrival methods, require multiple receiving points at different locations. These receivers send location-relevant information to a central point, or perhaps to a distributed system in which all participate, such that the information can be correlated and a location computed.
Modern SIGINT systems, therefore, have substantial communications among intercept platforms. Even if some platforms are clandestine, there is still a broadcast of information telling them where and how to look for signals. A United States targeting system under development in the late 1990s, PSTS, constantly sends out information that helps the interceptors properly aim their antennas and tune their receivers. Larger intercept aircraft, such as the EP-3 or RC-135, have the on-board capability to do some target analysis and planning, but others, such as the RC-12 GUARDRAIL, are completely under ground direction. GUARDRAIL aircraft are fairly small, and usually work in units of three to cover a tactical SIGINT requirement, where the larger aircraft tend to be assigned strategic/national missions.
Before the detailed process of targeting begins, someone has to decide there is a value in collecting information about something. While it would be possible to direct signals intelligence collection at a major sports event, the systems would capture a great deal of noise, news signals, and perhaps announcements in the stadium. If, however, an anti-terrorist organization believed that a small group would be trying to coordinate their efforts, using short-range unlicensed radios, at the event, SIGINT targeting of radios of that type would be reasonable. Targeting would not know where in the stadium the radios might be located, or the exact frequency they are using; those are the functions of subsequent steps such as signal detection and direction finding.
Once the decision to target is made, the various interception points need to cooperate, since resources are limited.
Knowing what interception equipment to use becomes easier when a target country buys its radars and radios from known manufacturers, or is given them as military aid. National intelligence services keep libraries of devices manufactured by their own country and others, and then use a variety of techniques to learn what equipment is acquired by a given country.
Knowledge of physics and electronic engineering further narrows the problem of what types of equipment might be in use. An intelligence aircraft flying well outside the borders of another country will listen for long-range search radars, not short-range fire control radars that would be used by a mobile air defense. Soldiers scouting the front lines of another army know that the other side will be using radios that must be portable and not have huge antennas.
Even if a signal is human communications (e.g., a radio), the intelligence collection specialists have to know it exists. If the targeting function described above learns that a country has a radar that operates in a certain frequency range, the first step is to use a sensitive receiver, with one or more antennas that listen in every direction, to find an area where such a radar is operating. Once the radar is known to be in the area, the next step is to find its location.
If operators know the probable frequencies of transmissions of interest, they may use a set of receivers, preset to the frequencies of interest. A spectrum display plots frequency (horizontal axis) against power (vertical axis) as produced at the transmitter, before any filtering of signals that do not add to the information being transmitted. Received energy on a particular frequency may start a recorder, and alert a human to listen to the signals if they are intelligible (i.e., COMINT). If the frequency is not known, the operators may look for power on primary or sideband frequencies using a spectrum analyzer. Information from the spectrum analyzer is then used to tune receivers to signals of interest. For example, in a simplified spectrum of this kind, the actual information might be at 800 kHz and 1.2 MHz.
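The search for power at unknown frequencies can be sketched numerically. The following is only a minimal illustration, not an operational tool: the sample rate, tone amplitudes and noise level are invented, and a synthetic recording stands in for a real intercept, with carriers placed at the 800 kHz and 1.2 MHz frequencies named above.

    import numpy as np

    fs = 4_000_000                      # assumed sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)     # 10 ms of synthetic "intercept"

    # Composite signal: two carriers plus background noise
    x = (np.sin(2 * np.pi * 800_000 * t)
         + 0.5 * np.sin(2 * np.pi * 1_200_000 * t)
         + 0.1 * np.random.randn(t.size))

    # Power spectrum: frequency (horizontal axis) versus power (vertical axis)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    # The strongest bins are the frequencies a receiver would be tuned to
    strongest = sorted(freqs[np.argsort(power)[-2:]])
    print(strongest)                   # approximately [800000.0, 1200000.0]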
Real-world transmitters and receivers usually are directional. As an illustration, assume that each of several displays is connected to a separate spectrum analyzer, each fed by a directional antenna aimed in a known direction.
Spread-spectrum communications is an electronic counter-countermeasures (ECCM) technique to defeat looking for particular frequencies. Spectrum analysis can be used in a different ECCM way to identify frequencies not being jammed or not in use.
The earliest, and still common, means of direction finding is to use directional antennas as goniometers, so that a line can be drawn from the receiver through the position of the signal of interest. (See HF/DF.) Knowing the compass bearing, from a single point, to the transmitter does not locate it. Where the bearings from multiple points, using goniometry, are plotted on a map, the transmitter will be located at the point where the bearings intersect. This is the simplest case; a target may try to confuse listeners by having multiple transmitters, giving the same signal from different locations, switching on and off in a pattern known to their user but apparently random to the listener.
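Plotting bearings on a map and intersecting them reduces to simple geometry. Below is a minimal flat-map sketch; the station positions and bearings are invented for illustration, and a real system would work on the geodetic ellipsoid and fuse many noisy bearings rather than intersect exactly two.

    import math

    def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
        """Intersect two lines of bearing on a flat map.

        p1, p2 are (x, y) receiver positions; bearings are degrees
        clockwise from north (the +y axis). Returns the fix (x, y).
        """
        # Direction vectors of each line of bearing
        d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
        d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
        # Solve p1 + s*d1 == p2 + u*d2 for s via a 2x2 determinant
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-12:
            raise ValueError("bearings are parallel; no unique fix")
        s = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
        return (p1[0] + s * d1[0], p1[1] + s * d1[1])

    # Two DF stations 10 km apart both hear the same transmitter
    print(intersect_bearings((0, 0), 45.0, (10, 0), 315.0))  # fix near (5, 5)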
Individual directional antennas have to be manually or automatically turned to find the signal direction, which may be too slow when the signal is of short duration. One alternative is the Wullenweber array technique. In this method, several concentric rings of antenna elements simultaneously receive the signal, so that the best bearing will ideally be clearly on a single antenna or a small set. Wullenweber arrays for high-frequency signals are enormous, referred to as "elephant cages" by their users.
An alternative to tunable directional antennas, or large omnidirectional arrays such as the Wullenweber, is to measure the time of arrival of the signal at multiple points, using GPS or a similar method to have precise time synchronization. Receivers can be on ground stations, ships, aircraft, or satellites, giving great flexibility.
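A time-of-arrival fix can be sketched as a search for the point whose predicted inter-receiver delays best match the measured ones. In the sketch below, the receiver layout and emitter position are invented, and an exhaustive grid search stands in for the least-squares solvers a real system would use.

    import math

    C = 299_792_458.0  # propagation speed, m/s

    def tdoa_fix(receivers, arrival_times, span=10_000, step=100):
        """Brute-force time-difference-of-arrival fix on a flat grid.

        receivers: list of (x, y) positions in metres; arrival_times:
        timestamps from precisely synchronized (e.g., GPS-disciplined)
        clocks. Returns the grid point whose predicted time differences
        best match the measured ones.
        """
        ref = receivers[0]
        measured = [t - arrival_times[0] for t in arrival_times[1:]]

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        best, best_err = None, float("inf")
        for x in range(-span, span + 1, step):
            for y in range(-span, span + 1, step):
                p = (x, y)
                predicted = [(dist(p, r) - dist(p, ref)) / C
                             for r in receivers[1:]]
                err = sum((m - q) ** 2 for m, q in zip(measured, predicted))
                if err < best_err:
                    best, best_err = p, err
        return best

    # Three receivers; the emitter is actually at (4000, 7000)
    rx = [(0, 0), (15_000, 0), (0, 15_000)]
    emitter = (4000, 7000)
    times = [math.hypot(emitter[0] - r[0], emitter[1] - r[1]) / C for r in rx]
    print(tdoa_fix(rx, times))   # (4000, 7000)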
Modern anti-radiation missiles can home in on and attack transmitters; military antennas are rarely a safe distance from the user of the transmitter.
When locations are known, usage patterns may emerge, from which inferences may be drawn. Traffic analysis is the discipline of drawing patterns from information flow among a set of senders and receivers, whether those senders and receivers are designated by location determined through direction finding, by addressee and sender identifications in the message, or even MASINT techniques for "fingerprinting" transmitters or operators. Message content, other than the sender and receiver, is not necessary to do traffic analysis, although more information can be helpful.
For example, if a certain type of radio is known to be used only by tank units, even if the position is not precisely determined by direction finding, it may be assumed that a tank unit is in the general area of the signal. The owner of the transmitter can assume someone is listening, so might set up tank radios in an area where he wants the other side to believe he has actual tanks. As part of Operation Quicksilver, part of the deception plan for the invasion of Europe at the Battle of Normandy, radio transmissions simulated the headquarters and subordinate units of the fictitious First United States Army Group (FUSAG), commanded by George S. Patton, to make the German defense think that the main invasion was to come at another location. In like manner, fake radio transmissions from Japanese aircraft carriers, before the Battle of Pearl Harbor, were made from Japanese local waters, while the attacking ships moved under strict radio silence.
Traffic analysis need not focus on human communications. For example, a sequence consisting of a radar signal, followed by an exchange of targeting data and a confirmation, followed by observed artillery fire, may identify an automated counterbattery system. A radio signal that triggers navigational beacons could be a landing aid system for an airstrip or helicopter pad that is intended to be low-profile.
Patterns do emerge. Knowing that a radio signal with certain characteristics originates from a fixed headquarters may strongly suggest that a particular unit will soon move out of its regular base. The contents of the message need not be known to infer the movement.
There is an art as well as a science to traffic analysis. Expert analysts develop a sense for what is real and what is deceptive. Harry Kidder, for example, was one of the star cryptanalysts of World War II, a star hidden behind the secret curtain of SIGINT.
Generating an electronic order of battle (EOB) requires identifying SIGINT emitters in an area of interest, determining their geographic location or range of mobility, characterizing their signals, and, where possible, determining their role in the broader organizational order of battle. EOB covers both COMINT and ELINT. The Defense Intelligence Agency maintains an EOB by location. The Joint Spectrum Center (JSC) of the Defense Information Systems Agency supplements this location database with five more technical databases.
For example, several voice transmitters might be identified as the command net (i.e., top commander and direct reports) in a tank battalion or tank-heavy task force. Another set of transmitters might identify the logistic net for that same unit. An inventory of ELINT sources might identify the medium- and long-range counter-artillery radars in a given area.
Signals intelligence units will identify changes in the EOB, which might indicate enemy unit movement, changes in command relationships, and increases or decreases in capability.
Using the COMINT gathering method enables the intelligence officer to produce an electronic order of battle by traffic analysis and content analysis among several enemy units. For example, consider an intercepted exchange in which unit 1 asks unit 2 for permission to proceed to a checkpoint, is granted it, and reports its arrival 20 minutes later.
This exchange shows that there are two units in the battlefield: unit 1 is mobile, while unit 2 is at a higher hierarchical level, perhaps a command post. One can also infer that unit 1 moved from one point to another, the two points being about 20 minutes apart by vehicle. If these are regular reports over a period of time, they might reveal a patrol pattern. Direction-finding and radio-frequency MASINT could help confirm that the traffic is not deception.
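Mechanically, this kind of inference amounts to simple bookkeeping over message metadata. In the sketch below, the call signs, timestamps and direction-finding fixes are all invented; only the pattern, one unit requesting permission from another and reporting arrival 20 minutes later, follows the exchange described above.

    from collections import Counter

    # Hypothetical intercept log: (time_min, sender, receiver, df_fix)
    log = [
        (0,  "U1", "U2", (10.0, 20.0)),   # request to proceed to checkpoint
        (1,  "U2", "U1", (55.0, 55.0)),   # approval
        (21, "U1", "U2", (10.3, 26.7)),   # arrival report, 20 minutes later
    ]

    # The node receiving the most requests and reports is probably the
    # higher headquarters; a node whose DF fixes change is mobile.
    received = Counter(receiver for _, _, receiver, _ in log)
    print("likely command node:", received.most_common(1)[0][0])   # U2

    fixes = {}
    for _, sender, _, fix in log:
        fixes.setdefault(sender, []).append(fix)
    for unit, unit_fixes in fixes.items():
        status = "mobile" if len(set(unit_fixes)) > 1 else "static"
        print(unit, status, unit_fixes)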
The EOB buildup process is divided as follows:
Separation of the intercepted spectrum and the signals intercepted from each sensor must take place in an extremely small period of time, in order to separate the different signals to different transmitters in the battlefield. The complexity of the separation process depends on the complexity of the transmission methods (e.g., hopping or time division multiple access (TDMA)).
By gathering and clustering data from each sensor, the measurements of the direction of signals can be optimized and made considerably more accurate than the basic measurements of a standard direction-finding sensor. By calculating larger samples of the sensor's output data in near real-time, together with historical information on signals, better results are achieved.
Data fusion correlates data samples from different frequencies from the same sensor, "same" being confirmed by direction finding or radiofrequency MASINT. If an emitter is mobile, direction finding, other than discovering a repetitive pattern of movement, is of limited value in determining whether an emitter is unique. MASINT then becomes more informative, as individual transmitters and antennas may have unique side lobes, unintentional radiation, pulse timing, etc.
Network build-up, or analysis of emitters (communication transmitters) in a target region over a sufficient period of time, enables creation of the communications flows of a battlefield.
COMINT (communications intelligence) is a sub-category of signals intelligence that deals with messages or voice information derived from the interception of foreign communications. COMINT is commonly referred to as SIGINT, which can cause confusion when talking about the broader intelligence disciplines. The US Joint Chiefs of Staff defines it as "Technical information and intelligence derived from foreign communications by other than the intended recipients".
COMINT, which is defined to be communications among people, will reveal some or all of such information as who is transmitting, where they are located, the time and duration of the transmission, its technical characteristics, and, when decryption succeeds, its content.
A basic COMINT technique is to listen for voice communications, usually over radio but possibly "leaking" from telephones or from wiretaps. If the voice communications are encrypted, traffic analysis may still give information.
In the Second World War, for security the United States used Native American volunteer communicators known as code talkers, who used languages such as Navajo, Comanche and Choctaw, which would be understood by few people, even in the U.S. Even within these uncommon languages, the code talkers used specialized codes, so a "butterfly" might be a specific Japanese aircraft. British forces made limited use of Welsh speakers for the same reason.
While modern electronic encryption does away with the need for armies to use obscure languages, it is likely that some groups might use rare dialects that few outside their ethnic group would understand.
Morse code interception was once very important, but Morse code telegraphy is now obsolete in the western world, although possibly used by special operations forces. Such forces, however, now have portable cryptographic equipment. Morse code is still used by military forces of former Soviet Union countries.
Specialists scan radio frequencies for character sequences (e.g., electronic mail) and fax.
A given digital communications link can carry thousands or millions of voice communications, especially in developed countries. Without addressing the legality of such actions, the problem of identifying which channel contains which conversation becomes much simpler when the first thing intercepted is the "signaling channel" that carries information to set up telephone calls. In civilian and many military uses, this channel will carry messages in Signaling System 7 protocols.
Retrospective analysis of telephone calls can be made from call detail records (CDRs) used for billing the calls.
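Such retrospective analysis is, at its core, an aggregation over billing records. A toy sketch follows, with invented phone numbers and record fields; real CDR schemas vary by operator.

    from collections import defaultdict

    # Hypothetical call detail records: (caller, callee, start_epoch, seconds)
    cdrs = [
        ("+1555000001", "+1555000002", 1_700_000_000, 120),
        ("+1555000001", "+1555000002", 1_700_003_600, 45),
        ("+1555000003", "+1555000001", 1_700_007_200, 300),
    ]

    # Build a contact graph: call count and total talk time per direction
    edges = defaultdict(lambda: [0, 0])
    for caller, callee, _, seconds in cdrs:
        edges[(caller, callee)][0] += 1
        edges[(caller, callee)][1] += seconds

    for (caller, callee), (count, total) in sorted(edges.items()):
        print(f"{caller} -> {callee}: {count} calls, {total} s total")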
More a part of communications security than true intelligence collection, SIGINT units still may have the responsibility of monitoring one's own communications or other electronic emissions, to avoid providing intelligence to the enemy. For example, a security monitor may hear an individual transmitting inappropriate information over an unencrypted radio network, or simply one that is not authorized for the type of information being given. If immediately calling attention to the violation would not create an even greater security risk, the monitor will call out one of the BEADWINDOW codes used by Australia, Canada, New Zealand, the United Kingdom, the United States, and other nations working under their procedures. Standard BEADWINDOW codes (e.g., "BEADWINDOW 2") each identify a category of sensitive information, such as position, capabilities, or operations, that has just been disclosed.
In WWII, for example, the Japanese Navy, by poor practice, revealed a key person's movement over a low-security cryptosystem. This made possible Operation Vengeance, the interception and killing of the Combined Fleet commander, Admiral Isoroku Yamamoto.
Electronic signals intelligence (ELINT) refers to intelligence-gathering by use of electronic sensors. Its primary focus lies on non-communications signals intelligence. The Joint Chiefs of Staff define it as "Technical and geolocation intelligence derived from foreign noncommunications electromagnetic radiations emanating from sources other than nuclear detonations or radioactive sources."
Signal identification is performed by analyzing the collected parameters of a specific signal, and either matching it to known criteria, or recording it as a possible new emitter. ELINT data are usually highly classified, and are protected as such.
The data gathered are typically pertinent to the electronics of an opponent's defense network, especially the electronic parts such as radars, surface-to-air missile systems, aircraft, etc. ELINT can be used to detect ships and aircraft by their radar and other electromagnetic radiation; commanders have to make choices between not using radar (EMCON), intermittently using it, or using it and expecting to avoid defenses. ELINT can be collected from ground stations near the opponent's territory, ships off their coast, aircraft near or in their airspace, or by satellite.
Combining other sources of information and ELINT allows traffic analysis to be performed on electronic emissions which contain human encoded messages. The method of analysis differs from SIGINT in that any human encoded message in the electronic transmission is not analyzed during ELINT; what is of interest is the type of electronic transmission and its location. For example, during the Battle of the Atlantic in World War II, Ultra COMINT was not always available because Bletchley Park was not always able to read the U-boat Enigma traffic. But high-frequency direction finding ("huff-duff") was still able to detect U-boats by analysis of their radio transmissions, locating their positions through triangulation of the bearings taken by two or more huff-duff systems. The Admiralty was able to use this information to plot courses which took convoys away from high concentrations of U-boats.
Other ELINT disciplines include intercepting and analyzing enemy weapons control signals, or the identification friend or foe (IFF) responses from transponders in aircraft used to distinguish enemy craft from friendly ones.
A very common area of ELINT is intercepting radars and learning their locations and operating procedures. Attacking forces may be able to avoid the coverage of certain radars, or, knowing their characteristics, electronic warfare units may jam radars or send them deceptive signals. Confusing a radar electronically is called a "soft kill", but military units will also send specialized missiles at radars, or bomb them, to get a "hard kill". Some modern air-to-air missiles also have radar homing guidance systems, particularly for use against large airborne radars.
Knowing where each surface-to-air missile and anti-aircraft artillery system is and its type means that air raids can be plotted to avoid the most heavily defended areas and to fly on a flight profile which will give the aircraft the best chance of evading ground fire and fighter patrols. It also allows for the jamming or spoofing of the enemy's defense network (see electronic warfare). Good electronic intelligence can be very important to stealth operations; stealth aircraft are not totally undetectable and need to know which areas to avoid. Similarly, conventional aircraft need to know where fixed or semi-mobile air defense systems are so that they can shut them down or fly around them.
Electronic support measures (ESM) or electronic surveillance measures are ELINT techniques using various "electronic surveillance systems", but the term is used in the specific context of tactical warfare. ESM give the information needed for electronic attack (EA) such as jamming, or directional bearings (compass angle) to a target in "signals intercept" such as in the huff-duff radio direction finding (RDF) systems so critically important during the World War II Battle of the Atlantic. After WWII, the RDF, originally applied only in communications, was broadened into systems to also take in ELINT from radar bandwidths and lower frequency communications systems, giving birth to a family of NATO ESM systems, such as the shipboard US AN/WLR-1—AN/WLR-6 systems and comparable airborne units. EA is also called electronic counter-measures (ECM). ESM provides information needed for electronic counter-counter measures (ECCM), such as understanding a spoofing or jamming mode so one can change one's radar characteristics to avoid them.
Meaconing is the combined intelligence and electronic warfare of learning the characteristics of enemy navigation aids, such as radio beacons, and retransmitting them with incorrect information.
FISINT (foreign instrumentation signals intelligence) is a sub-category of SIGINT, monitoring primarily non-human communication. Foreign instrumentation signals include (but are not limited to) telemetry (TELINT), tracking systems, and video data links. TELINT is an important part of national means of technical verification for arms control.
Still at the research level are techniques that can only be described as counter-ELINT, which would be part of a SEAD (suppression of enemy air defenses) campaign. It may be informative to compare and contrast counter-ELINT with ECCM.
Signals intelligence and measurement and signature intelligence (MASINT) are closely, and sometimes confusingly, related.
The signals intelligence disciplines of communications and electronic intelligence focus on the information in those signals themselves, as with COMINT detecting the speech in a voice communication or ELINT measuring the frequency, pulse repetition rate, and other characteristics of a radar.
MASINT also works with collected signals, but is more of an analysis discipline. There are, however, unique MASINT sensors, typically working in different regions or domains of the electromagnetic spectrum, such as infrared or magnetic fields. While NSA and other agencies have MASINT groups, the Central MASINT Office is in the Defense Intelligence Agency (DIA).
Where COMINT and ELINT focus on the intentionally transmitted part of the signal, MASINT focuses on unintentionally transmitted information. For example, a given radar antenna will have sidelobes emanating from a direction other than that in which the main antenna is aimed. The RADINT (radar intelligence) discipline involves learning to recognize a radar both by its primary signal, captured by ELINT, and its sidelobes, perhaps captured by the main ELINT sensor, or, more likely, a sensor aimed at the sides of the radio antenna.
MASINT associated with COMINT might involve the detection of common background sounds expected with human voice communications. For example, if a given radio signal comes from a radio used in a tank, but the interceptor hears no engine noise, or a wider voice frequency band than the radio's voice modulation normally uses, then even though the voice conversation is meaningful, MASINT might suggest it is a deception, not coming from a real tank.
See HF/DF for a discussion of SIGINT-captured information with a MASINT flavor, such as determining the frequency to which a "receiver" is tuned, from detecting the frequency of the beat frequency oscillator of the superheterodyne receiver.
Since the invention of the radio, the international consensus has been that the radio-waves are no one's property, and thus the interception itself is not illegal. There can however be national laws on who is allowed to collect, store and process radio traffic, and for what purposes.
Monitoring traffic in cables (i.e. telephone and Internet) is far more controversial, since most of the time it requires physical access to the cable, thereby violating ownership and the expectation of privacy.
Semantic Web
The Semantic Web is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable. To enable the encoding of semantics with the data, technologies such as Resource Description Framework (RDF) and Web Ontology Language (OWL) are used. These technologies are used to formally represent metadata. For example, an ontology can describe concepts, relationships between entities, and categories of things. These embedded semantics offer significant advantages such as reasoning over data and operating with heterogeneous data sources.
These standards promote common data formats and exchange protocols on the Web, fundamentally the RDF. According to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries." The Semantic Web is therefore regarded as an integrator across different content and information applications and systems.
The term was coined by Tim Berners-Lee for a web of data (or data web) that can be processed by machines—that is, one in which much of the meaning is machine-readable. While its critics have questioned its feasibility, proponents argue that applications in library and information science, industry, biology and human sciences research have already proven the validity of the original concept.
Berners-Lee originally expressed his vision of the Semantic Web in 1999, describing a web in which machines become capable of analyzing all the data on the Web: the content, links, and transactions between people and computers.
The 2001 "Scientific American" article by Berners-Lee, Hendler, and Lassila described an expected evolution of the existing Web to a Semantic Web. In 2006, Berners-Lee and colleagues stated that: "This simple idea…remains largely unrealized".
In 2013, more than four million Web domains contained Semantic Web markup.
In the following example, the text 'Paul Schuster was born in Dresden' on a website is annotated, connecting a person with their place of birth. The following HTML fragment shows how a small graph is described, in RDFa syntax, using a schema.org vocabulary and a Wikidata ID.
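One possible encoding is shown below; the element structure here is illustrative, but the vocabulary and the Wikidata identifier are those discussed in the rest of the example.

    <div vocab="http://schema.org/" typeof="Person">
      <span property="name">Paul Schuster</span> was born in
      <span property="birthPlace" typeof="Place"
            href="http://www.wikidata.org/entity/Q1731">
        <span property="name">Dresden</span></span>.
    </div>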
The example defines the following five triples (shown in Turtle syntax). Each triple represents one edge in the resulting graph: the first element of the triple (the "subject") is the name of the node where the edge starts, the second element (the "predicate") is the type of the edge, and the third and last element (the "object") is either the name of the node where the edge ends or a literal value (e.g. a text, a number, etc.).
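Written out, with prefix declarations added for readability and _:a standing for the blank node that represents the person, the five triples are:

    @prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix schema: <http://schema.org/> .

    _:a rdf:type schema:Person .
    _:a schema:name "Paul Schuster" .
    _:a schema:birthPlace <http://www.wikidata.org/entity/Q1731> .
    <http://www.wikidata.org/entity/Q1731> rdf:type schema:Place .
    <http://www.wikidata.org/entity/Q1731> schema:name "Dresden" .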
The triples result in the graph shown in the given figure.
One of the advantages of using Uniform Resource Identifiers (URIs) is that they can be dereferenced using the HTTP protocol. According to the so-called Linked Open Data principles, such a dereferenced URI should result in a document that offers further data about the given URI. In this example, all URIs, both for edges and nodes (e.g. http://schema.org/Person, http://schema.org/birthPlace, http://www.wikidata.org/entity/Q1731) can be dereferenced and will result in further RDF graphs, describing the URI, e.g. that Dresden is a city in Germany, or that a person, in the sense of that URI, can be fictional.
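Dereferencing with content negotiation can be sketched with nothing more than an HTTP request that asks for RDF rather than HTML. The Accept header below is the conventional way to do this, though how a given server responds varies; Wikidata, for instance, redirects the entity URI to a data document.

    import urllib.request

    req = urllib.request.Request(
        "http://www.wikidata.org/entity/Q1731",
        headers={"Accept": "text/turtle"},   # ask for RDF rather than HTML
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.geturl())    # final URL after any Linked Data redirects
        print(resp.read(300))   # beginning of an RDF document about Dresden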
The second graph shows the previous example, but now enriched with a few of the triples from the documents that result from dereferencing http://schema.org/Person (green edge) and http://www.wikidata.org/entity/Q1731 (blue edges).
In addition to the edges given explicitly in the involved documents, edges can be automatically inferred. For example, the triple _:a rdf:type schema:Person from the original RDFa fragment, combined with a triple such as schema:Person owl:equivalentClass foaf:Person from the document at http://schema.org/Person (green edge in the figure), allows the triple _:a rdf:type foaf:Person to be inferred, given OWL semantics (red dashed line in the second figure).
The concept of the semantic network model was formed in the early 1960s by researchers such as the cognitive scientist Allan M. Collins, linguist M. Ross Quillian and psychologist Elizabeth F. Loftus as a way to represent semantically structured knowledge. When applied in the context of the modern internet, it extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other. This enables automated agents to access the Web more intelligently and perform more tasks on behalf of users. The term "Semantic Web" was coined by Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium ("W3C"), which oversees the development of proposed Semantic Web standards. He defines the Semantic Web as "a web of data that can be processed directly and indirectly by machines".
Many of the technologies proposed by the W3C already existed before they were positioned under the W3C umbrella. These are used in various contexts, particularly those dealing with information that encompasses a limited and defined domain, and where sharing data is a common necessity, such as scientific research or data exchange among businesses. In addition, other technologies with similar goals have emerged, such as microformats.
Many files on a typical computer can also be loosely divided into human-readable documents and machine-readable data. Documents like mail messages, reports, and brochures are read by humans. Data, such as calendars, address books, playlists, and spreadsheets, are presented using an application program that lets them be viewed, searched and combined.
Currently, the World Wide Web is based mainly on documents written in Hypertext Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. Metadata tags provide a method by which computers can categorize the content of web pages. In the examples below, the field names "keywords", "description" and "author" are assigned values such as "computing", "cheap widgets for sale", and "John Doe".
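Such tags take the following form; the attribute values shown are those named above, with the keyword list padded illustratively.

    <meta name="keywords" content="computing, computer studies, computer" />
    <meta name="description" content="Cheap widgets for sale" />
    <meta name="author" content="John Doe" />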
Because of this metadata tagging and categorization, other computer systems that want to access and share this data can easily identify the relevant values.
With HTML and a tool to render it (perhaps web browser software, perhaps another user agent), one can create and present a page that lists items for sale. The HTML of this catalog page can make simple, document-level assertions such as "this document's title is 'Widget Superstore'", but there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product. Rather, HTML can only say that the span of text "X586172" is something that should be positioned near "Acme Gizmo" and "€199", etc. There is no way to say "this is a catalog" or even to establish that "Acme Gizmo" is a kind of title or that "€199" is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item, distinct from other items perhaps listed on the page.
Semantic HTML refers to the traditional HTML practice of markup following intention, rather than specifying layout details directly. For example, the use of <em> to denote emphasis, rather than <i>, which merely specifies italics. Layout details are left up to the browser, in combination with Cascading Style Sheets. But this practice falls short of specifying the semantics of objects such as items for sale or prices.
Microformats extend HTML syntax to create machine-readable semantic markup about objects including people, organisations, events and products. Similar initiatives include RDFa, Microdata and Schema.org.
The Semantic Web takes the solution further. It involves publishing in languages specifically designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and Extensible Markup Language (XML). HTML describes documents and the links between them. RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or airplane parts.
These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-accessible databases, or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e., to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to human deductive reasoning and inference, thereby obtaining more meaningful results and helping computers to perform automated information gathering and research.
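As a sketch of what processing such machine-readable descriptions looks like in practice, the following uses the third-party Python library rdflib to build the Paul Schuster graph from the earlier example and query it with SPARQL; the graph contents mirror the five triples above, and everything else is ordinary library usage.

    from rdflib import BNode, Graph, Literal, Namespace, RDF, URIRef

    SCHEMA = Namespace("http://schema.org/")
    g = Graph()
    person = BNode()
    dresden = URIRef("http://www.wikidata.org/entity/Q1731")

    g.add((person, RDF.type, SCHEMA.Person))
    g.add((person, SCHEMA.name, Literal("Paul Schuster")))
    g.add((person, SCHEMA.birthPlace, dresden))
    g.add((dresden, RDF.type, SCHEMA.Place))
    g.add((dresden, SCHEMA.name, Literal("Dresden")))

    # SPARQL: where was Paul Schuster born?
    query = """
        SELECT ?placeName WHERE {
            ?p <http://schema.org/name> "Paul Schuster" ;
               <http://schema.org/birthPlace> ?place .
            ?place <http://schema.org/name> ?placeName .
        }"""
    for row in g.query(query):
        print(row.placeName)   # Dresden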
An example of a tag that would be used in a non-semantic web page:

    <item>blog</item>
Encoding similar information in a semantic web page might look like this (the identifying URI is illustrative):

    <item rdf:about="https://www.w3.org/2001/sw/">Semantic Web</item>
Tim Berners-Lee calls the resulting network of Linked Data the Giant Global Graph, in contrast to the HTML-based World Wide Web. Berners-Lee posits that if the past was document sharing, the future is data sharing. His answer to the question of "how" provides three points of instruction. One, a URL should point to the data. Two, anyone accessing the URL should get data back. Three, relationships in the data should point to additional URLs with data.
Tim Berners-Lee has described the semantic web as a component of Web 3.0.
"Semantic Web" is sometimes used as a synonym for "Web 3.0", though the definition of each term varies. Web 3.0 has started to emerge as a movement away from the centralisation of services like search, social media and chat applications that are dependent on a single organisation to function.
Guardian journalist John Harris reviewed the concept favorably in early 2019 and, in particular, work by Berners-Lee on a project called 'Solid', based around personal data stores or 'Pods', over which individuals retain control. Berners-Lee has formed a startup, Inrupt, to advance the idea and attract volunteer developers.
Some of the challenges for the Semantic Web include vastness, vagueness, uncertainty, inconsistency, and deceit. Automated reasoning systems will have to deal with all of these issues in order to deliver on the promise of the Semantic Web.
This list of challenges is illustrative rather than exhaustive, and it focuses on the challenges to the "unifying logic" and "proof" layers of the Semantic Web. The World Wide Web Consortium (W3C) Incubator Group for Uncertainty Reasoning for the World Wide Web (URW3-XG) final report lumps these problems together under the single heading of "uncertainty". Many of the techniques mentioned here will require extensions to the Web Ontology Language (OWL), for example to annotate conditional probabilities. This is an area of active research.
Standardization for Semantic Web in the context of Web 3.0 is under the care of W3C.
The term "Semantic Web" is often used more specifically to refer to the formats and technologies that enable it. The collection, structuring and recovery of linked data are enabled by technologies that provide a formal description of concepts, terms, and relationships within a given knowledge domain. These technologies are specified as W3C standards and include:
The Semantic Web Stack illustrates the architecture of the Semantic Web. The functions and relationships of its components can be summarized as follows: standards such as RDF, RDFS, OWL, and SPARQL are well established, while layers such as the unifying logic and proof are not yet fully realized.
The intent is to enhance the usability and usefulness of the Web and its interconnected resources by creating Semantic Web services.
Such services could be useful to public search engines, or could be used for knowledge management within an organization; several kinds of business application are possible.
In a corporation, there is a closed group of users, and the management is able to enforce company guidelines like the adoption of specific ontologies and the use of semantic annotation. Compared to the public Semantic Web, there are lower requirements on scalability, and the information circulating within a company can in general be more trusted; privacy is less of an issue outside of the handling of customer data.
Critics question the basic feasibility of a complete or even partial fulfillment of the Semantic Web, pointing out both difficulties in setting it up and a lack of general-purpose usefulness that prevents the required effort from being invested. In a 2003 paper, Marshall and Shipman point out the cognitive overhead inherent in formalizing knowledge, compared to the authoring of traditional web hypertext.
According to Marshall and Shipman, the tacit and changing nature of much knowledge adds to the knowledge engineering problem, and limits the Semantic Web's applicability to specific domains. A further issue that they point out is domain- or organisation-specific ways to express knowledge, which must be solved through community agreement rather than only technical means. As it turns out, specialized communities and organizations for intra-company projects have tended to adopt semantic web technologies more readily than peripheral and less-specialized communities have. The practical constraints toward adoption have appeared less challenging where domain and scope are more limited than those of the general public and the World Wide Web.
Finally, Marshall and Shipman see pragmatic problems in the idea of (Knowledge Navigator-style) intelligent agents working in the largely manually curated Semantic Web.
Cory Doctorow's critique ("metacrap") is from the perspective of human behavior and personal preferences. For example, people may include spurious metadata into Web pages in an attempt to mislead Semantic Web engines that naively assume the metadata's veracity. This phenomenon was well known with metatags that fooled the AltaVista ranking algorithm into elevating the ranking of certain Web pages: the Google indexing engine specifically looks for such attempts at manipulation. Peter Gärdenfors and Timo Honkela point out that logic-based semantic web technologies cover only a fraction of the relevant phenomena related to semantics.
Enthusiasm about the semantic web could be tempered by concerns regarding censorship and privacy. For instance, text-analyzing techniques can now be easily bypassed by using other words, metaphors for instance, or by using images in place of words. An advanced implementation of the semantic web would make it much easier for governments to control the viewing and creation of online information, as this information would be much easier for an automated content-blocking machine to understand. In addition, the issue has also been raised that, with the use of FOAF files and geolocation meta-data, there would be very little anonymity associated with the authorship of articles on things such as a personal blog. Some of these concerns were addressed in the "Policy Aware Web" project, and remain an active research and development topic.
Another criticism of the semantic web is that it would be much more time-consuming to create and publish content because there would need to be two formats for one piece of data: one for human viewing and one for machines. However, many web applications in development are addressing this issue by creating a machine-readable format upon the publishing of data or the request of a machine for such data. The development of microformats has been one reaction to this kind of criticism. Another argument in defense of the feasibility of semantic web is the likely falling price of human intelligence tasks in digital labor markets, such as Amazon's Mechanical Turk.
Specifications such as eRDF and RDFa allow arbitrary RDF data to be embedded in HTML pages. The GRDDL (Gleaning Resource Descriptions from Dialects of Language) mechanism allows existing material (including microformats) to be automatically interpreted as RDF, so publishers only need to use a single format, such as HTML.
The first research group explicitly focusing on the Corporate Semantic Web was the ACACIA team at INRIA-Sophia-Antipolis, founded in 2002. Results of their work include the RDF(S) based Corese search engine, and the application of semantic web technology in the realm of E-learning.
Since 2008, the Corporate Semantic Web research group, located at the Free University of Berlin, focuses on building blocks: Corporate Semantic Search, Corporate Semantic Collaboration, and Corporate Ontology Engineering.
Ontology engineering research includes the question of how to involve non-expert users in creating ontologies and semantically annotated content and for extracting explicit knowledge from the interaction of users within enterprises.
Tim O'Reilly, who coined the term Web 2.0, proposed a long-term vision of the Semantic Web as a web of data, where sophisticated applications manipulate the data web. The data web transforms the Web from a distributed file system into a distributed database system.
Soviet submarine K-219
K-219 was a Project 667A "Navaga"-class ballistic missile submarine (NATO reporting name Yankee I) of the Soviet Navy. It carried 16 R-27U liquid-fuel missiles powered by UDMH with nitrogen tetroxide (NTO), and was equipped with either 32 or 48 nuclear warheads.
Soviet submarine K-8
K-8 was a November-class submarine of the Soviet Northern Fleet that sank in the Bay of Biscay with her nuclear weapons on board on April 12, 1970. A fire on April 8 had disabled the submarine, and it was being towed in rough seas. Fifty-two crewmen were killed in the attempted salvage of the boat when it sank.
On 13 October 1960, while operating in the Barents Sea, "K-8" suffered a ruptured steam generator tube, causing a loss-of-coolant accident. While the crew jury-rigged a system to supply emergency cooling water to the reactor, preventing a reactor core meltdown, large amounts of radioactive gas leaked out and contaminated the entire vessel. The radiation levels of the gas could not be determined because the instrumentation could not measure values so large. Three of the crew suffered visible radiation injuries, and many crewmen were exposed to doses of up to 1.8–2 Sv (180–200 rem).
During the large-scale "Ocean-70" naval exercise, K-8 suffered fires in two compartments simultaneously on 8 April 1970. Due to short circuits that took place in III and VII compartments simultaneously at a depth of , a fire spread through the air-conditioning system. Both nuclear reactors were shut down.
The captain ordered his entire crew to abandon ship, but the order was countermanded once a towing vessel arrived. Fifty-two crewmen, including the commander, Captain 2nd Rank Vsevolod Borisovich Bessonov, re-boarded the surfaced submarine so that it could be towed. The boat sank in rough seas while under tow in the Bay of Biscay of the North Atlantic Ocean, the first loss of a Soviet nuclear-powered submarine. Eight mariners had already died after certain compartments were locked to prevent further flooding and the spread of the fire as soon as it was detected. Everyone who remained on board died, from carbon monoxide poisoning and from the flooding of the surfaced submarine, during 80 hours of damage control in stormy conditions. Seventy-three crewmen survived. "K-8" sank, with four nuclear torpedoes among the 24 torpedoes on board, in deep water northwest of Spain.
Soviet submarine K-19
K-19 (Russian: К-19) was the first submarine of the Project 658 (Russian: проект-658, lit: "Projekt-658") class (NATO reporting name Hotel), the first generation of Soviet nuclear submarines equipped with nuclear ballistic missiles, specifically the R-13 SLBM. The boat was hastily built by the Soviets in response to United States' developments in nuclear submarines as part of the arms race. Before it was launched, 10 civilian workers and a sailor died due to accidents and fires. After it was commissioned, it had multiple breakdowns and accidents, several of which threatened to sink the submarine.
On its initial voyage on 4 July 1961, it suffered a complete loss of coolant to its reactor. A backup system included in the design was not installed, so the captain ordered members of the engineering crew to find a solution to avoid a nuclear meltdown. Sacrificing their own lives, the engineering crew jury-rigged a secondary coolant system and kept the reactor from a meltdown. Twenty-two crew members died during the following two years. The submarine experienced several other accidents, including two fires and a collision. The series of accidents inspired crew members to nickname the submarine "Hiroshima".
In the late 1950s, the leaders of the Soviet Union were determined to catch up with the United States and began to build a nuclear submarine fleet. The boat was pushed through production and rushed through testing. It suffered from poor workmanship and was accident-prone from the beginning, and many Soviet naval officers felt the ships were not fit for combat. The crews aboard the first nuclear submarines of the Soviet fleet were provided with food of a very high standard, including smoked fish, sausages, fine chocolates, and cheeses, unlike the standard fare given the crews of other naval vessels.
"K-19" was ordered by the Soviet Navy on 16 October 1957. Her keel was laid on 17 October 1958 at the naval yard in Severodvinsk. Several workers died building the submarine: two workers were killed when a fire broke out, and later six women gluing rubber lining to a water cistern were killed by fumes. While missiles were being loaded, an electrician was crushed to death by a missile-tube cover, and an engineer fell between two compartments and died.
The ship was launched and christened on 8 April 1959. Breaking with tradition, a man (Captain 3rd Rank V. V. Panov of the 5th Urgent Unit), instead of a woman, was chosen to smash the ceremonial champagne bottle across the ship's stern. The bottle failed to break, instead sliding along the screws and bouncing off the rubber-coated hull. This is traditionally viewed among sea crews as a sign that the ship is unlucky. Captain 1st Rank Nikolai Vladimirovich Zateyev was the first commander of the submarine, and Vasily Arkhipov was the executive officer of the new Hotel-class ballistic missile submarine.
In January 1960, confusion among the crew during a watch change led to improper operation of the reactor and a reactor-control rod was bent. The damage required the reactor to be dismantled for repairs. The officers on duty were removed and Captain Panov was demoted.
The submarine's ensign was hoisted for the first time on 12 July 1960. It underwent sea trials from 13 through 17 July 1960 and again from 12 August through 8 November 1960. The ship was considered completed on 12 November 1960. After surfacing from a full-power run, the crew discovered that most of the hull's rubber coating had detached, and the entire surface of the boat had to be re-coated.
During a test dive to the boat's maximum depth, flooding was reported in the reactor compartment, and Captain Zateyev ordered the submarine to surface immediately; it heeled over on its port side due to the water it had taken on. It was later determined that during construction the workers had failed to replace a gasket. In October 1960, the galley crew disposed of wood from equipment crates through the galley's waste system, clogging it. This led to flooding of the ninth compartment, which filled one-third with water. In December 1960, a loss of coolant was caused by failure of the main circuit pump. Specialists called in from Severodvinsk managed to repair it at sea within a week.
The boat was commissioned on 30 April 1961. The submarine had a total of 139 men aboard, including missile men, reactor officers, torpedo men, doctors, cooks, stewards, and several observing officers who were not part of the standard crew.
On 4 July 1961, under the command of Captain 1st Rank Nikolai Vladimirovich Zateyev, "K-19" was conducting exercises in the North Atlantic off the south-east coast of Greenland. At 04:15 local time the pressure in the starboard nuclear reactor's cooling system dropped to zero. The reactor department crew found a major leak in the reactor coolant system, which caused the coolant pumps to fail. The boat could not contact Moscow to request assistance because a separate accident had damaged the long-range radio system. The control rods were automatically inserted by the emergency SCRAM system, but the reactor temperature rose uncontrollably as decay heat from fission products produced during normal operation continued to heat the core.
Making a drastic decision, Zateyev ordered the engineering section to fabricate a new coolant system by cutting off an air vent valve and welding a water-supply pipe to it. This required men to work in high radiation for extended periods. The jury-rigged cooling water system successfully reduced the temperature in the reactor.
The accident released radioactive steam containing fission products that was drawn into the ship's ventilation system and spread to other compartments. The entire crew was irradiated, as were most of the ship and some of the ballistic missiles on board. All seven members of the engineering crew and their divisional officer died of radiation exposure within the next month; fifteen more sailors died within the next two years.
Instead of continuing on the mission's planned route, the captain decided to head south to meet diesel-powered submarines expected to be there. Worries about a potential crew mutiny prompted Zateyev to have all small arms thrown overboard except for five pistols distributed to his most trusted officers. The diesel-powered submarine "S-270" picked up "K-19"'s low-power distress transmissions and joined up with it.
American warships nearby had also heard the transmission and offered to help, but Zateyev, afraid of giving away Soviet military secrets to the West, refused and sailed to meet "S-270". He evacuated the crew and had the boat towed to its home base.
Over the next two years, repair crews removed and replaced the damaged reactors. The repair process contaminated the repair crew and the surrounding area. The Soviet Navy dumped the original radioactive reactor compartment into the Kara Sea. "K-19" returned to the fleet with the nickname "Hiroshima".
According to the government's official explanation of the disaster, the repair crews discovered that the catastrophe had been caused by a welding fault during initial construction. During installation of the primary cooling system piping, a welder, working in a cramped space, had failed to cover exposed pipe surfaces with asbestos drop cloths (required to protect piping systems from accidental exposure to welding sparks). A drop from a welding electrode fell on an unprotected surface, producing an invisible crack. This crack was subjected to prolonged and intense pressure (over 200 atmospheres), compromising the pipe's integrity and finally causing it to fail.
Others disputed this conclusion. Retired Rear-Admiral Nikolai Mormul asserted that when the reactor was first started ashore, the construction crew had not attached a pressure gauge to the primary cooling circuit. Before anyone realized there was a problem, the cooling pipes were subjected to a pressure of 400 atmospheres, double the acceptable limit.
On 1 February 2006, former President of the Soviet Union Mikhail Gorbachev proposed in a letter to the Norwegian Nobel Committee that the crew of "K-19" be nominated for a Nobel Peace Prize for their actions on 4 July 1961.
Several crew members received fatal doses of radiation during repairs on the reserve coolant system of Reactor #8. Eight died of severe radiation sickness between one and three weeks after the accident. For comparison, a person who receives a dose of 4 to 5 Sv (about 400–500 rem) over a short period has a 50% chance of dying within 30 days.
Fourteen other crew members died within two years. Many other crew members received radiation doses exceeding permissible levels and underwent medical treatment during the following year; many experienced chest pains, numbness, cancer, and kidney failure. Their treatment was devised by Professor Z. Volynskiy and included bone marrow transplantation and blood transfusion. It saved, among others, Chief Lieutenant Mikhail Krasichkov and Captain 3rd Rank Vladimir Yenin, who had received doses of radiation that were otherwise considered deadly. For reasons of secrecy, the official diagnosis was not "radiation sickness" but "astheno-vegetative syndrome", a mental disorder.
On 6 August 1961, 26 members of the crew were decorated for courage and valor shown during the accident.
On 14 December 1961, the boat was fully upgraded to the Hotel II ("658м") variant, which included conversion to R-21 missiles, with twice the effective range of the earlier missiles.
At 07:13 on 15 November 1969, "K-19" collided with the American attack submarine USS "Gato" while submerged in the Barents Sea. It was able to surface using an emergency main ballast tank blow. The impact completely destroyed the bow sonar systems and mangled the covers of the forward torpedo tubes. "K-19" returned to port, where it was repaired and returned to the fleet. "Gato" was relatively undamaged and continued her patrol.
On 24 February 1972, a fire broke out while the submarine was submerged off Newfoundland, Canada. The boat surfaced, and the crew was evacuated to surface warships, except for 12 men trapped in the aft torpedo room. Towing was delayed by a gale, and rescuers could not reach the aft torpedo room because of conditions in the engine room. The fire killed 28 sailors aboard "K-19" and two others who died after being transferred to rescue ships. Investigators determined that the fire was caused by a hydraulic fluid leak onto a hot filter.
The rescue operation lasted more than 40 days and involved over 30 ships. From 15 June through 5 November 1972, "K-19" was repaired and put back into service.
On 15 November 1972, another fire broke out in compartment 6, but it was put out by the chemical fire-extinguisher system and there were no casualties.
On 25 July 1977, "K-19" was reclassified in the Large Submarine class, and on 26 July 1979, she was reclassified as a communications submarine and given the symbol KS-19 (КС-19). On 15 August 1982, an electrical short circuit resulted in severe burns to two sailors; one, V. A. Kravchuk, died five days later.
On 28 November 1985, the ship was upgraded to the 658s (658с) variant.
On 19 April 1990 the submarine was decommissioned, and was transferred in 1994 to the naval repair yard at Polyarny. In March 2002, it was towed to the Nerpa Shipyard, Snezhnogorsk, Murmansk, to be scrapped.
In 2006, a section of "K-19" was purchased by Vladimir Romanov, who once served on the submarine as a conscript, with the intention of "turning it into a Moscow-based meeting place to build links between submarine veterans from Russia and other countries." So far, the plans remain on hold, and many of "K-19"'s survivors have objected to them.
In 1969 writer Vasily Aksyonov wrote a play about the nuclear incident.
The movie "" (2002), starring Harrison Ford and Liam Neeson, is based on the story of the "K-19"s first disaster. However, the real participants of these events wrote an open letter to the movie makers to ask them not to show fake cowardness and a fake revolt which were a part of the plot. It was not heard by the producers. The production company attempted in March 2002 to secure access to the boat as a set for its production, but the Russian Navy declined. The nickname "The Widowmaker" referred to by the movie was fictional. The submarine did not gain a nickname until the nuclear accident on 4 July 1961, when it was called "Hiroshima". | https://en.wikipedia.org/wiki?curid=29126 |
Super Bowl I
The first AFL-NFL World Championship Game in professional American football, known retroactively as Super Bowl I and referred to in some contemporaneous reports, including the game's radio broadcast, as the Super Bowl, was played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles, California. The National Football League (NFL) champion Green Bay Packers defeated the American Football League (AFL) champion Kansas City Chiefs by the score of 35–10.
Coming into this game, considerable animosity existed between the AFL and NFL, and thus the teams representing the two rival leagues (Kansas City and Green Bay, respectively) felt pressure to win. The Chiefs posted an 11–2–1 record during the 1966 AFL season, and defeated the Buffalo Bills 31–7 in the AFL Championship Game. The Packers finished the 1966 NFL season at 12–2, and defeated the Dallas Cowboys 34–27 in the NFL Championship Game. Still, many sports writers and fans believed any team in the older NFL was vastly superior to any club in the upstart AFL, and so expected Green Bay to blow out Kansas City.
The first half of Super Bowl I was competitive, as the Chiefs outgained the Packers in total yards and trailed only 14–10 at halftime. Early in the third quarter, Green Bay safety Willie Wood intercepted a pass and returned it 50 yards to the 5-yard line. The turnover sparked the Packers to score 21 unanswered points in the second half. Green Bay quarterback Bart Starr, who completed 16 of 23 passes for 250 yards and two touchdowns, with one interception, was named MVP.
It remains the only Super Bowl to have been simulcast in the United States by two networks. NBC had the rights to nationally televise AFL games, while CBS held the rights to broadcast NFL games; both were allowed to televise the game.
When the NFL began its 41st season in 1960, it had a new and unwanted rival: the American Football League. The NFL had successfully fended off several other rival leagues in the past, and so the older league initially ignored the new upstart and its eight teams, figuring it would be made up of nothing but NFL rejects and that fans were unlikely to prefer it to the NFL. But unlike the NFL's prior rivals, the AFL survived and prospered, in part by signing "NFL rejects" who turned out to be highly talented players the older league had badly misjudged. Soon the NFL and AFL found themselves locked in a massive bidding war for the top free agents and prospects coming out of college. Originally, there was a tacit agreement between the two not to raid each other by signing players who were already under contract with a team from the opposing league. This policy broke down in early 1966 when the NFL's New York Giants signed Pete Gogolak, a placekicker who was under contract with the AFL's Buffalo Bills. The AFL owners considered this an "act of war" and immediately struck back, signing several contracted NFL players, including eight of their top quarterbacks.
Eventually the NFL had enough and started negotiations with the AFL in an attempt to resolve the issue. As a result of the negotiations, the leagues signed a merger agreement on June 9, 1966. Among the details, both leagues agreed to share a common draft in order to end the bidding war for the top college players, as well as merge into a single league after the 1969 season. In addition, an "AFL-NFL World Championship Game" was established, in which the AFL and NFL champions would play against each other in a game at the end of the season to determine which league had the best team.
Los Angeles was not awarded the game until December 1, less than seven weeks before kickoff; likewise, the date of the game was not set until December 13. Since the AFL Championship Game originally was scheduled for Monday, December 26, and the NFL Championship Game for Sunday, January 1, the "new" championship game was suggested to be played Sunday, January 8. An unprecedented TV doubleheader was held on January 1, with the AFL Championship Game telecast from Buffalo on NBC and the NFL Championship Game telecast from Dallas on CBS three hours later.
Coming into this "first" game, considerable animosity still existed between the two rival leagues, with both of them putting pressure on their respective champions to trounce the other and prove each league's dominance in professional football. Still, many sports writers and fans believed the game was a mismatch, and any team from the long-established NFL was far superior to the best team from the upstart AFL. The Green Bay Packers played the Kansas City Chiefs, with the Packers winning 35–10.
The players' shares were $15,000 each for the winning team and $7,500 each for the losing team. This was in addition to the league championship money earned two weeks earlier: the Packers' shares were $8,600 each and the Chiefs' were $5,308 each.
The Chiefs entered the game after recording an 11–2–1 mark during the regular season. In the AFL championship game, they defeated the Buffalo Bills 31–7.
Kansas City's high-powered offense led the AFL in points scored (448) and total rushing yards (2,274). Their trio of running backs, Mike Garrett (801 yards), Bert Coan (521 yards), and Curtis McClinton (540 yards), all ranked among the top ten rushers in the AFL. Quarterback Len Dawson was the top-rated passer in the AFL, completing 159 of 284 passes (56%) for 2,527 yards and 26 touchdowns. Wide receiver Otis Taylor provided the team with a great deep threat by recording 58 receptions for 1,297 yards and eight touchdowns. Receiver Chris Burford added 58 receptions for 758 yards and eight touchdowns, and tight end Fred Arbanas, who had 22 catches for 305 yards and four touchdowns, was one of six Chiefs offensive players who were named to the All-AFL team.
The Chiefs also had a strong defense, with All-AFL players Jerry Mays and Buck Buchanan anchoring their line. Linebacker Bobby Bell, who was also named to the All-AFL team, was great at run stopping and pass coverage. The strongest part of their defense, though, was their secondary, led by All-AFL safeties Johnny Robinson and Bobby Hunt, who each recorded 10 interceptions, and defensive back Fred Williamson, who recorded four. Their head coach was Hank Stram.
The Packers were an NFL dynasty, having turned around what had been a losing team just eight years earlier. The team had posted an NFL-worst 1–10–1 record in 1958 before legendary head coach Vince Lombardi was hired in January 1959. "Their offense was like a conga dance," one sportswriter quipped. "1, 2, 3 and kick."
Lombardi was determined to build a winning team. During the preseason, he signed Fred "Fuzzy" Thurston, who had been cut by three other teams but ended up becoming an All-Pro left guard for Green Bay. In addition, Lombardi made a big trade with the Cleveland Browns that brought three players to the team who would become cornerstones of the defense: linemen Henry Jordan, Willie Davis, and Bill Quinlan.
Lombardi's hard work paid off, and the Packers improved to a 7–5 regular season record in 1959. They surprised the league during the following year by making it all the way to the 1960 NFL Championship Game. Although the Packers lost 17–13 to the Philadelphia Eagles, they had sent a clear message that they were no longer losers. Green Bay went on to win NFL Championships in 1961, 1962, 1965, and 1966.
Packers veteran quarterback Bart Starr was the top-rated quarterback in the NFL for 1966, and won the NFL Most Valuable Player Award, completing 156 of 251 passes (62.2%) for 2,257 yards (9.0 per attempt), 14 touchdowns, and only three interceptions. His top targets were wide receivers Boyd Dowler and Carroll Dale, who combined for 63 receptions for 1,336 yards. Fullback Jim Taylor was the team's top rusher with 705 yards and four touchdowns, and he caught 41 passes for 331 yards and two more touchdowns. (Before the season, Taylor had informed the team that instead of returning to the Packers in 1967, he would become a free agent and sign with the expansion New Orleans Saints. Lombardi, infuriated at what he considered to be Taylor's disloyalty, refused to speak to Taylor the entire season.) The team's starting halfback, Paul Hornung, was injured early in the season and replaced by running back Elijah Pitts, who gained 857 all-purpose yards. The Packers' offensive line was also a big reason for the team's success, led by All-Pro guards Jerry Kramer and Fuzzy Thurston, and tackle Forrest Gregg.
Green Bay also had an excellent defense that displayed their talent in the NFL championship game, stopping the Dallas Cowboys on four consecutive plays starting from the Packers 2-yard line on the final drive to win the game. Lionel Aldridge had replaced Quinlan, but Jordan and Davis still anchored the defensive line; linebacker Ray Nitschke excelled at run stopping and pass coverage, while the secondary was led by defensive backs Herb Adderley and Willie Wood. Wood was another example of how Lombardi found talent nobody else could see. Wood had been a quarterback in college and was not drafted by an NFL team. When Wood joined the Packers in 1960, he was converted to a free safety, and went on to make the All-Pro team nine times in his 12-year career.
Many people considered it fitting that the Chiefs and the Packers would be the teams to play in the first ever AFL-NFL World Championship Game. Chiefs owner Lamar Hunt had founded the AFL, while Green Bay was widely considered one of the better teams in NFL history (even if they could not claim to be founding members of their own league, as the Packers joined the NFL in 1921, a year after the league's formation). Lombardi was under intense pressure from the entire NFL to make sure the Packers not only won the game, but preferably won big to demonstrate the superiority of the NFL. CBS announcer Frank Gifford, who interviewed Lombardi prior to the game, said Lombardi was so nervous, "he held onto my arm and he was shaking like a leaf. It was incredible." The Chiefs saw this game as an opportunity to show they were good enough to play against any NFL team. One player who was really looking forward to competing in this game was Len Dawson, who had spent three years as a backup in the NFL before joining the Chiefs. However, the Chiefs were also nervous. Linebacker E. J. Holub said, "the Chiefs were scared to death. Guys in the tunnel were throwing up."
In the week prior to the game, Chiefs cornerback Fred "The Hammer" Williamson garnered considerable publicity by boasting he would use his "hammer" – forearm blows to the head – to destroy the Packers' receivers, stating, "Two hammers to (Boyd) Dowler, one to (Carroll) Dale should be enough." His prediction turned out to be partially correct, as Dowler was knocked out of the game early in the first quarter (although this was an aggravation of an injury he had suffered during the NFL Championship Game in Dallas on January 1). However, Williamson himself was knocked out cold and carried off the field on a stretcher near the end of the game.
The two teams played with their respective footballs from each league: the Chiefs used the AFL ball, the slightly narrower and longer Spalding J5-V, and the Packers played with the NFL ball, "The Duke" by Wilson.
The AFL's two-point conversion rule was not in force; the NFL added the two-point conversion in 1994, and it was first used in a Super Bowl that same season, in Super Bowl XXIX in January 1995.
This is also the only Super Bowl where the numeric yard markers were five yards apart, rather than 10 as is customary today.
Justin Peters of "Slate" watched all the Super Bowls over a two-month period in 2015 before Super Bowl 50. He noted that the first Super Bowl featured "two dudes in rocket packs who flew around the stadium. I can forgive a lot of bad football as long as the game features two dudes in honest-to-God rocket packs."
The weather was mild, with clear skies.
This game is the only Super Bowl to have been broadcast in the United States by two television networks simultaneously (no other NFL game was subsequently carried nationally on more than one network until December 29, 2007, when the New England Patriots faced the New York Giants on NBC, CBS, and the NFL Network). At the time, NBC held the rights to nationally televise AFL games while CBS had the rights to broadcast NFL games. Both networks were allowed to cover the game. During the week, tensions flared between the staffs of the two networks (longtime arch-rivals in American broadcasting), who each wanted to win the ratings war, to the point where a fence was built between the CBS and NBC trucks.
Each network used its own announcers: Ray Scott (play-by-play for the first half), Jack Whitaker (play-by-play for the second half), and Frank Gifford provided commentary on CBS, while Curt Gowdy and Paul Christman were on NBC. While NFL Commissioner Pete Rozelle allowed NBC to telecast the game, he decreed that it would not be able to use its own cameramen and technical personnel, instead forcing it to use the feed provided by CBS, since the Coliseum was home to the NFL's Rams.
Super Bowl I was the only Super Bowl that was not a sellout, despite the TV blackout in Los Angeles (at the time, a local blackout was required even at a neutral site and even if the stadium did sell out). Of the 94,000-seat capacity of the Coliseum, 33,000 seats went unsold. Days before the game, local newspapers printed editorials about what they viewed as a then-exorbitant $12 ($92.18 in 2019 money) ticket price, and wrote stories about how viewers could pull in the game from stations in surrounding markets such as Bakersfield, Santa Barbara, and San Diego.
This is the only Super Bowl that Curt Gowdy called for NBC where the NFL or NFC team won (the AFL/AFC teams won the others, even though the Baltimore Colts and Pittsburgh Steelers were part of the old NFL before moving to the AFC following the AFL-NFL merger).
All known broadcast tapes of the game in its entirety were subsequently wiped by both NBC and CBS to save costs, a common practice in the TV industry at the time, as videotapes were very expensive (one half-hour tape cost around $300 at the time, equivalent to $2,260 in 2019 dollars) and it was not foreseen how big the game would become. This has prevented studies comparing the two networks' respective telecasts.
For many years, only two small samples of the telecasts were known to have survived, showing Max McGee's opening touchdown and Jim Taylor's first touchdown run. Both were shown in 1991 on HBO's "Play by Play: A History of Sports Television" and on the Super Bowl XXV pregame show.
In January 2011, a partial recording of the CBS telecast was reported to have been found in a Pennsylvania attic and restored by the Paley Center for Media in New York. The two-inch color videotape is the most complete version of the broadcast yet discovered, missing only the halftime show and most of the third quarter. The NFL owns the broadcast copyright and has blocked its sale or distribution. After remaining anonymous and communicating with the media only through his lawyer since the recording's discovery, the owner of the recording, Troy Haupt, came forward to "The New York Times" in 2016 to tell his side of the story.
NFL Films had a camera crew present, and retains a substantial amount of film footage in its archives, some of which has been released in its film productions. One such presentation was the "NFL's Greatest Games" episode about this Super Bowl, entitled "The Spectacle of a Sport" (also the title of the Super Bowl I highlight film).
On January 11, 2016, the NFL announced that, "in an exhaustive process that took months to complete, NFL Films searched its enormous archives of footage and were able to locate all 145 plays from Super Bowl I from more than a couple dozen disparate sources. Once all the plays were located, NFL Films was able to put the plays in order and stitch them together while fully restoring, re-mastering, and color-correcting the footage. Finally, audio from the NBC Sports radio broadcast featuring announcers Jim Simpson and George Ratterman was layered on top of the footage to complete the broadcast. The final result represents the only known video footage of the entire action from Super Bowl I." It then announced that NFL Network would broadcast the newly pieced-together footage in its entirety on January 15, 2016, the 49th anniversary of the contest. This footage was nearly all on film, with the exception of several player introductions and a postgame locker room chat between Pat Summerall and Pete Rozelle.
The Los Angeles Ramettes, majorettes who had performed at all Rams home games, entertained during pregame festivities and after each quarter. Also during the pregame, the University of Arizona band created a physical outline of the continental United States at the center of the field, with the famed Anaheim High School drill team placing banners of each NFL and AFL team at each team's geographical location.
The halftime show featured trumpeter Al Hirt, the marching bands from the University of Arizona and Grambling State University, 300 pigeons, 10,000 balloons and a flying demonstration by the hydrogen-peroxide-propelled Bell Rocket Air Men.
The postgame trophy presentation ceremony was handled by CBS' Pat Summerall and NBC's George Ratterman. Summerall and Ratterman were forced to share a single microphone.
Balls from both leagues were used – when the Chiefs were on offense, the official AFL football (the Spalding J5-V) was used, and when the Packers were on offense, the official NFL ball (Wilson's "The Duke") was used. Even the officiating crew was a combination of AFL and NFL officials, with the NFL's Norm Schachter as the head referee.
The teams traded punts on their first possessions, then the Packers jumped to an early 7–0 lead, driving 80 yards in six plays. The drive was highlighted by Starr's passes: to Marv Fleming for 11 yards, to Elijah Pitts for 22 yards on a scramble, and to Carroll Dale for 12 yards. On the last play, Bart Starr threw a pass to reserve receiver Max McGee, who had replaced re-injured starter Boyd Dowler earlier in the drive. (Dowler had injured his shoulder two weeks earlier after scoring a third-quarter touchdown, when Cowboys defensive back Mike Gaechter upended him several steps past the goal line and he landed awkwardly.) McGee slipped past Chiefs cornerback Willie Mitchell, made a one-handed catch at the 23-yard line, and then took off for a 37-yard touchdown reception (McGee had also caught a touchdown pass after replacing an injured Dowler in the NFL championship game). On their ensuing drive, the Chiefs moved the ball to Green Bay's 33-yard line, but kicker Mike Mercer missed a 40-yard field goal.
Early in the second quarter, Kansas City drove 66 yards in six plays, featuring a 31-yard reception by receiver Otis Taylor, to tie the game on a seven-yard pass to Curtis McClinton from quarterback Len Dawson. But the Packers responded on their next drive, advancing 73 yards down the field and scoring on fullback Jim Taylor's 14-yard touchdown run with the team's famed "Power Sweep" play. Taylor's touchdown run was the first in Super Bowl history. This drive was again highlighted by Starr's key passes. He hit McGee for 10 yards on third and five; Dale for 15 on third and ten; Fleming for 11 on third and five; and Pitts for 10 yards on third and seven to set up Taylor's TD run on the next play.
Dawson was sacked for an eight-yard loss on the first play of the Chiefs' next drive, but he followed it up with four consecutive completions for 58 yards, including a 27-yarder to Chris Burford. This set up Mercer's 31-yard field goal to make the score 14–10 at the end of the half.
At halftime, the Chiefs appeared to have a chance to win. Many people watching the game were surprised at how close the score was and how well the AFL's champions were playing. Kansas City actually outgained Green Bay in total yards, 181–164, and had 11 first downs compared to the Packers' nine. The Chiefs were exuberant at halftime; Hank Stram said later, "I honestly thought we would come back and win it." The Packers were disappointed with the quality of their play in the first half. "The coach was 'concerned'," said defensive end Willie Davis later. Lombardi told them the game plan was sound, but that they had to tweak some things and execute better.
On their first drive of the second half, the Chiefs advanced to their own 49-yard line. But on a third-down pass play, a heavy blitz by linebackers Dave Robinson and Lee Roy Caffey collapsed the Chiefs pocket. Robinson, tackle Henry Jordan, and Packer right end Lionel Aldridge converged on Dawson who threw weakly toward tight end Fred Arbanas. The wobbly pass was intercepted by Willie Wood. Wood raced 50 yards to Kansas City's five-yard line where Mike Garrett dragged him down from behind. This was "the biggest play of the game," wrote Starr later. On their first play after the turnover, running back Elijah Pitts scored on a five-yard touchdown run off left tackle to give the Packers a 21–10 lead. Stram agreed that it was the critical point of the game. The Packers defense then dominated the Chiefs offense for the rest of the game, allowing them to cross midfield only once, and for just one play. The Chiefs were forced to deviate from their game plan, and that hurt them. The Kansas City offense totaled only 12 yards in the third quarter, and Dawson was held to five of 12 second-half pass completions for 59 yards.
Meanwhile, Green Bay forced Kansas City to punt from their own two-yard line after sacking Dawson twice and got the ball back with good field position on their own 44-yard line (despite a clipping penalty on the punt return). McGee subsequently caught three passes for 40 yards on a 56-yard drive. Taylor ran for one first down, Starr hit McGee for 16 yards on third-and-11, and a third down sweep with Taylor carrying gained eight yards and a first down at the Kansas City 13. The drive ended with Starr's 13-yard touchdown toss to McGee on a post pattern.
Midway through the fourth quarter, Starr completed a 25-yard pass to Carroll Dale and a 37-yard strike to McGee, moving the ball to the Chiefs 18-yard line. Four plays later, Pitts scored his second touchdown on a one-yard run to close out the scoring, giving the Packers the 35–10 win. Also in the fourth quarter, Fred Williamson, who had boasted about his "hammer" prior to the game, was knocked out when his head collided with running back Donny Anderson's knee, and then suffered a broken arm when Chiefs linebacker Sherrill Headrick fell on him. Williamson had three tackles for the game.
Hornung was the only Packer not to see any action. Lombardi had asked him in the fourth quarter if he wanted to go in, but Hornung declined, not wanting to aggravate a pinched nerve in his neck. McGee, who had caught only four passes for 91 yards and one touchdown during the season, finished Super Bowl I with seven receptions for 138 yards and two touchdowns. After the game was over, a reporter asked Vince Lombardi whether he thought Kansas City was a good team. Lombardi responded that though the Chiefs were an excellent, well-coached club, he thought several NFL teams, such as Dallas, were better.
Sources: NFL.com Super Bowl I, Super Bowl Play Finder GB, Super Bowl Play Finder KC
Note: According to NBC Radio announcer Jim Simpson's report at halftime of the game, Kansas City led 11 to 9 in first downs at halftime, 181 to 164 in total yards, and 142 to 113 in passing yards (Green Bay led 51 to 39 in rushing yards). Bart Starr completed eight of 13 with no interceptions, while Len Dawson was 11 of 15 with no interceptions. Green Bay led 14–10 at halftime. Green Bay had the ball five times, although only for a minute or so on the last possession; they punted on their first possession, scored a touchdown on their second, punted on their third, scored a touchdown on their fourth, and had the ball when the half ended on their fifth. Kansas City had the ball four times – punting on their first possession, driving to a missed field goal on their second possession, scoring a touchdown on their third, and kicking a field goal on their fourth.
This means that, in the second half, Green Bay led 12 to 6 in first downs, 197 to 58 in total yards, 115 to 25 in passing yards, and 82 to 33 in rushing yards (the Packers won the second half, 21–0). Starr and his late-game replacement, Zeke Bratkowski, were eight for 11 with one interception; Dawson and his late-game replacement, Pete Beathard, were just six for 17, also with one interception. Each team had the ball seven times in the second half, although Green Bay's first possession was just one play and their seventh possession was abbreviated because the game ended. Green Bay scored a touchdown on their first (one-play) possession, punted on their second, scored a touchdown on their third, was intercepted at Kansas City's 15-yard line on their fourth (just Starr's fourth interception of the year), scored a touchdown on their fifth, punted on their sixth, and had the ball when the game ended on their seventh possession. Kansas City was intercepted on their first possession – Wood's return to the five set up Pitts' touchdown, which made the score 21–10 – and then punted on each of their next six possessions.
Because this was the first Super Bowl, a new record was set in every category. All categories are listed in the 2016 NFL Fact Book. The following records were set in Super Bowl I, according to the official NFL.com box score and the Pro-Football-Reference.com game summary. Some records have to meet an NFL minimum number of attempts to be recognized; the minimums are shown in parentheses.
Turnovers are defined as the number of times losing the ball on interceptions and fumbles.
Note: The NFL used a six-official system at the time; a seventh official was not added until the 1978 season.
Since officials from the NFL and AFL wore different uniform designs, a "neutral" uniform was designed for this game. These uniforms had the familiar black and white stripes, but the sleeves were all black with the official's uniform number. This design was also worn in Super Bowl II, but was discontinued after that game when AFL officials began wearing uniforms identical to those of the NFL during the 1968 season, in anticipation of the AFL-NFL merger in 1970.
Super Bowl II
The second AFL-NFL World Championship Game in professional football, known retroactively as Super Bowl II, was played on January 14, 1968, at the Orange Bowl in Miami, Florida. The defending National Football League (NFL) champion Green Bay Packers defeated the American Football League (AFL) champion Oakland Raiders by the score of 33–14. This game and Super Bowl III are the only two Super Bowl games to have been played in back-to-back years in the same stadium.
Coming into this game, as with the first Super Bowl, many sports writers and fans believed that any team in the NFL was vastly superior to any club in the AFL. The Packers, the defending champions, posted a 9–4–1 record during the 1967 NFL season before defeating the Dallas Cowboys, 21–17, in the 1967 NFL Championship Game (also popularly known as the "Ice Bowl"). The Raiders finished the 1967 AFL season at 13–1, and defeated the Houston Oilers, 40–7, in the 1967 AFL Championship Game.
As expected, Green Bay dominated Oakland throughout most of Super Bowl II. The Raiders managed only two touchdowns, both on passes from quarterback Daryle Lamonica. Meanwhile, Packers kicker Don Chandler made four field goals, including three in the first half, and defensive back Herb Adderley returned an interception 60 yards for a touchdown. Green Bay quarterback Bart Starr was named MVP for the second straight time, becoming the first back-to-back Super Bowl MVP, after completing 13 of 24 passes for 202 yards and one touchdown.
The game was awarded to Miami on May 25, 1967, at the owners meetings held in New York City.
The Packers advanced to their second straight AFL-NFL World Championship Game, but had a much more difficult time than in the previous season. Both of their starting running backs from the previous year, future Pro Football Hall of Famers Paul Hornung and Jim Taylor, had left the team. Their replacements, Elijah Pitts and Jim Grabowski, both went down with season-ending injuries, forcing Green Bay coach Vince Lombardi to use veteran reserve running back Donny Anderson and rookie Travis Williams. Fullbacks Chuck Mercein and Ben Wilson, who were signed as free agents after being discarded by other teams, were also used to help compensate for the loss of Hornung and Taylor. Meanwhile, the team's 33-year-old veteran quarterback Bart Starr had missed four games during the season with injuries, and finished the season with nearly twice as many interceptions (17) as touchdown passes (nine).
The team's deep threat was provided by veteran receivers Carroll Dale, who recorded 35 receptions for 738 yards (a 21.1 average), and 5 touchdowns; and Pro Bowler Boyd Dowler, who had 54 catches for 846 yards and 4 touchdowns. The Packers still had the superb blocking of linemen Jerry Kramer, Fred Thurston and Forrest Gregg. On special teams, Williams returned 18 kickoffs for 749 yards and an NFL record 4 touchdowns, giving him a whopping 41.1 yards per return average. But overall the team ranked just 9th out of 16 NFL teams in scoring with 332 points.
The Packers defense, however, allowed only 209 points, the third best in the NFL. Even this figure was misleading, since Green Bay had yielded only 131 points in the first 11 games (when they clinched their division), the lowest total in professional football. Three members of Green Bay's secondary, the strongest aspect of their defense, were named to the Pro Bowl: defensive backs Willie Wood, Herb Adderley, and Bob Jeter. The Packers also had a superb defensive line led by Henry Jordan and Willie Davis. Behind them, the Packers' linebacking corps was led by Ray Nitschke.
The Packers won the NFL's Central Division with a 9–4–1 regular season record, clinching the division in the 11th week of the season. During the last three weeks, the Packers gave up an uncharacteristic total of 78 points, after having yielded only about a dozen points per game in their first 11 contests. In the playoffs, Green Bay returned to its dominant form, blowing away their first playoff opponent, the Los Angeles Rams, in the Western Conference Championship Game, 28–7. The next week, Green Bay then came from behind to defeat the Dallas Cowboys in the NFL championship game for the second year in a row, in one of the most famous games in NFL lore: The Ice Bowl.
The Raiders, led by head coach John Rauch, had stormed to the top of the AFL with a 13–1 regular season record (their only defeat was an October 7 loss to the New York Jets, 27–14), and went on to crush the Houston Oilers, 40–7, in the AFL Championship game. They had led all AFL and NFL teams in scoring with 468 points. And starting quarterback Daryle Lamonica had thrown for 3,228 yards and an AFL-best 30 touchdown passes.
The offensive line was anchored by center Jim Otto and guard Gene Upshaw, along with Pro Bowlers Harry Schuh and Wayne Hawkins. Wide receiver Fred Biletnikoff led the team with 40 receptions for 876 yards, an average of 21.3 yards per catch. On the other side of the field, tight end Billy Cannon caught 32 passes for 629 yards and scored 10 touchdowns. In the backfield, the Raiders had three running backs, Clem Daniels, Hewritt Dixon, and Pete Banaszak, who carried the ball equally and combined for 1,510 yards and 10 touchdowns. On special teams, defensive back Rodger Bird led the AFL with 612 punt return yards and added another 148 yards returning kickoffs.
The main strength of the Raiders was their defense, nicknamed "The 11 Angry Men". The defensive line was anchored by Pro Bowlers Tom Keating and Ben Davidson. Davidson was an extremely effective pass rusher who had demonstrated his aggressiveness in a regular season game against the New York Jets by breaking the jaw of Jets quarterback Joe Namath while sacking him. Behind them, Pro Bowl linebacker Dan Conners excelled at blitzing and pass coverage, recording 3 interceptions. The Raiders also had two Pro Bowl defensive backs: Willie Brown, who led the team with 7 interceptions, and Kent McCloughan, who had 2 interceptions. Safety Warren Powers recorded 6 interceptions, returning them for 154 yards and 2 touchdowns.
Despite Oakland's accomplishments, and expert consensus that this was the weakest of all the Packer NFL championship teams, Green Bay was a 14-point favorite to win the Super Bowl. Like the previous year, most fans and sports writers believed that the top NFL teams were superior to the best AFL teams.
Thus, most of the drama and discussions surrounding the game focused not on which team would win, but on the rumors that Lombardi might retire from coaching after the game. The game also proved to be the final one for Packers wide receiver Max McGee, one of the heroes of Super Bowl I, and place kicker Don Chandler.
This was the first Super Bowl to use the Y-shaped goalposts (with one supporting post instead of two) invented by Jim Trimble and Joel Rottman; they had made their debut at the start of the 1967 season in both the AFL and NFL, having first appeared at the pro level in Canada.
The game was televised in the United States by CBS, with Ray Scott handling the play-by-play duties and color commentators Pat Summerall and Jack Kemp in the broadcast booth. Kemp was the first Super Bowl commentator who was still an active player (with Buffalo of the AFL) at the time of the broadcast. The CBS telecast of this game is considered lost; all that survives are in-game photos, most of which were shown in the January 8, 1969 edition of "Sports Illustrated". Not even NFL Films, the league's official filmmaker, has a copy of the full game; however, it does have game footage that it used for its game highlight film.
Unlike the previous year's game, Super Bowl II was televised live on only one network, which has been the case for all subsequent Super Bowl games. While the Orange Bowl was sold out for the game, the NFL's unconditional blackout rules in place then prevented the live telecast from being shown in the Miami area.
During the latter part of the second quarter, and again for three minutes of halftime, almost 80 percent of the country (with the exceptions of New York City, Cleveland, Philadelphia, and much of the Northeast) lost the video feed of the CBS broadcast. CBS, which had paid $2.5 million for broadcast rights, blamed the glitch on a breakdown in AT&T cable lines. The overnight Arbitron rating was 43.0, a slight increase from Super Bowl I's combined CBS-NBC rating of 42.2.
The pregame ceremonies featured two giant figures, one dressed as a Packers player and the other dressed as a Raiders player. They appeared on opposite ends of the field and then faced each other near the 50-yard line.
The Grambling State University band performed the national anthem as well as during the halftime show. The same band was part of the halftime show of Super Bowl I the previous year.
On Oakland's first offensive play, Ray Nitschke shot through a gap and literally upended fullback Hewritt Dixon in what became one of the signature plays of Nitschke's career. The hit was so vicious that it prompted Jerry Green, a "Detroit News" columnist sitting in the press box with fellow journalists, to deadpan that the game was "over". The Packers opened the scoring with Don Chandler's 39-yard field goal after marching 34 yards on their first drive of the game. Meanwhile, the Raiders were forced to punt on their first two possessions.
The Packers then started their second possession at their own 3-yard line, and in the opening minutes of the second quarter, they drove 84 yards to the Raiders 13-yard line. However, they once again had to settle for a Chandler field goal to take a 6–0 lead. Later in the period, the Packers took the ball on their own 38-yard line following an Oakland punt. Raider cornerback Kent McCloughan jammed Packer split end Boyd Dowler at the line of scrimmage but then allowed him to head downfield, thinking that a safety would pick him up. However, McCloughan and left safety Howie Williams were both influenced by the Packer backs who were executing a "flood" pattern, with halfback Travis Williams and fullback Ben Wilson running pass routes to the same side as Dowler. Dowler ran a quick post and was wide open down the middle. He grabbed Starr's pass well behind middle linebacker Dan Conners, and right safety Rodger Bird could not get over quickly enough. Dowler outran the defense to score on a 62-yard touchdown reception, increasing the Packer lead to 13–0.
After being completely dominated until this point, the Raiders offense finally struck back on their next possession, advancing 79 yards in nine plays and scoring on a 23-yard touchdown pass from Daryle Lamonica to receiver Bill Miller. The score seemed to fire up the Raiders' defense, and they forced the Packers to punt on their next drive. Raiders returner Rodger Bird gave them great field position with a 12-yard return to Green Bay's 40-yard line, but Oakland could gain only one yard on their next three plays and came up empty when George Blanda's 47-yard field goal attempt fell short of the goal posts. Oakland's defense again forced Green Bay to punt after three plays on the ensuing drive, but this time, after calling for a fair catch, Bird fumbled punter Donny Anderson's twisting, left-footed kick, and Green Bay's Dick Capp recovered the ball. After two incomplete passes, Starr threw a 9-yard completion to Dowler (despite a heavy rush from Ike Lassiter) to set up Chandler's third field goal from the 43 as time expired in the first half, giving the Packers a 16–7 lead.
At halftime, Packers guard Jerry Kramer said to his teammates (referring to Lombardi), "Let's play the last 30 minutes for the old man."
Any chance the Raiders might have had to make a comeback seemed to completely vanish in the second half. The Packers had the ball three times in the third quarter, and held it for all but two and a half minutes. On the Packers second drive of the half starting at their own 17, Ben Wilson ripped up the middle for 14 yards on a draw play. Anderson picked up 8 yards on a sweep, and Wilson carried to within inches of the first down. Starr then pulled one of his favorite plays on third down and short yardage, faking to Wilson and completing a 35-yard pass to wide receiver Max McGee who had slipped past three Raiders at the line of scrimmage. This was McGee's only reception of the game, and the final one of his career. Starr then hit Carroll Dale on a sideline route at the Oakland 13. Starr overthrew Donny Anderson wide open in the end zone, but on the next play he rolled out to the right and threw back to Anderson who was tackled on the two by linebacker Gus Otto. The next play was a broken play, as Anderson thought he saw daylight to the right but ran into Starr. The Packers were not rattled, and the line and fullback Ben Wilson wiped out the Raiders on Anderson's 2-yard touchdown run over right tackle, making the score 23–7.
Packer guard Jerry Kramer must have taken to heart his plea to play the second half for Coach Lombardi. On this drive, game films show him blowing Dan Conners out of Wilson's path on the draw play, then flattening Conners again on Anderson's scoring run.
Again the Green Bay defense forced Oakland to go three-and-out, and the Raiders punted. The Packers drove from their own 39 to the Raider 24 and increased their lead to 26–7 as Chandler kicked his fourth field goal (which hit the crossbar from 31 yards out and bounced over).
Early in the fourth quarter, Starr was knocked out of the game when he jammed the thumb on his throwing hand on a sack by Davidson. (Starr was replaced by Zeke Bratkowski, who was then sacked on his only pass attempt.) But later in the period, the Packers put the game completely out of reach when defensive back Herb Adderley intercepted a pass intended for Fred Biletnikoff and returned it 60 yards for a touchdown, making the score 33–7. Adderley laid back as the Raider end ran a curl route, then dashed in front of him to snare the ball and scored with the help of a crushing downfield block by tackle Ron Kostelnik.
Oakland did manage to score on their next drive after the turnover, with a second 23-yard touchdown pass from Lamonica to Miller, set up by Pete Banaszak's 41-yard reception on the previous play. But the Raiders' second touchdown only made the final score look somewhat more respectable, 33–14.
At the end of the game, coach Lombardi was carried off the field by his victorious Packers in one of the more memorable images of early Super Bowl history. It was in fact Lombardi's last game as Packer coach and his ninth consecutive playoff victory.
Oakland's Bill Miller was the top receiver of the game, with five receptions for 84 yards and two touchdowns. Green Bay fullback Ben Wilson was the game's leading rusher with 62 yards, despite missing most of the fourth quarter while looking for a lost contact lens on the sidelines. Don Chandler ended his Packer career in style with four field goals. Lamonica finished the game with 15 of 34 pass completions for 208 yards, two touchdowns, and one interception. Bart Starr completed 13 of 24 (with a couple of dropped passes) for 202 yards and one touchdown; his passer rating for the game was 96.2 to Lamonica's 71.7. The Packers outgained the Raiders in rushing yardage 160 to 107, led in time of possession 35:54 to 24:06, had no turnovers, and committed only one penalty. Packer guard Jerry Kramer later recalled the mental mistakes his team made in the game, which only highlighted the impossibly high standards held by Lombardi's team.
Sources:"The NFL's Official Encyclopedic History of Professional Football", (1973), p. 139, Macmillan Publishing Co. New York, NY, LCCN 73-3862, NFL.com Super Bowl II, Super Bowl II Play Finder GB, Super Bowl II Play Finder Oak
The following records were set or tied in Super Bowl II, according to the official NFL.com box score and the Pro-Football-Reference.com game summary. Some records have to meet an NFL minimum number of attempts to be recognized; the minimums are shown in parentheses.
Turnovers are defined as the number of times losing the ball on interceptions and fumbles.
"Note: A seven-official system was not used until 1978" | https://en.wikipedia.org/wiki?curid=29129 |
Super Bowl IV
Super Bowl IV, the fourth and final AFL–NFL World Championship Game in professional American football, was played on January 11, 1970, at Tulane Stadium in New Orleans, Louisiana. The American Football League (AFL) champion Kansas City Chiefs defeated the National Football League (NFL) champion Minnesota Vikings by the score of 23–7. This victory by the AFL squared the Super Bowl series with the NFL at two games apiece as the two leagues merged into one after the game.
Despite the AFL's New York Jets winning the previous season's Super Bowl, many sports writers and fans thought it was a fluke and continued to believe that the NFL was superior to the AFL, and thus fully expected the Vikings to defeat the Chiefs; the Vikings entered the Super Bowl as 12½-to-13½-point favorites. Minnesota posted a 12–2 record in 1969, then defeated the Los Angeles Rams 23–20 for the Western Conference title, and the Cleveland Browns 27–7 in the NFL Championship Game. The Chiefs, who had previously appeared in the first Super Bowl, finished the regular season at 11–3; they continued with two road wins in the AFL playoffs, dethroning the New York Jets 13–6, and then taking down the division rival Oakland Raiders 17–7 in the final AFL title game.
Under wet conditions, the Chiefs defense dominated Super Bowl IV by limiting the Minnesota offense to only 67 rushing yards, forcing three interceptions, and recovering two fumbles. Kansas City's Len Dawson became the fourth consecutive winning quarterback to be named Super Bowl MVP. He completed 12 of 17 passes for 142 yards and one touchdown, with one interception. Dawson also recorded three rushing attempts for 11 yards.
Super Bowl IV is also notable for NFL Films miking up the Chiefs' Hank Stram during the game, the first time that a head coach had worn a microphone during a Super Bowl.
The game was awarded to New Orleans on March 19, 1969, at the owners meetings held in Palm Springs, California.
The Minnesota Vikings, led by head coach Bud Grant, entered the game with an NFL-best 12–2 regular season record, leading the older league in total points scored (379) and fewest points allowed (133). They scored 50 or more points in three different games. They lost the first and last games of the season, but won 12 straight in between, the longest single-season winning streak in 35 years. Their defense, considered the most intimidating in the NFL, was anchored by a defensive line nicknamed the "Purple People Eaters", consisting of defensive tackles Gary Larsen and Alan Page, and defensive ends Carl Eller and Jim Marshall. The secondary was led by defensive backs Bobby Bryant (8 interceptions, 97 return yards), Earsell Mackbee (6 interceptions, 100 return yards), and Paul Krause (5 interceptions, 82 return yards, 1 touchdown).
On offense, quarterback Joe Kapp was known for his superb leadership and his running ability, both throwing on the run and running for extra yards. And when Kapp did take off and run, instead of sliding when he was about to be tackled like most quarterbacks, he lowered his shoulder and went right at the tackler. This style of play earned him the nickname "Indestructible". In the NFL Championship Game against the Cleveland Browns, he collided with linebacker Jim Houston while running for a first down, and Houston had to be helped off the field after the play ended. Also, Kapp was known for being an extremely unselfish leader: when he was voted the Vikings Most Valuable Player, he turned the award down and said that every player on the team was equally valuable: "There is no one most valuable Viking. There are 40 most valuable Vikings."
Running back Dave Osborn was the team's top rusher with 643 yards and seven touchdowns. He also caught 22 passes for 236 yards and another touchdown. In the passing game, Pro Bowl wide receiver Gene Washington averaged 21.1 yards per catch by recording 821 yards and nine touchdowns from 39 receptions. Wide receiver John Henderson caught 34 passes for 553 yards and 5 touchdowns. The Vikings' offensive line was anchored by Pro Bowlers Grady Alderman and Mick Tingelhoff.
By winning the 1969 NFL Championship, the Vikings became the last possessors of the Ed Thorp Memorial Trophy.
Meanwhile, it seemed that the Chiefs, led by head coach Hank Stram, and especially quarterback Len Dawson, were jinxed throughout the year. In the second game of the regular season, Dawson suffered a knee injury that kept him from playing the next six games. The following week, second-string quarterback Jacky Lee went down for the season with a broken ankle in a loss to the Cincinnati Bengals. However, third-string quarterback Mike Livingston engineered five wins in the next six games, with Dawson coming off the bench in the second half of the sixth to clinch the win. The Chiefs (11–3) managed to finish in second place behind the Oakland Raiders (12–1–1) in the AFL's Western Division, after suffering a tough 10–6 loss to Oakland in the final game of the regular season. After that game, many sports writers and fans heavily criticized the team and Dawson for the poor play calling (Dawson called between 80 and 90 percent of the plays during the season).
After a 34–16 road win over the New York Jets on November 16, the Chiefs clinched a playoff spot at 9–1 with four games remaining. Wanting to set itself up more like the NFL right before the merger, the AFL expanded its 1969 playoffs to four teams, with the second place teams from each division traveling to play the first place teams from the other division (Western champion vs. Eastern runner-up, and vice versa). As a result of the new playoff format, many critics thought the Chiefs entered the playoffs through a "back-door" as the runner-up in the Western division. However, Dawson silenced the critics and led Kansas City to a strong finish with two road wins in the playoffs, defeating the defending champion Jets 13–6, and the Raiders (who had beaten them 41–6 in the previous year's postseason and twice in the 1969 season) 17–7 in the AFL Championship Game. This essentially made the Chiefs the first wild card team to play in the Super Bowl. (Dawson says he thinks both the Jets and the Raiders could have beaten the Vikings.)
Still, many people felt that Dawson's level of play in the AFL was not comparable to the NFL. Dawson himself had spent five seasons in the NFL as a backup before going to the AFL and becoming one of its top quarterbacks. "The AFL saved my career," said Dawson. In his 8 AFL seasons, he had thrown more touchdown passes (182) than any other professional football quarterback during that time. But because many still viewed the AFL as being inferior to the NFL, his records were not considered significant. Dawson's first chance to prove himself against an NFL team ended in failure, with his Chiefs losing 35–10 to the Green Bay Packers in Super Bowl I, reinforcing the notion that his success was only due to playing in the "inferior league".
Offensively, the Chiefs employed innovative formations and strategies designed by Stram to disrupt the timing and positioning of the defense. Besides Dawson, the Chiefs' main offensive weapon was running back Mike Garrett (the 1965 Heisman Trophy winner), who rushed for 732 yards and 6 touchdowns. He also recorded 43 receptions for 432 yards and another 2 touchdowns. Running back Robert Holmes had 612 rushing yards, 266 receiving yards, and 5 touchdowns. Running back Warren McVea rushed for 500 yards and 7 touchdowns, while adding another 318 yards returning kickoffs. In the passing game, wide receiver Otis Taylor caught 41 passes for 696 yards and 7 touchdowns. The offensive line was anchored by AFL All-Stars Ed Budde and Jim Tyrer. According to Len Dawson, placekicker Jan Stenerud and punter Jerrel Wilson were the best kickers in football.
The Chiefs defense led the AFL in fewest points allowed (177). Like the Vikings, the Chiefs also had an outstanding defensive line, which was led by defensive tackles Buck Buchanan and Curley Culp, and defensive ends Jerry Mays and Aaron Brown. The Chiefs also had AFL All-Star linebacker Willie Lanier, who recorded 4 interceptions and 1 fumble recovery during the season. The Kansas City secondary was led by defensive backs Emmitt Thomas (9 interceptions for 146 return yards and a touchdown), Jim Kearney (5 interceptions for 154 return yards and a touchdown) and Johnny Robinson (8 interceptions for 158 return yards).
Kansas City's defense had shown its talent in the AFL title game when it defeated the Raiders. Raiders quarterback Daryle Lamonica had completed 13 of 17 passes for 276 yards and a record-setting 6 touchdowns in a 56–7 divisional rout of the Houston Oilers in their previous game, and had shredded the Chiefs with 347 yards and 5 touchdowns in their 41–6 win in the previous season's playoffs. But in the 1969 AFL Championship Game, the Chiefs defense held him to just 15 of 39 completions and intercepted him 3 times in the fourth quarter.
This was the last Super Bowl appearance for the Chiefs and their last championship until Super Bowl LIV 50 years later.
Kansas City advanced to the Super Bowl with wins over the two previous AFL champions. First they defeated the New York Jets in a defensive struggle, 13–6, with Dawson's 61-yard completion to Taylor setting up the game-winning score on his 19-yard touchdown pass to Gloster Richardson. Kansas City held New York to just 234 yards and forced 4 turnovers.
The Chiefs then faced the Raiders, who took a 7–0 lead in the first quarter, but that would be their only score of the game. Meanwhile, Dawson's 41-yard completion to Frank Pitts in the second quarter set up a 1-yard touchdown run by Wendell Hayes. Then in the third quarter, Emmitt Thomas' clutch interception in the end zone and Dawson's long completion to Taylor sparked a 95-yard drive that ended with a touchdown run by Robert Holmes. Kansas City went into the fourth quarter with a 14–7 lead, and held on for the win by forcing four turnovers (three interceptions and a turnover on downs) in the final period.
Meanwhile, the ninth-year Vikings recorded their first postseason win in franchise history by defeating the Los Angeles Rams 23–20. Though the Rams held the lead for most of the game, Kapp led a touchdown drive to give the team a 21–20 fourth-quarter lead. Eller made a key play to preserve the lead, sacking Rams quarterback Roman Gabriel in the end zone for a safety, and Alan Page intercepted a pass with thirty seconds remaining.
Then Minnesota quickly demolished the Cleveland Browns in the NFL championship game, jumping to a 24–0 halftime lead and going on to win 27–7. The Vikings offense gained 381 yards without turning the ball over, with Kapp passing for 169 yards and a touchdown, while Osborn rushed for 108 yards and Washington gained 125 yards on just 3 receptions.
Many sports writers and fans fully expected that the Vikings would easily defeat the Chiefs. Although the AFL's New York Jets won Super Bowl III at the end of the previous season, many were convinced that it was a fluke. They continued to believe that all of the NFL teams were far and away superior to all of the AFL teams. And regardless of the differences among the leagues, the Vikings simply appeared to be a superior team. Minnesota had the NFL's best record and outscored their opponents by 246 points, while Kansas City had not even won their own division.
Super Bowl IV provided another chance to show that Dawson belonged at the same level with all of the great NFL quarterbacks. But five days before the Super Bowl, news leaked that his name had been linked to a Detroit federal gambling investigation. Although Dawson was eventually cleared of any charges, the controversy added to the pressure he was already under while preparing for the game, causing him to lose sleep and concentration. "It was, beyond a doubt, the toughest week of my life," said Dawson.
Bud Grant became the first Super Bowl coach not to wear a tie. His counterpart, Hank Stram, wore a three-piece suit, with a red vest and a blazer with the Chiefs' helmet logo emblazoned on the breast pocket.
The attendance of 80,562 was the highest of the four pre-merger Super Bowl games.
Super Bowl IV was broadcast in the United States by CBS with play-by-play announcer Jack Buck and color commentator Pat Summerall, with Frank Gifford and Jack Whitaker reporting from the winning and losing locker rooms, respectively. While the game was sold out at Tulane Stadium, the NFL's unconditional blackout rules in place then prohibited the live telecast from being shown in the New Orleans area.
CBS erased the videotape a few days after the game, as it had done with Super Bowls I and II, which it also broadcast. Videotape was expensive at the time, and networks did not believe old games were worth saving. The game survives only because the CBC, along with the French-language Radio-Canada, carried the broadcast in Canada and Quebec. Because the Vikings were located so close to Canada and had many Canadian and Québécois fans (and Bud Grant was a legendary player and coach in the CFL), the CBC decided to save the game for its archives. Since videotape was too expensive in those days to keep, the CBC transferred the footage to black-and-white film (kinescope), which allowed the videotape to be reused.
The night before the game, Ed Sabol of NFL Films met with Hank Stram and convinced Stram to wear a hidden microphone during the game so his comments could be recorded for the NFL Films Super Bowl IV film. They agreed the microphone would be kept secret. This would be the first time that a head coach had worn a microphone during a Super Bowl. This has led to one of the best-known and most popular of the NFL Films Super Bowl films due to the constant chatter and wisecracking of Stram. Ed Sabol had his number one sound man, Jack Newman – who also wired Vince Lombardi in a previous playoff game – place the microphone on Stram. Newman, a multiple Emmy award-winning sound man and cameraman, shot Stram for the entire game as well as monitored the sound to make sure it continued to work. The success and popularity of this first Super Bowl wiring of a winning head coach led to 24 years of Newman continuing to wire players and coaches for NFL Films.
Some excerpts of Stram include:
Chiefs head coach Hank Stram, who was also the team's offensive coordinator, devised an effective game plan against the Vikings. He knew Minnesota's secondary was able to play very far off receivers because Viking defensive ends Carl Eller and Jim Marshall knocked down short passes or put pressure on the quarterback. Stram decided to double-team Marshall and Eller; most of quarterback Len Dawson's completions would be short passes, and neither Marshall nor Eller knocked down any passes. Stram also concluded that the Vikings' aggressiveness on defense made them susceptible to trap plays; Mike Garrett's rushing touchdown would come on a trap play. The Vikings' inside running game depended on center Mick Tingelhoff blocking linebackers. Stram put 285-pound Buck Buchanan or 295-pound Curley Culp in front of Tingelhoff, who weighed only 235 pounds. In fairness to Minnesota, the NFL of that era used light, so-called "greyhound" centers, while the AFL used big centers. It was a mismatch that disrupted the Vikings' running game. Wrote Dawson, "It was obvious that their offense had never seen a defense like ours." Minnesota would rush for only two first downs.
The Vikings began the game by receiving the opening kickoff and marching from their own 20-yard line to the Kansas City 39-yard line with quarterback Joe Kapp completing his first two passes for 36 yards. Kapp's next pass was also a completion but running back Bill Brown was slowed by linebacker Bobby Bell, then brought down by left defensive end Jerry Mays for a yard loss to make it third down, on which Kapp failed to connect with tight end John Beasley. Minnesota rushed for only 6 yards on the drive and chose to punt. The Chiefs then drove 42 yards in eight plays. Included was a 20-yard reception by wide receiver Frank Pitts after Vikings defensive back Ed Sharockman gambled trying to make an interception. Kansas City then scored on placekicker Jan Stenerud's Super Bowl record 48-yard field goal. This record would stand for 24 years until broken by Steve Christie in Super Bowl XXVIII. (According to Dawson, the Vikings were shocked that the Chiefs would attempt a 48-yard field goal. Stenerud was among the first soccer-style placekickers in professional football. The others included brothers Charlie and Pete Gogolak. The soccer-style placekickers used the instep of the foot while the conventional professional football placekickers kicked straight on with their toes. "Stenerud was a major factor," Dawson said.) Minnesota then managed to reach midfield on its next drive, but chose to punt again.
On the first play of their ensuing drive, Dawson threw a 20-yard completion to Pitts, followed by a 9-yard pass to wide receiver Otis Taylor.
Four plays later, on the first play of the second quarter, a pass interference penalty on Sharockman nullified Dawson's third down incompletion and gave Kansas City a first down at the Minnesota 31-yard line. However, on third down and 4 at the 25-yard line, Vikings cornerback Earsell Mackbee broke up a deep pass intended for Taylor. Stenerud then kicked another field goal to increase the Chiefs' lead to 6–0.
On the second play of their next drive, Vikings wide receiver John Henderson fumbled the ball after catching a 16-yard reception, and Chiefs defensive back Johnny Robinson recovered the ball at the Minnesota 46-yard line. But the Vikings made key defensive plays. First defensive tackle Alan Page tackled running back Garrett for a 1-yard loss, and then safety Paul Krause intercepted Dawson's pass at the 7-yard line on the next play.
However, the Vikings also could not take advantage of the turnover. Kapp's two incompletions and a delay of game penalty forced Minnesota to punt from its own 5-yard line. The Chiefs then took over at the Viking 44-yard line after punter Bob Lee's kick traveled only 39 yards. A 19-yard run by Pitts on an end around play fooled the overaggressive, overpursuing Viking defense to set up another field goal attempt by Stenerud, which was good to increase Kansas City's lead to 9–0.
On the ensuing kickoff, Vikings returner Charlie West fumbled the football, and Kansas City's Remi Prudhomme recovered it at the Minnesota 19-yard line. ("That was a key, key play," said Dawson.) Defensive end Jim Marshall sacked Dawson for an 8-yard loss on the first play of the drive; however, a 13-yard run on a draw play by running back Wendell Hayes and a 10-yard reception by Taylor gave the Chiefs a first down at the Vikings' 4. Three plays later, Garrett's five-yard touchdown run on a trap draw play, aided by pulling right guard Mo Moorman's block on Page that cleared a huge hole, gave Kansas City a 16–0 lead. This play is forever known as the 65 Toss Power Trap.
West returned the ensuing kickoff 27 yards to the 32-yard line. On the first play of the drive, Kapp completed a 27-yard pass to Henderson to advance the ball to the Kansas City 41-yard line. However, on the next three plays, Kapp threw two incompletions and was sacked by Chiefs defensive tackle Buck Buchanan for an eight-yard loss. On fourth down, kicker Fred Cox's 56-yard field goal attempt fell well short of the goal posts. For the first half, Minnesota rushed for only 24 yards and failed to convert any of five third downs.
In the third quarter, the Vikings managed to build momentum. After the Chiefs punted on their opening possession, Kapp completed four consecutive passes for 47 yards and rushed for seven yards. Minnesota also made its first third down conversion as it drove 69 yards in 10 plays to score on fullback Dave Osborn's four-yard rushing touchdown, reducing the lead to 16–7. However, Kansas City responded on its next possession with a six-play, 82-yard drive. Pitts picked up a key first down with a 7-yard left-to-right run on a reverse play. Then after a 15-yard personal foul penalty against the Vikings, Dawson hit Taylor with a short pass. Taylor caught the ball at the Minnesota 41-yard line, broke Earsell Mackbee's tackle, raced down the sideline, broke through Vikings' safety Karl Kassulke's tackle and scored the clinching touchdown on a 46-yard play.
The Vikings were demoralized after the game-breaking touchdown and the Chiefs' defense would continue to shut them down in the fourth quarter, forcing three interceptions on three Minnesota possessions to clinch the 23–7 victory. The defeat was total for the Vikings, as even their "Indestructible" quarterback Joe Kapp had to be helped off the field in the fourth quarter after being sacked by Chiefs defensive lineman Aaron Brown. Kapp was replaced by Gary Cuozzo. Fittingly, the Vikings' final play was an interception Cuozzo threw to cornerback Emmitt Thomas.
Kansas City running back and future University of Southern California Athletic Director Mike Garrett, the 1965 Heisman Trophy recipient, was the top rusher of the game, recording 11 carries for 39 yards and a touchdown. He also caught two passes for 25 yards and returned a kickoff for 18 yards. Taylor was the Chiefs' leading receiver with six catches for 81 yards and a touchdown. Kapp finished the game with 16 of 25 completions for 183 yards, with two costly interceptions. Henderson was the top receiver of the game with seven catches for 111 yards. The Chiefs defense completely shut down Minnesota's vaunted rushing attack. In the NFL championship game, Osborn had rushed for 108 yards while Kapp rushed for 57. In Super Bowl IV, however, the two rushed for a combined total of 24 yards. In addition, Kansas City's secondary held Minnesota All Pro receiver Gene Washington to one reception for 9 yards.
Referring to the Vikings' three interceptions, three fumbles, and six penalties, Vikings safety Karl Kassulke said, "We made more mental mistakes in one game than we did in one season." Kapp would never play again for the Vikings, as he would play out the option of his contract and sign with the Boston Patriots for the 1970 season.
Kansas City remains the only team in the Super Bowl era to win the title without allowing as many as 10 points in any postseason game.
Sources:"The NFL's Official Encyclopedic History of Professional Football", (1973), p. 144, Macmillan Publishing Co. New York, NY, LCCN 73-3862, NFL.com Super Bowl IV, USA Today Super Bowl IV Play by Play, Super Bowl IV Play Finder KC, Super Bowl IV Play Finder Min
(Statistics tables omitted; footnotes: 1 completions/attempts, 2 carries, 3 long gain, 4 receptions, 5 times targeted.)
The following records were set or tied in Super Bowl IV, according to the official NFL.com boxscore and the Pro-Football-Reference.com game summary. Some records must meet an NFL minimum number of attempts to be recognized; the minimums are shown in parentheses.
"Note: A seven-official system was not used until 1978" | https://en.wikipedia.org/wiki?curid=29130 |
Super Bowl V
Super Bowl V, the fifth edition of the Super Bowl and first modern-era National Football League (NFL) championship game, was an American football game played between the American Football Conference (AFC) champion Baltimore Colts and the National Football Conference (NFC) champion Dallas Cowboys to decide the NFL champion for the 1970 season. The Colts defeated the Cowboys by the score of 16–13 on a field goal with 5 seconds left in the game. The game was played on January 17, 1971, at the Orange Bowl in Miami, Florida, the first Super Bowl game played on artificial turf, on first-generation Poly-Turf.
This was the first Super Bowl played after the completion of the AFL–NFL merger. Beginning with this game and continuing to the present day, the Super Bowl has served as the NFL's league championship game, with the winner of the AFC Championship Game and the winner of the NFC Championship Game facing off in the culmination of the NFL playoffs. As per the merger agreement, all 26 AFL and NFL teams were divided into two conferences with 13 teams in each. Along with the Colts, the Cleveland Browns and Pittsburgh Steelers agreed to join the ten AFL teams to form the AFC; the remaining 13 NFL teams formed the NFC. This explains why the Colts represented the NFL in Super Bowl III, but the AFC in Super Bowl V. Baltimore advanced to Super Bowl V after posting an 11–2–1 regular season record. Meanwhile, the Cowboys were making their first Super Bowl appearance after posting a 10–4 regular season record.
The game is sometimes called the "Blunder Bowl," "Blooper Bowl," or "Stupor Bowl" because it was filled with poor play, a missed PAT, penalties, turnovers, and officiating miscues. The two teams combined for a Super Bowl record 11 turnovers, with five in the fourth quarter. The Colts' seven turnovers remain the most committed by a Super Bowl champion. Dallas also set a Super Bowl record with 10 penalties, costing them 133 yards. It was finally settled when Colts rookie kicker Jim O'Brien made a 32-yard field goal with five seconds left in regulation time. In order to win the game, Baltimore had to overcome a 13–6 deficit after three quarters, and losing their starting quarterback Johnny Unitas in the second quarter. It is the only Super Bowl in which the Most Valuable Player Award was given to a member of the losing team: Cowboys' linebacker Chuck Howley, the first non-quarterback to win the award, after making two interceptions (sacks and tackles were not yet recorded).
The NFL awarded hosting rights for Super Bowl V to the city of Miami 10 months earlier on March 17, 1970, at the owners' meeting held in Honolulu.
The Colts were an unspectacular but well-balanced veteran team, led by 37-year-old star quarterback Johnny Unitas. He had regained his starting spot on the team in 1969 upon recovering from an injury that led him to miss the majority of the 1968 season. Unitas played inconsistently during the 1970 regular season; he threw for 2,213 yards, but recorded more interceptions than touchdowns. He also had injury problems, missing two regular season games and giving Earl Morrall more significant playing time. Morrall put up better statistics (792 yards, 9 touchdowns, 4 interceptions, and a 97.6 passer rating), but head coach Don McCafferty decided to start Unitas for the playoffs. (According to Jim O'Brien, Morrall was just as good as Unitas in the players' opinion.)
In addition, Baltimore had three solid weapons in the passing game: wide receivers Eddie Hinton and Roy Jefferson, and future Hall of Fame tight end John Mackey combined for 119 receptions, 1,917 yards, and 15 touchdowns. In the backfield, running back Norm Bulaich was the team's top rusher with 426 yards and 3 touchdowns, while also catching 11 passes for another 123 yards.
The Colts' main strength was their defense. Pro Bowl defensive tackle Bubba Smith anchored the line. Behind him, the Colts had two outstanding linebackers: Pro Bowler Mike Curtis, who recorded 5 interceptions, and Ted Hendricks. In the secondary, Pro Bowl safety Jerry Logan recorded 6 interceptions for 92 return yards and 2 touchdowns, while safety Rick Volk had 4 interceptions for 61 return yards.
Don Klosterman, formerly with San Diego, Kansas City, and Houston in the AFL, became the Colts' general manager in 1970. Future Colts GM Ernie Accorsi was the public relations director.
Baltimore finished the regular season winning the AFC East with an 11–2–1 record, the best in the AFC. Only the Minnesota Vikings had a better record among all NFL teams, at 12–2.
The Cowboys overcame many obstacles during the regular season. Running back Calvin Hill, the team's second leading rusher with 577 yards and four touchdowns, was lost for the year after suffering a leg injury late in the regular season. And wide receiver Bob Hayes was benched by head coach Tom Landry for poor performances on several occasions.
Most significantly, the Cowboys had a quarterback controversy between Craig Morton and Roger Staubach; the two alternated as starters during the regular season. Landry eventually settled on Morton for most of the latter half of the season, because he felt less confident that Staubach would follow his game plan (Landry called all of Morton's plays). Also, Morton had done extremely well in the regular season, throwing for 1,819 yards and 15 touchdowns, with only seven interceptions, earning him a passer rating of 89.8. In contrast, Staubach, although a noted scrambler and able to salvage broken plays effectively, threw for 542 yards, and only two touchdowns with eight interceptions, giving him a 42.9 rating.
Hayes was the main deep threat on the team, catching 34 passes for 889 yards (a 26.1 yards per catch average) and ten touchdowns, while also rushing four times for 34 yards and another touchdown, and adding another 116 yards returning punts. On the other side of the field, wide receiver Lance Rentzel (who would be deactivated for the last few weeks of the season and postseason following an indecent exposure charge; being replaced in the starting lineup by Reggie Rucker) recorded 28 receptions for 556 yards and 5 touchdowns.
However, the main strength on the Cowboys offense was their running game. Rookie running back Duane Thomas rushed 151 times for 803 yards (a 5.1 yards per carry average) and five touchdowns, while adding another 416 yards returning kickoffs. Fullback Walt Garrison, who replaced the injured Hill, provided Thomas with excellent blocking and rushed for 507 yards and three touchdowns. Garrison was also a good receiver out of the backfield, catching 21 passes for 205 yards and 2 touchdowns. Up front, Pro Bowl guard John Niland and Rayfield Wright anchored the offensive line.
Like the Colts, the Cowboys' main strength was their defense. Nicknamed the "Doomsday Defense", they allowed just one touchdown in their last six games prior to the Super Bowl. Their line was anchored by future Hall of Fame defensive tackle Bob Lilly. Behind him, linebackers Lee Roy Jordan, Dave Edwards, and Chuck Howley excelled at stopping the run and in pass coverage. The Cowboys also had an outstanding secondary, led by Mel Renfro and Herb Adderley, who combined for seven interceptions. Safety Charlie Waters led the team with five interceptions, while safety Cliff Harris recorded two.
Dallas finished the regular season winning the NFC East with a 10–4 record, winning their final five regular season games to overcome the St. Louis Cardinals (who lost their final three games and fell to third place in the final standings) and New York Giants (who lost their finale to the Los Angeles Rams; a Giants victory would have given New York the NFC East title based upon a better division record and forced a coin toss between the Cowboys and Detroit Lions for the wild card playoff spot).
In the playoffs, Dallas defeated Detroit 5–0 in sunny weather at the Cotton Bowl, with a field goal and a safety. Then the Cowboys overcame the San Francisco 49ers in the NFC Championship Game, aided by Thomas' 143 rushing yards, along with interceptions by Renfro and Jordan late in the third quarter that were both converted into touchdowns.
Meanwhile, the Colts advanced to the Super Bowl by beating the Cincinnati Bengals and the Oakland Raiders in the playoffs at Memorial Stadium.
For the Colts, Super Bowl V represented a chance to redeem themselves for their humiliating loss to the New York Jets in Super Bowl III. Volk commented, "Going to the game a second time took away some of the awe. I think we were able to focus better. There was no way we were going to let ourselves get beat again."
Meanwhile, the game was a chance for the Cowboys to lose their nickname of "next year's champions" and their reputation of "not being able to win the big games". In the past five seasons, Dallas had won more games, 52 of 68, than any other professional football team, but they had yet to win a league title. The Cowboys had chances to go to the first two Super Bowls, but narrowly lost to the Green Bay Packers in both the 1966 and 1967 NFL Championship games. In the 1966 title game, the Cowboys failed to score a potential tying touchdown on four attempts starting from the Packers two-yard line on the game's final drive. Then in the 1967 title game (the "Ice Bowl"), the Cowboys lost because they allowed the Packers to score a touchdown with sixteen seconds left in the game.
As the designated home team, Dallas was forced to wear its blue jerseys for the Super Bowl under rules in place at the time, which did not allow the home team its choice of jersey color, unlike the regular season and playoff games leading up to the Super Bowl. Dallas had not worn its blue jerseys at home since 1963, as Cowboys general manager Tex Schramm opted to have the team wear white at home in order to present fans with a consistent look. The Cowboys wore their blue jerseys twice during the 1970 season, losing 20–7 at St. Louis in week four and winning 6–2 at Cleveland in week 13. The designated home team was first allowed its choice of jersey color for Super Bowl XIII, allowing the Cowboys to wear white vs. the Pittsburgh Steelers.
Vice President Spiro Agnew, a Colts fan since the team began playing in Baltimore in 1953, attended the game. Agnew was Governor of Maryland prior to his election as Richard Nixon's running mate in 1968. Nixon himself was a huge football fan and had a vacation home in Key Biscayne, approximately ten miles from the Orange Bowl.
Also in attendance was boxing great Muhammad Ali, who signed autographs for many young fans.
Kickoff for this game was at 2:00 pm, making it the earliest starting time in the Eastern Time Zone in Super Bowl history, and one of only three Super Bowls to start in the morning for viewers in the Pacific Time Zone (the others were Super Bowl VI in New Orleans and Super Bowl X in Miami).
The game was broadcast in the United States by NBC with play-by-play announcer Curt Gowdy, color commentator Kyle Rote, and sideline reporter Bill Enis. Although the Orange Bowl was sold out for the event, the NFL's unconditional blackout rules of the era prohibited the live telecast from being shown in the Miami area. The blackout was challenged in Miami-Dade District Court by attorney Ellis Rubin, and although the judge denied Rubin's request since he felt he did not have the power to overrule the NFL, he agreed with Rubin's argument that the blackout rule was unnecessary for the Super Bowl. The game was also the first Super Bowl to be carried live in the state of Alaska, thanks to NBC's then-parent company RCA acquiring the Alaska Communications System from the United States Air Force.
Video of the complete original broadcast exists up until Chuck Howley's second interception on the first play of the fourth quarter; the rest of the fourth quarter is missing from network vaults. The complete audio, including the post-game, does exist. Broadcast excerpts of the crucial fourth-quarter plays, recovered from the Canadian feed of NBC's original, also exist and circulate among collectors. (Two different NFL Films game compilations also cover the fourth-quarter plays, in part.)
The bands from Southern University and Southeast Missouri State College performed before the game, while trumpeter Tommy Loy played the national anthem. Loy also played the anthem before every Cowboys' home game from the mid-1960s until the late-1980s. The Southeast Missouri State Golden Eagle Band was featured during the halftime show along with singer Anita Bryant.
The first three possessions of Super Bowl V ended quietly with each team punting after a three-and-out. Then, on the first play of the Colts' second drive, Cowboys linebacker Chuck Howley intercepted a pass from Johnny Unitas and returned it 22 yards to the Colts' 46-yard line, the first of 11 combined turnovers committed by both teams. The Cowboys failed to take advantage of the turnover, with a 15-yard holding penalty 10 yards behind the line of scrimmage pushing them back to a 3rd-and-33 situation. Walt Garrison gained 11 yards and Dallas had to punt. However, Colts punt returner Ron Gardin muffed the return, and the loose ball was recovered by Cowboys safety Cliff Harris at the Colts' 9-yard line. The Cowboys were unable to score a touchdown and settled for kicker Mike Clark's 14-yard field goal to establish a 3–0 lead.
After the Colts punted into the end zone for a touchback, Cowboys quarterback Craig Morton completed a 41-yard pass to Bob Hayes to reach the Colts' 12-yard line, with a roughing-the-passer penalty adding 6 yards (half the distance to the goal), but Dallas was denied the end zone by the Baltimore defense for a second time. Linebacker Ted Hendricks deflected Morton's pass on first down, and running back Duane Thomas was tackled for a 1-yard loss on second down.
Morton committed a 15-yard intentional grounding penalty on third down to open the second quarter, pushing the Cowboys back to the 22-yard line and forcing them to settle for Clark's 30-yard field goal, stretching the lead to 6–0.
On their next possession the Colts offense got a break. After two straight incompletions to open the drive, Unitas uncorked a pass to Eddie Hinton that was both high and behind the receiver. The ball ricocheted off Hinton's hands, was tipped by Dallas defensive back Mel Renfro, then landed in the arms of tight end John Mackey, who sprinted 75 yards for a touchdown. The Cowboys subsequently blocked Jim O'Brien's extra point attempt to keep the score tied at 6–6, with O'Brien later saying that he was "awfully nervous" and hesitated a second too long before kicking it.
Six minutes into the second quarter, Cowboys linebacker Lee Roy Jordan tackled Unitas, causing him to fumble. Defensive lineman Jethro Pugh recovered the loose ball at the Baltimore 28, and Dallas capitalized three plays later, scoring on a 7-yard touchdown pass from Morton to Thomas to establish a 13–6 lead. The next time the Colts had the ball, they quickly turned it over yet again: Unitas, hit hard as he threw, floated a pass that Renfro intercepted. Unitas was knocked out of the game on the play with a rib injury and did not return; he was replaced by Earl Morrall, who had been widely blamed for the Colts' loss in Super Bowl III. The Cowboys, starting from their own 15, were unable to score any points off the turnover; after sustaining a 15-yard pass interference penalty, they punted. After regaining possession, the Colts offense, led by Morrall, stormed all the way to the Cowboys' 2-yard line with less than two minutes remaining in the half. However, the Cowboys defense stiffened. Colts running back Norm Bulaich was stuffed on three consecutive rushing attempts from inside the 2-yard line. On fourth down, Morrall threw an incomplete pass, turning the ball over on downs and ending the half with Dallas leading 13–6.
The second half was a parade of turnovers, sloppy play, penalties, and missed opportunities.
Colts returner Jim Duncan fumbled the opening kickoff of the second half, and Dallas recovered. The Cowboys then drove to the Colts' 1-yard line, but Mike Curtis punched the ball loose from Cowboys running back Duane Thomas before he crossed the goal line, and the Colts took over at the 1 as Duncan was credited with the recovery, a controversial call because when the resulting pile-up was sorted out, Dallas center Dave Manders was holding the ball. The energized Colts then drove to the Cowboys' 44-yard line but came up empty when O'Brien's 52-yard field goal attempt fell short of the goal posts. However, instead of attempting to return the missed field goal, Renfro allowed it to bounce inside the Dallas 1-yard line, where it was downed by Colts center Tom Goode (NFL rules prior to 1974 allowed a field goal that fell short of the goal posts to be downed just like a punt; that rule is still in effect in high school football). "I thought it would carry into the end zone", Renfro explained after the game.
Dallas, backed up to its own end zone, punted after three plays. The Colts would have received the ball inside Dallas territory following the punt, but a 15-yard clipping penalty pushed the Colts back to their own 39 to begin the drive. Two plays later, Morrall completed a 45-yard pass to running back Tom Nowatzke to reach the Cowboys 15-yard line.
Three plays later, on the first play of the fourth quarter, Morrall threw an interception to Howley in the end zone to preserve the Cowboys' 13–6 lead.
After forcing the Cowboys to punt, the Colts regained the ball on their own 18-yard line, still trailing 13–6. Aided by a pass interference call and a 23-yard completion, the Colts advanced into Dallas territory. The Colts then attempted to fool the Cowboys with a flea-flicker, resulting in one of the oddest plays in Super Bowl history. Running back Sam Havrilak took a handoff and ran right, intending to lateral the ball back to Morrall, but Pugh stormed into the backfield and prevented him from doing so. Havrilak (who played quarterback at Bucknell) then threw a pass intended for Mackey, but it was caught instead by Hinton, who promptly took off for the end zone. However, as Hinton raced toward a touchdown, Cowboys defensive back Cornell Green stripped him of the ball from behind at the 11. The loose ball bounced wildly in the field of play, evading recovery attempts by six different players until it was eventually pushed 20 yards through the back of the end zone for a touchback, thus returning the ball to the Cowboys at their 20.
Three plays after the turnover the Cowboys returned the favor. Morton threw a pass that was intercepted by Colts safety Rick Volk, who returned the ball 30 yards to the Cowboys' 3 (Morrall later referred to that play as the play of the game). Two plays later, the Colts scored on a two-yard touchdown run by Nowatzke. O'Brien's extra point sailed through the uprights to tie the game at 13–13. (O'Brien says he was much calmer and more confident on this extra point than on the first one, which was blocked.)
The next two possessions ended in traded punts, with the Cowboys eventually taking over in excellent field position at the Colts 48-yard line with less than two minutes left in the game.
On the second play of this potential game-winning drive, Dallas committed a 15-yard holding penalty (its second offensive holding of the game) on the 42-yard line, which was a spot foul, pushing the team all the way back to its own 27-yard line (the NFL did not reduce the penalty for offensive holding to 10 yards until 1974). Then, on second down and 35, Morton threw a pass that slipped through the hands of running back Dan Reeves and bounced for an interception into the arms of linebacker Mike Curtis, who then returned the ball 13 yards to the Cowboys' 28-yard line.
Two plays later, with nine seconds left in the game, O'Brien kicked the go-ahead 32-yard field goal, giving Baltimore a 16–13 lead. O'Brien says he was "on automatic" and was so calm and concentrating so hard that he didn't hear anything and saw only the ball. In an enduring image from Super Bowl V, after O'Brien's game-winning field goal, Bob Lilly took off his helmet and hurled it through the air in disgust.
The Cowboys received the ball again on their 40 with a few seconds remaining after O'Brien's ensuing squib kick, but Morton's pass to Garrison was intercepted by Logan at the Baltimore 29, and time expired.
Morrall was the top passer of the game, with 7 out of 15 completions for 147 yards, with 1 interception. Before being knocked out of the game, Unitas completed 3 out of 9 passes for 88 yards and a touchdown, with 2 interceptions. Morton completed more passes than Morrall and Unitas combined (12), but finished the game with 118 fewer passing yards (127), and was intercepted 3 times (all in the fourth quarter). Mackey was the top receiver of the game with 2 receptions for 80 yards and a touchdown. Nowatzke was the Colts' leading rusher with 33 yards and a touchdown, while also catching a pass for 47 yards. Dallas running back Walt Garrison was the leading rusher of the game with 65 rushing yards, and added 19 yards on 2 pass receptions.
Referencing the numerous turnovers, Morrall said, "It really was a physical game. I mean, people were flying into one another out there." "It was really a hard-hitting game," wrote O'Brien. "It wasn't just guys dropping the ball. They fumbled because they got the snot knocked out of them." Said Tom Landry:
I haven't been around many games where the players hit harder. Sometimes people watch a game and see turnovers and they talk about how sloppy the play was. The mistakes in that game weren't invented, at least not by the people who made them. Most were forced.
"We figured we could win if our offense didn't put us into too many holes", said 35-year-old Colts lineman Billy Ray Smith, who was playing in his last NFL game, "Let me put it this way, they didn't put us into any holes we couldn't get out of".
Colts defensive end Bubba Smith would later refuse to wear his Super Bowl V ring because of the "sloppy" play.
Don McCafferty became the first rookie head coach to win a Super Bowl. The feat was not repeated until George Seifert led the San Francisco 49ers to victory in Super Bowl XXIV. McCafferty was also the first Super Bowl-winning coach who did not wear coat and tie, opting for a short-sleeved T-shirt with a mock turtleneck.
Two rule changes that were adopted before the 1974 season were the reduction of the penalty for offensive holding from 15 yards to 10, and the enforcement of offensive holding penalties that occur behind the line of scrimmage from the line of scrimmage rather than from the spot of the foul.
These would have reduced the severity of the two Dallas offensive holding penalties in Super Bowl V.
Sources:"The NFL's Official Encyclopedic History of Professional Football", (1973), p. 149, Macmillan Publishing Co. New York, NY, LCCN 73-3862, NFL.com Super Bowl V, Super Bowl V Play Finder Bal, Super Bowl V Play Finder Dal
(Statistics tables omitted; footnotes: 1 completions/attempts, 2 carries, 3 long gain, 4 receptions, 5 times targeted.)
The following records were set or tied in Super Bowl V, according to the official NFL.com boxscore, the 2016 NFL Record & Fact Book, and the Pro-Football-Reference.com game summary. Some records must meet a required minimum number of attempts in order to be recognized; the minimums are shown in parentheses.
Turnovers are defined as the number of times losing the ball on interceptions and fumbles.
"Note: A seven-official system was not used until 1978, also Back Judge and Field swapped titles in 1998." | https://en.wikipedia.org/wiki?curid=29131 |
Super Bowl VI
Super Bowl VI was an American football game between the National Football Conference (NFC) champion Dallas Cowboys and the American Football Conference (AFC) champion Miami Dolphins to decide the National Football League (NFL) champion for the 1971 season. The Cowboys defeated the Dolphins by the score of 24–3 to win their first Super Bowl. The game was played on January 16, 1972, at Tulane Stadium in New Orleans, Louisiana, the second time the Super Bowl was played in that city. Despite the southerly location, it was unseasonably cold at the time, with the kickoff air temperature of 39 °F (4 °C) making this the coldest Super Bowl played.
Dallas, in its second Super Bowl appearance, entered the game with a reputation of not being able to win big playoff games such as Super Bowl V and the 1966 and 1967 NFL Championship Games prior to the 1970 AFL–NFL merger. They posted an 11–3 record during the 1971 regular season before defeating the Minnesota Vikings and the San Francisco 49ers in the playoffs. The Dolphins were making their first Super Bowl appearance after building a 10–3–1 regular season record, including eight consecutive wins, and posting postseason victories over the Kansas City Chiefs and the Baltimore Colts.
The Cowboys dominated Super Bowl VI, setting Super Bowl records for the most rushing yards (252), the most first downs (23), and the fewest points allowed (3). For the next 47 years, they would be the only team to prevent their opponent from scoring a touchdown in the Super Bowl, a feat matched by the 2018 New England Patriots in Super Bowl LIII. The game was close in the first half, with the Cowboys only leading 10–3 at halftime. But Dallas opened the third quarter with a 71-yard, 8-play touchdown drive, and then Dallas linebacker Chuck Howley's 41-yard interception return in the fourth quarter set up another score. Cowboys quarterback Roger Staubach, who completed 12 out of 18 passes for 119 yards, threw 2 touchdown passes, and rushed 5 times for 18 yards, was named the Super Bowl's Most Valuable Player.
This was the last Super Bowl to be blacked out in the TV market in which the game was played. Under the NFL's unconditional blackout rules at the time, the Super Bowl could not be broadcast locally even though the game was a sellout and the local team had not advanced to it. The following year, the league changed its rules to allow games to be broadcast in the local market if sold out 72 hours in advance. It was also the last Super Bowl, and the last NFL game overall, played with the hash marks (also called the inbound lines) set 40 feet apart (20 yards from the sidelines); the next season, they were brought in to 18 feet, 6 inches apart, the width of the goalposts, where they remain.
The NFL awarded Super Bowl VI to New Orleans on March 23, 1971 at the owners meetings held in Palm Beach, Florida.
The Cowboys entered the season still having the reputation of "not being able to win the big games" and being "next year's champion". The Super Bowl V loss added more fuel to that widely held view. As in the previous season, Dallas had a quarterback controversy as Staubach and Craig Morton alternated as starting quarterback (in a game 7 loss to the Bears, Morton and Staubach alternated plays). The Cowboys were 4–3 at the season midpoint, including a 24–14 loss to the New Orleans Saints at Tulane Stadium. But after head coach Tom Landry settled on Staubach, the Cowboys won their last seven regular season games to finish with an 11–3 record.
Staubach finished the regular season as the NFL's top-rated passer (101.8) by throwing for 1,882 yards, 15 touchdowns, and only 4 interceptions. He was also a terrific rusher, gaining 343 yards and 2 touchdowns on 41 carries. Dallas also had an outstanding trio of running backs, Walt Garrison, Duane Thomas, and Calvin Hill, who rushed for a combined total of 1,690 yards and 14 touchdowns during the season. Garrison led the team in receptions during the season (40). (Thomas, upset that the Cowboys would not renegotiate his contract after his excellent rookie year, had stopped talking to the press and to almost everyone on the team.) Wide receivers Bob Hayes and Lance Alworth also provided a deep threat, catching a combined total of 69 passes for 1,327 yards and 10 touchdowns. The offensive line, anchored by all-pro tackle Rayfield Wright, Pro Bowlers John Niland and Ralph Neely, and future Hall of Famer Forrest Gregg, was also a primary reason for their success on offense. Neely had broken his leg in November in a dirt-bike accident, and was replaced first by Gregg and then by Tony Liscio, who came out of retirement.
The Dallas defense (nicknamed the "Doomsday Defense") had given up only one touchdown in the last 14 quarters prior to the Super Bowl. Their defensive line was anchored by Pro Bowl defensive tackle Bob Lilly, who excelled at pressuring quarterbacks and breaking up running plays. Dallas also had an outstanding trio of linebackers: Pro Bowler Chuck Howley, who recorded 5 interceptions and returned them for 122 yards; Dave Edwards, who had 2 interceptions; and Lee Roy Jordan, who also recorded 2 interceptions. The Cowboys secondary was led by two future Hall of Fame cornerbacks, Herb Adderley (6 interceptions for 182 return yards) and Mel Renfro (4 interceptions for 11 yards). Safeties Cliff Harris and Pro Bowler Cornell Green combined for 4 interceptions. Harris added 29 kickoff returns for 823 yards, an average of 28.4 yards per return (3rd in the NFL). They were also helped out by weak side linebacker D.D. Lewis.
The Dolphins, who advanced to the Super Bowl just five years after their founding in 1966, were based primarily around their league-leading running attack, led by running backs Larry Csonka and Jim Kiick. Csonka rushed for 1,051 yards, averaging over five yards per carry, and scored seven touchdowns. The versatile Kiick rushed for 738 yards and three touchdowns, and was second on the Dolphins in receiving with 40 receptions for 338 yards. Between the two of them, they fumbled only once (by Kiick) during the regular season. But Miami also had a threatening passing game. Quarterback Bob Griese, the AFC's leading passer and most valuable player, put up an impressive performance during the season, completing 145 passes for 2,089 yards and 19 touchdowns with only 9 interceptions. Griese's major weapon was wide receiver Paul Warfield, who caught 43 passes for 996 yards (a 23.2 yards per catch average) and a league-leading 11 touchdowns. The Dolphins also had an excellent offensive line to open up holes for their running backs and protect Griese on pass plays, led by future Hall of Fame guard Larry Little.
Miami's defense was a major reason why the team built a 10–3–1 regular season record, including eight consecutive wins. Future Hall of Fame linebacker Nick Buoniconti was a major force reading and stopping plays, while safety Jake Scott recorded 7 interceptions.
Before this season, the Dolphins had never won a playoff game in franchise history, but they surprised the entire NFL by advancing to the Super Bowl with wins against the two previous Super Bowl champions.
First Miami defeated the Kansas City Chiefs (winners of Super Bowl IV), 27–24, in the longest game in NFL history with kicker Garo Yepremian's game-winning field goal after 22 minutes and 40 seconds of overtime play in the final Chiefs game at Municipal Stadium. Later, Miami shut out the defending Super Bowl champion Baltimore Colts, 21–0, in the AFC Championship Game, with safety Dick Anderson intercepting 3 passes from Colts quarterback Johnny Unitas and returning one of them for a 62-yard touchdown.
Meanwhile, the Cowboys marched to the Super Bowl with playoff wins over the Minnesota Vikings, 20–12 in the NFC Divisional Playoffs, and the San Francisco 49ers, 14–3 in the NFC Championship Game, giving up only one touchdown in the two games.
Soon after the Dolphins' win in the AFC Championship Game, Shula received a phone call at his home from President Richard Nixon at 1:30 in the morning. Nixon had a play he thought would work, a particular pass to Warfield. (That particular play, which was called late in the first quarter, was broken up by Mel Renfro.)
When asked about the Dolphins' defensive team prior to Super Bowl VI, Landry said that he could not recall any of the players' names, but they were a big concern to him. Over the years this remark has been regarded as the origin of the nickname "No-Name Defense". However, it was Miami defensive coordinator Bill Arnsparger who had originally given his squad the nickname after the Dolphins had beaten the Baltimore Colts in the AFC Championship.
According to Tom Landry, the Cowboys were very confident. "When they talked among themselves they said there was no way they were going to lose that game."
The Cowboys used the New Orleans Saints' practice facility in Metairie as their training headquarters for the game. The Dolphins split their practices between Tulane Stadium and Tad Gormley Stadium in New Orleans' City Park. Dallas' team hotel was the Hilton across from New Orleans International Airport in Kenner, and Miami lodged at the Fontainebleau Motor Hotel in New Orleans' Mid-City neighborhood.
On Media Day, Duane Thomas refused to answer any questions and sat silently until his required time was up. Roger Staubach surmises that Duane Thomas would have been named MVP if he had cooperated with the press prior to the game. In the Cowboys' locker room after the game, flustered CBS reporter Tom Brookshier asked Duane Thomas a long-winded question, the gist of which was "You're fast, aren't you?" Thomas, who had shunned the press all season, simply said "Evidently." Thomas became the first player to score touchdowns in back-to-back Super Bowls, having a receiving touchdown in Super Bowl V.
Dolphins safety Jake Scott entered Super Bowl VI with a broken left hand. He broke his right wrist during the game but never came out. With both hands in casts for three months, he said "When I go to the bathroom, that's when I find out who my real friends are."
This was the first Super Bowl to match two teams that played their home games on artificial turf. Both of the Cowboys' home stadiums of 1971, the Cotton Bowl and Texas Stadium, had turf, as did the Dolphins' Orange Bowl (specifically Poly-Turf). The previous year, the Cowboys became the first team playing its home games on turf to make it to a Super Bowl.
Through Super Bowl LIII, this is the only Super Bowl in which both teams played their home games in states which were members of the Confederate States of America during the Civil War. The Washington Redskins, who faced the Dolphins in Super Bowl VII and Super Bowl XVII, have their training facilities in Virginia, which was a Confederate state during the Civil War, but have never played home games in the state, moving from Washington, D.C. proper to Maryland in 1997.
This game was originally scheduled to be the last to be played in Tulane Stadium. It was hoped the Louisiana Superdome would be ready in time for the 1972 NFL season. However, political wrangling led to a lengthy delay in construction, and groundbreaking did not take place until August 11, 1971, five months before this game. The Superdome was not completed until August 1975, forcing Super Bowl IX to be moved to Tulane Stadium. That Super Bowl proved to be the final NFL game in the stadium, which was demolished in late 1979.
The weather at kickoff was sunny and windy, with a temperature of 39 °F (4 °C), making this the coldest Super Bowl to date.
The game was broadcast in the United States by CBS with play-by-play announcer Ray Scott and color commentator Pat Summerall. Although Tulane Stadium was sold out for the game, unconditional blackout rules in the NFL prohibited the live telecast from being shown in the New Orleans area. This was the last Super Bowl to be blacked out in the TV market in which the game was played. The game was, however, shown in Baton Rouge, which was normally blacked out during Saints home games.
The following year, the NFL allowed Super Bowl VII to be televised live in the host city (Los Angeles) when all tickets were sold. In 1973, the league changed its blackout policy to allow any game to be broadcast in the home team's market if sold out 72 hours in advance. The blackout rule has been suspended since 2015.
The night before the game, Joe Frazier successfully defended his heavyweight boxing championship with a fourth-round knockout of Terry Daniels at the Rivergate Convention Center, which was approximately one mile south of the construction site for the Superdome on Poydras Street.
This game was featured in the movie "Where the Buffalo Roam" where the protagonist character Hunter S. Thompson is sent to cover the game by "Rolling Stone" magazine, although the host site set in the movie is Los Angeles Memorial Coliseum (site of Super Bowl VII), not Tulane Stadium.
The Tyler Junior College Apache Belles drill team performed during the pregame and halftime festivities. Later, the U.S. Air Force Academy Chorale sang the national anthem. This was followed by an eight-plane flyover of F-4 Phantoms from Eglin Air Force Base, which featured a plane in the missing man formation.
The halftime show was a "Salute to Louis Armstrong" featuring jazz singer Ella Fitzgerald, actress and singer Carol Channing, trumpeter Al Hirt and the U.S. Marine Corps Drill Team. Armstrong, a New Orleans native, died in July 1971.
Despite being the second Super Bowl after the AFL–NFL merger, Super Bowl VI was the first one to have the NFL logo painted at the 50-yard line. The NFL would do this for all but one Super Bowl after this until Super Bowl XXXI (the exception was Super Bowl XXV, when the Super Bowl logo was painted at midfield instead).
According to Roger Staubach, the Cowboys' game plan was to neutralize the Dolphins' key offensive and defensive players—Paul Warfield and Nick Buoniconti. Warfield was double-teamed by Green and Renfro. "They pretty much shut him down", wrote Staubach. Since the running game was the key to the Cowboys' offense, they wanted to take the quick-reacting Buoniconti out of each play. Two linemen, usually Niland and center Dave Manders, were assigned to block Buoniconti. Combined with counterplays and the excellent cutback running of Thomas, this tactic proved very successful. Buoniconti sustained a concussion and played through it in the second half, during which he lost track of the score, thinking it was still 10–3 when it had become 24–3.
Miami's defense was designed to stop Staubach's scrambling. According to Staubach, although his scrambling was shut down, this did not work to the Dolphins' benefit because it opened things up for the other backs.
Miami won the coin toss and elected to receive. Neither team could mount a drive on its first possession. On the first play of the Dolphins' second possession, Larry Csonka, on his first carry of the game, gained 12 yards on a sweep aided by a big block by Larry Little on Herb Adderley. That would be his longest gain of the day. On the next play, Csonka fumbled a handoff from Bob Griese, his first fumble of the season, and it was recovered by linebacker Chuck Howley at the Cowboys 48-yard line. A pair of runs for 18 total yards by Walt Garrison put Dallas within field goal range, but Staubach was sacked by Jim Riley and Bob Heinz for a 12-yard loss. However, Staubach found Bob Hayes open for an 18-yard pass and then passed to Duane Thomas for 11 yards to bring up first and goal. On third and goal, Dick Anderson made a great play to keep Thomas out of the end zone. Dallas kicker Mike Clark kicked a 9-yard field goal to give the Cowboys a 3–0 lead.
On the third play of the Dolphins' next possession at their own 38-yard line, Griese was sacked by Bob Lilly for a Super Bowl record 29-yard loss, which still stands as the longest negative play from scrimmage in Super Bowl history (A picture of Griese being chased by Larry Cole, Lilly and Jethro Pugh is the game's most famous photograph).
Early in the second quarter, Miami drove to the Cowboys 42-yard line with the aid of a 20-yard reception by receiver Howard Twilley, but the drive stalled and ended with no points after kicker Garo Yepremian missed a 49-yard field goal attempt.
Starting with 6:15 left in the period, Dallas drove 76 yards in 10 plays, including a 21-yard reception by Lance Alworth and Calvin Hill's three carries for 25 yards, and then scored on a 7-yard touchdown pass from Staubach to Alworth to increase their lead to 10–0. Miami started the ensuing drive with just 1:15 left in the half, and quarterback Bob Griese completed three consecutive passes, two to receiver Paul Warfield and one to running back Jim Kiick, for 44 total yards to reach the Dallas 24-yard line. On the next play Griese threw to Warfield, who was open at the 2-yard line, but the ball was deflected by Green and bounced off Warfield's chest. Miami had to settle for Yepremian's 31-yard field goal to cut the Dolphins' deficit to 10–3 going into halftime.
But Dallas dominated the second half, preventing any chance of a Miami comeback. Dallas reasoned that Miami would adjust to stop the Cowboys' inside running game, which had been so successful in the first half, so the Cowboys decided to run outside. The Cowboys opened the third period with a 71-yard, 8-play drive, which included four runs by Thomas for 37 yards, a reverse by Hayes for 16 yards, and only one pass, scoring on Thomas' 3-yard sweep to make the score 17–3. This seemed to fire up the Dallas defense, which prevented Miami from getting a single first down in the entire third quarter. The farthest Miami advanced in the third quarter was its own 42-yard line as Griese and the offense were, as Don Shula put it, "destroyed." Late in the third quarter, Jake Scott hit Staubach on a blitz during an incomplete pass and shook him up, but Staubach returned in the fourth.
Miami managed to advance to midfield early in the final period, opening the fourth quarter with their first third down conversion of the game. Howley ended the drive, however, by intercepting a pass from Griese intended for Kiick in the flat. After returning the ball 41 yards, Howley tripped and fell at the Dolphins 9-yard line with no one near him. But three plays later, Staubach threw a 7-yard touchdown pass to tight end Mike Ditka, increasing the Dallas lead to 24–3 with twelve minutes left in the game.
Miami began their next possession at their own 23-yard line and mounted only their third sustained drive of the game, reaching the Dallas 16-yard line in six plays. However, Griese fumbled the snap and the ball was recovered by Cowboys left end Larry Cole at the 20-yard line. The Cowboys then mounted an eleven-play drive to the Miami 1-yard line which featured just one pass and a fake field goal attempt on fourth-and-one at the Miami 20-yard line. However, on first-and-goal at the 1-yard line, Hill fumbled while attempting to dive across the goal line, and the ball was recovered at the 4-yard line by Dolphins defensive tackle Manny Fernandez with just under two minutes left. Miami then ran four meaningless plays to end the game.
Staubach became the first quarterback of a winning team in the Super Bowl to play the entire game. Wrote Staubach, "I can say that I don't think I ever felt any better as an athlete than how I felt after that game..." Nick Buoniconti wrote, "I was knocked senseless...The Cowboys seemed to be moving so much faster than we were...We were overmatched psychologically as well as physically." Jim Kiick said, "Dallas wasn't that much better, but football is momentum. We lost it in the first quarter when we fumbled and they scored, and we never got it back." Said the Dolphins' Howard Twilley:
It's so hard to figure. We went in confident. We really thought we'd win and win handily. Something happened, though, during the week. I guess it was that week. The week has its own momentum, like nothing we'd been in before...[Shula] said we'd been embarrassed. He said we didn't even compete...That's the sickest feeling I've ever had.
Said Cornell Green, "The difference between the Dolphins and Cowboys was that the Dolphins were just happy to be in the game and the Cowboys came to win the game."
Griese completed the same number of passes as Staubach (12) and threw for 15 more yards (134), but threw no touchdown passes and was intercepted once. Csonka and Kiick were held to just 80 combined rushing yards (40 yards each) and no touchdowns, and lost one fumble on 19 carries. Warfield was limited to just 4 receptions for 39 yards. Thomas was the top rusher of the game with 19 carries for 95 yards and a touchdown. He also caught 3 passes for 17 yards. Dallas running back Walt Garrison added 74 rushing yards and caught 2 passes for 11 yards.
The Dallas Cowboys became the first team to win the Super Bowl after losing it the previous year. The Miami Dolphins would duplicate this feat the following season by winning Super Bowl VII. Super Bowl VI would be the only game the Dolphins lost during the calendar year 1972, as they went undefeated the following season before winning Super Bowl VII. Miami's 3 points set a Super Bowl record for fewest points scored, which was tied by the Los Angeles Rams in Super Bowl LIII in 2019.
Sources:"The NFL's Official Encyclopedic History of Professional Football", (1973), p. 153, Macmillan Publishing Co. New York, LCCN 73-3862, NFL.com Super Bowl VI, Super Bowl VI Play Finder Dal, Super Bowl VI Play Finder Mia, Super Bowl VI Play by Play
The following records were set or tied in Super Bowl VI, according to the official NFL.com box score and the Pro-Football-Reference.com game summary. Some records must meet NFL minimum numbers of attempts to be recognized; the minimums are shown in parentheses.
Turnovers are defined as the number of times losing the ball on interceptions and fumbles.
"Note: A seven-official system was not used until the season" | https://en.wikipedia.org/wiki?curid=29132 |
Super Bowl VII
Super Bowl VII was an American football game between the American Football Conference (AFC) champion Miami Dolphins and the National Football Conference (NFC) champion Washington Redskins to decide the National Football League (NFL) champion for the 1972 season. The Dolphins defeated the Redskins by the score of 14–7, and became the first and still the only team in NFL history to complete a perfect undefeated season. They also remain the only Super Bowl champion to win despite having been shut out in the second half of the game. The game was played on January 14, 1973 at the Los Angeles Memorial Coliseum in Los Angeles, the second time the Super Bowl was played in that city. At kickoff, the temperature was 84 °F (29 °C), making the game the warmest Super Bowl.
This was the Dolphins' second Super Bowl appearance; they had lost Super Bowl VI to Dallas the previous year. The Dolphins posted an undefeated 14–0 regular season record before defeating the Cleveland Browns and Pittsburgh Steelers in the playoffs. The Redskins were making their first Super Bowl appearance after posting an 11–3 regular-season record and playoff victories over the Green Bay Packers and Dallas Cowboys. Despite being undefeated, the Dolphins were actually one-point underdogs, largely based on the weakness of their regular-season schedule.
Super Bowl VII was largely dominated by the Dolphins, and is the second-lowest-scoring Super Bowl to date with a total of only 21 points (three touchdowns and three extra points), second only to the 13–3 score of Super Bowl LIII. The only real drama occurred during the final minutes of the game, in what was later known as "Garo's Gaffe". Miami attempted to cap their 17–0 perfect season with a 17–0 shutout by means of a 42-yard field goal by Garo Yepremian, but the kick was blocked, jeopardizing both the game and the season. Instead of falling on the loose ball, the Dolphins kicker picked it up and attempted a forward pass, but the ball slipped from his hand and he batted it into the air, where Redskins cornerback Mike Bass (Yepremian's former teammate on the Detroit Lions years earlier) caught it and returned it 49 yards for a touchdown. This remains the longest one team has been shut out in a Super Bowl, as Washington was held scoreless until 2:07 remained in the fourth quarter. Because of the turnover and score, what had been a Miami-dominated game became close, and the Dolphins had to stop Washington's final drive for the tying touchdown as time expired.
Dolphins safety Jake Scott was named Most Valuable Player. He recorded two interceptions for 63 return yards, including a 55-yard return from the end zone during the fourth quarter. Scott became the second defensive player in Super Bowl history (after linebacker Chuck Howley in Super Bowl V) to earn a Super Bowl MVP award.
The NFL awarded Super Bowl VII to Los Angeles on March 21, 1972.
The Dolphins went undefeated during the season, despite losing their starting quarterback. In the fifth game of the regular season, starter Bob Griese suffered a fractured right leg and dislocated ankle. In his place, 38-year-old Earl Morrall, a 17-year veteran, led Miami to victory in their nine remaining regular-season games, and was the 1972 NFL Comeback Player of the Year. Morrall had previously played for Dolphins head coach Don Shula when they were both with the Baltimore Colts, where Morrall backed up quarterback Johnny Unitas and started in Super Bowl III.
But Miami also had the same core group of young players who had helped the team advance to the previous year's Super Bowl VI. (The only Dolphins starter in Super Bowl VII over the age of 30 was 32-year-old Nick Buoniconti.) The Dolphins still had a powerful running attack, spearheaded by running backs Larry Csonka, Jim Kiick and Eugene "Mercury" Morris. (Morris, who in previous seasons had been used primarily as a kick returner, took over the starting halfback position from Kiick, who had been the starter the previous four years. However, the more-experienced Kiick would start in Super Bowl VII.) Csonka led the team with 1,117 yards and six touchdowns. Kiick contributed 521 yards and five touchdowns, and also caught 21 passes for 147 yards and another touchdown. Morris, a breakaway runner, rushed for 1,000 yards, caught 15 passes for 168 yards, added another 334 yards returning kickoffs, and scored a league-leading 12 rushing touchdowns. Overall, Miami set a record with 2,960 total rushing yards during the regular season, and became the first team ever to have two players rush for 1,000 yards in one season. Miami led the NFL in points scored (385).
Receiver Paul Warfield once again provided the run-based Dolphins with an effective deep-threat option, catching 29 passes for 606 yards, an average of 20.9 yards per catch. Miami's offensive line, led by future Hall of Famers Jim Langer and Larry Little, was also a key factor in the Dolphins' offensive production. Miami's "No-Name Defense" (a nickname inspired by Dallas Cowboys head coach Tom Landry when he could not recall the names of any Dolphins defenders just before Super Bowl VI), led by future Hall of Fame linebacker Nick Buoniconti, allowed the fewest points in the league during the regular season (171), and ranked second in the NFL with 26 interceptions. Safety Jake Scott recorded five interceptions, while Lloyd Mumphord had four picks and safety Dick Anderson had three interceptions and led the NFL with five fumble recoveries. Because of injuries to defensive linemen (at the beginning of the season the Dolphins were down to four healthy players at the position), defensive coordinator Bill Arnsparger created what he called the "53" defense, in which the versatile Bob Matheson (number 53) would be used as either a defensive end in the standard 4–3 defense or as a fourth linebacker in a 3–4 defense, with Manny Fernandez at nose tackle. As a linebacker, Matheson would either rush or drop back into coverage. Said Nick Buoniconti, "Teams would be totally confused." Linebacker Doug Swift was also a playmaker with three interceptions and a fumble recovery.
The Dolphins' undefeated, untied regular season was the third in NFL history, and the first of the post-merger era. The previous two teams to do so, the 1934 and 1942 Chicago Bears, both lost the NFL Championship game. The Cleveland Browns also completed a perfect season in 1948, including a league championship, while part of the All-America Football Conference (AAFC), but this feat is only recognized by the Pro Football Hall of Fame (the NFL does not officially recognize any AAFC records).
Following the death of Redskins head coach Vince Lombardi 17 days prior to the start of the 1970 season, Washington finished 6–8 under interim coach Bill Austin. Shortly after the conclusion of the 1970 season, the Redskins hired George Allen as their head coach, hoping he could turn the team's fortunes around. Allen's philosophy was that veteran players win games, so immediately after taking over the team, he traded away most of the younger team members and draft choices for older, more established players. His motto was "The future is now." Washington quickly became the oldest team in the NFL and earned the nickname "The Over-the-Hill Gang." The average age of starters was 31 years old. However, Allen's strategy turned the Redskins around, as the team improved to a 9–4–1 record in 1971, and finished the 1972 season with an NFC-best 11–3 record.
Washington was led by 33-year-old quarterback Billy Kilmer, who completed 120 out of 225 passes for 1,648 yards and a league-leading 19 touchdowns during the regular season, with only 11 interceptions, giving him an NFL-best 84.8 passer rating. Kilmer had started the first three games of the season and was replaced in Game 4 by 38-year-old Sonny Jurgensen, but reclaimed the job when Jurgensen was lost for the season with an Achilles tendon injury. The Redskins' powerful rushing attack featured two backs. Larry Brown gained 1,216 yards (first in the NFC and second in the NFL, behind only O.J. Simpson's 1,251 rushing yards) on 285 carries during the regular season, caught 32 passes for 473 yards and scored 12 touchdowns, earning him both the NFL Most Valuable Player Award and the NFL Offensive Player of the Year Award. Charley Harraway ran for 567 yards on 148 carries. Future Hall of Fame wide receiver Charley Taylor and wide receiver Roy Jefferson provided the team with a solid deep threat, combining for 84 receptions, 1,223 receiving yards and 10 touchdowns. Veteran tight end Jerry Smith added 21 receptions for 353 yards and 7 touchdowns.
Washington also had a solid defense led by linebacker Chris Hanburger (four interceptions, 98 return yards, one touchdown) and cornerbacks Pat Fischer (four interceptions, 61 return yards) and Mike Bass (three interceptions, 53 return yards).
Morrall led the Dolphins to a 20–14 playoff win over the Cleveland Browns. However, Griese started the second half of the AFC Championship Game to help rally the Dolphins to a 21–17 victory over the Pittsburgh Steelers. A fake punt by Miami's Larry Seiple made the difference.
Meanwhile, the Redskins advanced to the Super Bowl without having allowed a touchdown in either their 16–3 playoff win over the Green Bay Packers or their 26–3 NFC Championship Game victory over the Cowboys.
Much of the pregame hype surrounded the chances of the Dolphins completing a perfect, undefeated season, as well as their quarterback controversy between Griese and Morrall. Griese was eventually picked to start the Super Bowl because Shula felt more comfortable with Morrall as the backup just in case Griese was ineffective following his recent inactivity. Miami was also strongly motivated to win the Super Bowl after having been humiliated by the Dallas Cowboys in Super Bowl VI. Wrote Nick Buoniconti, "There was no way we were going to lose the Super Bowl; there was no way." Head coach Don Shula, loser of Super Bowls III and VI, was also determined to win. Although Shula was relaxed and charming when dealing with the press, it was all an act; Dolphins players described him as "neurotic" and "absolutely crazy." He was also sick during Super Bowl week with the flu, which he kept secret.
Still, many favored the Redskins to win the game because of their group of "Over the Hill Gang" veterans, and because Miami had what some considered an easy schedule (only two opponents, Kansas City and the New York Giants, posted winning records, and both of those teams were 8–6) and had struggled in the playoffs. While Washington had easily crushed both playoff opponents, Miami had narrowly defeated theirs. Most surprisingly, the Dolphins needed to mount a fourth-quarter comeback against the Browns, whom they were heavily favored to defeat.
Allen had a reputation for spying on opponents. A school overlooked the Rams' facility that the NFL had designated as the Dolphins' practice field, so the Dolphins found a more secure field at a local community college, where team employees inspected the trees every day for spies.
Miami cornerback Tim Foley, a future broadcaster who was injured and would not play in Super Bowl VII, was writing daily stories for a Miami newspaper and interviewed George Allen and his players, provoking charges from Allen that Foley was actually spying for Shula.
Allen was extremely uptight and prickly dealing with the press Super Bowl week, and accused the press of ruining his team's preparation. Allen pushed the team so hard in practices that the players joked among themselves that they should have left Allen in Washington.
During practice the day before Super Bowl VII, the Dolphins' 5'7" 150-pound kicker, Garo Yepremian, relaxed by throwing 30-yard passes to Dave Shula, Don Shula's son. During the pregame warmups, he consistently kicked low line drives and couldn't figure out why.
This was the first Super Bowl in which neither coach wore a tie. Shula wore a coat and tie for Super Bowl VI, but wore a white short-sleeved polo shirt for this game, as did Allen. For Super Bowl VIII, Shula would wear a sportcoat, but with a shirt underneath that was similar to the one he wore in Super Bowl VII.
The game was broadcast in the United States by NBC with play-by-play announcer Curt Gowdy, color commentator Al DeRogatis and sideline reporter Bill Enis. This was Enis' final Super Bowl telecast before his death on December 14, 1973.
This was the first Super Bowl to be televised live in the city in which it was being played. Despite unconditional blackout rules in the NFL that normally would have prohibited the live telecast from being shown locally, the NFL allowed the game to be telecast in the Los Angeles area on an experimental basis when all tickets for the game were sold. The league then changed its blackout rules the following season to allow any game sold out at least 72 hours in advance to be televised in the host market. No subsequent Super Bowl has ever been blacked out under this rule, as all have been sold out (owing to its status as the marquee event on the NFL schedule, meaning that tickets sell out quickly).
This game is featured on "NFL's Greatest Games" under the title "17–0".
The pregame show was a tribute to Apollo 17, the sixth and last mission to land on the Moon and the final one of Project Apollo. The show featured the Michigan Marching Band and the crew of Apollo 17 who exactly one month earlier had been the final humans to date to leave the Moon.
Later, the Little Angels of Chicago's Angels Church performed the national anthem.
The halftime show, featuring Woody Herman and the Michigan Marching Band along with The Citrus College Singers and Andy Williams, was titled "Happiness Is".
According to Shula, the Dolphins' priority on defense was to stop Larry Brown and force Billy Kilmer to pass. Buoniconti looked at Washington's offensive formation on each play and shifted the defense so it was strongest where he felt Brown would run. This strategy proved successful. Washington's offensive line also had trouble handling Dolphins' defensive tackle/nose tackle Manny Fernandez, who was very quick. "He beat their center Len Hauss like a drum", wrote Buoniconti. Miami's defenders had also drilled in maintaining precise pursuit angles on sweeps to prevent the cut-back running that Duane Thomas had used to destroy the Dolphins in Super Bowl VI.
Washington's priority on defense was to disrupt Miami's ball-control offense by stopping Larry Csonka. They also intended to shut down Paul Warfield by double-covering him.
With a game-time kickoff temperature of 84 °F (29 °C), this is the warmest Super Bowl to date. It came the year after the coldest, Super Bowl VI, which registered a kickoff temperature of 39 °F (4 °C).
As they had in Super Bowl VI, Miami won the toss and elected to receive. Most of the first quarter was a defensive battle with each team punting on their first two possessions. The Dolphins would, however, get two key breaks. Howard Kindig appeared to mishandle the snap on their first punt from the Miami 27 and lose the ball to the Redskins' Harold McLinton, but McLinton was called for slapping at the ball while it was being snapped, for a 5-yard penalty. On the replay of the down, Larry Seiple got the kick away safely. Later, after stopping Washington for the second time, safety Jake Scott did not call for a fair catch, as he had not been told to do so by Dick Anderson. He fumbled, but fortunately Anderson made the recovery. Miami then started this drive on its own 37-yard line with 2:55 left in the first quarter. Running back Jim Kiick started out the drive with two carries for eleven yards. Then quarterback Bob Griese completed an 18-yard pass to wide receiver Paul Warfield to reach the Washington 34-yard line. After two more running plays, on third and four Griese threw a 28-yard touchdown pass to receiver Howard Twilley for his only catch of the game. Twilley fooled Pat Fischer by faking a route to the inside, then broke to the outside and caught the ball at the five-yard line, dragging Fischer into the end zone. "Griese read us real good all day", said Fischer. Yepremian's extra point gave the Dolphins a 7–0 lead with one second remaining in the period. (Yepremian noticed that the kick was too low, just like his practice kicks).
On the third play of the Redskins' ensuing drive, Scott intercepted quarterback Billy Kilmer's pass down the middle intended for Taylor and returned it eight yards to the Washington 47-yard line. However a 15-yard illegal man downfield penalty on left guard Bob Kuechenberg nullified a 20-yard pass completion to tight end Marv Fleming on the first play after the turnover, and the Dolphins were forced to punt after three more plays.
After the Redskins were forced to punt again, Miami reached the 47-yard line with a 13-yard run by Larry Csonka and an 8-yard run by Kiick. But on the next play, Griese's 47-yard touchdown pass to Warfield was nullified by an illegal procedure penalty on receiver Marlin Briscoe (Briscoe's first, and only, play of the game). On third down, Redskins defensive tackle Diron Talbert sacked Griese for a 6-yard loss and the Dolphins had to punt.
The Redskins then advanced from their own 17-yard line to the Miami 48-yard line (their first incursion into Miami territory) with less than two minutes left in the half. But on third down and three yards to go, Dolphins linebacker Nick Buoniconti intercepted Kilmer's pass to tight end Jerry Smith at the Miami 41-yard line and returned it 32 yards to the Washington 27-yard line. From there, Kiick and Csonka each ran once for three yards, and then Griese completed a 19-yard pass (his sixth completion in six attempts) to tight end Jim Mandich, who made a diving catch at the 2-yard line. Two plays later, Kiick scored on a 1-yard blast behind Little and Csonka with just 18 seconds left in the half, and Yepremian's extra point gave the Dolphins a lead of 14–0 before halftime (once again, Yepremian noticed the kick was too low).
Miami's defense dominated the Redskins in the first half, limiting Washington to 49 yards rushing, 23 yards passing, and four first downs.
The Redskins had more success moving the ball in the second half. They took the second half kickoff and advanced across midfield for only the second time in the game, driving from their own 30-yard line to Miami's 17-yard line in a seven-play drive that featured just two runs. On first down at Miami's 17-yard line, Kilmer threw to wide receiver Charley Taylor, who was open at the 2-yard line, but Taylor stumbled right before the ball arrived and the ball glanced off his fingertips. After a second-down screen pass to Harraway fell incomplete, defensive tackle Manny Fernandez sacked Kilmer on third down for a loss of eight yards, and Washington's drive ended with no points after kicker Curt Knight's ensuing 32-yard field goal attempt was wide right. "That was an obvious turning point", said Allen. Later in the period, the Dolphins drove 78 yards to Washington's 5-yard line, featuring a 49-yard run by Csonka, the second-longest run in Super Bowl history at the time. However, Redskins defensive back Brig Owens intercepted a pass intended for Fleming in the end zone for a touchback.
Early in the fourth quarter, Washington threatened to score by mounting its most impressive drive of the game, driving 79 yards from its own 11 to Miami's 10-yard line in twelve plays. On second down at the Miami 10-yard line, Kilmer threw to tight end Jerry Smith in the end zone. Smith was wide open, but the ball hit the crossbar of the goalpost and fell incomplete. Then on third down, Scott intercepted Kilmer's pass to Taylor in the end zone and returned it 55 yards to the Redskins 48-yard line.
Miami moved the ball to the 34-yard line on their ensuing drive. Leading 14–0 on 4th down with 4 yards to go, Shula could have tried for a conversion, but thought "What a hell of a way to remember this game" if they could end a perfect 17–0 season with a 17–0 Super Bowl final score. He called on kicker Garo Yepremian to attempt a 42-yard field goal in what is now remembered as one of the most famous blunders in NFL lore: "Garo's Gaffe". As had been the case all day, Yepremian's kick was too low, and it was blocked by Washington defensive tackle Bill Brundige. The ball bounced to Yepremian's right and he reached it before holder Earl Morrall. But instead of falling on the ball, Yepremian picked it up and, with Brundige bearing down on him, made a frantic attempt to pass the ball to Csonka, who blocked on field goals. Unfortunately for Miami, the ball slipped out of his hands and went straight up in the air. Yepremian attempted to bat the ball out of bounds, but instead batted it back up into the air, and it went right into the arms of Redskins cornerback Mike Bass, who returned the fumble 49 yards for a touchdown, the first fumble recovery returned for a touchdown in Super Bowl history, to make the score 14–7 with 2:07 left in the game.
To the surprise of some, the Redskins did not try an onside kick, but instead kicked deep. The Redskins were forced to use up all of their timeouts on the Dolphins' ensuing five-play possession, but forced Miami to punt (nearly blocking the punt) from its own 36-yard line with 1:14 remaining in the game, giving themselves a chance to drive for the tying touchdown. However, Miami's defense forced two incompletions and a 4-yard loss on a swing pass, and then defensive end Vern Den Herder sacked Kilmer for a 9-yard loss on fourth down as time expired.
Griese finished the game having completed 8 out of 11 passes for 88 yards and a touchdown, with one interception. Csonka was the game's leading rusher with 15 carries for 112 yards. Kiick had 38 rushing yards, two receptions for six yards, and a touchdown. Morris had 34 rushing yards. Manny Fernandez had 11 solo tackles and six assists. Kilmer completed six more passes than Griese, but finished the game with just 16 more total passing yards and was intercepted three times. Said Kilmer, "I wasn't sharp at all. Good as their defense is, I still should have thrown better." Washington's Larry Brown rushed for 72 yards on 22 carries and also had five receptions for 26 yards. Redskins receiver Roy Jefferson was the top receiver of the game, with five catches for 50 yards. Washington amassed almost as many total yards (228) as Miami (253), and actually more first downs (16 to Miami's 12).
The Dolphins never made the traditional post-game visit to the White House due to the Watergate scandal, but in August 2013 finally made the trip at the behest of Barack Obama, minus Manny Fernandez, Jim Langer, and Bob Kuechenberg, who did not attend due to their opposition to the Obama administration. Garo Yepremian was a longtime Republican supporter and friend of former Florida Governor Jeb Bush but made the trip anyway and had an amusing exchange with President Obama over his long-ago blunder in the game.
Sources:"The NFL's Official Encyclopedic History of Professional Football", (1973), p. 153, Macmillan Publishing Co. New York, LCCN 73-3862, NFL.com Super Bowl VII, Super Bowl VII Play Finder Mia, Super Bowl VII Play Finder Was
The following records were set or tied in Super Bowl VII, according to the official NFL.com box score and the Pro-Football-Reference.com game summary. Some records must meet NFL minimum numbers of attempts to be recognized; the minimums are shown in parentheses.
"Note: A seven-official system was not used until 1978. Back Judge and Field Judge swapped titles prior to the 1998 NFL season."
As Shula was being carried off the field after the end of the game, a kid who shook his hand slipped the watch off Shula's wrist. Shula got down, chased after the kid, and retrieved his watch.
Manny Fernandez was a strong contender for MVP. Wrote Nick Buoniconti, "It was the game of his life–in fact, it was the most dominant game by a defensive lineman in the history of the game, and he would never be given much credit for it. They should have given out two game balls and made Manny Fernandez the co-MVP with Jake Scott." Larry Csonka also said he thought Fernandez should have been the MVP. The MVP was selected by Dick Schaap, the editor of "SPORT" magazine. Schaap admitted later that he had been out late the previous night, struggled to watch the defense-dominated game, and was not aware that Fernandez had 17 tackles.
When Garo Yepremian went back to the Dolphins' sideline after his botched field goal attempt, Nick Buoniconti told him that if they lost he would "Hang you up by one of your ties." Yepremian would joke to reporters after the game, "This is the first time the goat of the game is in the winner's locker room." But Yepremian would be so traumatized by his botched attempt that he had to be helped from the post-game party by his brother because of a stress-induced stabbing pain down his right side. Depressed, he spent two weeks in seclusion until he was cheered up by a letter, apparently from Shula, praising him for his contributions to the team and urging him to ignore criticism. Yepremian kept the letter and mentioned it to Shula in 2000, but Shula had no knowledge of it. They concluded the letter was actually written by Shula's wife Dorothy, who died from breast cancer in 1991. She had signed her husband's name to it. Nevertheless, "Garo's Gaffe" made Yepremian famous and led to a lucrative windfall of speaking engagements and endorsements. "It's been a blessing", said Yepremian, who died in 2015.
The same teams met 10 years later in Super Bowl XVII, which was also played in the Los Angeles area, at the Rose Bowl in Pasadena. The Redskins won that game, 27–17. Two starters from Miami's undefeated team, guard Bob Kuechenberg and defensive end Vern Den Herder, were still active during the strike-shortened 1982 season. The Redskins had no players remaining from Super Bowl VII on their Super Bowl XVII roster. The last member of the 1972 Redskins still active with the franchise, offensive tackle Terry Hermeling, retired after the 1980 season.
Redskins linebacker and defensive captain Jack Pardee retired immediately following this game, ending a 16-year career. He coached the Chicago Bears for three seasons (1975–77) before succeeding Allen as Redskins coach in 1978. Pardee was fired following a 6–10 campaign in 1980 and was replaced by Joe Gibbs, who led the Redskins to three Super Bowl championships (XVII, XXII, XXVI) and 171 victories to earn induction into the Pro Football Hall of Fame. After coaching the Houston Gamblers of the United States Football League in 1984 and '85, Pardee coached at the University of Houston (1987–89) and the Houston Oilers (1990–94).
The Miami Dolphins became the second team to win the Super Bowl after losing it the previous year. They were the last team to do so until the New England Patriots in Super Bowl LIII.
Super Bowl VIII
Super Bowl VIII was an American football game between the National Football Conference (NFC) champion Minnesota Vikings and the American Football Conference (AFC) champion Miami Dolphins to decide the National Football League (NFL) champion for the 1973 season. The Dolphins defeated the Vikings by the score of 24–7 to win their second consecutive Super Bowl, the first team to do so since the Green Bay Packers in Super Bowls I and II, and the first AFL/AFC team to do so.
The game was played on January 13, 1974 at Rice Stadium in Houston, Texas. This was the first time the Super Bowl venue was not the home stadium of an NFL franchise. This was also the first Super Bowl not held in the Los Angeles, Miami or New Orleans areas. It was also the last Super Bowl, and the penultimate game overall (the 1974 Pro Bowl in Kansas City, played the next week, was the last), to feature goal posts at the front of the end zone; they were moved to the end line, at the back of the end zone, the next season.
This was the Dolphins' third consecutive Super Bowl appearance. They posted a 12–2 record during the regular season, then defeated the Cincinnati Bengals and the Oakland Raiders in the playoffs. The Vikings were making their second Super Bowl appearance after also finishing the regular season with a 12–2 record, and posting postseason victories over the Washington Redskins and the Dallas Cowboys.
Super Bowl VIII was largely dominated by the Dolphins, who scored 24 unanswered points during the first three quarters, including two touchdowns on their first two drives. Minnesota's best chance to threaten Miami occurred with less than a minute left in the first half, but Vikings running back Oscar Reed fumbled the ball away at the Dolphins' 6-yard line, and his team was unable to overcome Miami's lead in the second half. The Dolphins' Larry Csonka became the first running back to be named Super Bowl MVP; both his 145 rushing yards and his 33 carries were Super Bowl records.
The NFL awarded Super Bowl VIII to Houston on March 21, 1972 at the owners' meetings held in Honolulu. Houston was the first Super Bowl host to have more than a year to prepare for the game, and lead time has grown greatly in succeeding years (Los Angeles was also awarded Super Bowl VII at the March 1972 meeting).
Although the Dolphins were unable to match their 17–0 perfect season of 1972, many sports writers, fans and Dolphins players themselves felt that the 1973 team was better. While the 1972 team faced no competition that possessed a record better than 8–6 in the regular season, the 1973 team played a much tougher schedule that included games against the Oakland Raiders, Pittsburgh Steelers and Dallas Cowboys (all playoff teams), plus two games against a resurgent Buffalo Bills squad that featured 2,000-yard rusher O.J. Simpson. Despite this, the Dolphins finished the 1973 season giving up fewer points (150) than in 1972, and recorded a 12–2 record, including an opening-game victory over the San Francisco 49ers that tied an NFL record with 18 consecutive wins. The Dolphins' winning streak ended in Week 2 with a 12–7 loss to the Raiders in Berkeley, California.
Just like the two previous seasons, Miami's offense relied primarily on its rushing attack. Fullback Larry Csonka recorded his third consecutive 1,000-rushing-yard season (1,003 yards), while running back Mercury Morris rushed for 954 yards and scored 10 touchdowns. Running back Jim Kiick was also a key contributor, rushing for 257 yards and catching 27 passes for 208 yards. Quarterback Bob Griese, the AFC's second-leading passer, completed only 116 passes for 1,422 yards, but threw more than twice as many touchdown passes (17) as interceptions (8), and earned an 84.3 passer rating. He became the first quarterback to start three Super Bowls and is joined by Jim Kelly and Tom Brady as the only quarterbacks to start at least three consecutive Super Bowls. Wide receiver Paul Warfield remained the main deep threat on the team, catching 29 passes for 514 yards and 11 touchdowns. The offensive line was strong, once again led by center Jim Langer and right guard Larry Little. Griese, Csonka, Warfield, Langer, Nick Buoniconti and Little would all eventually be elected to the Pro Football Hall of Fame. Bobby Beathard was also elected to the Pro Football Hall of Fame.
Miami's "No Name Defense" continued to dominate their opponents. Future Hall of Fame linebacker Nick Buoniconti recovered three fumbles and returned one for a touchdown. Safety Dick Anderson led the team with eight interceptions, which he returned for 163 yards and two touchdowns on route to winning NFL Defensive Player of the Year. And safety Jake Scott, the previous season's Super Bowl MVP, had four interceptions and 71 return yards. The Dolphins were still using their "53" defense devised at the beginning of the 1971 season, in which Bob Matheson (#53) would be brought in as a fourth linebacker in a 3–4 defense, with Manny Fernandez at nose tackle. Matheson could either rush the passer or drop back into coverage.
The Vikings also finished the regular season with a 12–2 record, winning their first nine games before a 20–14 loss on "Monday Night Football" to the Atlanta Falcons. The Vikings' other loss was a 27–0 shutout in Week 12 to the eventual AFC Central Division champion Cincinnati Bengals, whom the Dolphins defeated in the AFC divisional playoffs.
Minnesota's offense was led by 13-year veteran quarterback Fran Tarkenton. During the regular season, Tarkenton completed 61.7 percent of his passes for 2,113 yards, 15 touchdowns and just seven interceptions. He also rushed for 202 yards and another touchdown. The team's primary deep threat was Pro Bowl wide receiver John Gilliam, who caught 42 passes for 907 yards, an average of 21.6 yards per catch, and scored eight touchdowns. Tight end Stu Voigt was also a key element of the passing game, with 23 receptions for 318 yards and two touchdowns.
The Vikings' main rushing weapon was NFL Rookie of the Year running back Chuck Foreman, who rushed for 801 yards, caught 37 passes for 362 yards and scored six touchdowns. The Vikings had four other significant running backs – Dave Osborn, Bill Brown, Oscar Reed and Ed Marinaro – who combined for 1,469 rushing/receiving yards and 11 touchdowns. The Vikings' offensive line was also very talented, led by right tackle Ron Yary and six-time Pro Bowl center Mick Tingelhoff.
The Minnesota defense was again anchored by a defensive line nicknamed the "Purple People Eaters", consisting of defensive tackles Gary Larsen and Alan Page, and defensive ends Jim Marshall and Carl Eller. Behind them, cornerback Bobby Bryant (seven interceptions, 105 return yards, one touchdown) and safety Paul Krause (four interceptions) led the defensive secondary.
The Vikings earned their second appearance in the Super Bowl after defeating the wild card Washington Redskins, 27–20, and the NFC East champion Dallas Cowboys, 27–10, in the playoffs. Meanwhile, the Dolphins defeated the AFC Central champion Cincinnati Bengals, 34–16, in the divisional round and the AFC West champion Oakland Raiders, 27–10, in the AFC Championship Game. The Dolphins were the first team to appear in three consecutive Super Bowls. Just as in the regular season, Miami relied primarily on their run game in the playoffs, racking up 241 rushing yards against Cincinnati and 266 against the Raiders. The ground game was particularly crucial against Oakland, as it enabled them to win despite completing just 3 of 6 passes for 34 yards in the game.
This was the first Super Bowl in which a former AFL franchise was the favorite. The 1970 AFC champion Baltimore Colts had been the favorite in Super Bowl V, but they were an original NFL franchise prior to the 1970 merger.
This was also the first Super Bowl played in a stadium that was not the current home to an NFL or AFL team, as no team had called Rice Stadium home since the Houston Oilers moved into the Astrodome in 1968. It was also the first Super Bowl game played on the then-popular AstroTurf artificial playing surface, not surprising since Houston's Astrodome was the first facility to install AstroTurf in 1966. (Super Bowl V and Super Bowl VI were played on Poly-Turf, another brand of artificial turf.)
The Vikings complained about their practice facilities at Houston ISD's Delmar Stadium, a 20-minute bus ride from their hotel. They said the locker room was cramped, uncarpeted, had no lockers and that most of the shower heads didn't work. The practice field had no blocking sleds. "I don't think our players have seen anything like this since junior high school", said Vikings head coach Bud Grant. The Dolphins, meanwhile, trained at the facility of the Oilers, a fellow AFC team.
There were reports of dissension among the Dolphins arising from owner Joe Robbie's decision to allow married players to bring their wives at the club's expense. The single players were reportedly angry that they couldn't bring their girlfriends, mothers or sisters.
Vikings defensive tackle Alan Page and Dolphins left guard Bob Kuechenberg were former teammates at the University of Notre Dame. Kuechenberg, who would be blocking Page in the game, had sustained a broken arm in a game against the Colts and wore a cast while playing in the Super Bowl. Paul Warfield entered the game with a well-publicized hamstring injury to his left leg.
On television before the game, New York Jets quarterback Joe Namath said, "If Miami gets the kickoff and scores on the opening drive, the game is over." Indeed, the Dolphins became the first team to score a touchdown after receiving the game's opening kickoff.
The Dolphins, who were designated as the home team, were obligated by a now-defunct policy to wear their aqua jerseys despite having normally worn white jerseys for home games (though Miami wore aqua for its final two regular-season home games vs. the Pittsburgh Steelers and Detroit Lions). Also, the Dolphins wore two slightly different helmet decals; some had the decal that the team would adopt in 1974 (with the mascot dolphin leaping through the sun), while others had the 1969–1973 decal (with the mascot dolphin halfway through the sun).
Famed "Gonzo" writer Hunter S. Thompson covered the game for "Rolling Stone" magazine, and his exploits in Houston are legendary.
This was the only Super Bowl in which the game ball had stripes. Until the late 1970s, the NFL permitted striped footballs for night games, indoor games and other special situations.
Head linesman Leo Miles was the first African-American to officiate in a Super Bowl.
The game was televised in the United States by CBS with play-by-play announcer Ray Scott and color commentators Pat Summerall and Bart Starr. This was Scott's final telecast for CBS. Midway through the following season Summerall would take Scott's place as the network's lead play-by-play announcer, holding that position through 1993, when CBS lost rights to the NFC television package to Fox.
The Longhorn Band from the University of Texas at Austin performed during the pregame festivities. Later, country music singer Charley Pride sang "America the Beautiful" and the national anthem. This game marked the first time that "America the Beautiful" was performed before a Super Bowl game.
The halftime show also featured the Longhorn Band, along with Judy Mallett, Miss Texas 1973, playing the fiddle, in a tribute to American music titled "A Musical America".
The pre-game party was held on the floor of the Astrodome the night before the game. It was attended by the players, the coaches, media, and celebrities. Entertainment was provided by The La France Sisters and Charley Pride.
The Dolphins' game plan on offense was to use misdirection, negative-influence traps, and cross-blocking to exploit the Minnesota defense's excellent pursuit. (The Kansas City Chiefs had used similar tactics against the same Vikings defensive line in Super Bowl IV). Wrote Jim Langer, "All this was successful right away. We kept ripping huge holes into their defense and Csonka kept picking up good yardage, especially to the right. We'd hear Alan [Page] cussing because those negative-influence plays were just driving him nuts. He didn't know what the hell to do." On defense, the Dolphins' goal was to neutralize Chuck Foreman by using cat-quick Manny Fernandez at nose tackle and to make passing difficult for Tarkenton by knocking down his receivers and double-teaming John Gilliam. They were also depending on defensive ends Bill Stanfill and Vern Den Herder to contain Tarkenton's scrambling. Coach Don Shula wrote, "In the case of Tarkenton we wanted to hem him in. In the case of Page, Eller and company, we wanted to try to turn their aggressiveness to our advantage. We decided to emphasize negative influence by misdirection and cross blocking, trying to make the Vikings Front Four commit to the influence of the play and then actually running it elsewhere. The Vikings responded as we anticipated. Then later in the game we found that the Vikings started hesitating, reducing their charge. When they did that, we beat them with straight blocking."
As they had the two previous Super Bowls, the Dolphins won the coin toss and elected to receive. The Dolphins dominated the Vikings right from the beginning, scoring touchdowns on two 10-play drives in the first quarter. Said Jim Langer, "It was obvious from the beginning that our offense could overpower their defense." First, Dolphins defensive back Jake Scott gave his team good field position by returning the opening kickoff 31 yards to the Miami 38-yard line. Then Mercury Morris ran right for four yards, Larry Csonka crashed through the middle for two, and quarterback Bob Griese completed a 13-yard pass to tight end Jim Mandich to advance the ball to the Vikings 43-yard line. Csonka then ran on second down for 16 yards, then Griese completed a six-yard pass to receiver Marlin Briscoe to the 21-yard line. Three more running plays, two by Csonka and one by Morris moved the ball to the Vikings 5-yard line. Csonka then finished the drive with a five-yard touchdown run.
Then after forcing Minnesota to punt after three plays, the Dolphins went 56 yards in 10 plays (aided with three runs by Csonka for eight, 12, and eight yards, and Griese's 13-yard pass to Briscoe) to score on running back Jim Kiick's one-yard run (his only touchdown of the season) to give them a 14–0 lead.
By the time the first quarter ended, Miami had run 20 plays for 118 yards and eight first downs, and scored touchdowns on their first two possessions, with Csonka carrying eight times for 64 yards and Griese completing all four of his passes for 40 yards. Meanwhile, the Miami defense held the Minnesota offense to only 25 yards, six plays from scrimmage, and one first down. The Vikings advanced only as far as their own 27-yard line. The Dolphins set the record which still stands for the largest Super Bowl lead (14 points) at the end of the first quarter. It has since been tied by the Oakland Raiders against the Philadelphia Eagles in Super Bowl XV (led 14–0) and the Green Bay Packers against the Pittsburgh Steelers in Super Bowl XLV (led 14–0).
The situation never got much better for the Vikings the rest of the game. After each team traded punts early in the second period, Miami mounted a seven-play drive starting from their own 35-yard line, culminating in a 28-yard field goal from kicker Garo Yepremian to make the score 17–0 midway through the second quarter. On the first play of the drive, Minnesota was penalized 15 yards for unsportsmanlike conduct on linebacker Wally Hilgenberg. On the previous series, Hilgenberg had thrown an elbow through Csonka's facemask, cutting Csonka above the eye, but had not been penalized. Later in the drive, Mercury Morris ran for 10 yards on a 3rd down play from the Minnesota 40-yard line to allow Miami to get into field goal range.
The Vikings then had their best opportunity to score in the first half on their ensuing drive. Starting at their own 20-yard line, Minnesota marched to the Miami 15-yard line in nine plays, aided by Fran Tarkenton's completions of 17 and 14 yards to tight end Stu Voigt and wide receiver John Gilliam's 30-yard reception. Tarkenton's eight-yard run on first down then advanced the ball to the 7-yard line. But on the next two plays, Vikings running back Oscar Reed gained only one yard on two rushes, bringing up a fourth-down-and-one with less than a minute left in the half. Instead of kicking a field goal, Minnesota attempted to convert the fourth down with another running play by Reed. However, Reed lost the ball while being tackled by linebacker Nick Buoniconti, and Scott recovered the fumble. Grant defended the decision to run with Reed on three straight plays, noting that the Vikings had twice converted similar situations in the NFC title game against Dallas. "If it's less than a yard, we go for it", he said. "We feel we have the plays to make it." The Dolphins, however, made the stop where the Cowboys had not.
Jim Langer wrote that at halftime, "We definitely knew that this game was over."
Gilliam returned the second half kickoff 65 yards, but a clipping penalty on the play moved the ball all the way back to the Minnesota 11-yard line. Two plays later, Tarkenton was sacked for a six-yard loss by defensive tackle Manny Fernandez on third down, forcing Minnesota to punt from their own 7-yard line. Scott then returned the punt 12 yards to the Minnesota 43-yard line.
Miami then marched 43 yards in eight plays to score on Csonka's two-yard touchdown run through Hilgenberg to increase their lead to 24–0 with almost nine minutes left in the third quarter. The key play was Griese's third-and-five, 27-yard pass to wide receiver Paul Warfield to the Minnesota 11-yard line. It was Griese's last pass of the game, his only pass of the second half and just the seventh overall, and only Warfield's second, and last, catch of the game. (Because of his hamstring injury, Warfield had earlier been limping through primarily decoy routes.) The Vikings might have had the drive held to a field goal attempt when Morris lost 8 yards on a third-and-4 play from the Minnesota 5, but Hilgenberg was called for holding, giving Miami an automatic first down at the 8. From there, Csonka carried twice to a score. On the scoring play, Griese forgot the snap count at the line of scrimmage. He asked Csonka, who said "two." Kiick said, "No, it's one." Griese chose to believe Csonka, which was a mistake; it was "one." Griese bobbled the ball slightly, but still managed to get it to Csonka.
After an exchange of punts, Minnesota got the ball back at their 43-yard line after Larry Seiple's kick went just 24 yards.
The Vikings then mounted a 10-play, 57-yard drive, with Tarkenton completing 5 passes for 43 yards, including a 15-yarder to Voigt on 3rd-and-8, and taking the ball into the end zone himself six plays into the 4th quarter on a 4-yard touchdown run. This was the first rushing touchdown by a quarterback in Super Bowl history.
Minnesota recovered the ensuing onside kick, but an offsides penalty on the Vikings nullified the play, and they subsequently kicked deep. Miami went three-and-out, but Seiple boomed a 57-yard punt and Minnesota got the ball back at its own 3-yard line. Eight plays later, the Vikings reached the Miami 32-yard line. After two incomplete passes, Tarkenton's pass intended for wide receiver Jim Lash was intercepted by Dolphins cornerback Curtis Johnson at the goal line. Miami got the ball back at their 10-yard line with 6:24 left in the game, and Csonka and Kiick were the ball carriers on all 12 remaining plays. The Dolphins picked up 2 first downs by rush and 2 by penalty on Minnesota in running out the clock. With less than four minutes to play, a frustrated Alan Page was called for a personal foul for a late hit on Griese, and then two plays later both Page and Kuechenberg were given offsetting personal fouls after getting in a scuffle with each other.
Wrote Jim Langer, "We just hit the Vikings defense so hard and so fast that they didn't know what hit them. Alan Page later said he knew we would dominate them after only the first couple of plays."
Griese finished the game with just six out of seven pass completions for 73 yards. Miami's seven pass attempts were the fewest ever thrown by a team in the Super Bowl. The Dolphins rushed for 196 yards, did not have any turnovers, and were not penalized in the first 52 minutes. Tarkenton set what was then a Super Bowl record for completions, 18 out of 28 for 182 yards, with one interception, and rushed for 17 yards and a touchdown. Reed was the leading rusher for the Vikings, but with just 32 yards. Tight end Stu Voigt was the top receiver of the game with three catches for 46 yards. The Vikings' lethargic performance was very similar to their performance in their loss to the Kansas City Chiefs in Super Bowl IV.
Sources: NFL.com Super Bowl VIII, Super Bowl VIII Play Finder Mia, Super Bowl VIII Play Finder Min
The following records were set or tied in Super Bowl VIII, according to the official NFL.com box score and the Pro-Football-Reference.com game summary. Some records require an NFL minimum number of attempts to be recognized; the minimums are shown in parentheses.
Turnovers are defined as the number of times a team loses the ball on interceptions and fumbles.
In the Dolphins' locker room after the game, Csonka was asked about his battered face. Without naming Hilgenberg, he said, "It was a cheap shot, but an honest cheap shot. He came right at me and threw an elbow right through my mask. I could see the game meant something to him."
With their 32–2 record over two years, the still-young Dolphins appeared to have established a dynasty. In 1974, however, their offense was hurt by injuries to Csonka and the offensive line, and the defense was hurt by the departure of defensive coordinator Bill Arnsparger, who became the New York Giants' head coach. The Dolphins finished 11–3 but lost a dramatic playoff game ("The Sea of Hands") to the Oakland Raiders. In 1975, Csonka, Kiick, and Warfield left to join the World Football League. The Dolphins would not win another playoff game until 1982, and they have not won a Super Bowl since; they would appear in, but lose, two more, XVII and XIX.
Jim Langer ended his career with the Vikings in 1981, allowing him to play for the franchise closest to his native South Dakota. Langer lost his starting center job in 1980 to Dwight Stephenson, who like Langer is a member of the Hall of Fame.
Source:
"Note: A seven-official system was not used until the 1978 season."
Leo Miles was the first African-American to officiate in a Super Bowl. | https://en.wikipedia.org/wiki?curid=29134 |
Super Bowl IX
Super Bowl IX was an American football game played between the American Football Conference (AFC) champion Pittsburgh Steelers and the National Football Conference (NFC) champion Minnesota Vikings to decide the National Football League (NFL) champion for the 1974 season. The game was played on January 12, 1975, at Tulane Stadium in New Orleans, Louisiana. The Steelers defeated the Vikings by the score of 16–6 to win their first Super Bowl championship.
This game matched two of the NFL's best defenses and two future Pro Football Hall of Fame quarterbacks. Led by quarterback Terry Bradshaw and the "Steel Curtain" defense, the Steelers advanced to their first Super Bowl after posting a 10–3–1 regular season record and playoff victories over the Buffalo Bills and the Oakland Raiders. The Vikings were led by quarterback Fran Tarkenton and the "Purple People Eaters" defense; they advanced to their second consecutive Super Bowl and third overall after finishing the regular season with a 10–4 record and defeating the St. Louis Cardinals and the Los Angeles Rams in the playoffs.
The first half of Super Bowl IX was a defensive struggle, with the lone score being the first safety in Super Bowl history when Tarkenton was downed in his own end zone. The Steelers then recovered a fumble on the second half kickoff, and scored on fullback Franco Harris's 9-yard run. The Vikings cut the score, 9–6, early in the fourth quarter by recovering a blocked punt in Pittsburgh's end zone for a touchdown, but the Steelers then drove 66 yards on their ensuing possession to score on Larry Brown's 4-yard touchdown reception to put the game out of reach.
In total, the Steelers limited the Vikings to Super Bowl record lows of nine first downs, 119 total offensive yards, 17 rushing yards, and no offensive scores (Minnesota's only score came on a blocked punt, and they did not even score on the extra point attempt). The Steelers accomplished this despite losing starting linebackers Andy Russell and Jack Lambert, who were injured and replaced by Ed Bradley and Loren Toews for most of the second half. On the other hand, Pittsburgh had 333 yards of total offense. Harris, who ran for a Super Bowl record 158 yards (more than the entire Minnesota offense) and a touchdown, was named the Super Bowl's Most Valuable Player.
The NFL awarded Super Bowl IX to New Orleans on April 3, 1973, at the owners meetings held in Scottsdale, Arizona. This was the third time that the Super Bowl was played in New Orleans, after Super Bowls IV and VI. Super Bowl IX was originally planned to be held at the Louisiana Superdome. However, construction delays at the Superdome (which pushed its opening to August 1975) forced the league to move the game to Tulane Stadium, where the city's previous two Super Bowls were held. This ended up being the last professional American football game played at Tulane Stadium.
Pittsburgh advanced to their first Super Bowl and were playing for a league championship for the first time in team history. Their 73-year-old owner, Art Rooney, had founded the Steelers as a 1933 NFL expansion team, but the franchise suffered through losing seasons for most of its 42-year history and had never made it to an NFL championship game or a Super Bowl. In 1969, Rooney hired Chuck Noll to be the team's head coach, and its fortunes started to turn following a disastrous 1–13 first year under the future Hall of Fame coach.
Noll rebuilt the Steelers through the NFL draft, selecting defensive tackle Joe Greene and defensive end L. C. Greenwood in his first season as head coach. In 1970, Noll drafted quarterback Terry Bradshaw and cornerback Mel Blount. In 1971, linebacker Jack Ham, defensive tackle Ernie Holmes, defensive end Dwight White, and safety Mike Wagner were selected by the team. Fullback Franco Harris was drafted in 1972. And in 1974, the Steelers picked linebacker Jack Lambert, center Mike Webster and wide receivers Lynn Swann and John Stallworth, and signed safety Donnie Shell as a free agent. Bradshaw, Webster, Swann, Stallworth and Harris ended up being Hall of Fame players on offense, while the others formed the core nucleus of their "Steel Curtain" defense, including future Hall of Famers Greene, Ham, Blount, Lambert and Shell.
But en route to Super Bowl IX, the Steelers had started the regular season slowly, as Bradshaw and Joe Gilliam fought to be the team's starting quarterback. Gilliam started the first four games of the season, but Noll eventually made Bradshaw the starter. Although Bradshaw ended up completing only 67 of 148 passes for 785 yards, 7 touchdowns, and 8 interceptions, he helped lead the team to a 10–3–1 regular season record. The Steelers' main offensive weapon, however, was running the ball. Harris rushed for 1,006 yards and five touchdowns, while also catching 23 passes for 200 yards and another touchdown. Running backs Rocky Bleier, Preston Pearson, and Steve Davis also made important contributions, gaining a combined total of 936 yards and eight touchdowns. Receiver Lynn Swann returned 41 punts for a league-leading 577 yards and a touchdown.
But the Steelers' main strength during the season was their staunch "Steel Curtain" defense, which led the league with the fewest total yards allowed (3,074) and the fewest passing yards allowed (1,466). Greene won the NFL Defensive Player of the Year Award for the second time in three seasons, and he and L. C. Greenwood were named to the Pro Bowl. Both of the team's outside linebackers, Ham and Andy Russell, had also been selected to play in the Pro Bowl, while the rookie Lambert already had two interceptions for 19 yards. In the defensive backfield, Blount, Wagner, and Glen Edwards made a strong impact against opposing passing plays.
The Vikings came into the season trying to redeem themselves after a one-sided Super Bowl VIII loss that had made them the first team to lose two Super Bowls (the other loss was in Super Bowl IV).
Minnesota's powerful offense was still led by veteran quarterback Fran Tarkenton, who passed for 2,598 yards and 17 touchdowns. The Vikings' primary offensive weapon was running back Chuck Foreman, who led the team in receptions with 53 for 586 yards and six touchdowns. He was also their leading rusher with 777 rushing yards and nine touchdowns. Wide receivers Jim Lash and John Gilliam were major deep threats, having 32 receptions for 631 yards (a 19.7 yards per catch average) and 26 receptions for 578 yards (a 22.5 ypc average), respectively. Fullback Dave Osborn contributed with 514 rushing yards, and 29 receptions for 196 yards. And the Vikings' offensive line, led by future Hall of Famers right tackle Ron Yary and center Mick Tingelhoff, allowed only 17 sacks.
Aided by the "Purple People Eaters" defense, led by future Hall of Fame defensive linemen Carl Eller and Alan Page, and future Hall of Fame safety Paul Krause, the Vikings won the NFC Central for the sixth time in the previous seven seasons.
For the first time in four years, the Miami Dolphins were not able to advance to the Super Bowl. While the Steelers defeated the Buffalo Bills 32–14 in the first round, the favored Dolphins lost to the Oakland Raiders 28–26, giving up Raiders running back Clarence Davis' 8-yard touchdown reception with 26 seconds remaining on a play now known as the "Sea of Hands". The key play of that game occurred when the Dolphins were in control, leading the Raiders 19–14 midway through the fourth quarter. Cliff Branch hauled in a 72-yard touchdown pass from Raiders quarterback Ken Stabler when third-year Dolphins defensive back Henry Stuckey, the man assigned to cover Branch on the play, fell down; the wide-open Branch caught the bomb and sprinted to the end zone. After George Blanda kicked the PAT, the Raiders led 21–19. Dolphins fans were furious because fan favorite Lloyd Mumphord had been replaced with Stuckey. Mumphord and head coach Don Shula were involved in a feud at the time, and it is thought that Stuckey was given the starting job for this game because of Shula's and Mumphord's differences of opinion. Stuckey was released in the offseason. Many believed that had Mumphord been in the game, there would have been no "Sea of Hands" play.
After their home win over the Bills in the divisional round, the Steelers won the AFC Championship Game over the host Raiders, 24–13.
Meanwhile, Minnesota allowed only a combined 24 points in their playoff wins against the St. Louis Cardinals, 30–14, and their narrow defeat of the Los Angeles Rams, 14–10, after their defense stopped an attempted comeback touchdown drive from the Rams on the Vikings' own 2-yard line.
Sports writers and fans predicted that Super Bowl IX would be a low scoring game because of the two teams' defenses. The Steelers' "Steel Curtain" had led the AFC in fewest points allowed (189) and the Vikings' "Purple People Eaters" had only given up 195.
As the NFC was the designated "home team" for the game, by NFL rules at the time the Vikings were required to wear their purple jerseys. Although the league later relaxed the rule from Super Bowl XIII onwards, the Vikings would likely have worn their purple jerseys anyway, given that they had worn purple at home for much of their history aside from a few games in the 1960s, when the NFL was encouraging (but not requiring) teams to wear white at home. This was the only one of the four Super Bowls played by the 1970s Steelers in which the team wore their white jerseys, and the only Super Bowl in which the team would wear white at all until Super Bowl XL 31 years later.
When the NFL awarded Super Bowl IX to New Orleans on April 3, 1973, the game was originally scheduled to be played at the Louisiana Superdome. By July 1974, construction on the dome was not yet finished, and so the league reverted to Tulane Stadium, home field for Tulane University and the New Orleans Saints, and site of Super Bowls IV and VI. Dolphins owner Joe Robbie lobbied the NFL to move Super Bowl IX to the Orange Bowl, already scheduled to host Super Bowl X, and give New Orleans the January 1976 game, but the proposal was rejected.
This proved to be quite pivotal because of the inclement conditions: the temperature was low and the field was slick from overnight rain. This was the last Super Bowl to be played in inclement weather for over thirty years, until Super Bowl XLI (and that game's weather issues in Miami were based on a driving rain, not the temperature). The game still holds the mark for the second-coldest game-time temperature of any outdoor Super Bowl; only Super Bowl VI, also played at Tulane Stadium, had a colder game-time temperature. Expectations that Super Bowl XLVIII would break these records due to its winter location in outdoor New Jersey did not come to pass. (Seven Super Bowls - XVI in Pontiac, XXVI and LII in Minneapolis, XXVIII and XXXIV in Atlanta, XL in Detroit and XLVI in Indianapolis - have had colder outdoor temperatures but were played in fixed-roof stadiums, except XLVI at the retractable-roofed Lucas Oil Stadium.)
The change of venue meant this was not only the last of three Super Bowls played at Tulane Stadium, but also the last professional game played there; the stadium was replaced for the 1975 NFL season by the Louisiana Superdome and demolished five years later. The Superdome has hosted every Super Bowl held in New Orleans since.
The circumstances surrounding Super Bowl IX prompted the NFL to adopt a rule prohibiting a new stadium from hosting the Super Bowl following its first regular season.
The game was broadcast in the United States by NBC with play-by-play announcer Curt Gowdy and color commentators Al DeRogatis and Don Meredith. Charlie Jones served as the event's field reporter and covered the trophy presentation. Hosting the coverage were NBC News reporter Jack Perkins and Jeannie Morris; Morris, then the wife of former Chicago Bears wide receiver and WMAQ-TV sports anchor Johnny Morris, became the first woman to participate in Super Bowl coverage. Prior to the 1975 NFL season, NBC did not have a regular pregame show.
"The Mary Tyler Moore Show" on CBS (which was set in Minneapolis) used this game as a plotline on the episode aired the night before the game. Lou Grant was teaching Ted Baxter how to bet on football games, and used Ted's money, as well as some of his own to bet on the hometown Vikings winning the Super Bowl. The Vikings won the Super Bowl in this episode but Ted's hopes were dashed when it was revealed that Lou actually bet all the money on the Steelers. At the end of the show, Mary Tyler Moore announced the following over the credits: "If the Pittsburgh Steelers win the actual Super Bowl tomorrow, we want to apologize to the Pittsburgh team and their fans for this purely fictional story. If on the other hand, they lose, remember, you heard it here first." And, as it turned out, her apology did go into effect.
The Grambling State University Band performed during both the pregame festivities and the national anthem. During the national anthem, they were backed by the Mardi Gras Barbershop Chorus under the direction of Dr. Saul Schneider. The halftime show was a tribute to American jazz composer, pianist and bandleader Duke Ellington, also featuring the Grambling State University Band along with Ellington's son Mercer. Ellington had died the previous May.
As many predicted, the game was low scoring; both teams failed to score a touchdown or a field goal until the third quarter and ended up with the third lowest total of combined points in Super Bowl history.
The first quarter of Super Bowl IX was completely dominated by both teams' defenses. The Vikings were limited to 20 passing yards, zero rushing yards, and one first down. The Steelers did slightly better with 18 passing yards, 61 rushing yards, and four first downs. Pittsburgh even managed to get close enough for their kicker Roy Gerela to attempt two field goals, but Gerela missed his first attempt, and a bad snap prevented the second one from getting off the ground.
In the second quarter, the Vikings got an opportunity to score when defensive back Randy Poltl recovered a fumble from halfback Rocky Bleier at the Steelers' 24-yard line, but they could only move the ball two yards in their next three plays, and kicker Fred Cox missed a 39-yard field goal attempt. The Steelers then converted a third down with the longest gain so far in the game, a 22-yard pass from Terry Bradshaw to John Stallworth. Pittsburgh was forced to punt, but Bobby Walden booted a 39-yarder, and rookie Sam McCullum did not allow the ball to reach the end zone, then failed to make a return and was downed at the Viking 7-yard line. The first score of the game occurred two plays later, when halfback Dave Osborn fumbled a pitch from Tarkenton at the 10, and the ball rolled backward and across the goal line. Tarkenton quickly dove on the ball in the end zone to prevent a Steeler touchdown, but he was downed by Dwight White for a safety, giving Pittsburgh a 2–0 lead. It was the first safety scored in Super Bowl history. The Vikings forced a three-and-out, then threatened to score when Tarkenton led them on a 55-yard drive to the Steelers' 20-yard line. With 1:17 left in the half, Tarkenton threw a pass to receiver John Gilliam at the 5-yard line, but Steelers safety Glen Edwards hit him just as he caught the ball. The ball popped out of his hands and right into the arms of Mel Blount for an interception.
The half ended with the Steelers leading 2–0, the lowest halftime score in Super Bowl history and the lowest possible short of a scoreless tie.
On the opening kickoff of the second half, Minnesota's Bill Brown lost a fumble on an unintentional squib kick after Gerela slipped on the wet field and only extended his leg halfway for the kick. Marv Kellum recovered the ball for Pittsburgh at the Vikings' 30-yard line. Franco Harris then moved the ball to the 6-yard line with a 24-yard run. After being tackled for a three-yard loss, Harris carried the ball for nine yards and a touchdown, giving the Steelers a 9–0 lead.
After an exchange of punts, Minnesota got the ball back on their own 20-yard line. On the second play of the drive, Tarkenton's pass was deflected behind the line of scrimmage by Pittsburgh defensive lineman L. C. Greenwood, and bounced back right into the arms of Tarkenton, who then threw a 41-yard completion to Gilliam. Officials ruled Tarkenton's first pass attempt a completion to himself, and thus his second attempt an illegal forward pass. After the penalty, facing third and 11, Minnesota got the first down on running back Chuck Foreman's 12-yard run. Three plays later, Tarkenton completed a 28-yard pass to tight end Stu Voigt at the Steelers' 45-yard line. But White deflected Tarkenton's next pass attempt, and Joe Greene intercepted the ball, ending the Vikings' best offensive scoring opportunity.
Early in the fourth quarter, the Vikings got another scoring opportunity when Minnesota safety Paul Krause recovered a fumble from Harris on the Steelers' 47-yard line. On the next play, a deep pass attempt from Tarkenton to Gilliam drew a 42-yard pass interference penalty on Pittsburgh defensive back Mike Wagner that moved the ball up to the 5-yard line. Once again, the Steelers stopped them from scoring when Greene forced and recovered a fumble from Foreman. Pittsburgh failed to get a first down on their next possession and was forced to punt from deep in their own territory. Minnesota linebacker Matt Blair burst through the line to block the punt, and Terry Brown recovered the ball in the end zone for a touchdown. Cox missed the extra point, but the Vikings had cut their deficit to 9–6 and were just a field goal away from a tie.
However, on the ensuing drive, the Steelers put the game out of reach with a 66-yard, 11-play scoring drive that took 6:47 off the clock and featured three successful third down conversions. The first was a key 30-yard pass completion from Bradshaw to tight end Larry Brown. Brown fumbled the ball as he was being tackled, and two officials (back judge Ray Douglas and field judge Dick Dolack) initially ruled the ball recovered for the Vikings by Jeff Siemon, but head linesman Ed Marion overruled their call, stating that Brown was downed by contact before the ball came out of his hands. Faced with 2nd and 15 after a penalty, Pittsburgh then fooled the Vikings defense with a misdirection play. Harris ran left past Bradshaw after the snap, drawing the defense with him, while Bleier took a handoff and ran right through a gaping hole in the line for a 17-yard gain to the Vikings' 16-yard line. A few plays later, Bradshaw converted a 3rd and 5 situation with a 6-yard pass to Bleier that put the ball on the Vikings' 5-yard line. The Steelers gained just one yard on their next two plays, setting up third and goal from the four. Bradshaw's 4-yard touchdown pass to Brown on third down gave the Steelers a 16–6 lead with only 3:31 remaining.
Vikings running back Brent McClanahan returned the ensuing kickoff 22 yards to the 39-yard line, but on the first play of the drive, Tarkenton's pass was intercepted by Wagner. The Steelers then executed 7 consecutive running plays, taking the game clock all the way down to 38 seconds remaining before turning the ball over on downs.
Harris finished the game with 34 carries for a Super Bowl record 158 yards and a touchdown; Harris' record stood until the Washington Redskins' John Riggins rushed for 166 yards in Super Bowl XVII. Bleier had 65 rushing yards, and two receptions for 11 yards. Pittsburgh finished with a total of 57 rushing attempts, which remains the Super Bowl record through Super Bowl LIV. Bradshaw completed nine out of 14 passes for 96 yards and a touchdown. Tarkenton completed 11 of 26 passes for just 102 yards with 3 interceptions, for a passer rating of only 14.1. Foreman was the Vikings' top offensive contributor, finishing the game as the team's leading rusher and receiver with 18 rushing yards and 50 receiving yards.
The loss was the Vikings' record-setting third in Super Bowl play. Bud Grant vented his frustration by saying, "There were three bad teams out there - us, Pittsburgh and the officials."
Sources: NFL.com Super Bowl IX, Super Bowl IX Play Finder Pit, Super Bowl IX Play Finder Min
The following records were set or tied in Super Bowl IX, according to the official NFL.com box score and the Pro-Football-Reference.com game summary. Some records require an NFL minimum number of attempts to be recognized; the minimums are shown in parentheses.
Turnovers are defined as the number of times a team loses the ball on interceptions and fumbles.
Source:
Bruce Alford was the first official to be honored with three Super Bowl assignments.
Bernie Ulman was the first official to be the referee for a Super Bowl after working a previous Super Bowl at another position. This would not happen again until Dick Hantak was the referee for Super Bowl XXVII after serving as back judge for Super Bowl XVII.
"Note: A seven-official system was not used until the season" | https://en.wikipedia.org/wiki?curid=29135 |
Super Bowl X
Super Bowl X was an American football game between the National Football Conference (NFC) champion Dallas Cowboys and the American Football Conference (AFC) champion Pittsburgh Steelers to decide the National Football League (NFL) champion for the 1975 season. The Steelers defeated the Cowboys by the score of 21–17 to win their second consecutive Super Bowl. They were the third team to win back-to-back Super Bowls. (The Miami Dolphins won Super Bowls VII and VIII, and the Green Bay Packers won Super Bowls I and II.) It was also the first Super Bowl in which both participating teams had previously won a Super Bowl, as the Steelers were the defending champions and the Cowboys had won Super Bowl VI.
The game was played at the Orange Bowl in Miami, Florida, on January 18, 1976, one of the first major national events of the United States Bicentennial year. Both the pre-game and halftime show celebrated the Bicentennial, while players on both teams wore special patches on their jerseys with the Bicentennial logo.
Super Bowl X featured a contrast of playing styles between the Steelers and the Cowboys, which were, at the time, the two most popular teams in the league. The Steelers, dominating teams with their "Steel Curtain" defense and running game, finished the regular season with a league best 12–2 record and defeated the Baltimore Colts and the Oakland Raiders in the playoffs. The Cowboys, with their offense and "flex" defense, became the first NFC wild-card team to advance to the Super Bowl after posting a 10–4 regular season record and postseason victories over the Minnesota Vikings and the Los Angeles Rams.
Trailing 10–7 in the fourth quarter of Super Bowl X, the Steelers rallied to score 14 unanswered points, including a 64-yard touchdown reception by Pittsburgh wide receiver Lynn Swann. The Cowboys cut the score, 21–17, late in the game with wide receiver Percy Howard's 34-yard touchdown reception, but Pittsburgh safety Glen Edwards halted Dallas' rally with an end zone interception as time expired. Swann, who caught four passes for a Super Bowl record 161 yards and one touchdown, became the first wide receiver to be named Super Bowl MVP.
The NFL awarded Super Bowl X to Miami on April 3, 1973, at the owners' meetings held in Scottsdale, Arizona.
The Cowboys, considered a Cinderella team entering the Super Bowl, advanced to their third Super Bowl in team history with their high-tech offense and "flex" defense. Quarterback Roger Staubach had a solid season, passing for 2,666 yards and 17 touchdowns, while also rushing for 310 yards. Staubach's favorite target was wide receiver Drew Pearson, who led the team with 46 receptions for 822 yards and 8 touchdowns. Wide receiver Golden Richards and tight end Jean Fugett were also reliable targets in the Cowboys' passing game, combining for 59 receptions and 939 receiving yards.
But despite their solid passing game, Dallas was a run-based team. Fullback Robert Newhouse was their leading rusher with 930 yards, and also caught 34 passes for 274 yards. Halfback Doug Dennison contributed 388 yards. Perhaps the most talented player in the backfield was halfback Preston Pearson (no relation to receiver Drew Pearson), who signed with the team as a free agent after being cut by the Steelers in the preseason. Preston rushed for 509 yards, caught 27 passes for 351 yards, and added another 391 yards returning kickoffs. He had been especially effective in the playoffs, where he caught 12 passes for 200 yards and three touchdowns, and was extremely eager to increase his numbers in the Super Bowl against the team that let him go. Up front, the offensive line was led by All-Pro right tackle Rayfield Wright.
The Cowboys' "Flex" defense was anchored by linemen Harvey Martin and Ed "Too Tall" Jones. Linebacker Lee Roy Jordan led the team with six interceptions, while linebacker D.D. Lewis was an effective weapon pass rushing. The starting players in Dallas' defensive secondary, future Hall of Fame cornerback Mel Renfro, cornerback Mark Washington, and safeties Charlie Waters and future Hall of Famer Cliff Harris, combined for 12 interceptions.
Even though the Cowboys finished in second place in the NFC East with a 10–4 record, they qualified for the playoffs as the NFC's wild-card team (during that time, only one wild card team from each conference entered the playoffs). The Dallas Cowboys became the first NFC wild card team to reach the Super Bowl.
The Steelers became the first official #1 seed to reach the Super Bowl. Playoff seeds were instituted in 1975. The Steelers finished the regular season with a league-best 12–2 record, dominating opponents with their "Steel Curtain" defense and powerful running game. Fullback Franco Harris ranked second in the league with 1,246 rushing yards and 10 touchdowns, while also catching 28 passes for 214 yards and another touchdown. Halfback Rocky Bleier had 528 rushing yards, and fullback John "Frenchy" Fuqua added 285 yards and 18 receptions. Still, the Steelers had a fine passing attack led by quarterback Terry Bradshaw. Bradshaw threw for 2,055 yards, 18 touchdowns, and nine interceptions while rushing for 210 yards and three touchdowns. One reason why Bradshaw's numbers were much improved from the previous season was the emergence of wide receivers Lynn Swann and John Stallworth. Both saw limited playing time in the previous season, but became significant contributors in 1975. Swann caught a team-leading 49 passes for 781 yards and 11 touchdowns. Stallworth only had 20 receptions, but he had an average of 21.2 yards per catch, recording a total of 423 reception yards.
The Steelers' "Steel Curtain" defense dominated the league, ranking third in fewest yards allowed (4,019) and sending 8 of their 11 starters to the Pro Bowl: defensive linemen Joe Greene (future Pro Football Hall of Fame player) and L. C. Greenwood; future Hall of Fame linebackers Jack Ham and Jack Lambert; Andy Russell, the team's third starting linebacker; future Hall of Fame defensive back Mel Blount; and safeties Glen Edwards and Mike Wagner.
Greene made the Pro Bowl despite missing six games with injuries. Ham and Lambert had the best seasons of their careers, while Blount led the league with 11 interceptions and was named the NFL's Defensive Player of the Year. Wagner had 4 interceptions and 3 fumble recoveries, while Edwards had 3 interceptions and returned 25 punts for 267 yards.
Dallas defeated the Minnesota Vikings, 17–14, on a 50-yard touchdown pass from Staubach to Drew Pearson with less than a minute to play, in what was called the "Hail Mary pass". They went on to crush the Los Angeles Rams, 37–7, in the NFC Championship Game.
Meanwhile, even though Pittsburgh's offense lost a total of 12 turnovers in their two playoff games, the Steelers only gave up a combined total of 20 points in their victories over the Baltimore Colts in the AFC Divisional playoff game 28–10, and the Oakland Raiders in the AFC Championship Game 16–10.
Coming into Super Bowl X, most sports writers and fans expected that Swann would not play. He had suffered a severe concussion in the AFC Championship Game against the Raiders that forced him to spend two days in a hospital. If he did play, many assumed he would just be used as a decoy to draw coverage away from the other receivers.
Throughout the week leading up to the Super Bowl, Swann was unable to participate in several team practices or was limited to only a minor workout in them. However, a few days before the game, he received a verbal challenge from Dallas safety Cliff Harris, who stated, "I'm not going to hurt anyone intentionally. But getting hit again while he's running a pass route must be in the back of Swann's mind. I know it would be in the back of my mind."
Swann responded "I'm still not 100 percent. I value my health, but I've had no dizzy spells. I read what Harris said. He was trying to intimidate me. He said I'd be afraid out there. He needn't worry. He doesn't know Lynn Swann. He can't scare me or the team. I said to myself, 'The hell with it, I'm gonna play.' Sure, I thought about the possibility of being reinjured. But it's like being thrown by a horse. You have to get up and ride again immediately or you may be scared the rest of your life."
Super Bowl X was the final NFL officiating assignment for veteran referee Norm Schachter, who also served as the referee for Super Bowl I and Super Bowl V. Schachter worked as an officiating supervisor and instant replay official following his on-field retirement.
CBS televised the game in the United States with play-by-play announcer Pat Summerall (calling his first Super Bowl in that role) and color commentator Tom Brookshier. Toward the end of the game, Hank Stram took over for Brookshier, who had left the booth to head down to the locker room area to conduct the postgame interviews with the winning team. Two days after the Super Bowl, Stram was hired as coach of the New Orleans Saints, interrupting his broadcasting career for two seasons.
On radio, Verne Lundquist and Al Wisk announced the game for the Dallas Cowboys Radio Network, and Jack Fleming and Myron Cope called the game for the Pittsburgh Steelers Radio Network. Ed Ingles and Jim Kelly called the game nationally for CBS Radio. Hosting television coverage was "The NFL Today" crew of Brent Musburger, Irv Cross, and Phyllis George. During this game, CBS began using Jack Trombey's "Horizontal Hold" as the theme music. It was used for the "NFL Today" pregame show in its original form from 1976 to 1980, with a remake for 1981 followed by updates for 1984 and 1989 before its retirement.
The overall theme of the Super Bowl entertainment was to celebrate the United States Bicentennial. Each Cowboys and Steelers player wore a special patch with the Bicentennial logo on their jerseys.
This was the first Super Bowl in which somebody other than the game's referee tossed the coin, in this case John Warner, the United States Secretary of the Navy from 1972 to 1974. Prior to 1976, the coin toss was held a half-hour before kick-off.
The performance event group Up with People performed during both the pregame festivities and the halftime show titled "200 Years and Just a Baby: A Tribute to America's Bicentennial". Up with People dancers portrayed various American historical figures along with a rendition of Steve Goodman's "City of New Orleans". Singer Tom Sullivan sang the national anthem.
Scenes for the 1977 suspense film "Black Sunday," about a fictional terrorist attack on the Super Bowl via the Goodyear Blimp, were filmed during the game.
This was the last Super Bowl to kick off as early as 2:00 p.m. (EST), thereby allowing a finish time before the commencement of many of the nation's evening church services.
This was the first Super Bowl where the play clock was visible to teams and spectators. Visible play clocks were mandated by NFL rules beginning with the 1976 season.
The Steelers won their second straight Super Bowl, largely through the plays by Swann and by stopping a rally by the Cowboys late in the fourth quarter. Officials did not call a single penalty on the Steelers during the game, while the Cowboys were called for only 2 penalties for 20 yards.
On the opening kickoff, the Cowboys ran a reverse in which rookie linebacker Thomas "Hollywood" Henderson took a handoff from Preston Pearson and returned the ball a Super Bowl-record 48 yards before kicker Roy Gerela forced him out of bounds at the Steelers' 44-yard line. Gerela suffered badly bruised ribs on the tackle that appeared to affect his kicking performance all afternoon. On the first play of the game, Steelers defensive lineman L. C. Greenwood sacked Cowboys quarterback Roger Staubach, forcing him to fumble. Although Dallas recovered the fumble, they eventually were forced to punt. The sack foreshadowed things to come for Staubach, who was sacked seven times on the day. The Steelers managed to get one first down and advanced to their own 40-yard line, but then they too were forced to punt. Steelers punter Bobby Walden fumbled the snap but managed to recover it, and Dallas took over on the Steelers' 29-yard line. On the very next play, Staubach threw a 29-yard touchdown pass to wide receiver Drew Pearson, taking a 7–0 lead. The score was the first touchdown the Steelers' defense had permitted in the first quarter in 1975.
Instead of trying to immediately tie the game on a long passing play, the Steelers ran the ball on the first four plays of their ensuing possession, and then quarterback Terry Bradshaw completed a 32-yard pass to wide receiver Lynn Swann to reach the Cowboys' 16-yard line. Swann soared over the outstretched reach of defensive back Mark Washington before tight-roping the sideline to make the reception. Two running plays further advanced the ball to the 7-yard line. Then on third down and one, the Steelers managed to fool the Cowboys. Pittsburgh brought in three tight ends, which usually signals a running play (Steelers guard Gerry Mullins was also an eligible receiver on the play as he moved to the tight end position). After the snap, tight end Randy Grossman faked a block to the inside as if it were a running play, but then ran a pass route into the end zone, and Bradshaw threw the ball to him for a touchdown, tying the game, 7–7. This marked the first Super Bowl that both teams scored in the first quarter.
Dallas responded on their next drive, advancing the ball 51 yards, all rushing, (30 of them on five carries from fullback Robert Newhouse) before incurring a third down false start penalty, and scoring on kicker Toni Fritsch's 36-yard field goal to take a 10–7 lead early in the second quarter. The 51 rushing yards the Cowboys amassed on the drive tripled what the Minnesota Vikings gained against Pittsburgh for all of Super Bowl IX. The Steelers subsequently advanced to the Cowboys' 36-yard line on their next possession, but on fourth down and two, Bradshaw's pass was broken up by Dallas safety Cliff Harris.
Later in the period, Dallas drove to the Steelers' 20-yard line. But in three plays, the Cowboys lost 25 yards. On first down, Newhouse was tackled for a 3-yard loss by linebacker Andy Russell. Then Greenwood sacked Staubach for a 12-yard loss. And on third down, Staubach was sacked again, this time for a 10-yard loss, by defensive end Dwight White. The sacks pushed Dallas out of field goal range and they were forced to punt. The Steelers' offense got the ball back at their own 6-yard line with 3:47 left in the half. On the drive, Bradshaw completed a 53-yard pass to Swann to advance the ball to the Cowboys' 37-yard line; Swann's catch has become one of the most memorable acrobatic catches in Super Bowl history. On the very next play, Bradshaw just missed connecting with Swann at the Dallas 6. Pittsburgh drove to the 19-yard line after the two-minute warning, but the drive stalled there and ended with no points after Gerela missed a 36-yard field goal attempt with 22 seconds remaining in the period.
Early in the third quarter, Pittsburgh got a great scoring opportunity when defensive back J. T. Thomas intercepted a pass from Staubach and returned it 35 yards to the Cowboys' 25-yard line. However, once again the Steelers failed to score as the Dallas defense kept Pittsburgh out of the end zone and Gerela missed his second field goal, a 33-yard attempt. After the miss, Harris mockingly patted Gerela on his helmet and thanked him for "helping Dallas out," but was immediately thrown to the ground by Steeler linebacker Jack Lambert. Lambert could have been ejected from the game for defending his teammate, but the officials decided to allow him to remain.
The third quarter was completely scoreless and the Cowboys maintained their 10–7 lead going into the final period. However, early in the fourth quarter, Dallas punter Mitch Hoopes was forced to punt from inside his own goal line. As Hoopes stepped up to make the kick, Steelers running back Reggie Harrison broke through the line and blocked the punt. The ball went through the end zone for a safety, cutting the Dallas lead to 10–9. It was the second safety recorded in Super Bowl history, the first occurring a year earlier when White downed Minnesota's Fran Tarkenton on a fumble recovery in the end zone. Then Steelers running back Mike Collier returned the free kick 25 yards to the Cowboys' 45-yard line. Dallas halted the ensuing drive at the 20-yard line, but this time Gerela successfully kicked a 36-yard field goal to give Pittsburgh their first lead of the game, 12–10. Then on the first play of the Cowboys' next drive, Steelers defensive back Mike Wagner intercepted a pass from Staubach and returned it 19 yards to the Dallas 7-yard line. Wagner's interception came off the same play Dallas used to score their opening touchdown. Instead of surveying the middle of the field, Wagner watched Pearson and recognized the pattern. Staubach later said: "It was our bread and butter play all season long. It was the first time it didn't work." The Cowboys defense again managed to prevent a touchdown, but Gerela kicked an 18-yard field goal to increase the Steelers lead to 15–10.
The Steelers forced a punt and regained possession of the ball on their own 30-yard line with 4:25 left in the final period, giving them a chance to either increase their lead or run out the clock to win the game. But after two plays, the Steelers found themselves facing 3rd-and-4 on their own 36-yard line. Assuming that the Cowboys would be expecting a short pass or a run, Bradshaw decided to try a long pass and told Swann in the huddle to run a deep post pattern. As Bradshaw dropped back to pass, Harris and linebacker D.D. Lewis both blitzed in an attempt to sack him. But Bradshaw managed to dodge Lewis and throw the ball just before being leveled by Harris and lineman Larry Cole, who landed a helmet-to-helmet hit on Bradshaw. Swann then caught the ball at the 5-yard line and ran into the end zone for a 64-yard touchdown completion. Bradshaw never did see Swann's catch or the touchdown since Cole's hit to Bradshaw's helmet knocked him out of the game with a head injury. It was only after he was assisted to the locker room that he was told what happened.
After play resumed, Gerela missed the extra point attempt, but the Steelers now had a 21–10 lead with 3:02 left in the game, and the Cowboys needed two touchdowns to come back.
Staubach then led his team 80 yards in 5 plays on the ensuing drive, scoring on a 34-yard touchdown pass to wide receiver Percy Howard and cutting the deficit to 21–17. (Howard's touchdown reception was the only catch of his NFL career; he was not mentioned by name by John Facenda in the highlight package produced by NFL Films.) After Gerry Mullins recovered Dallas' onside kick attempt, the Steelers tried to run out the clock with four straight running plays, but the Cowboys defense stopped them on fourth down at the Dallas 39-yard line, giving Dallas one more chance to win. Some questioned why Steelers head coach Chuck Noll would elect to go for it on fourth down but, as later explained by NFL Films, his entire kicking game had been suspect all game long: Gerela had missed an extra point and two field goals, while Walden had fumbled a snap on a punt and nearly had two punts blocked. (Gerela's problems may have begun on the opening kickoff, when he was forced to make a touchdown-saving tackle on Hollywood Henderson.)
With 1:22 left in the game, Staubach started the drive with an 11-yard scramble to midfield, then followed with a 12-yard completion to Preston Pearson at the Steelers' 38-yard line. Pearson inexplicably ran toward the middle of the field rather than out of bounds to stop the clock. On the next play, Staubach couldn't handle a low snap but managed to recover the ball and throw it downfield for an incompletion. On second down with 12 seconds left, he threw a pass intended for Howard in the end zone, but the ball bounced off Howard's helmet and a Hail Mary replay was not to be. Had Howard positioned himself inches back from where he stood in the end zone as the ball came down, he would have had a better opportunity to catch it and write himself into Cowboys folklore. Then on third down, Staubach once again tried to complete a pass to Howard in the end zone, but the ball was tipped by Wagner into the arms of safety Glen Edwards for an interception as time expired, sealing Pittsburgh's victory.
Bradshaw finished the game with 9 completions on 19 attempts for 209 yards and two touchdowns, with no interceptions, and added another 16 yards rushing the ball. Staubach completed 15 of 24 passes for 204 yards and two touchdowns with three interceptions; he also rushed for 22 yards on five carries, but was sacked seven times. Steelers running back Franco Harris was the leading rusher of the game with 82 rushing yards, and also caught a pass for 26 yards. Newhouse was the Cowboys' top rusher with 56 yards, and caught two passes for 12 yards. Greenwood recorded what would have been a Super Bowl record four sacks, but the mark has gone unrecognized because the NFL did not officially record sacks until 1982.
The game was remembered as the most exciting of the first 10 Super Bowl games. Swann's heroics and Lambert's 14 tackles and throw-down of Cliff Harris are the indelible images from the game. After being benched to start the 1974 campaign and being booed for most of his first four seasons in Pittsburgh, Bradshaw became the first quarterback to throw two game-winning touchdown passes in Super Bowl competition. The Steelers' bid for three consecutive championships ended in a 24–7 loss to the Oakland Raiders in the 1976 AFC Championship Game, after a season in which Pittsburgh's defense shut out five opponents and allowed only 28 points in a 9-game span. The loss to Pittsburgh, coupled with an early playoff exit in 1976, largely influenced the Cowboys to draft Tony Dorsett in the 1977 Draft to help infuse life into Dallas' offense. Dorsett helped lead Dallas to a Super Bowl XII victory over the Denver Broncos, who had defeated the Steelers in the first round of the playoffs that year.
Pittsburgh and Dallas would battle in another thriller in Super Bowl XIII (also played in Miami). The result was the same, as the Steelers prevailed 35–31. But Super Bowl X was the game that began the rivalry between the two storied franchises. The Cowboys gained a measure of revenge by defeating the Steelers 27–17 in Super Bowl XXX following the 1995 season.
This was the final football game to be played on artificial turf (specifically, Poly-Turf) at the Orange Bowl. The surface in 1976 reverted to natural grass, and remained so until the stadium's closure in 2007. Poly-Turf was first installed at the Orange Bowl in 1970 and replaced in 1972, but players complained often of the slickness of the surfaces, and fields became discolored due to the intense sunshine common to south Florida.
Sources: NFL.com Super Bowl X, Super Bowl X Play Finder Pit, Super Bowl X Play Finder Dal
The following records were set in Super Bowl X, according to the official NFL.com box score, the 2016 NFL Record & Fact Book, and the Pro-Football-Reference.com game summary. Some records require an NFL minimum number of attempts to be recognized; the minimums are shown in parentheses.
Turnovers are defined as the number of times a team loses the ball on interceptions and fumbles.
This was the first Super Bowl in which the referee wore a wireless microphone to announce penalties and other rulings to the audience in the stadium, those listening on radio and those watching on television. The idea was pioneered by Cowboys GM Tex Schramm.
Norm Schachter retired following this game and became an officiating supervisor. He became the first official to serve as referee for three Super Bowls, a mark later equaled by Jim Tunney, Pat Haggerty, Bob McElwee and Terry McAulay, and surpassed by Jerry Markbreit with four.
"Note: A seven-official system was not used until 1978" | https://en.wikipedia.org/wiki?curid=29136 |
Opel
Opel Automobile GmbH () is a German automobile manufacturer, a subsidiary of French automaker Groupe PSA since August 2017. From 1929 until 2017, Opel was owned by American automaker General Motors. Opel vehicles are sold in Great Britain under the Vauxhall brand. Some Opel vehicles were badge-engineered in Australia under the Holden brand until 2020 and in North America and China under the Buick, Saturn, and Cadillac brands.
Opel traces its roots to a sewing machine manufacturer founded by Adam Opel in 1862 in Rüsselsheim am Main. The company began manufacturing bicycles in 1886 and produced its first automobile in 1899. After listing on the stock market in 1929, General Motors took a majority stake in Opel and then full control in 1931, establishing an American ownership of the German automaker for nearly 90 years.
In March 2017, Groupe PSA agreed to acquire Opel from General Motors for €2.2 billion, making the French automaker the second biggest in Europe, after Volkswagen.
Opel is headquartered in Rüsselsheim am Main, Hesse, Germany. The company designs, engineers, manufactures and distributes Opel-branded passenger vehicles, light commercial vehicles, and vehicle parts and together with its English sister marque Vauxhall they are present in over 60 countries around the world.
The company was founded in Rüsselsheim, Hesse, Germany, on 21 January 1862, by Adam Opel. In the beginning, Opel produced sewing machines. In 1888, production was relocated from a cowshed to a more spacious building in Rüsselsheim. Opel launched a new product in 1886: he began to sell high-wheel bicycles, also known as penny-farthings. Opel's two sons participated in high-wheel bicycle races, thus promoting this means of transportation. The production of high-wheel bicycles soon exceeded the production of sewing machines. At the time of Opel's death in 1895, he was the leader in both markets.
The first cars were designed in 1898 after Opel's widow, Sophie, and their two eldest sons entered into a partnership with Friedrich Lutzmann, a locksmith at the court in Dessau in Saxony-Anhalt, who had been working on automobile designs for some time. The first Opel production Patent Motor Car was built in Rüsselsheim in early 1899. These cars were not very successful (a total of 65 motor cars were delivered: 11 in 1899, 24 in 1900, and 30 in 1901), and the partnership was dissolved after two years. Opel then signed a licensing agreement in 1901 with the French company Automobiles Darracq to manufacture vehicles under the brand name Opel Darracq. These cars consisted of Opel bodies mounted on Darracq chassis, powered by two-cylinder engines.
The company first showed cars of its own design at the 1902 Hamburg Motor Show, and started manufacturing them in 1906, with Opel Darracq production being discontinued in 1907.
In 1909, the Opel 4/8 PS model, known as the "Doktorwagen" ("Doctor's Car") was produced. Its reliability and robustness were appreciated by physicians, who drove long distances to see their patients back when hard-surfaced roads were still rare. The "Doktorwagen" sold for only 3,950 marks, about half as much as the luxury models of its day.
In 1911, the company's factory was virtually destroyed by fire and a new one was built with more up-to-date machinery.
In the early 1920s, Opel became the first German car manufacturer to incorporate a mass-production assembly line in the building of their automobiles. In 1924, they used their assembly line to produce a new open two-seater called the "Laubfrosch" (Tree frog). The Laubfrosch was finished exclusively in green lacquer. The car sold for an expensive 3,900 marks (expensive considering the less expensive manufacturing process), but by the 1930s, this type of vehicle would cost a mere 1,930 marks – due in part to the assembly line, but also due to the skyrocketing demand for cars. Adam Opel led the way for motorised transportation to become not just a means for the rich, but also a reliable way for people of all classes to travel.
Opel had a 37.5% market share in Germany and was also the country's largest automobile exporter in 1928. The "Regent" – Opel's first eight-cylinder car – was offered. The RAK 1 and RAK 2 rocket-propelled cars made sensational record-breaking runs.
In March 1929, General Motors (GM), impressed by Opel's modern production facilities, bought 80% of the company, increasing this to 100% in 1931. The Opel family gained $33.3 million from the transaction. Subsequently, during 1935, a second factory was built at Brandenburg for the production of "Blitz" light trucks. In 1929 Opel licensed design of the radical Neander motorcycle, and produced it as the Opel Motoclub in 1929 and 1930, using Küchen, J.A.P., and Motosacoche engines. Fritz von Opel famously attached solid-fuel rockets to his Motoclub in a publicity stunt, riding the rocket-boosted motorcycle at the Avus racetrack.
In 1935, Opel became the first German car manufacturer to produce over 100,000 vehicles a year. This was based on the popular Opel P4 model, which sold for a mere 1,650 marks and had a 1.1 L four-cylinder engine.
Opel also produced the first mass-production vehicle in Germany with a self-supporting ("unibody") all-steel body, closely following the 1934 Citroën Traction Avant. This was one of the most important innovations in automotive history. They called the car, launched in 1935, the Olympia. With its small weight and aerodynamics came an improvement in both performance and fuel consumption. Opel received a patent on this technology.
The 1930s was a decade of growth, and by 1937, with 130,267 cars produced, Opel's Rüsselsheim plant was Europe's top car plant in terms of output, while ranking seventh worldwide.
1938 saw the presentation of the highly successful Kapitän, with a 2.5 L six-cylinder engine, all-steel body, front independent suspension, hydraulic shock absorbers, hot-water heating (with electric blower), and central speedometer. 25,374 Kapitäns left the factory before the intensification of World War II brought automotive manufacturing to a temporary stop in the autumn of 1940, by order of the government.
Opel automobile production ended in October 1940, after the company's American leadership had rejected an "invitation" to switch to munitions manufacture a few months earlier. In 1942 Opel switched to wartime production, making aircraft parts and tanks. They kept manufacturing trucks at the Brandenburg plant, where the 3.6-liter Opel Blitz truck had been built since 1938. These trucks were also built under license by Daimler-Benz in Mannheim.
After the end of the war, with the Brandenburg plant dismantled and transported to the Soviet Union, and 47% of the buildings in Rüsselsheim destroyed, former Opel employees began to rebuild the Rüsselsheim plant. The first postwar Opel Blitz truck was completed on 15 July 1946 in the presence of United States Army General Geoffrey Keyes and other local leaders and press reporters. Opel's Rüsselsheim plant also made Frigidaire refrigerators in the early post-war years.
During the 1970s and 1980s, the Vauxhall and Opel ranges were rationalised into one consistent range across Europe.
By the 1970s, Opel had emerged as the stronger of GM's two European brands; Vauxhall was the third-best selling brand in Great Britain after the British Motor Corporation (later British Leyland) but made only a modest impact elsewhere. The two companies were direct competitors outside of each other's respective home markets, and GM followed the precedent set by US automaker Ford, which had merged its English and German subsidiaries in the late 1960s. Opel and Vauxhall had loosely collaborated before, but serious efforts to merge the two companies' operations and product families into one did not start until the 1970s, during which Vauxhall's complete product line was replaced by vehicles built on Opel-based platforms; the only exception was the Bedford CF panel van, the only purely Vauxhall design, which was marketed as an Opel on the Continent. By the turn of the 1980s, the two brands were, in effect, one and the same.
Opel's first turbocharged car was the Opel Rekord 2.3 TD, first shown at Geneva in March 1984.
In the 1990s, Opel was considered GM's cash cow, with profit margins similar to Toyota's. Opel's profits helped offset GM's losses in North America and fund GM's expansion into Asia. 1999 would turn out to be Opel's last profitable full year for almost 20 years.
Following the 2008 global financial crisis, on 10 September 2009, GM agreed to sell a 55% stake in Opel to the Magna group with the approval of the German government. The deal was later called off.
With ongoing restructuring plans, Opel announced the closure of its Antwerp plant in Belgium by the end of 2010.
In 2010, Opel announced that it would invest around €11 billion in the next five years. €1 billion of that was designated solely for the development of innovative and fuel-saving engines and transmissions.
On 29 February 2012, Opel announced the creation of a major alliance with PSA Peugeot Citroën, with GM taking a 7% share of PSA and becoming PSA's second-largest shareholder after the Peugeot family. The alliance was intended to enable $2 billion per year of cost savings through platform sharing, common purchasing, and other economies of scale. In December 2013, GM sold its 7% interest in PSA for £250 million after the planned cost savings failed to materialize. Opel was said to be among Europe's most aggressive discounters in the mass market. GM reported a 2016 loss of US$257 million from its European operations, and is reported to have lost about US$20 billion in Europe since 1999.
Opel's plant in Bochum closed in December 2014, after 52 years of activity, due to overcapacity.
Opel withdrew from China, where it had a network of 22 dealers, in early 2015 after General Motors decided to withdraw its Chevrolet brand from Europe starting in 2016.
In March 2017, Groupe PSA agreed to buy Opel, its English sister brand Vauxhall, and their European auto-lending business from General Motors for €2.2 billion. In return, General Motors agreed to pay PSA US$3.2 billion for future European pension obligations and to keep managing US$9.8 billion worth of plans for existing retirees. Furthermore, GM remains responsible for paying about US$400 million annually for 15 years to fund the existing British and German pension plans.
In June 2017, Michael Lohscheller, Opel's chief financial officer, replaced Karl-Thomas Neumann as CEO.
In the 2018 financial year, Opel achieved an operating income of €859 million, its first positive full-year result since 1999.
Opel operates 10 vehicle, powertrain, and component plants and four development and test centres in six countries, and employs around 30,000 people in Europe. The brand sells vehicles in more than 60 markets worldwide. Other plants are in Eisenach and Kaiserslautern, Germany; Szentgotthárd, Hungary; Figueruelas, Spain; Gliwice, and Tychy, Poland; Aspern, Austria; Ellesmere Port, and Luton, Great Britain. The Dudenhofen Test Center is located near the company's headquarters and is responsible for all technical testing and vehicle validations.
Around 6,250 people are responsible for the engineering and design of Opel/Vauxhall vehicles at the International Technical Development Center and European Design Center in Rüsselsheim. All in all, Opel plays an important role in Groupe PSA's global R&D footprint.
As of 2014 "Opel Group GmbH" Is the contracted original equipment manufacturer (OEM) of Opel/Vauxhall. "Adam Opel AG" is the main supplier (tier 1) for the OEM; all subsidiaries are tier 2 suppliers. Opel Group and Adam Opel are both first-tier subsidiaries of "General Motors Holdings, LLC" and second-tier subsidiaries of "General Motors Corporation (GMC)."
Plant controlled as first-tier subsidiary of "General Motors Europe Limited," second-tier subsidiary of "GM CME Holdings CV" and third-tier subsidiary of "General Motors Corporation (GMC):"
The first Opel logo contained the letters "A" and "O" – the initials of the company's founder, Adam Opel. The A was in bronze, the O kept in red.
In 1886, Opel expanded and started to produce bicycles. Around 1890, the logo was completely redesigned. The new logo also contained the words "Victoria Blitz" (referring to Lady Victory; the company was certain of the triumph of its bicycles). The word "Blitz" (English: lightning) thus first appeared at that time, though without a pictorial symbol.
Another redesign was commissioned in 1909. The new logo was much more spirited and contained only the company name Opel. It was placed on the motorcycles that they had started to produce in 1902, and on the first cars which were produced in 1909.
In 1910, the logo was the shape of an eye, and it was surrounded by laurels, with the text "Opel" in the centre.
From the mid-1930s to the 1960s, passenger cars carried a ring crossed by a stylised flying object pointing to the left, which could be interpreted as a zeppelin; the same object was also used as a forward-pointing hood ornament. In some versions it looked like an arrow; in others, like an aeroplane or a bird.
Besides the hood ornament flying through the ring, Opel also used a coat of arms in various forms, mostly combining white and yellow, a shade of yellow that remains typical of Opel to this day. One version was oval, half white and half yellow, with the Opel name in black at the centre.
The origin of the lightning in the 2012 Opel logo lies in the Opel Blitz truck (German "Blitz" = English "lightning"), which had been a commercial success, widely used also within the Wehrmacht, Nazi Germany's military. Originally, the logo for this truck consisted of two stripes arranged loosely like a lightning symbol with the words "Opel" and "Blitz" in them; in later 1950s models this was simplified to the horizontal lightning form that appears in the current Opel logo. The jag in the lightning still follows the original "Opel Blitz" text stripes, in the form of a horizontally stretched letter "Z".
By the end of the 1960s, the two forms merged, and the horizontal lightning replaced the flying object in the ring, producing the basic design that has been used, with variations, ever since. Through all its variations, this logo is simple and unique, easily recognisable and reproducible with just two strokes of a pen.
In the 1964 version, the lightning with a ring was used in a yellow rectangle, with the Opel writing below. The whole logo was again delimited by a black rectangle. The basic form and proportions of the Blitz logo have remained unchanged since the 1970 version, which made the lightning tails shorter so that the logo could fit proportionately within a yellow square, meaning it could be displayed next to the 'blue square' General Motors logo. In the mid-1970s, the Vauxhall "Griffin" logo was, in turn, resized and displayed within a corresponding red square, so that all three logos could be displayed together, thus signifying the unified GM Europe.
The SC Opel Rüsselsheim is a soccer club with over 450 members. RV 1888 Opel Rüsselsheim is a cycling club.
Opel's corporate tagline as of June 2017 is "The Future Is Everyone's" (German: "Die Zukunft gehört allen"). Taglines over the years include:
New Ideas, Better Cars (1997-2002)
Discover (2006-2010)
Wir Leben Autos (English: "We Live Cars", 2010-2017)
The Future is Everyone's (2017)
Opel currently has partnerships with association football clubs such as Bundesliga clubs Borussia Dortmund and 1. FSV Mainz 05.
Opel cooperates with the French oil and gas company Total on plans for a battery cell factory. In football, Opel was a partner of Milan from 1994 until 2006 (and previously of Fiorentina from 1983 until 1986) in Italy, of Paris Saint-Germain in France from 1995 until 2002, and of Bayern Munich in Germany from 1989 until 2002.
The Opel brand is present in most of Europe, in parts of North Africa, in South Africa, the Middle East (EMEA), in Chile, and in Singapore. Their models have been rebadged and sold in other countries and continents, such as Vauxhall in Great Britain, Chevrolet in Latin America, Holden in Australia and New Zealand, and previously, Saturn in the United States and Canada. Following the demise of General Motors Corporation's Saturn division in North America, Opel cars are currently rebadged and sold in the United States, Canada, Mexico, and China under the Buick name, with models such as the Opel Insignia/Buick Regal, Opel Astra sedan/Buick Verano (both of which share underpinnings with the Chevrolet Cruze), and Opel Mokka/Buick Encore.
In 2017, GM confirmed plans of a "hybrid global brand" which includes Vauxhall, Opel and Buick to use more synergies between the brands.
Opel cars appeared under their own name in the US from 1958 to 1975, when they were sold through Buick dealers as captive imports. The best-selling Opel models in the US were the 1964 to 1972 Opel Kadett, the 1971 to 1975 Opel Manta, and the 1968 to 1973 Opel GT. (The name "Opel" was also applied from 1976 to 1980 to vehicles manufactured by Isuzu (similar to the "Isuzu I-mark"), but mechanically those were entirely different cars).
Historically, Opel vehicles have also been sold at various times in the North American market as either heavily modified or "badge-engineered" models under the Chevrolet, Buick, Pontiac, Saturn, and Cadillac brands. For instance, the J-body platform, which was largely developed by Opel, was the basis of North American models such as the Chevrolet Cavalier and Cadillac Cimarron. Below is a list of current or recent Opel models sold under GM's North American brands.
The last two generations of the Buick Regal have been rebadged versions of the Opel Insignia. The main differences are the modified radiator grille and the altered colour of the passenger compartment illumination (blue instead of red). The Regal GS is comparable to the Insignia OPC. It was first assembled alongside the Insignia at the Opel plant in Rüsselsheim; in the first quarter of 2011, production moved to the flexible assembly line at the GM plant in Oshawa, Canada.
The Buick Cascada is a rebadged Opel Cascada, built in Poland and sold in the United States unchanged from the Opel in all but badging.
Unlike the vehicles listed above, the Buick LaCrosse is not a rebadged version of an Opel model. However, it is based on a long-wheelbase version of the Opel-developed Epsilon II-platform, so shares many key components with the Opel Insignia and thereby the Buick Regal.
The Astra H was sold in the US as the Saturn Astra for model years 2008 and 2009.
The Saturn L-Series was a modified version of the Opel Vectra B. Though the Saturn had different exterior styling and had plastic door panels, it shared the same body shape as the Opel. Both cars rode on the GM2900 platform. The Saturn also had a different interior, yet shared some interior parts, such as the inside of the doors.
The second generation of the Saturn VUE, introduced in 2007 for the 2008 model year, was a rebadged version of the German-designed Opel Antara, manufactured in Mexico. After the demise of the Saturn brand, the VUE was discontinued, but the car continued to be produced and sold as Chevrolet Captiva Sport in Mexican and South American markets. The Chevrolet Captiva Sport was introduced for the US commercial and fleet markets in late 2011 for the 2012 model.
The Opel Omega B was sold in the US as the Cadillac Catera.
Opel exports a variety of models to Algeria, Egypt, Morocco, and South Africa.
The 2015 Opel range in South Africa comprises the Opel Adam, Opel Astra, Opel Corsa, Opel Meriva, Opel Mokka, and Opel Vivaro. No diesel versions are offered.
From 1986 to 2003, Opel models were produced by Delta Motor Corporation, a company created through a management buyout following GM's divestment from apartheid South Africa. Delta assembled the Opel Kadett, with the sedan version called the Opel Monza. This was replaced by the Opel Astra, although the Kadett name was retained for the hatchback and considered a separate model. A version of the Rekord Series E remained in production after the model had been replaced by the Omega in Europe, as did a Commodore model unique to South Africa, combining the bodyshell of the Rekord with the front end of the revised Senator. The Opel Corsa was introduced in 1996, with kits of the Brazilian-designed sedan and pick-up (known in South African English as a "bakkie") being locally assembled.
Although GM's passenger vehicle line-up in South Africa consisted of Opel-based models by the late 1970s, these were sold under the Chevrolet brand name, with only the Kadett being marketed as an Opel when it was released in 1980. In 1982, the Chevrolet brand name was dropped, with the Ascona, Rekord, Commodore, and Senator being rebadged as Opels.
Many Opel models or models based on Opel architectures have been sold in Australia and New Zealand under the Holden marque, such as the Holden Barina (1994-2005), which were rebadged versions of the Opel Corsa; the Holden Astra, a version of the Opel Astra; and the Captiva 5, a version of the Opel Antara. In New Zealand, the Opel Kadett and Ascona were sold as niche models by General Motors New Zealand in the 1980s, while the Opel brand was used on the Opel Vectra until 1994.
For the first time ever, the Opel brand was introduced to Australia on 1 September 2012, including the Corsa, Astra, Astra GTC, and Insignia models. On 2 August 2013, Opel announced it was ending exports to Australia due to poor sales, with only 1,530 vehicles sold in the first ten months.
After the closure of Opel Australia, Holden imports newer Opel models such as the Astra GTC (ceased 1 May 2017), Astra VXR (Astra OPC), Cascada (ceased 1 May 2017), and Insignia VXR (Insignia OPC, ceased 1 May 2017), under the Holden badge. The 2018 5th-gen Holden Commodore ZB is a badge-engineered Opel Insignia, replacing the Australian-made, rear-wheel drive Commodore with the German-made front-wheel/all-wheel drive Insignia platform.
Opel's presence in China recommenced in 2012 with the Antara, and added the Insignia estate in 2013. Opel-derived models are also sold as Buick. On 28 March 2014, Opel announced that it would leave China in 2015.
Opel was long General Motors' strongest marque in Japan, with sales peaking at 38,000 in 1996. However, the brand was withdrawn from the Japanese market in December 2006, with just 1,800 sales there in 2005. Since then, Opel has not sold any cars or SUVs in Japan. Opel is returning to the Japanese market in 2021.
A wide range of Opel models are exported to Singapore.
Opel was marketed in Malaysia from the 1970s; early exported models included the Kadett, Gemini, and Manta. Opel had moderate sales from the 1980s until the early 2000s, when Malaysian car buyers came to favour Japanese and Korean brands such as Toyota, Honda, Hyundai (Inokom) and Kia (Naza), which offered more competitive prices. Sales of Opel cars in Malaysia then dropped, as Opel's prices were slightly higher than those of same-segment Japanese, Korean, and local Proton and Perodua cars, and Opels were harder to maintain, had poor after-sales service, and lacked readily available spare parts.
Opel was withdrawn from the Malaysian market in 2003; the last models sold were the Zafira, Astra, and Vectra, along with the rebadged Isuzu MU sold as the Frontera, later replaced by Chevrolet.
The Astra F and Vectra B were manufactured in Taiwan by the CAC company. Before that, the Kadett E and Omega A were imported to the Taiwanese market. Other models, including the Astra G/H, Corsa B/C, Omega B, and Zafira A/B, were also imported to Taiwan.
Several Opel models were sold across Latin America for decades with Chevrolet badges, including the Corsa, Astra, Vectra, Meriva, and Zafira. In the 2010s, the Chevrolet line-up changed to adopt North American models such as the Spark, Sonic, and Cruze.
Opel has exported a wide range of products to Chile since 2011.
In the 1980s, Opel became the sole GM brand name in Ireland, with the Vauxhall brand having been dropped. Vauxhall's Managing Director has also been Opel Ireland's Chief Executive since 2015.
There were two Opel-franchised assembly plants in Ireland in the 1960s. One in Ringsend, Dublin, was operated by Reg Armstrong Motors, which also assembled NSU cars and motorcycles. The second assembly plant was based in Cork and operated by O'Shea's, which also assembled Škoda cars and Zetor tractors. The models assembled were the Kadett and the Rekord. From 1966, the Admiral was imported as a fully built unit and became a popular seller.
Opel has produced five winners of the European Car of the Year competition:
Several models have been shortlisted, including the:
From the late 1930s to the 1980s, terms from the German Navy ("Kapitän, Admiral, Kadett") and from other official sectors ("Diplomat, Senator") were often used as model names. Since the late 1980s, the model names of Opel passenger cars end with an a. As Opels were no longer being sold in Great Britain, no need remained to have separate model names for essentially identical Vauxhall and Opel cars (although some exceptions were made to suit the British market). The last series to be renamed across the two companies was the Opel Kadett, being the only Opel to take the name of its Vauxhall counterpart, as Opel Astra. Although only two generations of Astra were built prior to the 1991 model, the new car was referred to across Europe as the Astra F, referring to its Kadett lineage. Until 1993, the Opel Corsa was known as the Vauxhall Nova in Great Britain, as Vauxhall had initially felt that Corsa sounded too much like "coarse", and would not catch on.
Exceptions to the nomenclature of ending names with an "a" include the under-licence-built Monterey, the Speedster (also known as the Vauxhall VX220 in Great Britain), the GT (which was not sold at all as a Vauxhall, despite the VX Lightning concept), the Signum, the Karl, and the Adam. The Adam was initially supposed to be called "Junior", its developmental codename, not least because the name "Adam" had no history or importance to the Vauxhall marque.
Similar to the passenger cars, the model names of commercial vehicles end with an "o" (Combo, Vivaro, Movano), except for the Corsavan and Astravan, whose names derive from the passenger models on which they are based.
Another unique aspect of Opel nomenclature is its use of the "Caravan" (originally styled 'Car-A-Van') name to denote its station wagon body configuration, similar to Volkswagen's "Variant" or Audi's "Avant" designations. The company observed this practice for many decades; it finally ceased with the 2008 Insignia and 2009 Astra, for which the name "Sports Tourer" is now used for the estate/station wagon versions.
The following tables list current and announced Opel production vehicles as of 2019:
The Opel Rally Team took part in the World Rally Championship in the early 1980s with the Opel Ascona 400 and the Opel Manta 400, developed in conjunction with Irmscher and Cosworth. Walter Röhrl won the 1982 World Rally Championship drivers' title, and Ari Vatanen won the 1983 Safari Rally.
In the late 1990s, Opel took part in the International Touring Car Championship, and won the 1996 Championship with the Calibra. Opel took part in the German DTM race series between 2000 and 2005 with the Astra, and despite winning several races, it never won the DTM championship.
Opel returned to motorsport competition with the Adam in 2013.
In 2014, Opel presented a road-legal sport version of the Adam R2 rally car, the Opel Adam S, powered by a 1.4 L turbocharged engine generating 150 hp. The car accelerates from 0 to 100 km/h in just 8.5 seconds. | https://en.wikipedia.org/wiki?curid=22284 |
Oligocene
The Oligocene is a geologic epoch of the Paleogene Period and extends from about 33.9 million to 23 million years before the present. As with other older geologic periods, the rock beds that define the epoch are well identified but the exact dates of the start and end of the epoch are slightly uncertain. The name Oligocene was coined in 1854 by the German paleontologist Heinrich Ernst Beyrich; the name comes from the Ancient Greek "olígos" ("few") and "kainós" ("new"), and refers to the sparsity of extant forms of molluscs. The Oligocene is preceded by the Eocene Epoch and is followed by the Miocene Epoch. The Oligocene is the third and final epoch of the Paleogene Period.
The Oligocene is often considered an important time of transition, a link between the archaic world of the tropical Eocene and the more modern ecosystems of the Miocene. Major changes during the Oligocene included a global expansion of grasslands, and a regression of tropical broad leaf forests to the equatorial belt.
The start of the Oligocene is marked by a notable extinction event called the Grande Coupure; it featured the replacement of European fauna with Asian fauna, except for the endemic rodent and marsupial families. By contrast, the Oligocene–Miocene boundary is not set at an easily identified worldwide event but rather at regional boundaries between the warmer late Oligocene and the relatively cooler Miocene.
Oligocene faunal stages from youngest to oldest are:
The Paleogene Period's general temperature decline was interrupted by a stepwise, seven-million-year Oligocene climate change. A deeper temperature depression of 8.2 °C, lasting 400,000 years, preceded the 2 °C, seven-million-year stepwise change at 33.5 Ma (million years ago). The stepwise climate change began 32.5 Ma and lasted through to 25.5 Ma, as depicted in the PaleoTemps chart. The Oligocene climate change comprised a global increase in ice volume and a 55 m (181 ft) decrease in sea level (35.7–33.5 Ma), with a closely related (25.5–32.5 Ma) temperature depression. The seven-million-year depression terminated abruptly within 1–2 million years of the La Garita Caldera eruption at 28–26 Ma. A deep, 400,000-year glaciated Oligocene–Miocene boundary event is recorded at McMurdo Sound and King George Island.
During this epoch, the continents continued to drift toward their present positions. Antarctica became more isolated and finally developed an ice cap.
Mountain building in western North America continued, and the Alps started to rise in Europe as the African plate continued to push north into the Eurasian plate, isolating the remnants of the Tethys Sea. A brief marine incursion marks the early Oligocene in Europe. Marine fossils from the Oligocene are rare in North America. There appears to have been a land bridge in the early Oligocene between North America and Europe, since the faunas of the two regions are very similar. Sometime during the Oligocene, South America finally detached from Antarctica and drifted north towards North America. This separation also allowed the Antarctic Circumpolar Current to flow, rapidly cooling the Antarctic continent.
Angiosperms continued their expansion throughout the world as tropical and sub-tropical forests were replaced by temperate deciduous forests. Open plains and deserts became more common and grasses expanded from their water-bank habitat in the Eocene moving out into open tracts. However, even at the end of the period, grass was not quite common enough for modern savannas.
In North America, subtropical species dominated with cashews and lychee trees present, and temperate trees such as roses, beeches, and pines were common. The legumes spread, while sedges, bulrushes, and ferns continued their ascent.
The increasingly open landscapes allowed animals to grow to larger sizes than they had in the Paleocene epoch, 30 million years earlier. Marine faunas became fairly modern, as did terrestrial vertebrate fauna on the northern continents. This was probably more a result of older forms dying out than of more modern forms evolving. Many groups, such as equids, entelodonts, rhinos, merycoidodonts, and camelids, became better runners during this time, adapting to the plains that were spreading as the Eocene rainforests receded. The first felid, "Proailurus", originated in Asia during the late Oligocene and spread to Europe.
South America was isolated from the other continents and had evolved a quite distinct fauna by the Oligocene. The South American continent was home to animals such as pyrotheres and astrapotheres, as well as litopterns and notoungulates. Sebecosuchians, terror birds, and carnivorous metatheres, like the borhyaenids remained the dominant predators.
Brontotheres died out in the Earliest Oligocene, and creodonts died out outside Africa and the Middle East at the end of the period. Multituberculates, an ancient lineage of primitive mammals that originated back in the Jurassic, also became extinct in the Oligocene, aside from the gondwanatheres. The Oligocene was home to a wide variety of mammals. A good example of this would be the White River Fauna of central North America, which were formerly a semiarid prairie home to many different types of endemic mammals, including entelodonts like "Archaeotherium", camelids (such as "Poebrotherium"), running rhinoceratoids, three-toed equids (such as "Mesohippus"), nimravids, protoceratids, and early canids like "Hesperocyon". Merycoidodonts, an endemic American group, were very diverse during this time. In Asia during the Oligocene, a group of running rhinoceratoids gave rise to the indricotheres, like "Paraceratherium", which were the largest land mammals ever to walk the Earth.
The marine animals of Oligocene oceans resembled today's fauna, such as the bivalves. Calcareous cirratulids appeared in the Oligocene. The fossil record of marine mammals is a little spotty for this time, not as well known as that of the Eocene or Miocene, but some fossils have been found. The baleen whales and toothed whales had just appeared, and their ancestors, the archaeocete cetaceans, began to decrease in diversity due to their lack of echolocation, which was very useful as the water became colder and cloudier. Other factors in their decline could include climate changes and competition with the modern cetaceans and the requiem sharks, which also appeared in this epoch. Early desmostylians, like "Behemotops", are known from the Oligocene. Pinnipeds appeared near the end of the epoch from an otter-like ancestor.
The Oligocene sees the beginnings of modern ocean circulation, with tectonic shifts causing the opening and closing of ocean gateways. Cooling of the oceans had already commenced by the Eocene/Oligocene boundary, and they continued to cool as the Oligocene progressed. The formation of permanent Antarctic ice sheets during the early Oligocene and possible glacial activity in the Arctic may have influenced this oceanic cooling, though the extent of this influence is still a matter of some significant dispute.
The opening of the Drake Passage, the opening of the Tasmanian Gateway, the closing of the Tethys seaway, and the final formation of the Greenland–Iceland–Faroes Ridge all played vital parts in reshaping oceanic currents during the Oligocene. As the continents shifted to a more modern configuration, so too did ocean circulation.
The Drake Passage is located between South America and Antarctica. Once the Tasmanian Gateway between Australia and Antarctica opened, all that kept Antarctica from being completely isolated by the Southern Ocean was its connection to South America. As the South American continent moved north, the Drake Passage opened and enabled the formation of the Antarctic Circumpolar Current (ACC), which would have kept the cold waters of Antarctica circulating around that continent and strengthened the formation of Antarctic Bottom Water (ABW). With the cold water concentrated around Antarctica, sea surface temperatures and, consequently, continental temperatures would have dropped. The onset of Antarctic glaciation occurred during the early Oligocene, and the effect of the Drake Passage opening on this glaciation has been the subject of much research. However, some controversy still exists as to the exact timing of the passage opening, whether it occurred at the start of the Oligocene or nearer the end. Even so, many theories agree that at the Eocene/Oligocene (E/O) boundary, a yet shallow flow existed between South America and Antarctica, permitting the start of an Antarctic Circumpolar Current.
Stemming from the issue of when the Drake Passage opened is the dispute over how great an influence its opening had on the global climate. While early researchers concluded that the advent of the ACC was highly important, perhaps even the trigger, for Antarctic glaciation and subsequent global cooling, other studies have suggested that the δ18O signature is too strong for glaciation to be the main trigger for cooling. Through study of Pacific Ocean sediments, other researchers have shown that the transition from warm Eocene ocean temperatures to cool Oligocene ocean temperatures took only 300,000 years, which strongly implies that feedbacks and factors other than the ACC were integral to the rapid cooling.
The latest hypothesized time for the opening of the Drake Passage is during the early Miocene. Despite the shallow flow between South America and Antarctica, there was not enough of a deep water opening to allow for significant flow to create a true Antarctic Circumpolar Current. If the opening occurred as late as hypothesized, then the Antarctic Circumpolar Current could not have had much of an effect on early Oligocene cooling, as it would not have existed.
The earliest hypothesized time for the opening of the Drake Passage is around 30 Ma. One of the possible issues with this timing was the continental debris cluttering up the seaway between the two plates in question. This debris, along with what is known as the Shackleton Fracture Zone, has been shown in a recent study to be fairly young, only about 8 million years old. The study concludes that the Drake Passage would be free to allow significant deep water flow by around 31 Ma. This would have facilitated an earlier onset of the Antarctic Circumpolar Current.
Currently, an opening of the Drake Passage during the early Oligocene is favored.
The other major oceanic gateway opening during this time was the Tasman (or Tasmanian, depending on the paper) Gateway between Australia and Antarctica. The time frame for this opening is less disputed than that of the Drake Passage and is largely considered to have occurred around 34 Ma. As the gateway widened, the Antarctic Circumpolar Current strengthened.
The Tethys Seaway was not a gateway, but rather a sea in its own right. Its closing during the Oligocene had significant impact on both ocean circulation and climate. The collisions of the African plate with the European plate and of the Indian subcontinent with the Asian plate, cut off the Tethys Seaway that had provided a low-latitude ocean circulation. The closure of Tethys built some new mountains (the Zagros range) and drew down more carbon dioxide from the atmosphere, contributing to global cooling.
The gradual separation of the clump of continental crust and the deepening of the tectonic ridge in the North Atlantic that would become Greenland, Iceland, and the Faroe Islands helped to increase the deep water flow in that area. More information about the evolution of North Atlantic Deep Water will be given a few sections down.
Evidence for ocean-wide cooling during the Oligocene exists mostly in isotopic proxies. Patterns of extinction and patterns of species migration can also be studied to gain insight into ocean conditions. For a while, it was thought that the glaciation of Antarctica may have significantly contributed to the cooling of the ocean, however, recent evidence tends to deny this.
Isotopic evidence suggests that during the early Oligocene, the main sources of deep water were the North Pacific and the Southern Ocean. As the Greenland-Iceland-Faroe Ridge sank and thereby connected the Norwegian–Greenland sea with the Atlantic Ocean, the deep water of the North Atlantic began to come into play as well. Computer models suggest that once this occurred, a more modern-looking thermohaline circulation started.
Evidence for the early Oligocene onset of chilled North Atlantic deep water lies in the beginnings of sediment drift deposition in the North Atlantic, such as the Feni and Southeast Faroe drifts.
The chilling of the Southern Ocean deep water began in earnest once the Tasmanian Gateway and the Drake Passage opened fully. Regardless of the time at which the opening of the Drake Passage occurred, the effect on the cooling of the Southern Ocean would have been the same.
Recorded extraterrestrial impacts:
La Garita Caldera (28 through 26 million years ago) | https://en.wikipedia.org/wiki?curid=22286 |
Open-source license
An open-source license is a type of license for computer software and other products that allows the source code, blueprint or design to be used, modified and/or shared under defined terms and conditions. This allows end users and commercial companies to review and modify the source code, blueprint or design for their own customization, curiosity or troubleshooting needs. Open-source licensed software is mostly available free of charge, though this does not necessarily have to be the case.
Licenses which only permit non-commercial redistribution or modification of the source code for personal use only are generally not considered as open-source licenses. However, open-source licenses may have some restrictions, particularly regarding the expression of respect to the origin of software, such as a requirement to preserve the name of the authors and a copyright statement within the code, or a requirement to redistribute the licensed software only under the same license (as in a copyleft license). One popular set of open-source software licenses are those approved by the Open Source Initiative (OSI) based on their Open Source Definition (OSD).
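The distinctions drawn above show up concretely in source files. The following sketch is illustrative only: the module and author are hypothetical, but the SPDX-License-Identifier convention and the GPL-3.0-or-later and MIT identifiers it mentions are real and widely adopted ways of declaring, respectively, a copyleft and a permissive license in a single machine-readable comment line.

```python
# example_module.py -- hypothetical file, for illustration only.
#
# A copyleft license such as the GPL obliges redistributors to keep
# derivative works under the same license; declared in SPDX form it reads:
#   SPDX-License-Identifier: GPL-3.0-or-later
#
# A permissive license such as MIT allows reuse, including in proprietary
# products, provided the attribution below is preserved:
#   SPDX-License-Identifier: MIT
#
# Copyright (c) 2024 Example Author
# (Most open-source licenses require preserving an attribution line like
# the one above when the code is redistributed.)

def hello() -> str:
    """Trivial placeholder so the file is a runnable module."""
    return "hello"

if __name__ == "__main__":
    print(hello())
```

A real project would of course carry exactly one license declaration per file; the two identifiers appear together here only to contrast the copyleft and permissive cases.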
The Free Software Foundation has related but distinct criteria for evaluating whether or not a license qualifies software as free software. Most free software licenses are also considered open-source software licenses. In the same way, the Debian project has its own criteria, the Debian Free Software Guidelines, on which the Open Source Definition is based. In the interpretation of the FSF, open-source license criteria focus on the availability of the "source code" and the ability to modify and share it, while free software licenses focus on the user's freedom to use the "program", to modify it, and to share it.
Source-available licenses ensure source code availability, but do not necessarily meet the user freedom criteria to be classified as free software or open-source software.
Around 2004, lawyer Lawrence Rosen argued in the essay "Why the public domain isn't a license" that software could not truly be waived into the public domain and therefore cannot be interpreted as a very permissive open-source license, a position which faced opposition from Daniel J. Bernstein and others. In 2012, the dispute was finally resolved when Rosen accepted the CC0 as an open-source license, while admitting that, contrary to his previous claims, copyright can be waived away, backed by Ninth Circuit decisions. | https://en.wikipedia.org/wiki?curid=22290 |
Occitan language
Occitan, also known as lenga d'òc by its native speakers, is a Romance language (or a branch comprising numerous Romance varieties) spoken in Southern France, Monaco, Italy's Occitan Valleys, and Spain's Val d'Aran; collectively, these regions are sometimes referred to as Occitania. It is also spoken in the linguistic enclave of Guardia Piemontese (Calabria, Italy). Some scholars include Catalan within Occitan, as the distance between that language and some Occitan dialects (such as the Gascon language) is similar to the distance among different Occitan dialects. In fact, Catalan was considered a dialect of Occitan until the end of the 19th century, and it still remains Occitan's closest relative.
Occitan is an official language of Catalonia, where a subdialect of Gascon known as Aranese is spoken in the Val d'Aran. Since September 2010, the Parliament of Catalonia has considered Aranese Occitan to be the officially preferred language for use in the Val d'Aran.
Across history, the terms Limousin ("Lemosin"), Languedocien ("Lengadocian"), Gascon, and later Provençal ("Provençal", "Provençau" or "Prouvençau") have been used as synonyms for the whole of Occitan; nowadays, "Provençal" is understood mainly as the Occitan dialect spoken in Provence, in southeast France.
Unlike other Romance languages such as French or Spanish, there is no single written standard language called "Occitan", and Occitan has no official status in France, home to most of Occitania. Instead, there are competing norms for writing Occitan, some of which attempt to be pan-dialectal, whereas others are based on particular dialects. These efforts are hindered by the rapidly declining use of Occitan as a spoken language in much of southern France, as well as by the significant differences in phonology and vocabulary among different Occitan dialects.
In particular, the northern and easternmost dialects have more morphological and phonetic features in common with the Gallo-Italic and Oïl languages (e.g. nasal vowels; loss of final consonants; initial "cha/ja-" instead of "ca/ga-"; uvular; the front-rounded sound instead of a diphthong, instead of before a consonant), whereas the southernmost dialects have more features in common with the Ibero-Romance languages (e.g. betacism; voiced fricatives between vowels in place of voiced stops; -"ch"- in place of -"it"-), and Gascon has a number of unusual features not seen in other dialects (e.g. in place of ; loss of between vowels; intervocalic "-r-" and final "-t/ch" in place of medieval --). There are also significant lexical differences, where some dialects have words cognate with French, and others have Catalan and Spanish cognates ("maison"/"casa" "house", "testa"/"cap" "head", "petit"/"pichon" "small", "achaptar"/"crompar" "to buy", "entendre"/"ausir" "to hear", "se taire"/"se calar" "to be quiet", "tombar"/"caire" "to fall", "p(l)us"/"mai" "more", "totjorn"/"sempre" "always", etc.). Nonetheless, there is a significant amount of mutual intelligibility.
The long-term survival of Occitan is in grave doubt. According to the UNESCO Red Book of Endangered Languages, four of the six major dialects of Occitan (Provençal, Auvergnat, Limousin and Languedocien) are considered severely endangered, whereas the remaining two (Gascon and Vivaro-Alpine) are considered definitely endangered.
The name Occitan comes from "lenga d'òc" ("language of òc"), "òc" being the Occitan word for "yes." While the term would have been in use orally for some time after the decline of Latin, as far as historical records show, the Italian medieval poet Dante was the first to have recorded the term "lingua d'oc" in writing. In his "De vulgari eloquentia", he wrote in Latin, "nam alii oc, alii si, alii vero dicunt oil" ("for some say "òc", others "sì", yet others say "oïl""), thereby highlighting three major Romance literary languages that were well known in Italy, based on each language's word for "yes": the "òc language" (Occitan), the "oïl language" (French), and the "sì language" (Sicilian and Italian). This was not, of course, the only defining characteristic of each group.
The word "òc" came from Vulgar Latin "hoc" ("this"), while "oïl" originated from Latin "hoc illud" ("this [is] it"). Old Catalan, and now the Catalan of Northern Catalonia also have "hoc" ("òc"). Other Romance languages derive their word for "yes" from the Latin "sic," "thus [it is], [it was done], etc.", such as Spanish "sí", Eastern Lombard "sé", Sicilian and Italian "sì", or Portuguese "sim". In Modern Catalan, as in modern Spanish, "sí" is usually used as a response, although the language retains the word "oi", akin to "òc", which is sometimes used at the end of yes–no questions, and also in higher register as a positive response. French uses "si" to answer "yes" in response to questions that are asked in the negative sense: e.g., ""Vous n'avez pas de frères?"" ""Si, j'en ai sept."" ("You have no brothers?" "But yes, I have seven.").
The name "Occitan" was attested around 1300 as "occitanus", a crossing of "oc" and "aquitanus" (Aquitanian).
For many centuries, the Occitan dialects (together with Catalan) were referred to as "Limousin" or "Provençal", after the names of two regions lying within the modern Occitan-speaking area. After Frédéric Mistral's Félibrige movement in the 19th century, Provençal achieved the greatest literary recognition and so became the most popular term for Occitan.
According to Joseph Anglade, a philologist and specialist of medieval literature who helped impose the then archaic term "Occitan" as the sole correct name, the word "Lemosin" was first used to designate the language at the beginning of the 13th century by Catalan troubadour Raimon Vidal de Besalú(n) in his "Razós de trobar":
"La parladura Francesca val mais et [es] plus avinenz a far romanz e pasturellas; mas cella de Lemozin val mais per far vers et cansons et serventés; et per totas las terras de nostre lengage son de major autoritat li cantar de la lenga Lemosina que de negun'autra parladura, per qu'ieu vos en parlarai primeramen."
The French language is worthier and better suited for romances and pastourelles; but that (language) from Limousin is of greater value for writing poems and cançons and sirventés; and across the whole of the lands where our tongue is spoken, the literature in the Limousin language has more authority than any other dialect, wherefore I shall use this name in priority.
As for the word "Provençal", it should not be taken as strictly meaning the language of Provence, but of Occitania as a whole, for "in the eleventh, the twelfth, and sometimes also the thirteenth centuries, one would understand under the name of Provence the whole territory of the old Provincia romana Gallia Narbonensis and even Aquitaine". The term first came into fashion in Italy.
Currently, linguists use the terms "Provençal" and "Limousin" strictly to refer to specific varieties within Occitania, keeping the name "Occitan" for the language as a whole. Many non-specialists, however, continue to refer to the language as Provençal, causing some confusion.
One of the oldest written fragments of the language found dates back to 960, in an official text mixed with Latin: "De ista hora in antea non DECEBRÀ Ermengaus filius Eldiarda Froterio episcopo filio Girberga NE Raimundo filio Bernardo vicecomite de castello de Cornone... NO·L LI TOLRÀ NO·L LI DEVEDARÀ NI NO L'EN DECEBRÀ... nec societatem non AURÀ, si per castellum recuperare NON O FA, et si recuperare potuerit in potestate Froterio et Raimundo LO TORNARÀ, per ipsas horas quæ Froterius et Raimundus L'EN COMONRÀ."
Carolingian litanies (c. 780), both written and sung in Latin, were answered to in Old Occitan by the audience ("Ora pro nos"; "Tu lo juva").
Other famous pieces include the "Boecis", a 258-line-long poem written entirely in the Limousin dialect of Occitan between the year 1000 and 1030 and inspired by Boethius's "The Consolation of Philosophy"; the Waldensian "La nobla leyczon" (dated 1100), "la Cançó de Santa Fe" (c. 1054–1076), the "Romance of Flamenca" (13th century), the "Song of the Albigensian Crusade" (1213–1219?), "Daurel e Betó" (12th or 13th century), "Las, qu'i non-sun sparvir, astur" (11th century) and "Tomida femina" (9th or 10th century).
Occitan was the vehicle for the influential poetry of the medieval troubadours ("trovadores") and "trobairitz": At that time, the language was understood and celebrated throughout most of educated Europe. It was the maternal language of the English queen Eleanor of Aquitaine and kings Richard I of England (who wrote troubadour poetry) and John, King of England. With the gradual imposition of French royal power over its territory, Occitan declined in status from the 14th century on. By the Ordinance of Villers-Cotterêts (1539) it was decreed that the "langue d'oïl" (French – though at the time referring to the Francien language and not the larger collection of dialects grouped under the name Langues d'oïl) should be used for all French administration. Occitan's greatest decline was during the French Revolution, during which diversity of language was considered a threat.
In 1903, the four Gospels ("Lis Evangèli", i.e. Matthew, Mark, Luke, and John) were translated into the form of Provençal spoken in Cannes and Grasse. This was given the official Roman Catholic Imprimatur by A. Estellon, vicar general.
The literary renaissance of the late 19th century (which included a Nobel Prize for Frédéric Mistral) was attenuated by World War I, when Occitan speakers spent extended periods of time alongside French-speaking comrades.
Because the geographical territory in which Occitan is spoken is surrounded by regions in which other Romance languages are used, external influences could have influenced its origin and development. Many factors favoured its development as a language of its own.
Catalan in Spain's northern and central Mediterranean coastal regions and the Balearic Islands is closely related to Occitan, sharing many linguistic features and a common origin (see Occitano-Romance languages). The language was one of the first to gain prestige as a medium for literature among Romance languages in the Middle Ages. Indeed, in the 12th and 13th centuries, Catalan troubadours such as Guerau de Cabrera, Guilhem de Bergadan, Guilhem de Cabestany, Huguet de Mataplana, Raimon Vidal de Besalú, Cerverí de Girona, Formit de Perpinhan, and Jofre de Foixà wrote in Occitan.
At the end of the 11th century, the "Franks", as they were called at the time, started to penetrate the Iberian Peninsula through the Ways of St. James via Somport and Roncesvalles, settling on various spots of the Kingdoms of Navarre and Aragon enticed by the privileges granted them by the Navarrese kings. They established themselves in ethnic boroughs where Occitan was used for everyday life, e.g. Pamplona, Sangüesa, Estella-Lizarra, etc. The language in turn became the status language chosen by the Navarrese kings, nobility, and upper classes for official and trade purposes in the period stretching from the early 13th century to late 14th century. These boroughs in Navarre may have been close-knit communities with little mingling, in a context where the natural milieu was predominantly Basque-speaking. The variant chosen for written administrative records was a "koiné" based on the Languedocien dialect from Toulouse with fairly archaic linguistic features.
Evidence of a written account in Occitan from Pamplona revolving around the burning of borough San Nicolas from 1258 survives today, while the "History of the War of Navarre" by Guilhem Anelier (1276) albeit written in Pamplona shows a linguistic variant from Toulouse.
Things turned out slightly differently in Aragon, where the sociolinguistic situation was different, with a clearer Basque-Romance bilingual situation (cf. Basques from the Val d'Aran cited c. 1000), but a receding Basque language (Basque was banned in the marketplace of Huesca in 1349). While the language was chosen as a medium of prestige in records and official statements along with Latin in the early 13th century, Occitan faced competition from the rising local Romance vernacular, Navarro-Aragonese, both orally and in writing, especially after Aragon's territorial conquests south to Zaragoza, Huesca, and Tudela between 1118 and 1134. As a result, a second Occitan immigration of this period was assimilated by the similar Navarro-Aragonese language, which at the same time was fostered and chosen by the kings of Aragon. The language fell into decay in the 14th century across the whole southern Pyrenean area and became largely absorbed into Navarro-Aragonese first and Castilian later in the 15th century, after their exclusive boroughs broke up (in 1423, Pamplona's boroughs were unified).
Gascon-speaking communities were called in for trading purposes by Navarrese kings in the early 12th century to the coastal fringe extending from San Sebastian to the Bidasoa River, where they settled down. The language variant used was different from the ones used in Navarre, i.e. a Béarnese dialect of Gascon, with Gascon being in use far longer than in Navarre and Aragon until the 19th century, thanks mainly to the close ties held by Donostia and Pasaia with Bayonne.
Though it was still an everyday language for most of the rural population of southern France well into the 20th century, it is now spoken by about 100,000 people in France according to 2012 estimates.
According to the 1999 census, there were 610,000 native speakers (almost all of whom are also native French speakers) and perhaps another million persons with some exposure to the language. Following the pattern of language shift, most of this remainder is to be found among the eldest populations. Occitan activists (called "Occitanists") have attempted, in particular with the advent of Occitan-language preschools (the "Calandretas"), to reintroduce the language to the young.
Nonetheless, the number of proficient speakers of Occitan is dropping precipitously. A tourist in the cities in southern France is unlikely to hear a single Occitan word spoken on the street (or, for that matter, in a home), and is likely to only find the occasional vestige, such as street signs (and, of those, most will have French equivalents more prominently displayed), to remind them of the traditional language of the area.
Occitans, as a result of more than 200 years of conditioned suppression and humiliation (see Vergonha), seldom speak their own language in the presence of foreigners, whether they are from abroad or from outside Occitania (in this case, often merely and abusively referred to as "Parisiens" or "Nordistes", which means "northerners"). Occitan is still spoken by many elderly people in rural areas, but they generally switch to French when dealing with outsiders.
Occitan's decline is somewhat less pronounced in Béarn because of the province's history (a late addition to the Kingdom of France), though even there the language is little spoken outside the homes of the rural elderly. The village of Artix is notable for having elected to post street signs in the local language.
The area where Occitan was historically dominant has approximately 16 million inhabitants. Recent research has shown it may be spoken as a first language by approximately 789,000 people in France, Italy, Spain and Monaco. In Monaco, Occitan coexists with Monégasque Ligurian, which is the other native language. Some researchers state that up to seven million people in France understand the language, whereas twelve to fourteen million fully spoke it in 1921. In 1860, Occitan speakers represented more than 39% of the whole French population (52% for francophones proper); they were still 26% to 36% in the 1920s and fewer than 7% in 1993.
Occitan is fundamentally defined by its dialects, rather than being a unitary language. That point is highly contentious in Southern France, as many people do not recognise Occitan as a real language and think that the "dialects" defined below are languages in their own right. Like other languages that fundamentally exist at a spoken, rather than written, level (e.g. the Rhaeto-Romance languages, Franco-Provençal, Astur-Leonese, and Aragonese), every settlement technically has its own dialect, with the whole of Occitania forming a classic dialect continuum that changes gradually along any path from one side to the other. Nonetheless, specialists commonly divide Occitan into six main dialects:
Gascon is the most divergent, and descriptions of the main features of Occitan often consider Gascon separately. Max Wheeler notes that "probably only its copresence within the French cultural sphere has kept [Gascon] from being regarded as a separate language", and compares it to Franco-Provençal, which is considered a separate language from Occitan but is "probably not more divergent from Occitan overall than Gascon is".
There is no general agreement about larger groupings of these dialects.
Max Wheeler divides the dialects into two groups:
Pierre Bec divides the dialects into three groups:
Bec also notes that some linguists prefer a "supradialectal" classification that groups Occitan with Catalan as a part of a wider Occitano-Romanic group. One such classification posits three groups:
According to this view, Catalan is an ausbau language that became independent from Occitan during the 13th century, but originates from the Aquitano-Pyrenean group.
Domergue Sumien proposes a slightly different supradialectal grouping.
All these regional varieties of the Occitan language are written, so Occitan can be considered a pluricentric language. Standard Occitan, also called "occitan larg" (i.e. 'wide Occitan'), is a synthesis that respects and admits soft regional adaptations (based on the convergence of previous regional koinés). The standardisation process began with the publication of "Gramatica occitana segon los parlars lengadocians" (a grammar of the Languedocien dialect) by Louis Alibert (1935), followed by the "Dictionnaire occitan-français selon les parlers languedociens" (French-Occitan dictionary according to Languedocien) by the same author (1966), and continued during the 1970s with the works of Pierre Bec (Gascon), Robèrt Lafont (Provençal), and others. However, the process has not yet been completed. It is mostly supported by users of the classical norm. Due to the strong situation of diglossia, some users still reject the standardisation process and do not conceive of Occitan as a language that could work like other standardised languages.
There are two main linguistic norms currently used for Occitan, one (known as "classical"), which is based on that of Medieval Occitan, and one (sometimes known as "Mistralian", due to its use by Frédéric Mistral), which is based on modern French orthography. Sometimes, there is conflict between users of each system.
There are also two other norms but they have a lesser audience. The "Escòla dau Pò norm" (or "Escolo dóu Po norm") is a simplified version of the Mistralian norm and is used only in the Occitan Valleys (Italy), besides the classical norm. The "Bonnaudian norm" (or "écriture auvergnate unifiée, EAU") was created by Pierre Bonnaud and is used only in the Auvergnat dialect, besides the classical norm.
Note that the Catalan version was translated from the Spanish, while the Occitan versions were translated from the French. The second part of the Catalan version may also be rendered as "Són dotades de raó i de consciència, i els cal actuar entre si amb un esperit de fraternitat", showing the similarities between Occitan and Catalan.
The majority of scholars think that Occitan constitutes a single language. Some authors, constituting a minority, reject this opinion and even the name "Occitan", thinking that there is a family of distinct languages rather than dialects of a single language.
Many Occitan linguists and writers, particularly those involved with the pan-Occitan movement centred on the Institut d'Estudis Occitans, disagree with the view that Occitan is a family of languages and think that Limousin, Auvergnat, Languedocien, Gascon, Provençal and Vivaro-Alpine are dialects of a single language. Although there are indeed noticeable differences between these varieties, there is a very high degree of mutual intelligibility between them; | https://en.wikipedia.org/wiki?curid=22292 |
Old Turkic script
The Old Turkic script (also known variously as the Göktürk script, Orkhon script, Orkhon-Yenisey script, or Turkic runes) is the alphabet used by the Göktürks and other early Turkic khanates during the 8th to 10th centuries to record the Old Turkic language.
The script is named after the Orkhon Valley in Mongolia where early 8th-century inscriptions were discovered in an 1889 expedition by Nikolai Yadrintsev. These Orkhon inscriptions were published by Vasily Radlov and deciphered by the Danish philologist Vilhelm Thomsen in 1893.
This writing system was later used within the Uyghur Khaganate. Additionally, a Yenisei variant is known from 9th-century Yenisei Kirghiz inscriptions, and it has likely cousins in the Talas Valley of Turkestan and the Old Hungarian alphabet of the 10th century. Words were usually written from right to left.
According to some sources, Orkhon script is derived from variants of the Aramaic alphabet, in particular via the Pahlavi and Sogdian alphabets of Persia, or possibly via Kharosthi used to write Sanskrit ("cf". the inscription at Issyk kurgan).
Vilhelm Thomsen (1893) connected the script to the reports of the Chinese account ("Records of the Grand Historian") of a 2nd-century BCE Yan renegade and dignitary named Zhonghang Yue. Yue "taught the Chanyu (rulers of the Xiongnu) to write official letters to the Chinese court on a wooden tablet 31 cm long, and to use a seal and large-sized folder". The same sources tell that when the Xiongnu noted down something or transmitted a message, they made cuts on a piece of wood ("gemu"). They also mention a "Hu script". At the Noin-Ula burial site and other Hun burial sites in Mongolia and regions north of Lake Baikal, the artifacts displayed over twenty carved characters. Most of these characters are either identical with or very similar to the letters of the Turkic Orkhon script. Turkic inscriptions dating from earlier than the Orkhon inscriptions used about 150 symbols, which may suggest that tamgas first imitated Chinese script and then were gradually refined into an alphabet.
Contemporary Chinese sources conflict as to whether the Turks had a written language by the 6th century. The "Book of Zhou", dating to the 7th century, mentions that the Turks had a written language similar to that of the Sogdians. Two other sources, the "Book of Sui" and the "History of the Northern Dynasties" claim that the Turks did not have a written language. According to István Vásáry, Old Turkic script was invented under the rule of the first khagans and that it was modelled after the Sogdian fashion. Several variants of the script came into being as early as the first half of the 6th century.
The Old Turkic corpus consists of about two hundred inscriptions, plus a number of manuscripts.
The inscriptions, dating from the 7th to 10th century, were discovered in present-day Mongolia (the area of the Second Turkic Khaganate and the Uyghur Khaganate that succeeded it), in the upper Yenisey basin of central-south Siberia, and in smaller numbers, in the Altay mountains and Xinjiang. The texts are mostly epitaphs (official or private), but there are also graffiti and a handful of short inscriptions found on archaeological artifacts, including a number of bronze mirrors.
The website of the Language Committee of the Ministry of Culture and Information of the Republic of Kazakhstan lists 54 inscriptions from the Orkhon area, 106 from the Yenisei area, 15 from the Talas area, and 78 from the Altai area. The most famous of the inscriptions are the two monuments (obelisks) which were erected in the Orkhon Valley between 732 and 735 in honor of the Göktürk prince Kül Tigin and his brother the emperor Bilge Kağan. The Tonyukuk inscription, a monument situated somewhat farther east, is slightly earlier, dating to ca. 722. These inscriptions relate in epic language the legendary origins of the Turks, the golden age of their history, their subjugation by the Chinese (the Tang–Göktürk wars), and their liberation by Bilge.
The Old Turkic manuscripts, of which there are none earlier than the 9th century, were found in present-day Xinjiang and represent Old Uyghur, a different Turkic dialect from the one represented in the Old Turkic inscriptions in the Orkhon valley and elsewhere. They include Irk Bitig, a 9th-century manuscript book on divination.
Old Turkic being a synharmonic language, a number of consonant signs are divided into two "synharmonic sets", one for front vowels and the other for back vowels. Such vowels can be taken as intrinsic to the consonant sign, giving the Old Turkic alphabet an aspect of an abugida script. In these cases, it is customary to use superscript numerals ¹ and ² to mark consonant signs used with back and front vowels, respectively. This convention was introduced by Thomsen (1893), and followed by Gabain (1941), Malov (1951) and Tekin (1968).
A colon-like symbol (⁚) is sometimes used as a word separator. In some cases a ring (⸰) is used instead.
A reading example (right to left): transliterated t²ñr²i, this spells the name of the Turkic sky god, Täñri.
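As a minimal illustration of Thomsen's superscript convention, the sketch below tags paired consonants with ¹ in back-vowel words and ² in front-vowel words. The vowel inventories and the set of paired consonants are simplified assumptions for illustration, not the full Old Turkic system.

```python
BACK_VOWELS = set("aïou")   # illustrative back-vowel inventory (assumed)
FRONT_VOWELS = set("eiöü")  # illustrative front-vowel inventory (assumed)
PAIRED = set("bdgklnrsty")  # consonants assumed to have two synharmonic variants

def tag_consonants(word: str) -> str:
    """Mark paired consonants with ¹ (back-vowel word) or ² (front-vowel word)."""
    mark = "\u00b2" if any(ch in FRONT_VOWELS for ch in word) else "\u00b9"
    return "".join(ch + mark if ch in PAIRED else ch for ch in word)

print(tag_consonants("tñri"))  # -> t²ñr²i, matching the Täñri example above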
Variants of the script were found from Mongolia and Xinjiang in the east to the Balkans in the west. The preserved inscriptions were dated to between the 8th and 10th centuries.
These alphabets are divided into four groups by Kyzlasov (1994):
The Asiatic group is further divided into three related alphabets:
The Eurasiatic group is further divided into five related alphabets:
A number of alphabets are incompletely attested due to the limitations of the extant inscriptions. Evidence in the study of the Turkic scripts includes Turkic-Chinese bilingual inscriptions, contemporaneous Turkic inscriptions in the Greek alphabet, literal translations into Slavic languages, and paper fragments with Turkic cursive writing on religious (Manichaean and Buddhist) and legal subjects of the 8th to 10th centuries found in Xinjiang.
The Unicode block for Old Turkic is U+10C00–U+10C4F. It was added to the Unicode standard in October 2009, with the release of version 5.2. It includes separate "Orkhon" and "Yenisei" variants of individual characters.
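For reference, a short Python sketch that enumerates the assigned code points of this block; it assumes the interpreter's Unicode database is version 5.2 or later. The printed character names distinguish the "Orkhon" and "Yenisei" variants.

```python
import unicodedata

# Walk the Old Turkic block (U+10C00..U+10C4F) and print assigned characters.
for cp in range(0x10C00, 0x10C50):
    name = unicodedata.name(chr(cp), None)  # None for unassigned code points
    if name is not None:
        print(f"U+{cp:05X} {chr(cp)} {name}")
```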
Unicode Old Turkic writing support has been included in the Segoe font since Windows 8. | https://en.wikipedia.org/wiki?curid=22293
The Open Source Definition
The Open Source Definition is a document published by the Open Source Initiative, to determine whether a software license can be labeled with the open-source certification mark.
The definition was taken from the exact text of the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens with input from the Debian developers on a private Debian mailing list. The document was created 9 months before the formation of the Open Source Initiative.
Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria:
The open source movement's definition of open-source software by the Open Source Initiative and the official definition of free software by the Free Software Foundation (FSF) refer to essentially the same set of software licenses (with a few minor exceptions; see Comparison of free and open-source software licenses); both definitions therefore stand for the same qualities and values. Despite that, FSF founder Richard Stallman stresses underlying philosophical differences when he comments:
In its Open Definition, which covers open content, open data, and open licenses, Open Knowledge International (OKI) describes "open/free" as synonymous across the definitions of open/free in the Open Source Definition, the FSF's free software definition, and the Definition of Free Cultural Works: | https://en.wikipedia.org/wiki?curid=22296
Open Source Initiative
The Open Source Initiative (OSI) is a California public benefit corporation, with 501(c)(3) tax-exempt status, founded in 1998.
It promotes the use of open-source software.
The organization was founded in late February 1998 by Bruce Perens and Eric S. Raymond, part of a group inspired by the Netscape Communications Corporation publishing the source code for its flagship Netscape Communicator product. Later, in August 1998, the organization added a board of directors.
Raymond was president from its founding until February 2005, followed briefly by Russ Nelson and then Michael Tiemann. In May 2012, the new board elected Simon Phipps as president, and in May 2015 Allison Randal was elected as president when Phipps stepped down in preparation for the 2016 end of his board term. Phipps became president again in September 2017. Molly de Blanc was elected president in May 2019, followed by Josh Simmons in May 2020.
As a campaign of sorts, "open source" was launched in 1998 by Jon "maddog" Hall, Larry Augustin, Eric S. Raymond, Bruce Perens, and others.
The group adopted the Open Source Definition for open-source software, based on the Debian Free Software Guidelines. They also established the Open Source Initiative (OSI) as a steward organization for the movement. However, they were unsuccessful in their attempt to secure a trademark for 'open source' to control the use of the term. In 2008, in an apparent effort to reform governance of the organization, the OSI Board invited 50 individuals to join a "Charter Members" group; by 26 July 2008, 42 of the original invitees had accepted the invitations. The full membership of the Charter Members has never been publicly revealed, and the Charter Members group communicated by way of a closed-subscription mailing list, "osi-discuss", with non-public archives.
In 2012, under the leadership of OSI director and then-president Simon Phipps, the OSI began transitioning towards a membership-based governance structure. The OSI initiated an Affiliate Membership program for "government-recognized non-profit charitable and not-for-profit industry associations and academic institutions anywhere in the world". Subsequently, the OSI announced an Individual Membership program and listed a number of Corporate Sponsors. As of 2020, Microsoft is listed as a corporate sponsor.
On November 8, 2013, OSI appointed Patrick Masson as its General Manager.
In January 2020, Bruce Perens left OSI over controversy regarding a license.
A few months later, Perens declared on social media:
"We created a tower of babel of licenses.
We did not design-in license compliance and we have a tremendous noncompliance problem that isn't getting better. We did not design a good framework for where proprietary software can go, and where it never should. Our license loopholes are exploited."
After Bruce Perens' exit, Eric Raymond, a co-founder of the OSI, was banned from the OSI in March 2020.
"Specifically, Raymond was banned from the mailing lists used to organize and communicate with the OSI. For an organization to ban their founder from communicating with the group (such as via a mailing list) is a noteworthy move."
Both the modern free software movement and the Open Source Initiative were born from a common history of Unix, Internet free software, and the hacker culture, but their basic goals and philosophy differ. The Open Source Initiative chose the term "open source," in founding member Michael Tiemann's words, to "dump the moralizing and confrontational attitude that had been associated with 'free software'" and instead promote open source ideas on "pragmatic, business-case grounds."
As early as 1999, OSI co-founder Perens objected to the "schism" that was developing between supporters of the Free Software Foundation (FSF) and the OSI because of their disparate approaches. (Perens had hoped the OSI would merely serve as an "introduction" to FSF principles for "non-hackers.") Richard Stallman of FSF has sharply criticized the OSI for its pragmatic focus and for ignoring what he considers the central "ethical imperative" and emphasis on "freedom" underlying free software as he defines it. Nevertheless, Stallman has described his free software movement and the Open Source Initiative as separate camps within the same broad free-software community and acknowledged that despite philosophical differences, proponents of open source and free software "often work together on practical projects."
As of April 2020, the Open Source Initiative Board of Directors is:
Past board members include: | https://en.wikipedia.org/wiki?curid=22298 |
Oxygen
Oxygen is the chemical element with the symbol O and atomic number 8. It is a member of the chalcogen group in the periodic table, a highly reactive nonmetal, and an oxidizing agent that readily forms oxides with most elements as well as with other compounds. After hydrogen and helium, oxygen is the third-most abundant element in the universe by mass. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless and odorless diatomic gas with the formula O2. Diatomic oxygen gas constitutes 20.95% of the Earth's atmosphere. As compounds including oxides, the element makes up almost half of the Earth's crust.
Dioxygen provides the energy released in combustion and aerobic cellular respiration, and many major classes of organic molecules in living organisms contain oxygen atoms, such as proteins, nucleic acids, carbohydrates, and fats, as do the major constituent inorganic compounds of animal shells, teeth, and bone. Most of the mass of living organisms is oxygen as a component of water, the major constituent of lifeforms. Oxygen is continuously replenished in Earth's atmosphere by photosynthesis, which uses the energy of sunlight to produce oxygen from water and carbon dioxide. Oxygen is too chemically reactive to remain a free element in air without being continuously replenished by the photosynthetic action of living organisms. Another form (allotrope) of oxygen, ozone (O3), strongly absorbs ultraviolet (UVB) radiation, and the high-altitude ozone layer helps protect the biosphere from ultraviolet radiation. However, ozone present at the surface is a byproduct of smog and thus a pollutant.
Oxygen was isolated by Michael Sendivogius before 1604, but it is commonly believed that the element was discovered independently by Carl Wilhelm Scheele, in Uppsala, in 1773 or earlier, and Joseph Priestley in Wiltshire, in 1774. Priority is often given to Priestley because his work was published first. Priestley, however, called oxygen "dephlogisticated air", and did not recognize it as a chemical element. The name "oxygen" was coined in 1777 by Antoine Lavoisier, who first recognized oxygen as a chemical element and correctly characterized the role it plays in combustion.
Common uses of oxygen include production of steel, plastics and textiles, brazing, welding and cutting of steels and other metals, rocket propellant, oxygen therapy, and life support systems in aircraft, submarines, spaceflight and diving.
One of the first known experiments on the relationship between combustion and air was conducted by the 2nd century BCE Greek writer on mechanics, Philo of Byzantium. In his work "Pneumatica", Philo observed that inverting a vessel over a burning candle and surrounding the vessel's neck with water resulted in some water rising into the neck. Philo incorrectly surmised that parts of the air in the vessel were converted into the classical element fire and thus were able to escape through pores in the glass. Many centuries later Leonardo da Vinci built on Philo's work by observing that a portion of air is consumed during combustion and respiration.
In the late 17th century, Robert Boyle proved that air is necessary for combustion. English chemist John Mayow (1641–1679) refined this work by showing that fire requires only a part of air that he called "spiritus nitroaereus". In one experiment, he found that placing either a mouse or a lit candle in a closed container over water caused the water to rise and replace one-fourteenth of the air's volume before extinguishing the subjects. From this he surmised that nitroaereus is consumed in both respiration and combustion.
Mayow observed that antimony increased in weight when heated, and inferred that the nitroaereus must have combined with it. He also thought that the lungs separate nitroaereus from air and pass it into the blood and that animal heat and muscle movement result from the reaction of nitroaereus with certain substances in the body. Accounts of these and other experiments and ideas were published in 1668 in his work "Tractatus duo" in the tract "De respiratione".
Robert Hooke, Ole Borch, Mikhail Lomonosov, and Pierre Bayen all produced oxygen in experiments in the 17th and the 18th century but none of them recognized it as a chemical element. This may have been in part due to the prevalence of the philosophy of combustion and corrosion called the "phlogiston theory", which was then the favored explanation of those processes.
Established in 1667 by the German alchemist J. J. Becher, and modified by the chemist Georg Ernst Stahl by 1731, phlogiston theory stated that all combustible materials were made of two parts. One part, called phlogiston, was given off when the substance containing it was burned, while the dephlogisticated part was thought to be its true form, or calx.
Highly combustible materials that leave little residue, such as wood or coal, were thought to be made mostly of phlogiston; non-combustible substances that corrode, such as iron, contained very little. Air did not play a role in phlogiston theory, nor were any initial quantitative experiments conducted to test the idea; instead, the theory was based on observations of what happens when something burns: most common objects appear to become lighter and seem to lose something in the process.
Polish alchemist, philosopher, and physician Michael Sendivogius (Michał Sędziwój) in his work "De Lapide Philosophorum Tractatus duodecim e naturae fonte et manuali experientia depromti" (1604) described a substance contained in air, referring to it as 'cibus vitae' (food of life); this substance is identical with oxygen. Sendivogius, during his experiments performed between 1598 and 1604, properly recognized that the substance is equivalent to the gaseous byproduct released by the thermal decomposition of potassium nitrate. In Bugaj's view, the isolation of oxygen and the proper association of the substance with that part of air which is required for life lend sufficient weight to credit the discovery of oxygen to Sendivogius. This discovery of Sendivogius was, however, frequently denied by the generations of scientists and chemists who succeeded him.
It is also commonly claimed that oxygen was first discovered by Swedish pharmacist Carl Wilhelm Scheele. He had produced oxygen gas by heating mercuric oxide and various nitrates in 1771–2. Scheele called the gas "fire air" because it was then the only known agent to support combustion. He wrote an account of this discovery in a manuscript titled "Treatise on Air and Fire", which he sent to his publisher in 1775. That document was published in 1777.
In the meantime, on August 1, 1774, an experiment conducted by the British clergyman Joseph Priestley focused sunlight on mercuric oxide (HgO) contained in a glass tube, which liberated a gas he named "dephlogisticated air". He noted that candles burned brighter in the gas and that a mouse was more active and lived longer while breathing it. After breathing the gas himself, Priestley wrote: "The feeling of it to my lungs was not sensibly different from that of common air, but I fancied that my breast felt peculiarly light and easy for some time afterwards." Priestley published his findings in 1775 in a paper titled "An Account of Further Discoveries in Air", which was included in the second volume of his book titled "Experiments and Observations on Different Kinds of Air". Because he published his findings first, Priestley is usually given priority in the discovery.
The French chemist Antoine Laurent Lavoisier later claimed to have discovered the new substance independently. Priestley visited Lavoisier in October 1774 and told him about his experiment and how he liberated the new gas. Scheele had also dispatched a letter to Lavoisier on September 30, 1774, which described his discovery of the previously unknown substance, but Lavoisier never acknowledged receiving it. (A copy of the letter was found in Scheele's belongings after his death.)
Lavoisier conducted the first adequate quantitative experiments on oxidation and gave the first correct explanation of how combustion works. He used these and similar experiments, all started in 1774, to discredit the phlogiston theory and to prove that the substance discovered by Priestley and Scheele was a chemical element.
In one experiment, Lavoisier observed that there was no overall increase in weight when tin and air were heated in a closed container. He noted that air rushed in when he opened the container, which indicated that part of the trapped air had been consumed. He also noted that the tin had increased in weight and that increase was the same as the weight of the air that rushed back in. This and other experiments on combustion were documented in his book "Sur la combustion en général", which was published in 1777. In that work, he proved that air is a mixture of two gases; 'vital air', which is essential to combustion and respiration, and "azote" (Gk. "" "lifeless"), which did not support either. "Azote" later became "nitrogen" in English, although it has kept the earlier name in French and several other European languages.
Lavoisier renamed 'vital air' to "oxygène" in 1777 from the Greek roots ὀξύς (oxys) (acid, literally "sharp", from the taste of acids) and -γενής (-genēs) (producer, literally begetter), because he mistakenly believed that oxygen was a constituent of all acids. Chemists (such as Sir Humphry Davy in 1812) eventually determined that Lavoisier was wrong in this regard (hydrogen forms the basis for acid chemistry), but by then the name was too well established.
"Oxygen" entered the English language despite opposition by English scientists and the fact that the Englishman Priestley had first isolated the gas and written about it. This is partly due to a poem praising the gas titled "Oxygen" in the popular book "The Botanic Garden" (1791) by Erasmus Darwin, grandfather of Charles Darwin.
John Dalton's original atomic hypothesis presumed that all elements were monatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another. For example, Dalton assumed that water's formula was HO, leading to the conclusion that the atomic mass of oxygen was 8 times that of hydrogen, instead of the modern value of about 16. In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen; and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the diatomic elemental molecules in those gases.
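The arithmetic behind Dalton's error can be made explicit. Given the measured roughly 8:1 mass ratio of oxygen to hydrogen in water, the assumed formula fixes the inferred atomic mass (a worked restatement, with A denoting relative atomic mass):

```latex
\[
\text{HO (Dalton):}\;\; \frac{A_\mathrm{O}}{A_\mathrm{H}} = \frac{8}{1}
\;\Rightarrow\; A_\mathrm{O} = 8,
\qquad
\text{H}_2\text{O:}\;\; \frac{A_\mathrm{O}}{2A_\mathrm{H}} = \frac{8}{1}
\;\Rightarrow\; A_\mathrm{O} = 16.
\]
```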
By the late 19th century scientists realized that air could be liquefied and its components isolated by compressing and cooling it. Using a cascade method, Swiss chemist and physicist Raoul Pierre Pictet evaporated liquid sulfur dioxide in order to liquefy carbon dioxide, which in turn was evaporated to cool oxygen gas enough to liquefy it. He sent a telegram on December 22, 1877 to the French Academy of Sciences in Paris announcing his discovery of liquid oxygen. Just two days later, French physicist Louis Paul Cailletet announced his own method of liquefying molecular oxygen. Only a few drops of the liquid were produced in each case and no meaningful analysis could be conducted. Oxygen was liquefied in a stable state for the first time on March 29, 1883 by Polish scientists from Jagiellonian University, Zygmunt Wróblewski and Karol Olszewski.
In 1891 Scottish chemist James Dewar was able to produce enough liquid oxygen for study. The first commercially viable process for producing liquid oxygen was independently developed in 1895 by German engineer Carl von Linde and British engineer William Hampson. Both men lowered the temperature of air until it liquefied and then distilled the component gases by boiling them off one at a time and capturing them separately. Later, in 1901, oxyacetylene welding was demonstrated for the first time by burning a mixture of acetylene and compressed O2. This method of welding and cutting metal later became common.
In 1923, the American scientist Robert H. Goddard became the first person to develop a rocket engine that burned liquid fuel; the engine used gasoline for fuel and liquid oxygen as the oxidizer. Goddard successfully flew a small liquid-fueled rocket 56 m at 97 km/h on March 16, 1926 in Auburn, Massachusetts, US.
In academic laboratories, oxygen can be prepared by heating together potassium chlorate mixed with a small proportion of manganese dioxide.
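The standard balanced equation for this classic laboratory preparation, with the manganese dioxide acting as a catalyst, is:

```latex
\[
2\,\mathrm{KClO_3}\;\xrightarrow{\;\mathrm{MnO_2},\ \Delta\;}\;2\,\mathrm{KCl} + 3\,\mathrm{O_2}\uparrow
\]
```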
Oxygen levels in the atmosphere are trending slightly downward globally, possibly because of fossil-fuel burning.
At standard temperature and pressure, oxygen is a colorless, odorless, and tasteless gas with the molecular formula O2, referred to as dioxygen.
As "dioxygen", two oxygen atoms are chemically bound to each other. The bond can be variously described based on level of theory, but is reasonably and simply described as a covalent double bond that results from the filling of molecular orbitals formed from the atomic orbitals of the individual oxygen atoms, the filling of which results in a bond order of two. More specifically, the double bond is the result of sequential, low-to-high energy, or Aufbau, filling of orbitals, and the resulting cancellation of contributions from the 2s electrons, after sequential filling of the low σ and σ* orbitals; σ overlap of the two atomic 2p orbitals that lie along the O-O molecular axis and overlap of two pairs of atomic 2p orbitals perpendicular to the O-O molecular axis, and then cancellation of contributions from the remaining two of the six 2p electrons after their partial filling of the lowest and * orbitals.
This combination of cancellations and σ and π overlaps results in dioxygen's double-bond character and reactivity, and a triplet electronic ground state. An electron configuration with two unpaired electrons, as is found in dioxygen's π* orbitals (two orbitals of equal energy, i.e., degenerate), is a configuration termed a spin triplet state. Hence, the ground state of the O2 molecule is referred to as triplet oxygen. The highest-energy, partially filled orbitals are antibonding, and so their filling weakens the bond order from three to two. Because of its unpaired electrons, triplet oxygen reacts only slowly with most organic molecules, which have paired electron spins; this prevents spontaneous combustion.
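The bond order of two follows directly from counting electrons in the valence molecular orbitals of O2 (8 bonding electrons in σ2s, σ2p, and the two π2p orbitals; 4 antibonding electrons in σ*2s and the two singly occupied π*2p orbitals):

```latex
\[
\text{bond order} \;=\; \frac{N_{\text{bonding}} - N_{\text{antibonding}}}{2}
\;=\; \frac{8 - 4}{2} \;=\; 2
\]
```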
In the triplet form, O2 molecules are paramagnetic. That is, they impart magnetic character to oxygen when it is in the presence of a magnetic field, because of the spin magnetic moments of the unpaired electrons in the molecule, and the negative exchange energy between neighboring O2 molecules. Liquid oxygen is so magnetic that, in laboratory demonstrations, a bridge of liquid oxygen may be supported against its own weight between the poles of a powerful magnet.
Singlet oxygen is a name given to several higher-energy species of molecular O2 in which all the electron spins are paired. It is much more reactive with common organic molecules than is molecular oxygen per se. In nature, singlet oxygen is commonly formed from water during photosynthesis, using the energy of sunlight. It is also produced in the troposphere by the photolysis of ozone by light of short wavelength, and by the immune system as a source of active oxygen. Carotenoids in photosynthetic organisms (and possibly animals) play a major role in absorbing energy from singlet oxygen and converting it to the unexcited ground state before it can cause harm to tissues.
The common allotrope of elemental oxygen on Earth is called dioxygen, O2, the major part of the Earth's atmospheric oxygen (see Occurrence). O2 has a bond length of 121 pm and a bond energy of 498 kJ/mol, which is smaller than the energy of other double bonds or pairs of single bonds in the biosphere and is responsible for the exothermic reaction of O2 with any organic molecule. Due to its energy content, O2 is used by complex forms of life, such as animals, in cellular respiration. Other aspects of O2 are covered in the remainder of this article.
Trioxygen (O3) is usually known as ozone and is a very reactive allotrope of oxygen that is damaging to lung tissue. Ozone is produced in the upper atmosphere when O2 combines with atomic oxygen made by the splitting of O2 by ultraviolet (UV) radiation. Since ozone absorbs strongly in the UV region of the spectrum, the ozone layer of the upper atmosphere functions as a protective radiation shield for the planet. Near the Earth's surface, it is a pollutant formed as a by-product of automobile exhaust. At low-Earth-orbit altitudes, sufficient atomic oxygen is present to cause corrosion of spacecraft.
The metastable molecule tetraoxygen (O4) was discovered in 2001, and was assumed to exist in one of the six phases of solid oxygen. It was proven in 2006 that this phase, created by pressurizing O2 to 20 GPa, is in fact a rhombohedral O8 cluster. This cluster has the potential to be a much more powerful oxidizer than either O2 or O3 and may therefore be used in rocket fuel. A metallic phase was discovered in 1990 when solid oxygen is subjected to a pressure above 96 GPa, and it was shown in 1998 that at very low temperatures this phase becomes superconducting.
Oxygen dissolves more readily in water than nitrogen, and in freshwater more readily than in seawater. Water in equilibrium with air contains approximately 1 molecule of dissolved O2 for every 2 molecules of N2 (a 1:2 ratio), compared with an atmospheric ratio of approximately 1:4. The solubility of oxygen in water is temperature-dependent: about twice as much (14.6 mg·L−1) dissolves at 0 °C as at 20 °C (7.6 mg·L−1). At 25 °C and standard atmospheric pressure, freshwater contains about 6.04 milliliters (mL) of oxygen per liter, and seawater contains about 4.95 mL per liter. At 5 °C the solubility increases to 9.0 mL (50% more than at 25 °C) per liter for freshwater and 7.2 mL (45% more) per liter for seawater.
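A quick arithmetic check of the quoted temperature dependence, using the figures above:

```python
# Percent increase in dissolved O2 from 25 degC to 5 degC
# (values in mL of O2 per liter of water, from the text).
fresh_25, fresh_5 = 6.04, 9.0
sea_25, sea_5 = 4.95, 7.2

print(f"freshwater: +{(fresh_5 / fresh_25 - 1) * 100:.0f}%")  # ~ +49% ("50% more")
print(f"seawater:   +{(sea_5 / sea_25 - 1) * 100:.0f}%")      # ~ +45% ("45% more")
```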
Oxygen condenses at 90.20 K (−182.95 °C, −297.31 °F), and freezes at 54.36 K (−218.79 °C, −361.82 °F). Both liquid and solid O2 are clear substances with a light sky-blue color caused by absorption in the red (in contrast with the blue color of the sky, which is due to Rayleigh scattering of blue light). High-purity liquid O2 is usually obtained by the fractional distillation of liquefied air. Liquid oxygen may also be condensed from air using liquid nitrogen as a coolant.
Oxygen is a highly reactive substance and must be segregated from combustible materials.
The spectroscopy of molecular oxygen is associated with the atmospheric processes of aurora and airglow. The absorption in the Herzberg continuum and Schumann–Runge bands in the ultraviolet produces atomic oxygen that is important in the chemistry of the middle atmosphere. Excited state singlet molecular oxygen is responsible for red chemiluminescence in solution.
Naturally occurring oxygen is composed of three stable isotopes, 16O, 17O, and 18O, with 16O being the most abundant (99.762% natural abundance).
Most 16O is synthesized at the end of the helium fusion process in massive stars but some is made in the neon burning process. 17O is primarily made by the burning of hydrogen into helium during the CNO cycle, making it a common isotope in the hydrogen burning zones of stars. Most 18O is produced when 14N (made abundant from CNO burning) captures a 4He nucleus, making 18O common in the helium-rich zones of evolved, massive stars.
Fourteen radioisotopes have been characterized. The most stable are 15O with a half-life of 122.24 seconds and 14O with a half-life of 70.606 seconds. All of the remaining radioactive isotopes have half-lives that are less than 27 s and the majority of these have half-lives that are less than 83 milliseconds. The most common decay mode of the isotopes lighter than 16O is β+ decay to yield nitrogen, and the most common mode for the isotopes heavier than 18O is beta decay to yield fluorine.
Oxygen is the most abundant chemical element by mass in the Earth's biosphere, air, sea and land. Oxygen is the third most abundant chemical element in the universe, after hydrogen and helium. About 0.9% of the Sun's mass is oxygen. Oxygen constitutes 49.2% of the Earth's crust by mass as part of oxide compounds such as silicon dioxide and is the most abundant element by mass in the Earth's crust. It is also the major component of the world's oceans (88.8% by mass). Oxygen gas is the second most common component of the Earth's atmosphere, taking up 20.8% of its volume and 23.1% of its mass (some 10¹⁵ tonnes). Earth is unusual among the planets of the Solar System in having such a high concentration of oxygen gas in its atmosphere: Mars (with 0.1% O2 by volume) and Venus have much less. The O2 surrounding those planets is produced solely by the action of ultraviolet radiation on oxygen-containing molecules such as carbon dioxide.
The unusually high concentration of oxygen gas on Earth is the result of the oxygen cycle. This biogeochemical cycle describes the movement of oxygen within and between its three main reservoirs on Earth: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is photosynthesis, which is responsible for modern Earth's atmosphere. Photosynthesis releases oxygen into the atmosphere, while respiration, decay, and combustion remove it from the atmosphere. In the present equilibrium, production and consumption occur at the same rate.
Free oxygen also occurs in solution in the world's water bodies. The increased solubility of O2 at lower temperatures (see Physical properties) has important implications for ocean life, as polar oceans support a much higher density of life due to their higher oxygen content. Water polluted with plant nutrients such as nitrates or phosphates may stimulate growth of algae by a process called eutrophication, and the decay of these organisms and other biomaterials may reduce the O2 content in eutrophic water bodies. Scientists assess this aspect of water quality by measuring the water's biochemical oxygen demand, or the amount of O2 needed to restore it to a normal concentration.
Paleoclimatologists measure the ratio of oxygen-18 and oxygen-16 in the shells and skeletons of marine organisms to determine the climate millions of years ago (see oxygen isotope ratio cycle). Seawater molecules that contain the lighter isotope, oxygen-16, evaporate at a slightly faster rate than water molecules containing the 12% heavier oxygen-18, and this disparity increases at lower temperatures. During periods of lower global temperatures, snow and rain from that evaporated water tends to be higher in oxygen-16, and the seawater left behind tends to be higher in oxygen-18. Marine organisms then incorporate more oxygen-18 into their skeletons and shells than they would in a warmer climate. Paleoclimatologists also directly measure this ratio in the water molecules of ice core samples as old as hundreds of thousands of years.
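This isotope ratio is conventionally reported in delta notation relative to an agreed reference standard; the specific standard (for example, VSMOW for water samples) is an assumption not given in the text:

```latex
\[
\delta^{18}\mathrm{O} =
\left(
\frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\text{sample}}}
     {\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\text{standard}}} - 1
\right)\times 1000\ \text{‰}
\]
```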
Planetary geologists have measured the relative quantities of oxygen isotopes in samples from the Earth, the Moon, Mars, and meteorites, but were long unable to obtain reference values for the isotope ratios in the Sun, believed to be the same as those of the primordial solar nebula. Analysis of a silicon wafer exposed to the solar wind in space and returned by the crashed Genesis spacecraft has shown that the Sun has a higher proportion of oxygen-16 than does the Earth. The measurement implies that an unknown process depleted oxygen-16 from the Sun's disk of protoplanetary material prior to the coalescence of dust grains that formed the Earth.
Oxygen presents two spectrophotometric absorption bands peaking at the wavelengths 687 and 760 nm. Some remote sensing scientists have proposed using the measurement of the radiance coming from vegetation canopies in those bands to characterize plant health status from a satellite platform. This approach exploits the fact that in those bands it is possible to discriminate the vegetation's reflectance from its fluorescence, which is much weaker. The measurement is technically difficult owing to the low signal-to-noise ratio and the physical structure of vegetation; but it has been proposed as a possible method of monitoring the carbon cycle from satellites on a global scale.
In nature, free oxygen is produced by the light-driven splitting of water during oxygenic photosynthesis. According to some estimates, green algae and cyanobacteria in marine environments provide about 70% of the free oxygen produced on Earth, and the rest is produced by terrestrial plants. Other estimates of the oceanic contribution to atmospheric oxygen are higher, while some estimates are lower, suggesting oceans produce ~45% of Earth's atmospheric oxygen each year.
A simplified overall formula for photosynthesis is:
6 CO2 + 6 H2O + photons → C6H12O6 + 6 O2
or simply: carbon dioxide + water + sunlight → glucose + dioxygen
Photolytic oxygen evolution occurs in the thylakoid membranes of photosynthetic organisms and requires the energy of four photons. Many steps are involved, but the result is the formation of a proton gradient across the thylakoid membrane, which is used to synthesize adenosine triphosphate (ATP) via photophosphorylation. The O2 remaining (after production of the water molecule) is released into the atmosphere.
The chemical energy of oxygen is released in mitochondria to generate ATP during oxidative phosphorylation. The reaction for aerobic respiration is essentially the reverse of photosynthesis and is simplified as:
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + 2880 kJ/mol
In vertebrates, O2 diffuses through membranes in the lungs and into red blood cells. Hemoglobin binds O2, changing color from bluish red to bright red (CO2 is released from another part of hemoglobin through the Bohr effect). Other animals use hemocyanin (molluscs and some arthropods) or hemerythrin (spiders and lobsters). A liter of blood can dissolve 200 cm3 of O2.
Until the discovery of anaerobic metazoa, oxygen was thought to be a requirement for all complex life.
Reactive oxygen species, such as the superoxide ion (O2−) and hydrogen peroxide (H2O2), are reactive by-products of oxygen use in organisms. Parts of the immune system of higher organisms create peroxide, superoxide, and singlet oxygen to destroy invading microbes. Reactive oxygen species also play an important role in the hypersensitive response of plants against pathogen attack. Oxygen is damaging to obligately anaerobic organisms, which were the dominant form of early life on Earth until O2 began to accumulate in the atmosphere about 2.5 billion years ago during the Great Oxygenation Event, about a billion years after the first appearance of these organisms.
An adult human at rest inhales 1.8 to 2.4 grams of oxygen per minute. This amounts to more than 6 billion tonnes of oxygen inhaled by humanity per year.
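A rough consistency check of these figures; the population value is an illustrative assumption:

```python
# Scale resting O2 uptake per person to all of humanity for one year.
g_per_min = (1.8, 2.4)              # grams of O2 inhaled per minute (from text)
minutes_per_year = 60 * 24 * 365
population = 7.8e9                  # assumed world population

for g in g_per_min:
    tonnes = g * minutes_per_year * population / 1e6  # grams -> tonnes
    print(f"{g} g/min -> {tonnes:.2e} tonnes O2/year")
# ~7.4e9 to ~9.8e9 tonnes, consistent with "more than 6 billion tonnes"
```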
The free oxygen partial pressure in the body of a living vertebrate organism is highest in the respiratory system, and decreases along any arterial system, peripheral tissues, and venous system, respectively. Partial pressure is the pressure that oxygen would have if it alone occupied the volume.
Free oxygen gas was almost nonexistent in Earth's atmosphere before photosynthetic archaea and bacteria evolved, probably about 3.5 billion years ago. Free oxygen first appeared in significant quantities during the Paleoproterozoic eon (between 3.0 and 2.3 billion years ago). Even though there was much dissolved iron in the oceans when oxygenic photosynthesis was getting more common, it appears the banded iron formations were created by anoxygenic or micro-aerophilic iron-oxidizing bacteria which dominated the deeper areas of the photic zone, while oxygen-producing cyanobacteria covered the shallows. Free oxygen began to outgas from the oceans 3–2.7 billion years ago, reaching 10% of its present level around 1.7 billion years ago.
The presence of large amounts of dissolved and free oxygen in the oceans and atmosphere may have driven most of the extant anaerobic organisms to extinction during the Great Oxygenation Event ("oxygen catastrophe") about 2.4 billion years ago. Cellular respiration using O2 enables aerobic organisms to produce much more ATP than anaerobic organisms. Cellular respiration of O2 occurs in all eukaryotes, including all complex multicellular organisms such as plants and animals.
Since the beginning of the Cambrian period 540 million years ago, atmospheric O2 levels have fluctuated between 15% and 30% by volume. Towards the end of the Carboniferous period (about 300 million years ago) atmospheric O2 levels reached a maximum of 35% by volume, which may have contributed to the large size of insects and amphibians at this time.
Variations in atmospheric oxygen concentration have shaped past climates. When oxygen declined, atmospheric density dropped, which in turn increased surface evaporation, causing precipitation increases and warmer temperatures.
At the current rate of photosynthesis it would take about 2,000 years to regenerate all the O2 in the present atmosphere.
One hundred million tonnes of O2 are extracted from air for industrial uses annually by two primary methods. The most common method is fractional distillation of liquefied air, with N2 distilling as a vapor while O2 is left as a liquid.
The other primary method of producing O2 is passing a stream of clean, dry air through one bed of a pair of identical zeolite molecular sieves, which absorbs the nitrogen and delivers a gas stream that is 90% to 93% O2. Simultaneously, nitrogen gas is released from the other nitrogen-saturated zeolite bed, by reducing the chamber operating pressure and diverting part of the oxygen gas from the producer bed through it, in the reverse direction of flow. After a set cycle time the operation of the two beds is interchanged, thereby allowing for a continuous supply of gaseous oxygen to be pumped through a pipeline. This is known as pressure swing adsorption, as sketched below. Oxygen gas is increasingly obtained by these non-cryogenic technologies (see also the related vacuum swing adsorption).
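A minimal sketch of the alternating two-bed cycle just described; this is an illustration of the scheduling logic only, not a process model, and the timing and bed names are assumptions:

```python
import itertools

HALF_CYCLE_S = 60  # illustrative half-cycle time, not a real process parameter

def psa_half_cycles(n: int) -> None:
    """Print which bed produces O2 and which regenerates, swapping each cycle."""
    roles = itertools.cycle([("A", "B"), ("B", "A")])
    for step in range(n):
        producing, regenerating = next(roles)
        print(f"t={step * HALF_CYCLE_S:4d}s  bed {producing}: adsorbing N2, "
              f"delivering 90-93% O2 | bed {regenerating}: depressurized, "
              f"purged in reverse flow with product O2")

psa_half_cycles(4)
```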
Oxygen gas can also be produced through electrolysis of water into molecular oxygen and hydrogen. DC electricity must be used: if AC is used, the gases in each limb consist of hydrogen and oxygen in the explosive ratio 2:1. A similar method is the electrocatalytic evolution of O2 from oxides and oxoacids. Chemical catalysts can be used as well, such as in chemical oxygen generators or oxygen candles that are used as part of the life-support equipment on submarines, and are still part of standard equipment on commercial airliners in case of depressurization emergencies. Another air separation method is forcing air to dissolve through ceramic membranes based on zirconium dioxide by either high pressure or an electric current, to produce nearly pure O2 gas.
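The overall electrolysis reaction, which also accounts for the 2:1 hydrogen-to-oxygen volume ratio mentioned above, is:

```latex
\[
2\,\mathrm{H_2O(l)} \;\xrightarrow{\;\text{electrolysis}\;}\; 2\,\mathrm{H_2(g)} + \mathrm{O_2(g)}
\]
```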
Oxygen storage methods include high-pressure oxygen tanks, cryogenics, and chemical compounds. For reasons of economy, oxygen is often transported in bulk as a liquid in specially insulated tankers, since one liter of liquefied oxygen is equivalent to 840 liters of gaseous oxygen at atmospheric pressure and room temperature. Such tankers are used to refill bulk liquid-oxygen storage containers, which stand outside hospitals and other institutions that need large volumes of pure oxygen gas. Liquid oxygen is passed through heat exchangers, which convert the cryogenic liquid into gas before it enters the building. Oxygen is also stored and shipped in smaller cylinders containing the compressed gas; a form that is useful in certain portable medical applications and oxy-fuel welding and cutting.
Uptake of O2 from the air is the essential purpose of respiration, so oxygen supplementation is used in medicine. Treatment not only increases oxygen levels in the patient's blood, but has the secondary effect of decreasing resistance to blood flow in many types of diseased lungs, easing the workload on the heart. Oxygen therapy is used to treat emphysema, pneumonia, some heart disorders (congestive heart failure), some disorders that cause increased pulmonary artery pressure, and any disease that impairs the body's ability to take up and use gaseous oxygen.
Treatments are flexible enough to be used in hospitals, the patient's home, or increasingly by portable devices. Oxygen tents were once commonly used in oxygen supplementation, but have since been replaced mostly by the use of oxygen masks or nasal cannulas.
Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the 'bends') are sometimes addressed with this therapy. Increased O2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in the blood. Increasing the pressure of O2 as soon as possible helps to redissolve the bubbles back into the blood so that these excess gases can be exhaled naturally through the lungs.
Normobaric oxygen administration at the highest available concentration is frequently used as first aid for any diving injury that may involve inert gas bubble formation in the tissues. There is epidemiological support for its use from a statistical study of cases recorded in a long term database.
An application of O2 as a low-pressure breathing gas is in modern space suits, which surround their occupant's body with the breathing gas. These devices use nearly pure oxygen at about one-third normal pressure, resulting in a normal blood partial pressure of O2. This trade-off of higher oxygen concentration for lower pressure is needed to maintain suit flexibility.
Scuba and surface-supplied underwater divers and submariners also rely on artificially delivered O2. Submarines, submersibles and atmospheric diving suits usually operate at normal atmospheric pressure. Breathing air is scrubbed of carbon dioxide by chemical extraction and oxygen is replaced to maintain a constant partial pressure. Ambient pressure divers breathe air or gas mixtures with an oxygen fraction suited to the operating depth. Pure or nearly pure O2 use in diving at pressures higher than atmospheric is usually limited to rebreathers, or decompression at relatively shallow depths (~6 meters depth, or less), or medical treatment in recompression chambers at pressures up to 2.8 bar, where acute oxygen toxicity can be managed without the risk of drowning. Deeper diving requires significant dilution of O2 with other gases, such as nitrogen or helium, to prevent oxygen toxicity.
People who climb mountains or fly in non-pressurized fixed-wing aircraft sometimes have supplemental O2 supplies. Pressurized commercial airplanes have an emergency supply of O2 automatically supplied to the passengers in case of cabin depressurization. Sudden cabin pressure loss activates chemical oxygen generators above each seat, causing oxygen masks to drop. Pulling on the masks "to start the flow of oxygen", as cabin safety instructions dictate, forces iron filings into the sodium chlorate inside the canister. A steady stream of oxygen gas is then produced by the exothermic reaction.
Oxygen, as a mild euphoric, has a history of recreational use in oxygen bars and in sports. Oxygen bars are establishments found in the United States since the late 1990s that offer higher-than-normal O2 exposure for a minimal fee. Professional athletes, especially in American football, sometimes go off-field between plays to don oxygen masks to boost performance. The pharmacological effect is doubted; a placebo effect is a more likely explanation. Available studies support a performance boost from oxygen-enriched mixtures only if the mixture is breathed "during" aerobic exercise.
Other recreational uses that do not involve breathing include pyrotechnic applications, such as George Goble's five-second ignition of barbecue grills.
Smelting of iron ore into steel consumes 55% of commercially produced oxygen. In this process, O2 is injected through a high-pressure lance into molten iron, which removes sulfur impurities and excess carbon as the respective oxides, SO2 and CO2. The reactions are exothermic, so the temperature increases to 1,700 °C.
Another 25% of commercially produced oxygen is used by the chemical industry. Ethylene is reacted with O2 to create ethylene oxide, which, in turn, is converted into ethylene glycol, the primary feeder material used to manufacture a host of products, including antifreeze and polyester polymers (the precursors of many plastics and fabrics).
Most of the remaining 20% of commercially produced oxygen is used in medical applications, metal cutting and welding, as an oxidizer in rocket fuel, and in water treatment. Oxygen is used in oxyacetylene welding, burning acetylene with O2 to produce a very hot flame. In this process, thick metal is first heated with a small oxy-acetylene flame and then quickly cut by a large stream of O2.
The oxidation state of oxygen is −2 in almost all known compounds of oxygen. The oxidation state −1 is found in a few compounds such as peroxides. Compounds containing oxygen in other oxidation states are very uncommon: −1/2 (superoxides), −1/3 (ozonides), 0 (elemental, hypofluorous acid), +1/2 (dioxygenyl), +1 (dioxygen difluoride), and +2 (oxygen difluoride).
Water (H2O) is an oxide of hydrogen and the most familiar oxygen compound. Hydrogen atoms are covalently bonded to oxygen in a water molecule but also have an additional attraction (about 23.3 kJ/mol per hydrogen atom) to an adjacent oxygen atom in a separate molecule. These hydrogen bonds between water molecules hold them approximately 15% closer than would be expected in a simple liquid with just van der Waals forces.
Due to its electronegativity, oxygen forms chemical bonds with almost all other elements to give corresponding oxides. The surface of most metals, such as aluminium and titanium, is oxidized in the presence of air and becomes coated with a thin film of oxide that passivates the metal and slows further corrosion. Many oxides of the transition metals are non-stoichiometric compounds, with slightly less metal than the chemical formula would show. For example, the mineral FeO (wüstite) is written as Fe1−xO, where "x" is usually around 0.05.
Oxygen is present in the atmosphere in trace quantities in the form of carbon dioxide (CO2). The Earth's crustal rock is composed in large part of oxides of silicon (silica, SiO2, as found in granite and quartz), aluminium (aluminium oxide, Al2O3, in bauxite and corundum), iron (iron(III) oxide, Fe2O3, in hematite and rust), and calcium carbonate (in limestone). The rest of the Earth's crust is also made of oxygen compounds, in particular various complex silicates (in silicate minerals). The Earth's mantle, of much larger mass than the crust, is largely composed of silicates of magnesium and iron.
Water-soluble silicates, such as the sodium silicates Na4SiO4, Na2SiO3, and Na2Si2O5, are used as detergents and adhesives.
Oxygen also acts as a ligand for transition metals, forming transition metal dioxygen complexes, which feature metal–O2 bonds. This class of compounds includes the heme proteins hemoglobin and myoglobin. An exotic and unusual reaction occurs with PtF6, which oxidizes oxygen to give O2+PtF6−, dioxygenyl hexafluoroplatinate.
Among the most important classes of organic compounds that contain oxygen are (where "R" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-CO-NR2). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms. The element is similarly found in almost all biomolecules that are important to (or generated by) life.
Oxygen reacts spontaneously with many organic compounds at or below room temperature in a process called autoxidation. Most of the organic compounds that contain oxygen are not made by direct action of O2. Organic compounds important in industry and commerce that are made by direct oxidation of a precursor include ethylene oxide and peracetic acid.
The NFPA 704 standard rates compressed oxygen gas as nonhazardous to health, nonflammable and nonreactive, but an oxidizer. Refrigerated liquid oxygen (LOX) is given a health hazard rating of 3 (for increased risk of hyperoxia from condensed vapors, and for hazards common to cryogenic liquids such as frostbite), and all other ratings are the same as the compressed gas form.
Oxygen gas (O2) can be toxic at elevated partial pressures, leading to convulsions and other health problems. Oxygen toxicity usually begins to occur at partial pressures of more than 50 kilopascals (kPa), equal to about 50% oxygen composition at standard pressure or 2.5 times the normal sea-level O2 partial pressure of about 21 kPa. This is not a problem except for patients on mechanical ventilators, since gas supplied through oxygen masks in medical applications is typically composed of only 30%–50% O2 by volume (about 30 kPa at standard pressure).
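The stated equivalences can be verified with a short calculation, assuming standard atmospheric values:

```python
# Relate the 50 kPa toxicity threshold to the figures quoted above.
P_ATM = 101.325       # standard atmospheric pressure, kPa
O2_FRACTION = 0.2095  # O2 volume fraction of air

p_o2 = O2_FRACTION * P_ATM
print(f"normal sea-level O2 partial pressure: {p_o2:.1f} kPa")  # ~21 kPa
print(f"50 kPa as O2 fraction at 1 atm: {50 / P_ATM:.0%}")      # ~49% ("about 50%")
print(f"threshold vs normal: {50 / p_o2:.1f}x")                 # ~2.4x ("2.5 times")
```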
At one time, premature babies were placed in incubators containing O2-rich air, but this practice was discontinued after some babies were blinded by the oxygen content being too high.
Breathing pure O2 in space applications, such as in some modern space suits, or in early spacecraft such as Apollo, causes no damage due to the low total pressures used. In the case of spacesuits, the O2 partial pressure in the breathing gas is, in general, about 30 kPa (1.4 times normal), and the resulting O2 partial pressure in the astronaut's arterial blood is only marginally more than the normal sea-level O2 partial pressure.
Oxygen toxicity to the lungs and central nervous system can also occur in deep scuba diving and surface-supplied diving. Prolonged breathing of an air mixture with an O2 partial pressure of more than 60 kPa can eventually lead to permanent pulmonary fibrosis. Exposure to O2 partial pressures greater than 160 kPa (about 1.6 atm) may lead to convulsions (normally fatal for divers). Acute oxygen toxicity (causing seizures, its most feared effect for divers) can occur by breathing an air mixture with 21% O2 at about 66 m or more of depth; the same thing can occur by breathing 100% O2 at only about 6 m.
Highly concentrated sources of oxygen promote rapid combustion. Fire and explosion hazards exist when concentrated oxidants and fuels are brought into close proximity; an ignition event, such as heat or a spark, is needed to trigger combustion. Oxygen is the oxidant, not the fuel, but nevertheless the source of most of the chemical energy released in combustion.
Concentrated O2 will allow combustion to proceed rapidly and energetically. Steel pipes and storage vessels used to store and transmit both gaseous and liquid oxygen will act as a fuel; the design and manufacture of O2 systems therefore requires special training to ensure that ignition sources are minimized. The fire that killed the Apollo 1 crew in a launch pad test spread so rapidly because the capsule was pressurized with pure O2, but at slightly more than atmospheric pressure instead of the reduced pressure that would be used in a mission.
Liquid oxygen spills, if allowed to soak into organic matter such as wood, petrochemicals, and asphalt, can cause these materials to detonate unpredictably on subsequent mechanical impact. | https://en.wikipedia.org/wiki?curid=22303
Osmium
Osmium (from Greek ὀσμή "osme", "smell") is a chemical element with the symbol Os and atomic number 76. It is a hard, brittle, bluish-white transition metal in the platinum group that is found as a trace element in alloys, mostly in platinum ores. Osmium is the densest naturally occurring element, with an experimentally measured (using X-ray crystallography) density of 22.59 g/cm³. Manufacturers use its alloys with platinum, iridium, and other platinum-group metals to make fountain pen nib tipping, electrical contacts, and other applications that require extreme durability and hardness. The element's abundance in the Earth's crust is among the rarest.
Osmium has a blue-gray tint and is the densest stable element; it is approximately twice as dense as lead and slightly denser than iridium. Calculations of density from X-ray diffraction data may produce the most reliable figures for these elements, giving a value of 22.59 g/cm³ for osmium, slightly denser than the 22.56 g/cm³ of iridium; both metals are nearly 23 times as dense as water, and about 1.2 times as dense as gold.
Osmium is a hard but brittle metal that remains lustrous even at high temperatures. It has a very low compressibility; correspondingly, its bulk modulus is extremely high, with reported values rivaling that of diamond. The hardness of osmium is moderately high. Because of its hardness, brittleness, low vapor pressure (the lowest of the platinum-group metals), and very high melting point (the third highest of all elements, after only tungsten and rhenium), solid osmium is difficult to machine, form, or work.
Osmium forms compounds with oxidation states ranging from −2 to +8. The most common oxidation states are +2, +3, +4, and +8. The +8 oxidation state is notable for being the highest attained by any chemical element aside from iridium's +9, and is otherwise encountered only in xenon, ruthenium, hassium, and iridium. The oxidation states −1 and −2, represented by two reactive carbonyl compounds, are used in the synthesis of osmium cluster compounds.
The most common compound exhibiting the +8 oxidation state is osmium tetroxide. This toxic compound is formed when powdered osmium is exposed to air. It is a very volatile, water-soluble, pale yellow, crystalline solid with a strong smell. Osmium powder has the characteristic smell of osmium tetroxide. Osmium tetroxide forms red osmates upon reaction with a base. With ammonia, it forms the nitrido-osmates . Osmium tetroxide boils at 130 °C and is a powerful oxidizing agent. By contrast, osmium dioxide (OsO2) is black, non-volatile, and much less reactive and toxic.
Only two osmium compounds have major applications: osmium tetroxide for staining tissue in electron microscopy and for the oxidation of alkenes in organic synthesis, and the non-volatile osmates for organic oxidation reactions.
Osmium pentafluoride (OsF5) is known, but osmium trifluoride (OsF3) has not yet been synthesized. The lower oxidation states are stabilized by the larger halogens, so that the trichloride, tribromide, triiodide, and even diiodide are known. The oxidation state +1 is known only for osmium iodide (OsI), whereas several carbonyl complexes of osmium, such as triosmium dodecacarbonyl (), represent oxidation state 0.
In general, the lower oxidation states of osmium are stabilized by ligands that are good σ-donors (such as amines) and π-acceptors (heterocycles containing nitrogen). The higher oxidation states are stabilized by strong σ- and π-donors, such as oxide (O2−) and nitride (N3−) ligands.
Despite its broad range of compounds in numerous oxidation states, osmium in bulk form at ordinary temperatures and pressures resists attack by all acids, including aqua regia, but is attacked by fused alkalis.
Osmium has seven naturally occurring isotopes, six of which are stable: 184Os, 187Os, 188Os, 189Os, 190Os, and (most abundant) 192Os. 186Os undergoes alpha decay with such a long half-life, about 2×10^15 years, approximately 140,000 times the age of the universe, that for practical purposes it can be considered stable. Alpha decay is predicted for all seven naturally occurring isotopes, but it has been observed only for 186Os, presumably because the others have very long half-lives. It is predicted that 184Os and 192Os can undergo double beta decay, but this radioactivity has not been observed yet.
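The scale of that half-life is easiest to appreciate as a ratio. A minimal sketch in Python, where both inputs are assumed round figures (a half-life of about 2×10^15 years and a universe age of about 1.38×10^10 years) rather than values taken from this article:

# Ratio of the assumed 186Os half-life to the assumed age of the universe
half_life_years = 2.0e15
universe_age_years = 1.38e10
print(f"{half_life_years / universe_age_years:,.0f}")  # ~145,000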
Osmium was discovered in 1803 by Smithson Tennant and William Hyde Wollaston in London, England. The discovery of osmium is intertwined with that of platinum and the other metals of the platinum group. Platinum reached Europe as "platina" ("small silver"), first encountered in the late 17th century in silver mines around the Chocó Department, in Colombia. The discovery that this metal was not an alloy, but a distinct new element, was published in 1748.
Chemists who studied platinum dissolved it in aqua regia (a mixture of hydrochloric and nitric acids) to create soluble salts. They always observed a small amount of a dark, insoluble residue. Joseph Louis Proust thought that the residue was graphite. Victor Collet-Descotils, Antoine François, comte de Fourcroy, and Louis Nicolas Vauquelin also observed iridium in the black platinum residue in 1803, but did not obtain enough material for further experiments. Later the two French chemists Antoine-François Fourcroy and Nicolas-Louis Vauquelin identified a metal in a platinum residue, which they called "ptène".
In 1803, Smithson Tennant analyzed the insoluble residue and concluded that it must contain a new metal. Vauquelin treated the powder alternately with alkali and acids and obtained a volatile new oxide, which he believed was of this new metal—which he named "ptene", from the Greek word (ptènos) for winged. However, Tennant, who had the advantage of a much larger amount of residue, continued his research and identified two previously undiscovered elements in the black residue, iridium and osmium. He obtained a yellow solution (probably of "cis"–[Os(OH)2O4]2−) by reactions with sodium hydroxide at red heat. After acidification he was able to distill the formed OsO4. He named it osmium after Greek "osme" meaning "a smell", because of the ashy and smoky smell of the volatile osmium tetroxide. Discovery of the new elements was documented in a letter to the Royal Society on June 21, 1804.
Uranium and osmium were early successful catalysts in the Haber process, the fixation of atmospheric nitrogen with hydrogen to produce ammonia, giving enough yield to make the process economically viable. At the time, a group at BASF led by Carl Bosch bought most of the world's supply of osmium to use as a catalyst. Shortly thereafter, in 1908, cheaper catalysts based on iron and iron oxides were introduced by the same group for the first pilot plants, removing the need for the expensive and rare osmium.
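The underlying transformation that these catalysts accelerate is the familiar ammonia synthesis equilibrium:

\[ \mathrm{N_2 + 3\,H_2 \rightleftharpoons 2\,NH_3} \]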
Nowadays osmium is obtained primarily from the processing of platinum and nickel ores.
Osmium is one of the even-numbered elements, which tend to be more cosmically abundant than their odd-numbered neighbours, placing it in the upper half of elements commonly found in space. It is, however, the least abundant stable element in Earth's crust, with an average mass fraction of 50 parts per trillion in the continental crust.
Osmium is found in nature as an uncombined element or in natural alloys, especially the iridium–osmium alloys osmiridium (osmium rich) and iridosmium (iridium rich). In nickel and copper deposits, the platinum-group metals occur as sulfides (e.g., (Pt,Pd)S), tellurides (e.g., PtBiTe), antimonides (e.g., PdSb), and arsenides (e.g., PtAs2); in all these compounds a small amount of the platinum is replaced by iridium and osmium. As with all of the platinum-group metals, osmium can be found naturally in alloys with nickel or copper.
Within Earth's crust, osmium, like iridium, is found at highest concentrations in three types of geologic structure: igneous deposits (crustal intrusions from below), impact craters, and deposits reworked from one of the former structures. The largest known primary reserves are in the Bushveld Igneous Complex in South Africa, though the large copper–nickel deposits near Norilsk in Russia, and the Sudbury Basin in Canada are also significant sources of osmium. Smaller reserves can be found in the United States. The alluvial deposits used by pre-Columbian people in the Chocó Department, Colombia are still a source for platinum group metals. The second large alluvial deposit was found in the Ural Mountains, Russia, which is still mined.
Osmium is obtained commercially as a by-product from nickel and copper mining and processing. During electrorefining of copper and nickel, noble metals such as silver, gold and the platinum-group metals, together with non-metallic elements such as selenium and tellurium, settle to the bottom of the cell as "anode mud", which forms the starting material for their extraction. Separating the metals requires that they first be brought into solution. Several methods can achieve this, depending on the separation process and the composition of the mixture. Two representative methods are fusion with sodium peroxide followed by dissolution in aqua regia, and dissolution in a mixture of chlorine with hydrochloric acid. Osmium, ruthenium, rhodium and iridium can be separated from platinum, gold and base metals by their insolubility in aqua regia, leaving a solid residue. Rhodium can be separated from the residue by treatment with molten sodium bisulfate. The insoluble residue, containing Ru, Os and Ir, is treated with sodium oxide, in which Ir is insoluble, producing water-soluble Ru and Os salts. After oxidation to the volatile oxides, RuO4 is separated from OsO4 by precipitation of (NH4)3RuCl6 with ammonium chloride.
After it is dissolved, osmium is separated from the other platinum group metals by distillation or extraction with organic solvents of the volatile osmium tetroxide. The first method is similar to the procedure used by Tennant and Wollaston. Both methods are suitable for industrial scale production. In either case, the product is reduced using hydrogen, yielding the metal as a powder or sponge that can be treated using powder metallurgy techniques.
Neither the producers nor the United States Geological Survey have published production figures for osmium. In 1971, United States production of osmium as a byproduct of copper refining was estimated at 2000 troy ounces (62 kg). In 2017, the estimated US import of osmium for consumption was 90 kg.
Because of the volatility and extreme toxicity of its oxide, osmium is rarely used in its pure state, but is instead often alloyed with other metals for high-wear applications. Osmium alloys such as osmiridium are very hard and, along with other platinum-group metals, are used in the tips of fountain pens, instrument pivots, and electrical contacts, as they can resist wear from frequent operation. They were also used for the tips of phonograph styli during the late 78 rpm and early "LP" and "45" record era, circa 1945 to 1955. Osmium-alloy tips were significantly more durable than steel and chromium needle points, but wore out far more rapidly than competing, and costlier, sapphire and diamond tips, so they were discontinued.
Osmium tetroxide has been used in fingerprint detection and in staining fatty tissue for optical and electron microscopy. As a strong oxidant, it cross-links lipids mainly by reacting with unsaturated carbon–carbon bonds, and thereby both fixes biological membranes in place in tissue samples and simultaneously stains them. Because osmium atoms are extremely electron-dense, osmium staining greatly enhances image contrast in transmission electron microscopy (TEM) studies of biological materials, which otherwise show very weak TEM contrast. Another osmium compound, osmium ferricyanide (OsFeCN), exhibits similar fixing and staining action.
The tetroxide and its derivative potassium osmate are important oxidants in organic synthesis. For the Sharpless asymmetric dihydroxylation, which uses osmate for the conversion of a double bond into a vicinal diol, Karl Barry Sharpless was awarded the Nobel Prize in Chemistry in 2001. OsO4 is very expensive for this use, so KMnO4 is often used instead, even though the yields are lower with this cheaper reagent.
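Schematically, the overall dihydroxylation converts an alkene into a vicinal diol; the R and R' groups below are generic placeholders, and in the catalytic version a stoichiometric co-oxidant regenerates the osmium species:

\[ \mathrm{RCH{=}CHR' \xrightarrow{OsO_4,\ H_2O,\ co\text{-}oxidant} RCH(OH){-}CH(OH)R'} \]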
In 1898, the Austrian chemist Carl Auer von Welsbach developed the Oslamp with a filament made of osmium, which he introduced commercially in 1902. After only a few years, osmium was replaced by the more stable metal tungsten. Tungsten has the highest melting point of all metals, and its use in light bulbs increases the luminous efficacy and life of incandescent lamps.
The light bulb manufacturer Osram (founded in 1906, when three German companies, Auer-Gesellschaft, AEG and Siemens & Halske, combined their lamp production facilities) derived its name from osmium and Wolfram (the German name for tungsten).
Like palladium, powdered osmium effectively absorbs hydrogen atoms, making it a potential candidate for a metal-hydride battery electrode. However, osmium is expensive and would react with potassium hydroxide, the most common battery electrolyte.
Osmium has high reflectivity in the ultraviolet range of the electromagnetic spectrum; for example, at 600 Å osmium has a reflectivity twice that of gold. This high reflectivity is desirable in space-based UV spectrometers, which have reduced mirror sizes due to space limitations. Osmium-coated mirrors were flown in several space missions aboard the Space Shuttle, but it soon became clear that the oxygen radicals in the low Earth orbit are abundant enough to significantly deteriorate the osmium layer.
The only known clinical use of osmium is synovectomy in arthritic patients in Scandinavia. It involves the local administration of osmium tetroxide (OsO4), a highly toxic compound. The lack of reports of long-term side effects suggests that osmium itself can be biocompatible, though this depends on the osmium compound administered. In 2011, osmium(VI) and osmium(II) compounds were reported to show anticancer activity in vivo, indicating a promising future for the use of osmium compounds as anticancer drugs.
Metallic osmium is harmless, but finely divided osmium is pyrophoric and reacts with oxygen at room temperature to form volatile osmium tetroxide. Some osmium compounds are also converted to the tetroxide if oxygen is present. This makes osmium tetroxide the main source of contact with the environment.
Osmium tetroxide is highly volatile, penetrates skin readily, and is very toxic by inhalation, ingestion, and skin contact. Even low airborne concentrations of osmium tetroxide vapor can cause lung congestion and skin or eye damage; it should therefore be handled in a fume hood. Osmium tetroxide is rapidly reduced to relatively inert compounds by, for example, ascorbic acid or polyunsaturated vegetable oils (such as corn oil).
Osmium is usually sold as a minimum 99.9% pure powder. Like other precious metals, it is measured by troy weight and by grams. The market price of osmium has not changed in decades, primarily because little change has occurred in supply and demand. In addition to so little of it being available, osmium is difficult to work with, has few uses, and is a challenge to store safely because of the toxic compound it produces when it oxidizes.
While the price of $400 per troy ounce has remained steady since the 1990s, inflation since that time has led to the metal losing about one-third of its value in the two decades prior to 2019. | https://en.wikipedia.org/wiki?curid=22304 |
Oxide
An oxide is a chemical compound that contains at least one oxygen atom and one other element in its chemical formula. "Oxide" itself is the dianion of oxygen, the O2− ion. Metal oxides thus typically contain an anion of oxygen in the oxidation state of −2. Most of the Earth's crust consists of solid oxides, the result of elements being oxidized by the oxygen in air or in water. Even materials considered pure elements often develop an oxide coating. For example, aluminium foil develops a thin skin of Al2O3 (called a passivation layer) that protects the foil from further corrosion. Certain elements can form multiple oxides, differing in the amounts of the element combining with the oxygen. Examples are carbon, iron, nitrogen (see nitrogen oxide), silicon, titanium, and aluminium. In such cases the oxides are distinguished by specifying the numbers of atoms involved, as in carbon monoxide and carbon dioxide, or by specifying the element's oxidation number, as in iron(II) oxide and iron(III) oxide.
Due to its electronegativity, oxygen forms stable chemical bonds with almost all elements to give the corresponding oxides. Noble metals (such as gold or platinum) are prized because they resist direct chemical combination with oxygen, and substances like gold(III) oxide must be generated by indirect routes.
Two independent pathways for corrosion of elements are hydrolysis and oxidation by oxygen; the combination of water and oxygen is even more corrosive. Virtually all elements burn in an atmosphere of oxygen or an oxygen-rich environment. In the presence of water and oxygen (or simply air), some elements, such as sodium, react rapidly to give the hydroxides. In part for this reason, alkali and alkaline earth metals are not found in nature in their metallic, i.e., native, form. Cesium is so reactive with oxygen that it is used as a getter in vacuum tubes, and alloys of potassium and sodium, so-called NaK, are used to deoxygenate and dehydrate some organic solvents. The surface of most metals consists of oxides and hydroxides in the presence of air. A well-known example is aluminium foil, which is coated with a thin film of aluminium oxide that passivates the metal, slowing further corrosion. The aluminium oxide layer can be built to greater thickness by the process of electrolytic anodizing. Though solid magnesium and aluminium react slowly with oxygen at STP, they, like most metals, burn in air, generating very high temperatures. Finely grained powders of most metals can be dangerously explosive in air. Consequently, they are often used in solid-fuel rockets.
In dry oxygen, iron readily forms iron(II) oxide, but the formation of the hydrated ferric oxides, Fe2O3−"x"(OH)2"x", that mainly comprise rust, typically requires oxygen "and" water. Free oxygen production by photosynthetic bacteria some 3.5 billion years ago precipitated iron out of solution in the oceans as Fe2O3 in the economically important iron ore hematite.
Oxides have a range of different structures, from individual molecules to polymeric and crystalline structures. At standard conditions, oxides may range from solids to gases.
Oxides of most metals adopt polymeric structures. The oxide typically links three metal atoms (e.g., rutile structure) or six metal atoms (carborundum or rock salt structures). Because the M–O bonds are typically strong and these compounds are crosslinked polymers, the solids tend to be insoluble in solvents, though they are attacked by acids and bases. The formulas are often deceptively simple; many metal oxides are nonstoichiometric compounds.
Although most metal oxides are polymeric, some oxides are molecules. Examples of molecular oxides are carbon dioxide and carbon monoxide. All simple oxides of nitrogen are molecular, e.g., NO, N2O, NO2 and N2O4. Phosphorus pentoxide is a more complex molecular oxide with a deceptive name, the real formula being P4O10. Some polymeric oxides depolymerize when heated to give molecules, examples being selenium dioxide and sulfur trioxide. Tetroxides are rare; the more common examples are ruthenium tetroxide, osmium tetroxide, and xenon tetroxide.
Many oxyanions are known, such as polyphosphates and polyoxometalates. Oxycations are rarer, some examples being nitrosonium (NO+), vanadyl (VO2+), and uranyl (UO22+). Many compounds are known that contain both oxide and other groups. In organic chemistry, these include ketones and many related carbonyl compounds. For the transition metals, many oxo complexes are known, as well as oxyhalides.
Conversion of a metal oxide to the metal is called reduction. The reduction can be induced by many reagents. Many metal oxides convert to metals simply by heating.
Metals are "won" from their oxides by chemical reduction, i.e. by the addition of a chemical reagent. A common and cheap reducing agent is carbon in the form of coke. The most prominent example is that of iron ore smelting. Many reactions are involved, but the simplified equation is usually shown as:
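With coke (carbon) as the reducing agent, one common simplified form of the iron ore smelting equation is:

\[ \mathrm{2\,Fe_2O_3 + 3\,C \rightarrow 4\,Fe + 3\,CO_2} \]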
Metal oxides can be reduced by organic compounds. This redox process is the basis for many important transformations in chemistry, such as the detoxification of drugs by the P450 enzymes and the production of ethylene oxide, which is converted to antifreeze. In such systems, the metal center transfers an oxide ligand to the organic compound followed by regeneration of the metal oxide, often by oxygen in the air.
Metals that are lower in the reactivity series can be reduced by heating alone. For example, silver oxide decomposes at 200 °C:
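\[ \mathrm{2\,Ag_2O \rightarrow 4\,Ag + O_2} \]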
Metals that are more reactive displace the oxide of the metals that are less reactive. For example, zinc is more reactive than copper, so it displaces copper (II) oxide to form zinc oxide:
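\[ \mathrm{Zn + CuO \rightarrow ZnO + Cu} \]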
Apart from metals, hydrogen can also displace metal oxides to form hydrogen oxide, also known as water:
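For example, taking copper(II) oxide as a representative metal oxide:

\[ \mathrm{CuO + H_2 \rightarrow Cu + H_2O} \]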
Since reactive metals form stable oxides, some metal oxides must be electrolyzed to be reduced. This includes sodium oxide, potassium oxide, calcium oxide, magnesium oxide, and aluminium oxide. The oxides must be molten before graphite electrodes are immersed in them:
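For example, for molten aluminium oxide (in industrial practice dissolved in a cryolite bath), the overall electrolysis is:

\[ \mathrm{2\,Al_2O_3 \rightarrow 4\,Al + 3\,O_2} \]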
Oxides typically react with acids or bases, sometimes both. Those reacting only with acids are labeled basic oxides; those reacting only with bases are called acidic oxides. Oxides that react with both are amphoteric. Metals tend to form basic oxides, non-metals tend to form acidic oxides, and amphoteric oxides are formed by elements near the boundary between metals and non-metals (metalloids). This reactivity is the basis of many practical processes, such as the extraction of some metals from their ores in the process called hydrometallurgy.
Oxides of more electropositive elements tend to be basic. They are called "basic anhydrides". Exposed to water, they may form basic hydroxides. For example, sodium oxide is basic—when hydrated, it forms sodium hydroxide. Oxides of more electronegative elements tend to be acidic. They are called "acid anhydrides"; adding water, they form oxoacids. For example, dichlorine heptoxide is an acid anhydride; perchloric acid is its fully hydrated form. Some oxides can act as both acid and base. They are amphoteric. An example is aluminium oxide. Some oxides do not show behavior as either acid or base.
The oxide ion has the formula O2−. It is the conjugate base of the hydroxide ion, OH−, and is encountered in ionic solids such as calcium oxide. O2− is unstable in aqueous solution: its affinity for H+ is so great (pKb ~ −38) that it abstracts a proton from a solvent H2O molecule:
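\[ \mathrm{O^{2-} + H_2O \rightarrow 2\,OH^-} \]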
The equilibrium constant of this reaction is pKeq ~ −22.
In the 18th century, oxides were named calxes or calces after the calcination process used to produce oxides. "Calx" was later replaced by "oxyd."
The reductive dissolution of a transition metal oxide occurs when dissolution is coupled to a redox event. For example, ferric oxides dissolve in the presence of reductants, which can include organic compounds or bacteria. Reductive dissolution is integral to geochemical phenomena such as the iron cycle.
Reductive dissolution does not necessarily occur at the site where the reductant adsorbs. Instead, the added electrons travel through the particle, causing reductive dissolution elsewhere on the particle.
Sometimes, metal-oxygen ratios are used to name oxides. Thus, NbO would be called niobium monoxide and TiO2 is titanium dioxide. This naming follows the Greek numerical prefixes. In the older literature and continuing in industry, oxides are named by adding the suffix "-a" to the element's name. Hence alumina, magnesia and chromia, are, respectively, Al2O3, MgO and Cr2O3.
Special types of oxides are peroxide (O22−) and superoxide (O2−). In such species, oxygen is assigned a higher oxidation state than in oxide.
The chemical formulas of the oxides of the chemical elements in their highest oxidation state are predictable and are derived from the number of valence electrons for that element. Even the chemical formula of O4, tetraoxygen, is predictable from oxygen's position as a group 16 element. One exception is copper, for which the highest-oxidation-state oxide is copper(II) oxide and not copper(I) oxide. Another exception is fluorine, which does not form the expected F2O7 but instead OF2.
Since fluorine is more electronegative than oxygen, oxygen difluoride (OF2) does not represent an oxide of fluorine, but instead represents a fluoride of oxygen.
The following table gives examples of commonly encountered oxides. Only a few representatives are given, as the number of polyatomic ions encountered in practice is very large. | https://en.wikipedia.org/wiki?curid=22305 |
Overgrazing
Overgrazing occurs when plants are exposed to intensive grazing for extended periods of time, or without sufficient recovery periods. It can be caused by livestock in poorly managed agricultural applications, game reserves, or nature reserves, or by immobile, travel-restricted populations of native or non-native wild animals. However, "overgrazing" is a controversial concept, based on equilibrium system theory. A strong indicator of overgrazing is the need to bring in additional feed from outside the farm, often to support livestock through the winter. Traditionally this feed was sourced on the farm, with fewer animals being kept and some fields used for hay and silage production. Modern farm businesses often choose to keep more animals than their land can support alone, buying in external feed to offset this.
Overgrazing reduces the usefulness, productivity, and biodiversity of the land and is one cause of desertification and erosion. It is also seen as a cause of the spread of invasive species of non-native plants and of weeds. It is reversed or prevented by moving grazers in large herds, such as the American bison of the Great Plains or the migratory wildebeest of the African savannas, or by holistic planned grazing.
Sustainable grassland production is based on grass and grassland management, land management, animal management and livestock marketing. Grazing management, with sustainable agriculture and agroecology practices, is the foundation of grassland-based livestock production, since it affects both animal and plant health and productivity. Several new grazing models and management systems, such as holistic management and permaculture, attempt to reduce or eliminate overgrazing.
One indicator of overgrazing is that the animals run short of pasture. In some regions of the United States under continuous grazing, overgrazed pastures are dominated by short-grass species such as bluegrass and will be less than 2–3 inches tall in the grazed areas. In other parts of the world, overgrazed pasture is typically taller than sustainably grazed pasture, with grass heights typically over 1 meter and dominated by unpalatable species such as "Aristida" or "Imperata". In all cases, palatable tall grasses such as orchard grass are sparse or non-existent. In such cases of overgrazing, soil may be visible between plants in the stand, allowing erosion to occur, though in many circumstances overgrazed pastures have a greater sward cover than sustainably grazed pastures.
Under rotational grazing, overgrazed plants do not have enough time to recover to the proper height between grazing events. The animals resume grazing before the plants have restored carbohydrate reserves and regrown the roots lost after the last defoliation. The result is the same as under continuous grazing: in some parts of the United States, tall-growing species die and short-growing species that are more subject to drought injury come to dominate the pasture, while in most other parts of the world tall, drought-tolerant, unpalatable species such as "Imperata" or "Aristida" come to dominate. As the sod thins, weeds encroach into the pasture in some parts of the United States, whereas in most other parts of the world overgrazing can promote thick swards of native unpalatable grasses that hamper the spread of weeds.
Another indicator of overgrazing in some parts of North America is that livestock run out of pasture, and hay needs to be fed early in the fall. In contrast, most areas of the world do not experience the same climatic regime as the continental United States and hay feeding is rarely conducted.
Overgrazing is also indicated in livestock performance and condition. Cows having inadequate pasture immediately following their calf's weaning may have poor body condition the following season. This may reduce the health and vigor of cows and calves at calving. Also, cows in poor body condition do not cycle as soon after calving, which can result in delayed breeding and a long calving season. With good cow genetics, nutrition, ideal seasons and controlled breeding 55% to 75% of the calves should come in the first 21 days of the calving season. Poor weaning weights of calves can be caused by insufficient pasture, when cows give less milk and the calves need pasture to maintain weight gain.
Overgrazing typically increases soil erosion. Reduction in soil depth, soil organic matter and soil fertility impairs the land's future natural and agricultural productivity. Loss of soil fertility can sometimes be mitigated by applying the appropriate lime and organic fertilizers; however, the loss of soil depth and organic matter takes centuries to correct. Their loss is critical in determining the soil's water-holding capacity and how well pasture plants do during dry weather.
Overgrazing results in increased trampling of soil by livestock, which increases soil compaction (Fuls, 1992) and thus decreases the permeability of the soil. Furthermore, with more soil exposed due to the decrease in plant biomass, the soil receives more direct rainfall, creating a crust layer that is compacted and impermeable. This impermeability increases runoff and soil erosion.
With continued overutilization of land for grazing, degradation increases. This leads to poor soil conditions that only xeric and early successional species can tolerate.
Native plant grass species, both individual bunch grasses and those in grasslands, are especially vulnerable. For example, excessive browsing by white-tailed deer can lead to the growth of less preferred species of grasses and ferns, or of non-native plant species that can displace native woody plants, decreasing biodiversity.
In the continental United States, preventing overgrazing requires matching the forage supply to the herd's requirements. This means that a buffer is needed in the system to accommodate the period of fastest forage growth.
Another potential buffer is to plant warm-season perennial grasses such as switchgrass, which do not grow early in the season. This reduces the area that the livestock can use early in the season, making it easier for them to keep up with the cool-season grasses. The animals then use the warm-season grasses during the heat of the summer, and the cool-season grasses recover for fall grazing.
The grazing guidelines in the table are for rotationally grazed, cool-season forages. When using continuous grazing, manage pasture height at one-half the recommended turn-in height for rotational grazing to optimize plant health. The growth habit of some forage species, such as alfalfa, does not permit their survival under continuous grazing. When managing for legumes in the stand, it is beneficial to use rotational grazing and graze the stand close and then give adequate rest to stimulate the legumes' growth.
Overgrazing is used as an example in the economic concept now known as the Tragedy of the Commons, devised in a 1968 paper by Garrett Hardin. This cited the work of a Victorian economist who used the overgrazing of common land as an example of self-interested overuse of a shared resource. Hardin's example could only apply to unregulated use of land regarded as a common resource.
Normally, rights of use of Common land in England and Wales were, and still are, closely regulated, and available only to "commoners". If excessive use was made of common land, for example in overgrazing, a common would be "stinted", that is, a limit would be put on the number of animals each commoner was allowed to graze. These regulations were responsive to demographic and economic pressure; thus rather than let a common become degraded, access was restricted even further. This important part of actual historic practice was absent from the economic model of Hardin.
In reality the use of common land in England and Wales was a triumph of conserving a scarce resource using agreed custom and practice.
The violent herder–farmer conflicts in Nigeria, Mali, Sudan and other countries in the Sahel region have been exacerbated by land degradation and overgrazing.
Various countries in Sub-Sahara Africa are affected by overgrazing and resulting ecological effects.
In Namibia, overgrazing is considered the main cause of the thickening of shrubs and bushes at the expense of grasses on a land area of up to 45 million hectares (see bush encroachment).
Overgrazing does not necessarily mean total denudation of the land, and it can also occur with native species. In 2013, the local government of the Australian Capital Territory, Australia, authorised the culling of 1,455 kangaroos due to overgrazing. | https://en.wikipedia.org/wiki?curid=22306 |
Oxford
Oxford () is a university city in Oxfordshire, England, with a population of 154,600. It is northwest of London, from Birmingham and from Reading by road.
The city is home to the University of Oxford, the oldest university in the English-speaking world, and has buildings in every style of English architecture from late Anglo-Saxon. Oxford's industries include motor manufacturing, education, publishing, information technology and science.
Oxford was first settled by the Anglo-Saxons and was initially known as Oxenaforda, meaning "ford of the oxen", as referenced in Florence of Worcester's "Chronicon ex chronicis". A river crossing for oxen began around AD 900.
In the 10th century, Oxford became an important military frontier town between the kingdoms of Mercia and Wessex and was raided by Danes. In 1002, many Danes were killed in Oxford during the St. Brice's Day massacre ordered by Æthelred the Unready. The skeletons of more than thirty suspected victims were unearthed in 2008 during the course of building work at St John's College. The "massacre" was a contributing factor to King Sweyn I of Denmark's invasion of England in 1003 and the sacking of Oxford by the Danes in 1004.
Oxford was heavily damaged during the Norman Invasion of 1066. Following the conquest, the town was assigned to a governor, Robert D'Oyly, who ordered the construction of Oxford Castle to confirm Norman authority over the area. The castle has never been used for military purposes and its remains survive to this day. D'Oyly set up a monastic community in the castle consisting of a chapel and living quarters for monks ("St George in the Castle"). The community never grew large but it earned its place in history as one of Britain's oldest places of formal education. It was there that in 1139 Geoffrey of Monmouth wrote his "History of the Kings of Britain", a compilation of Arthurian legends. Additionally, there is evidence of Jews living in the city as early as 1141, and during the 12th century the Jewish community is estimated to have numbered about 80–100. The city was besieged during The Anarchy in 1142.
In 1191, a city charter stated in Latin:
Oxford's prestige was enhanced by its charter granted by King Henry II, granting its citizens the same privileges and exemptions as those enjoyed by the capital of the kingdom; and various important religious houses were founded in or near the city. Oxford's status as a liberty obtained from this period until the 19th century. A grandson of King John established Rewley Abbey for the Cistercian Order; and friars of various orders (Dominicans, Franciscans, Carmelites, Augustinians and Trinitarians) all had houses of varying importance at Oxford. Parliaments were often held in the city during the 13th century. The Provisions of Oxford were instigated by a group of barons led by Simon de Montfort; these documents are often regarded as England's first written constitution.
Richard I of England (reigned 6 July 1189 – 6 April 1199) and John, King of England (reigned 6 April 1199 – 19 October 1216) the sons of Henry II of England, were both born at Beaumont Palace in Oxford, on 8 September 1157 and 24 December 1166 respectively. A plaque in Beaumont Street commemorates these events.
The University of Oxford is first mentioned in 12th-century records. Of the hundreds of aularian houses that sprang up across the city, only St Edmund Hall (c. 1225) remains. What put an end to the halls was the emergence of colleges. Oxford's earliest colleges were University College (1249), Balliol (1263) and Merton (1264). These colleges were established at a time when Europeans were starting to translate the writings of Greek philosophers. These writings challenged European ideology, inspiring scientific discoveries and advancements in the arts, as society began to see itself in a new way. These colleges at Oxford were supported by the Church in the hope of reconciling Greek philosophy and Christian theology. The relationship between "town and gown" has often been uneasy – as many as 93 students and townspeople were killed in the St Scholastica Day Riot of 1355.
The sweating sickness epidemic in 1517 was particularly devastating to Oxford and Cambridge where it killed half of both cities' populations, including many students and dons.
Christ Church Cathedral, Oxford is unique in combining a college chapel and a cathedral in one foundation. Originally the Priory Church of St Frideswide, the building was extended and incorporated into the structure of the Cardinal's College shortly before its refounding as Christ Church in 1546, since when it has functioned as the cathedral of the Diocese of Oxford.
The Oxford Martyrs were tried for heresy in 1555 and subsequently burnt at the stake, on what is now Broad Street, for their religious beliefs and teachings. The three martyrs were the bishops Hugh Latimer and Nicholas Ridley, and the archbishop Thomas Cranmer. The Martyrs' Memorial stands nearby, round the corner to the north on St. Giles.
During the English Civil War, Oxford housed the court of Charles I in 1642, after the king was expelled from London. The town yielded to Parliamentarian forces under General Fairfax in the Siege of Oxford of 1646. It later housed the court of Charles II during the Great Plague of London in 1665–66. Although reluctant to do so, he was forced to evacuate when the plague got too close. The city suffered two serious fires in 1644 and 1671.
In 1790, the Oxford Canal connected the city with Coventry. The Duke's Cut was completed by the Duke of Marlborough in 1789 to link the new canal with the River Thames; and, in 1796, the Oxford Canal company built its own link to the Thames, at Isis Lock. In 1844, the Great Western Railway linked Oxford with London via Didcot and Reading, and other rail routes soon followed.
In the 19th century, the controversy surrounding the Oxford Movement in the Anglican Church drew attention to the city as a focus of theological thought.
A permanent military presence was established in the city with the completion of Cowley Barracks in 1876.
Local government in Oxford was reformed by the Municipal Corporations Act 1835, and the boundaries of the borough were extended to include a small area east of the River Cherwell. The boundaries were further extended in 1889 to add the areas of Grandpont and New Hinksey, south of the Thames, which were transferred from Berkshire to Oxfordshire. At the same time Summertown and the western part of Cowley were also added to the borough. In 1890 Oxford became a county borough.
Oxford Town Hall was built by Henry T. Hare; the foundation stone was laid on 6 July 1893 and opened by the future King Edward VII on 12 May 1897. The site has been the seat of local government since the Guild Hall of 1292 and though Oxford is a city and a Lord Mayoralty, the building is still called by its traditional name of "Town Hall".
During the First World War, the population of Oxford changed. The number of University members was significantly reduced as students, fellows and staff enlisted. Some of their places in college accommodation were taken by soldiers in training. Another reminder of the ongoing war was found in the influx of wounded and disabled soldiers, who were treated in new hospitals housed in buildings such as the university's Examination School, the town hall and Somerville College.
By the early 20th century, there was rapid industrial and population growth, with the printing and publishing industries becoming well established by the 1920s. In 1929 the boundaries of the city were extended to include the suburbs of Headington, Cowley and Iffley to the east, and Wolvercote to the north.
Also during the 1920s, the economy and society of Oxford underwent a huge transformation as William Morris established Morris Motors Limited to mass-produce cars in Cowley, on the south-eastern edge of the city. By the early 1970s over 20,000 people worked in Cowley at the huge Morris Motors and Pressed Steel Fisher plants. Oxford was now a city of two halves: the university city to the west of Magdalen Bridge and the car town to the east. This led to the witticism that "Oxford is the left bank of Cowley". Cowley suffered major job losses in the 1980s and 1990s during the decline of British Leyland, but now produces the successful Mini for BMW on a smaller site. Much of the original car factory complex at Cowley was demolished in the 1990s, and the area is now the site of the Oxford Business Park.
During the Second World War, Oxford was largely ignored by the German air raids during the Blitz, perhaps due to the lack of heavy industry such as steelworks or shipbuilding that would have made it a target, although it was still affected by the rationing and influx of refugees fleeing London and other cities. The university's colleges served as temporary military barracks and training areas for soldiers before deployment.
On 6 May 1954, Roger Bannister, a 25-year-old medical student, ran the first authenticated sub-four-minute mile at the Iffley Road running track in Oxford. Although he had previously studied at Oxford University, Bannister was studying at St Mary's Hospital Medical School in London at the time. He later returned to Oxford University and became Master of Pembroke College.
Oxford's second university, Oxford Brookes University, formerly the Oxford School of Art and then Oxford Polytechnic, based at Headington Hill, was given its charter in 1991 and for ten consecutive years was voted the best new university in the UK. It was named in honour of the school's founding principal, John Henry Brookes.
The influx of migrant labour to the car plants and hospitals, recent immigration from south Asia, and a large student population, have given Oxford a notably cosmopolitan character, especially in the Headington and Cowley Road areas with their many bars, cafes, restaurants, clubs, Asian shops and fast food outlets and the annual Cowley Road Carnival. Oxford is one of the most diverse small cities in Britain: the most recent population estimates for 2011 showed that 22% of the population were from black or minority ethnic groups, compared to 13% in England.
Oxford's latitude and longitude are conventionally taken at Carfax Tower, which is usually considered the centre of the city.
Oxford is north-west of Reading, north-east of Swindon, east of Cheltenham and east of Gloucester, south-west of Milton Keynes, south-east of Evesham, south of Rugby and west-north-west of London. The rivers Cherwell and Thames (the latter also sometimes known locally as the Isis, supposedly from the Latinised name) run through Oxford and meet south of the city centre. These rivers and their flood plains constrain the size of the city centre.
Oxford has a maritime temperate climate (Köppen: "Cfb"). Precipitation is uniformly distributed throughout the year and is provided mostly by weather systems that arrive from the Atlantic. The lowest temperature ever recorded in Oxford was on 24 December 1860. The highest temperature ever recorded in Oxford is on 25 July 2019.
The average conditions below are from the Radcliffe Meteorological Station. It boasts the longest series of temperature and rainfall records for one site in Britain. These records are continuous from January 1815. Irregular observations of rainfall, cloud and temperature exist from 1767.
The driest year on record was 1788 and the wettest was 2012. The wettest month on record was September 1774. The warmest month on record is July 1983 and the coldest is January 1963. The warmest year on record is 2014 and the coldest is 1879. The sunniest month on record is May 2020, with 331.7 hours of sunshine, and December 1890 is the least sunny, with 5.0 hours. The greatest one-day rainfall occurred on 10 July 1968. The greatest one-day snowfall occurred on 25 April 1908, and the greatest known snow depth was recorded in February 1888.
Twenty-two percent of the population come from Black, Asian and minority ethnic (BAME) groups.
Aside from the city centre, there are several suburbs and neighbourhoods within the borders of the city of Oxford, including:
Suburbs and neighbourhoods outside the city boundaries include:
Oxford is at the centre of the Oxford Green Belt, an environmental and planning policy that regulates the rural space in Oxfordshire surrounding the city, aiming to prevent urban sprawl and minimise convergence with nearby settlements. The policy has been blamed for the large rise in house prices in Oxford, making it the least affordable city in the UK outside London, with estate agents calling for brownfield land inside the green belt to be released for new housing. The vast majority of the area covered is outside the city, but some green spaces within it are covered by the designation, such as much of the Thames and Cherwell river flood-meadows and the village of Binsey, along with several smaller portions on the fringes. Other landscape features and places of interest covered include Cutteslowe Park and its mini railway attraction, the University Parks, Hogacre Common Eco Park, numerous sports grounds, Aston's Eyot, St Margaret's Church and well, and Wolvercote Common and community orchard.
Oxford's economy includes manufacturing, publishing and science-based industries as well as education, research and tourism.
Oxford has been an important centre of motor manufacturing since Morris Motors was established in the city in 1910. The principal production site for Mini cars, owned by BMW since 2000, is in the Oxford suburb of Cowley. The plant, which survived the threat of closure in the early 1990s, also produced cars under the Austin and Rover brands following the demise of the Morris brand in 1984, although the last Morris-badged car was produced there in 1982.
Oxford University Press, a department of the University of Oxford, is based in the city, although it no longer operates its own paper mill and printing house. The city is also home to the UK operations of Wiley-Blackwell, Elsevier and several smaller publishing houses.
The presence of the university has given rise to many science and technology based businesses, including Oxford Instruments, Research Machines and Sophos. The university established Isis Innovation in 1987 to promote technology transfer. The Oxford Science Park was established in 1990, and the Begbroke Science Park, owned by the university, lies north of the city.
Oxford increasingly has a reputation for being a centre of digital innovation, as epitomized by Digital Oxford. Several startups including Passle, Brainomix, Labstep, and more, are based in Oxford.
The presence of the university has also led to Oxford becoming a centre for the education industry. Companies often draw their teaching staff from the pool of Oxford University students and graduates, and, especially for EFL education, use their Oxford location as a selling point.
There is a long history of brewing in Oxford. Several of the colleges had private breweries, one of which, at Brasenose, survived until 1889. In the 16th century brewing and malting appear to have been the most popular trades in the city. There were breweries in Brewer Street and Paradise Street, near the Castle Mill Stream.
The rapid expansion of Oxford and the development of its railway links after the 1840s facilitated expansion of the brewing trade. As well as expanding the market for Oxford's brewers, railways enabled brewers further from the city to compete for a share of its market. By 1874 there were nine breweries in Oxford and 13 brewers' agents in Oxford shipping beer in from elsewhere. The nine breweries were: Flowers & Co in Cowley Road, Hall's St Giles Brewery, Hall's Swan Brewery (see below), Hanley's City Brewery in Queen Street, Le Mills's Brewery in St. Ebbes, Morrell's Lion Brewery in St Thomas Street (see below), Simonds's Brewery in Queen Street, Weaving's Eagle Brewery (by 1869 the Eagle Steam Brewery) in Park End Street and Wootten and Cole's St. Clement's Brewery.
The Swan's Nest Brewery, later the Swan Brewery, was established by the early 18th century in Paradise Street, and in 1795 was acquired by William Hall. The brewery became known as Hall's Oxford Brewery, which acquired other local breweries. Hall's Brewery was acquired by Samuel Allsopp & Sons in 1926, after which it ceased brewing in Oxford.
Morrell's was founded in 1743 by Richard Tawney. He formed a partnership in 1782 with Mark and James Morrell, who eventually became the owners. After an acrimonious family dispute this much-loved brewery was closed in 1998, the beer brand names being taken over by the Thomas Hardy Burtonwood brewery, while the 132 tied pubs were bought by Michael Cannon, owner of the American hamburger chain Fuddruckers, through a new company, Morrells of Oxford. The new owners sold most of the pubs on to Greene King in 2002. The Lion Brewery was converted into luxury apartments in 2002.
The Taylor family of Loughborough had a bell-foundry in Oxford between 1786 and 1854.
Outside the city centre:
Oxford has numerous major tourist attractions, many belonging to the university and colleges. As well as several famous institutions, the town centre is home to Carfax Tower and the University Church of St Mary the Virgin, both of which offer views over the spires of the city. Many tourists shop at the historic Covered Market. In the summer punting on the Thames/Isis and the Cherwell is popular.
The University of Oxford is the oldest university in the English-speaking world and one of the most famous and prestigious higher education institutions of the world, averaging nine applications to every available place, and attracting 40% of its academic staff and 17% of undergraduates from overseas. It is currently ranked as the world's number one university, according to The Times Higher Education World University Rankings.
Oxford is renowned for its tutorial-based method of teaching, with students attending an average of one one-hour tutorial a week.
As well as being a major draw for tourists (9.1 million in 2008, similar in 2009), Oxford city centre has many shops, several theatres and an ice rink. The historic buildings make this location a popular target for film and TV crews.
The city centre is relatively small, and is centred on Carfax, a crossroads which forms the junction of Cornmarket Street (pedestrianised), Queen Street (mainly pedestrianised), St Aldate's and the High Street ("the High"; blocked for through traffic). Cornmarket Street and Queen Street are home to Oxford's various chain stores, as well as a small number of independent retailers, one of the longest established of which is Boswell's, founded in 1738. St Aldate's has few shops but several local government buildings, including the town hall, the city police station and local council offices. The High (the word "street" is traditionally omitted) is the longest of the four streets and has a number of independent and high-end chain stores, but mostly university and college buildings.
There are two small shopping malls in the city centre: the Clarendon Centre and the Westgate Centre. The Westgate Centre is named for the original West Gate in the city wall, and is at the west end of Queen Street. A major redevelopment and expansion, with a new John Lewis department store and a number of new homes, was completed in October 2017.
Blackwell's Bookshop is a large bookshop which claims the largest single room devoted to book sales in the whole of Europe, the cavernous Norrington Room (10,000 sq ft).
The University of Oxford maintains the largest university library system in the UK, and, with over 11 million volumes housed on 120 miles (190 km) of shelving, the Bodleian group is the second-largest library in the UK, after the British Library. The Bodleian is a legal deposit library, which means that it is entitled to request a free copy of every book published in the UK. As such, its collection is growing at a rate of over three miles (five kilometres) of shelving every year.
Visitors can take a guided tour of the Old Bodleian Library to see inside its historic rooms, including the 15th-century Divinity School, medieval Duke Humfrey's Library, and the Radcliffe Camera. The Weston Library was redeveloped and reopened in 2015, with a new shop, café and exhibition galleries for visitors.
Oxford is home to many museums, galleries, and collections, most of which are free of admission charges and are major tourist attractions. The majority are departments of the University of Oxford.
The first of these to be established was the Ashmolean Museum, the world's first university museum, and the oldest museum in the UK. Its first building was erected in 1678–1683 to house a cabinet of curiosities given to the University of Oxford in 1677. The museum reopened in 2009 after a major redevelopment. It holds significant collections of art and archaeology, including works by Michelangelo, Leonardo da Vinci, Turner, and Picasso, as well as treasures such as the Scorpion Macehead, the Parian Marble and the Alfred Jewel. It also contains "The Messiah", a pristine Stradivarius violin, regarded by some as one of the finest examples in existence.
The University Museum of Natural History holds the University's zoological, entomological and geological specimens. It is housed in a large neo-Gothic building on Parks Road, in the University's Science Area. Among its collection are the skeletons of a "Tyrannosaurus rex" and "Triceratops", and the most complete remains of a dodo found anywhere in the world. It also hosts the Simonyi Professorship of the Public Understanding of Science, currently held by Marcus du Sautoy.
Adjoining the Museum of Natural History is the Pitt Rivers Museum, founded in 1884, which displays the University's archaeological and anthropological collections, currently holding over 500,000 items. It recently built a new research annexe; its staff have been involved with the teaching of anthropology at Oxford since its foundation, when, as part of his donation, General Augustus Pitt Rivers stipulated that the University establish a lectureship in anthropology.
The Museum of the History of Science is housed on Broad St in the world's oldest-surviving purpose-built museum building. It contains 15,000 artefacts, from antiquity to the 20th century, representing almost all aspects of the history of science.
In the University's Faculty of Music on St Aldate's is the Bate Collection of Musical Instruments, a collection mostly of instruments from Western classical music, from the medieval period onwards. Christ Church Picture Gallery holds a collection of over 200 old master paintings. The University also has an archive at the Oxford University Press Museum.
Other museums and galleries in Oxford include Modern Art Oxford, the Museum of Oxford, the Oxford Castle, and The Story Museum.
Oxford is a very green city, with several parks and nature walks within the ring road, as well as several sites just outside the ring road. In total, 28 nature reserves exist within or just outside Oxford ring road, including:
In addition to the larger airports in the region, Oxford is served by nearby Oxford Airport, in Kidlington. The airport is home to CAE Oxford Aviation Academy and Airways Aviation airline pilot flight training centres, several private jet companies, and the headquarters of Airbus Helicopters UK.
Bus services in Oxford and its suburbs are run by the Oxford Bus Company and Stagecoach Oxfordshire as well as other operators including Arriva Shires & Essex and Thames Travel.
Arriva Shires & Essex operates Sapphire route 280 to Aylesbury via Wheatley, Thame and Haddenham seven days a week, at a frequency of up to every 20 minutes. The new Sapphire buses have three-pin power sockets, leather seats and free, onboard Wi-Fi.
Oxford has five park and ride car parks with frequent bus links to the city centre:
There are also bus services to the John Radcliffe Hospital (from Thornhill and Water Eaton/Oxford Parkway) and to the Churchill and Nuffield Hospitals (from Thornhill). Oxford has one of the largest urban park and ride networks in the UK. Its five sites have a combined capacity of 4,930 car parking spaces, served by 20 Oxford Bus Company double-deck buses with a combined capacity of 1,695 seats. By comparison, York park and ride has six sites with a combined total of 4,970 parking spaces, served by 35 First York buses, but these are single-deckers with a combined capacity of 1,548 seats.
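The per-bus capacities implied by that comparison are easy to verify. A minimal sketch in Python, using only the figures quoted above:

# Seats per bus implied by the quoted fleet figures
oxford_seats, oxford_buses = 1695, 20  # double-deckers
york_seats, york_buses = 1548, 35      # single-deckers
print(round(oxford_seats / oxford_buses))  # ~85 seats per double-decker
print(round(york_seats / york_buses))      # ~44 seats per single-decker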
More than 58% of Oxford Bus Company customers use the ITSO Ltd smartcard. Secondary school students can obtain either a reduced-price pass, paying a set fee for the month, or a free riding pass for the school year.
Since November 2014, all Oxford Bus Company buses within the Oxford SmartZone area have had free WiFi installed.
Hybrid buses, which use battery power with a small diesel generator, began to be used in Oxford on 15 July 2010, on Stagecoach Oxfordshire's Route 1 (City centre – Cowley – Blackbird Leys). Both Stagecoach and Oxford Bus Company now operate numerous hybrid buses in the city. In 2014 Oxford Bus introduced a fleet of 20 new buses with flywheel energy storage (FES) on the services it operates under contract for Brookes University. Whereas electric hybrids use battery storage and an electric motor to save fuel, FES uses a high-speed flywheel.
The Oxford to London coach route offers a frequent coach service to London. The Oxford Tube is operated by Stagecoach Oxfordshire and the Oxford Bus Company runs the Airline services to Heathrow and Gatwick airports.
There is a bus station at Gloucester Green, used mainly by the London and airport buses, National Express coaches and other long-distance buses including route X5 to Milton Keynes and Cambridge and Stagecoach Gold routes S1, S2, S3, S4, S5, S8 and S9.
Among UK cities, Oxford has the second highest percentage of people cycling to work.
In 1844, the Great Western Railway linked Oxford with London Paddington via and ; in 1851, the London and North Western Railway opened its own route from Oxford to London Euston, via Bicester, and Watford; and in 1864 a third route, also to Paddington, running via , and , was provided; this was shortened in 1906 by the opening of a direct route between High Wycombe and London Paddington by way of . The distance from Oxford to London was via Bletchley; via Didcot and Reading; via Thame and Maidenhead; and via Denham. Only the original (Didcot) route is still in use for its full length, portions of the others remain.
There were also routes to the north and west. The line to Banbury was opened in 1850, and was extended to Birmingham Snow Hill in 1852; a route to Worcester opened in 1853. A branch to Witney was opened in 1862, which was extended to Fairford in 1873. The line to Witney and Fairford closed in 1962, but the others remain open.
Oxford has had three main railway stations. The first was opened at Grandpont in 1844, but this was a terminus, inconvenient for routes to the north; it was replaced by the present station on Park End Street in 1852 with the opening of the Birmingham route. Another terminus, at Rewley Road, was opened in 1851 to serve the Bletchley route; this station closed in 1951. There have also been a number of local railway stations, all of which are now closed. A fourth station, Oxford Parkway, is just outside the city, at the park and ride site near Kidlington.
Oxford railway station is half a mile (about 1 km) west of the city centre. The station is served by CrossCountry services to Bournemouth, Manchester Piccadilly and Newcastle, Great Western Railway (who manage the station) services to London Paddington, Banbury and Hereford and Chiltern Railways services to London Marylebone.
The present railway station opened in 1852. Oxford is the junction for a short branch line to Bicester, a remnant of the former Varsity line to Cambridge. This Oxford–Bicester line was upgraded during an 18-month closure in 2014/2015 and is scheduled to be extended to form the planned East West Rail line. Chiltern Railways now connects Oxford to London Marylebone via Bicester Village, having sponsored the building of about 400 metres of new track between Bicester Village and the Chiltern Main Line southwards in 2014. The route serves High Wycombe and London Marylebone, avoiding London Paddington and Didcot Parkway. East West Rail is proposed to continue through Bletchley to Bedford, Cambridge, and ultimately Ipswich and Norwich, thus providing an alternative route to East Anglia without the need to travel via, and connect between, the London mainline terminals.
From Oxford station direct trains run to London Paddington, where interchange with the Heathrow Express provides a link to Heathrow Airport. Passengers can change at Reading for connecting trains to Gatwick Airport. Some CrossCountry trains run direct to Birmingham International and, further afield, Southampton Airport Parkway.
Oxford was historically an important port on the River Thames, with this section of the river being called the Isis; the Oxford-Burcot Commission in the 17th century attempted to improve navigation to Oxford. Iffley Lock and Osney Lock lie within the bounds of the city. In the 18th century the Oxford Canal was built to connect Oxford with the Midlands.
Commercial traffic has given way to recreational use of the river and canal. Oxford was the original base of Salters Steamers (founded in 1858), which was a leading racing-boat-builder that played an important role in popularising pleasure boating on the Upper Thames. The firm runs a regular service from Folly Bridge downstream to Abingdon and beyond.
Oxford's central location on several transport routes means that it has long been a crossroads city with many coaching inns, although road traffic is now strongly discouraged from using the city centre. From 2020, a new Zero Emission Zone will mean any vehicles which are not zero-emission will be banned from the city centre during certain hours.
The Oxford Ring Road surrounds the city centre and the close suburbs of Marston, Iffley, Cowley and Headington; it consists of the A34 to the west, a 330-yard section of the A44, the A40 to the north and north-east, and the A4142/A423 to the east. It is a dual carriageway, except for a 330-yard section of the A40 where two residential service roads adjoin, and was completed in 1966.
The main roads to/from Oxford are:
The city is served by the M40 motorway, which connects London to Birmingham. The M40 approached Oxford in 1974, leading from London to Waterstock, from where the A40 continued to Oxford. When the M40 extension to Birmingham was completed in January 1991, it curved sharply north, and a mile of the old motorway became a spur. The M40 does not come close to the city centre, curving to the east of Otmoor. The M40 meets the A34 to the north of Oxford.
There are two universities in Oxford, the University of Oxford and Oxford Brookes University, as well as the specialist further and higher education institution Ruskin College, an affiliate of the University of Oxford. The Islamic Azad University also has a campus near Oxford.
As well as the BBC national radio stations, Oxford and the surrounding area has several local stations, including BBC Oxford, Heart Thames Valley, Destiny 105, Jack FM and Jack FM 2 along with Oxide: Oxford Student Radio (which went on terrestrial radio at 87.7 MHz FM in late May 2005). A local TV station, Six TV: The Oxford Channel, was also available but closed in April 2009; a service operated by That's TV, originally called That's Oxford (now That's Oxfordshire), took to the airwaves in 2015. The city is home to a BBC TV newsroom which produces an opt-out from the main "South Today" programme broadcast from Southampton.
Popular local papers include "The Oxford Times" (compact; weekly), its sister papers the "Oxford Mail" (tabloid; daily) and the "Oxford Star" (tabloid; free and delivered), and "Oxford Journal" (tabloid; weekly free pick-up). Oxford is also home to several advertising agencies.
"Daily Information" (known locally as Daily Info) is an events and advertising news sheet which has been published since 1964 and now provides a connected website.
"Nightshift" is a monthly local free magazine that has covered the Oxford music scene since 1991.
Since 2003, DIY grassroots non-corporate media have begun to spread. Independent and community newspapers include the "Jericho Echo" and "Oxford Prospect".
Well-known Oxford-based authors include:
Oxford appears in the following works:
Holywell Music Room is said to be the oldest purpose-built music room in Europe, and hence Britain's first concert hall. Tradition has it that George Frideric Handel performed there, though there is little evidence. Joseph Haydn was awarded an honorary doctorate by Oxford University in 1791, an event commemorated by three concerts of his music at the Sheldonian Theatre, directed by the composer and from which his Symphony No. 92 earned the nickname of the "Oxford" Symphony. Victorian composer Sir John Stainer was organist at Magdalen College and later Professor of Music at the university, and is buried in Holywell Cemetery.
Oxford, and its surrounding towns and villages, have produced many successful bands and musicians in the field of popular music. The most notable Oxford act is Radiohead, who all met at nearby Abingdon School, though other well known local bands include Supergrass, Ride, Swervedriver, Lab 4, Talulah Gosh, the Candyskins, Medal, the Egg, Unbelievable Truth, Hurricane No. 1, Crackout, Goldrush and more recently, Young Knives, Foals, Glass Animals, Dive Dive and Stornoway. These and many other bands from over 30 years of the Oxford music scene's history feature in the documentary film "Anyone Can Play Guitar?". In 1997, Oxford played host to Radio 1's Sound City, with acts such as Travis, Bentley Rhythm Ace, Embrace, Spiritualized and DJ Shadow playing in various venues around the city including Oxford Brookes University.
It is also home to several brass bands, notably the City of Oxford Silver Band, founded in 1887.
The city's leading football club, Oxford United, are currently in League One, the third tier of league football, though they enjoyed some success in the past in the upper reaches of the league. They were elected to the Football League in 1962, reached the Third Division after three years and the Second Division after six, and most notably reached the First Division in 1985 – 23 years after joining the Football League. They spent three seasons in the top flight, winning the Football League Cup a year after promotion. The 18 years that followed relegation in 1988 saw their fortunes decline gradually, though a brief respite saw them win promotion to the new (post-Premier League) Division One in 1996 and stay there for three years. They were relegated to the Football Conference in 2006, staying there for four seasons before returning to the Football League in 2010. They play at the Kassam Stadium (named after former chairman Firoz Kassam), which is near the Blackbird Leys housing estate and has been their home since relocation from the Manor Ground in 2001. The club's notable former managers include Ian Greaves, Jim Smith, Maurice Evans, Brian Horton, Ramon Diaz and Denis Smith. Notable former players include John Aldridge, Ray Houghton, Tommy Caton, Matt Elliott, Dean Saunders and Dean Whitehead.
Oxford City F.C. is a semi-professional football club, separate from Oxford United. It plays in the Conference South, the sixth tier, two levels below the Football League in the pyramid. Oxford City Nomads F.C. was a semi-professional football club that ground-shared with Oxford City and played in the Hellenic league.
In 2013, Oxford Rugby League entered rugby league's semi-professional Championship 1, the third tier of British rugby league. Oxford Cavaliers, who were formed in 1996, compete at the next level, the Conference League South. Oxford University (The Blues) and Oxford Brookes University (The Bulls) both compete in the rugby league BUCS university League.
Oxford Harlequins RFC is the city's main Rugby Union team and currently plays in the South West Division.
Oxford R.F.C. is the oldest city team and currently plays in the Berks, Bucks and Oxon Championship. Their most famous player was arguably Michael James Parsons, known as Jim Parsons, who was capped by England.
Oxford University RFC are the most famous club, with more than 300 Oxford players gaining international honours, including Phil de Glanville, Joe Roff, Tyrone Howe, Anton Oliver, Simon Halliday, David Kirk and Rob Egerton.
London Welsh RFC moved to the Kassam Stadium in 2012 to fulfil their Premiership entry criteria regarding stadium capacity. At the end of the 2015 season, following relegation, the club left Oxford.
Oxford Cheetahs motorcycle speedway team has raced at Oxford Stadium in Cowley on and off since 1939. The Cheetahs competed in the Elite League and then the Conference League until 2007. They were Britain's most successful club in the late 1980s, becoming British League champions in 1985, 1986 and 1989. Four-times world champion Hans Nielsen was the club's most successful rider.
Greyhound racing took place at the Oxford Stadium from 1939 until 2012 and hosted some of the sport's leading events such as the Pall Mall Stakes, The Cesarewitch and Trafalgar Cup. The stadium remains intact but unused after closing in 2012.
There are several hockey clubs based in Oxford. The Oxford Hockey Club (formed after a merger of City of Oxford HC and Rover Oxford HC in 2011) plays most of its home games on the pitch at Oxford Brookes University, Headington Campus and also uses the pitches at Headington Girls' School and Iffley Road. Oxford Hawks has two astroturf pitches at Banbury Road North, by Cutteslowe Park to the north of the city.
Oxford City Stars is the local Ice Hockey Team which plays at Oxford Ice Rink. There is a senior/adults’ team and a junior/children's team. The Oxford University Ice Hockey Club was formed as an official University sports club in 1921, and traces its history back to a match played against Cambridge in St Moritz, Switzerland in 1885. The club currently competes in Checking Division 1 of the British Universities Ice Hockey Association.
Oxford Saints is Oxford's senior American Football team. One of the longest running American football clubs in the UK, the Saints were founded in 1983 and have competed for over 30 years against other British teams across the country.
Oxford University Cricket Club is Oxford's most famous club with more than 300 Oxford players gaining international honours, including Colin Cowdrey, Douglas Jardine and Imran Khan.
Oxfordshire County Cricket Club play in the Minor Counties League.
Oxford University Boat Club compete in the world-famous Boat Race. Since 2007 the club has been based at a training facility and boathouse in Wallingford, south of Oxford, after the original boathouse burnt down in 1999. Oxford is also home to the City of Oxford Rowing Club, based near Donnington Bridge.
Headington Road Runners based at the OXSRAD sports facility in Marsh Lane (next to Oxford City F.C.) is Oxford's only road running club with an average annual membership exceeding 300. It was the club at which double Olympian Mara Yamauchi started her running career.
Oxford is twinned with:
The following people and military units have received the Freedom of the City of Oxford.
Oslo
Oslo is the capital and most populous city of Norway. It constitutes both a county and a municipality. During the Viking Age the area was part of Viken, the northernmost Danish province. Oslo was founded as a city at the end of the Viking Age in the year 1040 under the name Ánslo, and established as a "kaupstad" or trading place in 1048 by Harald Hardrada. The city was elevated to a bishopric in 1070 and a capital under Haakon V of Norway around 1300. Personal unions with Denmark from 1397 to 1523 and again from 1536 to 1814 reduced its influence. After being destroyed by a fire in 1624, during the reign of King Christian IV, a new city was built closer to Akershus Fortress and named Christiania in the king's honour. It was established as a municipality ("formannskapsdistrikt") on 1 January 1838. The city functioned as the capital of Norway during the 1814–1905 union between Sweden and Norway. From 1877, the city's name was spelled Kristiania in government usage, a spelling that was adopted by the municipal authorities only in 1897. In 1925 the city, after incorporating the village that had retained its former name, was renamed Oslo. In 1948 Oslo merged with Aker, a municipality which surrounded the capital and which was 27 times larger, thus creating the modern, vastly enlarged Oslo municipality.
Oslo is the economic and governmental centre of Norway. The city is also a hub of Norwegian trade, banking, industry and shipping. It is an important centre for maritime industries and maritime trade in Europe. The city is home to many companies within the maritime sector, some of which are among the world's largest shipping companies, shipbrokers and maritime insurance brokers. Oslo is a pilot city of the Council of Europe and the European Commission intercultural cities programme.
Oslo is considered a global city and was ranked "Beta World City" in studies carried out by the Globalization and World Cities Study Group and Network in 2008. It was ranked number one in terms of quality of life among European large cities in the European Cities of the Future 2012 report by "fDi" magazine. A survey conducted by ECA International in 2011 placed Oslo as the second most expensive city in the world for living expenses after Tokyo. In 2013 Oslo tied with the Australian city of Melbourne as the fourth most expensive city in the world, according to the Economist Intelligence Unit (EIU)'s Worldwide Cost of Living study. Oslo was ranked as the 24th most liveable city in the world by Monocle magazine.
As of 27 February 2020, the municipality of Oslo had a population of 693,491, while the population of the city's urban area of 4 November 2019 was 1,019,513. The metropolitan area had an estimated population of 1.71 million. The population was increasing at record rates during the early 2000s, making it the fastest growing major city in Europe at the time. This growth stems for the most part from international immigration and related high birth rates, but also from intra-national migration. The immigrant population in the city is growing somewhat faster than the Norwegian population, and in the city proper this is now more than 25% of the total population if immigrant parents are included.
As of 27 February 2020, the municipality of Oslo had a population of 693,491. The urban area extends beyond the boundaries of the municipality into the surrounding county of Viken (municipalities of Asker, Bærum, Lillestrøm, Enebakk, Rælingen, Lørenskog, Nittedal, Gjerdrum, Nordre Follo); the total population of this agglomeration is 1,019,513. The city centre is situated at the end of the Oslofjord, from which point the city sprawls out in three distinct "corridors"—inland north-eastwards, and southwards along both sides of the fjord—which gives the urbanized area a shape reminiscent of an upside-down reclining "Y" (on maps, satellite pictures, or from high above the city).
To the north and east, wide forested hills ("Marka") rise above the city, giving the location the shape of a giant amphitheatre. The urban municipality ("bykommune") of Oslo and the county ("fylke") of Oslo are two parts of the same entity, making Oslo the only city in Norway where two administrative levels are integrated. Part of Oslo's total area is built-up and part is agricultural, and there are open areas within the built-up zone.
The city of Oslo was established as a municipality on 3 January 1838 (see formannskapsdistrikt). It was separated from the county of Akershus to become a county of its own in 1842. The rural municipality of Aker was merged with Oslo on 1 January 1948 (and simultaneously transferred from Akershus county to Oslo county). Furthermore, Oslo shares several important functions with Akershus county.
"As defined in January 2004 by the city council"
In addition there is Marka (1,610 residents, 301.1 km2), which is administered by several boroughs, and Sentrum (1,471 residents, 1.8 km2), which is administered partly by St. Hanshaugen and partly directly by the city council. As of 27 February 2020, 2,386 residents were not allocated to a borough.
After being destroyed by a fire in 1624, during the reign of King Christian IV, a new city was built closer to Akershus Fortress and named Christiania in the king's honour. The old site east of the Aker river was not abandoned however and the village of Oslo remained as a suburb outside the city gates. The suburb called Oslo was eventually included in the city proper. In 1925 the name of the suburb was transferred to the whole city, while the suburb was renamed "Gamlebyen" (literally "the Old town") to avoid confusion. The Old Town is an area within the administrative district Gamle Oslo. The previous names are reflected in street names like Oslo gate (Oslo street) and Oslo hospital.
The origin of the name "Oslo" has been the subject of much debate. It is certainly derived from Old Norse and was — in all probability — originally the name of a large farm at Bjørvika, but the meaning of that name is disputed. Modern linguists generally interpret the original "Óslo", "Áslo" or "Ánslo" as either "Meadow at the Foot of a Hill" or "Meadow Consecrated to the Gods", with both considered equally likely.
Erroneously, it was once assumed that "Oslo" meant "the mouth of the Lo river", a supposed previous name for the river Alna. However, not only has no evidence been found of a river "Lo" predating the work in which Peder Claussøn Friis first proposed this etymology, but the very name is ungrammatical in Norwegian: the correct form would have been "Loaros" (cf. Nidaros). The name "Lo" is now believed to be a back-formation arrived at by Friis in support of his proposed etymology for "Oslo".
Oslo is one of very few cities in Norway, besides Bergen and Tønsberg, that does not have a formal coat of arms, but which uses a city seal instead. The seal of Oslo shows the city's patron saint, St. Hallvard, with his attributes, the millstone and arrows, with a naked woman at his feet. He is seated on a throne with lion decorations, which at the time was also commonly used by the Norwegian kings.
Oslo has various nicknames and names in other languages. The city is sometimes known under the nickname "The Tiger City" (), probably inspired by an 1870 poem by Bjørnstjerne Bjørnson which referenced then-Christiania in central Oslo. The nickname is mostly used by Norwegians from out of town, and rarely by people from the Oslo region. In the 2010s "Oslove" was proposed as a name in Southern Sami, also inspired by its meaning ("Oslo love") as a portmanteau in English; Oslo is not itself located in the Southern Sami linguistic area which is centered on Trøndelag, or in the Sámi traditional region.
During the Viking Age the area that includes modern Oslo was located in Viken, the northernmost province of Denmark. Control over the area shifted between Danish and Norwegian kings in the Middle Ages, and Denmark continued to claim the area until 1241.
According to the Norse sagas, Oslo was founded around 1049 by Harald Hardrada. Recent archaeological research, however, has uncovered Christian burials which can be dated to before AD 1000, evidence of a preceding urban settlement. This led to the celebration of Oslo's millennium in 2000.
It has been regarded as the capital city since the reign of Haakon V of Norway (1299–1319), the first king to reside permanently in the city. He also started the construction of the Akershus Fortress and the Oslo Kongsgård. A century later, Norway was the weaker part in a personal union with Denmark, and Oslo's role was reduced to that of provincial administrative centre, with the monarchs residing in Copenhagen. The fact that the University of Oslo was founded as late as 1811 had an adverse effect on the development of the nation.
Oslo was destroyed several times by fire, and after the fourteenth calamity, in 1624, Christian IV of Denmark and Norway ordered it rebuilt at a new site across the bay, near Akershus Castle and given the name "Christiania". Long before this, Christiania had started to establish its stature as a centre of commerce and culture in Norway. The part of the city built starting in 1624 is now often called "Kvadraturen" because of its orthogonal layout in regular, square blocks. The last Black Death outbreak in Oslo occurred in 1654. In 1814 Christiania once more became a real capital when the union with Denmark was dissolved.
Many landmarks were built in the 19th century, including the Royal Palace (1825–1848), Storting building (the Parliament) (1861–1866), the University, National Theatre and the Stock Exchange. Among the world-famous artists who lived here during this period were Henrik Ibsen and Knut Hamsun (the latter was awarded the Nobel Prize for literature). In 1850, Christiania also overtook Bergen and became the most populous city in the country. In 1877 the city was renamed "Kristiania". The original name of Oslo was restored in 1925.
Under the reign of Olaf III of Norway, Oslo became a cultural centre for Eastern Norway. Hallvard Vebjørnsson became the city's patron saint and is depicted on the city's seal.
In 1174, Hovedøya Abbey was built. The churches and abbeys became major owners of large tracts of land, which proved important for the city's economic development, especially before the Black Death.
On 25 July 1197, Sverre of Norway and his soldiers attacked Oslo from Hovedøya.
During the Middle Ages, Oslo reached its heights in the reign of Haakon V of Norway. He started building Akershus Fortress and was also the first king to reside permanently in the city, which helped to make Oslo the capital of Norway.
At the end of the 12th century, Hanseatic League traders from Rostock moved into the city and gained major influence there. The Black Death came to Norway in 1349 and, like other cities in Europe, Oslo suffered greatly. The churches' earnings from their land also dropped so much that the Hanseatic traders dominated the city's foreign trade in the 15th century.
Over the years, fire destroyed major parts of the city many times, as many of the city's buildings were built entirely of wood. After the last fire in 1624, which lasted for three days, Christian IV of Denmark decided that the old city should not be rebuilt again. His men built a network of roads in Akershagen near Akershus Castle. He demanded that all citizens move their shops and workplaces to the newly built city of Christiania, named in the king's honour.
The transformation of the city went slowly for the first hundred years. Outside the city, near Vaterland and Grønland near Old Town, Oslo, a new, unplanned part of the city grew up, inhabited by citizens of low status.
In the 18th century, after the Great Northern War, the city's economy boomed with shipbuilding and trade. The strong economy transformed Christiania into a trading port.
In 1814 the former provincial town of Christiania became the capital of the independent Kingdom of Norway, in a personal union with Sweden. Several state institutions were established and the city's role as a capital initiated a period of rapidly increasing population. The government of this new state needed buildings for its expanding administration and institutions. Several important buildings were erected – the Bank of Norway (1828), the Royal Palace (1848), and the Storting (1866). Large areas of the surrounding Aker municipality were incorporated in 1839, 1859 and 1878. The 1859 expansion included Grünerløkka, Grønland and Oslo. At that time the area called "Oslo" (now "Gamlebyen" or Old Town) was a village or suburb outside the city borders east of the Aker river. The population increased from approximately 10,000 in 1814 to 230,000 in 1900. Christiania expanded its industry from 1840, most importantly around Akerselva. There was a spectacular building boom during the last decades of the 19th century, with many new apartment buildings and renewal of the city centre, but the boom collapsed in 1899.
In 1948 Oslo merged with Aker, a municipality which surrounded the capital and which was 27 times larger, thus creating the modern, vastly enlarged Oslo municipality. At the time Aker was a mostly affluent, green suburban community, and the merger was unpopular in Aker.
The municipality developed new areas such as Ullevål garden city (1918–1926) and Torshov (1917–1925). City Hall was constructed in the former slum area of Vika from 1931 to 1950. The municipality of Aker was incorporated into Oslo in 1948, and suburbs were developed, such as Lambertseter (from 1951). Aker Brygge was constructed on the site of the former shipyard Akers Mekaniske Verksted, from 1982 to 1998.
The city and municipality used the name "Kristiania" until 1 January 1925, when the name was changed to "Oslo". Oslo was the name of an eastern suburb; it had been the site of the city centre until the devastating 1624 fire. King Christian IV of Denmark ordered a new city built bearing his own name, and Oslo remained a poor suburb outside the city border. In the early 20th century, Norwegians argued that a name memorialising a Danish king was inappropriate as the name of the capital of Norway, which became fully independent in 1905.
In the 2011 Norway terror attacks, Oslo was hit by a bomb blast that ripped through the Government quarter, damaging several buildings including the building that houses the Office of the Prime Minister. Eight people died in the bomb attack.
Oslo occupies an arc of land at the northernmost end of the Oslofjord. The fjord, which is nearly bisected by the Nesodden peninsula opposite Oslo, lies to the south; in all other directions Oslo is surrounded by green hills and mountains. There are 40 islands within the city limits, the largest being Malmøya, and scores more around the Oslofjord. Oslo has 343 lakes, the largest being Maridalsvannet, which is also a main source of drinking water for large parts of Oslo.
Although Eastern Norway has a number of rivers, none of these flow into the ocean at Oslo. Instead Oslo has two smaller rivers: Akerselva (draining Maridalsvannet, which flows into the fjord in Bjørvika), and Alna. The waterfalls in Akerselva gave power to some of the first modern industry of Norway in the 1840s. Later in the century, the river became the symbol of the stable and consistent economic and social divide of the city into an East End and a West End; the labourers' neighbourhoods lie on both sides of the river, and the divide in reality follows Uelands street a bit further west. The River Alna flows through Groruddalen, Oslo's major suburb and industrial area. The highest point is Kirkeberget. Although the city's population is small compared to most European capitals, it occupies an unusually large land area, of which two-thirds are protected areas of forests, hills and lakes. Its boundaries encompass many parks and open areas, giving it an airy and green appearance.
Oslo has a humid continental climate (Köppen climate classification "Dfb") with warm summers and cold winters. Due to oceanic influences, winters are less cold than in more continental areas at the same latitude. Depending on the winter isotherm used, it is also reasonable to classify Oslo as a borderline oceanic climate. Oslo has a significant amount of rainfall during the year, even in its driest month. Because of the city's northern latitude, daylight varies greatly, from more than 18 hours in midsummer, when it never gets completely dark at night (no darker than nautical twilight), to around 6 hours in midwinter. Oslo sits right on the border between hardiness zones 7a and 7b.
May 2018 saw hotter than average temperatures throughout the month.
On 30 May 2018 the city recorded its hottest May temperature on record. On 27 July 2018 Oslo recorded its hottest temperature since 1937, when weather observations for Oslo began to be conducted in the university area at Blindern; the warmest temperature ever recorded in the city, however, dates from July 1901. In January, three out of four days are below freezing, and on average one out of four days is considerably colder. The coldest temperature on record was measured on 21 January 1841, while the coldest ever recorded at Blindern dates from January 1941.
Oslo has many parks and green areas within the city core, as well as outside it.
Oslo (with neighbouring Sandvika-Asker) is built in a horseshoe shape on the shores of the Oslofjord and limited in most directions by hills and forests. As a result, any point within the city is relatively close to the forest. There are two major forests bordering the city: "Østmarka" (literally "Eastern Forest", on the eastern perimeter of the city), and the very large "Nordmarka" (literally "Northern Forest", stretching from the northern perimeter of the city deep into the hinterland).
The lake lies 183 metres above sea level in a popular hiking area; the area near the water is well suited to barbecues, swimming, beach volleyball and other activities.
The municipality operates eight public swimming pools. Tøyenbadet is the largest indoor swimming facility in Oslo and one of the few pools in Norway offering a 50-metre main pool; the outdoor Frognerbadet is of similar size.
Oslo's cityscape is being redeveloped as a modern city with various access points, an extensive metro system, a new financial district and new cultural institutions. In 2008, an exhibition was held in London presenting the award-winning Oslo Opera House, the urban regeneration scheme of Oslo's seafront, Munch/Stenersen and the new Deichman Library. Most of the buildings in the city and in neighbouring communities are low in height, with only the Plaza, Posthuset and the high-rises at Bjørvika considerably taller.
Oslo's architecture is very diverse. The architect Carl Frederik Stanley (1769–1805), who was educated in Copenhagen, spent some years in Norway around the turn of the 19th century. He did minor works for wealthy patrons in and around Oslo, but his major achievement was the renovation of the Oslo Katedralskole, completed in 1800. He added a classical portico to the front of an older structure, and a semicircular auditorium that was sequestered by Parliament in 1814 as a temporary place to assemble, now preserved at Norsk Folkemuseum as a national monument.
When Christiania was made capital of Norway in 1814, there were practically no buildings suitable for the many new government institutions. An ambitious building program was initiated, but realised very slowly because of economic constraints. The first major undertaking was the Royal Palace, designed by Hans Linstow and built between 1824 and 1848. Linstow also planned Karl Johans gate, the avenue connecting the Palace and the city, with a monumental square halfway to be surrounded by buildings for University, the Parliament (Storting) and other institutions. Only the university buildings were realised according to this plan. Christian Heinrich Grosch, one of the first architects educated completely within Norway, designed the original building for the Oslo Stock Exchange (1826–1828), the local branch of the Bank of Norway (1828), Christiania Theatre (1836–1837), and the first campus for the University of Oslo (1841–1856). For the university buildings, he sought the assistance of the renowned German architect Karl Friedrich Schinkel. German architectural influence persisted in Norway, and many wooden buildings followed the principles of Neoclassicism. In Oslo, the German architect Alexis de Chateauneuf designed Trefoldighetskirken, the first neo-gothic church, completed by von Hanno in 1858.
A number of landmark buildings, particularly in Oslo, were built in the Functionalist style (better known in the US and Britain as Modernist), the first being Skansen restaurant (1925–1927) by Lars Backer, demolished in 1970. Backer also designed the restaurant at Ekeberg, which opened in 1929. Kunstnernes Hus art gallery by Gudolf Blakstad and Herman Munthe-Kaas (1930) still shows the influence of the preceding classicist trend of the 1920s. The redevelopment of Oslo Airport (by the Aviaplan consortium) at Gardermoen, which opened in 1998, was Norway's largest construction project to date.
Oslo is the capital of Norway, and as such is the seat of Norway's national government. Most government offices, including that of the Prime Minister, are gathered at "Regjeringskvartalet", a cluster of buildings close to the national Parliament, the Storting.
Constituting both a municipality and a county of Norway, the city of Oslo is represented in the Storting by nineteen members of parliament. The Conservative Party is the most represented party in Oslo with six members, the Labour Party has five, the Progress Party, the Liberals and the Socialist Left Party have two each; the Green Party and the Red Party have one each.
The combined municipality and county of Oslo has had a parliamentary system of government since 1986. The supreme authority of the city is the City Council ("Bystyret"), which currently has 59 seats. Representatives are popularly elected every four years. The City Council has five standing committees, each having its own areas of responsibility. The largest parties in the City Council after the 2015 elections are the Labour Party and the Conservatives, with 20 and 19 representatives respectively.
The Mayor of Oslo is the head of the City Council and the highest ranking representative of the city. This used to be the most powerful political position in Oslo, but following the implementation of parliamentarism, the mayor has had more of a ceremonial role, similar to that of the President of the Storting at the national level. The current Mayor of Oslo is Marianne Borgen.
Since the local elections of 2015, the city government has been a coalition of the Labour Party, the Green Party and the Socialist Left. Based mostly on support from the Red Party, the coalition maintains a workable majority in the City Council. Following the local elections of 2019, the centre-left coalition remained in government.
The Governing Mayor of Oslo is the head of the City government. The post was created with the implementation of parliamentarism in Oslo and is similar to the role of the prime minister at the national level. The current governing mayor is Raymond Johansen.
Oslo has a varied and strong economy and was ranked number one among European large cities in economic potential in the fDi Magazine report European Cities of the Future 2012. It was ranked 2nd in the category of business friendliness, behind Amsterdam.
Oslo is an important centre of maritime knowledge in Europe and is home to approximately 1,980 companies and 8,500 employees within the maritime sector. Some of them are the world's largest shipping companies, shipbrokers, and insurance brokers. Det Norske Veritas, headquartered at Høvik outside Oslo, is one of the three major maritime classification societies in the world, with 16.5% of the world fleet classed in its register. The city's port is the largest general cargo port in the country and its leading passenger gateway. Close to 6,000 ships dock at the Port of Oslo annually with a total of 6 million tonnes of cargo and over five million passengers.
The GDP of Oslo totalled €64 billion (€96,000 per capita) in 2016, which amounted to 20% of the national GDP. This compares with NOK166 billion (US$17 billion) in 1995. The metropolitan area, bar Moss and Drammen, contributed 25% of the national GDP in 2003 and was also responsible for more than one quarter of tax revenues. In comparison, total tax revenues from the oil and gas industry on the Norwegian Continental Shelf amounted to about 16%.
Oslo is one of the most expensive cities in the world. It has been ranked tenth according to the Worldwide Cost of Living Survey provided by Mercer Human Resource Consulting and first according to the Economist Intelligence Unit. The reason for this discrepancy is that the EIU omits certain factors from its final index calculation, most notably housing. In the 2015 update of the EIU's Worldwide Cost of Living survey, Oslo ranks as the third most expensive city in the world. Although Oslo does have the most expensive housing market in Norway, it is comparably cheaper than other cities on the list in that regard. Meanwhile, prices on goods and services remain some of the highest of any city. Oslo hosts 2,654 of the largest companies in Norway; within the ranking of Europe's largest cities ordered by their number of companies, Oslo is in fifth position. A number of oil and gas companies are situated in Oslo.
According to a report compiled by Swiss bank UBS in the month of August 2006, Oslo and London were the world's most expensive cities.
Oslo is a compact city. It is easy to move around by public transport, and rentable city bikes are available to everyone throughout the city centre. In 2003, Oslo received the European Sustainable City Award, and in 2007 Reader's Digest ranked Oslo as number two on a list of the world's greenest, most liveable cities.
The City of Oslo has set the goal of becoming a low carbon city, and reducing greenhouse gas emissions 95% from 1990 levels by 2030. The climate action plan for the Port of Oslo includes refitting ferry boats, implementing a low-carbon contracting process, and installing shore power for vessels which are docked.
The level of education and productivity in the workforce is high in Norway. Nearly half of those with education at tertiary level in Norway live in the Oslo region, placing it among Europe's top three regions in relation to education.
In 2008, the total workforce in the greater Oslo region (5 counties) numbered 1,020,000 people. The greater Oslo region has several higher educational institutions and is home to more than 73,000 students. The University of Oslo is the largest institution for higher education in Norway with 27,400 students and 7,028 employees in total.
Oslo has a large and varied number of cultural attractions, including several buildings containing artwork by Edvard Munch and various other international artists, as well as several Norwegian artists. Several world-famous writers have either lived or been born in Oslo, for example Knut Hamsun and Henrik Ibsen. The government has recently invested large amounts of money in cultural installations, facilities, buildings and festivals in the City of Oslo. Bygdøy, outside the city centre, is a centre of history, including Norwegian Viking history; the area contains many parks and seaside areas and many museums, for example the Fram Museum, Vikingskiphuset and the Kon-Tiki Museum. Oslo hosts the annual Oslo Freedom Forum, a conference described by "The Economist" as "on its way to becoming a human-rights equivalent of the Davos economic forum." Oslo is also known for giving out the Nobel Peace Prize every year.
Grønland, the central areas around Youngstorget and Torggata, Karl Johans gate (the main pedestrian thoroughfare), Aker Brygge and Tjuvholmen, Sørenga, and the boroughs of Frogner, Majorstuen, St. Hanshaugen / Bislett, and Grünerløkka all have a high concentration of cafes and restaurants. There are several food markets, the largest being Mathallen Food Hall at Vulkan with more than 30 specialty shops, cafés, and eateries.
As of March 2018 six Oslo restaurants were mentioned in the Michelin Guide. Maaemo is the only Norwegian restaurant ever to have been awarded three Michelin stars. Statholdergaarden, Kontrast, and Galt each have one star. Only two restaurants in Oslo have a Bib Gourmand mention: Restaurant Eik and Smalhans.
Oslo houses several major museums and galleries. The Munch Museum contains "The Scream" and other works by Edvard Munch, who donated all his work to the city after his death. The city council is currently planning a new Munch Museum which is most likely to be built in Bjørvika, in the southeast of the city. The museum will be named Munch/Stenersen. 50 different museums are located around the city.
Folkemuseet is located on the Bygdøy peninsula and is dedicated to folk art, folk dress, Sami culture and Viking culture. The outdoor museum contains 155 authentic old buildings from all parts of Norway, including a stave church.
The Vigeland Museum, located in the large Frogner Park, is free to access and contains over 212 sculptures by Gustav Vigeland, including an obelisk and the Wheel of Life.
Another popular sculpture is Sinnataggen, a baby boy stamping his foot in fury. This statue is very well known as an icon in the city. There is also a newer landscaped sculpture park, Ekebergparken Sculpture Park, with works by Norwegian and international artists such as Salvador Dalí.
The Viking Ship Museum features three Viking ships found at Oseberg, Gokstad and Tune and several other unique items from the Viking Age.
The Oslo City Museum holds a permanent exhibition about the people in Oslo and the history of the city.
The Kon-Tiki Museum houses Thor Heyerdahl's "Kon-Tiki" and "Ra II".
The National Museum holds, preserves, exhibits and promotes public knowledge about Norway's most extensive collection of art. The Museum shows permanent exhibitions of works from its own collections as well as temporary exhibitions that incorporate work loaned from elsewhere. The National Museum's exhibition venues are the National Gallery, the Museum of Contemporary Art, the Museum of Decorative Arts and the National Museum of Architecture. A new National Museum in Oslo will open in 2020, located at Vestbanen behind the Nobel Peace Center.
The Nobel Peace Center is an independent organisation opened on 11 June 2005 by King Harald V as part of the celebrations to mark Norway's centenary as an independent country. The building houses a permanent exhibition, which expands every year when a new Nobel Peace Prize winner is announced, containing information about every winner in history. The building is mainly used as a communication centre.
Many festivals are held in Oslo, such as Oslo Jazz festival, a six-day jazz festival which has been held annually in August for the past 25 years. Oslo's biggest rock festival is Øyafestivalen or simply "Øya". It draws about 60,000 people to the Tøyen Park east in Oslo and lasts for four days.
The Oslo International Church Music Festival has been held annually since 2000. The Oslo World Music Festival showcases artists who are stars in their own countries but strangers in Norway. The Oslo Chamber Music Festival is held in August every year, when world-class chamber ensembles and soloists gather in Oslo to perform. The Norwegian Wood Rock Festival is held every year in June in Oslo.
The Nobel Peace Prize award ceremony, overseen by the Norwegian Nobel Institute, is held annually in the City Hall on 10 December. Even though Sami land is far away from the capital, the Norwegian Museum of Cultural History marks the Sami National Day with a series of activities and entertainment.
The World Cup Biathlon in Holmenkollen is held every year and here male and female competitors compete against each other in Sprint, Pursuit and Mass Start disciplines.
Other examples of annual events in Oslo are Desucon, a convention focusing on Japanese culture and Færderseilasen, the world's largest overnight regatta with more than 1100 boats taking part every year.
Rikard Nordraak, composer of the national anthem of Norway, was born in Oslo in 1842.
Norway's principal orchestra is the Oslo Philharmonic, based at the Oslo Concert Hall since 1977. Although it was founded in 1919, the Oslo Philharmonic can trace its roots to the founding of the "Christiania Musikerforening" (Christiania Musicians Society) by Edvard Grieg and Johan Svendsen in 1879.
Oslo has hosted the Eurovision Song Contest twice, in 1996 and 2010.
Oslo houses over 20 theatres, such as the Norwegian Theatre and the National Theatre located at Karl Johan Street. The National Theatre is the largest theatre in Norway and is situated between the royal palace and the parliament building, Stortinget.
The names of Ludvig Holberg, Henrik Ibsen and Bjørnstjerne Bjørnson are engraved on the façade of the building over the main entrance. This theatre represents the actors and playwrights of the country, while songwriters, singers and dancers are represented in the newly opened Oslo Opera House, situated in Bjørvika. The Opera opened in 2008 and is a national landmark, designed by the Norwegian architectural firm Snøhetta. There are two houses, together containing over 2,000 seats. The building cost 500 million euros and took five years to build, and is known for being the first opera house in the world to let people walk on its roof. The foyer and the roof are also used for concerts, in addition to the three stages.
Most great Norwegian authors have lived in Oslo for some period in their life. For instance, Nobel Prize-winning author Sigrid Undset grew up in Oslo, and described her life there in the autobiographical novel "Elleve år" (1934; translated as "The Longest Years"; New York 1971).
The playwright Henrik Ibsen is probably the most famous Norwegian author. Ibsen wrote plays such as "Hedda Gabler", "Peer Gynt", "A Doll's House" and "The Lady from the Sea". The Ibsen Quotes project completed in 2008 is a work of art consisting of 69 Ibsen quotations in stainless steel lettering which have been set into the granite sidewalks of the city's central streets.
In recent years, novelists like Lars Saabye Christensen, Tove Nilsen, Jo Nesbø and Roy Jacobsen have described the city and its people in their novels. Early 20th-century literature from Oslo include poets Rudolf Nilsen and André Bjerke.
The newspapers "Aftenposten", "Dagbladet", "Verdens Gang", "Dagens Næringsliv", "Finansavisen", "Dagsavisen", "Morgenbladet", "Vårt Land, Nationen" and "Klassekampen" are published in Oslo. The main office of the national broadcasting company NRK is located at Marienlyst in Oslo, near Majorstuen, and NRK also has regional services via both radio and television. TVNorge (TVNorway) is also located in Oslo, while TV 2 (based in Bergen) and TV3 (based in London) operate branch offices in central Oslo. There is also a variety of specialty publications and smaller media companies. A number of magazines are produced in Oslo. The two dominant companies are Aller Media and Hjemmet Mortensen AB.
Oslo is home to the Holmenkollen National Arena and Holmenkollbakken, the country's main biathlon and Nordic skiing venues. It hosts annual world cup tournaments, including the Holmenkollen Ski Festival. Oslo hosted the Biathlon World Championships in 1986, 1990, 2000, 2002 and 2016. FIS Nordic World Ski Championships have been hosted in 1930, 1966, 1982 and 2011, as well as the 1952 Winter Olympics.
Oslo is the home of several football clubs in the Norwegian league system. Vålerenga, Lyn and Skeid have won both the league and the cup, while Mercantile SFK and Frigg have won the cup.
Ullevål Stadion is the home arena for the Norway national team and the Football Cup Final. The stadium has previously hosted the finals of the UEFA Women's Championship in 1987 and 1997, and the 2002 UEFA European Under-19 Football Championship. Røa IL is Oslo's only team in the women's league, Toppserien. Each year, the international youth football tournament Norway Cup is held on Ekebergsletta and other places in the city.
Due to the cold climate and proximity to major forests bordering the city, skiing is a popular recreational activity in Oslo. The Tryvann Ski Resort is the most used ski resort in Norway. The most successful ice hockey team in Norway, Vålerenga Ishockey, is based in Oslo. Manglerud Star is another Oslo-team who play in the top league.
Bislett Stadium is the city's main track and field venue, and hosts the annual Bislett Games, part of the Diamond League. Bjerke Travbane is the main venue for harness racing in the country. Oslo Spektrum is used for large ice hockey and handball matches. Nordstrand HE and Oppsal IF play in the women's GRUNDIGligaen in handball, while Bækkelaget HE plays in the men's league. Jordal Amfi is the home of the ice hockey team Vålerenga Ishockey and of the national team. The 1999 IIHF World Championship in ice hockey was held in Oslo, as have been three Bandy World Championships, in 1961, 1977 and 1985. The UCI Road World Championships in bicycle road racing were hosted in 1993.
Oslo is also home to the Oslo Pretenders Sportsklubb, a club that hosts baseball, softball, basketball, and disc golf teams. The baseball team has won 21 Norwegian Cup Championships and 18 Norwegian Baseball League titles. They participate in the European Cup.
Oslo was bidding to host the 2022 Winter Olympics, but later withdrew on 2 October 2014.
In 2018 Oslo was named one of Lonely Planet's Top Ten Cities. The travel guide's best-selling yearbook Best in Travel selected Oslo as one of the ten best cities in the world to visit in 2018, citing the Norwegian capital's "innovative architecture and unmissable museums alongside cool bars, bistros and cafés".
Oslo Police District is Norway's largest police district, with over 2,300 employees. Over 1,700 of those are police officers, nearly 140 are police lawyers and 500 are civilian employees. Oslo Police District has five police stations located around the city at Grønland, Sentrum, Stovner, Majorstuen and Manglerud. The National Criminal Investigation Service, a Norwegian special police division under the NMJP, is located in Oslo. The Norwegian Police Security Service (PST), a security agency established in 1936 and one of the non-secret agencies in Norway, is also located in the Oslo district.
Oslo police have stated that the capital is one of Europe's safest. Statistics show that crime in Oslo is on the rise, however, and some media have reported that there are four times as many thefts and robberies per capita in Oslo as in New York City. According to the Oslo Police, they receive more than 15,000 reports of petty thefts annually; fewer than one in a hundred cases are solved.
On 22 July 2011, Oslo was the site of one of two terrorist attacks: the bombing of Oslo government offices.
Oslo has Norway's most extensive public transport system, managed by Ruter. This includes the six-line Oslo Metro, the world's most extensive metro per resident, the six-line Oslo Tramway and the eight-line Oslo Commuter Rail. The tramway operates within the areas close to the city centre, while the metro, which runs underground through the city centre, operates to suburbs further away; this includes two lines that operate to Bærum, and the Ring Line which loops to areas north of the centre. Oslo is also covered by a bus network consisting of 32 city lines, as well as regional buses to the neighboring county of Akershus.
Oslo Central Station acts as the central hub, and offers rail services to most major cities in southern Norway as well as Stockholm and Gothenburg in Sweden. The Airport Express Train operates along the high-speed Gardermoen Line. The Drammen Line runs under the city centre in the Oslo Tunnel. Some of the city islands and the neighbouring municipality of Nesodden are connected by ferry. Daily cruiseferry services operate to Copenhagen and Frederikshavn in Denmark, and to Kiel in Germany.
Many of the motorways pass through the downtown and other parts of the city in tunnels. The construction of the roads is partially funded by a toll ring. The major motorways through Oslo are European routes E6 and E18. There are three beltways, the innermost of which are streets and the outermost of which, Ring 3, is an expressway.
The main airport serving the city is Gardermoen Airport, located in Ullensaker, north-east of the city centre of Oslo. It acts as the main international gateway to Norway, and is the sixth-largest domestic airport in Europe. Gardermoen is a hub for Scandinavian Airlines, Norwegian Air Shuttle and Widerøe. Oslo is also served by a secondary airport, Torp Airport, to the south of the city, which serves some low-cost carriers such as Ryanair.
The population of Oslo was by 2010 increasing at a record rate of nearly 2% annually (17% over the previous 15 years), making it the fastest-growing Scandinavian capital. In 2015, according to Statistics Norway's annual report, there were 647,676 permanent residents in the Oslo municipality, of which 628,719 resided in the city proper. There were also 1,019,513 residents in the city's urban area and an estimated 1.71 million in the Greater Oslo Region.
According to the most recent census 432,000 Oslo residents (70.4% of the population) were ethnically Norwegian, an increase of 6% since 2002 (409,000). Oslo has the largest population of immigrants and Norwegians born to immigrant parents in Norway, both in relative and absolute figures. Of Oslo's 624,000 inhabitants, 189,400 were immigrants or born to immigrant parents, representing 30.4 percent of the capital's population. All suburbs in Oslo were above the national average of 14.1 percent. The suburbs with the highest proportions of people of immigrant origin were Søndre Nordstrand, Stovner and Alna, where they formed around 50 percent of the population.
Pakistanis make up the single largest ethnic minority, followed by Poles, Somalis, and Swedes. Other large immigrant groups are people from Sri Lanka, Vietnam, Turkey, Morocco, Iraq and the Kurdistan region, and Iran and Kordestan province.
In 2013, 40% of Oslo's primary school pupils were registered as having a first language other than Norwegian or Sami. The western part of the city is predominantly ethnic Norwegian, with several schools having less than 5% pupils with an immigrant background. The eastern part of Oslo is more mixed, with some schools up to 97% of immigrant background. Schools are also increasingly divided by ethnicity, with white flight being present in some of the northeastern suburbs of the city. In the borough of Groruddalen in 2008 for instance, the ethnic Norwegian population decreased by 1,500, while the immigrant population increased by 1,600.
Oslo has numerous religious communities. In 2019, 48.7% of the population were members of the Church of Norway, lower than the national average of 69.9%. Members of other Christian denominations make up 8.4% of the population. Islam was followed by 9.5% and Buddhism by 0.6% of the population. Adherents of other religions formed 1.1% of the population. Life stance communities, mainly the Norwegian Humanist Association, were represented by 2.8% of the population. 28.9% of the Oslo population were unaffiliated with any religion or life stance community.
Oslo has cooperation agreements with the following cities/regions:
Oslo was formerly twinned with Madison, Wisconsin, Tel Aviv and Vilnius, but has since abolished the concept of twin cities.
Oslo has a tradition of sending a Christmas tree every year to the cities of Washington, D.C.; New York; London; Edinburgh; Rotterdam; Antwerp and Reykjavík. Since 1947, Oslo has sent a 50-to-100-year-old spruce to London as an expression of gratitude toward Britain for its support of Norway during World War II.
Outing
Outing is the act of disclosing an LGBT person's sexual orientation or gender identity without that person's consent. Outing gives rise to issues of privacy, choice, hypocrisy, and harm, in addition to sparking debate on what constitutes the common good in efforts to combat homophobia and heterosexism. A publicized outing targets prominent figures in a society, such as well-known politicians, accomplished athletes or popular artists. Opponents of LGBT rights movements, as well as activists within LGBT communities, have used this type of outing as a controversial political tactic. In an attempt to pre-empt being outed, an LGBT public figure may decide to come out publicly first, although controlling the conditions under which one's LGBT identity is revealed is only one of numerous motives for coming out.
It is hard to pinpoint the first use of outing in the modern sense. In a 1982 issue of "Harper's", Taylor Branch predicted that "outage" would become a political tactic in which the closeted would find themselves trapped in a crossfire. The article "Forcing Gays Out of the Closet" by William A. Henry III in "Time" (January 29, 1990) introduced the term "outing" to the general public.
While the term is recent, the practice goes back much further. Outing was a common put-down of Greek and Roman orators. Before the Christian era, sodomy between adult citizens was not illegal under Greek law or, most scholars believe, under Roman law, but homosexual acts between citizens were considered acceptable only under certain social circumstances. Both Romans and Greeks sneered at those who engaged in such acts, deeming them vulgar.
The Harden–Eulenburg affair of 1907–1909 was the first public outing scandal of the twentieth century. Left-wing journalists opposed to Kaiser Wilhelm II's policies outed a number of prominent members of his cabinet and inner circle — and by implication the Kaiser — beginning with Maximilian Harden's indictment of the aristocratic diplomat Prince Eulenburg. Harden's accusations incited other journalists to follow suit, including Adolf Brand, founder of "Der Eigene".
Left-wing journalists outed Adolf Hitler's closest ally Ernst Röhm in the early 1930s, causing Brand to write, "when someone — as teacher, priest, representative, or statesman — would like to set in the most damaging way the intimate love contacts of others under degrading control — in that moment his own love-life also ceases to be a private matter and forfeits every claim to remain protected hence-forward from public scrutiny and suspicious oversight."
In the 1950s during the Lavender Scare, tabloid publications like "Confidential" emerged, specializing in the revelation of scandalous information about entertainment and political celebrities. Among the political figures targeted by the magazine were former Under Secretary of State Sumner Welles and Arthur H. Vandenberg, Jr., who had briefly served as President Eisenhower's Appointments Secretary.
Outing may be found to be libel by a court of law. For example, in 1957 American pianist Liberace successfully sued the "Daily Mirror" for merely insinuating that he was gay. The newspaper responded that columnist William Connor's words (written under his byline 'Cassandra') did not imply that Liberace was gay. Its defence contended that there was no libel because no accusation had been made, rather than arguing that the accusation was true. Following Liberace's death from an AIDS-related illness in 1987, the paper asked for the award to be refunded. In a 2011 interview, actress and close friend Betty White stated that Liberace was gay and that she often served as a beard to counter rumors of the musician's homosexuality.
After the Stonewall riots of 1969, swells of gay-libbers came out aggressively in the 1970s, crying out: "Out of the closets, Into the streets!" Some began to demand that all homosexuals come out, and that if they weren't willing to do so, then it was the community's responsibility to do it for them. One example is the outing of Oliver Sipple, who helped save the life of United States President Gerald Ford during an assassination attempt. Sipple was outed by gay activists, most prominently Harvey Milk. The negative impact the outing had on Sipple's life later provoked opposition.
Some political conservatives opposed to increased public acceptance of homosexuality engaged in outing in this period as well, with the goal of embarrassing or discrediting their ideological foes. Conservative commentator Dinesh D'Souza, for example, published the letters of gay fellow students at Dartmouth College in the campus newspaper he edited ("The Dartmouth Review") in 1981; a few years later, succeeding "Review" editor Laura Ingraham had a meeting of a campus gay organization secretly tape-recorded, then published a transcript along with attendees' names as part of an editorial denouncing the group as "cheerleaders for latent campus sodomites."
In the 1980s, the AIDS pandemic led to the outing of several major entertainers, including Rock Hudson.
One of the first outings by an activist in the United States occurred in February 1989. Michael Petrelis, along with a few others, alleged that Mark Hatfield, a Republican Senator from Oregon, was gay. They did this because he supported legislation initiated by Jesse Helms. At a fundraiser in a small town outside of Portland, the group stood up and outed him in front of the crowd. Petrelis later tried to make news by standing on the U.S. Capitol steps and reading the names of "twelve men and women in politics and music who ... are secretly gay." Though the press showed up, no major news organization published the story. Potential libel suits deterred publishers.
"OutWeek", which had begun publishing in 1989, was home to activist and outing pioneer Michelangelo Signorile, who stirred the waters when he outed the recently-deceased Malcolm Forbes in March 1990. His column "Gossip Watch" became a hot spot for outing the rich and famous. Both praised and lambasted for his behavior, he garnered responses to his actions as wide-ranging as "one of the greater contemporary gay heroes," to "revolting, infantile, cheap name-calling."
Other people who have been outed include Fannie Flagg, Pete Williams, Chaz Bono, and Richard Chamberlain.
In 2004, gay rights activist Michael Rogers outed Edward Schrock, a Republican Congressman from Virginia. Rogers posted a story on his website alleging that Schrock used an interactive phone sex service to meet other men for sex. Schrock did not deny this, and announced on August 30, 2004, that he would not seek re-election. Rogers said that he outed Schrock to punish him for his hypocrisy in voting for the Marriage Protection Act and signing on as a co-sponsor of the Federal Marriage Amendment.
New Jersey Governor Jim McGreevey announced that he was a "gay American" in August 2004. McGreevey had become aware that he was about to be named in a sexual harassment suit by Golan Cipel, his former security advisor, with whom it was alleged McGreevey had a sexual relationship. McGreevey resigned, but unlike Schrock, McGreevey decided not to step out of public life. John McCain's Presidential Campaign removed images of Alabama Attorney General Troy King from its website after he was outed in 2008.
Often outing is used solely to damage the outed person's reputation, and has thus been controversial. Some activists argue that outing is appropriate and legitimate in some cases — for example, if the individual is actively working against LGBT rights. United States Congressman Barney Frank argued during the 2006 Mark Foley scandal, "I think there's a right to privacy. But the right to privacy should not be a right to hypocrisy. And people who want to demonize other people shouldn't then be able to go home and close the door and do it themselves."
In 2009, Kirby Dick's documentary "Outrage" argued that several American political figures have led closeted gay lives while supporting and endorsing legislation that is harmful to the gay community. The film was based on the work of Michael Rogers and BlogActive.com. The film focused particular attention on Idaho Senator Larry Craig, an outspoken opponent of gay rights who in 2007 pleaded guilty to disorderly conduct for soliciting sex from an undercover police officer in a public bathroom. "Outrage" featured interviews with several people who claim that Governor of Florida Charlie Crist has led a private gay life while publicly opposing gay marriage and gay adoption.
Other politicians discussed in the film include former Virginia Representative Ed Schrock, California Representative David Dreier, former New York City mayor Ed Koch, and former Louisiana Representative Jim McCrery.
The film argues that the mass media is reluctant to discuss issues involving gay politicians despite the many comparable news stories about heterosexual politicians and scandals. "Outrage" describes this behavior as a form of institutionalized homophobia that has resulted in a tacit policy of self-censorship when reporting on these issues.
Gabriel Rotello, once editor of "OutWeek", called outing "equalizing", explaining, "what we have called 'outing' is a primarily journalistic movement to treat homosexuality as equal to heterosexuality in the media...In 1990, many of us in the gay media announced that henceforth we would simply treat homosexuality and heterosexuality as equals. We were not going to wait for the perfect, utopian future to arrive before equalizing the two: We were going to do it now. That's what outing really is: equalizing homosexuality and heterosexuality in the media."
Their aim is not only to reveal the hypocrisy of those in what Branch termed the "closets of power" but also to raise awareness of the presence of gay people and of gay political issues, thus showing that being gay or lesbian is not "so utterly grotesque that it should never be discussed." (Signorile, p. 78) Richard Mohr noted, "some people have compared outing to McCarthyism...And vindictive outing is like McCarthyism: such outing feeds gays to the wolves, who thereby are made stronger...But the sort of outing I have advocated does not invoke, mobilize, or ritualistically confirm anti-gay values; rather it cuts against them, works to undo them. The point of outing, as I have defended it, is not to wreak vengeance, not to punish, and not to deflect attention from one's own debased state. Its point is to avoid degrading oneself." Thus outing is "both permissible and an expected consequence of living morally."
Further, outing is not the airing of private details. As Signorile asked, "How can being gay be private when being straight isn't? Sex is private. But by outing we do not discuss anyone's sex life. We only say they're gay." "Average people have been outed for decades. People have always outed the mailman and the milkman and the spinster who lives down the block. If anything, the goal behind outing is to show just how many gay people there are among the most visible people in our society so that when someone outs the milkman or the spinster, everyone will say, 'So what?'"
There is no widely agreed definition of fair outing, nor even a clear consensus in most organizations on when it can occur. Virtually all who take a position on outing have qualified, often quite idiosyncratically, the limits to which it is permissible to go. The extremes are to out no one or to out everyone; in between, four intermediate positions have been discerned to justify so-called fair outing.
Assessing how far an outer is willing to go offers insight into the goal being pursued. Most outers target those who support decisions and advance policies, both religious and secular, that discriminate against gay people while they themselves live a clandestine gay existence. A "truism to people active in the gay movement [is] that the greatest impediments to homosexuals' progress often [are] not heterosexuals, but closeted homosexuals," said San Francisco journalist Randy Shilts.
The effectiveness of outing as a political tactic depends on the willingness of the media to report that a person has been outed. The advent of the internet has made outing public figures much easier. Twenty years ago Michael Rogers would have had to persuade a newspaper or other media outlet to risk legal action by reporting his allegations about Congressman Ed Schrock. Today he can publish them himself on his website and other media will then report that he has done so.
Signorile argues that the outing of journalist Pete Williams "and its aftermath did indeed make a big dent in the military's policy against gays. The publicity generated put the policy on the front burner in 1992, thrusting the issue into the presidential campaign," with every Democratic candidate and independent Ross Perot publicly promising to end the ban. (ibid, p. 161).
The military forces of the world have differing approaches to the enlistment of homosexual and bisexual individuals. Some have open policies, others prohibit them from serving, and some are ambiguous. The armed forces of most developed countries have now removed policies excluding non-heterosexual individuals (while retaining strict policies on sexual harassment).
Nations that permit gay people to serve openly in the military include four of the five permanent members of the UN Security Council (the United States, the United Kingdom, France, and Russia), the Republic of China (Taiwan), Australia, Israel, Argentina, and all NATO members except Turkey.
In the United Kingdom, Ministry of Defence policy since 2000 has been to allow homosexual men, lesbians and transgender personnel to serve openly, and discrimination on the basis of sexual orientation is forbidden. It is also forbidden to pressure LGBT people to come out.
In the United States, lesbian, gay, and bisexual people are allowed to serve openly in the military; this, however, excludes transgender people who are in the process of gender transition. Military policy and legislation had previously prohibited gay individuals entirely from serving, and subsequently from serving openly, but these prohibitions were ended in September 2011 after the United States Congress voted to repeal the policy. The first time homosexuals were differentiated from non-homosexuals in military literature was in revised army mobilization regulations in 1942. Additional policy revisions in 1944 and 1947 further codified the ban. Throughout the next few decades, homosexuals were routinely discharged, regardless of whether they had engaged in sexual conduct while serving. In response to the gay rights movements of the 1970s and 1980s, the Department of Defense issued a 1982 policy (DOD Directive 1332.14) stating that homosexuality was clearly incompatible with military service. Controversy over this policy created political pressure to amend it, with socially liberal efforts seeking a repeal of the ban and socially conservative groups wishing to reinforce it by statute.
Some gay rights activists defend outing as a tactic. The British activist Peter Tatchell says "The lesbian and gay community has a right to defend itself against public figures who abuse their power and influence to support policies which inflict suffering on homosexuals." In 1994 Tatchell's activist group OutRage! alleged that fourteen bishops of the Church of England were homosexual or bisexual and named them, accusing them of hypocrisy for upholding the Church's policy of regarding homosexual acts as sinful while not observing this prohibition in their personal lives.
"Outing is queer self-defence," Tatchell said in a 1995 speech to the Lesbian and Gay Christian Movement conference. "Lesbians and gay men have a right, and a duty, to expose hypocrites and homophobes. By not outing gay Bishops who support policies which harm homosexuals, we would be protecting those Bishops and thereby allowing them to continue to inflict suffering on members of our community. Collusion with hypocrisy and homophobia is not ethically defensible for Christians, or for anyone else."
Some gay activists, however, continue to disapprove of outing as a political tactic, arguing that even anti-gay conservatives have a right to personal privacy which should be respected. Steven Fisher, a spokesperson for the Human Rights Campaign, the largest advocacy group for gay and lesbian issues in the United States, commenting on the Schrock outing, said he opposed using "sexual orientation as a weapon." Christopher R. Barron, political director of the Log Cabin Republicans, a group representing gay and lesbian Republicans, said: "We disagree strongly with the outing campaign, but we also strongly disagree with President Bush's sponsorship of the anti-family Federal Marriage Amendment."
Roger Rosenblatt argued in his January 1993 "New York Times Magazine" essay "Who Killed Privacy?" that, "The practice of 'outing' homosexuals implies contradictorily that homosexuals have a right to private choice but not to private lives." In March 2002, singer Will Young revealed he was gay, pre-empting a tabloid newspaper (reportedly "News of the World") that was preparing to out him.
Other criticism of outing centers on the harm that being outed as homosexual, transgender, or transsexual does to individuals personally and professionally, and on the fact that some individuals have been outed erroneously, or outed without proof to substantiate the claim that they are gay or transgender.
Christine Jorgensen, Beth Elliott, Renée Richards, Sandy Stone, Billy Tipton, Alan L. Hart, April Ashley, Caroline Cossey ("Tula"), Jahna Steele, and Nancy Jean Burkholder were outed as transsexuals by European or American media or, in the case of Billy Tipton, by his coroner. In many cases, being outed had an adverse effect on their personal lives and their careers.
In some cases well-known celebrities have been outed as transgender or intersex when no proof to substantiate the claims was presented, e.g., Jamie Lee Curtis.
Outing has been featured in comedy films as well, such as the French comedy "Le Placard" ("The Closet"), where a heterosexual man is falsely outed, or in the 1997 comedy "In & Out" where Kevin Kline stars as a small-town teacher who gets outed on national television, and is then forced to come to terms with his own unrecognized homosexuality.
In season 5 of the television series "The L Word", the issue of public outing is addressed when Alice Pieszecki, a web journalist, outs a basketball player who made offensive comments toward gay people while himself being gay. She also ambiguously outs lesbian actress Niki Stevens while guest-hosting a fictional talk show called "The Look".
In the American television series "The Office", the episode "Gay Witch Hunt" sees the fictional character Oscar Martinez outed by his boss, Michael Scott. Oscar was notable for being one of the very few LGBT characters of color on American television at the time. | https://en.wikipedia.org/wiki?curid=22310 |
Las Vegas Raiders
The Las Vegas Raiders are a professional American football team based in the Las Vegas metropolitan area. The Raiders compete in the National Football League (NFL) as a member club of the league's American Football Conference (AFC) West division. The Raiders will play their home games at Allegiant Stadium in Paradise, Nevada, in 2020.
Founded on January 30, 1960, and originally based in Oakland, California, they played their first regular season game on September 11, 1960, as a charter member of the American Football League (AFL). They moved to the NFL with the AFL-NFL merger in 1970. The team departed Oakland to play in Los Angeles from the 1982 season through the 1994 season before returning to Oakland at the start of the 1995 season. On March 27, 2017, NFL team owners voted nearly unanimously to approve the Raiders' application to relocate to Las Vegas. Nearly three years later, on January 22, 2020, the Raiders officially moved to Las Vegas.
The Raiders' fortunes, on and off the field, have varied considerably over the years. The team's first three years of operation (1960–1962) were marred by poor on-field performance, financial difficulties, and spotty attendance. In 1963, however, the Raiders' fortunes improved dramatically with the arrival of head coach (and eventual owner) Al Davis. In 1967, after several years of improvement, the Raiders reached the postseason for the first time. The team went on to win its first (and only) AFL Championship that year; in doing so, the Raiders advanced to Super Bowl II, where they were soundly defeated by the Green Bay Packers. Since 1963, the team has won 15 division titles (3 AFL and 12 NFL), 4 AFC Championships (1976, 1980, 1983, and 2002), 1 AFL Championship (1967), and 3 Super Bowl Championships (XI, XV, and XVIII). At the end of the NFL's 2019 season, the Raiders had an all-time regular season record of 473 wins, 432 losses, and 11 ties; their all-time playoff record stands at 25 wins and 19 losses.
Al Davis owned the team from 1972 until his death in 2011. Control of the franchise was then given to Al's son Mark Davis, with Al's wife Carol maintaining ownership. The Raiders are known for their extensive fan base and distinctive team culture. The Raiders have 14 former members who have been enshrined in the Pro Football Hall of Fame. They have previously played at Kezar Stadium and Candlestick Park in San Francisco, Frank Youell Field and Oakland Coliseum in Oakland, and the Los Angeles Memorial Coliseum in Los Angeles.
The Oakland Raiders were originally going to be called the "Oakland Señors" after a name-the-team contest had that name finish first, but after becoming the target of local jokes, the name was changed to the Raiders before the 1960 season began. Having enjoyed a successful collegiate coaching career at Navy during the 1950s, San Francisco native Eddie Erdelatz was hired as the Raiders' first head coach, accepting the position on February 9, 1960, after rejecting offers from the NFL's Washington Redskins and the AFL's Los Angeles Chargers. The Raiders had been established in Oakland in January 1960, and, because of NFL interference with the original eighth franchise owner, they were the last of the eight teams in the new American Football League to select players and were thus relegated to the remaining available talent (see below).
The 1960 Raiders 42-man roster included 28 rookies and only 14 veterans. Among the Raiders rookies were future Pro Football Hall of Fame inductee center Jim Otto, and a future Raiders head coach, quarterback Tom Flores. In their debut year under Erdelatz the Raiders finished with a 6–8 record.
On September 18, 1961, Erdelatz was dismissed after the Raiders were outscored 77–46 in the first two games of the season. On September 24, 1961, after the dismissal of Erdelatz, management named Los Angeles native and offensive line coach Marty Feldman as the Raiders head coach. The team finished the 1961 season with a 2–12 record.
Feldman began the 1962 season as Raiders head coach but was fired on October 16, 1962, after an 0–5 start. From October 16 through December, the Raiders were coached by Oklahoma native and former assistant coach Red Conkright. Under Conkright, the Raiders went 1–8, finishing the season with a 1–13 record. Following the 1962 season, the Raiders retained Conkright in an interim advisory role while they looked for a new head coach.
After the 1962 season, Raiders managing general partner F. Wayne Valley hired Al Davis as Raiders head coach and general manager. At 33, he was the youngest person in professional football history to hold the positions. Davis immediately began to implement what he termed the "vertical game", an aggressive offensive strategy inspired by the offense developed by Chargers head coach Sid Gillman. Under Davis the Raiders improved to 10–4 and he was named the AFL's Coach of the Year in 1963. Though the team slipped to 5–7–2 in 1964, they rebounded to an 8–5–1 record in 1965. The famous silver and black Raider uniform debuted at the regular season opening game on September 8, 1963. Prior to this, the team wore a combination of black and white with gold trim on the pants and oversized numerals.
In April 1966 Davis left the Raiders after being named AFL Commissioner, promoting assistant coach John Rauch to head coach. Two months later, the league announced its merger with the NFL. The leagues would retain separate regular seasons until 1970. With the merger, the position of commissioner was no longer needed, and Davis entered into discussions with Valley about returning to the Raiders. On July 25, 1966, Davis returned as part-owner of the team. He purchased a 10% interest in the team for $18,000, and became the team's third general partner – the partner in charge of football operations.
Under Rauch, the Raiders matched their 1965 season's 8–5–1 record in 1966 but missed the playoffs, finishing second in the AFL West Division.
On the field, the team Davis had assembled steadily improved. Led by quarterback Daryle Lamonica, acquired in a trade with the Buffalo Bills, the Raiders finished the 1967 season with a 13–1 record and won the 1967 AFL Championship, defeating the Houston Oilers 40–7. The win earned the team a trip to the Orange Bowl in Miami, Florida to participate in Super Bowl II. On January 14, 1968, the Raiders were defeated in the second-ever Super Bowl, losing 33–14 to Vince Lombardi's Green Bay Packers.
The following year, the Raiders ended the 1968 season with a 12–2 record, again winning the AFL West Division title. This time, however, they lost 27–23 to the New York Jets in the AFL Championship Game.
Citing management conflicts with day-to-day coaching decisions, Rauch resigned as Raiders head coach on January 16, 1969, accepting the head coaching job of the Buffalo Bills.
During the early 1960s, John Madden was a defensive assistant coach at San Diego State University under SDSU head coach Don Coryell. Madden credited Coryell as being an influence on his coaching. In 1967, Madden was hired by Al Davis as the Raiders linebacker coach. On February 4, 1969, after the departure of John Rauch, Madden was named the Raiders sixth head coach. Under Madden, the 1969 Raiders won the AFL West Division title for the third consecutive year with a 12–1–1 record. On December 20, 1969, the Raiders defeated the Oilers 56–7 in the AFL Division playoff game. In the AFL Championship game on January 4, 1970, the Raiders were defeated by Hank Stram's Kansas City Chiefs 17–7.
In 1970, the AFL–NFL merger was officially completed after four years, and the Raiders joined the Western Division of the American Football Conference (effectively the AFL West, with the same teams as in 1969 except for the Cincinnati Bengals) in the newly merged NFL. The first post-merger season saw the Raiders win the AFC West with an 8–4–2 record and advance to the conference championship, where they lost to the Baltimore Colts. Despite another 8–4–2 season in 1971, it was only good for second place in the AFC West, and the team failed to make the playoffs. When backup offensive lineman Ron Mix played, the 1971 Raiders fielded an offensive line composed entirely of eventual Pro Football Hall of Fame inductees: tackle Art Shell, guard Gene Upshaw, center Jim Otto, and tackle Bob Brown.
The Raiders teams of the 1970s were thoroughly dominant, with eight Hall of Fame inductees on the roster and a Hall of Fame coach in John Madden. The 1970s Raiders created the team's identity and persona as a hard-hitting team. Dominant on defense, with the crushing hits of safeties Jack Tatum and George Atkinson and cornerback Skip Thomas, the Raiders regularly held first place in the AFC West, entering the playoffs nearly every season. From 1973 through 1977, the Raiders reached the conference championship every year.
This was also the era of a bitter rivalry between the Pittsburgh Steelers and Raiders. In the 1970s, the Steelers and Raiders were frequently the two best teams in the AFC and, arguably, the NFL. The teams would meet on five different occasions in the playoffs, and the winner of the Steelers-Raiders game went on to win the Super Bowl in three of those instances, from 1974 to 1976. The rivalry garnered attention in the sports media, with controversial plays, late hits, accusations and public statements.
The rivalry began with, and was fueled by, a controversial last-second play in the teams' first playoff meeting in 1972. That season the Raiders achieved a 10–3–1 record and an AFC West title. In the divisional round, the Raiders lost to the Steelers 13–7 on the controversial play that became known as the "Immaculate Reception".
The Raiders and Steelers would meet again the following season as the Raiders won the AFC West again with a 9–4–1 record. Lamonica was replaced as starting quarterback early in the season by Ken Stabler. The Raiders defeated Pittsburgh 33–14 in the divisional round of the playoffs to reach the AFC Championship, but lost 27–10 to the Miami Dolphins.
In 1974 Oakland had a 12–2 regular season, which included a nine-game winning streak. They beat the Dolphins 28–26 in the divisional round of the playoffs in a see-saw battle remembered as the "Sea of Hands" game. They then lost the AFC Championship to the Steelers, who went on to win the Super Bowl. The Raiders were held to only 29 yards rushing by the Pittsburgh defense, and late mistakes turned a 10–3 lead at the start of the fourth quarter into a disappointing 24–13 loss.
In the 1975 season opener, the Raiders beat Miami and ended their 31-game home winning streak. With an 11–3 record, they defeated Cincinnati 31–28 in the divisional playoff round. Again, the Raiders faced the Steelers in the conference championship, eager for revenge. According to Madden and Davis, the Raiders relied on quick movement by their wide receivers on the outside sidelines – the deep threat, or 'long ball' – more so than the Steelers of that year, whose offense was far more run-oriented than it would become later in the 1970s. Forced to adapt to the frozen field of Three Rivers Stadium, with receivers slipping and unable to make quick moves to beat coverage, the Raiders lost, 16–10. The rivalry had now grown to hatred, and became the stereotype of the 'grudge match.' Again, the Raiders came up short, as the Steelers won the AFC Championship and then went on to another Super Bowl title.
In 1976, the Raiders came from behind dramatically to beat Pittsburgh 31–28 in the season opener and continued to cement their reputation for dirty play by knocking WR Lynn Swann out for two weeks with a clothesline to the helmet. Al Davis later tried to sue Steelers coach Chuck Noll for libel after the latter called safety George Atkinson a criminal for the hit. The Raiders won 13 regular season games and took a close, controversial 21–17 victory over New England in the divisional playoffs. With the Patriots up by three points in the final two minutes, referee Ben Dreith called roughing the passer on New England's Ray "Sugar Bear" Hamilton after he hit Oakland QB Ken Stabler, and the Raiders went on to score a touchdown in the final minute to win. They then defeated the Steelers 24–7 in the AFC Championship to advance to their second Super Bowl. In Super Bowl XI, Oakland's opponent was the Minnesota Vikings, a team that had lost three previous Super Bowls. The Raiders jumped out to an early lead and led 16–0 at halftime. By the end, having forced Minnesota into multiple turnovers, the Raiders won 32–14 for their first Super Bowl and first post-merger championship.
The following season saw the Raiders finish 11–3, but they lost the division title to the Denver Broncos. They settled for a wild card, beating the Colts in the second-longest overtime game in NFL history, a game that featured the "Ghost to the Post" play. However, the Raiders then fell to the Broncos in the AFC Championship.
During a 1978 preseason game, Patriots WR Darryl Stingley was paralyzed for life by a hit from Raiders FS Jack Tatum. Although the 1978 Raiders achieved a winning record at 9–7, they lost critical games down the stretch and missed the playoffs for the first time since 1971.
After 10 consecutive winning seasons and one Super Bowl championship, John Madden left coaching in 1979 to pursue a career as a television football commentator. His replacement was former Raiders quarterback Tom Flores, the first Hispanic head coach in NFL history. Flores led the Raiders to another 9–7 season, but the team missed the playoffs.
In the midst of the turmoil of Al Davis' attempts to move the team to Los Angeles in 1980, Flores led the Raiders toward their third Super Bowl, finishing the season 11–5 and earning a wild card berth. Quarterback Jim Plunkett revitalized his career, taking over in game five after starter Dan Pastorini was lost for the season with a broken leg; Davis had acquired Pastorini in a quarterback swap with the Houston Oilers that sent the beloved Ken Stabler to Houston. The Raiders defeated Stabler and the Oilers in the Wild Card game and advanced to the AFC Championship by defeating the Cleveland Browns 14–12. The Raiders then slipped by the AFC West champion San Diego Chargers to advance to their third Super Bowl. In Super Bowl XV, the Raiders faced head coach Dick Vermeil's Philadelphia Eagles. The Raiders dominated the Eagles, taking an early 14–0 lead in the first quarter behind two touchdown passes by Plunkett, including a then-Super Bowl record 80-yard pass and catch to running back Kenny King. A Cliff Branch touchdown reception put the Raiders up 21–3 in the third quarter. They went on to win 27–10, claiming their second Super Bowl and becoming the first team ever to win the Super Bowl after entering the playoffs as a wild card.
In 1980 Al Davis attempted unsuccessfully to have improvements made to the Oakland–Alameda County Coliseum, specifically the addition of luxury boxes. That year, he signed a memorandum of agreement to move the Raiders from Oakland to Los Angeles. The move, which required three-fourths approval by league owners, was defeated 22–0 (with five owners abstaining). When Davis tried to move the team anyway, he was blocked by an injunction. In response, the Raiders not only became an active partner in an antitrust lawsuit filed by the Los Angeles Memorial Coliseum (who had recently lost the Los Angeles Rams to Anaheim), but filed an antitrust lawsuit of their own. After the first case was declared a mistrial, in May 1982, a second jury found in favor of Davis and the Los Angeles Coliseum, clearing the way for the move. With the ruling, the Raiders would relocate to Los Angeles for the 1982 season to play their home games at the Memorial Coliseum.
In 1981, the final campaign of the team's first run in Oakland, the Raiders fell to a 7–9 record and failed to make the playoffs the season after their Super Bowl win.
The newly minted Los Angeles Raiders finished the strike-shortened 1982 season 8–1 to win the AFC West, but lost in the second round of the playoffs to the Jets. The following season, the Raiders finished 12–4 to win the AFC West. Convincing playoff wins over the Steelers and Seattle Seahawks in the AFC playoffs propelled the Raiders to their fourth Super Bowl. Against the Washington Redskins in Super Bowl XVIII, the Raiders built a lead after blocking a punt and recovering it for a touchdown early in the game. A Branch touchdown reception from Plunkett put the Raiders up 14–0 with more than nine minutes remaining in the first quarter. With seven seconds remaining in the first half, linebacker Jack Squirek intercepted a Joe Theismann swing pass at the Washington five-yard line and scored, sending the Raiders to a 21–3 halftime lead. Following a John Riggins one-yard touchdown run (the extra point was blocked), Marcus Allen scored from five yards out to build the lead to 28–9. The Raiders sealed the game when Allen reversed his route on a Super Bowl-record run that went for a 74-yard touchdown. The Raiders went on to a 38–9 victory and their third NFL championship. Allen set a record for most rushing yards (191) and combined yards (209) in a Super Bowl as the Raiders won their third Super Bowl in eight years.
The team had another successful regular season in 1984, finishing 11–5, but a three-game losing streak forced them to enter the playoffs as a wild-card, where they fell to the Seahawks in the Wild Card game.
The 1985 Raiders campaign saw 12 wins and a division title as Marcus Allen was named MVP. However, a loss to the Patriots derailed any further postseason hopes.
The Raiders' fortunes declined after that, and from 1986 to 1989, they finished no better than 8–8 and posted consecutive losing seasons for the first time since 1961–62. Also in 1986, Al Davis got into a widely publicized argument with Marcus Allen, whom he accused of faking injuries. The feud continued into 1987, and Davis retaliated by signing Bo Jackson to essentially replace Allen. However, Jackson was also a left fielder for Major League Baseball's Kansas City Royals and could not play full-time until the baseball season ended in October. Even worse, another strike cost the NFL one game and prompted the league to use replacement players. The Raiders went 1–2 before the regular players returned after the strike. After a weak 5–10 finish, Tom Flores moved to the front office and was replaced by Denver Broncos offensive assistant coach Mike Shanahan.
Shanahan led the team to a 7–9 season in 1988, and Allen and Jackson continued to trade places as the starting running back. Low game attendance and fan apathy were evident by this point, and in the summer of 1988, rumors of a Raiders return to Oakland intensified when a preseason game against the Houston Oilers was scheduled at Oakland–Alameda County Coliseum.
As early as 1986, Davis sought to abandon the Coliseum in favor of a more modern stadium. In addition to sharing the venue with the USC Trojans, the Raiders were less than ecstatic with the Coliseum as it was aging and still lacked the luxury suites and other amenities that Davis was promised when he moved the Raiders to Los Angeles. Finally, the Coliseum had 95,000 seats and the Raiders were rarely able to fill all of them even in their best years, and so most Raiders home games were blacked out in Southern California. Numerous sites in California were considered, including one near now-defunct Hollywood Park in Inglewood, where SoFi Stadium for the Rams and Chargers is under construction, and another in Carson. In August 1987 it was announced that the city of Irwindale paid Davis US$10 million as a good-faith deposit for a prospective stadium site. When the bid failed, Davis kept the non-refundable deposit. During this time Davis also almost moved the team to Sacramento in a deal that would have included Davis becoming the managing partner of the Sacramento Kings.
Negotiations between Davis and Oakland commenced in January 1989, and on March 11, 1991, Davis announced his intention to bring the Raiders back to Oakland. By September 1991, however, numerous delays had prevented the completion of the deal between Davis and Oakland. On September 11, Davis announced a new deal to stay in Los Angeles, leading many fans in Oakland to burn Raiders paraphernalia in disgust.
After starting the 1989 season with a 1–3 record, Shanahan was fired by Davis, which began a long-standing feud between the two. He was replaced by former Raider offensive lineman Art Shell, who had been voted into the Pro Football Hall of Fame earlier in the year. With the hiring, Shell became the first African American head coach in the modern NFL era, but the team still finished a middling 8–8.
In 1990 Shell led the Raiders to a 12–4 record. Behind Bo Jackson's spectacular play, they beat the Cincinnati Bengals in the divisional round of the playoffs. However, Jackson suffered a severe hip and leg injury after a tackle during the game. Without him, the Raiders were blown out 51–3 in the AFC Championship by the Buffalo Bills. Jackson was forced to quit football as a result of the injury, although surgery allowed him to continue playing baseball until he retired in 1994.
The Raiders finished with a 9–7 record in 1991 but struggled to find a reliable quarterback, and they lost to the Kansas City Chiefs in the Wild Card game. The struggle at quarterback continued in 1992, as the Raiders started two different quarterbacks and stumbled to a 7–9 record. The team managed only two other playoff appearances during the 1990s and finished higher than third place only three times.
The Raiders rebounded well in 1993 with Jeff Hostetler as the everyday quarterback, finishing in second place in the AFC West with a 10–6 record. A win over the Broncos in the wild card game meant a rematch against the Bills for the right to go to the AFC Championship Game. The Raiders, led by two Napoleon McCallum rushing touchdowns, took a halftime lead but managed only six points in the second half, losing to the Bills again, 29–23.
However, following a 9–7 record in the 1994 season that resulted in the team missing the playoffs, Art Shell was fired.
On June 23, 1995, Davis signed a letter of intent to move the Raiders back to Oakland. The move was approved by the Alameda County Board of Supervisors the next month. As the NFL had never recognized the Raiders' initial move to Los Angeles, it could neither disapprove of the move nor request a relocation fee, which had to be paid by the Los Angeles Rams for their move to St. Louis. In order to convince Davis to return, Oakland spent $220 million on stadium renovations. These included a new seating section – commonly known as "Mount Davis" – with 10,000 seats. The city also built the team a training facility and paid all its moving costs. The Raiders paid $525,000 a year in rent – a fraction of what the nearby San Francisco 49ers paid to play at the since-demolished Candlestick Park – and did not pay maintenance or game-day operating costs.
The move was greeted with much fanfare, and under new head coach Mike White the 1995 season began well for the Raiders. Oakland started 8–2, but injuries to starting quarterback Jeff Hostetler contributed to a six-game losing streak for an 8–8 finish and the Raiders failed to qualify for the playoffs for a second consecutive season.
After two more losing seasons (7–9 in 1996 and 4–12 in 1997) under White and his successor, Joe Bugel, Davis selected a new head coach from outside the Raiders organization for only the second time when he hired Philadelphia Eagles offensive coordinator Jon Gruden. Gruden previously worked for the 49ers and Green Bay Packers under head coach Mike Holmgren. Under Gruden, the Raiders posted consecutive 8–8 seasons in 1998 and 1999.
Oakland finished 12–4 in the 2000 season, the team's most successful in a decade. Led by veteran quarterback Rich Gannon (MVP), Oakland won their first division title since 1990, and advanced to the AFC Championship, where Gannon was hurt when sacked by Baltimore Ravens' lineman Tony Siragusa. The Raider offense struggled without Gannon, and the Raiders fell 16–3 to the eventual Super Bowl champion Ravens.
The Raiders acquired all-time leading receiver Jerry Rice prior to the 2001 season. They started 10–3 but lost their last three games, finishing with a 10–6 record and a wild card playoff spot. They defeated the New York Jets 38–24 in the wild card round to advance to face the New England Patriots. The game, played in a heavy snowstorm, saw the Raiders lead for most of the way. In what would become known as the "Tuck Rule Game", late in the fourth quarter with the Patriots trailing the Raiders by a field goal, Raiders star cornerback Charles Woodson blitzed Patriots quarterback Tom Brady, causing an apparent fumble that was recovered by Raiders linebacker Greg Biekert. The recovery would assuredly have led to a Raiders victory, as the Raiders would have had a first down with 1:43 remaining and the Patriots out of timeouts. However, the play was reviewed and ruled an incomplete pass: officials determined that Brady had pump-faked and had not yet "tucked" the ball into his body, which by rule meant the loss of the ball was an incomplete pass rather than a fumble (this explanation was given not on the field but after the NFL season had ended). The Patriots retained possession and drove for a game-tying field goal. The game went into overtime and the Patriots won 16–13.
Shortly after the season, the Raiders made a move that involved releasing Gruden from his contract and allowing the Tampa Bay Buccaneers to sign him. In return, the Raiders received cash and future draft picks from the Buccaneers. The sudden move came after months of speculation in the media that Davis and Gruden had fallen out with each other both personally and professionally. Bill Callahan, who served as the team's offensive coordinator and offensive line coach during Gruden's tenure, was named head coach.
Under Callahan, the Raiders finished the 2002 season 11–5, won their third-straight division title, and clinched the top seed in the playoffs. Rich Gannon was named MVP of the NFL after passing for a league-high 4,689 yards. After beating the Jets and Titans by large margins in the playoffs, the Raiders made their fifth Super Bowl appearance in Super Bowl XXXVII. Their opponent was the Tampa Bay Buccaneers, coached by Gruden. The Raiders, who had not made significant changes to Gruden's offensive schemes, were intercepted five times by the Buccaneers en route to a 48–21 blowout. Some Tampa Bay players claimed that Gruden had given them so much information on Oakland's offense, they knew exactly what plays were being called.
Callahan's second season as head coach was considerably less successful. Oakland finished 4–12, which was their worst showing since 1997. After a late-season loss to the Denver Broncos, a visibly frustrated Callahan exclaimed, "We've got to be the dumbest team in America in terms of playing the game." At the end of the 2003 regular season Callahan was fired and replaced by former Washington Redskins head coach Norv Turner.
The team's fortunes did not improve in Turner's first year. Oakland finished the 2004 season 5–11, with only one divisional win (a one-point victory over the Broncos in Denver). During a Week 3 victory against the Buccaneers, Rich Gannon suffered a neck injury that ended his season and eventually his career. He never returned to the team and retired before the 2005 season. Kerry Collins, who led the New York Giants to an appearance in Super Bowl XXXV and signed with Oakland after the 2003 season, became the team's starting quarterback.
In an effort to bolster their offense, in early 2005 the Raiders acquired Pro Bowl wide receiver Randy Moss via trade with the Minnesota Vikings, and signed free agent running back Lamont Jordan of the New York Jets. After a 4–12 season and a second consecutive last place finish, Turner was fired as head coach.
On February 11, 2006, the team announced the return of Art Shell as head coach. In announcing the move, Al Davis said that firing Shell in 1995 had been a mistake.
Under Shell, the Raiders lost their first five games in 2006 en route to a 2–14 record, the team's worst since 1962. Despite having one of the best defenses, Oakland's offense struggled greatly, scoring just 168 points (fewest in franchise history) and allowing a league-high 72 sacks. Wide receiver Jerry Porter was benched by Shell for most of the season in what many viewed as a personal, rather than football-related, decision. Shell was fired again at the end of the season. The Raiders also earned the right to the first overall pick in the 2007 NFL Draft for the first time since 1962, by virtue of having the league's worst record.
On January 22, 2007, the team announced the hiring of 31-year-old USC offensive coordinator Lane Kiffin, the youngest coach in franchise history and the youngest coach in the NFL. In the 2007 NFL Draft, the Raiders selected LSU quarterback JaMarcus Russell with the No. 1 overall pick, despite a strong objection from Kiffin. Russell, arguably the biggest bust in NFL history, held out until September 12 and did not make his first career start until week 17. Kiffin coached the Raiders to a 4–12 record in the 2007 season. After a 1–3 start to 2008 and months of speculation and rumors, Davis fired Kiffin on September 30.
Tom Cable was named as Kiffin's interim replacement, and officially signed as the 17th head coach of the Raiders on February 3, 2009.
The team's finish to the 2008 season would turn out to match their best since they lost the Super Bowl in the 2002 season. However, they still finished 5–11 and ended up third in the AFC West, the first time they did not finish last since 2002. They would produce an identical record in 2009; however, the season was somewhat ameliorated by the fact that four of the Raiders' five wins were against opponents with above .500 records. In 2010 the Raiders became the first team in NFL history to go undefeated against their division yet miss the playoffs (6–0 in the AFC West, 8–8 overall, 3 games behind the Jets for the second Wild Card entry). On January 4, 2011, owner Al Davis informed head coach Tom Cable that his contract would not be renewed, ending his tenure with the organization. Many Raider players, such as punter Shane Lechler, were upset with the decision.
On January 17, 2011, it was announced that offensive coordinator Hue Jackson would become the next Raiders head coach. A press conference was held on January 18, 2011, to formally introduce Jackson, the team's fifth head coach in just seven years. Following Davis's death during the 2011 season, new owners Carol and Mark Davis decided to take the franchise in a drastically different direction by hiring a general manager. On New Year's Day 2012, the Raiders played the San Diego Chargers, hoping to reach the playoffs for the first time since 2002; the game ended in a 38–26 loss, and their season closed with another disappointing 8–8 record.
The Raiders named Green Bay Packers director of football operations Reggie McKenzie as the team's first general manager since Al Davis on January 6, 2012.
Under new head coach Dennis Allen, the 2012 Raiders ran a 4–3 defense employing a nose tackle. They lost their home opener on Monday Night Football against San Diego 22–14 and finished the season 4–12.
In the 2013 off-season, the Raiders began making major roster moves. These included the signing of linebackers Kevin Burnett, Nick Roach, and Kaluka Maiava, defensive tackles Pat Sims and Vance Walker, cornerbacks Tracy Porter and Mike Jenkins, defensive end Jason Hunter, and safety Usama Young and the release of wide receiver Darrius Heyward-Bey, safety Michael Huff, linebacker Rolando McClain and defensive tackle Tommy Kelly. Starting quarterback Carson Palmer was traded to the Arizona Cardinals in exchange for a sixth-round draft pick and a conditional seventh-round draft pick. Shortly before, they had traded a fifth-round pick and an undisclosed conditional pick in exchange for Matt Flynn. In addition to signing Matt Flynn, the Raiders also welcomed back Charles Woodson, signing him to a 1-year deal in mid-May. The Raiders finished the 2013 season with a record of 4–12.
In the 2014 NFL Draft, the Raiders selected linebacker Khalil Mack in the first round and quarterback Derek Carr in the second round hoping each would anchor their side of the ball. Carr was given control early as he was chosen as the starter for the opener of the 2014 season. After an 0–4 start to the 2014 season, and an 8–28 overall record as head coach, Allen was fired. Offensive line coach Tony Sparano was named interim head coach on September 30. The Raiders finished the 2014 season with a record of 3–13. Carr started all 16 games for the Raiders, the first Raider since 2002 to do so. First round pick Mack finished third in Defensive Rookie of the Year voting.
Jack Del Rio was hired to become the new head coach of the Raiders on January 14, 2015, replacing the fired Dennis Allen (who coincidentally had preceded him as the Broncos defensive coordinator) and interim head coach Tony Sparano.
The Raiders showed great improvement in Del Rio's first season, improving upon their three-win 2014 campaign by going 7–9 in 2015. Rookie wide receiver Amari Cooper fulfilled almost all expectations, and Derek Carr continued his improvement at quarterback. Cooper, Mack, Latavius Murray, and Carr were selected to participate in the Pro Bowl. Khalil Mack became the first player ever selected to the AP All-Pro Team at two positions (defensive end and outside linebacker) in the same year, doing so in 2015.
The day following the conclusion of the 2015 regular season, the Raiders, St. Louis Rams, and San Diego Chargers all filed to relocate to Los Angeles. On January 12, 2016, the NFL owners voted 30–2 to allow the Rams to return to L.A. and approved a stadium project in Inglewood proposed by Rams owner Stan Kroenke over a competing project in Carson that the Chargers and Raiders had jointly proposed. The Chargers were given a one-year approval to relocate as well, conditioned on negotiating a lease agreement with the Rams or an agreement to partner with the Rams on the new stadium construction. The Raiders were given conditional permission to relocate if the Chargers were to decline their option first.
As part of the Rams' relocation decision, the NFL offered to provide the Chargers and Raiders $100 million each if they could work out new stadiums in their home markets. The Chargers eventually announced on January 12, 2017, that they would exercise their option to relocate to Los Angeles following the failure of a November 2016 ballot initiative to fund a new stadium in San Diego. In an official statement on the Rams decision, the Raiders said they would "now turn our attention to exploring all options to find a permanent stadium solution." Las Vegas and San Antonio were heavily rumored as possible relocation destinations. By mid-February 2016, the team had worked out a one-year lease agreement with the City of Oakland to play at O.co Coliseum, with the option for a second one-year lease.
In late January 2016 billionaire Sheldon Adelson, president and CEO of the Las Vegas Sands Corporation casino empire, proposed a new domed stadium in Las Vegas to potentially house the University of Nevada, Las Vegas football team and a possible NFL team. Adelson quickly reached out to the Raiders to discuss the team partnering on the new stadium. In April 2016, without promising the team would move, Raiders owner Mark Davis met with the Southern Nevada Tourism Infrastructure Committee and pledged $500 million toward Adelson's stadium if public officials agreed to contribute to the stadium.
A group of investors led by former NFL stars Ronnie Lott and Rodney Peete proposed a new stadium to the city of Oakland in June 2016 as a way to keep the Raiders in the city.
Nevada's legislature approved a $750 million public subsidy for the proposed domed Las Vegas stadium in October 2016. Davis informed his fellow NFL owners that he intended to file for relocation to Las Vegas following the end of the season.
On November 28, 2016, the Raiders secured their first winning season since 2002 with a comeback win against the Carolina Panthers, and on December 18, the team clinched their first postseason berth since 2002 with a victory over the San Diego Chargers. On December 20, 2016, the NFL announced that the Raiders would have seven Pro Bowl selections: Khalil Mack, Derek Carr, Amari Cooper, Donald Penn, Kelechi Osemele, Rodney Hudson and Reggie Nelson. This was the most selections for the team since 1991, and the most for any team in the 2016 NFL season.
As the fifth seed in the AFC in the 2016 NFL playoffs, the Raiders faced the Houston Texans in the opening Wild Card round. With significant injuries hampering the team, including the loss of starting quarterback Carr in the second to last regular season game, they lost to the Texans 27–14.
The Raiders officially filed paperwork with the NFL on January 19, 2017, to relocate the club from Oakland to Las Vegas, Nevada by the 2020 season. The vote on the team's relocation took place on March 27, 2017, and the NFL officially approved the move to Las Vegas by a 31–1 vote; only the Miami Dolphins dissented. Subsequently, the team announced that it would continue to be known as the Oakland Raiders for the 2017 and 2018 NFL seasons and would play its games in Oakland for at least those two seasons.
Prior to the 2017 season, the Raiders signed quarterback Derek Carr to a then-NFL record contract extension of five years and $125 million. Following their first trip to the playoffs in 14 years, the Raiders expected bigger things in 2017, with a return to the playoffs seeming likely. However, the Raider defense struggled mightily under Ken Norton Jr. before improving once John Pagano took over as defensive coordinator, and the offense could not return to its previous year's form under first-year offensive coordinator Todd Downing. After winning the first two games of the season, the Raiders lost four straight and six of their next eight, leaving them two games below .500 with six games remaining. They won their next two games but lost their final four, ending the season a disappointing 6–10. On December 31, 2017, following a loss to the Los Angeles Chargers in Week 17, head coach Del Rio was fired by Mark Davis, despite having been granted a four-year contract extension prior to the season.
On January 6, 2018, the team announced the return of Jon Gruden as head coach. Gruden returned to the Raiders and to coaching after a nine-year stint at ESPN as an analyst for Monday Night Football. Davis, who had reportedly wanted to hire Gruden for six years, gave him a 10-year contract worth an estimated $100 million. One of the first major moves of the second Gruden era was a blockbuster trade that sent Khalil Mack, who was holding out for a new contract, to the Chicago Bears for two first-round draft picks; the team later sent Amari Cooper to the Dallas Cowboys for another first-round pick. During the 2018 season the Raiders fired general manager Reggie McKenzie, replacing him with NFL Network draft expert Mike Mayock for the 2019 season. The Raiders finished 4–12 and in last place in the AFC West for the first time since 2014. The next year, in what would be the last season of the team's second tenure in Oakland, the team posted a three-game turnaround with a 7–9 record.
On January 22, 2020, it was announced that the Raiders had officially relocated to Las Vegas. Soon after the announcement, the organization donated $500,000 in an effort to eliminate school lunch debt in the state of Nevada.
The Raiders finished the 1967 season with a 13–1–0 record and won the 1967 AFL Championship. They subsequently lost to the Green Bay Packers in Super Bowl II.
The Raiders have won a total of three Super Bowls: their first under John Madden, and their next two with Tom Flores.
When the team was founded in 1960, the "Oakland Tribune" held a name-the-team contest. The winning name was the Oakland Señors. After a few days of being the butt of local jokes (and accusations that the contest was fixed, as Chet Soda was fairly well known within the Oakland business community for calling his acquaintances "señor"), the fledgling team and its owners changed the name nine days later to the Oakland Raiders, which had finished third in the naming contest. Soda hired a well-known sportswriter, Gene Lawrence Perry, as the team's first Director of Public Relations. Perry, who was hired in 1959 as the first front office hire, commissioned an unknown Berkeley artist to create a logo featuring a helmeted man with an eye patch and the firm chin of a Randolph Scott, the well-known Western film actor. The new owners thus had their newly minted Raiders logo: a pirate wearing a football helmet with an eye patch, set on a gold football background with two white swords in black trim with gold handles crossed behind the football.
The original Raiders uniforms were black and gold with Gothic numerals, while the helmets were black with a white stripe and no logo. The team wore this design from 1960 to 1962. In a very rare move, the jerseys displayed each player's full name on the back, before being pared down to only the surname in 1963. When Al Davis became head coach and general manager in 1963, he changed the team's color scheme to silver and black and added a logo to the helmet. This logo is a shield consisting of the word "RAIDERS" at the top, two crossed cutlasses with handles up and cutting edges down, and the superimposed head of a Raider wearing a football helmet and a black eye patch covering his right eye. Over the years, it has undergone minor color modifications (such as changing the background from silver to black in 1964), but it has remained essentially the same.
The Raiders' current silver and black uniform design has essentially remained the same since it debuted in 1963. It consists of silver helmets, silver pants, and either black or white jerseys. The black jerseys have silver lettering names and numbers, while the white jerseys have black lettering names and numbers with silver outlining the numbers only. Originally, the white jerseys had black letters for the names and silver numbers with a thick black outline, but they were changed to black with a silver outline for the 1964 season. In 1970, the team used silver numerals with black outline and black lettering names for the season. However, in 1971, the team again displayed black numerals and have stayed that way ever since (with the exception of the 1994 season as part of the NFL's 75th Anniversary where they donned the 1963 helmets with the 1970 silver away numbers and black lettering names).
The Raiders wore their white jerseys at home for the first time in their history on September 28, 2008, against the San Diego Chargers. The decision was made by Lane Kiffin, who was coaching his final game for the Raiders, and was purportedly due to intense heat; the high temperature in Oakland that day was 78 °F.
For the 2009 season, the Raiders took part in the AFL Legacy Program and wore 1960s throwback jerseys for games against other teams from the former AFL.
In the 2012 and 2013 seasons, the team wore black cleats as a tribute to Al Davis. However, the team reverted to white cleats in 2014.
In the 2016 season, the Raiders brought back their classic white jerseys with silver numerals as part of the NFL Color Rush initiative. Unlike the regular uniforms which are paired with silver pants and black/white socks, the Color Rush jerseys were paired with white pants with silver stripes and all-white socks. Starting in 2018, the Raiders retired the white pants but kept the throwback white jerseys, wearing them along with silver pants and black socks in a style reminiscent of the 1970 road set.
No changes to uniforms or logos were made during the team's move to Las Vegas, aside from changing "OAKLAND" to "LAS VEGAS" on various wordmark logos.
After splitting their first home season between Kezar Stadium and Candlestick Park, the Raiders played exclusively at Candlestick Park in 1961, where total attendance for the season was about 50,000, and finished 2–12. Valley threatened to move the Raiders out of the area unless a stadium was built in Oakland, so in 1962 the Raiders moved into 18,000-seat Frank Youell Field (later expanded to 22,000 seats), their first home in Oakland. It was a temporary home for the team while the 53,000-seat Oakland Coliseum was under construction; the Coliseum was completed in 1966. The Raiders shared the Coliseum with the Oakland Athletics once the A's moved from Kansas City in 1968, except for the years the Raiders called Los Angeles home (1982–94). The Raiders both defeated and lost to each of the other 31 NFL teams at the Coliseum at least once.
The Raiders did play one regular season game at California Memorial Stadium in Berkeley. On September 23, 1973, they played the Miami Dolphins there due to a scheduling conflict with the Athletics. The team defeated the Dolphins 12–7, ending Miami's winning streak.
During the Los Angeles years, the Raiders played in the 93,000-seat Los Angeles Memorial Coliseum.
After Mark Davis assumed control of the team in 2011, the Raiders were subject to rampant relocation speculation as the team attempted to find a new stadium in Oakland or elsewhere, owing to the age of the Oakland Alameda Coliseum, the team's status as a secondary tenant to Major League Baseball's Athletics, and the expiration of its lease at the end of 2013. After looking into a variety of options in the Bay Area, Los Angeles, and elsewhere, the team ultimately relocated to the Las Vegas area in 2020, where Allegiant Stadium is under construction. The Raiders will share the 65,000-seat stadium with the UNLV Rebels football program.
In the event that Allegiant Stadium is not finished in time, the Raiders have an option with the Oakland Coliseum to remain there in 2020.
Al Davis coined slogans such as "Pride and Poise", "Commitment to Excellence", and "Just Win, Baby"—all of which are registered trademarks of the team.
"Commitment to Excellence" comes from a quote from Vince Lombardi, "The quality of a person's life is in direct proportion to their commitment to excellence, regardless of their chosen field of endeavor."
The nickname Raider Nation refers to the die-hard fans of the team spread throughout the United States and the world. Members of the Raider Nation who attend home games are known for arriving at the stadium early, tailgating, and dressing up in face masks and black outfits. The Raider Nation is also known for the Black Hole, originally a specific area of the Coliseum (sections 104–107) frequented by the team's rowdiest and most fervent fans from 1995 until 2019.
Al Davis created the phrase Raider Nation in 1968. In September 2009 Ice Cube recorded a song for the Raiders named "Raider Nation". In 2010 he took part in a documentary for ESPN's "30 for 30" series titled "Straight Outta L.A.". It mainly focuses on N.W.A and the effect of the Raiders' image on their persona. In 2012 Ice Cube wrote another song for the Raiders, as a part of Pepsi's NFL Anthems campaign, "Come and Get It". It was released on September 14, 2012.
The Las Vegas Raiderettes are the cheerleading squad for the Las Vegas Raiders. They were established in 1961 as the Oakland Raiderettes. During the team's time in Los Angeles they were the Los Angeles Raiderettes. They have been billed as "Football's Fabulous Females".
Raider games are broadcast in English on 36 radio stations across the western United States, including flagship station KYMT 93.1 FM in Las Vegas. Games are broadcast on radio stations in Nevada, California, Oregon, Colorado, Hawaii, and Arkansas. Former CBS Sports, ABC Sports and ESPN sportscaster Brent Musburger is the play-by-play announcer (as announced in July 2018), along with former Raiders tackle Lincoln Kennedy doing commentary. George Atkinson and Jim Plunkett offer pre- and post-game commentary. Compass Media Networks is responsible for producing and distributing Raiders radio broadcasts.
Bill King was the voice of the Raiders from 1966 to 1992, during which time he called approximately 600 games. The Raiders awarded him rings for all three of their Super Bowl victories. It is King's radio audio heard on most of the NFL Films highlight footage of the Raiders. King's call of the Holy Roller has been labeled (by Chris Berman, among others) as one of the five best in NFL history. King died in October 2005 from complications after surgery. Former San Francisco 49ers tight end Monty Stickles and Scotty Stirling, an "Oakland Tribune" sportswriter, served as color commentators with King. Raiders games were called on radio from 1960 to 1962 by Bud (Wilson Keene) Foster and Mel Venter, and from 1963 to 1965 by Bob Blum and Dan Galvin. Greg Papa was the voice of the Raiders from 1997 until his dismissal prior to the 2018 season, with former Raiders quarterback and coach Tom Flores providing commentary from 1997 to 2017.
In June 2017 it was announced that Beasley Media Group signed a two-year deal as the Las Vegas flagship radio partner of the Raiders. Beasley's stations KCYE (102.7) "The Coyote" and KDWN (720) began carrying all preseason and regular season games in the 2017 season. Beginning with the 2019 season, the Raiders' Las Vegas flagship station became "93.1 The Mountain" KYMT.
The Raiders' games are broadcast in Las Vegas on CBS affiliate KLAS-TV (CBS 8) and in the Bay Area on CBS affiliate KPIX (CBS Channel 5) when playing an AFC opponent, and on Las Vegas affiliate KVVU-TV (Fox 5) and Bay Area Fox affiliate KTVU (Fox 2) when hosting an NFC opponent, unless the game is blacked out locally. Sunday night games are on Las Vegas affiliate KSNV (NBC 3) and Bay Area NBC affiliate KNTV (NBC Channel 11). In 2018, Thursday games moved to Fox 2 in the Bay Area and Fox 5 in Las Vegas, having formerly aired on either NBC or CBS; all Thursday games otherwise air on NFL Network. Traditionally, Monday night games airing on ESPN would also air on ABC affiliates: KTNV 13 in Las Vegas and channel 7 in the Bay Area.
During the team's two tenures in Oakland, the Raiders were subject to league scheduling policies for shared markets, as both the Raiders and the San Francisco 49ers shared the San Francisco Bay Area market on the West Coast of the United States. This meant that the Raiders could not play any home games, road division games against the Denver Broncos or Los Angeles Chargers, or interconference road games against the NFC West (in seasons that the AFC West and NFC West meet in interconference play) in the early 10:00 a.m. Pacific time slot. In addition, they could not play interconference home games at the same time or on the same network as the 49ers. As a result, both teams generally had more limited scheduling options, but also benefited by receiving more prime time games than usual.
Starting shortly after the announcement of the franchise's relocation to Las Vegas, KVVU-TV, the local Fox affiliate in Las Vegas, began carrying all Raiders preseason games and special content. In 2020, a deal was made with Nexstar Broadcasting for stations in Raiders markets, placing Raiders preseason games and special content on KRON-TV (moving from KTVU) in the Bay Area, KTLA in Los Angeles, KTVX in Salt Lake City, KHON-TV in Honolulu, and KGET-TV in Bakersfield, alongside KVVU and KLAS in Las Vegas.
The Raiders have rivalries with the other three teams in the AFC West (Denver Broncos, Kansas City Chiefs, and Los Angeles Chargers) and a geographic rivalry with the San Francisco 49ers. They also have rivalries with other teams that arose from playoff battles in the past, most notably with the Pittsburgh Steelers and the New England Patriots. The Seattle Seahawks have an old rivalry with Oakland/Los Angeles/Las Vegas as well, but the rivalry largely died down when the Seahawks moved to the NFC West as part of the NFL's 2002 realignment.
The Chiefs are one of the Raiders' most iconic and longstanding divisional foes, with the rivalry dating back to the earliest days of the AFL. Oakland lost the 1969 AFL Championship to Kansas City, which went on to beat the Minnesota Vikings and win the Super Bowl. From 1990 to 1999, the Raiders lost 17 of 20 regular season meetings against the Chiefs, including a 10-game losing streak at Kansas City; the Raiders also lost to the Chiefs 10–6 in the Wild Card round on December 28, 1991. On September 8, 1996, the Chiefs took the lead in the overall series against the Raiders for the first time since November 23, 1969. On January 1, 2000, the last game of the 1999 NFL regular season, the Raiders defeated the Chiefs in Kansas City for the first time since 1988, winning in overtime on a 33-yard field goal by Joe Nedney. The Chiefs lead the overall series 64–53–2 and are the only team in the AFC West against which the Raiders have a losing record. Oakland has defeated Kansas City just twice since the 2012 NFL season. Until October 19, 2017, when they defeated the Chiefs 31–30 on a game-tying touchdown on the last play of the game followed by a game-winning extra point, the Raiders had lost five straight to the Chiefs, their previous win having come in the 2014 season.
The Raiders' rivalry with the Broncos is considered one of the most heated and well-known rivalries in NFL history. The Raiders managed a 14-game winning streak against the Broncos from 1965 to 1971, which lasted until October 22, 1972, when the Broncos defeated the Raiders 30–23. While the Raiders still hold the advantage in the all-time series 63–53–2, the Broncos amassed 21 wins in 28 games from the 1995 season, when Broncos head coach Mike Shanahan arrived, through the 2008 season. Shanahan had coached the Raiders before being fired just four games into the 1989 season, which has only served to intensify the rivalry. On October 24, 2010, the Raiders beat the Broncos 59–14, the most points scored in a game in the team's history. On December 13, 2015, the Raiders upset the Broncos 15–12 behind a spectacular defensive performance that held Denver to four field goals; linebacker Khalil Mack recorded five sacks in that game, tying Howie Long's franchise record for sacks in a game. The Broncos reached their first ever Super Bowl when they defeated the Raiders 20–17 in the AFC Championship Game following the 1977 season. The two teams have faced off on Monday Night Football a total of 19 times, making it the most frequent Monday Night matchup in NFL history.
The Los Angeles Chargers' rivalry with the Raiders dates to the 1963 season, when the Raiders twice defeated the heavily favored Chargers, both come-from-behind fourth-quarter victories. The Raiders went unbeaten against the Chargers from 1968 to 1977, posting a 16–0–2 record. One of the most memorable games between the teams was the "Holy Roller" game in 1978, in which the Raiders fumbled forward for a touchdown on a highly controversial play. In January 1981 the Chargers hosted their first AFC Championship Game against the Raiders; the Raiders won 34–27 and went on to defeat the Philadelphia Eagles 27–10 in Super Bowl XV. On November 22, 1982, the Raiders hosted their first Monday Night Football game in Los Angeles against the San Diego Chargers; the Chargers led 24–0 in the first half before the Raiders mounted a second-half comeback to win 28–24. On October 10, 2010, the Raiders ended a 13-game losing streak to the San Diego Chargers with a 35–27 win. The Raiders hold the overall series advantage at 63–54–2.
The Pittsburgh Steelers' rivalry with the Raiders has historically been very tight; as of the 2018 season the Raiders lead the regular season series 13 wins to 10, and the playoff series is tied 3–3. The rivalry was extremely intense during the 1970s and is considered by many to be one of the most vicious and brutal in the history of professional football. From 1972 to 1976 the teams met in the playoffs five consecutive times, including three consecutive AFC Championship games. The rivalry kicked off in earnest during the teams' first playoff meeting, the 1972 AFC divisional round in Pittsburgh. In one of the most famous plays in NFL history, dubbed the "Immaculate Reception", the Steelers beat the Raiders on a controversial last-second play. During the 1975 AFC Championship game, Raiders strong safety George Atkinson delivered a hit on Pittsburgh wide receiver Lynn Swann that left him concussed. When the two teams met in the 1976 season opener, Atkinson again hit Swann, this time with a forearm to the head, causing another concussion. After the second incident, Steelers head coach Chuck Noll referred to Atkinson as part of the "criminal element" in the NFL. Atkinson filed a $2 million defamation lawsuit against Noll and the Steelers, which he lost. The rivalry reached another peak in the late 1980s, cooled when the teams faced each other only sporadically, then heated up again in the late 1990s before cooling once more.
The four most recent contests between the Raiders and Steelers harkened back to the rivalry's history of bitterness and close competition. On December 6, 2009, the 3–8 Raiders helped spoil the defending champions' quest for the playoffs, as the lead changed five times in the fourth quarter and a Louis Murphy touchdown with 11 seconds remaining won it 27–24 for the Raiders. Oakland was then beaten 35–3 by Pittsburgh on November 21, 2010; that game recalled the roughness of the rivalry's 1970s history when Steelers quarterback Ben Roethlisberger was punched by Raiders defensive end Richard Seymour following a touchdown. On November 8, 2015, the Steelers outplayed the Raiders for a 38–35 victory. During the game, the Raiders defense allowed wide receiver Antonio Brown to catch 17 passes for 284 yards; both are Steelers team records, and the 284 yards is the seventh-highest single-game receiving total in NFL history. However, in 2018, the Raiders upset the Steelers again, scoring a late touchdown to take a 24–21 fourth-quarter lead and getting the last laugh when Steelers kicker Chris Boswell slipped and missed a game-tying field goal. This game, likely the teams' final matchup in Oakland, contributed to the Steelers' late-season collapse and their missing the playoffs that year.
The rivalry between the Raiders and New England Patriots dates to their time in the AFL, but was intensified during a 1978 preseason game, when Patriots wide receiver Darryl Stingley was permanently paralyzed after a vicious hit delivered by Raiders free safety Jack Tatum. Before that, New England had lost a 1976 playoff game to the Raiders, a game unofficially known as "The Ben Dreith Game" after a controversial penalty called by referee Dreith. While based in Los Angeles, the team hosted New England in the divisional round of the playoffs in 1986. The game was won by New England and marred by a chaotic brawl between the teams in the end zone as players were leaving the field, especially notable for Matt Millen attacking Patriots GM Patrick Sullivan with his helmet. The two teams met in a divisional-round playoff game in 2002, which became known as the "Tuck Rule Game". Late in the game, an apparent fumble by Patriots quarterback Tom Brady was overturned and ruled an incomplete pass; New England went on to win in overtime and eventually won the Super Bowl against the heavily favored St. Louis Rams, the Raiders' former crosstown rivals in Los Angeles. Since that game, the Patriots have won five of the six regular season contests between the two teams. The Raiders won the first, 27–20, the following year during the 2002 season in Oakland; the Patriots then ruined Randy Moss' debut as a Raider with a 30–20 win in the 2005 season opener in New England, defeated the Raiders 49–26 in December 2008 in Bill Belichick's 100th regular season win as Patriots coach, won 31–19 during the 2011 season, took a scrappy 16–9 win in the third week of the 2014 season, and won 33–8 in Mexico City in 2017.
The New York Jets began a strong rivalry with the Raiders in the AFL during the 1960s that continued through much of the 1970s, fueled in part by Raider Ike Lassiter breaking star quarterback Joe Namath's jaw during a 1967 game (though Ben Davidson was wrongly blamed), the famous Heidi Game during the 1968 season, and the Raiders' bitter loss to the Jets in the AFL Championship later that season. The rivalry waned in later years, but saw a minor resurgence in the 2000–02 period. The Jets edged the Raiders in the final week of the 2001 season 24–22 on a last-second John Hall field goal; the Raiders hosted the Jets in the Wild Card round the following Saturday and won 38–24. In the 2002 season the Raiders defeated the Jets 26–20 in December, then defeated them again in the AFC Divisional Playoffs, 30–10. The Raiders lost 37–27 on December 8, 2013, but won the most recent matchup 34–20 on November 1, 2015.
Rivalries that have waned in recent years include those with the Miami Dolphins and the Houston Oilers/Tennessee Titans. The Raiders faced the Dolphins twice in the playoffs in the early 1970s. The Dolphins defeated the Raiders in the 1973 AFC Championship Game 27–10 on their way to Super Bowl VIII. The next year, in the divisional playoffs, the Raiders trailed Miami 26–21 in the final minute when they drove to the Miami eight-yard line; a desperation pass by Ken Stabler was caught in traffic by Clarence Davis in the play known as the "Sea of Hands."
The Raiders faced the Houston Oilers throughout the AFL era and twice in AFL playoffs in the late 1960s, winning 40–7 in 1967 on their way to Super Bowl II and 56–7 in the 1969 divisional playoffs. Oakland defeated the Oilers in the 1980 Wild Card playoffs 27–7 and defeated the Titans in the 2002 AFC Championship Game 41–24.
The San Francisco 49ers, located on the other side of San Francisco Bay, were the Raiders' geographic rivals during the Raiders' time in Oakland. The first exhibition game, played in 1967, ended with the 49ers defeating the AFL Raiders 13–10. After the 1970 merger, the 49ers won in Oakland 38–7. As a result, games between the two are referred to as the "Battle of the Bay." Since the two teams play in different conferences, regular season matchups happen only once every four years. Fans and players of the winning team could claim "bragging rights" as the better team in the area.
On August 20, 2011, in the third week of the preseason, a preseason game between the rivals was marred by fights in the restrooms and stands at Candlestick Park, as well as a shooting outside the stadium in which several people were injured. The NFL subsequently canceled all future preseason games between the Raiders and 49ers.
The series ended on November 1, 2018, during a Thursday Night Football broadcast at Levi's Stadium, marking the last time the teams would meet before the Raiders moved to their new home in Las Vegas. The 49ers won the game 34–3 to tie the regular season series at 7–7.
As mentioned earlier, the Raiders and Los Angeles Rams had a rivalry during the 13 years both teams shared the Los Angeles market. The teams met six times in the regular season during this period; the Raiders won the first meeting of this era, 37–31, on December 18, 1982, and won four of the six games overall.
Max Winter, a Minneapolis businessman, was among the eight proposed franchise owners in the American Football League. In a move typical of the NFL owners, who were frightened by the prospect of competition and continually obstructed the new league, the NFL offered Winter an expansion franchise. This came after the NFL had rejected Lamar Hunt's feelers, saying it was not interested in expansion, and was one of many obfuscations the NFL put forward in its attempt to derail the AFL.
After the AFL's first draft, in which players were selected for the then-nameless Minneapolis franchise, Winter reneged on his agreement with the AFL owners and defected to the NFL with a franchise that started play in 1961 and was named the Minnesota Vikings. The Vikings were never an AFL team, nor did they have any association with the AFL. Many of the players (including Abner Haynes) who had been assigned to the unnamed and defunct Minneapolis AFL franchise were signed by some of the seven loyal remaining members of the AFL's "Foolish Club".
The city of Oakland was awarded the eighth AFL franchise on January 30, 1960. Once the consortium of owners was found for the eighth franchise, the team was named the Raiders. Because many of the defunct Minneapolis franchise's originally drafted players were signed by other AFL teams, the AFL held an 'allocation' draft, in which each team earmarked players that could be chosen by the Raiders.
The Minneapolis group did not take with them any rights to the players they had drafted when they defected to the NFL, because their first draft in that league was in 1961. The Raiders were not originally based in Minnesota, as some claim; they were a new, charter franchise in the American Football League. One reason they were so weak in the first few years of the AFL was that the other AFL teams did not make quality players available in the allocation draft.
At the time, Oakland seemed an unlikely venue for a professional football team. The city had not asked for a team; there was no ownership group; there was no stadium in Oakland suitable for pro football (the closest suitable stadiums were in Berkeley and San Francisco); and there was already a successful NFL franchise in the Bay Area, the San Francisco 49ers. However, the AFL owners selected Oakland after Los Angeles Chargers owner Barron Hilton threatened to forfeit his franchise unless a second team was placed on the West Coast.
Upon receiving the franchise, Oakland civic leaders found a number of businesspeople willing to invest in the new team. A limited partnership was formed to own the team headed by managing general partner Y. Charles (Chet) Soda (1908–89), a local real estate developer, and included general partners Ed McGah (1899–1983), Robert Osborne (1898–1968), F. Wayne Valley (1914–86), restaurateur Harvey Binns (1914–82), Don Blessing (1904–2000), and contractor Charles Harney (1902–62) as well as numerous limited partners.
The Raiders finished their first campaign with a 6–8 record, and lost $500,000. Desperately in need of money to continue running the team, Valley received a $400,000 loan from Buffalo Bills founder Ralph C. Wilson Jr.
After the conclusion of the first season Soda dropped out of the partnership, and on January 17, 1961, Valley, McGah and Osborne bought out the remaining four general partners. Soon after, Valley and McGah purchased Osborne's interest, with Valley named as the managing general partner.
In 1962 Valley hired Al Davis, a former assistant coach for the San Diego Chargers, as head coach and general manager. In April 1966 Davis left the Raiders after being named AFL Commissioner. Two months later, the league announced its merger with the NFL. With the merger, the position of commissioner was no longer needed, and Davis entered into discussions with Valley about returning to the Raiders. On July 25, 1966, Davis returned as part owner of the team. He purchased a 10% interest in the team for US$18,000, and became the team's third general partner – the partner in charge of football operations.
In 1972, with Wayne Valley out of the country for several weeks attending the Olympic Games in Munich, Davis's attorneys drafted a revised partnership agreement that gave him total control over all of the Raiders' operations. McGah, a supporter of Davis, signed the agreement. Under partnership law, the new agreement was thus ratified by a 2–1 vote of the general partners. Valley was furious when he discovered this and immediately filed suit to have the new agreement overturned, but the court sided with Davis and McGah.
In 1976 Valley sold his interest in the team, and Davis – who now owned only 25% of the Raiders – was firmly in charge.
Legally, the club is a limited partnership with nine partners – Davis' heirs and the heirs of the original eight team partners. From 1972 onward, Davis had exercised near-complete control as president of the team's general partner, A.D. Football, Inc. Although exact ownership stakes are not known, it has been reported that Davis owned 47% of the team shares before his death in 2011.
Ed McGah, the last of the original eight general partners of the Raiders, died in September 1983. Upon his death, his interest was devised to a family trust, of which his son, E.J. McGah, was the trustee. The younger McGah was himself a part-owner of the team, as a limited partner, and died in 2002. Several members of the McGah family filed suit against Davis in October 2003, alleging mismanagement of the team by Davis. The lawsuit sought monetary damages and to remove Davis and A. D. Football, Inc. as the team's managing general partner. Among their specific complaints, the McGahs alleged that Davis failed to provide them with detailed financial information previously provided to Ed and E.J. McGah. The Raiders countered that—under the terms of the partnership agreement as amended in 1972—upon the death of the elder McGah in 1983, his general partner interest converted to that of a limited partner. The team continued to provide the financial information to the younger McGah as a courtesy, though it was under no obligation to do so.
The majority of the lawsuit was dismissed in April 2004, when an Alameda County Superior Court judge ruled that the case lacked merit since none of the other partners took part in the lawsuit. In October 2005 the lawsuit was settled out of court. The terms of the settlement are confidential, but it was reported that under its terms Davis purchased the McGah family's interest in the Raiders (approximately 31%), which gave him for the first time a majority interest, speculated to be approximately 67% of the team. As a result of the settlement, confidential details concerning Al Davis and the ownership of the Raiders were not released to the public. His ownership share later went down to 47% when he sold 20% of the team to Wall Street investors.
In 2006 it was reported that Davis had been attempting to sell the 31% ownership stake in the team obtained from the McGah family. He was unsuccessful in this effort, reportedly because the sale would not give the purchaser any control of the Raiders, even in the event of Davis's death.
Al Davis died on October 8, 2011, at age 82. According to a 1999 partnership agreement, Davis' interest passed to his wife, Carol. After Davis' death, Raiders chief executive Amy Trask said that the team "will remain in the Davis family." Al and Carol's son, Mark, inherited his father's old post as managing general partner and serves as the public face of the ownership.
According to a 2017 report released by "Forbes", the Raiders' overall team value is US$2.38 billion, ranked 19th out of 32 NFL teams. This valuation was made after the team's announcement of relocation to Las Vegas by 2020 and into a new stadium, which moved the team's value up 19 percent.
Although the team has regularly sold out since 2013, it ranked in the bottom three in league attendance from 2003 to 2005 and failed to sell out a majority of its home games. One of the reasons cited for the poor attendance figures was the decision to issue costly personal seat licenses (PSLs) upon the Raiders' return to Oakland in 1995. The PSLs, which ranged in cost from $250 to $4,000, were meant to help repay the $200 million it cost the city of Oakland and Alameda County to expand the Oakland Coliseum. They were valid for only ten years, however, while other teams issue them permanently. As a result, fewer than 31,000 PSLs were sold for a stadium that holds twice that number. From 1995 until the lifting of the policy in 2014, television blackouts of Raiders home games were common.
In November 2005 the team announced that it was taking over ticket sales from the privately run Oakland Football Marketing Association (OFMA), and abolishing PSLs. In February 2006 the team also announced that it would lower ticket prices for most areas of the Oakland Coliseum. Just prior to the start of the 2006 NFL season, the Raiders revealed that they had sold 37,000 season tickets, up from 29,000 the previous year. Despite the team's 2–14 record, they sold out six of their eight home games in 2006.
The Raiders and Al Davis have been involved in several lawsuits throughout their history, including ones against the NFL. When the NFL declined to approve the Raiders' move from Oakland to Los Angeles in 1980, the team joined the Los Angeles Memorial Coliseum Commission in a lawsuit against the league alleging a violation of antitrust laws. The Coliseum Commission received a settlement from the NFL of $19.6 million in 1987. In 1986, Davis testified on behalf of the United States Football League in their unsuccessful antitrust lawsuit against the NFL. He was the only NFL owner to do so.
After relocating back to Oakland, the team sued the NFL for interfering with their negotiations to build a new stadium at Hollywood Park prior to the move. The Raiders' lawsuit further contended that they had the rights to the Los Angeles market, and thus were entitled to compensation from the league for giving up those rights by moving to Oakland. A jury found in favor of the NFL in 2001, but the verdict was overturned a year later due to alleged juror misconduct. In February 2005, a California Court of Appeal unanimously upheld the original verdict.
When the Raiders moved back from Los Angeles in 1995, the city of Oakland and the Oakland–Alameda County Coliseum Authority agreed to sell personal seat licenses (PSLs) to help pay for the renovations to the stadium. When games rarely sold out, the Raiders filed suit, claiming that they had been misled by the city and the Coliseum Authority with the false promise that there would be sellouts. On November 2, 2005, a settlement was announced, part of which was the abolition of PSLs as of the 2006 season.
In 1996 the team sued the NFL in Santa Clara County, in a lawsuit that ultimately included 22 separate causes of action. Among the team's claims were that the Tampa Bay Buccaneers' pirate logo diluted the team's California trademark in its own pirate logo, and that the league had committed trade dress dilution by improperly permitting other teams (including the Buccaneers and Carolina Panthers) to adopt uniform colors similar to those of the Raiders. Among other things, the lawsuit sought an injunction to prevent the Buccaneers and Panthers from wearing their uniforms while playing in California. In 2003 these claims were dismissed on summary judgment because the relief sought would violate the Commerce Clause of the United States Constitution.
In 2003 a number of current and former Oakland players such as Bill Romanowski, Tyrone Wheatley, Barrett Robbins, Chris Cooper and Dana Stubblefield were named as clients of the Bay Area Laboratory Co-Operative (BALCO). BALCO was an American company led by founder and owner Victor Conte. Also in 2003, journalists Lance Williams and Mark Fainaru-Wada investigated the company's role in a drug sports scandal later referred to as the "BALCO Affair". BALCO marketed tetrahydrogestrinone ("the Clear"), a then-undetected, performance-enhancing steroid developed by chemist Patrick Arnold. Conte, BALCO vice president James Valente, weight trainer Greg Anderson and coach Remi Korchemny had supplied a number of high-profile sports stars from the United States and Europe with the Clear and human growth hormone for several years.
Headquartered in Burlingame, California, BALCO was founded in 1984. Officially, BALCO was a service business for blood and urine analysis and food supplements. In 1988, Victor Conte offered free blood and urine tests to a group of athletes known as the "BALCO Olympians", and he was subsequently allowed to attend the Summer Olympics in Seoul, South Korea. From 1996 Conte worked with well-known American football star Bill Romanowski, who proved useful in establishing new connections to athletes and coaches.
The Pro Football Hall of Fame has inducted 14 players who made their primary contribution to professional football while with the Raiders, in addition to coach-owner-commissioner Al Davis, head coach John Madden, and executive Ron Wolf. In all, the Raiders count 25 Hall of Famers.
The Raider organization does not retire the jersey numbers of former players on an official or unofficial basis. All 99 numbers are available for any player, regardless of stature or who previously wore the number.
The following Raiders players have been named to the All-Pro team:
The following Raiders players have been named to the Pro Bowl:
The coaches and executives who have contributed to the history and success of the Los Angeles/Oakland/Las Vegas Raiders franchise are as follows:
Oligarchy
Oligarchy is a form of power structure in which power rests with a small number of people. These people may be distinguished by nobility, wealth, or education, or by corporate, religious, political, or military control. Such states are often controlled by families who pass their influence from one generation to the next, but inheritance is not a necessary condition of oligarchy.
Throughout history, oligarchies have often been tyrannical, relying on public obedience or oppression to exist. Aristotle pioneered the use of the term as meaning rule by the rich, for which another term commonly used today is plutocracy. In the early 20th century Robert Michels developed the theory that democracies, like all large organizations, have a tendency to turn into oligarchies. In his "iron law of oligarchy" he suggests that the necessary division of labor in large organizations leads to the establishment of a ruling class mostly concerned with protecting its own power.
This was already recognized by the Athenians in the fourth century BCE: after the restoration of democracy from oligarchical coups, they used the drawing of lots for selecting government officers to counteract that tendency toward oligarchy in government. They drew lots from large groups of adult volunteers to pick civil servants performing judicial, executive, and administrative functions ("archai", "boulē", and "hēliastai"). They even used lots for posts, such as judges and jurors in the political courts ("nomothetai"), which had the power to overrule the Assembly.
The exclusive consolidation of power by a dominant religious or ethnic minority has also been described as a form of oligarchy. Examples of this system include South Africa under "apartheid", Liberia under Americo-Liberians, the Sultanate of Zanzibar, and Rhodesia, where the installation of oligarchic rule by the descendants of foreign settlers was primarily regarded as a legacy of various forms of colonialism.
The modern United States has also been described as an oligarchy because economic elites and organized groups representing special interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.
A business group might be defined as an oligarch if it satisfies the following conditions:
(1) its owners are the largest private owners in the country;
(2) it possesses sufficient political power to promote its own interests;
(3) its owners control multiple businesses, which intensively coordinate their activities.
Since the collapse of the Soviet Union and the privatization of the economy in December 1991, privately owned Russia-based multinational corporations, including producers of petroleum, natural gas, and metals, have, in the view of many analysts, led to the rise of Russian oligarchs.
The Ukrainian oligarchs are a group of business oligarchs who quickly appeared on the economic and political scene of Ukraine after its independence in 1991. Overall there are 35 oligarchic groups.
The Zimbabwean oligarchs are a group of liberation war veterans who form the Zimbabwe African National Union – Patriotic Front (ZANU-PF), a colonial liberation party. The philosophy of the Zimbabwean government is that Zimbabwe can only be governed by a leader who took part in the pre-independence war. The motto of ZANU-PF in Shona is "Zimbabwe yakauya neropa", meaning Zimbabwe was born from the blood of the sons and daughters who died fighting for its independence. Under this philosophy, the born-free generation (those born since independence in 1980) has no birthright to rule Zimbabwe.
Some contemporary authors have characterized current conditions in the United States as oligarchic in nature. Simon Johnson wrote that "the reemergence of an American financial oligarchy is quite recent", a structure which he delineated as being the "most advanced" in the world. Jeffrey A. Winters wrote that "oligarchy and democracy operate within a single system, and American politics is a daily display of their interplay." The top 1% of the U.S. population by wealth in 2007 had a larger share of total income than at any time since 1928. In 2011, according to PolitiFact and others, the top 400 wealthiest Americans "have more wealth than half of all Americans combined."
In 1998, Bob Herbert of "The New York Times" referred to modern American plutocrats as "The Donor Class" (list of top donors) and defined the class, for the first time, as "a tiny group—just one-quarter of 1 percent of the population—and it is not representative of the rest of the nation. But its money buys plenty of access."
French economist Thomas Piketty states in his 2013 book, "Capital in the Twenty-First Century," that "the risk of a drift towards oligarchy is real and gives little reason for optimism about where the United States is headed."
A study conducted by political scientists Martin Gilens of Princeton University and Benjamin Page of Northwestern University was released in April 2014, which stated that their "analyses suggest that majorities of the American public actually have little influence over the policies our government adopts." The study analyzed nearly 1,800 policies enacted by the US government between 1981 and 2002 and compared them to the expressed preferences of the American public as opposed to wealthy Americans and large special interest groups. It found that wealthy individuals and organizations representing business interests have substantial political influence, while average citizens and mass-based interest groups have little to none. The study did concede that "Americans do enjoy many features central to democratic governance, such as regular elections, freedom of speech and association, and a widespread (if still contested) franchise." Gilens and Page do not characterize the US as an "oligarchy" per se; however, they do apply the concept of "civil oligarchy" as used by Jeffrey Winters with respect to the US. Winters has posited a comparative theory of "oligarchy" in which the wealthiest citizens – even in a "civil oligarchy" like the United States – dominate policy concerning crucial issues of wealth- and income-protection.
Gilens says that average citizens only get what they want if wealthy Americans and business-oriented interest groups also want it; and that when a policy favored by the majority of the American public is implemented, it is usually because the economic elites did not oppose it. Other studies have questioned the Page and Gilens study.
In a 2015 interview, former President Jimmy Carter stated that the United States is now "an oligarchy with unlimited political bribery" due to the "Citizens United v. FEC" ruling, which effectively removed limits on donations to political candidates. Wall Street spent a record $2 billion trying to influence the 2016 United States presidential election.
Oman
Oman, officially the Sultanate of Oman, is a country on the southeastern coast of the Arabian Peninsula in Western Asia. Located in a strategically important position at the mouth of the Persian Gulf, the country shares land borders with the United Arab Emirates to the northwest, Saudi Arabia to the west, and Yemen to the southwest, and shares marine borders with Iran and Pakistan. The coast is formed by the Arabian Sea on the southeast and the Gulf of Oman on the northeast. The Madha and Musandam exclaves are surrounded by the UAE on their land borders, with the Strait of Hormuz (which Oman shares with Iran) and the Gulf of Oman forming Musandam's coastal boundaries.
From the late 17th century, the Omani Sultanate was a powerful empire, vying with the Portuguese Empire and the British Empire for influence in the Persian Gulf and Indian Ocean. At its peak in the 19th century, Omani influence or control extended across the Strait of Hormuz to modern-day Iran and Pakistan, and as far south as Zanzibar. When its power declined in the 20th century, the sultanate came under the influence of the United Kingdom. For over 300 years, the relations built between the two empires were based on mutual benefit: the UK recognized Oman's geographical importance as a trading hub that secured British trading lanes in the Persian Gulf and Indian Ocean and protected the British empire in the Indian subcontinent. Historically, Muscat was the principal trading port of the Persian Gulf region and among the most important trading ports of the Indian Ocean.
Sultan Qaboos bin Said al Said was the hereditary leader of the country, which is an absolute monarchy, from 1970 until his death on 10 January 2020. His cousin, Haitham bin Tariq, was named as the country's new ruler following his death.
Oman is a member of the United Nations, the Arab League, the Gulf Cooperation Council, the Non-Aligned Movement and the Organisation of Islamic Cooperation. It has sizeable oil reserves, ranking 25th globally. In 2010, the United Nations Development Programme ranked Oman as the most improved nation in the world in terms of development during the preceding 40 years. A significant portion of its economy involves tourism and trading fish, dates and other agricultural produce. Oman is categorized as a high-income economy and ranks as the 69th most peaceful country in the world according to the Global Peace Index.
The origin of Oman's name is uncertain. It seems to be related to Pliny the Elder's Omana and Ptolemy's Omanon, both probably referring to the ancient Sohar. The city or region is typically etymologized in Arabic from "aamen" or "amoun" ("settled" people, as opposed to the Bedouin), although a number of eponymous founders have been proposed (Oman bin Ibrahim al-Khalil, Oman bin Siba' bin Yaghthan bin Ibrahim, Oman bin Qahtan, and the Biblical Lot), and others derive it from the name of a valley in Yemen at Ma'rib presumed to have been the origin of the city's founders, the Azd, a tribe that migrated from Yemen.
At Aybut Al Auwal, in the Dhofar Governorate of Oman, a site was discovered in 2011 containing more than 100 surface scatters of stone tools, belonging to a regionally specific African lithic industry—the late Nubian Complex—known previously only from the northeast and Horn of Africa. Two optically stimulated luminescence age estimates place the Arabian Nubian Complex at 106,000 years old. This supports the proposition that early human populations moved from Africa into Arabia during the Late Pleistocene.
In recent years surveys have uncovered Palaeolithic and Neolithic sites on the eastern coast. Main Palaeolithic sites include Saiwan-Ghunaim in the Barr al-Hikman. Archaeological remains are particularly numerous for the Bronze Age Umm an-Nar and Wadi Suq periods. Sites such as Bat show professional wheel-turned pottery, excellent hand-made stone vessels, a metals industry, and monumental architecture. The Early Iron Age (1300–300 BC) and Late Iron Age (100 BC–300 AD) show more differences than similarities to each other. Thereafter, until the coming of Ibadi Islam, little or nothing is known.
During the 8th century BC, it is believed that the Yaarub, a descendant of Qahtan, ruled the entire region of Yemen, including Oman. Wathil bin Himyar bin Abd-Shams-Saba bin Jashjub bin Yaarub later ruled Oman. It is thus believed that the Yaarubah were the first settlers in Oman from Yemen.
In the 1970s and 1980s scholars like John C. Wilkinson believed, by virtue of oral history, that in the 6th century BC the Achaemenids exerted control over the Omani peninsula, most likely ruling from a coastal centre such as Suhar. Central Oman has its own indigenous Samad Late Iron Age cultural assemblage, named eponymously after Samad al-Shan. In the northern part of the Oman Peninsula the Recent Pre-Islamic Period begins in the 3rd century BC and extends into the 3rd century AD. Whether or not the Persians brought south-eastern Arabia under their control is a moot point, since the lack of Persian finds speaks against this belief. M. Caussin de Perceval suggests that Shammir bin Wathil bin Himyar recognized the authority of Cyrus over Oman in 536 BC.
Sumerian tablets referred to Oman as "Magan", and the Akkadian language called it "Makan", names linked to Oman's ancient copper resources. "Mazoon", a Persian name, was used to refer to the region when it was part of the Sasanian Empire.
Over the centuries, tribes from western Arabia settled in Oman, making a living by fishing, farming, herding, or stock breeding, and many present-day Omani families trace their ancestral roots to other parts of Arabia. Arab migration to Oman started from north-western and south-western Arabia, and those who chose to settle had to compete with the indigenous population for the best arable land. When Arab tribes started to migrate to Oman, there were two distinct groups. One group, a segment of the Azd tribe, migrated from the southwest of Arabia in AD 120/200 following the collapse of the Marib Dam, while the other group migrated a few centuries before the birth of Islam from central and northern Arabia and was known as Nizari (Nejdi). Other historians believe that the Yaarubah, who like the Azd descended from Qahtan but belonged to an older branch, were the first settlers of Oman from Yemen, and that the Azd came later.
The Azd settlers in Oman are descendants of Nasr bin Azd, a branch of Yaarub bin Qahtan, and were later known as "the Al-Azd of Oman". Seventy years after the first Azd migration, another branch of the Azd under Malik bin Fahm, the founder of the Kingdom of the Tanukhites on the west of the Euphrates, is believed to have settled in Oman. According to Al-Kalbi, Malik bin Fahm was the first Azd settler, said to have first settled in Qalhat. By this account, Malik, with an armed force of more than 6,000 men and horses, fought against the Marzban, who served an ambiguously named Persian king, in the battle of Salut in Oman and eventually defeated the Persian forces. This account is, however, semi-legendary and seems to condense multiple centuries of migration and conflict into a story of two campaigns that exaggerates the success of the Arabs. The account may also represent an amalgamation of various traditions from not only the Arab tribes but also the region's original inhabitants. Furthermore, no date can be determined for the events of this story.
In the 7th century AD, Omanis came in contact with and accepted Islam. The conversion of Omanis to Islam is ascribed to Amr ibn al-As, who was sent by the prophet Muhammad during the Expedition of Zaid ibn Haritha (Hisma). Amr was dispatched to meet with Jaifer and Abd, the sons of Julanda who ruled Oman. They appear to have readily embraced Islam.
Omani Azd used to travel to Basra for trade, which was a centre of Islam during the Umayyad empire. The Omani Azd were granted a section of Basra, where they could settle and attend to their needs. Many of the Omani Azd who settled in Basra became wealthy merchants and, under their leader Muhallab bin Abi Sufrah, started to expand their influence eastwards towards Khorasan. Ibadhi Islam originated in Basra through its founder Abdullah ibn Ibada around the year 650 CE, and the Omani Azd in Iraq followed it. Later, Alhajjaj, the governor of Iraq, came into conflict with the Ibadhis, forcing them out to Oman. Among those who returned to Oman was the scholar Jaber bin Zaid; his return, and that of many other scholars, greatly enhanced the Ibadhi movement in Oman. Alhajjaj also made an attempt to subjugate Oman, which was then ruled by Suleiman and Said, the sons of Abbad bin Julanda. Alhajjaj dispatched Mujjaah bin Shiwah, who was confronted by Said bin Abbad. The confrontation devastated Said's army, and he and his forces retreated to the Jebel Akhdar. Mujjaah and his forces pursued them and succeeded in besieging them from a position in "Wadi Mastall". Mujjaah later moved towards the coast, where he confronted Suleiman bin Abbad; that battle was won by Suleiman's forces. Alhajjaj, however, sent another force under Abdulrahman bin Suleiman, which eventually won the war and took over the governance of Oman.
The first elective Imamate of Oman is believed to have been established shortly after the fall of the Umayyad dynasty in 750/755 AD, when Janah bin Abbada Alhinawi was elected. Other scholars claim that Janah bin Abbada served as a wali (governor) under the Umayyad dynasty and later ratified the Imamate, and that Julanda bin Masud was the first elected Imam of Oman, in AD 751. The first Imamate reached its peak power in the ninth century AD. The Imamate established a maritime empire whose fleet controlled the Gulf at a time when trade with the Abbasid dynasty, the East, and Africa flourished. The authority of the Imams started to decline due to power struggles, the constant interventions of the Abbasids, and the rise of the Seljuk Empire.
During the 11th and 12th centuries, Oman was controlled by the Seljuk Empire. The Seljuks were expelled in 1154, when the Nabhani dynasty came to power. The Nabhanis ruled as "muluk", or kings, while the Imams were reduced to largely symbolic significance. The capital of the dynasty was Bahla. The Banu Nabhan controlled the trade in frankincense on the overland route via Sohar to the Yabrin oasis, and then north to Bahrain, Baghdad, and Damascus. The mango tree was introduced to Oman during the time of the Nabhani dynasty by ElFellah bin Muhsin. The Nabhani dynasty started to deteriorate in 1507, when Portuguese colonisers captured the coastal city of Muscat and gradually extended their control along the coast up to Sohar in the north and down to Sur in the southeast. Other historians argue that the Nabhani dynasty ended earlier, in AD 1435, when conflicts between the dynasty and the Alhinawis arose, leading to the restoration of the elective Imamate.
A decade after Vasco da Gama's successful voyage around the Cape of Good Hope and to India in 1497–98, the Portuguese arrived in Oman and occupied Muscat for a 143-year period, from 1507 to 1650. In need of an outpost to protect their sea lanes, the Portuguese built up and fortified the city, where remnants of their colonial architectural style still exist. An Ottoman fleet captured Muscat in 1552, during the fight for control of the Persian Gulf and the Indian Ocean.
The Ottoman Turks again temporarily captured Muscat from the Portuguese in 1581, holding it until 1588. During the 17th century, the Omanis were reunited by the Yaruba Imams. Nasir bin Murshid became the first Yaruba Imam in 1624, when he was elected in Rustaq. Nasir's energy and perseverance are believed to have earned him the election. Imam Nasir succeeded in the 1650s in forcing the Portuguese colonisers out of Oman. The Omanis over time established a maritime empire that expelled the Portuguese from East Africa, which became an Omani colony. To capture Zanzibar, Saif bin Sultan, the Imam of Oman, pressed down the Swahili Coast. A major obstacle to his progress was Fort Jesus, housing the garrison of a Portuguese settlement at Mombasa. After a two-year siege, the fort fell to Saif bin Sultan in 1698. Thereafter the Omanis easily ejected the Portuguese from other African coastal regions, including Kilwa and Pemba. Saif bin Sultan occupied Bahrain in 1700, and Qeshm was captured in 1720. Rivalry over power within the house of Yaruba after the death of Imam Sultan in 1718 weakened the dynasty. With the power of the Yaruba dynasty dwindling, Imam Saif bin Sultan II eventually asked Nader Shah of Persia for help against his rivals. A Persian force arrived in March 1737 to aid Saif. From their base at Julfar, the Persian forces eventually rebelled against the Yaruba in 1743. The Persian empire then colonised Oman for a short period, until 1747.
After the expulsion of the Persians, Ahmed bin Sa'id Albusaidi became the elected Imam of Oman in 1749, with Rustaq serving as the capital. As under the Yaruba dynasty, the Omanis retained the elective system but gave preference to a member of the ruling family, provided he was deemed qualified. Following Imam Ahmed's death in 1783, his son Said bin Ahmed became the elected Imam. Said's son, Seyyid Hamed bin Said, overthrew the Imam's representative in Muscat and took possession of the Muscat fortress, ruling as "Seyyid". Afterwards, Seyyid Sultan bin Ahmed, Seyyid Hamed's uncle, took power, and was in turn succeeded by Seyyid Said bin Sultan. During the entire 19th century, apart from Imam Said bin Ahmed, who retained the title until he died in 1803, Azzan bin Qais was the only elected Imam of Oman; his rule started in 1868. The British, however, refused to accept Imam Azzan as ruler, and this refusal played an instrumental role in his deposition in 1871 by a sultan whom Britain deemed more acceptable.
Oman's Imam Sultan, defeated ruler of Muscat, was granted sovereignty over Gwadar, an area of modern-day Pakistan. This coastal city is located in the Makran region of what is now the far southwestern corner of Pakistan, near the present-day border of Iran, at the mouth of the Gulf of Oman. After Muscat was regained, this sovereignty was continued through an appointed "wali" ("governor").
The British empire was keen to dominate southeast Arabia in order to stifle the growing power of other European states and to curb the Omani maritime power that had grown during the 17th century. Starting in the late 18th century, the British empire began to conclude a series of treaties with the sultans with the objective of advancing British political and economic interests in Muscat, while granting the sultans military protection. In 1798, the first treaty between the British East India Company and the Albusaidi family was signed by Sultan bin Ahmed. It was intended to block French and Dutch commercial competition and to obtain a concession to build a British factory at Bandar Abbas. A second treaty, signed in 1800, stipulated that a British representative would reside at the port of Muscat and manage all external affairs with other states. The British influence that grew over Muscat during the nineteenth century weakened the Omani Empire.
In 1854, a deed ceding the Omani Kuria Muria islands to Britain was signed by the sultan of Muscat and the British government. The British government achieved predominant control over Muscat, which, for the most part, impeded competition from other nations. Between 1862 and 1892, the Political Residents Lewis Pelly and Edward Ross played an instrumental role in securing British supremacy over the Persian Gulf and Muscat through a system of indirect governance. By the end of the 19th century, British influence had grown to the point that the sultans became heavily dependent on British loans and signed declarations to consult the British government on all important matters. The Sultanate thus became a "de facto" British colony.
Zanzibar was a valuable property as the main slave market of the Swahili Coast, and became an increasingly important part of the Omani empire, a fact reflected in the decision of the 19th-century sultan of Muscat, Sa'id ibn Sultan, to make it his main place of residence in 1837. Sa'id built impressive palaces and gardens in Zanzibar. Rivalry between his two sons was resolved, with the help of forceful British diplomacy, when one of them, Majid, succeeded to Zanzibar and to the many regions claimed by the family on the Swahili Coast, while the other, Thuwaini, inherited Muscat and Oman. Zanzibari influence in the Comoros archipelago in the Indian Ocean indirectly introduced Omani customs to Comorian culture, including clothing traditions and wedding ceremonies. In 1856, under British direction, Zanzibar and Muscat became two separate sultanates.
The Al Hajar Mountains, of which the Jebel Akhdar is a part, separate the country into two distinct regions: the interior, known as Oman, and the coastal area dominated by the capital, Muscat. Growing British imperial control over Muscat and Oman during the 19th century led to a revival of the Imamate cause in the interior, which has reappeared in cycles for more than 1,200 years. The British Political Agent residing in Muscat attributed the alienation of the interior of Oman to the vast influence of the British government over Muscat, which he described as entirely self-interested and showing no regard for the social and political conditions of the locals. In 1913, Imam Salim Alkharusi instigated an anti-Muscat rebellion that lasted until 1920, when the Imamate established peace with the Sultanate by signing the Treaty of Seeb. The treaty was brokered by Britain, which at that time had no economic interest in the interior of Oman. It granted autonomous rule to the Imamate in the interior of Oman and recognized the sovereignty of the Sultanate of Muscat over the coast of Oman. In 1920, Imam Salim Alkharusi died and Muhammad Alkhalili was elected.
On 10 January 1923, an agreement between the Sultanate and the British government was signed under which the Sultanate had to consult the British political agent residing in Muscat and obtain the approval of the High Government of India before extracting oil in the Sultanate. On 31 July 1928, the Red Line Agreement was signed between the Anglo-Persian Company (later renamed British Petroleum), Royal Dutch/Shell, Compagnie Française des Pétroles (later renamed Total), the Near East Development Corporation (later renamed ExxonMobil) and Calouste Gulbenkian (an Armenian businessman) to collectively produce oil in the post-Ottoman Empire region, which included the Arabian peninsula; each of the four major companies held 23.75 percent of the shares, while Calouste Gulbenkian held the remaining 5 percent. The agreement stipulated that none of the signatories was allowed to pursue oil concessions within the agreed area without including all the other stakeholders. In 1929, the members of the agreement established the Iraq Petroleum Company (IPC). On 13 November 1931, Sultan Taimur bin Faisal abdicated.
Said bin Taimur officially became the sultan of Muscat on 10 February 1932. His rule, backed by the British government, was characterized as feudal, reactionary and isolationist. The British government maintained vast administrative control over the Sultanate: the defence secretary, the chief of intelligence, the chief adviser to the sultan and all ministers except one were British. In 1937, an agreement was signed between the sultan and the Iraq Petroleum Company (IPC), a consortium of oil companies that was 23.75 percent British-owned, granting oil concessions to IPC. After failing to discover oil in the Sultanate, IPC took an intense interest in some promising geological formations near Fahud, an area located within the Imamate, and offered financial support to the sultan to raise an armed force against any potential resistance by the Imamate.
In 1955, the exclave coastal Makran strip acceded to Pakistan and was made a district of its Balochistan province, while Gwadar remained in Oman. On 8 September 1958, Pakistan purchased the Gwadar enclave from Oman for US$3 million. Gwadar then became a tehsil in the Makran district.
Sultan Said bin Taimur expressed to the British government his interest in occupying the Imamate immediately after the death of Imam Alkhalili, taking advantage of any instability that might arise within the Imamate when elections were due. The British political agent in Muscat believed that the only way to gain access to the oil reserves in the interior was to assist the sultan in taking over the Imamate. In 1946, the British government offered arms and ammunition, auxiliary supplies and officers to prepare the sultan to attack the interior of Oman. In May 1954, Imam Alkhalili died and Ghalib Alhinai became the elected Imam of the Imamate of Oman. Relations between the sultan of Muscat, Said bin Taimur, and Imam Ghalib Alhinai frayed over their dispute about oil concessions. Under the terms of the 1920 Treaty of Seeb, the sultan, backed by the British government, claimed all dealings with the oil company as his prerogative. The Imam, on the other hand, maintained that since the oil lay in the Imamate's territory, anything concerning it was an internal matter.
In December 1955, sultan Said bin Taimur sent troops of the Muscat and Oman Field Force to occupy the main centres in Oman, including Nizwa, the capital of the Imamate of Oman, and Ibri. The Omanis in the interior led by Imam Ghalib Alhinai, Talib Alhinai, the brother of the Imam and the Wali (governor) of Rustaq, and Suleiman bin Hamyar, who was the Wali (governor) of Jebel Akhdar, defended the Imamate of Oman in the Jebel Akhdar War against British-backed attacks by the Sultanate. In July 1957, the Sultan's forces were withdrawing, but they were repeatedly ambushed, sustaining heavy casualties. Sultan Said, however, with the intervention of British infantry (two companies of the Cameronians), armoured car detachments from the British Army and RAF aircraft, was able to suppress the rebellion. The Imamate's forces retreated to the inaccessible Jebel Akhdar.
Colonel David Smiley, who had been seconded to organise the Sultan's Armed Forces, managed to isolate the mountain in autumn 1958 and found a route to the plateau from Wadi Bani Kharus. On 4 August 1957, the British Foreign Secretary had given approval for air strikes to be carried out without prior warning to the locals residing in the interior of Oman. Between July and December 1958, the British RAF made 1,635 raids, dropping 1,094 tons and firing 900 rockets at the interior of Oman, targeting insurgents, mountain-top villages, water channels and crops. On 27 January 1959, the Sultanate's forces occupied the mountain in a surprise operation. Ghalib, Talib and Sulaiman managed to escape to Saudi Arabia, where the Imamate's cause was promoted until the 1970s. The Imamate presented the case of Oman to the Arab League and the United Nations. On 11 December 1963, the UN General Assembly decided to establish an Ad-Hoc Committee on Oman to study the 'Question of Oman' and report back to the General Assembly. The General Assembly adopted the 'Question of Oman' resolution in 1965, 1966 and again in 1967, calling upon the British government to cease all repressive action against the locals and end British control over Oman, and reaffirming the inalienable right of the Omani people to self-determination and independence.
Oil reserves in Dhofar were discovered in 1964 and extraction began in 1967. In the Dhofar Rebellion, which began in 1965, pro-Soviet forces were pitted against government troops. As the rebellion threatened the Sultan's control of Dhofar, Sultan Said bin Taimur was deposed in a bloodless coup (1970) by his son Qaboos bin Said, who expanded the Sultan of Oman's Armed Forces, modernised the state's administration and introduced social reforms. The uprising was finally put down in 1975 with the help of forces from Iran, Jordan, Pakistan and the British Royal Air Force, army and Special Air Service.
After deposing his father in 1970, Sultan Qaboos opened up the country, embarked on economic reforms, and followed a policy of modernisation marked by increased spending on health, education and welfare. Slavery, once a cornerstone of the country's trade and development, was outlawed in 1970.
In 1981 Oman became a founding member of the six-nation Gulf Cooperation Council. Political reforms were eventually introduced. Historically, voters had been chosen from among tribal leaders, intellectuals and businessmen. In 1997 Sultan Qaboos decreed that women could vote for, and stand for election to, the Majlis al-Shura, the Consultative Assembly of Oman. Two women were duly elected to the body.
In 2002, voting rights were extended to all citizens over the age of 21, and the first elections to the Consultative Assembly under the new rules were held in 2003. In 2004, the Sultan appointed Oman's first female minister with portfolio, Sheikha Aisha bint Khalfan bin Jameel al-Sayabiyah, who was given charge of the National Authority for Industrial Craftsmanship, an office that seeks to preserve and promote Oman's traditional crafts and stimulate industry. Despite these changes, there was little change to the actual political makeup of the government, and the Sultan continued to rule by decree. Nearly 100 suspected Islamists were arrested in 2005, and 31 people were convicted of trying to overthrow the government; they were ultimately pardoned in June of the same year.
Inspired by the Arab Spring uprisings taking place throughout the region, protests occurred in Oman during the early months of 2011. Although they did not call for the ousting of the regime, demonstrators demanded political reforms, improved living conditions and the creation of more jobs. They were dispersed by riot police in February 2011. Sultan Qaboos reacted by promising jobs and benefits. In October 2011, elections were held to the Consultative Assembly, to which Sultan Qaboos promised greater powers. The following year, the government began a crackdown on internet criticism. In September 2012, trials began of 'activists' accused of posting "abusive and provocative" criticism of the government online. Six were given jail terms of 12–18 months and fines of around $2,500 each.
Qaboos died on 10 January 2020, and the government declared three days of national mourning. He was buried the next day.
On 11 January 2020, Qaboos was succeeded by his first cousin Sultan Haitham bin Tariq Al Said.
Oman lies between latitudes 16° and 28° N, and longitudes 52° and 60° E. A vast gravel desert plain covers most of central Oman, with mountain ranges along the north (Al Hajar Mountains) and southeast coast (Qara or Dhofar Mountains), where the country's main cities are located: the capital city Muscat, Sohar and Sur in the north, and Salalah in the south. Oman's climate is hot and dry in the interior and humid along the coast. During past epochs, Oman was covered by ocean, as evidenced by the large numbers of fossilized shells found in areas of the desert away from the modern coastline.
The Musandam (Musandem) peninsula, an exclave strategically located on the Strait of Hormuz, is separated from the rest of Oman by the United Arab Emirates. The series of small towns known collectively as Dibba are the gateway to the Musandam peninsula by land and to its fishing villages by sea; boats may be hired at Khasab for trips into the peninsula.
Oman's other exclave, inside UAE territory, known as Madha, located halfway between the Musandam Peninsula and the main body of Oman, is part of the Musandam governorate, covering approximately . Madha's boundary was settled in 1969, with the north-east corner of Madha barely from the Fujairah road. Within the Madha exclave is a UAE enclave called Nahwa, belonging to the Emirate of Sharjah, situated about along a dirt track west of the town of New Madha, and consisting of about forty houses with a clinic and telephone exchange.
The central desert of Oman is an important source of meteorites for scientific analysis.
Like the rest of the Persian Gulf, Oman generally has one of the hottest climates in the world—with summer temperatures in Muscat and northern Oman averaging . Oman receives little rainfall, with annual rainfall in Muscat averaging , occurring mostly in January. In the south, the Dhofar Mountains area near Salalah has a tropical-like climate and receives seasonal rainfall from late June to late September as a result of monsoon winds from the Indian Ocean, leaving the summer air saturated with cool moisture and heavy fog. Summer temperatures in Salalah range from —relatively cool compared to northern Oman.
The mountain areas receive more rainfall, and annual rainfall on the higher parts of the Jabal Akhdar probably exceeds . Low temperatures in the mountainous areas lead to snow cover once every few years. Some parts of the coast, particularly near the island of Masirah, sometimes receive no rain at all within the course of a year. The climate is generally very hot, with temperatures reaching around (peak) in the hot season, from May to September.
On 26 June 2018 the city of Qurayyat set the record for highest minimum temperature in a 24-hour period, 42.6 °C (108.7 °F).
Desert shrub and desert grass, common to southern Arabia, are found in Oman, but vegetation is sparse in the interior plateau, which is largely gravel desert. The greater monsoon rainfall in Dhofar and the mountains makes the growth there more luxuriant during summer; coconut palms grow plentifully on the coastal plains of Dhofar and frankincense is produced in the hills, with abundant oleander and varieties of acacia. The Al Hajar Mountains are a distinct ecoregion, the highest points in eastern Arabia with wildlife including the Arabian tahr.
Indigenous mammals include the leopard, hyena, fox, wolf, hare, oryx and ibex. Birds include the vulture, eagle, stork, bustard, Arabian partridge, bee eater, falcon and sunbird. In 2001, Oman had nine endangered species of mammals, five endangered types of birds, and nineteen threatened plant species. Decrees have been passed to protect endangered species, including the Arabian leopard, Arabian oryx, mountain gazelle, goitered gazelle, Arabian tahr, green sea turtle, hawksbill turtle and olive ridley turtle. However, the Arabian Oryx Sanctuary was the first site ever to be deleted from UNESCO's World Heritage List, following the government's 2007 decision to reduce the site's area by 90% in order to clear the way for oil prospectors.
In recent years, Oman has become one of the newer hot spots for whale watching, highlighting the critically endangered Arabian humpback whale (the most isolated and only non-migratory humpback population in the world), as well as sperm whales and pygmy blue whales.
Drought and limited rainfall contribute to shortages in the nation's water supply. Maintaining an adequate supply of water for agricultural and domestic use is one of Oman's most pressing environmental problems, given its limited renewable water resources. Some 94% of available water is used in farming and 2% for industrial activity, with the majority sourced from fossil water in the desert areas and spring water in the hills and mountains.
In terms of climate action, major challenges remain, according to the 2019 United Nations Sustainable Development index. Rates of emissions from energy (tonnes per capita) and of emissions embodied in fossil-fuel exports (kg per capita) are very high, while rates of imported emissions (tonnes per capita) and of people affected by climate-related disasters (per 100,000 people) are low.
Drinking water is available throughout Oman, either piped or delivered. The soil in coastal plains such as Salalah has shown increased levels of salinity, due to over-exploitation of groundwater and encroachment of seawater on the water table. Pollution of beaches and other coastal areas by oil tanker traffic through the Strait of Hormuz and the Gulf of Oman is also a persistent concern.
Local and national entities have noted unethical treatment of animals in Oman. In particular, stray dogs (and to a lesser extent, stray cats) are often the victims of torture, abuse or neglect. Currently, the only approved method of decreasing the stray dog population is shooting by police officers. The Oman government has refused to implement a spay and neuter programme or create any animal shelters in the country. Cats, while seen as more acceptable than dogs, are viewed as pests and frequently die of starvation or illness.
In 2019, the World Health Organization (WHO) ranked Oman as the least polluted country in the Arab world, with a score of 37.7 on the pollution index; the country ranked 112th on the list of the most polluted countries in Asia.
Oman is a unitary state and an absolute monarchy, in which all legislative, executive and judiciary power ultimately rests in the hands of the hereditary Sultan. Consequently, Freedom House has routinely rated the country "Not Free".
The sultan is the head of state and directly controls the foreign affairs and defence portfolios. He has absolute power and issues laws by decree.
Oman is an absolute monarchy, with the Sultan's word having the force of law. The judiciary branch is subordinate to the Sultan. According to Oman's constitution, Sharia law is one of the sources of legislation. Sharia court departments within the civil court system are responsible for family-law matters, such as divorce and inheritance.
Oman does not have a system of checks and balances, and thus no separation of powers. All power is concentrated in the Sultan, who is also chief of staff of the armed forces, Minister of Defence, Minister of Foreign Affairs and chairman of the Central Bank. All legislation since 1970 has been promulgated through royal decrees, including the 1996 Basic Law. The Sultan appoints judges, and can grant pardons and commute sentences. The Sultan's authority is inviolable and the Sultan expects total subordination to his will.
The administration of justice is highly personalized, with limited due process protections, especially in political and security-related cases. The Basic Statute of the State is supposedly the cornerstone of the Omani legal system and it operates as a constitution for the country. The Basic Statute was issued in 1996 and thus far has only been amended once, in 2011, in response to protests.
Though Oman's legal code theoretically protects civil liberties and personal freedoms, both are regularly ignored by the regime. Women and children face legal discrimination in many areas. Women are excluded from certain state benefits, such as housing loans, and are refused equal rights under the personal status law. Women also experience restrictions on their self-determination in respect to health and reproductive rights.
The Omani legislature is the bicameral Council of Oman, consisting of an upper chamber, the Council of State (Majlis ad-Dawlah), and a lower chamber, the Consultative Council (Majlis ash-Shoura). Political parties are banned. The upper chamber has 71 members, appointed by the Sultan from among prominent Omanis; it has only advisory powers. The 84 members of the Consultative Council are elected by popular vote to serve four-year terms, but the Sultan makes the final selections and can negotiate the election results. The members are appointed for three-year terms, which may be renewed once. The last elections were held on 27 October 2019, and the next are due in October 2023. Oman's national anthem, "As-Salam as-Sultani", is dedicated to former Sultan Qaboos.
Homosexual acts are illegal in Oman. The practice of torture is widespread in Oman state penal institutions and has become the state's typical reaction to independent political expression. Torture methods in use in Oman include mock execution, beating, hooding, solitary confinement, subjection to extremes of temperature and to constant noise, abuse and humiliation. There have been numerous reports of torture and other inhumane forms of punishment perpetrated by Omani security forces on protesters and detainees. Several prisoners detained in 2012 complained of sleep deprivation, extreme temperatures and solitary confinement. Omani authorities kept Sultan al-Saadi, a social media activist, in solitary confinement, denied him access to his lawyer and family, forced him to wear a black bag over his head whenever he left his cell, including when using the toilet, and told him his family had "forsaken" him and asked for him to be imprisoned.
The Omani government decides who can or cannot be a journalist, and this permission can be withdrawn at any time. Censorship and self-censorship are a constant factor. Omanis have limited access to political information through the media, and access to news and information can be problematic: journalists have to be content with news compiled by the official news agency on some issues. Through a decree by the Sultan, the government has extended its control over the media to blogs and other websites. Omanis cannot hold a public meeting without the government's approval, and those who want to set up a non-governmental organisation of any kind need a licence. To get a licence, they have to demonstrate that the organisation is "for legitimate objectives" and not "inimical to the social order". The Omani government does not permit the formation of independent civil society associations. Human Rights Watch reported in 2016 that an Omani court had sentenced three journalists to prison and ordered the permanent closure of their newspaper over an article alleging corruption in the judiciary.
The law prohibits criticism of the Sultan and government in any form or medium. Oman's police do not need search warrants to enter people's homes. The law does not provide citizens with the right to change their government. The Sultan retains ultimate authority on all foreign and domestic issues. Government officials are not subject to financial disclosure laws. Libel laws and concerns for national security have been used to suppress criticism of government figures and politically objectionable views. Publication of books is limited and the government restricts their importation and distribution, as with other media products.
Merely mentioning the existence of such restrictions can land Omanis in trouble. In 2009, a web publisher was fined and given a suspended jail sentence for revealing that a supposedly live TV programme was actually pre-recorded to eliminate any criticisms of the government.
Faced with so many restrictions, Omanis have resorted to unconventional methods of expressing their views, sometimes using donkeys as mobile billboards. Writing about Gulf rulers in 2001, Dale Eickelman observed: "Only in Oman has the occasional donkey… been used as a mobile billboard to express anti-regime sentiments. There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed." Some people have also been arrested for allegedly spreading fake news about the COVID-19 pandemic in Oman.
Omani citizens need government permission to marry foreigners. The Ministry of Interior requires Omani citizens to obtain permission to marry foreigners (except nationals of GCC countries), and permission is not automatically granted. Marriage to a foreigner abroad without ministry approval may result in denial of entry for the foreign spouse at the border and preclude the couple's children from claiming citizenship rights; it may also result in a bar from government employment and a fine of 2,000 rials ($5,200).
In August 2014, the Omani writer and human rights defender Mohammed Alfazari, founder and editor-in-chief of the e-magazine "Mowatin" ("Citizen"), disappeared after going to the police station in the Al-Qurum district of Muscat. For several months the Omani government denied his detention and refused to disclose information about his whereabouts or condition. On 17 July 2015, Alfazari left Oman seeking political asylum in the UK after a travel ban was issued against him without any reasons being given, and after his official documents, including his national ID and passport, had been confiscated for more than eight months. There have been further reports of politically motivated disappearances in the country. In 2012, armed security forces arrested Sultan al-Saadi, a social media activist. According to reports, authorities detained him at an unknown location for one month for comments he posted online that were critical of the government. Authorities had previously arrested al-Saadi in 2011 for participating in protests, and again in 2012 for posting comments online deemed insulting to Sultan Qaboos. In May 2012, security forces detained Ismael al-Meqbali, Habiba al-Hinai and Yaqoub al-Kharusi, human rights activists who were visiting striking oil workers. Authorities released al-Hinai and al-Kharusi shortly after their detention but did not inform al-Meqbali's friends and family of his whereabouts for weeks; authorities pardoned al-Meqbali in March. In December 2013, a Yemeni national disappeared in Oman after being arrested at a checkpoint in Dhofar Governorate. Omani authorities refuse to acknowledge his detention, and his whereabouts and condition remain unknown.
The National Human Rights Commission, established in 2008, is not independent from the regime. It is chaired by the former deputy inspector general of Police and Customs and its members are appointed by royal decree. In June 2012, one of its members requested that she be relieved of her duties because she disagreed with a statement made by the Commission justifying the arrest of intellectuals and bloggers and the restriction of freedom of expression in the name of respect for "the principles of religion and customs of the country".
Since the beginning of the "Omani Spring" in January 2011, a number of serious violations of civil rights have been reported, amounting to a critical deterioration of the human rights situation. Prisons are inaccessible to independent monitors, and members of the independent Omani Group of Human Rights have been harassed, arrested and sentenced to jail. There have been numerous testimonies of torture and other inhumane forms of punishment perpetrated by security forces on protesters and detainees, all of whom were peacefully exercising their right to freedom of expression and assembly. Although authorities must obtain court orders to hold suspects in pre-trial detention, they do not do so regularly. The penal code was amended in October 2011 to allow the arrest and detention of individuals without an arrest warrant from public prosecutors.
In January 2014, Omani intelligence agents arrested a Bahraini actor and handed him over to the Bahraini authorities on the same day of his arrest. The actor has been subjected to a forced disappearance, his whereabouts and condition remain unknown.
The plight of domestic workers in Oman is a taboo subject. In 2011, the Philippine government determined that of all the countries in the Middle East, only Oman and Israel qualified as safe for Filipino migrants. In 2012, it was reported that an Indian migrant in Oman committed suicide every six days, prompting a campaign urging the authorities to address the migrant suicide rate. In the 2014 Global Slavery Index, Oman ranked 45th, with an estimated 26,000 people in slavery. The descendants of servant tribes and slaves are victims of widespread discrimination; Oman was one of the last countries to abolish slavery, in 1970.
Since 1970, Oman has pursued a moderate foreign policy, and has expanded its diplomatic relations dramatically. Oman is among the very few Arab countries that have maintained friendly ties with Iran. WikiLeaks disclosed US diplomatic cables which state that Oman helped free British sailors captured by Iran's navy in 2007. The same cables also portray the Omani government as wishing to maintain cordial relations with Iran, and as having consistently resisted US diplomatic pressure to adopt a sterner stance. Yusuf bin Alawi bin Abdullah is the Sultanate's Minister Responsible for Foreign Affairs.
Oman allowed the British Royal Navy and Indian Navy access to the port facilities of Al Duqm Port & Drydock.
Oman's military and security expenditure as a percentage of GDP in 2015 was 16.5 percent, the highest rate in the world that year. Oman's average military spending as a percentage of GDP between 2016 and 2018 was around 10 percent, while the world average over the same period was 2.2 percent.
Oman's military manpower totalled 44,100 in 2006, including 25,000 men in the army, 4,200 sailors in the navy, and an air force with 4,100 personnel. The Royal Household maintained 5,000 Guards, 1,000 in Special Forces, 150 sailors in the Royal Yacht fleet, and 250 pilots and ground personnel in the Royal Flight squadrons. Oman also maintains a modestly sized paramilitary force of 4,400 men.
The Royal Army of Oman had 25,000 active personnel in 2006, plus a small contingent of Royal Household troops. Despite comparatively large military spending, Oman has been relatively slow to modernise its forces. It has a relatively limited number of tanks, including 6 M60A1, 73 M60A3 and 38 Challenger 2 main battle tanks, as well as 37 aging Scorpion light tanks.
The Royal Air Force of Oman has approximately 4,100 men, with only 36 combat aircraft and no armed helicopters. Combat aircraft include 20 aging Jaguars, 12 Hawk Mk 203s, 4 Hawk Mk 103s and 12 PC-9 turboprop trainers with a limited combat capability. It has one squadron of 12 F-16C/D aircraft. Oman also has 4 A202-18 Bravos and 8 MFI-17B Mushshaqs.
The Royal Navy of Oman had 4,200 men in 2000 and is headquartered at Seeb, with bases at Ahwi, Ghanam Island, Mussandam and Salalah. In 2006, Oman had 10 surface combat vessels, including two 1,450-ton "Qahir"-class corvettes and eight ocean-going patrol boats. The Omani Navy had one 2,500-ton "Nasr al Bahr"-class LSL (240 troops, 7 tanks) with a helicopter deck, and at least four landing craft. Oman ordered three "Khareef"-class corvettes from the VT Group for £400 million in 2007; they were built at Portsmouth. In 2010 Oman spent US$4.074 billion on military expenditures, 8.5% of gross domestic product. The sultanate has a long history of association with the British military and defence industry. According to SIPRI, Oman was the 23rd-largest arms importer from 2012 to 2016.
The Sultanate is administratively divided into eleven governorates. Governorates are, in turn, divided into 60 wilayats.
Oman's Basic Statute of the State expresses in Article 11 that the "national economy is based on justice and the principles of a free economy." By regional standards, Oman has a relatively diversified economy, but remains dependent on oil exports. In terms of monetary value, mineral fuels accounted for 82.2 percent of total product exports in 2018. Tourism is the fastest-growing industry in Oman. Other sources of income, agriculture and industry, are small in comparison and account for less than 1% of the country's exports, but diversification is seen as a priority by the government. Agriculture, often subsistence in its character, produces dates, limes, grains and vegetables, but with less than 1% of the country under cultivation, Oman is likely to remain a net importer of food.
Oman's socio-economic structure has been described as a hyper-centralized rentier welfare state. The largest 10 percent of corporations in Oman employ almost 80 percent of Omani nationals in the private sector, and half of private-sector jobs are classified as elementary. One third of employed Omanis work in the private sector, while the remaining majority work in the public sector. Such a hyper-centralized structure produces a monopoly-like economy, which hinders a healthy competitive environment between businesses.
Since a slump in oil prices in 1998, Oman has made active plans to diversify its economy and has placed greater emphasis on other areas of industry, namely tourism and infrastructure. A 2020 Vision to diversify the economy, established in 1995, targeted a decrease in oil's share of GDP to less than 10 percent by 2020, but it was rendered obsolete in 2011; Oman subsequently established a 2040 Vision.
A free-trade agreement with the United States took effect on 1 January 2009, eliminating tariff barriers on all consumer and industrial products and providing strong protections for foreign businesses investing in Oman. Tourism, another source of Oman's revenue, is on the rise. A popular event is the Khareef Festival, held during the monsoon season (August) in Salalah, Dhofar, 1,200 km from the capital city of Muscat, and similar to the Muscat Festival. During this event the mountains surrounding Salalah are popular with tourists, thanks to the cool weather and lush greenery rarely found anywhere else in Oman.
Oman's foreign workers send an estimated US$10 billion annually to their home states in Asia and Africa; more than half of them earn a monthly wage of less than US$400. The largest foreign community comes from the Indian states of Kerala, Tamil Nadu, Karnataka, Maharashtra, Gujarat and the Punjab, representing more than half of the entire workforce in Oman. Salaries for overseas workers are known to be lower than for Omani nationals, though still two to five times higher than for the equivalent job in India.
In terms of foreign direct investment (FDI), total investments in 2017 exceeded US$24 billion. The highest share of FDI went to the oil and gas sector, at around US$13 billion (54.2 percent), followed by financial intermediation at US$3.66 billion (15.3 percent). FDI is dominated by the United Kingdom, with an estimated value of US$11.56 billion (48 percent), followed by the UAE with US$2.6 billion (10.8 percent) and Kuwait with US$1.1 billion (4.6 percent).
In 2018, Oman had a budget deficit of 32 percent of total revenue and government debt equal to 47.5 percent of GDP. Oman's health spending as a share of GDP between 2015 and 2016 averaged 4.3 percent, against a world average of 10 percent. Its research and development spending between 2016 and 2017 averaged 0.24 percent, significantly lower than the world average of 2.2 percent over the same period. Government spending on education was 6.11 percent of GDP in 2016, while the world average was 4.8 percent (2015).
Oman's proved reserves of petroleum total about 5.5 billion barrels, the 25th largest in the world. Oil is extracted and processed by Petroleum Development Oman (PDO); proven oil reserves have held approximately steady, although oil production has been declining. The Ministry of Oil and Gas is responsible for all oil and gas infrastructure and projects in Oman. Following the 1970s energy crisis, Oman doubled its oil output between 1979 and 1985.
In 2018, oil and gas represented 71 percent of the government's revenues, down from 72 percent in 2016, a drop of one percentage point. The oil and gas sector represented 30.1 percent of nominal GDP in 2017.
Between 2000 and 2007, production fell by more than 26 percent, from 972,000 to 714,800 barrels per day. Production recovered to 816,000 barrels per day in 2009 and 930,000 barrels per day in 2012. Oman's natural gas reserves are estimated at 849.5 billion cubic metres, ranking 28th in the world, and production in 2008 was about 24 billion cubic metres per year.
In September 2019, it was confirmed that Oman would become the first Middle Eastern country to host the International Gas Union Research Conference (IGRC 2020). This 16th iteration of the event will be held on 24–26 February 2020, in collaboration with Oman LNG, under the auspices of the Ministry of Oil and Gas.
Tourism in Oman has grown considerably in recent years, and it is expected to become one of the largest industries in the country. The World Travel & Tourism Council has stated that Oman is the fastest-growing tourism destination in the Middle East.
Tourism contributed 2.8 percent of Omani GDP in 2016, growing from RO 505 million (US$1.3 billion) in 2009 to RO 719 million (US$1.8 billion) in 2017, a 42.3 percent increase. Citizens of the Gulf Cooperation Council (GCC), including Omanis residing outside Oman, represent the highest share of all tourists visiting Oman, estimated at 48 percent; the second-highest number of visitors come from other Asian countries, accounting for 17 percent of the total. A challenge to tourism development in Oman is the reliance on the government-owned firm Omran as the key developer of the sector, which potentially creates a barrier to entry for private-sector actors and a crowding-out effect. Another key issue for the sector is deepening the understanding of Oman's ecosystems and biodiversity to guarantee their protection and preservation.
Oman has one of the most diverse environments in the Middle East with various tourist attractions and is particularly well known for adventure and cultural tourism. Muscat, the capital of Oman, was named the second best city to visit in the world in 2012 by the travel guide publisher Lonely Planet. Muscat also was chosen as the Capital of Arab Tourism of 2012.
In November 2019, Oman made visas on arrival the exception rather than the rule and introduced e-visas for tourists of all nationalities. Under the new rules, visitors are required to apply for a visa in advance through Oman's online government portal.
In industry, innovation and infrastructure, Oman still faces "significant challenges", according to the 2019 United Nations Sustainable Development Goals index. Oman scores high on rates of internet use, mobile broadband subscriptions and logistics performance, and on the average ranking of its top three universities, but low on the rate of scientific and technical publications and on research and development spending. Oman's manufacturing value added as a share of GDP in 2016 was 8.4 percent, lower than the averages for the Arab world (9.8 percent) and the world (15.6 percent). In terms of research and development expenditure as a share of GDP, Oman averaged 0.20 percent between 2011 and 2015, against a world average of 2.11 percent over the same period. The majority of firms in Oman operate in the oil and gas, construction and trade sectors.
Oman is refurbishing and expanding the port infrastructure in Muscat, Duqm, Sohar and Salalah to expand tourism, local production and export shares. It is also expanding its downstream operations by constructing a refinery and petrochemical plant in Duqm with a capacity of 230,000 barrels per day, projected for completion by 2021. The majority of industrial activity in Oman takes place in eight industrial estates and four free zones, and is mainly focused on mining and services, petrochemicals and construction materials. The largest private-sector employers are the construction, wholesale-and-retail and manufacturing sectors: construction accounts for nearly 48 percent of the total labour force, wholesale-and-retail for around 15 percent of total employment, and manufacturing for around 12 percent of private-sector employment. The percentage of Omanis employed in the construction and manufacturing sectors is nevertheless low, as of 2011 statistics.
According to the Global Innovation Index (2019) report, Oman scores "below expectations" in innovation relative to other high-income countries, ranking 80th out of 129 countries on an index that takes into consideration factors such as political environment, education, infrastructure and business sophistication. Innovation, technology-based growth and economic diversification are hindered by an economic growth model that relies on infrastructure expansion and depends heavily on 'low-skilled', 'low-wage' foreign labour. Another challenge to innovation is the Dutch disease phenomenon, which creates an oil and gas investment lock-in while relying heavily on imported products and services in other sectors; such a locked-in system hinders local business growth and global competitiveness in other sectors, and thus impedes economic diversification. The inefficiencies and bottlenecks in business operations that result from heavy dependence on natural resources and an 'addiction' to imports suggest a 'factor-driven economy'. A third hindrance to innovation in Oman is an economic structure heavily dependent on a few large firms, granting few opportunities for SMEs to enter the market and impeding healthy market-share competition between firms. The ratio of patent applications per million people was 0.35 in 2016, when the MENA region average was 1.50 and the 'high-income' countries' average was approximately 48.0.
Oman's fishing industry contributed 0.78 percent of GDP in 2016. Fish exports grew from US$144 million in 2000 to US$172 million in 2016 (+19.4 percent). The main importer of Omani fish in 2016 was Vietnam, which took almost US$80 million (46.5 percent) in value, followed by the United Arab Emirates with around US$26 million (15 percent); other main importers are Saudi Arabia, Brazil and China. Oman's consumption of fish is almost twice the world average. The ratio of exported fish to total fish captured, in tons, fluctuated between 49 and 61 percent between 2006 and 2016. Oman's strengths in the fishing industry come from a good market system, a long coastline (3,165 km) and a wide water area; on the other hand, Oman lacks sufficient infrastructure, research and development, and quality and safety monitoring, and the industry's contribution to GDP remains limited.
Dates represent 80 percent of all fruit crop production, and date farms occupy 50 percent of the total agricultural area in the country. Oman's estimated production of dates in 2016 was 350,000 tons, making it the ninth-largest producer of dates, although the vast majority of production (75 percent) comes from only 10 cultivars. Oman's total export of dates was US$12.6 million in 2016, almost equivalent to its total imports of dates, worth US$11.3 million that year; the main importer of Omani dates is India, taking around 60 percent. Oman's date exports remained steady between 2006 and 2016. Oman is considered to have good infrastructure for date production and to provide support for cultivation and marketing, but it lacks innovation in farming and cultivation as well as industrial coordination in the supply chain, and it encounters high losses of unused dates.
Oman's population is over 4 million, with 2.23 million Omani nationals and 1.76 million expatriates. The total fertility rate in 2011 was estimated at 3.70. Oman has a very young population, with 43 percent of its inhabitants under the age of 15. Nearly 50 percent of the population lives in Muscat and the Batinah coastal plain northwest of the capital. Omani people are predominantly of Arab, Baluchi and African origins.
Omani society is largely tribal and encompasses three major identities: that of the tribe, the Ibadi faith and maritime trade. The first two identities are closely tied to tradition and are especially prevalent in the interior of the country, owing to lengthy periods of isolation. The third identity pertains mostly to Muscat and the coastal areas of Oman, and is reflected by business, trade, and the diverse origins of many Omanis, who trace their roots to Baloch, Al-Lawatia, Persia and historical Omani Zanzibar. Consequently, the third identity is generally seen to be more open and tolerant towards others, and is often in tension with the more traditional and insular identities of the interior.
Even though the Omani government does not keep statistics on religious affiliation, spuriously precise statistics from the US Central Intelligence Agency state that adherents of Islam are in the majority at 85.9%, with Christians at 6.5%, Hindus at 5.5%, Buddhists at 0.8% and Jews at less than 0.1%; other religious affiliations account for 1% and the unaffiliated for only 0.2%.
Most Omanis are Muslims, the majority of whom follow the Ibadi school of Islam, followed by the Twelver school of Shia Islam and the Shafi'i school of Sunni Islam.
Virtually all non-Muslims in Oman are foreign workers. Non-Muslim religious communities include various groups of Jains, Buddhists, Zoroastrians, Sikhs, Jews, Hindus and Christians. Christian communities are centred in the major urban areas of Muscat, Sohar and Salalah. These include Catholic, Eastern Orthodox and various Protestant congregations, organising along linguistic and ethnic lines. More than 50 different Christian groups, fellowships and assemblies are active in the Muscat metropolitan area, formed by migrant workers from Southeast Asia.
There are also communities of ethnic Indian Hindus and Christians. Muscat has two Hindu temples, one of which is over a hundred years old. There is a significant Sikh community in Oman; though there are no permanent gurdwaras, many smaller gurdwaras in makeshift camps exist and are recognised by the government. The Government of India signed an accord with the Omani government in 2008 to build a permanent gurdwara, but little progress has been made on the matter.
Arabic is the official language of Oman; it belongs to the Semitic branch of the Afroasiatic family. Prior to Islam, central Oman lay outside the core area of spoken Arabic, and Old South Arabian speakers possibly dwelled from the Bāṭinah to Ẓafār. Rare Musnad inscriptions have come to light in central Oman and in the Emirate of Sharjah, but the script says nothing about the language it conveys. A bilingual text from the 3rd century BCE, written in Aramaic and in musnad Hasiatic, mentions a 'king of Oman' (mālk mn ʿmn). Today the Mehri language is limited in its distribution to the area around Ṣalālah in Ẓafār and westward into Yemen, but until the 18th or 19th century it was spoken further north, perhaps into central Oman. Baluchi (Southern Baluchi) is widely spoken in Oman. Endangered indigenous languages in Oman include Kumzari, Bathari, Harsusi, Hobyot, Jibbali and Mehri. Omani Sign Language is the language of the deaf community. Oman was also the first Arab country in the Persian Gulf to have German taught as a second language. The Bedouin Arabs, who reached eastern and southeastern Arabia in migrational waves (the latest in the 18th century), brought their language and their rule, including the ruling families of Bahrain, Qatar and the United Arab Emirates. At the most basic level, there are two kinds of dialects, those of the settled population and those of the Bedouin, which share some features. Omani dialects preserve much vocabulary that has been lost in other Arabic dialects, and C. Holes has argued convincingly that Omani Arabic has indigenous characteristics of its own that do not derive from Bedouin central Arabia; these are better preserved than in neighbouring countries.
According to the CIA, besides Arabic, English, Baluchi (Southern Baluchi), Urdu and various Indian languages are the main languages spoken in Oman. English is widely spoken in the business community and is taught at school from an early age. Almost all signs and writings appear in both Arabic and English at tourist sites. Baluchi is the mother tongue of the Baloch people from Balochistan in western Pakistan, eastern Iran and southern Afghanistan. It is also used by some descendants of Sindhi sailors. A significant number of residents also speak Urdu, due to the influx of Pakistani migrants during the late 1980s and 1990s. Additionally, Swahili is widely spoken in the country due to the historical relations between Oman and Zanzibar.
Outwardly, Oman shares many of the cultural characteristics of its Arab neighbours, particularly those in the Gulf Cooperation Council. Despite these similarities, important factors make Oman unique in the Middle East. These result as much from geography and history as from culture and economics. The relatively recent and artificial nature of the state in Oman makes it difficult to describe a national culture; however, sufficient cultural heterogeneity exists within its national boundaries to make Oman distinct from other Arab States of the Persian Gulf. Oman's cultural diversity is greater than that of its Arab neighbours, given its historical expansion to the Swahili Coast and the Indian Ocean.
Oman has a long tradition of shipbuilding, as maritime travel played a major role in the Omanis' ability to stay in contact with the civilisations of the ancient world. Sur was one of the most famous shipbuilding cities of the Indian Ocean. The Al Ghanja ship takes one whole year to build. Other types of Omani ship include As Sunbouq and Al Badan.
In March 2016 archaeologists working off Al Hallaniyah Island identified a shipwreck believed to be that of the "Esmeralda" from Vasco da Gama's 1502–1503 fleet. The wreck was initially discovered in 1998. Later underwater excavations took place between 2013 and 2015 through a partnership between the Oman Ministry of Heritage and Culture and Blue Water Recoveries Ltd., a shipwreck recovery company. The vessel was identified through such artifacts as a "Portuguese coin minted for trade with India (one of only two coins of this type known to exist) and stone cannonballs engraved with what appear to be the initials of Vincente Sodré, da Gama's maternal uncle and the commander of the "Esmeralda"."
The male national dress in Oman consists of the "dishdasha", a simple, ankle-length, collarless gown with long sleeves. Most frequently white in colour, the dishdasha may also appear in a variety of other colours. Its main adornment, a tassel ("furakha") sewn into the neckline, can be impregnated with perfume. Underneath the dishdasha, men wear a plain, wide strip of cloth wrapped around the body from the waist down. The most noted regional differences in dishdasha designs are the style with which they are embroidered, which varies according to age group. On formal occasions a black or beige cloak called a "bisht" may cover the dishdasha. The embroidery edging the cloak is often in silver or gold thread and it is intricate in detail.
Omani men wear two types of headdress: the "mussar", a square of fabric wrapped and folded into a turban, and the "kummah", an embroidered cap.
Some men carry the "assa", a stick, which can have practical uses or is simply used as an accessory during formal events. Omani men, on the whole, wear sandals on their feet.
The "khanjar" (dagger) forms part of the national dress and men wear the khanjar on all formal public occasions and festivals. It is traditionally worn at the waist. Sheaths may vary from simple covers to ornate silver or gold-decorated pieces. It is a symbol of a man's origin, his manhood and courage. A depiction of a khanjar appears on the national flag.
Omani women wear eye-catching national costumes, with distinctive regional variations. All costumes incorporate vivid colours and vibrant embroidery and decorations; in the past, the choice of colours reflected a tribe's tradition. The Omani women's traditional costume comprises several garments: the "kandoorah", a long tunic whose sleeves, or "radoon", are adorned with hand-stitched embroidery of various designs, is worn over a pair of loose-fitting trousers, tight at the ankles, known as a "sirwal". Women also wear a head shawl, most commonly referred to as the "lihaf".
Women wear "hijab", and though some women cover their faces and hands, most do not. The Sultan has forbidden the covering of faces in public office.
Music of Oman is extremely diverse due to Oman's imperial legacy. There are over 130 different forms of traditional Omani songs and dances. The Oman Centre for Traditional Music was established in 1984 to preserve them. In 1985, Sultan Qaboos founded the Royal Oman Symphony Orchestra, an act attributed to his love for classical music. Instead of engaging foreign musicians, he decided to establish an orchestra made up of Omanis. On 1 July 1987 at the Al Bustan Palace Hotel's Oman Auditorium the Royal Oman Symphony Orchestra gave its inaugural concert.
The cinema of Oman is very small; the country has produced only one Omani film, "Al-Boom" (2006). Oman Arab Cinema Company LLC is the single largest motion picture exhibitor chain in Oman. It belongs to the Jawad Sultan Group of Companies, which has a history spanning more than 40 years in the Sultanate of Oman. In popular music, a seven-minute music video about Oman went viral, achieving 500,000 views on YouTube within 10 days of its release in November 2015. The a cappella production features three of the region's most popular talents: Khaliji musician Al Wasmi, Omani poet Mazin Al-Haddabi and actress Buthaina Al Raisi.
The government has long held a monopoly on television in Oman. Oman TV is the country's only state-owned national television broadcaster. It began broadcasting from Muscat on 17 November 1974 and separately from Salalah on 25 November 1975. On 1 June 1979, the two stations at Muscat and Salalah were linked by satellite to form a unified broadcasting service. Currently, Oman TV broadcasts four HD channels: Oman TV General, Oman TV Sport, Oman TV Live and Oman TV Cultural.
Although private ownership of radio and television stations is permitted, Oman has only one privately owned television channel: Majan TV, the first private TV channel in Oman, which began broadcasting in January 2009. However, Majan TV's official channel website was last updated in early 2010. Moreover, the public has access to foreign broadcasts, since the use of satellite receivers is allowed.
Oman Radio is the first and only state-owned radio channel. It began broadcasting on 30 July 1970 and operates both Arabic and English networks. Other private channels include Hala FM, Hi FM, Al-Wisal, Virgin Radio Oman FM and Merge. In early 2018, Muscat Media Group (MMG), a trend-setting media group founded by the late Essa bin Mohammed Al Zedjali, launched a new private radio station aimed at delivering educational and entertaining programmes to the youth of the Sultanate.
Oman has nine main newspapers, five in Arabic and four in English. Instead of relying on sales or state subsidies, private newspapers depend on advertising revenues to sustain themselves.
The media landscape in Oman has been repeatedly described as restrictive, censored, and subdued. The Ministry of Information censors politically, culturally, or sexually offensive material in domestic or foreign media. The press freedom group Reporters Without Borders ranked the country 127th out of 180 countries on its 2018 World Press Freedom Index. In 2016, the government drew international criticism for suspending the newspaper "Azamn" and arresting three journalists after a report on corruption in the country's judiciary. Azamn was not allowed to reopen in 2017, although an appeal court ruled in late 2016 that the paper could resume operating.
Traditional art in Oman stems from its long heritage of material culture. Art movements in the 20th century show that the Omani art scene began with practices that included a range of tribal handicrafts and, from the 1960s, self-portraiture in painting. However, with the inclusion of several Omani artists in international collections, exhibitions, and events, such as Alia Al Farsi, the first Omani artist to show at the last Venice Biennale, and Radhika Khimji, the first Omani artist to exhibit at both the Marrakesh and the Haiti Ghetto biennales, Oman's position as a newcomer to the contemporary art scene has become increasingly important for the country's international exposure.
Bait Muzna Gallery is the first art gallery in Oman. Established in 2000 by Sayyida Susan Al Said, Bait Muzna has served as a platform for emerging Omani artists to showcase their talent and place themselves on the wider art scene. In 2016, Bait Muzna opened a second space in Salalah to branch out and support art film and the digital art scene. The gallery has been primarily active as an art consultancy.
The Sultanate's flagship cultural institution, the National Museum of Oman, opened on 30 July 2016 with 14 permanent galleries. It showcases national heritage from the earliest human settlement in Oman two million years ago through to the present day. The museum takes a further step by presenting information on the material in Arabic Braille script for the visually impaired, the first museum to do this in the Gulf region.
The Omani Society for Fine Arts, established in 1993, offers educational programmes, workshops and artist grants for practitioners across varied disciplines. In 2016, the organisation opened its first exhibition on graphic design. It also hosted the "Paint for Peace" competition with 46 artists in honour of the country's 46th National Day, where Mazin al-Mamari won the top prize. The organisation has additional branches in Sohar, Buraimi and Salalah.
Bait Al-Zubair Museum is a private, family-funded museum that opened its doors to the public in 1998. In 1999, the museum received Sultan Qaboos' Award for Architectural Excellence. Bait Al-Zubair displays the family's collection of Omani artifacts, which spans a number of centuries and reflects inherited skills that define Oman's society in the past and present. Located within Bait Al-Zubair, Gallery Sarah, which opened in October 2013, offers an array of paintings and photographs by established local and international artists. The gallery also occasionally holds lectures and workshops.
Omani cuisine is diverse and has been influenced by many cultures. Omanis usually eat their main daily meal at midday, while the evening meal is lighter. During Ramadan, dinner is served after the Taraweeh prayers, sometimes as late as 11 pm. However, these dinner timings differ according to each family; for instance, some families would choose to eat right after maghrib prayers and have dessert after taraweeh.
Arsia, a festival meal served during celebrations, consists of mashed rice and meat (sometimes chicken). Another popular festival meal, shuwa, consists of meat cooked very slowly (sometimes for up to 2 days) in an underground clay oven. The meat becomes extremely tender and it is infused with spices and herbs before cooking to give it a very distinct taste. Fish is often used in main dishes too, and the kingfish is a popular ingredient. Mashuai is a meal consisting of a whole spit-roasted kingfish served with lemon rice.
Rukhal bread is a thin, round bread originally baked over a fire made from palm leaves. It is eaten at any meal, typically served with Omani honey for breakfast or crumbled over curry for dinner. Chicken, fish, and lamb or mutton are regularly used in dishes. The Omani halwa is a very popular sweet, basically consisting of cooked raw sugar with nuts. There are many different flavors, the most popular being black halwa (original) and saffron halwa. Halwa is considered a symbol of Omani hospitality and is traditionally served with coffee. As is the case with most Arab states of the Persian Gulf, alcohol is available over-the-counter only to non-Muslims; it is also served in many hotels and a few restaurants.
In October 2004, the Omani government set up a Ministry of Sports Affairs to replace the General Organisation for Youth, Sports and Cultural Affairs. The 19th Arabian Gulf Cup took place in Muscat from 4 to 17 January 2009 and was won by the Omani national football team. The 23rd Arabian Gulf Cup took place in Kuwait from 22 December 2017 until 5 January 2018; Oman won its second title, defeating the United Arab Emirates in the final on penalties following a goalless draw.
Oman's traditional sports are dhow racing, horse racing, camel racing, bull fighting and falconry. Association football, basketball, waterskiing and sandboarding are among the sports that have emerged quickly and gained popularity among the younger generation.
Ali Al-Habsi is an Omani professional association football player who plays in the Football League Championship as a goalkeeper for West Brom. The International Olympic Committee awarded the former GOYSCA its prestigious prize for sporting excellence in recognition of its contributions to youth and sports and its efforts to promote the Olympic spirit and goals.
The Oman Olympic Committee played a major part in organising the highly successful 2003 Olympic Days, which were of great benefit to the sports associations, clubs and young participants. The football association took part, along with the handball, basketball, rugby union, hockey, volleyball, athletics, swimming and tennis associations. Muscat hosted the 2010 Asian Beach Games.
Oman also hosts tennis tournaments in different age divisions each year. The Sultan Qaboos Sports Complex stadium contains a 50-meter swimming pool used for international tournaments involving schools from different countries. The Tour of Oman, a professional six-day cycling stage race, takes place in February. Oman hosted the Asian 2011 FIFA Beach Soccer World Cup qualifiers, where 11 teams competed for three spots at the FIFA World Cup. Oman hosted the Men's and Women's 2012 Beach Handball World Championships at the Millennium Resort in Mussanah from 8 to 13 July. Oman has repeatedly competed for a place in the FIFA World Cup but has yet to qualify for the tournament.
Oman and Fujairah in the UAE are the only regions in the Middle East with a variant of bullfighting, known as 'bull-butting', organised within their territories. The Al-Batinah area of Oman is especially prominent for such events. The contest pits two bulls of the Brahman breed against one another and, as the name implies, they engage in a forceful barrage of headbutts. The first to collapse or concede its ground is declared the loser. Most bull-butting matches are short affairs, lasting less than five minutes. The origins of bull-butting in Oman remain unknown, but many locals believe it was brought to Oman by the Moors of Spanish origin. Others say it has a direct connection with Portugal, which colonised the Omani coastline for nearly two centuries.
In cricket, Oman qualified for the 2016 ICC World Twenty20 by securing sixth place in the 2015 ICC World Twenty20 Qualifier. They were also granted T20I status, having finished among the top six teams in the qualifier. On 30 October 2019, they qualified for the 2020 ICC T20 World Cup, to be hosted by Australia.
As of 2019, Oman scored highly on the percentage of students who complete lower secondary school and on the literacy rate among those aged 15 to 24: 99.7 percent and 98.7 percent, respectively. However, Oman's net primary school enrollment rate in 2019, at 94.1 percent, is rated "challenges remain" under the United Nations Sustainable Development Goals (UNSDG) standard. Oman's overall score for quality of education, according to the UNSDG, is 94.8 ("challenges remain") as of 2019.
Oman's higher education system produces a surplus of graduates in the humanities and liberal arts but an insufficient number in the technical and scientific fields and skill-sets required to meet market demand. Sufficient human capital would create a business environment that can compete with, partner with, or attract foreign firms. Accreditation standards and mechanisms whose quality control focuses on input assessments rather than outputs are areas for improvement in Oman, according to the United Nations Conference on Trade and Development's 2014 report. The Transformation Index BTI 2018 report on Oman recommends that the education curriculum focus more on the "promotion of personal initiative and critical perspective".
The adult literacy rate in 2010 was 86.9%. Before 1970, only three formal schools existed in the entire country, with fewer than 1,000 students. Since Sultan Qaboos' ascension to power in 1970, the government has given high priority to education to develop a domestic work force, which the government considers a vital factor in the country's economic and social progress. Today, there are over 1,000 state schools and about 650,000 students.
Oman's first university, Sultan Qaboos University, opened in 1986. The University of Nizwa is one of the fastest growing universities in Oman. Other post-secondary institutions in Oman include the Higher College of Technology and its six branches, six colleges of applied sciences (including a teachers' training college), a college of banking and financial studies, an institute of Sharia sciences, and several nursing institutes. Some 200 scholarships are awarded each year for study abroad.
According to the Webometrics Ranking of World Universities, the top-ranking universities in the country are Sultan Qaboos University (1678th worldwide), the Dhofar University (6011th) and the University of Nizwa (6093rd).
Oman's undernourished share of the population dropped from 11.7 percent in 2003 to 5.4 percent in 2016, but the rate remains double the 2016 level of high-income economies (2.7 percent). The UNSDG targets zero hunger by 2030. Oman's coverage of essential health services in 2015 was 77 percent, relatively higher than the world average of approximately 54 percent during the same year, but lower than the level of high-income economies (83 percent) in 2015.
Since 1995, the percentage of Omani children who receive key vaccines has consistently been very high (above 99 percent). Oman's road incident death rate has been decreasing since 1990, from 98.9 per 100,000 individuals to 47.1 per 100,000 in 2017; however, it remains significantly above the world average of 15.8 per 100,000 in 2017. Oman's health spending as a share of GDP averaged 4.3 percent between 2015 and 2016, while the world average over the same period was 10 percent.
As for mortality due to air pollution (household and ambient air pollution), Oman's rate was 53.9 per 100,000 population as of 2016.
Life expectancy at birth in Oman was estimated to be 76.1 years in 2010. There were an estimated 2.1 physicians and 2.1 hospital beds per 1,000 people. In 1993, 89% of the population had access to health care services; by 2000, the figure had risen to 99%. During the last three decades, the Omani health care system has demonstrated and reported great achievements in health care services and in preventive and curative medicine. Oman has also made strides in health research recently; comprehensive research on the prevalence of skin diseases was performed in the North Batinah governorate. In 2000, Oman's health system was ranked number 8 by the World Health Organization. | https://en.wikipedia.org/wiki?curid=22316 |
History of Oman
Oman is the site of prehistoric human habitation stretching back over 100,000 years. The region was impacted by powerful invaders, including other Arab tribes, Portugal and Britain. Oman once possessed the island of Zanzibar, off the east coast of Africa, as a colony.
In Oman, a site was discovered by Doctor Bien Joven in 2011 containing more than 100 surface scatters of stone tools belonging to the late Nubian Complex, known previously only from archaeological excavations in Sudan. Two optically stimulated luminescence age estimates place the Arabian Nubian Complex at approximately 106,000 years old. This provides evidence for a distinct Middle Stone Age technocomplex in southern Arabia, dating to the earlier part of Marine Isotope Stage 5.
The hypothesized departure of humankind from Africa to colonise the rest of the world involved crossing the Straits of Bab el Mandab in the southern Red Sea and moving along the green coastlines around Arabia and thence to the rest of Eurasia. Such a crossing became possible when sea level had fallen by more than 80 meters to expose much of the shelf between southern Eritrea and Yemen; a level that was reached during a glacial stadial from 60 to 70 ka as climate cooled erratically to reach the last glacial maximum. From 135,000 to 90,000 years ago, tropical Africa had megadroughts which drove humans from the land towards the sea shores and forced them to cross over to other continents. The researchers used radiocarbon dating techniques on pollen grains trapped in lake-bottom mud to establish vegetation over the ages of Lake Malawi in Africa, taking samples at 300-year intervals. Samples from the megadrought times had little pollen or charcoal, suggesting sparse vegetation with little to burn. The area around Lake Malawi, today heavily forested, was a desert approximately 135,000 to 90,000 years ago.
Luminescence dating is a technique that measures naturally occurring radiation stored in the sand. Data culled via this methodology demonstrate that 130,000 years ago the Arabian Peninsula was relatively warmer, which caused more rainfall, turning it into a series of lush, habitable lands. During this period the southern Red Sea's level dropped, leaving only a narrow strait. This offered a brief window of time for humans to easily cross the sea and traverse the Peninsula to opposing sites like Jebel Faya. These early migrants, fleeing climate change in Africa, crossed the Red Sea into Yemen and Oman and trekked across Arabia during favourable climate conditions. 2,000 kilometres of inhospitable desert lie between the Red Sea and Jebel Faya in the UAE, but around 130,000 years ago the world was at the end of an ice age: the Red Sea was shallow enough to be crossed on foot or on a small raft, and the Arabian Peninsula was being transformed from a parched desert into a green land.
There have been discoveries of Paleolithic stone tools in caves in southern and central Oman, and in the United Arab Emirates close to the Straits of Hormuz at the outlet of the Persian Gulf (the UAE site of Jebel Faya). The stone tools, some up to 125,000 years old, resemble those made by humans in Africa around the same period.
The northern half of Oman (alongside modern-day Bahrain, Qatar, the United Arab Emirates, and the Balochistan and Sindh provinces of Pakistan) was presumably part of the Maka satrapy of the Persian Achaemenid Empire. By the time of the conquests of Alexander the Great, the satrapy may have existed in some form, and Alexander is said to have stayed in Purush, its capital, perhaps near Bam in Kerman province. From the second half of the 1st millennium BCE, waves of Semitic-speaking peoples migrated from central and western Arabia to the east. The most important of these tribes are known as the Azd. On the coast, Parthian and Sassanian colonies were maintained. From c. 100 BCE to c. 300 CE, Semitic speakers appear in central Oman at Samad al-Shan and in the so-called Pre-Islamic Recent period, abbreviated PIR, in what has become the United Arab Emirates. These waves continued into the 19th century, bringing the Bedouin ruling families that finally came to rule the Persian Gulf states.
The Kingdom of Oman was subdued by the Sasanian Empire's forces under Vahrez during the Abyssinian-Persian Wars. The 4,000-strong Sasanian garrison was headquartered at Jamsetjerd (modern Jebel Gharabeh, also known as Felej al-Sook).
Oman was exposed to Islam in 630, during the lifetime of the prophet Muhammad; consolidation took place in the Ridda Wars in 632.
In 751 Ibadi Muslims, a moderate branch of the Kharijites, established an imamate in Oman. Despite interruptions, the Ibadi imamate survived until the mid-20th century.
Oman is the only country with a majority Ibadi population. Ibadism has a reputation for its "moderate conservatism". One distinguishing feature of Ibadism is the choice of ruler by communal consensus and consent. The introduction of Ibadism vested power in the Imam, the leader nominated by the ulema. The Imam's position was confirmed when the imam, having gained the allegiance of the tribal sheikhs, received the bay'ah (oath of allegiance) from the public.
Several foreign powers attacked Oman. The Qarmatians controlled the area between 931 and 932 and then again between 933 and 934. Between 967 and 1053, Oman formed part of the domain of the Iranian Buyids, and between 1053 and 1154 Oman was part of the Seljuk Empire. Seljuk power even spread through Oman to Koothanallur in southern India.
In 1154 the indigenous Nabhani dynasty took control of Oman, and the Nabhani kings ruled Oman until 1470, with an interruption of 37 years between 1406 and 1443.
The Portuguese took Muscat on 1 April 1515, and held it until 26 January 1650, although the Ottomans controlled Muscat from 1550 to 1551 and from 1581 to 1588. In about the year 1600, Nabhani rule was temporarily restored to Oman, although that lasted only to 1624 with the establishment of the fifth imamate, also known as the Yarubid Imamate. The latter recaptured Muscat from the Portuguese in 1650 after a colonial presence on the northeastern coast of Oman dating to 1508.
Turning the tables, the Omani Yarubid dynasty became a colonial power itself, acquiring former Portuguese colonies in east Africa and engaging in the slave trade, centered on the Swahili coast and the island of Zanzibar.
By 1719 dynastic succession led to the nomination of Saif bin Sultan II (c. 1706–1743). His candidacy prompted a rivalry among the ulama and a civil war between the two factions, led by major tribes, the Hinawi and the Ghafiri, with the Ghafiri supporting Saif ibn Sultan II. In 1743, Persian ruler Nader Shah occupied Muscat and Sohar with Saif's assistance. Saif died, and was succeeded by Bal'arab bin Himyar of the Yaruba.
Persia had occupied the coast previously. Yet this intervention on behalf of an unpopular dynasty brought about a revolt. The leader of the revolt, Ahmad bin Said al-Busaidi, expelled the Persians by 1749. He then defeated Bal'arab, and was elected sultan of Muscat and imam of Oman.
The Al Busaid clan thus became a royal dynasty. Like its predecessors, Al Busaid dynastic rule has been characterized by a history of internecine family struggle, fratricide, and usurpation. Apart from threats within the ruling family, there were frequent challenges from the independent tribes of the interior. The Busaidid dynasty renounced the imamate after Ahmad bin Said. The interior tribes recognized the imam as the sole legitimate ruler, rejected the authority of the sultan, and fought for the restoration of the imamate.
Schisms within the ruling family became apparent before Ahmad ibn Said's death in 1783 and later manifested themselves with the division of the family into two main lines:
This period also included a revolt in Oman's colony of Zanzibar in the year 1784.
During the period of Sultan Said ibn Sultan's reign (1806–1856), Oman built up its overseas colonies, profiting from the slave trade. As a regional commercial power in the 19th century, Oman held the island of Zanzibar on the Swahili Coast, the Zanj region of the East African coast, including Mombasa and Dar es Salaam, and (until 1958) Gwadar on the Arabian Sea coast of present-day Pakistan.
When Great Britain prohibited slavery in the mid-19th century, the sultanate's fortunes reversed. The economy collapsed, and many Omani families migrated to Zanzibar. The population of Muscat fell from 55,000 to 8,000 between the 1850s and 1870s. Britain seized most of the overseas possessions, and by 1900 Oman had become a very different country from the one it had been.
When Sultan Sa'id bin Sultan Al-Busaid died in 1856, his sons quarrelled over the succession. As a result of this struggle, the empire—through the mediation of Britain under the Canning Award—was divided in 1861 into two separate principalities: Zanzibar (with its African Great Lakes dependencies), and the area of "Muscat and Oman". This name was abolished in 1970 in favor of "Sultanate of Oman", but implies two political cultures with a long history:
The more cosmopolitan Muscat has been the ascending political culture since the founding of the Al Busaid dynasty in 1744, although the imamate tradition has found intermittent expression.
The death of Sa'id bin Sultan in 1856 prompted a further division: the descendants of the late sultan ruled Muscat and Oman (Thuwaini ibn Said Al-Busaid, r. 1856–1866) and Zanzibar (Majid ibn Said Al-Busaid, r. 1856–1870); the Qais branch intermittently allied itself with the ulama to restore imamate legitimacy. In 1868, Azzan bin Qais Al-Busaid (r. 1868–1871) emerged as self-declared imam. Although a significant number of Hinawi tribes recognized him as imam, the public neither elected him nor acclaimed him as such.
Imam Azzan understood that to unify the country a strong, central authority had to be established with control over the interior tribes of Oman. His rule was jeopardized by the British, who interpreted his policy of bringing the interior tribes under the central government as a move against their established order. In resorting to military means to unify Muscat and Oman, Imam Azzan alienated members of the Ghafiri tribes, who revolted in the 1870–1871 period. The British gave financial and political support to Turki bin Said Al-Busaid, Imam Azzan's rival. In the Battle of Dhank, Turki bin Said defeated the forces of Imam Azzan, who was killed in battle outside Muttrah in January 1871.
Muscat and Oman was the object of Franco-British rivalry throughout the 18th century. During the 19th century, Muscat and Oman and the United Kingdom concluded several treaties of friendship and commerce. In 1908 the British entered into an agreement of friendship. Their traditional association was confirmed in 1951 through a new treaty of friendship, commerce, and navigation by which the United Kingdom recognized the Sultanate of Muscat and Oman as a fully independent state.
During the late 19th and early 20th centuries, there were tensions between the sultan in Muscat and the Ibadi Imam in Nizwa. This conflict was resolved temporarily by the Treaty of Seeb, which granted the imam rule in the interior Imamate of Oman, while recognising the sovereignty of the sultan in Muscat and its surroundings.
In 1954, the conflict flared up again when the sultan broke the Treaty of Seeb after oil was discovered in the lands of the Imam. The new imam, Ghalib bin Ali, led a five-year rebellion against the sultan, who was aided by colonial British forces and the Shah of Iran. In the early 1960s, the Imam, exiled to Saudi Arabia, obtained support from his hosts and other Arab governments, but this support ended in the 1980s. The case of the Imam was argued at the United Nations as well, but no significant measures were taken.
Zanzibar paid an annual subsidy to Muscat and Oman until its independence in early 1964.
In 1964, a separatist revolt began in Dhofar province. Aided by Communist and leftist governments such as the former South Yemen (People's Democratic Republic of Yemen), the rebels formed the Dhofar Liberation Front, which later merged with the Marxist-dominated Popular Front for the Liberation of Oman and the Arabian Gulf (PFLOAG). The PFLOAG's declared intention was to overthrow all traditional Persian Gulf régimes. In mid-1974, the Bahrain branch of the PFLOAG was established as a separate organisation and the Omani branch changed its name to the Popular Front for the Liberation of Oman (PFLO), while continuing the Dhofar Rebellion.
In 1970, Qaboos bin Said al Said ousted his father, Sa'id bin Taimur, in the 1970 Omani coup d'état; the deposed sultan later died in exile in London. Al Said ruled as sultan until his death. The new sultan confronted insurgency in a country plagued by endemic disease, illiteracy, and poverty. One of the new sultan's first measures was to abolish many of his father's harsh restrictions, which had caused thousands of Omanis to leave the country, and to offer amnesty to opponents of the previous régime, many of whom returned to Oman. 1970 also brought the abolition of slavery.
Sultan Qaboos also established a modern government structure and launched a major development programme to upgrade educational and health facilities, build a modern infrastructure, and develop the country's natural resources.
In an effort to curb the Dhofar insurgency, Sultan Qaboos expanded and re-equipped the armed forces and granted amnesty to all surrendering rebels while vigorously prosecuting the war in Dhofar. He obtained direct military support from the UK, Iran, and Jordan. By early 1975, the guerrillas were confined to an area near the Yemeni border and shortly thereafter were defeated. As the war drew to a close, civil action programs were given priority throughout Dhofar and helped win the allegiance of the people. The PFLO threat diminished further with the establishment of diplomatic relations in October 1983 between South Yemen and Oman, and South Yemen subsequently lessened propaganda and subversive activities against Oman. In late 1987 Oman opened an embassy in Aden, South Yemen, and appointed its first resident ambassador to the country.
Throughout his reign, Sultan Qaboos balanced tribal, regional, and ethnic interests in composing the national administration. The Council of Ministers, which functions as a cabinet, consisted of 26 ministers, all of whom were directly appointed by Qaboos. The "Majlis Al-Shura" (Consultative Council) has the mandate of reviewing legislation pertaining to economic development and social services prior to its becoming law. The "Majlis Al-Shura" may request ministers to appear before it.
In November 1996, Sultan Qaboos presented his people with the "Basic Statutes of the State", Oman's first written "constitution". It guarantees various rights within the framework of Qur'anic and customary law. It partially resuscitated long dormant conflict-of-interest measures by banning cabinet ministers from being officers of public shareholding firms. Perhaps most importantly, the Basic Statutes provide rules for setting Sultan Qaboos' succession.
Oman occupies a strategic location on the Strait of Hormuz at the entrance to the Persian Gulf, directly opposite Iran. Oman has concerns with regional stability and security, given tensions in the region, the proximity of Iran and Iraq, and the potential threat of political Islam. Oman maintained its diplomatic relations with Iraq throughout the Gulf War while supporting the United Nations allies by sending a contingent of troops to join coalition forces and by opening up to pre-positioning of weapons and supplies.
In September 2000, about 100,000 Omani men and women elected 83 candidates, including two women, to seats in the "Majlis Al-Shura". In December 2000, Sultan Qaboos appointed the 48-member "Majlis Al Dowla", or State Council, including five women, which acts as the upper chamber in Oman's bicameral representative body.
Al Said's extensive modernization program has opened the country to the outside world and has preserved a long-standing political and military relationship with the United Kingdom, the United States, and others. Oman's moderate, independent foreign policy has sought to maintain good relations with all Middle Eastern countries.
Qaboos died on 10 January 2020 after nearly 50 years in power. | https://en.wikipedia.org/wiki?curid=22317 |
Geography of Oman
Oman is a country situated in Southwest Asia, bordering the Arabian Sea, Gulf of Oman, and Persian Gulf, between Yemen and the United Arab Emirates (UAE). The coast of Oman played an important part in the Omani empire and sultanate.
Oman is located in the southeastern quarter of the Arabian Peninsula. Its land area is composed of varying topographic features: valleys and desert account for 82 percent of the land mass; mountain ranges, 15 percent; and the coastal plain, 3 percent. The sultanate is flanked by the Gulf of Oman, the Arabian Sea, and the Rub' al Khali (Empty Quarter) of Saudi Arabia, all of which contributed to Oman's isolation. Historically, the country's contacts with the rest of the world were by sea, which not only provided access to foreign lands but also linked the coastal towns of Oman. The Rub' al-Khali, difficult to cross even with modern desert transport, formed a barrier between the sultanate and the Arabian interior. The Al Hajar Mountains, which form a belt between the coast and the desert from the Musandam Peninsula (Ras Musandam) to the city of Sur, almost at Oman's easternmost point, formed another barrier. These geographic barriers kept the interior of Oman free from foreign military encroachments.
Natural features divide the country into six distinct areas: Ru'us al-Jibal, including the northern Musandam Peninsula; the Batinah plain running southeast along the Gulf of Oman coast; the Oman interior behind the Batinah coast comprising the Hajar Mountains, their foothills, and desert fringes; the coast from Muscat-Matrah around the point of Ras Al Hadd, and down the Arabian Sea; the offshore island of Masirah; and finally the barren coastline south to the Dhofar region in the south.
Except for the foggy and fertile Dhofar, all of the coast and the lowlands around the Hajar mountains are part of the Gulf of Oman desert and semi-desert ecoregion, while the mountains themselves are a distinct habitat.
The northernmost area, Musandam, extends from the tip of the Musandam Peninsula to the boundary with the United Arab Emirates (UAE) at Hisn al-Dibba. It borders the Strait of Hormuz, which links the Persian Gulf with the Gulf of Oman, and is separated from the rest of the sultanate by a strip of territory belonging to the UAE. This area consists of low mountains forming the northernmost extremity of the Western Hajar. Two inlets, Elphinstone ("Khawr ash-Shamm") and Malcom ("Ghubbat al-Ghazirah"), cleave the coastline about one third of the distance from the Strait of Hormuz and at one point are separated by only a few hundred meters of land. The coastline is extremely rugged, and the Elphinstone Inlet, long and surrounded by high cliffs, has frequently been compared with the fjords of Norway.
The UAE territory separating Ru'us al Jibal from the rest of Oman extends almost as far south as the coastal town of Shinas. A narrow, well-populated coastal plain known as "Al-Batinah" runs southeast from the point at which the sultanate is re-entered to the town of As-Sib. Across the plains, a number of wadis, heavily populated in their upper courses, descend from the Western Hajar Mountains to the south. A ribbon of oases, watered by wells and underground channels ("aflaj"), extends the length of the plain a short distance inland.
South of As Sib, the coast changes character. From As-Sib to Ras al-Hadd, it is barren and bounded by cliffs along almost its entire length; there is no cultivation and little habitation. Although the deep water off this coast renders navigation relatively easy, there are few natural harbors or safe anchorages. The two best are at Muscat and Matrah, where natural harbors facilitated the growth of cities centuries ago.
Al Sharqiyah is the northeastern region of the Sultanate of Oman. It overlooks the Arabian Sea to the east and includes the inner side of the Eastern Hajar Mountains.
The region consists of the following states:
South Al Sharqiyah: the state of Sur is its administrative capital, in addition to the states of Jalan Bani Bu Ali, Jalan Bani Bu Hassan, Kamel and Alwafi, and Masirah.
North Al Sharqiyah: the state of Ibra is its administrative capital, in addition to the states of Bidiyah, Al-Mudhaibi, Qabil, Wadi Bani Khalid, Damma and Al-Tayyeen.
The desolate coastal tract from Jalan to Ras Naws has no specific name. Low hills and wastelands meet the sea for long distances. Midway along this coast and about fifteen kilometers offshore is the barren Masirah Island, which occupies a strategic location near the entry point to the Gulf of Oman from the Arabian Sea. Because of its location, it became the site of military facilities used first by the British and then by the United States, following an access agreement signed in 1980 by the United States and Oman.
West of the coastal areas lies the tableland of central Oman. The Wadi Samail (the largest wadi in the mountain zone), a valley that forms the traditional route between Muscat and the interior, divides the Hajar range into two subranges: "Al-Ḥajar Al-Gharbī" (The Western Hajar) and "Al-Ḥajar Ash-Sharqī" (The Eastern Hajar). At the same time, mountains in the central region, where the highest of the Hajar are located, are recognised as the "Central Hajar". The peaks of the high ridge known as Jebel Akhdar ("Green Mountain") rise well above the general elevation of the range. Jabal Akhdar is home to the Arabian tahr, a unique species of wild goat. In the hope of saving this rare animal, Sultan Qabus ibn Said has declared part of the mountain a national park. Behind the Western Mountains are two inland regions, Az-Zahirah and Inner Oman, separated by the lateral range of the Rub al Khali. Adjoining the Eastern Hajar Mountains are the sandy regions of Ash-Sharqiyah and Jalan, which also border the desert.
Dhofar region extends from Ras ash-Sharbatat to the border of Yemen and north to the clearly defined border with Saudi Arabia. Its capital, Salalah, was the permanent residence of Sultan Said ibn Taimur Al Said and the birthplace of the present sultan, Qaboos ibn Said. The highest peak of the Dhofar Mountains is Jabal Samhan. The coast of Dhofar is fertile, being watered by monsoonal fogs from the Indian Ocean, and is part of the Arabian Peninsula coastal fog desert ecoregion.
The Al Dhahirah region consists of three parts: Dhank, Ibri and Yanqul.
According to Köppen climate classification Oman has three different climates (BWh, BSk, BSh) and is dominated by BWh.
With the exception of Dhofari region, which has a strong monsoon climate and receives warm winds from the Indian Ocean, the climate of Oman is extremely hot and dry most of the year.
Summer begins in mid-April and lasts until October. The highest temperatures are registered in the interior. On the Batinah coastal plain, summer temperatures are somewhat lower but, because of the low elevation, the humidity may be as high as 90 percent. The "gharbī", a strong wind that blows from the Rub' al-Khali, can raise temperatures appreciably in the towns on the Gulf of Oman.
Winter temperatures are mild and pleasant.
Precipitation on the coasts and on the interior plains is sparse and falls during mid- and late winter. Rainfall in the mountains, particularly over Jebel Akhdar, is much higher.
Because the plateau of Jebel Akhdar is porous limestone, rainfall seeps quickly through it, and the vegetation, which might be expected to be more lush, is meager. However, a huge reservoir under the plateau provides springs for low-lying areas. In addition, an enormous wadi channels water to these valleys, making the area agriculturally productive in years of good rainfall.
Dhofar, benefiting from a southwest monsoon between June and September, receives heavier rainfall and has constantly running streams, which make the region Oman's most fertile area.
Occasionally, a cyclone from the North Indian Ocean makes landfall, bringing heavy rain, as Cyclone Keila did in 2011. Oman was hit by Cyclone Gonu on June 6, 2007. Large areas in the capital region in the Governorate of Muscat and in Amerat and Quriyat were severely affected. Gonu first hit the southern city of Sur late on June 5, 2007. | https://en.wikipedia.org/wiki?curid=22318 |
Demographics of Oman
This article is about the demographic features of the population of Oman, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population.
About 50% of the population in Oman lives in Muscat and the Batinah coastal plain northwest of the capital; about 200,000 live in the Dhofar (southern) region; and about 30,000 live in the remote Musandam Peninsula on the Strait of Hormuz. Some 2 million expatriates (45% of the entire population) live in Oman, most of whom are workers from India, Pakistan, Bangladesh, Morocco, Jordan, and the Philippines.
Since 1970, the government has given high priority to education in order to develop a domestic work force, which the government considers a vital factor in the country's economic and social progress. In 1986, Oman's first university, Sultan Qaboos University, opened. Other post secondary institutions include a law school, technical college, banking institute, teachers' training college, and health sciences institute. Some 200 scholarships are awarded each year for study abroad.
Nine private colleges exist, providing two-year post secondary diplomas. Since 1999, the government has embarked on reforms in higher education designed to meet the needs of a growing population. Under the reformed system, four public regional universities were created, and incentives are provided by the government to promote the upgrading of the existing nine private colleges and the creation of other degree-granting private colleges.
Structure of the population (01.07.2009) (estimates):
Structure of the population (01.07.2012) (estimates):
Births and deaths
According to the CIA, Oman's population primarily consists of Arab, Baluchi, South Asian (Indian, Pakistani, Sri Lankan, Bangladeshi), and African ethnic groups.
Omani society is largely tribal. Oman has three known types of identity. Two of these identities are 'tribalism' and 'Ibadism'; the third is linked to 'maritime trade'. The first two identities are widespread in the interior of Oman; these identities are closely tied to tradition, as a result of lengthy periods of isolation. The third identity, which pertains to Muscat and the coastal areas of Oman, has become embodied in business and trade, and is generally seen as more open and tolerant towards others. Thus, tension between socio-cultural groups in Omani society exists. More important is the existence of social inequality between these three groups.
Because of the combination of a relatively small Omani population and a fast-growing oil-driven economy, Oman has attracted many migrants. At the 2014 census the total expatriate population was 1,789,000 or 43.7% of the population. Most migrants are males from India (465,660 for both sexes), Bangladesh (107,125) or Pakistan (84,658). Female migrant workers are mainly from Indonesia (25,300), the Philippines (15,651) or Sri Lanka (10,178). Migrants from Arab countries account for 68,986 migrants (Egypt 29,877, Jordan 7,403, Sudan 6,867, UAE 6,426, Iraq 4,159, Saudi Arabia 725, Bahrain 388, Qatar 168, other 12,683) and other Asian countries for 12,939 migrants. There were 8,541 migrants from Europe, 1,540 from the United States and 15,565 from other countries.
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
Age structure
Median age
Birth rate
Death rate
Population growth rate
Urbanization
Sex ratio
Infant mortality rate
Life expectancy at birth
Obesity - adult prevalence rate
Children under the age of 5 years underweight
"noun:"
Omani(s)
"adjective:"
Omani
Islam (official; majority are Ibadhi, lesser numbers of Sunni and Shia) 85.9%, Christian 6.5%, Hindu 5.5%, Buddhist 0.8%, Jewish <0.1%, other 1%, unaffiliated 0.2%
Arabic (official), English, Baluchi, Kiswahili, Hindi, Urdu, Lawati (Khojki), Gujarati, Zadjali, Ajami, Kamzari, Jibbali (Qarawi): Shehri, Mehri, Habyoti, Bathari, Hikmani, Harsusi, Malayalam, Tamil and other Indian languages
"definition:"
Literacy has been described as the ability to read for knowledge and write coherently and think critically about the written word.
"total population:" 91.1%
"male:" 93.6%
"female:" 85.6% (2015 est.)
Today several thousand Omani-born people have emigrated abroad. The figures are shown below (only countries with more than 100 Omani-born residents are listed). | https://en.wikipedia.org/wiki?curid=22319 |
Economy of Oman
The economy of Oman is dominated by the oil sector, although it was historically rural and agrarian. Oman's GDP per capita has expanded continuously over the past fifty years: it grew 339% in the 1960s, reached peak growth of 1,370% in the 1970s, scaled back to a modest 13% in the 1980s, and rose again to 34% in the 1990s.
Trends in Oman's gross domestic product and gross domestic product per capita at market prices have been estimated by the International Monetary Fund.
Oman liberalised its markets in an effort to accede to the World Trade Organization (WTO) and gained membership in 2000. Further, on 20 July 2006 the U.S. Congress approved the US-Oman Free Trade Agreement. This took effect on 1 January 2009, eliminating tariff barriers on all consumer and industrial products. It also provides strong protections for foreign businesses investing in Oman.
The government also undertook some important policy measures during 2018 with the establishment of a commercial arbitration center, the adoption of a new commercial companies’ law, and a further streamlining of licensing processes through Invest Easy in order to improve the business and investment climate and promote private sector-led growth in the Sultanate.
Oman's economy and revenues from petroleum products have enabled Oman's dramatic development over the past 50 years. Notably however, Oman is not a member of OPEC, although it has coordinated with the group in recent years.
Oil was first discovered in the interior near Fahud in the western desert in 1964. Petroleum Development Oman (PDO) began production in August 1967. The Omani Government owns 60% of PDO, and foreign interests own 40% (Royal Dutch Shell owns 34%; the remaining 6% is owned by Compagnie Francaise des Petroles [Total] and Partex). In 1976, Oman's oil production rose to 366,000 barrels (58,000 m³) per day but declined gradually to about 285,000 barrels (45,000 m³) per day in late 1980 due to the depletion of recoverable reserves. From 1981 to 1986, Oman compensated for declining oil prices by increasing production levels to 600,000 b/d. With the collapse of oil prices in 1986, however, revenues dropped dramatically. Production was cut back temporarily in coordination with the Organization of Petroleum Exporting Countries (OPEC), and production levels again reached 600,000 b/d by mid-1987, which helped increase revenues. By mid-2000, production had climbed to more than 900,000 b/d, where it has remained. Natural gas reserves, which increasingly provide the fuel for power generation and desalination, stand at 18 trillion ft³ (510 km³). The Oman LNG processing plant located in Sur was opened in 2000, with a production capacity of 6.6 million tons per year, as well as smaller quantities of gas liquids, including condensates.
Oman's 10th five-year plan (2020-2025) is the first implementation plan of Vision 2040, and will focus its efforts towards achieving economic diversification. The plan for economic diversification aims to move Oman away from the oil-and-gas-based sources of income, and has earmarked five sectors that have high growth potential and economic returns. These are agriculture and fisheries, manufacturing, logistics and transport, energy and mining, and tourism.
According to the Central Bank of Oman's Annual Report 2018, the Omani crude oil price averaged at US$69.7 a barrel in 2018 as compared to US$51.3 per barrel during 2017. The recovery in oil prices also contributed to growth in non-oil economic activities, reflecting inter-linkages, although the dependency of non-oil activities on oil activities has somewhat weakened in the last few years.
According to the World Bank, growth is expected to increase over 2020–21, driven in part by a large increase in gas production from the new Khazzan gas project and by infrastructure spending plans in both oil and non-oil sectors. Notably, with Khazzan phase I becoming operational, natural gas under the petroleum sector is also emerging as a significant contributor to the Omani economy, with BP committing to invest US$16 billion to develop the field. Meanwhile, the Special Economic Zone Authority of Duqm (SEZAD) attracted $14.2 billion worth of investments in the form of usufruct agreements signed by the end of 2018. With a land area of 2,000 km² and 70 km of coastline along the Arabian Sea, the Duqm Special Economic Zone is the largest in the Middle East and North Africa region and ranks among the largest in the world. Duqm is an integrated economic development composed of zones: a sea port, industrial area, new town, fishing harbor, tourist zone, a logistics center and an education and training zone, all of which are supported by a multimodal transport system that connects it with nearby regions.
On the fiscal front, government expenditure also increased noticeably in 2018 due to higher spending on oil and gas production, defence, subsidies, and elevated interest payments. Government debt also increased, to RO 14,492 million in 2018, with the debt-to-GDP ratio expected to rise to 58 percent by 2020, constraining the ability of fiscal spending to support growth and raising sustainability concerns.
In Oman, the Omanisation programme has been in operation since 1999, working toward replacing expatriates with trained Omani personnel. The goal of this initiative is to provide jobs for the rapidly growing Omani population. The state has allotted subsidies for companies to hire local employees not only to gradually reduce reliance on foreign workers but also to overcome an overwhelming employment preference on the part of Omanis for government jobs.
By the end of 1999, the number of Omanis in government services exceeded the set target of 72%, and in most departments reached 86% of employees. The Ministry has also stipulated fixed Omanisation targets in six areas of the private sector. Most companies have registered Omanisation plans. Since April 1998 a 'green card' has been awarded to companies that meet their Omanisation targets and comply with the eligibility criteria for labour relations. The names of these companies are published in the local press and they receive preferential treatment in their dealings with the Ministry.
Academics working on various aspects of Omanisation include Ingo Forstenlechner from United Arab Emirates University and Paul Knoglinger from the FHWien.
Omanisation in the private sector, however, has not always been successful. One reason is that jobs are still filled by expatriates, who accept lower wages. Studies reveal that an increasing number of job openings in the private sector pay the official minimum salary for nationals, which is an unattractive employment prospect for locals. There is also the problem of placing Omani workers in senior positions, since a significant share of the workforce is young and inexperienced.
In order to meet the training and Omanisation requirements of the banking sector, the Omani Institute of Bankers was established in 1983 and has since played a leading role in increasing the number of Omanis working in the sector. The Central Bank monitors the progress made by the commercial banks with Omanisation and in July 1995 issued a circular stipulating that by the year 2000, at least 75% of senior and middle management positions should be held by Omanis. In the clerical grades 95% of staff should be Omanised and 100% in all other grades. At the end of 1999, no less than 98.8% of all positions were held by Omanis. Women made up 60% of the total. During 2001 the percentage of Omanis employed at senior and middle management levels went up from 76.7% to 78.8%. There was a slight increase in the clerical grade percentage to 98.7%, while the non-clerical grades had already reached 100% Omanisation in 1998. The banking sector currently employs 2,113 senior and middle managers supported by 4,757 other staff.
The Ministry has issued a decision regulating tourist guides, who in future will be required to have a license. This Ministerial decision aims at encouraging professionalism in the industry as well as providing career opportunities for Omanis who will be encouraged to learn foreign languages so as to replace foreign tour guides. In January 1996, a major step forward in the training of Omanis in the hotel industry came with the opening of the National Hospitality Institute (NHI). The Institute is a public company quoted on the Omani Stock exchange. In February 1997, the first batch of 55 male and female trainees, sponsored by the Vocational Training Authority, were awarded their first level certificates and were given on-the-job training in several hotels. In May 1999, the fourth batch of 95 trainees obtained their NVQs, bringing the number of Omanis trained by the Institute to around 450. Omanis now make up 37% of the 34,549 employees in the hotel and catering business, which exceeds the Omanisation target of 30% set by the Government. The NHI has also trained catering staff from the Sultan's Armed Forces and has launched a two-year tour guide course, which includes language training, safe driving, first aid and a knowledge of local history and geography.
The stock market capitalisation of listed companies in Oman was valued at $15,269 million in 2005 by the World Bank. | https://en.wikipedia.org/wiki?curid=22321 |
Telecommunications in Oman
Oman Telecommunication Company (Omantel) has a monopoly in the landline telephone and internet access markets. Its arm Omanmobile offers mobile services. The Omani government owns 70% of Omantel after 30% was listed publicly in 2005. In 2005, Qatar Telecommunication Company (Qtel) and partners were awarded the second license to offer mobile services in the country under the brand Nawras, since rebranded as Ooredoo (Ooredoo Oman). Oman now has four mobile networks offering internet access. The networks providing 4G coverage are Omanmobile, Ooredoo, Renna and Friendi.
In 2019 Omantel introduced its first 5G coverage in the country.
In October 2007 the government overhauled Omantel's board of directors and announced plans to re-merge the two arms of the company and to sell part of its share to a strategic partner. The government also slashed the royalty fee paid on revenue from 12% to 7%.
Country Code: 00968
Landlines in use: 254,051 (February 2010)
Prepaid (36,430) - Postpaid (210,816)- Public Pay-phone (6,805)
Mobile cellular: 4,131,922 (February 2010)
Prepaid (3,767,218) – Postpaid (364,704)
Domestic: open wire, microwave, radiotelephone communications, limited coaxial cable and a domestic satellite system with 8 earth stations.
International: satellite earth stations - 2 Intelsat (Indian Ocean) and 1 Arabsat.
Internet Service Providers (ISPs): 1 (2007)
Country code (Top level domain): OM
Dial-up: Postpaid (50,000) plus prepaid access cards containing a username and a password which give a set number of surfing hours.
The postpaid dial-up service offered by Omantel costs 3 R.O. ($8) per month plus 0.180 R.O. ($0.47) for each hour of use.
Broadband: (29,000)
ADSL services were launched in 2005 in Oman through the provider Omantel, the only ISP in Oman.
Packages available for home users:
Both Omanmobile and Ooredoo Oman offer access to the internet through their EDGE networks. Ooredoo Oman launched its 3G network in selected areas in December 2007 with a download speed of 1 megabit per second. Omanmobile also offers high-speed 3G coverage.
Broadcast stations: 13 (plus 25 low-power repeaters) (1999)
Televisions: 1.6 million (1997)
Broadcast stations: AM 3, FM 9, shortwave 2 (1999)
Radios: 1.4 million (1997)
In April 2008, Nokia Siemens was appointed to replace parts of the existing radio network.
Transport in Oman
This article is about transport in Oman.
"total:"
62,240 km
"paved:"
29,685 km (including 1,943 km of expressways)
"unpaved:"
30,545 km (2012)
Oman has two expressway-grade highways, with the first 8-lane expressway set to open in 2017. Al Batinah Coastal Road runs along the Batinah coast of the Sea of Oman. It forks near Shinas, with one branch leading inland to Wadi Hatta and another continuing to Fujairah. The speed limit is generally 120 km/h. In the Muscat area, this highway is known as Sultan Qaboos Street, and it is the trunk road running through the city. Outside the Muscat area, the interchanges take the form of roundabouts spaced approximately 7 km apart. Each roundabout contains unique features to enliven the streetscape. The roundabouts are named to aid driver navigation.
The other highway is the Muscat Expressway, a 54-kilometre road running from the Al Qurum area of Muscat to the Halban area on the city's outskirts. Al Batinah Expressway is a 256-kilometre, 8-lane highway that continues from the Muscat Expressway at Halban up to the Oman–UAE border at Khatmat Malaha.
Other roads in Muscat Governorate and in some cities such as Sohar and Salalah are dual carriageways with four or six lanes and speed limits ranging from 60 to 120 km/h, while in the rest of Oman the roads are mostly single carriageways.
There are no mainline railways in Oman, but some are planned, including links to adjacent countries. The narrow-gauge Al Hoota Cave Train takes tourists into the cave complex on a 400 m journey lasting about 4 minutes.
The estimated total length of the future Oman National Railway network is 2,135 km. It will be divided into several segments linking Oman's borders with the UAE to Muscat, as part of the GCC Railway Network, and also to the southern parts of the country: the Port of Al Duqm, the Port of Salalah and the Yemen border.
Pipelines: crude oil 1,300 km; natural gas 1,030 km
"total:"
3 ships (1,000 GT or over) totaling 16,306 GT/
"ships by type:"
cargo 1, passenger 1, passenger/cargo 1 (1999 est.)
"total:"
136
"over 3,047 m:"
2
"2,438 to 3,047 m:"
6
"1,524 to 2,437 m:"
56
"914 to 1,523 m:"
37
"under 914 m:"
35 (1999 est.) | https://en.wikipedia.org/wiki?curid=22323 |
Foreign relations of Oman
When Sultan Qaboos bin Said Al Said assumed power in 1970, Oman had limited contacts with the outside world, including neighbouring Arab states. A special treaty relationship permitted the United Kingdom close involvement in Oman's civil and military affairs. Ties with the United Kingdom remained very close throughout Sultan Qaboos' reign, along with strong ties to the United States.
Since 1970, Oman has pursued a moderate foreign policy and expanded its diplomatic relations dramatically. It supported the 1978 Camp David Accords and was one of three Arab League states, along with Somalia and Sudan, which did not break relations with Egypt after the signing of the Egyptian-Israeli Peace Treaty in 1979. During the Persian Gulf crisis, Oman assisted the United Nations coalition effort. Oman has developed close ties to its neighbors; it joined the six-member Gulf Cooperation Council when it was established in 1981.
Oman has traditionally supported Middle East peace initiatives, as it did those in 1983. In April 1994, Oman hosted the plenary meeting of the Water Working Group of the peace process, the first Persian Gulf state to do so.
During the Cold War period, Oman avoided relations with communist countries because of the communist support for the insurgency in Dhofar. In recent years, Oman has undertaken diplomatic initiatives in the Central Asian republics, particularly in Kazakhstan, where it is involved in a joint oil pipeline project. In addition, Oman maintains good relations with Iran, its north-eastern neighbor across the Gulf of Oman, and the two countries regularly exchange delegations. Oman is an active member in international and regional organizations, notably the Arab League and the Gulf Cooperation Council. Its foreign policy is overseen by its Ministry of Foreign Affairs.
The northern boundary with the United Arab Emirates has not been bilaterally defined; the northern section in the Musandam Peninsula is an administrative boundary. | https://en.wikipedia.org/wiki?curid=22325 |
Old Testament
The Old Testament (abbreviated OT) is the first part of the Christian biblical canon, which is based primarily upon the twenty-four books of the Hebrew Bible (or Tanakh), a collection of ancient religious Hebrew writings by the Israelites believed by most Christians and religious Jews to be the sacred Word of God. The second part of Christian Bibles is the New Testament, written in the Koine Greek language.
The books that compose the Old Testament canon, as well as their order and names, differ between Christian denominations. The Catholic canon comprises 46 books, the canons of the Eastern Orthodox and Oriental Orthodox Churches comprise up to 49 books, and the most common Protestant canon comprises 39 books. The 39 books common to all the Christian canons correspond to the 24 books of the Tanakh, with some differences of order, and there are some differences in text. The additional number reflects the splitting of several texts (Kings, Samuel and Chronicles, Ezra–Nehemiah and the Twelve Minor Prophets) into separate books in Christian bibles.
The books that are part of the Christian Old Testament but that are not part of the Hebrew canon are sometimes described as deuterocanonical. In general, Protestant Bibles do not include the deuterocanonical books in their canon, but some versions of Anglican and Lutheran bibles place such books in a separate section called Apocrypha. These extra books are ultimately derived from the earlier Greek Septuagint collection of the Hebrew scriptures and are also Jewish in origin. Some are also contained in the Dead Sea Scrolls.
The Old Testament consists of many distinct books by various authors produced over a period of centuries. Christians traditionally divide the Old Testament into four sections: (1) the first five books or Pentateuch (Torah); (2) the history books telling the history of the Israelites, from their conquest of Canaan to their defeat and exile in Babylon; (3) the poetic and "Wisdom books" dealing, in various forms, with questions of good and evil in the world; and (4) the books of the biblical prophets, warning of the consequences of turning away from God.
The Old Testament contains 39 (Protestant), 46 (Catholic), or more (Orthodox and other) books, divided, very broadly, into the Pentateuch (Torah), the historical books, the "wisdom" books and the prophets.
The table below uses the spellings and names present in modern editions of the Christian Bible, such as the Catholic New American Bible Revised Edition and the Protestant Revised Standard Version and English Standard Version. The spelling and names in both the 1609–10 Douay Old Testament (and in the 1582 Rheims New Testament) and the 1749 revision by Bishop Challoner (the edition currently in print used by many Catholics, and the source of traditional Catholic spellings in English) and in the Septuagint differ from those spellings and names used in modern editions which are derived from the Hebrew Masoretic text.
For the Orthodox canon, Septuagint titles are provided in parentheses when these differ from those editions. For the Catholic canon, the Douaic titles are provided in parentheses when these differ from those editions. Likewise, the King James Version references some of these books by the traditional spelling when referring to them in the New Testament, such as "Esaias" (for Isaiah).
In the spirit of ecumenism more recent Catholic translations (e.g. the New American Bible, Jerusalem Bible, and ecumenical translations used by Catholics, such as the Revised Standard Version Catholic Edition) use the same "standardized" (King James Version) spellings and names as Protestant Bibles (e.g. 1 Chronicles as opposed to the Douaic 1 Paralipomenon, 1–2 Samuel and 1–2 Kings instead of 1–4 Kings) in those books which are universally considered canonical, the protocanonicals.
The Talmud (the Jewish commentary on the scriptures) in Bava Batra 14b gives a different order for the books in "Nevi'im" and "Ketuvim". This order is also cited in Mishneh Torah Hilchot Sefer Torah 7:15. The order of the books of the Torah is universal through all denominations of Judaism and Christianity.
The disputed books, included in one canon but not in others, are often called the Biblical apocrypha, a term that is sometimes used specifically to describe the books in the Catholic and Orthodox canons that are absent from the Jewish Masoretic Text and most modern Protestant Bibles. Catholics, following the Canon of Trent (1546), describe these books as deuterocanonical, while Greek Orthodox Christians, following the Synod of Jerusalem (1672), use the traditional name of "anagignoskomena", meaning "that which is to be read." They are present in a few historic Protestant versions; the German Luther Bible included such books, as did the English 1611 King James Version.
Empty table cells indicate that a book is absent from that canon.
Several of the books in the Eastern Orthodox canon are also found in the appendix to the Latin Vulgate, formerly the official bible of the Roman Catholic Church.
Some of the stories of the Pentateuch may derive from older sources. American physiologist and science writer Homer W. Smith points out similarities between the Genesis creation narrative and that of the Sumerian "Epic of Gilgamesh", such as the inclusion of the creation of the first man (Adam/Enkidu) in the Garden of Eden, a tree of knowledge, a tree of life, and a deceptive serpent. Scholars such as Andrew R. George point out the similarity of the Genesis flood narrative and the Gilgamesh flood myth. Similarities between the origin story of Moses and that of Sargon of Akkad were noted by psychoanalyst Otto Rank in 1909 and popularized by later writers, such as H. G. Wells and Joseph Campbell. Wells concedes in "The Outline of History" that "there is a growing flavour of reality in most of" the later books of the Old Testament, describing the stories of David and Solomon as being detailed with "the harshest facts" only a nearly contemporary writer would likely be able to relate. Similarly, Will Durant states in "Our Oriental Heritage" (1935):
The first five books – Genesis, Exodus, Leviticus, Numbers and Deuteronomy – reached their present form in the Persian period (538–332 BC), and their authors were the elite of exilic returnees who controlled the Temple at that time. The books of Joshua, Judges, Samuel and Kings follow, forming a history of Israel from the Conquest of Canaan to the Siege of Jerusalem c. 587 BC. There is a broad consensus among scholars that these originated as a single work (the so-called "Deuteronomistic history") during the Babylonian exile of the 6th century BC.
The two Books of Chronicles cover much the same material as the Pentateuch and Deuteronomistic history and probably date from the 4th century BC. Chronicles, and Ezra–Nehemiah, were probably finished during the 3rd century BC. Catholic and Orthodox Old Testaments contain two (Catholic Old Testament) to four (Orthodox) Books of Maccabees, written in the 2nd and 1st centuries BC.
These history books make up around half the total content of the Old Testament. Of the remainder, the books of the various prophets – Isaiah, Jeremiah, Ezekiel, and the twelve "minor prophets" – were written between the 8th and 6th centuries BC, with the exceptions of Jonah and Daniel, which were written much later. The "wisdom" books – Job, Proverbs, Ecclesiastes, Psalms, Song of Solomon – have various dates: Proverbs was possibly completed by the Hellenistic period (332–198 BC), though it contains much older material as well; Job was completed by the 6th century BC; Ecclesiastes by the 3rd century BC.
God is consistently depicted as the one who created the world. Although the God of the Old Testament is not consistently presented as the only God who exists, he is always depicted as the only God whom Israel is to worship, the one "true God"; only Yahweh is Almighty, and both Jews and Christians have always interpreted the Bible (both the "Old" and "New" Testaments) as an affirmation of the oneness of Almighty God.
The Old Testament stresses the special relationship between God and his chosen people, Israel, but includes instructions for proselytes as well. This relationship is expressed in the biblical covenant (contract) between the two, received by Moses. The law codes in books such as Exodus and especially Deuteronomy are the terms of the contract: Israel swears faithfulness to God, and God swears to be Israel's special protector and supporter. "The Jewish Study Bible" denies that covenant means contract.
Further themes in the Old Testament include salvation, redemption, divine judgment, obedience and disobedience, faith and faithfulness, among others. Throughout there is a strong emphasis on ethics and ritual purity, both of which God demands, although some of the prophets and wisdom writers seem to question this, arguing that God demands social justice above purity, and perhaps does not even care about purity at all. The Old Testament's moral code enjoins fairness, intervention on behalf of the vulnerable, and the duty of those in power to administer justice righteously. It forbids murder, bribery and corruption, deceitful trading, and many sexual misdemeanors. All morality is traced back to God, who is the source of all goodness.
The problem of evil plays a large part in the Old Testament. The problem the Old Testament authors faced was that a good God must have had just reason for bringing disaster (meaning notably, but not only, the Babylonian exile) upon his people. The theme is played out, with many variations, in books as different as the histories of Kings and Chronicles, the prophets like Ezekiel and Jeremiah, and in the wisdom books like Job and Ecclesiastes.
The process by which scriptures became canons and Bibles was a long one, and its complexities account for the many different Old Testaments which exist today. Timothy H. Lim, a professor of Hebrew Bible and Second Temple Judaism at the University of Edinburgh, identifies the Old Testament as "a collection of authoritative texts of apparently divine origin that went through a human process of writing and editing." He states that it is not a magical book, nor was it literally written by God and passed to mankind. By about the 5th century BC Jews saw the five books of the Torah (the Old Testament Pentateuch) as having authoritative status; by the 2nd century BC the Prophets had a similar status, although without quite the same level of respect as the Torah; beyond that, the Jewish scriptures were fluid, with different groups seeing authority in different books.
Hebrew texts commenced to be translated into Greek in Alexandria in about 280 BC and continued until about 130 BC. These early Greek translations, supposedly commissioned by Ptolemy Philadelphus, were called the Septuagint (Latin: "Seventy") from the supposed number of translators involved (hence its abbreviation "LXX"). This Septuagint remains the basis of the Old Testament in the Eastern Orthodox Church.
It varies in many places from the Masoretic Text and includes numerous books no longer considered canonical in some traditions: 1 and 2 Esdras, Judith, Tobit, 3 and 4 Maccabees, the Book of Wisdom, Sirach, and Baruch. Early modern Biblical criticism typically explained these variations as intentional or ignorant corruptions by the Alexandrian scholars, but most recent scholarship holds it is simply based on early source texts differing from those later used by the Masoretes in their work.
The Septuagint was originally used by Hellenized Jews whose knowledge of Greek was better than Hebrew. But the texts came to be used predominantly by gentile converts to Christianity and by the early Church as its scripture, Greek being the "lingua franca" of the early Church. The three most acclaimed early interpreters were Aquila of Sinope, Symmachus the Ebionite, and Theodotion; in his Hexapla, Origen placed his edition of the Hebrew text beside its transcription in Greek letters and four parallel translations: Aquila's, Symmachus's, the Septuagint's, and Theodotion's. The so-called "fifth" and "sixth editions" were two other Greek translations supposedly miraculously discovered by students outside the towns of Jericho and Nicopolis: these were added to Origen's Octapla.
In 331, Constantine I commissioned Eusebius to deliver fifty Bibles for the Church of Constantinople. Athanasius recorded Alexandrian scribes around 340 preparing Bibles for Constans. Little else is known, though there is plenty of speculation. For example, it is speculated that this may have provided motivation for canon lists, and that Codex Vaticanus and Codex Sinaiticus are examples of these Bibles. Together with the Peshitta and Codex Alexandrinus, these are the earliest extant Christian Bibles. There is no evidence among the canons of the First Council of Nicaea of any determination on the canon. However, Jerome (347–420), in his "Prologue to Judith", makes the claim that the Book of Judith was "found by the Nicene Council to have been counted among the number of the Sacred Scriptures".
In Western Christianity or Christianity in the Western half of the Roman Empire, Latin had displaced Greek as the common language of the early Christians, and in 382 AD Pope Damasus I commissioned Jerome, the leading scholar of the day, to produce an updated Latin bible to replace the Vetus Latina, which was a Latin translation of the Septuagint. Jerome's work, called the Vulgate, was a direct translation from Hebrew, since he argued for the superiority of the Hebrew texts in correcting the Septuagint on both philological and theological grounds. His Vulgate Old Testament became the standard bible used in the Western Church, specifically as the Sixto-Clementine Vulgate, while the Churches in the East continued, and still continue, to use the Septuagint.
Jerome, however, in the Vulgate's prologues describes some portions of books in the Septuagint not found in the Hebrew Bible as being non-canonical (he called them "apocrypha"); he mentions "Baruch" by name in his "Prologue to Jeremiah" and notes that it is neither read nor held among the Hebrews, but does not explicitly call it apocryphal or "not in the canon". The Synod of Hippo (in 393), followed by the Councils of Carthage (397 and 419), may have been the first councils to explicitly accept a canon that includes the books which did not appear in the Hebrew Bible; the councils were under the significant influence of Augustine of Hippo, who regarded the canon as already closed.
In the 16th century, the Protestant reformers sided with Jerome; yet although most Protestant Bibles now have only those books that appear in the Hebrew Bible, the order is that of the Greek Bible.
Rome then officially adopted a canon, the Canon of Trent, which is seen as following Augustine's Carthaginian Councils or the Council of Rome; it includes most, but not all, of the Septuagint (3 Ezra and 3 and 4 Maccabees are excluded). The Anglicans, after the English Civil War, adopted a compromise position, restoring the 39 Articles and keeping the extra books that were excluded by the Westminster Confession of Faith, but only for private study and for reading in churches, while Lutherans kept them for private study, gathered in an appendix as Biblical Apocrypha.
While the Hebrew, Greek and Latin versions of the Hebrew Bible are the best known Old Testaments, there were others. At much the same time as the Septuagint was being produced, translations were being made into Aramaic, the language of Jews living in Palestine and the Near East and likely the language of Jesus: these are called the Aramaic Targums, from a word meaning "translation", and were used to help Jewish congregations understand their scriptures.
For Aramaic Christians there was a Syriac translation of the Hebrew Bible called the Peshitta, as well as versions in Coptic (the everyday language of Egypt in the first Christian centuries, descended from ancient Egyptian), Ethiopic (for use in the Ethiopian church, one of the oldest Christian churches), Armenian (Armenia was the first to adopt Christianity as its official religion), and Arabic.
Christianity is based on the belief that the historical Jesus is also the Christ, as in the Confession of Peter. This belief is in turn based on Jewish understandings of the meaning of the Hebrew term messiah, which, like the Greek "Christ", means "anointed". In the Hebrew Scriptures it describes a king anointed with oil on his accession to the throne: he becomes "the LORD's anointed", or Yahweh's Anointed. By the time of Jesus, some Jews expected that a flesh and blood descendant of David (the "Son of David") would come to establish a real Jewish kingdom in Jerusalem, instead of the Roman province.
Others stressed the Son of Man, a distinctly other-worldly figure who would appear as a judge at the end of time; and some harmonised the two by expecting a this-worldly messianic kingdom which would last for a set period and be followed by the other-worldly age or World to Come. Some thought the Messiah was already present, but unrecognised due to Israel's sins; some thought that the Messiah would be announced by a fore-runner, probably Elijah (as promised by the prophet Malachi, whose book now ends the Old Testament and precedes Mark's account of John the Baptist). None predicted a Messiah who suffers and dies for the sins of all the people. The story of Jesus' death therefore involved a profound shift in meaning from the tradition of the Old Testament.
The name "Old Testament" reflects Christianity's understanding of itself as the fulfillment of Jeremiah's prophecy of a New Covenant (which is similar to "testament" and often conflated) to replace the existing covenant between God and Israel (Jeremiah 31:31). The emphasis, however, has shifted from Judaism's understanding of the covenant as a racially or tribally-based contract between God and Jews to one between God and any person of faith who is "in Christ". | https://en.wikipedia.org/wiki?curid=22326 |
Octal
The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three (starting from the right). For example, the binary representation for decimal 74 is 1001010. Two zeroes can be added at the left: 001 001 010, corresponding to the octal digits 1, 1, 2, yielding the octal representation 112.
In the decimal system each decimal place is a power of ten. For example:
74 = 7 × 10¹ + 4 × 10⁰
In the octal system each place is a power of eight. For example:
112₈ = 1 × 8² + 1 × 8¹ + 2 × 8⁰
By performing the calculation above in the familiar decimal system we see why 112 in octal is equal to 64 + 8 + 2 = 74 in decimal.
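The running example can be checked with Python's built-in base conversions; a minimal sketch (the variable name is illustrative):

```python
n = 0b1001010                          # binary 1001010, i.e. decimal 74
print(oct(n))                          # -> '0o112' (octal 112)
print(1 * 8**2 + 1 * 8**1 + 2 * 8**0)  # -> 74, expanding octal 112 by place value
```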
The Yuki language in California and the Pamean languages in Mexico have octal systems because the speakers count using the spaces between their fingers rather than the fingers themselves.
Octal became widely used in computing when systems such as the UNIVAC 1050, PDP-8, ICL 1900 and IBM mainframes employed 6-bit, 12-bit, 24-bit or 36-bit words. Octal was an ideal abbreviation of binary for these machines because their word size is divisible by three (each octal digit represents three binary digits). So two, four, eight or twelve digits could concisely display an entire machine word. It also cut costs by allowing Nixie tubes, seven-segment displays, and calculators to be used for the operator consoles, where binary displays were too complex to use, decimal displays needed complex hardware to convert radices, and hexadecimal displays needed to display more numerals.
All modern computing platforms, however, use 16-, 32-, or 64-bit words, further divided into eight-bit bytes. On such systems three octal digits per byte would be required, with the most significant octal digit representing two binary digits (plus one bit of the next significant byte, if any). Octal representation of a 16-bit word requires 6 digits, but the most significant octal digit represents (quite inelegantly) only one bit (0 or 1). This representation offers no way to easily read the most significant byte, because it's smeared over four octal digits. Therefore, hexadecimal is more commonly used in programming languages today, since two hexadecimal digits exactly specify one byte. Some platforms with a power-of-two word size still have instruction subwords that are more easily understood if displayed in octal; this includes the PDP-11 and Motorola 68000 family. The modern-day ubiquitous x86 architecture belongs to this category as well, but octal is rarely used on this platform, although certain properties of the binary encoding of opcodes become more readily apparent when displayed in octal, e.g. the ModRM byte, which is divided into fields of 2, 3, and 3 bits, so octal can be useful in describing these encodings.
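A brief Python illustration of this mismatch (the names and the value 0xABCD are illustrative, not from the article):

```python
word = 0xABCD                                # high byte 0xAB, low byte 0xCD
print(f"{word:04X}")                         # -> 'ABCD': two hex digits per byte
print(f"{word:o}")                           # -> '125715': byte boundaries are smeared
print(f"{word >> 8:o}", f"{word & 0xFF:o}")  # -> '253 315': the two bytes in octal
```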
Octal is sometimes used in computing instead of hexadecimal, perhaps most often in modern times in conjunction with file permissions under Unix systems (see chmod). It has the advantage of not requiring any extra symbols as digits (the hexadecimal system is base-16 and therefore needs six additional symbols beyond 0–9). It is also used for digital displays.
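A sketch of the file-permission use in Python (the filename is a placeholder):

```python
import os
import stat

# Each octal digit is one rwx triad: owner, group, other.
os.chmod("example.txt", 0o644)  # rw-r--r--
# The equivalent call spelled out with stat constants:
os.chmod("example.txt", stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
```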
In programming languages, octal literals are typically identified with a variety of prefixes, including the digit 0, the letters o or q, the digit–letter combination 0o, or the symbol & or $. In "Motorola convention", octal numbers are prefixed with @, whereas a small (or capital) letter o or q is added as a postfix following the "Intel convention". In Concurrent DOS, Multiuser DOS and REAL/32 as well as in DOS Plus and DR-DOS various environment variables like $CLS, $ON, $OFF, $HEADER or $FOOTER support an \nnn octal number notation, and DR-DOS DEBUG utilizes \ to prefix octal numbers as well.
For example, the literal 73 (base 8) might be represented as 073, o73, q73, 0o73, \73, @73, &73, $73 or 73o in various languages.
Newer languages have been abandoning the prefix 0, as decimal numbers are often represented with leading zeroes. The prefix q was introduced to avoid the prefix o being mistaken for a zero, while the prefix 0o was introduced to avoid starting a numerical literal with an alphabetic character (like o or q), since these might cause the literal to be confused with a variable name. The prefix 0o also follows the model set by the prefix 0x used for hexadecimal literals in the C language; it is supported by Haskell, OCaml, Python as of version 3.0, Raku, Ruby, Tcl as of version 9, and it is intended to be supported by ECMAScript 6 (the prefix 0 originally stood for base 8 in JavaScript but could cause confusion, therefore it has been discouraged in ECMAScript 3 and dropped in ECMAScript 5).
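For example, Python's 0o prefix and octal parsing, shown for the literal 73 used above (a minimal sketch):

```python
print(0o73)            # -> 59: octal literal with the 0o prefix
print(int("73", 8))    # -> 59: parsing bare octal digits
print(int("0o73", 0))  # -> 59: base 0 infers the base from the prefix
```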
Octal numbers are used in some programming languages (C, Perl, PostScript…) for textual/graphical representations of byte strings when some byte values (unrepresented in a code page, non-graphical, having special meaning in the current context, or otherwise undesired) have to be escaped as \nnn. Octal representation may be particularly handy with non-ASCII bytes of UTF-8, which encodes groups of 6 bits, and where any start byte has octal value \3nn and any continuation byte has octal value \2nn.
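A small Python demonstration of these escape ranges, assuming a two-byte UTF-8 character (the choice of "é" is illustrative):

```python
data = "é".encode("utf-8")           # UTF-8 bytes 0xC3 0xA9
print([f"\\{b:03o}" for b in data])  # -> ['\\303', '\\251']
# The start byte falls in the \3nn range, the continuation byte in \2nn.
```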
Octal was also used for floating point in the Ferranti Atlas (1962), Burroughs B5500 (1964), Burroughs B5700 (1971), Burroughs B6700 (1971) and Burroughs B7700 (1972) computers.
Transponders in aircraft transmit a code, expressed as a four-octal-digit number, when interrogated by ground radar. This code is used to distinguish different aircraft on the radar screen.
To convert integer decimals to octal, divide the original number by the largest possible power of 8 and divide the remainders by successively smaller powers of 8 until the power is 1. The octal representation is formed by the quotients, written in the order generated by the algorithm.
For example, to convert 125₁₀ to octal:
125 ÷ 8² = 125 ÷ 64 = 1, remainder 61
61 ÷ 8¹ = 61 ÷ 8 = 7, remainder 5
5 ÷ 8⁰ = 5 ÷ 1 = 5
Therefore, 125₁₀ = 175₈.
Another example:
900 ÷ 8³ = 900 ÷ 512 = 1, remainder 388
388 ÷ 8² = 388 ÷ 64 = 6, remainder 4
4 ÷ 8¹ = 4 ÷ 8 = 0, remainder 4
4 ÷ 8⁰ = 4 ÷ 1 = 4
Therefore, 900₁₀ = 1604₈.
To convert a decimal fraction to octal, multiply by 8; the integer part of the result is the first digit of the octal fraction. Repeat the process with the fractional part of the result, until it is null or within acceptable error bounds.
Example: Convert 0.1640625 to octal:
0.1640625 × 8 = 1.3125 = 1 + 0.3125
0.3125 × 8 = 2.5 = 2 + 0.5
0.5 × 8 = 4.0 = 4 + 0
Therefore, 0.1640625₁₀ = 0.124₈.
These two methods can be combined to handle decimal numbers with both integer and fractional parts, using the first on the integer part and the second on the fractional part.
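A minimal Python sketch of the combined conversion; the integer part uses repeated division by 8, which yields the same digits as the largest-power method described above (the helper name is illustrative):

```python
def to_octal(value, frac_digits=6):
    # Integer part: remainders of repeated division by 8, least significant first.
    ipart = int(value)
    fpart = value - ipart
    digits = []
    while ipart:
        ipart, r = divmod(ipart, 8)
        digits.append(str(r))
    result = "".join(reversed(digits)) or "0"
    # Fractional part: the integer part of each multiplication by 8 is the next digit.
    if fpart:
        result += "."
        for _ in range(frac_digits):
            fpart *= 8
            d = int(fpart)
            result += str(d)
            fpart -= d
            if not fpart:
                break
    return result

print(to_octal(125))        # -> '175'
print(to_octal(0.1640625))  # -> '0.124'
print(to_octal(900.5))      # -> '1604.4'
```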
To convert integer decimals to octal, prefix the number with "0.". Perform the following steps for as long as digits remain on the right side of the radix:
Double the value to the left side of the radix, using "octal" rules, move the radix point one digit rightward, and then place the doubled value underneath the current value so that the radix points align. If the moved radix point crosses over a digit that is 8 or 9, convert it to 0 or 1 and add the carry to the next leftward digit of the current value. "Add" "octally" those digits to the left of the radix and simply drop down those digits to the right, without modification.
Example:
To convert a number to decimal, use the formula that defines its base-8 representation:
N = aₙ × 8ⁿ + aₙ₋₁ × 8ⁿ⁻¹ + … + a₁ × 8¹ + a₀ × 8⁰
In this formula, aᵢ is an individual octal digit being converted, where i is the position of the digit (counting from 0 for the right-most digit).
Example: Convert 764₈ to decimal:
764₈ = 7 × 8² + 6 × 8¹ + 4 × 8⁰ = 448 + 48 + 4 = 500₁₀
For double-digit octal numbers this method amounts to multiplying the lead digit by 8 and adding the second digit to get the total.
Example: 65₈ = 6 × 8 + 5 = 53₁₀
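The formula translates directly into Python; a short sketch checked against the built-in parser (variable names are illustrative):

```python
digits = "764"
value = sum(int(d) * 8**i for i, d in enumerate(reversed(digits)))
print(value)          # -> 500, evaluating the place-value formula
print(int("764", 8))  # -> 500, the built-in equivalent
print(6 * 8 + 5)      # -> 53, the double-digit shortcut for octal 65
```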
To convert octals to decimals, prefix the number with "0.". Perform the following steps for as long as digits remain on the right side of the radix: Double the value to the left side of the radix, using "decimal" rules, move the radix point one digit rightward, and then place the doubled value underneath the current value so that the radix points align. "Subtract" "decimally" those digits to the left of the radix and simply drop down those digits to the right, without modification.
Example:
To convert octal to binary, replace each octal digit by its binary representation.
Example: Convert 51₈ to binary:
5₈ = 101₂ and 1₈ = 001₂
Therefore, 51₈ = 101 001₂.
The process is the reverse of the previous algorithm. The binary digits are grouped by threes, starting from the least significant bit and proceeding to the left and to the right. Add leading zeroes (or trailing zeroes to the right of decimal point) to fill out the last group of three if necessary. Then replace each trio with the equivalent octal digit.
For instance, convert binary 1010111100 to octal:
001 010 111 100₂ = 1 2 7 4₈
Therefore, 1010111100₂ = 1274₈.
Convert binary 11100.01001 to octal:
011 100 . 010 010₂ = 3 4 . 2 2₈
Therefore, 11100.01001₂ = 34.22₈.
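The grouping can be checked in Python, which converts between the two bases directly (a minimal sketch):

```python
print(f"{0o51:b}")        # -> '101001': octal 5 -> 101, octal 1 -> 001
print(oct(0b1010111100))  # -> '0o1274'
# Fractional values need manual grouping, e.g. for 11100.01001:
# pad to 011 100 . 010 010, giving 34.22 in octal.
```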
The conversion is made in two steps using binary as an intermediate base. Octal is converted to binary and then binary to hexadecimal, grouping digits by fours, which correspond each to a hexadecimal digit.
For instance, convert octal 1057 to hexadecimal:
1057₈ = 001 000 101 111₂, regrouped as 0010 0010 1111₂ = 22F₁₆
Therefore, 1057₈ = 22F₁₆.
Hexadecimal to octal conversion proceeds by first converting the hexadecimal digits to 4-bit binary values, then regrouping the binary bits into 3-bit octal digits.
For example, to convert 3FA5₁₆:
3FA5₁₆ = 0011 1111 1010 0101₂, regrouped as 011 111 110 100 101₂ = 37645₈
Therefore, 3FA5₁₆ = 37645₈.
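Both directions can be sketched in Python via an integer intermediate, which parses each base directly rather than regrouping bits (values match the worked examples above):

```python
print(hex(int("1057", 8)))   # -> '0x22f'
print(oct(int("3FA5", 16)))  # -> '0o37645'
```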
Due to having only factors of two in its base, many octal fractions have repeating digits, although these tend to be fairly simple; for example, 1/3 = 0.2525…₈, just as 1/3 = 0.3333…₁₀.
The table below gives the expansions of some common irrational numbers in decimal and octal. | https://en.wikipedia.org/wiki?curid=22330 |
October
October is the tenth month of the year in the Julian and Gregorian calendars and the sixth of seven months to have a length of 31 days. The eighth month in the old calendar of Romulus, October retained its name (from the Latin and Greek "ôctō" meaning "eight") after January and February were inserted into the calendar that had originally been created by the Romans. In Ancient Rome, one of the three Mundus patet ceremonies would take place on October 5, the Meditrinalia on October 11, the Augustalia on October 12, the October Horse on October 15, and the Armilustrium on October 19. These dates do not correspond to the modern Gregorian calendar. Among the Anglo-Saxons, it was known as Ƿinterfylleþ, because at this full moon ("fylleþ") winter was supposed to begin.
October is commonly associated with the season of autumn in the Northern hemisphere and with spring in the Southern hemisphere.
"This list does not necessarily imply either official status or general observance."
The last two to three weeks in October (and, occasionally, the first week of November) are the only time of the year during which all of the "Big Four" major professional sports leagues in the U.S. and Canada schedule games; the National Basketball Association begins its preseason and about two weeks later starts the regular season, the National Hockey League is about one month into its regular season, the National Football League is about halfway through its regular season, and Major League Baseball is in its postseason with the League Championship Series and World Series. There have been 19 occasions in which all four leagues have played games on the same day (an occurrence popularly termed a "sports equinox"), with the most recent of these taking place on October 27, 2019. Additionally, the Canadian Football League is typically nearing the end of its regular season during this period, while Major League Soccer is beginning the MLS Cup Playoffs. | https://en.wikipedia.org/wiki?curid=22332 |
Oberkommando des Heeres
The Oberkommando des Heeres (OKH) was the High Command of the German Army during the era of Nazi Germany. It was founded in 1935 as part of Adolf Hitler's re-militarisation of Germany. From 1938 OKH was, together with OKL ("Oberkommando der Luftwaffe", High Command of the Air Force) and OKM ("Oberkommando der Marine", High Command of the Navy), formally subordinated to the OKW ("Oberkommando der Wehrmacht", High Command of the Armed Forces), with the exception of the Waffen-SS. During the war, OKH had the responsibility of strategic planning of Armies and Army Groups, while the General Staff of the OKH managed operational matters. Each German Army also had an "Armeeoberkommando", Army Command, or AOK. Until the German defeat at Moscow in December 1941, OKH and its staff were "de facto" the most important element of German war planning. OKW then took over this function for theatres other than the German–Soviet front.
The OKH commander held the title "Oberbefehlshaber des Heeres" (Supreme Commander of the Army). Following the Battle of Moscow, after OKH commander Field Marshal Walther von Brauchitsch was dismissed, Hitler appointed himself Commander-in-Chief of the Army.
Hitler had been the head of OKW since January 1938, using it to pass orders to the navy (OKM), air force (OKL), and army (OKH). After a major crisis developed in the Battle of Moscow, Walther von Brauchitsch was dismissed (partly because of his failing health), and Hitler appointed himself as head of the OKH while still retaining his position at the OKW. At the same time, he limited the OKH's authority to the Russian front, giving OKW direct authority over army units elsewhere. This enabled Hitler to declare that only he had complete awareness of Germany's strategic situation, should any general request a transfer of resources between the Russian front and another theater of operations.
In 1944, these elements were subordinate to the OKH:
The Commander-in-Chief of the Army ("Oberbefehlshaber des Heeres") was the head of the OKH and the German Army during the years of the Nazi regime. Supreme Commanders of the Army were:
The Chiefs of the OKH General Staff ("Chef des Generalstabes des Heeres") were:
Although both OKW and OKH were headquartered in Zossen during the Third Reich, the functional and operational independence of both establishments were not lost on the respective staff during their tenure. Personnel at the sprawling Zossen compound remarked that even if Maybach 2 (the OKW complex) was completely destroyed, the OKH staff in Maybach 1 would scarcely notice. These camouflaged facilities, separated physically by a fence, also maintained structurally different mindsets towards their objectives.
On 28 April 1945 (two days before his suicide), Hitler formally subordinated OKH to OKW, giving the latter command of forces on the Eastern Front. | https://en.wikipedia.org/wiki?curid=22336 |
Operation Sea Lion
Operation Sea Lion, also written as Operation Sealion ("Unternehmen Seelöwe"), was Nazi Germany's code name for the plan for an invasion of the United Kingdom during the Battle of Britain in the Second World War. Following the Fall of France, Adolf Hitler, the German Führer and Supreme Commander of the Armed Forces, hoped the British government would seek a peace agreement, and he reluctantly considered invasion only as a last resort if all other options failed.
However, once Hitler had determined that Germany would invade the Soviet Union in 1941, the desirability of forcing Britain out of the war before that date increased the attractiveness of an invasion, as potentially offering a quick and decisive victory in the West. As a precondition, Hitler specified the achievement of both air and naval superiority over the English Channel and the proposed landing sites, but the German forces did not achieve either at any point during the war, and both the German High Command and Hitler himself had serious doubts about the prospects for success. Nevertheless, both the German Army and Navy undertook a major programme of preparations for an invasion: training troops, developing specialised weapons and equipment, and modifying transport vessels. A large number of river barges and transport ships were gathered together on the Channel coast, but with Luftwaffe aircraft losses increasing in the Battle of Britain and no sign that the Royal Air Force had been defeated, Hitler postponed Sea Lion indefinitely on 17 September 1940 and it was never put into action.
Adolf Hitler hoped for a negotiated peace with the UK and made no preparations for an amphibious assault on Britain until the Fall of France. At the time, the only forces with experience of, or modern equipment for, such landings were the Japanese, at the Battle of Wuhan in 1938.
In September 1939, the German invasion of Poland succeeded, but it invoked both the French and the British alliances with Poland, and both countries declared war on Germany. On 9 October, Hitler's "Directive No. 6 for the Conduct of the War" planned an offensive to defeat these allies and "win as much territory as possible in Holland, Belgium, and northern France to serve as a base for the successful prosecution of the air and sea war against England".
With the prospect of the Channel ports falling under "Kriegsmarine" (German Navy) control, Grand Admiral ("Großadmiral") Erich Raeder (head of the "Kriegsmarine") attempted to anticipate the obvious next step that might entail and instructed his operations officer, "Kapitän" Hansjürgen Reinicke, to draw up a document examining "the possibility of troop landings in England should the future progress of the war make the problem arise". Reinicke spent five days on this study and set forth the following prerequisites:
On 22 November 1939, the Head of "Luftwaffe" (German Air Force) intelligence Joseph "Beppo" Schmid presented his "Proposal for the Conduct of Air Warfare", which argued for a counter to the British blockade and said "Key is to paralyse the British trade" by blocking imports to Britain and attacking seaports. The OKW ("Oberkommando der Wehrmacht" or "High Command of the Armed Forces") considered the options and Hitler's 29 November "Directive No. 9 – Instructions For Warfare Against The Economy of the Enemy" stated that once the coast had been secured, the "Luftwaffe" and "Kriegsmarine" were to blockade UK ports with sea mines, attack shipping and warships, and make air attacks on shore installations and industrial production. This directive remained in force in the first phase of the Battle of Britain.
In December 1939, the German Army issued its own study paper (designated "Nordwest") and solicited opinions and input from both "Kriegsmarine" and "Luftwaffe". The paper outlined an assault on England's eastern coast between The Wash and the River Thames by troops crossing the North Sea from ports in the Low Countries. It suggested airborne troops as well as seaborne landings of 100,000 infantry in East Anglia, transported by the "Kriegsmarine", which was also to prevent Royal Navy ships from getting through the Channel, while the "Luftwaffe" had to control airspace over the landings. The "Kriegsmarine" response was focused on pointing out the many difficulties to be surmounted if invading England was to be a viable option. It could not envisage taking on the Royal Navy Home Fleet and said it would take a year to organise shipping for the troops. "Reichsmarschall" Hermann Göring, head of the "Luftwaffe", responded with a single-page letter in which he stated, "[A] combined operation having the objective of landing in England must be rejected. It could only be the final act of an already victorious war against Britain as otherwise the preconditions for success of a combined operation would not be met".
Germany's swift and successful occupation of France and the Low Countries gained control of the Channel coast, facing what Schmid's 1939 report called their "most dangerous enemy". Raeder met Hitler on 21 May 1940 and raised the topic of invasion, but warned of the risks and expressed a preference for blockade by air, submarines and raiders.
By the end of May, the "Kriegsmarine" had become even more opposed to invading Britain following its costly victory in Norway; after Operation Weserübung, the "Kriegsmarine" had only one heavy cruiser, two light cruisers, and four destroyers available for operations. Raeder was strongly opposed to Sea Lion, for over half of "Kriegsmarine" surface fleet had been either sunk or badly damaged in "Weserübung", and his service was hopelessly outnumbered by the ships of the Royal Navy. British parliamentarians still arguing for peace negotiations were defeated in the May 1940 War Cabinet Crisis, but throughout July the Germans continued with attempts to find a diplomatic solution.
In a report presented on 30 June, OKW Chief of Staff Alfred Jodl reviewed options to increase pressure on Britain to agree to a negotiated peace. The first priority was to eliminate the Royal Air Force and gain air supremacy. Intensified air attacks against shipping and the economy could affect food supplies and civilian morale in the long term. Reprisal attacks of terror bombing had the potential to cause quicker capitulation but the effect on morale was uncertain. Once the Luftwaffe had control of the air and the British economy had been weakened, an invasion would be a last resort or a final strike ("Todesstoss") after England had already been practically defeated, but could have a quick result. At a meeting that day, OKH Chief of General Staff Franz Halder heard from Secretary of State Ernst von Weizsäcker that Hitler had turned his attention to Russia. Halder met Admiral Otto Schniewind on 1 July, and they shared views without understanding each other's position. Both thought that air superiority was needed first, and could make the invasion unnecessary. They agreed that minefields and U-boats could limit the threat posed by the Royal Navy; Schniewind emphasised the significance of weather conditions.
On 2 July, the OKW asked the services to start preliminary planning for an invasion, as Hitler had concluded that invasion would be achievable in certain conditions, the first of which was command of the air, and specifically asked the "Luftwaffe" when this would be achieved. On 4 July, after asking General Erich Marcks to begin planning an attack on Russia, Halder heard from the "Luftwaffe" that they planned to eliminate the RAF, destroying its aircraft manufacturing and supply systems, with damage to naval forces as a secondary aim. A "Luftwaffe" report presented to the OKW at a meeting on 11 July said that it would take 14 to 28 days to achieve air superiority. The meeting also heard that England was discussing an agreement with Russia. On the same day, Grand Admiral Raeder visited Hitler at the Berghof to persuade him that the best way to pressure the British into a peace agreement would be a siege combining air and submarine attacks. Hitler agreed with him that invasion would be a last resort.
Jodl set out the OKW proposals for the proposed invasion in a memorandum issued on 12 July, which described operation Löwe (Lion) as "a river crossing on a broad front", irritating the "Kriegsmarine". On 13 July, Hitler met Field Marshal von Brauchitsch and Halder at Berchtesgaden and they presented detailed plans prepared by the army on the assumption that the navy would provide safe transport. To the surprise of Von Brauchitsch and Halder, and completely at odds with his normal practice, Hitler did not ask any questions about specific operations, had no interest in details, and made no recommendations to improve the plans; instead he simply told OKW to start preparations.
On 16 July 1940 Hitler issued Führer Directive No. 16, setting in motion preparations for a landing in Britain. He prefaced the order by stating: "As England, in spite of her hopeless military situation, still shows no signs of willingness to come to terms, I have decided to prepare, and if necessary to carry out, a landing operation against her. The aim of this operation is to eliminate the English Motherland as a base from which the war against Germany can be continued, and, if necessary, to occupy the country completely." The code name for the invasion was "Seelöwe", "Sea Lion".
Hitler's directive set four conditions for the invasion to occur:
This ultimately placed responsibility for Sea Lion's success squarely on the shoulders of Raeder and Göring, neither of whom had the slightest enthusiasm for the venture and, in fact, did little to hide their opposition to it. Nor did Directive 16 provide for a combined operational headquarters, similar to the Allies' creation of the Supreme Headquarters Allied Expeditionary Force (SHAEF) for the later Normandy landings, under which all three service branches (Army, Navy, and Air Force) could work together to plan, co-ordinate, and execute such a complex undertaking.
The invasion was to be on a broad front, from around Ramsgate to beyond the Isle of Wight.
Preparations, including overcoming the RAF, were to be in place by mid-August.
Grand Admiral Raeder sent a memorandum to OKW on 19 July, complaining about the onus placed on the navy in relation to the army and air force, and stating that the navy would be unable to achieve its objectives.
The first joint services conference on the proposed invasion was held by Hitler in Berlin on 21 July, with Raeder, Field Marshal von Brauchitsch, and "Luftwaffe" Chief of Staff Hans Jeschonnek. Hitler told them that the British had no hope of survival, and ought to negotiate, but were hoping to get Russia to intervene and halt German oil supplies. Invasion was very risky, and he asked them if direct attacks by air and submarine could take effect by mid September. Jeschonnek proposed large bombing attacks so that responding RAF fighters could be shot down. The idea that invasion could be a surprise "river crossing" was dismissed by Raeder, and the navy could not complete its preparations by mid August. Hitler wanted the air attack to commence early in August and, if it succeeded, the invasion was to start around 25 August before weather deteriorated. Hitler's main interest was the question of countering potential Russian intervention. Halder outlined his first thoughts on defeating Russian forces. Detailed plans were to be made to attack the Soviet Union.
Raeder met Hitler on 25 July to report on navy progress: they were not sure if preparations could be completed during August: he was to present plans at a conference on 31 July. On 28 July he told OKW that ten days would be needed to get the first wave of troops across the Channel, even on a much narrower front. Planning was to resume. In his diary, Halder noted that if what Raeder had said was true, "all previous statements by the navy were so much rubbish and we can throw away the whole plan of invasion". On the next day, Halder dismissed the navy's claims and required a new plan.
The "Luftwaffe" announced on 29 July that they could begin a major air attack at the start of August, and their intelligence reports gave them confidence of a decisive result. Half of their bombers were to be kept in reserve to support the invasion. At a meeting with the army, the navy proposed delay until May 1941, when the new battleships "Bismarck" and "Tirpitz" would be ready. A navy memorandum issued on 30 July said invasion would be vulnerable to the Royal Navy, and autumn weather could prevent necessary maintenance of supplies. The OKW assessed alternatives, including attacking the British in the Mediterranean, and favoured extended operations against England while remaining on good terms with Russia.
At the Berghof conference on 31 July, the "Luftwaffe" were not represented. Raeder said barge conversions would take until 15 September, leaving the only possible 1940 invasion dates as 22–26 September, when weather was likely to be unsuitable. Landings would have to be on a narrow front, and would be better in spring 1941. Hitler wanted the invasion in September as the British army was increasing in strength. After Raeder left, Hitler told von Brauchitsch and Halder that the air attack was to start around 5 August; eight to fourteen days after that, he would decide on the landing operation. London was showing new-found optimism, and he attributed this to their hopes of intervention by Russia, which Germany was to attack in the spring of 1941.
On 1 August 1940, Hitler instructed intensified air and sea warfare to "establish the necessary conditions for the final conquest of England". From 5 August, subject to weather delays, the "Luftwaffe" was "to overpower the English Air Force with all the forces at its command, in the shortest possible time." Attacks were then to be made on ports and food stocks, while leaving alone ports to be used in the invasion, and "air attacks on enemy warships and merchant ships may be reduced except where some particularly favourable target happens to present itself." The "Luftwaffe" was to keep sufficient forces in reserve for the proposed invasion, and was not to target civilians without a direct order from Hitler in response to RAF terror bombing. No decision had been reached on the choice between immediate decisive action and a siege. The Germans hoped the air action would force the British to negotiate, and make invasion unnecessary.
In the Army plan of 25 July 1940, the invasion force was to be organised into two army groups drawn from the 6th Army, the 9th Army and the 16th Army. The first wave of the landing would have consisted of eleven infantry and mountain divisions, the second wave of eight panzer and motorised infantry divisions and finally, the third wave was formed of six further infantry divisions. The initial assault would have also included two airborne divisions and the special forces of the Brandenburg Regiment.
This initial plan was vetoed by opposition from both the Kriegsmarine and the Luftwaffe, who successfully argued that an amphibious force could only be assured air and naval protection if confined to a narrow front, and that the landing areas should be as far from Royal Navy bases as possible. The definitive order of battle adopted on 30 August 1940 envisaged a first wave of nine divisions from the 9th and 16th armies landing along four stretches of beach — two infantry divisions on beach 'B' between Folkestone and New Romney supported by a special forces company of the Brandenburg Regiment, two infantry divisions on beach 'C' between Rye and Hastings supported by three battalions of submersible/floating tanks, two infantry divisions on beach 'D' between Bexhill and Eastbourne supported by one battalion of submersible/floating tanks and a second company of the Brandenburg Regiment, and three infantry divisions on beach 'E' between Beachy Head and Brighton. A single airborne division would land in Kent north of Hythe; with the objective of seizing the aerodrome at Lympne and bridge-crossings over the Royal Military Canal, and in assisting the ground forces in capturing Folkestone. Folkestone (to the east) and Newhaven (to the west) were the only cross-channel port facilities that would have been accessible to the invasion forces; and much depended on these being captured substantially intact or with the capability of rapid repair; in which case the second wave of eight divisions (including all the motorised and armoured divisions) might be unloaded directly onto their respective quaysides. A further six infantry divisions were allocated to the third wave.
The order of battle defined on 30 August remained as the agreed overall plan, but was always considered as potentially subject to change if circumstances demanded it. The Army High Command continued to press for a wider landing area if possible, against the opposition of the Kriegsmarine; in August they had won the concession that, if the opportunity arose, a force might be landed directly from ships onto the seafront at Brighton, perhaps supported by a second airborne force landing on the South Downs. Contrariwise, the Kriegsmarine (fearful of possible fleet action against the invasion forces from Royal Navy ships in Portsmouth) insisted that the divisions shipped from Cherbourg and Le Havre for landing on beach 'E' might be diverted to any one of the other beaches where sufficient space allowed.
Each of the first wave landing forces was divided into three echelons. The first echelon, carried across the Channel on barges, coasters and small motor launches, would consist of the main infantry assault force. The second echelon, carried across the Channel in larger transport vessels, would consist predominantly of artillery, armoured vehicles and other heavy equipment. The third echelon, carried across the channel on barges, would consist of the vehicles, horses, stores and personnel of the division-level support services. Loading of barges and transports with heavy equipment, vehicles and stores would start on S-tag minus nine (in Antwerp); and S minus eight in Dunkirk, with horses not loaded till S minus two. All troops would be loaded onto their barges from French or Belgian ports on S minus two or S minus one. The first echelon would land on the beaches on S-tag itself, preferably at daybreak around two hours after high tide. The barges used for the first echelon would be retrieved by tugs on the afternoon of S-tag, and those still in working order would be drawn up alongside the transport vessels to trans-ship the second echelon overnight, so that much of the second echelon and third echelon could land on S plus one, with the remainder on S plus two. The Navy intended that all four invasion fleets would return across the Channel on the night of S plus two, having been moored for three full days off the South coast of England. The Army had sought to have the third echelon cross in later separate convoys to avoid men and horses having to wait for as long as four days and nights in their barges, but the Kriegsmarine were insistent that they could only protect the four fleets from Royal Navy attack if all vessels crossed the Channel together.
In the summer of 1940, UK Home Forces Command tended to consider East Anglia and the East coast to be the most likely landing sites for a German invasion force, as this would have offered much greater opportunities to seize ports and natural harbours, and would be further from naval forces at Portsmouth. But then the accumulation of invasion barges in French ports from late August 1940 rather indicated a landing on the South coast. Consequently, the main Home Forces mobile reserve force was held back around London, so as to be able to move forwards to protect the capital, either into Kent or Essex. Hence, Sea Lion landings in Kent and Sussex would have been initially opposed by XII Corps of Eastern Command with three infantry divisions and two independent brigades and V Corps of Southern Command with three infantry divisions. In reserve were two more Corps under GHQ Home Forces; located south of London was the VII Corps with the 1st Canadian Infantry Division, an armoured division and an independent armoured brigade, while north of London was IV Corps with an armoured division, infantry division and independent infantry brigade.
The success of the German invasion of Denmark and Norway, on 9 April 1940, had relied extensively on the use of paratroop and glider-borne formations ("Fallschirmjäger") to capture key defensive points in advance of the main invasion forces. The same airborne tactics had also been used in support of the invasions of Belgium and the Netherlands on 10 May 1940. However, although spectacular success had been achieved in the airborne assault on Fort Eben-Emael in Belgium, German airborne forces had come close to disaster in their attempt to seize the Dutch government and capital of The Hague. Around 1,300 of the 22nd Air Landing Division had been captured (subsequently shipped to Britain as prisoners of war), around 250 Junkers Ju 52 transport aircraft had been lost, and several hundred elite paratroops and air-landing infantry had been killed or injured. Consequently, even in September 1940 the Luftwaffe had the capacity to provide only around 3,000 airborne troops to participate in the first wave of Operation Sea Lion.
The Battle of Britain began in early July 1940, with attacks on shipping and ports in the "Kanalkampf" which forced RAF Fighter Command into defensive action. In addition, wider raids gave aircrew experience of day and night navigation, and tested the defences. On 13 August, the German "Luftwaffe" began a series of concentrated aerial attacks (designated "Unternehmen Adlerangriff" or Operation Eagle Attack) on targets throughout the United Kingdom in an attempt to destroy the RAF and establish air superiority over Great Britain. The shift in emphasis from bombing RAF bases to bombing London, however, turned "Adlerangriff" into a short-range strategic bombing operation.
The effect of the switch in strategy is disputed. Some historians argue that the change in strategy cost the "Luftwaffe" the opportunity of winning the air battle and achieving air superiority. Others argue the "Luftwaffe" achieved little in the air battle and that the RAF was not on the verge of collapse, as often claimed. Another perspective has also been put forward, which suggests the Germans could not have gained air superiority before the weather window closed. Others have said that it was unlikely the "Luftwaffe" would ever have been able to destroy RAF Fighter Command: if British losses became severe, the RAF could simply have withdrawn northward, regrouped, and redeployed when, or if, the Germans launched an invasion. Most historians agree Sea Lion would have failed regardless, because of the weakness of German sea power compared to the Royal Navy.
The record of the "Luftwaffe" against naval combat vessels up to that point in the war was poor. In the Norwegian Campaign, despite eight weeks of continuous air supremacy, the "Luftwaffe" sank only two British warships. The German aircrews were not trained or equipped to attack fast-moving naval targets, particularly agile naval destroyers or Motor Torpedo Boats (MTBs). The Luftwaffe also lacked armour-piercing bombs, and its only aerial torpedo capability, essential for defeating larger warships, consisted of a small number of slow and vulnerable Heinkel He 115 floatplanes. The "Luftwaffe" made 21 deliberate attacks on small torpedo boats during the Battle of Britain without sinking any. The British had between 700 and 800 small coastal craft (MTBs, Motor Gun Boats and smaller vessels), making them a critical threat if the "Luftwaffe" could not deal with them. Only nine MTBs were lost to air attack out of 115 sunk by various means throughout the Second World War. Only nine destroyers were sunk by air attack in 1940, out of a force of over 100 operating in British waters at the time, and only five of those were sunk while evacuating Dunkirk, despite long periods of German air superiority, thousands of sorties flown, and hundreds of tons of bombs dropped. The "Luftwaffe"'s record against merchant shipping was also unimpressive: it sank only one in every 100 British vessels passing through British waters in 1940, and most of that total was achieved using mines.
Had an invasion taken place, the Bf 110-equipped "Erprobungsgruppe 210" would have dropped "Seilbomben" just prior to the landings. This secret weapon was intended to black out the electricity supply in south-east England: the aircraft would trail wires across high-voltage transmission lines, a technique probably as dangerous to the aircraft crews as to the British. The equipment for dropping the wires had been fitted to the Bf 110s and tested. However, there was no national electricity network in the UK at this time, only local generation of electricity for each city or town and its surrounding area.
Upon hearing of Hitler's intentions, Italian dictator Benito Mussolini, through his Foreign Minister Count Galeazzo Ciano, quickly offered up to ten divisions and thirty squadrons of Italian aircraft for the proposed invasion. Hitler initially declined any such aid but eventually allowed a small contingent of Italian fighters and bombers, the Italian Air Corps ("Corpo Aereo Italiano" or CAI), to assist in the "Luftwaffe"'s aerial campaign over Britain in October and November 1940.
The most daunting problem for Germany in protecting an invasion fleet was the small size of its navy. The "Kriegsmarine", already numerically far inferior to Britain's Royal Navy, had lost a sizeable portion of its large modern surface units in April 1940 during the Norwegian Campaign, either as complete losses or due to battle damage. In particular, the loss of two light cruisers and ten destroyers was crippling, as these were the very warships most suited to operating in the Channel narrows where the invasion would likely take place. Most U-boats, the most powerful arm of the "Kriegsmarine", were meant for destroying ships, not supporting an invasion.
Although the Royal Navy could not bring the whole of its naval superiority to bear—as most of the fleet was engaged in the Atlantic and Mediterranean, and a substantial proportion had been detached to support Operation Menace against Dakar—the British Home Fleet still had a very large advantage in numbers. It was debatable whether British ships were as vulnerable to enemy air attack as the Germans hoped. During the Dunkirk evacuation, few warships were actually sunk, despite being stationary targets. The overall disparity between the opposing naval forces made the amphibious invasion plan extremely risky, regardless of the outcome in the air. In addition, the "Kriegsmarine" had allocated its few remaining larger and more modern ships to diversionary operations in the North Sea.
The fleet of defeated France, one of the most powerful and modern in the world, might have tipped the balance against Britain if it had been captured by the Germans. However, the pre-emptive destruction of a large part of the French fleet by the British at Mers-el-Kébir, and the scuttling of the remainder by the French themselves at Toulon two years later, ensured that this could not happen.
Those who believed that, regardless of a potential German victory in the air battle, Sea Lion was still not going to succeed included a number of German General Staff members. After the war, Admiral Karl Dönitz said he believed air superiority was "not enough". Dönitz stated, "[W]e possessed neither control of the air or the sea; nor were we in any position to gain it". Erich Raeder, commander-in-chief of the "Kriegsmarine" in 1940, argued the same point in his memoirs.
On 13 August 1940, Alfred Jodl, Chief of Operations in the OKW ("Oberkommando der Wehrmacht") wrote his "Assessment of the situation arising from the views of the Army and Navy on a landing in England." His first point was that "The landing operation must under no circumstances fail. A failure could leave political consequences, which would go far beyond the military ones." He believed that the "Luftwaffe" could meet its essential objectives, but if the "Kriegsmarine" could not meet the operational requirements of the Army for an attack on a broad front with two divisions landed within four days, followed promptly by three further divisions irrespective of weather, "then I consider the landing to be an act of desperation, which would have to be risked in a desperate situation, but which we have no reason whatsoever to undertake at this moment."
The "Kriegsmarine" invested considerable energy in planning and assembling the forces for an elaborate deception plan called Operation Herbstreise or "Autumn Journey". The idea was first mooted by "Generaladmiral" Rolf Carls on 1 August proposing a feint expedition into the North Sea resembling a troop convoy heading for Scotland, with the aim of drawing the British Home Fleet away from the intended invasion routes. Initially, the convoy was to consist of about ten small cargo ships fitted with false funnels to make them appear larger, and two small hospital ships. As the plan gathered momentum, the large ocean liners , , and were added to the list. These were organised into four separate convoys, escorted by light cruisers, torpedo boats and minesweepers, some of which were obsolete vessels being used by naval training bases. The plan was that three days before the actual invasion, the troopships would load the men and equipment of four divisions in major Norwegian and German ports and put to sea, before unloading them again on the same day in quieter locations. Returning to sea, the convoys would head west towards Scotland before turning around at about 21:00 on the following day. In addition, the only heavy warships available to the "Kriegsmarine", the heavy cruisers "Admiral Scheer" and "Admiral Hipper", would attack the British armed merchant cruisers of the Northern Patrol and convoys inbound from Canada; however, the "Scheer"'s repairs overran and if the invasion had taken place in September, would have left the "Hipper" to operate alone.
Lacking surface naval forces capable of meeting the Home Fleet of the Royal Navy in open battle, the main seaborne defence for the first wave invasion fleets would be four massive minefields, which were intended to be laid from S minus nine onwards. The ANTON minefield (off Selsey Bill) and the BRUNO minefield (off Beachy Head), each totalling over 3,000 mines in four rows, would block off the invasion beaches against naval forces from Portsmouth, while the counterpart CAESAR minefield would block off beach 'B' from Dover. A fourth minefield, DORA, was to be laid off Lyme Bay to inhibit naval forces from Plymouth. By the autumn of 1940, the Kriegsmarine had achieved considerable success in laying minefields in support of active operations, notably on the night of 31 August 1940, when the 20th Destroyer Flotilla suffered heavy losses after running into a newly laid German minefield off Texel. However, no plans were made to prevent the mines being cleared by the large force of British minesweepers based in the area. "Vizeadmiral" Friedrich Ruge, who was in charge of the mining operation, wrote after the war that if the minefields had been relatively complete, they would have been a "strong obstacle", but that "even a strong obstacle is not an absolute barrier".
In 1940 the German Navy was ill-prepared for mounting an amphibious assault the size of Operation Sea Lion. Lacking purpose-built landing craft and both doctrinal and practical experience with amphibious warfare, the "Kriegsmarine" was largely starting from scratch. Some efforts had been made during the inter-war years to investigate landing military forces by sea, but inadequate funding severely limited any useful progress.
For the successful German invasion of Norway, German naval forces (assisted in places by thick fog) had simply forced an entry into key Norwegian harbours with motor launches and E-boats against stiff resistance from the outgunned Norwegian army and navy, and had then unloaded troops from destroyers and troop transports directly onto the dockfronts at Bergen, Egersund, Trondheim, Kristiansand, Arendal and Horten. At Stavanger and Oslo, the capture of the port was preceded by the landing of airborne forces. No beach landings were attempted.
The "Kriegsmarine" had taken some small steps in remedying the landing craft situation with construction of the "Pionierlandungsboot 39" (Engineer Landing Boat 39), a self-propelled shallow-draft vessel which could carry 45 infantrymen, two light vehicles or 20 tons of cargo and land on an open beach, unloading via a pair of clamshell doors at the bow. But by late September 1940 only two prototypes had been delivered.
Recognising the need for an even larger craft capable of landing both tanks and infantry onto a hostile shore, the "Kriegsmarine" began development of the 220-ton "Marinefährprahm" (MFP) but these too were unavailable in time for a landing on British soil in 1940, the first of them not being commissioned until April 1941.
Given barely two months to assemble a large seagoing invasion fleet, the "Kriegsmarine" opted to convert inland river barges into makeshift landing craft. Approximately 2,400 barges were collected from throughout Europe (860 from Germany, 1,200 from the Netherlands and Belgium and 350 from France). Of these, only about 800 were powered, and even their engines were insufficient to cross the Channel unassisted. All barges would therefore be towed across by tugs, two barges to a tug in line abreast, preferably one powered and one unpowered. On reaching the English coast, the powered barges would be cast off to beach themselves under their own power; the unpowered barges would be taken inshore as far as possible by the tugs and anchored, so as to settle on the falling tide, their troops unloading some hours later than those on the powered barges. Accordingly, the Sea Lion plans were prepared on the basis that the landings would take place shortly after high tide, on a date when this coincided with sunrise. Towards evening, on the following rising tide, the empty barges would be retrieved by their tugs to receive the second echelon forces, stores and heavy equipment from the awaiting transport vessels, which would have remained moored off the beach throughout the day. By contrast, the Allied D-Day landings in 1944 were timed to happen at low tide, with all troops and equipment transhipped from their transport vessels to landing craft offshore overnight.
All the troops intended to land at beach 'E', the westernmost of the four beaches, would cross the Channel in larger transport vessels, with the barges towed across loaded with equipment but empty of troops, and would then be transferred onto their barges a short distance from the beach. For the landings on the other three beaches, the first echelon of the invasion forces (and their equipment) would be loaded onto their barges in French or Belgian ports, while the second echelon crossed the Channel in the associated transport vessels. Once the first echelon had been unloaded onto the beach, the barges would return to the transport vessels to ferry the second echelon ashore. The same procedure was envisaged for the second wave (unless the first wave had captured a usable port). Trials showed that this process of trans-shipment in open sea, in any circumstances other than flat calm, would likely take at least 14 hours, such that the disembarkation of the first wave might extend over several tides and several days, with the barges and invasion fleet subsequently needing to be escorted back across the Channel together for repairs and reloading. Since loading the tanks, vehicles and stores of the second wave onto the returned barges and transport ships would take at least a week, the second wave could not be expected to land much sooner than ten days after the first, and more likely later still.
Two types of inland river barge were generally available in Europe for use in Sea Lion: the "peniche", which was 38.5 meters long and carried 360 tons of cargo, and the "Kampine", which was 50 meters long and carried 620 tons of cargo. Of the barges collected for the invasion, 1,336 were classified as "peniches" and 982 as "Kampinen". For simplicity's sake, the Germans designated any barge up to the size of a standard "peniche" as Type A1 and anything larger as Type A2.
Converting the assembled barges into landing craft involved cutting an opening in the bow for off-loading troops and vehicles, welding longitudinal I-beams and transverse braces to the hull to improve seaworthiness, adding a wooden internal ramp and pouring a concrete floor in the hold to allow for tank transport. As modified, the Type A1 barge could accommodate three medium tanks while the Type A2 could carry four. Tanks, armoured vehicles and artillery were envisaged as crossing the Channel in one of around 170 transport ships, which would be anchored off the landing beaches while the barges disembarked the first echelon of assault troops; those in powered barges disembarking soonest. The empty barges would then have been retrieved by tugs on the following rising tide, so as to have the second echelon (including tanks and other heavy equipment) loaded onto them using ship's derricks. Barges would consequently have shuttled between ships and beaches over at least two days before being assembled together for the escorted night-time return voyage across the Channel.
This barge was a Type A altered to carry and rapidly off-load the submersible tanks ("Tauchpanzer") developed for use in Sea Lion. It had the advantage of being able to unload its tanks directly into deep water several hundred yards from shore, whereas the unmodified Type A had to be firmly grounded on the beach, making it more vulnerable to enemy fire. The Type B required a longer external ramp (11 meters) with a float attached to the front of it. Once the barge anchored, the crew would extend the internally stowed ramp using block and tackle sets until it rested on the water's surface. As the first tank rolled forward onto the ramp, its weight would tilt the forward end of the ramp into the water and push it down onto the seabed. Once the tank rolled off, the ramp would bob back up to a horizontal position, ready for the next one to exit. If a barge was securely grounded along its full length, the longer ramp could also be used to discharge submersible tanks directly onto the beach, and beachmasters were given the option of landing tanks by this method if the risk of losses while running submerged appeared too high. The Navy High Command increased its initial order for 60 of these vessels to 70 in order to compensate for expected losses. A further five were ordered on 30 September as a reserve.
The Type C barge was specifically converted to carry the Panzer II amphibious tank ("Schwimmpanzer"). Because of the extra width of the floats attached to this tank, cutting a broad exit ramp into the bow of the barge was not considered advisable as it would have compromised the vessel's seaworthiness to an unacceptable degree. Instead, a large hatch was cut into the stern, thereby allowing the tanks to drive directly into deep water before turning under their own motive power and heading towards shore. The Type C barge could accommodate up to four "Schwimmpanzern" in its hold. Approximately 14 of these craft were available by the end of September.
During the planning stages of Sea Lion, it was deemed desirable to provide the advanced infantry detachments (making the initial landings) with greater protection from small-arms and light artillery fire by lining the sides of a powered Type A barge with concrete. Wooden slides were also installed along the barge's hull to accommodate ten assault boats ("Sturmboote"), each capable of carrying six infantrymen and powered by a 30 hp outboard motor. The extra weight of this additional armour and equipment reduced the barge's load capacity to 40 tons. By mid-August, 18 of these craft, designated Type AS, had been converted, and another five were ordered on 30 September.
The "Luftwaffe" had formed its own special command ("Sonderkommando") under Major Fritz Siebel to investigate the production of landing craft for Sea Lion. Major Siebel proposed giving the unpowered Type A barges their own motive power by installing a pair of surplus BMW aircraft engines, driving propellers. The "Kriegsmarine" was highly sceptical of this venture, but the "Heer" (Army) high command enthusiastically embraced the concept and Siebel proceeded with the conversions.
The aircraft engines were mounted on a platform supported by iron scaffolding at the aft end of the vessel, with cooling water stored in tanks mounted above deck. As completed, the Type AF had a speed of six knots and a range of 60 nautical miles, which could be extended by fitting auxiliary fuel tanks. Disadvantages of this arrangement included an inability to back the vessel astern, limited manoeuvrability, and the deafening noise of the engines, which would have made voice commands problematic.
By 1 October 128 Type A barges had been converted to airscrew propulsion and, by the end of the month, this figure had risen to over 200.
The "Kriegsmarine" later used some of the motorised Sea Lion barges for landings on the Russian-held Baltic islands in 1941 and, though most of them were eventually returned to the inland rivers they originally plied, a reserve was kept for military transport duties and for filling out amphibious flotillas.
As a consequence of employing all of their available cruisers in the North Sea deception operation, only light forces would have been available to protect the vulnerable transport fleets. The plan revised on 14 September 1940 by Admiral Günther Lütjens called for three groups of five U-boats, all seven destroyers and seventeen torpedo boats to operate west of the mine barrier in the Channel, while two groups of three U-boats and all the available E-boats operated north of it. Lütjens suggested the inclusion of the old battleships "Schlesien" and "Schleswig-Holstein", which were used for training. They were considered too vulnerable to send into action without improvement, especially considering the fate of their sister ship "Pommern", which had blown up at the Battle of Jutland. The Blohm und Voss shipyard estimated that even a minimal upgrade of armour and armament would take six weeks, and the idea was dropped, as was a suggestion that they be used as troopships. Four coasters were converted to auxiliary gunboats by the addition of a single 15 cm naval gun and another was fitted with two 10.5 cm guns, while a further twenty-seven smaller vessels were converted into light gunboats by attaching a single ex-French 75 mm field gun to an improvised platform; these were expected to provide naval gunfire support as well as fleet defence against modern British cruisers and destroyers.
Providing armour support for the initial wave of assault troops was a critical concern for Sea Lion planners, and much effort was devoted to finding practical ways of rapidly getting tanks onto the invasion beaches in support of the first echelon. Though the Type A barges could disembark several medium tanks onto an open beach, this could be accomplished only once the tide had fallen further and the barges were firmly grounded along their full length; otherwise a leading tank might topple off an unsteady ramp and block those behind from deploying. The time needed to assemble the external ramps also meant that both the tanks and the ramp assembly crews would be exposed to close-quarter enemy fire for a considerable time. A safer and faster method was needed, and the Germans eventually settled on providing some tanks with floats and making others fully submersible. It was nevertheless recognised that a high proportion of these specialised tanks might never make it off the beach.
The "Schwimmpanzer" II Panzer II, at 8.9 tons, was light enough to float with the attachment of long rectangular buoyancy boxes on each side of the tank's hull. The boxes were machined from aluminium stock and filled with Kapok sacks for added buoyancy. Motive power came from the tank's own tracks which were connected by rods to a propeller shaft running through each float. The "Schwimmpanzer" II could make 5.7 km/h in the water. An inflatable rubber hose around the turret ring created a waterproof seal between the hull and turret. The tank's 2 cm gun and coaxial machinegun were kept operational and could be fired while the tank was still making its way ashore. Because of the great width of the pontoons, "Schwimmpanzer" IIs were to be deployed from specially-modified Type C landing barges, from which they could be launched directly into open water from a large hatch cut into the stern. The Germans converted 52 of these tanks to amphibious use prior to Sea Lion's cancellation.
The "Tauchpanzer" or deep-wading tank (also referred to as the "U-Panzer" or "Unterwasser Panzer") was a standard Panzer III or Panzer IV medium tank with its hull made completely waterproof by sealing all sighting ports, hatches and air intakes with tape or caulk. The gap between the turret and hull was sealed with an inflatable hose while the main gun mantlet, commander's cupola and radio operator's machine gun were given special rubber coverings. Once the tank reached the shore, all covers and seals could be blown off via explosive cables, enabling normal combat operation.
Fresh air for both the crew and engine was drawn into the tank via an 18 m long rubber hose to which a float was attached to keep one end above the water's surface. A radio antenna was also attached to the float to provide communication between the tank crew and the transport barge. The tank's engine was converted to be cooled with seawater, and the exhaust pipes were fitted with overpressure valves. Any water seeping into the tank's hull could be expelled by an internal bilge pump. Navigation underwater was accomplished using a directional gyrocompass or by following instructions radioed from the transport barge.
Experiments conducted at the end of June and early July at Schilling, near Wilhelmshaven, showed that the submersible tanks functioned best when kept moving along the seabed: if halted for any reason, they tended to sink into the bottom and remain stuck there. Obstacles such as underwater trenches or large rocks also tended to stop the tanks, and it was therefore decided that they should be landed at high tide, so that any mired tanks could be retrieved at low tide. Submersible tanks could operate in water up to 15 metres deep.
The "Kriegsmarine" initially expected to use 50 specially-converted motor coasters to transport the submersible tanks, but testing with the coaster "Germania" showed this to be impractical. This was due to the ballast needed to offset the weight of the tanks, and the requirement that the coasters be grounded to prevent them from capsizing as the tanks were transferred by crane onto the vessel's wooden side ramps. These difficulties led to development of the Type B barge.
By the end of August the Germans had converted 160 Panzer IIIs, 42 Panzer IVs and 52 Panzer IIs to amphibious use. This gave them a paper strength of 254 machines, roughly the number that would otherwise have been allocated to an armoured division. The tanks were divided into four battalions or detachments, labelled "Panzer-Abteilung" A, B, C and D, and were to carry sufficient fuel and ammunition for a combat radius of 200 km.
As part of a "Kriegsmarine" competition, prototypes for a prefabricated "heavy landing bridge" or jetty (similar in function to later Allied Mulberry Harbours) were designed and built by Krupp Stahlbau and Dortmunder Union and successfully overwintered in the North Sea in 1941–42. Krupp's design won out, as it only required one day to install, as opposed to twenty-eight days for the Dortmunder Union bridge. The Krupp bridge consisted of a series of 32m-long connecting platforms, each supported on the seabed by four steel columns. The platforms could be raised or lowered by heavy-duty winches in order to accommodate the tide. The German Navy initially ordered eight complete Krupp units composed of six platforms each. This was reduced to six units by the autumn of 1941, and eventually cancelled altogether when it became apparent that Sea Lion would never take place.
In mid-1942, both the Krupp and Dortmunder prototypes were shipped to the Channel Islands and installed together off Alderney, where they were used for unloading materials needed to fortify the island. Referred to as the "German jetty" by local inhabitants, they remained standing for the next thirty-six years until demolition crews finally removed them in 1978–79, a testament to their durability.
The German Army developed a portable landing bridge of its own nicknamed "Seeschlange" (Sea Snake). This "floating roadway" was formed from a series of joined modules that could be towed into place to act as a temporary jetty. Moored ships could then either unload their cargo directly onto the roadbed or lower it down onto waiting vehicles via their heavy-duty booms. The "Seeschlange" was successfully tested by the Army Training Unit at Le Havre in France in the autumn of 1941 and later chosen for use in "Operation Herkules", the proposed Italo-German invasion of Malta. It was easily transportable by rail.
A specialised vehicle intended for Sea Lion was the "Landwasserschlepper" (LWS), an amphibious tractor under development since 1935. It was originally intended for use by Army engineers to assist with river crossings. Three of them were assigned to Tank Detachment 100 as part of the invasion; it was intended to use them for pulling ashore unpowered assault barges and towing vehicles across the beaches. They would also have been used to carry supplies directly ashore during the six hours of falling tide when the barges were grounded. This involved towing a "Kässbohrer" amphibious trailer capable of transporting 10–20 tons of freight behind the LWS. The LWS was demonstrated to General Halder on 2 August 1940 by the Reinhardt Trials Staff on the island of Sylt and, though he was critical of its high silhouette on land, he recognised the overall usefulness of the design. It was proposed to build enough tractors that one or two could be assigned to each invasion barge, but the late date and difficulties in mass-producing the vehicle prevented this.
Operation Sea Lion would have been the first ever amphibious invasion by a mechanised army, and the largest amphibious invasion since Gallipoli. The Germans had to invent and improvise a great deal of equipment, and also proposed to field new weapons and upgraded versions of existing equipment for the first time.
The German Army High Command ("Oberkommando des Heeres", OKH) originally planned an invasion on a vast scale, landing over forty divisions from Dorset to Kent. This was far in excess of what the "Kriegsmarine" could supply, and final plans were more modest, calling for nine divisions to make an amphibious assault on Sussex and Kent with around 67,000 men in the first echelon, supported by a single airborne division of 3,000 men. The chosen invasion sites ran from Rottingdean in the west to Hythe in the east.
The "Kriegsmarine" wanted a front as short as possible, as it regarded this as more defensible. Admiral Raeder wanted a front stretching from Dover to Eastbourne and stressed that shipping between Cherbourg/Le Havre and Dorset would be exposed to attacks from the Royal Navy based in Portsmouth and Plymouth. General Halder rejected this: "From the army's point of view I regard it as complete suicide, I might just as well put the troops that have landed straight through the sausage machine".
One complication was the tidal flow in the English Channel, where high water moves from west to east, with high water at Lyme Regis occurring around six hours before it reaches Dover. If all the landings were to be made at high water across a broad front, they would have to be made at different times along different parts of the coast, with the landings at Dover made six hours after any landings in Dorset, thus losing the element of surprise. If the landings were instead to be made simultaneously, methods would have to be devised to disembark men, vehicles and supplies at all states of the tide. This was another reason to favour purpose-built landing craft.
With Germany's occupation of the Pas-de-Calais region of northern France, the possibility of closing the Strait of Dover to Royal Navy warships and merchant convoys by the use of land-based heavy artillery became readily apparent, both to the German High Command and to Hitler. Even the "Kriegsmarine"'s Naval Operations Office deemed this a plausible and desirable goal, especially given the relatively short distance, approximately 34 km (21 mi), between the French and English coasts. Orders were therefore issued to assemble and begin emplacing every Army and Navy heavy artillery piece available along the French coast, primarily at Pas-de-Calais. This work was assigned to the "Organisation Todt" and commenced on 22 July 1940.
By early August, four traversing turrets were fully operational, as were all of the Army's railway guns. Seven of these weapons, six 28 cm K5 pieces and a single K12 gun with a range of some 115 km, could only be used against land targets. The remainder, thirteen 28 cm guns and five further railway pieces, plus additional motorised batteries comprising twelve 24 cm guns and ten 21 cm weapons, could be fired at shipping but were of limited effectiveness due to their slow traverse speed, long loading time and unsuitable ammunition types.
Better suited for use against naval targets were the four heavy naval batteries installed by mid-September: "Friedrich August" with three 30.5 cm barrels; "Prinz Heinrich" with two 28 cm guns; "Oldenburg" with two 24 cm weapons; and, largest of all, "Siegfried" (later renamed "Batterie Todt") with a pair of 38 cm guns. Fire control for these weapons was provided both by spotter aircraft and by DeTeGerät radar sets installed at Blanc Nez and Cap d'Alprech. These sets were capable of detecting targets well out into the Channel, including small British patrol craft inshore of the English coast. Two additional radar sites were added by mid-September: a DeTeGerät at Cap de la Hague and a FernDeTeGerät long-range radar at Cap d'Antifer near Le Havre.
To strengthen German control of the Channel narrows, the Army planned to establish mobile artillery batteries along the English shoreline quickly once a beachhead had been firmly established. Towards that end, 16th Army's "Artillerie Kommando 106" was slated to land with the second wave to provide fire protection for the transport fleet as early as possible. This unit consisted of ninety-six guns of two calibres, twenty-four of the heavier type and seventy-two of the lighter. About one third of them were to be deployed on English soil by the end of Sea Lion's first week.
The presence of these batteries was expected to greatly reduce the threat posed by British destroyers and smaller craft along the eastern approaches as the guns would be sited to cover the main transport routes from Dover to Calais and Hastings to Boulogne. They could not entirely protect the western approaches, but a large area of those invasion zones would still be within effective range.
The British military was well aware of the dangers posed by German artillery dominating the Dover Strait and on 4 September 1940 the Chief of Naval Staff issued a memo stating that if the Germans "…could get possession of the Dover defile and capture its gun defences from us, then, holding these points on both sides of the Straits, they would be in a position largely to deny those waters to our naval forces". Should the Dover defile be lost, he concluded, the Royal Navy could do little to interrupt the flow of German supplies and reinforcements across the Channel, at least by day, and he further warned that "…there might really be a chance that they (the Germans) might be able to bring a serious weight of attack to bear on this country". The very next day the Chiefs of Staff, after discussing the importance of the defile, decided to reinforce the Dover coast with more ground troops.
The guns started to fire in the second week of August 1940 and were not silenced until 1944, when the batteries were overrun by Allied ground forces. They caused 3,059 alerts, 216 civilian deaths, and damage to 10,056 premises in the Dover area. However, despite firing on frequent slow-moving coastal convoys, often in broad daylight, for almost the whole of that period (there was an interlude in 1943), there is no record of any vessel being hit by them, although one seaman was killed and others were injured by shell splinters from near misses. Whatever the perceived risk, this inability to hit any moving ship does not support the contention that the German coastal batteries would have been a serious threat to fast destroyers or smaller warships.
During the summer of 1940, both the British public and the Americans believed that a German invasion was imminent, and they studied the forthcoming high tides of 5–9 August, 2–7 September, 1–6 October, and 30 October – 4 November as likely dates. The British prepared extensive defences, and, in Churchill's view, "the great invasion scare" was "serving a most useful purpose" by "keeping every man and woman tuned to a high pitch of readiness". He did not think the threat credible. On 10 July, he advised the War Cabinet that the possibility of invasion could be ignored, as it "would be a most hazardous and suicidal operation"; and on 13 August that "now that we were so much stronger", he thought "we could spare an armoured brigade from this country". Overriding General Dill, Churchill initiated Operation Apology, by which a series of troop convoys, including three tank regiments and eventually the entire 2nd Armoured Division, were sent around the Cape of Good Hope to reinforce General Wavell in the Middle East in support of operations against Italian colonial forces (Italy had declared war on 10 June). Furthermore, at Churchill's urging, on 5 August the War Cabinet approved Operation Menace, in which a substantial proportion of the Home Fleet (two battleships, an aircraft carrier, five cruisers and twelve destroyers), together with five of the six battalions of Royal Marines, was dispatched to Dakar on 30 August in an attempt to neutralise the battleship Richelieu and detach French West Africa from Vichy France to the control of the Free French. Overall, these actions demonstrated Churchill's confidence in August 1940 that the immediate danger of a German invasion was over, that the Home Forces were fully adequate to defend Great Britain if the Germans did come, and that the interests of the British Empire were, for the present, better served by attacking the colonial forces of Germany's allies rather than by confronting the German Army directly.
The Germans were confident enough to film a simulation of the intended invasion in advance. A crew turned up at the Belgian port of Antwerp in early September 1940 and, for two days, they filmed tanks and troops landing from barges on a nearby beach under simulated fire. It was explained that, as the invasion would happen at night, Hitler wanted the German people to see all the details.
In early August, the German command had agreed that the invasion should begin on 15 September, but the Navy's revisions to its schedule set the date back to 20 September. At a conference on 14 September, Hitler praised the various preparations, but told his service chiefs that, as air superiority had still not been achieved, he would review whether to proceed with the invasion. At this conference, he gave the Luftwaffe the opportunity to act independently of the other services, with intensified continuous air attacks to overcome British resistance; on 16 September, Göring issued orders for this new phase of the air attack. On 17 September 1940, Hitler held a meeting with "Reichsmarschall" Hermann Göring and "Generalfeldmarschall" Gerd von Rundstedt during which he became convinced the operation was not viable. Control of the skies was still lacking, and co-ordination among three branches of the armed forces was out of the question. Later that day, Hitler ordered the postponement of the operation. He ordered the dispersal of the invasion fleet in order to avert further damage by British air and naval attacks.
The postponement coincided with rumours that there had been an attempt to land on British shores on or about 7 September, which had been repulsed with large German casualties. The story was later expanded to include false reports that the British had set the sea on fire using flaming oil. Both versions were widely reported in the American press and in William L. Shirer's "Berlin Diary", but both were officially denied by Britain and Germany. Author James Hayward has suggested that the whispering campaign around the "failed invasion" was a successful example of British black propaganda to bolster morale at home and in occupied Europe, and convince America that Britain was not a lost cause.
On 12 October 1940, Hitler issued a directive releasing forces for other fronts. The appearance of preparations for Sea Lion was to be continued to keep political pressure on Britain, and a fresh directive would be issued if it was decided that invasion was to be reconsidered in the spring of 1941. On 12 November 1940, Hitler issued Directive No. 18 demanding further refinement to the invasion plan. On 1 May 1941, fresh invasion orders were issued under the codename "Haifische" (shark), accompanied by additional landings on the southwest and northeast coasts of England codenamed "Harpune Nord" and "Harpune Süd" (harpoon north and south), although commanders of naval stations were informed that these were deception plans. Work continued on the various amphibious warfare developments such as purpose-built landing craft, which were later employed in operations in the Baltic.
While the bombing of Britain intensified during the Blitz, Hitler issued his Directive No. 21 on 18 December 1940, instructing the Wehrmacht to be ready for a quick attack to commence his long-planned invasion of the Soviet Union. Sea Lion lapsed, never to be resumed. On 23 September 1941, Hitler ordered all Sea Lion preparations to cease, but it was 1942 before the last of the barges at Antwerp were returned to trade. Hitler's last recorded order with reference to Sea Lion, on 24 January 1944, released for other uses equipment that was still stockpiled for the invasion, while stipulating that twelve months' notice would be given of any resumption.
"Reichsmarschall" Hermann Göring, Commander-in-Chief of the "Luftwaffe", believed the invasion could not succeed and doubted whether the German air force would be able to win unchallenged control of the skies; nevertheless he hoped that an early victory in the Battle of Britain would force the UK government to negotiate, without any need for an invasion. Adolf Galland, commander of fighters at the time, claimed invasion plans were not serious and that there was a palpable sense of relief in the when it was finally called off. "Generalfeldmarschall" Gerd von Rundstedt also took this view and thought that Hitler never seriously intended to invade Britain; he was convinced that the whole thing was a bluff to put pressure on the British government to come to terms following the Fall of France. He observed that Napoleon had failed to invade and the difficulties that confounded him did not appear to have been solved by the Sea Lion planners. In fact, in November 1939, the German naval staff produced a study on the possibility of an invasion of Britain and concluded that it required two preconditions, air and naval superiority, neither of which Germany ever had. Grand Admiral Karl Dönitz believed air superiority was not enough and admitted, "We possessed neither control of the air or the sea; nor were we in any position to gain it." Grand Admiral Erich Raeder thought it would be impossible for Germany to attempt an invasion until the spring of 1941; he instead called for Malta and the Suez Canal to be overrun so German forces could link up with Japanese forces in the Indian Ocean to bring about the collapse of the British Empire in the Far East, and prevent the Americans from being able to use British bases if the United States entered the war.
As early as 14 August 1940, Hitler had told his generals that he would not attempt to invade Britain if the task seemed too dangerous, before adding that there were other ways of defeating the UK than invading.
In "Memoirs of WWII", Churchill stated, "Had the Germans possessed in 1940 well trained [and equipped] amphibious forces their task would still have been a forlorn hope in the face of our sea and air power. In fact they had neither the tools or the training". He added, "There were indeed some who on purely technical grounds, and for the sake of the effect the total defeat of his expedition would have on the general war, were quite content to see him try."
Although Operation Sea Lion was never attempted, there has been much speculation about its hypothetical outcome. The great majority of military historians, including Peter Fleming, Derek Robinson and Stephen Bungay, have expressed the opinion that it had little chance of success and would most likely have resulted in a disaster for the Germans. Len Deighton and some other writers have called the German amphibious plans a "Dunkirk in reverse". Robinson argues that the massive superiority of the Royal Navy over the "Kriegsmarine" would have made Sea Lion a disaster. Dr Andrew Gordon, in an article for the "Royal United Services Institute Journal", agrees with this and concludes that the German Navy was never in a position to mount Sea Lion, regardless of any realistic outcome of the Battle of Britain. In his fictional alternate history "Invasion: the German invasion of England, July 1940", Kenneth Macksey proposes that the Germans might have succeeded had they swiftly and decisively begun preparations even before the Dunkirk evacuations, and had the Royal Navy for some reason held back from large-scale intervention, though in practice the Germans were unprepared for such a speedy commencement of their assault. The German official naval war historian wrote in 1958: "Had the German Air Force defeated the Royal Air Force as decisively as it had defeated the French Air Force a few months earlier, I am sure Hitler would have given the order for the invasion to be launched - and the invasion would in all probability have been smashed".
An alternative perspective was advanced in 2016 by Robert Forczyk in "We March Against England". Forczyk claims to apply a much more realistic assessment of the relative strengths and weaknesses of the German and British forces, and challenges the view advanced by previous writers that the Royal Navy could easily have overwhelmed the German naval units protecting the first wave invasion fleet. His assessment concurs with that emerging from the 1974 Sandhurst Sea Lion wargame (see below): that the first wave would likely have crossed the Channel and established a lodgement around the landing beaches in Kent and East Sussex without major loss, and that the defending British forces would have been unlikely to dislodge them once ashore. He proposes, though, that the westernmost German landing at beach 'E' could not have been sustained for long against counterattacking British ground, naval and air forces, and that these German units would accordingly have had to fight their way eastwards, abandoning any aspiration to hold Newhaven. In the absence of access to a major port, and with continued losses of German troop transport vessels to submarine attack, Forczyk argues that the proposed arrangements for landing the second wave onto the beaches would have been wholly impractical once autumn and winter weather set in over the Channel, so that the first wave would have been stranded in Kent as a 'beached whale' without substantial armour, transport or heavy artillery, unable to break out and threaten London. Nevertheless, Forczyk does not accept that they would have surrendered, pointing to the determined resistance of surrounded German forces at Stalingrad and Demyansk. He suggests they could have held on into 1941, sustained by fast night-time resupply runs by small ships into Folkestone (and perhaps Dover), holding out the possibility of negotiating their withdrawal in spring 1941 under a truce agreed with the British government.
Four years later, the Allied D-Day landings showed just how much materiel had to be landed continuously to sustain an amphibious invasion. The problem for the Germans was worse, as the German Army was mostly horse-drawn; one of its prime headaches would have been transporting thousands of horses across the Channel. British intelligence calculated that the first wave of 10 divisions (including the airborne division) would require a daily average of 3,300 tons of supplies. In fact, in Russia in 1941, when engaged in heavy fighting at the end of a very long supply line, a single German infantry division required up to 1,100 tons of supplies a day, though a more usual figure was 212–425 tons per day; the lower end of that range is the more likely here, given the very short distances the supplies would have had to travel. Rations for two weeks were to be provided to the German troops of the first wave, because the armies had been instructed to live off the land as far as possible in order to minimise supply across the Channel during the initial phase of the battle. British intelligence further calculated that Folkestone, the largest harbour falling within the planned German landing zones, could handle 150 tons per day in the first week of the invasion (assuming all dockside equipment was successfully demolished and regular RAF bombing raids reduced capacity by 50%). Within seven days, maximum capacity was expected to rise to 600 tons per day, once German shore parties had made repairs to the quays and cleared the harbour of any blockships and other obstacles. This meant that, at best, the nine German infantry divisions and one airborne division landed in the first wave would receive less than 20% of the 3,300 tons of supplies they required each day through a port, and would have to rely heavily on whatever could be brought in directly over the beaches or airlifted into captured airstrips.
The successful capture of Dover and its harbour facilities might have been expected to add another 800 tons per day, raising to 40% the proportion of supplies brought in through ports. However, this rested on the rather unrealistic assumption of little or no interference from the Royal Navy and RAF with the German supply convoys, which would have been made up of underpowered (or unpowered, i.e. towed) inland waterway vessels shuttling slowly between the Continent and the invasion beaches and any captured harbours.
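The port-capacity percentages quoted above follow directly from the tonnage estimates already given; as a quick check of the arithmetic (a reader's verification, not a calculation from the original planning documents):

\[
\frac{600\ \text{tons/day (Folkestone, repaired)}}{3{,}300\ \text{tons/day required}} \approx 0.18,\ \text{i.e. under 20\%}
\]
\[
\frac{600 + 800\ \text{tons/day (Folkestone plus Dover)}}{3{,}300\ \text{tons/day required}} = \frac{1{,}400}{3{,}300} \approx 0.42,\ \text{i.e. roughly 40\%}
\]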
From 19 to 26 September 1940, sea and wind conditions on and over the Channel where the invasion was to take place were good overall, and a crossing, even using converted river barges, was feasible provided the sea state remained at less than 4, which for the most part it did. Winds for the remainder of the month were rated as "moderate" and would not have prevented the German invasion fleet from successfully depositing the first wave troops ashore during the ten days needed to accomplish this. From the night of 27 September, strong northerly winds prevailed, making passage more hazardous, but calm conditions returned on 11–12 October and again on 16–20 October. After that, light easterly winds prevailed which would have assisted any invasion craft travelling from the Continent towards the invasion beaches. But by the end of October, according to British Air Ministry records, very strong south-west winds (force 8) would have prohibited any non-seagoing craft from risking a Channel crossing.
At least 20 spies were sent to England by boat or parachute to gather information on the British coastal defences under the codename "Operation Lena"; many of the agents spoke limited English. All agents were quickly captured and many were convinced to defect by MI5's Double-Cross System, providing disinformation to their German superiors. It has been suggested that the "amateurish" espionage efforts were a result of deliberate sabotage by the head of the army intelligence bureau in Hamburg, Herbert Wichmann, in an effort to prevent a disastrous and costly amphibious invasion; Wichmann was critical of the Nazi regime and had close ties to Wilhelm Canaris, the former head of the "Abwehr" who was later executed by the Nazis for treason.
While some errors in the German maps and intelligence assessments might not have caused problems, others, such as the inclusion of bridges that no longer existed and misjudgements of the usefulness of minor British roads, would have been detrimental to German operations and would have added to the confusion caused by the layout of Britain's cities (with their maze of narrow roads and alleys) and the removal of road signs.
A 1974 wargame was conducted at the Royal Military Academy Sandhurst. The controllers of the game assumed that the "Luftwaffe" had not diverted its daytime operations to bombing London on 7 September 1940, but had continued its assault on RAF airbases in the South East. Consequently, the German High Command, relying on grossly overstated claims of RAF fighters shot down, was under the erroneous impression that by 19 September RAF front-line fighter strength had fallen to 140 (against a true figure of over 700), and hence that effective German air superiority might shortly be achieved. In the game, the Germans were able to land almost all of their first echelon forces on 22 September 1940 and established a beachhead in south-east England, capturing Folkestone and Newhaven, although the British had demolished the facilities of both ports. The British army forces, delayed in moving units from East Anglia to the South East by bomb damage to the rail network south of London, were nevertheless able to hold positions in and around Newhaven and Dover, sufficient to deny their use to German forces. Both the RAF and the "Luftwaffe" lost nearly a quarter of their available forces on the first day, after which it finally became apparent to the German command that British air power was not, after all, on the point of collapse. On the second day a Royal Navy force of cruisers and destroyers was able to reach the Channel from Rosyth, in time to intercept and destroy most of the barges carrying the second and third echelons of the German amphibious landings (for the game, these follow-up echelons had been held back from crossing the Channel on S minus one with the first echelon, instead sailing across on the night of S plus one). Without the second and third echelons, the forces ashore were cut off from reserves of artillery, vehicles, fuel and ammunition, and blocked from further reinforcement. Isolated and facing fresh regular troops with armour and artillery, the invasion force was forced to surrender after six days.
One of the primary German foreign policy aims throughout the 1930s had been to establish a military alliance with the United Kingdom, and although anti-British policies were adopted once this proved impossible, hope remained that the UK would in time yet become a reliable German ally. Hitler professed an admiration for the British Empire and preferred to see it preserved as a world power, mostly because its break-up would benefit other countries far more than it would Germany, particularly the United States and Japan. Britain's situation was likened to that of the Austrian Empire after its defeat by the Kingdom of Prussia in 1866: formally excluded from German affairs, Austria nevertheless became a loyal ally of the German Empire in the pre-World War I power alignments in Europe. It was hoped that a defeated Britain would fulfil a similar role, excluded from continental affairs but maintaining its Empire and becoming an allied seafaring partner of the Germans.
The continued military actions against the UK after the fall of France had the strategic goal of making Britain "see the light" and conclude an armistice with the Axis powers, with 1 July 1940 named as the "probable date" for the cessation of hostilities. On 21 May 1940, Chief of the Army General Staff Franz Halder, after a consultation with Hitler on the war aims regarding Britain, wrote in his diary: "We are seeking contact with Britain on the basis of partitioning the world". Even as the war went on, Hitler hoped in August 1941 for the eventual day when "England and Germany [march] together against America", and in January 1942 he still daydreamed that it was "not impossible" for Britain to quit the war and join the Axis side. Nazi ideologist Alfred Rosenberg hoped that, after the victorious conclusion of the war against the USSR, Englishmen would be among the Germanic nationalities joining the German settlers in colonising the conquered eastern territories.
William L. Shirer, however, claims that the British male population between 17 and 45 would have been forcibly transferred to the Continent to be used as industrial slave labour, although possibly with better treatment than similar forced labourers from Eastern Europe. The remaining population would have been terrorised, with civilian hostages taken and the death penalty immediately imposed for even the most trivial acts of resistance, and the UK plundered for anything of financial, military, industrial or cultural value.
According to the most detailed plans created for the immediate post-invasion administration, Great Britain and Ireland were to be divided into six military-economic commands, with headquarters in London, Birmingham, Newcastle, Liverpool, Glasgow and Dublin. Hitler decreed that Blenheim Palace, the ancestral home of Winston Churchill, was to serve as the overall headquarters of the German occupation military government. The OKW, RSHA, and Foreign Ministry compiled lists of those they thought could be trusted to form a new German-friendly government along the lines of the one in occupied Norway. The list was headed by British fascist leader Oswald Mosley. The RSHA also felt that Harold Nicolson might prove useful in this role. It appears, based on the German police plans, that the occupation was to be only temporary, as detailed provisions for the post-occupation period are mentioned.
Some sources indicated that the Germans only intended to occupy Southern England, and that draft documents existed on the regulation of the passage of British civilians back and forth between the occupied and unoccupied territories. Others state that Nazi planners envisaged the institution of a nationalities policy in Western Europe to secure German hegemony there, which entailed the granting of independence to various regions. This involved detaching Scotland from the United Kingdom, the creation of a United Ireland, and an autonomous status for Western England.
After the war, rumours also emerged about the selection of either Joachim von Ribbentrop or Ernst Wilhelm Bohle for the "viceregal" office of "Reichskommissar für Großbritannien" ("Imperial Commissioner for Great Britain"). However, no establishment by this name was ever approved by either Hitler or the Nazi government during the war, and its existence was denied by Bohle when he was interrogated by the victorious Allies (von Ribbentrop not having been questioned on the matter). After the Second Armistice at Compiègne with France, when he expected an imminent British capitulation, Hitler did however assure Bohle that he would be the next German ambassador to the Court of St. James's "if the British behave[d] sensibly".
The German Government used 90% of James Vincent Murphy's rough draft translation of "Mein Kampf" to form the body of an edition to be distributed in the UK once Operation Sea Lion was completed. This 'Operation Sea Lion Edition' was finalised and printed in the summer of 1940. Once the invasion was called off by Adolf Hitler, most copies were distributed to English-speaking POW camps. Original copies are very rare and highly sought after by serious book collectors interested in military history.
A Channel 5 documentary broadcast on 16 July 2009 repeated the claim that the Germans intended to restore Edward VIII to the throne in the event of a German occupation. Many senior German officials believed the Duke of Windsor to be highly sympathetic to the Nazi government, a feeling that was reinforced by his and Wallis Simpson's 1937 visit to Germany. However, despite German approaches, "The Duke never wavered in his loyalty to Great Britain during the war", according to a statement by the British Foreign Office.
Had Operation Sea Lion succeeded, Franz Six was intended to become the "Sicherheitsdienst" (SD) Commander in the country, with his headquarters to be located in London, and with regional task forces in Birmingham, Liverpool, Manchester, and Edinburgh. His immediate mission would have been to hunt down and arrest the 2,820 people on the "Sonderfahndungsliste G.B." ("Special Search List Great Britain"). This document, which post-war became known as "The Black Book", was a secret list compiled by Walter Schellenberg containing the names of prominent British residents to be arrested immediately after a successful invasion. Six would also have been responsible for handling Britain's Jewish population of over 300,000.
Six had also been entrusted with the task of securing "aero-technological research results and important equipment" as well as "Germanic works of art". There is also a suggestion that he toyed with the idea of moving Nelson's Column to Berlin. The RSHA planned to take over the Ministry of Information, to close the major news agencies, and to take control of all of the newspapers. Anti-German newspapers were to be closed down.
There is a large corpus of works set in an alternate history where the Nazi invasion of Great Britain is attempted or successfully carried out. | https://en.wikipedia.org/wiki?curid=22338 |
Observational error
Observational error (or measurement error) is the difference between a measured value of a quantity and its true value. In statistics, an error is not a "mistake". Variability is an inherent part of the results of measurements and of the measurement process.
Measurement errors can be divided into two components: "random error" and "systematic error".
Random errors are errors in measurement that lead to measurable values being inconsistent when repeated measurements of a constant attribute or quantity are taken. Systematic errors are errors that are not determined by chance but are introduced by an inaccuracy (involving either the observation or measurement process) inherent to the system. Systematic error may also refer to an error with a non-zero mean, the effect of which is not reduced when observations are averaged.
When either randomness or uncertainty modeled by probability theory is attributed to such errors, they are "errors" in the sense in which that term is used in statistics; see errors and residuals in statistics.
Every time we repeat a measurement with a sensitive instrument, we obtain slightly different results. The common statistical model used is that the error has two additive parts: systematic error, which always occurs with the same value when we use the instrument in the same way and in the same case, and random error, which may vary from observation to observation.
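As a minimal formalization of this model (the notation below is assumed for illustration, not taken from the source), the i-th reading can be written as the true value plus a constant systematic part and a zero-mean random part:

```latex
% x_i: i-th measured value,  \mu: true value,
% \beta: systematic error (identical for every reading),
% \varepsilon_i: random error with E[\varepsilon_i] = 0
x_i = \mu + \beta + \varepsilon_i, \qquad
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \;\longrightarrow\; \mu + \beta
\quad \text{as } n \to \infty
```

Averaging drives the random part toward zero but leaves the systematic part untouched, which is why the two components are treated so differently in what follows.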
Systematic error is sometimes called statistical bias. It may often be reduced with standardized procedures. Part of the learning process in the various sciences is learning how to use standard instruments and protocols so as to minimize systematic error.
Random error (or random variation) is due to factors which cannot or will not be controlled. One possible reason to forgo controlling for these random errors is that it may be too expensive to control them each time the experiment is conducted or the measurements are made. Other reasons may be that whatever we are trying to measure is changing in time (see dynamic models), or is fundamentally probabilistic (as is the case in quantum mechanics; see Measurement in quantum mechanics). Random error often occurs when instruments are pushed to the extremes of their operating limits. For example, it is common for digital balances to exhibit random error in their least significant digit. Three measurements of a single object might read something like 0.9111 g, 0.9110 g, and 0.9112 g.
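A minimal simulation sketch of such a balance, assuming numpy and purely illustrative numbers (the 0.1 mg noise level is a stand-in, not a measured specification):

```python
import numpy as np

rng = np.random.default_rng(0)

true_mass = 0.9111   # grams; hypothetical object
sigma = 0.0001       # random error of about one unit in the last digit

# Three repeated readings, rounded to the balance's 0.1 mg resolution
readings = np.round(true_mass + rng.normal(0.0, sigma, size=3), 4)
print(readings)

# The sample standard deviation estimates the size of the random error
print(np.std(readings, ddof=1))
```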
Random error is always present in a measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement apparatus or in the experimenter's interpretation of the instrumental reading. Random errors show up as different results for ostensibly the same repeated measurement. They can be estimated by comparing multiple measurements, and reduced by averaging multiple measurements.
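A short sketch of that reduction by averaging, again assuming numpy and arbitrary illustrative numbers; the spread of the averaged result shrinks roughly as 1/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value, sigma = 10.0, 0.5   # arbitrary illustrative values

for n in (1, 10, 100, 1000):
    # Repeat the whole n-measurement experiment 10,000 times and look at
    # how widely the averaged result scatters around the true value
    means = rng.normal(true_value, sigma, size=(10_000, n)).mean(axis=1)
    print(n, round(means.std(), 4))   # shrinks roughly as sigma / sqrt(n)
```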
Systematic error is predictable and typically constant or proportional to the true value. If the cause of the systematic error can be identified, then it usually can be eliminated. Systematic errors are caused by imperfect calibration of measurement instruments or imperfect methods of observation, or interference of the environment with the measurement process, and always affect the results of an experiment in a predictable direction. Incorrect zeroing of an instrument leading to a zero error is an example of systematic error in instrumentation.
The Performance Test Standard PTC 19.1-2005 “Test Uncertainty”, published by the American Society of Mechanical Engineers (ASME), discusses systematic and random errors in considerable detail. In fact, it conceptualizes its basic uncertainty categories in these terms.
Random error can be caused by unpredictable fluctuations in the readings of a measurement apparatus, or in the experimenter's interpretation of the instrumental reading; these fluctuations may be in part due to interference of the environment with the measurement process. The concept of random error is closely related to the concept of precision. The higher the precision of a measurement instrument, the smaller the variability (standard deviation) of the fluctuations in its readings.
Sources of systematic error include imperfect calibration of measurement instruments (zero error) and changes in the environment which interfere with the measurement process; imperfect methods of observation can introduce either a zero error or a percentage error. Consider an experimenter taking a reading of the time period of a pendulum swinging past a fiducial marker: if their stop-watch or timer starts with 1 second on the clock, then all of their results will be off by 1 second (zero error). If the experimenter repeats this experiment twenty times (starting at 1 second each time), then there will be a percentage error in the calculated average of their results; the final result will be slightly larger than the true period.
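A sketch of that pendulum example, assuming numpy and a hypothetical 2-second true period: the zero error survives averaging unchanged, and can only be removed once the offset itself is known.

```python
import numpy as np

rng = np.random.default_rng(2)

true_period = 2.0   # seconds; hypothetical pendulum
zero_error = 1.0    # the stopwatch starts with 1 s already on the clock

# Twenty timings: random scatter plus the constant offset
timings = true_period + zero_error + rng.normal(0.0, 0.05, size=20)

print(timings.mean())               # ~3.0 s: the full 1 s bias remains
print(timings.mean() - zero_error)  # ~2.0 s once the offset is known
```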
Distance measured by radar will be systematically overestimated if the slight slowing down of the waves in air is not accounted for. Incorrect zeroing of an instrument leading to a zero error is an example of systematic error in instrumentation.
Systematic errors may also be present in the result of an estimate based upon a mathematical model or physical law. For instance, the estimated oscillation frequency of a pendulum will be systematically in error if slight movement of the support is not accounted for.
Systematic errors can be either constant, or related (e.g. proportional or a percentage) to the actual value of the measured quantity, or even to the value of a different quantity (the reading of a ruler can be affected by environmental temperature). When it is constant, it is simply due to incorrect zeroing of the instrument. When it is not constant, it can change its sign. For instance, if a thermometer is affected by a proportional systematic error equal to 2% of the actual temperature, and the actual temperature is 200°, 0°, or −100°, the measured temperature will be 204° (systematic error = +4°), 0° (null systematic error) or −102° (systematic error = −2°), respectively. Thus the temperature will be overestimated when it is above zero and underestimated when it is below zero.
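The arithmetic of that thermometer example can be checked directly; this tiny sketch assumes nothing beyond the 2% figure given above:

```python
def measured(true_temp, rel_error=0.02):
    # Proportional systematic error: reading = true * (1 + rel_error)
    return true_temp * (1 + rel_error)

for t in (200.0, 0.0, -100.0):
    print(t, measured(t), measured(t) - t)
# 200 -> 204 (+4), 0 -> 0 (0), -100 -> -102 (-2), as in the text
```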
Systematic errors which change during an experiment (drift) are easier to detect, since measurements indicate trends with time rather than varying randomly about a mean. Drift is evident if a measurement of a constant quantity is repeated several times and the measurements drift one way during the experiment; for example, each measurement may be higher than the previous one, as can occur if an instrument becomes warmer while it is used. Drift can be detected by checking the zero reading during the experiment as well as at the start (indeed, the zero reading is a measurement of a constant quantity). If the zero reading is consistently above or below zero, a systematic error is present. If this cannot be eliminated, for instance by resetting the instrument immediately before the experiment, then it needs to be allowed for by subtracting its (possibly time-varying) value from the readings, and by taking it into account when assessing the accuracy of the measurement.
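One way to make the drift check concrete is to fit a trend line to repeated readings of a constant quantity; a clearly non-zero slope signals drift. A sketch assuming numpy and invented numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.arange(30)                       # reading index (proxy for time)
# Repeated readings of a constant quantity with a slow upward drift
readings = 5.0 + 0.01 * t + rng.normal(0.0, 0.02, size=t.size)

slope = np.polyfit(t, readings, 1)[0]   # least-squares trend
print(f"estimated drift: {slope:.4f} units per reading")  # ~0.01
```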
If no pattern in a series of repeated measurements is evident, the presence of fixed systematic errors can only be found if the measurements are checked, either by measuring a known quantity or by comparing the readings with readings made using a different apparatus known to be more accurate. For example, timing a pendulum with an accurate stopwatch several times gives readings randomly distributed about the mean. A systematic error is present if the stopwatch is checked against the 'speaking clock' of the telephone system and found to be running slow or fast. Clearly, the pendulum timings need to be corrected according to how fast or slow the stopwatch was found to be running.
Measuring instruments such as ammeters and voltmeters need to be checked periodically against known standards.
Systematic errors can also be detected by measuring already known quantities. For example, a spectrometer fitted with a diffraction grating may be checked by using it to measure the wavelength of the D-lines of the sodium electromagnetic spectrum, which are at 589.0 nm and 589.6 nm. The measurements may be used to determine the number of lines per millimetre of the diffraction grating, which can then be used to measure the wavelength of any other spectral line.
Constant systematic errors are very difficult to deal with as their effects are only observable if they can be removed. Such errors cannot be removed by repeating measurements or averaging large numbers of results. A common method to remove systematic error is through calibration of the measurement instrument.
The random or stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to be normally distributed when the stochastic error is the sum of many independent random errors because of the central limit theorem. Stochastic errors added to a regression equation account for the variation in "Y" that cannot be explained by the included "X"s.
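A quick illustration of that central-limit behaviour, assuming numpy: each observation's error is built as the sum of many small, independent, non-Gaussian (uniform) disturbances, and the sum nevertheless comes out approximately normal.

```python
import numpy as np

rng = np.random.default_rng(4)

# 100,000 observations; each error is the sum of 50 uniform disturbances
errors = rng.uniform(-0.1, 0.1, size=(100_000, 50)).sum(axis=1)

print(errors.mean(), errors.std())
# For a normal distribution about 68.3% of values lie within one standard
# deviation of the mean; the simulated errors come very close:
print(np.mean(np.abs(errors - errors.mean()) < errors.std()))
```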
The term "Observational error" is also sometimes used to refer to response errors and some other types of non-sampling error. In survey-type situations, these errors can be mistakes in the collection of data, including both the incorrect recording of a response and the correct recording of a respondent's inaccurate response. These sources of non-sampling error are discussed in Salant and Dillman (1994) and Bland and Altman (1996).
These errors can be random or systematic. Random errors are caused by unintended mistakes by respondents, interviewers and/or coders. Systematic error can occur if there is a systematic reaction of the respondents to the method used to formulate the survey question. Thus, the exact formulation of a survey question is crucial, since it affects the level of measurement error. Different tools are available to help researchers decide on the exact formulation of their questions, for instance estimating the quality of a question using MTMM experiments or predicting this quality using the Survey Quality Predictor software (SQP). This information about quality can also be used to correct for measurement error.
If the dependent variable in a regression is measured with error, regression analysis and associated hypothesis testing are unaffected, except that the R² will be lower than it would be with perfect measurement.
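Both this claim and the one in the next paragraph can be seen in a small simulation; the sketch below assumes numpy and arbitrary illustrative parameters. Noise in the dependent variable leaves the slope estimate essentially unbiased (only the fit quality drops), while classical noise in an independent variable attenuates the slope toward zero.

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 100_000, 2.0

x = rng.normal(0.0, 1.0, n)
y = beta * x + rng.normal(0.0, 1.0, n)

# (a) Error in the dependent variable: slope still ~2, only R^2 suffers
y_noisy = y + rng.normal(0.0, 2.0, n)
print(np.polyfit(x, y_noisy, 1)[0])     # ~2.0

# (b) Error in an independent variable: slope biased toward zero
x_noisy = x + rng.normal(0.0, 1.0, n)   # classical measurement error
print(np.polyfit(x_noisy, y, 1)[0])     # ~1.0 = beta * 1/(1+1), attenuation
```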
However, if one or more independent variables is measured with error, then the regression coefficient estimates are biased and standard hypothesis tests are invalid; in the classical errors-in-variables model, the estimated slope is attenuated toward zero, as the sketch above illustrates. | https://en.wikipedia.org/wiki?curid=22346
Opera
Opera is a form of theatre in which music has a leading role and the parts are taken by singers, but is distinct from musical theatre. Such a "work" (the literal translation of the Italian word "opera") is typically a collaboration between a composer and a librettist and incorporates a number of the performing arts, such as acting, scenery, costume, and sometimes dance or ballet. The performance is typically given in an opera house, accompanied by an orchestra or smaller musical ensemble, which since the early 19th century has been led by a conductor.
Opera is a key part of the Western classical music tradition. Originally understood as an entirely sung piece, in contrast to a play with songs, opera has come to include numerous genres, including some that incorporate spoken dialogue, such as musical theatre, "Singspiel" and "Opéra comique". In traditional number opera, singers employ two styles of singing: recitative, a speech-inflected style, and self-contained arias. The 19th century saw the rise of the continuous music drama.
Opera originated in Italy at the end of the 16th century (with Jacopo Peri's mostly lost "Dafne", produced in Florence in 1598) especially from works by Claudio Monteverdi, notably "L'Orfeo", and soon spread through the rest of Europe: Heinrich Schütz in Germany, Jean-Baptiste Lully in France, and Henry Purcell in England all helped to establish their national traditions in the 17th century. In the 18th century, Italian opera continued to dominate most of Europe (except France), attracting foreign composers such as George Frideric Handel. Opera seria was the most prestigious form of Italian opera, until Christoph Willibald Gluck reacted against its artificiality with his "reform" operas in the 1760s. The most renowned figure of late 18th-century opera is Wolfgang Amadeus Mozart, who began with opera seria but is most famous for his Italian comic operas, especially "The Marriage of Figaro" ("Le nozze di Figaro"), "Don Giovanni", and "Così fan tutte", as well as "Die Entführung aus dem Serail" ("The Abduction from the Seraglio"), and "The Magic Flute" ("Die Zauberflöte"), landmarks in the German tradition.
The first third of the 19th century saw the high point of the bel canto style, with Gioachino Rossini, Gaetano Donizetti and Vincenzo Bellini all creating works that are still performed. It also saw the advent of Grand Opera typified by the works of Auber and Meyerbeer. The mid-to-late 19th century was a golden age of opera, led and dominated by Giuseppe Verdi in Italy and Richard Wagner in Germany. The popularity of opera continued through the verismo era in Italy and contemporary French opera through to Giacomo Puccini and Richard Strauss in the early 20th century. During the 19th century, parallel operatic traditions emerged in central and eastern Europe, particularly in Russia and Bohemia. The 20th century saw many experiments with modern styles, such as atonality and serialism (Arnold Schoenberg and Alban Berg), Neoclassicism (Igor Stravinsky), and Minimalism (Philip Glass and John Adams). With the rise of recording technology, singers such as Enrico Caruso and Maria Callas became known to much wider audiences that went beyond the circle of opera fans. Since the invention of radio and television, operas were also performed on (and written for) these media. Beginning in 2006, a number of major opera houses began to present live high-definition video transmissions of their performances in cinemas all over the world. Since 2009, complete performances can be downloaded and are live streamed.
The words of an opera are known as the libretto (literally "small book"). Some composers, notably Wagner, have written their own libretti; others have worked in close collaboration with their librettists, e.g. Mozart with Lorenzo Da Ponte. Traditional opera, often referred to as "number opera", consists of two modes of singing: recitative, the plot-driving passages sung in a style designed to imitate and emphasize the inflections of speech, and aria (an "air" or formal song) in which the characters express their emotions in a more structured melodic style. Vocal duets, trios and other ensembles often occur, and choruses are used to comment on the action. In some forms of opera, such as singspiel, opéra comique, operetta, and semi-opera, the recitative is mostly replaced by spoken dialogue. Melodic or semi-melodic passages occurring in the midst of, or instead of, recitative, are also referred to as arioso. The terminology of the various kinds of operatic voices is described in detail below. During both the Baroque and Classical periods, recitative could appear in two basic forms, each of which was accompanied by a different instrumental ensemble: "secco" (dry) recitative, sung with a free rhythm dictated by the accent of the words, accompanied only by "basso continuo", which was usually a harpsichord and a cello; or "accompagnato" (also known as "strumentato") in which the orchestra provided accompaniment. Over the 18th century, arias were increasingly accompanied by the orchestra. By the 19th century, "accompagnato" had gained the upper hand, the orchestra played a much bigger role, and Wagner revolutionized opera by abolishing almost all distinction between aria and recitative in his quest for what Wagner termed "endless melody". Subsequent composers have tended to follow Wagner's example, though some, such as Stravinsky in his "The Rake's Progress" have bucked the trend. The changing role of the orchestra in opera is described in more detail below.
The Italian word "opera" means "work", both in the sense of the labour done and the result produced. The Italian word derives from the Latin "opera", a singular noun meaning "work" and also the plural of the noun "opus". According to the Oxford English Dictionary, the Italian word was first used in the sense "composition in which poetry, dance, and music are combined" in 1639; the first recorded English usage in this sense dates to 1648.
"Dafne" by Jacopo Peri was the earliest composition considered opera, as understood today. It was written around 1597, largely under the inspiration of an elite circle of literate Florentine humanists who gathered as the "Camerata de' Bardi". Significantly, "Dafne" was an attempt to revive the classical Greek drama, part of the wider revival of antiquity characteristic of the Renaissance. The members of the Camerata considered that the "chorus" parts of Greek dramas were originally sung, and possibly even the entire text of all roles; opera was thus conceived as a way of "restoring" this situation. "Dafne," however, is lost. A later work by Peri, "Euridice", dating from 1600, is the first opera score to have survived to the present day. The honour of being the first opera still to be regularly performed, however, goes to Claudio Monteverdi's "L'Orfeo", composed for the court of Mantua in 1607. The Mantua court of the Gonzagas, employers of Monteverdi, played a significant role in the origin of opera employing not only court singers of the concerto delle donne (till 1598), but also one of the first actual "opera singers", Madama Europa.
Opera did not remain confined to court audiences for long. In 1637, the idea of a "season" (often during the carnival) of publicly attended operas supported by ticket sales emerged in Venice. Monteverdi had moved to the city from Mantua and composed his last operas, "Il ritorno d'Ulisse in patria" and "L'incoronazione di Poppea", for the Venetian theatre in the 1640s. His most important follower Francesco Cavalli helped spread opera throughout Italy. In these early Baroque operas, broad comedy was blended with tragic elements in a mix that jarred some educated sensibilities, sparking the first of opera's many reform movements, sponsored by the Arcadian Academy, which came to be associated with the poet Metastasio, whose libretti helped crystallize the genre of opera seria, which became the leading form of Italian opera until the end of the 18th century. Once the Metastasian ideal had been firmly established, comedy in Baroque-era opera was reserved for what came to be called opera buffa.
Before such elements were forced out of opera seria, many libretti had featured a separately unfolding comic plot as sort of an "opera-within-an-opera". One reason for this was an attempt to attract members of the growing merchant class, newly wealthy, but still not as cultured as the nobility, to the public opera houses. These separate plots were almost immediately resurrected in a separately developing tradition that partly derived from the commedia dell'arte, a long-flourishing improvisatory stage tradition of Italy. Just as intermedi had once been performed in between the acts of stage plays, operas in the new comic genre of "intermezzi", which developed largely in Naples in the 1710s and '20s, were initially staged during the intermissions of opera seria. They became so popular, however, that they were soon being offered as separate productions.
"Opera seria" was elevated in tone and highly stylised in form, usually consisting of "secco" recitative interspersed with long "da capo" arias. These afforded great opportunity for virtuosic singing and during the golden age of "opera seria" the singer really became the star. The role of the hero was usually written for the high-pitched male castrato voice, which was produced by castration of the singer before puberty, which prevented a boy's larynx from being transformed at puberty. Castrati such as Farinelli and Senesino, as well as female sopranos such as Faustina Bordoni, became in great demand throughout Europe as "opera seria" ruled the stage in every country except France. Farinelli was one of the most famous singers of the 18th century. Italian opera set the Baroque standard. Italian libretti were the norm, even when a German composer like Handel found himself composing the likes of "Rinaldo" and "Giulio Cesare" for London audiences. Italian libretti remained dominant in the classical period as well, for example in the operas of Mozart, who wrote in Vienna near the century's close. Leading Italian-born composers of opera seria include Alessandro Scarlatti, Vivaldi and Porpora.
Opera seria had its weaknesses and critics. The taste for embellishment on behalf of the superbly trained singers, and the use of spectacle as a replacement for dramatic purity and unity drew attacks. Francesco Algarotti's "Essay on the Opera" (1755) proved to be an inspiration for Christoph Willibald Gluck's reforms. He advocated that "opera seria" had to return to basics and that all the various elements—music (both instrumental and vocal), ballet, and staging—must be subservient to the overriding drama. In 1765 Melchior Grimm published "", an influential article for the Encyclopédie on lyric and opera librettos. Several composers of the period, including Niccolò Jommelli and Tommaso Traetta, attempted to put these ideals into practice. The first to succeed however, was Gluck. Gluck strove to achieve a "beautiful simplicity". This is evident in his first reform opera, "Orfeo ed Euridice", where his non-virtuosic vocal melodies are supported by simple harmonies and a richer orchestra presence throughout.
Gluck's reforms have had resonance throughout operatic history. Weber, Mozart, and Wagner, in particular, were influenced by his ideals. Mozart, in many ways Gluck's successor, combined a superb sense of drama, harmony, melody, and counterpoint to write a series of comic operas with libretti by Lorenzo Da Ponte, notably "Le nozze di Figaro", "Don Giovanni", and "Così fan tutte", which remain among the most-loved, popular and well-known operas today. But Mozart's contribution to "opera seria" was more mixed; by his time it was dying away, and in spite of such fine works as "Idomeneo" and "La clemenza di Tito", he would not succeed in bringing the art form back to life again.
The bel canto opera movement flourished in the early 19th century and is exemplified by the operas of Rossini, Bellini, Donizetti, Pacini, Mercadante and many others. Literally "beautiful singing", "bel canto" opera derives from the Italian stylistic singing school of the same name. Bel canto lines are typically florid and intricate, requiring supreme agility and pitch control. Examples of famous operas in the bel canto style include Rossini's "Il barbiere di Siviglia" and "La Cenerentola", as well as Bellini's "Norma", "La sonnambula" and "I puritani" and Donizetti's "Lucia di Lammermoor", "L'elisir d'amore" and "Don Pasquale".
Following the bel canto era, a more direct, forceful style was rapidly popularized by Giuseppe Verdi, beginning with his biblical opera "Nabucco". This opera, and the ones that would follow in Verdi's career, revolutionized Italian opera, changing it from merely a display of vocal fireworks, as in Rossini's and Donizetti's works, to dramatic story-telling. Verdi's operas resonated with the growing spirit of Italian nationalism in the post-Napoleonic era, and he quickly became an icon of the patriotic movement for a unified Italy. In the early 1850s, Verdi produced his three most popular operas: "Rigoletto", "Il trovatore" and "La traviata". The first of these, "Rigoletto", proved the most daring and revolutionary. In it, Verdi blurs the distinction between aria and recitative as never before, leading the opera to be "an unending string of duets". "La traviata" was also novel. It tells the story of a courtesan, and is often cited as one of the first "realistic" operas, because rather than featuring great kings and figures from literature, it focuses on the tragedies of ordinary life and society. After these, he continued to develop his style, composing perhaps the greatest French Grand Opera, "Don Carlos", and ending his career with two Shakespeare-inspired works, "Otello" and "Falstaff", which reveal how far Italian opera had grown in sophistication since the early 19th century. These final two works show Verdi at his most masterfully orchestrated, and both proved highly influential and modern. In "Falstaff", Verdi set the preeminent standard for the form and style that would dominate opera throughout the twentieth century. Rather than long, suspended melodies, "Falstaff" contains many little motifs and mottos that are introduced and subsequently dropped, only to be brought up again later; just as the audience expects a character to launch into a long melody, a new character speaks, introducing a new phrase. This fashion directed opera from Verdi onward, exercising tremendous influence on his successors Giacomo Puccini, Richard Strauss, and Benjamin Britten.
After Verdi, the sentimental "realistic" melodrama of verismo appeared in Italy. This was a style introduced by Pietro Mascagni's "Cavalleria rusticana" and Ruggero Leoncavallo's "Pagliacci" that came to dominate the world's opera stages with such popular works as Giacomo Puccini's "La bohème", "Tosca", and "Madama Butterfly". Later Italian composers, such as Berio and Nono, have experimented with modernism.
The first German opera was "Dafne", composed by Heinrich Schütz in 1627, but the music score has not survived. Italian opera held a great sway over German-speaking countries until the late 18th century. Nevertheless, native forms would develop in spite of this influence. In 1644, Sigmund Staden produced the first "Singspiel", "Seelewig", a popular form of German-language opera in which singing alternates with spoken dialogue. In the late 17th century and early 18th century, the Theater am Gänsemarkt in Hamburg presented German operas by Keiser, Telemann and Handel. Yet most of the major German composers of the time, including Handel himself, as well as Graun, Hasse and later Gluck, chose to write most of their operas in foreign languages, especially Italian. In contrast to Italian opera, which was generally composed for the aristocratic class, German opera was generally composed for the masses and tended to feature simple folk-like melodies, and it was not until the arrival of Mozart that German opera was able to match its Italian counterpart in musical sophistication. The theatre company of Abel Seyler pioneered serious German-language opera in the 1770s, marking a break with the previous simpler musical entertainment.
Mozart's "Singspiele", "Die Entführung aus dem Serail" (1782) and "Die Zauberflöte" (1791) were an important breakthrough in achieving international recognition for German opera. The tradition was developed in the 19th century by Beethoven with his "Fidelio" (1805), inspired by the climate of the French Revolution. Carl Maria von Weber established German Romantic opera in opposition to the dominance of Italian bel canto. His "Der Freischütz" (1821) shows his genius for creating a supernatural atmosphere. Other opera composers of the time include Marschner, Schubert and Lortzing, but the most significant figure was undoubtedly Wagner.
Wagner was one of the most revolutionary and controversial composers in musical history. Starting under the influence of Weber and Meyerbeer, he gradually evolved a new concept of opera as a "Gesamtkunstwerk" (a "complete work of art"), a fusion of music, poetry and painting. He greatly increased the role and power of the orchestra, creating scores with a complex web of leitmotifs, recurring themes often associated with the characters and concepts of the drama, of which prototypes can be heard in his earlier operas such as "Der fliegende Holländer", "Tannhäuser" and "Lohengrin"; and he was prepared to violate accepted musical conventions, such as tonality, in his quest for greater expressivity. In his mature music dramas, "Tristan und Isolde", "Die Meistersinger von Nürnberg", "Der Ring des Nibelungen" and "Parsifal", he abolished the distinction between aria and recitative in favour of a seamless flow of "endless melody". Wagner also brought a new philosophical dimension to opera in his works, which were usually based on stories from Germanic or Arthurian legend. Finally, Wagner built his own opera house at Bayreuth with part of the patronage from Ludwig II of Bavaria, exclusively dedicated to performing his own works in the style he wanted.
Opera would never be the same after Wagner and for many composers his legacy proved a heavy burden. On the other hand, Richard Strauss accepted Wagnerian ideas but took them in wholly new directions, along with incorporating the new form introduced by Verdi. He first won fame with the scandalous "Salome" and the dark tragedy "Elektra", in which tonality was pushed to the limits. Then Strauss changed tack in his greatest success, "Der Rosenkavalier", where Mozart and Viennese waltzes became as important an influence as Wagner. Strauss continued to produce a highly varied body of operatic works, often with libretti by the poet Hugo von Hofmannsthal. Other composers who made individual contributions to German opera in the early 20th century include Alexander von Zemlinsky, Erich Korngold, Franz Schreker, Paul Hindemith, Kurt Weill and the Italian-born Ferruccio Busoni. The operatic innovations of Arnold Schoenberg and his successors are discussed in the section on modernism.
During the late 19th century, the Austrian composer Johann Strauss II, an admirer of the French-language operettas composed by Jacques Offenbach, composed several German-language operettas, the most famous of which was "Die Fledermaus", which is still regularly performed today. Nevertheless, rather than copying the style of Offenbach, the operettas of Strauss II had a distinctly Viennese flavor, which cemented Strauss II's place as one of the most renowned operetta composers of all time.
In rivalry with imported Italian opera productions, a separate French tradition was founded by the Italian Jean-Baptiste Lully at the court of King Louis XIV. Despite his foreign origin, Lully established an Academy of Music and monopolised French opera from 1672. Starting with "Cadmus et Hermione", Lully and his librettist Quinault created "tragédie en musique", a form in which dance music and choral writing were particularly prominent. Lully's operas also show a concern for expressive recitative which matched the contours of the French language. In the 18th century, Lully's most important successor was Jean-Philippe Rameau, who composed five "tragédies en musique" as well as numerous works in other genres such as opéra-ballet, all notable for their rich orchestration and harmonic daring. Despite the popularity of Italian opera seria throughout much of Europe during the Baroque period, Italian opera never gained much of a foothold in France, where its own national operatic tradition was more popular instead. After Rameau's death, the German Gluck was persuaded to produce six operas for the Parisian stage in the 1770s. They show the influence of Rameau, but simplified and with greater focus on the drama. At the same time, by the middle of the 18th century another genre was gaining popularity in France: "opéra comique". This was the equivalent of the German singspiel, where arias alternated with spoken dialogue. Notable examples in this style were produced by Monsigny, Philidor and, above all, Grétry. During the Revolutionary period, composers such as Étienne Méhul and Luigi Cherubini, who were followers of Gluck, brought a new seriousness to the genre, which had never been wholly "comic" in any case. Another phenomenon of this period was the 'propaganda opera' celebrating revolutionary successes, e.g. Gossec's "Le triomphe de la République" (1793).
By the 1820s, Gluckian influence in France had given way to a taste for Italian bel canto, especially after the arrival of Rossini in Paris. Rossini's "Guillaume Tell" helped found the new genre of Grand Opera, a form whose most famous exponent was another foreigner, Giacomo Meyerbeer. Meyerbeer's works, such as "Les Huguenots", emphasised virtuoso singing and extraordinary stage effects. Lighter "opéra comique" also enjoyed tremendous success in the hands of Boïeldieu, Auber, Hérold and Adam. In this climate, the operas of the French-born composer Hector Berlioz struggled to gain a hearing. Berlioz's epic masterpiece "Les Troyens", the culmination of the Gluckian tradition, was not given a full performance for almost a hundred years.
In the second half of the 19th century, Jacques Offenbach created operetta with witty and cynical works such as "Orphée aux enfers", as well as the opera "Les Contes d'Hoffmann"; Charles Gounod scored a massive success with "Faust"; and Georges Bizet composed "Carmen", which, once audiences learned to accept its blend of Romanticism and realism, became the most popular of all opéras comiques. Jules Massenet, Camille Saint-Saëns and Léo Delibes all composed works which are still part of the standard repertory, examples being Massenet's "Manon", Saint-Saëns' "Samson et Dalila" and Delibes' "Lakmé". Their operas formed another genre, the opéra lyrique, which combined opéra comique and grand opera: less grandiose than grand opera, but without the spoken dialogue of opéra comique. At the same time, the influence of Richard Wagner was felt as a challenge to the French tradition. Many French critics angrily rejected Wagner's music dramas while many French composers closely imitated them with variable success. Perhaps the most interesting response came from Claude Debussy. As in Wagner's works, the orchestra plays a leading role in Debussy's unique opera "Pelléas et Mélisande" (1902) and there are no real arias, only recitative. But the drama is understated, enigmatic and completely un-Wagnerian.
Other notable 20th-century names include Ravel, Dukas, Roussel and Milhaud. Francis Poulenc is one of the very few post-war composers of any nationality whose operas (which include "Dialogues des Carmélites") have gained a foothold in the international repertory. Olivier Messiaen's lengthy sacred drama "Saint François d'Assise" (1983) has also attracted widespread attention.
In England, opera's antecedent was the 17th-century "jig". This was an afterpiece that came at the end of a play. It was frequently libellous and scandalous and consisted in the main of dialogue set to music arranged from popular tunes. In this respect, jigs anticipate the ballad operas of the 18th century. At the same time, the French masque was gaining a firm hold at the English Court, with even more lavish splendour and highly realistic scenery than had been seen before. Inigo Jones became the quintessential designer of these productions, and this style was to dominate the English stage for three centuries. These masques contained songs and dances. In Ben Jonson's "Lovers Made Men" (1617), "the whole masque was sung after the Italian manner, stilo recitativo". The approach of the English Commonwealth closed theatres and halted any developments that may have led to the establishment of English opera. However, in 1656, the dramatist Sir William Davenant produced "The Siege of Rhodes". Since his theatre was not licensed to produce drama, he asked several of the leading composers (Lawes, Cooke, Locke, Coleman and Hudson) to set sections of it to music. This success was followed by "The Cruelty of the Spaniards in Peru" (1658) and "The History of Sir Francis Drake" (1659). These pieces were encouraged by Oliver Cromwell because they were critical of Spain. With the English Restoration, foreign (especially French) musicians were welcomed back. In 1673, Thomas Shadwell produced "Psyche", patterned on the 1671 'comédie-ballet' of the same name produced by Molière and Jean-Baptiste Lully. William Davenant produced "The Tempest" in the same year, which was the first musical adaptation of a Shakespeare play (composed by Locke and Johnson). About 1683, John Blow composed "Venus and Adonis", often thought of as the first true English-language opera.
Blow's immediate successor was the better known Henry Purcell. Despite the success of his masterwork "Dido and Aeneas" (1689), in which the action is furthered by the use of Italian-style recitative, much of Purcell's best work was not involved in the composing of typical opera, but instead, he usually worked within the constraints of the semi-opera format, where isolated scenes and masques are contained within the structure of a spoken play, such as Shakespeare in Purcell's "The Fairy-Queen" (1692) and Beaumont and Fletcher in "The Prophetess" (1690) and "Bonduca" (1696). The main characters of the play tend not to be involved in the musical scenes, which means that Purcell was rarely able to develop his characters through song. Despite these hindrances, his aim (and that of his collaborator John Dryden) was to establish serious opera in England, but these hopes ended with Purcell's early death at the age of 36.
Following Purcell, the popularity of opera in England dwindled for several decades. A revived interest in opera occurred in the 1730s which is largely attributed to Thomas Arne, both for his own compositions and for alerting Handel to the commercial possibilities of large-scale works in English. Arne was the first English composer to experiment with Italian-style all-sung comic opera, with his greatest success being "Thomas and Sally" in 1760. His opera "Artaxerxes" (1762) was the first attempt to set a full-blown opera seria in English and was a huge success, holding the stage until the 1830s. Although Arne imitated many elements of Italian opera, he was perhaps the only English composer at that time who was able to move beyond the Italian influences and create his own unique and distinctly English voice. His modernized ballad opera, "Love in a Village" (1762), began a vogue for pastiche opera that lasted well into the 19th century. Charles Burney wrote that Arne introduced "a light, airy, original, and pleasing melody, wholly different from that of Purcell or Handel, whom all English composers had either pillaged or imitated".
Besides Arne, the other dominating force in English opera at this time was George Frideric Handel, whose "opera serias" filled the London operatic stages for decades and influenced most home-grown composers, like John Frederick Lampe, who wrote using Italian models. This situation continued throughout the 18th and 19th centuries, including in the work of Michael William Balfe, and the operas of the great Italian composers, as well as those of Mozart, Beethoven, and Meyerbeer, continued to dominate the musical stage in England.
The only exceptions were ballad operas, such as John Gay's "The Beggar's Opera" (1728), musical burlesques, European operettas, and late Victorian era light operas, notably the Savoy Operas of W. S. Gilbert and Arthur Sullivan, all of which types of musical entertainments frequently spoofed operatic conventions. Sullivan wrote only one grand opera, "Ivanhoe" (following the efforts of a number of young English composers beginning about 1876), but he claimed that even his light operas constituted part of a school of "English" opera, intended to supplant the French operettas (usually performed in bad translations) that had dominated the London stage from the mid-19th century into the 1870s. London's "Daily Telegraph" agreed, describing "The Yeomen of the Guard" as "a genuine English opera, forerunner of many others, let us hope, and possibly significant of an advance towards a national lyric stage". Sullivan produced a few light operas in the 1890s that were of a more serious nature than those in the G&S series, including "Haddon Hall" and "The Beauty Stone", but "Ivanhoe" (which ran for 155 consecutive performances, using alternating casts—a record until Broadway's "La bohème") survives as his only Grand Opera.
In the 20th century, English opera began to assert more independence, with works of Ralph Vaughan Williams and in particular Benjamin Britten, who in a series of works that remain in the standard repertory today revealed an excellent flair for the dramatic and superb musicality. More recently Sir Harrison Birtwistle has emerged as one of Britain's most significant contemporary composers, from his first opera "Punch and Judy" to his more recent critical success "The Minotaur". In the first decade of the 21st century, the librettist of an early Birtwistle opera, Michael Nyman, focused on composing operas, including "Facing Goya", "", and "Love Counts". Today composers such as Thomas Adès continue to export English opera abroad.
Also in the 20th century, American composers like George Gershwin ("Porgy and Bess"), Scott Joplin ("Treemonisha"), Leonard Bernstein ("Candide"), Gian Carlo Menotti, Douglas Moore, and Carlisle Floyd began to contribute English-language operas infused with touches of popular musical styles. They were followed by composers such as Philip Glass ("Einstein on the Beach"), Mark Adamo, John Corigliano, Robert Moran, John Adams ("Nixon in China"), André Previn and Jake Heggie. Many contemporary 21st century opera composers have emerged such as Missy Mazzoli, Kevin Puts, Tom Cipullo, Huang Ruo, David T. Little, Terence Blanchard, Jennifer Higdon, Tobias Picker, Michael Ching, and Ricky Ian Gordon.
Opera was brought to Russia in the 1730s by the Italian operatic troupes and soon it became an important part of entertainment for the Russian Imperial Court and aristocracy. Many foreign composers such as Baldassare Galuppi, Giovanni Paisiello, Giuseppe Sarti, and Domenico Cimarosa (as well as various others) were invited to Russia to compose new operas, mostly in the Italian language. Simultaneously some domestic musicians like Maksym Berezovsky and Dmitry Bortniansky were sent abroad to learn to write operas. The first opera written in Russian was "Tsefal i Prokris" by the Italian composer Francesco Araja (1755). The development of Russian-language opera was supported by the Russian composers Vasily Pashkevich, Yevstigney Fomin and Alexey Verstovsky.
However, the real birth of Russian opera came with Mikhail Glinka and his two great operas "A Life for the Tsar" (1836) and "Ruslan and Lyudmila" (1842). After him, during the 19th century in Russia, there were written such operatic masterpieces as "Rusalka" and "The Stone Guest" by Alexander Dargomyzhsky, "Boris Godunov" and "Khovanshchina" by Modest Mussorgsky, "Prince Igor" by Alexander Borodin, "Eugene Onegin" and "The Queen of Spades" by Pyotr Tchaikovsky, and "The Snow Maiden" and "Sadko" by Nikolai Rimsky-Korsakov. These developments mirrored the growth of Russian nationalism across the artistic spectrum, as part of the more general Slavophilism movement.
In the 20th century, the traditions of Russian opera were developed by many composers including Sergei Rachmaninoff in his works "The Miserly Knight" and "Francesca da Rimini", Igor Stravinsky in "Le Rossignol", "Mavra", "Oedipus rex", and "The Rake's Progress", Sergei Prokofiev in "The Gambler", "The Love for Three Oranges", "The Fiery Angel", "Betrothal in a Monastery", and "War and Peace"; as well as Dmitri Shostakovich in "The Nose" and "Lady Macbeth of the Mtsensk District", Edison Denisov in "L'écume des jours", and Alfred Schnittke in "Life with an Idiot" and "Historia von D. Johann Fausten".
Spain also produced its own distinctive form of opera, known as zarzuela, which had two separate flowerings: one from the mid-17th century through the mid-18th century, and another beginning around 1850. During the late 18th century up until the mid-19th century, Italian opera was immensely popular in Spain, supplanting the native form.
Czech composers also developed a thriving national opera movement of their own in the 19th century, starting with Bedřich Smetana, who wrote eight operas including the internationally popular "The Bartered Bride". Antonín Dvořák, most famous for "Rusalka", wrote 13 operas; and Leoš Janáček gained international recognition in the 20th century for his innovative works including "Jenůfa", "The Cunning Little Vixen", and "Káťa Kabanová".
Elsewhere in Eastern Europe and the Russian Empire, several national operas began to emerge. Ukrainian opera was developed by Semen Hulak-Artemovsky (1813–1873), whose most famous work, "Zaporozhets za Dunayem" (A Cossack Beyond the Danube), is regularly performed around the world. Other Ukrainian opera composers include Mykola Lysenko ("Taras Bulba" and "Natalka Poltavka"), Heorhiy Maiboroda, and Yuliy Meitus. At the turn of the century, a distinct national opera movement also began to emerge in Georgia under the leadership of Zacharia Paliashvili, who fused local folk songs and stories with 19th-century Romantic classical themes.
The key figure of Hungarian national opera in the 19th century was Ferenc Erkel, whose works mostly dealt with historical themes. Among his most often performed operas are "Hunyadi László" and "Bánk bán". The most famous modern Hungarian opera is Béla Bartók's "Duke Bluebeard's Castle".
Stanisław Moniuszko's opera "Straszny Dwór" (in English "The Haunted Manor") (1861–64) represents a nineteenth-century peak of Polish national opera. In the 20th century, other operas created by Polish composers included "King Roger" by Karol Szymanowski and "Ubu Rex" by Krzysztof Penderecki.
The first known opera from Turkey (the Ottoman Empire) was "Arshak II", an Armenian opera composed by the ethnic Armenian composer Tigran Chukhajian in 1868 and partially performed in 1873. It was fully staged in 1945 in Armenia.
The first years of the Soviet Union saw the emergence of new national operas, such as the "Koroğlu" (1937) by the Azerbaijani composer Uzeyir Hajibeyov. The first Kyrgyz opera, "Ai-Churek", premiered in Moscow at the Bolshoi Theatre on 26 May 1939, during Kyrgyz Art Decade. It was composed by Vladimir Vlasov, Abdylas Maldybaev and Vladimir Fere. The libretto was written by Joomart Bokonbaev, Jusup Turusbekov, and Kybanychbek Malikov. The opera is based on the Kyrgyz heroic epic "Manas".
In Iran, opera gained more attention after the introduction of Western classical music in the late 19th century. However, it took until the mid-20th century for Iranian composers to start experimenting in the field, especially as the construction of Roudaki Hall in 1967 made possible the staging of a large variety of works for the stage. Perhaps the most famous Iranian opera is "Rostam and Sohrab" by Loris Tjeknavorian, which did not premiere until the early 2000s.
Chinese contemporary classical opera, a Chinese-language form of Western-style opera distinct from traditional Chinese opera, dates back to "The White Haired Girl" in 1945.
In Latin America, opera started as a result of European colonisation. The first opera ever written in the Americas was "La púrpura de la rosa", by Tomás de Torrejón y Velasco, although "Partenope", by the Mexican Manuel de Zumaya, was the first opera written by a composer born in Latin America (its music is now lost). The first Brazilian opera with a libretto in Portuguese was "A Noite de São João", by Elias Álvares Lobo. However, Antonio Carlos Gomes is generally regarded as the most outstanding Brazilian composer, having had relative success in Italy with his Brazilian-themed operas with Italian librettos, such as "Il Guarany". Opera in Argentina developed in the 20th century after the inauguration of the Teatro Colón in Buenos Aires, with the opera "Aurora", by Ettore Panizza, being heavily influenced by the Italian tradition due to immigration. Other important composers from Argentina include Felipe Boero and Alberto Ginastera.
Perhaps the most obvious stylistic manifestation of modernism in opera is the development of atonality. The move away from traditional tonality in opera had begun with Richard Wagner, and in particular the Tristan chord. Composers such as Richard Strauss, Claude Debussy, Giacomo Puccini, Paul Hindemith, Benjamin Britten and Hans Pfitzner pushed Wagnerian harmony further with a more extreme use of chromaticism and greater use of dissonance. Another aspect of modernist opera is the shift away from long, suspended melodies, to short quick mottos, as first illustrated by Giuseppe Verdi in his "Falstaff". Composers such as Strauss, Britten, Shostakovich and Stravinsky adopted and expanded upon this style.
Operatic modernism truly began in the operas of two Viennese composers, Arnold Schoenberg and his student Alban Berg, both composers and advocates of atonality and its later development (as worked out by Schoenberg), dodecaphony. Schoenberg's early musico-dramatic works, "Erwartung" (1909, premiered in 1924) and "Die glückliche Hand" display heavy use of chromatic harmony and dissonance in general. Schoenberg also occasionally used Sprechstimme.
The two operas of Schoenberg's pupil Alban Berg, "Wozzeck" (1925) and "Lulu" (incomplete at his death in 1935) share many of the same characteristics as described above, though Berg combined his highly personal interpretation of Schoenberg's twelve-tone technique with melodic passages of a more traditionally tonal nature (quite Mahlerian in character) which perhaps partially explains why his operas have remained in standard repertory, despite their controversial music and plots. Schoenberg's theories have influenced (either directly or indirectly) significant numbers of opera composers ever since, even if they themselves did not compose using his techniques.
Composers thus influenced include the Englishman Benjamin Britten, the German Hans Werner Henze, and the Russian Dmitri Shostakovich. (Philip Glass also makes use of atonality, though his style is generally described as minimalist, usually thought of as another 20th-century development.)
However, operatic modernism's use of atonality also sparked a backlash in the form of neoclassicism. An early leader of this movement was Ferruccio Busoni, who in 1913 wrote the libretto for his neoclassical number opera "Arlecchino" (first performed in 1917). Also among the vanguard was the Russian Igor Stravinsky. After composing music for the Diaghilev-produced ballets "Petrushka" (1911) and "The Rite of Spring" (1913), Stravinsky turned to neoclassicism, a development culminating in his opera-oratorio "Oedipus Rex" (1927). Stravinsky had already turned away from the modernist trends of his early ballets to produce small-scale works that do not fully qualify as opera, yet certainly contain many operatic elements, including "Renard" (1916: "a burlesque in song and dance") and "The Soldier's Tale" (1918: "to be read, played, and danced"; in both cases the descriptions and instructions are those of the composer). In the latter, the actors declaim portions of speech to a specified rhythm over instrumental accompaniment, peculiarly similar to the older German genre of "Melodrama". Well after his Rimsky-Korsakov-inspired works "The Nightingale" (1914), and "Mavra" (1922), Stravinsky continued to ignore serialist technique and eventually wrote a full-fledged 18th-century-style diatonic number opera "The Rake's Progress" (1951). His resistance to serialism (an attitude he reversed following Schoenberg's death) proved to be an inspiration for many other composers.
A common trend throughout the 20th century, in both opera and general orchestral repertoire, is the use of smaller orchestras as a cost-cutting measure; the grand Romantic-era orchestras with huge string sections, multiple harps, extra horns, and exotic percussion instruments were no longer feasible. As government and private patronage of the arts decreased throughout the 20th century, new works were often commissioned and performed with smaller budgets, very often resulting in chamber-sized works, and short, one-act operas. Many of Benjamin Britten's operas are scored for as few as 13 instrumentalists; Mark Adamo's two-act realization of "Little Women" is scored for 18 instrumentalists.
Another feature of late 20th-century opera is the emergence of contemporary historical operas, in contrast to the tradition of basing operas on more distant history, the re-telling of contemporary fictional stories or plays, or on myth or legend. "The Death of Klinghoffer", "Nixon in China", and "Doctor Atomic" by John Adams, "Dead Man Walking" by Jake Heggie, and "Anna Nicole" by Mark-Anthony Turnage exemplify the dramatisation onstage of events in recent living memory, where characters portrayed in the opera were alive at the time of the premiere performance.
The Metropolitan Opera in the US reports that the average age of its audience is now 60. Many opera companies have experienced a similar trend, and opera company websites are replete with attempts to attract a younger audience. This trend is part of the larger trend of greying audiences for classical music since the last decades of the 20th century. In an effort to attract younger audiences, the Metropolitan Opera offers a student discount on ticket purchases.
Smaller companies in the US have a more fragile existence, and they usually depend on a "patchwork quilt" of support from state and local governments, local businesses, and fundraisers. Nevertheless, some smaller companies have found ways of drawing new audiences. Opera Carolina offers discounts and happy-hour events to the 21- to 40-year-old demographic. In addition to radio and television broadcasts of opera performances, which have had some success in gaining new audiences, broadcasts of live performances in HD to movie theatres have shown the potential to reach new audiences. Since 2006, the Met has broadcast live performances to several hundred movie screens all over the world.
By the late 1930s, some musicals began to be written with a more operatic structure. These works include complex polyphonic ensembles and reflect musical developments of their times. "Porgy and Bess" (1935), influenced by jazz styles, and "Candide" (1956), with its sweeping, lyrical passages and farcical parodies of opera, both opened on Broadway but became accepted as part of the opera repertory. Popular musicals such as "Show Boat", "West Side Story", "Brigadoon", "Passion", "Evita", "The Light in the Piazza", "The Phantom of the Opera" and others tell dramatic stories through complex music, and in the 2010s they are sometimes seen in opera houses. "The Most Happy Fella" (1952) is quasi-operatic and has been revived by the New York City Opera. Rock-influenced musicals such as "Tommy" (1969) and "Jesus Christ Superstar" (1971), "Les Misérables" (1980), "Rent" (1996), "Spring Awakening" (2006), and "Natasha, Pierre & The Great Comet of 1812" (2012) employ various operatic conventions, such as through-composition, recitative instead of dialogue, and leitmotifs.
A subtle type of electronic sound reinforcement called acoustic enhancement is used in some modern concert halls and theatres where operas are performed. Although none of the major opera houses "...use traditional, Broadway-style sound reinforcement, in which most if not all singers are equipped with radio microphones mixed to a series of unsightly loudspeakers scattered throughout the theatre", many use a sound reinforcement system for acoustic enhancement and for subtle boosting of offstage voices, child singers, onstage dialogue, and sound effects (e.g., church bells in "Tosca" or thunder effects in Wagnerian operas).
Operatic vocal technique evolved, in a time before electronic amplification, to allow singers to produce enough volume to be heard over an orchestra, without the instrumentalists having to substantially compromise their volume.
Singers and the roles they play are classified by voice type, based on the tessitura, agility, power and timbre of their voices. Male singers can be classified by vocal range as bass, bass-baritone, baritone, tenor and countertenor, and female singers as contralto, mezzo-soprano and soprano. (Men sometimes sing in the "female" vocal ranges, in which case they are termed sopranist or countertenor. The countertenor is commonly encountered in opera, sometimes singing parts written for castrati—men castrated at a young age specifically to give them a higher singing range.) Singers are then further classified by the size and character of their voices—for instance, a soprano can be described as a lyric soprano, coloratura, soubrette, spinto, or dramatic soprano. These terms, although not fully describing a singing voice, associate the singer's voice with the roles most suitable to the singer's vocal characteristics.
Yet another sub-classification can be made according to acting skills or requirements, for example the "basso buffo" who often must be a specialist in patter as well as a comic actor. This is carried out in detail in the "Fach" system of German-speaking countries, where historically opera and spoken drama were often put on by the same repertory company.
A particular singer's voice may change drastically over his or her lifetime, rarely reaching vocal maturity until the third decade, and sometimes not until middle age. Two French voice types, "première dugazon" and "deuxième dugazon", were named after successive stages in the career of Louise-Rosalie Lefebvre (Mme. Dugazon). Other terms originating in the star casting system of the Parisian theatres are "baryton-martin" and soprano "falcon".
The soprano voice has typically been used as the voice of choice for the female protagonist of the opera since the latter half of the 18th century. Earlier, it was common for that part to be sung by any female voice, or even a castrato. The current emphasis on a wide vocal range was primarily an invention of the Classical period. Before that, the vocal virtuosity, not range, was the priority, with soprano parts rarely extending above a high A (Handel, for example, only wrote one role extending to a high C), though the castrato Farinelli was alleged to possess a top D (his lower range was also extraordinary, extending to tenor C). The mezzo-soprano, a term of comparatively recent origin, also has a large repertoire, ranging from the female lead in Purcell's "Dido and Aeneas" to such heavyweight roles as Brangäne in Wagner's "Tristan und Isolde" (these are both roles sometimes sung by sopranos; there is quite a lot of movement between these two voice-types). For the true contralto, the range of parts is more limited, which has given rise to the insider joke that contraltos only sing "witches, bitches, and britches" roles. In recent years many of the "trouser roles" from the Baroque era, originally written for women, and those originally sung by castrati, have been reassigned to countertenors.
The tenor voice, from the Classical era onwards, has traditionally been assigned the role of male protagonist. Many of the most challenging tenor roles in the repertory were written during the "bel canto" era, such as Donizetti's sequence of 9 Cs above middle C during "La fille du régiment". With Wagner came an emphasis on vocal heft for his protagonist roles, with this vocal category described as "Heldentenor"; this heroic voice had its more Italianate counterpart in such roles as Calaf in Puccini's "Turandot". Basses have a long history in opera, having been used in "opera seria" in supporting roles, and sometimes for comic relief (as well as providing a contrast to the preponderance of high voices in this genre). The bass repertoire is wide and varied, stretching from the comedy of Leporello in "Don Giovanni" to the nobility of Wotan in Wagner's "Ring Cycle", to the conflicted King Philip of Verdi's "Don Carlos". In between the bass and the tenor is the baritone, which also varies in weight from say, Guglielmo in Mozart's "Così fan tutte" to Posa in Verdi's "Don Carlos"; the actual designation "baritone" was not standard until the mid-19th century.
Early performances of opera were too infrequent for singers to make a living exclusively from the style, but with the birth of commercial opera in the mid-17th century, professional performers began to emerge. The role of the male hero was usually entrusted to a castrato, and by the 18th century, when Italian opera was performed throughout Europe, leading castrati who possessed extraordinary vocal virtuosity, such as Senesino and Farinelli, became international stars. The career of the first major female star (or prima donna), Anna Renzi, dates to the mid-17th century. In the 18th century, a number of Italian sopranos gained international renown and often engaged in fierce rivalry, as was the case with Faustina Bordoni and Francesca Cuzzoni, who started a fist fight with one another during a performance of a Handel opera. The French disliked castrati, preferring their male heroes to be sung by an haute-contre (a high tenor), of which Joseph Legros (1739–1793) was a leading example.
Though opera patronage has decreased in the last century in favor of other arts and media (such as musicals, cinema, radio, television and recordings), mass media and the advent of recording have supported the popularity of many famous singers including Maria Callas, Enrico Caruso, Amelita Galli-Curci, Kirsten Flagstad, Juan Arvizu, Nestor Mesta Chayres,
Mario Del Monaco, Renata Tebaldi, Risë Stevens, Alfredo Kraus, Franco Corelli, Montserrat Caballé, Joan Sutherland, Birgit Nilsson, Nellie Melba, Rosa Ponselle, Beniamino Gigli, Jussi Björling, Feodor Chaliapin, Cecilia Bartoli, Renée Fleming, Marilyn Horne, Bryn Terfel and "The Three Tenors" (Luciano Pavarotti, Plácido Domingo, and José Carreras).
Before the 1700s, Italian operas used a small string orchestra, but it rarely played to accompany the singers. Opera solos during this period were accompanied by the basso continuo group, which consisted of the harpsichord, "plucked instruments" such as lute and a bass instrument. The string orchestra typically only played when the singer was not singing, such as during a singer's "...entrances and exits, between vocal numbers, [or] for [accompanying] dancing". Another role for the orchestra during this period was playing an orchestral ritornello to mark the end of a singer's solo. During the early 1700s, some composers began to use the string orchestra to mark certain arias or recitatives "...as special"; by 1720, most arias were accompanied by orchestra. Opera composers such as Domenico Sarro, Leonardo Vinci, Giambattista Pergolesi, Leonardo Leo, and Johann Adolf Hasse added new instruments to the opera orchestra and gave the instruments new roles. They added wind instruments to the strings and used orchestral instruments to play instrumental solos, as a way to mark certain arias as special.
The orchestra has also provided an instrumental overture before the singers come onstage since the 1600s. Peri's "Euridice" opens with a brief instrumental ritornello, and Monteverdi's "L'Orfeo" (1607) opens with a toccata, in this case a fanfare for muted trumpets. The French overture as found in Jean-Baptiste Lully's operas consists of a slow introduction in a marked "dotted rhythm", followed by a lively movement in fugato style. The overture was frequently followed by a series of dance tunes before the curtain rose. This overture style was also used in English opera, most notably in Henry Purcell's "Dido and Aeneas". Handel also uses the French overture form in some of his Italian operas such as "Giulio Cesare".
In Italy, a distinct form called "overture" arose in the 1680s, and became established particularly through the operas of Alessandro Scarlatti, and spread throughout Europe, supplanting the French form as the standard operatic overture by the mid-18th century. It uses three generally homophonic movements: fast–slow–fast. The opening movement was normally in duple metre and in a major key; the slow movement in earlier examples was short, and could be in a contrasting key; the concluding movement was dance-like, most often with rhythms of the gigue or minuet, and returned to the key of the opening section. As the form evolved, the first movement often incorporated fanfare-like elements and took on the pattern of so-called "sonatina form" (sonata form without a development section), and the slow section became more extended and lyrical.
In Italian opera after about 1800, the "overture" became known as the "sinfonia". Fisher also notes the term "Sinfonia avanti l'opera" (literally, the "symphony before the opera") was "an early term for a sinfonia used to begin an opera, that is, as an overture as opposed to one serving to begin a later section of the work". In some 19th-century operas, the overture, "Vorspiel", "Einleitung", Introduction, or whatever else it may be called, was the portion of the music which takes place before the curtain rises; a specific, rigid form was no longer required for the overture.
The role of the orchestra in accompanying the singers changed over the 19th century, as the Classical style transitioned to the Romantic era. In general, orchestras got bigger and new instruments were added, such as additional percussion (e.g., bass drum, cymbals, and snare drum). The orchestration of orchestra parts also developed over the 19th century. In Wagnerian operas, the forefronting of the orchestra went beyond the overture: in operas such as "Tristan", the orchestra often played the recurrent musical themes or leitmotifs, a role which gave a prominence to the orchestra which "...elevated its status to that of a prima donna". Wagner's operas were scored with unprecedented scope and complexity, adding more brass instruments and huge ensemble sizes: indeed, his score to "Das Rheingold" calls for six harps.
As the role of the orchestra and other instrumental ensembles changed over the history of opera, so did the role of leading the musicians. In the Baroque era, the musicians were usually directed by the harpsichord player, although the French composer Lully is known to have conducted with a long staff. In the 1800s, during the Classical period, the first violinist, also known as the concertmaster, would lead the orchestra while sitting. Over time, some directors began to stand up and use hand and arm gestures to lead the performers. Eventually this role of music director became termed the conductor, and a podium was used to make it easier for all the musicians to see him or her. By the time Wagnerian operas were introduced, the complexity of the works and the huge orchestras used to play them gave the conductor an increasingly important role. Modern opera conductors have a challenging role: they have to direct both the orchestra in the orchestra pit and the singers up on stage.
Since the days of Handel and Mozart, many composers have favored Italian as the language for the libretto of their operas. From the Bel Canto era to Verdi, composers would sometimes supervise versions of their operas in both Italian and French. Because of this, operas such as "Lucia di Lammermoor" or "Don Carlos" are today deemed canonical in both their French and Italian versions.
Until the mid-1950s, it was acceptable to produce operas in translation even if these had not been authorized by the composer or the original librettists. For example, opera houses in Italy routinely staged Wagner in Italian. After WWII, opera scholarship improved, artists refocused on the original versions, and translations fell out of favor. Knowledge of European languages, especially Italian, French, and German, is today an important part of the training for professional singers. "The biggest chunk of operatic training is in linguistics and musicianship," explains mezzo-soprano Dolora Zajick. "[I have to understand] not only what I'm singing, but what everyone else is singing. I sing Italian, Czech, Russian, French, German, English."
In the 1980s, supertitles (sometimes called surtitles) began to appear. Although supertitles were first almost universally condemned as a distraction, today many opera houses provide either supertitles, generally projected above the theatre's proscenium arch, or individual seat screens where spectators can choose from more than one language. TV broadcasts typically include subtitles even if intended for an audience who knows well the language (for example, a RAI broadcast of an Italian opera). These subtitles target not only the hard of hearing but the audience generally, since a sung discourse is much harder to understand than a spoken one—even in the ears of native speakers. Subtitles in one or more languages have become standard in opera broadcasts, simulcasts, and DVD editions.
Today, operas are only rarely performed in translation. Exceptions include the English National Opera, the Opera Theatre of Saint Louis, Opera Theater of Pittsburgh, and Opera South East, which favor English translations. Other exceptions are opera productions intended for a young audience, such as Humperdinck's "Hansel and Gretel" and some productions of Mozart's "The Magic Flute".
Outside the US, and especially in Europe, most opera houses receive public subsidies from taxpayers. In Milan, Italy, 60% of La Scala's annual budget of €115 million is from ticket sales and private donations, with the remaining 40% coming from public funds. In 2005, La Scala received 25% of Italy's total state subsidy of €464 million for the performing arts. In the UK, Arts Council England provides funds to Opera North, the Royal Opera House, Welsh National Opera, and English National Opera. Between 2012 and 2015, these four opera companies along with the English National Ballet, Birmingham Royal Ballet and Northern Ballet accounted for 22% of the funds in the Arts Council's national portfolio. During that period, the Council undertook an analysis of its funding for large-scale opera and ballet companies, setting recommendations and targets for the companies to meet prior to the 2015–2018 funding decisions. In February 2015, concerns over English National Opera's business plan led to the Arts Council placing it "under special funding arrangements" in what "The Independent" termed "the unprecedented step" of threatening to withdraw public funding if the Council's concerns were not met by 2017. European public funding to opera has led to a disparity between the number of year-round opera houses in Europe and the United States. For example, "Germany has about 80 year-round opera houses [as of 2004], while the U.S., with more than three times the population, does not have any. Even the Met only has a seven-month season."
A milestone for opera broadcasting in the U.S. was achieved on 24 December 1951, with the live broadcast of "Amahl and the Night Visitors", an opera in one act by Gian Carlo Menotti. It was the first opera specifically composed for television in America. Another milestone occurred in Italy in 1992 when "Tosca" was broadcast live from its original Roman settings and times of the day: The first act came from the 16th-century Church of Sant'Andrea della Valle at noon on Saturday; the 16th-century Palazzo Farnese was the setting for the second at 8:15 P.M.; and on Sunday at 6 A.M., the third act was broadcast from Castel Sant'Angelo. The production was transmitted via satellite to 105 countries.
Major opera companies have begun presenting their performances in local cinemas throughout the United States and many other countries. The Metropolitan Opera began a series of live high-definition video transmissions to cinemas around the world in 2006. In 2007, Met performances were shown in over 424 theaters in 350 U.S. cities. "La bohème" went out to 671 screens worldwide. San Francisco Opera began prerecorded video transmissions in March 2008. As of June 2008, approximately 125 theaters in 117 U.S. cities carry the showings. The HD video opera transmissions are presented via the same HD digital cinema projectors used for major Hollywood films. European opera houses and festivals including the Royal Opera in London, La Scala in Milan, the Salzburg Festival, La Fenice in Venice, and the Maggio Musicale in Florence have also transmitted their productions to theaters in cities around the world since 2006, including 90 cities in the U.S.
The emergence of the Internet has also affected the way in which audiences consume opera. In 2009 the British Glyndebourne Festival Opera offered for the first time an online digital video download of its complete 2007 production of "Tristan und Isolde". In the 2013 season the festival streamed all six of its productions online. In July 2012 the first online community opera was premiered at the Savonlinna Opera Festival. Titled "Free Will", it was created by members of the Internet group Opera By You. Its 400 members from 43 countries wrote the libretto, composed the music, and designed the sets and costumes using the Wreckamovie web platform. Savonlinna Opera Festival provided professional soloists, an 80-member choir, a symphony orchestra, and the stage machinery. It was performed live at the festival and streamed live on the internet. | https://en.wikipedia.org/wiki?curid=22348 |
Osteichthyes
Osteichthyes, popularly referred to as the bony fish, is a diverse taxonomic group of fish that have skeletons primarily composed of bone tissue, as opposed to cartilage. The vast majority of fish are members of Osteichthyes, which is an extremely diverse and abundant group consisting of 45 orders, over 435 families, and 28,000 species. It is the largest class of vertebrates in existence today.
The group Osteichthyes is divided into the ray-finned fish (Actinopterygii) and lobe-finned fish (Sarcopterygii). The oldest known fossils of bony fish are about 420 million years old; they are also transitional fossils, showing a tooth pattern that is in between the tooth rows of sharks and bony fishes.
Osteichthyes can be compared to Euteleostomi. In paleontology the terms are synonymous. In ichthyology the difference is that Euteleostomi presents a cladistic view which includes the terrestrial tetrapods that evolved from lobe-finned fish. Until recently, the view of most ichthyologists has been that Osteichthyes were paraphyletic and include only fishes. However, since 2013 widely cited ichthyology papers have been published with phylogenetic trees that treat the Osteichthyes as a clade including tetrapods.
Bony fish are characterized by a relatively stable pattern of cranial bones and a rooted, medial insertion of the mandibular muscle in the lower jaw. The head and pectoral girdles are covered with large dermal bones. The eyeball is supported by a sclerotic ring of four small bones, but this characteristic has been lost or modified in many modern species. The labyrinth in the inner ear contains large otoliths. The braincase, or neurocranium, is frequently divided into anterior and posterior sections divided by a fissure.
Early bony fish had simple lungs (a pouch on either side of the esophagus) which helped them breathe in low-oxygen water. In many bony fish these have evolved into swim bladders, which help the body create a neutral balance between sinking and floating. (The lungs of amphibians, reptiles, birds, and mammals were inherited from their bony fish ancestors.) They do not have fin spines, but instead support the fin with lepidotrichia (bone fin rays). They also have an operculum, which helps them breathe without having to swim.
Bony fish have no placoid scales. Mucus glands coat the body. Most have smooth and overlapping ganoid, cycloid or ctenoid scales.
Traditionally, Osteichthyes were considered a class, recognised by having a swim bladder, only three pairs of gill arches hidden behind a bony operculum, and a predominantly bony skeleton. Under this classification system, the Osteichthyes were paraphyletic with regard to land vertebrates as the common ancestor of all Osteichthyes includes tetrapods amongst its descendants. While the largest subclass, the Actinopterygii (ray-finned fish), is monophyletic, with the inclusion of the smaller sub-class Sarcopterygii, Osteichthyes was regarded as paraphyletic.
This has led to the current cladistic classification which splits the Osteichthyes into two full classes. Under this scheme Osteichthyes is monophyletic, as it includes the tetrapods making it a synonym of the clade Euteleostomi. Most bony fish belong to the ray-finned fish (Actinopterygii).
A phylogeny of living Osteichthyes, including the tetrapods, is shown in the cladogram.
Whole-genome duplication took place in the ancestral Osteichthyes.
All bony fish possess gills. For the majority this is their sole or main means of respiration. Lungfish and other osteichthyan species are capable of respiration through lungs or vascularized swim bladders. Other species can respire through their skin, intestines, and/or stomach.
Osteichthyes are primitively ectothermic (cold blooded), meaning that their body temperature is dependent on that of the water. But some of the larger marine osteichthyans, such as the opah, swordfish and tuna, have independently evolved various levels of endothermy. Bony fish can be any type of heterotroph: numerous species of omnivore, carnivore, herbivore, filter-feeder or detritivore are documented.
Some bony fish are hermaphrodites, and a number of species exhibit parthenogenesis. Fertilization is usually external, but can be internal. Development is usually oviparous (egg-laying) but can be ovoviviparous, or viviparous. Although there is usually no parental care after birth, before birth parents may scatter, hide, guard or brood eggs, with sea horses being notable in that the males undergo a form of "pregnancy", brooding eggs deposited in a ventral pouch by a female.
The ocean sunfish is the heaviest bony fish in the world, while the longest is the king of herrings, a type of oarfish. Specimens of ocean sunfish have been observed up to in length and weighing up to . Other very large bony fish include the Atlantic blue marlin, some specimens of which have been recorded as in excess of , the black marlin, some sturgeon species, and the giant and goliath grouper, which both can exceed in weight. In contrast, "Paedocypris progenetica" and the stout infantfish can measure less than .
The Beluga sturgeon is the largest species of freshwater bony fish extant today, and "Arapaima gigas" is among the largest of the freshwater fish. The largest bony fish ever was "Leedsichthys", which dwarfed the beluga sturgeon as well as the ocean sunfish, giant grouper and all the other giant bony fishes alive today.
Cartilaginous fishes can be further divided into sharks, rays and chimaeras. In the table below, the comparison is made between sharks and bony fishes. For the further differences with rays, see sharks versus rays. | https://en.wikipedia.org/wiki?curid=22350 |
Otto Dix
Wilhelm Heinrich Otto Dix (2 December 1891 – 25 July 1969) was a German painter and printmaker, noted for his ruthless and harshly realistic depictions of German society during the Weimar Republic and the brutality of war. Along with George Grosz and Max Beckmann, he is widely considered one of the most important artists of the "Neue Sachlichkeit".
Otto Dix was born in Untermhaus, Germany, now a part of the city of Gera, Thuringia. The eldest son of Franz Dix, an iron foundry worker, and Louise, a seamstress who had written poetry in her youth, he was exposed to art from an early age. The hours he spent in the studio of his cousin, Fritz Amann, who was a painter, were decisive in forming young Otto's ambition to be an artist; he received additional encouragement from his primary school teacher. Between 1906 and 1910, he served an apprenticeship with painter Carl Senff, and began painting his first landscapes. In 1910, he entered the Kunstgewerbeschule in Dresden, now the Dresden Academy of Fine Arts, where Richard Guhr was among his teachers. At that time the school was not a school for the fine arts but rather an academy that concentrated on applied arts and crafts.
The majority of Dix’s early works concentrated on landscapes and portraits which were done in a stylized realism that later shifted to expressionism.
When the First World War erupted, Dix volunteered for the German Army. He was assigned to a field artillery regiment in Dresden. In the autumn of 1915 he was assigned as a non-commissioned officer of a machine-gun unit on the Western front and took part in the Battle of the Somme. In November 1917, his unit was transferred to the Eastern front until the end of hostilities with Russia, and in February 1918 he was stationed in Flanders. Back on the western front, he fought in the German Spring Offensive. He earned the Iron Cross (second class) and reached the rank of "vizefeldwebel". In August of that year he was wounded in the neck, and shortly after he took pilot training lessons.
He took part in a "Fliegerabwehr-Kurs" (anti-aircraft defence course) in Tongern, was promoted to Vizefeldwebel and after passing the medical tests transferred to Aviation Replacement Unit Schneidemühl in Posen. He was discharged from service on 22 December 1918 and was home for Christmas.
Dix was profoundly affected by the sights of the war, and later described a recurring nightmare in which he crawled through destroyed houses. He represented his traumatic experiences in many subsequent works, including a portfolio of fifty etchings called "Der Krieg", published in 1924. Subsequently, he referred again to the war in The War Triptych, painted from 1929-1932.
At the end of 1918 Dix returned to Gera, but the next year he moved to Dresden, where he studied at the Hochschule für Bildende Künste. He became a founder of the Dresden Secession group in 1919, during a period when his work was passing through an expressionist phase. In 1920, he met George Grosz and, influenced by Dada, began incorporating collage elements into his works, some of which he exhibited in the first Dada Fair in Berlin. He also participated in the German Expressionists exhibition in Darmstadt that year.
In 1924, he joined the Berlin Secession; by this time he was developing an increasingly realistic style of painting that used thin glazes of oil paint over a tempera underpainting, in the manner of the old masters. His 1923 painting "The Trench", which depicted dismembered and decomposed bodies of soldiers after a battle, caused such a furore that the Wallraf-Richartz Museum hid the painting behind a curtain. In 1925 the then-mayor of Cologne, Konrad Adenauer, cancelled the purchase of the painting and forced the director of the museum to resign.
Dix was a contributor to the "Neue Sachlichkeit" exhibition in Mannheim in 1925, which featured works by George Grosz, Max Beckmann, Heinrich Maria Davringhausen, Karl Hubbuch, Rudolf Schlichter, Georg Scholz and many others. Dix's work, like that of Grosz—his friend and fellow veteran—was extremely critical of contemporary German society and often dwelled on the act of Lustmord, or sexualized murder. He drew attention to the bleaker side of life, unsparingly depicting prostitution, violence, old age and death.
In one of his few statements, published in 1927, Dix declared, "The object is primary and the form is shaped by the object."
Among his most famous paintings are "Sailor and Girl" (1925), used as the cover of Philip Roth's 1995 novel Sabbath's Theater, the triptych "Metropolis" (1928), a scornful portrayal of depraved actions of Germany's Weimar Republic, where nonstop revelry was a way to deal with the wartime defeat and financial catastrophe, and the startling "Portrait of the Journalist Sylvia von Harden" (1926). His depictions of legless and disfigured veterans—a common sight on Berlin's streets in the 1920s—unveil the ugly side of war and illustrate their forgotten status within contemporary German society, a concept also developed in Erich Maria Remarque's "All Quiet on the Western Front."
When the Nazis came to power in Germany, they regarded Dix as a degenerate artist and had him sacked from his post as an art teacher at the Dresden Academy. He later moved to Lake Constance in the southwest of Germany. Dix's paintings "The Trench" and "War Cripples" were exhibited in the state-sponsored Munich 1937 exhibition of degenerate art, "Entartete Kunst". "War Cripples" was later burned. "The Trench" was long thought to have been destroyed too, but there are indications the work survived until at least 1940. Its later whereabouts are unknown. It may have been looted during the confusion at the end of the war. It has been called 'perhaps the most famous picture in post-war Europe ... a masterpiece of unspeakable horror'.
Dix, like all other practising artists, was forced to join the Nazi government's Reich Chamber of Fine Arts (Reichskammer der bildenden Künste), a subdivision of Goebbels' Cultural Ministry ("Reichskulturkammer"). Membership was mandatory for all artists in the Reich. Dix had to promise to paint only inoffensive landscapes. He still painted an occasional allegorical painting that criticized Nazi ideals. His paintings that were considered "degenerate" were discovered among the 1500+ paintings hidden away by an art dealer and his son in 2012.
In 1939 he was arrested on the trumped-up charge of being involved in a plot against Hitler (see Georg Elser), but was later released.
During World War II, Dix was conscripted into the "Volkssturm". He was captured by French troops at the end of the war and released in February 1946.
Dix eventually returned to Dresden and remained there until 1966. After the war most of his paintings were religious allegories or depictions of post-war suffering, including his 1948 "Ecce homo with self-likeness behind barbed wire". In this period, Dix gained recognition in both parts of the then-divided Germany. In 1959 he was awarded the Grand Merit Cross of the Federal Republic of Germany ("Großes Verdienstkreuz") and in 1950, he was unsuccessfully nominated for the National Prize of the GDR. He received the Lichtwark Prize in Hamburg and the Martin Andersen Nexø Art Prize in Dresden to mark his 75th birthday in 1967. Dix was made an honorary citizen of Gera. Also in 1967 he received the Hans Thoma Prize and in 1968 the Rembrandt Prize of the Goethe Foundation in Salzburg.
Dix died on 25 July 1969 after a second stroke in Singen am Hohentwiel. He is buried at Hemmenhofen on Lake Constance.
Dix had three children: a daughter Nelly (1923–1955) and two sons, Ursus (1927–2002) and Jan (born 1928).
Since 1991, the 100th anniversary of Dix's birth, the 18th-century house where he was born and grew up, at Mohrenplatz 4 in the city of Gera, has been open to the public as a museum and art gallery. It is managed by the city administration.
As well as providing access to the rooms Dix lived in, it houses a permanent collection of 400 of his works on paper and paintings. Visitors can see examples of his childhood sketch books, watercolours and drawings from the 1920s and 1930s, and lithographs. The collection also includes 48 postcards he sent from the front during World War I.
The gallery also regularly hosts temporary exhibitions.
The building was affected by a flood in June 2013. In order to repair the underlying damage, the museum was closed in January 2016, and re-opened in December 2016 following restoration. | https://en.wikipedia.org/wiki?curid=22351 |
Orrorin
Orrorin tugenensis is a postulated early species of Homininae, estimated at 6.1 to 5.7 million years old and discovered in 2000. It is not confirmed how "Orrorin" is related to modern humans. Its discovery was used to argue against the hypothesis that australopithecines are human ancestors, even though that remains the most prevalent hypothesis of human evolution as of 2012.
The name of genus Orrorin (plural "Orroriek") means "original man" in Tugen, and the name of the only classified species, "O. tugenensis", derives from Tugen Hills in Kenya, where the first fossil was found in 2000. As of 2007, 20 fossils of the species have been found.
The 20 specimens found as of 2007 include: the posterior part of a mandible in two pieces; a symphysis and several isolated teeth; three fragments of femora; a partial humerus; a proximal phalanx; and a distal thumb phalanx.
"Orrorin" had small teeth relative to its body size. Its dentition differs from that found in "Australopithecus" in that its cheek teeth are smaller and less elongated mesiodistally and from "Ardipithecus" in that its enamel is thicker. The dentition differs from both these species in the presence of a mesial groove on the upper canines. The canines are ape-like but reduced, like those found in Miocene apes and female chimpanzees. "Orrorin" had small post-canines and was microdont, like modern humans, whereas robust australopithecines were megadont.
In the femur, the head is spherical and rotated anteriorly; the neck is elongated and oval in section and the lesser trochanter protrudes medially. While these suggest that "Orrorin" was bipedal, the rest of the postcranium indicates it climbed trees. While the proximal phalanx is curved, the distal pollical phalanx is of human proportions and has thus been associated with toolmaking, but should probably be associated with grasping abilities useful for tree-climbing in this context.
After the fossils were found in 2000, they were held at the Kipsaraman village community museum, but the museum was subsequently closed. Since then, according to the Community Museums of Kenya chairman Eustace Kitonga, the fossils are stored at a secret bank vault in Nairobi.
If "Orrorin" proves to be a direct human ancestor, then according to some paleoanthropologists, australopithecines such as "Australopithecus afarensis" ("Lucy") may be considered a side branch of the hominid family tree: "Orrorin" is both earlier, by almost 3 million years, and more similar to modern humans than is "A. afarensis". The main similarity is that the "Orrorin" femur is morphologically closer to that of "Homo sapiens" than is "Lucy's"; there is, however, some debate over this point.
However, another point of view cites comparisons between "Orrorin" and other Miocene apes, rather than extant great apes, which instead suggest that the femur is intermediate between those of australopiths and the earlier apes.
Other fossils (leaves and many mammals) found in the Lukeino Formation show that "Orrorin" lived in a dry evergreen forest environment, not the savanna assumed by many theories of human evolution.
The fossils of "Orrorin tugenensis" shares no derived features of hominoid great-ape relatives. In contrast, ""Orrorin" shares several apomorphic features with modern humans, as well as some with australopithecines, including the presence of an "obturator externus" groove, elongated femoral neck, anteriorly twisted head (posterior twist in "Australopithecus"), anteroposteriorly compressed femoral neck, asymmetric distribution of cortexin the femoral neck, shallow superior notch, and a well developed gluteal tuberosity which coalesces vertically with the crest that descends the femoral shaft poste-riorly." It does, however, also share many of such properties with several Miocene ape species, even showing some transitional elements between basal apes like the Aegypropithecus and Australopithecus. According to recent studies "Orrorin tugenensis" is a basal hominid that adapted an early form of bipedalism. Based on the structure of its femoral head it still exhibited some arboreal properties, likely to forage and build shelters. The length of the femoral neck in "Orrorin tugenensis" fossils is elongated and is similar in shape and length to "Australopithicines" and modern humans. Additionally, its femoral head is larger in comparison to "Australopithicines" and is much closer in shape and relative size to "Homo sapiens." This archaic morphology suggests that O. tugenensis developed bipedalism 6 million years ago.
"O. tugenensis" shares an early hominin feature in which their iliac blade is flared to help counter the torque of their body weight, this shows that they adapted bipedalism around 6 MYA. These features are shared with many species of "Australopithecus". It has been suggested by Pickford that the many features "Orrorin" shares with modern humans show that it is more closely related to "Homo" "sapiens" than to "Australopithecus". This would mean that Australopithecus would represent a side branch in the homin evolution that does not directly lead to "Homo". However the femora morphology of "O. tugenensis" shares many similarities with "Australopithicine" femora morphology, which weakens this claim. Another study conducted by Almecija suggested that "Orrorin" is more closely related to early hominins than to "Homo". An analysis of the BAR 10020' 00 femur showed that "Orrorin" is an intermediate between "Pan" and "Australopithecus afarensis". The current prevailing theory is that "Orrorin tugenensis" is a basal hominin and that bipedalism developed early in the hominin clade and successfully evolved down the human evolutionary tree. It is clear that the phylogeny of "Orrorin" is uncertain, however the evidence of the evolution of bipedalism is an invaluable discovery from this early fossil hominin.
The team that found these fossils in 2000 was led by Brigitte Senut and Martin Pickford from the Muséum national d'histoire naturelle. The 20 fossils have been found at four sites in the Lukeino Formation, located in Kenya: of these, the fossils at Cheboit and Aragai are the oldest, while those in Kapsomin and Kapcheberek are found in the upper levels of the formation. | https://en.wikipedia.org/wiki?curid=22353 |
Lists of Olympic medalists
This article includes lists of all Olympic medalists since 1896, organized by each Olympic sport or discipline, and also by Olympiad.
A. Including the military patrol event at the 1924 Games, which the IOC now refers to as biathlon.
B. Figure skating was held at the 1908 and 1920 Summer Olympic games prior to the establishment of the Winter Olympics. 21 medals (seven of each color) were awarded in seven events.
C. A men's ice hockey tournament was held at the 1920 Summer Olympics, and then added as a Winter Olympics event. Three medals were awarded.
D. The IOC site for the 1900 Olympic Games gives an erroneous figure of 95 events, while the IOC database for the 1900 Olympic Games lists 85.
E. The IOC site for the 1904 Olympic Games gives an erroneous figure of 91 events, while the IOC database for the 1904 Olympic Games lists 94.
F. The IOC site for the 1920 Olympic Games gives an erroneous figure of 154 events, while the IOC database for the 1920 Olympic Games lists 156.
G. Due to Australian quarantine laws, 6 equestrian events were held in Stockholm several months before the rest of the 1956 Games in Melbourne.
H. The IOC site for the 1956 Olympic Games gives a figure of 145 events; however, there were actually 151 (145 events in Melbourne and 6 equestrian events in Stockholm). | https://en.wikipedia.org/wiki?curid=22360 |
Ordered pair
In mathematics, an ordered pair ("a", "b") is a pair of objects. The order in which the objects appear in the pair is significant: the ordered pair ("a", "b") is different from the ordered pair ("b", "a") unless "a" = "b". (In contrast, the unordered pair {"a", "b"} equals the unordered pair {"b", "a"}.)
Ordered pairs are also called 2-tuples, or sequences (sometimes, lists in a computer science context) of length 2. Ordered pairs of scalars are sometimes called 2-dimensional vectors. (Technically, this is an abuse of notation since an ordered pair need not be an element of a vector space.)
The entries of an ordered pair can be other ordered pairs, enabling the recursive definition of ordered "n"-tuples (ordered lists of "n" objects). For example, the ordered triple ("a","b","c") can be defined as ("a", ("b","c")), i.e., as one pair nested in another.
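To make the nesting concrete, the following minimal Python sketch (purely illustrative; the helper name "nest" is invented here, not part of the article) encodes an ordered "n"-tuple as right-nested pairs:

```python
def nest(*items):
    # Encode an ordered n-tuple (n >= 2) as right-nested ordered pairs,
    # mirroring the definition of ("a", "b", "c") as ("a", ("b", "c")).
    if len(items) < 2:
        raise ValueError("an ordered tuple needs at least two entries")
    if len(items) == 2:
        return (items[0], items[1])
    return (items[0], nest(*items[1:]))

assert nest("a", "b", "c") == ("a", ("b", "c"))
assert nest("a", "b", "c", "d") == ("a", ("b", ("c", "d")))
```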
In the ordered pair ("a", "b"), the object "a" is called the "first entry", and the object "b" the "second entry" of the pair. Alternatively, the objects are called the first and second "components", the first and second "coordinates", or the left and right "projections" of the ordered pair.
Cartesian products and binary relations (and hence functions) are defined in terms of ordered pairs.
Let ("a"1, "b"1) and ("a"2, "b"2) be ordered pairs. Then the "characteristic" (or "defining") "property" of the ordered pair is: ("a"1, "b"1) = ("a"2, "b"2) if and only if "a"1 = "a"2 and "b"1 = "b"2.
The set of all ordered pairs whose first entry is in some set "A" and whose second entry is in some set "B" is called the Cartesian product of "A" and "B", and written "A" × "B". A binary relation between sets "A" and "B" is a subset of "A" × "B".
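As a hedged illustration, this short Python sketch (the names are invented for the example) builds a Cartesian product and a binary relation as a subset of it, using Python's built-in pairs:

```python
A = {1, 2, 3}
B = {"x", "y"}

# Cartesian product A x B: the set of all ordered pairs (a, b)
# with first entry from A and second entry from B.
product = {(a, b) for a in A for b in B}

# Order matters: (1, "x") is in A x B, but ("x", 1) is not.
assert (1, "x") in product and ("x", 1) not in product

# A binary relation between A and A is any subset of A x A;
# here, the strict "less than" relation on A.
less_than = {(a, b) for a in A for b in A if a < b}
assert (1, 2) in less_than and (2, 1) not in less_than
```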
The notation may be used for other purposes, most notably as denoting open intervals on the real number line. In such situations, the context will usually make it clear which meaning is intended. For additional clarification, the ordered pair may be denoted by the variant notation ⟨"a", "b"⟩, but this notation also has other uses.
The left and right projections of a pair "p" are usually denoted by "π"1("p") and "π"2("p"), or by "ℓ"("p") and "r"("p"), respectively.
In contexts where arbitrary "n"-tuples are considered, "π""i"("t") is a common notation for the "i"-th component of an "n"-tuple "t".
In some introductory mathematics textbooks an informal (or intuitive) definition of ordered pair is given, such as
For any two objects "a" and "b", the ordered pair ("a", "b") is a notation specifying the two objects "a" and "b", in that order.
This is usually followed by a comparison to a set of two elements, pointing out that in a set "a" and "b" must be different, but in an ordered pair they may be equal and that while the order of listing the elements of a set doesn't matter, in an ordered pair changing the order of distinct entries changes the ordered pair.
This "definition" is unsatisfactory because it is only descriptive and is based on an intuitive understanding of "order". However, as is sometimes pointed out, no harm will come from relying on this description and almost everyone thinks of ordered pairs in this manner.
A more satisfactory approach is to observe that the characteristic property of ordered pairs given above is all that is required to understand the role of ordered pairs in mathematics. Hence the ordered pair can be taken as a primitive notion, whose associated axiom is the characteristic property. This was the approach taken by the N. Bourbaki group in its "Theory of Sets", published in 1954. However, this approach also has its drawbacks as both the existence of ordered pairs and their characteristic property must be axiomatically assumed.
Another way to rigorously deal with ordered pairs is to define them formally in the context of set theory. This can be done in several ways and has the advantage that existence and the characteristic property can be proven from the axioms that define the set theory. One of the most cited versions of this definition is due to Kuratowski (see below) and his definition was used in the second edition of Bourbaki's "Theory of Sets", published in 1970. Even those mathematical textbooks that give an informal definition of ordered pairs will often mention the formal definition of Kuratowski in an exercise.
If one agrees that set theory is an appealing foundation of mathematics, then all mathematical objects must be defined as sets of some sort. Hence if the ordered pair is not taken as primitive, it must be defined as a set. Several set-theoretic definitions of the ordered pair are given below.
Norbert Wiener proposed the first set theoretical definition of the ordered pair in 1914: ("a", "b") := {{{"a"}, ∅}, {{"b"}}}.
He observed that this definition made it possible to define the types of "Principia Mathematica" as sets. "Principia Mathematica" had taken types, and hence relations of all arities, as primitive.
Wiener used {{"b"}} instead of {"b"} to make the definition compatible with type theory where all elements in a class must be of the same "type". With "b" nested within an additional set, its type is equal to that of {{"a"}, ∅}.
About the same time as Wiener (1914), Felix Hausdorff proposed his definition: ("a", "b") := {{"a", 1}, {"b", 2}}, "where 1 and 2 are two distinct objects different from a and b."
In 1921 Kazimierz Kuratowski offered the now-accepted definition of the ordered pair ("a", "b"): ("a", "b")K := {{"a"}, {"a", "b"}}.
Note that this definition is used even when the first and the second coordinates are identical: ("x", "x")K = {{"x"}, {"x", "x"}} = {{"x"}, {"x"}} = {{"x"}}.
Given some ordered pair "p", the property ""x" is the first coordinate of "p"" can be formulated as: ∀"Y" ∈ "p" : "x" ∈ "Y".
The property ""x" is the second coordinate of "p"" can be formulated as: (∃"Y" ∈ "p" : "x" ∈ "Y") ∧ (∀"Y"1, "Y"2 ∈ "p" : "Y"1 ≠ "Y"2 → ("x" ∉ "Y"1 ∨ "x" ∉ "Y"2)).
In the case that the left and right coordinates are identical, the right conjunct ∀"Y"1, "Y"2 ∈ "p" : "Y"1 ≠ "Y"2 → ("x" ∉ "Y"1 ∨ "x" ∉ "Y"2) is trivially true, since "Y"1 ≠ "Y"2 is never the case.
This is how we can extract the first coordinate of a pair (using the notation for arbitrary intersection and arbitrary union): "π"1("p") = ∪∩"p".
This is how the second coordinate can be extracted: "π"2("p") = ∪{"x" ∈ ∪"p" : ∪"p" ≠ ∩"p" → "x" ∉ ∩"p"}.
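A minimal Python sketch of the Kuratowski pair and the two extraction formulas above (sets are modeled as frozensets; the helper names are invented here):

```python
from functools import reduce

def kpair(a, b):
    # Kuratowski pair (a, b)_K = {{a}, {a, b}}.
    return frozenset({frozenset({a}), frozenset({a, b})})

def first(p):
    # First coordinate: the sole member of the intersection of all of p.
    (x,) = reduce(frozenset.intersection, p)
    return x

def second(p):
    # Second coordinate: union of {x in U(p) : U(p) != I(p) -> x not in I(p)},
    # where U is arbitrary union and I is arbitrary intersection.
    union = reduce(frozenset.union, p)
    inter = reduce(frozenset.intersection, p)
    (y,) = {x for x in union if union == inter or x not in inter}
    return y

p = kpair(1, 2)
assert (first(p), second(p)) == (1, 2)
q = kpair(3, 3)  # identical coordinates: q collapses to {{3}}
assert (first(q), second(q)) == (3, 3)
```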
The above Kuratowski definition of the ordered pair is "adequate" in that it satisfies the characteristic property that an ordered pair must satisfy, namely that ("a", "b") = ("c", "d") ↔ ("a" = "c" ∧ "b" = "d"). In particular, it adequately expresses 'order', in that ("a", "b") = ("b", "a") is false unless "b" = "a". There are other definitions, of similar or lesser complexity, that are equally adequate: ("a", "b")reverse := {{"b"}, {"a", "b"}} and ("a", "b")short := {"a", {"a", "b"}}.
The reverse definition is merely a trivial variant of the Kuratowski definition, and as such is of no independent interest. The definition short is so-called because it requires two rather than three pairs of braces. Proving that short satisfies the characteristic property requires the Zermelo–Fraenkel set theory axiom of regularity. Moreover, if one uses von Neumann's set-theoretic construction of the natural numbers, then 2 is defined as the set {0, 1} = {0, {0}}, which is indistinguishable from the pair (0, 0)short. Yet another disadvantage of the short pair is the fact that even if "a" and "b" are of the same type, the elements of the short pair are not. (However, if "a" = "b" then the short version keeps having cardinality 2, which is something one might expect of any "pair", including any "ordered pair". Also note that the short version is used in Tarski–Grothendieck set theory, upon which the Mizar system is founded.)
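The collision between the short pair (0, 0) and the von Neumann numeral 2 can be checked mechanically; a small Python sketch (frozensets stand in for sets, names invented here):

```python
def short_pair(a, b):
    # (a, b)_short = {a, {a, b}}.
    return frozenset({a, frozenset({a, b})})

# Von Neumann naturals: 0 = {}, 1 = {0}, 2 = {0, 1}.
zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})

# (0, 0)_short = {0, {0, 0}} = {0, {0}} is literally the same set as 2.
assert short_pair(zero, zero) == two
```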
Prove: ("a", "b") = ("c", "d") if and only if "a" = "c" and "b" = "d".
Kuratowski:
"If". If "a = c" and "b = d", then = . Thus ("a, b")K = ("c, d")K.
"Only if". Two cases: "a" = "b", and "a" ≠ "b".
If "a" = "b":
If "a" ≠ "b", then ("a, b")K = ("c, d")K implies = .
Reverse:
("a, b")reverse = = = ("b, a")K.
"If". If ("a, b")reverse = ("c, d")reverse,
("b, a")K = ("d, c")K. Therefore, "b = d" and "a = c".
"Only if". If "a = c" and "b = d", then = .
Thus ("a, b")reverse = ("c, d")reverse.
Short:
"If": If "a = c" and "b = d", then {"a", {"a, b"}} = {"c", {"c, d"}}. Thus ("a, b")short = ("c, d")short.
"Only if": Suppose {"a", {"a, b"}} = {"c", {"c, d"}}.
Then "a" is in the left hand side, and thus in the right hand side.
Because equal sets have equal elements, one of "a = c" or "a" = {"c, d"} must be the case.
Again, we see that {"a, b"} = "c" or {"a, b"} = {"c, d"}.
Rosser (1953) employed a definition of the ordered pair due to Quine which requires a prior definition of the natural numbers. Let ℕ be the set of natural numbers and define first σ("x") := "x" + 1 if "x" ∈ ℕ, and σ("x") := "x" otherwise.
The function σ increments its argument if it is a natural number and leaves it as is otherwise; the number 0 does not appear as a functional value of σ.
As "x" ∖ ℕ is the set of the elements of "x" not in ℕ, go on with φ("x") := σ["x"] = ("x" ∖ ℕ) ∪ {"n" + 1 : "n" ∈ "x" ∩ ℕ}.
This is the set image of a set "x" under σ, sometimes denoted by σ″"x" as well. Applying the function φ to a set "x" simply increments every natural number in it. In particular, φ("x") never contains the number 0, so that for any sets "x" and "y", φ("x") ≠ {0} ∪ φ("y").
Further, define ψ("x") := σ["x"] ∪ {0} = φ("x") ∪ {0}.
By this, ψ("x") always contains the number 0.
Finally, define the ordered pair ("A", "B") as the disjoint union ("A", "B") := φ["A"] ∪ ψ["B"] = {φ("a") : "a" ∈ "A"} ∪ {φ("b") ∪ {0} : "b" ∈ "B"} (which is φ″"A" ∪ ψ″"B" in alternate notation).
Extracting all the elements of the pair that do not contain 0 and undoing φ yields "A". Likewise, "B" can be recovered from the elements of the pair that do contain 0.
For example, the pair ({"a", 0}, {"b", 1}) is encoded as {{"a", 1}, {"b", 2, 0}} provided "a", "b" ∉ ℕ.
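A Python sketch of this encoding (a hedged illustration: natural numbers are modeled as non-negative Python ints, sets as frozensets, and all function names are invented here):

```python
def sigma(x):
    # Increment x if it is a natural number; leave any other object as is.
    return x + 1 if isinstance(x, int) else x

def phi(s):
    # Set image of s under sigma: bumps every natural number in s,
    # so phi(s) never contains 0.
    return frozenset(sigma(e) for e in s)

def psi(s):
    # phi(s) with the marker 0 re-inserted.
    return phi(s) | {0}

def qr_pair(A, B):
    # Quine-Rosser pair: members coming from B are tagged with 0.
    return frozenset(phi(a) for a in A) | frozenset(psi(b) for b in B)

# The pair ({a, 0}, {b, 1}) encodes as {{a, 1}, {b, 2, 0}},
# provided a and b are not natural numbers.
A = frozenset({frozenset({"a", 0})})
B = frozenset({frozenset({"b", 1})})
assert qr_pair(A, B) == frozenset({frozenset({"a", 1}),
                                   frozenset({"b", 2, 0})})
```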
In type theory and in outgrowths thereof such as the axiomatic set theory NF, the Quine–Rosser pair has the same type as its projections and hence is termed a "type-level" ordered pair. Hence this definition has the advantage of enabling a function, defined as a set of ordered pairs, to have a type only 1 higher than the type of its arguments. This definition works only if the set of natural numbers is infinite. This is the case in NF, but not in type theory or in NFU. J. Barkley Rosser showed that the existence of such a type-level ordered pair (or even a "type-raising by 1" ordered pair) implies the axiom of infinity. For an extensive discussion of the ordered pair in the context of Quinian set theories, see Holmes (1998).
Early in the development of the set theory, before paradoxes were discovered, Cantor followed Frege by defining the ordered pair of two sets as the class of all relations that hold between these sets, assuming that the notion of relation is primitive: ("x", "y") = {"R" : "xRy"}.
This definition is inadmissible in most modern formalized set theories and is methodologically similar to defining the cardinal of a set as the class of all sets equipotent with the given set.
Morse–Kelley set theory makes free use of proper classes. Morse defined the ordered pair so that its projections could be proper classes as well as sets. (The Kuratowski definition does not allow this.) He first defined ordered pairs whose projections are sets in Kuratowski's manner. He then "redefined" the pair as ("x", "y") = ({0} × "s"("x")) ∪ ({1} × "s"("y")),
where the component Cartesian products are Kuratowski pairs of sets and where "s"("x") = {∅} ∪ {{"t"} | "t" ∈ "x"}.
This renders possible pairs whose projections are proper classes. The Quine–Rosser definition above also admits proper classes as projections. Similarly the triple is defined as a 3-tuple as follows: ("x", "y", "z") = ({0} × "s"("x")) ∪ ({1} × "s"("y")) ∪ ({2} × "s"("z")).
The use of the singleton set "s"("x"), which has an inserted empty set, allows tuples to have the uniqueness property that if "a" is an "n"-tuple and "b" is an "m"-tuple and "a" = "b" then "n" = "m". Ordered triples which are defined as ordered pairs do not have this property with respect to ordered pairs.
A category-theoretic product "A" × "B" in a category of sets represents the set of ordered pairs, with the first element coming from "A" and the second coming from "B". In this context the characteristic property above is a consequence of the universal property of the product and the fact that elements of a set "X" can be identified with morphisms from 1 (a one element set) to "X". While different objects may have the universal property, they are all naturally isomorphic. | https://en.wikipedia.org/wiki?curid=22362 |
Old Catholic Church
The term Old Catholic Church was used from the 1850s by groups which had separated from the Roman Catholic Church over certain doctrines, primarily concerned with papal authority; some of these groups, especially in the Netherlands, had already existed long before the term. These churches are not in full communion with the Holy See. Member churches of the Union of Utrecht of the Old Catholic Churches (UU) are in full communion with the Anglican Communion, and some are members of the World Council of Churches.
The formation of the Old Catholic communion of Germans, Austrians and Swiss began in 1870 at a public meeting held in Nuremberg under the leadership of Ignaz von Döllinger, following the First Vatican Council. Four years later, episcopal succession was established with the consecration of an Old Catholic German bishop by a prelate of the Church of Utrecht. In line with the "Declaration of Utrecht" of 1889, adherents accept the first seven ecumenical councils and doctrine formulated before the East–West Schism of 1054, but reject communion with the pope and a number of other Catholic doctrines and practices. "The Oxford Dictionary of the Christian Church" notes that since 1925 they have recognized Anglican ordinations, have had full communion with the Church of England since 1932, and have taken part in the ordination of Anglican bishops. According to the principle of "ex opere operato", ordinations out of communion with Rome are still valid, and for this reason the validity of orders of Old Catholic bishops has never been formally questioned by Rome; only the ordination of female priests has been.
The term "Old Catholic" was first used in 1853 to describe the members of the See of Utrecht who did not recognize any infallible papal authority. Later Catholics who disagreed with the Roman Catholic dogma of papal infallibility as defined by the First Vatican Council (1870) were hereafter without a bishop and joined with Utrecht to form the Union of Utrecht of the Old Catholic Churches (UU). Today these Old Catholic churches are found chiefly in Germany, Switzerland, the Netherlands, Austria, Poland and the Czech Republic.
Though not possessing any relationship with the Union of Utrecht of the Old Catholic Churches, numerous Independent Catholic clergy in the English-speaking world mistakenly self-identify as "Old Catholic", which likely signifies that, independent of the Roman Catholic Church, they see themselves as part of the Old Catholic tradition.
Old Catholic theology views the Eucharist as the core of the Christian Church. From that point the church is a community of believers. All are in communion with one another around the sacrifice of Jesus Christ, as the highest expression of the love of God. Therefore, the celebration of the Eucharist is understood as the experience of Christ's triumph over sin. The defeat of sin consists in bringing together that which is divided.
Old Catholics believe in unity in diversity and often quote the Church Father Vincent of Lérins "Commonitory": "in the Catholic Church itself, all possible care must be taken, that we hold that faith which has been believed everywhere, always, by all."
Four disputes set the stage for an independent Bishopric of Utrecht: the Concordat of Worms, the First Lateran Council, the Fourth Lateran Council, and confirmation of church procedural law by Pope Leo X. Also relevant was the 12th-century Investiture Controversy over whether the Holy Roman Emperor or the Pope could appoint bishops.
In 1122, the Concordat of Worms made peace: the Emperor Henry V renounced the right to invest ecclesiastics with ring and crosier, the symbols of their spiritual power, and guaranteed election by the canons of cathedral or abbey and free consecration, and he and Pope Calixtus II thereby ended the feud.
In 1215, canon 23 of the Fourth Lateran Council provided that, when electors neglect their duty to elect a bishop for a cathedral within three months, that duty devolves to the next immediate superior.
In 1517 Pope Leo X, in a papal bull, forbade the Archbishop-Elector of Cologne, Hermann of Wied, to rely on his privileged status in summoning Philip of Burgundy, his treasurer, and his ecclesiastical and secular subjects to a court of first instance in Cologne. John Mason Neale explained that Leo X only confirmed a right of the church, but that this confirmation "was providential" in respect to the future schism. This greatly promoted the independence of the diocese, so that no clergy or laity from Utrecht would ever be tried by a Roman tribunal.
Old Catholicism's formal separation from Roman Catholicism occurred over the issue of papal authority. This separation occurred in the Netherlands in 1724, creating the first Old Catholic church. The churches of Germany, Austria, Bohemia, and Switzerland formed their own national Old Catholic churches after Vatican I (1871), over the Roman Catholic dogma of papal infallibility. By the early 1900s, the movement included groups in England, Canada, Croatia, France, Denmark, Italy, the United States, the Philippines, China, and Hungary. The Polish National Catholic Church was the Union of Utrecht's member church in America but left the union in opposition to the ordination of women by other member churches.
During the Protestant Reformation, the Catholic Church was persecuted and the Holy See appointed an apostolic vicar to govern the bishop-less dioceses north of the Rhine and Waal. Protestants occupied most church buildings, and those remaining were confiscated by the government of the Dutch Republic, which favoured the Dutch Reformed Church.
The northern provinces, which revolted against Spanish rule and signed the 1579 Union of Utrecht, persecuted the Catholic Church, confiscated church property, expelled monks and nuns from convents and monasteries, and made it illegal to receive the Catholic sacraments. However, the Catholic Church did not die; rather, priests and communities went underground. Groups would meet for the sacraments in the attics of private homes at the risk of arrest. Priests identified themselves by wearing all-black clothing with very simple collars. All the episcopal sees of the area, including that of Utrecht, had fallen vacant by 1580, because the Spanish crown, which since 1559 had patronal rights over all bishoprics in the Netherlands, refused to make appointments for what it saw as heretical territories, and the nomination of an apostolic vicar was seen as a way of avoiding direct violation of the privilege granted to the crown. The appointment of an apostolic vicar, the first after many centuries, for what came to be called the Holland Mission was followed by similar appointments for other Protestant-ruled countries, such as England, which likewise became mission territories. The disarray of the Catholic Church in the Netherlands between 1572 and about 1610 was followed by a period of expansion of Catholicism under the apostolic vicars, leading to Protestant protests.
The initial shortage of Catholic priests in the Netherlands resulted in increased pastoral activity by religious clergy, among whom Jesuits formed a considerable minority, representing between 10 and 15 percent of all Dutch clergy in the 1600–1650 period. Conflicts arose between these religious clergy and the apostolic vicars and secular clergy. In 1629 there were 321 priests, 250 secular and 71 religious, the 34 Jesuits making up almost half of the religious. By the middle of the 17th century there were 442 secular priests and 142 religious, of whom 62 were Jesuits.
The fifth apostolic vicar of the Dutch Mission, Petrus Codde, was appointed in 1688. In 1691, the Jesuits accused him of favouring the Jansenist heresy. Pope Innocent XII appointed a commission of cardinals to investigate the accusations against Codde. The commission concluded that the accusations were groundless.
In 1700, Pope Clement XI summoned Codde to Rome to participate in the Jubilee Year, whereupon a second commission was appointed to try Codde. The result of this second proceeding was again acquittal. However, in 1701 Clement XI decided to suspend Codde and appoint a successor. The church in Utrecht refused to accept the replacement and Codde continued in office until 1703, when he resigned.
After Codde's resignation, the Diocese of Utrecht elected Cornelius Steenoven as bishop. Following consultation with both canon lawyers and theologians in France and Germany, Dominique Marie Varlet, a Roman Catholic Bishop of the French Oratorian Society of Foreign Missions, consecrated Steenoven as a bishop without a papal mandate. What had been "de jure" autonomous became "de facto" an independent Catholic church. Steenoven appointed and ordained bishops to the sees of Deventer, Haarlem and Groningen. Although the pope was notified of all proceedings, the Holy See still regarded these dioceses as vacant due to papal permission not being sought. The pope, therefore, continued to appoint apostolic vicars for the Netherlands. Steenoven and the other bishops were excommunicated, and thus began the Old Catholic Church in the Netherlands.
While the religious clergy remained loyal to Rome, three-quarters of the secular clergy at first followed Codde, but by 1706 over two-thirds of these returned to Roman allegiance. Of the laity, the overwhelming majority sided with Rome. Thus most Dutch Catholics remained in full communion with the pope and with the apostolic vicars appointed by him. However, due to prevailing anti-papal feeling among the powerful Dutch Calvinists, the Church of Utrecht was tolerated and even praised by the government of the Dutch Republic.
In 1853 Pope Pius IX received guarantees of religious freedom from King William II of the Netherlands and re-established the Catholic hierarchy in the Netherlands. This existed alongside that of the Old Catholic See of Utrecht. Thereafter in the Netherlands the Utrecht hierarchy was referred to as the "Old Catholic Church" to distinguish it from those in union with the pope. According to Catholic Church interpretation, the Old Catholic Church of Utrecht maintained apostolic succession and its clergy celebrated valid sacraments. The Old Catholic Diocese of Utrecht was considered schismatic but not heretical; the Holy See, however, regards the Roman Catholic Archdiocese of Utrecht as the continuation of the episcopal see founded in the 7th century and raised to metropolitan status on 12 May 1559.
After the First Vatican Council (1869–1870), several groups of Catholics in Austria-Hungary, Imperial Germany, and Switzerland rejected the Roman Catholic dogma of papal infallibility in matters of faith and morals and left to form their own churches. These were supported by the Old Catholic Archbishop of Utrecht, who ordained priests and bishops for them. Later the Dutch were united more formally with many of these groups under the name "Utrecht Union of Churches".
In the spring of 1871 a convention in Munich attracted several hundred participants, including Church of England and Protestant observers. Döllinger, an excommunicated Roman Catholic priest and church historian, was a notable leader of the movement but was never a member of an Old Catholic Church.
The convention decided to form the "Old Catholic Church" in order to distinguish its members from what they saw as the novel teaching in the Roman Catholic dogma of papal infallibility. Although it had continued to use the Roman Rite, from the middle of the 18th century the Dutch Old Catholic See of Utrecht had increasingly used the vernacular instead of Latin. The churches which broke from the Holy See in 1870 and subsequently entered into union with the Old Catholic See of Utrecht gradually introduced the vernacular into the liturgy until it completely replaced Latin in 1877. In 1874 Old Catholics removed the requirement of clerical celibacy.
The Old Catholic Church within the German Empire received support from the government of Otto von Bismarck, whose 1870s "Kulturkampf" policies persecuted the Catholic Church. In Austria-Hungary, pan-Germanic nationalist groups, like those of Georg Ritter von Schönerer, promoted the conversion of all German-speaking Catholics to Old Catholicism and Lutheranism.
In 1908 the Archbishop of Utrecht Gerardus Gul consecrated Father Arnold Harris Mathew, a former Catholic priest, as Regionary Bishop for England. His mission was to establish a community for Anglicans and Roman Catholics. During his time with the Old Catholics, Mathew attended the Old Catholic Congress in Vienna in 1909 and acted as co-consecrator of Archbishop Michael Kowalski of the Mariavite Church in Poland. In 1910, Mathew left the Union of Utrecht over his allegation that it was becoming more Protestant, and called his church the "Old Roman Catholic Church".
In 1913, Mathew consecrated Rudolph de Landas Berghes. At the beginning of World War I, Berghes emigrated to the United States in 1914, hoping to consolidate various independent Old Catholic groups under Mathew. Berghes, in spite of his isolation, was able to plant the seed of Old Catholicism in the Americas. He consecrated an excommunicated Capuchin Franciscan priest as bishop: Carmel Henry Carfora. From this the Old Catholic Church in the United States evolved into local and regional self-governing dioceses and provinces along the design of St. Ignatius of Antioch – a network of communities.
Joseph René Vilatte worked with Catholics of Belgian ancestry living on the Door Peninsula of Wisconsin, with the knowledge and blessing of the Union of Utrecht and under the full jurisdiction of the local Episcopal Bishop of Fond du Lac, Wisconsin. Vilatte was ordained a deacon on 6 June 1885 and a priest on 7 June 1885 by Bishop Eduard Herzog of the Christian Catholic Church of Switzerland. His work provided the only sacramental presence in that particular part of rural Wisconsin.
In time, Vilatte asked the Old Catholic Archbishop of Utrecht to be ordained a bishop so that he might confirm, but his petition was not granted because the Union of Utrecht recognized the Episcopal Church (United States) as the local catholic church. Vilatte then solicited the Eastern Orthodox and Oriental Orthodox Churches to consecrate him. He was made a bishop in India on 28 May 1892 under the jurisdiction of the Syriac Orthodox Patriarch of Antioch. Over the years, hundreds of people in the United States have come to claim apostolic succession from Vilatte; none is in communion with, nor recognised by, the Old Catholic See of Utrecht.
The Polish National Catholic Church (PNCC) is no longer in communion with the Old Catholic churches but considers "the Declaration of Utrecht as a normative document of faith" and part of its ecclesiology. The Polish National Catholic Church began in the late 19th century over concerns about the ownership of church property and the domination of the U.S. church by Irish bishops. The church traces its apostolic succession directly to the Utrecht Union and thus possesses orders and sacraments which are recognised by the Holy See. In 1993 a pastoral agreement was concluded on the basis of can. 844 § 2 CIC permitting members of the PNCC to receive the sacraments from Roman Catholic priests. In 2003 the church voted itself out of the Union of Utrecht because the union accepted the ordination of women and has an open attitude towards homosexuality, both of which the Polish National Catholic Church rejects.
The Old Catholic Church of Slovakia was accepted in 2000 as a member of the Union of Utrecht. As early as 2001, issues arose concerning the future consecration of Augustin Bacinsky as Old Catholic bishop of Slovakia, and the matter was postponed. The Old Catholic Church of Slovakia was expelled from the Union of Utrecht in 2004 because the episcopal administrator Augustin Bacinsky had been consecrated by an "episcopus vagans".
The only recognized group in America that is in communion with the Union of Utrecht is the Episcopal Church. However, independent Old Catholic groups with recognized apostolic succession have attempted to seek recognition from the Union of Utrecht. These are listed in the sections below.
In 2006, the Old Catholic Communion of North America (OCCNA) was formed by Archbishop Michael Nesmith. The purpose was to provide a means for Old Catholic churches which embraced the theology and beliefs of the undivided Church to come together in communion while remaining fully autocephalous. As of January 2016, OCCNA has grown to a point where it has been established as a church with provincial structure with parishes, missions, and other ministries in Tennessee, Arizona, Indiana, Delaware, South Carolina, Kentucky, Texas, Oregon, and Nevada. OCCNA clergy have also served as priests in charge with the Anglican Province of America and have clergy licensed to serve in the Anglican Church of North America (ACNA).
The OCCNA fully believes the intent of the founders of Old Catholicism was for the Western Church to return to the faith of the undivided church prior to the Great Schism of 1054. The OCCNA believes this intent is fully expressed in the Declaration of Utrecht, the Fourteen Theses of Bonn, and other documents from the beginning of the Old Catholic churches in Europe. Therefore, the OCCNA embraces the theology of the undivided Church, which in practice means it does not ordain women, does not bless same-sex unions, and rejects Roman Catholic dogmas defined after the Great Schism in 1054 (the Immaculate Conception of Mary, papal infallibility, and the Assumption of Mary). OCCNA strives to bring about unity among like-minded Old Catholics by establishing dialog or communion with any Old Catholic churches or Anglican churches which embrace the theology of the undivided and early Christian Church.
After the Polish National Catholic Church separated from the Union of Utrecht, the union's International Old Catholic Bishops' Conference (IBC) asked the Episcopal Church (United States) (TEC) to survey the groups self-identifying as Old Catholics about how they identify as Old Catholics, their understanding of Old Catholic ecclesiology, and whether they ordain women. The results were reported at the IBC's 2005 annual meeting. In May 2006, four American Old Catholic bishops, Peter Paul Brennan, Peter Hickman, Charles Leigh, and Robert T. Fuentes, met in Queens Village, New York, with Episcopal Diocese of West Virginia Bishop Michie Klusmeyer, the liaison to the Old Catholics; Tom Ferguson, deputy for ecumenical and interfaith relations; Old Catholic theologian and priest ordained in the Old Catholic Church of Austria, Bjorn Marcussen; and Union of Utrecht representative Gunther Esser. They discussed Old Catholic Church ecclesiology, "highlighted in the Preamble" of the union's Statutes. Three days later the four bishops – Brennan, Fuentes, Hickman, and Leigh – formed the Conference of North American Old Catholic Bishops (CNAOCB), modeled on the IBC. The group's "central goal" was "the tangible, organic unity among American Old Catholic jurisdictions".
Klusmeyer, without open dialogue with the Conference members or other viable Old Catholic jurisdictions, declared that there was not enough interest to form an American Old Catholic Church that could be a member of the UU. Only four bishops had attended this meeting. Many jurisdictions within the United States would like the Union of Utrecht to reconsider this decision, but there is also a feeling that, given the different charisms, union might not be feasible.
In November 2006, the "bishops who remained engaged" met in Los Angeles and agreed on a Unity Statement, rules of order, and criteria for joining the conference. The Unity Statement, to which all members subscribe, "incorporated the ecclesiological understanding of the [Union of Utrecht]".
In the United States, the Communion of Catholic Evangelical Episcopal Churches (CEEC), under Archbishop Charles Travis, was established in 1995 and is part of the Old Catholic movement. Its official liturgy is the Latin liturgy, used also by its Apostolate of St. Francis in the Middle East, North Africa and Egypt. In September 2010, members signed the Plan of Union, which created TOCCUSA. This federation was a step toward the goal of a national church modeled on Union of Utrecht ecclesiology; its bishops invite Old Catholic bishops not yet a part of the federation to enter into dialogue, in the hope that deeper unity may be accomplished.
There are an estimated 115,000 members of Old Catholic churches.
Immediately after forming the Old Catholic Church, its theologians dedicated themselves to a reunion of the Christian churches. The Conferences of Reunion in Bonn in 1874 and 1875, convoked by Döllinger, a leading personality of Old Catholicism, are famous. Representatives of the Orthodox, Anglican and Lutheran churches were invited. They discussed denominational differences as the ground for restoring church communion. They assumed the following principles for participation: acceptance of the Christological dogma of the First Council of Nicaea and the Council of Chalcedon; Christ's foundation of the Church; the Holy Bible, the doctrine of the undivided Church, and the Church fathers of the first ten centuries as the genuine sources of belief; and Vincent of Lérins's "Commonitory" as a preferred method for historical research.
Reunion of the churches had to be based on a re-actualization of the decisions of faith made by the undivided Church. In that way the original unity of the Church could be made visible again. Following these principles, later bishops and theologians of the Old Catholic churches stayed in contact with Russian Orthodox and Anglican representatives.
Old Catholic involvement in the multilateral ecumenical movement formally began with the participation of two bishops, from the Netherlands and Switzerland, at the Lausanne Faith and Order (F&O) conference (1927). This side of ecumenism has always remained a major interest for Old Catholics, who have never missed an F&O conference. Old Catholics also participate in other activities of the WCC and of national councils of churches. By active participation in the ecumenical movement since its very beginning, the Old Catholic churches demonstrate their belief in this work.
Old Catholicism values apostolic succession by which they mean both the uninterrupted laying on of hands by bishops through time and the continuation of the whole life of the church community by word and sacrament over the years and ages. Old Catholics consider apostolic succession to be the handing on of belief in which the whole Church is involved. In this process the ministry has a special responsibility and task, caring for the continuation in time of the mission of Jesus Christ and his Apostles.
The Old Catholic Church shares much of its liturgy with the Roman Catholic Church, and its worship is similar to that of the Orthodox, Anglicans and high church Protestants.
Christ-Catholic Swiss bishop Urs Küry dismissed the Roman Catholic dogma of transubstantiation because this Scholastic interpretation presumes to explain the Eucharist using the metaphysical concept of "substance". Like the Orthodox approach to the Eucharist, Old Catholics, he says, ought to accept an unexplainable divine mystery as such and should not cleave to or insist upon a particular theory of the sacrament. Because of this approach, Old Catholics hold an open view to most issues, including the role of women in the Church, the role of married people within ordained ministry, the morality of same sex relationships, the use of conscience when deciding whether to use artificial contraception, and liturgical reforms such as open communion. Its liturgy has not significantly departed from the Tridentine Mass, as is shown in the translation of the German altar book (missal).
In 1994 the German bishops decided to ordain women as priests, and put this into practice on 27 May 1996. Similar decisions and practices followed in Austria, Switzerland and the Netherlands. The Old Catholic Church allows those who are divorced to have a new marriage in the church, and it has no particular teaching on abortion, leaving such decisions to the married couple.
An active contributor to the Declaration of the Catholic Congress, Munich, 1871, and all later assemblies for organization was Johann Friedrich von Schulte, the professor of dogma at Prague, who summed up the results of the congress. | https://en.wikipedia.org/wiki?curid=22370 |
Object code
In computing, object code or object module is the product of a compiler. In a general sense object code is a sequence of statements or instructions in a computer language, usually a machine code language (i.e., binary) or an intermediate language such as register transfer language (RTL). The term indicates that the code is the goal or result of the compiling process, with some early sources referring to source code as a "subject program."
Object files can in turn be linked to form an executable file or a library file. In order to be used, object code must be placed in an executable file or a library file.
Object code is a portion of machine code that has not yet been linked into a complete program. It is the machine code for one particular library or module that will make up the completed product. It may also contain placeholders or offsets, not found in the machine code of a completed program, that the linker will use to connect everything together. Whereas machine code is binary code that can be executed directly by the CPU, object code has the jumps partially parameterized so that a linker can fill them in.
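As a concrete illustration of that distinction, here is a minimal, hypothetical two-file C example; the file names, the function add, and the cc invocations are illustrative assumptions rather than part of any standard.

/* main.c -- one translation unit. Compiling it alone with
 *   cc -c main.c
 * produces the object file main.o, in which the call to add()
 * is recorded as an unresolved symbol: a placeholder the
 * linker must later fill in. */
#include <stdio.h>

int add(int a, int b);              /* declared here, defined elsewhere */

int main(void) {
    printf("%d\n", add(2, 3));      /* address resolved only at link time */
    return 0;
}

/* add.c -- defines the symbol main.o refers to. After
 *   cc -c add.c
 * the two object files can be linked into an executable with
 *   cc main.o add.o -o demo
 */
int add(int a, int b) {
    return a + b;
}

On most Unix-like systems, running nm main.o would list add as an undefined symbol (marked U), making the placeholder in the object code visible.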
An assembler is used to convert assembly code into machine code (object code). A linker links several object (and library) files to generate an executable. Assemblers can also assemble directly to machine code executable files without the object intermediary step. | https://en.wikipedia.org/wiki?curid=22373 |
Oort cloud
The Oort cloud (), named after the Dutch astronomer Jan Oort, sometimes called the Öpik–Oort cloud, is a theoretical cloud of predominantly icy planetesimals proposed to surround the Sun at distances ranging from 2,000 to 200,000 au (0.03 to 3.2 light-years). It is divided into two regions: a disc-shaped inner Oort cloud (or Hills cloud) and a spherical outer Oort cloud. Both regions lie beyond the heliosphere and in interstellar space. The Kuiper belt and the scattered disc, the other two reservoirs of trans-Neptunian objects, are less than one thousandth as far from the Sun as the Oort cloud.
The outer limit of the Oort cloud defines the cosmographical boundary of the Solar System and the extent of the Sun's Hill sphere. The outer Oort cloud is only loosely bound to the Solar System, and thus is easily affected by the gravitational pull both of passing stars and of the Milky Way itself. These forces occasionally dislodge comets from their orbits within the cloud and send them toward the inner Solar System. Based on their orbits, most of the short-period comets may come from the scattered disc, but some may still have originated from the Oort cloud.
Astronomers conjecture that the matter composing the Oort cloud formed closer to the Sun and was scattered far into space by the gravitational effects of the giant planets early in the Solar System's evolution. Although no confirmed direct observations of the Oort cloud have been made, it may be the source of all long-period and Halley-type comets entering the inner Solar System, and many of the centaurs and Jupiter-family comets as well.
The existence of the Oort cloud was first postulated by Estonian astronomer Ernst Öpik in 1932. Oort independently proposed it in 1950.
There are two main classes of comet: short-period comets (also called ecliptic comets) and long-period comets (also called nearly isotropic comets). Ecliptic comets have relatively small orbits, below 10 au, and follow the ecliptic plane, the same plane in which the planets lie. All long-period comets have very large orbits, on the order of thousands of au, and appear from every direction in the sky.
A. O. Leuschner in 1907 suggested that many comets believed to have parabolic orbits, and thus making single visits to the solar system, actually had elliptical orbits and would return after very long periods. In 1932 Estonian astronomer Ernst Öpik postulated that long-period comets originated in an orbiting cloud at the outermost edge of the Solar System. Dutch astronomer Jan Oort independently revived the idea in 1950 as a means to resolve a paradox: comets are steadily destroyed or ejected by repeated passes through the inner Solar System, so if they had occupied their present orbits since the Solar System formed, none should remain observable today, yet fresh long-period comets continue to appear.
Thus, Oort reasoned, a comet could not have formed while in its current orbit and must have been held in an outer reservoir for almost all of its existence. He noted that there was a peak in numbers of long-period comets with aphelia (their farthest distance from the Sun) of roughly 20,000 au, which suggested a reservoir at that distance with a spherical, isotropic distribution. Those relatively rare comets with orbits of about 10,000 au have probably gone through one or more orbits through the Solar System and have had their orbits drawn inward by the gravity of the planets.
The Oort cloud is thought to occupy a vast space from somewhere between 2,000 and 5,000 au out to as far as 50,000 au from the Sun. Some estimates place the outer boundary at between 100,000 and 200,000 au. The region can be subdivided into a spherical outer Oort cloud of 20,000–50,000 au, and a torus-shaped inner Oort cloud of 2,000–20,000 au. The outer cloud is only weakly bound to the Sun and supplies the long-period (and possibly Halley-type) comets to inside the orbit of Neptune. The inner Oort cloud is also known as the Hills cloud, named after Jack G. Hills, who proposed its existence in 1981. Models predict that the inner cloud should have tens or hundreds of times as many cometary nuclei as the outer halo; it is seen as a possible source of new comets to resupply the tenuous outer cloud as the latter's numbers are gradually depleted. The Hills cloud explains the continued existence of the Oort cloud after billions of years.
The outer Oort cloud may have trillions of objects larger than 1 km, and billions with absolute magnitudes brighter than 11 (corresponding to a diameter of approximately 20 km), with neighboring objects tens of millions of kilometres apart. Its total mass is not known, but, assuming that Halley's Comet is a suitable prototype for comets within the outer Oort cloud, the combined mass is roughly 3×10^25 kilograms, or five times that of Earth.
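As a rough consistency check of that figure (using the standard value of Earth's mass, an assumption not stated in the text):

\[
5 \times M_\oplus \approx 5 \times 5.97 \times 10^{24}\,\mathrm{kg} \approx 3 \times 10^{25}\,\mathrm{kg}.
\]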
Earlier the cloud was thought to be more massive (up to 380 Earth masses), but improved knowledge of the size distribution of long-period comets led to lower estimates. No estimate of the mass of the inner Oort cloud has been published.
If analyses of comets are representative of the whole, the vast majority of Oort-cloud objects consist of ices such as water, methane, ethane, carbon monoxide and hydrogen cyanide.
However, the discovery of the object 1996 PW, an object whose appearance was consistent with a D-type asteroid in an orbit typical of a long-period comet, prompted theoretical research that suggests that the Oort cloud population consists of roughly one to two percent asteroids. Analysis of the carbon and nitrogen isotope ratios in both the long-period and Jupiter-family comets shows little difference between the two, despite their presumably vastly separate regions of origin. This suggests that both originated from the original protosolar cloud, a conclusion also supported by studies of granular size in Oort-cloud comets and by the recent impact study of Jupiter-family comet Tempel 1.
The Oort cloud is thought to have developed after the formation of planets from the primordial protoplanetary disc approximately 4.6 billion years ago. The most widely accepted hypothesis is that the Oort cloud's objects initially coalesced much closer to the Sun as part of the same process that formed the planets and minor planets. After formation, strong gravitational interactions with young gas giants, such as Jupiter, scattered the objects into extremely wide elliptical or parabolic orbits that were subsequently modified by perturbations from passing stars and giant molecular clouds into long-lived orbits detached from the gas giant region.
Recent research cited by NASA hypothesizes that a large number of Oort cloud objects are the product of an exchange of materials between the Sun and its sibling stars as they formed and drifted apart; on this view many, possibly the majority, of Oort cloud objects did not form in close proximity to the Sun. Simulations of the evolution of the Oort cloud from the beginnings of the Solar System to the present suggest that the cloud's mass peaked around 800 million years after formation, as the pace of accretion and collision slowed and depletion began to overtake supply.
Models by Julio Ángel Fernández suggest that the scattered disc, which is the main source for periodic comets in the Solar System, might also be the primary source for Oort cloud objects. According to the models, about half of the objects scattered travel outward toward the Oort cloud, whereas a quarter are shifted inward to Jupiter's orbit, and a quarter are ejected on hyperbolic orbits. The scattered disc might still be supplying the Oort cloud with material. A third of the scattered disc's population is likely to end up in the Oort cloud after 2.5 billion years.
Computer models suggest that collisions of cometary debris during the formation period play a far greater role than was previously thought. According to these models, the number of collisions early in the Solar System's history was so great that most comets were destroyed before they reached the Oort cloud. Therefore, the current cumulative mass of the Oort cloud is far less than was once suspected. The estimated mass of the cloud is only a small part of the 50–100 Earth masses of ejected material.
Gravitational interaction with nearby stars and galactic tides modified cometary orbits to make them more circular. This explains the nearly spherical shape of the outer Oort cloud. On the other hand, the Hills cloud, which is bound more strongly to the Sun, has not acquired a spherical shape. Recent studies have shown that the formation of the Oort cloud is broadly compatible with the hypothesis that the Solar System formed as part of an embedded cluster of 200–400 stars. These early stars likely played a role in the cloud's formation, since the number of close stellar passages within the cluster was much higher than today, leading to far more frequent perturbations.
In June 2010 Harold F. Levison and others suggested on the basis of enhanced computer simulations that the Sun "captured comets from other stars while it was in its birth cluster." Their results imply that "a substantial fraction of the Oort cloud comets, perhaps exceeding 90%, are from the protoplanetary discs of other stars."
Comets are thought to have two separate points of origin in the Solar System. Short-period comets (those with orbits of up to 200 years) are generally accepted to have emerged from either the Kuiper belt or the scattered disc, which are two linked flat discs of icy debris beyond Neptune's orbit at 30 au and jointly extending out beyond 100 au from the Sun. Long-period comets, such as comet Hale–Bopp, whose orbits last for thousands of years, are thought to originate in the Oort cloud. Comets modeled to be coming directly from the Oort cloud include C/2010 X1 (Elenin), Comet ISON, C/2013 A1 (Siding Spring), and C/2017 K2. The orbits within the Kuiper belt are relatively stable, and so very few comets are thought to originate there. The scattered disc, however, is dynamically active, and is far more likely to be the place of origin for comets. Comets pass from the scattered disc into the realm of the outer planets, becoming what are known as centaurs. These centaurs are then sent farther inward to become the short-period comets.
There are two main varieties of short-period comet: Jupiter-family comets (those with semi-major axes of less than 5 AU) and Halley-family comets. Halley-family comets, named for their prototype, Halley's Comet, are unusual in that although they are short-period comets, it is hypothesized that their ultimate origin lies in the Oort cloud, not in the scattered disc. Based on their orbits, it is suggested they were long-period comets that were captured by the gravity of the giant planets and sent into the inner Solar System. This process may have also created the present orbits of a significant fraction of the Jupiter-family comets, although the majority of such comets are thought to have originated in the scattered disc.
Oort noted that the number of returning comets was far less than his model predicted, and this issue, known as "cometary fading", has yet to be resolved. No dynamical process is known to explain the smaller number of observed comets. Hypotheses for this discrepancy include the destruction of comets due to tidal stresses, impact or heating; the loss of all volatiles, rendering some comets invisible; or the formation of a non-volatile crust on the surface. Dynamical studies of hypothetical Oort cloud comets have estimated that their occurrence in the outer-planet region would be several times higher than in the inner-planet region. This discrepancy may be due to the gravitational attraction of Jupiter, which acts as a kind of barrier, trapping incoming comets and causing them to collide with it, just as Comet Shoemaker–Levy 9 did in 1994. An example of a typical Oort cloud comet could be C/2018 F4.
Most of the comets seen close to the Sun seem to have reached their current positions through gravitational perturbation of the Oort cloud by the tidal force exerted by the Milky Way. Just as the Moon's tidal force deforms Earth's oceans, causing the tides to rise and fall, the galactic tide also distorts the orbits of bodies in the outer Solar System. In the charted regions of the Solar System, these effects are negligible compared to the gravity of the Sun, but in the outer reaches of the system, the Sun's gravity is weaker and the gradient of the Milky Way's gravitational field has substantial effects. Galactic tidal forces stretch the cloud along an axis directed toward the galactic centre and compress it along the other two axes; these small perturbations can shift orbits in the Oort cloud to bring objects close to the Sun. The point at which the Sun's gravity concedes its influence to the galactic tide is called the tidal truncation radius. It lies at a radius of 100,000 to 200,000 au, and marks the outer boundary of the Oort cloud.
Some scholars theorise that the galactic tide may have contributed to the formation of the Oort cloud by increasing the perihelia (smallest distances to the Sun) of planetesimals with large aphelia (largest distances to the Sun). The effects of the galactic tide are quite complex, and depend heavily on the behaviour of individual objects within a planetary system. Cumulatively, however, the effect can be quite significant: up to 90% of all comets originating from the Oort cloud may be the result of the galactic tide. Statistical models of the observed orbits of long-period comets argue that the galactic tide is the principal means by which their orbits are perturbed toward the inner Solar System.
Besides the galactic tide, the main trigger for sending comets into the inner Solar System is thought to be interaction between the Sun's Oort cloud and the gravitational fields of nearby stars or giant molecular clouds. The orbit of the Sun through the plane of the Milky Way sometimes brings it into relatively close proximity to other stellar systems. For example, it is hypothesized that about 70,000 years ago Scholz's star passed through the outer Oort cloud (although its low mass and high relative velocity limited its effect). During the next 10 million years, the known star with the greatest possibility of perturbing the Oort cloud is Gliese 710. This process could also scatter Oort cloud objects out of the ecliptic plane, potentially explaining the cloud's spherical distribution.
In 1984, physicist Richard A. Muller postulated that the Sun has a heretofore undetected companion, either a brown dwarf or a red dwarf, in an elliptical orbit within the Oort cloud. This object, known as Nemesis, was hypothesized to pass through a portion of the Oort cloud approximately every 26 million years, bombarding the inner Solar System with comets. However, to date no evidence of Nemesis has been found, and many lines of evidence (such as crater counts) have thrown its existence into doubt. Recent scientific analysis no longer supports the idea that extinctions on Earth happen at regular, repeating intervals; thus, the Nemesis hypothesis is no longer needed to explain current assumptions.
A somewhat similar hypothesis was advanced by astronomer John J. Matese of the University of Louisiana at Lafayette in 2002. He contends that more comets are arriving in the inner Solar System from a particular region of the postulated Oort cloud than can be explained by the galactic tide or stellar perturbations alone, and that the most likely cause would be a Jupiter-mass object in a distant orbit. This hypothetical gas giant was nicknamed Tyche. The WISE mission, an all-sky survey using parallax measurements in order to clarify local star distances, was capable of proving or disproving the Tyche hypothesis. In 2014, NASA announced that the WISE survey had ruled out any object as they had defined it.
Space probes have yet to reach the area of the Oort cloud. "Voyager 1", the fastest and farthest of the interplanetary space probes currently leaving the Solar System, will reach the Oort cloud in about 300 years and would take about 30,000 years to pass through it. However, around 2025, the radioisotope thermoelectric generators on "Voyager 1" will no longer supply enough power to operate any of its scientific instruments, preventing any further exploration by "Voyager 1". The other probes currently escaping the Solar System either are already or are predicted to be non-functional when they reach the Oort cloud; however, it may be possible to find an object from the cloud that has been knocked into the inner Solar System.
In the 1980s there was a concept for a probe to reach 1,000 au in 50 years called "TAU"; among its missions would be to look for the Oort cloud.
In the 2014 Announcement of Opportunity for the Discovery program, an observatory to detect objects in the Oort cloud (and Kuiper belt) called the "Whipple Mission" was proposed. It would monitor distant stars with a photometer, looking for transits up to 10,000 au away. The observatory was proposed to operate in a halo orbit around L2, with a suggested five-year mission. It was also suggested that the Kepler observatory could have been capable of detecting objects in the Oort cloud. | https://en.wikipedia.org/wiki?curid=22385 |
Ohio River
The Ohio River is a long river in the United States. It is located in the midwestern United States, flowing southwesterly from western Pennsylvania south of Lake Erie to its mouth on the Mississippi River at the southern tip of Illinois. It is the third largest river by discharge volume in the United States and the largest tributary by volume of the north-south flowing Mississippi River that divides the eastern from western United States. The river flows through or along the border of six states, and its drainage basin includes parts of 15 states. Through its largest tributary, the Tennessee River, the basin includes several states of the southeastern U.S. It is the source of drinking water for three million people.
The lower Ohio River just below Louisville is obstructed by rapids known as the Falls of the Ohio, where the water level falls 26 feet in 2 miles, making the stretch impassable for navigation. The McAlpine Locks and Dam, a shipping canal bypassing the rapids, now allows commercial navigation from the Forks of the Ohio at Pittsburgh to the Port of New Orleans at the mouth of the Mississippi on the Gulf of Mexico.
The name "Ohio" comes from the Seneca, , lit. "Good River". European discovery of the Ohio River may be attributed to English explorers from Virginia in the latter half of the 17th century. In his "Notes on the State of Virginia" published in 1781–82, Thomas Jefferson stated: "The Ohio is the most beautiful river on earth. Its current gentle, waters clear, and bosom smooth and unbroken by rocks and rapids, a single instance only excepted." In the late 18th century, the river was the southern boundary of the Northwest Territory. It became a primary transportation route for pioneers during the westward expansion of the early U.S.
The river is sometimes considered as the western extension of the Mason–Dixon Line that divided Pennsylvania from Maryland, and thus part of the border between free and slave territory, and between the Northern and Southern United States or Upper South. Where the river was narrow, it was the way to freedom for thousands of slaves escaping to the North, many helped by free blacks and whites of the Underground Railroad resistance movement.
The Ohio River is a climatic transition area, as its water runs along the periphery of the humid subtropical and humid continental climate areas. It is inhabited by fauna and flora of both climates. In winter, it regularly freezes over at Pittsburgh but rarely farther south toward Cincinnati and Louisville. At Paducah, Kentucky, in the south, near the Ohio's confluence with the Mississippi, it is ice-free year-round.
The name "Ohio" comes from the Seneca language (an Iroquoian language), (roughly pronounced oh-hee-yoh, with the vowel in "hee" held longer), a proper name derived from ("good river"), therefore literally translating to "Good River". "Great river" and "large creek" have also been given as translations.
Native Americans, including the Lenni Lenape and Iroquois, considered the Ohio and Allegheny rivers to be the same, as is suggested by a New York State road sign on Interstate 86 that applies the Seneca name for the Ohio to the Allegheny River as well; the Geographic Names Information System lists "O-hee-yo" and "O-hi-o" as variant names for the Allegheny.
An earlier Miami-Illinois language name, meaning "river of the Mosopelea" tribe, was also applied to the Ohio River. Shortened in the Shawnee language, the name evolved through variant forms such as "Polesipi", "Peleson", "Pele Sipi" and "Pere Sipi", and eventually stabilized to the variant spellings "Pelisipi", "Pelisippi" and "Pellissippi". Originally applied just to the Ohio River, the "Pelisipi" name later was variously applied back and forth between the Ohio River and the Clinch River in Virginia and Tennessee. In his original draft of the Land Ordinance of 1784, Thomas Jefferson proposed a new state called "Pelisipia", to the south of the Ohio River, which would have included parts of present-day Eastern Kentucky, Virginia and West Virginia.
The river had great significance in the history of the Native Americans, as numerous civilizations formed along its valley. For thousands of years, Native Americans used the river as a major transportation and trading route. Its waters connected communities. In the five centuries before European conquest, the Mississippian culture built numerous regional chiefdoms and major earthwork mounds in the Ohio Valley, such as Angel Mounds near Evansville, Indiana, as well as in the Mississippi Valley and the Southeast. The Osage, Omaha, Ponca and Kaw lived in the Ohio Valley, but under pressure from the Iroquois to the northeast, migrated west of the Mississippi River in the 17th century to territory now defined as Missouri, Arkansas and Oklahoma.
The discovery and traversal of the Ohio River by Europeans admits of several possibilities, all in the latter half of the 17th century: Virginian Englishman Abraham Wood's trans-Appalachian expeditions between 1654 and 1664; Frenchman Robert de La Salle's putative Ohio expedition of 1669; and two expeditions of Virginians sponsored by Colonel Wood, the Batts and Fallam expedition of 1671 and the Needham and Arthur expedition of 1673–74. The first known European to traverse the length of the river, from the headwaters of the Allegheny to its mouth on the Mississippi, was a Dutchman from New York, Arnout Viele, in 1692.
In 1749, Great Britain established the Ohio Company to settle and trade in the area. Exploration of the territory and trade with the Indians in the region near the Forks brought British colonials from both Pennsylvania and Virginia across the mountains, and both colonies claimed the territory. The movement across the Allegheny Mountains of British settlers and the claims of the area near modern-day Pittsburgh led to conflict with the French, who had forts in the Ohio River Valley. This conflict was called the French and Indian War. In 1763, following the war, France ceded the area to Britain, and thus to the settlers in the colonies of Britain.
The 1768 Treaty of Fort Stanwix opened Kentucky to colonial settlement and established the Ohio River as a southern boundary for American Indian territory. In 1774, the Quebec Act restored the land east of the Mississippi River and north of the Ohio River to Quebec, in effect making the Ohio the southern boundary of Canada. This appeased the British subjects of Canada but angered the colonists of the Thirteen Colonies. Lord Dunmore's War south of the Ohio River also contributed to the assignment of the land north of the river to Quebec, to stop further encroachment by British colonials on native land. During the American Revolution, in 1776 the British military engineer John Montrésor created a map of the river showing the strategic location of Fort Pitt, including specific navigational information about the Ohio River's rapids and tributaries in that area. However, the Treaty of Paris (1783) gave the entire Ohio Valley to the United States.
The economic connection of the Ohio Country to the East was significantly increased in 1818 when the National Road, being built westward from Cumberland, Maryland, reached Wheeling, Virginia (now West Virginia), providing an easier overland connection from the Potomac River to the Ohio River. The Wheeling Suspension Bridge was built over the river at Wheeling from 1847 to 1849, making the trip west easier. For a brief time, until 1851, it was the world's largest suspension bridge. The bridge survived the American Civil War, was improved in 1859 and 1872, and remains in use as the oldest vehicular suspension bridge in the U.S.
Louisville was founded in 1779 at the only major natural navigational barrier on the river, the Falls of the Ohio. The Falls were a series of rapids where the river dropped 26 feet in a stretch of about 2 miles, flowing over hard, fossil-rich beds of limestone. The first locks on the river, the Louisville and Portland Canal, were built to bypass the falls between 1825 and 1830. Fears that Louisville's transshipment industry would collapse proved ill-founded: the increasing size of steamships and barges on the river meant that the outdated locks could only service the smallest vessels until well after the Civil War. The U.S. Army Corps of Engineers' improvements were expanded again in the 1960s, forming the present-day McAlpine Locks and Dam.
Because the Ohio River flowed westward, it became a convenient means of westward movement by pioneers traveling from western Pennsylvania. After reaching the mouth of the Ohio, settlers would travel north on the Mississippi River to St. Louis, Missouri. There, some continued on up the Missouri River, some up the Mississippi, and some further west over land routes. In the early 19th century, river pirates such as Samuel Mason, operating out of Cave-In-Rock, Illinois, waylaid travelers on their way down the river. They killed travelers, stealing their goods and scuttling their boats. The folktales about Mike Fink recall the keelboats used for commerce in the early days of American settlement. The Ohio River boatmen were the inspiration for performer Dan Emmett, who in 1843 wrote the song "The Boatman's Dance".
Trading boats and ships traveled south on the Mississippi to New Orleans, and sometimes beyond to the Gulf of Mexico and other ports in the Americas and Europe. This provided a much-needed export route for goods from the west, since the trek east over the Appalachian Mountains was long and arduous. The need for access to the port of New Orleans by settlers in the Ohio Valley is one of the factors that led to the Louisiana Purchase in 1803.
Because the river is the southern border of Ohio, Indiana, and Illinois, it was part of the border between free states and slave states in the years before the American Civil War. The expression "sold down the river" originated as a lament of Upper South slaves, especially from Kentucky, who were shipped via the Ohio and Mississippi to cotton and sugar plantations in the Deep South. Before and during the Civil War, the Ohio River was called the "River Jordan" by slaves crossing it to escape to freedom in the North via the Underground Railroad. More escaping slaves, estimated in the thousands, made their perilous journey north to freedom across the Ohio River than anywhere else across the north-south frontier. Harriet Beecher Stowe's "Uncle Tom's Cabin", the bestselling novel that fueled abolitionist work, was the best known of the anti-slavery novels that portrayed such escapes across the Ohio. The times have been expressed by 20th-century novelists as well, such as the Nobel Prize-winning Toni Morrison, whose novel "Beloved" was adapted as a film of the same name. She also composed the libretto for the opera "Margaret Garner" (2005), based on the life and trial of an enslaved woman who escaped with her family across the river.
The colonial charter for Virginia defined its territory as extending to the north shore of the Ohio, so that the riverbed was "owned" by Virginia. Where the river serves as a boundary between states today, Congress designated the entire river to belong to the states on the east and south, i.e., West Virginia and Kentucky at the time of admission to the Union, that were divided from Virginia. Thus Wheeling Island, the largest inhabited island in the Ohio River, belongs to West Virginia, although it is closer to the Ohio shore than to the West Virginia shore. Kentucky brought suit against Indiana in the early 1980s because of the building of the never-completed Marble Hill Nuclear Power Plant in Indiana, which would have discharged its waste water into the river.
The U.S. Supreme Court held that Kentucky's jurisdiction (and, implicitly, that of West Virginia) extended only to the low-water mark of 1793 (important because the river has been extensively dammed for navigation, so that the present river bank is north of the old low-water mark). Similarly, in the 1990s, Kentucky challenged Illinois' right to collect taxes on a riverboat casino docked in Metropolis, citing its own control of the entire river. A private casino riverboat that docked in Evansville, Indiana, on the Ohio River opened about the same time. Although such boats cruised the Ohio River in an oval pattern up and down, the state of Kentucky soon protested, and the boats had to limit their cruises to going forward and then reversing along the Indiana shore only. Both Illinois and Indiana have long since changed their laws to allow riverboat casinos to be permanently docked, Illinois in 1999 and Indiana in 2002.
The Silver Bridge at Point Pleasant, West Virginia, collapsed into the river on December 15, 1967. The collapse took the lives of 46 people who were crossing the bridge at the moment of its failure. The bridge had been built in 1929 and by 1967 was carrying too heavy a load for its design. It was rebuilt about one mile downstream and placed in service as the Silver Memorial Bridge in 1969.
In the early 1980s, the Falls of the Ohio National Wildlife Conservation Area was established at Clarksville, Indiana.
The Ohio River as a whole is ranked as the most polluted river in the US based on 2009 and 2010 data although the more industrial and regional West Virginia/Pennsylvania tributary, Monongahela River, ranked behind 16 other American rivers for water pollution at number 17. The river again ranked as the most polluted in 2013 and has been the most polluted river since at least 2001, according to Ohio River Valley Water Sanitation Commission (ORSANCO). The Commission found that 92% of toxic discharges were nitrates including farm runoff and waste water from industrial processes like steel production. The Commission also noted mercury pollution as an ongoing concern, citing a 500% increase in mercury discharges between 2007 and 2013.
The Ohio River was polluted with hundreds of thousands of pounds of PFOA, a fluorinated chemical used in making Teflon among other products, discharged by the DuPont chemical company from an outflow pipe at its Parkersburg, West Virginia, facility for several decades beginning in the 1950s.
The Ohio River is heavily industrialized and populated and sees traffic from large barge cargoes carrying oil, steel and other industrial goods. There are several major cities located along the northern and southern banks of the river including Cincinnati, Ohio; Pittsburgh, Pennsylvania; and Louisville, Kentucky.
The combined Allegheny-Ohio river carries the largest volume of water of any tributary of the Mississippi. The Indians and early explorers and settlers of the region also often considered the Allegheny to be part of the Ohio. The forks (the confluence of the Allegheny and Monongahela rivers at what is now Pittsburgh) were considered a strategic military location.
The Ohio River is formed by the confluence of the Allegheny and Monongahela rivers at Point State Park in Pittsburgh, Pennsylvania. From there, it flows northwest through Allegheny and Beaver counties, before making an abrupt turn to the south-southwest at the West Virginia–Ohio–Pennsylvania triple-state line (near East Liverpool, Ohio; Chester, West Virginia; and Ohioville, Pennsylvania). From there, it forms the border between West Virginia and Ohio, upstream of Wheeling, West Virginia.
The river then follows a roughly southwest and then west-northwest course until Cincinnati, before bending to a west-southwest course for most of its length. The course forms the northern borders of West Virginia and Kentucky, and the southern borders of Ohio, Indiana and Illinois, until it joins the Mississippi River at the city of Cairo, Illinois. The point where the Ohio joins the Mississippi is the lowest elevation in the state of Illinois.
The Ohio River drains to the Mississippi River which flows to the Gulf of Mexico on the Atlantic Ocean. Among rivers wholly or mostly in the United States, it is the second largest by discharge volume, the tenth longest and has the eighth largest drainage basin. It is considered to separate Midwestern Great Lakes states from the Upper South states, which were historically border states in the Civil War.
The Ohio River is a left (east) tributary, and the largest tributary by volume, of the Mississippi River in the United States. At the confluence, the Ohio is considerably bigger than the Mississippi, measured by long-term mean discharge: the Ohio River at Cairo carries 281,500 cu ft/s (7,960 m3/s), while the Mississippi River at Thebes, Illinois, upstream of the confluence, carries 208,200 cu ft/s (5,897 m3/s). Because the Ohio's flow is higher than that of the Mississippi, the Ohio River is, hydrologically, the main stream of the river system.
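The two unit systems in that comparison are internally consistent; converting with 1 cu ft ≈ 0.0283 m³ (a standard conversion factor, not given in the article):

\[
281{,}500\ \mathrm{cu\,ft/s} \times 0.0283\ \mathrm{m^3/cu\,ft} \approx 7{,}970\ \mathrm{m^3/s},
\]

which matches the quoted 7,960 m³/s to within rounding; the same conversion takes 208,200 cu ft/s to about 5,890 m³/s.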
The Ohio River is a naturally shallow river that was artificially deepened by a series of dams. The dams raise the water level and have turned the river largely into a series of reservoirs, eliminating shallow stretches and allowing for commercial navigation. The largest immediate drop in water level is below the McAlpine Locks and Dam at the Falls of the Ohio at Louisville, Kentucky; the river's deepest point is on the western side of Louisville. From Louisville, the river loses depth very gradually until its confluence with the Mississippi at Cairo, Illinois.
Water levels for the Ohio River from Smithland Lock and Dam upstream to Pittsburgh are predicted daily by the National Oceanic and Atmospheric Administration's Ohio River Forecast Center. The water depth predictions are relative to each local flood plain, based upon predicted rainfall in the Ohio River basin, and are issued in five reports, each covering a portion of the river.
The water levels for the Ohio River from Smithland Lock and Dam to Cairo, Illinois, are predicted by the National Oceanic and Atmospheric Administration's Lower Mississippi River Forecast Center.
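These forecasts are built on top of real-time stage observations from river gauges. As a minimal sketch, assuming the publicly documented USGS Instantaneous Values web service (the site code below is a placeholder that would need to be looked up for the gauge of interest), the latest gauge height for one Ohio River station could be fetched like this:

```python
# Minimal sketch, assuming the USGS Instantaneous Values web service.
# The site code below is a placeholder, not taken from the source article.
import requests

SITE = "03294500"  # placeholder USGS site code for an Ohio River gauge

resp = requests.get(
    "https://waterservices.usgs.gov/nwis/iv/",
    params={
        "format": "json",
        "sites": SITE,
        "parameterCd": "00065",  # USGS parameter 00065 = gage height, in feet
    },
    timeout=30,
)
resp.raise_for_status()

# The service returns a list of time series; take the first one and
# read its most recent observation.
series = resp.json()["value"]["timeSeries"][0]
latest = series["values"][0]["value"][-1]
print(f"{series['sourceInfo']['siteName']}: {latest['value']} ft at {latest['dateTime']}")
```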
The largest tributary of the Ohio by both discharge volume and drainage basin area is the Tennessee River, with the Cumberland and Wabash rivers also ranking high by those measures and by length. Major tributaries of the river, in order from the head to the mouth of the Ohio, include the Muskingum, Kanawha, Big Sandy, Scioto, Kentucky, Green, Wabash, Cumberland, and Tennessee rivers.
The Ohio's drainage basin covers about 190,000 square miles (490,000 km2), encompassing the easternmost regions of the Mississippi Basin. The Ohio drains parts of 15 states in four regions.
The Ohio River is a climatic transition area, as its water runs along the periphery of the humid continental and humid subtropical climate areas. It is inhabited by fauna and flora of both climates. In winter, it regularly freezes over at Pittsburgh but rarely farther south toward Cincinnati and Louisville. At Paducah, Kentucky, in the south, at the Ohio's confluence with the Tennessee River, it is ice-free year-round.
In the 21st century, with the 2016 update of climate zones, the humid subtropical zone has stretched across the river, into the southern portions of Ohio, Indiana, and Illinois.
From a geological standpoint, the Ohio River is young. Before the river was created, large parts of North America were covered by water forming a saltwater lake about 200 miles across and 400 miles long, and the bedrock of the Ohio Valley was mostly set during this time. The river formed on a piecemeal basis beginning between 2.5 and 3 million years ago. The movement of glaciers during the earliest ice ages diverted drainage northward, joining the present-day drainages of the Kanawha, Sandy, Kentucky, Green, Cumberland, and Tennessee rivers into the newly created Ohio system, and the courses of early tributaries of the Ohio River, including the Monongahela and Allegheny rivers, were set. The Teays River was the largest of these ancient rivers; the modern Ohio River flows within segments of the ancient Teays, while the other ancient rivers were rearranged or consumed.
The section of the river that runs southwest from Pittsburgh to Cairo, Illinois, is only tens of thousands of years old.
The upper Ohio River formed when one of the glacial lakes overflowed into a south-flowing tributary of the Teays River. Prior to that event, the north-flowing Steubenville River (no longer in existence) ended between New Martinsville and Paden City, West Virginia, and the south-flowing Marietta River (no longer in existence) ended between the same present-day cities. The overflowing lake carved through the separating hill and connected the rivers. The floodwaters enlarged the small Marietta valley to a size more typical of a large river. The new large river subsequently drained glacial lakes and melting glaciers at the end of the ice ages, and the valley grew during and following the ice age. Many small rivers were altered or abandoned after the upper Ohio River formed. Valleys of some abandoned rivers can still be seen on satellite and aerial images of the hills of Ohio and West Virginia between Marietta, Ohio, and Huntington, West Virginia.
The middle Ohio River formed in a manner similar to formation of the upper Ohio River. A north-flowing river was temporarily dammed by natural forces southwest of present-day Louisville, creating a large lake until the dam burst. A new route was carved to the Mississippi. Eventually the upper and middle sections combined to form what is essentially the modern Ohio River.
Along the banks of the Ohio are some of the largest cities in their respective states: Pittsburgh, the largest city on the river and second-largest city in Pennsylvania; Cincinnati, the third-largest city in Ohio; Louisville, the largest city in Kentucky; Evansville, the third-largest city in Indiana; Owensboro, the fourth-largest city in Kentucky; Huntington, the second-largest city in West Virginia; Parkersburg, the fourth-largest city in West Virginia; and Wheeling, the fifth-largest city in West Virginia. Only Illinois, among the border states, has no significant cities on the river. There are hundreds of other cities, towns, villages and unincorporated populated places on the river, most of them very small.
Cities along the Ohio are also among the oldest cities in their respective states and among the oldest cities in the United States west of the Appalachian Mountains (by date of founding): Pittsburgh, Pennsylvania, 1758; Wheeling, West Virginia, 1769; Louisville, Kentucky, 1779; Clarksville, Indiana, 1783; Maysville, Kentucky, 1784; Martin's Ferry, Ohio, 1785; Marietta, Ohio, 1788; Cincinnati, Ohio, 1789; Manchester, Ohio, 1790; and Beaver, Pennsylvania, 1792.
Other cities of interest include Cairo, Illinois, at the mouth of the Ohio on the Mississippi River and the southernmost and westernmost city on the river; Pittsburgh, Pennsylvania, the easternmost city on the river at the head or Forks of the Ohio, where the Allegheny and Monongahela Rivers join to create the Ohio; and Beaver, Pennsylvania, the site of colonial Fort McIntosh and the northernmost city on the river. It is 548 miles as the crow flies between Cairo and Pittsburgh, but 981 miles by water. Direct water travel along the length of the river is obstructed by the Falls of the Ohio just below Louisville, Kentucky. The Ohio River Scenic Byway follows the Ohio River through Illinois, Indiana, and Ohio, ending at Steubenville, Ohio, on the river.
Before there were cities, there were colonial forts. These forts played a dominant role in the French and Indian War, the Northwest Indian War, and the pioneering settlement of the Ohio Country, and many cities got their start at or adjacent to them. Most were abandoned by 1800. Forts along the Ohio River include Fort Pitt (Pennsylvania), Fort McIntosh (Pennsylvania), Fort Randolph (West Virginia), Fort Henry (West Virginia), Fort Harmar (Ohio), Fort Washington (Ohio), and Fort Nelson (Kentucky). Short-lived special purpose forts included Fort Steuben (Ohio), Fort Finney (Indiana), Fort Finney (Ohio), and Fort Gower (Ohio). | https://en.wikipedia.org/wiki?curid=22388 |