[SOURCE: https://en.wikipedia.org/wiki/Z4_(computer)] | [TOKENS: 1128] |
Z4 (computer)

The Z4 was arguably the world's first commercial digital computer, and is the oldest surviving programmable computer. It was designed and manufactured by early computer scientist Konrad Zuse's company Zuse Apparatebau for an order placed by Henschel & Son in 1942. It was only partially assembled in Berlin and was completed in Göttingen in April 1945, but it was not delivered before the defeat of Nazi Germany. The Z4 was the final stage of development of Zuse's Z3 design. Like the earlier Z2, it combined mechanical memory with electromechanical logic. The Z4 was used at ETH Zurich from 1950 to 1955, and also served as the inspiration for the construction of the ERMETH, the first Swiss computer, created under the direction of ETH engineer Ambros Speiser.

Construction

The Z4 was very similar to the Z3 in its design but was significantly enhanced in a number of respects. The memory consisted of 32-bit rather than 22-bit floating-point words. The Program Construction Unit (Planfertigungsteil) punched the program tapes, making programming and correcting programs for the machine much easier through the use of symbolic operations and memory cells. Numbers were entered and output as decimal floating point even though the internal working was in binary. The machine had a large repertoire of instructions, including square root, MAX, MIN and sine, and its conditional tests included tests for infinity. By the time it was delivered to ETH Zurich in 1950, the machine had gained a conditional branch facility and could print on a Mercedes typewriter. There were two program tapes, of which the second could be used to hold a subroutine (originally six were planned).

In 1944, Zuse was working on the Z4 with around two dozen people, including Wilfried de Beauclair. Some engineers who worked at the telecommunications facility of the OKW also worked for Zuse as a secondary occupation. Also in 1944, Zuse transformed his company into Zuse KG (a Kommanditgesellschaft, i.e. a limited partnership) and planned to manufacture 300 computers. This way he could also request additional staff and scientists as a contractor in the Emergency Fighter Program. Zuse's company also cooperated with Alwin Walther's Institute for Applied Mathematics at the Technische Universität Darmstadt.

To prevent it from falling into the hands of the Soviets, the Z4 was evacuated from Berlin in February 1945 and transported to Göttingen, where it was completed in a facility of the Aerodynamische Versuchsanstalt (AVA, Aerodynamic Research Institute), headed by Albert Betz. By the time it was presented to scientists of the AVA, the roar of the approaching front could already be heard, so the computer was transported on a Wehrmacht truck to Hinterstein in Bad Hindelang in southern Bavaria, where Konrad Zuse met Wernher von Braun. By 1947 it was possible to enter constants via the punched tape.

Use after World War II

In 1949, the Swiss mathematician Eduard Stiefel, returning from a stay in the US during which he had inspected American computers, visited Zuse and the Z4. When he formulated a differential equation as a test, Zuse immediately programmed the Z4 to solve it, and Stiefel decided to acquire the computer for his newly founded Institute for Applied Mathematics at ETH Zurich. It was delivered to ETH Zurich in 1950. As a quip attributed to Zuse has it: "At least Zürich has an interesting nightlife with the rattling of the Z4, even if it is only modest."
In 1954, Wolfgang Haack tried to obtain the Z4 for Technische Universität Berlin, but in 1955 it was instead sold to the Institut franco-allemand de recherches de Saint-Louis (ISL, Franco-German Research Institute of Saint-Louis) in Saint-Louis, France, close to Basel, where it remained in use until 1959 under its technical head Hubert Schardin. In 1960 the Z4 was transferred to the Deutsches Museum in Munich, where it is on display today.

The Z4 inspired the ETH to build its own computer (mainly by Ambros Speiser and Eduard Stiefel), which was called ERMETH, an acronym for the German Elektronische Rechenmaschine ETH ("Electronic Computing Machine ETH").

In 1950/1951, the Z4 was the only working digital computer in Central Europe, and the second digital computer in the world to be sold or loaned, beating the Ferranti Mark 1 by five months and the UNIVAC I by ten months, but in turn being beaten by the BINAC (although that machine never worked at the customer's site). Other computers, all numbered with a leading Z, were built by Zuse and his company; notable among them are the Z11, which was sold to the optics industry and to universities, and the Z22. The Z4 was used for calculations for work on the Grande Dixence Dam in 1950.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/ETH_Zurich] | [TOKENS: 3172] |
ETH Zurich

ETH Zurich (German: Eidgenössische Technische Hochschule Zürich; English: Federal Institute of Technology Zurich) is a public university in Zurich, Switzerland. Founded in 1854, the university primarily teaches and conducts research in science, technology, engineering, and mathematics (STEM). Like its sister institution École Polytechnique Fédérale de Lausanne (EPFL), ETH Zurich is part of the Swiss Federal Institutes of Technology Domain, a consortium of universities and research institutes under the Swiss Federal Department of Economic Affairs, Education and Research. As of 2024, ETH Zurich enrolled 26,198 students from over 120 countries, of whom 4,351 were pursuing doctoral degrees. Students, faculty, and researchers affiliated with ETH Zurich include 22 Nobel laureates, among them Albert Einstein, two Fields Medalists, three Pritzker Prize winners, and one Turing Award recipient. It is a founding member of the IDEA League and the International Alliance of Research Universities (IARU), as well as a member of CESAER, the League of European Research Universities (LERU), and the ENHANCE Alliance.

History

ETH Zurich was founded on 7 February 1854 by the Swiss Confederation and began giving its first lectures on 16 October 1855 as a polytechnic institute (eidgenössische polytechnische Schule) at various sites throughout the city of Zurich. It initially consisted of six faculties: architecture, civil engineering, mechanical engineering, chemistry, forestry, and an integrated department for mathematics, natural sciences, literature, and social and political sciences. Locally, it is still known as Polytechnikum or simply Poly, derived from the original name eidgenössische polytechnische Schule, which translates to "federal polytechnic school". ETH Zurich is a federal institute under direct administration by the Swiss government. The creation of a new federal university was heavily disputed at the time; liberals advocated for a "federal university," while conservatives wanted universities to remain under cantonal control, fearing an increase in liberal political power. Initially, ETH was co-located in the buildings of the University of Zurich. From 1905 to 1908, under the presidency of Jérôme Franel, ETH Zurich restructured its course programs to those of a university and was granted the right to award doctorates. The first doctorates were awarded in 1909. In 1911, the institution received its current name, Eidgenössische Technische Hochschule. Another reorganization in 1924 structured the university into 12 departments; today it has 16. ETH Zurich, along with EPFL and four associated research institutes, forms the "ETH Domain" to collaborate on scientific projects.

Campus

ETH Zurich has two campuses, Zentrum and Hönggerberg; since 2007, ETH Zurich has also been present in Basel with one department. The Zentrum campus grew around the main building, which was constructed in 1858–1864 outside and just above the eastern border of the town but is nowadays located right in the heart of the city. As the town and university grew, ETH Zurich spread into the surrounding vineyards and later quarters. Because this geographic situation substantially hindered the expansion of ETH Zurich, a new campus was built from 1964 to 1976 on the Hönggerberg, a northern hill in the outskirts of the city. The last major expansion project of this new campus was completed in 2003. The Zentrum campus consists of various buildings and institutions throughout the city of Zurich.
The main building of ETH Zurich was built from 1858 to 1864 under Gustav Zeuner; the architect, however, was Gottfried Semper, who was a professor of architecture at ETH Zurich at the time, one of the most important architectural writers and theorists of the age, and the namesake and architect of the Semperoper in Dresden. Semper worked in a neoclassical style that was unique to him, emphasizing bold and clear massing with detailing, such as the rusticated ground level and giant order above, that derived in part from the work of Andrea Palladio and Donato Bramante. During the construction of the University of Zurich, the south wing of the building was allocated to the University until its own new main building was constructed (1912–1914). At about the same time, Semper's ETH Zurich building was enlarged and received its cupola. The main building stands directly across the street from the University Hospital of Zurich, right alongside the main building of the University of Zurich.

The Hönggerberg campus is a more classical university campus, consisting mainly of university buildings and student accommodation. It also has an ASVZ sports centre, accessible to all students and faculty, which includes a gym, beach volleyball court, football field, and martial-arts rooms. In 2005, ETH Zurich's 150th anniversary, an extensive project called "Science City" was started for the Hönggerberg campus with the goal of transforming it into an attractive district based on the principle of sustainability.

The Department of Biosystems Science and Engineering (D-BSSE) is located on Campus Schällemätteli in close vicinity to the University of Basel, the University Hospital and the Children's Hospital, as well as several other research institutions and major players in the chemical and pharmaceutical industries. The concentration of life science institutions in this area of Europe, and Basel's tri-national reach, motivated ETH Zurich to establish a department in Basel in 2007. A 55-minute train ride connects the Basel location with Zurich.

Research and education

Undergraduate education at ETH Zurich is marked by the distinctive Basisprüfungen ("base examinations"), intensive first-year examination blocks typically encompassing foundational subjects in mathematics, physics, and engineering disciplines. These exams serve both as a filter and as preparation for advanced, research-oriented coursework. Students must pass these examinations within two attempts, and failure rates in mathematics-intensive programmes often reach between 50% and 60%. Doctoral education at ETH emphasizes hands-on research experience: PhD candidates are hired directly as paid employees in professors' laboratories, conducting independent research and actively contributing to teaching. Many departments further structure doctoral training through thematic graduate schools, promoting collaborative research with multiple advisers and international cooperation, notably with the University of Zurich. ETH's research is especially focused on the STEM areas, and it hosts several research hubs. The ETH AI Center is ETH Zurich's central hub for artificial intelligence research. It is an active member of the European Laboratory for Learning and Intelligent Systems (ELLIS), hosting the ELLIS unit in Zurich and offering ELLIS PhD fellowships.
Through the Max Planck ETH Center for Learning Systems (CLS), it cooperates closely with the Max Planck Institute for Intelligent Systems, jointly funding research and supervising doctoral students. The Swiss National Supercomputing Center is an autonomous organizational unit of ETH Zurich. It is a national facility based in Lugano-Cornaredo, offering high-performance computing services for Swiss-based scientists. In 2024 it deployed the Alps supercomputer, consisting of over 10,000 Nvidia H100 GPUs, making it one of the largest academic supercomputers in the world. The ETH Laboratory of Ion Beam Physics (LIB) is a physics laboratory located in Science City. It specializes in accelerator mass spectrometry (AMS) and the use of ion-beam-based techniques with applications in archeology, earth sciences, life sciences, material sciences and fundamental physics. ETH Zurich promotes technology and knowledge transfer through an entrepreneurial ecosystem that fosters spin-offs and start-ups. As of 2022, 527 ETH Zurich spin-off companies had been created.

Rankings and reputation

Historically, ETH Zurich has achieved its reputation particularly in the fields of chemistry, mathematics, physics and computer science. There are 22 Nobel laureates associated with ETH Zurich, the most recent of whom is Didier Queloz, awarded the Nobel Prize in Physics in 2019. Albert Einstein is perhaps its most famous alumnus. ETH Zurich is ranked 7th worldwide (first in Switzerland) in the QS World University Rankings 2025, 11th worldwide (first in Switzerland) in the Times Higher Education World University Rankings 2024, and 20th worldwide in the Academic Ranking of World Universities 2023. ETH Zurich ranked 1st in Europe in the 2025 QS Europe rankings. In the 2023 Nature Index of academic institutions, ETH Zurich ranked 20th worldwide and first in Switzerland. In the 2024 QS World University Rankings by subject, ETH Zurich was ranked within the top 10 in the world in architecture, engineering and technology, and the natural sciences. It ranked first worldwide in the earth and marine sciences, geology, and geophysics. In the 2024 THE World University Rankings by subject, it was the top Swiss university in all ranked subjects. In the 2023 ARWU Subject Ranking, the university was ranked within the top 10 worldwide in civil engineering, water resources, environmental engineering, automation, mathematics, earth sciences, and ecology.

Student life

As ETH Zurich is a public university, tuition fees, subsidized by Swiss federal tax, are CHF 730 per semester regardless of the student's nationality. From the autumn semester of 2025, tuition fees for foreign students will be tripled to CHF 2,190 per semester. Both merit- and need-based scholarships are also available. ETH Zurich has well over 100 student associations. Most notable is the VSETH (Verband der Studierenden an der ETH), which forms the umbrella organization of all field-of-study-specific student associations and comprises a large variety of committees, such as the Student Sustainability Committee and the ETH Model United Nations. The associations regularly organize events of varying size and popularity. Events of the neighboring University of Zurich are well attended by ETH Zurich students, and vice versa. The largest career fair on campus is the Polymesse, which is organized by students in the Forum und Contact committee of VSETH. Many student associations, however, organize career fairs specifically for the students in their departments.
The VSETH is also the official representation of the student body towards the school and has been working with ETH on various projects aimed at improving the students' experience at ETH. Representation towards the various departments is handled by the respective student associations. ETH Juniors is another student organization. It forms a bridge between industry and ETH Zurich and, as a student-led consulting group, offers many services for students and companies alike. The Academic Sports Association of Zurich (ASVZ) offers more than 120 sports. The biggest annual sports event is the SOLA-Stafette (SOLA relay race), which consists of 14 sections over a total distance of 140 kilometres (87 mi). In 2017, the ETH Zurich board approved the creation of a "Student Project House" to encourage student projects and foster innovation. A pilot consisting of a makerspace and co-working space was established on the Hönggerberg campus, followed by a six-story space near the ETH Zurich main building. Both locations function as a unified entity for the purposes of qualifications, staffing and decision making. While both makerspaces offer similar tools, the central one is significantly larger and also hosts a bar and a rentable auditorium intended for pitching projects to faculty to gain funding. Both makerspaces include workspaces for wood- and metalworking and electronics fabrication, as well as an array of 3D printers for students to use at a little over material cost. Both also feature a shop where students can buy items such as resistors in smaller quantities than ordinarily available, passing down the savings of bulk purchases. The makerspaces are managed and staffed entirely by students. A new space was expected to open on the Hönggerberg campus in 2024. The Swiss Academic Spaceflight Initiative (ARIS) (German: Akademische Raumfahrt Initiative Schweiz) is an organisation at ETH Zurich that focuses on the development of space-related technologies. Its most prominent area of research is the development of a sounding rocket that is flown yearly at the Spaceport America Cup. The AMZ - Academic Motorsports Association (German: Akademischer Motorsportverein Zürich) is ETH Zurich's Formula Student team. Swissloop is ETH Zurich's newest competition team, working on the development of a hyperloop system.

Traditions

The Polyball, the biggest decorated ball in Europe, takes place annually in the main building of ETH and is organized by students and former students in the KOSTA foundation. It has been taking place since the 1880s. The amicable rivalry between ETH Zurich and its neighbor, the University of Zurich, has been cultivated since 1951 (Uni-Poly). There has been an annual rowing match between teams from the two institutions on the river Limmat. There are many regular symposia and conferences at ETH Zurich, most notably the annual Wolfgang Pauli Lectures, in honor of former ETH Zurich professor Wolfgang Pauli. Distinguished lecturers, among them 24 Nobel laureates, have lectured on the various fields of natural science at this conference since 1962.

Notable alumni and faculty

ETH Zurich has produced and attracted many famous scientists, including Albert Einstein and John von Neumann. More than twenty Nobel laureates have either studied at ETH Zurich or were awarded the Nobel Prize for work achieved there.
Other alumni include scientists who were distinguished with the highest accolades, such as the Fields Medal, the Pritzker Prize and the Turing Award, among other distinctions in their respective fields. Academic achievements aside, ETH Zurich has been alma mater to many Olympic medalists and world champions.

Related organizations

The Collegium Helveticum is an Institute for Advanced Study. It is jointly supported and operated by ETH Zurich, the University of Zurich and the Zurich University of the Arts. It is dedicated to transdisciplinary research and also acts as a think tank. Fellows are elected for five years to work together on a particular subject. For the period 2016–2020, the research focus was on digital societies. The ETH Zurich Foundation is a legal entity in its own right (a Swiss non-profit foundation) and as such is not part of ETH Zurich. Its purpose is to raise funds to support chosen institutes, projects, faculty and students at ETH Zurich. It receives charitable donations from companies, foundations and private individuals. It can be compared with university endowments in the US; however, since ETH Zurich is a public university, the funds of this foundation are much smaller than at comparable private universities. The Military Academy is an institution for the education, training and development of career officers of the Swiss Armed Forces. The scientific part of this organization is attached to ETH Zurich, while other parts, such as training and an assessment center, are under the direct management of the defense sector of the Swiss Federal Government. The ETH Alumni Association has been in existence since 1889 and has around 35,000 members. Membership is primarily open to former students, though academic faculty and post-docs are also eligible to join. After the collapse of Credit Suisse, the association took over sponsorship of the Award for Best Teaching in 2025.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Electronic_circuit] | [TOKENS: 1399] |
Electronic circuit

An electronic circuit is composed of individual electronic components, such as resistors, transistors, capacitors, inductors and diodes, connected by conductive wires or traces through which electric current can flow. It is a type of electrical circuit. For a circuit to be referred to as electronic, rather than electrical, generally at least one active component must be present. The combination of components and wires allows various simple and complex operations to be performed: signals can be amplified, computations can be performed, and data can be moved from one place to another. Circuits can be constructed of discrete components connected by individual pieces of wire, but today it is much more common to create interconnections by photolithographic techniques on a laminated substrate (a printed circuit board or PCB) and solder the components to these interconnections to create a finished circuit. In an integrated circuit or IC, the components and interconnections are formed on the same substrate, typically a semiconductor such as doped silicon or (less commonly) gallium arsenide. An electronic circuit can usually be categorized as an analog circuit, a digital circuit, or a mixed-signal circuit (a combination of analog and digital circuits). The most widely used semiconductor device in electronic circuits is the MOSFET (metal–oxide–semiconductor field-effect transistor).

Analog circuits

Analog electronic circuits are those in which current or voltage may vary continuously with time to correspond to the information being represented. The basic components of analog circuits are wires, resistors, capacitors, inductors, diodes, and transistors. Analog circuits are very commonly represented in schematic diagrams, in which wires are shown as lines and each component has a unique symbol. Analog circuit analysis employs Kirchhoff's circuit laws: the sum of all the currents at a node (a place where wires meet) is zero, and the sum of the voltages around a closed loop of wires is zero. Wires are usually treated as ideal zero-voltage interconnections; any resistance or reactance is captured by explicitly adding a parasitic element, such as a discrete resistor or inductor. Active components such as transistors are often treated as controlled current or voltage sources: for example, a field-effect transistor can be modeled as a current source from the source to the drain, with the current controlled by the gate-source voltage. When the circuit size is comparable to a wavelength of the relevant signal frequency, a more sophisticated approach must be used, the distributed-element model. Wires are treated as transmission lines, with nominally constant characteristic impedance, and the impedances at the start and end determine transmitted and reflected waves on the line. Circuits designed according to this approach are distributed-element circuits. Such considerations typically become important for circuit boards at frequencies above a GHz; integrated circuits are smaller and can be treated as lumped elements for frequencies less than 10 GHz or so.

Digital circuits

In digital electronic circuits, electric signals take on discrete values, to represent logical and numeric values. These values represent the information that is being processed. In the vast majority of cases, binary encoding is used: one voltage (typically the more positive value) represents a binary '1' and another voltage (usually a value near the ground potential, 0 V) represents a binary '0'.
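As an aside before continuing with digital circuits: the Kirchhoff-based analysis described above can be made concrete with a minimal sketch. The following Python example applies Kirchhoff's current law to the middle node of a two-resistor voltage divider; the component values and function name are arbitrary choices for illustration.

```python
# Kirchhoff's current law (KCL) at the middle node of a voltage divider:
# the current flowing in through R1 equals the current flowing out
# through R2:  (v_in - v_mid) / r1 = (v_mid - 0) / r2.
# Solving for v_mid gives the familiar divider formula.

def divider_mid_voltage(v_in: float, r1: float, r2: float) -> float:
    """Node voltage between R1 and R2, derived from KCL."""
    return v_in * r2 / (r1 + r2)

if __name__ == "__main__":
    v_in, r1, r2 = 9.0, 1_000.0, 2_000.0    # arbitrary example values
    v_mid = divider_mid_voltage(v_in, r1, r2)
    print(f"v_mid = {v_mid:.3f} V")          # 6.000 V
    # Sanity check of KCL: current through R1 equals current through R2.
    assert abs((v_in - v_mid) / r1 - v_mid / r2) < 1e-12
```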
Digital circuits make extensive use of transistors, interconnected to create logic gates that provide the functions of Boolean logic: AND, NAND, OR, NOR, XOR and combinations thereof. Transistors interconnected so as to provide positive feedback are used as latches and flip-flops, circuits that have two or more metastable states and remain in one of these states until changed by an external input. Digital circuits can therefore provide logic and memory, enabling them to perform arbitrary computational functions, as the sketch after this section illustrates. (Memory based on flip-flops is known as static random-access memory (SRAM). Memory based on the storage of charge in a capacitor, dynamic random-access memory (DRAM), is also widely used.) The design process for digital circuits is fundamentally different from the process for analog circuits. Each logic gate regenerates the binary signal, so the designer need not account for distortion, gain control, offset voltages, and other concerns faced in an analog design. As a consequence, extremely complex digital circuits, with billions of logic elements integrated on a single silicon chip, can be fabricated at low cost. Such digital integrated circuits are ubiquitous in modern electronic devices, such as calculators, mobile phone handsets, and computers. As digital circuits become more complex, issues of time delay, logic races, power dissipation, non-ideal switching, on-chip and inter-chip loading, and leakage currents become limitations to circuit density, speed and performance. Digital circuitry is used to create general-purpose computing chips, such as microprocessors, and custom-designed logic circuits known as application-specific integrated circuits (ASICs). Field-programmable gate arrays (FPGAs), chips with logic circuitry whose configuration can be modified after fabrication, are also widely used in prototyping and development.

Mixed-signal circuits

Mixed-signal or hybrid circuits contain elements of both analog and digital circuits. Examples include comparators, timers, phase-locked loops, analog-to-digital converters, and digital-to-analog converters. Most modern radio and communications circuitry uses mixed-signal circuits. For example, in a receiver, analog circuitry is used to amplify and frequency-convert signals so that they reach a suitable state to be converted into digital values, after which further signal processing can be performed in the digital domain.

Design

Electronic circuit design comprises the analysis and synthesis of electronic circuits.

Prototyping

In electronics, prototyping means building an actual circuit to a theoretical design to verify that it works, and to provide a physical platform for debugging it if it does not. The prototype is often constructed using techniques such as wire wrapping or using a breadboard, stripboard or perfboard, with the result being a circuit that is electrically identical to the design but not physically identical to the final product. Open-source tools like Fritzing exist to document electronic prototypes (especially breadboard-based ones) and move toward physical production. Prototyping platforms such as Arduino also simplify the task of programming and interacting with a microcontroller. The developer can choose to deploy their invention as-is using the prototyping platform, or replace it with only the microcontroller chip and the circuitry that is relevant to their product.
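The latch behavior described in the digital-circuits section above can be mimicked in a few lines of Python. This sketch (an illustration under simplified assumptions, with invented function names) cross-couples two NAND gates into an SR latch and shows that the feedback loop retains one bit of state after the inputs go idle.

```python
# Two cross-coupled NAND gates form a basic SR latch (active-low inputs).
# The positive-feedback loop is what lets the circuit "remember" one bit.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def sr_latch(s_n: int, r_n: int, q: int, q_n: int):
    """Iterate the feedback loop until the outputs settle."""
    for _ in range(4):                   # a few passes suffice to stabilize
        q, q_n = nand(s_n, q_n), nand(r_n, q)
    return q, q_n

if __name__ == "__main__":
    q, q_n = 0, 1                        # arbitrary initial state
    q, q_n = sr_latch(0, 1, q, q_n)      # pulse set (active low): Q -> 1
    print(q, q_n)                        # 1 0
    q, q_n = sr_latch(1, 1, q, q_n)      # both inputs idle: state retained
    print(q, q_n)                        # 1 0  (the latch remembers)
    q, q_n = sr_latch(1, 0, q, q_n)      # pulse reset: Q -> 0
    print(q, q_n)                        # 0 1
```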
A technician can quickly build a prototype (and make additions and modifications) using these techniques, but for volume production it is much faster and usually cheaper to mass-produce custom printed circuit boards than to produce these other kinds of prototype boards. The proliferation of quick-turn PCB fabrication and assembly companies has enabled the concepts of rapid prototyping to be applied to electronic circuit design. It is now possible, even with the smallest passive components and largest fine-pitch packages, to have boards fabricated, assembled, and even tested in a matter of days.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-rojas-universal-48] | [TOKENS: 10628] |
Computer

A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster.

A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users.

Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.

Etymology

It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century.
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions.

History

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example.

The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.

The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century.

Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD.

The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.

In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.

The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences.

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833, Babbage realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand; this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$ for a sequence of sets of values. The whole machine was to be controlled by a read-only program, complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).

Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
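The integrator-chaining principle described above, where the output of one integrator feeds the input of the next, is easy to mimic numerically. The sketch below is an illustrative analogy only (step size and initial conditions are arbitrary assumptions): it chains two running sums to solve y'' = -y, the kind of ordinary differential equation a differential analyser handled mechanically.

```python
import math

# A differential analyser solves ODEs by feeding the output of one
# integrator into the input of the next. For y'' = -y, integrator 1
# turns y'' into y', integrator 2 turns y' into y, and the feedback
# connection y'' = -y closes the loop. Here each "integrator" is a
# simple running (Euler) sum.

dt = 0.001                      # step size (arbitrary)
y, dy = 1.0, 0.0                # initial conditions: y(0) = 1, y'(0) = 0

t = 0.0
while t < math.pi:              # integrate from t = 0 to t = pi
    ddy = -y                    # the feedback connection: y'' = -y
    dy += ddy * dt              # integrator 1: accumulate y'' into y'
    y += dy * dt                # integrator 2: accumulate y' into y
    t += dt

print(f"y(pi) ~ {y:.3f}")       # exact solution is cos(pi) = -1
```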
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing-complete.

Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe.

Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.

During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". The machine combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine, and also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

Early computing machines had fixed programs; changing such a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
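To make the universal-machine idea above concrete, here is a minimal Turing-machine simulator: a finite-state control reading and writing symbols on a tape. The transition-table format and state names are illustrative assumptions, not any historical notation; the three-rule program increments a binary number stored least-significant-bit first.

```python
# A minimal Turing machine. The "program" is a transition table:
# (state, symbol) -> (symbol_to_write, head_move, next_state).
# This one increments a binary number whose tape is stored with the
# least significant bit at index 0; "_" stands for a blank cell.

PROGRAM = {
    ("carry", "0"): ("1", 0, "halt"),    # 0 + carry -> 1, done
    ("carry", "1"): ("0", +1, "carry"),  # 1 + carry -> 0, carry onward
    ("carry", "_"): ("1", 0, "halt"),    # ran off the end: new digit
}

def run(tape):
    state, head = "carry", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        new_symbol, move, state = PROGRAM[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)      # extend the (unbounded) tape
        head += move
    return tape

if __name__ == "__main__":
    # 3 is binary "11"; least-significant-bit-first tape is ["1", "1"].
    print(run(["1", "1"]))   # ['0', '0', '1']  i.e. binary 100 = 4
```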
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job.

The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications.

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell.

The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics.

The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide.

Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs.

The development of the MOS integrated circuit led to the invention of the microprocessor and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel. In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors and consuming only a few watts of power.

The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries, and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. They are powered by Systems on a Chip (SoCs), which are complete computers on a microchip the size of a coin.

Types

Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer, a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer.

Hardware

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] In simplified terms, the control system's function is to read the instruction from the memory cell indicated by the program counter, decode it, increment the program counter so that it points to the following instruction, gather whatever data the instruction requires from memory or registers, hand that data to the ALU or to an output device as appropriate, write back any result, and then begin the cycle again; depending on the type of CPU, some of these steps may be performed concurrently or in a different order. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another, smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
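These ideas, a program counter that normally steps forward, jumps that rewrite it, and memory as a row of numbered cells, can be made concrete with a toy simulator. The Python sketch below is purely illustrative: the (opcode, a, b) instruction format and the opcode names are invented for the example and correspond to no real instruction set.

    # A toy stored-program machine. Memory is a list of numbered cells;
    # the program counter (pc) holds the address of the next instruction.
    memory = [0] * 64

    # Program: add the numbers 1 through 10 into cell 50, with cell 51
    # as the counter. Instructions are stored in memory just like data.
    program = [
        ("set", 51, 1),     # cell 51 (counter) := 1
        ("set", 50, 0),     # cell 50 (sum)     := 0
        ("add", 50, 51),    # cell 50 := cell 50 + cell 51
        ("inc", 51, 1),     # cell 51 := cell 51 + 1
        ("jle", 51, 10),    # if cell 51 <= 10, jump back to address 2
        ("halt", 0, 0),
    ]
    for address, instruction in enumerate(program):
        memory[address] = instruction

    pc = 0
    while True:
        opcode, a, b = memory[pc]   # fetch and decode
        pc += 1                     # by default, step to the next cell
        if opcode == "set":
            memory[a] = b
        elif opcode == "add":
            memory[a] += memory[b]
        elif opcode == "inc":
            memory[a] += b
        elif opcode == "jle" and memory[a] <= b:
            pc = 2                  # a conditional jump rewrites the pc
        elif opcode == "halt":
            break

    print(memory[50])               # prints 55, the sum of 1 through 10

Everything the machine does is either stepping the program counter forward one cell or, for a jump, overwriting it; that is the entire mechanism behind loops and conditional execution.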
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation (in an eight-bit byte, for example, −1 is stored as 11111111, because adding 00000001 to it wraps around to zero). Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see use in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
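The time-slicing described above can be sketched in a few lines of code. The following Python sketch is a deliberate simplification: it uses cooperative generators that voluntarily yield control, standing in for the hardware interrupts and preemption that real operating systems rely on, and the task names are invented for the example.

    from collections import deque

    def task(name, steps):
        # Each "program" does a little work and then yields control,
        # as if interrupted at the end of its time slice.
        for i in range(1, steps + 1):
            print(f"{name}: step {i}")
            yield

    # Round-robin scheduling: each runnable program gets one slice in turn.
    ready = deque([task("A", 3), task("B", 2), task("C", 3)])
    while ready:
        current = ready.popleft()
        try:
            next(current)           # run one time slice
            ready.append(current)   # rejoin the back of the queue
        except StopIteration:
            pass                    # this program has finished; drop it

A program blocked on slow input/output would simply not be returned to the ready queue until its event occurred, which is why multitasking does not slow a machine down in proportion to the number of programs loaded.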
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other, and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program, and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
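The following example is written in the MIPS assembly language. It is a minimal sketch of such a summing program, with the choice of $8 through $10 as working registers and $2 as the result register following common MIPS convention but otherwise arbitrary:

    begin:  addi $8,  $0, 0       # running sum := 0
            addi $9,  $0, 1       # current number := 1
    loop:   slti $10, $9, 1001    # $10 := 1 while the number is at most 1,000
            beq  $10, $0, finish  # once it passes 1,000, leave the loop
            add  $8,  $8, $9      # sum := sum + current number
            addi $9,  $9, 1       # move on to the next number
            j    loop             # jump back and repeat
    finish: add  $2,  $8, $0      # copy the result (500,500) into $2

In a high-level programming language, the same computation collapses into a single line, for instance sum(range(1, 1001)) in Python.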
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored-program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language), and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid of the two techniques. There are thousands of programming languages, some intended for general-purpose programming and others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a handheld video game system) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Designing small programs is relatively simple and involves analyzing the problem, collecting inputs, using the programming constructs within the language, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. In some cases, however, they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to fail completely, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with first using the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction that can apply to most digital and analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Zuse_KG] | [TOKENS: 3004] |
Contents Konrad Zuse Konrad Ernst Otto Zuse (/ˈzuːsə/; German: [ˈkɔnʁaːt ˈtsuːzə]; 22 June 1910 – 18 December 1995) was a German civil engineer, pioneering computer scientist, inventor and businessman. His greatest achievement was the world's first programmable computer; the functional program-controlled Turing-complete Z3 became operational in May 1941. Thanks to this machine and its predecessors, Zuse is regarded by some as the inventor and father of the modern computer. Zuse was also noted for the S2 computing machine, considered the first process control computer. In 1941, he founded one of the earliest computer businesses, producing the Z4, which became the world's first commercial computer. From 1943 to 1945 he designed Plankalkül, the first high-level programming language. In 1969, Zuse suggested the concept of a computation-based universe in his book Rechnender Raum (Calculating Space). Much of his early work was financed by his family and commerce, but after 1939 he was given resources by the government of Nazi Germany. Due to World War II, Zuse's work went largely unnoticed in the United Kingdom and United States. Possibly his first documented influence on a US company was IBM's option on his patents in 1946. The Z4 also served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Early life and education Konrad Zuse was born in Berlin on 22 June 1910. In 1912, his family moved to Braunsberg in East Prussia (now Braniewo in Poland), where his father was a postal clerk. Zuse attended the Collegium Hosianum in Braunsberg, and in 1923, the family moved to Hoyerswerda, where he passed his Abitur in 1928, qualifying him to enter university. He enrolled at Technische Hochschule Berlin (now Technische Universität Berlin) and explored both engineering and architecture, but found them boring. Zuse then pursued civil engineering, graduating in 1935. Career After graduation, Zuse worked for the Ford Motor Company, using his artistic skills in the design of advertisements. He then started work as a design engineer at the Henschel aircraft factory in Schönefeld near Berlin. This required the performance of many routine calculations by hand, leading him to theorize and plan a way of doing them by machine. Beginning in 1935, he experimented in the construction of computers in his parents' flat on Wrangelstraße 38, moving with them into their new flat on Methfesselstraße 10, the street leading up the Kreuzberg, Berlin.: 418 Working in his parents' apartment in 1936, he produced his first attempt, the Z1, a floating-point binary mechanical calculator with limited programmability, reading instructions from a perforated 35 mm film. In 1937, Zuse submitted two patents that anticipated a von Neumann architecture. In 1938, he finished the Z1, which contained some 30,000 metal parts but never worked well due to insufficient mechanical precision. On 30 January 1944, the Z1 and its original blueprints were destroyed with his parents' flat and many neighbouring buildings by a British air raid in World War II.: 426 Zuse completed his work entirely independently of other leading computer scientists and mathematicians of his day. Between 1936 and 1945, he was in near-total intellectual isolation. In 1939, Zuse was called to military service, where he was given the resources to ultimately build the Z2.
In September 1940 Zuse presented the Z2, covering several rooms in the parental flat, to experts of the Deutsche Versuchsanstalt für Luftfahrt (DVL; German Research Institute for Aviation).: 424 The Z2 was a revised version of the Z1 using telephone relays. In 1940, the German government began funding him and his company through the Aerodynamische Versuchsanstalt (AVA, Aerodynamic Research Institute, forerunner of the DLR), which used his work for the production of glide bombs. Zuse built the S1 and S2 computing machines, special-purpose devices that computed aerodynamic corrections to the wings of radio-controlled flying bombs. The S2 featured an integrated analog-to-digital converter under program control, making it the first process-controlled computer.: 75 In 1941 Zuse started a company, Zuse Apparatebau (Zuse Apparatus Construction), to manufacture his machines, renting a workshop across the street at Methfesselstraße 7, stretching through the block to Belle-Alliance Straße 29 (renamed and renumbered as Mehringdamm 84 in 1947).: 418, 425 In 1941, he improved on the basic Z2 machine and built the Z3. On 12 May 1941 Zuse presented the Z3, built in his workshop, to the public.: 425 The Z3 was a binary 22-bit floating-point calculator featuring programmability with loops but without conditional jumps, with memory and a calculation unit based on telephone relays. The telephone relays used in his machines were largely collected from discarded stock. Despite the absence of conditional jumps, the Z3 was a Turing-complete computer. However, Turing-completeness was never considered by Zuse (who was unaware of Turing's work and had practical applications in mind) and was only demonstrated in 1998 (see History of computing hardware). The Z3, the first fully operational electromechanical computer, was partially financed by the German government-supported DVL, which wanted its extensive calculations automated. A request by his co-worker Helmut Schreyer, who had helped Zuse build the Z3 prototype in 1938, for government funding for an electronic successor to the Z3 was denied as "strategically unimportant". In 1937, Schreyer had advised Zuse to use vacuum tubes as switching elements; Zuse at this time considered it a "crazy idea" (Schnapsidee in his own words). Zuse's workshop on Methfesselstraße 7 (along with the Z3) was destroyed in an Allied air raid in late 1943, and the parental flat with the Z1 and Z2 on 30 January the following year, whereas the successor Z4, which Zuse had begun constructing in 1942: 75 in new premises in the Industriehof on Oranienstraße 6, remained intact.: 428 On 3 February 1945, aerial bombing caused devastating destruction in the Luisenstadt, the area around Oranienstraße, including neighbouring houses. This event effectively brought Zuse's research and development to a complete halt. The partially finished, telephone relay-based Z4 computer was then packed and moved from Berlin on 14 February, arriving in Göttingen approximately two weeks later.: 428 These machines contributed to the Henschel Werke Hs 293 and Hs 294 guided missiles developed by the German military between 1941 and 1945, which were the precursors to the modern cruise missile.: 75 The circuit design of the S1 was the predecessor of Zuse's Z11.: 75 Zuse believed that these machines had been captured by occupying Soviet troops in 1945.: 75 While working on his Z4 computer, Zuse realised that programming in machine code was too complicated.
He started working on a PhD thesis detailing the first high-level programming language, Plankalkül ("Plan Calculus"), and, as an elaborate example program, the first real computer chess engine. After the 1945 Luisenstadt bombing, he fled from Berlin to the rural Allgäu. In the extreme deprivation of post-war Germany, Zuse was unable to build computers. He founded one of the earliest computer companies, the Zuse-Ingenieurbüro Hopferau. Capital was raised in 1946 through ETH Zurich and an IBM option on Zuse's patents. In 1947, according to the memoirs of the German computer pioneer Heinz Billing from the Max Planck Institute for Physics, there was a meeting between Alan Turing and Konrad Zuse in Göttingen. The encounter took the form of a colloquium. Participants included John Womersley, Turing and Arthur Porter from England, and a few German researchers such as Zuse, Helmut Schreyer, Alwin Walther, and Billing. (For more details see Herbert Bruderer, Konrad Zuse und die Schweiz.) It was not until 1949 that Zuse was able to resume work on the Z4. He showed the computer to the mathematician Eduard Stiefel of the ETH Zurich, and the two men struck a deal to lend the Z4 to the ETH. In November 1949, Zuse founded another company, Zuse KG, in Haunetal-Neukirchen; in 1957, the company's head office moved to Bad Hersfeld. The Z4 was finished and delivered to the ETH Zurich in July 1950, where it proved very reliable. At that time, it was the only working digital computer in Central Europe, and the second computer in the world to be sold or loaned, beaten only by the BINAC, which never worked properly after it was delivered. Other computers, all numbered with a leading Z, up to the Z43, were built by Zuse and his company. Notable among them are the Z11, which was sold to the optics industry and to universities, and the Z22, the first computer with a memory based on magnetic storage. Unable to do any hardware development, he continued working on Plankalkül, eventually publishing some brief excerpts of his thesis in 1948 and 1959; the work in its entirety, however, remained unpublished until 1972. The PhD thesis was submitted at the University of Augsburg, but it was rejected because Zuse forgot to pay the DM 400 university enrollment fee. The rejection did not bother him. Plankalkül slightly influenced the design of ALGOL 58 but was itself implemented only in 1975, in a dissertation by Joachim Hohmann. Heinz Rutishauser, one of the inventors of ALGOL, wrote: "The very first attempt to devise an algorithmic language was undertaken in 1948 by K. Zuse. His notation was quite general, but the proposal never attained the consideration it deserved." Further implementations followed in 1998 and then in 2000, by a team from the Free University of Berlin. Donald Knuth suggested a thought experiment: what might have happened had the bombing not taken place, and had the PhD thesis accordingly been published as planned? In 1956, Zuse began to work on a high-precision, large-format plotter. It was demonstrated at the 1961 Hanover Fair and became well known outside the technical world as well, thanks to Frieder Nake's pioneering computer art work. Other plotters designed by Zuse include the ZUSE Z90 and ZUSE Z9004. In 1967, Zuse suggested that the universe itself is running on a cellular automaton or similar computational structure (digital physics); in 1969, he published the book Rechnender Raum (translated into English as Calculating Space).
Between 1989 and 1995, Zuse conceptualized and created a purely mechanical, extensible, modular tower automaton he named the "helix tower" ("Helixturm"). The structure is based on a gear drive that employs rotary motion (e.g. provided by a crank) to assemble modular components from a storage space, elevating a tube-shaped tower; the process is reversible, and inverting the input direction will deconstruct the tower and store the components. In 2009, the Deutsches Museum restored Zuse's original 1:30 functional model, which can be extended to a height of 2.7 m. Zuse intended the full construction to reach a height of 120 m, and envisioned it for use with wind power generators and radio transmission installations. Between 1987 and 1989, Zuse recreated the Z1, suffering a heart attack midway through the project. The reconstruction cost 800,000 DM (approximately $500,000) and required four individuals (including Zuse) to assemble it. Funding for this retrocomputing project was provided by Siemens and a consortium of five companies. Personal life Konrad Zuse married Gisela Brandes in January 1945, employing a carriage, himself dressed in tailcoat and top hat and with Gisela in a wedding veil, for Zuse attached importance to a "noble ceremony". Their son Horst, the first of five children, was born in November 1945. While Zuse never became a member of the Nazi Party, he is not known to have expressed any doubts or qualms about working for the Nazi war effort. Much later, he suggested that in modern times, the best scientists and engineers usually have to choose between either doing their work for more or less questionable business and military interests in a Faustian bargain, or not pursuing their line of work at all. After Zuse retired, he focused on his hobby of painting, signing his paintings as "Kuno [von und zu] See". Zuse was an atheist.: 12–13 Zuse died on 18 December 1995 in Hünfeld, Hesse (near Fulda), of heart failure. Awards and honours Zuse received several awards for his work. The Zuse Institute Berlin is named in his honour. The Konrad Zuse Medal of the Gesellschaft für Informatik, and the Konrad Zuse Medal of the Zentralverband des Deutschen Baugewerbes (Central Association of German Construction), are both named after Zuse. A replica of the Z3, as well as the original Z4, is in the Deutsches Museum in Munich. The Deutsches Technikmuseum in Berlin has an exhibition devoted to Zuse, displaying twelve of his machines, including a replica of the Z1 and several of Zuse's paintings. The 100th anniversary of his birth was celebrated by exhibitions, lectures and workshops.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Iowa_State_University] | [TOKENS: 9085] |
Contents Iowa State University Iowa State University of Science and Technology (Iowa State University, Iowa State, or ISU) is a public land-grant research university in Ames, Iowa, United States. Founded in 1858 as the Iowa Agricultural College and Model Farm, Iowa State became one of the nation's first designated land-grant institutions when the Iowa Legislature accepted the provisions of the 1862 Morrill Act on September 11, 1862. On July 4, 1959, the college was officially renamed Iowa State University of Science and Technology. Iowa State is the second largest university in Iowa by total enrollment. The university's academic offerings are administered through eight colleges, including the College of Agriculture and Life Sciences, the College of Veterinary Medicine, the College of Engineering, the Graduate College, the College of Liberal Arts & Sciences, the College of Design, the Debbie and Jerry Ivy College of Business, and the College of Health and Human Sciences. They offer more than 100 bachelor's degree programs, 120 master's degree programs, and 80 doctoral degree programs, plus a professional degree program in Veterinary Medicine. Iowa State is classified among "R1: Doctoral Universities – Very high research activity." The university is affiliated with the Ames National Laboratory, the Biorenewables Research Laboratory, the Plant Sciences Institute, and various other research institutes. Iowa State University's athletic teams, the Cyclones, compete in Division I of the NCAA and are a founding member of the Big 12. History In 1856, the Iowa General Assembly enacted legislation to establish the Iowa Agricultural College and Model Farm. This institution (now Iowa State University) was officially established on March 22, 1858, by the General Assembly. Story County was chosen as the location on June 21, 1859, beating proposals from Johnson, Kossuth, Marshall and Polk counties. The original farm of 648 acres (2.62 km2) was purchased at a cost of $5,379. Iowa was the first state in the nation to accept the provisions of the Morrill Act of 1862, and the state subsequently designated Iowa State as the land-grant college on March 29, 1864. Iowa State University is one of four universities that claim to be the first land-grant institution in the United States, the others being Kansas State University, Michigan State University, and the Pennsylvania State University. From the start, Iowa Agricultural College focused on the ideals that higher education should be accessible to all and that the university should teach liberal and practical subjects. These ideals are integral to the land-grant university. The institution has been coeducational since the first class admitted in 1868. Formal admissions began the following year, and the first graduating class of 1872 consisted of 24 men and two women. The Farm House, the first building on the Iowa State campus, was completed in 1861 before the campus was occupied by students or classrooms. It became the home of the superintendent of the Model Farm and, in later years, the deans of Agriculture, including Seaman Knapp and James "Tama Jim" Wilson. Iowa State's first president, Adonijah Welch, briefly stayed at the Farm House and penned his inaugural speech in a second-floor bedroom. The Iowa Experiment Station was one of the university's prominent features. Practical courses of instruction were taught, including one designed to give general training for the career of a farmer.
Courses in mechanical, civil, electrical, and mining engineering were also part of the curriculum. In 1870, President Welch and I. P. Roberts, professor of agriculture, held three-day farmers' institutes at Cedar Falls, Council Bluffs, Washington, and Muscatine. These became the earliest institutes held off-campus by a land-grant institution and were the forerunners of 20th-century extension. In 1872, the first courses were given in domestic economy (home economics, family and consumer sciences) and were taught by Mary B. Welch, the president's wife. Iowa State became the first land-grant university to offer training in domestic economy for college credit. In 1879, the School of Veterinary Science was organized, becoming the first state veterinary college in the United States. This was originally a two-year course leading to a diploma. The veterinary course of study contained classes in zoology, botany, anatomy of domestic animals, veterinary obstetrics, and sanitary science. William Miller Beardshear was appointed President of Iowa State in 1891. During his tenure, Iowa Agricultural College truly came of age. Beardshear developed new agricultural programs and was instrumental in hiring premier faculty members such as Anson Marston, Louis B. Spinney, J. B. Weems, Perry G. Holden, and Maria Roberts. He also expanded the university administration and added Morrill Hall (1891), the Campanile (1899), Old Botany (now Carrie Chapman Catt Hall) (1892), and Margaret Hall (1895) to the campus, all of which stand today except for Margaret Hall, which was destroyed by a fire in 1938. In his honor, Iowa State named its central administrative building (Central Building) after Beardshear in 1925. In 1898, reflecting the school's growth during his tenure, it was renamed Iowa State College of Agricultural and Mechanic Arts, or Iowa State for short. Today, Beardshear Hall holds the offices of the President, Vice-President, Treasurer, Secretary, Provost, and student financial aid. Catt Hall is named after alumna and famed suffragist Carrie Chapman Catt and is the home of the College of Liberal Arts and Sciences. In 1912, Iowa State had its first Homecoming celebration. The idea was first proposed by Professor Samuel Beyer, the college's "patron saint of athletics", who suggested that Iowa State inaugurate a celebration for alumni during the annual football game against rival University of Iowa. Iowa State's new president, Raymond A. Pearson, liked the idea and issued a special invitation to alumni two weeks prior to the event: "We need you, we must have you. Come and see what a school you have made in Iowa State College. Find a way." In October 2012, Iowa State marked its 100th Homecoming with a "CYtennial" celebration. Iowa State celebrated its first VEISHEA on May 11–13, 1922. Wallace McKee (class of 1922) served as the first chairman of the Central Committee, and Frank D. Paine (professor of electrical engineering) chose the name, based on the first letters of Iowa State's colleges: Veterinary Medicine, Engineering, Industrial Science, Home Economics, and Agriculture. VEISHEA grew to become the largest student-run festival in the nation. The Statistical Laboratory was established in 1933, with George W. Snedecor, professor of mathematics, as the first director. It was the first research and consulting institute of its kind in the country.
While attempting to develop a faster method of computation, mathematics and physics professor John Vincent Atanasoff conceptualized the basic tenets of what would become the world's first electronic digital computer, the Atanasoff–Berry Computer (ABC), during a drive to Illinois in 1937. These included the use of a binary system of arithmetic, the separation of computer and memory functions, and regenerative drum memory, among others. The 1939 prototype was constructed with graduate student Clifford Berry in the basement of the Physics Building. During World War II, Iowa State was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a Navy commission. On July 4, 1959, the college was officially renamed Iowa State University of Science and Technology. However, as a practical matter, the institution's name is simply Iowa State University. By longstanding convention, the "Science and Technology" portion is omitted even in official documents, such as diplomas. Official names given to the university's divisions were the College of Agriculture, College of Engineering, College of Home Economics, College of Sciences and Humanities, and College of Veterinary Medicine. Iowa State's eight colleges today offer more than 100 undergraduate majors and 200 fields of study leading to graduate and professional degrees. The academic program at ISU includes a liberal arts education and research in the biological and physical sciences. The focus on technology has led directly to many research patents and inventions, including the first binary computer, the ABC, Maytag blue cheese, and the round hay baler. Located on a 2,000-acre (8.1 km2) campus, the university has grown considerably from its roots as an agricultural college and model farm and is recognized internationally today for its comprehensive research programs. It has continued to grow, setting a new enrollment record of 36,001 students in the fall of 2015. Iowa State played a role in the development of the atomic bomb during World War II as part of the Manhattan Project, a research and development program begun in 1942 under the Army Corps of Engineers. The process to produce large quantities of high-purity uranium metal became known as the Ames process. One-third of the uranium metal used in the world's first controlled nuclear chain reaction was produced at Iowa State under the direction of Frank Spedding and Harley Wilhelm. The Ames Project received the Army/Navy E Award for Excellence in Production on October 12, 1945, for its work with metallic uranium as a vital war material. Today, ISU is the only university in the United States that has a U.S. Department of Energy research laboratory physically located on its campus. Iowa State is the birthplace of the first electronic digital computer, starting the world's computer technology revolution. Invented by mathematics and physics professor John Atanasoff and engineering graduate student Clifford Berry during 1937–42, the Atanasoff–Berry Computer pioneered important elements of modern computing. On October 19, 1973, U.S. Federal Judge Earl R. Larson signed his decision following a lengthy court trial, declaring the ENIAC patent of Mauchly and Eckert invalid and naming Atanasoff the inventor of the electronic digital computer, the Atanasoff–Berry Computer (ABC).
An ABC Team consisting of Ames Laboratory and Iowa State engineers, technicians, researchers and students unveiled a working replica of the Atanasoff–Berry Computer in 1997, which can be seen on display on campus in the Durham Computation Center. Campus Iowa State's campus contains over 160 buildings. Several buildings, as well as the Marston Water Tower, are listed on the National Register of Historic Places. The central campus includes 490 acres (2.0 km2) of trees, plants, and classically designed buildings. The landscape's most dominant feature is the 20-acre (81,000 m2) central lawn, which was listed as a "medallion site" by the American Society of Landscape Architects in 1999. Thomas Gaines, in The Campus As a Work of Art, claimed that the Iowa State campus was one of the twenty-five most beautiful campuses in the country. The campanile was constructed during 1897–1898 as a memorial to Margaret MacDonald Stanton, Iowa State's first dean of women, who died on July 25, 1895. The tower is located on ISU's central campus, just north of the Memorial Union. The site was selected by Margaret's husband, Edgar W. Stanton, with the help of then-university president William M. Beardshear. The campanile stands 110 feet (34 m) tall on a 16 by 16 foot (5 by 5 m) base, and cost $6,510.20 to construct. The campanile is widely seen as one of the major symbols of Iowa State University. It is featured prominently on the university's official ring and the university's mace, and is also the subject of the university's alma mater, The Bells of Iowa State. Alumni Hall is named for LaVerne W. Noyes, who graduated in 1872 and later donated the funds to see that the building could be completed after it sat unfinished and unused from 1905 to 1907. Lake LaVerne is located west of the Memorial Union and south of Alumni Hall, Carver Hall, and Music Hall. The lake was a gift from Noyes in 1916. Lake LaVerne is the home of two mute swans named Sir Lancelot and Elaine, donated to Iowa State by VEISHEA in 1935. In 1944, 1970, and 1971, cygnets (baby swans) made their home on Lake LaVerne. Previously, Sir Lancelot and Elaine were trumpeter swans, but they were too aggressive and in 1999 were replaced with two mute swans. In early spring 2003, Lake LaVerne welcomed its current pair of mute swans. In support of the Iowa Department of Natural Resources' efforts to re-establish trumpeter swans in Iowa, university officials avoided bringing a breeding pair of male and female mute swans to Iowa State, which is why the current Sir Lancelot and Elaine are both female. Iowa State has maintained a horticulture garden since 1914. Reiman Gardens is the third location for these gardens. Today's gardens began in 1993 with a gift from Bobbi and Roy Reiman. Construction began in 1994, and the Gardens' initial 5 acres (20,000 m2) were officially dedicated on September 16, 1995. Reiman Gardens has since grown to become a 14-acre (57,000 m2) site consisting of a dozen distinct garden areas, an indoor conservatory and an indoor butterfly "wing", butterfly emergence cases, a gift shop, and several supporting greenhouses. Located immediately south of Jack Trice Stadium on the ISU campus, Reiman Gardens is a year-round facility that has become one of the most visited attractions in central Iowa. The Gardens has received a number of national, state, and local awards since its opening, and its rose gardens are particularly noteworthy.
It was honored with the President's Award in 2000 by All American Rose Selections, Inc., which is presented to one public garden in the United States each year for superior rose maintenance and display: "For contributing to the public interest in rose growing through its efforts in maintaining an outstanding public rose garden." The university museums consist of the Brunnier Art Museum, the Farm House Museum, the Art on Campus Program, the Christian Petersen Art Museum, and the Elizabeth and Byron Anderson Sculpture Garden. The Brunnier Art Museum, Iowa's only accredited museum emphasizing a decorative arts collection, is one of the nation's few museums located within a performing arts and conference complex, the Iowa State Center. Founded in 1975, the museum is named after its benefactors, Iowa State alumnus Henry J. Brunnier and his wife Ann. The decorative arts collection they donated, called the Brunnier Collection, is extensive, consisting of ceramics, glass, dolls, ivory, jade, and enameled metals. Other fine and decorative art objects from the University Art Collection include prints, paintings, sculptures, textiles, carpets, wood objects, lacquered pieces, silver, and furniture. About eight to twelve changing exhibitions each year, along with permanent collection exhibitions, provide educational opportunities. Lectures, receptions, conferences, university classes, panel discussions, gallery walks, and gallery talks are presented to assist with further interpretation of the objects. Located near the center of the Iowa State campus, the Farm House Museum sits as a monument to early Iowa State history and culture as well as a National Historic Landmark. As the first building on campus, the Farm House was built in 1860 before campus was occupied by students or even classrooms. The college's first farm tenants primed the land for agricultural experimentation. This early practice led to Iowa State Agricultural College and Model Farm opening its doors to Iowa students for free in 1869 under the Morrill Act (or Land-grant Act) of 1862. Many prominent figures have made the Farm House their home throughout its 150 years of use. The first president of the college, Adonijah Welch, briefly stayed at the Farm House and even wrote his inaugural speech in a bedroom on the second floor. James "Tama Jim" Wilson resided for much of the 1890s with his family at the Farm House until he joined President William McKinley's cabinet as U.S. Secretary of Agriculture. Agriculture Dean Charles Curtiss and his young family replaced Wilson and became the longest-term residents of the Farm House. In 1976, over 110 years after the initial construction, the Farm House became a museum after much time and effort was put into restoring the early beauty of the modest farm home. Today, faculty, students, and community members can enjoy the museum while honoring its significance in shaping a nationally recognized land-grant university. Its holdings include a large collection of 19th- and early 20th-century decorative arts, furnishings and material culture reflecting Iowa State and Iowa heritage. Objects include furnishings from Carrie Chapman Catt and Charles Curtiss, a wide variety of quilts, a modest collection of textiles and apparel, and various china and glassware items. The Farm House Museum is an on-campus educational resource providing a changing environment of exhibitions among the historical permanent collection objects that are on display. Iowa State is home to one of the largest campus public art programs in the United States.
Over 2,000 works of public art, including 600 by significant national and international artists, are located across campus in buildings, courtyards, open spaces and offices. The traditional public art program began during the Depression in the 1930s, when Iowa State College's President Raymond Hughes envisioned that "the arts would enrich and provide substantial intellectual exploration into our college curricula." Hughes invited Grant Wood to create the Library's agricultural murals that speak to the founding of Iowa and Iowa State College and Model Farm. He also offered Christian Petersen a one-semester sculptor residency to design and build the fountain and bas relief at the Dairy Industry Building. In 1955, 21 years later, Petersen retired, having created 12 major sculptures for the campus and hundreds of small studio sculptures. The Art on Campus Collection is a campus-wide resource of over 2,000 public works of art. Programs, receptions, dedications, university classes, Wednesday Walks, and educational tours are presented on a regular basis. The Christian Petersen Art Museum in Morrill Hall is named for the nation's first permanent campus artist-in-residence, Christian Petersen, who sculpted and taught at Iowa State from 1934 through 1955 and is considered the founding artist of the Art on Campus Collection. Named for Justin Smith Morrill, who created the Morrill Land-Grant Colleges Act, Morrill Hall was completed in 1891. Originally constructed to serve as a library, museum, and chapel, its original uses are engraved in the exterior stonework on the east side. The building was vacated in 1996, when it was determined to be unsafe, and was listed in the National Register of Historic Places the same year. In 2005, $9 million was raised to renovate the building and convert it into a museum. Completed and reopened in March 2007, Morrill Hall is home to the Christian Petersen Art Museum. As part of University Museums, the Christian Petersen Art Museum at Morrill Hall is the home of the Christian Petersen Art Collection, the Art on Campus Program, the University Museums' Visual Literacy and Learning Program, and the Contemporary Changing Art Exhibitions Program. Located within the Christian Petersen Art Museum are the Lyle and Nancy Campbell Art Gallery, the Roy and Bobbi Reiman Public Art Studio Gallery, the Margaret Davidson Center for the Study of the Art on Campus Collection, the Edith D. and Torsten E. Lagerstrom Loaned Collections Center, and the Neva M. Petersen Visual Learning Gallery. University Museums also shares the James R. and Barbara R. Palmer Small Objects Classroom in Morrill Hall. The Elizabeth and Byron Anderson Sculpture Garden is located by the Christian Petersen Art Museum at historic Morrill Hall. The sculpture garden design incorporates sculptures, a gathering arena, and sidewalks and pathways. Planted with perennials, ground cover, shrubs, and flowering trees, the landscape design provides a setting for works of 20th- and 21st-century sculpture, primarily American. Ranging from forty-four inches to nearly nine feet in height, and executed in bronze and other metals, these works of art represent both modern and contemporary sculpture. The sculpture garden is adjacent to Iowa State's 22-acre (89,000 m2) central campus. Iowa State's composting facility is capable of processing over 10,000 tons of organic waste every year. The school's $3 million revolving loan fund lends money for energy efficiency and conservation projects on campus.
In the 2011 College Sustainability Report Card issued by the Sustainable Endowments Institute, the university received a B grade. Academics Iowa State University is organized into eight colleges and two schools that offer 100 bachelor's degree programs, 112 master's programs, and 83 Ph.D. programs, as well as a professional degree program in Veterinary Medicine. Classified as one of Carnegie's "R1: Doctoral Universities - Very High Research Activity," Iowa State received nearly $550 million in research grants for fiscal year 2025. The W. Robert and Ellen Sorge Parks Library contains over 2.6 million books and subscribes to more than 98,600 journal titles. Named for W. Robert Parks (1915–2003), the 11th president of Iowa State University, and his wife, Ellen Sorge Parks, the original library was built in 1925, with three subsequent additions made in 1961, 1969, and 1983. The library was dedicated and named after W. Robert and Ellen Sorge Parks in 1984. Surrounding the first-floor lobby staircase in Parks Library are eight mural panels designed by Iowa artist Grant Wood. As with Breaking the Prairie Sod, the other Iowa State University mural Wood painted two years later, Wood borrowed his theme for When Tillage Begins Other Arts Follow from a speech on agriculture delivered by Daniel Webster in 1840 at the State House in Boston, in which farmers were portrayed as the founders of human civilization. The cooperative extension service traces its roots to the farmers' institutes developed at Iowa State in the late 19th century. Committed to community, Iowa State pioneered the outreach mission of the land-grant college through the creation of the first Extension Service in 1902. In 1906, the Iowa Legislature enacted the Agricultural Extension Act, making funds available for demonstration projects. It is believed this was the first specific legislation establishing state extension work, for which Iowa State assumed responsibility. The national extension program was created in 1914, based heavily on the Iowa State model. Research Iowa State University is a member of the Universities Research Association, the University Corporation for Atmospheric Research, and the Association of Public and Land-grant Universities. In fiscal year 2023, Iowa State spent $420.8 million on R&D. Iowa State was a member of the Association of American Universities from 1958 until April 2022. It departed claiming that the AAU's internal ranking indicators unfairly favor institutions with high levels of NIH funding, noting that its strength is not in biomedical research because the school does not have a medical school. Iowa State is the only university in the United States that has a U.S. Department of Energy research laboratory physically located on its campus. Operated by Iowa State, Ames National Laboratory is one of ten national DOE Office of Science research laboratories. ISU research for the government provided Ames National Laboratory its start in the 1940s, with the development of a highly efficient process for producing high-purity uranium for atomic energy. Today, Ames National Laboratory continues its leading status in materials research and focuses its diverse fundamental and applied research strengths on issues of national concern, cultivates research talent, and develops and transfers technologies to improve industrial competitiveness and enhance U.S. economic security. Ames National Laboratory employs more than 500 full- and part-time employees.
Students make up more than 45 percent of the paid workforce. Dan Shechtman, awarded the 2011 Nobel Prize in Chemistry for the discovery of quasicrystals, is an Associate of Ames National Laboratory. The ISU Research Park Corporation was established in 1987 as a not-for-profit, independent corporation operating under a board of directors appointed by Iowa State University and the ISU Foundation. The corporation manages both the Research Park and incubator programs. In 2010, the Biorenewables Research Laboratory opened in a LEED-Gold certified building that complements and helps replace labs and offices across Iowa State and promotes interdisciplinary, systems-level research and collaboration. The Lab houses the Bioeconomy Institute, the Biobased Industry Center, and the National Science Foundation Engineering Research Center for Biorenewable Chemicals, a partnership of six universities as well as the Max Planck Society in Germany and the Technical University of Denmark. The Engineering Teaching and Research Complex was built in 1999 and is home to Stanley and Helen Howe Hall and Gary and Donna Hoover Hall. The complex is occupied by the Virtual Reality Applications Center (VRAC), Center for Industrial Research and Service (CIRAS), Department of Aerospace Engineering and Engineering Mechanics, Department of Materials Science and Engineering, Engineering Computer Support Services, Engineering Distance Education, and Iowa Space Grant Consortium. The complex also contains one of the world's only six-sided immersive virtual reality labs (C6), as well as the 240-seat 3D-capable Alliant Energy Lee Liu Auditorium, the Multimodal Experience Testbed and Laboratory (METaL), and the User Experience Lab (UX Lab), all of which support the research of more than 50 faculty and 200 graduate, undergraduate, and postdoctoral students. The Plant Sciences Institute was founded in 1999. PSI's research focus is to understand the effects of genotype (genetic makeup) and environment on phenotypes (traits) sufficiently well that it will be able to predict the phenotype of a given genotype in a given environment. The institute is housed in the Roy J. Carver Co-Laboratory and is home to the Plant Sciences Institute Faculty Scholars program. There is also the Iowa State University Northeast Research Farm in Nashua. Student life Iowa State operates 20 on-campus residence halls. The residence halls are divided into geographical areas. The Union Drive Association (UDA) consists of four residence halls located on the west side of campus, including Friley Hall, which has been declared one of the largest residence halls in the country. The Richardson Court Association (RCA) consists of 12 residence halls on the east side of campus. The Towers Residence Association (TRA) is located south of the main campus. Two of the four towers, Knapp and Storms Halls, were imploded in 2005; however, Wallace and Wilson Halls still stand. Buchanan Hall and Geoffroy Hall are nominally considered part of the RCA, despite their distance from the other buildings. ISU operates two apartment complexes for upperclassmen, Frederiksen Court and SUV Apartments. The governing body for ISU students is the ISU Student Government. The ISU Student Government is composed of a president, vice president, finance director, cabinet appointed by the president, a clerk appointed by the vice president, senators representing each college and residence area at the university, a nine-member judicial branch, and an election commission.
ISU has over 900 student organizations on campus that represent a variety of interests. Organizations are supported by Iowa State's Student Engagement Office. Many student organization offices are housed in the Memorial Union. The Memorial Union at Iowa State University opened in September 1928 and is currently home to a number of university departments and student organizations, The M-Shop, CyBowl & Billiards, the University Book Store, and the Workspace. The original building was designed by architect William T. Proudfoot. The building employs a classical style of architecture reflecting Greek and Roman influences. The building's design specifically complements the designs of the major buildings surrounding the University's Central Campus area: Beardshear Hall to the west, Curtiss Hall to the east, and MacKay Hall to the north. The style utilizes columns with Corinthian capitals, Palladian windows, triangular pediments, and formally balanced facades. Designed to be a living memorial for ISU students lost in World War I, the building includes a solemn memorial hall, named the Gold Star Room, which honors, engraved in marble, the names of those who died in World War I, World War II, Korea, Vietnam, and the war on terrorism. Symbolically, the hall was built directly over a library (the Browsing Library) and a small chapel, the symbol being that no country would ever send its young men to die in a war for a noble cause without a solid foundation on both education (the library) and religion (the chapel). On Veterans Day in 2014, ISU's "Gold Star Hall" publicly honored Petty Officer Jerry Leroy Converse, a U.S. Navy sailor who was killed in the Israeli attack on the USS Liberty in 1967. Converse is buried at Oak Hill Cemetery in Cherokee, Iowa. This ceremony came 47 years after the attack. Renovations and additions have continued through the years to include elevators, bowling lanes, a parking ramp, a book store, a food court, and additional wings. Fraternities and sororities at ISU include fifty chapters that involve 14.6 percent of undergraduate students. Collectively, fraternity and sorority members have raised over $82,000 for philanthropies and committed 31,416 hours to community service. In 2006, the ISU Greek community was named the best large Greek community in the Midwest. The first fraternity, Delta Tau Delta, was established at Iowa State in 1875, six years after the first graduating class entered Iowa State. The first sorority, I.C. Sorocis, was established two years later, in 1877. I.C. Sorocis later became a chapter of the first national sorority at Iowa State, Pi Beta Phi. Anti-Greek rioting occurred in 1888. As reported in The Des Moines Register, "The anti-secret society men of the college met in a mob last night about 11 o'clock in front of the society rooms in chemical and physical hall, determined to break up a joint meeting of three secret societies." In 1891, President William Beardshear banned students from joining secret college fraternities, resulting in the eventual closing of all formerly established fraternities. President Storms lifted the ban in 1904. Following the lifting of the fraternity ban, thirteen national fraternities were installed on the Iowa State campus between 1904 and 1913. The Iowa State Daily is the university's student newspaper. The Daily has its roots in a news sheet titled the Clipper, which was started in the spring of 1890 by a group of students at Iowa Agricultural College led by F.E. Davidson.
The Clipper soon led to the creation of the Iowa Agricultural College Student, and the beginnings of what would one day become the Iowa State Daily. It was named the 2016 Best All-Around Daily Student Newspaper by the Society of Professional Journalists. 88.5 KURE is the university's student-run radio station. ISUtv is the university's student-run television station. It is housed in the former WOI-TV station that was established in 1950. Iowa State is widely known for VEISHEA, an annual education and entertainment festival that was held on campus each spring. The name VEISHEA was derived from the initials of ISU's five original colleges, forming an acronym as the university existed when the festival was founded in 1922: Veterinary Medicine, Engineering, Industrial Science, Home Economics, and Agriculture. VEISHEA was the largest student-run festival in the nation, bringing in tens of thousands of visitors to the campus each year. VEISHEA was retired as an annual event at Iowa State in 2014. The celebration featured an annual parade and many open-house demonstrations of the university facilities and departments. Campus organizations exhibited products, technologies, and held fundraisers for various charity groups. In addition, VEISHEA brought speakers, lecturers, and entertainers to Iowa State, and throughout its more than eight-decade history it hosted such distinguished guests as Bob Hope, John Wayne, Presidents Harry Truman, Ronald Reagan, and Lyndon Johnson, and performers Diana Ross, Billy Joel, Sonny and Cher, The Who, The Goo Goo Dolls, Bobby V, and The Black Eyed Peas. Athletics The "Cyclones" name dates back to 1895. That year, Iowa suffered an unusually high number of devastating cyclones (as tornadoes were called at the time). In September, Iowa Agricultural College's football team traveled to Northwestern University and defeated that team by a score of 36–0. The next day, the Chicago Tribune's headline read "Struck by a Cyclone: It Comes from Iowa and Devastates Evanston Town." The article began, "Northwestern might as well have tried to play football with an Iowa cyclone as with the Iowa team it met yesterday." The nickname stuck. The school colors are cardinal and gold. The mascot is Cy the Cardinal, introduced in 1954. Since a cyclone was determined to be difficult to depict in costume, the cardinal was chosen in reference to the school colors. A contest was held to select a name for the mascot, with the name Cy being chosen as the winner. The Iowa State Cyclones are a member of the Big 12 Conference and compete in NCAA Division I Football Bowl Subdivision (FBS), fielding 16 varsity teams in 12 sports. The Cyclones also compete in and are a founding member of the Central States Collegiate Hockey League of the American Collegiate Hockey Association. Iowa State's intrastate archrival is the University of Iowa, with whom it competes for the Iowa Corn Cy-Hawk Series trophy, an annual athletic competition between the two schools. Sponsored by the Iowa Corn Growers Association, the competition includes all head-to-head regular season competitions between the two rival universities in all sports. Football first made its way onto the Iowa State campus in 1878 as a recreational sport, but it was not until 1892 that Iowa State organized its first team to represent the school in football. In 1894, college president William M. Beardshear spearheaded the foundation of an athletic association to officially sanction Iowa State football teams. The 1894 team finished with a 6–1 mark. The Cyclones compete each year for traveling trophies.
Since 1977, Iowa State and Iowa have competed annually for the Cy-Hawk Trophy. Iowa State competes in an annual rivalry game against Kansas State known as Farmageddon and against former conference foe Missouri for the Telephone Trophy. The Cyclones play their home games at Jack Trice Stadium, named after Jack Trice, ISU's first African-American athlete and also the first and only Iowa State athlete to die from injuries sustained during athletic competition. Trice died two days after his first game playing for Iowa State against Minnesota in Minneapolis on October 6, 1923. Suffering a broken collarbone early in the game, he continued to play until he was trampled by a group of Minnesota players. It is disputed whether he was trampled purposely or by accident. The stadium was named in his honor in 1997 and is the only NCAA Division I-A stadium named after an African-American. Jack Trice Stadium, formerly known as Cyclone Stadium, opened on September 20, 1975, with a win against the United States Air Force Academy. Hopes of "Hilton Magic" returning received a boost with the hiring of ISU alum, Ames native, and fan favorite Fred Hoiberg as coach of the men's basketball team in April 2010. Hoiberg ("The Mayor") played three seasons under legendary coach Johnny Orr and one season under future Chicago Bulls coach Tim Floyd during his standout collegiate career as a Cyclone (1991–95). Orr laid the foundation of success in men's basketball upon his arrival from Michigan in 1980 and is credited with building Hilton Magic. Besides Hoiberg, other Cyclone greats played for Orr and brought winning seasons, including Jeff Grayer, Barry Stevens, and walk-on Jeff Hornacek. The 1985–86 team was one of the most memorable. Orr coached the team to second place in the Big Eight and produced one of his greatest career wins, a victory over his former team and No. 2 seed Michigan in the second round of the NCAA tournament. Under coaches Floyd (1995–98) and Larry Eustachy (1998–2003), Iowa State achieved even greater success. Floyd took the Cyclones to the Sweet Sixteen in 1997, and Eustachy led ISU to two consecutive Big 12 regular season conference titles in 1999–2000 and 2000–01, plus the conference tournament title in 2000. Seeded No. 2 in the 2000 NCAA tournament, Eustachy and the Cyclones defeated UCLA in the Sweet Sixteen before falling to Michigan State, the eventual NCAA Champion, in the regional finals by a score of 75–64 (the differential representing the Spartans' narrowest margin of victory in the tournament). Standouts Marcus Fizer and Jamaal Tinsley were the scoring leaders for the Cyclones, who finished the season 32–5. Tinsley returned to lead the Cyclones the following year with another conference title and No. 2 seed, but ISU finished the season with a 25–6 overall record after a stunning loss to No. 15 seed Hampton in the first round. In 2011–12, Hoiberg's Cyclones finished third in the Big 12 and returned to the NCAA tournament, dethroning defending national champion Connecticut, 77–64, in the second round before losing in the Round of 32 to top-seeded Kentucky. All-Big 12 First Team selection Royce White led the Cyclones with 38 points and 22 rebounds in the two contests, ending the season at 23–11. The 2013–14 campaign turned out to be another highly successful season.
Iowa State went 28–8, won the Big 12 Tournament, and advanced to the Sweet Sixteen by beating North Carolina in the second round of the NCAA tournament. The Cyclones finished 11–7 in Big 12 play, finishing in a tie for third in the league standings, and beat a school-record nine teams (9–3) that were ranked in the Associated Press top 25. The Cyclones opened the season 14–0, breaking the school record for consecutive wins. Melvin Ejim was named the Big 12 Player of the Year and an All-American by five organizations. Deandre Kane was named the Big 12 Tournament's most valuable player. On June 8, 2015, Steve Prohm took over as head basketball coach, replacing Hoiberg, who left to take the head coaching position with the Chicago Bulls. In his first season with the Cyclones, Prohm secured a No. 4 seed in the Midwest region, where the Cyclones advanced to the Sweet Sixteen before falling to top-seeded Virginia, 84–71. In 2017, Iowa State stunned third-ranked Kansas, 92–89, in overtime, snapping KU's 54-game home winning streak, before winning the 2017 Big 12 men's basketball tournament, its third conference championship in four years, defeating West Virginia in the final. Of Iowa State's 19 NCAA tournament appearances, the Cyclones have reached the Sweet Sixteen eight times (1944, 1986, 1997, 2000, 2014, 2016, 2022, 2024), made two appearances in the Elite Eight (1944, 2000), and reached the Final Four once, in 1944. Iowa State is known for having one of the most successful women's basketball programs in the nation. Since the founding of the Big 12, Coach Bill Fennelly and the Cyclones have won three conference titles (one regular season, two tournament), and have advanced to the Sweet Sixteen five times (1999–2001, 2009, 2010) and the Elite Eight twice (1999, 2009) in the NCAA tournament. The team has one of the largest fan bases in the nation, with attendance figures ranked third in the nation in 2009, 2010, 2012, 2016, 2017, and 2020 and second in the nation in 2013, 2014, 2018 and 2022. Volleyball Coach Christy Johnson-Lynch led the 2012 Cyclones team to a fifth straight 20-win season and a fifth NCAA regional semifinal appearance in six seasons, leading Iowa State to a 22–8 (13–3 Big 12) overall record and a second-place finish in the conference. The Cyclones finished the season with seven wins over top-25 teams, including a victory over the No. 1 Nebraska Cornhuskers, Iowa State's first-ever win over a top-ranked opponent, and handed the 2012 conference and NCAA champion Texas Longhorns their only Big 12 Conference loss. In 2011, Iowa State finished the season 25–6 (13–3 Big 12), placing second in the league, as well as earning a final national ranking of eighth. 2011 is only the second season in which an Iowa State volleyball team has recorded 25 wins. The Cyclones beat No. 9 Florida during the season in Gainesville, the program's sixth win over a top-10 team. In 2009, Iowa State finished the season second in the Big 12 behind Texas with a 27–5 record and ranked No. 6, its highest ever national finish. Johnson-Lynch is the fastest Iowa State coach to reach 100 victories. In 2011, she became the school's winningest volleyball coach when her team defeated the Texas Tech Red Raiders, her 136th coaching victory, in straight sets. Wrestling The ISU wrestling program has captured the NCAA wrestling tournament title eight times between 1928 and 1987, and won the Big 12 Conference Tournament three consecutive years, 2007–2009.
On February 7, 2010, the Cyclones became the first collegiate wrestling program to record its 1,000th dual win in program history by defeating the Arizona State Sun Devils, 30–10, in Tempe, Arizona. In 2002, under former NCAA champion and Olympian coach Bobby Douglas, Iowa State became the first school to produce a four-time, undefeated NCAA Division I champion, Cael Sanderson (considered by the majority of the wrestling community to be the best college wrestler ever), who also took the gold medal at the 2004 Olympic Games in Athens, Greece. Dan Gable, another legendary ISU wrestler, is famous for having lost only one match, his last, in his entire Iowa State collegiate career, and for winning gold at the 1972 Olympics in Munich, Germany, while not giving up a single point. In 2013, Iowa State hosted its eighth NCAA Wrestling Championships. The Cyclones hosted the first NCAA championships in 1928. In February 2017, former Virginia Tech coach and 2016 NWCA Coach of the Year Kevin Dresser was introduced as the new Cyclone wrestling coach, replacing Kevin Jackson.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Clifford_Berry] | [TOKENS: 226] |
Clifford Berry Clifford Edward Berry (April 19, 1918 – October 30, 1963) was an American computer scientist who helped John Vincent Atanasoff create the first digital electronic computer in 1939, the Atanasoff–Berry computer (ABC). Biography Clifford Berry was born April 19, 1918, in Gladbrook, Iowa, to Fred and Grace Berry. His father owned an appliance repair shop, where the younger Berry was able to learn about radios. He graduated from Marengo High School in Marengo, Iowa, in 1934 as the class valedictorian at age 16. He went on to study at Iowa State College (now known as Iowa State University), earning a bachelor's degree in electrical engineering in 1939, followed by a master's degree in physics in 1941. In 1942, he married an ISU classmate and Atanasoff's secretary, Martha Jean Reed. By 1948, he had earned his PhD in physics from Iowa State University. He died in 1963; his death was attributed to "possible suicide".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-35] | [TOKENS: 4993] |
Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between +22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky.
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or The Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and with an ultraviolet light that is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lying at a distance of 1150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis (the former containing the Trapezium) and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years.
Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) form together the reflex bow or the lifted scythe. 
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), and the stars "hanging" from the Belt are known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA, "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim were said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions, or xiù (宿, formerly romanized as sieu). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt.
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as a representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain symbol carved in the Udayagiri and Khandagiri Caves in India in the 1st century BCE bears a striking resemblance to Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter), which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa (Chippewa) Native Americans call this constellation Kabibona'kan, the Winter Maker. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry whoever could retrieve the arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned it and married her, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino groups referred to the belt region in particular as "balatik" (ballista), as it resembles a trap of the same name that fires arrows by itself and is usually used for catching pigs in the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan.
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and sometimes holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th and 18th-century Dutch star charts and seaman's guides. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years.
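The boundary figures quoted in the Characteristics section above lend themselves to a quick computational check. The following Python sketch is a minimal illustration rather than an astronomical tool: it treats Orion as a plain bounding rectangle built from the quoted right-ascension and declination extremes, whereas the actual Delporte boundary is a 26-sided polygon, so positions near the edges may be misclassified. The function names and test coordinates are illustrative choices, not taken from the article.

```python
# Approximate containment test for Orion, using only the boundary
# extremes quoted above (RA 04h 43.3m to 06h 25.5m, Dec -10.97 to
# +22.87 degrees). The true IAU boundary is a 26-sided polygon, so
# this bounding-box check is only a rough approximation.

def hours_to_degrees(hours: float, minutes: float = 0.0) -> float:
    """Convert right ascension from hours/minutes to degrees (15 degrees per hour)."""
    return (hours + minutes / 60.0) * 15.0

RA_MIN = hours_to_degrees(4, 43.3)    # western edge
RA_MAX = hours_to_degrees(6, 25.5)    # eastern edge
DEC_MIN, DEC_MAX = -10.97, 22.87      # southern and northern edges

def roughly_in_orion(ra_deg: float, dec_deg: float) -> bool:
    """True if the position falls inside Orion's bounding rectangle."""
    return RA_MIN <= ra_deg <= RA_MAX and DEC_MIN <= dec_deg <= DEC_MAX

# Betelgeuse (RA ~05h 55.2m, Dec ~ +7.4 deg) lies inside the box;
# Sirius (RA ~06h 45.1m, Dec ~ -16.7 deg) falls outside it.
print(roughly_in_orion(hours_to_degrees(5, 55.2), 7.4))    # True
print(roughly_in_orion(hours_to_degrees(6, 45.1), -16.7))  # False
```

Converting right ascension to degrees first (15° per hour of RA) keeps both coordinates in a single unit; a faithful implementation would instead test each point against the full boundary polygon.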
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Papsukkal] | [TOKENS: 1849] |
Papsukkal Papsukkal (𒀭𒉽𒈛) was a Mesopotamian god regarded as the sukkal (attendant deity) of Anu and his wife Antu in Seleucid Uruk. In earlier periods he was instead associated with Zababa. He acquired his new role through syncretism with Ninshubur. Character Papsukkal was originally the sukkal (attendant and messenger deity) of Zababa, the tutelary god of Kish. His name is a combination of the Sumerian words pap, "older brother," and sukkal. Papsukkal's eventual rise to prominence at the expense of other similar figures, such as Ninshubur, as well as Kakka and Ilabrat, was likely rooted simply in the presence of the word sukkal in his name. In the context of the so-called "antiquarian theology" relying largely on god lists, which developed in Uruk under Achaemenid and Seleucid rule, he was fully identified with Ninshubur and thus became Anu's sukkal and one of the eighteen major deities of the city. In the Seleucid period he was regarded as the sukkal of both Anu and Antu. Papsukkal was also a protector of the parakku, the seat of Anu. While Papsukkal could be depicted as a deity holding a staff, a typical attribute of a sukkal, he also had an individual symbol, a walking bird. He was represented by it on kudurru (inscribed boundary stones). An omen text associates him with the francolin (Akkadian: ittidû), though a variant instead has the name of the god Kakka in place of Papsukkal. The constellation Orion, known in ancient Mesopotamia as Sipazianna, "the true shepherd of heaven," was regarded as the astral symbol of Papsukkal, as well as Ninshubur and Ilabrat. Associations with other deities Frans Wiggermann proposes that Papsukkal was initially viewed as the son of Zababa. In late sources, his father was Anu. A prayer to Papsukkal additionally calls him "offspring of Enmesharra," possibly indicating that a tradition in which Enmesharra was an ancestor of Anu existed. In one case, Papsukkal is listed right behind Enmesharra in a list of vanquished gods. In another similar list, he was equated with Mummu, according to Wilfred G. Lambert most likely based on their shared status as divine viziers (sukkals). Papsukkal's wife was the sparsely attested goddess Amasagnudi. Three possibilities have been proposed regarding her origin: that she was the original sukkal of Anu, replaced in this role by Inanna's sukkal Ninshubur; that she was an epithet of Ninshubur; or that she was the wife of the male form of Ninshubur. References to Amasagnudi from before the Seleucid period are incredibly rare, and according to Paul-Alain Beaulieu as of 1992 known examples were limited to the god list An = Anum and a single lexical text. More recent research revealed a further occurrence of Amasagnudi in the second millennium BCE in an Akkadian incantation against Lamashtu known from a copy from Ugarit, in which she appears alongside Papsukkal. The god list An = Anum, in a section dedicated to Papsukkal, lists five daughters: Pappap, Hedu, Ninhedu, Ninkita and Mišaga. PAP.PAP, written without a "divine determinative" sign, is first attested as a name or title of queen Baranamtara, wife of Lugalanda, and it is not impossible that the name of the daughter of Papsukkal was derived from it. The primary meaning of the name Hedu and the element hedu in Ninhedu was "may it befit," but it was later reinterpreted as a homophonous word meaning "architrave," possibly because the alternate name of Papsukkal's wife Amasagnudi was Ninkagal, "lady of the gate." Ninkita's name possibly means "lady of the doorstep."
Frans Wiggermann notes that while the section dedicated to them uses the name Ninshubur to refer to their father, there is no evidence they were ever regarded as the daughters of the older female version of Ninshubur attested as the sukkal of Inanna in the third millennium BCE. While Papsukkal's origin was distinct from both Ninshubur's and Ilabrat's, he came to be identified with both of them, as well as with Kakka. However, the conflation of Ninshubur and Papsukkal was only finalized in the Seleucid period in Uruk. The earlier Weidner god list does not equate Papsukkal with Ninshubur, and instead places him next to Zababa and Ilaba. The god list An = Anu ša amēli explains the syncretism between them in the following terms: dnin-šubur = dpap-sukkal ša da-nim, "Ninshubur is Papsukkal when Anu is concerned." The late syncretic Papsukkal was not regarded as the sukkal of Anu and Ishtar like Ninshubur, but rather of Anu and his wife Antu. He also takes Ninshubur's role in an Akkadian adaptation of Inanna's Descent, but unlike her he is not directly designated as Ishtar's servant, and the text states that he serves "the great gods" as a group. A god list from Emar identifies Papsukkal with the Hurrian sukkal Tašmišu. Worship The oldest evidence for the worship of Papsukkal comes from Kish from the Old Babylonian period. In later times a temple in this city dedicated to him was also known as E-Akkil, originally the name of Ninshubur's temple in her cult center Akkil, located near Bad-tibira. A ritual known from a text from Sultantepe meant to solve "door troubles" (lumun dalti) involved making offerings to Papsukkal and Ninhedu. In the first millennium BCE, Papsukkal was worshiped in Kish, Babylon (where he had a temple), Arbela, Assur and Bīt-Bēlti. Additionally, a city named Dur-Papsukkal existed near Der. It is mentioned in a document according to which it was besieged by the Assyrian king Shamshi-Adad V, opposed by Babylonian king Marduk-balassu-iqbi and his Elamite allies. Papsukkal was also worshiped in Uruk, but he was only introduced there in the Seleucid period, when the entire pantheon of this city was restructured. Ishtar, Nanaya and their court, encompassing deities such as Uṣur-amāssu, were surpassed in prominence by Anu and Antu. While Anu was not completely absent from Uruk at any point in time between the third and first millennia BCE, his position was that of a "figurehead" and "otiose deity" according to Paul-Alain Beaulieu. He proposes that Anu's rise was the result of Babylon losing its influence after the Persian conquest, which resulted in the development of a new local theology relying on the god list An = Anum (which starts the divine hierarchy with Anu), meant to enhance local pride. A side effect of the process was the introduction of deities connected with Anu, such as Papsukkal and his wife Amasagnudi, to the pantheon of the city. In Seleucid Uruk, Papsukkal was believed to guard the main gate of Bīt Rēš, the temple complex of Anu. A socle dedicated to him located within it was known as the E-gubiduga, "house whose voice is pleasing." A festival dedicated to him, which took the form of a ceremonial meal, took place each year on the seventeenth day of the month Tašritu. Theophoric names invoking Papsukkal are well attested from the Neo-Babylonian period. However, in Seleucid Uruk he is only attested in a single one, which belonged to a scribe. They were also rare in the Old Babylonian period, and unlike Ninshubur he was rarely chosen as a family deity.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Heliacal_rising] | [TOKENS: 1561] |
Heliacal rising The heliacal rising (/hɪˈlaɪ.əkəl/ hih-LY-ə-kəl) of a star or a planet occurs annually, when it becomes visible above the eastern horizon at dawn in the brief moment just before sunrise (thus becoming "the morning star"). A heliacal rising marks the time when a star or planet becomes visible for the first time again in the night sky after having set with the Sun at the western horizon in a previous sunset (its heliacal setting), having since been in the sky only during daytime, obscured by sunlight. Historically, the most important such rising is that of Sirius, which was an important feature of the Egyptian calendar and astronomical development. The rising of the Pleiades heralded the start of the Ancient Greek sailing season, using celestial navigation, as well as the farming season (attested by Hesiod in his Works and Days). Heliacal rising is only one of several types of alignment for stars' risings and settings; mostly the risings and settings of celestial objects are organized into lists of morning and evening risings and settings. Culmination in the evening and culmination in the morning are separated by half a year; evening and morning risings and settings, by contrast, are separated by half a year only at the equator, and at other latitudes by different fractions of the year. Cause and significance Relative to the stars, the Sun appears to drift eastward about one degree per day along a path called the ecliptic, since a complete 360-degree circuit takes about 365 days, the time of one revolution of the Earth around the Sun. The star's heliacal rising will occur when the Earth has moved to a point in its orbit where the star appears on the eastern horizon at dawn. Each day after the heliacal rising, the star will rise slightly earlier and remain visible for longer before the light from the rising sun overwhelms it. Over the following days the star will move further and further westward (about one degree per day) relative to the Sun, until eventually it is no longer visible in the sky at sunrise because it has already set below the western horizon. This is called the acronycal setting. The same star will reappear in the eastern sky at dawn approximately one year after its previous heliacal rising. For stars near the ecliptic, the small difference between the solar and sidereal years due to axial precession will cause their heliacal rising to recur about one sidereal year (about 365.2564 days) later, though this depends on its proper motion. For stars far from the ecliptic, the period is somewhat different and varies slowly, but in any case the heliacal rising will move all the way through the zodiac in about 26,000 years due to precession of the equinoxes. Because the heliacal rising depends on the observation of the object, its exact timing can be dependent on weather conditions. Heliacal phenomena and their use throughout history have made them useful points of reference in archeoastronomy. Non-application to circumpolar stars Some stars, when viewed from latitudes not at the equator, do not rise or set. These are circumpolar stars, which are either always above the horizon or never above it. For example, the North Star (Polaris) is not visible in Australia and the Southern Cross is not seen in Europe, because they always stay below the respective horizons.
The term circumpolar is somewhat localised: between the Tropic of Cancer and the Equator, the southern polar constellations have a brief spell of annual visibility (and thus a "heliacal" rising and "cosmic" setting), and the same applies to the northern polar constellations between the Tropic of Capricorn and the Equator. History Constellations containing stars that rise and set were incorporated into early calendars or zodiacs. The Sumerians, Babylonians, Egyptians, and Greeks all used the heliacal risings of various stars for the timing of agricultural activities. Because of its position about 40° off the ecliptic, the heliacal risings of the bright star Sirius in Ancient Egypt recurred not at intervals of exactly one sidereal year but at intervals called the "Sothic year" (from "Sothis", the name for the star Sirius). The Sothic year was about a minute longer than a Julian year of 365.25 days. Since the development of civilization, this has occurred at Cairo approximately on July 19 on the Julian calendar. Its returns also roughly corresponded to the onset of the annual flooding of the Nile, although the flooding is based on the tropical year and so would occur about three quarters of a day earlier per century in the Julian or Sothic year. (July 19, 1000 BC in the Julian Calendar is July 10 in the proleptic Gregorian Calendar. At that time, the sun would be somewhere near Regulus in Leo, where it is around August 21 in the 2020s.) The ancient Egyptians appear to have constructed their 365-day civil calendar at a time when Wep Renpet, its New Year, corresponded with Sirius's return to the night sky. Although this calendar's lack of leap years caused the event to shift one day every four years or so, astronomical records of this displacement led to the discovery of the Sothic cycle and, later, the establishment of the more accurate Julian and Alexandrian calendars. The Egyptians also devised a method of telling the time at night based on the heliacal risings of 36 decan stars, one for each 10° segment of the 360° circle of the zodiac and corresponding to the ten-day "weeks" of their civil calendar. To the Māori of New Zealand, the Pleiades are called Matariki, and their heliacal rising signifies the beginning of the new year (around June). The Mapuche of South America called the Pleiades Ngauponi; around we tripantu (the Mapuche new year) they disappear in the west, toward lafkenmapu or ngulumapu, and reappear at dawn in the east a few days before the birth of new life in nature. The heliacal rising of Ngauponi, the appearance of the Pleiades above the horizon more than an hour before the Sun roughly 12 days before the winter solstice, announced we tripantu. When a planet has a heliacal rising, there is a conjunction with the sun beforehand. Depending on the type of conjunction, there may be a syzygy, eclipse, transit, or occultation of the sun. Acronycal and cosmic(al) The rising of a planet above the eastern horizon at sunset is called its acronycal rising, which for a superior planet signifies an opposition, another type of syzygy. When the Moon has an acronycal rising, it occurs near full moon, and thus two or three times a year coincides with a noticeable lunar eclipse. Cosmic(al) can refer to rising with sunrise or setting at sunset, or the first setting at morning twilight. Risings and settings are furthermore differentiated between apparent (those discussed above) and actual or true risings or settings. The use of the terms cosmical and acronycal is not consistent.
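Two of the rates discussed in this article invite a back-of-the-envelope check: the Sun's eastward drift of roughly one degree per day, which governs how quickly a star re-emerges from the dawn glow after conjunction, and the quarter-day annual slippage of the 365-day Egyptian civil calendar against the Sothic year. The Python sketch below is a toy model under stated assumptions: the 10° "arcus visionis" threshold (the Sun-star separation needed for a first dawn sighting) is an assumed illustrative value, not a figure from this article, and the model ignores latitude, declination, and atmospheric extinction.

```python
# Toy arithmetic for two rates discussed in the article. The arcus
# visionis value below is an assumed illustration, not a quoted figure.

SOLAR_DRIFT_DEG_PER_DAY = 360.0 / 365.2422   # ~0.9856 deg/day eastward

def days_from_conjunction_to_rising(arcus_visionis_deg: float = 10.0) -> float:
    """Rough days between a star's conjunction with the Sun and its
    heliacal rising: the star-Sun separation grows at the Sun's drift
    rate, so divide the required separation by that rate (ignoring
    latitude, declination, and extinction)."""
    return arcus_visionis_deg / SOLAR_DRIFT_DEG_PER_DAY

print(round(days_from_conjunction_to_rising(), 1))  # ~10.1 days

# Sothic-cycle arithmetic: a fixed 365-day civil year falls behind a
# ~365.25-day Sothic year by a quarter day annually, so the civil New
# Year drifts through the whole year in about 365.25 / 0.25 civil years.
print(round(365.25 / (365.25 - 365.0)))  # 1461
```

Under these assumptions a star reappears at dawn roughly ten days after conjunction, and the civil calendar returns to alignment with Sirius after about 1,461 civil years, the familiar length of the Sothic cycle.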
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sah_(god)] | [TOKENS: 151] |
Sah (god) Sah (sꜣḥ) was a god in Ancient Egyptian religion, representing a constellation that encompassed the stars in Orion and Lepus, as well as stars found in some neighbouring modern constellations. His consort was Sopdet, known by the ancient Greek name Sothis, the goddess of the star Sirius. Sah came to be associated with a more important deity, Osiris, and Sopdet with Osiris's consort Isis. Sah was frequently mentioned as "the Father of Gods" in the Old Kingdom Pyramid Texts. The pharaoh was thought to travel to Orion after his death.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Martin_Waldseem%C3%BCller] | [TOKENS: 1752] |
Martin Waldseemüller (c. 1470 – 16 March 1520) was a German cartographer and humanist scholar. Sometimes known by the Hellenized form of his name, Hylacomylus, his work was influential among contemporary cartographers. He and his collaborator Matthias Ringmann are credited with the first recorded usage of the word America to name a portion of the New World in honour of Italian explorer Amerigo Vespucci in a world map they delineated in 1507. The same map was the first to show the Americas as a distinct landmass clearly separated from Asia by the Pacific Ocean. Waldseemüller was also the first to produce a printed globe and the first to create a printed wall map of Europe. A set of his maps printed as an appendix to the 1513 edition of Ptolemy's Geography is considered to be the first example of a modern atlas. Life and works Details of Waldseemüller's life are scarce. He was born around 1470 in the German town of Wolfenweiler. His father was a butcher and moved to Freiburg (now Freiburg im Breisgau) in about 1480. Records show that Waldseemüller was enrolled in 1490 at the University of Freiburg, where Gregor Reisch, a noted humanist scholar, was one of his influential teachers; the printer Johannes Schott was his classmate. After finishing at the university, he lived in Basel, where he was ordained a priest and apparently gained experience in printing and engraving while working with the printer community in Basel. Around 1500, an association of humanist scholars formed in Saint Dié, in the Duchy of Lorraine, under the patronage of René II, Duke of Lorraine. They called themselves the Gymnasium Vosagense and their leader was Walter Lud. Their initial intention was to publish a new edition of Ptolemy's Geography. Waldseemüller was invited to join the group and contribute his skills as a cartographer. How he came to the group's attention is unclear, but Lud later described him as a master cartographer. Matthias Ringmann was also brought into the group because of his previous work with the Geography and his knowledge of Greek and Latin. Ringmann and Waldseemüller soon became friends and collaborators. In 1506, the Gymnasium obtained a French translation of the Soderini Letter, a booklet attributed to Amerigo Vespucci that provided a sensational account of four alleged Vespucci voyages to explore the coast of lands recently discovered in the western Atlantic. The Gymnasium surmised that this was the "new world" or the "antipodes" hypothesized by classical writers. The Soderini Letter gave Vespucci credit for discovery of this new continent and implied that newly obtained Portuguese maps were based on his explorations. They decided to put aside the Geography for the moment and publish a brief Introduction to Cosmography with an accompanying world map. The Introduction was written by Ringmann and included a Latin translation of the Soderini Letter. In a preface to the letter, Ringmann wrote "I see no reason why anyone could properly disapprove of a name derived from that of Amerigo, the discoverer, a man of sagacious genius. A suitable form would be Amerige, meaning Land of Amerigo, or America, since Europe and Asia have received women's names." While Ringmann was writing the Introduction, Waldseemüller focused on the creation of a world map using an aggregation of sources, including maps based on the works of Ptolemy, Henricus Martellus, Alberto Cantino, and Nicolò de Caverio. 
In addition to a large 12-panel wall map, Waldseemüller created a smaller, simplified globe. The wall map was decorated with prominent portraits of Ptolemy and Vespucci. The map and globe were notable for showing the New World as a continent separate from Asia and for naming the southern landmass America. By April 1507, the map, globe and accompanying book, Introduction to Cosmography, were published. A thousand copies were printed and sold throughout Europe. The Introduction and map were a great success, and four editions were printed in the first year alone. The map was widely used in universities and was influential among cartographers who admired the craftsmanship that went into its creation. In the following years, other maps were printed that often incorporated the name America. Although Waldseemüller had intended the name to apply only to a specific part of Brazil, other maps applied it to the entire continent. In 1538, Gerardus Mercator used America to name both the north and south continents on his influential map, and by this point, the name was securely fixed on the New World. After 1507, Waldseemüller and Ringmann continued to collaborate on a new edition of Ptolemy's Geography. In 1508, Ringmann travelled to Italy and obtained a Greek manuscript of Geography (Codex Vaticanum Graecorum 191). With this key reference, they continued to make progress and Waldseemüller was able to finish his maps. Completion was forestalled, though, when their patron, Duke René II, died in 1508. The new edition was finally printed in 1513 by Johannes Schott in Strasbourg. By then, Waldseemüller had pulled out of the project and was not credited for his cartographic work. Nevertheless, his maps were recognized as important contributions to the science of cartography and were considered a standard reference work for many decades. About 20 of Waldseemüller's tabulae modernae (modern maps) were included in the new Geography as a separate appendix, Claudii Ptolemaei Supplementem. This supplement constitutes the first modern atlas. Maps of Lorraine and the upper Rhine region were the first printed maps of those regions and were probably based on survey work done by Waldseemüller himself. The world map published in the 1513 Geography seems to indicate that Waldseemüller had second thoughts about the name and the nature of the lands discovered in the western Atlantic. The New World was no longer clearly shown as a continent separate from Asia, and the name America had been replaced with Terra Incognita (Unknown Land). What caused him to make these changes is not clear, but perhaps he was influenced by contemporary criticism that Vespucci had usurped Columbus's primacy of discovery. Waldseemüller was also interested in surveying and surveying instruments. In 1508, he contributed a treatise on surveying and perspective to the fourth edition of Gregor Reisch's Margarita Philosophica. He included an illustration of a forerunner to the theodolite, a surveying instrument he called the polimetrum. In 1511, he published the Carta Itineraria Europae, a road map of Europe that showed important trade routes and pilgrim routes from central Europe to Santiago de Compostela, Spain. It was the first printed wall map of Europe. In 1516, he produced another large-scale wall map of the world, the Carta Marina Navigatoria, printed in Strasbourg. It was designed in the style of portolan charts and consisted of 12 printed sheets. 
The Paris Green Globe (or Globe vert) has been attributed to Waldseemüller by experts at the Bibliothèque Nationale, but the attribution is not universally accepted. Waldseemüller died without a will on 16 March 1520 in Saint Dié in the Upper Rhenish Circle of the Holy Roman Empire, where he had served as a canon in the collegiate Church of Saint Dié since 1514. 1507 map rediscovered The 1507 wall map was lost for a long time, but a copy was found in Schloss Wolfegg in southern Germany by Joseph Fischer in 1901. It is the only known copy and was purchased by the United States Library of Congress in May 2003 for $10 million, the highest price paid up to that time for a historical document. Five copies of Waldseemüller's globular map survive in the form of "gores": printed map sections that were intended to be cut out and pasted onto a wooden globe. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ancient_Egyptian_deities] | [TOKENS: 12020] |
Ancient Egyptian deities are the gods and goddesses worshipped in ancient Egypt. The beliefs and rituals surrounding these gods formed the core of ancient Egyptian religion, which emerged sometime in prehistory. Deities represented natural forces and phenomena, and the Egyptians supported and appeased them through offerings and rituals so that these forces would continue to function according to maat, or divine order. After the founding of the Egyptian state around 3100 BC, the authority to perform these tasks was controlled by the pharaoh, who claimed to be the gods' representative and managed the temples where the rituals were carried out. The gods' complex characteristics were expressed in myths and in intricate relationships between deities: family ties, loose groups and hierarchies, and combinations of separate gods into one. Deities' diverse appearances in art—as animals, humans, objects, and combinations of different forms—also alluded, through symbolism, to their essential features. In different eras, various gods were said to hold the highest position in divine society, including the solar deity Ra, the mysterious god Amun, and the mother goddess Isis. The highest deity was usually credited with the creation of the world and often connected with the life-giving power of the sun. Some scholars have argued, based in part on Egyptian writings, that the Egyptians came to recognize a single divine power that lay behind all things and was present in all the other deities. Yet they never abandoned their original polytheistic view of the world, except possibly during the era of Atenism in the 14th century BC, when official religion focused exclusively on an abstract solar deity, the Aten. Gods were assumed to be present throughout the world, capable of influencing natural events and the course of human lives. People interacted with them in temples and unofficial shrines, for personal reasons as well as for larger goals of state rites. Egyptians prayed for divine help, used rituals to compel deities to act, and called upon them for advice. Humans' relations with their gods were a fundamental part of Egyptian society. Definition The beings in ancient Egyptian tradition who might be labeled as deities are difficult to count. Egyptian texts list the names of many deities whose nature is unknown, and make vague, indirect references to other gods who are not even named. The Egyptologist James P. Allen estimates that more than 1,400 deities are named in Egyptian texts, whereas his colleague Christian Leitz says there are "thousands upon thousands" of gods. The Egyptian language's terms for these beings were nṯr, "god", and its feminine form nṯrt, "goddess". Scholars have tried to discern the original nature of the gods by proposing etymologies for these words, but none of these suggestions has gained acceptance, and the terms' origin remains obscure. The hieroglyphs that were used as ideograms and determinatives in writing these words show some of the traits that the Egyptians connected with divinity. The most common of these signs is a flag flying from a pole. Similar objects were placed at the entrances of temples, representing the presence of a deity, throughout ancient Egyptian history. Other such hieroglyphs include a falcon, reminiscent of several early gods who were depicted as falcons, and a seated male or female deity. 
The feminine form could also be written with an egg as determinative, connecting goddesses with creation and birth, or with a cobra, reflecting the use of the cobra to depict many female deities. The Egyptians distinguished nṯrw, "gods", from rmṯ, "people", but the meanings of the Egyptian and the English terms do not match perfectly. The term nṯr may have applied to any being that was in some way outside the sphere of everyday life. Deceased humans were called nṯr because they were considered to be like the gods, whereas the term was rarely applied to many of Egypt's lesser supernatural beings, which modern scholars often call "demons". Egyptian religious art also depicts places, objects, and concepts in human form. These personified ideas range from deities that were important in myth and ritual to obscure beings, only mentioned once or twice, that may be little more than metaphors. Confronting these blurred distinctions between gods and other beings, scholars have proposed various definitions of a "deity". One widely accepted definition, suggested by Jan Assmann, says that a deity has a cult, is involved in some aspect of the universe, and is described in mythology or other forms of written tradition. According to a different definition, by Dimitri Meeks, nṯr applied to any being that was the focus of ritual. From this perspective, "gods" included the king, who was called a god after his coronation rites, and deceased souls, who entered the divine realm through funeral ceremonies. Likewise, the preeminence of the great gods was maintained by the ritual devotion that was performed for them across Egypt. Origins The first written evidence of deities in Egypt comes from the Early Dynastic Period (c. 3100–2686 BC). Deities must have emerged sometime in the preceding Predynastic Period (before 3100 BC) and grown out of prehistoric religious beliefs. Predynastic artwork depicts a variety of animal and human figures. Some of these images, such as stars and cattle, are reminiscent of important features of Egyptian religion in later times, but in most cases, there is not enough evidence to say whether the images are connected with deities. As Egyptian society grew more sophisticated, clearer signs of religious activity appeared. The earliest known temples appeared in the last centuries of the predynastic era, along with images that resemble the iconographies of known deities: the falcon that represents Horus and several other gods, the crossed arrows that stand for Neith, and the enigmatic "Set animal" that represents Set. Many Egyptologists and anthropologists have suggested theories about how the gods developed in these early times. Gustave Jéquier, for instance, thought the Egyptians first revered primitive fetishes, then deities in animal form, and finally deities in human form, whereas Henri Frankfort argued that the gods must have been envisioned in human form from the beginning. Some of these theories are now regarded as too simplistic, and more current ones, such as Siegfried Morenz' hypothesis that deities emerged as humans began to distinguish themselves from their environment and to "personify" ideas about it, are difficult to prove. Predynastic Egypt originally consisted of small, independent villages. 
Because many deities in later times were strongly tied to particular towns and regions, many scholars have suggested that the pantheon formed as disparate communities coalesced into larger states, spreading and intermingling the worship of the old local deities. Others have argued that the most important predynastic gods were, like other elements of Egyptian culture, present all across the country despite its political divisions. The final step in the formation of Egyptian religion was the unification of Egypt, in which rulers from Upper Egypt made themselves pharaohs of the entire country. These sacred kings and their subordinates assumed the right to interact with the gods, and kingship became the unifying focus of the religion. New deities continued to emerge after this transformation. Some important deities such as Isis and Amun are not known to have appeared until the Old Kingdom (c. 2686–2181 BC). Places and concepts could inspire the creation of a deity to represent them, and deities were sometimes created to serve as opposite-sex counterparts to established gods or goddesses. Kings were said to be divine, although only a few continued to be worshipped long after their deaths. Some non-royal humans were said to have the favor of the gods and were venerated accordingly. This veneration was usually short-lived, but the court architects Imhotep and Amenhotep son of Hapu were regarded as gods centuries after their lifetimes, as were some other officials. Through contact with neighboring civilizations, the Egyptians also adopted foreign deities. The goddess Miket, who occasionally appeared in Egyptian texts beginning in the Middle Kingdom (c. 2055–1650 BC), may have been adopted from the religion of Nubia to the south, and a Nubian ram deity may have influenced the iconography of Amun. During the New Kingdom (c. 1550–1070 BC), several deities from Canaanite religion were incorporated into that of Egypt, including Baal, Resheph, and Anat. In Greek and Roman times, from 332 BC to the early centuries AD, deities from across the Mediterranean world were revered in Egypt, but the native gods remained, and they often absorbed the cults of these newcomers into their own worship. Characteristics Modern knowledge of Egyptian beliefs about the gods is mostly drawn from religious writings produced by the nation's scribes and priests. These people were the elite of Egyptian society and were very distinct from the general populace, most of whom were illiterate. Little is known about how well this broader population knew or understood the sophisticated ideas that the elite developed. Commoners' perceptions of the divine may have differed from those of the priests. The populace may, for example, have treated the religion's symbolic statements about the gods and their actions as literal truth. But overall, what little is known about popular religious belief is consistent with the elite tradition. The two traditions form a largely cohesive vision of the gods and their nature. Most Egyptian deities represent natural or social phenomena. The gods were generally said to be immanent in these phenomena—to be present within nature. The types of phenomena they represented include physical places and objects as well as abstract concepts and forces. The god Shu was the deification of all the world's air; the goddess Meretseger oversaw a limited region of the earth, the Theban Necropolis; and the god Sia personified the abstract notion of perception. Major gods were often involved in several types of phenomena. 
For instance, Khnum was the god of Elephantine Island in the midst of the Nile, the river that was essential to Egyptian civilization. He was credited with producing the annual Nile flood that fertilized the country's farmland. Perhaps as an outgrowth of this life-giving function, he was said to create all living things, fashioning their bodies on a potter's wheel. Gods could share the same role in nature; Ra, Atum, Khepri, Horus, and other deities acted as sun gods. Despite their diverse functions, most gods had an overarching role in common: maintaining maat, the universal order that was a central principle of Egyptian religion and was itself personified as a goddess. Yet some deities represented disruption to maat. Most prominently, Apep was the force of chaos, constantly threatening to annihilate the order of the universe, and Set was an ambivalent member of divine society who could both fight disorder and foment it. Not all aspects of existence were seen as deities. Although many deities were connected with the Nile, no god personified it in the way that Ra personified the sun. Short-lived phenomena, such as rainbows or eclipses, were not represented by gods; neither were fire, water, or many other components of the world. The roles of each deity were fluid, and each god could expand its nature to take on new characteristics. As a result, gods' roles are difficult to categorize or define. Despite this flexibility, the gods had limited abilities and spheres of influence. Not even the creator god could reach beyond the boundaries of the cosmos that he created, and even Isis, though she was said to be the cleverest of the gods, was not omniscient. Richard H. Wilkinson, however, argues that some texts from the late New Kingdom suggest that as beliefs about the god Amun evolved he was thought to approach omniscience and omnipresence, and to transcend the limits of the world in a way that other deities did not. The deities with the most limited and specialized domains are often called "minor divinities" or "demons" in modern writing, although there is no firm definition for these terms. Some demons were guardians of particular places, especially in the Duat, the realm of the dead. Others wandered through the human world and the Duat, either as servants and messengers of the greater gods or as roving spirits that caused illness or other misfortunes among humans. Demons' position in the divine hierarchy was not fixed. The protective deities Bes and Taweret originally had minor, demon-like roles, but over time they came to be credited with great influence. The most feared beings in the Duat were regarded as both disgusting and dangerous to humans. Over the course of Egyptian history, they came to be regarded as fundamentally inferior members of divine society and to represent the opposite of the beneficial, life-giving major gods. Yet even the most revered deities could sometimes exact vengeance on humans or each other, displaying a demon-like side to their character and blurring the boundaries between demons and gods. Divine behavior was believed to govern all of nature. Except for the few deities who disrupted the divine order, the gods' actions maintained maat and created and sustained all living things. They did this work using a force the Egyptians called heka, a term usually translated as "magic". Heka was a fundamental power that the creator god used to form the world and the gods themselves. The gods' actions in the present are described and praised in hymns and funerary texts. 
In contrast, mythology mainly concerns the gods' actions during a vaguely imagined past in which the gods were present on earth and interacted directly with humans. The events of this past time set the pattern for the events of the present. Periodic occurrences were tied to events in the mythic past; the succession of each new pharaoh, for instance, reenacted Horus's accession to the throne of his father Osiris. Myths are metaphors for the gods' actions, which humans cannot fully understand. They contain seemingly contradictory ideas, each expressing a particular perspective on divine events. The contradictions in myth are part of the Egyptians' many-faceted approach to religious belief—what Henri Frankfort called a "multiplicity of approaches" to understanding the gods. In myth, the gods behave much like humans. They feel emotion; they can eat, drink, fight, weep, sicken, and die. Some have unique character traits. Set is aggressive and impulsive, and Thoth, patron of writing and knowledge, is prone to long-winded speeches. Yet overall, the gods are more like archetypes than well-drawn characters. Deities' mythic behavior is inconsistent, and their thoughts and motivations are rarely stated. Most myths lack highly developed characters and plots, because their symbolic meaning was more important than elaborate storytelling. Characters were even interchangeable. Different versions of a myth could portray different deities playing the same role, as in the myths of the Eye of Ra, a feminine aspect of the sun god who was represented by many goddesses. The first divine act is the creation of the cosmos, described in several creation myths. They focus on different gods, each of which may act as a creator deity. The eight gods of the Ogdoad, who represent the chaos that precedes creation, give birth to the sun god, who establishes order in the newly formed world; Ptah, who embodies thought and creativity, gives form to all things by envisioning and naming them; Atum produces all things as emanations of himself; and Amun, according to the theology promoted by his priesthood, preceded and created the other creator gods. These and other versions of the events of creation were not seen as contradictory. Each gives a different perspective on the complex process by which the organized universe and its many deities emerged from undifferentiated chaos. The period following creation, in which a series of gods rule as kings over the divine society, is the setting for most myths. The gods struggle against the forces of chaos and among each other before withdrawing from the human world and installing the historical kings of Egypt to rule in their place. A recurring theme in these myths is the effort of the gods to maintain maat against the forces of disorder. They fight vicious battles with the forces of chaos at the start of creation. Ra and Apep, battling each other each night, continue this struggle into the present. Another prominent theme is the gods' death and revival. The clearest instance where a god dies is the myth of Osiris's murder, in which that god is resurrected as ruler of the Duat.[Note 1] The sun god is also said to grow old during his daily journey across the sky, sink into the Duat at night, and emerge as a young child at dawn. In the process, he comes into contact with the rejuvenating water of Nun, the primordial chaos. Funerary texts that depict Ra's journey through the Duat also show the corpses of gods who are enlivened along with him. 
Instead of being changelessly immortal, the gods periodically died and were reborn by repeating the events of creation, thus renewing the whole world. Nonetheless, it was always possible for this cycle to be disrupted and for chaos to return. Some poorly understood Egyptian texts even suggest that this calamity is destined to happen—that the creator god will one day dissolve the order of the world, leaving only himself and Osiris amid the primordial chaos. Gods were linked to specific regions of the universe. In Egyptian tradition, the world includes the earth, the sky, and the underworld. Surrounding them is the dark formlessness that existed before creation. The gods in general were said to dwell in the sky, although gods whose roles were linked with other parts of the universe were said to live in those places instead. Most events of mythology, set in a time before the gods' withdrawal from the human realm, take place in an earthly setting. The deities there sometimes interact with those in the sky. The underworld, in contrast, is treated as a remote and inaccessible place, and the gods who dwell there have difficulties in communicating with those in the world of the living. The space outside the cosmos is also said to be very distant. It too is inhabited by deities, some hostile and some beneficial to the other gods and their orderly world. In the time after myth, most gods were said to be either in the sky or invisibly present within the world. Temples were their main means of contact with humanity. Each day, it was believed, the gods moved from the divine realm to their temples, their homes in the human world. There they inhabited the cult images, the statues that depicted deities and allowed humans to interact with them in temple rituals. This movement between realms was sometimes described as a journey between the sky and the earth. As temples were the focal points of Egyptian cities, the god in a city's main temple was the patron deity for the city and the surrounding region. Deities' spheres of influence on earth centered on the towns and regions they presided over. Many gods had more than one cult center and their local ties changed over time. They could establish themselves in new cities, or their range of influence could contract. Therefore, a given deity's main cult center in historical times is not necessarily his or her place of origin. The political influence of a city could affect the importance of its patron deity. When kings from Thebes took control of the country at the start of the Middle Kingdom (c. 2055–1650 BC), they elevated Thebes' patron gods—first the war god Montu and then Amun—to national prominence. In Egyptian belief, names express the fundamental nature of the things to which they refer. In keeping with this belief, the names of deities often relate to their roles or origins. The name of the predatory goddess Sekhmet means "powerful one", the name of the mysterious god Amun means "hidden one", and the name of Nekhbet, who was worshipped in the city of Nekheb, means "she of Nekheb". Many other names have no certain meaning, even when the gods who bear them are closely tied to a single role. The names of the sky goddess Nut and the earth god Geb do not resemble the Egyptian terms for sky and earth. The Egyptians also devised false etymologies giving more meanings to divine names. 
A passage in the Coffin Texts renders the name of the funerary god Sokar as sk r, meaning "cleaning of the mouth", to link his name with his role in the Opening of the Mouth ritual, while one in the Pyramid Texts says the name is based on words shouted by Osiris in a moment of distress, connecting Sokar with the most important funerary deity. The gods were believed to have many names. Among them were secret names that conveyed their true natures more profoundly than others. To know the true name of a deity was to have power over it. The importance of names is demonstrated by a myth in which Isis poisons the superior god Ra and refuses to cure him unless he reveals his secret name to her. Upon learning the name, she tells it to her son, Horus, and by learning it they gain greater knowledge and power. In addition to their names, gods were given epithets, like "possessor of splendor", "ruler of Abydos", or "lord of the sky", that describe some aspect of their roles or their worship. Because of the gods' multiple and overlapping roles, deities can have many epithets—with more important gods accumulating more titles—and the same epithet can apply to many deities. Some epithets eventually became separate deities, as with Werethekau, an epithet applied to several goddesses meaning "great enchantress", which came to be treated as an independent goddess. The host of divine names and titles expresses the gods' multifarious nature. The Egyptians regarded the division between male and female as fundamental to all beings, including deities. Male gods tended to have a higher status than goddesses and were more closely connected with creation and with kingship, while goddesses were more often thought of as helping and providing for humans. Some deities were androgynous, but most examples are found in the context of creation myths, in which the androgynous deity represents the undifferentiated state that existed before the world was created. Atum was primarily male but had a feminine aspect within himself, who was sometimes seen as a goddess, known as Iusaaset or Nebethetepet. Similarly, Neith, who was sometimes regarded as a creator goddess, was said to possess masculine traits but was mainly seen as female. Sex and gender were closely tied to creation and thus rebirth. Male gods were believed to have the active role in conceiving children. Female deities were often relegated to a supporting role, stimulating their male consorts' virility and nurturing their children, although goddesses were given a larger role in procreation late in Egyptian history. Goddesses acted as mythological mothers and wives of kings and thus as prototypes of human queenship. Hathor, who was the mother or consort of Horus and the most important goddess for much of Egyptian history, exemplified this relationship between divinity and the king. Female deities also had a violent aspect that could be seen either positively, as with the goddesses Wadjet and Nekhbet who protected the king, or negatively. The myth of the Eye of Ra contrasts feminine aggression with sexuality and nurturing, as the goddess rampages in the form of Sekhmet or another dangerous deity until the other gods appease her, at which point she becomes a benign goddess such as Hathor who, in some versions, then becomes the consort of a male god. The Egyptian conception of sexuality was heavily focused on heterosexual reproduction, and homosexual acts were usually viewed with disapproval. Some texts nevertheless refer to homosexual behavior between male deities. 
In some cases, most notably when Set sexually assaulted Horus, these acts served to assert the dominance of the active partner and humiliate the submissive one. Other couplings between male deities could be viewed positively and even produce offspring, as in one text in which Khnum is born from the union of Ra and Shu. Egyptian deities are connected in a complex and shifting array of relationships. A god's connections and interactions with other deities helped define its character. Thus Isis, as the mother and protector of Horus, was a great healer as well as the patroness of kings. Such relationships were in fact more important than myths in expressing Egyptians' religious worldview, although they were also the base material from which myths were formed. Family relationships are a common type of connection between gods. Deities often form male and female pairs. Families of three deities, with a father, mother, and child, represent the creation of new life and the succession of the father by the child, a pattern that connects divine families with royal succession. Osiris, Isis, and Horus formed the quintessential family of this type. The pattern they set grew more widespread over time, so that many deities in local cult centers, like Ptah, Sekhmet, and their child Nefertum at Memphis and the Theban Triad at Thebes, were assembled into family triads. Genealogical connections like these vary according to the circumstances. Hathor could act as the mother, consort, or daughter of the sun god, and the child form of Horus acted as the third member of many local family triads. Other divine groups were composed of deities with interrelated roles, or who together represented a region of the Egyptian mythological cosmos. There were sets of gods for the hours of the day and night and for each nome (province) of Egypt. Some of these groups contain a specific, symbolically important number of deities. Paired gods sometimes have similar roles, as do Isis and her sister Nephthys in their protection and support of Osiris. Other pairs stand for opposite but interrelated concepts that are part of a greater unity. Ra, who is dynamic and light-producing, and Osiris, who is static and shrouded in darkness, merge into a single god each night. Groups of three are linked with plurality in ancient Egyptian thought, and groups of four connote completeness. Rulers in the late New Kingdom promoted a particularly important group of three gods above all others: Amun, Ra, and Ptah. These deities stood for the plurality of all gods, as well as for their own cult centers (the major cities of Thebes, Heliopolis, and Memphis) and for many threefold sets of concepts in Egyptian religious thought. Sometimes Set, the patron god of the Nineteenth Dynasty kings and the embodiment of disorder within the world, was added to this group, which emphasized a single coherent vision of the pantheon. Nine, the product of three and three, represents a multitude, so the Egyptians called several large groups "Enneads", or sets of nine, even if they had more than nine members.[Note 2] The most prominent ennead was the Ennead of Heliopolis, an extended family of deities descended from Atum, which incorporates many important gods. The term "ennead" was often extended to include all of Egypt's deities. This divine assemblage had a vague and changeable hierarchy. Gods with broad influence in the cosmos or who were mythologically older than others had higher positions in divine society. 
At the apex of this society was the king of the gods, who was usually identified with the creator deity. In different periods of Egyptian history, different gods were most frequently said to hold this exalted position. Horus was most closely linked with kingship in the Early Dynastic Period, Ra rose to preeminence in the Old Kingdom, Amun was supreme in the New, and in the Ptolemaic and Roman periods, Isis was the divine queen and creator goddess. Newly prominent gods tended to adopt characteristics from their predecessors. Isis absorbed the traits of many other goddesses during her rise, and when Amun became the ruler of the pantheon, he was conjoined with Ra to become a solar deity. The gods were believed to manifest in many forms. The Egyptians had a complex conception of the human soul, consisting of several parts. The spirits of the gods were composed of many of these same elements. The ba was the component of the human or divine soul that affected the world around it. Any visible manifestation of a god's power could be called its ba; thus, the sun was called the ba of Ra. A depiction of a deity was considered a ka, another component of its being, which acted as a vessel for that deity's ba to inhabit. The cult images of gods that were the focus of temple rituals, as well as the sacred animals that represented certain deities, were believed to house divine bas in this way. Gods could be ascribed many bas and kas, which were sometimes given names representing different aspects of the god's nature. Everything in existence was said to be one of the kas of Atum the creator god, who originally contained all things within himself, and one deity could be called the ba of another, meaning that the first god is a manifestation of the other's power. Divine body parts could act as separate deities, like the Eye of Ra and Hand of Atum, both of which were personified as goddesses. The gods were so full of life-giving power that even their bodily fluids could transform into other living things; humankind was said to have sprung from the creator god's tears, and the other deities from his sweat. Nationally important deities gave rise to local manifestations, which sometimes absorbed the characteristics of older regional gods. Horus had many forms tied to particular places, including Horus of Nekhen, Horus of Buhen, and Horus of Edfu. Such local manifestations could be treated almost as separate beings. During the New Kingdom, an oracle supposed to communicate messages from Amun of Pe-Khenty accused a man of stealing clothes. The man consulted two other local oracles of Amun hoping for a different judgment. Gods' manifestations also differed according to their roles. Horus could be a powerful sky god or vulnerable child, and these forms were sometimes counted as independent deities. Gods were combined with each other as easily as they were divided. A god could be called the ba of another, or two or more deities could be joined into one god with a combined name and iconography. Local gods were linked with greater ones, and deities with similar functions were combined. Ra was connected with the local deity Sobek to form Sobek-Ra; with his fellow ruling god, Amun, to form Amun-Ra; with the solar form of Horus to form Ra-Horakhty; and with several solar deities as Horemakhet-Khepri-Ra-Atum. On rare occasions, deities of different sexes could be joined in this way, producing combinations such as Osiris-Neith. This linking of deities is called syncretism. 
Unlike other situations for which this term is used, the Egyptian practice was not meant to fuse competing belief systems, although foreign deities could be syncretized with native ones. Instead, syncretism acknowledged the overlap between deities' roles and extended the sphere of influence for each of them. Syncretic combinations were not permanent; a god who was involved in one combination continued to appear separately and to form new combinations with other deities. Closely connected deities did sometimes merge. Horus absorbed several falcon gods from various regions, such as Khenti-irty and Khenti-kheti, who became little more than local manifestations of him; Hathor subsumed a similar cow goddess, Bat; and an early funerary god, Khenti-Amentiu, was supplanted by Osiris and Anubis. In the reign of Akhenaten (c. 1353–1336 BC) in the mid-New Kingdom, a single solar deity, the Aten, became the sole focus of the state religion. Akhenaten ceased to fund the temples of other deities and erased gods' names and images on monuments, targeting Amun in particular. This new religious system, sometimes called Atenism, differed dramatically from the polytheistic worship of many gods in all other periods. The Aten had no mythology, and it was portrayed and described in more abstract terms than traditional deities. Whereas, in earlier times, newly important gods were integrated into existing religious beliefs, Atenism insisted on a single understanding of the divine that excluded the traditional multiplicity of perspectives. Yet Atenism may not have been full monotheism, which totally excludes belief in other deities. There is evidence suggesting that the general populace continued to worship other gods in private. The picture is further complicated by Atenism's apparent tolerance for some other deities, such as Maat, Shu, and Tefnut. For these reasons, the Egyptologists Dominic Montserrat and John Baines have suggested that Akhenaten may have been monolatrous, worshipping a single deity while acknowledging the existence of others. In any case, Atenism's aberrant theology did not take root among the Egyptian populace, and Akhenaten's successors returned to traditional beliefs. Scholars have long debated whether traditional Egyptian religion ever asserted that the multiple gods were, on a deeper level, unified. Reasons for this debate include the practice of syncretism, which might suggest that all the separate gods could ultimately merge into one, and the tendency of Egyptian texts to credit a particular god with power that surpasses all other deities. Another point of contention is the appearance of the word "god" in wisdom literature, where the term does not refer to a specific deity or group of deities. In the early 20th century, for instance, E. A. Wallis Budge believed that Egyptian commoners were polytheistic, but knowledge of the true monotheistic nature of the religion was reserved for the elite, who wrote the wisdom literature. His contemporary James Henry Breasted thought Egyptian religion was instead pantheistic, with the power of the sun god present in all other gods, while Hermann Junker argued that Egyptian civilization had been originally monotheistic and became polytheistic in the course of its history. In 1971, Erik Hornung published a study[Note 3] rebutting such views. He points out that in any given period many deities, even minor ones, were described as superior to all others. 
He also argues that the unspecified "god" in the wisdom texts is a generic term for whichever deity is relevant to the reader in the situation at hand. Although the combinations, manifestations, and iconographies of each god were constantly shifting, they were always restricted to a finite number of forms, never becoming fully interchangeable in a monotheistic or pantheistic way. Henotheism, Hornung says, describes Egyptian religion better than other labels. An Egyptian could worship any deity at a particular time and credit it with supreme power in that moment, without denying the other gods or merging them all with the god that he or she focused on. Hornung concludes that the gods were fully unified only in myth, at the time before creation, after which the multitude of deities emerged from a uniform nonexistence. Hornung's arguments have greatly influenced other scholars of Egyptian religion, but some still believe that at times the gods were more unified than he allows. Jan Assmann maintains that the notion of a single deity developed slowly through the New Kingdom, beginning with a focus on Amun-Ra as the all-important sun god. In his view, Atenism was an extreme outgrowth of this trend. It equated the single deity with the sun and dismissed all other gods. Then, in the backlash against Atenism, priestly theologians described the universal god in a different way, one that coexisted with traditional polytheism. The one god was believed to transcend the world and all the other deities, while at the same time, the multiple gods were aspects of the one. According to Assmann, this one god was especially equated with Amun, the dominant god in the late New Kingdom, whereas for the rest of Egyptian history the universal deity could be identified with many other gods. James P. Allen says that coexisting notions of one god and many gods would fit well with the "multiplicity of approaches" in Egyptian thought, as well as with the henotheistic practice of ordinary worshippers. He says that the Egyptians may have recognized the unity of the divine by "identifying their uniform notion of 'god' with a particular god, depending on the particular situation." Descriptions and depictions Egyptian writings describe the gods' bodies in detail. They are made of precious materials; their flesh is gold, their bones are silver, and their hair is lapis lazuli. They give off a scent that the Egyptians likened to the incense used in rituals. Some texts give precise descriptions of particular deities, including their height and eye color. Yet these characteristics are not fixed; in myths, gods change their appearances to suit their own purposes. Egyptian texts often refer to deities' true, underlying forms as "mysterious". The Egyptians' visual representations of their gods are therefore not literal. They symbolize specific aspects of each deity's character, functioning much like the ideograms in hieroglyphic writing. For this reason, the funerary god Anubis is commonly shown in Egyptian art as a dog or jackal, a creature whose scavenging habits threaten the preservation of buried mummies, in an effort to counter this threat and employ it for protection. His black coloring alludes to the color of mummified flesh and to the fertile black soil that Egyptians saw as a symbol of resurrection. Most deities were depicted in several ways. Hathor could be a cow, cobra, lioness, or a woman with bovine horns or ears. 
By depicting a given god in different ways, the Egyptians expressed different aspects of its essential nature. The gods are depicted in a finite number of these symbolic forms, so they can often be distinguished from one another by their iconographies. These forms include men and women (anthropomorphism), animals (zoomorphism), and, more rarely, inanimate objects. Combinations of forms, such as deities with human bodies and animal heads, are common. New forms and increasingly complex combinations arose in the course of history, with the most surreal forms often found among the demons of the underworld. Some gods can only be distinguished from others if they are labeled in writing, as with Isis and Hathor. Because of the close connection between these goddesses, they could both wear the cow-horn headdress that was originally Hathor's alone. Certain features of divine images are more useful than others in determining a god's identity. The head of a given divine image is particularly significant. In a hybrid image, the head represents the original form of the being depicted, so that, as the Egyptologist Henry Fischer put it, "a lion-headed goddess is a lion-goddess in human form, while a royal sphinx, conversely, is a man who has assumed the form of a lion." Divine headdresses, which range from the same types of crowns used by human kings to large hieroglyphs worn on gods' heads, are another important indicator. In contrast, the objects held in gods' hands tend to be generic. Male deities hold was staffs, goddesses hold stalks of papyrus, and both sexes carry ankh signs, representing the Egyptian word for "life", to symbolize their life-giving power. The forms in which the gods are shown, although diverse, are limited in many ways. Many creatures that are widespread in Egypt were never used in divine iconography. Others could represent many deities, often because these deities had major characteristics in common. Bulls and rams were associated with virility, cows and falcons with the sky, hippopotami with maternal protection, felines with the sun god, and serpents with both danger and renewal. Animals that were absent from Egypt in the early stages of its history were not used as divine images. For instance, the horse, which was not introduced until the Second Intermediate Period (c. 1650–1550 BC), never represented a god. Similarly, the clothes worn by anthropomorphic deities in most periods changed little from the styles used in the Old Kingdom: a kilt, false beard, and often a shirt for male gods and a long, tight-fitting dress for goddesses.[Note 4] The basic anthropomorphic form varies. Child gods are depicted nude, as are some adult gods when their procreative powers are emphasized. Certain male deities are given heavy bellies and breasts, signifying either androgyny or prosperity and abundance. Whereas most male gods have red skin and most goddesses are yellow—the same colors used to depict Egyptian men and women—some are given unusual, symbolic skin colors. Thus, the blue skin and paunchy figure of the god Hapi allude to the Nile flood he represents and the nourishing fertility it brought. A few deities, such as Osiris, Ptah, and Min, have a "mummiform" appearance, with their limbs tightly swathed in cloth. Although these gods resemble mummies, the earliest examples predate the cloth-wrapped style of mummification, and this form may instead hark back to the earliest, limbless depictions of deities. 
Some inanimate objects that represent deities are drawn from nature, such as trees or the disk-like emblems for the sun and the moon. Some objects associated with a specific god, like the crossed bows representing Neith (𓋋) or the emblem of Min (𓋉), symbolized the cults of those deities in Predynastic times. In many of these cases, the nature of the original object is mysterious. In the Predynastic and Early Dynastic Periods, gods were often represented by divine standards: poles topped by emblems of deities, including both animal forms and inanimate objects. Interactions with humans In official writings, pharaohs are said to be divine, and they are constantly depicted in the company of the deities of the pantheon. Each pharaoh and his predecessors were considered the successors of the gods who had ruled Egypt in mythic prehistory. Living kings were equated with Horus and called the "son" of many male deities, particularly Osiris and Ra; deceased kings were equated with these elder gods. Kings' wives and mothers were likened to many goddesses. The few women who made themselves pharaohs, such as Hatshepsut, connected themselves with these same goddesses while adopting much of the masculine imagery of kingship. Pharaohs had their own mortuary temples where rituals were performed for them during their lives and after their deaths. But few pharaohs were worshipped as gods long after their lifetimes, and non-official texts portray kings in a human light. For these reasons, scholars disagree about how genuinely most Egyptians believed the king to be a god. He may only have been considered divine when he was performing ceremonies. However much it was believed, the king's divine status was the rationale for his role as Egypt's representative to the gods, as he formed a link between the divine and human realms. The Egyptians believed the gods needed temples to dwell in, as well as the periodic performance of rituals and presentation of offerings to nourish them. These things were provided by the cults that the king oversaw, with their priests and laborers. Yet, according to royal ideology, temple-building was exclusively the pharaoh's work, as were the rituals that priests usually performed in his stead. These acts were a part of the king's fundamental role: maintaining maat. The king and the nation he represented provided the gods with maat so they could continue to perform their functions, which maintained maat in the cosmos so humans could continue to live. Although the Egyptians believed their gods to be present in the world around them, contact between the human and divine realms was mostly limited to specific circumstances. In literature, gods may appear to humans in a physical form, but in real life, the Egyptians were limited to more indirect means of communication. The ba of a god was said to periodically leave the divine realm to dwell in the images of that god. By inhabiting these images, the gods left their concealed state and took on a physical form. To the Egyptians, a place or object that was ḏsr—"sacred"—was isolated and ritually pure, and thus fit for a god to inhabit. Temple statues and reliefs, as well as particular sacred animals, like the Apis bull, served as divine intermediaries in this way. Dreams and trances provided a very different venue for interaction. In these states, it was believed, people could come close to the gods and sometimes receive messages from them. Temples, where the state rituals were carried out, were filled with images of the gods. 
The most important temple image was the cult statue in the inner sanctuary. These statues were usually less than life-size and made of the same precious materials that were said to form the gods' bodies.[Note 5] Many temples had several sanctuaries, each with a cult statue representing one of the gods in a group such as a family triad. The city's primary god was envisioned as its lord, employing many of the residents as servants in the divine household that the temple represented. The gods residing in the temples of Egypt collectively represented the entire pantheon. But many deities—including some important gods as well as those that were minor or hostile—were never given temples of their own, although some were represented in the temples of other gods. To insulate the sacred power in the sanctuary from the impurities of the outside world, the Egyptians enclosed temple sanctuaries and greatly restricted access to them. People other than kings and high priests were thus denied contact with cult statues. The exception was during festival processions, when the statue was carried out of the temple enclosed in a portable shrine, which usually hid it from public view. People did have less direct means of interaction. The more public parts of temples often incorporated small places for prayer, from doorways to freestanding chapels near the back of the temple building. Communities also built and managed small chapels for their own use, and some families had shrines inside their homes. Egyptian gods were involved in human lives as well as in the overarching order of nature. This divine influence applied mainly to Egypt, as foreign peoples were traditionally believed to be outside the divine order. In the New Kingdom, when other nations were under Egyptian control, foreigners were said to be under the sun god's benign rule in the same way that Egyptians were. Thoth, as the overseer of time, was said to allot fixed lifespans to both humans and gods. Other gods were also said to govern the length of human lives, including Meskhenet and Renenutet, both of whom presided over birth, and Shai, the personification of fate. Thus, the time and manner of death was the main meaning of the Egyptian concept of fate, although to some extent these deities governed other events in life as well. Several texts refer to gods influencing or inspiring human decisions, working through a person's "heart"—the seat of emotion and intellect in Egyptian belief. Deities were also believed to give commands, instructing the king in the governance of his realm and regulating the management of their temples. Egyptian texts rarely mention direct commands given to private persons, and these commands never evolved into a set of divinely enforced moral codes. Morality in ancient Egypt was based on the concept of maat, which, when applied to human society, meant that everyone should live in an orderly way that did not interfere with the well-being of other people. Because deities were the upholders of maat, morality was connected with them. For example, the gods judged humans' moral righteousness after death, and by the New Kingdom, a verdict of innocence in this judgement was believed to be necessary for admittance into the afterlife. In general, however, morality was based on practical ways to uphold maat in daily life, rather than on strict rules that the gods laid out. Humans had free will to ignore divine guidance and the behavior required by maat, but by doing so they could bring divine punishment upon themselves. 
A deity carried out this punishment using its ba, the force that manifested the god's power in the human world. Natural disasters and human ailments were seen as the work of angry divine bas. Conversely, the gods could cure righteous people of illness or even extend their lifespans. Both these types of intervention were eventually represented by deities: Shed, who emerged in the New Kingdom to represent divine rescue from harm, and Petbe, an apotropaic god from the late eras of Egyptian history who was believed to avenge wrongdoing. Egyptian texts take different views on whether the gods are responsible when humans suffer unjustly. Misfortune was often seen as a product of isfet, the cosmic disorder that was the opposite of maat, and therefore the gods were not guilty of causing evil events. Some deities who were closely connected with isfet, such as Set, could be blamed for disorder within the world without placing guilt on the other gods. Some writings do accuse the deities of causing human misery, while others give theodicies in the gods' defense. Beginning in the Middle Kingdom, several texts connected the issue of evil in the world with a myth in which the creator god fights a human rebellion against his rule and then withdraws from the earth. Because of this human misbehavior, the creator is distant from his creation, allowing suffering to exist. New Kingdom writings do not question the just nature of the gods as strongly as those of the Middle Kingdom. They emphasize humans' direct, personal relationships with deities and the gods' power to intervene in human events. People in this era put faith in specific gods who they hoped would help and protect them through their lives. As a result, upholding the ideals of maat grew less important than gaining the gods' favor as a way to guarantee a good life. Even the pharaohs were regarded as dependent on divine aid, and after the New Kingdom came to an end, government was increasingly influenced by oracles communicating the gods' will. Official religious practices, which maintained maat for the benefit of all Egypt, were related to, but distinct from, the religious practices of ordinary people, who sought the gods' help for their personal problems. Official religion involved a variety of rituals, based in temples. Some rites were performed every day, whereas others were festivals, taking place at longer intervals and often limited to a particular temple or deity. The gods received their offerings in daily ceremonies, in which their statues were clothed, anointed, and presented with food as hymns were recited in their honor. These offerings, in addition to maintaining maat for the gods, celebrated deities' life-giving generosity and encouraged them to remain benevolent rather than vengeful. Festivals often involved a ceremonial procession in which a cult image was carried out of the temple in a barque-shaped shrine. These processions served various purposes. In Roman times, when local deities of all kinds were believed to have power over the Nile inundation, processions in many communities carried temple images to the riverbanks so the gods could invoke a large and fruitful flood. Processions also traveled between temples, as when the image of Hathor from Dendera Temple visited her consort Horus at the Temple of Edfu. Rituals for a god were often based in that deity's mythology. Such rituals were meant to be repetitions of the events of the mythic past, renewing the beneficial effects of the original events. 
In the Khoiak festival in honor of Osiris, his death and resurrection were ritually reenacted at a time when crops were beginning to sprout. The returning greenery symbolized the renewal of the god's own life. Personal interaction with the gods took many forms. People who wanted information or advice consulted oracles, run by temples, that were supposed to convey gods' answers to questions. Amulets and other images of protective deities were used to ward off the demons that might threaten human well-being or to impart the god's positive characteristics to the wearer. Private rituals invoked the gods' power to accomplish personal goals, from healing sickness to cursing enemies. These practices used heka, the same force of magic that the gods used, which the creator was said to have given to humans so they could fend off misfortune. The performer of a private rite often took on the role of a god in a myth, or even threatened a deity, to involve the gods in accomplishing the goal. Such rituals coexisted with private offerings and prayers, and all three were accepted means of obtaining divine help. Prayer and private offerings are generally called "personal piety": acts that reflect a close relationship between an individual and a god. Evidence of personal piety is scant before the New Kingdom. Votive offerings and personal names, many of which are theophoric, suggest that commoners felt some connection between themselves and their gods, but firm evidence of devotion to deities became visible only in the New Kingdom, reaching a peak late in that era. Scholars disagree about the meaning of this change—whether direct interaction with the gods was a new development or an outgrowth of older traditions. Egyptians now expressed their devotion through a new variety of activities in and around temples. They recorded their prayers and their thanks for divine help on stelae. They gave offerings of figurines that represented the gods they were praying to, or that symbolized the result they desired; thus, a relief image of Hathor and a statuette of a woman could both represent a prayer for fertility. Occasionally, a person took a particular god as a patron, dedicating his or her property or labor to the god's cult. These practices continued into the latest periods of Egyptian history. These later eras saw more religious innovations, including the practice of giving animal mummies as offerings to deities depicted in animal form, such as the cat mummies given to the feline goddess Bastet. Some of the major deities from myth and official religion were rarely invoked in popular worship, but many of the great state gods were important in popular tradition. The worship of some Egyptian gods spread to neighboring lands, especially to Canaan and Nubia during the New Kingdom, when those regions were under pharaonic control. In Canaan, the exported deities, including Hathor, Amun, and Set, were often syncretized with native gods, who in turn spread to Egypt. The Egyptian deities may not have had permanent temples in Canaan, and their importance there waned after Egypt lost control of the region. In contrast, many temples to the major Egyptian gods and deified pharaohs were built in Nubia. After the end of Egyptian rule there, the imported gods, particularly Amun and Isis, were syncretized with local deities and remained part of the religion of Nubia's independent Kingdom of Kush. 
These gods were incorporated into the Nubian ideology of kingship much as they were in Egypt, so that Amun was considered the divine father of the king and Isis and other goddesses were linked with the Nubian queen, the kandake. Some deities reached farther. Taweret became a goddess in Minoan Crete, and Amun's oracle at Siwa Oasis was known to and consulted by people across the Mediterranean region. Under the Greek Ptolemaic Dynasty and then Roman rule, Greeks and Romans introduced their own deities to Egypt. These newcomers equated the Egyptian gods with their own, as part of the Greco-Roman tradition of interpretatio graeca. The worship of the native gods was not swallowed up by that of foreign ones. Instead, Greek and Roman gods were adopted as manifestations of Egyptian ones. Egyptian cults sometimes incorporated Greek language, philosophy, iconography, and even temple architecture. Meanwhile, the cults of several Egyptian deities—particularly Isis, Osiris, Anubis, the form of Horus named Harpocrates, and the fused Greco-Egyptian god Serapis—were adopted into Roman religion and spread across the Roman Empire. Roman emperors, like Ptolemaic kings before them, invoked Isis and Serapis to endorse their authority, inside and outside Egypt. In the empire's complex mix of religious traditions, Thoth was transmuted into the legendary esoteric teacher Hermes Trismegistus, and Isis, who was venerated from Britain to Mesopotamia, became the focus of a Greek-style mystery cult. Isis and Hermes Trismegistus were both prominent in the Western esoteric tradition that grew from the Roman religious world. Temples and cults in Egypt itself declined as the Roman economy deteriorated in the third century AD, and beginning in the fourth century, Christians suppressed the veneration of Egyptian deities. The last formal cults, at Philae, died out in the fifth or sixth century.[Note 6] Most beliefs surrounding the gods themselves disappeared within a few hundred years, remaining in magical texts into the seventh and eighth centuries. In contrast, many of the practices involved in their worship, such as processions and oracles, were adapted to fit Christian ideology and persisted as part of the Coptic Church. Given the great changes and diverse influences in Egyptian culture since that time, scholars disagree about whether any modern Coptic practices are descended from those of pharaonic religion. But many festivals and other traditions of modern Egyptians, both Christian and Muslim, resemble the worship of their ancestors' gods. In the late 20th century, several new religious groups, grouped under the blanket term Kemetism, have formed based on different reconstructions of ancient Egyptian religion.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Osiris] | [TOKENS: 3990] |
Osiris Osiris (/oʊˈsaɪrɪs/, from Egyptian wsjr)[a] was the god of fertility, agriculture, the afterlife, the dead, resurrection, life, and vegetation in ancient Egyptian religion. He was classically depicted with a pharaoh's beard, partially mummy-wrapped at the legs, wearing a distinctive atef crown and holding a symbolic crook and flail. He was one of the first to be associated with the mummy wrap. After his brother Set killed him and cut him to pieces, Osiris's sister-wife, Isis, together with her sister Nephthys, searched Egypt to find each part of Osiris. She collected all but one – Osiris's genitalia. She then wrapped his body up, enabling him to return to life. Osiris was widely worshipped until the decline of ancient Egyptian religion. Osiris was at times considered the eldest son of the earth god Geb and the sky goddess Nut, as well as brother and husband of Isis, and brother of Set, Nephthys, and Horus the Elder, with Horus the Younger being considered his posthumously begotten son. Through syncretism with Iah, he was also a god of the Moon. Osiris was the lord of the dead and the underworld, the "Lord of Silence" and Khenti-Amentiu, meaning "Foremost of the Westerners". In the Old Kingdom (2686–2181 BC) the pharaoh was considered a son of the sun god Ra who, after his death, ascended to join Ra in the sky. After the spread of the Osiris cult, however, the kings of Egypt were associated with Osiris in death – as Osiris rose from the dead, they would unite with him and inherit eternal life through imitative magic. Through the hope of new life after death, Osiris began to be associated with the cycles in nature, in particular the sprouting of vegetation and the annual flooding of the Nile River, as well as the heliacal rising of Orion and Sirius at the start of the new year. He became the sovereign who granted all life, "He Who is Permanently Benign and Youthful". The first evidence of the worship of Osiris is from the middle of the Fifth Dynasty of Egypt (25th century BC), though it is likely he was worshipped much earlier; the Khenti-Amentiu epithet dates to at least the First Dynasty, and was used as a pharaonic title. Most information available on the Osiris myth is derived from allusions in the Pyramid Texts at the end of the Fifth Dynasty, later New Kingdom source documents such as the Shabaka Stone and "The Contendings of Horus and Seth", and, much later, the narratives of Greek authors including Plutarch and Diodorus Siculus. Some Egyptologists believe the Osiris mythos may have originated with a former living ruler—possibly a shepherd who lived in Predynastic times (5500–3100 BC) in the Nile Delta, whose beneficial rule led to him being revered as a god. The accoutrements of the shepherd, the crook and the flail – once insignia of the Delta god Andjety, with whom Osiris was associated – support this theory. Etymology of the name Osiris is a Latin transliteration of the Ancient Greek Ὄσιρις IPA: [ó.siː.ris], which in turn is the Greek adaptation of the original name in the Egyptian language. In Egyptian hieroglyphs the name appears as wsjr, which some Egyptologists instead choose to transliterate as ꜣsjr or jsjrj. Since hieroglyphic writing lacks vowels, Egyptologists have vocalized the name in various ways, such as Asar, Ausar, Ausir, Wesir, Usir, or Usire. Several proposals have been made for the etymology and meaning of the original name; as Egyptologist Mark J. Smith notes, none are fully convincing.
Most Egyptologists take wsjr as the accepted transliteration, following Adolf Erman, though alternative transliterations have recently been proposed. Appearance Osiris is represented in his most developed form of iconography wearing the Atef crown, which is similar to the White Crown of Upper Egypt but with the addition of two curling ostrich feathers at each side. He also carries the crook and flail. The crook is thought to represent Osiris as a shepherd god. The symbolism of the flail is more uncertain: a shepherd's whip, a fly-whisk, and an association with the god Andjety of the ninth nome of Lower Egypt have all been proposed. He was commonly depicted as a pharaoh with a complexion of either green (the color of rebirth) or black (alluding to the fertility of the Nile floodplain) in mummiform (wearing the trappings of mummification from the chest downward). Early mythology The Pyramid Texts describe early conceptions of an afterlife in terms of eternal travelling with the sun god amongst the stars. Amongst these mortuary texts, at the beginning of the Fourth Dynasty, is found: "An offering the king gives and Anubis". By the end of the Fifth Dynasty, the formula in all tombs becomes "An offering the king gives and Osiris". Osiris is the mythological father of the god Horus, whose conception is described in the Osiris myth (a central myth in ancient Egyptian belief). The myth describes Osiris as having been killed by his brother Set, who wanted Osiris's throne. His wife, Isis, finds the body of Osiris and hides it in the reeds, where it is found and dismembered by Set. Isis retrieves and joins the fragmented pieces of Osiris, then briefly revives him by use of magic. This spell gives her time to become pregnant by Osiris. Isis later gives birth to Horus. Since Horus was born after Osiris's resurrection, Horus came to be thought of as a representation of new beginnings and the vanquisher of the usurper Set. Ptah-Seker (who resulted from the identification of the creator god Ptah with Seker) thus gradually became identified with Osiris, the two becoming Ptah-Seker-Osiris. As the sun was thought to spend the night in the underworld, and was subsequently "reborn" every morning, Ptah-Seker-Osiris was identified as king of the underworld, god of the afterlife, life, death, and regeneration. Osiris also has the aspect and form of Seker-Osiris. Osiris's soul, or rather his ba, was occasionally worshipped in its own right, almost as if it were a distinct god, especially in the Delta city of Mendes. This aspect of Osiris was referred to as Banebdjedet, which is grammatically feminine (also spelt "Banebded" or "Banebdjed"), literally "the ba of the lord of the djed", which roughly means "the soul of the lord of the pillar". The djed, a type of pillar, was usually understood as the backbone of Osiris. The Nile, which supplied water, and Osiris, who was strongly connected to vegetal regeneration and who died only to be resurrected, both represented continuity and stability. As Banebdjed, Osiris was given epithets such as Lord of the Sky and Life of the (sun god) Ra. Ba does not mean "soul" in the Western sense; it has to do with power, reputation, and force of character, especially in the case of a god. Since the ba was associated with power, and also happened to be a word for ram in Egyptian, Banebdjed was depicted as a ram, or as ram-headed. A living, sacred ram was kept at Mendes and worshipped as the incarnation of the god, and upon death, the rams were mummified and buried in a ram-specific necropolis.
Banebdjed was consequently said to be Horus's father, as Banebdjed was an aspect of Osiris. Regarding the association of Osiris with the ram, the god's traditional crook and flail are the instruments of the shepherd, which has suggested to some scholars an origin for Osiris among herding tribes of the upper Nile. Mythology Plutarch recounts one version of the Osiris myth in which Set (Osiris's brother), along with the Queen of Ethiopia, conspired with 72 accomplices to plot the assassination of Osiris. Set fooled Osiris into getting into a box, which Set then shut, sealed with lead, and threw into the Nile. Osiris's wife, Isis, searched for his remains until she finally found him embedded in a tamarisk tree trunk, which was holding up the roof of a palace in Byblos on the Phoenician coast. She managed to remove the coffin and retrieve her husband's body. In one version of the myth, Isis used a spell to briefly revive Osiris so he could impregnate her. After embalming and burying Osiris, Isis conceived and gave birth to their son, Horus. Thereafter Osiris lived on as the god of the underworld. Because of his death and resurrection, Osiris was associated with the flooding and retreating of the Nile and thus with the yearly growth and death of crops along the Nile valley. Diodorus Siculus gives another version of the myth in which Osiris was described as an ancient king who taught the Egyptians the arts of civilization, including agriculture, then travelled the world with his sister Isis, the satyrs, and the nine muses, before finally returning to Egypt. Osiris was then murdered by his evil brother Typhon, who was identified with Set. Typhon divided the body into twenty-six pieces, which he distributed amongst his fellow conspirators in order to implicate them in the murder. Isis and Hercules (Horus) avenged the death of Osiris and slew Typhon. Isis recovered all the parts of Osiris's body, except the phallus, and secretly buried them. She made replicas of them and distributed them to several locations, which then became centres of Osiris worship. Worship Annual ceremonies were performed in honor of Osiris in various places across Egypt; evidence of these ceremonies was discovered during the underwater archaeological excavations of Franck Goddio and his team in the sunken city of Thonis-Heracleion. These ceremonies were fertility rites which symbolised the resurrection of Osiris. Recent scholars emphasize "the androgynous character of [Osiris's] fertility", which is clear from the surviving material. For instance, Osiris's fertility derives both from his being castrated and cut into pieces and from his reassembly by the female Isis, whose embrace of her reassembled husband produces the perfect king, Horus. Further, as attested by tomb inscriptions, both women and men could syncretize (identify) with Osiris at their deaths, another set of evidence underlining Osiris's androgynous nature. Plutarch and others have noted that the sacrifices to Osiris were "gloomy, solemn, and mournful..." (Isis and Osiris, 69) and that the great mystery festival, celebrated in two phases, began at Abydos commemorating the death of the god, on the same day that grain was planted in the ground (Isis and Osiris, 13). The annual festival involved the construction of "Osiris Beds" formed in the shape of Osiris, filled with soil and sown with seed. The germinating seed symbolized Osiris rising from the dead. An almost pristine example was found in the tomb of Tutankhamun.
The imiut emblem – an image of a stuffed, headless animal skin tied to a pole mounted in a pot – was a symbol associated with both Osiris as god of the underworld and Anubis, god of mummification, and was sometimes included among a deceased person's funerary equipment. The first phase of the festival was a public drama depicting the murder and dismemberment of Osiris, the search for his body by Isis, his triumphal return as the resurrected god, and the battle in which Horus defeated Set. According to Julius Firmicus Maternus of the fourth century, this play was re-enacted each year by worshippers who "beat their breasts and gashed their shoulders.... When they pretend that the mutilated remains of the god have been found and rejoined...they turn from mourning to rejoicing." (De Errore Profanarum Religionum). The passion of Osiris was reflected in his name "Wenennefer" ("the one who continues to be perfect"), which also alludes to his post mortem power. Much of the extant information about the rites of Osiris can be found on the Ikhernofret Stela at Abydos, erected in the Twelfth Dynasty by Ikhernofret, possibly a priest of Osiris or other official (the titles of Ikhernofret are described in his stela from Abydos), during the reign of Senwosret III (Pharaoh Sesostris, about 1875 BC). The ritual reenactment of Osiris's funeral rites was held in the last month of the inundation (the annual Nile flood), coinciding with spring, at Abydos, the traditional place where the body of Osiris drifted ashore after having been drowned in the Nile. The part of the myth recounting the chopping up of the body into 14 pieces by Set is not recounted in this particular stela, although it is attested as part of the rituals by a version of the Papyrus Jumilhac, in which it took Isis 12 days to reassemble the pieces, coinciding with the festival of ploughing. Some elements of the ceremony were held in the temple, while others involved public participation in a form of theatre. The Stela of Ikhernofret recounts the programme of events for the public elements over the five days of the festival. Contrasting with the public "theatrical" ceremonies sourced from the Middle Kingdom Ikhernofret Stela, more esoteric ceremonies were performed inside the temples by priests. Plutarch mentions that (for a much later period) two days after the beginning of the festival "the priests bring forth a sacred chest containing a small golden coffer, into which they pour some potable water...and a great shout arises from the company for joy that Osiris is found (or resurrected). Then they knead some fertile soil with the water ... and fashion therefrom a crescent-shaped figure, which they clothe and adorn, this indicating that they regard these gods as the substance of Earth and Water." (Isis and Osiris, 39). Yet his accounts were still obscure, for he also wrote, "I pass over the cutting of the wood" – opting not to describe it, since he considered it a most sacred ritual (Ibid. 21). In the Osirian temple at Denderah, an inscription (translated by Budge, Chapter XV, Osiris and the Egyptian Resurrection) describes in detail the making of wheat-paste models of each dismembered piece of Osiris to be sent out to the town where each piece was discovered by Isis.
At the temple of Mendes, figures of Osiris were made from wheat and paste placed in a trough on the day of the murder; water was then added for several days, until finally the mixture was kneaded into a mold of Osiris and taken to the temple to be buried (the sacred grain for these cakes was grown only in the temple fields). Molds were made from the wood of a red tree in the forms of the sixteen dismembered parts of Osiris; the cakes of "divine" bread were made from each mold, placed in a silver chest, and set near the head of the god with the inward parts of Osiris as described in the Book of the Dead (XVII). Judgement The idea of divine justice being exercised after death for wrongdoing during life is first encountered during the Old Kingdom in a Sixth Dynasty tomb containing fragments of what would later be described as the Negative Confessions performed in front of the 42 Assessors of Maat. At death a person faced judgment by a tribunal of forty-two divine judges. If they led a life in conformance with the precepts of the goddess Maat, who represented truth and right living, the person was welcomed into the kingdom of Osiris. If found guilty, the person was thrown to the soul-eating demon Ammit and did not share in eternal life. The person who is taken by the devourer is subject first to terrifying punishment and is then annihilated. These depictions of punishment may have influenced medieval perceptions of the inferno in hell via early Christian and Coptic texts. Purification for those who are considered justified may be found in the descriptions of "Flame Island", where they experience the triumph over evil and rebirth. For the damned, complete destruction into a state of non-being awaits, but there is no suggestion of eternal torture. During the reign of Seti I, Osiris was also invoked in royal decrees to pursue the living when wrongdoing was observed but kept secret and not reported. Greco-Roman era The early Ptolemaic kings promoted a new god, Serapis, who combined traits of Osiris with those of various Greek gods and was portrayed in a Hellenistic form. Serapis was often treated as the consort of Isis and became the patron deity of the Ptolemies' capital, Alexandria. Serapis's origins are not known. Some ancient authors claim the cult of Serapis was established at Alexandria by Alexander III of Macedonia himself, but most who discuss the subject of Serapis's origins give a story similar to that given by Plutarch. Writing about 400 years after the fact, Plutarch claimed that Ptolemy I established the cult after dreaming of a colossal statue at Sinope in Anatolia. His councillors identified the statue as the Greek god Pluto and said that the Egyptian name for Pluto was Serapis. This name may have been a Hellenization of "Osiris-Apis". Osiris-Apis was a patron deity of the Memphite Necropolis and the father of the Apis bull who was worshipped there, and texts from Ptolemaic times treat "Serapis" as the Greek translation of "Osiris-Apis". But little of the early evidence for Serapis's cult comes from Memphis, and much of it comes from the Mediterranean world with no reference to an Egyptian origin for Serapis, so Mark Smith expresses doubt that Serapis originated as a Greek form of Osiris-Apis's name and leaves open the possibility that Serapis originated outside Egypt. The cult of Isis and Osiris continued at Philae until at least the 450s CE, long after the imperial decrees of the late 4th century that ordered the closing of temples to "pagan" gods.
Philae was the last major ancient Egyptian temple to be closed.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ancient_Egyptian_deities#Manifestations_and_combinations] | [TOKENS: 12020] |
Ancient Egyptian deities Ancient Egyptian deities are the gods and goddesses worshipped in ancient Egypt. The beliefs and rituals surrounding these gods formed the core of ancient Egyptian religion, which emerged sometime in prehistory. Deities represented natural forces and phenomena, and the Egyptians supported and appeased them through offerings and rituals so that these forces would continue to function according to maat, or divine order. After the founding of the Egyptian state around 3100 BC, the authority to perform these tasks was controlled by the pharaoh, who claimed to be the gods' representative and managed the temples where the rituals were carried out. The gods' complex characteristics were expressed in myths and in intricate relationships between deities: family ties, loose groups and hierarchies, and combinations of separate gods into one. Deities' diverse appearances in art—as animals, humans, objects, and combinations of different forms—also alluded, through symbolism, to their essential features. In different eras, various gods were said to hold the highest position in divine society, including the solar deity Ra, the mysterious god Amun, and the mother goddess Isis. The highest deity was usually credited with the creation of the world and often connected with the life-giving power of the sun. Some scholars have argued, based in part on Egyptian writings, that the Egyptians came to recognize a single divine power that lay behind all things and was present in all the other deities. Yet they never abandoned their original polytheistic view of the world, except possibly during the era of Atenism in the 14th century BC, when official religion focused exclusively on an abstract solar deity, the Aten. Gods were assumed to be present throughout the world, capable of influencing natural events and the course of human lives. People interacted with them in temples and unofficial shrines, for personal reasons as well as for larger goals of state rites. Egyptians prayed for divine help, used rituals to compel deities to act, and called upon them for advice. Humans' relations with their gods were a fundamental part of Egyptian society. Definition The beings in ancient Egyptian tradition who might be labeled as deities are difficult to count. Egyptian texts list the names of many deities whose nature is unknown, and make vague, indirect references to other gods who are not even named. The Egyptologist James P. Allen estimates that more than 1,400 deities are named in Egyptian texts, whereas his colleague Christian Leitz says there are "thousands upon thousands" of gods. The Egyptian language's terms for these beings were nṯr, "god", and its feminine form nṯrt, "goddess". Scholars have tried to discern the original nature of the gods by proposing etymologies for these words, but none of these suggestions has gained acceptance, and the terms' origin remains obscure. The hieroglyphs that were used as ideograms and determinatives in writing these words show some of the traits that the Egyptians connected with divinity. The most common of these signs is a flag flying from a pole. Similar objects were placed at the entrances of temples, representing the presence of a deity, throughout ancient Egyptian history. Other such hieroglyphs include a falcon, reminiscent of several early gods who were depicted as falcons, and a seated male or female deity.
The feminine form could also be written with an egg as determinative, connecting goddesses with creation and birth, or with a cobra, reflecting the use of the cobra to depict many female deities. The Egyptians distinguished nṯrw, "gods", from rmṯ, "people", but the meanings of the Egyptian and the English terms do not match perfectly. The term nṯr may have applied to any being that was in some way outside the sphere of everyday life. Deceased humans were called nṯr because they were considered to be like the gods, whereas the term was rarely applied to many of Egypt's lesser supernatural beings, which modern scholars often call "demons". Egyptian religious art also depicts places, objects, and concepts in human form. These personified ideas range from deities that were important in myth and ritual to obscure beings, only mentioned once or twice, that may be little more than metaphors. Confronting these blurred distinctions between gods and other beings, scholars have proposed various definitions of a "deity". One widely accepted definition, suggested by Jan Assmann, says that a deity has a cult, is involved in some aspect of the universe, and is described in mythology or other forms of written tradition. According to a different definition, by Dimitri Meeks, nṯr applied to any being that was the focus of ritual. From this perspective, "gods" included the king, who was called a god after his coronation rites, and deceased souls, who entered the divine realm through funeral ceremonies. Likewise, the preeminence of the great gods was maintained by the ritual devotion that was performed for them across Egypt. Origins The first written evidence of deities in Egypt comes from the Early Dynastic Period (c. 3100–2686 BC). Deities must have emerged sometime in the preceding Predynastic Period (before 3100 BC) and grown out of prehistoric religious beliefs. Predynastic artwork depicts a variety of animal and human figures. Some of these images, such as stars and cattle, are reminiscent of important features of Egyptian religion in later times, but in most cases, there is not enough evidence to say whether the images are connected with deities. As Egyptian society grew more sophisticated, clearer signs of religious activity appeared. The earliest known temples appeared in the last centuries of the predynastic era, along with images that resemble the iconographies of known deities: the falcon that represents Horus and several other gods, the crossed arrows that stand for Neith, and the enigmatic "Set animal" that represents Set. Many Egyptologists and anthropologists have suggested theories about how the gods developed in these early times. Gustave Jéquier, for instance, thought the Egyptians first revered primitive fetishes, then deities in animal form, and finally deities in human form, whereas Henri Frankfort argued that the gods must have been envisioned in human form from the beginning. Some of these theories are now regarded as too simplistic, while more current ones, such as Siegfried Morenz' hypothesis that deities emerged as humans began to distinguish themselves from, and to 'personify', their environment, are difficult to prove. Predynastic Egypt originally consisted of small, independent villages.
Because many deities in later times were strongly tied to particular towns and regions, many scholars have suggested that the pantheon formed as disparate communities coalesced into larger states, spreading and intermingling the worship of the old local deities. Others have argued that the most important predynastic gods were, like other elements of Egyptian culture, present all across the country despite its political divisions. The final step in the formation of Egyptian religion was the unification of Egypt, in which rulers from Upper Egypt made themselves pharaohs of the entire country. These sacred kings and their subordinates assumed the right to interact with the gods, and kingship became the unifying focus of the religion. New deities continued to emerge after this transformation. Some important deities such as Isis and Amun are not known to have appeared until the Old Kingdom (c. 2686–2181 BC). Places and concepts could inspire the creation of a deity to represent them, and deities were sometimes created to serve as opposite-sex counterparts to established gods or goddesses. Kings were said to be divine, although only a few continued to be worshipped long after their deaths. Some non-royal humans were said to have the favor of the gods and were venerated accordingly. This veneration was usually short-lived, but the court architects Imhotep and Amenhotep son of Hapu were regarded as gods centuries after their lifetimes, as were some other officials. Through contact with neighboring civilizations, the Egyptians also adopted foreign deities. The goddess Miket, who occasionally appeared in Egyptian texts beginning in the Middle Kingdom (c. 2055–1650 BC), may have been adopted from the religion of Nubia to the south, and a Nubian ram deity may have influenced the iconography of Amun. During the New Kingdom (c. 1550–1070 BC), several deities from Canaanite religion were incorporated into that of Egypt, including Baal, Resheph, and Anat. In Greek and Roman times, from 332 BC to the early centuries AD, deities from across the Mediterranean world were revered in Egypt, but the native gods remained, and they often absorbed the cults of these newcomers into their own worship. Characteristics Modern knowledge of Egyptian beliefs about the gods is mostly drawn from religious writings produced by the nation's scribes and priests. These people were the elite of Egyptian society and were very distinct from the general populace, most of whom were illiterate. Little is known about how well this broader population knew or understood the sophisticated ideas that the elite developed. Commoners' perceptions of the divine may have differed from those of the priests. The populace may, for example, have treated the religion's symbolic statements about the gods and their actions as literal truth. But overall, what little is known about popular religious belief is consistent with the elite tradition. The two traditions form a largely cohesive vision of the gods and their nature. Most Egyptian deities represent natural or social phenomena. The gods were generally said to be immanent in these phenomena—to be present within nature. The types of phenomena they represented include physical places and objects as well as abstract concepts and forces. The god Shu was the deification of all the world's air; the goddess Meretseger oversaw a limited region of the earth, the Theban Necropolis; and the god Sia personified the abstract notion of perception. Major gods were often involved in several types of phenomena. 
For instance, Khnum was the god of Elephantine Island in the midst of the Nile, the river that was essential to Egyptian civilization. He was credited with producing the annual Nile flood that fertilized the country's farmland. Perhaps as an outgrowth of this life-giving function, he was said to create all living things, fashioning their bodies on a potter's wheel. Gods could share the same role in nature; Ra, Atum, Khepri, Horus, and other deities acted as sun gods. Despite their diverse functions, most gods had an overarching role in common: maintaining maat, the universal order that was a central principle of Egyptian religion and was itself personified as a goddess. Yet some deities represented disruption to maat. Most prominently, Apep was the force of chaos, constantly threatening to annihilate the order of the universe, and Set was an ambivalent member of divine society who could both fight disorder and foment it. Not all aspects of existence were seen as deities. Although many deities were connected with the Nile, no god personified it in the way that Ra personified the sun. Short-lived phenomena, such as rainbows or eclipses, were not represented by gods; neither were fire, water, or many other components of the world. The roles of each deity were fluid, and each god could expand its nature to take on new characteristics. As a result, gods' roles are difficult to categorize or define. Despite this flexibility, the gods had limited abilities and spheres of influence. Not even the creator god could reach beyond the boundaries of the cosmos that he created, and even Isis, though she was said to be the cleverest of the gods, was not omniscient. Richard H. Wilkinson, however, argues that some texts from the late New Kingdom suggest that as beliefs about the god Amun evolved he was thought to approach omniscience and omnipresence, and to transcend the limits of the world in a way that other deities did not. The deities with the most limited and specialized domains are often called "minor divinities" or "demons" in modern writing, although there is no firm definition for these terms. Some demons were guardians of particular places, especially in the Duat, the realm of the dead. Others wandered through the human world and the Duat, either as servants and messengers of the greater gods or as roving spirits that caused illness or other misfortunes among humans. Demons' position in the divine hierarchy was not fixed. The protective deities Bes and Taweret originally had minor, demon-like roles, but over time they came to be credited with great influence. The most feared beings in the Duat were regarded as both disgusting and dangerous to humans. Over the course of Egyptian history, they came to be regarded as fundamentally inferior members of divine society and to represent the opposite of the beneficial, life-giving major gods. Yet even the most revered deities could sometimes exact vengeance on humans or each other, displaying a demon-like side to their character and blurring the boundaries between demons and gods. Divine behavior was believed to govern all of nature. Except for the few deities who disrupted the divine order, the gods' actions maintained maat and created and sustained all living things. They did this work using a force the Egyptians called heka, a term usually translated as "magic". Heka was a fundamental power that the creator god used to form the world and the gods themselves. The gods' actions in the present are described and praised in hymns and funerary texts. 
In contrast, mythology mainly concerns the gods' actions during a vaguely imagined past in which the gods were present on earth and interacted directly with humans. The events of this past time set the pattern for the events of the present. Periodic occurrences were tied to events in the mythic past; the succession of each new pharaoh, for instance, reenacted Horus's accession to the throne of his father Osiris. Myths are metaphors for the gods' actions, which humans cannot fully understand. They contain seemingly contradictory ideas, each expressing a particular perspective on divine events. The contradictions in myth are part of the Egyptians' many-faceted approach to religious belief—what Henri Frankfort called a "multiplicity of approaches" to understanding the gods. In myth, the gods behave much like humans. They feel emotion; they can eat, drink, fight, weep, sicken, and die. Some have unique character traits. Set is aggressive and impulsive, and Thoth, patron of writing and knowledge, is prone to long-winded speeches. Yet overall, the gods are more like archetypes than well-drawn characters. Deities' mythic behavior is inconsistent, and their thoughts and motivations are rarely stated. Most myths lack highly developed characters and plots, because their symbolic meaning was more important than elaborate storytelling. Characters were even interchangeable. Different versions of a myth could portray different deities playing the same role, as in the myths of the Eye of Ra, a feminine aspect of the sun god who was represented by many goddesses. The first divine act is the creation of the cosmos, described in several creation myths. They focus on different gods, each of which may act as the creator deity. The eight gods of the Ogdoad, who represent the chaos that precedes creation, give birth to the sun god, who establishes order in the newly formed world; Ptah, who embodies thought and creativity, gives form to all things by envisioning and naming them; Atum produces all things as emanations of himself; and Amun, according to the theology promoted by his priesthood, preceded and created the other creator gods. These and other versions of the events of creation were not seen as contradictory. Each gives a different perspective on the complex process by which the organized universe and its many deities emerged from undifferentiated chaos. The period following creation, in which a series of gods rule as kings over the divine society, is the setting for most myths. The gods struggle against the forces of chaos and among each other before withdrawing from the human world and installing the historical kings of Egypt to rule in their place. A recurring theme in these myths is the effort of the gods to maintain maat against the forces of disorder. They fight vicious battles with the forces of chaos at the start of creation. Ra and Apep, battling each other each night, continue this struggle into the present. Another prominent theme is the gods' death and revival. The clearest instance where a god dies is the myth of Osiris's murder, in which that god is resurrected as ruler of the Duat.[Note 1] The sun god is also said to grow old during his daily journey across the sky, sink into the Duat at night, and emerge as a young child at dawn. In the process, he comes into contact with the rejuvenating water of Nun, the primordial chaos. Funerary texts that depict Ra's journey through the Duat also show the corpses of gods who are enlivened along with him.
Instead of being changelessly immortal, the gods periodically died and were reborn by repeating the events of creation, thus renewing the whole world. Nonetheless, it was always possible for this cycle to be disrupted and for chaos to return. Some poorly understood Egyptian texts even suggest that this calamity is destined to happen—that the creator god will one day dissolve the order of the world, leaving only himself and Osiris amid the primordial chaos. Gods were linked to specific regions of the universe. In Egyptian tradition, the world includes the earth, the sky, and the underworld. Surrounding them is the dark formlessness that existed before creation. The gods in general were said to dwell in the sky, although gods whose roles were linked with other parts of the universe were said to live in those places instead. Most events of mythology, set in a time before the gods' withdrawal from the human realm, take place in an earthly setting. The deities there sometimes interact with those in the sky. The underworld, in contrast, is treated as a remote and inaccessible place, and the gods who dwell there have difficulties in communicating with those in the world of the living. The space outside the cosmos is also said to be very distant. It too is inhabited by deities, some hostile and some beneficial to the other gods and their orderly world. In the time after myth, most gods were said to be either in the sky or invisibly present within the world. Temples were their main means of contact with humanity. Each day, it was believed, the gods moved from the divine realm to their temples, their homes in the human world. There they inhabited the cult images, the statues that depicted deities and allowed humans to interact with them in temple rituals. This movement between realms was sometimes described as a journey between the sky and the earth. As temples were the focal points of Egyptian cities, the god in a city's main temple was the patron deity for the city and the surrounding region. Deities' spheres of influence on earth centered on the towns and regions they presided over. Many gods had more than one cult center and their local ties changed over time. They could establish themselves in new cities, or their range of influence could contract. Therefore, a given deity's main cult center in historical times is not necessarily his or her place of origin. The political influence of a city could affect the importance of its patron deity. When kings from Thebes took control of the country at the start of the Middle Kingdom (c. 2055–1650 BC), they elevated Thebes' patron gods—first the war god Montu and then Amun—to national prominence. In Egyptian belief, names express the fundamental nature of the things to which they refer. In keeping with this belief, the names of deities often relate to their roles or origins. The name of the predatory goddess Sekhmet means "powerful one", the name of the mysterious god Amun means "hidden one", and the name of Nekhbet, who was worshipped in the city of Nekheb, means "she of Nekheb". Many other names have no certain meaning, even when the gods who bear them are closely tied to a single role. The names of the sky goddess Nut and the earth god Geb do not resemble the Egyptian terms for sky and earth. The Egyptians also devised false etymologies giving more meanings to divine names.
A passage in the Coffin Texts renders the name of the funerary god Seker as sk r, meaning "cleaning of the mouth", to link his name with his role in the Opening of the Mouth ritual, while one in the Pyramid Texts says the name is based on words shouted by Osiris in a moment of distress, connecting Seker with the most important funerary deity. The gods were believed to have many names. Among them were secret names that conveyed their true natures more profoundly than others. To know the true name of a deity was to have power over it. The importance of names is demonstrated by a myth in which Isis poisons the superior god Ra and refuses to cure him unless he reveals his secret name to her. Upon learning the name, she tells it to her son, Horus, and by learning it they gain greater knowledge and power. In addition to their names, gods were given epithets, like "possessor of splendor", "ruler of Abydos", or "lord of the sky", that describe some aspect of their roles or their worship. Because of the gods' multiple and overlapping roles, deities can have many epithets—with more important gods accumulating more titles—and the same epithet can apply to many deities. Some epithets eventually became separate deities, as with Werethekau, an epithet applied to several goddesses meaning "great enchantress", which came to be treated as an independent goddess. The host of divine names and titles expresses the gods' multifarious nature. The Egyptians regarded the division between male and female as fundamental to all beings, including deities. Male gods tended to have a higher status than goddesses and were more closely connected with creation and with kingship, while goddesses were more often thought of as helping and providing for humans. Some deities were androgynous, but most examples are found in the context of creation myths, in which the androgynous deity represents the undifferentiated state that existed before the world was created. Atum was primarily male but had a feminine aspect within himself, who was sometimes seen as a goddess, known as Iusaaset or Nebethetepet. Similarly, Neith, who was sometimes regarded as a creator goddess, was said to possess masculine traits but was mainly seen as female. Sex and gender were closely tied to creation and thus rebirth. Male gods were believed to have the active role in conceiving children. Female deities were often relegated to a supporting role, stimulating their male consorts' virility and nurturing their children, although goddesses were given a larger role in procreation late in Egyptian history. Goddesses acted as mythological mothers and wives of kings and thus as prototypes of human queenship. Hathor, who was the mother or consort of Horus and the most important goddess for much of Egyptian history, exemplified this relationship between divinity and the king. Female deities also had a violent aspect that could be seen either positively, as with the goddesses Wadjet and Nekhbet who protected the king, or negatively. The myth of the Eye of Ra contrasts feminine aggression with sexuality and nurturing, as the goddess rampages in the form of Sekhmet or another dangerous deity until the other gods appease her, at which point she becomes a benign goddess such as Hathor who, in some versions, then becomes the consort of a male god. The Egyptian conception of sexuality was heavily focused on heterosexual reproduction, and homosexual acts were usually viewed with disapproval. Some texts nevertheless refer to homosexual behavior between male deities.
In some cases, most notably when Set sexually assaulted Horus, these acts served to assert the dominance of the active partner and humiliate the submissive one. Other couplings between male deities could be viewed positively and even produce offspring, as in one text in which Khnum is born from the union of Ra and Shu. Egyptian deities are connected in a complex and shifting array of relationships. A god's connections and interactions with other deities helped define its character. Thus Isis, as the mother and protector of Horus, was a great healer as well as the patroness of kings. Such relationships were in fact more important than myths in expressing Egyptians' religious worldview, although they were also the base material from which myths were formed. Family relationships are a common type of connection between gods. Deities often form male and female pairs. Families of three deities, with a father, mother, and child, represent the creation of new life and the succession of the father by the child, a pattern that connects divine families with royal succession. Osiris, Isis, and Horus formed the quintessential family of this type. The pattern they set grew more widespread over time, so that many deities in local cult centers, like Ptah, Sekhmet, and their child Nefertum at Memphis and the Theban Triad at Thebes, were assembled into family triads. Genealogical connections like these vary according to the circumstances. Hathor could act as the mother, consort, or daughter of the sun god, and the child form of Horus acted as the third member of many local family triads. Other divine groups were composed of deities with interrelated roles, or who together represented a region of the Egyptian mythological cosmos. There were sets of gods for the hours of the day and night and for each nome (province) of Egypt. Some of these groups contain a specific, symbolically important number of deities. Paired gods sometimes have similar roles, as do Isis and her sister Nephthys in their protection and support of Osiris. Other pairs stand for opposite but interrelated concepts that are part of a greater unity. Ra, who is dynamic and light-producing, and Osiris, who is static and shrouded in darkness, merge into a single god each night. Groups of three are linked with plurality in ancient Egyptian thought, and groups of four connote completeness. Rulers in the late New Kingdom promoted a particularly important group of three gods above all others: Amun, Ra, and Ptah. These deities stood for the plurality of all gods, as well as for their own cult centers (the major cities of Thebes, Heliopolis, and Memphis) and for many threefold sets of concepts in Egyptian religious thought. Sometimes Set, the patron god of the Nineteenth Dynasty kings and the embodiment of disorder within the world, was added to this group, which emphasized a single coherent vision of the pantheon. Nine, the product of three and three, represents a multitude, so the Egyptians called several large groups "Enneads", or sets of nine, even if they had more than nine members.[Note 2] The most prominent ennead was the Ennead of Heliopolis, an extended family of deities descended from Atum, which incorporates many important gods. The term "ennead" was often extended to include all of Egypt's deities. This divine assemblage had a vague and changeable hierarchy. Gods with broad influence in the cosmos or who were mythologically older than others had higher positions in divine society. 
At the apex of this society was the king of the gods, who was usually identified with the creator deity. In different periods of Egyptian history, different gods were most frequently said to hold this exalted position. Horus was most closely linked with kingship in the Early Dynastic Period, Ra rose to preeminence in the Old Kingdom, Amun was supreme in the New, and in the Ptolemaic and Roman periods, Isis was the divine queen and creator goddess. Newly prominent gods tended to adopt characteristics from their predecessors. Isis absorbed the traits of many other goddesses during her rise, and when Amun became the ruler of the pantheon, he was conjoined with Ra to become a solar deity. The gods were believed to manifest in many forms. The Egyptians had a complex conception of the human soul, consisting of several parts. The spirits of the gods were composed of many of these same elements. The ba was the component of the human or divine soul that affected the world around it. Any visible manifestation of a god's power could be called its ba; thus, the sun was called the ba of Ra. A depiction of a deity was considered a ka, another component of its being, which acted as a vessel for that deity's ba to inhabit. The cult images of gods that were the focus of temple rituals, as well as the sacred animals that represented certain deities, were believed to house divine bas in this way. Gods could be ascribed many bas and kas, which were sometimes given names representing different aspects of the god's nature. Everything in existence was said to be one of the kas of Atum the creator god, who originally contained all things within himself, and one deity could be called the ba of another, meaning that the first god is a manifestation of the other's power. Divine body parts could act as separate deities, like the Eye of Ra and Hand of Atum, both of which were personified as goddesses. The gods were so full of life-giving power that even their bodily fluids could transform into other living things; humankind was said to have sprung from the creator god's tears, and the other deities from his sweat. Nationally important deities gave rise to local manifestations, which sometimes absorbed the characteristics of older regional gods. Horus had many forms tied to particular places, including Horus of Nekhen, Horus of Buhen, and Horus of Edfu. Such local manifestations could be treated almost as separate beings. During the New Kingdom, an oracle that was supposed to communicate messages from Amun of Pe-Khenty accused a man of stealing clothes. The accused man consulted two other local oracles of Amun, hoping for a different judgment. Gods' manifestations also differed according to their roles. Horus could be a powerful sky god or vulnerable child, and these forms were sometimes counted as independent deities. Gods were combined with each other as easily as they were divided. A god could be called the ba of another, or two or more deities could be joined into one god with a combined name and iconography. Local gods were linked with greater ones, and deities with similar functions were combined. Ra was connected with the local deity Sobek to form Sobek-Ra; with his fellow ruling god, Amun, to form Amun-Ra; with the solar form of Horus to form Ra-Horakhty; and with several solar deities as Horemakhet-Khepri-Ra-Atum. On rare occasions, deities of different sexes could be joined in this way, producing combinations such as Osiris-Neith. This linking of deities is called syncretism.
Unlike other situations for which this term is used, the Egyptian practice was not meant to fuse competing belief systems, although foreign deities could be syncretized with native ones. Instead, syncretism acknowledged the overlap between deities' roles and extended the sphere of influence for each of them. Syncretic combinations were not permanent; a god who was involved in one combination continued to appear separately and to form new combinations with other deities. Closely connected deities did sometimes merge. Horus absorbed several falcon gods from various regions, such as Khenti-irty and Khenti-kheti, who became little more than local manifestations of him; Hathor subsumed a similar cow goddess, Bat; and an early funerary god, Khenti-Amentiu, was supplanted by Osiris and Anubis. In the reign of Akhenaten (c. 1353–1336 BC) in the mid-New Kingdom, a single solar deity, the Aten, became the sole focus of the state religion. Akhenaten ceased to fund the temples of other deities and erased gods' names and images on monuments, targeting Amun in particular. This new religious system, sometimes called Atenism, differed dramatically from the polytheistic worship of many gods in all other periods. The Aten had no mythology, and it was portrayed and described in more abstract terms than traditional deities. Whereas, in earlier times, newly important gods were integrated into existing religious beliefs, Atenism insisted on a single understanding of the divine that excluded the traditional multiplicity of perspectives. Yet Atenism may not have been full monotheism, which totally excludes belief in other deities. There is evidence suggesting that the general populace continued to worship other gods in private. The picture is further complicated by Atenism's apparent tolerance for some other deities, such as Maat, Shu, and Tefnut. For these reasons, the Egyptologists Dominic Montserrat and John Baines have suggested that Akhenaten may have been monolatrous, worshipping a single deity while acknowledging the existence of others. In any case, Atenism's aberrant theology did not take root among the Egyptian populace, and Akhenaten's successors returned to traditional beliefs. Scholars have long debated whether traditional Egyptian religion ever asserted that the multiple gods were, on a deeper level, unified. Reasons for this debate include the practice of syncretism, which might suggest that all the separate gods could ultimately merge into one, and the tendency of Egyptian texts to credit a particular god with power that surpasses all other deities. Another point of contention is the appearance of the word "god" in wisdom literature, where the term does not refer to a specific deity or group of deities. In the early 20th century, for instance, E. A. Wallis Budge believed that Egyptian commoners were polytheistic, but knowledge of the true monotheistic nature of the religion was reserved for the elite, who wrote the wisdom literature. His contemporary James Henry Breasted thought Egyptian religion was instead pantheistic, with the power of the sun god present in all other gods, while Hermann Junker argued that Egyptian civilization had been originally monotheistic and became polytheistic in the course of its history. In 1971, Erik Hornung published a study[Note 3] rebutting such views. He points out that in any given period many deities, even minor ones, were described as superior to all others. 
He also argues that the unspecified "god" in the wisdom texts is a generic term for whichever deity is relevant to the reader in the situation at hand. Although the combinations, manifestations, and iconographies of each god were constantly shifting, they were always restricted to a finite number of forms, never becoming fully interchangeable in a monotheistic or pantheistic way. Henotheism, Hornung says, describes Egyptian religion better than other labels. An Egyptian could worship any deity at a particular time and credit it with supreme power in that moment, without denying the other gods or merging them all with the god that he or she focused on. Hornung concludes that the gods were fully unified only in myth, at the time before creation, after which the multitude of deities emerged from a uniform nonexistence. Hornung's arguments have greatly influenced other scholars of Egyptian religion, but some still believe that at times the gods were more unified than he allows. Jan Assmann maintains that the notion of a single deity developed slowly through the New Kingdom, beginning with a focus on Amun-Ra as the all-important sun god. In his view, Atenism was an extreme outgrowth of this trend. It equated the single deity with the sun and dismissed all other gods. Then, in the backlash against Atenism, priestly theologians described the universal god in a different way, one that coexisted with traditional polytheism. The one god was believed to transcend the world and all the other deities, while at the same time, the multiple gods were aspects of the one. According to Assmann, this one god was especially equated with Amun, the dominant god in the late New Kingdom, whereas for the rest of Egyptian history the universal deity could be identified with many other gods. James P. Allen says that coexisting notions of one god and many gods would fit well with the "multiplicity of approaches" in Egyptian thought, as well as with the henotheistic practice of ordinary worshippers. He says that the Egyptians may have recognized the unity of the divine by "identifying their uniform notion of 'god' with a particular god, depending on the particular situation."

Descriptions and depictions

Egyptian writings describe the gods' bodies in detail. They are made of precious materials; their flesh is gold, their bones are silver, and their hair is lapis lazuli. They give off a scent that the Egyptians likened to the incense used in rituals. Some texts give precise descriptions of particular deities, including their height and eye color. Yet these characteristics are not fixed; in myths, gods change their appearances to suit their own purposes. Egyptian texts often refer to deities' true, underlying forms as "mysterious". The Egyptians' visual representations of their gods are therefore not literal. They symbolize specific aspects of each deity's character, functioning much like the ideograms in hieroglyphic writing. For this reason, the funerary god Anubis is commonly shown in Egyptian art as a dog or jackal, a creature whose scavenging habits threaten the preservation of buried mummies, in an effort to counter this threat and employ it for protection. His black coloring alludes to the color of mummified flesh and to the fertile black soil that Egyptians saw as a symbol of resurrection. Most deities were depicted in several ways. Hathor could be a cow, cobra, lioness, or a woman with bovine horns or ears.
By depicting a given god in different ways, the Egyptians expressed different aspects of its essential nature. The gods are depicted in a finite number of these symbolic forms, so they can often be distinguished from one another by their iconographies. These forms include men and women (anthropomorphism), animals (zoomorphism), and, more rarely, inanimate objects. Combinations of forms, such as deities with human bodies and animal heads, are common. New forms and increasingly complex combinations arose in the course of history, with the most surreal forms often found among the demons of the underworld. Some gods can only be distinguished from others if they are labeled in writing, as with Isis and Hathor. Because of the close connection between these goddesses, they could both wear the cow-horn headdress that was originally Hathor's alone. Certain features of divine images are more useful than others in determining a god's identity. The head of a given divine image is particularly significant. In a hybrid image, the head represents the original form of the being depicted, so that, as the Egyptologist Henry Fischer put it, "a lion-headed goddess is a lion-goddess in human form, while a royal sphinx, conversely, is a man who has assumed the form of a lion." Divine headdresses, which range from the same types of crowns used by human kings to large hieroglyphs worn on gods' heads, are another important indicator. In contrast, the objects held in gods' hands tend to be generic. Male deities hold was staffs, goddesses hold stalks of papyrus, and both sexes carry ankh signs, representing the Egyptian word for "life", to symbolize their life-giving power. The forms in which the gods are shown, although diverse, are limited in many ways. Many creatures that are widespread in Egypt were never used in divine iconography. Others could represent many deities, often because these deities had major characteristics in common. Bulls and rams were associated with virility, cows and falcons with the sky, hippopotami with maternal protection, felines with the sun god, and serpents with both danger and renewal. Animals that were absent from Egypt in the early stages of its history were not used as divine images. For instance, the horse, which was not introduced until the Second Intermediate Period (c. 1650–1550 BC), never represented a god. Similarly, the clothes worn by anthropomorphic deities in most periods changed little from the styles used in the Old Kingdom: a kilt, false beard, and often a shirt for male gods and a long, tight-fitting dress for goddesses.[Note 4] The basic anthropomorphic form varies. Child gods are depicted nude, as are some adult gods when their procreative powers are emphasized. Certain male deities are given heavy bellies and breasts, signifying either androgyny or prosperity and abundance. Whereas most male gods have red skin and most goddesses are yellow—the same colors used to depict Egyptian men and women—some are given unusual, symbolic skin colors. Thus, the blue skin and paunchy figure of the god Hapi allude to the Nile flood he represents and the nourishing fertility it brought. A few deities, such as Osiris, Ptah, and Min, have a "mummiform" appearance, with their limbs tightly swathed in cloth. Although these gods resemble mummies, the earliest examples predate the cloth-wrapped style of mummification, and this form may instead hark back to the earliest, limbless depictions of deities.
Some inanimate objects that represent deities are drawn from nature, such as trees or the disk-like emblems for the sun and the moon. Some objects associated with a specific god, like the crossed bows representing Neith (𓋋) or the emblem of Min (𓋉), symbolized the cults of those deities in Predynastic times. In many of these cases, the nature of the original object is mysterious. In the Predynastic and Early Dynastic Periods, gods were often represented by divine standards: poles topped by emblems of deities, including both animal forms and inanimate objects.

Interactions with humans

In official writings, pharaohs are said to be divine, and they are constantly depicted in the company of the deities of the pantheon. Each pharaoh and his predecessors were considered the successors of the gods who had ruled Egypt in mythic prehistory. Living kings were equated with Horus and called the "son" of many male deities, particularly Osiris and Ra; deceased kings were equated with these elder gods. Kings' wives and mothers were likened to many goddesses. The few women who made themselves pharaohs, such as Hatshepsut, connected themselves with these same goddesses while adopting much of the masculine imagery of kingship. Pharaohs had their own mortuary temples where rituals were performed for them during their lives and after their deaths. But few pharaohs were worshipped as gods long after their lifetimes, and non-official texts portray kings in a human light. For these reasons, scholars disagree about how genuinely most Egyptians believed the king to be a god. He may only have been considered divine when he was performing ceremonies. However much it was believed, the king's divine status was the rationale for his role as Egypt's representative to the gods, as he formed a link between the divine and human realms. The Egyptians believed the gods needed temples to dwell in, as well as the periodic performance of rituals and presentation of offerings to nourish them. These things were provided by the cults that the king oversaw, with their priests and laborers. Yet, according to royal ideology, temple-building was exclusively the pharaoh's work, as were the rituals that priests usually performed in his stead. These acts were a part of the king's fundamental role: maintaining maat. The king and the nation he represented provided the gods with maat so they could continue to perform their functions, which maintained maat in the cosmos so humans could continue to live. Although the Egyptians believed their gods to be present in the world around them, contact between the human and divine realms was mostly limited to specific circumstances. In literature, gods may appear to humans in a physical form, but in real life, the Egyptians were limited to more indirect means of communication. The ba of a god was said to periodically leave the divine realm to dwell in the images of that god. By inhabiting these images, the gods left their concealed state and took on a physical form. To the Egyptians, a place or object that was ḏsr—"sacred"—was isolated and ritually pure, and thus fit for a god to inhabit. Temple statues and reliefs, as well as particular sacred animals, like the Apis bull, served as divine intermediaries in this way. Dreams and trances provided a very different venue for interaction. In these states, it was believed, people could come close to the gods and sometimes receive messages from them. Temples, where the state rituals were carried out, were filled with images of the gods.
The most important temple image was the cult statue in the inner sanctuary. These statues were usually less than life-size and made of the same precious materials that were said to form the gods' bodies.[Note 5] Many temples had several sanctuaries, each with a cult statue representing one of the gods in a group such as a family triad. The city's primary god was envisioned as its lord, employing many of the residents as servants in the divine household that the temple represented. The gods residing in the temples of Egypt collectively represented the entire pantheon. But many deities—including some important gods as well as those that were minor or hostile—were never given temples of their own, although some were represented in the temples of other gods. To insulate the sacred power in the sanctuary from the impurities of the outside world, the Egyptians enclosed temple sanctuaries and greatly restricted access to them. People other than kings and high priests were thus denied contact with cult statues. The exception was during festival processions, when the statue was carried out of the temple enclosed in a portable shrine, which usually hid it from public view. People did have less direct means of interaction. The more public parts of temples often incorporated small places for prayer, from doorways to freestanding chapels near the back of the temple building. Communities also built and managed small chapels for their own use, and some families had shrines inside their homes. Egyptian gods were involved in human lives as well as in the overarching order of nature. This divine influence applied mainly to Egypt, as foreign peoples were traditionally believed to be outside the divine order. In the New Kingdom, when other nations were under Egyptian control, foreigners were said to be under the sun god's benign rule in the same way that Egyptians were. Thoth, as the overseer of time, was said to allot fixed lifespans to both humans and gods. Other gods were also said to govern the length of human lives, including Meskhenet and Renenutet, both of whom presided over birth, and Shai, the personification of fate. Thus, the time and manner of death was the main meaning of the Egyptian concept of fate, although to some extent these deities governed other events in life as well. Several texts refer to gods influencing or inspiring human decisions, working through a person's "heart"—the seat of emotion and intellect in Egyptian belief. Deities were also believed to give commands, instructing the king in the governance of his realm and regulating the management of their temples. Egyptian texts rarely mention direct commands given to private persons, and these commands never evolved into a set of divinely enforced moral codes. Morality in ancient Egypt was based on the concept of maat, which, when applied to human society, meant that everyone should live in an orderly way that did not interfere with the well-being of other people. Because deities were the upholders of maat, morality was connected with them. For example, the gods judged humans' moral righteousness after death, and by the New Kingdom, a verdict of innocence in this judgement was believed to be necessary for admittance into the afterlife. In general, however, morality was based on practical ways to uphold maat in daily life, rather than on strict rules that the gods laid out. Humans had free will to ignore divine guidance and the behavior required by maat, but by doing so they could bring divine punishment upon themselves. 
A deity carried out this punishment using its ba, the force that manifested the god's power in the human world. Natural disasters and human ailments were seen as the work of angry divine bas. Conversely, the gods could cure righteous people of illness or even extend their lifespans. Both these types of intervention were eventually represented by deities: Shed, who emerged in the New Kingdom to represent divine rescue from harm, and Petbe, an apotropaic god from the late eras of Egyptian history who was believed to avenge wrongdoing. Egyptian texts take different views on whether the gods are responsible when humans suffer unjustly. Misfortune was often seen as a product of isfet, the cosmic disorder that was the opposite of maat, and therefore the gods were not guilty of causing evil events. Some deities who were closely connected with isfet, such as Set, could be blamed for disorder within the world without placing guilt on the other gods. Some writings do accuse the deities of causing human misery, while others give theodicies in the gods' defense. Beginning in the Middle Kingdom, several texts connected the issue of evil in the world with a myth in which the creator god fights a human rebellion against his rule and then withdraws from the earth. Because of this human misbehavior, the creator is distant from his creation, allowing suffering to exist. New Kingdom writings do not question the just nature of the gods as strongly as those of the Middle Kingdom. They emphasize humans' direct, personal relationships with deities and the gods' power to intervene in human events. People in this era put faith in specific gods who they hoped would help and protect them through their lives. As a result, upholding the ideals of maat grew less important than gaining the gods' favor as a way to guarantee a good life. Even the pharaohs were regarded as dependent on divine aid, and after the New Kingdom came to an end, government was increasingly influenced by oracles communicating the gods' will. Official religious practices, which maintained maat for the benefit of all Egypt, were related to, but distinct from, the religious practices of ordinary people, who sought the gods' help for their personal problems. Official religion involved a variety of rituals, based in temples. Some rites were performed every day, whereas others were festivals, taking place at longer intervals and often limited to a particular temple or deity. The gods received their offerings in daily ceremonies, in which their statues were clothed, anointed, and presented with food as hymns were recited in their honor. These offerings, in addition to maintaining maat for the gods, celebrated deities' life-giving generosity and encouraged them to remain benevolent rather than vengeful. Festivals often involved a ceremonial procession in which a cult image was carried out of the temple in a barque-shaped shrine. These processions served various purposes. In Roman times, when local deities of all kinds were believed to have power over the Nile inundation, processions in many communities carried temple images to the riverbanks so the gods could invoke a large and fruitful flood. Processions also traveled between temples, as when the image of Hathor from Dendera Temple visited her consort Horus at the Temple of Edfu. Rituals for a god were often based in that deity's mythology. Such rituals were meant to be repetitions of the events of the mythic past, renewing the beneficial effects of the original events. 
In the Khoiak festival in honor of Osiris, his death and resurrection were ritually reenacted at a time when crops were beginning to sprout. The returning greenery symbolized the renewal of the god's own life. Personal interaction with the gods took many forms. People who wanted information or advice consulted oracles, run by temples, that were supposed to convey gods' answers to questions. Amulets and other images of protective deities were used to ward off the demons that might threaten human well-being or to impart the god's positive characteristics to the wearer. Private rituals invoked the gods' power to accomplish personal goals, from healing sickness to cursing enemies. These practices used heka, the same force of magic that the gods used, which the creator was said to have given to humans so they could fend off misfortune. The performer of a private rite often took on the role of a god in a myth, or even threatened a deity, to involve the gods in accomplishing the goal. Such rituals coexisted with private offerings and prayers, and all three were accepted means of obtaining divine help. Prayer and private offerings are generally called "personal piety": acts that reflect a close relationship between an individual and a god. Evidence of personal piety is scant before the New Kingdom. Votive offerings and personal names, many of which are theophoric, suggest that commoners felt some connection between themselves and their gods, but firm evidence of devotion to deities became visible only in the New Kingdom, reaching a peak late in that era. Scholars disagree about the meaning of this change—whether direct interaction with the gods was a new development or an outgrowth of older traditions. Egyptians now expressed their devotion through a new variety of activities in and around temples. They recorded their prayers and their thanks for divine help on stelae. They gave offerings of figurines that represented the gods they were praying to, or that symbolized the result they desired; thus, a relief image of Hathor and a statuette of a woman could both represent a prayer for fertility. Occasionally, a person took a particular god as a patron, dedicating his or her property or labor to the god's cult. These practices continued into the latest periods of Egyptian history. These later eras saw more religious innovations, including the practice of giving animal mummies as offerings to deities depicted in animal form, such as the cat mummies given to the feline goddess Bastet. Some of the major deities from myth and official religion were rarely invoked in popular worship, but many of the great state gods were important in popular tradition. The worship of some Egyptian gods spread to neighboring lands, especially to Canaan and Nubia during the New Kingdom, when those regions were under pharaonic control. In Canaan, the exported deities, including Hathor, Amun, and Set, were often syncretized with native gods, who in turn spread to Egypt. The Egyptian deities may not have had permanent temples in Canaan, and their importance there waned after Egypt lost control of the region. In contrast, many temples to the major Egyptian gods and deified pharaohs were built in Nubia. After the end of Egyptian rule there, the imported gods, particularly Amun and Isis, were syncretized with local deities and remained part of the religion of Nubia's independent Kingdom of Kush. 
These gods were incorporated into the Nubian ideology of kingship much as they were in Egypt, so that Amun was considered the divine father of the king and Isis and other goddesses were linked with the Nubian queen, the kandake. Some deities reached farther. Taweret became a goddess in Minoan Crete, and Amun's oracle at Siwa Oasis was known to and consulted by people across the Mediterranean region. Under the Greek Ptolemaic Dynasty and then Roman rule, Greeks and Romans introduced their own deities to Egypt. These newcomers equated the Egyptian gods with their own, as part of the Greco-Roman tradition of interpretatio graeca. The worship of the native gods was not swallowed up by that of foreign ones. Instead, Greek and Roman gods were adopted as manifestations of Egyptian ones. Egyptian cults sometimes incorporated Greek language, philosophy, iconography, and even temple architecture. Meanwhile, the cults of several Egyptian deities—particularly Isis, Osiris, Anubis, the form of Horus named Harpocrates, and the fused Greco-Egyptian god Serapis—were adopted into Roman religion and spread across the Roman Empire. Roman emperors, like Ptolemaic kings before them, invoked Isis and Serapis to endorse their authority, inside and outside Egypt. In the empire's complex mix of religious traditions, Thoth was transmuted into the legendary esoteric teacher Hermes Trismegistus, and Isis, who was venerated from Britain to Mesopotamia, became the focus of a Greek-style mystery cult. Isis and Hermes Trismegistus were both prominent in the Western esoteric tradition that grew from the Roman religious world. Temples and cults in Egypt itself declined as the Roman economy deteriorated in the third century AD, and beginning in the fourth century, Christians suppressed the veneration of Egyptian deities. The last formal cults, at Philae, died out in the fifth or sixth century.[Note 6] Most beliefs surrounding the gods themselves disappeared within a few hundred years, lingering only in magical texts into the seventh and eighth centuries. In contrast, many of the practices involved in their worship, such as processions and oracles, were adapted to fit Christian ideology and persisted as part of the Coptic Church. Given the great changes and diverse influences in Egyptian culture since that time, scholars disagree about whether any modern Coptic practices are descended from those of pharaonic religion. But many festivals and other traditions of modern Egyptians, both Christian and Muslim, resemble the worship of their ancestors' gods. In the late 20th century, several new religious groups going under the blanket term of Kemetism have formed based on different reconstructions of ancient Egyptian religion.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-42] | [TOKENS: 17273] |
United States

The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. By 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890.
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs.

Etymology

Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first name of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" rarely refers to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America.

History

The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million.
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all of the Thirteen Colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery strengthened in Southern states, where widespread use of inventions such as the cotton gin (1793) had made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. A dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. withdrawing entirely in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état.

Geography

The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean.
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change in the country, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also among the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environmental-related issues. The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index.

Government and politics

The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform while the latter is perceived as relatively conservative in its platform. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024[update]. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid later resumed. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches: the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and the Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies, from the local to the national level, in the United States. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. The state police departments have authority in their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing federal courts' rulings and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest in nominal terms since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP); a quick arithmetic cross-check of these figures, and of the military-spending figures above, appears below. From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. Treasuries market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including its USMCA partners Canada and Mexico. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value of output, after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace, and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and had the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated: in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five children, or approximately 13 million, experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right.
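The spending and output figures quoted above are mutually consistent, as a quick back-of-the-envelope check shows. The sketch below is a minimal illustration in Python; the variable names are ours, and the inputs are simply the figures quoted in the text (about $997 billion in 2024 military spending, 37% of the global total, 3.4% of GDP, and a GDP of just over $29 trillion at roughly 25% of nominal world output):

```python
# Cross-check of the spending and output figures quoted above (2024, approximate).
us_military_spend = 997e9   # dollars; "by far the largest amount of any country"
us_gdp = 29e12              # dollars; "more than $29 trillion"

# $997 billion / $29 trillion ~= 3.4% of GDP, matching the quoted share.
print(f"Military spending as a share of GDP: {us_military_spend / us_gdp:.1%}")

# If that is 37% of global military spending, the implied world total is ~$2.7 trillion.
print(f"Implied global military spending: ${us_military_spend / 0.37 / 1e12:.2f} trillion")

# If $29 trillion is "over 25%" of nominal world output, world GDP is somewhat below ~$116 trillion.
print(f"Implied ceiling on nominal world GDP: ${us_gdp / 0.25 / 1e12:.0f} trillion")
```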
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and a lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and in scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country, and ranks ninth in R&D spending as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, among them Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States' private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuels; its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases, behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. As of 2022, the U.S. was among the top ten countries in vehicle ownership per capita, with 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks; about 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to those of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia and the Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the world's 50 busiest airports, 16 are in the United States, including five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities; there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight, in contrast to the more passenger-centered rail networks of Europe. Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, the busiest among them being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S.
population had a net gain of one person every 16 seconds, or about 5,400 people per day (see the arithmetic check below). In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, the country had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group, at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group, at 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group, at 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English has long been the country's de facto common language, and in 2025 Executive Order 14224 declared it the official language. However, the U.S. has never had a statutory official language, as Congress has never passed a law designating English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken at home by 1 million people in 2010, fell to 857,000 speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.
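Both Census Bureau figures quoted above reduce to simple arithmetic. The following minimal sketch (in Python, with illustrative variable names; the inputs are the numbers quoted in the text) verifies the per-day population gain and the growth rate since the 2020 census:

```python
# Verify the population-clock and census-growth figures quoted above.
SECONDS_PER_DAY = 24 * 60 * 60        # 86,400 seconds in a day

# One net person every 16 seconds:
people_per_day = SECONDS_PER_DAY / 16
print(f"Net gain per day: {people_per_day:,.0f} people")    # -> 5,400

# Growth from the April 2020 census count to the 2025 estimate:
census_2020 = 331_449_281
estimate_2025 = 341_784_857
growth = (estimate_2025 - census_2020) / census_2020
print(f"Growth since the 2020 census: {growth:.1%}")        # -> 3.1%
```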
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level and an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been widening ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries, for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, President Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per public elementary and secondary school student in the 2020–2021 school year. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates (who have won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditures on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than any other nation in combined public and private spending. Colleges and universities directly funded by the federal government charge no tuition and are limited to military personnel and government employees; they include the U.S. service academies, the Naval Postgraduate School, and the military staff colleges. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants, with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lèse-majesté are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants; whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government, established in 1965 to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Federal Council on the Arts and the Humanities, and the Institute of Museum and Library Services. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by it. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851).
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles arrived early enough that they could have made a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual-arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks.
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade; minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W. C. Handy and Jelly Roll Morton; Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars such as Frank Sinatra and Elvis Presley became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift, and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. One study found that proximity to Manhattan's Garment District has been synonymous with American fashion since the industry's beginnings in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford, and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S.
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful movies in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which have come to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World foods, especially pumpkin, corn, and potatoes, with turkey as the main course, form a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine, as well as pasta dishes freely adapted from Italian sources, are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth; it would become the United States' most prestigious culinary school, where many of the most talented American chefs study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020 and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin-starred restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine.
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose numbers expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts, and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League (NFL), the National Basketball Association, Major League Baseball, Major League Soccer (MLS), and the National Hockey League. All of these leagues enjoy wide-ranging domestic media coverage, and, except for MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL has the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. At the collegiate level, earnings for member institutions exceed $1 billion annually, and college football and basketball attract large audiences; the NCAA March Madness tournament and the College Football Playoff are among the most watched national sporting events. In the U.S., intercollegiate sports serve as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country.
The United States is also home to a number of prestigious international sporting events, including the America's Cup, the World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup four times and the Olympic soccer tournament five times. The 1999 FIFA Women's World Cup was hosted by the United States; its final match was attended by 90,185 spectators, then a world record crowd for a women's sporting event. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Gelotology] | [TOKENS: 186] |
Contents Gelotology Gelotology (from the Greek γέλως gelos "laughter") is the study of laughter and its effects on the body, from a psychological and physiological perspective. Its proponents often advocate the induction of laughter on therapeutic grounds in alternative medicine. The field of study was pioneered by William F. Fry of Stanford University. History Gelotology was first studied by psychiatrists, although some doctors in antiquity recommended laughter as a form of medicine. It was initially deprecated by most other physicians, who doubted that laughter possessed analgesic qualities. One early clinical study demonstrated the effectiveness of laughter in this setting, showing that laughter could help patients with atopic dermatitis respond less to allergens. Other studies have shown that laughter can help alleviate stress and pain and can assist cardiopulmonary rehabilitation. Types of therapy Several types of therapy have emerged that use laughter to help patients. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEApte1988-111] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. It is a comic triple dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punning phrase "tertia deducta" can be translated as "with one-third off (in price)" or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure, and a number of different authors have been attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions of the book documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in literacy among the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why does he tell them when he does? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world.
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience of hearing an off-colour joke, where a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same; however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical to explicitly sexual humour, signalled openness on the part of the waitress to a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh? What do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. Forwarding an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Many joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example, the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists.
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, even though it makes it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry.
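The multiple-heading problem described above can be made concrete with a minimal sketch. The example below is illustrative only: the joke and the motif labels are invented placeholders, not entries from the actual Thompson Motif Index, which uses its own alphanumeric codes.

```python
from collections import defaultdict

# One short narrative carries motifs of unequal kinds: an actor, an item
# and an incident. (The labels are invented, not real Thompson codes.)
joke = "A fool tries to carry sunlight into a windowless house in a sack."
motifs = ["actor: numbskull", "item: sack", "incident: carrying light"]

# A motif index files the text under every motif it contains ...
index = defaultdict(list)
for motif in motifs:
    index[motif].append(joke)

# ... so the same narrative is reachable under three unrelated headings,
# and a strictly hierarchical catalogue must arbitrarily promote one of them.
for motif in sorted(index):
    print(f"{motif} -> {index[motif]}")
```

Exactly this ambiguity, one text with several equally valid filing points, is the source of the confusion noted above about where to order an item and where to find it.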
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
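One prerequisite for answering such questions empirically is a stable description of the jokes themselves, and the GTVH label described above is one candidate. As a minimal sketch, assuming a simple Python representation (the field values, the hierarchy rule and the similarity measure are illustrative simplifications, not Attardo and Raskin's own formalism), a six-KR label might be built and compared like this:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class GTVHLabel:
    """One value per Knowledge Resource; TA and LM may be empty (None)."""
    SO: str            # Script Opposition
    LM: Optional[str]  # Logical Mechanism
    SI: str            # Situation
    TA: Optional[str]  # Target
    NS: str            # Narrative Strategy
    LA: str            # Language

def similarity(a: GTVHLabel, b: GTVHLabel) -> int:
    """Count matching KRs: the more matches, the more similar the jokes."""
    da, db = asdict(a), asdict(b)
    return sum(da[k] == db[k] for k in da)

# The hierarchy partially restricts lower-level KRs: per the text's own
# example, a lightbulb-joke situation (SI) forces the riddle form (NS).
def check_hierarchy(label: GTVHLabel) -> None:
    if label.SI == "changing a lightbulb":
        assert label.NS == "riddle", "lightbulb jokes must be riddles"

a = GTVHLabel(SO="dumb/smart", LM="figure-ground reversal",
              SI="changing a lightbulb", TA="Poles",
              NS="riddle", LA="neutral wording")
b = GTVHLabel(SO="dumb/smart", LM=None,
              SI="changing a lightbulb", TA="psychiatrists",
              NS="riddle", LA="neutral wording")
check_hierarchy(a); check_hierarchy(b)
print(similarity(a, b))  # 4 of 6 KRs match: two closely related jokes
```

Counting matching KRs is the crudest possible similarity measure, but it shows why such labels interest psychologists as well as linguists: comparable labels would allow measurement tools to be built from verifiably comparable jokes, a point taken up below.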
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory, developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that neither smiles nor laughter is always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and using the same jokes across languages would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much given to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline.
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested in recent decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index, first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now?
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)", to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence.
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
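To illustrate the template-with-finite-options approach described above, here is a minimal sketch of a toy punning program. It is a deliberately dumb specimen of the genre, not a reconstruction of any published system; the template and the option table are invented for this example.

```python
import random

# A fixed template plus a small hand-built table of pairings. The program
# "knows" nothing about meaning; whatever humour results lives in the table.
TEMPLATE = "What do you call a {noun} that {trait}? A {pun}!"

# Each entry pairs a set-up with a pun chosen by a human in advance.
PUN_TABLE = [
    {"noun": "fish", "trait": "wears a crown", "pun": "ruler of the plaice"},
    {"noun": "bear", "trait": "has no teeth", "pun": "gummy bear"},
    {"noun": "snake", "trait": "works for the government", "pun": "civil serpent"},
]

def make_joke() -> str:
    """Fill the template from the finite option table; no semantics involved."""
    return TEMPLATE.format(**random.choice(PUN_TABLE))

print(make_joke())
```

Nothing in this sketch approximates the encyclopaedic scripts that the SSTH and GTVH require, which is why such toy systems are dismissed above as inadequate to any serious computational treatment of humour.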
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/List_of_humor_research_publications] | [TOKENS: 111] |
Contents List of humor research publications This article lists publications in humor research, with brief annotations. The list includes books and scholarly journals that regularly publish articles on humor research. This list is not intended for humorous books and joke collections that do not have any scholarly analysis of humor. Early publications Monographs Collections of scientific works on humor Loizou, E. & Recchia, S. (Eds.) (2019). Research on Young Children's Humor: Theoretical and Practical Implications for Early Childhood Education. Switzerland: Springer.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTERaskin200817/349-116] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality, they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously and spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Numerous joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event… The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example, the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which goes beyond the simple collection and documentation previously undertaken by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, even though it makes it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index. Several difficulties have been identified with these systems for classifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to it. A second problem is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
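To make the multiple-headings problem concrete, here is a minimal sketch in Python, assuming a toy motif vocabulary invented for illustration (real indices use alphanumeric codes, such as the Thompson J-numbers for fools):

from collections import defaultdict

# A toy motif index: each joke is filed under every motif it contains.
index = defaultdict(list)

def classify(joke_id, motifs):
    """File one narrative under each of its motifs."""
    for motif in motifs:
        index[motif].append(joke_id)

# One joke, three co-equal motifs: an actor, an item and an incident.
classify("psychiatrist-lightbulb", [
    "actor: psychiatrist",
    "item: light bulb",
    "incident: absurd task",
])

# The same text is now retrievable under three separate headings,
# which is the ordering and lookup ambiguity described above.
for motif, jokes in sorted(index.items()):
    print(motif, jokes)

The sketch is deliberately naive, but it shows why motif-based filing offers no single, canonical place to put (or find) a narrative that contains several motifs at once.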
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are script opposition (SO), logical mechanism (LM), situation (SI), target (TA), narrative strategy (NS) and language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
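Returning briefly to the GTVH, a rough sketch may help show how such a concatenated six-KR label can be handled in practice. The record layout and example values below are an illustration of the idea only, not Attardo and Raskin's own notation:

from dataclasses import dataclass, astuple
from typing import Optional

@dataclass(frozen=True)
class GTVHLabel:
    script_opposition: str            # SO
    logical_mechanism: Optional[str]  # LM (may be empty)
    situation: str                    # SI
    target: Optional[str]             # TA (may be empty)
    narrative_strategy: str           # NS
    language: str                     # LA

def check_hierarchy(label):
    # One restriction named above: a lightbulb situation (SI)
    # forces the riddle narrative strategy (NS).
    if label.situation == "lightbulb" and label.narrative_strategy != "riddle":
        raise ValueError("a lightbulb joke (SI) must be a riddle (NS)")

def shared_krs(a, b):
    """Count matching KR values; more matches means more similar jokes."""
    return sum(x == y for x, y in zip(astuple(a), astuple(b)))

j1 = GTVHLabel("dumb/smart", "reversal", "lightbulb", "psychiatrists", "riddle", "English")
j2 = GTVHLabel("dumb/smart", "reversal", "lightbulb", "violists", "riddle", "English")
check_hierarchy(j1)
check_hierarchy(j2)
print(shared_krs(j1, j2))  # 5 of 6: the jokes differ only in their target (TA)

This mirrors the two uses named above: the hierarchy check enforces a restriction between KRs, and the label comparison grades two jokes as more or less similar by how many KR values they share.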
A recent review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory, developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that neither smiles nor laughter is always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, no two tools use the same jokes (and across languages this would not even be feasible), so how does one determine that the assessment objects are comparable? Then there is the question of whom one asks to rate the sense of humour of an individual: the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much given to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are strewn with pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in humour research. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study: linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying the two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later, the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as the Logical Mechanism (LM), referring to the mechanism which connects the different linguistic scripts in the joke, and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: the discourse analysis and the conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes are one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre, they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role as interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs exploring the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor" (1993), to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study embracing the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes, because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, apology and tickling. As such, the study of laughter is a secondary, albeit entertaining, perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning, because punning lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH/GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well"; toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. |
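Since the passage above describes the template-driven punning programs in some detail, a deliberately primitive sketch can make the point. Everything here (the template, the option table, the puns themselves) is invented for illustration and stands in for no actual system:

import random

# A finite set of pre-defined punning options; the program cannot
# produce anything that is not already on this list.
OPTIONS = [
    {"animal": "fish", "trait": "no eyes", "pun": "a fsh"},
    {"animal": "bear", "trait": "no teeth", "pun": "a gummy bear"},
    {"animal": "cow", "trait": "no legs", "pun": "ground beef"},
]

TEMPLATE = "Q: What do you call a {animal} with {trait}? A: {pun}!"

def generate_joke():
    # "Generation" is mere selection: the program has no semantic
    # scripts and no knowledge of why the filled template is funny.
    return TEMPLATE.format(**random.choice(OPTIONS))

print(generate_joke())

The absence of intelligence is the point: the program slots pre-written options into a fixed frame, which is exactly why such toy systems cannot scale toward the script-based understanding the SSTH/GTVH theories require.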
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ed_Wynn] | [TOKENS: 2174] |
Contents Ed Wynn Isaiah Edwin Leopold (November 9, 1886 – June 19, 1966), better known as Ed Wynn, was an American actor and comedian. He began his career in vaudeville in 1903 and was known for his Perfect Fool comedy character, his pioneering radio show of the 1930s, his performances in classic Disney films such as Alice in Wonderland and Mary Poppins, and his later career as a dramatic actor, which continued into the 1960s. Wynn's variety show (1949–1950), The Ed Wynn Show, won a Peabody Award and an Emmy Award. Late in his career, he began alternating his comedic work with acclaimed dramatic performances, earning nominations for a Golden Globe and a BAFTA award for The Great Man and for the Academy Award for Best Supporting Actor for The Diary of Anne Frank. Background Wynn was born Isaiah Edwin Leopold in Philadelphia, Pennsylvania, to a Jewish family on November 9, 1886. His father, Joseph, a milliner, was born in Bohemia. His mother, Minnie Greenberg, of Turkish and Romanian descent, came from Istanbul (then Constantinople). Wynn attended Central High School in Philadelphia until age 15. He ran away from home in his teens, worked as a hat salesman and as a utility boy, and eventually adapted his middle name "Edwin" into his new stage name, "Ed Wynn". Career Wynn began his career in vaudeville in 1903 and was a star of the Ziegfeld Follies starting in 1914. During The Follies of 1915, W. C. Fields allegedly caught Wynn mugging for the audience under the table during Fields's Pool Room routine and knocked Wynn unconscious with his cue. Wynn wrote, directed, and produced many Broadway shows in the subsequent decades, and was known for his silly costumes and props as well as for the giggly, wavering voice he developed for the 1921 musical revue The Perfect Fool. Wynn became a very active member of The Lambs Club in 1919. In the early 1930s, Wynn hosted the radio show The Fire Chief, heard in North America on Tuesday nights and sponsored by Texaco gasoline. Like many former vaudeville performers who turned to radio in the same decade, the stage-trained Wynn insisted on playing for a live studio audience, doing each program as an actual stage show, using visual bits to augment his written material and, in his case, wearing a colorful costume with a red fireman's helmet. Wynn usually bounced his gags off announcer/straight man Graham McNamee; Wynn's customary opening, "Tonight, Graham, the show's gonna be different," became one of the most familiar tag-lines of its time. A sample joke: "Graham, my uncle just bought a new second-handed car... he calls it Baby! I don't know, it won't go anyplace without a rattle!"[citation needed] Wynn reprised his Fire Chief radio character in two films, Follow the Leader (1930) and The Chief (1933). Near the height of his radio fame, in 1933, he founded his own short-lived radio network, the Amalgamated Broadcasting System, which lasted only five weeks and nearly destroyed the comedian. According to radio historian Elizabeth McLeod, the failed venture left Wynn deep in debt, divorced and, finally, suffering a nervous breakdown. Wynn was offered the title role of the Wizard in MGM's 1939 screen adaptation of The Wizard of Oz, but turned it down, as did his Ziegfeld contemporary W. C. Fields. The part went to Frank Morgan.[citation needed] Wynn first appeared on television on July 7, 1936, in a brief, ad-libbed spot with Graham McNamee during an NBC experimental television broadcast. 
In the 1949–1950 season, Wynn hosted The Ed Wynn Show on CBS, one of the first network comedy-variety television shows, and won both a Peabody Award and an Emmy Award in 1949. Buster Keaton, Carmen Miranda, Lucille Ball, Desi Arnaz, Hattie McDaniel and The Three Stooges all made guest appearances with Wynn. This was the first CBS variety television show to originate from Los Angeles; it was seen live on the West Coast but distributed via kinescope to the Midwestern and Eastern United States, as the national coaxial cable had yet to be completed. Wynn was also a rotating host of NBC's Four Star Revue from 1950 through 1952. Wynn's third television series, also titled The Ed Wynn Show, was a short-lived situation comedy on NBC's 1958–59 schedule. His son, the actor Keenan Wynn, encouraged him to make a career change rather than retire, and the comedian reluctantly began a career as a dramatic actor in television and films. The father and son appeared in three productions, the first of which was the 1956 Playhouse 90 broadcast of Rod Serling's play Requiem for a Heavyweight. Ed was terrified of straight acting and kept goofing his lines in rehearsal. When the producers wanted to fire him, star Jack Palance said he would quit if they fired Ed. (However, unbeknownst to Wynn, supporting player Ned Glass was his secret understudy in case something did happen before air time.) On live broadcast night, Wynn surprised everyone with his pitch-perfect performance and his quick ad libs to cover his mistakes. A dramatization of what happened during the production was later staged as an April 1960 Westinghouse Desilu Playhouse episode, The Man in the Funny Suit, starring both senior and junior Wynns, with key figures involved in the original production also portraying themselves (including Rod Serling and director Ralph Nelson). Ed and his son also worked together in the Jose Ferrer film The Great Man, with Ed again proving his unexpected skills in drama.[citation needed] Requiem established Wynn as a serious dramatic actor who could easily hold his own with the best. His performance in The Diary of Anne Frank (1959) received an Academy Award nomination for Best Supporting Actor. That same year, Wynn appeared on Serling's TV series The Twilight Zone in "One for the Angels"; Serling, a longtime admirer, had written that episode especially for him, and in 1963 Wynn starred in the twelfth episode of season 5, "Ninety Years Without Slumbering". For the rest of his life, Wynn skillfully moved between comic and dramatic roles. He appeared in feature films and anthology television, endearing himself to new generations of fans.[citation needed] Wynn was caricatured in the Merrie Melodies cartoon shorts Shuffle Off to Buffalo (1933), I've Got to Sing a Torch Song (1933), and as a pot of jam in the Betty Boop short Betty in Blunderland (1934). Wynn appeared as the Fairy Godfather in Jerry Lewis's Cinderfella. His performance as Paul Beaseley in The Great Man (1956) earned him nominations for a Golden Globe Award for "Best Supporting Actor" and a BAFTA Award for "Best Foreign Actor". He later received his first (and only) nomination for an Academy Award for Best Supporting Actor for his role as Mr. Dussell in The Diary of Anne Frank (1959). Six years later he appeared as the blind man, Old Aram, in the Bible epic The Greatest Story Ever Told. 
Wynn provided the voice and live-action reference for the Mad Hatter in Walt Disney's film Alice in Wonderland (1951) and played The Toymaker alongside Annette Funicello and Tommy Sands in the Christmas operetta film Babes in Toyland, released in 1961. In Walt Disney's Mary Poppins (1964), Wynn played the eccentric Uncle Albert, floating around just beneath the ceiling in uncontrollable mirth and singing "I Love to Laugh". Re-teaming with Disney the following year in That Darn Cat! (1965), featuring Dean Jones and Hayley Mills, Wynn filled out the character of Mr. Hofstedder, the watch jeweler, with his bumbling charm. He also had brief roles in The Absent Minded Professor (as the fire chief, in a scene alongside his son Keenan Wynn, who played the film's antagonist) and Son of Flubber (as county agricultural agent A.J. Allen). Wynn's final performance, as Rufus in Walt Disney's The Gnome-Mobile, was released a year and one month after his death. In addition to Disney films, Wynn was also an actor in the Disneyland production The Golden Horseshoe Revue. Personal life Wynn was married three times. He first married actress Hilda Keenan on September 5, 1914. They eventually divorced on May 13, 1937, after almost 23 years of marriage. Together, they had a son, actor Keenan Wynn. Wynn married his second wife, Frieda Mierse, on June 25, 1937, but they divorced only two years later, on December 12, 1939. He married his third and final wife, Dorothy Elizabeth Nesbitt, on July 31, 1946. She filed for divorce on February 1, 1955, and it was finalized a month later, on March 1. Wynn was a Freemason at Lodge No. 9 in Pennsylvania. Death Wynn died on June 19, 1966, in Beverly Hills, California, of esophageal cancer, at age 79. He is interred at Forest Lawn Memorial Park in Glendale. Wynn's bronze grave marker reads: Dear God: Thanks... Ed Wynn Red Skelton, who was discovered by Wynn, stated: "His death is the first time he ever made anyone sad." Legacy Wynn's distinctive voice continues to be emulated by countless actors and comedians, including Alan Tudyk for the character King Candy in Disney's animated film Wreck-It Ralph. Wynn was posthumously named a Disney Legend on August 10, 2013. In the graphic adventure game King's Quest VI, the character Jollo is based on his style.[citation needed] |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEGilesOxford1970Attardo2008116–117-113] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour: the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick performers work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its written record dates to the Old Babylonian period, and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. It is a comic triple dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punning phrase "tertia deducta" can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure, and it is attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both the lowbrow and the highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree, in one form or another, to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
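As a schematic sketch (a simplification of my own, not Sacks's notation), the three serially ordered sequences can be modelled as stages that must occur in fixed order, with the formulaic frames quoted above acting as the key into the first stage:

from enum import Enum, auto

class Stage(Enum):
    PREFACE = auto()   # framing: keys the audience in
    TELLING = auto()   # the narrative itself
    RESPONSE = auto()  # laughter, groan, or dismissal

# Formulaic openings like those quoted above.
FRAMES = ("have you heard the one", "reminds me of a joke", "so, a lawyer")

def is_framed(utterance):
    """Does the utterance open the preface sequence?"""
    return utterance.lower().startswith(FRAMES)

def tell(preface, telling, response):
    # Without an accepted frame there is no joke-telling, only talk.
    if not is_framed(preface):
        raise ValueError("no frame: the audience was never keyed to a joke")
    for stage, text in [(Stage.PREFACE, preface),
                        (Stage.TELLING, telling),
                        (Stage.RESPONSE, response)]:
        print(stage.name, ":", text)

tell("Have you heard the one about the elephant in the bar?",
     "An elephant walks into a bar...",
     "(laughter)")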
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality, they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour, or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed for any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, in which the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies, where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or, vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
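As a concrete illustration, a joke's six-KR label can be treated as a simple record, with a hierarchy constraint checked on construction and similarity computed by comparing labels position by position. The Python sketch below is a toy under those assumptions: the single constraint encoded (a light-bulb Situation forcing the riddle Narrative Strategy) is the one named in the text, while the sample KR values themselves are invented.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class GTVHLabel:
    # The six Knowledge Resources; TA and LM may be empty, per the GTVH.
    so: str            # Script Opposition, e.g. "smart/dumb"
    lm: Optional[str]  # Logical Mechanism
    si: str            # Situation, e.g. "changing a light bulb"
    ta: Optional[str]  # Target
    ns: str            # Narrative Strategy, e.g. "riddle"
    la: str            # Language: the actual verbalisation

    def __post_init__(self):
        # One illustrative hierarchy constraint from the text:
        # a light-bulb joke (SI) is always cast as a riddle (NS).
        if self.si == "changing a light bulb" and self.ns != "riddle":
            raise ValueError("light-bulb jokes must use the riddle strategy")

def similarity(a: GTVHLabel, b: GTVHLabel) -> float:
    """Fraction of the six KRs on which two jokes agree."""
    da, db = asdict(a), asdict(b)
    return sum(da[k] == db[k] for k in da) / len(da)

j1 = GTVHLabel("smart/dumb", "figure-ground", "changing a light bulb",
               "group A", "riddle", "How many ... does it take ...")
j2 = GTVHLabel("smart/dumb", "figure-ground", "changing a light bulb",
               "group B", "riddle", "How many ... does it take ...")
print(similarity(j1, j2))  # 5/6: identical except for the Target KR
```

On this toy measure, two light-bulb jokes that differ only in Target come out 5/6 similar, matching the intuition that they are "the same joke" retold about a different group.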
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory, developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Alternatively, the laugh can be measured to gauge the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, no two tools use the same jokes, and using the same jokes across languages would not be feasible, so how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are strewn with pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study: linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
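FACS codes facial expressions as combinations of numbered action units (AUs); a smile combining the lip-corner puller (AU12) with the cheek raiser (AU6) is conventionally read as a "Duchenne" or enjoyment smile, while AU12 alone reads as a social smile. A minimal Python sketch of that lookup, with invented intensity values and threshold:

```python
# Minimal FACS-style smile lookup. AU12 (lip corner puller) alone is read
# here as a social smile; AU12 together with AU6 (cheek raiser) as a
# Duchenne (enjoyment) smile. Intensities and threshold are invented.
def classify_smile(aus: dict[int, float], threshold: float = 0.5) -> str:
    active = {au for au, intensity in aus.items() if intensity >= threshold}
    if 12 not in active:
        return "no smile"
    return "Duchenne smile" if 6 in active else "social smile"

print(classify_smile({6: 0.8, 12: 0.9}))  # Duchenne smile
print(classify_smile({12: 0.7}))          # social smile
print(classify_smile({4: 0.6}))           # no smile
```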
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested in recent decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. Although a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as the Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes are one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
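The SSTH's two conditions (the text must be compatible, fully or in part, with two distinct scripts, and those two scripts must be opposed) lend themselves to a toy sketch. In the Python fragment below, scripts are mere keyword sets, compatibility is crude keyword overlap, and the opposition table is hand-made; real script representations are far richer. The sample text paraphrases Raskin's much-analysed doctor's-wife joke.

```python
# Toy SSTH check: a text is a joke candidate if it is compatible with two
# scripts (here: crude keyword overlap) and those scripts are "opposed".
# Scripts, keywords and oppositions are invented for illustration only.
SCRIPTS = {
    "DOCTOR": {"doctor", "patient", "whisper", "bronchial"},
    "LOVER":  {"whisper", "young", "wife", "come", "closer"},
}
OPPOSED = {("DOCTOR", "LOVER")}  # a sex / no-sex style opposition

def joke_candidate(text: str) -> bool:
    words = set(text.lower().split())
    active = [name for name, kw in SCRIPTS.items() if len(words & kw) >= 2]
    return any((a, b) in OPPOSED or (b, a) in OPPOSED
               for i, a in enumerate(active) for b in active[i + 1:])

# Paraphrase of Raskin's doctor's-wife joke activates both scripts:
print(joke_candidate(
    "is the doctor at home the patient asked in his bronchial whisper "
    "no the doctor's young wife said come right in"))  # True
```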
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)", to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective: "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study reflecting the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes, because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary, albeit entertaining, perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning, because punning lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and to understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Although the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts currently underway. |
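The "template with a finite set of pre-defined punning options" approach is simple enough to sketch. Everything in the Python fragment below (the template, the word table, the pairings) is invented, a toy of exactly the kind the text dismisses, shown only to make the mechanism concrete: the program exhibits no understanding, it merely fills slots.

```python
import random

# Toy template-based pun "generator": one fixed template filled from a
# hand-made table of pre-defined options. All entries are invented.
PUN_TABLE = [
    ("fish", "musician", "he had great scales"),
    ("banker", "baker", "he kneaded the dough"),
    ("clock", "gossip", "its hands were always busy"),
]

TEMPLATE = "Why was the {a} mistaken for a {b}? Because {reason}."

def generate_pun(rng=None):
    rng = rng or random.Random()
    a, b, reason = rng.choice(PUN_TABLE)
    return TEMPLATE.format(a=a, b=b, reason=reason)

print(generate_pun(random.Random(0)))
# e.g. "Why was the banker mistaken for a baker? Because he kneaded the dough."
```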
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computational_humor] | [TOKENS: 1084] |
Computational humor Computational humor is a branch of computational linguistics and artificial intelligence which uses computers in humor research. It is a relatively new area, with the first dedicated conference organized in 1996. The first "computer model of a sense of humor" was suggested by Suslov as early as 1992. Investigation of the general scheme of information processing shows the possibility of a specific malfunction, conditioned by the need to quickly delete a false version from consciousness. This specific malfunction can be identified with the humorous effect on psychological grounds; however, an essentially new ingredient, the role of timing, is added to the well-known role of ambiguity. In biological systems, a sense of humour inevitably develops in the course of evolution, because its biological function consists in quickening the transmission of processed information into consciousness and in a more effective use of brain resources. A realization of this algorithm in neural networks naturally explains the mechanism of laughter: the deletion of a false version corresponds to the zeroing of some part of the neural network, and the excess energy of the neurons is discharged to the motor cortex, arousing muscular contractions. Unfortunately, a practical realization of this algorithm needs extensive databases, whose automatic creation was suggested only recently. As a result, this principal direction was not developed properly, and subsequent investigations (see below) took on a somewhat specialized colouring. Joke generators One approach to the analysis of humor is the classification of jokes. A further step is the attempt to generate jokes based on the rules that underlie classification. Simple prototypes for computer pun generation were reported in the early 1990s, based on a natural language generator program, VINCI. In their 1994 research paper, Graeme Ritchie and Kim Binsted described a computer program, JAPE, designed to generate question-answer-type puns from a general, i.e., non-humorous, lexicon. (The program name is an acronym for "Joke Analysis and Production Engine".) A widely cited example produced by JAPE is: "What do you get when you cross a sheep and a kangaroo? A woolly jumper." Since then the approach has been improved, and the latest report, dated 2007, describes the STANDUP joke generator, implemented in the Java programming language. The STANDUP generator was tested on children within the framework of analyzing its usability for language skills development in children with communication disabilities, e.g., because of cerebral palsy. (The project name is an acronym for "System To Augment Non-speakers' Dialog Using Puns" and an allusion to standup comedy.) Children responded to this "language playground" with enthusiasm and showed marked improvement on certain types of language tests. The two young people who used the system over a ten-week period regaled their peers, staff, family and neighbors with jokes such as: "What do you call a spicy missile? A hot shot!" Their joy and enthusiasm at entertaining others was inspirational. Stock and Strapparava described a program to generate funny acronyms. Joke recognition A statistical machine learning algorithm to detect whether a sentence contained a "That's what she said" double entendre was developed by Kiddon and Brun (2011). There is an open-source Python implementation of Kiddon & Brun's TWSS system. A program to recognize knock-knock jokes was reported by Taylor and Mazlack. This kind of research is important in the analysis of human–computer interaction. 
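The recognition side admits a similarly minimal sketch. The Python fragment below trains a tiny naive Bayes classifier over word counts to separate joke-like from non-joke sentences; the training sentences are made up, and this is not Kiddon and Brun's actual system, whose features and data were far more elaborate.

```python
import math
from collections import Counter

# Tiny naive Bayes joke/non-joke recogniser. The training sentences are
# invented; real systems (e.g. Kiddon & Brun 2011) use far richer features.
TRAIN = [
    ("why did the chicken cross the road", 1),
    ("what do you call a fish with no eyes", 1),
    ("knock knock who is there", 1),
    ("the meeting is rescheduled to friday", 0),
    ("please submit the report by noon", 0),
    ("the train departs from platform four", 0),
]

counts = {0: Counter(), 1: Counter()}
priors = Counter(label for _, label in TRAIN)
for text, label in TRAIN:
    counts[label].update(text.split())
vocab = set(counts[0]) | set(counts[1])

def log_prob(text: str, label: int) -> float:
    total = sum(counts[label].values())
    score = math.log(priors[label] / len(TRAIN))
    for w in text.split():
        # add-one smoothing over the shared vocabulary
        score += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return score

def is_joke(text: str) -> bool:
    return log_prob(text, 1) > log_prob(text, 0)

print(is_joke("why did the fish cross the road"))  # True
print(is_joke("please submit the train report"))   # False
```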
An application of machine learning techniques to distinguish joke texts from non-jokes was described by Mihalcea and Strapparava (2006). Takizawa et al. (1996) reported on a heuristic program for detecting puns in the Japanese language. Applications A possible application for assistance in language acquisition is described in the section "Joke generators" above. Another envisioned use of joke generators is in situations that call for a steady supply of jokes, where quantity is more important than quality. Another obvious, yet remote, direction is automated joke appreciation. It is known that humans interact with computers in ways similar to how they interact with other humans, in ways that may be described in terms of personality, politeness, flattery, and in-group favoritism. Therefore, the role of humor in human–computer interaction is being investigated. In particular, humor generation in user interfaces to ease communication with computers has been suggested. Craig McDonough implemented the Mnemonic Sentence Generator, which converts passwords into humorous sentences. Based on the incongruity theory of humor, it is suggested that the resulting meaningless but funny sentences are easier to remember. For example, the password AjQA3Jtv is converted into "Arafat joined Quayle's Ant, while TARAR Jeopardized thurmond's vase," an example obtained by combining politicians' names with verbs and common nouns. Related research John Allen Paulos is known for his interest in the mathematical foundations of humor. His book Mathematics and Humor: A Study of the Logic of Humor demonstrates structures common to humor and the formal sciences (mathematics, linguistics) and develops a mathematical model of jokes based on catastrophe theory. Conversational systems which have been designed to take part in Turing test competitions generally have the ability to learn humorous anecdotes and jokes. Because many people regard humor as something particular to humans, its appearance in conversation can be quite useful in convincing a human interrogator that a hidden entity, which could be a machine or a human, is in fact a human. |
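The character-to-word idea behind such a generator is easy to sketch. In the Python fragment below, uppercase letters map to names, lowercase letters to verbs, and other characters to noun phrases; the word lists and the mapping scheme are invented and are not McDonough's actual Mnemonic Sentence Generator.

```python
# Toy password-to-mnemonic converter in the spirit of McDonough's idea.
# Word lists and mapping scheme are invented for illustration only.
NAMES = {"A": "Arafat", "J": "Jagger", "Q": "Quayle", "T": "Thatcher"}
VERBS = {"j": "juggled", "q": "quoted", "t": "tickled", "v": "vexed"}
NOUNS = {"3": "three violins", "8": "eight teapots"}

def mnemonic(password: str) -> str:
    """Map each character to a word; unmapped characters pass through."""
    words = []
    for ch in password:
        if ch.isupper():
            words.append(NAMES.get(ch, ch))
        elif ch.islower():
            words.append(VERBS.get(ch, ch))
        else:
            words.append(NOUNS.get(ch, ch))
    return " ".join(words) + "."

print(mnemonic("AjQA3Jtv"))
# -> "Arafat juggled Quayle Arafat three violins Jagger tickled vexed."
```

As in the article's example, the output is nonsense; on the incongruity account, that is precisely what is claimed to make it memorable.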
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_ref-91] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality, they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure include: As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs exploring the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: "Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them."

A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor" (1993), to name just a few.

In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective: "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality.

In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes, because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary, albeit entertaining, perspective in an understanding of jokes.

Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence.
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts dealt almost exclusively with punning, because punning lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build (a toy sketch of such a template-driven program appears at the end of this section). More sophisticated computer joke programs have yet to be developed.

Based on our understanding of the SSTH/GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build such wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment.

Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well"; toy systems (i.e. dummy punning programs) are completely inadequate to the task. Although the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts currently underway.
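The sketch below shows what such a template-driven toy system might look like; it is a hedged illustration in Python, not any published punning program. The template and the table of sound-alike pairs are invented, and the program exhibits exactly the limitation described above: it has no linguistic knowledge at all, only slot-filling over a finite set of pre-defined options.

```python
# A toy punning generator in the spirit of the early template-based
# programs described above (an illustrative sketch, not a real system).
# All the "humour" lives in a fixed, hand-written table of sound-alike
# word pairs; the program merely fills a template.
import random

# Pre-defined punning options: (setup word, sound-alike phrase, punchline tail)
PUN_TABLE = [
    ("tuna", "tune a", "piano!"),
    ("lettuce", "let us", "in, it's cold out here!"),
    ("donut", "do not", "ask, it's a secret!"),
]

TEMPLATE = "Knock knock. Who's there? {word}. {word} who? {phrase} {rest}"

def make_pun() -> str:
    """Fill the knock-knock template with one randomly chosen table entry."""
    word, phrase, rest = random.choice(PUN_TABLE)
    return TEMPLATE.format(word=word.capitalize(),
                           phrase=phrase.capitalize(),
                           rest=rest)

print(make_pun())
# e.g. "Knock knock. Who's there? Tuna. Tuna who? Tune a piano!"
```

Note that the program neither knows nor can discover why "tuna" and "tune a" sound alike; every pun it will ever produce was written by hand into its table. Closing that gap is precisely what the ontological semantic approaches mentioned above would be needed for.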