[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_S%C3%A3o_Tom%C3%A9_and_Pr%C3%ADncipe] | [TOKENS: 391]
History of the Jews in São Tomé and Príncipe The history of the Jews in São Tomé and Príncipe dates back to the late 1400s, when Portuguese Jews were expelled from Portugal. History In 1496, King Manuel I of Portugal punished Portuguese Jews who refused to pay a head tax by deporting almost 2,000 Jewish children to São Tomé and Príncipe. The children ranged in age between 2 and 10. The children were forcefully converted, raised as Roman Catholics, and put to work in the sugar trade, where they had to fend off crocodiles. A year after being deported to the islands, only 600 children remained alive. Some of the children tried to retain their Judaism and Jewish heritage. A generation later, when Portugal colonized Brazil, some of the grown children were sent to work in the Brazilian sugar trade. Until the early 1600s, descendants of the deported Jewish children retained some Jewish practices, but by the 18th century the Jewish heritage on the islands had largely dissipated. A new community of Jews was established on the islands in the 19th and 20th centuries with the arrival of a small number of Jewish sugar and cocoa traders. In contemporary São Tomé and Príncipe, there are no practicing Jews. However, living descendants of the Portuguese-Jewish children remain on the islands, where they are visibly distinctive due to their lighter complexions. On July 12, 1995, an international conference was held on the islands' twentieth independence day to commemorate the Portuguese-Jewish children who were deported to the islands in the 15th century. Some of the Jews of São Tomé and Príncipe later settled in the Kingdom of Loango, along the coast of continental Africa in what is now Gabon, the Republic of the Congo, and the Cabinda Province of Angola.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-59] | [TOKENS: 10628]
Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century.
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The same dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897", and that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia, in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$ for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Because it used a binary system rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), Zuse's machine was easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which he founded in Berlin in 1941 as the first company devoted solely to developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing their function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit, half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. They are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, such as by architecture, by size and form factor, or by purpose. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and by which it is provided with data; examples include keyboards, mice, and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include monitors and printers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it reads the instruction from the memory location indicated by the program counter, decodes the instruction into control signals, fetches any data the instruction requires, has the ALU or other hardware carry out the operation, writes the result back to memory or a register, and advances the program counter to the next instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another, even smaller, computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
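As a minimal illustration of such an "embarrassingly parallel" workload, the following Python sketch splits an independent summation across several CPU cores (the chunk boundaries, worker count, and function name are illustrative choices, not from the article):

    import multiprocessing

    # Each chunk can be computed with no coordination between workers,
    # which is what makes the task "embarrassingly parallel".
    def partial_sum(bounds):
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        chunks = [(1, 251), (251, 501), (501, 751), (751, 1001)]
        with multiprocessing.Pool(processes=4) as pool:
            # The partial results combine by simple addition: prints 500500.
            print(sum(pool.map(partial_sum, chunks)))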
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
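The following example is written in the MIPS assembly language (the program itself did not survive extraction; this is a reconstruction that sums the numbers from 1 to 1,000, with register choices and labels that are illustrative rather than canonical):

    begin:
            addi $8, $0, 0        # initialize the running sum to 0
            addi $9, $0, 1        # set the first number to add to 1
    loop:
            slti $10, $9, 1001    # $10 = 1 while the current number is <= 1000
            beq  $10, $0, finish  # once it passes 1000, exit the loop
            add  $8, $8, $9       # add the current number to the sum
            addi $9, $9, 1        # advance to the next number
            j    loop             # repeat the summing process
    finish:
            add  $2, $8, $0       # copy the sum (500500) into output register $2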
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a handheld videogame console) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. The design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within the language, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most digital or analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
========================================
[SOURCE: https://en.wikipedia.org/wiki/HTTP] | [TOKENS: 6213]
HTTP (Hypertext Transfer Protocol) is an application layer protocol in the Internet protocol suite for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser. HTTP is a request–response protocol in the client–server model. A transaction starts with a client submitting a request to the server; the server attempts to satisfy the request and returns a response to the client that describes the disposition of the request and optionally contains a requested resource such as an HTML document or other content. In a common scenario, a web browser acts as the client, and a web server, hosting one or more websites, is the server. A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them, whenever possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. To allow intermediate HTTP nodes (proxy servers, web caches, etc.) to accomplish their functions, some of the HTTP headers (found in HTTP requests/responses) are managed hop-by-hop whereas other HTTP headers are managed end-to-end (managed only by the source client and by the target web server). A web resource is located by a uniform resource locator (URL), using the Uniform Resource Identifier (URI) schemes http and https. URIs are encoded as hyperlinks in HTML documents, so as to form interlinked hypertext documents. Versions The protocol has been revised over time. A version is identified as HTTP/# where # is the version number. This article covers aspects of all versions but provides primary coverage for HTTP/0.9, HTTP/1.0, and HTTP/1.1. Separate articles cover HTTP/2 and HTTP/3 in detail. In HTTP/1.0, a separate TCP connection to the same server is made for every resource request. In HTTP/1.1, by contrast, a TCP connection can be reused to make multiple resource requests (i.e. of HTML pages, frames, images, scripts, stylesheets, etc.). HTTP/1.1 communications therefore experience less latency as the establishment of TCP connections presents considerable overhead, especially under high traffic conditions. Enhancements added with HTTP/2 allow for less latency and, in most cases, higher speeds than HTTP/1.1 communications. HTTP/2 adds support for features such as the multiplexing of concurrent requests over one connection, header compression, and server push. HTTP/3 uses the QUIC transport protocol over UDP instead of TCP, sharing only the IP layer (on which UDP, like TCP, builds) with earlier versions. This slightly improves the average speed of communications and avoids the occasional problem of TCP connection congestion that can temporarily block or slow down the data flow of all its streams (another form of "head of line blocking").
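As a concrete sketch of the request–response exchange described above, the following Python snippet sends one HTTP/1.1 request over a raw TCP socket and reads the response (example.com is a placeholder host; a real client would normally use an HTTP library):

    import socket

    host = "example.com"
    with socket.create_connection((host, 80)) as sock:
        # A request line, headers, and a blank line terminate the request message.
        request = (
            "GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):  # read until the server closes the connection
            response += chunk
    # The first line of the response is the status line, e.g. b"HTTP/1.1 200 OK".
    print(response.split(b"\r\n", 1)[0])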
Use HTTP/2 is supported by 71% of websites (34.1% serving HTTP/2 plus 36.9% serving HTTP/3 with backwards compatibility) and by almost all web browsers (over 98% of users). It is also supported by major web servers over Transport Layer Security (TLS) using the Application-Layer Protocol Negotiation (ALPN) extension, where TLS 1.2 or newer is required. HTTP/3 is used on 36.9% of websites and is at least partially supported by most web browsers (97% of users). HTTP/3 uses QUIC instead of TCP for the underlying transport protocol. Like HTTP/2, it does not obsolete previous major versions of the protocol. In 2019, support for HTTP/3 was first added to Cloudflare and Chrome, and also enabled in Firefox. HTTP/3 has lower latency for real-world web pages and loads faster than HTTP/2, in some cases over three times faster than HTTP/1.1, which is still commonly the only protocol enabled. HTTPS, the secure variant of HTTP, is used by more than 85% of websites. Technology HTTP presumes an underlying and reliable transport layer protocol (§3.3). The standard choice of underlying protocol prior to HTTP/3 is the Transmission Control Protocol (TCP). HTTP/3 uses a different transport layer, QUIC, which provides reliability on top of the unreliable User Datagram Protocol (UDP). HTTP/1.1 and earlier have also been adapted to run over plain unreliable UDP in multicast and unicast situations, forming HTTPMU and HTTPU. These are used in UPnP and the Simple Service Discovery Protocol (SSDP), two protocols usually run on a local area network. HTTP is a stateless application-level protocol, and it requires a reliable network transport connection to exchange data between client and server. In HTTP implementations, TCP/IP connections are established using well-known ports (typically port 80 if the connection is unencrypted or port 443 if the connection is encrypted; see also List of TCP and UDP port numbers) (§4.2.1, §4.2.2). In HTTP/2, a TCP/IP connection plus multiple protocol channels are used. In HTTP/3, the application transport protocol QUIC over UDP is used. Data is exchanged through a sequence of request–response messages carried by a session layer transport connection. An HTTP client initially tries to establish a connection, real or virtual, with a server. An HTTP server listening on the port accepts the connection and then waits for a client's request message. The client sends its HTTP request message. Upon receiving the request, the server sends back an HTTP response message, which includes header(s) plus a body if one is required. The body of this response message is typically the requested resource, although an error message or other information may also be returned. At any time and for many reasons, either the client or the server can close the connection. Closing a connection is usually advertised by one or more HTTP headers in the last request or response (§9.1). In HTTP/0.9, the TCP/IP connection is always closed after the server response has been sent, so it is never persistent. In HTTP/1.0, the TCP/IP connection should always be closed by the server after a response has been sent.[note 2] In HTTP/1.1, a keep-alive mechanism was officially introduced so that a connection could be reused for more than one request/response. Such persistent connections reduce request latency perceptibly because the client does not need to repeat the TCP three-way handshake after the first request has been sent.
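The request–response cycle just described can be demonstrated at the lowest level with a raw TCP socket. The sketch below hand-writes an HTTP/1.1 message in Python; the host is illustrative, and Connection: close is requested so that the end of the response can be detected by the server closing the connection.

```python
import socket

with socket.create_connection(("www.example.com", 80)) as sock:
    request = (
        "GET / HTTP/1.1\r\n"         # start line: method, URI, version
        "Host: www.example.com\r\n"  # mandatory in HTTP/1.1
        "Connection: close\r\n"      # ask the server to close afterwards
        "\r\n"                       # blank line ends the header section
    )
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):  # read until the server closes
        response += chunk

# The first line of the response is the status line, e.g. "HTTP/1.1 200 OK".
print(response.split(b"\r\n", 1)[0].decode("ascii"))
```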
Another positive side effect of persistent connections is that, in general, the connection becomes faster over time due to TCP's slow-start mechanism. HTTP/1.1 also added HTTP pipelining to further reduce lag time when using persistent connections, by allowing clients to send multiple requests before waiting for each response. This optimization was never considered really safe because a few web servers and many proxy servers, especially transparent proxy servers placed on the Internet or in intranets between clients and servers, did not handle pipelined requests properly: they served only the first request and discarded the others, closed the connection because they saw more data after the first request, or even returned responses out of order. Because of this, only HEAD and some GET requests (i.e. limited to real file requests, with URLs lacking a query string used as a command, etc.) could be pipelined in a safe and idempotent mode. After many years of struggling with the problems introduced by enabling pipelining, this feature was first disabled and then removed from most browsers, also because of the announced adoption of HTTP/2. HTTP/2 extended the usage of persistent connections by multiplexing many concurrent requests/responses through a single TCP/IP connection. HTTP/3 does not use TCP/IP connections but QUIC + UDP. In HTTP/0.9, a requested resource was always sent in its entirety. HTTP/1.0 added headers to manage resources cached by a client in order to allow conditional GET requests. HTTP/1.1 introduced, and later versions refined, features such as finer-grained caching controls, chunked transfer encoding, byte-range requests, and content negotiation. As a stateless protocol, HTTP does not require the web server to retain information or status about each user for the duration of multiple requests. If a web application needs an application session, it implements it via HTTP cookies, hidden variables in a web form, or another mechanism. Typically, to start a session, an interactive login is performed, and to end a session, a logout is requested by the user. These kinds of operations use a custom authentication mechanism, not HTTP authentication. HTTP provides multiple authentication schemes, such as basic access authentication and digest access authentication, which operate via a challenge–response mechanism whereby the server identifies and issues a challenge before serving the requested content. HTTP provides a general framework for access control and authentication, via an extensible set of challenge–response authentication schemes, which can be used by a server to challenge a client request and by a client to provide authentication information. The authentication mechanisms described above belong to the HTTP protocol and are managed by client and server HTTP software (if configured to require authentication before allowing client access to one or more web resources), not by the web applications using an application session. The HTTP authentication specification includes realms, which provide an arbitrary, implementation-specific construct for further dividing the resources under a given root URI. The realm value string, if present, is combined with the canonical root URI to form the protection space component of the challenge. This in effect allows the server to define separate authentication scopes under one root URI. The most popular way of establishing an encrypted HTTP connection is HTTPS. Two other methods for establishing an encrypted HTTP connection also exist: Secure Hypertext Transfer Protocol, and using the HTTP/1.1 Upgrade header to specify an upgrade to TLS.
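As an illustration of the challenge–response authentication schemes described above, the sketch below constructs a basic access authentication header; the username, password, and realm are invented for the example, and note that Basic encodes rather than encrypts the credentials, so it is only reasonable over an encrypted connection.

```python
import base64

def basic_auth_header(username, password):
    # Basic auth is the base64 encoding of "username:password".
    token = base64.b64encode(f"{username}:{password}".encode("utf-8"))
    return "Basic " + token.decode("ascii")

# A server challenge might look like:
#   HTTP/1.1 401 Unauthorized
#   WWW-Authenticate: Basic realm="Protected Area"
# The client then repeats the request with an Authorization header:
print("Authorization:", basic_auth_header("alice", "secret"))
# -> Authorization: Basic YWxpY2U6c2VjcmV0
```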
Browser support for these two alternatives is, however, nearly non-existent. Message format This section describes messages for HTTP/1.1. Later versions, HTTP/2 and HTTP/3, use a binary protocol, where headers are encoded in a single HEADERS frame and zero or more CONTINUATION frames using HPACK (HTTP/2) or QPACK (HTTP/3), which both provide efficient header compression. The request or response line from HTTP/1 has also been replaced by several pseudo-header fields, each beginning with a colon (:). At the highest level, a message consists of a header followed by a body. A header consists of lines of ASCII text, each terminated with a carriage return and line feed sequence. The layout for both a request and a response header is the same: a start line, followed by zero or more header field lines, followed by a blank line. A body consists of data in any format, not limited to ASCII. The format must match that specified by the Content-Type header field if the message contains one. A body is optional or, in other words, can be blank. Before HTTP/2, the term entity was used to mean the body plus the header fields that describe the body. In particular, not all headers were considered part of the entity. The term entity header referred to a header that was considered part of the entity, and sometimes the body was called the entity body. Modern documentation uses body and header without using entity. A header field represents metadata about its containing message, such as how the body is encoded (via Content-Encoding), the session verification and identification of the client (as in browser cookies, IP address, user-agent) or the client's anonymity (VPN or proxy masking, user-agent spoofing), how the server should handle data (as in Do-Not-Track or Global Privacy Control), the age (the time it has resided in a shared cache) of the document being downloaded, and much more. Generally, the information of a header field is used by software and not shown to the user. A header field line is formatted as a name-value pair with a colon separator. Whitespace is not allowed around the name, but leading and trailing whitespace is ignored for the value part. Unlike a method name, which must match exactly (case-sensitive), a header field name is matched ignoring case, although it is often shown with each word capitalized. For example, Host: www.example.com and Accept-Language: en are header fields for Host and Accept-Language. The standards do not limit the size of a header field or the number of fields in a message. However, most servers, clients, and proxy software impose limits for practical and security reasons. For example, the Apache 2.3 server by default limits the size of each field to 8190 bytes, and there can be at most 100 header fields in a single request. Although deprecated by RFC 7230, in the past, long lines could be split into multiple lines with a continuation line starting with a space or tab character. A request is sent by a client to a server. The start line includes a method name, a request URI and the protocol version, with a single space between each field. For example, the request start line GET /customer/123 HTTP/1.1 specifies method GET, URI /customer/123 and protocol version HTTP/1.1. Request header fields allow the client to pass additional information beyond the request line, acting as request modifiers (similarly to the parameters of a procedure). They give information about the client, about the target resource, or about the expected handling of the request. In the HTTP/1.1 protocol, all header fields except Host are optional.
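A minimal parser for header field lines, following the rules above (colon separator, case-insensitive names, value whitespace trimmed), might look like the following sketch; the function name and the strategy of lowercasing names for lookup are implementation choices, not part of the protocol.

```python
def parse_header_lines(lines):
    """Parse 'Name: value' lines into a dict keyed by lowercased name."""
    headers = {}
    for line in lines:
        name, sep, value = line.partition(":")
        if not sep:
            raise ValueError(f"malformed header field line: {line!r}")
        # Field names are matched case-insensitively, so normalize them;
        # leading/trailing whitespace around the value is ignored.
        headers[name.lower()] = value.strip()
    return headers

fields = parse_header_lines(["Host: www.example.com", "Accept-Language: en"])
assert fields["host"] == "www.example.com"
assert fields["accept-language"] == "en"
```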
A request line containing only the path name is accepted by servers to maintain compatibility with HTTP clients created before the HTTP/1.0 specification in RFC 1945. The protocol structures transactions as operating on resources. What a resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or to the output of an executable running on the server. A request identifies a method (sometimes informally called a verb) to classify the desired action to be performed on a resource. The HTTP/1.0 specification (§8) defined the GET, HEAD, and POST methods, and listed the PUT, DELETE, LINK and UNLINK methods under additional methods. The HTTP/1.1 specification (§9) then added five new methods: PUT, DELETE, CONNECT, OPTIONS, and TRACE. Any client can use any method, and the server can be configured to support any combination of methods. If a method is unknown to an intermediate, it will be treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined, which allows future methods to be specified without breaking existing infrastructure. For example, WebDAV defined seven new methods and RFC 5789 specified the PATCH method. A general-purpose web server is required to implement at least GET and HEAD; all other methods are considered optional by the specification (§9.1). Method names are case-sensitive (§3, §9.1). This is in contrast to HTTP header field names, which are case-insensitive (§6.3). A request method is safe if a request with that method has no intended effect on the server. The methods GET, HEAD, OPTIONS, and TRACE are defined as safe. In other words, safe methods are intended to be read-only. Safe methods can still have side effects not seen by the client, such as appending request information to a log file or charging an advertising account. In contrast, the methods POST, PUT, DELETE, CONNECT, and PATCH are not safe. They may modify the state of the server or have other effects, such as sending an email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences. Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Careless or deliberately irregular programming can allow GET requests to cause non-trivial changes on the server. This is discouraged because of the problems which can occur when web caching, search engines, and other automated agents make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as https://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article. A properly coded website would require a DELETE or POST method for this action, which non-malicious bots would not issue. One example of this occurring in practice was during the short-lived Google Web Accelerator beta, which prefetched arbitrary URLs on the page a user was viewing, causing records to be automatically altered or deleted en masse. The beta was suspended only weeks after its first release, following widespread criticism. A request method is idempotent if multiple identical requests with that method have the same effect as a single such request. The methods PUT and DELETE, as well as the safe methods, are defined as idempotent.
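The safety and idempotency rules above (elaborated further below) can be captured in a few lines; this is a sketch of bookkeeping a client or intermediary might do, not an API from any particular library. No case normalization is applied, since method names are case-sensitive.

```python
# Safe methods are read-only by intention; the safe methods plus PUT and
# DELETE are idempotent.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS", "TRACE"}
IDEMPOTENT_METHODS = SAFE_METHODS | {"PUT", "DELETE"}

def is_safe(method):
    return method in SAFE_METHODS

def is_idempotent(method):
    return method in IDEMPOTENT_METHODS

assert is_safe("GET") and not is_safe("POST")
assert is_idempotent("DELETE") and not is_idempotent("PATCH")
```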
Safe methods are trivially idempotent, since they are intended to have no effect on the server whatsoever; the PUT and DELETE methods, meanwhile, are idempotent because successive identical requests leave the server in the same state. A website might, for instance, set up a PUT endpoint to modify a user's recorded email address. If this endpoint is configured correctly, any requests which ask to change a user's email address to the same email address which is already recorded (e.g. duplicate requests following a successful request) will have no effect. Similarly, a request to DELETE a certain user will have no effect if that user has already been deleted. In contrast, the methods POST, CONNECT, and PATCH are not necessarily idempotent, so sending an identical POST request multiple times may further modify the state of the server or have further effects, such as sending multiple emails. In some cases this is the desired effect, but in other cases it may occur accidentally. A user might, for example, inadvertently send multiple POST requests by clicking a button again if they were not given clear feedback that the first click was being processed. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may re-submit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once. Note that whether or not a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences if a user agent assumes that repeating the same request is safe when it is not. A request method is cacheable if responses to requests with that method may be stored for future reuse. The methods GET, HEAD, and POST are defined as cacheable. In contrast, the methods PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH are not cacheable. A response is sent to the client by the server. The start line of a response consists of the protocol version, a status code and optionally a reason phrase, with fields separated by a single space character (§2.1). For example, the response start line HTTP/1.1 400 Bad Request specifies protocol version HTTP/1.1, status code 400 and reason phrase Bad Request. Response header fields allow the server to pass additional information beyond the status line, acting as response modifiers. They give information about the server or about further access to the target resource or related resources. Each response header field has a defined meaning, which can be further refined by the semantics of the request method or response status code. The status code is a three-digit decimal integer that represents the disposition of the server's attempt to satisfy the client's request. Generally, a client handles a response primarily based on the status code and secondarily on the response header fields. A client may not understand every status code that a server reports, but it must understand the class as indicated by the first digit and treat an unrecognized code as equivalent to the x00 code of that class. The classes are: 1xx (informational), 2xx (successful), 3xx (redirection), 4xx (client error), and 5xx (server error). The standard reason phrases are only recommendations; a web server is allowed to use a localized equivalent. If a status code indicates a problem, the user agent might display the reason phrase to the user to provide further information about the nature of the problem.
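The class-based fallback for unrecognized status codes can be sketched as follows; the set of recognized codes is an arbitrary example, since each client knows a different subset.

```python
CLASS_NAMES = {
    1: "informational",
    2: "successful",
    3: "redirection",
    4: "client error",
    5: "server error",
}

RECOGNIZED = {200, 301, 302, 304, 400, 401, 403, 404, 500, 503}

def effective_code(status):
    # An unrecognized code is treated as the x00 code of its class.
    return status if status in RECOGNIZED else (status // 100) * 100

print(effective_code(404))      # 404 (recognized)
print(effective_code(418))      # 400 (unrecognized 4xx -> 400)
print(CLASS_NAMES[505 // 100])  # "server error"
```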
The standard also allows the user agent to attempt to interpret the reason phrase, though this might be unwise, since the standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable. The following demonstrates an HTTP/1.1 request–response transaction for a server at www.example.com, port 80. HTTP/1.0 would use the same messages except for a few missing headers. HTTP/2 and HTTP/3 would use the same request–response mechanism but with different representations of the HTTP headers. The example request has no body; it consists of a start line, six header fields, and a blank line, each terminated with a carriage return and line feed sequence. The Host header field distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1. The blank line at the end means the message ends with two consecutive line terminator sequences. Represented as a stream of characters, with <CRLF> standing for a line terminator sequence, a shortened version of the request reads: GET / HTTP/1.1<CRLF>Host: www.example.com<CRLF><CRLF>. In the response, the ETag (entity tag) header field is used to determine if a cached version of the requested resource is identical to the current version of the resource on the server. The Content-Type header field specifies the Internet media type of the data conveyed by the HTTP message, and Content-Length indicates its length in bytes. The HTTP/1.1 web server publishes its ability to respond to requests for a byte range of the resource by including Accept-Ranges: bytes. This is useful if the client needs to have only certain portions of a resource sent by the server, which is called byte serving. When Connection: close is sent, it means that the web server will close the TCP connection immediately after the end of the transfer of this response (§9.1). Most of the header fields are optional, but some are mandatory. When the header Content-Length is missing from a response with a body, this should be considered an error in HTTP/1.0, but it may not be an error in HTTP/1.1 if the header Transfer-Encoding: chunked is present. Chunked transfer encoding uses a chunk size of 0 to mark the end of the content. Some old implementations of HTTP/1.0 omitted the header Content-Length when the length of the body was not known at the beginning of the response, and so the transfer of data to the client continued until the server closed the socket. Content-Encoding: gzip informs the client that the body is compressed with the gzip algorithm. Similar protocols History Tim Berners-Lee and his team at CERN are credited with inventing HTTP, along with HTML and the associated technology for a web server and a client user interface called a web browser. Berners-Lee designed HTTP to help with the adoption of his other idea: the "WorldWideWeb" project, which was first proposed in 1989 and is now known as the World Wide Web. Development of HTTP was initiated in 1989 and summarized in a simple document describing the behavior of a client and a server using the first HTTP version, named 0.9. That version was subsequently developed, eventually becoming the public 1.0. Development of early HTTP Request for Comments (RFC) documents started a few years later in a coordinated effort by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), with work later moving to the IETF.
The first web server went live in 1990. The protocol used had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page. In 1991, the first documented official version of HTTP was written as a plain document, less than 700 words long, and this version was named HTTP/0.9; it supported only the GET method, allowing clients only to retrieve HTML documents from the server, with no support for other file formats or for uploading information. In 1992, a new document was written to specify the evolution of the basic protocol towards its next full version. It supported both the simple request method of the 0.9 version and the full GET request that included the client HTTP version. This was the first of the many unofficial HTTP/1.0 drafts that preceded the final work on HTTP/1.0. After it was decided that new features of the HTTP protocol were required and that they had to be fully documented as official RFC documents, the HTTP Working Group (HTTP WG, led by Dave Raggett) was constituted in early 1995 with the aim of standardizing and expanding the protocol with extended operations, extended negotiation, richer meta-information, and a security protocol, made more efficient by additional methods and header fields. The HTTP WG planned to revise and publish new versions of the protocol as HTTP/1.0 and HTTP/1.1 within 1995, but, because of the many revisions, the work lasted much more than one year. The HTTP WG also planned to specify a far-future version of HTTP called HTTP-NG (HTTP Next Generation) that would have solved the remaining problems of previous versions related to performance and low-latency responses, but this work started only a few years later and was never completed. In May 1996, RFC 1945 was published as the final HTTP/1.0 revision of what had been used in the previous four years as a pre-standard HTTP/1.0 draft, already used by many web browsers and web servers. In early 1996, developers even started to include unofficial extensions of the HTTP/1.0 protocol (e.g. keep-alive connections) in their products by using drafts of the upcoming HTTP/1.1 specification. From early 1996, major web browser and web server developers also started to implement new features specified by pre-standard HTTP/1.1 draft specifications. End-user adoption of the new versions of browsers and servers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet used the new HTTP/1.1 header "Host" to enable virtual hosting, and that by June 1996, 65% of all browsers accessing their servers were pre-standard HTTP/1.1 compliant. In January 1997, RFC 2068 was officially released as the HTTP/1.1 specification. In June 1999, RFC 2616 was released, incorporating all improvements and updates based on the previous (now obsolete) HTTP/1.1 specification. Resuming the HTTP Working Group's old 1995 plan, an HTTP-NG Working Group was formed in 1997 to develop a new protocol named HTTP-NG (HTTP Next Generation). A few proposals and drafts were produced for the new protocol to use multiplexing of HTTP transactions inside a single TCP/IP connection, but in 1999 the group stopped its activity, passing the technical problems to the IETF. In 2007, the IETF HTTP Working Group (HTTP WG bis or HTTPbis) was restarted, first to revise and clarify the previous HTTP/1.1 specification and second to write and refine the future HTTP/2 specification (named httpbis).
In 2009, Google announced SPDY, a binary protocol it developed to speed up web traffic between browsers and servers. In many tests, SPDY was indeed faster than HTTP/1.1. SPDY was integrated into Google's Chromium and then into other major web browsers. Some of the ideas about multiplexing HTTP streams over a single TCP connection were taken from various sources, including the work of the W3C HTTP-NG Working Group. In 2012, the HTTP Working Group (HTTPbis) announced the need for a new protocol, initially considering aspects of SPDY and eventually deciding to derive the new protocol from it. In May 2015, HTTP/2 was published as RFC 7540. The protocol was quickly adopted by web browsers already supporting SPDY, and more slowly by web servers. In June 2014, the HTTP Working Group released an updated six-part HTTP/1.1 specification obsoleting RFC 2616 (RFC 7230 through RFC 7235). In 2014, HTTP/0.9 was deprecated for servers supporting HTTP/1.1 (and higher) (Appendix A): since HTTP/0.9 did not support header fields in a request, there is no mechanism for it to support name-based virtual hosts (selection of a resource by inspection of the Host header field). Any server that implements name-based virtual hosts ought to disable support for HTTP/0.9. Most requests that appear to be HTTP/0.9 are, in fact, badly constructed HTTP/1.x requests caused by a client failing to properly encode the request-target. Since 2016, many product managers and developers of user agents (browsers, etc.) and web servers have begun planning to gradually deprecate and remove support for the HTTP/0.9 protocol, mainly for the reasons noted above (the lack of header fields and the frequent confusion with malformed HTTP/1.x requests). As of 2022, HTTP/0.9 support has not been officially and completely deprecated; it is still present in many web servers and browsers (for server responses only), even if usually disabled. It is unclear how long it will take to decommission HTTP/0.9. In 2020, the first drafts of HTTP/3 were published, and major web browsers and web servers started to adopt it. On 6 June 2022, the IETF standardized HTTP/3 as RFC 9114. In June 2022, RFC documents were published that deprecated many of the previous documents, introduced a few minor changes, and refactored the description of HTTP semantics into a separate document. See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Fictional_world] | [TOKENS: 285]
Contents Fictional universe A fictional universe, also known as an imagined universe or a constructed universe, is the internally consistent fictional setting used in a narrative or a work of art. This concept is most commonly associated with works of fantasy and science fiction, and can be found in various forms such as novels, comics, films, television shows, video games, and other creative works. In science fiction, a fictional universe may be a remote alien planet or galaxy with little apparent relationship to the real world (as in the Star Wars universe). In fantasy, it may be a greatly fictionalized or invented version of Earth's distant past or future (as in The Lord of the Rings). Fictional continuity In a 1970 article in CAPA-alpha, comics historian Don Markstein defined the fictional universe in order to clarify the concept of fictional continuities, proposing a set of criteria for determining which works belong to the same universe. Collaboration Fictional universes are sometimes shared by multiple prose authors, with each author's works in that universe being granted approximately equal canonical status. For example, Larry Niven's fictional universe Known Space has an approximately 135-year period in which Niven allows other authors to write stories about the Man-Kzin Wars. Other fictional universes, like that of the Ring of Fire series, actively court canonical contributions from fans, but gate and control the changes through a formalized process and the final say of the editor and universe creator. See also References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ancestral_Puebloans] | [TOKENS: 5960]
Contents Ancestral Puebloans The Ancestral Puebloans, also known as Ancestral Pueblo peoples or the Basketmaker-Pueblo culture, were an ancient Native American culture of Pueblo peoples spanning the present-day Four Corners region of the United States, comprising southeastern Utah, northeastern Arizona, northwestern New Mexico, and southwestern Colorado. They are believed to have developed, at least in part, from the Oshara tradition, which developed from the Picosa culture. The Ancestral Puebloans lived in a range of structures that included small family pit houses, larger structures to house clans, grand pueblos, and cliff-sited dwellings for defense. They had a complex network linking hundreds of communities and population centers across the Colorado Plateau. They held a distinct knowledge of celestial sciences that found form in their architecture. The kiva, a congregational space that was used mostly for ceremonies, was an integral part of the community structure. Archaeologists continue to debate when this distinct culture emerged. The current agreement, based on terminology defined by the Pecos Classification, suggests their emergence around the 12th century BCE, during the archaeologically designated Early Basketmaker II Era. Beginning with the earliest explorations and excavations, researchers identified Ancestral Puebloans as the forerunners of contemporary Pueblo peoples. Three UNESCO World Heritage Sites located in the United States are credited to the Pueblos: Mesa Verde National Park, Chaco Culture National Historical Park and Taos Pueblo. Etymology Pueblo, which means "village" and "people" in Spanish, was a term originating with the Spanish explorers who used it to refer to the people's particular style of dwelling. Hopi people use the term Hisatsinom, meaning "ancient people", to describe the Ancestral Puebloans. The Navajo people, who now reside in parts of former Pueblo territory, referred to the ancient people as Anaasází in their Diné language, an exonym meaning "ancestors of our enemies", referring to their competition with the Pueblo peoples. The Navajo now use the term in the sense of referring to "ancient people" or "primitive ones", i.e., savages or barbarians, whereas others ascribe the meaning of Anasazi to "the older ones who are different from our people"; (lit. Ana = "different from us" + asaza = "the old ones"). In the twentieth century, the people and their archaeological culture were often referred to by the Diné exonym. Alfred V. Kidder, who mistakenly thought it was Diné for old people, used the term in the 1927 Pecos Classification system. Contemporary Pueblos view this term as derogatory and object to the use of the term. Modern descendants of this culture often choose to use the term Ancestral Pueblo peoples. Archaeologist Linda S. Cordell discussed the word's etymology and use: The name "Anasazi" has come to mean "ancient people," although the word itself is Navajo, meaning "enemy ancestors." [The Navajo word is anaasází (< anaa- "enemy", sází "ancestor").] It is unfortunate that a non-Pueblo word has come to stand for a tradition that is certainly ancestral Pueblo. The term was first applied to ruins of the Mesa Verde by Richard Wetherill, a rancher and trader who, in 1888–1889, was the first Anglo-American to explore the sites in that area. Wetherill knew and worked with Navajos and understood what the word meant. The name was further sanctioned in archaeology when it was adopted by Alfred V. 
Kidder, the acknowledged dean of Southwestern Archaeology. Kidder felt that it was less cumbersome than a more technical term he might have used. Subsequently, some archaeologists who have tried to change the term have worried that because the Pueblos speak different languages, there are different words for "ancestor," and using one might be offensive to people speaking other languages. Others have objected to Cordell's definition of the name "Anasazi", saying that in the Navajo language its true connotation is "those that do things differently". In yet another account, the term Anasazi originated from a Navajo adoption of a Ute word that literally means "ancient enemy" or "primitive enemy", but was used by the Navajo to mean "savage" or "barbarian", hence the Pueblos' rejection of the term.[citation needed] Navajo historian Wally Brown stated that the Navajo word "Anasazi" did not refer to all Ancestral Puebloans, but to a specific group who, according to legend, once lived in Chaco Canyon and perpetrated slave raids on their neighbours, and that the use of the word to refer to Ancestral Puebloans in general had been a mistake by anthropologists. He stated that the slave-raiding "Anasazi" were not the ancestors of the current Puebloans and that he did not believe they had left any descendants. Geography The Ancestral Puebloans were one of four major prehistoric archaeological traditions recognized in the American Southwest, also known as Oasisamerica. The others are the Mogollon, Hohokam, and Patayan. In relation to neighboring cultures, the Ancestral Puebloans occupied the northeast quadrant of the area. The Ancestral Pueblo homeland centers on the Colorado Plateau, but extends from central New Mexico on the east to southern Nevada on the west. Areas of southern Nevada, Utah, and Colorado form a loose northern boundary, while the southern edge is defined by the Colorado and Little Colorado Rivers in Arizona and the Rio Puerco and Rio Grande in New Mexico. Structures and other evidence of Ancestral Puebloan culture have been found extending east onto the American Great Plains, in areas near the Cimarron and Pecos Rivers and in the Galisteo Basin. Terrain and resources within this large region vary greatly. The plateau regions have high elevations ranging from 4,500 to 8,500 feet (1,400 to 2,600 m). Extensive horizontal mesas are capped by sedimentary formations and support woodlands of junipers, pinyon, and ponderosa pines, each favoring different elevations. Wind and water erosion have created steep-walled canyons, and sculpted windows and bridges out of the sandstone landscape. In areas where resistant strata (sedimentary rock layers), such as sandstone or limestone, overlie more easily eroded strata such as shale, rock overhangs formed. The Ancestral Puebloans favored building under such overhangs for shelters and defensive building sites. All areas of the Ancestral Puebloan homeland suffered from periods of drought and erosion from wind and water. Summer rains could be unreliable and produced destructive thunderstorms. While the amount of winter snowfall varied greatly, the Ancestral Puebloans depended on the snow for most of their water. Snow melt allowed the germination of seeds, both wild and cultivated, in the spring. Where sandstone layers overlie shale, snow melt could accumulate and create seeps and springs, which the Ancestral Puebloans used as water sources.
Snow also fed the smaller, more predictable tributaries, such as the Chinle, Animas, Jemez, and Taos Rivers. The larger rivers were less directly important to the ancient culture, as smaller streams were more easily diverted or controlled for irrigation. Cultural characteristics The Ancestral Puebloan culture is perhaps best known for the stone and earth dwellings its people built along cliff walls, particularly during the Pueblo II and Pueblo III eras, from about 900 to 1350 CE. The best-preserved examples of the stone dwellings are now protected within United States national parks, such as Navajo National Monument, Chaco Culture National Historical Park, Mesa Verde National Park, Canyons of the Ancients National Monument, Aztec Ruins National Monument, Bandelier National Monument, Hovenweep National Monument, and Canyon de Chelly National Monument. These villages, called pueblos by Spanish colonists, were accessible only by rope or through rock climbing. These astonishing building achievements had modest beginnings. The first Ancestral Puebloan homes and villages were based on the pit-house, a common feature in the Basketmaker periods. Ancestral Puebloans are also known for their pottery. Local plainware pottery used for cooking or storage was unpainted gray, either smooth or textured. Pottery used for more formal purposes was often more richly adorned. In the northern portion of the Ancestral Pueblo lands, from about 500 to 1300 CE, the pottery styles commonly had black-painted designs on white or light gray backgrounds. Decoration is characterized by fine hatching, and contrasting colors are produced by the use of mineral-based paint on a chalky background. South of the Anasazi territory, in Mogollon settlements, pottery was more often hand-coiled, scraped, and polished, with red to brown coloring. Certain tall cylinders were likely ceremonial vessels, while narrow-necked jars, called ollas, were often used for liquids. Pottery from the southern regions of Ancestral Pueblo lands has bold, black-line decoration and the use of carbon-based colorants. In northern New Mexico, the local black-on-white pottery tradition, the Rio Grande white wares, continued well after 1300 CE. Changes in pottery composition, structure, and decoration are signals of social change in the archaeological record. This is particularly true as the peoples of the American Southwest began to leave their historic homes and migrate south. According to archaeologists Patricia Crown and Steadman Upham, the appearance of the bright colors on Salado Polychromes in the 14th century may reflect religious or political alliances on a regional level. Late 14th- and 15th-century pottery from central Arizona, widely traded in the region, has colors and designs which may derive from earlier ware by both Ancestral Pueblo and Mogollon peoples. The Ancestral Puebloans also excelled at rock art, which included carved petroglyphs and painted pictographs. Ancestral Pueblo peoples painted Barrier Canyon Style pictographs in locations where the images were protected from the sun yet visible to the public. Designs include human-like forms. The so-called "Holy Ghost panel" in Horseshoe Canyon is considered to be one of the earliest uses of graphical perspective, in which the largest figure appears to take on a three-dimensional representation. Recent archaeological evidence has established that in at least one great house, Pueblo Bonito, the elite family whose burials associate them with the site practiced matrilineal succession.
Room 33 in Pueblo Bonito, site of the richest burial ever excavated in the Southwest, served as a crypt for one powerful lineage, traced through the female line, for approximately 330 years. While other Ancestral Pueblo burials have not yet been subjected to the same archaeogenomic testing, the survival of matrilineal descent among contemporary Pueblo peoples suggests that this may have been a widespread practice among Ancestral Puebloans. Architecture Ancestral Pueblo people in the North American Southwest crafted a unique architecture with planned community spaces. Population centers such as Chaco Canyon (outside Crownpoint, New Mexico), Mesa Verde (near Cortez, Colorado), and Bandelier National Monument (near Los Alamos, New Mexico) have brought renown to the Ancestral Pueblo peoples. They consisted of apartment complexes and structures made of stone, adobe mud, and other local material, or were carved into canyon walls. While this architecture developed within their own cultures, the people also adopted design details from cultures as far away as present-day Mexico. These buildings were usually multistoried and multipurpose, and surrounded by open plazas and viewsheds. Hundreds to thousands of people lived in these communities. The complexes hosted cultural and civic events and provided infrastructure that supported a vast outlying region, linked by roadways reaching hundreds of miles away. Built well before 1492 CE, these towns and villages were located in defensive positions, for example on high, steep mesas such as at Mesa Verde or present-day Acoma Pueblo, called the "Sky City", in New Mexico. From before 900 CE until past the 13th century, these population complexes were major cultural centers. In Chaco Canyon, Chacoan developers quarried sandstone blocks and hauled timber from great distances, assembling 15 major complexes. These ranked as the largest buildings in North America until the late 19th century. Evidence of archaeoastronomy at Chaco has been proposed, with the Sun Dagger petroglyph at Fajada Butte a popular example. Many Chacoan buildings may have been aligned to capture the solar and lunar cycles, requiring generations of astronomical observations and centuries of skillfully coordinated construction. The Chacoans abandoned the canyon, probably due to climate change beginning with a 50-year drought starting in 1130. Immense complexes known as "great houses" embodied worship at Chaco. Archaeologists have found musical instruments, jewelry, ceramics, and ceremonial items, indicating that the people in the great houses were elite, wealthier families. The great houses contained indoor burials, in which gifts were interred with the dead, often including bowls of food and turquoise beads. Over centuries, architectural forms evolved but the complexes kept some core traits, such as their size. They averaged more than 200 rooms each, and some had 700 rooms. Rooms were very large, with higher ceilings than Ancestral Pueblo buildings of earlier periods. They were well-planned: vast sections were built in a single stage. Most houses faced south. Plazas were almost always surrounded by buildings of sealed-off rooms or high walls. There were often four or five stories, with single-story rooms facing the plaza; room blocks were terraced to allow the tallest sections to compose the pueblo's rear edifice. Rooms were often organized into suites, with front rooms larger than rear, interior, and storage rooms or areas. Ceremonial structures known as kivas were built in proportion to the number of rooms in a pueblo.
A small kiva was built for roughly every 29 rooms. Nine complexes each had a Great Kiva, up to 63 feet (19 m) in diameter. T-shaped doorways and stone lintels marked all Chacoan kivas. Although simple and compound walls were often used, great houses usually had core-and-veneer walls: rubble filled the gap between parallel load-bearing walls of dressed, flat sandstone blocks bound in clay mortar. Walls were covered in a veneer of small sandstone pieces, which were pressed into a layer of binding mud. These surfacing stones were often arranged in distinctive patterns. The Chacoan structures together required the wood of 200,000 conifer trees, mostly hauled – on foot – from mountain ranges up to 70 miles (110 km) away. Ceremonial infrastructure One of the most notable aspects of Ancestral Puebloan infrastructure is the Chaco Road at Chaco Canyon, a system of roads radiating from many great house sites such as Pueblo Bonito, Chetro Ketl, and Una Vida. They led toward small outlier sites and natural features in the canyon and outside it. Through satellite images and ground investigations, archaeologists have found eight main roads that together run for more than 180 miles (300 km) and are more than 30 feet (10 m) wide. These were built by excavating down to a smooth, leveled surface in the bedrock or by removing vegetation and soil. Large ramps and stairways in the cliff rock connect the roads above the canyon to sites at the bottom. The largest roads, built at the same time as many of the great houses (1000 to 1125 CE), are: the Great North Road, the South Road, the Coyote Canyon Road, the Chacra Face Road, Ahshislepah Road, Mexican Springs Road, the West Road, and the shorter Pintado-Chaco Road. Simple structures like berms and walls are sometimes aligned along the roads. Some tracts of the roads lead to natural features such as springs, lakes, mountain tops, and pinnacles. The longest and best-known of these roads is the Great North Road, which originates from different routes close to Pueblo Bonito and Chetro Ketl. These routes converge at Pueblo Alto and from there lead north beyond the canyon limits. Only small, isolated structures stood along the roadways.[citation needed] Archaeological interpretations of the Chaco road system are divided between an economic purpose and a symbolic, ideological or religious role. The system was discovered in the late 19th century and excavated in the 1970s. By the late 20th century, aerial and satellite photographs helped in the study. Archaeologists suggested that the roads' main purpose was to transport local and exotic goods to and from the canyon. The economic purpose of the Chaco road system is suggested by the presence of luxury items at Pueblo Bonito and elsewhere in the canyon. Items such as macaws, turquoise, and seashells, which are not native to this environment, together with imported vessels distinguished by design, show that the Chacoans traded with distant regions. The widespread use of timber in Chacoan constructions required a large system of easy transportation, as timber was not locally available. Analysis of strontium isotopes shows that much of the timber came from distant mountain ranges. Cliff communities Throughout the southwest Ancestral Puebloan region, the inhabitants built complexes in shallow caves and under rock overhangs in canyon walls. Unlike earlier structures and villages atop mesas, this was a regional 13th-century trend of gathering the growing populations into close, defensible quarters. There were buildings for housing, defense, and storage.
These were built mostly of blocks of hard sandstone, held together and plastered with adobe mortar. The constructions had many similarities, but each took a unique form owing to the local rock topography. The best-known site is at Mesa Verde, with a large number of well-preserved cliff dwellings. This area included common Pueblo architectural forms, such as kivas, towers, and pit-houses, but the space restrictions of these alcoves resulted in far denser populations. Mug House, a typical cliff dwelling of the period, was home to around 100 people who shared 94 small rooms and eight kivas, built right up against each other and sharing many walls. Builders maximized space use, and no area was off-limits. Not all the people in the region lived in cliff dwellings; many colonized the canyon rims and slopes in multifamily structures that grew to unprecedented size as populations swelled. Decorative motifs for these sandstone/mortar structures, both cliff dwellings and not, included T-shaped windows and doors. This has been taken by some archaeologists, such as Stephen Lekson (1999), as evidence of the continuation of the Chaco Canyon elite system, which had seemingly collapsed a century earlier. Other researchers instead explain these motifs as part of a wider Pueblo style or religion. History During the period from 700 to 1130 CE (Pueblo I and II Eras), the population grew rapidly due to consistent and regular rainfall that supported agriculture. Studies of skeletal remains show increased fertility rather than decreased mortality. However, this tenfold population increase over a few generations was probably also due to migrations of people from surrounding areas. Innovations such as pottery, food storage, and agriculture enabled this rapid growth. Over several decades, the Ancestral Puebloan culture spread across the landscape.[citation needed] Ancestral Puebloan culture has been divided into three main areas or branches based on geographical location.[citation needed] DNA evidence confirms the ancestors of the inhabitants of Picuris Pueblo once lived in Chaco Canyon, now a UNESCO World Heritage Site. Modern Pueblo oral traditions[which?] hold that the Ancestral Puebloans originated from sipapu, where they emerged from the underworld. For unknown ages, they were led by chiefs and guided by spirits as they completed vast migrations throughout the continent of North America. They settled first in the Ancestral Puebloan areas for a few hundred years before moving to their present locations.[citation needed] The Ancestral Puebloans left their established homes in the 12th and 13th centuries. The main reason for this is unclear. Factors discussed include global or regional climate change, prolonged drought, environmental degradation such as cyclical periods of topsoil erosion or deforestation, hostility from new arrivals, religious or cultural change, and influence from Mesoamerican cultures. Many of these possibilities are supported by archaeological evidence. Current scholarly consensus is that Ancestral Puebloans responded to pressure from Numic-speaking peoples moving onto the Colorado Plateau, as well as climate change that resulted in agricultural failures. The archaeological record indicates that it was not unusual for Ancestral Puebloans to adapt to climatic change by changing residences and locations. Early Pueblo I Era sites may have housed up to 600 individuals in a few separate but closely spaced settlement clusters. However, they were generally occupied for 30 years or less. Archaeologist Timothy A.
Kohler excavated large Pueblo I sites near Dolores, Colorado, and discovered that they were established during periods of above-average rainfall. This allowed crops to be grown without requiring irrigation. At the same time, nearby areas that suffered significantly drier patterns were abandoned. The Ancestral Puebloans attained a cultural "Golden Age" between about 900 and 1150. During this time, generally classed as the Pueblo II Era, the climate was relatively warm and rainfall mostly adequate. Communities grew larger and were inhabited for longer. Highly specific local traditions in architecture and pottery emerged, and trade over long distances appears to have been common. Domesticated turkeys appeared. After around 1130, North America experienced significant climatic change in the form of a 300-year period of aridity called the Great Drought. This also led to the collapse of the Tiwanaku civilization around Lake Titicaca in present-day Bolivia. The contemporary Mississippian culture also collapsed during this period. Confirming evidence dated between 1150 and 1350 has been found in excavations of the western regions of the Mississippi Valley, which show long-lasting patterns of warmer, wetter winters and cooler, drier summers. In this later period, Pueblo II communities became more self-contained, decreasing trade and interaction with more distant communities. Southwest farmers developed irrigation techniques appropriate to seasonal rainfall, including soil and water control features such as check dams and terraces. The population of the region continued to be mobile, abandoning settlements and fields under adverse conditions. There was also a drop in the water table due to a separate cycle unrelated to rainfall. This forced the abandonment of settlements in the more arid or overfarmed locations. Evidence suggests a profound change in religion in this period. Chacoan and other structures constructed originally along astronomical alignments, and thought to have served important ceremonial purposes to the culture, were systematically dismantled. Doorways were sealed with rock and mortar. Kiva walls show marks from great fires set within them, which probably required removal of the massive roof – a task which would require significant effort. Habitations were abandoned, and tribes split and resettled far away.[citation needed] This evidence suggests that the religious structures were abandoned deliberately over time. Pueblo oral history holds that the ancestors had achieved great spiritual power and control over natural forces. They used their power in ways that changed nature, causing changes that were never meant to occur. Possibly, the dismantling of their religious structures was an effort to symbolically undo the changes they believed they had caused through their abuse of spiritual power, and thus make amends with nature.[citation needed] Most modern Pueblo peoples (whether Keresans, Hopi, or Tanoans) assert that the Ancestral Puebloans did not "vanish", as is commonly portrayed. They say that the people migrated to areas in the southwest with more favorable rainfall and dependable streams. They merged into the various Pueblo peoples whose descendants still live in Arizona and New Mexico. This perspective was also presented by early 20th-century anthropologists, including Frank Hamilton Cushing, J. Walter Fewkes, and Alfred V. Kidder.[citation needed] Many modern Pueblo tribes trace their lineage to specific settlements.
For example, the San Ildefonso Pueblo people believe that their ancestors lived in both the Mesa Verde and the Bandelier areas. Evidence also suggests that a profound change took place in the Ancestral Pueblo area and areas inhabited by their cultural neighbors, the Mogollon. Historian James W. Loewen agrees with this oral tradition in his book Lies Across America: What Our Historic Markers and Monuments Get Wrong (1999). No academic consensus exists within the professional archaeological and anthropological community on this issue. Environmental stress may have caused changes in social structure, leading to conflict and warfare. Near Kayenta, Arizona, Jonathan Haas of the Field Museum in Chicago has been studying a group of Ancestral Puebloan villages that relocated from the canyons to the high mesa tops during the late 13th century. Haas believes that the reason for moving so far from water and arable land was defense against enemies. He asserts that isolated communities relied on raiding for food and supplies, and that internal conflict and warfare became common in the 13th century. This conflict may have been aggravated by the influx of less settled peoples, Numic-speakers such as the Utes, Shoshones, and Paiute people, who may have originated in what is today California, and by the arrival of the Athabaskan-speaking Diné, who migrated from the north during this time and later became, most notably, the Navajo and Apache tribes. Others suggest that more developed villages, such as that at Chaco Canyon, exhausted their environments, resulting in widespread deforestation and eventually the fall of their civilization through warfare over depleted resources. A 1997 excavation at Cowboy Wash near Dolores, Colorado, found the skeletal remains of at least 24 individuals that showed evidence of violence and dismemberment, with strong indications of cannibalism. This modest community appears to have been abandoned during the same time period. Other excavations within the Ancestral Puebloan cultural area have produced varying numbers of unburied, and in some cases dismembered, bodies. In a 2010 paper, Potter and Chuipka argued that evidence at the Sacred Ridge site, near Durango, Colorado, is best interpreted as warfare related to competition and ethnic cleansing. This evidence of warfare, conflict, and cannibalism is hotly debated by some scholars and interest groups. Suggested alternatives include: a community suffering the pressure of starvation or extreme social stress, dismemberment and cannibalism as religious ritual or in response to religious conflict, the influx of outsiders seeking to drive out a settled agricultural community via calculated atrocity, or an invasion of a settled region by nomadic raiders who practiced cannibalism.[citation needed] Cultural distinctions Archaeological cultural units such as Ancestral Puebloan, Hohokam, Patayan, or Mogollon are used by archaeologists to define material culture similarities and differences that may identify prehistoric sociocultural units, equivalent to modern societies or peoples. The names and divisions are classification devices based on theoretical perspectives, analytical methods, and data available at the time of analysis and publication. They are subject to change, not only on the basis of new information and discoveries, but also as attitudes and perspectives change within the scientific community.
It should not be assumed that an archaeological division or culture unit corresponds to a particular language group or to a socio-political entity such as a tribe. Current terms and conventions have significant limitations: defining cultural groups, such as the Ancestral Puebloans, tends to create an image of territories separated by clear-cut boundaries, like the borders separating modern states. Such boundaries did not exist. Prehistoric people traded, worshipped, collaborated, and fought most often with other nearby groups. Cultural differences should therefore be understood as clinal: "increasing gradually as the distance separating groups also increases". Departures from the expected pattern may occur because of unidentified social or political situations or because of geographic barriers. In the Southwest, mountain ranges, rivers, and most obviously, the Grand Canyon can be significant barriers for human communities, likely reducing the frequency of contact with other groups. Current opinion holds that the closer cultural similarity between the Mogollon and Ancestral Puebloans, and their greater differences from the Hohokam and Patayan, are due to both the geography and the variety of climate zones in the Southwest. See also References Notes Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Somalia] | [TOKENS: 1046]
Contents History of the Jews in Somalia The history of the Jews in Somalia refers to the historical presence of Jewish communities in the Horn of Africa country of Somalia. Judaism in the Somali peninsula has received little attention in the historical record. However, there is evidence of a Jewish presence in the area for centuries, with some members of the community openly practicing their faith and others practicing in secret. Many of the Jews in the area were Adenite and Yemenite Jews, who came to the region as merchants and religious service providers. However, a report in 1949 states that there were "no Jews left in Italian and British Somaliland". While the traditional Jewry of Somalia is known, little is known about the crypto-Jews who practiced their faith discreetly. Jewry in Somalia The presence of Jewish communities in Somalia has been the subject of much speculation and debate throughout history.[citation needed] A small number of Jews, estimated at around 100-200 individuals, migrated to Somalia in the early 1900s as traders and settled in coastal towns such as Berbera, Zeila and Brava. However, a report by the Jewish Telegraphic Agency (JTA) published in 1949 stated that there were "no Jews left in Italian and British Somaliland". It is believed that a wave of Yemenite Jews arrived in the Somali territories in the 1880s, and in other Ottoman-friendly territories around the same time, when Yemenites were also emigrating to Ottoman-ruled Jerusalem. Between 1881 and 1882 a few hundred Jews left, but more continued to arrive until 1914. Yemenite-Somali Jews served as prominent leaders in successive Somali governments of the 1960s and 1970s. However, when Somalia joined the Arab League in the 1970s, many Yemenite-Somali Jews sold their businesses and emigrated. Today, Yemenite-Somali Jews are estimated to number no more than five to ten merchant families, widely distributed along the Benadir coast and in northern Somali cities. The ruins of historic eastern synagogues can still be found in the town of Obbia in Somalia, but many smaller local synagogues in towns such as Hafun, Alula and Bender-Qasim were destroyed by the Italian fascists in the 1930s.[citation needed] While the history of traditional Jewish communities in Somalia is relatively well known, little is understood about the crypto-Jews who practiced their faith discreetly. The true extent of Jewish presence and influence in Somalia remains a topic of ongoing research and debate. There are no formal diplomatic relations between Somalia and Israel because Somalia does not recognize Israel as an independent state. Israel was one of 35 countries that recognised the State of Somaliland's brief five-day independence in 1960.[page needed] In 1974, Somalia joined the Arab League and complied with the Arab League boycott of Israel.[better source needed] On 26 December 2025, Israel became the first UN member state to recognize Somaliland, an unrecognized state that declared independence from Somalia in 1991, as a sovereign state. The Somali government issued a statement condemning the Israeli government's decision. Judaism has a rich history in the Somali peninsula, with both Ethiopian and southern Arabian strains present in the region. While there is limited evidence of any Somali clans embracing Judaism during the pre-Islamic era, the conversion of individuals and families cannot be ruled out. The Hebrew heritage of marginalized Somali clans, including the Yibir, can be traced back to the Beta Israel, or Ethiopian Jews.
By the 16th century, the Somali population had largely adopted Islam as their primary religion. Ethiopians brought their brand of Christian orthodoxy and, with it, elements of Judaism. This is reflected in Jewish archaeological evidence in the region, such as ancient cemeteries in the Hargeisa region of Somaliland with gravestones bearing the Star of David. The Damot Kingdom, led by the Jewish Queen Gudit, also had a significant impact on the spread of Judaism in the region.[citation needed] Yibro The Yibir, also spelled Yibbiro or Yebiro, is a marginalized clan found in Somalia. They are believed[by whom?] to have Jewish roots, specifically tracing their heritage to the Beta Israel, also known as Ethiopian Jews. The Yibir have faced discrimination and marginalization within Somali society due to their perceived Jewish heritage.[citation needed] There is limited historical evidence of the Yibir's Jewish origins, but it is believed[by whom?] that they may have been converts to Judaism or that Jewish traders and merchants may have intermarried with the clan. Some accounts[who?] suggest that the Yibir have preserved Jewish customs and practices, such as circumcision and kosher slaughter of animals. However, their Jewish identity has been largely kept secret for fear of persecution.[citation needed] The Yibir have traditionally been associated with the practice of traditional healing and divination, which has led to further discrimination against them. They have also been marginalized economically and socially, often occupying the lowest rungs of Somali society.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Armorial_of_sovereign_states] | [TOKENS: 128]
Armorial of sovereign states This armorial of sovereign states shows the coat of arms, national emblem, or seal for every sovereign state. Note that due to copyright restrictions in some countries (including Canada, New Zealand, Qatar, Singapore, South Africa, and the United Kingdom), some emblems may not be displayed, or may be displayed with slight alterations in appearance from their official rendition, but nonetheless remain faithful to their heraldic description.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Compact_disc_manufacturing] | [TOKENS: 3730]
Contents Compact disc manufacturing Compact disc manufacturing is the process by which commercial compact discs (CDs) are replicated in mass quantities using a master version created from a source recording. This may be either in audio form (CD-DA) or data form (CD-ROM). This process is used in the mastering of read-only compact discs. DVDs and Blu-rays use similar methods (see Optical Disc § Optical disc manufacturing). A CD can be used to store audio, video, and data in various standardized formats defined in the Rainbow Books. CDs are usually manufactured in a class 100 (ISO 5) or better clean room, to avoid contamination which would result in data corruption. They can be manufactured to strict manufacturing tolerances for only a few US cents per disc. Replication differs from duplication (i.e. burning used for CD-Rs and CD-RWs) as the pits and lands of a replicated CD are moulded into a CD blank, rather than being burn marks in a dye layer (in CD-Rs) or areas with changed physical characteristics (in CD-RWs). In addition, CD burners write data sequentially, while a CD pressing plant forms the entire disc in one physical stamping operation, similar to record pressing. Premastering All CDs are pressed from a digital data source, with the most common sources being low error-rate CD-Rs or files from an attached computer hard drive containing the finished data (e.g., music or computer data). Some CD pressing systems can use digital master tapes, in Digital Audio Tape, Exabyte, Digital Linear Tape, Digital Audio Stationary Head or Umatic formats. A PCM adaptor is used to record and retrieve digital audio data into and from an analog videocassette format such as Umatic or Betamax. However, such sources are suitable only for production of audio CDs due to error detection and correction issues. If the source is not a CD, the table of contents for the CD to be pressed must also be prepared and stored on a tape or hard drive. In all cases except CD-R sources, the tape must be uploaded to a media mastering system to create the TOC (Table of Contents) for the CD. Creative processing of the mixed audio recordings often occurs in conventional CD premastering sessions. While "mastering" is commonly used by laypeople to describe this phase, official industry terminology refers to it as "premastering", as it precedes the creation of the glass master from which stamper plates are electroformed. Mastering Glass mastering is performed in a class 100 (ISO 5) or better clean room or a self-enclosed clean environment within the mastering system. Contaminants introduced during critical stages of manufacturing (e.g., dust, pollen, hair, or smoke) can cause sufficient errors to make a master unusable. Once successfully completed, a CD master will be less susceptible to the effects of these contaminants. During glass mastering, glass is used as a substrate to hold the CD master image while it is created and processed; hence the name. Glass substrates, noticeably larger than a CD, are round plates of glass approximately 240 mm in diameter and 6 mm thick. They often also have a small, steel hub on one side to facilitate handling. The substrates are created especially for CD mastering and one side is polished until it is extremely smooth. Even microscopic scratches in the glass will affect the quality of CDs pressed from the master image.
The extra area on the substrate allows for easier handling of the glass master and reduces the risk of damage to the pit and land structure when the "father" stamper is removed from the glass substrate. Once the glass substrate is cleaned using detergents and ultrasonic baths, the glass is placed in a spin coater. The spin coater rinses the glass blank with a solvent and then applies either photoresist or dye-polymer, depending on the mastering process. Rotation spreads the photoresist or dye-polymer coating evenly across the surface of the glass. The substrate is removed and baked to dry the coating, and the glass substrate is ready for mastering. Once the glass is ready for mastering, it is placed in a laser beam recorder (LBR). Most LBRs are capable of mastering at greater than 1x speed, but due to the weight of the glass substrate and the requirements of a CD master, they typically master at no greater than 8x playback speed. The LBR uses a laser to write the information, with a wavelength and final lens NA (numerical aperture) chosen to produce the required pit size on the master blank. For example, DVD pits are smaller than CD pits, so a shorter wavelength or higher NA (or both) is needed for DVD mastering. LBRs use one of two recording techniques: photoresist and non-photoresist mastering. Photoresist also comes in two variations: positive photoresist and negative photoresist. Photoresist mastering uses a light-sensitive material (a photoresist) to create the pits and lands on the CD master blank. The laser beam recorder uses a deep blue or ultraviolet laser to write the master. When exposed to the laser light, the photoresist undergoes a chemical reaction which either hardens it (in the case of negative photoresist) or makes it more soluble (in the case of positive photoresist). Once the mastering is complete, the glass master is removed from the LBR and chemically 'developed': the master is soaked in a developer solution, which removes either the exposed positive photoresist or the unexposed negative photoresist. Once developing is finished, the glass master is metalized to provide a surface for the stamper to be formed onto. It is then polished with lubricant and wiped down. When a laser is used to record on the dye-polymer used in non-photoresist (NPR) mastering, the dye-polymer absorbs laser energy focused on a precise spot; this vapourises the material and forms a pit in the surface of the dye-polymer. This pit can be scanned by a red laser beam that follows the cutting beam, and the quality of the recording can be directly and immediately assessed; for instance, audio signals being recorded can also be played straight from the glass master in real time. The pit geometry and quality of the playback can all be adjusted while the CD is being mastered, as the blue writing laser and the red read laser are typically connected via a feedback system to optimise the recording. This allows the dye-polymer LBR to produce very consistent pits even if there are variations in the dye-polymer layer. Another advantage of this method is that pit depth variation can be programmed during recording to compensate for downstream characteristics of the local production process (e.g., marginal molding performance). This cannot be done with photoresist mastering, because there the pit depth is set by the photoresist coating thickness, whereas dye-polymer pits are cut into a coating thicker than the intended pits.
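The relationship between wavelength, numerical aperture, and feature size noted above can be illustrated with a rough diffraction-limit estimate. The sketch below uses the common d ≈ λ/(2·NA) rule of thumb together with the well-known playback-side parameters for CD and DVD (the mastering LBR itself, as noted, uses shorter deep-blue or ultraviolet wavelengths); it is an illustrative calculation, not a description of any particular LBR.

```python
# Rough diffraction-limited spot size: d ~ wavelength / (2 * NA).
# Illustrative only; real pit geometry also depends on exposure,
# development, and moulding behaviour.
def spot_diameter_nm(wavelength_nm: float, na: float) -> float:
    return wavelength_nm / (2.0 * na)

# Standard playback parameters, used here purely for comparison.
for fmt, wavelength, na in [("CD", 780.0, 0.45), ("DVD", 650.0, 0.60)]:
    print(f"{fmt}: ~{spot_diameter_nm(wavelength, na):.0f} nm spot")

# Output: CD ~867 nm, DVD ~542 nm -- shrinking the spot, and hence the
# pits, requires a shorter wavelength, a higher NA, or both.
```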
This type of mastering is called Direct Read After Write (DRAW) and is the main advantage of some non-photoresist recording systems. Problems with the quality of the glass blank master, such as scratches or an uneven dye-polymer coating, can be immediately detected. If required, the mastering can be halted, saving time and increasing throughput. Post-mastering After mastering, the glass master is baked to harden the developed surface material and prepare it for metalisation. Metalisation is a critical step prior to electrogalvanic manufacture (electroplating). The developed glass master is placed in a vapour deposition metallizer, which uses a combination of mechanical vacuum pumps and cryopumps to lower the pressure inside a chamber to a hard vacuum. A piece of nickel wire is then heated in a tungsten boat to white-hot temperature, and the nickel vapour is deposited onto the rotating glass master to a typical thickness of around 400 nm. The finished glass masters are inspected for stains, pinholes or incomplete coverage of the nickel coating and passed to the next step in the mastering process. Electroforming Electroforming occurs in "Matrix", the name used for the electroforming process area in many plants; it is also a class 100 (ISO 5) or better clean room. The data (music, computer data, etc.) on the metalised glass master is extremely easy to damage and must be transferred to a tougher form for use in the injection moulding equipment which actually produces the end-product optical discs. The metalised master is clamped in a conductive electrodeposition frame with the data side facing outwards and lowered into an electroforming tank. The specially prepared and controlled tank water contains a nickel salt solution (usually nickel sulfamate) at a particular concentration, which may be adjusted slightly in different plants depending on the characteristics of the prior steps. The solution is carefully buffered to maintain its pH, and organic contaminants must be kept below one part in five million for good results. The bath is heated to approximately 50 °C. The glass master is rotated in the electroforming tank while a pump circulates the electroforming solution over the surface of the master. Electroforming differs from ordinary electroplating in its aim: the nickel deposit must build up over the master yet remain separable from it, whereas electroplating deposits metal directly onto a workpiece with the intention that it stay permanently adhered. The bare glass is not electroconductive, so nickel deposits only onto the thin sputtered metal coating, which serves as the cathode; the electroformed layer grows as one piece with that coating and is later lifted, together with it, off the glass. This requirement for separability, along with electroforming's more rigorous requirements for temperature control and bath purity, is the main difference between the two disciplines of electrodeposition. The first metal part struck from the metal-coated glass is the metal master, usually called the "father".
Another difference from electroplating is that the internal stress of the nickel must be controlled carefully, or the nickel stamper will not be flat. The solution cleanliness is important but is achieved by continuous filtration and the usual anode bagging systems. Another large difference is that the stamper thickness must be controlled to ±2% of the final thickness so that it will fit the injection moulding machines, whose gassing rings and centre clamps have very tight tolerances. This thickness control requires electronic current control and baffles in the solution to control distribution. The current must start off quite low, as the metallised layer is too thin to take large currents; as the thickness of the nickel on the glass master increases, the current can be increased steadily. The full electroforming current density is very high, with the full thickness of usually 0.3 mm taking approximately one hour. The part is removed from the tank and the metal layer carefully separated from the glass substrate. If the deposit has bonded to the master rather than separating cleanly, the process must begin anew, from the glass mastering phase. The metal part, now called a "father", has the desired data as a series of bumps rather than pits. The injection moulding process works better by flowing around high points rather than into pits on the metal surface. The father is washed with deionised water and other chemicals such as ammoniacal hydrogen peroxide, sodium hydroxide or acetone to remove all traces of resist or other contaminants. The glass master can be sent for reclamation, cleaning and checking before reuse. If defects are detected, it will be discarded, or repolished and recycled. Once cleaned of any loose nickel and resist, the father surface is washed and then passivated, either electrically or chemically, which allows the next plated layer to separate from the father. This passivation layer is an atomic layer of adsorbed oxygen that does not alter the physical surface. The father is clamped back into a frame and returned to the plating tank. This time the metal part that is grown is the mirror image of the father and is called a "mother"; as its surface carries pits rather than bumps, it cannot be used for moulding. The mother-father sandwich is carefully separated, and the mother is then washed, passivated and returned to the electroforming baths to have a mirror image, called a "son", produced on it. Most moulded CDs are produced from sons. Mothers can be regrown from fathers if they become damaged or if a very long run is planned. If handled correctly, a great many stampers can be grown from a single mother before stamper quality is reduced unacceptably. Fathers can be used directly as stampers if a very fast turnaround is required, or when yields are high enough that additional stampers are not needed and the father would otherwise sit wastefully in storage. At the end of a run, the mother is normally stored. A father, mother, and a collection of stampers (sometimes called "sons") are known collectively as a "family". Fathers and mothers are the same size as a glass substrate, typically 300 μm in thickness. Stampers do not require the extra space around the outside of the program area, and they are punched to remove the excess nickel from outside and inside the information area in order to fit the mould of the injection moulding machine (IMM). The physical dimensions of the mould vary depending on the injection tooling being used.
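The deposition figures quoted above (a final thickness of about 0.3 mm in roughly one hour at a very high current density) can be sanity-checked with Faraday's law of electrolysis. The following is a back-of-the-envelope sketch assuming uniform deposition and 100% current efficiency, not plant data:

```python
# Average current density needed to electroform nickel to a given
# thickness in a given time, from Faraday's law: J = h*n*F*rho/(M*t).
M_NI = 58.69     # g/mol, molar mass of nickel
RHO_NI = 8.908   # g/cm^3, density of nickel
N_E = 2          # electrons per Ni(2+) ion reduced
F = 96485.0      # C/mol, Faraday constant

def mean_current_density(thickness_cm: float, seconds: float) -> float:
    """Return the average current density in A/cm^2."""
    return thickness_cm * N_E * F * RHO_NI / (M_NI * seconds)

j = mean_current_density(0.030, 3600.0)  # 0.3 mm in one hour
print(f"~{j * 1000:.0f} mA/cm^2")        # ~244 mA/cm^2

# This is indeed a very high average plating current density, consistent
# with the ramped-current schedule described above.
```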
Replication CD moulding machines are specially designed high-temperature polycarbonate injection moulders. They have an average throughput of 550-900 discs per hour, per moulding line. Clear polycarbonate pellets are first dried at around 130 degrees Celsius for three hours (nominal; this depends on which optical-grade resin is in use) and are fed via vacuum transport into one end of the injection moulder's barrel (i.e., the feed throat), then moved to the injection chamber by a large screw inside the barrel. The barrel, wrapped with heater bands ranging in temperature from roughly 210 to 320 degrees Celsius, melts the polycarbonate. When the mould is closed, the screw moves forward to inject molten plastic into the mould cavity. When the mould is full, cooling water running through the mould halves, outside the cavity, cools the plastic enough for it to solidify. The entire process, from the mould closing through injection to opening again, takes approximately 3 to 5 seconds. The moulded disc (referred to as a 'green' disc, lacking final processing) is removed from the mould by vacuum handling: high-speed robot arms with vacuum suction caps. The discs are moved onto the finishing line infeed conveyor, or cooling station, in preparation for metallisation. At this point the discs are clear and contain all the desired digital information; however, they cannot be played because there is no reflective layer. The discs pass, one at a time, into the metallizer, a small chamber at approximately 10⁻³ Torr (130 mPa) vacuum. The process is called 'sputtering'. The metallizer contains a metal "target" – almost always an alloy of (mostly) aluminium and small amounts of other metals. There is a load-lock system (similar to an airlock) so the process chamber can be kept at high vacuum as the discs are exchanged. When the disc is rotated into the processing position by a swivel arm in the vacuum chamber, a small dose of argon gas is injected into the process chamber and 700 volts DC at up to 20 kW is applied to the target. This ignites a plasma, which sputters target material onto the disc in an anode-cathode transfer. The metal coats the data side of the disc (upper surface), covering the pits and lands. This metal layer is the reflective surface which can be seen on the reverse (non-label side) of a CD. This thin layer of metal is subject to corrosion from various contaminants and so is protected by a thin layer of lacquer. After metalisation, the discs pass on to a spin-coater, where UV-curable lacquer is dispensed onto the newly metallized layer. By rapid spinning, the lacquer coats the entire disc with a very thin layer (approximately 5 to 10 μm). After the lacquer is applied, the discs pass under a high-intensity UV lamp which cures the lacquer rapidly. The lacquer also provides a surface for a label, generally screen printed or offset printed. The printing ink(s) must be chemically compatible with the lacquer used. Markers used by consumers to write on the blank label surface can lead to breaks in the protective lacquer layer, which may lead to corrosion of the reflective layer and failure of the CD. Testing For quality control, both the stamper and the moulded discs are tested before a production run. Samples of the disc (test pressings) are taken during long production runs and tested for quality consistency. Pressed discs are analyzed on a signal analysis machine. The metal stamper can also be tested on a signal analysis machine which has been specially adapted (larger diameter, more fragile, etc.).
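As a quick consistency check on the figures above, a 3 to 5 second moulding cycle implies an ideal rate of 720-1200 discs per hour per line, which brackets the quoted real-world throughput of 550-900 once mould changes, downtime, and downstream finishing are accounted for:

```python
# Ideal moulding rate implied by the quoted 3-5 s cycle time.
for cycle_seconds in (3, 5):
    print(f"{cycle_seconds} s cycle -> {3600 // cycle_seconds} discs/hour (ideal)")
# 3 s -> 1200/hour, 5 s -> 720/hour, versus 550-900/hour in practice.
```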
The machine will scan the disc or stamper and measure various physical and electrical parameters. Errors can be introduced at every step of production, but the moulding process is the least subject to adjustment; sources of error are more readily identified and compensated for during mastering. If the errors are too severe, the stamper is rejected and a replacement installed. An experienced machine operator can interpret the report from the analysis system and optimise the moulding process to make a disc that meets the required Rainbow Book specification (e.g. Red Book for audio, from the Rainbow Books series). If no defects are found, the CD continues to printing, where a label is screen or offset printed on the top surface of the disc. Thereafter, discs are counted, packaged, and shipped.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Glue_language] | [TOKENS: 2671]
Contents Scripting language In computing, a script is a relatively short and simple set of instructions that typically automate an otherwise manual process. The act of writing a script is called scripting. A scripting language or script language is a programming language that is used for scripting. Originally, scripting was limited to automating shells in operating systems, and languages were relatively simple. Today, scripting is more pervasive, and some scripting languages include modern features that allow them to be used to develop application software as well. Overview A scripting language can be a general-purpose programming language or a domain-specific language for a given environment. When embedded in an application, it may be called an extension language. A scripting language is sometimes referred to as a very high-level programming language if it operates at a high level of abstraction, or as a control language, especially for job control languages on mainframe computers. The term scripting language is sometimes used in a wider sense, to refer to dynamic high-level programming languages in general. Some are strictly interpreted languages, while others use a form of compilation. In this context, the term script refers to a small program in such a language. Typically, a script is contained in a single file and is no larger than a few thousand lines of code. The scope of scripting languages ranges from small to large, and from highly domain-specific languages to general-purpose programming languages. A language may start as small and highly domain-specific and later develop into a portable and general-purpose language; conversely, a general-purpose language may later develop special domain-specific dialects. Characteristics Script is a subjective characterization that generally includes the following attributes. A script is usually not compiled, at least not in the usual meaning of the term. Generally, scripts are interpreted directly from source code or bytecode, or run as native code after just-in-time compilation. A script is generally relatively short and simple. As there is no strict limit on size or complexity, the label script is subjective: a few lines of code without branching is probably considered a script, while a codebase of multiple files that implements a sophisticated user or hardware interface, complicated algorithms, or multiprogramming is probably not. A script usually automates a task that would otherwise be performed by a person in a more manual way. A language that is primarily intended for scripting generally has limited capabilities compared to a general-purpose language; a scripting language may lack the functionality needed to write complex applications. Typically, a script starts executing at the first line of code, whereas an application typically starts at a special point in the code called the entry point. For example, Java is not script-like, since an application starts at the function named main, which need not be at the top of the code. The following code starts at main, then calls printHelloWorld, which prints "Hello World". In contrast, the following Python code prints "Hello World" without the main function or other syntax, such as a class definition, that Java requires (both examples are reconstructed after this paragraph). Scripts are often created or modified by the person executing them, but they are also often distributed, such as when large portions of games are written in a scripting language, as with the Google Chrome T-rex game.
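The two code examples referred to above did not survive extraction; the following are plausible reconstructions of what the original article showed (the identifiers main and printHelloWorld are taken from the surrounding text). First, the Java version, which must start at main inside a class definition:

```java
// Execution starts at main, which need not be at the top of the file.
public class HelloWorld {
    public static void main(String[] args) {
        printHelloWorld();
    }

    private static void printHelloWorld() {
        System.out.println("Hello World");
    }
}
```

And the equivalent Python script, which simply executes from its first line:

```python
print("Hello World")
```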
History Early mainframe computers (in the 1950s) were non-interactive, instead using batch processing. IBM's Job Control Language (JCL) is the archetype of languages used to control batch processing. The first interactive shells were developed in the 1960s to enable remote operation of the first time-sharing systems; these used shell scripts, which controlled running computer programs within a computer program, the shell. Calvin Mooers, in his TRAC language, is generally credited with inventing command substitution, the ability to embed commands in scripts that, when interpreted, insert a character string into the script. Multics called these active functions. Louis Pouzin wrote an early processor for command scripts called RUNCOM for CTSS around 1964. Stuart Madnick at MIT wrote a scripting language for IBM's CP/CMS in 1966. He originally called this processor COMMAND; it was later named EXEC. Multics included an offshoot of CTSS RUNCOM, also called RUNCOM. EXEC was eventually replaced by EXEC 2 and Rexx. Languages such as Tcl and Lua were specifically designed as general-purpose scripting languages that could be embedded in any application. Other languages, such as Visual Basic for Applications (VBA), provided strong integration with the automation facilities of an underlying system. Embedding such general-purpose scripting languages, instead of developing a new language for each application, also had obvious benefits, relieving the application developer of the need to code a language translator from scratch and allowing the user to apply skills learned elsewhere. Some software incorporates several different scripting languages. Modern web browsers typically provide a language for writing extensions to the browser itself, and several standard embedded languages for controlling the browser, including JavaScript (a dialect of ECMAScript) and XUL. Types Scripting languages can be categorized into several different types, with a considerable degree of overlap among the types. Scripting is often contrasted with system programming, as in Ousterhout's dichotomy or "programming in the large and programming in the small". In this view, scripting is glue code, connecting software components, and a language specialized for this purpose is a glue language (a minimal sketch of glue code follows below). Pipelines and shell scripting are archetypal examples of glue languages, and Perl was initially developed to fill this same role. Web development can be considered a use of glue languages, interfacing between a database and web server. But if a substantial amount of logic is written in script, it is better characterized as simply another software component, not "glue". Macro languages exposed to operating system or application components can serve as glue languages. These include Visual Basic for Applications, WordBasic, LotusScript, CorelScript, Hummingbird Basic, QuickScript, Rexx, SaxBasic, and WinWrap Basic. Other tools like AWK can also be considered glue languages, as can any language implemented by a Windows Script Host engine (VBScript, JScript and VBA by default in Windows, and third-party engines including implementations of Rexx, Perl, Tcl, Python, XSLT, Ruby, Modern Pascal, Delphi, and C). A majority of applications can access and use operating system components via the OS's object models or the applications' own functions.
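To make the glue-code idea concrete, here is a minimal sketch in Python (one of the glue languages named in this section). It implements no logic of its own; it merely runs an existing command-line tool and hands the result to the next step. The tool and filter chosen are arbitrary illustrations, and the example assumes a Unix-like system where `ls` exists:

```python
# Glue code: connect existing components rather than implement new logic.
import subprocess

# Step 1: invoke an existing program and capture its output.
listing = subprocess.run(["ls", "-l"], capture_output=True, text=True).stdout

# Step 2: feed that output to the next component (here, a trivial filter).
log_lines = [line for line in listing.splitlines() if line.endswith(".log")]
print("\n".join(log_lines))
```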
Other devices like programmable calculators may also have glue languages; the operating systems of PDAs such as Windows CE may have native or third-party macro tools available that glue applications together, in addition to implementations of common glue languages—including the Windows NT, DOS, and some Unix shells, as well as Rexx, Modern Pascal, PHP, and Perl. Depending upon the OS version, WSH and the default script engines (VBScript and JScript) are available. Programmable calculators can be programmed in glue languages in three ways. For example, the Texas Instruments TI-92 can, by factory default, be programmed with a command script language. The inclusion of the scripting and glue language Lua in the TI-Nspire series of calculators could be seen as a successor to this. The primary on-board high-level programming languages of most graphing calculators (most often Basic variants, sometimes Lisp derivatives, and, more uncommonly, C derivatives) in many cases can glue together calculator functions—such as graphs, lists, and matrices. Third-party implementations of more comprehensive Basic versions, perhaps closer to the variants listed as glue languages in this article, are available, and attempts to implement Perl, Rexx, or various operating system shells on the TI and HP graphing calculators have also been reported. PC-based C cross-compilers for some of the TI and HP machines, used together with tools that convert between C and languages such as Perl, Rexx, AWK, and shell scripts, make it possible to write a program in a glue language for eventual implementation (as a compiled program) on the calculator.[citation needed] A number of text editors support macros written either using a macro language built into the editor, e.g., The SemWare Editor (TSE) and vi improved (VIM); using an external implementation, e.g., XEDIT; or both, e.g., KEDIT. Sometimes text editors and edit macros are used under the covers to provide other applications, e.g., FILELIST and RDRLIST in CMS. A major class of scripting languages has grown out of the automation of job control, which relates to starting and controlling the behavior of system programs (in this sense, one might think of shells as being descendants of IBM's JCL, or Job Control Language, which was used for exactly this purpose). Many of these languages' interpreters double as command-line interpreters, such as the Unix shell or the MS-DOS COMMAND.COM. Others, such as AppleScript, offer the use of English-like commands to build scripts. With the advent of graphical user interfaces, a specialized kind of scripting language emerged for controlling a computer. These languages interact with the same graphic windows, menus, buttons, and so on that a human user would. They do this by simulating the actions of a user, and they are typically used to automate user actions. Such languages are also called macros when control is through simulated key presses, mouse clicks, or tapping or pressing on a touch-activated screen. These languages could in principle be used to control any GUI application, but in practice their use is limited because they need support from the application and from the operating system. There are a few exceptions to this limit: some GUI scripting languages are based on recognizing graphical objects from their display screen pixels, and these do not depend on support from the operating system or application (an illustrative sketch follows below).
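As an illustration of the pixel-recognition approach just described, the sketch below uses pyautogui, a third-party Python library for this style of GUI automation; button.png is a hypothetical screenshot of the on-screen control to be located:

```python
# Pixel-based GUI scripting: find a control by its on-screen appearance
# and simulate a user click. No application or OS support is required.
import pyautogui

try:
    location = pyautogui.locateCenterOnScreen("button.png")  # match by pixels
except pyautogui.ImageNotFoundException:  # newer versions raise instead of
    location = None                       # returning None

if location is not None:
    pyautogui.click(location.x, location.y)  # simulate the user's click
else:
    print("Control is not currently visible on screen")
```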
When the GUI provides the appropriate interfaces, as in the IBM Workplace Shell, a generic scripting language, e.g., Object REXX, can be used to write GUI scripts. Application-specific languages can be split into many different categories, e.g., standalone application languages (executable) and internal application-specific languages (PostScript, XML, and gscript being some of the most widely distributed, implemented by Adobe, Microsoft and Google respectively); among others, these include idiomatic scripting languages tailored to the needs of the application user. Likewise, many computer game systems use a custom scripting language to express the programmed actions of non-player characters and the game environment. Languages of this sort are designed for a single application; while they may superficially resemble a specific general-purpose language (e.g., QuakeC, modeled after C), they have custom features that distinguish them. Emacs Lisp, while a fully formed and capable dialect of Lisp, contains many special features that make it most useful for extending the editing functions of Emacs. An application-specific scripting language can be viewed as a domain-specific programming language specialized to one application. A number of languages have been designed for the purpose of replacing application-specific scripting languages by being embeddable in application programs. The application programmer (working in C or another systems language) includes "hooks" where the scripting language can control the application (a sketch of this hook pattern follows at the end of this section). These languages may be technically equivalent to an application-specific extension language, but when an application embeds a "common" language, the user gets the advantage of being able to transfer skills from application to application. A more generic alternative is simply to provide a library (often a C library) that a general-purpose language can use to control the application, without modifying the language for the specific domain. JavaScript began as, and still is mostly, a language for scripting inside web browsers. However, the standardization of the language as ECMAScript has made it popular as a general-purpose embeddable language. The Mozilla implementation SpiderMonkey is embedded in several environments such as the Yahoo Widgets Engine, and applications such as the Adobe products Flash (ActionScript) and Acrobat (for scripting PDF files). Tcl was created as an extension language but has come to be used more often as a general-purpose language in roles similar to Python, Perl, and Ruby. In contrast, Rexx was created as a job control language but is widely used as an extension language and a general-purpose language. Perl is a general-purpose language but had the Oraperl (1990) dialect, consisting of a Perl 4 binary with the Oracle Call Interface compiled in. This has, however, since been replaced by a library (a Perl module), DBD::Oracle. Other complex and task-oriented applications may incorporate and expose an embedded programming language to allow their users more control and give them more functionality than can be available through a user interface, no matter how sophisticated. For example, the Autodesk Maya 3D authoring tools embed the Maya Embedded Language, while Blender uses Python to fill this role. Some other types of applications that need faster feature addition or tweak-and-run cycles (e.g. game engines) also use an embedded language.
During development, this allows them to prototype features faster and tweak more freely, without the need for the user to have intimate knowledge of the inner workings of the application or to rebuild it after each tweak (which can take a significant amount of time). The scripting languages used for this purpose range from the more common and more famous Lua and Python to lesser-known ones such as AngelScript and Squirrel.
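A minimal sketch of the "hooks" pattern described in this section follows. For brevity the host application and the embedded language are both Python here, whereas in practice the host is typically written in C or C++ and embeds a language such as Lua or Python; all names are illustrative:

```python
# A host application exposing hooks to user scripts run by an embedded
# interpreter. The script can only call what the host chooses to expose.
class Application:
    def __init__(self):
        self.scene_objects = []

    def add_object(self, name):          # a hook exposed to user scripts
        self.scene_objects.append(name)

    def run_user_script(self, source):
        exposed_api = {"add_object": self.add_object}
        # Strip builtins so the script sees only the exposed hooks.
        exec(source, {"__builtins__": {}}, exposed_api)

app = Application()
app.run_user_script('add_object("tree"); add_object("rock")')
print(app.scene_objects)  # ['tree', 'rock']
```

Real embeddings (for example, Lua in a game engine) follow the same shape: the host registers functions into the interpreter's environment and then executes user scripts against that environment.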
========================================
[SOURCE: https://en.wikipedia.org/wiki/Falcon_9] | [TOKENS: 7372]
Contents Falcon 9 Falcon 9 is a partially reusable, two-stage-to-orbit, medium-lift launch vehicle[d] designed and manufactured in the United States by SpaceX. The first Falcon 9 launch was on June 4, 2010, and the first commercial resupply mission to the International Space Station (ISS) launched on October 8, 2012. In 2020, it became the first commercial rocket to launch humans to orbit. The Falcon 9 has been noted for its reliability and high launch cadence, with 599 successful launches, two in-flight failures, one partial failure and one pre-flight destruction. The rocket has two stages. The first (booster) stage carries the second stage and payload to a predetermined speed and altitude, after which the second stage accelerates the payload to its target orbit. The booster is capable of landing vertically to facilitate reuse, a feat first achieved on flight 20 in December 2015, while the fairing halves are scooped out of the water after a parachute-assisted descent. As of February 20, 2026, SpaceX has successfully landed Falcon 9 boosters 557 times.[e] Individual boosters have flown as many as 32 flights. Both stages are powered by SpaceX Merlin engines,[f] using cryogenic liquid oxygen and rocket-grade kerosene (RP-1) as propellants. Both fairing halves are likewise recovered and reflown multiple times; the first fairing recovery occurred on March 30, 2017, and individual fairing halves have since flown as many as 36 flights.[citation needed] The heaviest payloads flown to geostationary transfer orbit (GTO) were Intelsat 35e carrying 6,761 kg (14,905 lb), and Telstar 19V with 7,075 kg (15,598 lb). The former was launched into an advantageous super-synchronous transfer orbit, while the latter went into a lower-energy GTO, with an apogee well below the geostationary altitude. On January 24, 2021, Falcon 9 set a record for the most satellites launched by a single rocket, carrying 143 into orbit. Falcon 9 is human-rated for transporting NASA astronauts to the ISS, is certified for the National Security Space Launch program, and is listed by the NASA Launch Services Program as a "Category 3" (Low Risk) launch vehicle, allowing it to launch the agency's most expensive, important, and complex missions. Several versions of Falcon 9 have been built and flown: v1.0 flew from 2010 to 2013, v1.1 flew from 2013 to 2016, while v1.2 Full Thrust first launched in 2015, encompassing the Block 5 variant, which has been in operation since May 2018. Development history In October 2005, SpaceX announced plans to launch Falcon 9 in the first half of 2007. The initial launch would not occur until 2010. SpaceX spent its own capital to develop and fly its previous launcher, Falcon 1, with no pre-arranged sales of launch services. SpaceX developed Falcon 9 with private capital as well, but did have pre-arranged commitments by NASA to purchase several operational flights once specific capabilities were demonstrated. Milestone-specific payments were provided under the Commercial Orbital Transportation Services (COTS) program in 2006. The NASA contract was structured as a Space Act Agreement (SAA) "to develop and demonstrate commercial orbital transportation service", including the purchase of three demonstration flights. The overall contract award was US$278 million to provide three demonstration launches of Falcon 9 with the SpaceX Dragon cargo spacecraft. Additional milestones were added later, raising the total contract value to US$396 million.
In 2008, SpaceX won a Commercial Resupply Services (CRS) contract, a follow-on to NASA's Commercial Orbital Transportation Services (COTS) program, to deliver cargo to the ISS using Falcon 9/Dragon. Funds would be disbursed only after the demonstration missions were successfully completed. The contract totaled US$1.6 billion for a minimum of 12 missions to ferry supplies to and from the ISS. In 2011, SpaceX estimated that Falcon 9 v1.0 development costs were approximately US$300 million. NASA estimated development costs of US$3.6 billion had a traditional cost-plus contract approach been used. A 2011 NASA report "estimated that it would have cost the agency about US$4 billion to develop a rocket like the Falcon 9 booster based upon NASA's traditional contracting processes", while a "more commercial development" approach might have allowed the agency to pay only US$1.7 billion. In 2014, SpaceX released combined development costs for Falcon 9 and Dragon. NASA provided US$396 million, while SpaceX provided over US$450 million. Congressional testimony by SpaceX in 2017 suggested that the unusual NASA process of "setting only a high-level requirement for cargo transport to the space station [while] leaving the details to industry" had allowed SpaceX to complete the task at a substantially lower cost. "According to NASA's own independently verified numbers, SpaceX's development costs of both the Falcon 1 and Falcon 9 rockets were estimated at approximately $390 million in total." SpaceX originally intended to follow its Falcon 1 launch vehicle with an intermediate-capacity vehicle, Falcon 5. The Falcon line of vehicles is named after the Millennium Falcon, a fictional starship from the Star Wars film series. In 2005, SpaceX announced that it was instead proceeding with Falcon 9, a "fully reusable heavy-lift launch vehicle", and had already secured a government customer. Falcon 9 was described as capable of launching approximately 9,500 kilograms (20,900 lb) to low Earth orbit and was projected to be priced at US$27 million per flight with a 3.7 m (12 ft) payload fairing and US$35 million with a 5.2 m (17 ft) fairing. SpaceX also announced a heavy version of Falcon 9 with a payload capacity of approximately 25,000 kilograms (55,000 lb). Falcon 9 was intended to support LEO and GTO missions, as well as crew and cargo missions to the ISS. The original NASA COTS contract called for the first demonstration flight in September 2008, and the completion of all three demonstration missions by September 2009. In February 2008, the date slipped into the first quarter of 2009. According to Musk, complexity and Cape Canaveral regulatory requirements contributed to the delay. The first multi-engine test (two engines firing simultaneously, connected to the first stage) was completed in January 2008. Successive tests led to a 178-second (mission length), nine-engine test fire in November 2008. In October 2009, the first flight-ready all-engine test fire was conducted at SpaceX's test facility in McGregor, Texas. In November, SpaceX conducted the initial second-stage test firing, lasting forty seconds. In January 2010, a 329-second (mission length) orbit-insertion firing of the second stage was conducted at McGregor. The elements of the stack arrived at the launch site for integration at the beginning of February 2010. The flight stack went vertical at Space Launch Complex 40, Cape Canaveral, and in March, SpaceX performed a static fire test, where the first stage was fired without launch.
The test was aborted at T−2 seconds due to a failure in the high-pressure helium pump. All systems up to the abort performed as expected, and no additional issues needed addressing. A subsequent test on March 13 fired the first-stage engines for 3.5 seconds. In December 2010, the SpaceX production line was manufacturing a Falcon 9 (and Dragon spacecraft) every three months. By September 2013, SpaceX's total manufacturing space had increased to nearly 93,000 m2 (1,000,000 sq ft), in order to support a production capacity of 40 rocket cores annually. The factory was producing one Falcon 9 per month as of November 2013. By February 2016, the production rate for Falcon 9 cores had increased to 18 per year, and the number of first-stage cores that could be assembled at one time reached six. Since 2018, SpaceX has routinely reused first stages, reducing the demand for new cores. In 2023, SpaceX performed 91 launches of Falcon 9, only 4 of which used new boosters, and successfully recovered the booster on all flights. The Hawthorne factory continues to produce one (expendable) second stage for each launch. Design F9 is a two-stage, LOX/RP-1-powered launch vehicle. Both stages are equipped with Merlin 1D rocket engines. Every Merlin engine produces 854 kN (192,000 lbf) of thrust. They use a pyrophoric mixture of triethylaluminum-triethylborane (TEA-TEB) as an engine igniter. The booster stage has nine engines, arranged in a configuration that SpaceX calls Octaweb. The second stage has a single Merlin 1D Vacuum engine, fitted with either a short or a regular nozzle. Falcon 9 is capable of losing up to two engines and still completing the mission by burning the remaining engines longer. Each Merlin rocket engine is controlled by three voting computers, each having two CPUs that constantly check the other two computers in the trio. The Merlin 1D engines can vector thrust to adjust trajectory. The propellant tank walls and domes are made from an aluminum–lithium alloy. SpaceX uses all-friction-stir-welded tanks for their strength and reliability. The second-stage tank is a shorter version of the first-stage tank. It uses most of the same tooling, material, and manufacturing techniques. The F9 interstage, which connects the upper and lower stages, is a carbon-fibre aluminium-core composite structure that holds reusable separation collets and a pneumatic pusher system. The original stage separation system had twelve attachment points, reduced to three for v1.1. Falcon 9 uses a payload fairing (nose cone) to protect (non-Dragon) satellites during launch. The fairing is 13 m (43 ft) long, 5.2 m (17 ft) in diameter, weighs approximately 1,900 kg, and is constructed of carbon fiber skin overlaid on an aluminum honeycomb core. SpaceX designs and fabricates fairings in Hawthorne. Testing was completed at NASA's Plum Brook Station facility in spring 2013, where the acoustic shock and mechanical vibration of launch, plus electromagnetic static discharge conditions, were simulated on a full-size test article in a vacuum chamber. Since 2019, fairings are designed to re-enter the Earth's atmosphere and are reused for future missions. SpaceX uses multiple redundant flight computers in a fault-tolerant design. The software runs on Linux and is written in C++. For flexibility, commercial off-the-shelf parts and a system-wide radiation-tolerant design are used instead of rad-hardened parts.
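The triple-redundant, self-checking arrangement described above can be sketched in miniature. The following is an illustrative toy model of 2-of-3 majority voting, not SpaceX flight software:

```python
# Toy model of a voting triad: act only on a command that at least two
# of the three computers agree on; otherwise flag a fault.
from collections import Counter

def vote(commands):
    value, count = Counter(commands).most_common(1)[0]
    return value if count >= 2 else None  # None signals no majority

print(vote(["throttle_70", "throttle_70", "throttle_70"]))  # healthy triad
print(vote(["throttle_70", "throttle_70", "throttle_90"]))  # one unit outvoted
print(vote(["a", "b", "c"]))  # no majority -> fault handling required
```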
Each stage has stage-level flight computers, in addition to the Merlin-specific engine controllers, of the same fault-tolerant triad design to handle stage control functions. Each engine microcontroller CPU runs on a PowerPC architecture. Boosters that will be deliberately expended do not have legs or fins. Recoverable boosters include four extensible landing legs attached around the base. To control the core's descent through the atmosphere, SpaceX uses grid fins that deploy from the vehicle moments after stage separation. Initially, the v1.2 Full Thrust version of the Falcon 9 was equipped with grid fins made from aluminum, which were eventually replaced by larger, more aerodynamically efficient, and more durable titanium fins. The upgraded titanium grid fins, cast and cut from a single piece of titanium, offer significantly better maneuverability and survivability in the extreme heat of re-entry than the aluminum grid fins, and can be reused indefinitely with minimal refurbishment. Versions The Falcon 9 has seen five major revisions: v1.0, v1.1, Full Thrust (also called Block 3 or v1.2), Block 4, and Block 5. V1.0 flew five successful orbital launches from 2010 to 2013. The much larger v1.1 made its first flight in September 2013. The demonstration mission carried a small 500 kg (1,100 lb) primary payload, the CASSIOPE satellite. Larger payloads followed, starting with the launch of the SES-8 GEO communications satellite. Both v1.0 and v1.1 were expendable launch vehicles (ELVs). The Falcon 9 Full Thrust made its first flight in December 2015. The first stage of the Full Thrust version was reusable. The current version, known as Falcon 9 Block 5, made its first flight in May 2018. F9 v1.0 was an expendable launch vehicle developed from 2005 to 2010. It flew for the first time in 2010. V1.0 made five flights, after which it was retired. The first stage was powered by nine Merlin 1C engines arranged in a 3 × 3 grid. Each had a sea-level thrust of 556 kN (125,000 lbf) for a total liftoff thrust of about 5,000 kN (1,100,000 lbf). The second stage was powered by a single Merlin 1C engine modified for vacuum operation, with an expansion ratio of 117:1 and a nominal burn time of 345 seconds. Gaseous N2 thrusters were used on the second stage as a reaction control system (RCS). Early attempts to add a lightweight thermal protection system to the booster stage and parachute recovery were not successful. In 2011, SpaceX began a formal development program for a reusable Falcon 9, initially focusing on the first stage. V1.1 was 60% heavier, with 60% more thrust, than v1.0. Its nine (more powerful) Merlin 1D engines were rearranged into an "octagonal" pattern that SpaceX called Octaweb, designed to simplify and streamline manufacturing. The fuel tanks were 60% longer, making the rocket more susceptible to bending during flight. The v1.1 first stage offered a total sea-level thrust at liftoff of 5,885 kN (1,323,000 lbf), with the engines burning for a nominal 180 seconds. The stage's thrust rose to 6,672 kN (1,500,000 lbf) as the booster climbed out of the atmosphere. The stage separation system was redesigned to reduce the number of attachment points from twelve to three, and the vehicle had upgraded avionics and software. These improvements increased the payload capability from 9,000 kg (20,000 lb) to 13,150 kg (28,990 lb).
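The per-engine figures implied by the nine-engine totals quoted in this section can be checked directly; the short calculation below uses only numbers stated above:

```python
# Per-engine sea-level thrust implied by the quoted nine-engine totals.
totals_kn = {
    "v1.0 (Merlin 1C)": 5000,  # "about 5,000 kN" at liftoff
    "v1.1 (Merlin 1D)": 5885,  # 5,885 kN at liftoff
}
for version, total in totals_kn.items():
    print(f"{version}: ~{total / 9:.0f} kN per engine")
# v1.0: ~556 kN, matching the stated per-engine figure; v1.1: ~654 kN,
# below the later uprated 854 kN Merlin 1D value given in the Design
# section for current vehicles.
```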
SpaceX president Gwynne Shotwell stated that the v1.1 had about 30% more payload capacity than published on its price list, with the extra margin reserved for returning stages via powered re-entry. Development testing of the first stage was completed in July 2013, and it first flew in September 2013. The second-stage igniter propellant lines were later insulated to better support in-space restart following long coast phases for orbital trajectory maneuvers. Four extensible carbon fiber/aluminum honeycomb landing legs were included on later flights where landings were attempted. SpaceX pricing and payload specifications published for v1.1 as of March 2014 likewise included about 30% more performance than the published price list indicated; SpaceX reserved the additional performance for reusability testing. Many engineering changes to support reusability and recovery of the first stage were made for v1.1. The Full Thrust upgrade (also known as FT, v1.2 or Block 3) made major changes. It added cryogenic propellant cooling to increase density, allowing 17% higher thrust; improved the stage separation system; stretched the second stage to hold additional propellant; and strengthened the struts holding the helium bottles, which were believed to have been involved in the failure of flight 19. It offered a reusable first stage. Plans to reuse the second stage were abandoned, as the weight of a heat shield and other equipment would have reduced payload too much. The reusable booster was developed using systems and software tested on the Falcon 9 prototypes. The Autonomous Flight Safety System (AFSS) replaced the ground-based mission flight control personnel and equipment. AFSS offered on-board positioning, navigation and timing sources and decision logic. The benefits of AFSS included increased public safety, reduced reliance on range infrastructure, reduced range spacelift cost, increased schedule predictability and availability, operational flexibility, and launch slot flexibility. FT's capacity allowed SpaceX to choose between increasing payload, decreasing launch price, or both. Its first successful landing came in December 2015 and the first reflight in March 2017. In February 2017, the CRS-10 launch was the first operational launch utilizing AFSS. All SpaceX launches after March 16, 2017, used AFSS. A June 25, 2017, mission carried the second batch of ten Iridium NEXT satellites, for which the aluminum grid fins were replaced by larger titanium versions to improve control authority and heat tolerance during re-entry. In 2017, SpaceX started introducing incremental changes to the Full Thrust design, internally dubbed Block 4. Initially, only the second stage was modified to Block 4 standards, flying on top of a Block 3 first stage for three missions: NROL-76 and Inmarsat-5 F5 in May 2017, and Intelsat 35e in July 2017. Block 4 was described as a transition between the Full Thrust v1.2 Block 3 and Block 5. It included incremental engine thrust upgrades leading to Block 5. The maiden flight of the full Block 4 design (first and second stages) was the SpaceX CRS-12 mission on August 14, 2017. In October 2016, Musk described Block 5 as coming with "a lot of minor refinements that collectively are important, but uprated thrust and improved legs are the most significant". In January 2017, Musk added that Block 5 "significantly improves performance and ease of reusability". The maiden flight took place on May 11, 2018, with the Bangabandhu Satellite-1 satellite.
Capabilities As of February 20, 2026, Falcon 9 had achieved 599 out of 602 full mission successes (99.5%). SpaceX CRS-1 succeeded in its primary mission but left a secondary payload in an incorrect orbit, while SpaceX CRS-7 was destroyed in flight. In addition, AMOS-6 disintegrated on the launch pad during fueling for an engine test. Block 5 has a success rate of 99.8% (545/546). For comparison, the industry benchmark Soyuz series has performed 1,880 launches with a success rate of 95.1% (the latest Soyuz-2's success rate is 94%), the Russian Proton series has performed 425 launches with a success rate of 88.7% (the latest Proton-M's success rate is 90.1%), the European Ariane 5 has performed 117 launches with a success rate of 95.7%, and the Chinese Long March 3B has performed 85 launches with a success rate of 95.3%. F9's launch sequence includes a hold-down feature that allows full engine ignition and systems checks before liftoff. After the first-stage engines start, the launcher is held down and not released for flight until all propulsion and vehicle systems are confirmed to be operating normally. Similar hold-down systems have been used on launch vehicles such as the Saturn V and the Space Shuttle. An automatic safe shut-down and unloading of propellant occur if any abnormal conditions are detected. Prior to the launch date, SpaceX sometimes completes a test cycle culminating in a three-and-a-half-second first-stage engine static firing. F9 has triple-redundant flight computers and inertial navigation, with a GPS overlay for additional accuracy. Since the middle of 2024, the Falcon 9 has been involved in a number of mission anomalies, which have raised reliability concerns about the rocket. In July 2024, the upper-stage engine of the Falcon 9 malfunctioned during the launch of the Starlink Group 9-3 mission, resulting in the total loss of the payload and the Federal Aviation Administration grounding the rocket for two weeks. In August 2024, a Falcon 9 booster tipped over and was destroyed during landing after a successful Starlink launch, SpaceX's first unsuccessful booster landing in over three years. The rocket was briefly grounded for two days. In September 2024, after the successful launch of the Crew-9 mission, the upper-stage engine again malfunctioned during a deorbit burn, causing it to reenter outside its designated zone and resulting in another grounding of the Falcon fleet. This anomaly occurred only ten days before the planned launch date of NASA's flagship Europa Clipper mission, which had a limited launch window and required two burns of the rocket's upper stage, prompting NASA to participate in the investigation and convene its own independent anomaly review board. Europa Clipper eventually launched successfully on October 14. These anomalies were mentioned in NASA's Aerospace Safety Advisory Panel 2024 Annual Report, which warned that SpaceX's fast cadence of launches may "interfere with sound judgment, deliberate analysis, and careful implementation of corrective actions", while also praising the company's "openness with NASA and willingness to address each situation". In February 2025, another upper-stage malfunction occurred after the launch of the Starlink Group 11-4 mission, preventing the stage from executing its planned deorbit burn. It remained in orbit for two weeks before eventually falling near the city of Poznań, Poland, in an uncontrolled reentry.
Similar to the July 2024 failure, this anomaly was also caused by a liquid oxygen leak in the upper stage's engine. In March 2025, a Falcon 9 booster was lost when it caught fire and tipped over after a droneship landing following a Starlink launch. This failure was blamed on a fuel leak that occurred inside one of the first-stage engines during ascent. Space journalist Eric Berger has argued that the main factor behind the recent anomalies is SpaceX's "ever-present pressure to accelerate, even while taking on more and more challenging tasks", noting that the company may have reached "the speed limit for commercial spaceflight". He also noted that SpaceX is under intense pressure to develop its super-heavy Starship rocket, with many talented engineers moved off the Falcon and Dragon programs onto Starship. Like the Saturn family of rockets, the Falcon 9 uses multiple engines, allowing the mission to be completed even if one engine fails. Detailed descriptions of destructive engine failure modes and designed-in engine-out capabilities were made public. SpaceX emphasized that the first stage is designed for "engine-out" capability. CRS-1 in October 2012 was a partial success after engine number 1 lost pressure at 79 seconds and then shut down. To compensate for the resulting loss of acceleration, the first stage had to burn 28 seconds longer than planned, and the second stage had to burn an extra 15 seconds. That extra burn time reduced fuel reserves, so the likelihood that sufficient fuel remained to execute the full mission dropped from 99% to 95%. Because NASA had purchased the launch and therefore contractually controlled several mission decision points, NASA declined SpaceX's request to restart the second stage and attempt to deliver the secondary payload into the correct orbit. As a result, the secondary payload reentered the atmosphere. Merlin 1D engines have suffered two premature shutdowns on ascent. Neither affected the primary mission, but both landing attempts failed. On a March 18, 2020, Starlink mission, one of the first-stage engines failed 3 seconds before cut-off due to the ignition of some isopropyl alcohol that was not properly purged after cleaning. On another Starlink mission, on February 15, 2021, hot exhaust gases entered an engine through a fatigue-related hole in its cover. SpaceX stated the failed cover had the "highest... number of flights that this particular boot [cover] design had seen." SpaceX planned from the beginning to make both stages reusable. The first stages of early Falcon flights were equipped with parachutes and were covered with a layer of ablative cork to allow them to survive atmospheric re-entry; the stages were also made resistant to salt-water corrosion. These recovery attempts failed, however, as the stages did not survive the aerodynamic stress and heating of re-entry. In late 2011, SpaceX eliminated parachutes in favor of powered descent. The design was complete by February 2012. Powered landings were first flight-tested with the suborbital Grasshopper rocket. Between 2012 and 2013, this low-altitude, low-speed demonstration test vehicle made eight vertical landings, including a 79-second round-trip flight to an altitude of 744 m (2,441 ft). In March 2013, SpaceX announced that as of the first v1.1 flight, every booster would be equipped for powered descent. For Flight 6 in September 2013, after stage separation, the flight plan called for the first stage to conduct a burn to reduce its reentry velocity, and then a second burn just before reaching the water.
Although not a complete success, the stage was able to change direction and make a controlled entry into the atmosphere. During the final landing burn, the RCS thrusters could not overcome an aerodynamically induced spin. The resulting centrifugal effect starved the engine of propellant, leading to early engine shutdown and a hard splashdown. After four more ocean landing tests, the CRS-5 booster attempted a landing on the ASDS floating platform in January 2015. The rocket incorporated (for the first time on an orbital mission) grid fin aerodynamic control surfaces, and successfully guided itself to the ship before running out of hydraulic fluid and crashing into the platform. A second attempt occurred in April 2015, on CRS-6. After the launch, a bipropellant valve became stuck, preventing the control system from reacting rapidly enough for a successful landing. The first attempt to land a booster on a ground pad near the launch site occurred on flight 20, in December 2015. The landing was successful and the booster was recovered; this was the first time in history that a first stage achieved a controlled vertical landing after launching an orbital mission. The first successful booster landing on an ASDS occurred in April 2016, on the drone ship Of Course I Still Love You, during CRS-8. Sixteen test flights were conducted from 2013 to 2016, six of which achieved a soft landing and booster recovery. Since January 2017, with the exceptions of the center cores from the Falcon Heavy test flight and the Falcon Heavy USAF STP-2 mission, the Falcon 9 CRS-16 resupply mission, and the Starlink-4, 5, and 19 missions, every landing attempt has been successful. Two boosters have been lost or destroyed at sea after landing: the center core used during the Arabsat-6A mission, and B1058 after completing a Starlink flight. The first operational relaunch of a previously flown booster was accomplished in March 2017 with B1021 on the SES-10 mission, after its first flight on CRS-8 in April 2016. After landing a second time, it was retired. In June 2017, booster B1029 helped carry BulgariaSat-1 towards GTO after an Iridium NEXT LEO mission in January 2017, again achieving reuse and landing of a recovered booster. The third reuse flight came in November 2018 on the SSO-A mission. The core for the mission, Falcon 9 B1046, was the first Block 5 booster produced and had flown initially on the Bangabandhu Satellite-1 mission. In May 2021, the first booster reached 10 missions. Musk indicated that SpaceX intends to fly boosters until they see a failure on Starlink missions. As of February 20, 2026, the record is 32 flights by the same booster. SpaceX developed payload fairings, equipped with steerable parachutes and RCS thrusters, that can be recovered and reused. A payload fairing half was recovered for the first time in March 2017, after a soft landing in the ocean following SES-10. Subsequently, development began on a ship-based system involving a large net to catch returning fairings. Two dedicated ships were outfitted for this role, making their first catches in 2019. However, following mixed success, SpaceX returned to water landings and wet recovery. Despite public statements that they would endeavor to make the second stage reusable as well, by late 2014 SpaceX determined that the mass needed for a heat shield, landing engines, and other equipment to support recovery of the second stage was prohibitive, and abandoned second-stage reusability efforts.
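Because even a single engine at minimum throttle produces more thrust than the weight of a nearly empty booster, the returning stage cannot hover: the landing burn must be timed so that velocity reaches zero exactly at touchdown. The toy one-dimensional sketch below illustrates that timing problem; all numbers (landing mass, engine thrust, throttle floor, descent speed) are illustrative round figures rather than SpaceX data, and drag and propellant depletion are ignored.

    # Toy 1-D model of a "hoverslam" landing burn. Illustrative numbers only:
    # mass, thrust, and throttle floor are rough ballpark assumptions for a
    # Falcon 9-class booster; drag and propellant mass loss are ignored.

    G = 9.81                    # gravity, m/s^2

    def burn_profile(mass_kg, thrust_n, v0_ms):
        """Altitude and time needed to null a descent speed v0 at constant thrust."""
        accel = thrust_n / mass_kg - G       # net deceleration, m/s^2
        if accel <= 0:
            raise ValueError("thrust cannot overcome gravity at this mass")
        burn_alt = v0_ms ** 2 / (2 * accel)  # from v^2 = 2*a*s
        burn_time = v0_ms / accel
        return burn_alt, burn_time

    MASS = 25_000               # assumed landing mass, kg
    MERLIN_SL = 845_000         # assumed sea-level thrust of one engine, N
    V0 = 250                    # assumed descent speed when the burn starts, m/s

    for label, thrust in [("one engine at 40% throttle", 0.40 * MERLIN_SL),
                          ("one engine at full thrust", MERLIN_SL),
                          ("three engines at full thrust", 3 * MERLIN_SL)]:
        alt, t = burn_profile(MASS, thrust, V0)
        print(f"{label}: ignite at ~{alt:,.0f} m, burn for ~{t:.1f} s")

Even at the assumed 40% throttle floor, thrust-to-weight stays above one, which is why the burn start altitude, rather than a hover, is the control variable.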
Launch sites The Falcon 9 launches from three orbital launch sites: Space Launch Complex 40 (SLC-40) at Cape Canaveral Space Force Station in Florida (operational since 2007), Space Launch Complex 4E (SLC-4E) at Vandenberg Space Force Base in California (operational since 2013), and Launch Complex 39A (LC-39A) at the Kennedy Space Center in Florida (operational since 2017). SpaceX has designated specific roles for each launch site based on mission profiles. SLC-40 serves as the company's high-volume launch pad for missions to medium-inclination orbits (28.5–55°). SLC-4E is optimized for launches to highly inclined polar orbits (66–145°). LC-39A is primarily reserved for complex missions, such as Crew Dragon or Falcon Heavy launches. However, in 2024, SLC-40 was upgraded to accommodate Crew Dragon launches as a backup to LC-39A. On April 21, 2023, the United States Space Force granted SpaceX permission to lease Vandenberg Space Launch Complex 6 (SLC-6). This will become SpaceX's fourth orbital launch site, providing a second pad for highly inclined polar orbit launches and enabling Falcon Heavy launches from the West Coast. Pricing At the time of the Falcon 9's maiden flight in 2010, the advertised price for commercial satellite launches using the v1.0 version was $49.9–56 million. Over the years, the price increased, roughly keeping pace with inflation: by 2012 it rose to $54–59.5 million, followed by $56.5 million for the v1.1 version in 2013, $61.2 million in 2014, $62 million for the Full Thrust version in 2016, and $69.85 million for the Block 5 version in 2025. Government contracts typically involve higher prices, determined through competitive bidding processes. For instance, Dragon cargo missions to the ISS cost $133 million under a fixed-price contract with NASA, which included the spacecraft's use. Similarly, the 2013 DSCOVR mission for NOAA, launched aboard a Falcon 9, cost $97 million. As of 2020, U.S. Air Force launches using the Falcon 9 cost $95 million due to added security requirements. Because of the higher prices charged to government customers, in 2020 Roscosmos administrator Dmitry Rogozin accused SpaceX of price dumping in the commercial marketplace. The declining costs of Falcon 9 launches prompted competitors to develop lower-cost launch vehicles: Arianespace introduced the Ariane 6, ULA developed the Vulcan Centaur, and Roscosmos focused on the Proton-M. ULA CEO Tory Bruno stated that, by their estimates, each booster would need to fly ten times to break even on the additional costs of designing and operating reusable rockets. Musk countered, asserting that Falcon 9's recovery and refurbishment costs were under 10% of the cost of a new booster, achieving breakeven after just two flights and yielding substantial savings by the third. As of 2024, SpaceX's internal costs for a Falcon 9 launch are estimated between $15 million and $28 million, factoring in workforce expenses, refurbishment, assembly, operations, and facility depreciation. These efficiencies are primarily due to the reuse of first-stage boosters and payload fairings. The second stage, which is not reused, is believed to be the largest expense per launch, with the company's COO stating that each costs $12 million to produce. Rideshare payload programs SpaceX operates two regularly scheduled rideshare programs for small satellite deployment: Transporter and Bandwagon. The Transporter program, introduced in 2021, provides missions to sun-synchronous orbit, at an inclination near 90°.
Transporter flights primarily serve Earth observation payloads, with launches typically occurring every four months from Vandenberg. The Bandwagon program began in 2024 and provides access to mid-inclination orbits of about 45°, with missions operating roughly every six months from Cape Canaveral. Unlike traditional rideshare arrangements, these missions are not tied to a primary customer. For larger satellites between 500 and 2,500 kilograms (1,100 and 5,500 lb), SpaceX also offers a "cake topper" option, in which a spacecraft is mounted atop the payload stack, a position typically used by a primary payload in a conventional launch. As of 2025, launch pricing begins at US$300,000 for 50 kg (110 lb) to SSO. SpaceX also continues to provide more conventional rideshare opportunities in which small satellites accompany a large primary payload. Payloads can be accommodated using the EELV Secondary Payload Adapter (ESPA) ring, the same interstage adapter used for secondary payloads on U.S. Department of Defense missions flown on EELV-class launchers such as the Atlas V and Delta IV. Although the Falcon 9 is a medium-lift launch vehicle, the high launch cadence and comparatively low pricing of its rideshare programs have made SpaceX a leading provider in the small-satellite launch market. This has contributed to a challenging competitive environment for operators of dedicated small-lift launch vehicles. Public display of Falcon 9 vehicles SpaceX first put a Falcon 9 (B1019) on public display at their headquarters in Hawthorne, California, in 2016. In 2019, SpaceX donated a Falcon 9 (B1035) to Space Center Houston, in Houston, Texas. It was a booster that flew two missions, "the 11th and 13th supply missions to the International Space Station [and was] the first Falcon 9 rocket NASA agreed to fly a second time". In 2021, SpaceX donated a Falcon Heavy side booster (B1023) to the Kennedy Space Center Visitor Complex. In 2023, a Falcon 9 (B1021) was put on public display outside Dish Network's headquarters in Littleton, Colorado. Influence on space industry The Russian space agency has begun development of the Soyuz-7, which shares many similarities with the Falcon 9, including a reusable first stage that will land vertically with the help of legs. The first launch is planned for 2028–2030. China's Beijing Tianbing Technology company is developing the Tianlong-3, which is benchmarked against the Falcon 9. In 2024, China's central government designated commercial space as a key industry for support, with reusable medium-lift launchers seen as necessary to deploy China's planned low Earth orbit communications megaconstellations. See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/JavaFX_Script] | [TOKENS: 546]
Contents JavaFX Script JavaFX Script was a scripting language designed by Sun Microsystems, forming part of the JavaFX family of technologies on the Java Platform. JavaFX targeted the Rich Internet Application domain (competing with Adobe Flex and Microsoft Silverlight), specializing in rapid development of visually rich applications for the desktop and mobile markets. JavaFX Script worked with integrated development environments such as NetBeans, Eclipse and IntelliJ IDEA. JavaFX was released under the GNU General Public License, via the Sun-sponsored OpenJFX project. History JavaFX Script was originally called F3, for "Form Follows Function". F3 was primarily developed by Chris Oliver, who became a Sun employee through Sun's acquisition of SeeBeyond Technology Corporation in September 2005. Its name was changed to JavaFX Script, and it was open-sourced at JavaOne 2007. JavaFX 1.0 was released on December 4, 2008. On September 10, 2010, Oracle announced at JavaOne that JavaFX Script would be discontinued, although the JavaFX API would be made available to other languages for the Java Virtual Machine. On September 27, 2010, Stephen Chin announced Visage, a declarative user-interface language based on JavaFX Script with enhancements. On April 8, 2012, a project was created with the intention of resurrecting and enhancing the original F3 programming language, but the project appears to have been discontinued in August 2015. Features JavaFX Script was a compiled, statically typed, declarative scripting language for the Java Platform. It provided automatic data-binding, mutation triggers and declarative animation, using an expression language syntax (all code blocks potentially yield values). Through its standard JavaFX APIs it supported retained-mode vector graphics, video playback and standard Swing components. Although F3 began life as an interpreted language, prior to the first preview release (Q3 2008) JavaFX Script had shifted focus to being predominantly compiled. Interpreted JavaFX Script remained possible via the JSR 223 'Scripting for Java' bridge. Because it was built on top of the Java Platform, it was easy to use Java classes in JavaFX Script code. Compiled JavaFX Script was able to run on any platform that had a recent Java Runtime installed. Syntax JavaFX Script's declarative style for constructing user interfaces can provide shorter and more readable source code than the more verbose series of method calls required to construct an equivalent interface in the procedural style. A simple Hello World program, which displays a small window containing the text "Hello World!", can be written in either style; reconstructed sketches of both follow.
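A reconstruction of the standard JavaFX Script Hello World in the declarative style (the window and font sizes shown are illustrative rather than authoritative):

    Stage {
        title: "Hello World!"
        scene: Scene {
            width: 250
            height: 80
            content: [
                Text {
                    font: Font { size: 24 }
                    x: 10
                    y: 30
                    content: "Hello World!"
                }
            ]
        }
    }

An approximate procedural equivalent builds the same scene graph from an explicit entry point; the run function here follows the language's conventional signature:

    function run(args: String[]) {
        var text = Text {
            font: Font { size: 24 }
            x: 10
            y: 30
            content: "Hello World!"
        };
        Stage {
            title: "Hello World!"
            scene: Scene {
                width: 250
                height: 80
                content: [ text ]
            }
        }
    }

See also References Bibliography External links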
========================================
[SOURCE: https://en.wikipedia.org/wiki/Gallery_of_sovereign_state_flags] | [TOKENS: 727]
Contents List of national flags of sovereign states All 193 member states and 2 General Assembly non-member observer states of the United Nations, in addition to several de facto states, represent themselves with national flags. National flags generally contain symbolism of their respective state and serve as an emblem distinguishing them from other states in international politics. National flags are adopted by governments to strengthen national bonds and legitimate formal authority. Such flags may contain symbolic elements of their peoples, militaries, territories, rulers, and dynasties. The flag of Denmark is the oldest flag still in current use, as it has been recognized as a national symbol since the 13th century. Background and definitions According to the Collins English Dictionary, a national flag is "a flag that represents or is an emblem of a country". The word country can be used to refer to a nation-state or sovereign state, sometimes also called an independent state, though a country may also be a sub-division of a sovereign state, such as with the United Kingdom consisting of a union of four countries or nations — specifically two kingdoms (the countries of England and Scotland), a principality (the country of Wales), and a province (Northern Ireland). It is customary in international law that nation-states adopt a flag to distinguish themselves from other states. National flags are considered to "provide perhaps the strongest, clearest statement of national identity," and governments have used them to promote and create bonds within the country, motivate patriotism, honor the efforts of citizens, and legitimate formal authority. Throughout history, elements within flags have been used to symbolize rulers, dynasties, territories, militaries, and peoples of their respective countries. Flags also conceptually represent a country's core values, such as group membership and love for the country. In 1975, American vexillologist Whitney Smith stated thus regarding the role of flags in society: So strong is the tradition of flags, we may not be far from the truth in surmising that there is a law – not of nature, but of human society – which impels man to make and use flags. There is perhaps no more striking demonstration of this than the fact that, despite the absence of any international regulation or treaty requiring of a national flag, without exception every country has adopted at least one. — Whitney Smith, Flags Through the Ages and Across the World, p. 32 According to the Oxford English Dictionary, a sovereign state is "a state or nation with a defined territory and a permanent population, which administers its own government, and which is recognized as not subject to or dependent upon another power." The number of sovereign states in the world is generally derived from the number of member states of the United Nations (UN), although non-member states do exist, with such states being called de facto states. As of 2024, the UN includes 193 member states and 2 permanent observer states: Palestine and Vatican City. De facto states include Northern Cyprus, Abkhazia, South Ossetia, Transnistria, Kosovo, the Sahrawi Republic, Somaliland, and Taiwan. The oldest flag of a sovereign state which is currently in use is the flag of Denmark, which has been recognized as a national symbol of the country since the 13th century, although the current version was officially adopted in 1867.
All 193 member states and 2 observer states are represented by their respective flags at the Headquarters of the United Nations in New York City. Flags of UN member states and General Assembly non-member observer states Flags of de facto states See also Notes References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/Oxidising] | [TOKENS: 2839]
Contents Redox Redox (/ˈrɛdɒks/ RED-oks, /ˈriːdɒks/ REE-doks, reduction–oxidation or oxidation–reduction) is a type of chemical reaction in which the oxidation states of the reactants change. Oxidation is the loss of electrons or an increase in the oxidation state, while reduction is the gain of electrons or a decrease in the oxidation state. The oxidation and reduction processes occur simultaneously in the chemical reaction. There are two classes of redox reactions: electron-transfer reactions, in which electrons pass from the reductant to the oxidant, and atom-transfer reactions, in which the oxidation state changes through the transfer of an atom such as oxygen or hydrogen. Terminology "Redox" is a portmanteau of "reduction" and "oxidation." The term was first used in a 1928 article by Leonor Michaelis and Louis B. Flexner. Oxidation is a process in which a substance loses electrons. Reduction is a process in which a substance gains electrons. The processes of oxidation and reduction occur simultaneously and cannot occur independently. In redox processes, the reductant transfers electrons to the oxidant. Thus, in the reaction, the reductant or reducing agent loses electrons and is oxidized, and the oxidant or oxidizing agent gains electrons and is reduced. The pair of an oxidizing and reducing agent that is involved in a particular reaction is called a redox pair. A redox couple is a reducing species and its corresponding oxidizing form, e.g., Fe2+/Fe3+. The oxidation alone and the reduction alone are each called a half-reaction, because two half-reactions always occur together to form a whole reaction. In electrochemical reactions the oxidation and reduction processes do occur simultaneously but are separated in space. Oxidation originally implied a reaction with oxygen to form an oxide. Later, the term was expanded to encompass substances that accomplished chemical reactions similar to those of oxygen. Ultimately, the meaning was generalized to include all processes involving the loss of electrons or the increase in the oxidation state of a chemical species. Substances that have the ability to oxidize other substances (cause them to lose electrons) are said to be oxidative or oxidizing, and are known as oxidizing agents, oxidants, or oxidizers. The oxidant removes electrons from another substance, and is thus itself reduced. Because it "accepts" electrons, the oxidizing agent is also called an electron acceptor. Oxidants are usually chemical substances with elements in high oxidation states (e.g., N2O4, MnO4−, CrO3, Cr2O72−, OsO4), or else highly electronegative elements (e.g., O2, F2, Cl2, Br2, I2) that can gain extra electrons by oxidizing another substance. Oxidizers are oxidants, but the term is mainly reserved for sources of oxygen, particularly in the context of explosions. Nitric acid is a strong oxidizer. Substances that have the ability to reduce other substances (cause them to gain electrons) are said to be reductive or reducing and are known as reducing agents, reductants, or reducers. The reductant transfers electrons to another substance and is thus itself oxidized. Because it donates electrons, the reducing agent is also called an electron donor. Electron donors can also form charge transfer complexes with electron acceptors. The word reduction originally referred to the loss in weight upon heating a metallic ore such as a metal oxide to extract the metal. In other words, ore was "reduced" to metal. Antoine Lavoisier demonstrated that this loss of weight was due to the loss of oxygen as a gas. Later, scientists realized that the metal atom gains electrons in this process.
The meaning of reduction then became generalized to include all processes involving a gain of electrons. Reducing equivalent refers to chemical species which transfer the equivalent of one electron in redox reactions. The term is common in biochemistry. A reducing equivalent can be an electron or a hydrogen atom as a hydride ion. Reductants in chemistry are very diverse. Electropositive elemental metals, such as lithium, sodium, magnesium, iron, zinc, and aluminium, are good reducing agents. These metals donate electrons relatively readily. Hydride transfer reagents, such as NaBH4 and LiAlH4, reduce by atom transfer: they transfer the equivalent of hydride, or H−. These reagents are widely used in the reduction of carbonyl compounds to alcohols. A related method of reduction involves the use of hydrogen gas (H2) as a source of H atoms. The electrochemist John Bockris proposed the words electronation and de-electronation to describe reduction and oxidation processes, respectively, when they occur at electrodes. These words are analogous to protonation and deprotonation. IUPAC has recognized the terms electronation and de-electronation. Rates, mechanisms, and energies Redox reactions can occur slowly, as in the formation of rust, or rapidly, as in the case of burning fuel. Electron transfer reactions are generally fast, occurring within the time of mixing. The mechanisms of atom-transfer reactions are highly variable because many kinds of atoms can be transferred. Such reactions can also be quite complex, involving many steps. The mechanisms of electron-transfer reactions occur by two distinct pathways, inner sphere electron transfer and outer sphere electron transfer. Analysis of bond energies and ionization energies in water allows calculation of the thermodynamic aspects of redox reactions. Standard electrode potentials (reduction potentials) Each half-reaction has a standard electrode potential (E°cell), which is equal to the potential difference or voltage at equilibrium under standard conditions of an electrochemical cell in which the cathode reaction is the half-reaction considered, and the anode is a standard hydrogen electrode where hydrogen is oxidized: 1⁄2 H2 → H+ + e−. The electrode potential of each half-reaction is also known as its reduction potential (E°red), or potential when the half-reaction takes place at a cathode. The reduction potential is a measure of the tendency of the oxidizing agent to be reduced. Its value is zero for H+ + e− → 1⁄2 H2 by definition, positive for oxidizing agents stronger than H+ (e.g., +2.866 V for F2) and negative for oxidizing agents that are weaker than H+ (e.g., −0.763 V for Zn2+). For a redox reaction that takes place in a cell, the potential difference is E°cell = E°cathode − E°anode. However, the potential of the reaction at the anode is sometimes expressed as an oxidation potential, E°ox = −E°red. The oxidation potential is a measure of the tendency of the reducing agent to be oxidized but does not represent the physical potential at an electrode. With this notation, the cell voltage equation is written with a plus sign: E°cell = E°cathode + E°ox, anode. Examples of redox reactions In the reaction between hydrogen and fluorine, hydrogen is being oxidized and fluorine is being reduced: H2 + F2 → 2 HF. This spontaneous reaction releases a large amount of energy (542 kJ per 2 g of hydrogen) because two H–F bonds are much stronger than one H–H bond and one F–F bond. This reaction can be analyzed as two half-reactions.
The oxidation reaction converts hydrogen to protons: H2 → 2 H+ + 2 e−. The reduction reaction converts fluorine to the fluoride anion: F2 + 2 e− → 2 F−. The half-reactions are combined so that the electrons cancel: H2 + F2 → 2 H+ + 2 F−. The protons and fluoride combine to form hydrogen fluoride in a non-redox reaction: 2 H+ + 2 F− → 2 HF. The overall reaction is: H2 + F2 → 2 HF. In this type of reaction, a metal atom in a compound or solution is replaced by an atom of another metal. For example, copper is deposited when zinc metal is placed in a copper(II) sulfate solution: Zn(s) + CuSO4(aq) → ZnSO4(aq) + Cu(s). In the above reaction, zinc metal displaces the copper(II) ion from the copper sulfate solution, thus liberating free copper metal. The reaction is spontaneous and releases 213 kJ per 65 g of zinc. The ionic equation for this reaction is: Zn + Cu2+ → Zn2+ + Cu. As two half-reactions, it is seen that the zinc is oxidized: Zn → Zn2+ + 2 e−. And the copper is reduced: Cu2+ + 2 e− → Cu. A disproportionation reaction is one in which a single substance is both oxidized and reduced. For example, thiosulfate ion with sulfur in oxidation state +2 can react in the presence of acid to form elemental sulfur (oxidation state 0) and sulfur dioxide (oxidation state +4): S2O32− + 2 H+ → S + SO2 + H2O. Thus one sulfur atom is reduced from +2 to 0, while the other is oxidized from +2 to +4. Redox reactions in industry Cathodic protection is a technique used to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. A simple method of protection connects the protected metal to a more easily corroded "sacrificial anode", which acts as the anode. The sacrificial metal, instead of the protected metal, then corrodes. Oxidation is used in a wide variety of industries, such as in the production of cleaning products and in oxidizing ammonia to produce nitric acid. Redox reactions are the foundation of electrochemical cells, which can generate electrical energy or support electrosynthesis. Metal ores often contain metals in oxidized states, such as oxides or sulfides, from which the pure metals are extracted by smelting at high temperatures in the presence of a reducing agent. The process of electroplating uses redox reactions to coat objects with a thin layer of a material, as in chrome-plated automotive parts, silver-plated cutlery, galvanization and gold-plated jewelry. Redox reactions in biology Many essential biological processes involve redox reactions. Before some of these processes can begin, iron must be assimilated from the environment. Aerobic cellular respiration, for instance, is the oxidation of substrates [in this case: glucose (C6H12O6)] and the reduction of oxygen to water. The summary equation for aerobic respiration is: C6H12O6 + 6 O2 → 6 CO2 + 6 H2O. The process of cellular respiration also depends heavily on the reduction of NAD+ to NADH and the reverse reaction (the oxidation of NADH to NAD+). Photosynthesis and cellular respiration are complementary, but photosynthesis is not the reverse of the redox reaction in cellular respiration: 6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2. Biological energy is frequently stored and released using redox reactions. Photosynthesis involves the reduction of carbon dioxide into sugars and the oxidation of water into molecular oxygen. The reverse reaction, respiration, oxidizes sugars to produce carbon dioxide and water. As intermediate steps, the reduced carbon compounds are used to reduce nicotinamide adenine dinucleotide (NAD+) to NADH, which then contributes to the creation of a proton gradient, which drives the synthesis of adenosine triphosphate (ATP) and is maintained by the reduction of oxygen. In animal cells, mitochondria perform similar functions.
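The quantitative link between the standard potentials introduced above and the concentration ratios that define the redox state discussed below is the Nernst equation. A brief worked form in LaTeX (the Cu2+/Cu value of +0.337 V is a standard textbook figure; the Zn2+/Zn value is quoted earlier in the article):

    % Nernst equation for a reduction half-reaction ox + n e- -> red:
    E = E^{\circ} - \frac{RT}{nF} \ln \frac{[\mathrm{red}]}{[\mathrm{ox}]}

    % Standard cell potential for the Zn/Cu displacement reaction above:
    E^{\circ}_{\mathrm{cell}} = E^{\circ}_{\mathrm{cathode}} - E^{\circ}_{\mathrm{anode}}
                              = (+0.337~\mathrm{V}) - (-0.763~\mathrm{V}) = +1.100~\mathrm{V}

At standard concentrations the logarithmic term vanishes, so the measured cell voltage reduces to the difference of the two tabulated reduction potentials.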
The term redox state is often used to describe the balance of GSH/GSSG, NAD+/NADH and NADP+/NADPH in a biological system such as a cell or organ. The redox state is reflected in the balance of several sets of metabolites (e.g., lactate and pyruvate, beta-hydroxybutyrate and acetoacetate), whose interconversion is dependent on these ratios. Redox mechanisms also control some cellular processes. Redox proteins and their genes must be co-located for redox regulation according to the CoRR hypothesis for the function of DNA in mitochondria and chloroplasts. A wide variety of aromatic compounds are enzymatically reduced to form free radicals that contain one more electron than their parent compounds. In general, the electron donor is any of a wide variety of flavoenzymes and their coenzymes. Once formed, these anion free radicals reduce molecular oxygen to superoxide and regenerate the unchanged parent compound. The net reaction is the oxidation of the flavoenzyme's coenzymes and the reduction of molecular oxygen to form superoxide. This catalytic behavior has been described as a futile cycle or redox cycling. Redox reactions in geology Minerals are generally oxidized derivatives of metals. Iron is mined as ores such as magnetite (Fe3O4) and hematite (Fe2O3). Titanium is mined as its dioxide, usually in the form of rutile (TiO2). These oxides must be reduced to obtain the corresponding metals, often achieved by heating these oxides with carbon or carbon monoxide as reducing agents. Blast furnaces are the reactors where iron oxides and coke (a form of carbon) are combined to produce molten iron. The main chemical reaction producing the molten iron is: Fe2O3 + 3 CO → 2 Fe + 3 CO2. Redox reactions in soils Electron transfer reactions are central to myriad processes and properties in soils, and redox potential, quantified as Eh (platinum electrode potential relative to the standard hydrogen electrode) or pe (analogous to pH, as the negative logarithm of electron activity), is a master variable, along with pH, that controls and is governed by chemical reactions and biological processes. Early theoretical research with applications to flooded soils and paddy rice production was seminal for subsequent work on thermodynamic aspects of redox and plant root growth in soils. Later work built on this foundation and expanded it for understanding redox reactions related to heavy metal oxidation state changes, pedogenesis and morphology, organic compound degradation and formation, free radical chemistry, wetland delineation, soil remediation, and various methodological approaches for characterizing the redox status of soils. Mnemonics The key terms involved in redox can be confusing. For example, a reagent that is oxidized loses electrons; however, that reagent is referred to as the reducing agent. Likewise, a reagent that is reduced gains electrons and is referred to as the oxidizing agent. These mnemonics are commonly used by students to help memorise the terminology: "OIL RIG" (Oxidation Is Loss, Reduction Is Gain, of electrons) and "LEO the lion says GER" (Lose Electrons: Oxidation; Gain Electrons: Reduction). See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Usenet] | [TOKENS: 7184]
Contents Usenet Usenet (/ˈjuːznɛt/), a portmanteau of User's Network, is a worldwide distributed discussion system available on computers. It was developed from the general-purpose Unix-to-Unix Copy (UUCP) dial-up network architecture. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980. Users read and post messages (called articles or posts, and collectively termed news) to one or more topic categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to the Internet forums that were developed after the introduction of the World Wide Web. Discussions are threaded, as with web forums and BBSes, though posts are stored on the server sequentially. A major difference between a BBS or web message board and Usenet is the absence of a central server and dedicated administrator or hosting provider. Usenet is distributed among a large, constantly changing set of news servers that store and forward messages to one another via "news feeds". Individual users may read messages from and post to a local (or simply preferred) news server, which can be operated by anyone, and those posts will automatically be forwarded to any other news servers peered with the local one, while the local server will receive any news its peers have that it currently lacks. This results in the automatic proliferation of content posted by any user on any server to any other user subscribed to the same newsgroups on other servers. As with BBSes and message boards, individual news servers or service providers are under no obligation to carry any specific content, and may refuse to do so for many reasons: a news server might attempt to control the spread of spam by refusing to accept or forward any posts that trigger spam filters, or a server without high-capacity data storage may refuse to carry any newsgroups used primarily for file sharing, limiting itself to discussion-oriented groups. However, unlike BBSes and web forums, the dispersed nature of Usenet usually permits users who are interested in receiving some content to access it simply by choosing to connect to news servers that carry the feeds they want. Usenet is culturally and historically significant in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ", "flame", "sockpuppet", and "spam". In the early 1990s, shortly before access to the Internet became commonly affordable, Usenet connections via FidoNet's dial-up BBS networks made long-distance or worldwide discussions and other communication widespread. The name Usenet comes from the term "users' network". The first Usenet group was NET.general, which quickly became net.general. The first commercial spam on Usenet was from immigration attorneys Canter and Siegel advertising green card services. On the Internet, Usenet is transported via the Network News Transfer Protocol (NNTP) on Transmission Control Protocol (TCP) port 119 for standard, unprotected connections, and on TCP port 563 for Secure Sockets Layer (SSL) encrypted connections. Introduction Usenet was conceived in 1979 and publicly established in 1980, at the University of North Carolina at Chapel Hill and Duke University, over a decade before the World Wide Web went online (and thus before the general public received access to the Internet), making it one of the oldest computer network communications systems still in widespread use. 
It was originally built on the "poor man's ARPANET", employing UUCP as its transport protocol to offer mail and file transfers, as well as announcements through the newly developed news software such as A News. The name "Usenet" emphasizes its creators' hope that the USENIX organization would take an active role in its operation. The articles that users post to Usenet are organized into topical categories known as newsgroups, which are themselves logically organized into hierarchies of subjects. For instance, sci.math and sci.physics are within the sci.* hierarchy, while talk.origins and talk.atheism are in the talk.* hierarchy. When a user subscribes to a newsgroup, the news client software keeps track of which articles that user has read. In most newsgroups, the majority of the articles are responses to some other article. The set of articles that can be traced to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads. For example, in the wine-making newsgroup rec.crafts.winemaking, someone might start a thread called "What's the best yeast?", and that thread or conversation might grow to dozens of replies, by perhaps six or eight different authors. Over several days, that conversation about different wine yeasts might branch into several sub-threads in a tree-like form. When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers (its "newsfeeds") and exchanges articles with them. In this fashion, the article is copied from server to server and should eventually reach every server in the network. The later peer-to-peer networks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Usenet was designed under conditions when networks were much slower and not always available. Many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out, largely because the POTS network was typically used for transfers and phone charges were lower at night. The format and transmission of Usenet articles is similar to that of Internet e-mail messages. The difference between the two is that Usenet articles can be read by any user whose news server carries the group to which the message was posted, as opposed to email messages, which have one or more specific recipients. Today, Usenet has diminished in importance with respect to Internet forums, blogs, mailing lists and social media. Usenet differs from such media in several ways: Usenet requires no personal registration with the group concerned; information need not be stored on a remote server; archives are always available; and reading the messages does not require a mail or web client, but a news client. However, it is now possible to read and participate in Usenet newsgroups to a large degree using ordinary web browsers, since most newsgroups are now copied to several web sites. The groups in alt.binaries are still widely used for data transfer. ISPs, news servers, and newsfeeds Many Internet service providers (ISPs), and many other Internet sites, operate news servers for their users to access. ISPs that do not operate their own servers directly will often offer their users an account from another provider that specifically operates newsfeeds. In early news implementations, the server and newsreader were a single program suite, running on the same system.
Today, one uses separate newsreader client software, a program that resembles an email client but accesses Usenet servers instead. Not all ISPs run news servers. A news server is one of the most difficult Internet services to administer because of the large amount of data involved, the small customer base (compared to mainstream Internet service), and a disproportionately high volume of customer support incidents (frequently complaining of missing news articles). Some ISPs outsource news operations to specialist sites, which will usually appear to a user as though the ISP itself runs the server. Many of these sites carry a restricted newsfeed, with a limited number of newsgroups. Commonly omitted from such a newsfeed are foreign-language newsgroups and the alt.binaries hierarchy, which largely carries software, music, videos and images, and accounts for over 99 percent of article data. There are also Usenet providers that offer a full unrestricted service to users whose ISPs do not carry news, or that carry a restricted feed. Newsgroups are typically accessed with newsreaders: applications that allow users to read and reply to postings in newsgroups. These applications act as clients to one or more news servers. Historically, Usenet was associated with the Unix operating system developed at AT&T, but newsreaders were soon available for all major operating systems. Email client programs and Internet suites of the late 1990s and 2000s often included an integrated newsreader. Newsgroup enthusiasts often criticized these as inferior to standalone newsreaders that made correct use of Usenet protocols, standards and conventions. With the rise of the World Wide Web (WWW), web front-ends (web2news) have become more common. Web front ends have lowered the technical entry barrier to a single application and no Usenet NNTP server account. There are numerous websites now offering web-based gateways to Usenet groups, although some people have begun filtering messages made by some of the web interfaces for one reason or another. Google Groups is one such web-based front end, and some web browsers can access Google Groups directly via news: protocol links. A minority of newsgroups are moderated, meaning that messages submitted by readers are not distributed directly to Usenet, but instead are emailed to the moderators of the newsgroup for approval. The moderator receives submitted articles, reviews them, and injects approved articles so that they can be properly propagated worldwide. Articles approved by a moderator must bear the Approved: header line. Moderators ensure that the messages that readers see in the newsgroup conform to the charter of the newsgroup, though they are not required to follow any such rules or guidelines. Typically, moderators are appointed in the proposal for the newsgroup, and changes of moderators follow a succession plan. Historically, a mod.* hierarchy existed before the Usenet reorganization. Now, moderated newsgroups may appear in any hierarchy, typically with .moderated added to the group name. Usenet newsgroups in the Big-8 hierarchy are created by proposals called a Request for Discussion, or RFD. The RFD is required to have the following information: newsgroup name, checkgroups file entry, and moderated or unmoderated status. If the group is to be moderated, then at least one moderator with a valid email address must be provided.
Other information which is beneficial but not required includes: a charter, a rationale, and a moderation policy if the group is to be moderated. Discussion of the new newsgroup proposal follows, and is finished with the members of the Big-8 Management Board making the decision, by vote, to either approve or disapprove the new newsgroup. Unmoderated newsgroups form the majority of Usenet newsgroups, and messages submitted by readers for unmoderated newsgroups are immediately propagated for everyone to see. The tension between minimal editorial filtering and rapid propagation is one crux of the Usenet community. One little-used check on propagation is canceling a propagated message, but few Usenet users use this command, and some news readers do not offer cancellation commands, in part because article storage expires in relatively short order anyway. Almost all unmoderated Usenet groups tend to receive large amounts of spam. Usenet is a set of protocols for generating, storing and retrieving news "articles" (which resemble Internet mail messages) and for exchanging them among a readership which is potentially widely distributed. These protocols most commonly use a flooding algorithm which propagates copies throughout a network of participating servers. Whenever a message reaches a server, that server forwards the message to all its network neighbors that haven't yet seen the article (a sketch of this rule appears at the end of this passage). Only one copy of a message is stored per server, and each server makes it available on demand to the (typically local) readers able to access that server. The collection of Usenet servers thus has a certain peer-to-peer character, in that they share resources by exchanging them; however, the granularity of exchange is on a different scale than in a modern peer-to-peer system, and this characteristic excludes the actual users of the system, who connect to the news servers with a typical client-server application, much like an email reader. RFC 850 was the first formal specification of the messages exchanged by Usenet servers. It was superseded by RFC 1036 and subsequently by RFC 5536 and RFC 5537. In cases where unsuitable content has been posted, Usenet has support for automated removal of a posting from the whole network by creating a cancel message, although due to a lack of authentication and resultant abuse, this capability is frequently disabled. Copyright holders may still request the manual deletion of infringing material using the provisions of World Intellectual Property Organization treaty implementations, such as the United States Online Copyright Infringement Liability Limitation Act, but this would require giving notice to each individual news server administrator. On the Internet, Usenet is transported via the Network News Transfer Protocol (NNTP) on TCP port 119 for standard, unprotected connections and on TCP port 563 for SSL encrypted connections. The major set of worldwide newsgroups is contained within nine hierarchies, eight of which are operated under consensual guidelines that govern their administration and naming. The current Big Eight are: comp.*, humanities.*, misc.*, news.*, rec.*, sci.*, soc.*, and talk.*. The alt.* hierarchy is not subject to the procedures controlling groups in the Big Eight, and it is as a result less organized. Groups in the alt.* hierarchy tend to be more specialized or specific—for example, there might be a newsgroup under the Big Eight which contains discussions about children's books, but a group in the alt hierarchy may be dedicated to one specific author of children's books.
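The flooding rule referenced above is simple enough to sketch in a few lines. The toy model below (hypothetical class and names; real servers speak NNTP and persist far more state) captures the essentials: each server remembers the Message-IDs it has accepted and offers new articles to every peer, so duplicates die out even in a network containing cycles.

    # Toy model of Usenet-style flooding; not a real NNTP implementation.
    class Server:
        def __init__(self, name):
            self.name = name
            self.peers = []        # neighboring Server objects ("newsfeeds")
            self.seen = set()      # Message-IDs already accepted
            self.spool = []        # locally stored articles

        def offer(self, message_id, body):
            if message_id in self.seen:
                return             # duplicate: drop instead of re-flooding
            self.seen.add(message_id)
            self.spool.append((message_id, body))
            for peer in self.peers:
                peer.offer(message_id, body)   # flood onward

    def peer(a, b):
        a.peers.append(b)
        b.peers.append(a)

    # A small mesh containing a cycle: a-b, b-c, a-c.
    a, b, c = Server("a"), Server("b"), Server("c")
    peer(a, b); peer(b, c); peer(a, c)

    a.offer("<1@example.invalid>", "first post")
    print([len(s.spool) for s in (a, b, c)])   # [1, 1, 1]: one copy per server

The seen-set check is what keeps the flood from looping forever; on real Usenet the same role is played by each server's history database of Message-IDs.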
Binaries are posted in alt.binaries.*, making it the largest of all the hierarchies. Many other hierarchies of newsgroups are distributed alongside these. Regional and language-specific hierarchies such as japan.*, malta.* and ne.* serve specific countries and regions such as Japan, Malta and New England. Companies and projects administer their own hierarchies to discuss their products and offer community technical support, such as the historical gnu.* hierarchy from the Free Software Foundation. Microsoft closed its news server in June 2010, and now provides support for its products over forums. Some users prefer to use the term "Usenet" to refer only to the Big Eight hierarchies; others include alt.* as well. The more general term "netnews" incorporates the entire medium, including private organizational news systems. Informal sub-hierarchy conventions also exist. *.answers groups are typically moderated cross-post groups for FAQs. An FAQ would be posted within one group and cross-posted to the *.answers group at the head of the hierarchy, which some saw as a way of refining the information in that newsgroup. Some subgroups are recursive—to the point of some silliness in alt.*. Usenet was originally created to distribute text content encoded in the 7-bit ASCII character set. With the help of programs that encode 8-bit values into ASCII, it became practical to distribute binary files as content. Binary posts, due to their size and often-dubious copyright status, were in time restricted to specific newsgroups, making it easier for administrators to allow or disallow the traffic. The oldest widely used encoding method for binary content is uuencode, from the Unix UUCP package. In the late 1980s, Usenet articles were often limited to 60,000 characters, and larger hard limits exist today. Files are therefore commonly split into sections that require reassembly by the reader. With the header extensions and the Base64 and Quoted-Printable MIME encodings, there was a new generation of binary transport. In practice, MIME has seen increased adoption in text messages, but it is avoided for most binary attachments. Some operating systems with metadata attached to files use specialized encoding formats. For Mac OS, both BinHex and special MIME types are used. Other lesser-known encoding systems that may have been used at one time were BTOA, XX encoding, BOO, and USR encoding. In an attempt to reduce file transfer times, an informal file encoding known as yEnc was introduced in 2001. It achieves about a 30% reduction in data transferred by assuming that most 8-bit characters can safely be transferred across the network without first encoding into the 7-bit ASCII space (a sketch of the core transform follows this passage). The most common method of uploading large binary posts to Usenet is to convert the files into RAR archives and create Parchive files for them. Parity files are used to recreate missing data when not every part of the files reaches a server. Binary newsgroups can be used to distribute files, and, as of 2022, some remain popular as an alternative to BitTorrent to share and download files. Each news server allocates a certain amount of storage space for content in each newsgroup. When this storage has been filled, each time a new post arrives, old posts are deleted to make room for the new content. If the network bandwidth available to a server is high but the storage allocation is small, it is possible for a huge flood of incoming content to overflow the allocation and push out everything that was in the group before it.
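The core yEnc transform referenced above is tiny. A minimal sketch (simplified: real yEnc also wraps the output into lines, frames the payload with =ybegin/=yend headers, and appends a CRC32 checksum):

    # Core yEnc byte transform, simplified. Each byte is shifted by 42 so
    # that most values land on characters safe for news transport; only the
    # few critical values (NUL, LF, CR, '=') need an escape sequence.
    CRITICAL = {0x00, 0x0A, 0x0D, 0x3D}    # NUL, LF, CR, '='

    def yenc_encode(data: bytes) -> bytes:
        out = bytearray()
        for byte in data:
            enc = (byte + 42) % 256
            if enc in CRITICAL:
                out.append(0x3D)            # escape marker '='
                enc = (enc + 64) % 256
            out.append(enc)
        return bytes(out)

    def yenc_decode(data: bytes) -> bytes:
        out = bytearray()
        escaped = False
        for byte in data:
            if escaped:
                out.append((byte - 64 - 42) % 256)
                escaped = False
            elif byte == 0x3D:
                escaped = True
            else:
                out.append((byte - 42) % 256)
        return bytes(out)

    payload = bytes(range(256))
    assert yenc_decode(yenc_encode(payload)) == payload   # round-trips

Because only four of the 256 byte values need escaping, the overhead is typically a percent or two, versus roughly 33% for Base64, which is where the roughly 30% savings mentioned above comes from.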
The average length of time that posts are able to stay on the server before being deleted is commonly called the retention time. Binary newsgroups are only able to function reliably if sufficient storage is allocated to handle the amount of articles being added. Without sufficient retention time, a reader will be unable to download all parts of the binary before it is flushed out of the group's storage allocation. This was at one time how posting undesired content was countered: the newsgroup would be flooded with random garbage data posts, of sufficient quantity to push out all the content to be suppressed. Service providers have since compensated by allocating enough storage to retain everything posted each day, including spam floods, without deleting anything. Modern Usenet news servers have enough capacity to archive years of binary content even when flooded with new data at the maximum daily speed available. In part because of such long retention times, as well as growing Internet upload speeds, Usenet is also used by individual users to store backup data. While commercial providers offer easier-to-use online backup services, storing data on Usenet is free of charge (although access to Usenet itself may not be). The method requires the uploader to cede control over the distribution of the data; the files are automatically disseminated to all Usenet providers exchanging data for the newsgroup it is posted to. In general the user must manually select, prepare and upload the data. The data is typically encrypted, because anyone can download the backup files. After the files are uploaded, having multiple copies spread to different geographical regions around the world on different news servers decreases the chances of data loss. Major Usenet service providers have a retention time of more than 12 years, which amounts to more than 60 petabytes (60,000 terabytes) of storage. When using Usenet for data storage, providers with longer retention times are preferred, to ensure the data survives longer. While binary newsgroups can be used to distribute completely legal user-created works, free software, and public domain material, some binary groups are used to illegally distribute proprietary software, copyrighted media, and pornographic material. ISP-operated Usenet servers frequently block access to all alt.binaries.* groups, both to reduce network traffic and to avoid related legal issues. Commercial Usenet service providers claim to operate as a telecommunications service, and assert that they are not responsible for the user-posted binary content transferred via their equipment. In the United States, Usenet providers can qualify for protection under the DMCA Safe Harbor regulations, provided that they establish a mechanism to comply with and respond to takedown notices from copyright holders. Removal of copyrighted content from the entire Usenet network is a nearly impossible task, due to the rapid propagation between servers and the retention done by each server. Petitioning a Usenet provider for removal only removes it from that one server's retention cache, but not any others. It is possible for a special post cancellation message to be distributed to remove it from all servers, but many providers ignore cancel messages by standard policy, because they can be easily falsified and submitted by anyone.
For a takedown petition to be most effective across the whole network, it would have to be issued to the origin server to which the content was posted, before it has been propagated to other servers. Removal of the content at this early stage would prevent further propagation, but with modern high-speed links, content can be propagated as fast as it arrives, allowing no time for content review and takedown issuance by copyright holders. Establishing the identity of the person posting illegal content is equally difficult due to the trust-based design of the network. As with SMTP email, servers generally assume the header and origin information in a post is true and accurate. However, as in SMTP email, Usenet post headers are easily falsified so as to obscure the true identity and location of the message source. In this manner, Usenet is significantly different from modern P2P services; users distributing content on most P2P services are immediately identifiable to all other users by their network address, but the origin information for a Usenet posting can be completely obscured and unobtainable once it has propagated past the original server. Also unlike modern P2P services, the identity of the downloaders is hidden from view. On P2P services a downloader is identifiable to all others by their network address. On Usenet, the downloader connects directly to a server, and only the server knows the address of who is connecting to it. Some Usenet providers do keep usage logs, but not all make this logged information casually available to outside parties such as the Recording Industry Association of America. The existence of anonymising gateways to Usenet also complicates the tracing of a posting's true origin. History Newsgroup experiments first occurred in 1979. Tom Truscott and Jim Ellis of Duke University came up with the idea as a replacement for a local announcement program, and established a link with the nearby University of North Carolina using Bourne shell scripts written by Steve Bellovin. The public release of news was in the form of conventional compiled software, written by Steve Daniel and Truscott. In 1980, Usenet was connected to ARPANET through UC Berkeley, which had connections to both Usenet and ARPANET. Mary Ann Horton, the graduate student who set up the connection, began "feeding mailing lists from the ARPANET into Usenet" with the "fa" ("From ARPANET") identifier. Usenet gained 50 member sites in its first year, including Reed College, the University of Oklahoma, and Bell Labs, and the number of people using the network increased dramatically; however, it was still a while longer before Usenet users could contribute to ARPANET. UUCP networks spread quickly due to the lower costs involved and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1983, thousands of people participated from more than 500 hosts, mostly universities and Bell Labs sites but also a growing number of Unix-related companies; the number of hosts nearly doubled to 940 in 1984. More than 100 newsgroups existed, more than 20 of them devoted to Unix and other computer-related topics, and at least a third to recreation. As the mesh of UUCP hosts rapidly expanded, it became desirable to distinguish the Usenet subset from the overall network. A vote was taken at the 1982 USENIX conference to choose a new name. The name Usenet was retained, but it was established that it only applied to news.
UUCPNET became the common name for the overall network. In addition to UUCP, early Usenet traffic was also exchanged with FidoNet and other dial-up BBS networks. By the mid-1990s there were almost 40,000 FidoNet systems in operation, and it was possible to communicate with millions of users around the world, with only local telephone service. Widespread use of Usenet by the BBS community was facilitated by the introduction of UUCP feeds made possible by MS-DOS implementations of UUCP, such as UFGATE (UUCP to FidoNet Gateway), FSUUCP and UUPC. In 1986, RFC 977 provided the Network News Transfer Protocol (NNTP) specification for distribution of Usenet articles over TCP/IP as a more flexible alternative to informal Internet transfers of UUCP traffic. Since the Internet boom of the 1990s, almost all Usenet distribution has been over NNTP. Early versions of Usenet used Duke's A News software, designed for one or two articles a day. Matt Glickman and Horton at Berkeley produced an improved version called B News that could handle the rising traffic (about 50 articles a day as of late 1983). With a message format that offered compatibility with Internet mail and improved performance, it became the dominant server software. C News, developed by Geoff Collyer and Henry Spencer at the University of Toronto, was comparable to B News in features but offered considerably faster processing. In the early 1990s, InterNetNews by Rich Salz was developed to take advantage of the continuous message flow made possible by NNTP versus the batched store-and-forward design of UUCP. Since that time INN development has continued, and other news server software has also been developed. Usenet was the first Internet community and the place for many of the most important public developments in the pre-commercial Internet. It was the place where Tim Berners-Lee announced the launch of the World Wide Web, where Linus Torvalds announced the Linux project, and where Marc Andreessen announced the creation of the Mosaic browser and the introduction of the image tag, which revolutionized the World Wide Web by turning it into a graphical medium. Many jargon terms now in common use on the Internet originated or were popularized on Usenet. Likewise, many conflicts which later spread to the rest of the Internet, such as the ongoing difficulties over spamming, began on Usenet. "Usenet is like a herd of performing elephants with diarrhea. Massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it." — Gene Spafford, 1992 Sascha Segan of PC Magazine said in 2008 that "Usenet has been dying for years". He argued that it had already begun dying by the late 1990s, when large binary files became a significant proportion of Usenet traffic, and Internet service providers "sensibly started to wonder why they should be reserving big chunks of their own disk space for pirated movies and repetitive porn." AOL discontinued Usenet access in 2005. In May 2010, Duke University, whose implementation had started Usenet more than 30 years earlier, decommissioned its Usenet server, citing low usage and rising costs. On February 4, 2011, the Usenet news service link at the University of North Carolina at Chapel Hill (news.unc.edu) was retired after 32 years. In response, John Biggs of TechCrunch said "As long as there are folks who think a command line is better than a mouse, the original text-only social network will live on".
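NNTP, mentioned above, is a line-oriented, plain-text command-and-response protocol in the same family as SMTP, which is part of why command-line access has survived so long. The sketch below speaks just enough of the protocol (a subset of RFC 977 and its successor RFC 3977) to fetch the newest article in a group; news.example.com and the group name are placeholders, and real servers typically require authentication or TLS on port 563.

```python
# Minimal NNTP reader over a raw TCP socket (RFC 977/3977 subset).
# Host and group are placeholders; real providers may require TLS/auth.
import socket

def nntp_fetch(host: str, group: str, port: int = 119) -> str:
    with socket.create_connection((host, port)) as sock:
        f = sock.makefile("rwb")

        def send(cmd: str) -> str:
            f.write(cmd.encode("ascii") + b"\r\n")
            f.flush()
            return f.readline().decode("ascii", "replace").rstrip()

        print(f.readline().decode("ascii", "replace").rstrip())  # greeting
        status = send(f"GROUP {group}")          # "211 count first last group"
        if not status.startswith("211"):
            raise RuntimeError(status)
        _, _count, _first, last, _group = status.split()
        status = send(f"ARTICLE {last}")         # fetch the newest article
        if not status.startswith("220"):
            raise RuntimeError(status)
        lines = []
        while True:                              # article ends with a lone "."
            line = f.readline().decode("utf-8", "replace").rstrip("\r\n")
            if line == ".":
                break
            lines.append(line[1:] if line.startswith("..") else line)
        send("QUIT")
        return "\n".join(lines)

# print(nntp_fetch("news.example.com", "comp.lang.python"))
```

The lone "." terminator and the doubled-dot escaping mirror SMTP's message framing, another trace of the two protocols' shared ancestry.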
While there are still some active text newsgroups on Usenet, the system is now primarily used to share large files between users, and the underlying technology of Usenet remains unchanged. Usenet traffic changes Over time, the amount of Usenet traffic has steadily increased. As of 2010, the number of all text posts made in all Big-8 newsgroups averaged 1,800 new messages every hour, with an average of 25,000 messages per day. However, these averages are minuscule in comparison to the traffic in the binary groups. Much of this traffic increase reflects not an increase in discrete users or newsgroup discussions, but instead the combination of massive automated spamming and an increase in the use of .binaries newsgroups in which large files are often posted publicly. In 2008, Verizon Communications, Time Warner Cable and Sprint Nextel signed an agreement with Attorney General of New York Andrew Cuomo to shut down access to sources of child pornography. Time Warner Cable stopped offering access to Usenet. Verizon reduced its access to the "Big 8" hierarchies. Sprint stopped access to the alt.* hierarchies. AT&T stopped access to the alt.binaries.* hierarchies. Cuomo never specifically named Usenet in his anti-child-pornography campaign. David DeJean of PC World said that some worry that the ISPs used Cuomo's campaign as an excuse to end portions of Usenet access, as it is costly for the Internet service providers and not in high demand by customers. In 2008 AOL, which no longer offered Usenet access, and the four providers that responded to the Cuomo campaign were the five largest Internet service providers in the United States; they had more than 50% of the U.S. ISP market share. On June 8, 2009, AT&T announced that it would no longer provide access to the Usenet service as of July 15, 2009. AOL announced that it would discontinue its integrated Usenet service in early 2005, citing the growing popularity of weblogs, chat forums and online conferencing. The AOL community had played a tremendous role in popularizing Usenet some 11 years earlier. In August 2009, Verizon announced that it would discontinue access to Usenet on September 30, 2009. JANET announced it would discontinue Usenet service, effective July 31, 2010, citing Google Groups as an alternative. Microsoft announced that it would discontinue support for its public newsgroups (msnews.microsoft.com) from June 1, 2010, offering web forums as an alternative. Primary reasons cited for the discontinuance of Usenet service by general ISPs include the decline in the volume of actual readers due to competition from blogs, along with cost and liability concerns over the increasing proportion of traffic devoted to file-sharing and spam on unused or discontinued groups. Some ISPs did not include pressure from Cuomo's campaign against child pornography among their reasons for dropping Usenet feeds as part of their services. The ISPs Cox and Atlantic Communications resisted the 2008 trend, but both eventually dropped their Usenet feeds in 2010. Archives Public archives of Usenet articles have existed since the early days of Usenet, such as the system created by Kenneth Almquist in late 1982. Distributed archiving of Usenet posts was suggested in November 1982 by Scott Orshan, who proposed that "Every site should keep all the articles it posted, forever."
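An archiver in the spirit of these early systems needs little more than a stable key per post, and Usenet supplies one: the globally unique Message-ID header. The Python sketch below shows the idea; the directory layout and policy are invented for illustration and do not reflect any historical system, and it honors the opt-out header discussed below.

```python
# Toy Usenet archiver: file articles by Message-ID, one directory per group.
# Layout and policy are illustrative, not any historical system's design.
import hashlib
from email import message_from_string
from pathlib import Path

def archive_article(raw: str, root: Path = Path("news-archive")):
    msg = message_from_string(raw)
    # Honor the opt-out convention: posters may ask not to be archived.
    if msg.get("X-No-Archive", "").strip().lower() == "yes":
        return None
    group = (msg.get("Newsgroups") or "junk").split(",")[0].strip()
    msgid = msg.get("Message-ID", "").strip("<>")
    # Message-IDs may contain characters unfit for filenames; hash them.
    name = hashlib.sha256(msgid.encode()).hexdigest()[:16] + ".eml"
    path = root / group
    path.mkdir(parents=True, exist_ok=True)
    (path / name).write_text(raw)
    return path / name

sample = (
    "Message-ID: <demo1@news.example.com>\n"
    "Newsgroups: alt.test\n"
    "From: poster@example.com\n"
    "Subject: hello\n"
    "\n"
    "First post.\n"
)
print(archive_article(sample))
```

Keying on Message-ID also makes the archive naturally deduplicating, since the same article arriving from multiple feeds hashes to the same filename.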
Also in November of that year, Rick Adams responded to a post asking "Has anyone archived netnews, or does anyone plan to?" by stating that he was "afraid to admit it, but I started archiving most 'useful' newsgroups as of September 18." In June 1982, Gregory G. Woodbury proposed an "automatic access to archives" system that consisted of "automatic answering of fixed-format messages to a special mail recipient on specified machines." In 1985, two news archiving systems and one RFC were posted to the Internet. The first system, called keepnews, by Mark M. Swenson of the University of Arizona, was described as "a program that attempts to provide a sane way of extracting and keeping information that comes over Usenet." The main advantage of this system was to allow users to mark articles as worthwhile to retain. The second system, YA News Archiver by Chuq Von Rospach, was similar to keepnews, but was "designed to work with much larger archives where the wonderful quadratic search time feature of the Unix ... becomes a real problem." Von Rospach in early 1985 posted a detailed RFC for "archiving and accessing usenet articles with keyword lookup." This RFC described a program that could "generate and maintain an archive of Usenet articles and allow looking up articles based on the article-id, subject lines, or keywords pulled out of the article itself." Also included was C code for the internal data structure of the system. The desire for a full-text search index of archived news articles is not new either; one such request was made in April 1991 by Alex Martelli, who sought to "build some sort of keyword index for [the news archive]." In early May, Martelli posted a summary of his responses to Usenet, noting that the "most popular suggestion award must definitely go to 'lq-text' package, by Liam Quin, recently posted in alt.sources." The Alt Sex Stories Text Repository (ASSTR) site archived and indexed erotic and pornographic stories posted to the Usenet group alt.sex.stories. The archiving of Usenet has led to fears of loss of privacy, since an archive makes it easier to profile people. This has partly been countered with the introduction of the X-No-Archive: Yes header, which is itself controversial. Web-based archiving of Usenet posts began in March 1995 at Deja News with a very large, searchable database. In February 2001, this database was acquired by Google; Google had begun archiving Usenet posts for itself starting in the second week of August 2000. Google Groups hosts an archive of Usenet posts dating back to May 1981. The earliest posts, which date from May 1981 to June 1991, were donated to Google by the University of Western Ontario with the help of David Wiseman and others, and were originally archived by Henry Spencer at the University of Toronto's Zoology department. The archives for late 1991 through early 1995 were provided by Kent Landfield from the NetNews CD series and Jürgen Christoffel from GMD. Google has been criticized by Vice and Wired contributors, as well as former employees, for its stewardship of the archive and for breaking its search functionality. As of January 2024, Google Groups carries a header notice, saying: Effective from 22 February 2024, Google Groups will no longer support new Usenet content. Posting and subscribing will be disallowed, and new content from Usenet peers will not appear. Viewing and searching of historical data will still be supported as it is done today.
An explanatory page adds: In addition, Google’s Network News Transfer Protocol (NNTP) server and associated peering will no longer be available, meaning Google will not support serving new Usenet content or exchanging content with other NNTP servers. This change will not impact any non-Usenet content on Google Groups, including all user and organization-created groups. See also Usenet had administrators on a server-by-server basis, not as a whole. References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-GSAprilsales_291-0] | [TOKENS: 12858]
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following the full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios holds the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
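The block model just described is often illustrated as a sparse mapping from integer coordinates to block types; the toy Python sketch below shows place-and-break at its simplest. The block names and placement rules here are invented for the example and are unrelated to Mojang's actual implementation.

```python
# Toy voxel world: a sparse dict from (x, y, z) to a block id.
# Absent keys mean air; this mirrors "place and break" at its simplest.
AIR = "air"

class VoxelWorld:
    def __init__(self):
        self.blocks: dict[tuple[int, int, int], str] = {}

    def block_at(self, pos: tuple[int, int, int]) -> str:
        return self.blocks.get(pos, AIR)

    def place(self, pos: tuple[int, int, int], block: str) -> bool:
        if self.block_at(pos) != AIR:   # can't place into an occupied cell
            return False
        self.blocks[pos] = block
        return True

    def mine(self, pos: tuple[int, int, int]) -> str:
        # Breaking yields the block as a dropped item, leaving air behind.
        return self.blocks.pop(pos, AIR)

world = VoxelWorld()
world.place((0, 64, 0), "dirt")
world.place((0, 65, 0), "torch")
print(world.mine((0, 64, 0)))  # -> "dirt"; the torch above keeps floating,
                               # matching the game's mostly gravity-free blocks
```

A sparse map like this only stores cells that differ from air, which is one simple way such a design can represent an effectively unbounded world.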
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, such as Steve and Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player, both intentional and not, have existed throughout development. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
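Deterministic generation from a map seed, as described above, is commonly done by feeding the seed and the coordinates through a hash to get repeatable pseudo-random values, then smoothing them into terrain. The sketch below uses simple value noise; it illustrates only the seeding idea and bears no relation to Minecraft's actual, far more elaborate generator.

```python
# Seeded value-noise heightmap: the same seed always yields the same terrain.
# Purely illustrative; Minecraft's real generator works very differently.
import hashlib

def lattice(seed: int, x: int) -> float:
    # Hash (seed, x) to a repeatable value in [0, 1).
    digest = hashlib.sha256(f"{seed}:{x}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def height(seed: int, x: float, scale: float = 16.0, amplitude: int = 32) -> int:
    # Linear interpolation between lattice points gives smooth hills.
    gx = x / scale
    x0 = int(gx)
    t = gx - x0
    h = lattice(seed, x0) * (1 - t) + lattice(seed, x0 + 1) * t
    return int(h * amplitude) + 64  # arbitrary "sea level" offset

row = [height(seed=12345, x=float(x)) for x in range(0, 64, 8)]
print(row)  # identical on every run and on every machine for this seed
```

Because the terrain is a pure function of seed and position, chunks can be generated lazily as players explore, which is what makes effectively infinite worlds practical.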
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past. The poem is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or replenishes continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience them as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
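The Survival rules above (hunger draining over time, starvation once the bar empties, regeneration while well fed, death at zero health) compose naturally into a per-tick state update. The sketch below is a toy model: the rates and thresholds are invented for illustration and are not the game's actual constants.

```python
# Toy model of the Survival-mode health/hunger coupling described above.
# All rates and thresholds are invented; only the rule shapes follow the text.
def tick(state: dict, event: str | None = None) -> dict:
    s = dict(state)
    if event == "mob_attack":
        s["health"] -= 3
    if s["hunger"] > 0:
        s["hunger"] -= 1           # hunger drains as time passes
    else:
        s["health"] -= 1           # starvation once the hunger bar is empty
    if s["hunger"] >= 18:          # a well-fed player regenerates health
        s["health"] = min(20, s["health"] + 1)
    if s["health"] <= 0:
        s["alive"] = False         # death: inventory drops, then a respawn
    return s

state = {"health": 20, "hunger": 20, "alive": True}
for _ in range(5):
    state = tick(state)
print(state)
```

The point of the model is the coupling: eating does not heal directly, it keeps the hunger bar high enough for regeneration to outpace damage.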
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server, as sketched after this paragraph. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
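The operator-side access control mentioned above (allowing or disallowing particular usernames and IP addresses) reduces to a couple of set and subnet lookups. This standalone Python sketch invents its own tiny policy format; it is illustrative only and is unrelated to the real server's implementation.

```python
# Toy server access check: username allowlist plus IP denylist, loosely
# modeled on the whitelist/ban-list idea described above (names invented).
import ipaddress

ALLOWED_PLAYERS = {"alice", "bob"}                           # hypothetical whitelist
BANNED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]   # documentation-range IPs

def may_join(username: str, ip: str, use_whitelist: bool = True) -> bool:
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in BANNED_NETWORKS):
        return False                        # banned addresses always lose
    if use_whitelist and username.lower() not in ALLOWED_PLAYERS:
        return False                        # whitelist mode: invite-only
    return True

print(may_join("Alice", "198.51.100.7"))    # True
print(may_join("mallory", "203.0.113.9"))   # False (banned network)
```

Checking the denylist before the allowlist means a ban cannot be bypassed by also appearing on the whitelist, which is the usual precedence for such policies.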
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received major updates, usually annual and free to players who had purchased the game, each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009; this earliest phase ended on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A separate version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking on the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini-games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has yet to see release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, being praised for having worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs could still be added to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning. In September 2024, as part of a blog post detailing future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to the full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game went on to be a prominent fixture of YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that Minecraft-related videos on the platform had been viewed more than one trillion times. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset references building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering in Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding: "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project, in Kibera, one of Nairobi's informal settlements, began with a planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without requiring formal training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency recreated all of Denmark at full scale in Minecraft based on its own geodata. This was feasible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in places where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The game's redstone mechanics have enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer, and mods have been created to use these mechanics for teaching programming, as sketched below. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
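The computing potential of redstone follows from a simple in-game rule: a redstone torch attached to a block switches off when the block is powered, so a torch acts as an inverter, and two inputs feeding a torch-bearing block form a NOR gate. NOR is functionally complete, so adders, memory, and ultimately whole CPUs can be composed from it. The Python sketch below illustrates the principle in the abstract; it is a simplified model for exposition, not Mojang's code, and all function names are invented for the example.

```python
# Simplified model of redstone logic: a torch inverts its input, two inputs
# into a torch-bearing block give NOR, and everything else composes from NOR.

def nor(a: bool, b: bool) -> bool:
    """A torch on a block powered by inputs a or b: lit only if both are off."""
    return not (a or b)

def not_(a: bool) -> bool:          # NOT from NOR
    return nor(a, a)

def or_(a: bool, b: bool) -> bool:  # OR = NOT(NOR)
    return not_(nor(a, b))

def and_(a: bool, b: bool) -> bool: # AND via De Morgan's laws
    return nor(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:  # XOR, the heart of a binary adder
    return and_(or_(a, b), not_(and_(a, b)))

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Sum and carry bits: the building block of in-game 8-bit computers."""
    return xor(a, b), and_(a, b)

print(half_adder(True, True))  # (False, True), i.e. 1 + 1 = binary 10
```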
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in Minecraft's popularity in 2010, other video games were criticised for their similarities to Minecraft, and some were described as "clones", whether because of direct inspiration from Minecraft or a superficial resemblance. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game given the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears ultimately proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is generated by AI in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright-claiming service. The takedown was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture sessions with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time; the following years instead featured annual "Minecon Earth" livestreams on minecraft.net and YouTube. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Space_Shuttle] | [TOKENS: 13782]
Contents Space Shuttle The Space Shuttle is a retired, partially reusable low Earth orbital spacecraft system operated from 1981 to 2011 by the U.S. National Aeronautics and Space Administration (NASA) as part of the Space Shuttle program. Its official program name was the Space Transportation System (STS), taken from the 1969 plan, led by U.S. vice president Spiro Agnew, for a system of reusable spacecraft of which the Shuttle was the only item funded for development.: 163–166 The first of four orbital test flights (STS-1) occurred in 1981, with operational flights (beginning with STS-5) following in 1982. Five complete Space Shuttle orbiter vehicles were built and flown on a total of 135 missions from 1981 to 2011, all launched from the Kennedy Space Center (KSC) in Florida. Operational missions launched numerous satellites, interplanetary probes, and the Hubble Space Telescope (HST), conducted science experiments in orbit, participated in the Shuttle-Mir program with Russia, and contributed to the construction and servicing of the International Space Station (ISS). The Space Shuttle fleet's total mission time was 1,323 days. Space Shuttle components include the Orbiter Vehicle (OV) with three clustered Rocketdyne RS-25 main engines, a pair of recoverable solid rocket boosters (SRBs), and the expendable external tank (ET) containing liquid hydrogen and liquid oxygen. The Space Shuttle was launched vertically, like a conventional rocket, with the two SRBs operating in parallel with the orbiter's three main engines, which were fueled from the ET. The SRBs were jettisoned before the vehicle reached orbit, while the main engines continued to operate, and the ET was jettisoned after main engine cutoff, just before orbit insertion, which used the orbiter's two Orbital Maneuvering System (OMS) engines. At the conclusion of the mission, the orbiter fired its OMS to deorbit and reenter the atmosphere. The orbiter was protected during reentry by its thermal protection system tiles, and it glided as a spaceplane to a runway landing, usually at the Shuttle Landing Facility at KSC, Florida, or at Rogers Dry Lake at Edwards Air Force Base, California. If the landing occurred at Edwards, the orbiter was flown back to the KSC atop the Shuttle Carrier Aircraft (SCA), a specially modified Boeing 747 that carried the orbiter on its back. The first orbiter, Enterprise, was built in 1976 and used in Approach and Landing Tests (ALT), but had no orbital capability. Four fully operational orbiters were initially built: Columbia, Challenger, Discovery, and Atlantis. Of these, two were lost in mission accidents: Challenger in 1986 and Columbia in 2003, with a total of 14 astronauts killed. A fifth operational (and sixth in total) orbiter, Endeavour, was built in 1991 to replace Challenger. The three surviving operational vehicles were retired from service following Atlantis's final flight on July 21, 2011. The U.S. relied on the Russian Soyuz spacecraft to transport astronauts to the ISS from the last Shuttle flight until the launch of the Crew Dragon Demo-2 mission in May 2020. Design and development In the late 1930s, the German government launched the "Amerikabomber" (English: America bomber) project; Eugen Sänger's proposal, developed with mathematician Irene Bredt, was a winged rocket called the Silbervogel (German for "silver bird").
During the 1950s, the United States Air Force proposed using a reusable piloted glider to perform military operations such as reconnaissance, satellite attack, and air-to-ground weapons employment. In the late 1950s, the Air Force began developing the partially reusable X-20 Dyna-Soar. The Air Force collaborated with NASA on the Dyna-Soar and began training six pilots in June 1961. The rising costs of development and the prioritization of Project Gemini led to the cancellation of the Dyna-Soar program in December 1963. In addition to the Dyna-Soar, the Air Force had conducted a study in 1957 to test the feasibility of reusable boosters. This became the basis for the aerospaceplane, a fully reusable spacecraft that was never developed beyond the initial design phase in 1962–1963.: 162–163 Beginning in the early 1950s, NASA and the Air Force collaborated on developing lifting bodies, aircraft that generate lift primarily from their fuselages instead of their wings, and tested the NASA M2-F1, Northrop M2-F2, Northrop M2-F3, Northrop HL-10, Martin Marietta X-24A, and Martin Marietta X-24B. The program tested aerodynamic characteristics that would later be incorporated into the design of the Space Shuttle, including unpowered landings from high altitude and speed.: 142 : 16–18 On September 24, 1966, as the Apollo space program neared its design completion, NASA and the Air Force released a joint study concluding that a new vehicle was required to satisfy their respective future demands, and that a partially reusable system would be the most cost-effective solution.: 164 The head of the NASA Office of Manned Space Flight, George Mueller, announced the plan for a reusable shuttle on August 10, 1968. NASA issued a request for proposal (RFP) for designs of the Integral Launch and Reentry Vehicle (ILRV) on October 30, 1968. Rather than award a contract based upon initial proposals, NASA announced a phased approach to Space Shuttle contracting and development: Phase A was a request for studies completed by competing aerospace companies, Phase B was a competition between two contractors for a specific contract, Phase C involved designing the details of the spacecraft components, and Phase D was the production of the spacecraft.: 19–22 In December 1968, NASA created the Space Shuttle Task Group to determine the optimal design for a reusable spacecraft, and issued study contracts to General Dynamics, Lockheed, McDonnell Douglas, and North American Rockwell. In July 1969, the Space Shuttle Task Group issued a report determining that the Shuttle would support short-duration crewed missions and a space station, as well as provide the capability to launch, service, and retrieve satellites. The report also defined three classes of future reusable shuttle: Class I would have a reusable orbiter mounted on expendable boosters, Class II would use multiple expendable rocket engines and a single propellant tank (stage-and-a-half), and Class III would have both a reusable orbiter and a reusable booster. In September 1969, the Space Task Group, under the leadership of U.S. vice president Spiro Agnew, issued a report calling for the development of a space shuttle to bring people and cargo to low Earth orbit (LEO), as well as a space tug for transfers between orbits and the Moon, and a reusable nuclear upper stage for deep space travel.: 163–166 After the release of the Space Shuttle Task Group report, many aerospace engineers favored the Class III, fully reusable design because of perceived savings in hardware costs.
Max Faget, a NASA engineer who had worked on the design of the Mercury capsule, patented a design for a two-stage fully recoverable system with a straight-winged orbiter mounted on a larger straight-winged booster. The Air Force Flight Dynamics Laboratory argued that a straight-wing design would not be able to withstand the high thermal and aerodynamic stresses of reentry and would not provide the required cross-range capability. Additionally, the Air Force required a larger payload capacity than Faget's design allowed. In January 1971, NASA and Air Force leadership decided that a reusable delta-wing orbiter mounted on an expendable propellant tank would be the optimal design for the Space Shuttle.: 166 After they established the need for a reusable, heavy-lift spacecraft, NASA and the Air Force determined the design requirements of their respective services. The Air Force expected to use the Space Shuttle to launch large satellites, and required it to be capable of lifting 29,000 kg (65,000 lb) to an eastward LEO or 18,000 kg (40,000 lb) into a polar orbit. The satellite designs also required that the Space Shuttle have a 4.6 by 18 m (15 by 60 ft) payload bay. NASA evaluated the F-1 and J-2 engines from the Saturn rockets and determined that they were insufficient for the requirements of the Space Shuttle; in July 1971, it issued a contract to Rocketdyne to begin development of the RS-25 engine.: 165–170 NASA reviewed 29 potential designs for the Space Shuttle and determined that a design with two side boosters should be used, and that the boosters should be reusable to reduce costs.: 167 NASA and the Air Force elected to use solid-propellant boosters because of their lower costs and the ease of refurbishing them for reuse after they landed in the ocean. In January 1972, President Richard Nixon approved the Shuttle, and NASA decided on its final design in March. Development of the Space Shuttle Main Engine (SSME) remained the responsibility of Rocketdyne under the July 1971 contract, and updated SSME specifications were submitted to Rocketdyne in April 1972. The following August, NASA awarded the contract to build the orbiter to North American Rockwell, which had by then constructed a full-scale mock-up, later named Inspiration. In August 1973, NASA awarded the external tank contract to Martin Marietta, and in November the solid-rocket booster contract to Morton Thiokol.: 170–173 On June 4, 1974, Rockwell began construction of the first orbiter, OV-101, initially dubbed Constitution and later renamed Enterprise. Enterprise was designed as a test vehicle and did not include engines or heat shielding. Construction was completed on September 17, 1976, and Enterprise was moved to Edwards Air Force Base to begin testing.: 173 Rockwell constructed the Main Propulsion Test Article (MPTA)-098, a structural truss mounted to the ET with three RS-25 engines attached. It was tested at the National Space Technology Laboratory (NSTL) to ensure that the engines could safely run through the launch profile.: II-163 Rockwell conducted mechanical and thermal stress tests on Structural Test Article (STA)-099 to determine the effects of aerodynamic and thermal stresses during launch and reentry.: I-415 The start of development of the RS-25 Space Shuttle Main Engine was delayed for nine months while Pratt & Whitney challenged the contract that had been issued to Rocketdyne. The first engine was completed in March 1975, after difficulties in developing the first throttleable, reusable rocket engine.
During engine testing, the RS-25 experienced multiple nozzle failures, as well as broken turbine blades. Despite the problems during testing, in May 1978 NASA ordered the nine RS-25 engines needed for its three orbiters then under construction.: 174–175 NASA experienced significant delays in the development of the Space Shuttle's thermal protection system. Previous NASA spacecraft had used ablative heat shields, but those could not be reused. NASA chose ceramic tiles for thermal protection, as the shuttle could then be constructed of lightweight aluminum and the tiles could be individually replaced as needed. Construction began on Columbia on March 27, 1975, and it was delivered to the KSC on March 25, 1979.: 175–177 At the time of its arrival at the KSC, Columbia still had 6,000 of its 30,000 tiles remaining to be installed. However, many of the originally installed tiles had to be replaced, requiring two years of installation work before Columbia could fly.: 46–48 On January 5, 1979, NASA commissioned a second orbiter. Later that month, Rockwell began converting STA-099 to OV-099, later named Challenger. On January 29, 1979, NASA ordered two additional orbiters, OV-103 and OV-104, which were named Discovery and Atlantis. Construction of OV-105, later named Endeavour, began in February 1982, but in 1983 NASA decided to limit the Space Shuttle fleet to four orbiters. After the loss of Challenger, NASA resumed production of Endeavour in September 1987.: 52–53 After it arrived at Edwards AFB, Enterprise underwent flight testing with the Shuttle Carrier Aircraft, a Boeing 747 that had been modified to carry the orbiter. In February 1977, Enterprise began the Approach and Landing Tests (ALT) with captive flights, in which it remained attached to the Shuttle Carrier Aircraft for the duration of the flight. On August 12, 1977, Enterprise conducted its first glide test, in which it detached from the Shuttle Carrier Aircraft and landed at Edwards AFB.: 173–174 After four additional flights, Enterprise was moved to the Marshall Space Flight Center (MSFC) on March 13, 1978. Enterprise underwent shake tests in the Mated Vertical Ground Vibration Test, in which it was attached to an external tank and solid rocket boosters and subjected to vibrations simulating the stresses of launch. In April 1979, Enterprise was taken to the KSC, where it was attached to an external tank and solid rocket boosters and moved to LC-39. Once installed at the launch pad, it was used to verify the proper positioning of the launch complex hardware. Enterprise was taken back to California in August 1979, and later served in the development of SLC-6 at Vandenberg AFB in 1984.: 40–41 On November 24, 1980, Columbia was mated with its external tank and solid-rocket boosters, and was moved to LC-39 on December 29.: III-22 The first Space Shuttle mission, STS-1, marked the first time NASA performed a crewed first flight of a spacecraft.: III-24 On April 12, 1981, the Space Shuttle launched for the first time, flown by John Young and Robert Crippen.
During the two-day mission, Young and Crippen tested equipment on board the shuttle, and found several of the ceramic tiles had fallen off the top side of the Columbia.: 277–278 NASA coordinated with the Air Force to use satellites to image the underside of Columbia, and determined there was no damage.: 335–337 Columbia reentered the atmosphere and landed at Edwards AFB on April 14.: III-24 NASA conducted three additional test flights with Columbia in 1981 and 1982. On July 4, 1982, STS-4, flown by Ken Mattingly and Henry Hartsfield, landed on a concrete runway at Edwards AFB. President Ronald Reagan and his wife Nancy met the crew, and delivered a speech. After STS-4, NASA declared its Space Transportation System (STS) operational.: 178–179 Description The Space Shuttle was the first operational orbital spacecraft designed for reuse. Each Space Shuttle orbiter was designed for a projected lifespan of 100 launches or ten years of operational life, although this was later extended.: 11 At launch, it consisted of the orbiter, which contained the crew and payload, the external tank (ET), and the two solid rocket boosters (SRBs).: 363 Responsibility for the Space Shuttle components was spread among multiple NASA field centers. The KSC was responsible for launch, landing, and turnaround operations for equatorial orbits (the only orbit profile actually used in the program). The U.S. Air Force at the Vandenberg Air Force Base was responsible for launch, landing, and turnaround operations for polar orbits (though this was never used). The Johnson Space Center (JSC) served as the central point for all Shuttle operations and the MSFC was responsible for the main engines, external tank, and solid rocket boosters. The John C. Stennis Space Center handled main engine testing, and the Goddard Space Flight Center managed the global tracking network. The orbiter had design elements and capabilities of both a rocket and an aircraft to allow it to launch vertically and then land as a glider.: 365 Its three-part fuselage provided support for the crew compartment, cargo bay, flight surfaces, and engines. The rear of the orbiter contained the Space Shuttle Main Engines (SSME), which provided thrust during launch, as well as the Orbital Maneuvering System (OMS), which allowed the orbiter to achieve, alter, and exit its orbit once in space. Its double-delta wings were 18 m (60 ft) long, and were swept 81° at the inner leading edge and 45° at the outer leading edge. Each wing had an inboard and outboard elevon to provide flight control during reentry, along with a flap located between the wings, below the engines to control pitch. The orbiter's vertical stabilizer was swept backwards at 45° and contained a rudder that could split to act as a speed brake.: 382–389 The vertical stabilizer also contained a two-part drag parachute system to slow the orbiter after landing. The orbiter used retractable landing gear with a nose landing gear and two main landing gear, each containing two tires. The main landing gear contained two brake assemblies each, and the nose landing gear contained an electro-hydraulic steering mechanism.: 408–411 The Space Shuttle crew varied per mission. They underwent rigorous testing and training to meet the qualification requirements for their roles. The crew was divided into three categories: Pilots, Mission Specialists, and Payload Specialists. 
Pilots were further divided into two roles: the Space Shuttle Commander, who sat in the forward left seat, and the Space Shuttle Pilot, who sat in the forward right seat. The test flights, STS-1 through STS-4, had only two crew members each: the commander and the pilot. Both were qualified to fly and land the orbiter. On-orbit operations, such as experiments, payload deployment, and EVAs, were conducted primarily by the mission specialists, who were specifically trained for their intended missions and systems. Early in the Space Shuttle program, NASA also flew payload specialists, typically systems specialists who worked for the company paying for the payload's deployment or operations. The final payload specialist, Gregory B. Jarvis, flew on STS-51-L, and future non-pilots were designated as mission specialists. An astronaut flew as a crewed spaceflight engineer on both STS-51-C and STS-51-J to serve as a military representative for a National Reconnaissance Office payload. A Space Shuttle crew typically had seven astronauts, with STS-61-A flying with eight.: III-21 The crew compartment comprised three decks and was the pressurized, habitable area on all Space Shuttle missions. The flight deck consisted of two seats for the commander and pilot, as well as an additional two to four seats for crew members. The mid-deck, located below the flight deck, was where the galley and crew bunks were set up, along with three or four crew member seats. The mid-deck contained the airlock, which could support two astronauts on an extravehicular activity (EVA), as well as access to pressurized research modules. Below the mid-deck was an equipment bay, which stored the environmental control and waste management systems.: 60–62 : 365–369 On the first four Shuttle missions, astronauts wore modified U.S. Air Force high-altitude full-pressure suits, which included a full-pressure helmet during ascent and descent. From the fifth flight, STS-5, until the loss of Challenger, the crew wore one-piece light blue Nomex flight suits and partial-pressure helmets. After the Challenger disaster, crew members wore the Launch Entry Suit (LES), a partial-pressure version of the high-altitude pressure suits with a helmet. In 1994, the LES was replaced by the full-pressure Advanced Crew Escape Suit (ACES), which improved the safety of the astronauts in an emergency. Columbia originally had modified SR-71 zero-zero ejection seats installed for the ALT and first four missions, but these were disabled after STS-4 and removed after STS-9.: 370–371 The flight deck was the top level of the crew compartment and contained the flight controls for the orbiter. The commander sat in the front left seat, and the pilot sat in the front right seat, with two to four additional seats set up for additional crew members. The instrument panels contained over 2,100 displays and controls, and the commander and pilot were both equipped with a heads-up display (HUD) and a Rotational Hand Controller (RHC) to gimbal the engines during powered flight and to fly the orbiter during unpowered flight. Both seats also had rudder controls to allow rudder movement in flight and nose-wheel steering on the ground.: 369–372 The orbiter vehicles were originally equipped with the Multifunction CRT Display System (MCDS) to display and control flight information.
The MCDS displayed the flight information at the commander and pilot seats, as well as at the aft seating location, and also controlled the data on the HUD. In 1998, Atlantis was upgraded with the Multifunction Electronic Display System (MEDS), a glass cockpit upgrade that replaced the eight MCDS display units with 11 multifunction color digital screens. MEDS was flown for the first time in May 2000 on STS-101, and the other orbiter vehicles were subsequently upgraded to it. The aft section of the flight deck contained windows looking into the payload bay, as well as an RHC to control the Remote Manipulator System during cargo operations. Additionally, the aft flight deck had monitors for closed-circuit television views of the cargo bay.: 372–376 The mid-deck contained crew equipment storage, the sleeping area, galley, medical equipment, and hygiene stations. The crew stored equipment in modular lockers, which could be scaled to mission needs, as well as in permanently installed floor compartments. The mid-deck contained a port-side hatch that the crew used for entry and exit while on Earth.: II–26–33 An airlock allows movement between two spaces with different gas compositions, conditions, or pressures. Each orbiter was originally fitted with an internal airlock in the mid-deck; on Discovery, Atlantis, and Endeavour, this was replaced with an external airlock installed in the payload bay, along with the Orbiter Docking System, to improve docking with Mir and the ISS.: II–26–33 The airlock module could be fitted in the mid-bay, or connected to it but located in the payload bay.: 81 With an internal cylindrical chamber 1.60 metres (5 feet 3 inches) in diameter and 2.11 metres (6 feet 11 inches) long, it could hold two suited astronauts. It had two D-shaped hatchways, 1.02 m (40 in) in diameter and 0.91 m (36 in) wide.: 82 The orbiter was equipped with an avionics system to provide information and control during atmospheric flight. Its avionics suite contained three microwave scanning beam landing systems, three gyroscopes, three TACANs, three accelerometers, two radar altimeters, two barometric altimeters, three attitude indicators, two Mach indicators, and two Mode C transponders. During reentry, the crew deployed two air data probes once they were traveling slower than Mach 5. The orbiter had three inertial measurement units (IMUs) that it used for guidance and navigation during all phases of flight, and carried two star trackers to align the IMUs while in orbit. The star trackers were deployed while in orbit and could align on a star automatically or manually. In 1991, NASA began upgrading the inertial measurement units with an inertial navigation system (INS), which provided more accurate location information. In 1993, NASA flew a GPS receiver for the first time aboard STS-51. In 1997, Honeywell began developing an integrated GPS/INS to replace the IMU, INS, and TACAN systems; it first flew on STS-118 in August 2007.: 402–403 While in orbit, the crew primarily communicated using one of four S band radios, which provided both voice and data communications. Two of the S band radios were phase modulation transceivers and could transmit and receive information. The other two S band radios were frequency modulation transmitters and were used to transmit data to NASA.
Because S band radios can operate only within line of sight, NASA used the Tracking and Data Relay Satellite System and the Spacecraft Tracking and Data Acquisition Network ground stations to communicate with the orbiter throughout its orbit. Additionally, the orbiter deployed a high-bandwidth Ku band radio out of the cargo bay, which could also be used as a rendezvous radar. The orbiter was also equipped with two UHF radios for communications with air traffic control and astronauts conducting EVA.: 403–404 The Space Shuttle's fly-by-wire control system was entirely reliant on its main computer, the Data Processing System (DPS). The DPS controlled the flight controls and thrusters on the orbiter, as well as the ET and SRBs during launch. The DPS consisted of five general-purpose computers (GPCs), two magnetic tape mass memory units (MMUs), and the associated sensors to monitor the Space Shuttle components.: 232–233 The original GPC used was the IBM AP-101B, which had a separate central processing unit (CPU) and input/output processor (IOP), and non-volatile solid-state memory. From 1991 to 1993, the orbiter vehicles were upgraded to the AP-101S, which improved the memory and processing capabilities and reduced the volume and weight of the computers by combining the CPU and IOP into a single unit. Four of the GPCs were loaded with the Primary Avionics Software System (PASS), Space Shuttle-specific software that provided control through all phases of flight. During ascent, maneuvering, reentry, and landing, the four PASS GPCs functioned identically to produce quadruple redundancy and would error-check their results (a simplified model of this voting scheme appears below). In case of a software error causing erroneous reports from all four PASS GPCs, a fifth GPC ran the Backup Flight System, which used a different program and could control the Space Shuttle through ascent, orbit, and reentry, but could not support an entire mission. The five GPCs were housed in three separate bays within the mid-deck to provide redundancy in the event of a cooling fan failure. After achieving orbit, the crew would switch some of the GPCs' functions from guidance, navigation, and control (GNC) to systems management (SM) and payload (PL) to support the operational mission.: 405–408 The Space Shuttle was not launched if its flight would run from December to January, as its flight software would have required the orbiter vehicle's computers to be reset at the year change. In 2007, NASA engineers devised a solution so Space Shuttle flights could cross the year-end boundary. Space Shuttle missions typically brought a portable general support computer (PGSC) that could integrate with the orbiter vehicle's computers and communication suite, as well as monitor scientific and payload data. Early missions brought the Grid Compass, one of the first laptop computers, as the PGSC, while later missions brought Apple and Intel laptops.: 408 The payload bay comprised most of the orbiter vehicle's fuselage and provided the cargo-carrying space for the Space Shuttle's payloads. It was 18 m (60 ft) long and 4.6 m (15 ft) wide, and could accommodate cylindrical payloads up to 4.6 m (15 ft) in diameter. Two payload bay doors hinged on either side of the bay and provided a relatively airtight seal to protect payloads from heating during launch and reentry. Payloads were secured in the payload bay to attachment points on the longerons.
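The redundancy scheme of the four PASS GPCs described above can be illustrated with a simple majority vote: because all four machines ran identical software on identical inputs, any disagreement indicated a hardware fault in the outvoted unit. The Python sketch below is a minimal illustration of that idea, not the actual PASS redundancy-management code; the names and values are hypothetical.

```python
# Minimal sketch of majority voting across redundant computers.
from collections import Counter

def vote(outputs: dict[str, float]) -> tuple[float, list[str]]:
    """Return the majority output and the names of any disagreeing units.

    With four identical machines, a single hardware fault is outvoted 3-1.
    """
    majority_value, _ = Counter(outputs.values()).most_common(1)[0]
    dissenters = [name for name, v in outputs.items() if v != majority_value]
    return majority_value, dissenters

# Hypothetical pitch-command outputs from the four PASS GPCs:
commands = {"GPC-1": 12.5, "GPC-2": 12.5, "GPC-3": 12.5, "GPC-4": 11.9}
value, suspects = vote(commands)
print(f"voted command: {value}, suspect units: {suspects}")
```

Voting masks a hardware fault in one machine, but a bug in the shared software would produce the same wrong answer on all four; that common-mode case is why the fifth GPC ran the Backup Flight System on a different program.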
The payload bay doors served an additional function as radiators for the orbiter vehicle's heat, and were opened upon reaching orbit for heat rejection.: 62–64 The orbiter could be used in conjunction with a variety of add-on components depending on the mission. This included orbital laboratories,: II-304, 319 boosters for launching payloads farther into space,: II-326 the Remote Manipulator System (RMS),: II-40 and optionally the EDO pallet to extend the mission duration.: II-86 To limit the fuel consumption while the orbiter was docked at the ISS, the Station-to-Shuttle Power Transfer System (SSPTS) was developed to convert and transfer station power to the orbiter.: II-87–88 The SSPTS was first used on STS-118, and was installed on Discovery and Endeavour.: III-366–368 The Remote Manipulator System (RMS), also known as Canadarm, was a mechanical arm attached to the cargo bay. It could be used to grasp and manipulate payloads, as well as serve as a mobile platform for astronauts conducting an EVA. The RMS was built by the Canadian company Spar Aerospace and was controlled by an astronaut inside the orbiter's flight deck using their windows and closed-circuit television. The RMS allowed for six degrees of freedom and had six joints located at three points along the arm. The original RMS could deploy or retrieve payloads up to 29,000 kg (65,000 lb), which was later improved to 270,000 kg (586,000 lb).: 384–385 The Spacelab module was a European-funded pressurized laboratory that was carried within the payload bay and allowed for scientific research while in orbit. The Spacelab module contained two 2.7 m (9 ft) segments that were mounted in the aft end of the payload bay to maintain the center of gravity during flight. Astronauts entered the Spacelab module through a 2.7 or 5.8 m (8.72 or 18.88 ft) tunnel that connected to the airlock. The Spacelab equipment was primarily stored in pallets, which provided storage for both experiments as well as computer and power equipment.: 434–435 Spacelab hardware was flown on 28 missions through 1999 and studied subjects including astronomy, microgravity, radar, and life sciences. Spacelab hardware also supported missions such as Hubble Space Telescope (HST) servicing and space station resupply. The Spacelab module was tested on STS-2 and STS-3, and the first full mission was on STS-9. Three RS-25 engines, also known as the Space Shuttle Main Engines (SSME), were mounted on the orbiter's aft fuselage in a triangular pattern. The engine nozzles could gimbal ±10.5° in pitch, and ±8.5° in yaw during ascent to change the direction of their thrust to steer the Shuttle. The titanium alloy reusable engines were independent of the orbiter vehicle and would be removed and replaced in between flights. The RS-25 is a staged-combustion cycle cryogenic engine that used liquid oxygen and hydrogen and had a higher chamber pressure than any previous liquid-fueled rocket. The original main combustion chamber operated at a maximum pressure of 226.5 bar (3,285 psi). The engine nozzle is 287 cm (113 in) tall and has an interior diameter of 229 cm (90.3 in). The nozzle is cooled by 1,080 interior lines carrying liquid hydrogen and is thermally protected by insulative and ablative material.: II–177–183 The RS-25 engines had several improvements to enhance reliability and power. During the development program, Rocketdyne determined that the engine was capable of safe reliable operation at 104% of the originally specified thrust. 
To keep the engine thrust values consistent with previous documentation and software, NASA kept the originally specified thrust at 100%, but had the RS-25 operate at higher thrust. RS-25 upgrade versions were denoted as Block I and Block II. The 109% thrust level was achieved with the Block II engines in 2001; their larger throat area reduced the chamber pressure to 207.5 bar (3,010 psi). The normal maximum throttle was 104%, with 106% or 109% used for mission aborts.: 106–107 The Orbital Maneuvering System (OMS) consisted of two aft-mounted AJ10-190 engines and the associated propellant tanks. The AJ10 engines used monomethylhydrazine (MMH) oxidized by dinitrogen tetroxide (N2O4). The pods carried a maximum of 2,140 kg (4,718 lb) of MMH and 3,526 kg (7,773 lb) of N2O4. The OMS engines were used after main engine cut-off (MECO) for orbital insertion. Throughout the flight, they were used for orbit changes, as well as the deorbit burn prior to reentry. Each OMS engine produced 27,080 N (6,087 lbf) of thrust, and the entire system could provide 305 m/s (1,000 ft/s) of velocity change; a rough consistency check on these figures appears below.: II–80 The orbiter was protected from heat during reentry by the thermal protection system (TPS), a thermal-soak protective layer around the orbiter. In contrast with previous US spacecraft, which had used ablative heat shields, the reusable orbiter required a multi-use heat shield.: 72–73 During reentry, the TPS experienced temperatures up to 1,600 °C (3,000 °F), but had to keep the orbiter vehicle's aluminum skin temperature below 180 °C (350 °F). The TPS primarily consisted of four types of material. The nose cone and leading edges of the wings experienced temperatures above 1,300 °C (2,300 °F) and were protected by reinforced carbon-carbon (RCC) tiles. Thicker RCC tiles were developed and installed in 1998 to prevent damage from micrometeoroids and orbital debris, and the RCC was further improved after such damage caused the Columbia disaster. Beginning with STS-114, the orbiter vehicles were equipped with the wing leading edge impact detection system to alert the crew to any potential damage.: II–112–113 The entire underside of the orbiter vehicle, as well as the other hottest surfaces, was protected with tiles of high-temperature reusable surface insulation, made of borosilicate glass-coated silica fibers that trapped heat in air pockets and redirected it outward. Areas on the upper parts of the orbiter vehicle were coated in tiles of white low-temperature reusable surface insulation of similar composition, which provided protection for temperatures below 650 °C (1,200 °F). The payload bay doors and parts of the upper wing surfaces were coated in reusable Nomex felt surface insulation or in beta cloth, as the temperature there remained below 370 °C (700 °F).: 395 The Space Shuttle external tank (ET) carried the propellant for the Space Shuttle Main Engines and connected the orbiter vehicle with the solid rocket boosters. The ET was 47 m (153.8 ft) tall and 8.4 m (27.6 ft) in diameter, and contained separate tanks for liquid oxygen and liquid hydrogen. The liquid oxygen tank was housed in the nose of the ET and was 15 m (49.3 ft) tall. The liquid hydrogen tank comprised the bulk of the ET and was 29 m (96.7 ft) tall. The orbiter vehicle was attached to the ET at two umbilical plates, which contained five propellant and two electrical umbilicals, and at forward and aft structural attachments.
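As a plausibility check on the OMS figures quoted above, the Tsiolkovsky rocket equation roughly reproduces the stated capability. Two inputs are assumptions not given in this text: a specific impulse of about 316 s (a commonly cited value for the AJ10-190) and a loaded orbiter mass of roughly 120,000 kg; the propellant masses are read as per pod, giving a total of about 2 × (2,140 + 3,526) ≈ 11,332 kg:

\[
\Delta v = I_{sp}\, g_0 \ln\!\frac{m_0}{m_0 - m_p}
\approx 316 \times 9.81 \times \ln\!\frac{120\,000}{120\,000 - 11\,332}
\approx 307\ \text{m/s},
\]

which is consistent with the quoted 305 m/s (1,000 ft/s) of velocity change.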
The exterior of the ET was covered in orange spray-on foam to allow it to survive the heat of ascent.: 421–422 The ET provided propellant to the Space Shuttle Main Engines from liftoff until main engine cutoff. The ET separated from the orbiter vehicle 18 seconds after engine cutoff; separation could be triggered automatically or manually. At the time of separation, the orbiter vehicle retracted its umbilical plates, and the umbilical cords were sealed to prevent excess propellant from venting into the orbiter vehicle. After the bolts at the structural attachments were sheared, the ET separated from the orbiter vehicle. At the time of separation, gaseous oxygen was vented from the nose to cause the ET to tumble, ensuring that it would break up upon reentry. The ET was the only major component of the Space Shuttle system that was not reused, and it would travel along a ballistic trajectory into the Indian or Pacific Ocean.: 422 For the first two missions, STS-1 and STS-2, the ET was covered in 270 kg (595 lb) of white fire-retardant latex paint to provide protection against damage from ultraviolet radiation. Further research determined that the orange foam itself provided sufficient protection, and the ET was no longer painted beginning with STS-3.: II-210 A lightweight tank (LWT), which reduced tank weight by 4,700 kg (10,300 lb), was first flown on STS-6. The LWT's weight was reduced by removing components from the hydrogen tank and reducing the thickness of some skin panels.: 422 In 1998, a super lightweight tank (SLWT) first flew on STS-91. The SLWT used the 2195 aluminum-lithium alloy, which was 40% stronger and 10% less dense than its predecessor, the 2219 aluminum alloy. The SLWT weighed 3,400 kg (7,500 lb) less than the LWT, which allowed the Space Shuttle to deliver heavy elements to the ISS's high-inclination orbit.: 423–424 The Solid Rocket Boosters (SRBs) provided 71.4% of the Space Shuttle's thrust during liftoff and ascent, and were the largest solid-propellant motors ever flown. Each SRB was 45 m (149.2 ft) tall and 3.7 m (12.2 ft) wide, weighed 68,000 kg (150,000 lb), and had a steel exterior approximately 13 mm (0.5 in) thick. The SRB's subcomponents were the solid-propellant motor, nose cone, and rocket nozzle. The solid-propellant motor comprised the majority of the SRB's structure. Its casing consisted of 11 steel sections, which made up its four main segments. The nose cone housed the forward separation motors and the parachute systems used during recovery. The rocket nozzles could gimbal up to 8° to allow for in-flight adjustments.: 425–429 The rocket motors were each filled with a total of 500,000 kg (1,106,640 lb) of solid rocket propellant (APCP+PBAN) and joined in the Vehicle Assembly Building (VAB) at KSC.: 425–426 In addition to providing thrust during the first stage of launch, the SRBs provided structural support for the orbiter vehicle and ET, as they were the only system connected to the mobile launcher platform (MLP).: 427 The SRBs were armed at T−5 minutes, and could only be electrically ignited once the RS-25 engines had ignited and were without issue.: 428 They each provided 12,500 kN (2,800,000 lbf) of thrust, which was later improved to 13,300 kN (3,000,000 lbf) beginning on STS-8.: 425 After expending their fuel, the SRBs were jettisoned approximately two minutes after launch at an altitude of approximately 46 km (150,000 ft).
Following separation, they deployed drogue and main parachutes, landed in the ocean, and were recovered by the crews aboard the ships MV Freedom Star and MV Liberty Star.: 430 Once they were returned to Cape Canaveral, they were cleaned and disassembled. The rocket motor, igniter, and nozzle were then shipped to Thiokol to be refurbished and reused on subsequent flights.: 124 The SRBs underwent several redesigns throughout the program's lifetime. STS-6 and STS-7 used SRBs that were 2,300 kg (5,000 lb) lighter due to walls 0.10 mm (0.004 in) thinner, but the walls were determined to be too thin to fly safely. Subsequent flights until STS-26 used cases that were 0.076 mm (0.003 in) thinner than the standard-weight cases, which saved 1,800 kg (4,000 lb). After the Challenger disaster, caused by an O-ring failing at low temperature, the SRBs were redesigned to provide a constant seal regardless of the ambient temperature.: 425–426 The Space Shuttle's operations were supported by vehicles and infrastructure that facilitated its transportation, construction, and crew access. The crawler-transporters carried the MLP and the Space Shuttle from the VAB to the launch site. The Shuttle Carrier Aircraft (SCA) were two modified Boeing 747s that could carry an orbiter on their backs. The original SCA (N905NA) was first flown in 1975, and was used for the ALT and for ferrying the orbiter from Edwards AFB to the KSC on all missions prior to 1991. A second SCA (N911NA) was acquired in 1988, and was first used to transport Endeavour from the factory to the KSC. Following the retirement of the Space Shuttle, N905NA was put on display at the JSC, and N911NA was put on display at the Joe Davies Heritage Airpark in Palmdale, California.: I–377–391 The Crew Transport Vehicle (CTV) was a modified airport jet bridge used to help astronauts egress from the orbiter after landing, where they would undergo their post-mission medical checkups. The Astrovan transported astronauts from the crew quarters in the Operations and Checkout Building to the launch pad on launch day. The NASA Railroad comprised three locomotives that transported SRB segments from the Florida East Coast Railway in Titusville to the KSC. Mission profile The Space Shuttle was prepared for launch primarily in the VAB at the KSC. The SRBs were assembled and attached to the external tank on the MLP. The orbiter vehicle was prepared at the Orbiter Processing Facility (OPF) and transferred to the VAB, where a crane was used to rotate it to the vertical orientation and mate it to the external tank.: 132–133 Once the entire stack was assembled, the MLP was carried for 5.6 km (3.5 mi) to Launch Complex 39 by one of the crawler-transporters.: 137 After the Space Shuttle arrived at one of the two launchpads, it was connected to the Fixed and Rotating Service Structures, which provided servicing capabilities, payload insertion, and crew transportation.: 139–141 The crew was transported to the launch pad at T−3 hours and entered the orbiter vehicle, whose hatch was closed at T−2 hours.: III–8 Loading of liquid oxygen and hydrogen into the external tank, via umbilicals attached to the orbiter vehicle, began at T−5 hours 35 minutes. At T−3 hours 45 minutes, the hydrogen fast-fill was complete, followed 15 minutes later by the oxygen tank fill. Both tanks were slowly replenished until launch as the oxygen and hydrogen boiled off.: II–186 The launch commit criteria considered precipitation, temperatures, cloud cover, lightning forecast, wind, and humidity.
The Space Shuttle was not launched under conditions where it could have been struck by lightning, as its exhaust plume could have triggered lightning by providing a current path to ground after launch, as occurred on Apollo 12.: 239 The NASA Anvil Rule for a Shuttle launch stated that an anvil cloud could not appear within a distance of 19 km (10 nmi). The Shuttle Launch Weather Officer monitored conditions until the final decision to scrub a launch was announced. In addition to the weather at the launch site, conditions had to be acceptable at one of the Transatlantic Abort Landing sites and in the SRB recovery area. The mission crew and the Launch Control Center (LCC) personnel completed systems checks throughout the countdown. Two built-in holds, at T−20 minutes and T−9 minutes, provided scheduled breaks to address any issues and allow additional preparation.: III–8 After the built-in hold at T−9 minutes, the countdown was automatically controlled by the Ground Launch Sequencer (GLS) at the LCC, which stopped the countdown if it sensed a critical problem with any of the Space Shuttle's onboard systems. At T−3 minutes 45 seconds, the engines began conducting gimbal tests, which concluded at T−2 minutes 15 seconds. The ground Launch Processing System handed off control to the orbiter vehicle's GPCs at T−31 seconds. At T−16 seconds, the GPCs armed the SRBs, and the sound suppression system (SPS) began to drench the MLP and SRB trenches with 1,100,000 L (300,000 U.S. gal) of water to protect the orbiter vehicle from damage by acoustical energy and rocket exhaust reflected from the flame trench and MLP during liftoff. At T−10 seconds, hydrogen igniters were activated under each engine bell to burn off the stagnant gas inside the cones before ignition; failure to burn these gases could trip the onboard sensors and create the possibility of an overpressure and explosion of the vehicle during the firing phase. The hydrogen tank's prevalves were opened at T−9.5 seconds in preparation for engine start.: II–186 Beginning at T−6.6 seconds, the main engines were ignited sequentially at 120-millisecond intervals. All three RS-25 engines were required to reach 90% rated thrust by T−3 seconds; otherwise, the GPCs would initiate an RSLS abort (a simplified model of this go/no-go logic is sketched below). If all three engines indicated nominal performance by T−3 seconds, they were commanded to gimbal to the liftoff configuration, and the command was issued to arm the SRBs for ignition at T−0. Between T−6.6 seconds and T−3 seconds, while the RS-25 engines were firing but the SRBs were still bolted to the pad, the offset thrust caused the Space Shuttle to pitch down 650 mm (25.5 in), measured at the tip of the external tank; the 3-second delay allowed the stack to return to nearly vertical before SRB ignition. This movement was nicknamed the "twang". At T−0, the eight frangible nuts holding the SRBs to the pad were detonated, the final umbilicals were disconnected, the SSMEs were commanded to 100% throttle, and the SRBs were ignited. By T+0.23 seconds, the SRBs had built up enough thrust for liftoff to commence, and they reached maximum chamber pressure by T+0.6 seconds.: II–186 At T−0, the JSC Mission Control Center assumed control of the flight from the LCC.: III–9 At T+4 seconds, when the Space Shuttle reached an altitude of 22 meters (73 ft), the RS-25 engines were throttled up to 104.5%.
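The engine-start logic described above reduces to a timed sequence with a go/no-go gate at T−3 seconds. The Python sketch below models it under simplified assumptions (scalar thrust readings, no sensor voting); it is illustrative pseudologic, not NASA's Ground Launch Sequencer software.

```python
# Illustrative model of the RS-25 start sequence and the RSLS abort gate.
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    thrust_fraction: float  # fraction of rated thrust reported at T-3 s

def engine_start_sequence(engines: list[Engine]) -> str:
    # Start commands are issued sequentially at 120 ms intervals from T-6.6 s.
    for i, eng in enumerate(engines):
        print(f"T{-6.6 + i * 0.120:+.2f} s: start command -> {eng.name}")
    # At T-3 s every engine must report at least 90% rated thrust;
    # otherwise the engines are shut down and the SRBs are never armed.
    if all(eng.thrust_fraction >= 0.90 for eng in engines):
        return "T-3 s: nominal -> gimbal to liftoff config, arm SRBs for T-0"
    return "T-3 s: RSLS ABORT -> main engine shutdown on the pad"

engines = [Engine("SSME-1", 1.00), Engine("SSME-2", 0.97), Engine("SSME-3", 0.88)]
print(engine_start_sequence(engines))  # one low engine -> pad abort
```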
At approximately T+7 seconds, the Space Shuttle rolled to a heads-down orientation at an altitude of 110 meters (350 ft), which reduced aerodynamic stress and provided an improved communication and navigation orientation. Approximately 20–30 seconds into ascent, at an altitude of 2,700 meters (9,000 ft), the RS-25 engines were throttled down to 65–72% to reduce the maximum aerodynamic forces at Max Q.: III–8–9 Additionally, the shape of the SRB propellant was designed to cause thrust to decrease at the time of Max Q.: 427 The GPCs could dynamically control the throttle of the RS-25 engines based upon the performance of the SRBs.: II–187 At approximately T+123 seconds, at an altitude of 46,000 meters (150,000 ft), pyrotechnic fasteners released the SRBs, which reached an apogee of 67,000 meters (220,000 ft) before parachuting into the Atlantic Ocean. The Space Shuttle continued its ascent using only the RS-25 engines. On earlier missions, the Space Shuttle remained in the heads-down orientation to maintain communications with the tracking station in Bermuda, but later missions, beginning with STS-87, rolled to a heads-up orientation at T+6 minutes for communication with the tracking and data relay satellite constellation. The RS-25 engines were throttled back beginning at T+7 minutes 30 seconds to limit vehicle acceleration to 3 g. At 6 seconds prior to main engine cutoff (MECO), which occurred at T+8 minutes 30 seconds, the RS-25 engines were throttled down to 67%. The GPCs controlled ET separation and dumped the remaining liquid oxygen and hydrogen to prevent outgassing while in orbit. The ET continued on a ballistic trajectory and broke up during reentry, with some small pieces landing in the Indian or Pacific Ocean.: III–9–10 Early missions used two firings of the OMS to achieve orbit; the first firing raised the apogee while the second circularized the orbit. Missions after STS-38 used the RS-25 engines to achieve the optimal apogee, and used the OMS engines to circularize the orbit. The orbital altitude and inclination were mission-dependent, and the Space Shuttle's orbits varied from 220 to 620 km (120 to 335 nmi).: III–10 The type of mission the Space Shuttle was assigned dictated the type of orbit that it entered. The initial design of the reusable Space Shuttle envisioned a low-cost launch platform to deploy commercial and government satellites. Early missions routinely ferried satellites, and the payload's destination determined the orbit that the orbiter vehicle would enter. Following the Challenger disaster, many commercial payloads were moved to expendable commercial rockets, such as the Delta II.: III–108, 123 While later missions still launched commercial payloads, Space Shuttle assignments were routinely directed towards scientific payloads, such as the Hubble Space Telescope,: III–148 Spacelab,: 434–435 and the Galileo spacecraft.: III–140 Beginning with STS-71, the orbiter vehicle conducted dockings with the Mir space station.: III–224 In its final decade of operation, the Space Shuttle was used for the construction of the International Space Station.: III–264 Most missions involved staying in orbit from several days to two weeks, although longer missions were possible with the Extended Duration Orbiter pallet.: III–86 At 17 days 15 hours, STS-80 was the longest Space Shuttle mission.: III–238 Approximately four hours prior to deorbit, the crew began preparing the orbiter vehicle for reentry by closing the payload bay doors, radiating excess heat, and retracting the Ku band antenna.
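The 3 g limit enforced late in the ascent described above can be seen as a simple throttle law: as propellant burns off and the stack gets lighter, constant thrust would push acceleration past the limit, so commanded thrust must fall with mass. A simplified sketch, ignoring gravity and drag and using rough placeholder values rather than real vehicle data:

```python
# Simplified sketch of the late-ascent 3 g acceleration limit described
# above. Thrust and mass figures are rough placeholders; the real
# guidance ran in the GPCs with full vehicle state, gravity, and drag.

G0 = 9.81                # m/s^2
MAX_ACCEL = 3 * G0       # the 3 g crew/structural limit
FULL_THRUST = 3 * 2.1e6  # N, three RS-25s near full throttle (approx.)

def commanded_throttle(mass_kg: float) -> float:
    """Throttle fraction that keeps thrust/mass at or below 3 g."""
    return min(1.0, MAX_ACCEL * mass_kg / FULL_THRUST)

for mass_kg in (400_000, 250_000, 180_000):  # stack mass as tank empties
    print(f"{mass_kg:>7} kg -> throttle {commanded_throttle(mass_kg):.2f}")
```

Under these placeholder numbers, throttling only begins once the stack mass falls below roughly 215 tonnes, which matches the qualitative picture of a limit that binds near the end of the burn.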
The orbiter vehicle maneuvered to an upside-down, tail-first orientation and began a 2–4 minute OMS burn approximately 20 minutes before it reentered the atmosphere. The orbiter vehicle then reoriented itself to a nose-forward position with a 40° angle of attack, and the forward reaction control system (RCS) jets were emptied of fuel and disabled prior to reentry. The orbiter vehicle's reentry was defined as starting at an altitude of 120 km (400,000 ft), when it was traveling at approximately Mach 25. The orbiter vehicle's reentry was controlled by the GPCs, which followed a preset angle-of-attack plan to prevent unsafe heating of the TPS. During reentry, the orbiter's speed was regulated by altering the amount of drag produced, which was controlled by means of angle of attack as well as bank angle. The latter could be used to control drag without changing the angle of attack. A series of roll reversals were performed to control azimuth while banking. The orbiter vehicle's aft RCS jets were disabled as its ailerons, elevators, and rudder became effective in the lower atmosphere. At an altitude of 46 km (150,000 ft), the orbiter vehicle opened its speed brake on the vertical stabilizer. At 8 minutes 44 seconds prior to landing, the crew deployed the air data probes and began lowering the angle of attack to 36°.: III–12 The orbiter's maximum glide ratio (lift-to-drag ratio) varied considerably with speed, ranging from 1.3 at hypersonic speeds to 4.9 at subsonic speeds.: II–1 The orbiter vehicle flew to one of the two Heading Alignment Cones, located 48 km (30 mi) away from each end of the runway's centerline, where it made its final turns to dissipate excess energy prior to its approach and landing. Once the orbiter vehicle was traveling subsonically, the crew took over manual control of the flight.: III–13 The approach and landing phase began when the orbiter vehicle was at an altitude of 3,000 m (10,000 ft) and traveling at 150 m/s (300 kn). The orbiter followed either a −20° or −18° glideslope and descended at approximately 51 m/s (167 ft/s). The speed brake was used to maintain a constant speed, and the crew initiated a pre-flare maneuver to a −1.5° glideslope at an altitude of 610 m (2,000 ft). The landing gear was deployed 10 seconds prior to touchdown, when the orbiter was at an altitude of 91 m (300 ft) and traveling at 150 m/s (288 kn). A final flare maneuver reduced the orbiter vehicle's descent rate to 0.9 m/s (3 ft/s), with touchdown occurring at 100–150 m/s (195–295 kn), depending on the weight of the orbiter vehicle. After the landing gear touched down, the crew deployed a drag chute out of the vertical stabilizer and began wheel braking when the orbiter was traveling slower than 72 m/s (140 kn). After the orbiter's wheels stopped, the crew deactivated the flight components and prepared to exit.: III–13 The primary Space Shuttle landing site was the Shuttle Landing Facility at the KSC, where 78 of the 133 successful landings occurred. In the event of unfavorable landing conditions, the Shuttle could delay its landing or land at an alternate location.
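The approach figures quoted above are mutually consistent, which a quick vertical-speed calculation shows: the sink rate on a glideslope is the airspeed times the sine of the glide path angle. A short check in Python (the 100 m/s speed used for the inner glideslope is an assumed value for illustration):

```python
# Consistency check of the approach numbers quoted above: sink rate =
# speed * sin(glideslope angle). On the -20 deg outer glideslope at
# 150 m/s this reproduces the ~51 m/s descent rate stated in the text;
# the shallow -1.5 deg inner glideslope (speed assumed ~100 m/s here)
# brings the sink rate near the ~1 m/s touchdown value after the flare.

import math

def sink_rate(speed_ms: float, glideslope_deg: float) -> float:
    """Vertical descent rate in m/s for a given speed and glide angle."""
    return speed_ms * math.sin(math.radians(abs(glideslope_deg)))

print(f"outer glideslope: {sink_rate(150, -20):.0f} m/s")   # ~51 m/s
print(f"inner glideslope: {sink_rate(100, -1.5):.1f} m/s")  # ~2.6 m/s
```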
The primary alternate landing site was Edwards AFB, which was used for 54 landings.: III–18–20 STS-3 landed at the White Sands Space Harbor in New Mexico and required extensive post-processing after exposure to the gypsum-rich sand, some of which was found in Columbia debris after STS-107.: III–28 Landings at alternate airfields required the Shuttle Carrier Aircraft to transport the orbiter back to Cape Canaveral.: III–13 In addition to the pre-planned landing airfields, there were 85 agreed-upon emergency landing sites to be used in different abort scenarios, with 58 located in other countries. The landing locations were chosen based upon political relationships, favorable weather, a runway at least 2,300 m (7,500 ft) long, and TACAN or DME equipment. Additionally, as the orbiter vehicle only had UHF radios, international sites with only VHF radios would have been unable to communicate directly with the crew. Facilities on the east coast of the US were planned for East Coast Abort Landings, while several sites in Europe and Africa were planned in the event of a Transoceanic Abort Landing. The facilities were prepared with equipment and personnel in the event of an emergency shuttle landing but were never used.: III–19 After the landing, ground crews approached the orbiter to conduct safety checks. Teams wearing self-contained breathing gear tested for the presence of hydrogen, hydrazine, monomethylhydrazine, nitrogen tetroxide, and ammonia to ensure the landing area was safe. Air conditioning and Freon lines were connected to cool the crew and equipment and to dissipate excess heat from reentry.: III-13 A flight surgeon boarded the orbiter and performed medical checks of the crew before they disembarked. Once the orbiter was secured, it was towed to the OPF to be inspected, repaired, and prepared for the next mission. Space Shuttle program The Space Shuttle flew from April 12, 1981,: III–24 until July 21, 2011.: III–398 Throughout the program, the Space Shuttle flew 135 missions,: III–398 of which 133 returned safely.: III–80, 304 Over its lifetime, the Space Shuttle was used to conduct scientific research,: III–188 deploy commercial,: III–66 military,: III–68 and scientific payloads,: III–148 and was involved in the construction and operation of Mir: III–216 and the ISS.: III–264 During its tenure, the Space Shuttle served as the only U.S. vehicle that launched astronauts, and it had no replacement until the launch of Crew Dragon Demo-2 on May 30, 2020. The overall NASA budget of the Space Shuttle program has been estimated at $221 billion (in 2012 dollars).: III−488 The developers of the Space Shuttle advocated for reusability as a cost-saving measure, which resulted in higher development costs in exchange for presumed lower costs per launch.
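The emergency-landing-site criteria described above (a sufficiently long runway, TACAN or DME navigation aids, and UHF-compatible radios) amount to a simple screening filter. A sketch in Python, using invented example sites rather than entries from the actual agreed-upon list:

```python
# Sketch of the emergency-landing-site screening described above:
# a runway of at least 2,300 m (7,500 ft), TACAN or DME equipment, and
# UHF radio capability (the orbiter carried only UHF radios). The
# candidate sites below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    runway_m: int
    tacan_or_dme: bool
    uhf_radio: bool

def suitable(site: Site) -> bool:
    return site.runway_m >= 2300 and site.tacan_or_dme and site.uhf_radio

for site in [Site("Example Field A", 3000, True, True),
             Site("Example Field B", 2100, True, True),    # runway too short
             Site("Example Field C", 2800, True, False)]:  # VHF-only comms
    print(site.name, "->", "usable" if suitable(site) else "rejected")
```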
During the design of the Space Shuttle, the Phase B proposals were not as cheap as the initial Phase A estimates indicated; Space Shuttle program manager Robert Thompson acknowledged that reducing cost-per-pound was not the primary objective of the further design phases, as other technical requirements could not be met with the reduced costs.: III−489−490 Development estimates made in 1972 projected payload costs as low as $1,109 per pound (in 2012 dollars), but the actual payload costs, not including the costs for the research and development of the Space Shuttle, were $37,207 per pound (in 2012 dollars).: III−491 Per-launch costs varied throughout the program and depended on the rate of flights as well as on research, development, and investigation proceedings. In 1982, NASA published an estimate of $260 million (in 2012) per flight, which was based on the prediction of 24 flights per year for a decade. The per-launch cost from 1995 to 2002, when the orbiters and ISS were not being constructed and there was no recovery work following a loss of crew, was $806 million. NASA published a study in 1999 that concluded that costs were $576 million (in 2012) if there were seven launches per year. In 2009, NASA determined that the cost of adding a single launch per year was $252 million (in 2012), which indicated that much of the Space Shuttle program's cost went to year-round personnel and operations that continued regardless of the launch rate. Accounting for the entire Space Shuttle program budget, the per-launch cost was $1.642 billion (in 2012).: III−490 On January 28, 1986, STS-51-L disintegrated 73 seconds after launch, due to the failure of the right SRB, killing all seven astronauts on board Challenger. The disaster was caused by the low-temperature impairment of an O-ring, a mission-critical seal used between segments of the SRB casing. Failure of the O-ring allowed hot combustion gases to escape from between the booster sections and burn through the adjacent ET, leading to a sequence of catastrophic events which caused the orbiter to disintegrate.: 71 Repeated warnings from design engineers voicing concerns about the lack of evidence of the O-rings' safety when the temperature was below 53 °F (12 °C) had been ignored by NASA managers.: 148 On February 1, 2003, Columbia disintegrated during re-entry, killing all seven of the STS-107 crew, because of damage to the carbon-carbon leading edge of the wing caused during launch. Ground control engineers had made three separate requests for high-resolution images taken by the Department of Defense that would have provided an understanding of the extent of the damage, while NASA's chief TPS engineer requested that astronauts on board Columbia be allowed to leave the vehicle to inspect the damage. NASA managers intervened to stop the Department of Defense's imaging of the orbiter and refused the request for the spacewalk,: III–323 and thus the feasibility of scenarios for astronaut repair or rescue by Atlantis was not considered by NASA management at the time. The partial reusability of the Space Shuttle was one of the primary design requirements during its initial development.: 164 The technical decisions that dictated the orbiter's return and re-use reduced the per-launch payload capabilities. The original intention was to compensate for this lower payload by lowering per-launch costs and flying at a high launch frequency.
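A quick arithmetic check ties the program-level figures above together: dividing the estimated total budget by the number of missions reproduces the quoted per-launch average, and comparing that average with the roughly $252 million marginal cost of one added flight shows how dominant the fixed, rate-independent costs were. A sketch, treating the gap between average and marginal cost as a rough proxy for fixed overhead:

```python
# Worked check of the cost figures quoted above (2012 dollars).
# Average per launch = total program cost / missions flown; the gap
# between that average and the ~$252M marginal cost of an added flight
# is a rough proxy for fixed, rate-independent program overhead.

TOTAL_PROGRAM_USD = 221e9    # estimated total NASA spend on the program
MISSIONS = 135
MARGINAL_PER_FLIGHT = 252e6  # 2009 estimate for one extra launch/year

average = TOTAL_PROGRAM_USD / MISSIONS
fixed_share = 1 - MARGINAL_PER_FLIGHT / average

print(f"average per launch: ${average / 1e9:.2f} billion")  # ~ $1.64 B
print(f"rate-independent share of the average: {fixed_share:.0%}")  # ~85%
```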
In practice, however, the actual costs of a Space Shuttle launch were higher than initially predicted, and the Space Shuttle never flew the 24 missions per year that NASA had projected.: III–489–490 The Space Shuttle was originally intended as a launch vehicle to deploy satellites, which it was primarily used for on the missions prior to the Challenger disaster. NASA's pricing, which was below cost, was lower than that of expendable launch vehicles; the intention was that the high volume of Space Shuttle missions would compensate for early financial losses. The improvement of expendable launch vehicles and the transition away from commercial payloads on the Space Shuttle resulted in expendable launch vehicles becoming the primary deployment option for satellites.: III–109–112 A key customer for the Space Shuttle was the National Reconnaissance Office (NRO), responsible for spy satellites. The NRO's connection to the program was classified through 1993, and the secret consideration of NRO payload requirements led to a lack of transparency in the program. The proposed Shuttle-Centaur program, cancelled in the wake of the Challenger disaster, would have pushed the spacecraft beyond its operational capacity. The fatal Challenger and Columbia disasters demonstrated the safety risks of the Space Shuttle that could result in the loss of the crew. The spaceplane design of the orbiter limited the abort options, as the abort scenarios required either the controlled flight of the orbiter to a runway or allowing the crew to egress individually, in contrast to the abort escape options on the Apollo and Soyuz space capsules. Early safety analyses advertised by NASA engineers and management predicted the chance of a catastrophic failure resulting in the death of the crew as ranging from 1 in 100 launches to as rare as 1 in 100,000. Following the loss of two Space Shuttle missions, the risks for the initial missions were reevaluated, and the chance of a catastrophic loss of the vehicle and crew was found to be as high as 1 in 9. NASA management was criticized afterwards for accepting increased risk to the crew in exchange for higher mission rates. Both the Challenger and Columbia accident reports explained that NASA culture had failed to keep the crew safe by not objectively evaluating the potential risks of the missions.: 195–203 The Space Shuttle's retirement was announced in January 2004.: III-347 President George W. Bush announced his Vision for Space Exploration, which called for the retirement of the Space Shuttle once it completed construction of the ISS. To ensure the ISS was properly assembled, the contributing partners determined in March 2006 the need for 16 remaining assembly missions.: III-349 One additional Hubble Space Telescope servicing mission was approved in October 2006.: III-352 Originally, STS-134 was to be the final Space Shuttle mission. However, the Columbia disaster resulted in additional orbiters being prepared for launch on need in the event of a rescue mission. As Atlantis was prepared for the final launch-on-need mission, the decision was made in September 2010 that it would fly as STS-135 with a four-person crew that could remain at the ISS in the event of an emergency.: III-355 STS-135 launched on July 8, 2011, and landed at the KSC on July 21, 2011, at 5:57 a.m. EDT (09:57 UTC).: III-398 From then until the launch of Crew Dragon Demo-2 on May 30, 2020, the US launched its astronauts aboard Russian Soyuz spacecraft. Following each orbiter's final flight, it was processed to make it safe for display.
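The loss-of-crew estimates quoted above can be put on a common footing by asking what each per-flight probability implies over many flights: with per-flight failure probability p, the chance of at least one loss in n flights is 1 − (1 − p)^n. A short calculation, with flight counts chosen for illustration:

```python
# Comparing the catastrophic-risk estimates quoted above. With a
# per-flight failure probability p, the chance of at least one loss
# over n flights is 1 - (1 - p)**n. Flight counts are illustrative.

def p_at_least_one_loss(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

cases = [("1 in 100,000 per flight, 135 flights", 1e-5, 135),
         ("1 in 100 per flight, 135 flights", 1e-2, 135),
         ("1 in 9 per flight, first 9 flights", 1 / 9, 9)]
for label, p, n in cases:
    print(f"{label}: {p_at_least_one_loss(p, n):.1%}")
# The program in fact lost 2 vehicles in 135 missions (about 1 in 67).
```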
In this safing process, the OMS and RCS presented the primary dangers due to their toxic hypergolic propellants, and most of their components were permanently removed to prevent any dangerous outgassing.: III-443 Atlantis is on display at the Kennedy Space Center Visitor Complex in Florida,: III-456 Discovery is on display at the Steven F. Udvar-Hazy Center in Virginia,: III-451 Endeavour is on display at the California Science Center in Los Angeles,: III-457 and Enterprise is displayed at the Intrepid Museum in New York.: III-464 Components from the orbiters were transferred to the US Air Force, the ISS program, and the Russian and Canadian governments. The engines were removed to be used on the Space Launch System, and spare RS-25 nozzles were attached for display purposes.: III-445 On many Artemis program missions, the Space Launch System's four main engines and two solid rocket boosters, as well as the Orion spacecraft's main engine, are all previously flown Space Shuttle hardware: RS-25 main engines, solid rocket booster motors and casings, and Orbital Maneuvering System engines. They are refurbished legacy components from the Space Shuttle program, some of which date back to the early 1980s. For example, Artemis I flew components that had flown on 83 of the 135 Space Shuttle missions. From Artemis I through Artemis IV, refurbished Shuttle main engines will be used before new engines are manufactured. From Artemis I through Artemis III, refurbished Shuttle solid rocket booster motors and steel casings are to be used before new ones are built. From Artemis I through Artemis VI, the Orion main engine will be one of six previously flown Space Shuttle OMS engines. See also Similar spacecraft Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Arab_Legion] | [TOKENS: 1324]
Contents Arab Legion The Arab Legion (Arabic: الفيلق العربي) was the police force, then regular army, of the Emirate of Transjordan, a British protectorate, in the early part of the 20th century, and then of the Hashemite Kingdom of Jordan, an independent state, with a final Arabization of its command taking place in 1956, when British senior officers were replaced by Jordanian ones. Creation In October 1920, after taking over the Transjordan region from the Ottomans, the United Kingdom formed a unit of 150 men called the "Mobile Force", under the command of Captain Frederick Gerard Peake, to defend the territory against both internal and external threats. The Mobile Force was based in Zarqa, and 80% of its men were drawn from the local Chechen community. It was quickly expanded to 1,000 men, recruiting Arabs who had served in the Ottoman Army. On 22 October 1923, the police were merged with the Reserve Mobile Force, still under Peake, who was by then an employee of the Emirate of Transjordan. The new force was named Al Jeish al Arabi ("the Arab Army") but was always known officially in English as the Arab Legion. The Arab Legion was financed by Britain and commanded by British officers. The Legion was formed as a police force to keep order among the tribes of Transjordan and to guard the important Jerusalem–Amman road. On 1 April 1926, the Transjordan Frontier Force was formed from a cadre drawn from the Arab Legion. It consisted of only 150 men, most of them stationed along Transjordan's roads. During this time the Arab Legion was reduced to 900 men and was also stripped of its machine guns, artillery, and communications troops. In 1939, John Bagot Glubb, better known as "Glubb Pasha", became the Legion's commander, with Major General Abdul Qadir Pasha Al Jundi as his deputy commander. Together they transformed it into the best-trained Arab army. World War II During World War II, the Arab Legion took part in the British war effort against pro-Axis forces in the Mediterranean and Middle East Theatre. By then, the force had grown to 1,600 men. The Legion, part of Iraqforce, contributed significantly to the Anglo-Iraqi War and the Syria–Lebanon campaign, two decisive early victories for the Allies. The top three officers representing the Legion in the Victory March were Major General Abdul Qadir Pasha el Jundi, O.B.E., Colonel Bahjat Bey Tabbara, and Lieutenant Colonel Ahmed Sudqui Bey, M.B.E. 1948 Arab–Israeli War The Arab Legion actively participated in the 1948 Arab–Israeli war. With a total strength of just over 6,000, the Arab Legion's military contingent consisted of 4,500 men in four single-battalion-sized regiments, each with its own armored car squadron, plus seven independent companies and support troops. The regiments were organized into two brigades: 1st Brigade contained the 1st and 3rd Regiments, while 3rd Brigade contained the 2nd and 4th Regiments. There were also two artillery batteries with four 25-pounders each. On 9 February 1948 the Transjordan Frontier Force was disbanded, with its members being absorbed back into the Arab Legion. Although headed by Glubb, now a Lieutenant General, command in the field was exercised by Brigadier Norman Lash. The Legion was initially withdrawn from Palestine to Transjordanian territory, under instruction from the United Nations, prior to the end of the British Mandate. With the commencement of hostilities the Legion re-entered Palestine, with 1st Brigade heading to Nablus and 2nd Brigade heading to Ramallah.
The Arab Legion entered Palestine with other Arab forces on May 15, 1948, using the Allenby (now King Hussein) Bridge, advancing to cover the approaches from Jenin in the north to Alaffoula, and from the Al-Majame'a bridge on the Jordan River to Bissan. There was considerable embarrassment within the UK government that British officers were employed in the Legion during the conflict, and all of them, including a brigade commander, were ordered to return to Transjordan. This led to the bizarre spectacle of British officers leaving their units to return to Transjordan, only to sneak back across the border and rejoin the Arab Legion. Without exception, all of the British officers returned to their units. John Platts-Mills, a Labour Member of Parliament, formally queried in Parliament why Glubb had not been prosecuted for serving in a foreign army in contravention of the Foreign Enlistment Act 1870. Units of the Arab Legion were engaged in several battles with Zionist, and later Israeli, forces. By the end of the war in 1949, the Arab Legion consisted of over 10,000 men manning a 100-mile front, which then expanded to a 400-mile front following the withdrawal of Iraqi forces. Further clashes with Israel On 11 September 1956, an Israeli force raided Jordanian territory at Al-Rahwa, in the Hebron sector, in what the IDF termed one of its retribution operations, Operation Jehonathan, attacking the police station and clashing with a unit from the Legion's Desert Force. Over twenty soldiers and policemen were killed. The Legion generally stayed out of the 1956 Suez Crisis. Jordanian army On 1 March 1956, the Arab Legion was renamed the Arab Army (now the Jordanian Armed Forces) as part of the Arabization of its command, under which King Hussein of Jordan dismissed the Legion's British commander "Glubb Pasha" and other senior British officers. In Israel, the Hebrew term "Ligioner" (ליגיונר), i.e., "Legionary", was still informally used for Jordanian soldiers for many years afterwards, including at the time of the 1967 war and its aftermath. Commanders Note: "Pasha" is a Turkish honorary title, one of various ranks, and is equivalent to the British title of "Lord". "Bey" is equivalent to a knighthood or "Sir". References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States_Army] | [TOKENS: 10066]
Contents United States Army The United States Army (U.S. Army) is the land service branch of the United States Armed Forces. It is designated as the army of the United States in the United States Constitution. It operates under the authority, direction, and control of the United States secretary of defense. As a part of the United States Department of Defense, it is one of the six armed forces of the United States and one of the eight uniformed services of the United States. The Army is the most senior branch in order of precedence amongst the armed services. It is also the primary offensive force of the US military, typically the first to engage in ground conflict. It has its roots in the Continental Army, formed in 1775 during the American Revolutionary War. After the war, the United States Army succeeded the Continental Army in 1784. The U.S. Army is part of the Department of the Army, which is one of the three military departments of the Department of Defense. The U.S. Army is headed by the civilian secretary of the Army, and by the military chief of staff of the Army, a member of the Joint Chiefs of Staff. It is the largest military branch; for FY2022, the projected headcount of the Army was 1,005,725 soldiers: the Regular Army had 480,893 soldiers; the Army National Guard, 336,129; and the U.S. Army Reserve, 188,703. Major branches include Air Defense Artillery, Armor, Aviation, Field Artillery, Infantry, and Special Forces. It operates combat vehicles in the thousands, such as the M1 Abrams tank, the Bradley and Stryker armored fighting vehicles, the M113 armored personnel carrier, and the UH-60 Black Hawk helicopter. The U.S. Army fought the American Indian Wars (1784–1890), the War of 1812 (1812–15), and the Mexican–American War (1846–48). The American Civil War (1861–65) was its costliest conflict, with over 350,000 casualties. It fought the Spanish–American War (1898), World War I (1917–18), and World War II (1941–45). In the Cold War, the Army fought the Korean War (1950–53) and the Vietnam War (1965–71), and operated tactical nuclear weapons ranging from the Pershing II ballistic missile to the Davy Crockett rifle. Following the Cold War's end in 1991, the Army has focused primarily on Western Asia and counterinsurgency roles, including the Gulf War, the war in Afghanistan, and the war in Iraq. The U.S. Army currently provides materiel and training to Ukraine in the Russo-Ukrainian war, and deploys air defense batteries that support Israel in the Gaza war and the Hezbollah–Israel conflict. Mission The United States Army serves as the primary land-based branch of the United States Department of Defense. Section 7062 of Title 10, U.S. Code defines the purpose of the army. In 2018, the Army Strategy 2018 articulated an eight-point addendum to the Army Vision for 2028. While the Army Mission remains constant, the Army Strategy builds upon the Army's Brigade Modernization by adding focus to corps- and division-level echelons. The Army Futures Command oversees reforms geared toward conventional warfare. The Army's current reorganization plan is due to be completed by 2028. The Army's five core competencies are prompt and sustained land combat; combined arms operations (including combined arms maneuver and wide-area security, armored and mechanized operations, and airborne and air assault operations); special operations forces; setting and sustaining the theater for the joint force; and integrating national, multinational, and joint power on land.
History The Continental Army was created on 14 June 1775 by the Second Continental Congress as a unified army for the colonies to fight Great Britain, with George Washington appointed as its commander. The army was initially led by men who had served in the British Army or colonial militias and who brought much of British military heritage with them. As the Revolutionary War progressed, French aid, resources, and military thinking helped shape the new army. A number of European soldiers came on their own to help, such as Friedrich Wilhelm von Steuben, who taught Prussian Army tactics and organizational skills. The Army fought numerous pitched battles and sometimes used Fabian strategy and hit-and-run tactics in the South in 1780 and 1781; under Major General Nathanael Greene, it hit where the British were weakest to wear down their forces. Washington led victories against the British at Trenton and Princeton, but lost a series of battles in the New York and New Jersey campaign in 1776 and the Philadelphia campaign in 1777. With a decisive victory at Yorktown and the help of the French, the Continental Army prevailed against the British. After the war, the Continental Army was quickly given land certificates and disbanded, in a reflection of the republican distrust of standing armies. State militias became the new nation's sole ground army, except for a regiment to guard the Western Frontier and one battery of artillery guarding West Point's arsenal. However, because of continuing conflict with Native Americans, it was soon considered necessary to field a trained standing army. The Regular Army was at first very small, and after General St. Clair's defeat at the Battle of the Wabash, where more than 800 soldiers were killed, the Regular Army was reorganized as the Legion of the United States, established in 1791 and renamed the United States Army in 1796. In 1798, during the Quasi-War with France, the U.S. Congress established a three-year "Provisional Army" of 10,000 men, consisting of twelve regiments of infantry and six troops of light dragoons. In March 1799, Congress created an "Eventual Army" of 30,000 men, including three regiments of cavalry. Both "armies" existed only on paper, but equipment for 3,000 men and horses was procured and stored. The War of 1812 was the second and last war between the United States and Great Britain. The war was split between northern, southern, and naval campaigns. While a large part of the war was fought between the United States and Great Britain, a variety of native tribes fought on both sides of the conflict. The war ended with the Treaty of Ghent, is generally considered to have been inconclusive, and brought about a period of peace between the United States and Great Britain that has lasted for over two centuries. The war between the United States and the Seminoles lasted over 50 years. The usual strategies utilized against Native American tribes were to seize winter food supplies and to form alliances with enemies of a tribe. These were not viable options against the Seminoles, largely because the lack of climate variability in Florida left no winter food stores to seize, and because of the long history of warring between the Seminole tribe and the other tribes of the Florida region. The U.S. Army fought and won the Mexican–American War, which was a defining event for both countries. The U.S.
victory resulted in the Treaty of Guadalupe Hidalgo, in which Mexico ceded a large portion of land to the United States, including the modern-day states of California, Nevada, and Utah, most of New Mexico and Arizona, and parts of Colorado and Wyoming, and relinquished its claims to Texas. The American Civil War was the costliest war for the U.S. in terms of casualties. After most slave states, located in the southern U.S., formed the Confederate States, the Confederate States Army, led by former U.S. Army officers, mobilized a large fraction of Southern white manpower. Forces of the United States (the "Union" or "the North") formed the Union Army, consisting of a small body of regular army units and a large body of volunteer units raised from every state, north and south, except South Carolina. For the first two years, Confederate forces did well in set battles but lost control of the border states. The Confederates had the advantage of defending a large territory, in an area where disease caused twice as many deaths as combat. The Union pursued a strategy of seizing the coastline, blockading the ports, and taking control of the river systems. By 1863, the Confederacy was being strangled. Its eastern armies fought well, but its western armies were defeated one after another as Union forces captured New Orleans in 1862 and gained control of the Tennessee River. In the Vicksburg Campaign of 1862–1863, General Ulysses Grant seized the Mississippi River and cut off the Southwest. Grant took command of Union forces in 1864, and after a series of battles with very heavy casualties, he had General Robert E. Lee under siege in Richmond as General William T. Sherman captured Atlanta and marched through Georgia and the Carolinas. The Confederate capital was abandoned in April 1865, and Lee subsequently surrendered his army at Appomattox Court House. All other Confederate armies surrendered within a few months. The war remains the deadliest conflict in U.S. history, resulting in the deaths of 620,000 men on both sides. Based on 1860 census figures, 8% of all white males aged 13 to 43 died in the war, including 6.4% in the North and 18% in the South. Following the Civil War, the U.S. Army had the mission of containing the western tribes of Native Americans on the Indian reservations. It set up many forts and engaged in the last of the American Indian Wars. U.S. Army troops also occupied several Southern states during the Reconstruction Era to protect freedmen. The key battles of the Spanish–American War of 1898 were fought by the Navy. Using mostly new volunteers, the U.S. forces defeated Spain in land campaigns in Cuba and played the central role in the Philippine–American War. Starting in 1910, the army began acquiring fixed-wing aircraft. In 1910, during the Mexican Revolution, the army was deployed to U.S. towns near the border to ensure the safety of lives and property. In 1916, Pancho Villa, a major rebel leader, attacked Columbus, New Mexico, prompting a U.S. intervention in Mexico that lasted until 7 February 1917; Army units fought the rebels and Mexican federal troops along the border until 1918. The United States joined World War I as an "Associated Power" in 1917 on the side of Britain, France, Russia, Italy, and the other Allies. U.S. troops were sent to the Western Front and were involved in the last offensives that ended the war. With the armistice in November 1918, the army once again decreased its forces.
In 1939, estimates of the Army's strength ranged between 174,000 and 200,000 soldiers, smaller than that of Portugal, which ranked it 17th or 19th in the world in size. General George C. Marshall became Army chief of staff in September 1939 and set about expanding and modernizing the Army in preparation for war. The United States joined World War II in December 1941 after the Japanese attack on Pearl Harbor. Some 11 million Americans were to serve in various Army operations. On the European front, U.S. Army troops formed a significant portion of the forces that landed in French North Africa and took Tunisia, and then moved on to Sicily and later fought in Italy. In the June 1944 landings in northern France and in the subsequent liberation of Europe and defeat of Nazi Germany, millions of U.S. Army troops played a central role. In the Pacific War, U.S. Army soldiers made up the vast majority of the ground forces, capturing the Pacific Islands from Japanese control. In total, there were 86 amphibious landings throughout the Pacific, of which the Army conducted 71. Following the Axis surrenders in May (Germany) and August (Japan) of 1945, Army troops were deployed to Japan and Germany to occupy the two defeated nations. By 1947, the number of soldiers in the US Army had decreased from eight million in 1945 to 684,000, and the total number of active divisions had dropped from 89 to 12; the leaders of the Army saw this demobilization as a success. Two years after World War II, the Army Air Forces separated from the army to become the United States Air Force in September 1947. In 1948, the army was desegregated by Executive Order 9981 of President Harry S. Truman. The end of World War II set the stage for the East–West confrontation known as the Cold War. With the outbreak of the Korean War, concerns over the defense of Western Europe rose. Two corps, V and VII, were reactivated under Seventh United States Army in 1950, and U.S. strength in Europe rose from one division to four. Hundreds of thousands of U.S. troops remained stationed in West Germany, with others in Belgium, the Netherlands, and the United Kingdom, until the 1990s in anticipation of a possible Soviet attack.: minute 9:00–10:00 During the Cold War, U.S. troops and their allies fought communist forces in Korea and Vietnam. The Korean War began in June 1950; the Soviets had walked out of a UN Security Council meeting, removing their possible veto of UN intervention. Under a United Nations umbrella, hundreds of thousands of U.S. troops fought to prevent the takeover of South Korea by North Korea and later to invade the northern nation. After repeated advances and retreats by both sides and the Chinese People's Volunteer Army's entry into the war, the Korean Armistice Agreement returned the peninsula to the status quo in July 1953. The Vietnam War is often regarded as a low point for the U.S. Army due to the use of drafted personnel, the unpopularity of the war with the U.S. public, and frustrating restrictions placed on the military by U.S. political leaders. While U.S. forces had been stationed in South Vietnam since 1959, in intelligence and advising/training roles, they were not deployed in large numbers until 1965, after the Gulf of Tonkin Incident. U.S. forces effectively established and maintained control of the "traditional" battlefield, but they struggled to counter the guerrilla hit-and-run tactics of the communist Viet Cong and the People's Army of Vietnam (PAVN, also known as the NVA).
During the 1960s, the Department of Defense continued to scrutinize the reserve forces and to question the number of divisions and brigades, as well as the redundancy of maintaining two reserve components, the Army National Guard and the Army Reserve. In 1967, Secretary of Defense Robert McNamara decided that 15 combat divisions in the Army National Guard were unnecessary and cut the number to eight divisions (one mechanized infantry, two armored, and five infantry), but increased the number of brigades from seven to 18 (one airborne, one armored, two mechanized infantry, and 14 infantry). The loss of the divisions did not sit well with the states. Their objections included the inadequate maneuver element mix for those that remained and the end of the practice of rotating divisional commands among the states that supported them. Under the proposal, the remaining division commanders were to reside in the state of the division base. However, no reduction in total Army National Guard strength was to take place, which convinced the governors to accept the plan. The states reorganized their forces accordingly between 1 December 1967 and 1 May 1968. The Total Force Policy was adopted by Chief of Staff of the Army General Creighton Abrams in the aftermath of the Vietnam War and involved treating the three components of the army (the Regular Army, the Army National Guard, and the Army Reserve) as a single force. General Abrams' intertwining of the three components effectively made extended operations impossible without the involvement of both the Army National Guard and the Army Reserve in a predominantly combat support role. The army converted to an all-volunteer force with greater emphasis on training to specific performance standards, driven by the reforms of General William E. DePuy, the first commander of United States Army Training and Doctrine Command. Following the Camp David Accords, signed by Egypt and Israel in 1978 and brokered by President Jimmy Carter, the United States and Egypt agreed to hold a joint military training exercise, usually taking place every two years, known as Exercise Bright Star. The 1980s was mostly a decade of reorganization. The Goldwater–Nichols Act of 1986 created unified combatant commands, bringing the army together with the other military services under unified, geographically organized command structures. The army also played a role in the invasions of Grenada in 1983 (Operation Urgent Fury) and Panama in 1989 (Operation Just Cause). By 1989, Germany was nearing reunification and the Cold War was coming to a close. Army leadership reacted by starting to plan for a reduction in strength. By November 1989, Pentagon briefers were laying out plans to reduce army end strength by 23%, from 750,000 to 580,000, using a number of incentives such as early retirement. In 1990, Iraq invaded its smaller neighbor, Kuwait, and U.S. land forces quickly deployed to assure the protection of Saudi Arabia. In January 1991, Operation Desert Storm commenced, a U.S.-led coalition which deployed over 500,000 troops, the bulk of them from U.S. Army formations, to drive out Iraqi forces. The campaign ended in total victory, as Western coalition forces routed the Iraqi Army. Some of the largest tank battles in history were fought during the Gulf War; the Battle of Medina Ridge, the Battle of Norfolk, and the Battle of 73 Easting were tank battles of historical significance.
After Operation Desert Storm, the army did not see major combat operations for the remainder of the 1990s but did participate in a number of peacekeeping activities. In 1990 the Department of Defense issued guidance for "rebalancing" after a review of the Total Force Policy, but in 2004, USAF Air War College scholars concluded the guidance would reverse the Total Force Policy, which is an "essential ingredient to the successful application of military force". On 11 September 2001, 53 Army civilians (47 employees and six contractors) and 22 soldiers were among the 125 victims killed in the Pentagon in a terrorist attack, when American Airlines Flight 77, commandeered by five Al-Qaeda hijackers, slammed into the western side of the building as part of the September 11 attacks. In response to the 11 September attacks, and as part of the global war on terror, U.S. and NATO forces invaded Afghanistan in October 2001, displacing the Taliban government. The U.S. Army also led the combined U.S. and allied invasion of Iraq in 2003; it served as the primary source of ground forces, with its ability to sustain short- and long-term deployment operations. In the following years, the mission changed from conflict between regular militaries to counterinsurgency, resulting in the deaths of more than 4,000 U.S. service members (as of March 2008) and injuries to thousands more. 23,813 insurgents were killed in Iraq between 2003 and 2011. Until 2009, the army's chief modernization plan, its most ambitious since World War II, was the Future Combat Systems program. In 2009, many systems were canceled, and the remaining ones were swept into the BCT modernization program. By 2017, the Brigade Modernization project was completed and its headquarters, the Brigade Modernization Command, was renamed the Joint Modernization Command, or JMC. In response to budget sequestration in 2013, the Army planned to shrink to 1940 levels, although actual Active-Army end strengths were projected to fall to some 450,000 troops by the end of FY2017. From 2016 to 2017, the Army retired hundreds of OH-58 Kiowa Warrior observation helicopters, while retaining its Apache gunships. The projected FY2015 expenditure for Army research, development, and acquisition fell from the $32 billion projected in 2012 to the $21 billion expected in 2014. The US Army saw extensive combat throughout both Afghanistan and Iraq. The 10th Mountain Division was the most deployed unit in the US military during the Global War on Terror. Organization By 2017, a task force was formed to address Army modernization, which triggered shifts of units in 2018: CCDC from within Army Materiel Command (AMC), and ARCIC from within Army Training and Doctrine Command (TRADOC), moved to a new Army Command (ACOM), Army Futures Command (AFC). AFC's mission is modernization reform: to design hardware, as well as to work within the acquisition process which defines materiel for AMC. TRADOC's mission is to define the architecture and organization of the Army, and to train and supply soldiers to FORSCOM.: minutes 2:30–15:00 AFC's cross-functional teams (CFTs) are Futures Command's vehicle for sustainable reform of the acquisition process for the future. In order to support the Army's modernization priorities, the FY2020 budget allocated $30 billion for the top six modernization priorities over the next five years. The $30 billion came from $8 billion in cost avoidance and $22 billion in terminations. The task of organizing the U.S. Army commenced in 1775.
In the first one hundred years of its existence, the United States Army was maintained as a small peacetime force to man permanent forts and perform other non-wartime duties such as engineering and construction works. During times of war, the U.S. Army was augmented by the much larger United States Volunteers, which were raised independently by various state governments. States also maintained full-time militias which could be called into the service of the army. By the twentieth century, the U.S. Army had mobilized the U.S. Volunteers on four occasions, during each of the major wars of the nineteenth century. During World War I, the "National Army" was organized to fight the conflict, replacing the concept of U.S. Volunteers. It was demobilized at the end of World War I and was replaced by the Regular Army, the Organized Reserve Corps, and the state militias. In the 1920s and 1930s, the "career" soldiers were known as the "Regular Army", with the "Enlisted Reserve Corps" and "Officer Reserve Corps" augmenting it to fill vacancies when needed. In 1941, the "Army of the United States" was founded to fight World War II. The Regular Army, the Army of the United States, the National Guard, and the Officer/Enlisted Reserve Corps (ORC and ERC) existed simultaneously. After World War II, the ORC and ERC were combined into the United States Army Reserve. The Army of the United States was re-established for the Korean War and Vietnam War and was demobilized upon the suspension of the draft. Currently, the Army is divided into the Regular Army, the Army Reserve, and the Army National Guard. Some states further maintain state defense forces, as a type of reserve to the National Guard, while all states maintain regulations for state militias. The U.S. Army is also divided into several branches and functional areas. Branches include officers, warrant officers, and enlisted soldiers, while functional areas consist of officers who are reclassified from their former branch into a functional area. However, officers continue to wear the branch insignia of their former branch in most cases, as functional areas do not generally have discrete insignia. Some branches, such as Special Forces, operate similarly to functional areas in that individuals may not join their ranks until having served in another Army branch. Careers in the Army can extend into cross-functional areas for officers, warrant officers, enlisted, and civilian personnel. Before 1933, members of the Army National Guard were considered state militia until they were mobilized into the U.S. Army, typically at the onset of war. Since the 1933 amendment to the National Defense Act of 1916, all Army National Guard soldiers have held dual status. They serve as National Guardsmen under the authority of the governor of their state or territory and as reserve members of the U.S. Army under the authority of the president, in the Army National Guard of the United States. Since the adoption of the Total Force Policy, in the aftermath of the Vietnam War, reserve component soldiers have taken a more active role in U.S. military operations. For example, Reserve and Guard units took part in the Gulf War, peacekeeping in Kosovo, Afghanistan, and the 2003 invasion of Iraq. For the organization of Headquarters, United States Department of the Army (HQDA) and a detailed treatment of the history, components, administrative and operational structure, and the branches and functional areas of the Army, see Structure of the United States Army. The U.S.
Army is made up of three components: one active component, the Regular Army, and two reserve components, the Army National Guard and the Army Reserve. Both reserve components are primarily composed of part-time soldiers who train once a month – at what are known as battle assemblies or unit training assemblies (UTAs) – and conduct two to three weeks of annual training each year. Both the Regular Army and the Army Reserve are organized under Title 10 of the United States Code, while the National Guard is organized under Title 32. While the Army National Guard is organized, trained, and equipped as a component of the U.S. Army, when it is not in federal service it is under the command of individual state and territorial governors. However, the District of Columbia National Guard reports to the U.S. president, not the district's mayor, even when not federalized. Any or all of the National Guard can be federalized by presidential order and against the governor's wishes. The U.S. Army is led by a civilian secretary of the Army, who has the statutory authority to conduct all the affairs of the army under the authority, direction, and control of the secretary of defense. The chief of staff of the Army, who is the highest-ranked military officer in the army, serves as the principal military adviser and executive agent for the secretary of the Army, i.e., its service chief, and as a member of the Joint Chiefs of Staff, a body composed of the service chiefs from each of the four military services belonging to the Department of Defense who advise the president of the United States, the secretary of defense, and the National Security Council on operational military matters, under the guidance of the chairman and vice chairman of the Joint Chiefs of Staff. In 1986, the Goldwater–Nichols Act mandated that operational control of the services follows a chain of command from the president to the secretary of defense directly to the unified combatant commanders, who have control of all units in their geographic or functional area of responsibility; thus the secretaries of the military departments (and their respective service chiefs underneath them) are responsible only for organizing, training, and equipping their service components. The army provides trained forces to the combatant commanders for use as directed by the secretary of defense. In 2013, the army shifted to six geographical commands that align with the six geographical unified combatant commands (CCMD). The army also transformed its base unit from divisions to brigades. Division lineage will be retained, but the divisional headquarters will be able to command any brigade, not just brigades that carry their divisional lineage. The central part of this plan is that each brigade will be modular, i.e., all brigades of the same type will be exactly the same, and thus any brigade can be commanded by any division. As specified before the 2013 end-strength re-definitions, the three major types of brigade combat teams were the armored, Stryker, and infantry brigade combat teams. In addition, there are combat support and service support modular brigades. Combat support brigades include combat aviation brigades (CABs), which come in heavy and light varieties, fires (artillery) brigades (now transformed into division artillery), and expeditionary military intelligence brigades. Combat service support brigades include sustainment brigades, which come in several varieties and serve the standard support role in an army. The U.S.
Army's conventional combat capability currently consists of 12 active divisions as well as several independent maneuver units. The 2nd Infantry Division, based in South Korea, is the only active-duty division that does not have permanently assigned brigade combat teams (BCTs); it relies on rotational Stryker BCTs from divisions based stateside to fulfill the necessary combat-arms role. From 2013 through 2017, the Army sustained organizational and end-strength reductions after several years of growth. In June 2013, the Army announced plans to downsize to 32 active brigade combat teams by 2015 to match a reduction in active-duty strength to 490,000 soldiers. Army Chief of Staff Raymond Odierno projected that the Army was to shrink to "450,000 in the active component, 335,000 in the National Guard and 195,000 in U.S. Army Reserve" by 2018. However, this plan was scrapped by the incoming Trump administration, with subsequent plans to expand the Army by 16,000 soldiers to a total of 476,000 by October 2017; the National Guard and the Army Reserve were to see a smaller expansion. The Army's maneuver organization was most recently altered by the reorganization of United States Army Alaska into the 11th Airborne Division, transferring the 1st and 4th Brigade Combat Teams of the 25th Infantry Division to a separate operational headquarters to reflect the brigades' distinct, Arctic-oriented mission. As part of the reorganization, the 1–11 (formerly 1–25) Stryker Brigade Combat Team will reorganize as an Infantry Brigade Combat Team. Following this transition, the active component BCTs will number 11 Armored brigades, 6 Stryker brigades, and 14 Infantry brigades. Within the Army National Guard and United States Army Reserve, there are a further eight divisions, 27 brigade combat teams, additional combat support and combat service support brigades, and independent cavalry, infantry, artillery, aviation, engineer, and support battalions. The Army Reserve in particular provides virtually all psychological operations and civil affairs units. United States Army Europe and Africa (USAREUR-AF), United States Army Western Hemisphere Command (USAWHC), and United States Army Pacific (USARPAC) are among the Army's geographically oriented commands. Special operations forces fall under the United States Army Special Operations Command (Airborne) (USASOC). Medical Department The United States Army Medical Department (AMEDD), formerly the Army Medical Service (AMS), is the primary healthcare organization of the United States Army and is led by the Surgeon General of the United States Army (TSG), a three-star lieutenant general, who (by policy) also serves as the Commanding General, United States Army Medical Command (MEDCOM). TSG is assisted by a Deputy Surgeon General and a full staff, the Office of the Surgeon General (OTSG). The incumbent Surgeon General is Lieutenant General Mary K. Izaguirre (since 25 January 2024). AMEDD encompasses the Army's six non-combat, medical-focused specialty branches (or "Corps"): the Medical Corps, Nurse Corps, Dental Corps, Veterinary Corps, Medical Service Corps, and Medical Specialist Corps. Each of these branches is headed by a Corps Chief who reports directly to the Surgeon General.
Personnel The Army's Talent Management Task Force (TMTF) has deployed IPPS-A, the Integrated Personnel and Pay System – Army, an application which serves the National Guard and, since 17 January 2023, the Army Reserve and Active Army. Soldiers were reminded to update their information using the legacy systems to keep their payroll and personnel information current by December 2021. IPPS-A, the Army's human resources system, is available for download on Android and from the Apple App Store. It will be used for future promotions and other personnel decisions. Below are the U.S. Army ranks authorized for use today and their equivalent NATO designations. Although no living officer currently holds the rank of General of the Army, it is still authorized by Congress for use in wartime. There are several paths to becoming a commissioned officer, including the United States Military Academy, Reserve Officers' Training Corps, Officer Candidate School, and direct commissioning. Regardless of which road an officer takes, the insignia are the same. Certain professions, including physicians, pharmacists, nurses, lawyers, and chaplains, are commissioned directly into the Army. Most army commissioned officers (those who are generalists) are promoted based on an "up or out" system; a more flexible talent management process is underway. The Defense Officer Personnel Management Act of 1980 establishes rules for the timing of promotions and limits the number of officers that can serve at any given time. Army regulations call for addressing all personnel with the rank of general as "General (last name)" regardless of the number of stars. Likewise, both colonels and lieutenant colonels are addressed as "Colonel (last name)", and first and second lieutenants as "Lieutenant (last name)". Warrant officers are single-track specialty officers with subject matter expertise in a particular area. They are initially appointed as warrant officers (in the rank of WO1) by the secretary of the Army, but receive their commission upon promotion to chief warrant officer two (CW2). By regulation, warrant officers are addressed as "Mr. (last name)" or "Ms. (last name)" by senior officers and as "sir" or "ma'am" by all enlisted personnel. However, many personnel address warrant officers as "Chief (last name)" within their units regardless of rank. Sergeants and corporals are referred to as NCOs, short for non-commissioned officers. This distinguishes corporals from the more numerous specialists, who have the same pay grade but do not exercise leadership responsibilities. Beginning in 2021, all corporals are required to complete structured self-development for the NCO ranks by completing the basic leader course (BLC), or else be laterally assigned as specialists. Specialists who have completed BLC and who have been recommended for promotion are permitted to wear corporal rank before their promotion as NCOs. Privates and privates first class (E3) are addressed as "Private (last name)", specialists as "Specialist (last name)", corporals as "Corporal (last name)", and sergeants, staff sergeants, sergeants first class, and master sergeants all as "Sergeant (last name)". First sergeants are addressed as "First Sergeant (last name)", and sergeants major and command sergeants major are addressed as "Sergeant Major (last name)". Training in the U.S. Army is generally divided into two categories: individual and collective.
Because of COVID-19 precautions, the first two weeks of basic training – not including in-processing and out-processing – incorporated social distancing and indoor, desk-oriented training. Once the recruits had tested negative for COVID-19 for two weeks, the remaining eight weeks followed the traditional activities for most recruits, followed by Advanced Individual Training (AIT), in which they receive training for their military occupational specialties (MOS). For some MOSs, recruits instead complete 14 to 20 weeks of One Station Unit Training (OSUT), which combines Basic Training and AIT. The length of AIT varies by MOS, and certain highly technical MOS training requires many months (e.g., foreign-language translators). Depending on the Army's needs, Basic Combat Training for combat-arms soldiers is conducted at several locations, but two of the longest-running schools are the Armor School and the Infantry School, both at Fort Moore, Georgia. Sergeant Major of the Army Dailey noted that a pilot program for infantry One Station Unit Training (OSUT) extended training 8 weeks beyond Basic Training and AIT, to 22 weeks. The pilot, designed to boost infantry readiness, ended in December 2018. The new Infantry OSUT covered the M240 machine gun as well as the M249 squad automatic weapon, and the redesigned Infantry OSUT started in 2019. Depending on the results of the 2018 pilot, OSUT could also extend training in other combat arms beyond the infantry: One Station Unit Training was to be extended to 22 weeks for Armor by fiscal year 2021, with additional OSUTs expanding to Cavalry, Engineer, and Military Police (MP) in the succeeding fiscal years. A new training assignment for junior officers was instituted: they serve as platoon leaders for Basic Combat Training (BCT) platoons. These lieutenants assume many of the administrative, logistical, and day-to-day tasks formerly performed by the drill sergeants of those platoons and are expected to "lead, train, and assist with maintaining and enhancing the morale, welfare and readiness" of the drill sergeants and their BCT platoons. These lieutenants are also expected to stem any inappropriate behaviors they witness in their platoons, to free up the drill sergeants for training. The United States Army Combat Fitness Test (ACFT) was introduced in 2018 to 60 battalions spread throughout the Army. The test and scoring system are the same for all soldiers, regardless of gender, and the test takes an hour to complete, including rest periods. The ACFT supersedes the Army Physical Fitness Test (APFT), being considered more relevant to survival in combat. Six events were determined to better predict which muscle groups of the body were adequately conditioned for combat actions: a three-repetition deadlift, a standing power throw of a ten-pound medicine ball, hand-release push-ups (which replace the traditional push-up), a 250-yard sprint/drag/carry event, leg tucks (or a plank test in lieu of the leg tuck), and a two-mile run, with a mandatory rest period before the run. As of 1 October 2020, all soldiers from all three components (Regular Army, Reserve, and National Guard) are subject to this test, and the ACFT now tests all soldiers in basic training. The ACFT became the official test of record on 1 October 2020; before that day, every Army unit was required to complete a diagnostic ACFT, and all soldiers with valid APFT scores could use them until March 2022. 
The Holistic Health and Fitness (H2F) System is one way that soldiers can prepare. The ACFT movements directly translate to movements on the battlefield. Following their basic and advanced training at the individual level, soldiers may choose to continue their training and apply for an "additional skill identifier" (ASI). The ASI allows the Army to take a wide-ranging MOS and focus it on a more specific specialty. For example, a combat medic, whose duties are to provide pre-hospital emergency treatment, may receive ASI training to become a cardiovascular specialist, a dialysis specialist, or even a licensed practical nurse. For commissioned officers, training includes pre-commissioning training, known as Basic Officer Leader Course A, either at USMA, via ROTC, or by completing OCS. After commissioning, officers undergo branch-specific training at the Basic Officer Leaders Course B (formerly called the Officer Basic Course), which varies in length and location according to their future assignments. Officers continue to attend standardized training at different stages of their careers. Collective training at the unit level takes place at the unit's assigned station, but the most intensive training at higher echelons is conducted at the three combat training centers (CTCs): the National Training Center (NTC) at Fort Irwin, California; the Joint Readiness Training Center (JRTC) at Fort Johnson, Louisiana; and the Joint Multinational Readiness Center (JMRC) at the Hohenfels Training Area in Hohenfels and Grafenwöhr, Germany. ReARMM is the Army Force Generation process approved in 2020 to meet the need to continuously replenish forces for deployment at the unit level and for other echelons as required by the mission. Individual-level replenishment still requires training at a unit level, conducted at the continental U.S. (CONUS) replacement center (CRC) at Fort Bliss, which spans New Mexico and Texas, before individual deployment. The United States Army has faced recruiting challenges since the COVID-19 pandemic and has implemented the Future Soldier Prep Course (FSPC) to address them. The program is designed to assist potential recruits who do not initially meet the Army's physical fitness or academic standards. In the fiscal year ending 30 September 2023, approximately 13,000 of the 55,000 recruits, or 24%, participated in the FSPC, indicating a significant reliance on the program to fill recruitment quotas. The FSPC offers both physical fitness and academic training; however, most participants enroll in the academic component, which focuses on subjects like basic math, English, and other essential skills. Equipment The chief of staff of the Army has identified six modernization priorities, these being (in order): artillery, ground vehicles, aircraft, the network, air/missile defense, and soldier lethality. The United States Army employs various weapons to provide light firepower at short ranges. The most common weapon type used by the Army is the M4 carbine, a compact variant of the M16 rifle, which is gradually being replaced by the M7 rifle among close-combat units. The primary sidearm in the U.S. Army is the M17 pistol, adopted through the Modular Handgun System program. Soldiers are also equipped with various hand grenades, such as the M67 fragmentation grenade and the M18 smoke grenade. Many units are supplemented with a variety of specialized weapons, including the M249 SAW (squad automatic weapon), to provide suppressive fire at the squad level. 
Some units, such as the 10th Mountain Division, have replaced their M249s with Mk 48s. Indirect fire is provided by the M320 grenade launcher. The M1014 Joint Service Combat Shotgun or the Mossberg 590 shotgun is used for door breaching and close-quarters combat. The M14EBR is used by designated marksmen. Snipers use the M107 Long Range Sniper Rifle, the M2010 Enhanced Sniper Rifle, and the M110 Semi-Automatic Sniper Rifle. The Army employs various crew-served weapons to provide heavy firepower at ranges exceeding those of individual weapons. The M240 is the U.S. Army's standard medium machine gun. The M2 heavy machine gun is generally used as a vehicle-mounted machine gun; in the same way, the 40 mm Mk 19 grenade machine gun is mainly used by motorized units. The U.S. Army uses three types of mortar for indirect fire support when heavier artillery may not be appropriate or available. The smallest of these is the 60 mm M224, normally assigned at the infantry company level. At the next higher echelon, infantry battalions are typically supported by a section of 81 mm M252 mortars. The largest mortar in the Army's inventory is the 120 mm M120/M121, usually employed by mechanized units. Fire support for light infantry units is provided by towed howitzers, including the 105 mm M119A1 and the 155 mm M777. The U.S. Army utilizes a variety of direct-fire rockets and missiles to provide infantry with an anti-armor capability. The AT4 is an unguided projectile that can destroy armor and bunkers at ranges up to 500 meters. The FIM-92 Stinger is a shoulder-launched, heat-seeking anti-aircraft missile. The FGM-148 Javelin and BGM-71 TOW are anti-tank guided missiles. U.S. Army doctrine puts a premium on mechanized warfare, and the Army fields the highest vehicle-to-soldier ratio in the world as of 2009. The Army's most common vehicle is the High Mobility Multipurpose Wheeled Vehicle (HMMWV), commonly called the Humvee, which is capable of serving as a cargo/troop carrier, weapons platform, and ambulance, among many other roles. While the Army operates a wide variety of combat support vehicles, one of the most common types is the family of HEMTT vehicles. The M1A2 Abrams is the Army's main battle tank, while the M2A3 Bradley is the standard infantry fighting vehicle. Other vehicles include the Stryker, the M113 armored personnel carrier, and multiple types of Mine Resistant Ambush Protected (MRAP) vehicles. The U.S. Army's principal artillery weapons are the M109A7 Paladin self-propelled howitzer and the M270 multiple launch rocket system (MLRS), both mounted on tracked platforms and assigned to heavy mechanized units. While the United States Army Aviation Branch operates a few fixed-wing aircraft, it mainly operates several types of rotary-wing aircraft: the AH-64 Apache attack helicopter, the UH-60 Black Hawk utility tactical transport helicopter, and the CH-47 Chinook heavy-lift transport helicopter. Restructuring plans call for a reduction of 750 aircraft and a consolidation from seven to four types. The Army is evaluating two fixed-wing aircraft demonstrators, ARES and Artemis, to replace the Guardrail ISR (intelligence, surveillance, and reconnaissance) aircraft. Under the Johnson–McConnell agreement of 1966, the Army agreed to limit its fixed-wing aviation role to administrative mission support (light unarmed aircraft which cannot operate from forward positions). For UAVs, the Army is deploying at least one company of MQ-1C Gray Eagle drones to each active Army division. 
The Army Combat Uniform (ACU) currently features a camouflage pattern known as the Operational Camouflage Pattern (OCP); OCP replaced a pixel-based pattern known as the Universal Camouflage Pattern (UCP) in 2019. On 11 November 2018, the Army announced a new version of "Army Greens", based on uniforms worn during World War II, that will become the standard garrison service uniform; the blue Army Service Uniform will remain the dress uniform. The Army Greens were projected to be first fielded in the summer of 2020. The beret flash of enlisted personnel displays their distinctive unit insignia. The U.S. Army's black beret is no longer worn with the ACU for garrison duty, having been permanently replaced with the patrol cap. After years of complaints that it was not well suited to most work conditions, Army Chief of Staff General Martin Dempsey eliminated it for wear with the ACU in June 2011. Soldiers currently in a unit in jump status still wear berets, whether the wearer is parachute-qualified or not (maroon beret), while members of Security Force Assistance Brigades (SFABs) wear brown berets. Members of the 75th Ranger Regiment and the Airborne and Ranger Training Brigade (tan beret) and Special Forces (rifle-green beret) may wear their berets with the Army Service Uniform for non-ceremonial functions. Unit commanders may still direct the wear of patrol caps in these units in training environments or motor pools. The Army has relied heavily on tents to provide the various facilities needed while on deployment (Force Provider Expeditionary (FPE)). The most common military uses for tents are as temporary barracks (sleeping quarters), dining facilities (DFACs), forward operating bases (FOBs), after-action review (AAR) sites, tactical operations centers (TOCs), morale, welfare and recreation (MWR) facilities, and security checkpoints. Most of these tents are set up and operated with the support of the Natick Soldier Systems Center. Each FPE contains billeting, latrines, showers, laundry, and kitchen facilities for 50–150 soldiers and is stored in Army Prepositioned Stocks 1, 2, 4, and 5. This provisioning allows combatant commanders to position soldiers as required in their area of responsibility within 24 to 48 hours. The U.S. Army is beginning to use a more modern tent called the deployable rapid assembly shelter (DRASH); in 2008, DRASH became part of the Army's Standard Integrated Command Post System. See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Job_Control_Language] | [TOKENS: 4616]
Contents Job Control Language Job Control Language (JCL) is a programming language for scripting and launching batch jobs on IBM mainframe computers. JCL code determines which programs to run, and with which files and devices for input and output. Parameters in the JCL can also provide accounting information for tracking the resources used by a job and can set which machine the job should run on. There are two major variants based on host platform and associated lineage. One version is available on the platform lineage that starts with DOS/360 and has progressed to z/VSE. The other starts with OS/360 and continues to z/OS, which includes the JES extensions known as Job Entry Control Language (JECL). The variants share basic syntax and concepts but have significant differences. The VM operating system does not have JCL as such; its CP and CMS components each have their own command languages. Generically, a job control language is any programming language for job control, not just the IBM mainframe one. Terminology JCL-specific terminology includes the following. Data set: a file, either temporary or permanent; it may be located on a disk drive, tape storage, or another device. Partitioned data set (PDS): a collection of files; a PDS is commonly used to store textual data such as source code, assembler macros (SYS1.MACLIB), system configuration (SYS1.PARMLIB), reusable JCL procedures (SYS1.PROCLIB), etc. As a collection of files, a PDS is similar to an archive file (ZIP, TAR, etc.), which in turn is similar to a file system directory. A PDS can contain executable code (load modules or program objects), which makes a PDS similar to a Unix-based static library. A member, once stored, cannot be updated in place, although a member can be deleted and replaced, such as via the IEBUPDTE utility. Since the 1989 release of MVS DFP 3.2, an enhanced version, the partitioned data set extended (PDSE), has been available. Member: a file (data set) in a PDS. A member can be accessed by specifying the name of the PDS with the member name in parentheses; for example, the system macro GETMAIN in SYS1.MACLIB can be referenced as SYS1.MACLIB(GETMAIN). UNIX System Services: a complete Unix environment running as part of the MVS base control program. It allows Unix files, scripts, tasks, and programs to run on a mainframe in a POSIX-compliant Unix environment without virtualization. Motivation Originally, mainframe systems were oriented toward batch processing. Many batch jobs require setup, with specific requirements for main storage and dedicated devices such as magnetic tapes, private disk volumes, and printers set up with special forms. JCL was developed as a means of ensuring that all required resources are available before a job is scheduled to run. Many systems, such as Linux, allow required datasets to be identified on the command line, and therefore to be subject to substitution by the shell or generated by the program at run time; on these systems the operating system job scheduler has little or no idea of the requirements of the job. In contrast, JCL explicitly specifies all required datasets and devices, so the scheduler can pre-allocate the resources prior to releasing the job to run. This helps to avoid "deadlock", where job A holds resource R1 and requests resource R2, while concurrently running job B holds resource R2 and requests R1. In such cases the only solution is for the computer operator to terminate one of the jobs, which then needs to be restarted. With job control, if job A is scheduled to run, job B will not be started until job A completes or releases the required resources. 
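As a rough sketch of how this up-front declaration looks in practice – all job, program, and dataset names below are hypothetical, not taken from the source – a job that updates two master files names both of them, and requests exclusive use of each, before it ever runs:

//JOBA     JOB  (ACCT),'UPDATE RUN',CLASS=A
//STEP1    EXEC PGM=UPDATER
//MASTER   DD   DSN=PROD.MASTER.FILE,DISP=OLD
//DETAIL   DD   DSN=PROD.DETAIL.FILE,DISP=OLD

Because DISP=OLD requests exclusive use of an existing dataset, the scheduler can hold this job until both files are free, instead of letting two half-started jobs deadlock over them.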
Features common to DOS and OS JCL For both DOS and OS the unit of work is the job. A job consists of one or several steps, each of which is a request to run one specific program. For example, before the days of relational databases, a job to produce a printed report for management might consist of the following steps: a user-written program to select the appropriate records and copy them to a temporary file; sort the temporary file into the required order, usually using a general-purpose utility; a user-written program to present the information in a way that is easy for the end-users to read and includes other useful information such as sub-totals; and a user-written program to format selected pages of the end-user information for display on a monitor or terminal. In both DOS and OS JCL the first "card" must be the JOB card, which identifies the job and can supply accounting and scheduling information. Procedures (commonly called procs) are pre-written JCL for steps or groups of steps, inserted into a job; both JCLs allow such procedures. Procs are used for repeating steps which are used several times in one job, or in several different jobs. They save programmer time and reduce the risk of errors. To run a procedure one simply includes in the JCL file a single "card" which copies the procedure from a specified file and inserts it into the jobstream. Also, procs can include parameters to customize the procedure for each use. Both DOS and OS JCL have a maximum usable line length of 80 characters, because when DOS/360 and OS/360 were first used the main method of providing new input to a computer system was 80-column punched cards. It later became possible to submit jobs via disk or tape files with longer record lengths, but the operating system's job submission components ignored everything after character 80. Strictly speaking, both operating system families use only 71 characters per line. Characters 73–80 are usually card sequence numbers, which the system printed on the end-of-job report and which are useful for identifying the locations of any errors reported by the operating system. Character 72 is usually left blank, but it can contain a nonblank character to indicate that the JCL statement is continued onto the next card. All commands, parameter names, and values have to be in capitals, except for USS filenames. All lines except for in-stream input (see below) have to begin with a slash "/", and all lines which the operating system processes have to begin with two slashes //, always starting in the first column. However, there are two exceptions: the delimiter statement and the comment statement. A delimiter statement begins with a slash and an asterisk (/*), and a comment statement begins with a pair of slashes and an asterisk (//*) in OS JCL or with an asterisk in DOS JCL. Many JCL statements are too long to fit within 71 characters, but they can be extended onto an indefinite number of continuation cards by breaking the statement at a suitable point (in OS JCL, after a comma) and continuing it on the next card. The most common types of card consist, broadly, of an identifier, an optional name, an operation (the statement type), and its operands. DOS and OS JCL both allow in-stream input, i.e. "cards" which are to be processed by the application program rather than the operating system. Data which is to be kept for a long time will normally be stored on disk, but before the use of interactive terminals became common the only way to create and edit such disk files was by supplying the new data on cards. DOS and OS JCL have different ways of signaling the start of in-stream input, but both end in-stream input with /* at column 1 of the card following the last in-stream data card. 
This makes the operating system resume processing JCL in the card following the /* card. Fred Brooks, who supervised the OS/360 project in which JCL was created, called it "the worst computer programming language ever devised by anybody, anywhere" in The Design of Design, where he used it as the example in the chapter "How Expert Designers Go Wrong". He attributed this to the failure of the designers to realize that JCL is, in fact, a programming language. Much of the complexity of OS JCL, in particular, derives from the large number of options for specifying dataset information. While files on Unix-like operating systems are abstracted into ordered streams of bytes – with the task of reading and writing structured data belonging exclusively to user-level programs (which ultimately ingest and emit such streams), and the practical details of data storage and access handled in large part by the operating system without the knowledge of user programs – datasets on OS/360 and its successors expose their file types and sizes, record types and lengths, block sizes, device-specific information like magnetic tape density, and label information. Although there are system defaults for many options, there is still a lot to be specified by the programmer, through a combination of JCL and information coded in the program. The more information coded in the program, the less flexible it is, since information in the program overrides anything in the JCL; thus, most information is usually supplied through JCL. For example, to copy a file on a Unix operating system, the user would enter a command like:

cp oldFile newFile

By contrast, a JCL job to copy a file on OS/360 might run the IEBGENER copy utility along these lines (the job name, dataset names, and space and DCB figures here are illustrative):

//COPYJOB  JOB  (ACCT),'COPY A FILE',CLASS=A
//COPY01   EXEC PGM=IEBGENER
//SYSPRINT DD   SYSOUT=*
//SYSUT1   DD   DSN=OLDFILE,DISP=SHR
//SYSUT2   DD   DSN=NEWFILE,DISP=(NEW,CATLG,DELETE),
//             SPACE=(CYL,(40,5),RLSE),
//             DCB=(LRECL=115,BLKSIZE=1150,RECFM=FB)
//SYSIN    DD   DUMMY

A second explanation for the complexity of JCL is the different expectations for running a job from those found in a PC or Unix-like environment. Later versions of the DOS/360 and OS/360 operating systems retain most features of the original JCL, although some simplification has been made to avoid forcing customers to rewrite all their JCL files. Many users save as a procedure any set of JCL statements which is likely to be used more than once or twice. The syntax of OS JCL is similar to the syntax of macros in System/360 assembly language, and would therefore have been familiar to programmers at a time when many programs were coded in assembly language. DOS JCL DOS JCL parameters are positional, which makes them harder to read and write, but easier for the system to parse. DOS JCL to some extent mitigates the difficulties of positional parameters by using more statements with fewer parameters than OS JCL. The ASSGN, DLBL, and EXTENT statements together do the same work (specifying where a new disk file should be stored) as a single DD statement in OS JCL; a sketch appears at the end of this section. In the original DOS/360 and in most versions of DOS/VS one had to specify the model number of the device which was to be used for each disk or tape file – even for existing files and for temporary files which would be deleted at the end of the job. This meant that, if a customer upgraded to more modern equipment, many JCL files had to be changed. Later members of the DOS/360 family reduced the number of situations in which device model numbers were required. DOS/360 originally required the programmer to specify the location and size of all files on DASD. The EXTENT card specifies the volume on which the extent resides, the starting absolute track, and the number of tracks. For z/VSE a file can have up to 256 extents on different volumes. 
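The following rough sketch shows the ASSGN/DLBL/EXTENT pattern described above; the device address, volume serial, file names, and track figures are illustrative, and operand details vary among DOS/360, DOS/VS, and z/VSE releases:

// JOB COPYJOB
// ASSGN SYS005,X'190'
// DLBL OUTFILE,'NEW.DATA.FILE',0,SD
// EXTENT SYS005,VOL001,1,0,1200,150
// EXEC COPYPROG
/&

Here ASSGN ties the symbolic unit SYS005 to a physical device address, DLBL names the file, and EXTENT places it on volume VOL001 starting at track 1200 for 150 tracks; the /& statement marks the end of the job.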
OS JCL OS JCL consists of three basic statement types: the JOB statement, which marks the start of a job and identifies it; the EXEC statement, which requests execution of a program or procedure as a job step; and the DD (Data Definition) statement, which identifies the data files a step uses. Right from the start, JCL for the OS family (up to and including z/OS) was more flexible and easier to use. The following examples use the old style of syntax which was provided right from the launch of System/360 in 1964. The old syntax is still quite common in jobs that have been running for decades with only minor changes. Each JCL statement is divided into five fields: the identifier field, the name field, the operation field, the parameter field, and the comments field. The identifier field should be concatenated with the name field, i.e. there should be no spaces between them. All of the major parameters of OS JCL statements are identified by keywords and can be presented in any order. A few of these contain two or more sub-parameters, such as SPACE (how much disk space to allocate to a new file) and DCB (detailed specification of a file's layout). Sub-parameters are sometimes positional, as in SPACE, but the most complex parameters, such as DCB, have keyword sub-parameters. Positional parameters must precede keyword parameters. Keyword parameters always assign values to a keyword using the equals sign (=). The DD statement is used to reference data. This statement links a program's internal description of a dataset to the data on external devices: disks, tapes, cards, printers, etc. The DD may provide information such as a device type (e.g. '181', '2400-5', 'TAPE'), a volume serial number for tapes or disks, and the description of the data file, called the DCB subparameter after the Data Control Block (DCB) in the program used to identify the file. Information describing the file can come from three sources: the DD card information, the dataset label information for an existing file stored on tape or disk, and the DCB macro coded in the program. When the file is opened this data is merged, with the DD information taking precedence over the label information, and the DCB information taking precedence over both. The updated description is then written back to the dataset label. This can lead to unintended consequences if incorrect DCB information is provided. Because of the parameters listed above and specific information for various access methods and devices, the DD statement is the most complex JCL statement. In one IBM reference manual, the description of the DD statement occupies over 130 pages – more than twice as much as the JOB and EXEC statements combined. The DD statement allows inline data to be injected into the job stream. This is useful for providing control information to utilities such as IDCAMS, SORT, etc., as well as providing input data to programs. From the very beginning, the JCL for the OS family of operating systems offered a high degree of device independence. Even for new files which were to be kept after the end of the job one could specify the device type in generic terms, e.g., UNIT=DISK, UNIT=TAPE, or UNIT=SYSSQ (tape or disk). Of course, if it mattered one could specify a model number or even a specific device address. Procedures permit grouping one or more "EXEC PGM=" and DD statements and then invoking them with "EXEC PROC=procname" or simply "EXEC procname". A facility called a procedure library allows pre-storing procedures. Procedures can also be included in the job stream by terminating the procedure with a // PEND statement, then invoking it by name the same way as if it were in a procedure library. OS JCL procedures were parameterized from the start, making them rather like macros or even simple subroutines and thus increasing their reusability in a wide range of situations. For example: 
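The following sketch is not from the source; the names and space figures are illustrative, as is the choice of IEFBR14, a standard do-nothing program often run just to make the system allocate datasets. It shows an in-stream procedure bracketed by PROC and PEND, with two substitutable parameters given default values, and a job step that invokes it with overrides:

//MYPROC   PROC DSN=DEFAULT.FILE,SPC=10
//ALLOC    EXEC PGM=IEFBR14
//NEWFILE  DD   DSN=&DSN,DISP=(NEW,CATLG),UNIT=SYSDA,
//             SPACE=(TRK,(&SPC,&SPC))
//         PEND
//STEP01   EXEC MYPROC,DSN=MY.TEST.FILE,SPC=20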
In this example, all the values beginning with ampersands "&" are parameters which will be specified when a job requests that the procedure be used. The PROC statement, in addition to giving the procedure a name, allows the programmer to specify default values for each parameter. So one could use the one procedure in this example to create new files of many different sizes and layouts, simply by invoking it with different parameter values. In multi-step jobs, a later step can use a referback instead of specifying in full a file which has already been specified in an earlier step. For example (program and dataset names are illustrative):

//MYPR01   EXEC PGM=SELECT
//NEWFILE  DD   DSN=MY.SELECTED.DATA,DISP=(NEW,PASS)
//MYPR02   EXEC PGM=REPORT
//INPUT01  DD   DSN=*.MYPR01.NEWFILE,DISP=OLD

Here, MYPR02 uses the file identified as NEWFILE in step MYPR01 (DSN means "dataset name" and specifies the name of the file; a DSN could not exceed 44 characters). In jobs which contain a mixture of job-specific JCL and procedure calls, a job-specific step can refer back to a file which was fully specified in a procedure, for example via DSN=*.STEP01.MYPR01.NEWFILE, which means "use the file identified as NEWFILE in step MYPR01 of the procedure used by step STEP01 of this job". Using the name of the step which called the procedure rather than the name of the procedure allows a programmer to use the same procedure several times in the same job without confusion about which instance of the procedure is used in the referback. JCL files can be long and complex, and the language is not easy to read. OS JCL allows programmers to include two types of explanatory comment: a comment appended to a statement after its parameter field, separated by at least one blank, or a dedicated comment statement beginning with //*. OS JCL also allows programmers to concatenate ("chain") input files so that they appear to the program as one file, for example:

//INPUT01  DD   DSN=MYFILE01,DISP=SHR
//         DD   DSN=JOESFILE,DISP=SHR
//         DD   DSN=SUESFILE,DISP=SHR

The second and third statements have no value in the name field, so OS treats them as concatenations. The files must be of the same basic type (almost always sequential) and must have the same record length; however, the block lengths need not be the same. In early versions of the OS (certainly before OS/360 R21.8) the block lengths had to be in decreasing order, or the user had to inspect each instance and append to the named DD statement the maximum block length found (for example, DCB=BLKSIZE=800 if 800 were the largest block length in the chain). In later versions of the OS (certainly after OS/MVS R3.7 with the appropriate "selectable units") the OS itself, during allocation, would inspect each instance in a concatenation and would substitute the maximum block length which was found. A usual fallback was to simply determine the maximum possible block length on the device and specify that on the named DD statement. The purpose of this fallback was to ensure that the access method would allocate an input buffer set which was large enough to accommodate any and all of the specified datasets. OS expects programs to set a return code which specifies how successful the program thought it was; by long-standing convention the most common values are 0 (success), 4 (warnings), 8 (errors), 12 (severe errors), and 16 (terminal errors). OS JCL refers to the return code as COND ("condition code"), and can use it to decide whether to run subsequent steps. However, unlike most modern programming languages, conditional steps in OS JCL are not executed if the specified condition is true – thus giving rise to the mnemonic, "If it's true, pass on through [without running the code]." To complicate matters further, the condition can only be specified after the step to which it refers. For example (program names are illustrative):

//STEP01   EXEC PGM=PROG01
//STEP02   EXEC PGM=PROG02,COND=(4,GT,STEP01)

means "bypass STEP02 if 4 is greater than the return code set by STEP01". This translates to the following pseudocode:

IF 4 > STEP01's return code THEN bypass STEP02 ELSE run STEP02

Note that by reading the steps containing COND statements backwards, one can understand them fairly easily. This is an example of logical transposition. 
However, IBM later introduced the IF condition in JCL, thereby making coding somewhat easier for programmers while retaining the COND parameter (to avoid making changes to existing JCLs where COND is used). The COND parameter may also be specified on the JOB statement; if so, the system "performs the same return code tests for every step in a job. If a JOB statement return code test is satisfied, the job terminates." Jobs use a number of IBM utility programs to assist in the processing of data. Utilities are most useful in batch processing. The utilities can be grouped into three sets: data set utilities, system utilities, and independent utilities. OS JCL is undeniably complex and has been described as "user hostile". As one instructional book on JCL asked, "Why do even sophisticated programmers hesitate when it comes to Job Control Language?" The book stated that many programmers either copied control cards without really understanding what they did, or "believed the prevalent rumors that JCL was horrible, and only 'die-hard' computer-types ever understood it" and handed the task of figuring out the JCL statements to someone else. Such an attitude could be found in programming language textbooks, which preferred to focus on the language itself and not on how programs in it were run. As one Fortran IV textbook said when listing possible error messages from the WATFOR compiler: "Have you been so foolish as to try to write your own 'DD' system control cards? Cease and desist forthwith; run, do not walk, for help." Nevertheless, some books that went into JCL in detail emphasized that once it was learned to an at least somewhat proficient degree, one gained freedom from installation-wide defaults and much better control over how an IBM system processed one's workload. Another book commented on the complexity but said, "take heart. The JCL capability you will gain from [the preceding chapter] is all that most programmers will ever need." Job Entry Control Language On IBM mainframe systems Job Entry Control Language, or JECL, is the set of command language control statements that provide information for the spooling subsystem – JES2 or JES3 on z/OS, or VSE/POWER for z/VSE. JECL statements may "specify on which network computer to run the job, when to run the job, and where to send the resulting output." JECL is distinct from job control language (JCL), which instructs the operating system how to run the job. There are different versions of JECL for the three environments. An early version of Job Entry Control Language for OS/360 Remote Job Entry (Program Number 360S-RC-536) used the identifier .. in columns 1–2 of the input record and consisted of a single control statement: JED (Job Entry Definition). "Workstation Commands" such as LOGON, LOGOFF, and STATUS also began with .. . Although the term had not yet been coined, HASP did have functionality similar to what would become the JECL of JES, including the /* syntax. For JES2, JECL statements start with /*; for JES3, they start with //*, except for the remote /*SIGNON and /*SIGNOFF commands. The commands for the two systems are completely different; each subsystem defines its own set of JECL statements for a given release, such as z/OS 1.2.0. For VSE, JECL statements start with '* $$' (note the single space). The Job Entry Control Language defines the start and end lines of JCL jobs and advises VSE/POWER how the job is to be handled. JECL statements define the job name (used by VSE/POWER), the class in which the job is processed, and the disposition of the job (i.e. D, L, K, H). 
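As a rough sketch of such a VSE job (the job name, class, and program name are illustrative, and operand details vary by VSE/POWER release), the POWER JECL statements bracket the DOS/VSE JCL of the job itself:

* $$ JOB JNM=PAYROLL,CLASS=A,DISP=D
// JOB PAYROLL
// EXEC PAYPROG
/&
* $$ EOJ

The * $$ JOB statement gives VSE/POWER the job name, class, and disposition (here D), while * $$ EOJ marks the end of the POWER job; the // JOB, // EXEC, and /& cards between them are ordinary DOS JCL.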
Other systems Other mainframe batch systems had some form of job control language, whether called that or not; their syntax was completely different from IBM versions, but they usually provided similar capabilities. Such a language would have control cards with a special indicator, such as an initial dollar sign, with $JOB being the first such card, interspersed with cards containing program code, data to be run, and so on. Interactive systems include "command languages" – command files (such as PC DOS ".bat" files) can be run non-interactively, but these usually do not provide as robust an environment for running unattended jobs as JCL. On some computer systems the job control language and the interactive command language may be different. For example, TSO on z/OS systems uses CLIST or Rexx as command languages along with JCL for batch work. On other systems these may be the same. See also References Sources
========================================
[SOURCE: https://en.wikipedia.org/wiki/Category:Unidentified_flying_objects] | [TOKENS: 93]
Category:Unidentified flying objects All articles related to unidentified flying objects (UFOs), a general term for all kinds of unexplained, anomalous objects or entities allegedly seen floating in the sky. Subcategories This category has the following 5 subcategories, out of 5 total. Pages in category "Unidentified flying objects" The following 9 pages are in this category, out of 9 total.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Birthday#cite_note-38] | [TOKENS: 4101]
Contents Birthday A birthday is the anniversary of the birth of a person or the figurative birth of an institution. Birthdays of people are celebrated in numerous cultures, often with birthday gifts, birthday cards, a birthday party, or a rite of passage. Many religions celebrate the birth of their founders or religious figures with special holidays (e.g. Christmas, Mawlid, Buddha's Birthday, Krishna Janmashtami, and Gurpurb). There is a distinction between birthday and birthdate (also known as date of birth): the former, except for February 29, occurs each year (e.g. January 15), while the latter is the complete date when a person was born (e.g. January 15, 2001). Coming of age In most legal systems, a person becomes a legal adult on the birthday at which they reach the age of majority (usually between 12 and 21), and reaching age-specific milestones confers particular rights and responsibilities. At certain ages, one may become eligible to leave full-time education, become subject to military conscription or to enlist in the military, to consent to sexual intercourse, to marry with parental consent, to marry without parental consent, to vote, to run for elected office, to legally purchase (or consume) alcohol and tobacco products, to purchase lottery tickets, or to obtain a driver's licence. The age of majority is when minors cease to legally be considered children and assume control over their persons, actions, and decisions, thereby terminating the legal control and responsibilities of their parents or guardians over and for them. Most countries set the age of majority at 18, though it varies by jurisdiction. Many cultures celebrate a coming-of-age birthday when a person reaches a particular year of life. Some cultures celebrate landmark birthdays in early life or old age. In many cultures and jurisdictions, if a person's real birthday is unknown (for example, if they are an orphan), their birthday may be adopted or assigned to a specific day of the year, such as January 1. Racehorses are reckoned to become one year old in the year following their birth on January 1 in the Northern Hemisphere and August 1 in the Southern Hemisphere. Birthday parties In certain parts of the world, an individual's birthday is celebrated by a party featuring a specially made cake. Guests bestow presents appropriate to the individual's age. Other birthday activities may include entertainment (sometimes by a hired professional, such as a clown, magician, or musician) and a special toast or speech by the birthday celebrant. The last stanza of Patty Hill's and Mildred Hill's famous song, "Good Morning to You" (unofficially titled "Happy Birthday to You"), is typically sung by the guests at some point in the proceedings. In some countries, a piñata takes the place of a cake. The birthday cake may be decorated with lettering and the person's age, or studded with the same number of lit candles as the age of the individual. The celebrated individual may make a silent wish and attempt to blow out the candles in one breath; if successful, superstition holds that the wish will be granted. In many cultures, the wish must be kept secret or it will not "come true". Birthdays as holidays Historically significant people's birthdays, such as national heroes or founders, are often commemorated by an official holiday marking the anniversary of their birth. 
Some notables, particularly monarchs, have an official birthday on a fixed day of the year, which may not necessarily match the day of their birth, but on which celebrations are held. In Mahayana Buddhism, many monasteries celebrate the anniversary of Buddha's birth, usually in a highly formal, ritualized manner. They treat the Buddha's statue as if it were the Buddha himself, alive, bathing and "feeding" him. Jesus Christ's traditional birthday is celebrated as Christmas Eve or Christmas Day around the world, on December 24 or 25, respectively. As some Eastern churches use the Julian calendar, their December 25 falls on January 7 in the Gregorian calendar. These dates are traditional and have no connection with Jesus's actual birthday, which is not recorded in the Gospels. Similarly, the birthdays of the Virgin Mary and John the Baptist are liturgically celebrated on September 8 and June 24, especially in the Roman Catholic and Eastern Orthodox traditions (although for those Eastern Orthodox churches using the Julian calendar the corresponding Gregorian dates are September 21 and July 7, respectively). As with Christmas, the dates of these celebrations are traditional and probably have no connection with the actual birthdays of these individuals. Catholic saints are remembered by a liturgical feast on the anniversary of their "birth" into heaven, i.e. their day of death. In Hinduism, Ganesh Chaturthi is a festival celebrating the birth of the elephant-headed deity Ganesha in extensive community celebrations and at home. Figurines of Ganesha are made for the holiday and are widely sold. Sikhs celebrate the anniversary of the birth of Guru Nanak and other Sikh gurus, which is known as Gurpurb. Mawlid is the anniversary of the birth of Muhammad and is celebrated on the 12th or 17th day of Rabi' al-awwal by adherents of Sunni and Shia Islam, respectively. These are the two most commonly accepted dates of birth of Muhammad. However, there is much controversy regarding the permissibility of celebrating Mawlid, as some Muslims judge the custom an unacceptable practice according to Islamic tradition. In Iran, Mother's Day is celebrated on the birthday of Fatima al-Zahra, the daughter of Muhammad. Banners reading Ya Fatima ("O Fatima") are displayed on government buildings, private buildings, public streets, and car windows. Religious views In Judaism, rabbis are divided about celebrating this custom, although the majority of the faithful accept it. In the Torah, the only mention of a birthday is the celebration of Pharaoh's birthday in Egypt (Genesis 40:20). Although the birthday of Jesus of Nazareth is celebrated as a Christian holiday on December 25, historically the celebrating of an individual person's birthday has been subject to theological debate. Early Christians, notes The World Book Encyclopedia, "considered the celebration of anyone's birth to be a pagan custom." Origen, in his commentary "On Levites," wrote that Christians should not only refrain from celebrating their birthdays but should look at them with disgust as a pagan custom. A saint's day was typically celebrated on the anniversary of their martyrdom or death, considered the occasion of or preparation for their entrance into Heaven or the New Jerusalem. 
Ordinary folk in the Middle Ages celebrated their saint's day (the saint they were named after), but nobility celebrated the anniversary of their birth. The "Squire's Tale", one of Chaucer's Canterbury Tales, opens as King Cambuskan proclaims a feast to celebrate his birthday. In the modern era, the Catholic Church, the Eastern Orthodox Church, and Protestantism, i.e. the three main branches of Christianity, as well as almost all Christian denominations, consider celebrating birthdays acceptable or at most a choice of the individual. An exception is Jehovah's Witnesses, who do not celebrate them for various reasons: in their interpretation this feast has pagan origins, was not celebrated by early Christians, is negatively portrayed in the Holy Scriptures, and has customs linked to superstition and magic. In some historically Roman Catholic and Eastern Orthodox countries,[a] it is common to have a 'name day', otherwise known as a 'Saint's day'. It is celebrated in much the same way as a birthday, but it is held on the official day of a saint with the same Christian name as the birthday person; the difference is that one may look up a person's name day in a calendar, or easily remember common name days (for example, John or Mary). In pious traditions, the two were often made to coincide by giving a newborn the name of a saint celebrated on its day of confirmation or, more rarely, on its birthday. Some are given the name of the religious feast falling on their christening day or birthday; for example, Noel or Pascal (French for Christmas and "of Easter"); as another example, Togliatti was given Palmiro as his first name because he was born on Palm Sunday. Celebrating birthdays does not reflect Islamic tradition, and because of this, the majority of Muslims refrain from doing so. Others do not object, as long as the celebration is not accompanied by behavior contrary to Islamic tradition. A good portion of Muslims (and Arab Christians) who have emigrated to the United States and Europe celebrate birthdays as customary, especially for children, while others abstain. Hindus celebrate the birth anniversary each year on the day that falls in the same lunar or solar month (Sun Signs Nirayana System – Sourava Mana Masa) as the birth and has the same asterism (star/nakshatra) as the date of birth. A year of age is reckoned whenever the Janma Nakshatra of the same month passes. Hindus regard death as more auspicious than birth, since the person is liberated from the bondages of material society. Also, traditionally, rituals and prayers for the departed are observed on the 5th and 11th days, with many relatives gathering. Historical and cultural perspectives According to Herodotus (5th century BC), of all the days in the year, the one which the Persians celebrated most was their birthday. It was customary to have the board furnished on that day with an ampler supply than common: the richer people ate a wholly baked cow, horse, camel, or donkey (Greek: ὄνον), while the poorer classes used instead the smaller kinds of cattle. On his birthday, the king anointed his head and presented gifts to the Persians. According to the law of the Royal Supper, on that day "no one should be refused a request". The rule for drinking was "No restrictions". In ancient Rome, a birthday (dies natalis) was originally an act of religious cultivation (cultus). 
A dies natalis was celebrated annually for a temple on the day of its founding, and the term is still used sometimes for the anniversary of an institution such as a university. The temple founding day might become the "birthday" of the deity housed there. March 1, for example, was celebrated as the birthday of the god Mars. Each human likewise had a natal divinity, the guardian spirit called the Genius, or sometimes the Juno for a woman, who was owed religious devotion on the day of birth, usually in the household shrine (lararium). The decoration of a lararium often shows the Genius in the role of the person carrying out the rites. A person marked their birthday with ritual acts that might include lighting an altar, saying prayers, making vows (vota), anointing and wreathing a statue of the Genius, or sacrificing to a patron deity. Incense, cakes, and wine were common offerings. Celebrating someone else's birthday was a way to show affection, friendship, or respect. In exile, the poet Ovid, though alone, celebrated not only his own birthday rite but that of his far distant wife. Birthday parties affirmed social as well as sacred ties. One of the Vindolanda tablets is an invitation to a birthday party from the wife of one Roman officer to the wife of another. Books were a popular birthday gift, sometimes handcrafted as a luxury edition or composed especially for the person honored. Birthday poems are a minor but distinctive genre of Latin literature. The banquets, libations, and offerings or gifts that were a regular part of most Roman religious observances thus became part of birthday celebrations for individuals. A highly esteemed person would continue to be celebrated on their birthday after death, in addition to the several holidays on the Roman calendar for commemorating the dead collectively. Birthday commemoration was considered so important that money was often bequeathed to a social organization to fund an annual banquet in the deceased's honor. The observance of a patron's birthday or the honoring of a political figure's Genius was one of the religious foundations for imperial cult or so-called "emperor worship." The Chinese word for "year(s) old" (t 歲, s 岁, suì) is entirely different from the usual word for "year(s)" (年, nián), reflecting the former importance of Chinese astrology and the belief that one's fate was bound to the stars imagined to be in opposition to the planet Jupiter at the time of one's birth. The importance of this duodecennial orbital cycle only survives in popular culture as the 12 animals of the Chinese zodiac, which change each Chinese New Year and may be used as a theme for some gifts or decorations. Because of the importance attached to the influence of these stars in ancient China and throughout the Sinosphere, East Asian age reckoning previously began with one at birth and then added years at each Chinese New Year, so that it formed a record of the suì one had lived through rather than of the exact amount of time from one's birth. This method—which can differ by as much as two years of age from other systems—is increasingly uncommon and is not used for official purposes in the PRC or on Taiwan, although the word suì is still used for describing age. Traditionally, Chinese birthdays—when celebrated—were reckoned using the lunisolar calendar, which varies from the Gregorian calendar by as much as a month forward or backward depending on the year. 
Celebrating the lunisolar birthday remains common on Taiwan while growing increasingly uncommon on the mainland. Birthday traditions reflect the culture's deep-seated focus on longevity and wordplay. From the homophony in some dialects between 酒 ("rice wine") and 久 (meaning "long" in the sense of time passing), osmanthus and other rice wines are traditional gifts for birthdays in China. Longevity noodles are another traditional food consumed on the day, although western-style birthday cakes are increasingly common among urban Chinese. Hongbaos – red envelopes stuffed with money, now especially the red 100 RMB notes – are the usual gift from relatives and close family friends for most children. Gifts for adults on their birthdays are much less common, although the birthday for each decade is a larger occasion that might prompt a large dinner and celebration. The Japanese reckoned their birthdays by the Chinese system until the Meiji Reforms. Celebrations remained uncommon or muted until after the American occupation that followed World War II. Children's birthday parties are the most important, typically celebrated with a cake, candles, and singing. Adults often just celebrate with their partner. In North Korea, the Day of the Sun, Kim Il Sung's birthday, is the most important public holiday of the country, and Kim Jong Il's birthday is celebrated as the Day of the Shining Star. North Koreans are not permitted to celebrate birthdays on July 8 and December 17 because these were the dates of the deaths of Kim Il Sung and Kim Jong Il, respectively. More than 100,000 North Koreans celebrate displaced birthdays on July 9 and December 18 instead to avoid these dates. A person born on July 8 before 1994 may change their birthday, with official recognition. South Korea was one of the last countries to use a form of East Asian age reckoning for many official purposes. Prior to June 2023, three systems were used together – "Korean ages" that start with 1 at birth and increase every January 1st with the Gregorian New Year, "year ages" that start with 0 at birth and otherwise increase the same way, and "actual ages" that start with 0 at birth and increase each birthday. First birthdays were heavily celebrated, despite the reckoned age usually having little to do with the child's actual age. In June 2023, all Korean ages were set back at least one year, and official ages henceforth are reckoned only by birthdays. In Ghana, children wake up on their birthday to a special treat called oto, which is a patty made from mashed sweet potato and eggs fried in palm oil. Later they have a birthday party where they usually eat stew and rice and a dish known as kelewele, which is fried plantain chunks. Distribution through the year Birthdays are fairly evenly distributed throughout the year, with some seasonal effects. In the United States, there tend to be more births in September and October. This may be because there is a holiday season nine months before (the human gestation period is about nine months), or because the longest nights of the year also occur in the Northern Hemisphere nine months before. However, the holidays affect birth rates more than the winter: New Zealand, a Southern Hemisphere country, has the same September and October peak with no corresponding peak in March and April. The least common birthdays tend to fall around public holidays, such as Christmas, New Year's Day, and fixed-date holidays such as Independence Day in the US, which falls on July 4. 
Between 1973 and 1999, September 16 was the most common birthday in the United States, and December 25 was the least common birthday (other than February 29, because of leap years). In 2011, October 5 and 6 were reported as the most frequently occurring birthdays. New Zealand's most common birthday is September 29, and the least common birthday is December 25. The ten most common birthdays all fall within a thirteen-day period, between September 22 and October 4. The ten least common birthdays (other than February 29) are December 24–27, January 1–2, February 6, March 22, April 1, and April 25. This is based on all live births registered in New Zealand between 1980 and 2017. Positive and negative associations with culturally significant dates may influence birth rates. One study shows a 5.3% decrease in spontaneous births and a 16.9% decrease in Caesarean births on Halloween, compared to dates occurring within one week before and one week after the October holiday. In contrast, on Valentine's Day, there is a 3.6% increase in spontaneous births and a 12.1% increase in Caesarean births. In Sweden, 9.3% of the population is born in March and 7.3% in November, when a uniform distribution would give 8.3%. In the Gregorian calendar (a common solar calendar), February in a leap year has 29 days instead of the usual 28, so the year lasts 366 days instead of the usual 365. A person born on February 29 may be called a "leapling" or a "leaper". In common years, they usually celebrate their birthdays on February 28; in some situations, March 1 is used as the birthday in a non-leap year, since it is the day following February 28. Technically, a leapling will have fewer birthday anniversaries than their age in years. This phenomenon is exploited when a person claims to be only a quarter of their actual age, by counting their leap-year birthday anniversaries only: in Gilbert and Sullivan's 1879 comic opera The Pirates of Penzance, Frederic the pirate apprentice discovers that he is bound to serve the pirates until his 21st birthday rather than until his 21st year. For legal purposes, a person's birthday depends on how local laws count time intervals. An individual's Beddian birthday, named in tribute to firefighter Bobby Beddia, occurs during the year in which their age matches the last two digits of the year they were born. Some studies show people are more likely to die on their birthdays, with explanations including excessive drinking, suicide, cardiovascular events due to high stress or happiness, efforts to postpone death for major social events, and death certificate paperwork errors. See also References Notes External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lithosphere] | [TOKENS: 1628]
Contents Lithosphere A lithosphere is the rigid outermost rocky shell of a terrestrial planet or natural satellite. On Earth, it is composed of the crust and the lithospheric mantle, the topmost portion of the upper mantle that behaves elastically on time scales of up to thousands of years or more. The crust and upper mantle are distinguished on the basis of chemistry and mineralogy. Earth's lithosphere Earth's lithosphere, which constitutes the hard and rigid outer vertical layer of the Earth, includes the crust and the lithospheric mantle (or mantle lithosphere), the uppermost part of the mantle that is not convecting. The layer below the lithosphere is called the asthenosphere, which is the weaker, hotter, and deeper part of the upper mantle that is able to convect. The lithosphere–asthenosphere boundary is defined by a difference in response to stress. The lithosphere remains rigid for very long periods of geologic time in which it deforms elastically and through brittle failure, while the asthenosphere deforms viscously and accommodates strain through plastic deformation. Due to this definition of the lithosphere–asthenosphere boundary, the thickness of the lithosphere is considered to be the depth to the isotherm associated with the transition between brittle and viscous behavior. The temperature at which olivine becomes ductile (~1,000 °C or 1,830 °F) is often used to set this isotherm because olivine is generally the weakest mineral in the upper mantle. The lithosphere is subdivided horizontally into tectonic plates, which often include terranes accreted from other plates. The concept of the lithosphere as Earth's strong outer layer was described by the English mathematician A. E. H. Love in his 1911 monograph "Some Problems of Geodynamics" and further developed by the American geologist Joseph Barrell, who wrote a series of papers about the concept and introduced the term "lithosphere". The concept was based on the presence of significant gravity anomalies over continental crust, from which he inferred that there must exist a strong, solid upper layer (which he called the lithosphere) above a weaker layer which could flow (which he called the asthenosphere). These ideas were expanded by the Canadian geologist Reginald Aldworth Daly in 1940 with his seminal work "Strength and Structure of the Earth." They have been broadly accepted by geologists and geophysicists. These concepts of a strong lithosphere resting on a weak asthenosphere are essential to the theory of plate tectonics. The lithosphere can be divided into oceanic and continental lithosphere. Oceanic lithosphere is associated with oceanic crust (having a mean density of about 2.9 grams per cubic centimetre or 0.10 pounds per cubic inch) and exists in the ocean basins. Continental lithosphere is associated with continental crust (having a mean density of about 2.7 grams per cubic centimetre or 0.098 pounds per cubic inch) and underlies the continents and continental shelves. Oceanic lithosphere consists mainly of mafic crust and ultramafic mantle (peridotite) and is denser than continental lithosphere. Young oceanic lithosphere, found at mid-ocean ridges, is no thicker than the crust, but oceanic lithosphere thickens as it ages and moves away from the mid-ocean ridge. The oldest oceanic lithosphere is typically about 140 kilometres (87 mi) thick. 
This thickening occurs by conductive cooling, which converts hot asthenosphere into lithospheric mantle and causes the oceanic lithosphere to become increasingly thick and dense with age. In fact, oceanic lithosphere is a thermal boundary layer for convection in the mantle. The thickness of the mantle part of the oceanic lithosphere can be approximated as a thermal boundary layer that thickens as the square root of time,

$h \sim 2\sqrt{\kappa t}$

where $h$ is the thickness of the oceanic mantle lithosphere, $\kappa$ is the thermal diffusivity of silicate rocks (approximately 1.0×10−6 m2/s or 6.5×10−4 sq ft/min), and $t$ is the age of the given part of the lithosphere. The age is often taken as $t = L/V$, where $L$ is the distance from the spreading centre of the mid-ocean ridge and $V$ is the velocity of the lithospheric plate (a numerical sketch of this scaling follows below).

Oceanic lithosphere is less dense than the asthenosphere for a few tens of millions of years, but thereafter becomes progressively denser than it. While the chemically differentiated oceanic crust is lighter than the asthenosphere, thermal contraction of the mantle lithosphere makes it denser than the asthenosphere. The gravitational instability of mature oceanic lithosphere has the effect that at subduction zones, oceanic lithosphere invariably sinks underneath the overriding lithosphere, which can be oceanic or continental. New oceanic lithosphere is constantly being produced at mid-ocean ridges and recycled back into the mantle at subduction zones. As a result, oceanic lithosphere is much younger than continental lithosphere: the oldest oceanic lithosphere is about 170 million years old, while parts of the continental lithosphere are billions of years old.

Geophysical studies in the early 21st century posit that large pieces of the lithosphere have been subducted into the mantle as deep as 2,900 kilometres (1,800 mi), near the core–mantle boundary, while others "float" in the upper mantle. Yet others stick down into the mantle as far as 400 kilometres (250 mi) but remain "attached" to the continental plate above, similar in extent to the old concept of the "tectosphere" revisited by Jordan in 1988. Subducting lithosphere remains rigid (as demonstrated by deep earthquakes along Wadati–Benioff zones) to a depth of about 600 kilometres (370 mi).

Continental lithosphere ranges in thickness from about 40 kilometres (25 mi) to perhaps 280 kilometres (170 mi); the upper approximately 30 to 50 kilometres (19 to 31 mi) of typical continental lithosphere is crust. The crust is distinguished from the upper mantle by the change in chemical composition that takes place at the Moho discontinuity. The oldest parts of continental lithosphere underlie cratons, and the mantle lithosphere there is thicker and less dense than typical; the relatively low density of such mantle "roots of cratons" helps to stabilize these regions. Because of its relatively low density, continental lithosphere that arrives at a subduction zone cannot subduct much deeper than about 100 km (62 mi) before resurfacing. As a result, continental lithosphere is not recycled at subduction zones the way oceanic lithosphere is; instead, it is a nearly permanent feature of the Earth.
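To make the half-space cooling scaling above concrete, here is a minimal Python sketch combining it with the age relation $t = L/V$. The diffusivity is the value quoted in the article; the unit conversion constant and the example ridge distance and plate speed are illustrative assumptions of our own:

```python
import math

KAPPA = 1.0e-6              # thermal diffusivity of silicate rock, m^2/s (from article)
SECONDS_PER_MYR = 3.156e13  # seconds per million years (assumed conversion, approx.)

def plate_age_myr(distance_km: float, speed_cm_per_yr: float) -> float:
    """Age of seafloor at a given distance from the ridge, t = L / V."""
    # 1 cm/yr equals 10 km per million years.
    return distance_km / (speed_cm_per_yr * 10.0)

def lithosphere_thickness_km(age_myr: float) -> float:
    """Thermal boundary-layer thickness, h ~ 2 * sqrt(kappa * t)."""
    t_seconds = age_myr * SECONDS_PER_MYR
    return 2.0 * math.sqrt(KAPPA * t_seconds) / 1000.0

# Seafloor 2,000 km from a ridge spreading at 5 cm/yr is ~40 Myr old
# and carries a mantle lithosphere roughly 70 km thick; by ~170 Myr
# the same estimate approaches the ~140 km quoted for old seafloor.
age = plate_age_myr(2000.0, 5.0)
print(f"{age:.0f} Myr -> {lithosphere_thickness_km(age):.0f} km")
print(f"170 Myr -> {lithosphere_thickness_km(170.0):.0f} km")
```

Note the square-root dependence: quadrupling the age of the seafloor only doubles the predicted thickness, which is why old oceanic plates thicken ever more slowly.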
Mantle xenoliths

Geoscientists can directly study the nature of the subcontinental mantle by examining mantle xenoliths brought up in kimberlite, lamproite, and other volcanic pipes. The histories of these xenoliths have been investigated by many methods, including analyses of the abundances of isotopes of osmium and rhenium. Such studies have confirmed that mantle lithospheres below some cratons have persisted for periods in excess of 3 billion years, despite the mantle flow that accompanies plate tectonics.

Microorganisms

The upper part of the lithosphere is a large habitat for microorganisms, with some found more than 4.8 km (3 mi) below Earth's surface.
========================================